Affine-Invariant Online Optimization and the Low-rank Experts Problem
Tomer Koren
Google Brain
1600 Amphitheatre Pkwy
Mountain View, CA 94043
[email protected]
Roi Livni
Princeton University
35 Olden St.
Princeton, NJ 08540
[email protected]
Abstract
We present a new affine-invariant optimization algorithm called Online Lazy Newton. The regret of Online Lazy Newton is independent of conditioning: the algorithm's performance depends on the best possible preconditioning of the problem in retrospect and on its intrinsic dimensionality. As an application, we show how Online Lazy Newton can be used to achieve an optimal regret of order $\sqrt{rT}$ for the low-rank experts problem, improving by a $\sqrt{r}$ factor over the previously best known bound and resolving an open problem posed by Hazan et al. [15].
1 Introduction
In the online convex optimization setting, a learner is faced with a stream of $T$ convex functions over a bounded convex domain $X \subseteq \mathbb{R}^d$. At each round $t$ the learner gets to observe a single convex function $f_t$ and has to choose a point $x_t \in X$. The aim of the learner is to minimize the cumulative $T$-round regret, defined as

$$\sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x).$$

In this very general setting, Online Gradient Descent [24] achieves an optimal regret rate of $O(GD\sqrt{T})$, where $G$ is a bound on the Lipschitz constants of the $f_t$ and $D$ is a bound on the diameter of the domain, both with respect to the Euclidean norm. For simplicity, let us restrict the exposition to linear losses $f_t(x_t) = g_t^T x_t$, in which case $G$ bounds the maximal Euclidean norm $\|g_t\|$; it is well known that the general convex case can be easily reduced to this case.
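As a point of reference for what follows, here is a minimal sketch (ours, not from the paper) of Online Gradient Descent for linear losses, with the standard tuning $\eta = D/(G\sqrt{T})$; `project` is an assumed Euclidean projection onto $X$:

```python
import numpy as np

def online_gradient_descent(grads, x1, D, G, T, project):
    """Minimal OGD sketch: x_{t+1} = Pi_X(x_t - eta * g_t) with the
    standard step size eta = D / (G * sqrt(T)), giving O(G D sqrt(T)) regret.
    For linear losses the gradient g_t does not depend on x_t, so `grads`
    can simply be a list of loss vectors."""
    eta = D / (G * np.sqrt(T))
    x = x1.copy()
    played = []
    for g in grads:
        played.append(x.copy())
        x = project(x - eta * g)
    return played
```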
One often useful way to obtain faster convergence in optimization is to employ preconditioning, namely, to apply a linear transformation $P$ to the gradients before using them to make update steps. In an online optimization setting, if we had access to the best preconditioner in hindsight, we could have achieved a regret rate of the form $O(\inf_P G_P D_P \sqrt{T})$, where $D_P$ is the diameter of the set $PX$ and $G_P$ is a bound on the norm of the conditioned gradients $\|P^{-1} g_t\|$. We shall thus refer to the quantity $G_P D_P$ as the conditioning of the problem when a preconditioner $P$ is used.
In many cases, however, it is more natural to directly assume a bound $|g_t^T x_t| \le C$ on the magnitude of the losses rather than assuming the bounds $D$ and $G$. In this case, the condition number need not be bounded, and typical guarantees of gradient-based methods such as Online Gradient Descent do not directly apply. In principle, it is possible to find a preconditioner $P$ such that $G_P D_P = O(C\sqrt{d})$, and if one further assumes that the intrinsic dimensionality of the problem (i.e., the rank of the loss vectors $g_1, \ldots, g_T$) is $r \ll d$, the conditioning of the optimization problem can be improved to $O(C\sqrt{r})$. However, this approach requires one to have access to the transformation $P$, which is typically data-dependent and known only in retrospect.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper we address the following natural question: can one achieve a regret rate of $O(C\sqrt{rT})$ without explicit prior knowledge of a good preconditioner $P$? We answer this question in the affirmative and present a new algorithm that achieves this rate, called Online Lazy Newton. Our algorithm is a variant of the Online Newton Step algorithm due to Hazan et al. [14] that employs a lazy projection step. While the Online Newton Step algorithm was designed to exploit curvature in the loss functions (specifically, a property called exp-concavity), our adaptation is aimed at general, possibly even linear, online convex optimization and exploits latent low-dimensional structure. It turns out that this adaptation of the algorithm is able to achieve $O(C\sqrt{rT})$ regret up to a small logarithmic factor, without any prior knowledge of the optimal preconditioner. A crucial property of our algorithm is its affine-invariance: Online Lazy Newton is invariant to any affine transformation of the gradients $g_t$, in the sense that running the algorithm on gradients $g'_t = P^{-1} g_t$ and applying the inverse transformation $P$ to the produced decisions results in the same decisions that would have been obtained by applying the algorithm directly to the original vectors $g_t$.
As our main application, we establish a new regret rate for the low-rank experts problem, introduced by Hazan et al. [15]. The low-rank experts setting is a variant of the classical prediction with expert advice problem, where one further assumes that the experts are linearly dependent and their losses span a low-dimensional space of rank $r$. The challenge in this setting is to achieve a regret rate that is independent of the number of experts and depends only on their rank $r$. In this setting, Hazan et al. [15] proved a lower bound of $\Omega(\sqrt{rT})$ on the regret, but fell short of providing a matching upper bound and only gave an algorithm achieving a suboptimal $O(r\sqrt{T})$ regret bound. Applying the Online Lazy Newton algorithm to this problem, we are able to improve upon the latter bound and achieve an $O(\sqrt{rT\log T})$ regret bound, which is optimal up to a $\sqrt{\log T}$ factor and improves upon the prior bound unless $T$ is exponential in the rank $r$.
1.1 Related work
Adaptive regularization is an important topic in online optimization that has received considerable attention in recent years. The AdaGrad algorithm presented in [9] (as well as the closely related algorithm analyzed in [21]) dynamically adapts to the geometry of the data. In a sense, AdaGrad learns the best preconditioner from a trace-bounded family of Mahalanobis norms (see Section 2.2 for a detailed discussion and comparison of guarantees). The MetaGrad algorithm of van Erven and Koolen [22] uses a similar dynamic regularization technique in order to obliviously adapt to possible curvature in the loss functions. Lower bounds for preconditioning when the domain is unbounded have been presented in [7]. These lower bounds are inapplicable, however, once losses are bounded (as assumed in this paper). More generally, going beyond worst-case analysis and exploiting latent structure in the data is a very active line of research within online learning. Work in this direction includes adaptation to stochastic i.i.d. data (e.g., [11, 12, 19, 8]), as well as the exploration of various structural assumptions that can be leveraged for better guarantees [4, 12, 13, 5, 18].
Our Online Lazy Newton algorithm is part of a wide family of algorithms named Follow the Regularized Leader (FTRL). FTRL methods choose at each iteration the minimizer of past observed losses with an additional regularization term [16, 12, 20]. Our algorithm is closely related to the Follow The Approximate Leader (FTAL) algorithm presented in [14]. The FTAL algorithm is designed to achieve a logarithmic regret rate for exp-concave problems, exploiting the curvature information of such functions. In contrast, our algorithm is aimed at optimizing general convex functions with possibly no curvature; while FTAL performs FTL over the second-order approximations of the functions, Online Lazy Newton instead utilizes a first-order approximation with an additional rank-one quadratic regularizer. Finally, another algorithm closely related to ours is the Second-order Perceptron algorithm of Cesa-Bianchi et al. [3] (which in turn is closely related to the Vovk-Azoury-Warmuth forecaster [23, 1]), a variant of the classical Perceptron algorithm adapted to the case where the data is "skewed", or ill-conditioned in the sense used above. Similarly to our algorithm, the Second-order Perceptron employs adaptive whitening of the data to address its skewness.
This work is highly inspired and motivated by the problem of low-rank experts, to which we give an optimal algorithm. The problem was first introduced in [15], where the authors established a regret rate of $\tilde{O}(r\sqrt{T})$, where $r$ is the rank of the experts' losses; this was the first regret bound in this setting that did not depend on the total number of experts. The problem has been further studied and investigated by Cohen and Mannor [6], Barman et al. [2] and Foster et al. [10]. Here we establish the first tight upper bound (up to a logarithmic factor) that is independent of the total number of experts $N$.
2 Setup and Main Results
We begin by recalling the standard framework of Online Convex Optimization. At each round $t = 1, \ldots, T$ a learner chooses a decision $x_t$ from a bounded convex subset $X \subseteq \mathbb{R}^d$ in $d$-dimensional space. An adversary then chooses a convex cost function $f_t$, and the learner suffers a loss of $f_t(x_t)$. We measure the performance of the learner in terms of the regret, which is defined as the difference between the accumulated loss incurred by the learner and the loss of the best decision in hindsight. Namely, the $T$-round regret of the learner is given by

$$\mathrm{Regret}_T = \sum_{t=1}^{T} f_t(x_t) - \min_{x \in X} \sum_{t=1}^{T} f_t(x).$$

One typically assumes that the diameter of the set $X$ is bounded, and that the convex functions $f_1, \ldots, f_T$ are all Lipschitz continuous, both with respect to certain norms on $\mathbb{R}^d$ (typically, the norms are taken as dual to each other). However, a main point of this paper is to refrain from making explicit assumptions on the geometry of the optimization problem, and to design algorithms that are, in a sense, oblivious to it.
Notation. Given a positive definite matrix $A \succ 0$ we denote by $\|\cdot\|_A$ the norm induced by $A$, namely $\|x\|_A = \sqrt{x^T A x}$. The dual norm of $\|\cdot\|_A$ is defined by $\|g\|_A^* = \sup_{\|x\|_A \le 1} |x^T g|$ and can be shown to equal $\|g\|_A^* = \|g\|_{A^{-1}}$. Finally, for a non-invertible matrix $A$, we denote by $A^{\dagger}$ its Moore-Penrose pseudo-inverse.
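For completeness, the dual-norm identity follows by the substitution $u = A^{1/2}x$ (a standard one-line verification):

```latex
\|g\|_A^* \;=\; \sup_{\|x\|_A \le 1} |x^T g|
         \;=\; \sup_{\|u\|_2 \le 1} \big|u^T A^{-1/2} g\big|
         \;=\; \|A^{-1/2} g\|_2 \;=\; \sqrt{g^T A^{-1} g} \;=\; \|g\|_{A^{-1}}.
```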
2.1 Main Results
Our main results are affine invariant regret bounds for the Online Lazy Newton algorithm, which we
present below in Section 3. We begin with a bound for linear losses that controls the regret in terms
of the intrinsic dimensionality of the problem and a bound on the losses.
Theorem 1. Consider the online convex optimization setting with linear losses $f_t(x) = g_t^T x$, and assume that $|g_t^T x| \le C$ for all $t$ and $x \in X$. If Algorithm 1 is run with $\eta < 1/C$, then for every $H \succ 0$ the regret is bounded as

$$\mathrm{Regret}_T \le \frac{4r}{\eta}\log\left(1 + \frac{(D_H G_H T)^2}{r}\right) + 3\eta\left(1 + \sum_{t=1}^{T}|g_t^T x^\star|^2\right), \tag{1}$$

where $r = \mathrm{rank}\big(\sum_{t=1}^{T} g_t g_t^T\big) \le d$ and

$$D_H = \max_{x,y\in X}\|x - y\|_H, \qquad G_H = \max_{1\le t\le T}\|g_t\|_H^*.$$
By a standard reduction, the analogous statement for convex losses holds, as long as we assume that the dot products between gradients and decision vectors are bounded.
Corollary 2. Let $f_1, \ldots, f_T$ be an arbitrary sequence of convex functions over $X$. Suppose Algorithm 1 is run with $1/\eta > \max_t \max_{x\in X} |\nabla_t^T x|$. Then, for every $H \succ 0$ the regret is bounded as

$$\mathrm{Regret}_T \le \frac{4r}{\eta}\log\left(1 + \frac{(D_H G_H T)^2}{r}\right) + 3\eta\left(1 + \sum_{t=1}^{T}|\nabla_t^T x^\star|^2\right), \tag{2}$$

where $r = \mathrm{rank}\big(\sum_{t=1}^{T} \nabla_t \nabla_t^T\big) \le d$ and

$$D_H = \max_{x,y\in X}\|x - y\|_H, \qquad G_H = \max_{1\le t\le T}\|\nabla_t\|_H^*.$$
In particular, we can use the theorem to show that as long as we bound $|\nabla f_t(x_t)^T x_t|$ by a constant (a significantly weaker requirement than assuming bounds on the diameter of $X$ and on the norms of the gradients), one can find a norm $\|\cdot\|_H$ for which the quantities $D_H$ and $G_H$ are properly bounded. We stress again that, importantly, Algorithm 1 need not know the matrix $H$ in order to achieve the corresponding bound.
Theorem 3. Assume that

$$\max_{1\le t\le T}\max_{x\in X}|\nabla_t^T x| \le C.$$

Let $r = \mathrm{rank}\big(\sum_{t=1}^{T}\nabla_t\nabla_t^T\big) \le d$, and run Algorithm 1 with $\eta = \Theta\big(\sqrt{r\log(T)/T}\big)$. The regret of the algorithm is then at most $O\big(C\sqrt{rT\log T}\big)$.
2.2 Discussion
It is worth comparing our result to previously studied adaptive regularization techniques. Perhaps the most popular gradient method that employs adaptive regularization is the AdaGrad algorithm introduced in [9]. AdaGrad enjoys the regret bound depicted in Eq. (3); it is competitive with any fixed regularization matrix $S \succ 0$ such that $\mathrm{Tr}(S) \le d$:

$$\mathrm{Regret}_T(\text{AdaGrad}) \;=\; O\bigg(\sqrt{d}\,\inf_{S\succ0,\;\mathrm{Tr}(S)\le d}\; \|x^\star\|_2\,\sqrt{\sum_{t=1}^{T}\big(\|\nabla_t\|_S^*\big)^2}\,\bigg), \tag{3}$$

$$\mathrm{Regret}_T(\text{OLN}) \;=\; \tilde O\bigg(\sqrt{r}\,\inf_{S\succ0}\; \|x^\star\|_S\,\sqrt{\sum_{t=1}^{T}\big(\|\nabla_t\|_S^*\big)^2}\,\bigg). \tag{4}$$

On the other hand, for every matrix $S \succ 0$, the generalized Cauchy-Schwarz inequality gives $|\nabla_t^T x^\star| \le \|\nabla_t\|_S^*\,\|x^\star\|_S$. Plugging this into Eq. (2) with a proper tuning of $\eta$ gives a bound which is competitive with any fixed regularization matrix $S \succ 0$, depicted in Eq. (4).
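To spell out how Eq. (4) follows from Eq. (2) (a sketch; constants and logarithmic factors are suppressed):

```latex
\sum_{t=1}^T |\nabla_t^T x^\star|^2
  \;\le\; \|x^\star\|_S^2 \sum_{t=1}^T \big(\|\nabla_t\|_S^*\big)^2
\quad\Longrightarrow\quad
\mathrm{Regret}_T \;\lesssim\; \frac{r\log T}{\eta}
  + \eta\, \|x^\star\|_S^2 \sum_{t=1}^T \big(\|\nabla_t\|_S^*\big)^2,
```

and optimizing over $\eta$ yields the bound in Eq. (4).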
Our bound improves on AdaGrad's regret bound in two ways. First, the bound in Eq. (4) scales with the intrinsic dimension of the problem: when the true underlying dimensionality of the problem is smaller than the dimension of the ambient space, Online Lazy Newton enjoys a superior regret bound. Furthermore, as demonstrated in [15], the dependence of AdaGrad's regret on the ambient dimension is not an artifact of the analysis, and there are cases where the actual regret grows polynomially with $d$ rather than with the true rank $r \ll d$.

The second case where the Online Lazy Newton bound can be superior to AdaGrad's is when there exists a conditioning matrix that improves not only the norm of the gradients with respect to the Euclidean norm, but also the norm of $x^\star$, which is smaller with respect to the optimal norm induced by $S$. More generally, whenever $\sum_{t=1}^{T}(\nabla_t^T x^\star)^2 < \sum_{t=1}^{T}\|\nabla_t\|_S^2\,\|x^\star\|_2^2$, and in particular when $\|x^\star\|_S < \|x^\star\|_2$, Eq. (4) will produce a tighter bound than the one in Eq. (3).
3 The Online Lazy Newton Algorithm
We next present the main focus of this paper: the affine-invariant algorithm Online Lazy Newton (OLN), given in Algorithm 1. The algorithm maintains two vectors, $x_t$ and $y_t$. The vector $y_t$ is updated at each iteration using the gradient of $f_t$ at $x_t$, via $y_t = y_{t-1} - \nabla_t$ where $\nabla_t = \nabla f_t(x_t)$. The vector $y_t$ is not guaranteed to be in $X$, hence the actual prediction of OLN is determined via a projection onto the set $X$, resulting in the vector $x_{t+1} \in X$. However, similarly to ONS, the algorithm first transforms $y_t$ via the (pseudo-)inverse of the matrix $A_t$ given by the sum of the outer products $\sum_{s=1}^{t}\nabla_s\nabla_s^T$, and projections are taken with respect to $A_t$. In this context, we use the notation

$$\Pi_X^{A}(y) = \arg\min_{x\in X}\,(x - y)^T A (x - y)$$

to denote the projection onto a set $X$ with respect to the (semi-)norm $\|\cdot\|_A$ induced by a positive semidefinite matrix $A \succeq 0$.
Algorithm 1 OLN: Online Lazy Newton
  Parameters: initial point $x_1 \in X$, step size $\eta > 0$
  Initialize $y_0 = 0$ and $A_0 = 0$
  for $t = 1, 2, \ldots, T$ do
    Play $x_t$, incur cost $f_t(x_t)$, observe gradient $\nabla_t = \nabla f_t(x_t)$
    Rank-one update: $A_t = A_{t-1} + \eta\,\nabla_t\nabla_t^T$
    Online Newton step and projection:
      $y_t = y_{t-1} - \nabla_t$
      $x_{t+1} = \Pi_X^{A_t}\big(A_t^{\dagger} y_t\big)$
  end for
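To make Algorithm 1 concrete, here is a minimal runnable sketch (our illustration, not the authors' code) for the case where $X$ is the probability simplex, as in the low-rank experts application below. The $A_t$-seminorm projection has no closed form in general, so it is delegated to a generic solver; `pinv` computes the Moore-Penrose pseudo-inverse.

```python
import numpy as np
from scipy.optimize import minimize

def project_simplex_A(z, A):
    """argmin_{x in simplex} (x - z)^T A (x - z): projection onto the
    probability simplex w.r.t. the seminorm induced by the PSD matrix A."""
    n = len(z)
    res = minimize(lambda x: (x - z) @ A @ (x - z),
                   np.full(n, 1.0 / n),
                   method="SLSQP",
                   bounds=[(0.0, 1.0)] * n,
                   constraints=({"type": "eq", "fun": lambda x: x.sum() - 1.0},))
    return res.x

def online_lazy_newton(grads, n, eta):
    """Algorithm 1 (OLN) on the simplex. `grads` is an iterable of callables;
    each one, given the current iterate x_t, returns the gradient of f_t."""
    x = np.full(n, 1.0 / n)              # x_1 in X
    y = np.zeros(n)                      # y_0 = 0
    A = np.zeros((n, n))                 # A_0 = 0
    iterates = []
    for grad_at in grads:
        iterates.append(x.copy())
        g = grad_at(x)                   # nabla_t = grad f_t(x_t)
        A = A + eta * np.outer(g, g)     # rank-one update A_t
        y = y - g                        # lazy, unprojected step y_t
        x = project_simplex_A(np.linalg.pinv(A) @ y, A)   # x_{t+1}
    return iterates
```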
The main motivation behind working with the matrices $A_t$ as preconditioners is that, as demonstrated in our analysis, the algorithm becomes invariant to linear transformations of the gradient vectors $\nabla_t$. Indeed, if $P$ is some linear transformation, one can observe that if we run the algorithm on $P\nabla_t$ instead of $\nabla_t$, this transforms the solution at step $t$ from $x_t$ to $P^{-1} x_t$. In turn, the cumulative regret is invariant to such transformations. As seen in Theorem 1, this invariance indeed leads to an algorithm with an improved regret bound when the input representation of the data is poorly conditioned.
While the algorithm is very similar to ONS, it differs in several important respects. First, unlike ONS, our lazy version maintains at each step a vector $y_t$ which is updated without any projections; a projection is applied only when we need to compute $x_{t+1}$. In that sense, it can be thought of as a gradient descent method with lazy projections (analogous to dual-averaging methods), while ONS is similar to gradient descent methods with a greedy projection step (reminiscent of mirror-descent type algorithms). The effect of this is a decoupling between past and future conditioning and projections: if the transformation matrix $A_t$ changes between rounds, the lazy approach allows us to condition and project the problem at each iteration independently.
Second, ONS uses an initialization of $A_0 = \epsilon I_d$ (while OLN uses $A_0 = 0$) and, as a result, it is not invariant to affine transformations. While this difference might seem negligible, as $\epsilon$ is typically chosen to be very small, recall that the matrices $A_t$ are used as preconditioners and their small eigenvalues can be very meaningful, especially when the problem at hand is poorly conditioned.
4 Application: Low Rank Experts
In this section we consider the low-rank experts problem and show how the Online Lazy Newton algorithm can be used to obtain a nearly optimal regret in that setting. In the low-rank experts problem, which is a variant of the classic problem of prediction with expert advice, a learner has to choose at each round $t = 1, \ldots, T$ between following the advice of one of $N$ experts. On round $t$, the learner chooses a distribution over the experts in the form of a probability vector $x_t \in \Delta_N$ (here $\Delta_N$ denotes the $N$-dimensional simplex); thereafter, an adversary chooses a cost vector $g_t \in [-1,1]^N$ assigning losses to experts, and the player suffers a loss of $x_t^T g_t \in [-1,1]$. In contrast with the standard experts setting, here we assume that in hindsight the experts share a common low-rank structure, namely that $\mathrm{rank}(g_1, \ldots, g_T) \le r$ for some $r < N$.
It is known that in the stochastic setting (i.e., when the $g_t$ are drawn i.i.d. from some fixed distribution), a follow-the-leader algorithm enjoys a regret bound of $\min\{\sqrt{rT}, \sqrt{T\log N}\}$. In [15] the authors asked whether one can achieve the same regret bound in the online setting. Here we answer this question in the affirmative.
Theorem 4 (Low Rank Experts). Consider the low-rank experts setting, where $\mathrm{rank}(g_1, \ldots, g_T) \le r$. Set $\eta = \sqrt{r\log(T)/T}$, and run Algorithm 1 with $X = \Delta_N$ and $f_t(x) = g_t^T x$. Then the obtained regret satisfies

$$\mathrm{Regret}_T = O\big(\sqrt{rT\log T}\big).$$

This bound matches the $\Omega(\sqrt{rT})$ lower bound of [15] up to a $\sqrt{\log T}$ factor, and improves upon their $O(r\sqrt{T})$ upper bound so long as $T$ is not exponential in $r$. Put differently, if one aims at ensuring an average regret of at most $\epsilon$, the OLN algorithm would need $O((r/\epsilon^2)\log(1/\epsilon))$ iterations, as opposed to the $O(r^2/\epsilon^2)$ iterations required by the algorithm of [15]. We also remark that, since the Hedge algorithm can be used to obtain a regret rate of $O(\sqrt{T\log N})$, we can obtain an algorithm with a regret bound of the form $O\big(\min\{\sqrt{rT\log T}, \sqrt{T\log N}\}\big)$ by treating Hedge and OLN as meta-experts and applying Hedge over them.
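As a hypothetical toy run of this setting (reusing the `online_lazy_newton` sketch from Section 3, with the step size of Theorem 4; all names here are ours for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, T = 20, 3, 500
V = rng.uniform(-1.0, 1.0, size=(N, r))        # rank-r structure of the losses
loss_vectors = []
for _ in range(T):
    g = V @ rng.uniform(-1.0, 1.0, size=r)
    g = g / max(1.0, np.abs(g).max())           # keep g_t in [-1, 1]^N
    loss_vectors.append(g)

eta = np.sqrt(r * np.log(T) / T)                # step size from Theorem 4
grads = [lambda x, g=g: g for g in loss_vectors]   # linear losses: grad = g_t
xs = online_lazy_newton(grads, N, eta)

regret = (sum(g @ x for g, x in zip(loss_vectors, xs))
          - min(sum(g[i] for g in loss_vectors) for i in range(N)))
print(regret)   # should be on the order of sqrt(r * T * log(T))
```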
5 Analysis
For the proofs of our main theorems we rely on the following two technical lemmas.

Lemma 5 ([17], Lemma 5). Let $\Phi_1, \Phi_2 : X \mapsto \mathbb{R}$ be two convex functions defined over a closed and convex domain $X \subseteq \mathbb{R}^d$, and let $x_1 \in \arg\min_{x\in X}\Phi_1(x)$ and $x_2 \in \arg\min_{x\in X}\Phi_2(x)$. Assume that $\Phi_2$ is $\sigma$-strongly convex with respect to a norm $\|\cdot\|$. Then, for $\Psi = \Phi_2 - \Phi_1$ we have

$$\|x_2 - x_1\| \le \frac{1}{\sigma}\|\nabla\Psi(x_1)\|^*.$$

Furthermore, if $\Psi$ is convex then

$$0 \le \Psi(x_1) - \Psi(x_2) \le \frac{1}{\sigma}\big(\|\nabla\Psi(x_1)\|^*\big)^2.$$
The following lemma is a slight strengthening of a result given in [14].

Lemma 6. Let $g_1, \ldots, g_T \in \mathbb{R}^d$ be a sequence of vectors, and define $G_t = H + \sum_{s=1}^{t} g_s g_s^T$ for all $t$, where $H$ is a positive definite matrix such that $\|g_t\|_H^* \le \lambda$ for all $t$. Then

$$\sum_{t=1}^{T} g_t^T G_t^{-1} g_t \le r\log\left(1 + \frac{\lambda^2 T}{r}\right),$$

where $r$ is the rank of the matrix $\sum_{s=1}^{T} g_s g_s^T$.
Proof. Following [14], we first prove that

$$\sum_{t=1}^{T} g_t^T G_t^{-1} g_t \;\le\; \log\frac{\det G_T}{\det H} \;=\; \log\det\big(H^{-1/2} G_T H^{-1/2}\big). \tag{5}$$

To this end, let $G_0 = H$, so that $G_t = G_{t-1} + g_t g_t^T$ for all $t \ge 1$. The well-known matrix determinant lemma, which states that $\det(A - uu^T) = (1 - u^T A^{-1} u)\det(A)$, gives

$$g_t^T G_t^{-1} g_t \;=\; 1 - \frac{\det(G_t - g_t g_t^T)}{\det G_t} \;=\; 1 - \frac{\det G_{t-1}}{\det G_t}.$$

Using the inequality $1 - x \le \log(1/x)$ and summing over $t = 1, \ldots, T$, we obtain

$$\sum_{t=1}^{T} g_t^T G_t^{-1} g_t \;\le\; \sum_{t=1}^{T}\log\frac{\det G_t}{\det G_{t-1}} \;=\; \log\frac{\det G_T}{\det H},$$

which yields Eq. (5).
Next, observe that $H^{-1/2} G_T H^{-1/2} = I + \sum_{s=1}^{T} H^{-1/2} g_s g_s^T H^{-1/2}$ and

$$\mathrm{Tr}\left(\sum_{s=1}^{T} H^{-1/2} g_s g_s^T H^{-1/2}\right) = \sum_{s=1}^{T}\mathrm{Tr}\big(g_s^T H^{-1} g_s\big) = \sum_{s=1}^{T}\big(\|g_s\|_H^*\big)^2 \le \lambda^2 T.$$

Also, the rank of the matrix $\sum_{s=1}^{T} H^{-1/2} g_s g_s^T H^{-1/2} = H^{-1/2}\big(\sum_{s=1}^{T} g_s g_s^T\big)H^{-1/2}$ is at most $r$. Hence, all the eigenvalues of the matrix $H^{-1/2} G_T H^{-1/2}$ are equal to $1$, except for $r$ of them whose sum is at most $r + \lambda^2 T$. Denote the latter by $\lambda_1, \ldots, \lambda_r$; using the concavity of $\log(\cdot)$ and Jensen's inequality, we conclude that

$$\log\det\big(H^{-1/2} G_T H^{-1/2}\big) = \sum_{i=1}^{r}\log\lambda_i \le r\log\left(\frac{1}{r}\sum_{i=1}^{r}\lambda_i\right) \le r\log\left(1 + \frac{\lambda^2 T}{r}\right),$$

which together with Eq. (5) gives the lemma.
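Lemma 6 is easy to check numerically (our illustration; here $H = I$, so $\lambda$ is simply the largest Euclidean norm among the gradients):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, T = 10, 3, 200
V = rng.standard_normal((d, r))
gs = [V @ rng.standard_normal(r) for _ in range(T)]   # rank-r vectors

H = np.eye(d)
lam = max(np.linalg.norm(g) for g in gs)              # ||g_t||_H^* with H = I
G, lhs = H.copy(), 0.0
for g in gs:
    G = G + np.outer(g, g)                            # G_t = H + sum_s g_s g_s^T
    lhs += g @ np.linalg.solve(G, g)                  # g_t^T G_t^{-1} g_t
rhs = r * np.log(1.0 + lam**2 * T / r)
assert lhs <= rhs + 1e-9
print(lhs, "<=", rhs)
```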
We can now prove our main results. We begin by proving Theorem 1.
Proof of Theorem 1. For all $t$, let

$$\tilde f_t(x) = g_t^T x + \frac{\eta}{2}(g_t^T x)^2,$$

and set

$$\tilde F_t(x) = \sum_{s=1}^{t}\tilde f_s(x) = -y_t^T x + \frac{1}{2}x^T A_t x.$$

Observe that $x_{t+1}$, which is the choice of Algorithm 1 at iteration $t+1$, is the minimizer of $\tilde F_t$ over $X$; indeed, since $y_t$ is in the column span of $A_t$, we can write, up to a constant,

$$\tilde F_t(x) = \frac{1}{2}\big(x - A_t^{\dagger} y_t\big)^T A_t \big(x - A_t^{\dagger} y_t\big) + \text{const}.$$
In other words, Algorithm 1 is equivalent to a follow-the-leader algorithm on the functions $\tilde f_t$.
Next, fix some positive definite matrix $H \succ 0$ and let $D_H = \max_{x,y\in X}\|x - y\|_H$ and $G_H = \max_{1\le t\le T}\|g_t\|_H^*$. We have

$$\tilde F_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2 = \frac{1}{2}x^T A_t x - y_t^T x + \frac{\eta}{2}\|x - x_{t+1}\|_H^2 = \frac{\eta}{2}\|x\|_{G_t}^2 - y_t^T x - \eta\, x_{t+1}^T H x + \frac{\eta}{2}\|x_{t+1}\|_H^2,$$

where $G_t = H + \sum_{s=1}^{t} g_s g_s^T$ (so that $A_t = \eta(G_t - H)$). In turn, this function is $\eta$-strongly convex with respect to the norm $\|\cdot\|_{G_t}$ and is minimized at $x = x_{t+1}$. Then, by Lemma 5 applied with $\Phi_1(x) = \tilde F_{t-1}(x)$ and $\Phi_2(x) = \tilde F_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2$, so that $\Psi(x) = \tilde f_t(x) + \frac{\eta}{2}\|x - x_{t+1}\|_H^2$, we have

$$\begin{aligned}
\tilde f_t(x_t) - \tilde f_t(x_{t+1}) + \frac{\eta}{2}\|x_t - x_{t+1}\|_H^2
&\le \frac{1}{\eta}\big(\|g_t + \eta g_t g_t^T x_t + \eta H(x_t - x_{t+1})\|_{G_t}^*\big)^2 \\
&\le \frac{2}{\eta}(1 + \eta g_t^T x_t)^2\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_{G_t}^*\big)^2
 && \big(\|v+u\|^2 \le 2\|v\|^2 + 2\|u\|^2\big)\\
&\le \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_{G_t}^*\big)^2
 && \big(\eta \le 1/\max_{x\in X}|g_t^T x|\big)\\
&\le \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\big(\|H(x_t - x_{t+1})\|_{H}^*\big)^2
 && \big(H \preceq G_t \Rightarrow G_t^{-1} \preceq H^{-1}\big)\\
&= \frac{8}{\eta}\big(\|g_t\|_{G_t}^*\big)^2 + 2\eta\|x_t - x_{t+1}\|_H^2.
\end{aligned}$$
Overall, we obtain

$$\sum_{t=1}^{T}\big(\tilde f_t(x_t) - \tilde f_t(x_{t+1})\big) \le \frac{8}{\eta}\sum_{t=1}^{T} g_t^T G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^{T}\|x_t - x_{t+1}\|_H^2.$$
By the FTL-BTL Lemma (e.g., [16]), we have $\sum_{t=1}^{T}\big(\tilde f_t(x_t) - \tilde f_t(x^\star)\big) \le \sum_{t=1}^{T}\big(\tilde f_t(x_t) - \tilde f_t(x_{t+1})\big)$. Hence, we obtain

$$\sum_{t=1}^{T}\big(\tilde f_t(x_t) - \tilde f_t(x^\star)\big) \le \frac{8}{\eta}\sum_{t=1}^{T} g_t^T G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^{T}\|x_t - x_{t+1}\|_H^2.$$
Plugging in $\tilde f_t(x) = g_t^T x + \frac{\eta}{2}(g_t^T x)^2$ and rearranging, we obtain

$$\begin{aligned}
\sum_{t=1}^{T} g_t^T(x_t - x^\star)
&\le \frac{8}{\eta}\sum_{t=1}^{T} g_t^T G_t^{-1} g_t + \frac{3\eta}{2}\sum_{t=1}^{T}\|x_t - x_{t+1}\|_H^2 + \frac{\eta}{2}\sum_{t=1}^{T}(g_t^T x^\star)^2 \\
&\le \frac{8}{\eta}\sum_{t=1}^{T} g_t^T G_t^{-1} g_t + \frac{3\eta}{2}\, T D_H^2 + \frac{\eta}{2}\sum_{t=1}^{T}(g_t^T x^\star)^2 \\
&\le \frac{8r}{\eta}\log\left(1 + \frac{G_H^2 T}{r}\right) + \frac{3\eta}{2}\, T D_H^2 + \frac{\eta}{2}\sum_{t=1}^{T}(g_t^T x^\star)^2,
\end{aligned}$$

where the last inequality follows from Lemma 6.
Finally, note that we have obtained the last inequality for every matrix $H \succ 0$. By rescaling, we may re-parametrize $H \mapsto H/(T D_H^2)$, obtaining a matrix whose diameter satisfies $D_H \le 1/\sqrt{T}$ and for which $G_H \le \sqrt{T}\, D_H G_H$ (the quantities on the right-hand sides being those of the original matrix). Plugging these into the last inequality yields the result. ∎
Proof of Theorem 3. To simplify notation, let us assume that $|\nabla_t^T x^\star| \le 1$. We get from Corollary 2 that for every $\eta$,

$$\mathrm{Regret}_T \le \frac{4r}{\eta}\log\left(1 + \frac{D_H^2 G_H^2 T^2}{r}\right) + 3\eta(1 + T).$$

For given $G_H$ and $D_H$ we can set $\eta = \sqrt{(2r/T)\log(1 + G_H^2 D_H^2 T^2/r)}$ and obtain the regret bound

$$\mathrm{Regret}_T \le O\left(\sqrt{rT\log\left(1 + \frac{D_H^2 G_H^2 T^2}{r}\right)}\right).$$

Hence, it only remains to show that there exists a matrix $H \succ 0$ such that $D_H^2 G_H^2 = O(r)$; the stated bound then follows since $\log(1 + O(T^2)) = O(\log T)$. Indeed, set $S = \mathrm{span}(\nabla_1, \ldots, \nabla_T)$, and denote by $X_S$ the projection of $X$ onto $S$ (i.e., $X_S = PX$ where $P$ is the orthogonal projection onto $S$). Define

$$\mathcal{B} = \{\nabla \in S : |\nabla^T x| \le 1 \;\;\forall x \in X_S\}.$$

Note that by definition we have $\{\nabla_t\}_{t=1}^{T} \subseteq \mathcal{B}$. Further, $\mathcal{B}$ is a symmetric convex set, hence by an ellipsoid (John) approximation we obtain a positive semidefinite matrix $B \succeq 0$, with positive eigenvalues restricted to $S$, such that

$$\mathcal{B} \subseteq \{\nabla \in S : \nabla^T B \nabla \le 1\} \subseteq \sqrt{r}\,\mathcal{B}.$$

By duality we have

$$\frac{1}{\sqrt{r}}\, X_S \subseteq \frac{1}{\sqrt{r}}\,\mathcal{B}^{\circ} \subseteq \{v \in S : v^T B^{\dagger} v \le 1\}.$$

Thus, if $P_S$ is the projection onto $S$, we have $x^T P_S B^{\dagger} P_S x \le r$ for every $x \in X$. On the other hand, for every $\nabla_t$ we have $\nabla_t^T B \nabla_t \le 1$. We can now choose $H = B^{\dagger} + \epsilon I_d$, where $\epsilon$ is arbitrarily small, and get

$$\nabla_t^T H^{-1} \nabla_t = \nabla_t^T (B^{\dagger} + \epsilon I_d)^{-1} \nabla_t \le 2 \qquad\text{and}\qquad x^T H x = x^T P_S^T B^{\dagger} P_S x + \epsilon\|x\|^2 \le 2r,$$

so that $D_H^2 G_H^2 = O(r)$, as required. ∎
Acknowledgements
The authors would like to thank Elad Hazan for helpful discussions. RL is supported by the Eric and
Wendy Schmidt Fund for Strategic Innovations.
References

[1] K. S. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211-246, 2001.
[2] S. Barman, A. Gopalan, and A. Saha. Online learning for structured loss spaces. arXiv preprint arXiv:1706.04125, 2017.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order perceptron algorithm. SIAM Journal on Computing, 34(3):640-668, 2005.
[4] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321-352, 2007.
[5] C.-K. Chiang, T. Yang, C.-J. Lee, M. Mahdavi, C.-J. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In COLT, pages 6.1-6.20, 2012.
[6] A. Cohen and S. Mannor. Online learning with many experts. arXiv preprint arXiv:1702.07870, 2017.
[7] A. Cutkosky and K. Boahen. Online learning without prior information. arXiv preprint arXiv:1703.02629, 2017.
[8] S. De Rooij, T. Van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, hedge if you must. The Journal of Machine Learning Research, 15(1):1281-1316, 2014.
[9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121-2159, 2011.
[10] D. J. Foster, A. Rakhlin, and K. Sridharan. ZigZag: A new approach to adaptive online learning. arXiv preprint arXiv:1704.04010, 2017.
[11] E. Hazan and S. Kale. On stochastic and worst-case models for investing. In Advances in Neural Information Processing Systems, pages 709-717, 2009.
[12] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2-3):165-188, 2010.
[13] E. Hazan and S. Kale. Better algorithms for benign bandits. The Journal of Machine Learning Research, 12:1287-1311, 2011.
[14] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In International Conference on Computational Learning Theory, pages 499-513. Springer, 2006.
[15] E. Hazan, T. Koren, R. Livni, and Y. Mansour. Online learning with low rank experts. In 29th Annual Conference on Learning Theory, pages 1096-1114, 2016.
[16] A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
[17] T. Koren and K. Levy. Fast rates for exp-concave empirical risk minimization. In Advances in Neural Information Processing Systems, pages 1477-1485, 2015.
[18] A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In Conference on Learning Theory, pages 993-1019, 2013.
[19] A. Rakhlin, O. Shamir, and K. Sridharan. Localization and adaptation in online learning. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pages 516-526, 2013.
[20] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2012.
[21] M. Streeter and H. B. McMahan. Less regret via online conditioning. Technical report, 2010.
[22] T. van Erven and W. M. Koolen. MetaGrad: Multiple learning rates in online learning. In Advances in Neural Information Processing Systems, pages 3666-3674, 2016.
[23] V. Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213-248, 2001.
[24] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML), 2003.
Beyond Worst-case: A Probabilistic Analysis of Affine Policies in Dynamic Optimization
Omar El Housni
IEOR Department
Columbia University
[email protected]
Vineet Goyal
IEOR Department
Columbia University
[email protected]
Abstract
Affine policies (or control) are widely used as a solution approach in dynamic optimization where computing an optimal adjustable solution is usually intractable. While the worst-case performance of affine policies can be significantly bad, the empirical performance is observed to be near-optimal for a large class of problem instances. For instance, in the two-stage dynamic robust optimization problem with linear covering constraints and uncertain right-hand side, the worst-case approximation bound for affine policies is $O(\sqrt{m})$, which is also tight (see Bertsimas and Goyal [8]), whereas the observed empirical performance is near-optimal. In this paper, we aim to address this stark contrast between the worst-case and the empirical performance of affine policies. In particular, we show that affine policies give a good approximation for the two-stage adjustable robust optimization problem with high probability on random instances where the constraint coefficients are generated i.i.d. from a large class of distributions, thereby providing a theoretical justification of the observed empirical performance. On the other hand, we also present a distribution such that the performance bound for affine policies on instances generated according to that distribution is $\Omega(\sqrt{m})$ with high probability; however, the constraint coefficients are not i.i.d. This demonstrates that the empirical performance of affine policies can depend on the generative model for instances.
1 Introduction
In most real-world problems, parameters are uncertain at the optimization phase, and decisions need to be made in the face of uncertainty. Stochastic and robust optimization are two widely used paradigms to handle uncertainty. In the stochastic optimization approach, uncertainty is modeled as a probability distribution and the goal is to optimize an expected objective [13]. We refer the reader to Kall and Wallace [19], Prekopa [20], Shapiro [21], and Shapiro et al. [22] for a detailed discussion of stochastic optimization. On the other hand, in the robust optimization approach, we consider an adversarial model of uncertainty using an uncertainty set, and the goal is to optimize over the worst-case realization from the uncertainty set. This approach was first introduced by Soyster [23] and has been extensively studied in the recent past. We refer the reader to Ben-Tal and Nemirovski [3, 4, 5], El Ghaoui and Lebret [14], Bertsimas and Sim [10, 11], Goldfarb and Iyengar [17], Bertsimas et al. [6] and Ben-Tal et al. [1] for a detailed discussion of robust optimization. However, in both these paradigms, computing an optimal dynamic solution is intractable in general due to the "curse of dimensionality".

This intractability of computing the optimal adjustable solution necessitates considering approximate solution policies such as static and affine policies, where the decision in any period $t$ is restricted to a particular function of the sample path until period $t$. Both static and affine policies have been studied extensively in the literature and can be computed efficiently for a large class of problems. While the worst-case performance of such approximate policies can be significantly bad as compared to the optimal dynamic solution, the empirical performance, especially of affine policies, has been observed to be near-optimal in a broad range of computational experiments. Our goal in this paper is to address this stark contrast between the worst-case performance bounds and the near-optimal empirical performance of affine policies.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In particular, we consider the following two-stage adjustable robust linear optimization problem with uncertain demand requirements:

$$z_{\mathrm{AR}}(c,d,A,B,U) = \min_{x}\; c^T x + \max_{h\in U}\;\min_{y(h)}\; d^T y(h) \quad \text{s.t.}\quad Ax + By(h) \ge h \;\;\forall h\in U, \quad x\in\mathbb{R}^n_+,\;\; y(h)\in\mathbb{R}^n_+\;\;\forall h\in U, \tag{1}$$

where $A \in \mathbb{R}^{m\times n}_+$, $c \in \mathbb{R}^n_+$, $d \in \mathbb{R}^n_+$, $B \in \mathbb{R}^{m\times n}_+$. The right-hand side $h$ belongs to a compact convex uncertainty set $U \subseteq \mathbb{R}^m_+$. The goal in this problem is to select the first-stage decision $x$, and the second-stage recourse decision $y(h)$, as a function of the uncertain right-hand-side realization $h$, such that the worst-case cost over all realizations of $h \in U$ is minimized. We assume without loss of generality that $c = e$ and $d = \bar{d}\cdot e$ (by appropriately scaling $A$ and $B$). Here, $\bar{d}$ can be interpreted as the inflation factor for costs in the second stage.
This model captures many important applications including set cover, facility location, network design, inventory management, resource planning, and capacity planning under uncertain demand. Here the right-hand side $h$ models the uncertain demand, and the covering constraints capture the requirement of satisfying the uncertain demand. However, the adjustable robust optimization problem (1) is intractable in general. In fact, Feige et al. [16] show that (1) is hard to approximate within any factor that is better than $\Omega(\log n)$.
Both static and affine policy approximations have been studied in the literature for (1). In a static solution, we compute a single optimal solution $(x, y)$ that is feasible for all realizations of the uncertain right-hand side. Bertsimas et al. [9] relate the performance of static solutions to the symmetry of the uncertainty set and show that a static solution provides a good approximation to the adjustable problem if the uncertainty set is close to being centrally symmetric. However, the performance of static solutions can be arbitrarily bad for a general convex uncertainty set, with the worst-case performance being $\Omega(m)$. El Housni and Goyal [15] consider piecewise static policies for the two-stage adjustable robust problem with uncertain constraint coefficients. These are a generalization of static policies where we divide the uncertainty set into several pieces and specify a static solution for each piece. However, they show that, in general, there is no piecewise static policy with a polynomial number of pieces that has significantly better performance than an optimal static policy.
An affine policy restricts the second-stage decisions $y(h)$ to be affine functions of the uncertain right-hand side $h$, i.e., $y(h) = Ph + q$, where $P \in \mathbb{R}^{n\times m}$ and $q \in \mathbb{R}^n$ are decision variables. Affine policies in this context were introduced in Ben-Tal et al. [2] and can be formulated as:

$$z_{\mathrm{Aff}}(c,d,A,B,U) = \min_{x,P,q}\; c^T x + \max_{h\in U}\; d^T(Ph+q) \quad \text{s.t.}\quad Ax + B(Ph+q) \ge h \;\;\forall h\in U, \quad Ph+q \ge 0 \;\;\forall h\in U, \quad x\in\mathbb{R}^n_+. \tag{2}$$

An optimal affine policy can be computed efficiently for a large class of problems. Bertsimas and Goyal [8] show that affine policies give an $O(\sqrt{m})$-approximation to the optimal dynamic solution for (1). Furthermore, they show that the approximation bound $O(\sqrt{m})$ is tight. However, the observed empirical performance of affine policies is near-optimal for a large set of synthetic instances of (1).
1.1 Our Contributions
Our goal in this paper is to address this stark contrast by providing a theoretical analysis of the
performance of affine policies on synthetic instances of the problem generated from a probabilistic
model. In particular, we consider random instances of the two-stage adjustable problem (1) where the
entries of the constraint matrix B are random from a given distribution and analyze the performance
of affine policies for a large class of distributions. Our main contributions are summarized below.
Independent and Identically distributed Constraint Coefficients. We consider random instances
of the two-stage adjustable problem where the entries of B are generated i.i.d. according to a
given distribution and show that an affine policy gives a good approximation for a large class of
distributions including distributions with bounded support and unbounded distributions with Gaussian
and sub-gaussian tails.
In particular, for distributions with bounded support in $[0,b]$ and expectation $\mu$, we show that for sufficiently large values of $m$ and $n$, an affine policy gives a $b/\mu$-approximation to the adjustable problem (1). More specifically, with probability at least $(1 - 1/m)$, we have

$$z_{\mathrm{Aff}}(c,d,A,B,U) \le \frac{b}{\mu(1-\epsilon)}\cdot z_{\mathrm{AR}}(c,d,A,B,U),$$

where $\epsilon = (b/\mu)\sqrt{\log m/n}$ (Theorem 2.1). Therefore, if the distribution is symmetric, affine policy gives a 2-approximation for the adjustable problem (1). For instance, for the case of the uniform or the Bernoulli distribution with parameter $p = 1/2$, affine policies give nearly a 2-approximation for (1).
While the above bound leads to a good approximation for many distributions, the ratio $b/\mu$ can be significantly large in general; for instance, for distributions where extreme values of the support are extremely rare and significantly far from the mean. In such instances, the bound $b/\mu$ can be quite loose. We can tighten the analysis by using the concentration properties of distributions and can extend the analysis even to the case of unbounded support. More specifically, we show that if the $\tilde B_{ij}$ are i.i.d. according to an unbounded distribution with a sub-gaussian tail, then for sufficiently large values of $m$ and $n$, with probability at least $(1 - 1/m)$,

$$z_{\mathrm{Aff}}(c,d,A,B,U) \le O\big(\sqrt{\log mn}\big)\cdot z_{\mathrm{AR}}(c,d,A,B,U).$$
We prove the case of the folded normal distribution in Theorem 2.6. Here we assume that the parameters of the distributions are constants independent of the problem dimension, and we would like to emphasize that the i.i.d. assumption on the entries of $B$ is for the scaled problem where $c = e$ and $d = \bar d e$.
We would like to note that the above performance bounds are in stark contrast with the worst-case performance bound $O(\sqrt{m})$ for affine policies, which is tight. For random instances where the $\tilde B_{ij}$ are i.i.d. according to the above distributions, the performance is significantly better. Therefore, our results provide a theoretical justification of the good empirical performance of affine policies and close the gap between the worst-case bound of $O(\sqrt{m})$ and the observed empirical performance. Furthermore, surprisingly, these performance bounds are independent of the structure of the uncertainty set $U$, unlike in previous work where the performance bounds depend on the geometric properties of $U$.
Our analysis is based on a dual-reformulation of (1) introduced in [7] where (1) is reformulated as
an alternate two-stage adjustable optimization and the uncertainty set in the alternate formulation
depends on the constraint matrix B. Using the probabilistic structure of B, we show that the alternate
dual uncertainty set is close to a simplex for which affine policies are optimal.
We would also like to note that our performance bounds are not necessarily tight and the actual
performance on particular instances can be even better. We test the empirical performance of affine
policies for random instances generated according to uniform and folded normal distributions and
observe that affine policies are nearly optimal with a worst optimality gap of 4% (i.e. approximation
ratio of 1.04) on our test instances as compared to the optimal adjustable solution that is computed
using a Mixed Integer Program (MIP).
Worst-case distribution for affine policies. While for a large class of commonly used distributions affine policies give a good approximation with high probability for random i.i.d. instances, we present a distribution where the performance of affine policies is $\Omega(\sqrt{m})$ with high probability for instances generated from it. Note that this matches the worst-case deterministic bound for affine policies. We would like to remark that in the worst-case distribution, the coefficients $\tilde B_{ij}$ are not identically distributed. Our analysis suggests that to obtain bad instances for affine policies, we need to generate instances using a structured distribution, where the structure of the distribution might depend on the problem structure.
2 Random instances with i.i.d. coefficients

In this section, we theoretically characterize the performance of affine policies for random instances of (1) for a large class of generative distributions, including both bounded- and unbounded-support distributions. In particular, we consider the two-stage problem where the constraint coefficients $A$ and $B$ are i.i.d. according to a given distribution. We consider a polyhedral uncertainty set $U$ given as

$$U = \{h \in \mathbb{R}^m_+ \mid Rh \le r\}, \tag{3}$$

where $R \in \mathbb{R}^{L\times m}_+$ and $r \in \mathbb{R}^L_+$. This is a fairly general class of uncertainty sets that includes many commonly used sets such as the hypercube and budget uncertainty sets.
Our analysis of the performance of affine policies does not depend on the structure of the first-stage constraint matrix $A$ or the cost $c$. The second-stage cost, as already mentioned, is w.l.o.g. of the form $d = \bar d e$. Therefore, we restrict our attention to the distribution of the coefficients of the second-stage matrix $B$. We use the notation $\tilde B$ to emphasize that $B$ is random. For simplicity, we refer to $z_{\mathrm{AR}}(c,d,A,B,U)$ as $z_{\mathrm{AR}}(B)$ and to $z_{\mathrm{Aff}}(c,d,A,B,U)$ as $z_{\mathrm{Aff}}(B)$.
2.1 Distributions with bounded support

We first consider the case when the $\tilde B_{ij}$ are i.i.d. according to a bounded distribution with support in $[0,b]$, for some constant $b$ independent of the dimension of the problem. We show a performance bound for affine policies as compared to the optimal dynamic solution. The bound depends only on the distribution of $\tilde B$ and holds for any polyhedral uncertainty set $U$. In particular, we have the following theorem.

Theorem 2.1. Consider the two-stage adjustable problem (1) where the $\tilde B_{ij}$ are i.i.d. according to a bounded distribution with support in $[0,b]$ and $\mathbb{E}[\tilde B_{ij}] = \mu$ for all $i\in[m]$, $j\in[n]$. For $n$ and $m$ sufficiently large, we have with probability at least $1 - \frac{1}{m}$,

$$z_{\mathrm{Aff}}(\tilde B) \le \frac{b}{\mu(1-\epsilon)}\cdot z_{\mathrm{AR}}(\tilde B),$$

where $\epsilon = \frac{b}{\mu}\sqrt{\frac{\log m}{n}}$.
The above theorem shows that for sufficiently large values of $m$ and $n$, the performance of affine policies is at most $b/\mu$ times the performance of an optimal adjustable solution. Moreover, we know that $z_{\mathrm{AR}}(\tilde B) \le z_{\mathrm{Aff}}(\tilde B)$ for any $B$, since the adjustable problem is a relaxation of the affine problem. This shows that affine policies give a good approximation (significantly better than the worst-case bound of $O(\sqrt{m})$) for many important distributions. We present some examples below.
Example 1 (Uniform distribution). Suppose for all $i\in[m]$ and $j\in[n]$, the $\tilde B_{ij}$ are i.i.d. uniform on $[0,1]$. Then $\mu = 1/2$ and from Theorem 2.1 we have, with probability at least $1 - 1/m$,

$$z_{\mathrm{Aff}}(\tilde B) \le \frac{2}{1-\epsilon}\cdot z_{\mathrm{AR}}(\tilde B),$$

where $\epsilon = 2\sqrt{\log m/n}$. Therefore, for sufficiently large values of $n$ and $m$, affine policy gives a 2-approximation to the adjustable problem in this case. Note that the approximation bound of 2 is a conservative bound and the empirical performance is significantly better. We demonstrate this in our numerical experiments.
Example 2 (Bernoulli distribution). Suppose for all $i\in[m]$ and $j\in[n]$, the $\tilde B_{ij}$ are i.i.d. Bernoulli with parameter $p$. Then $\mu = p$, $b = 1$ and from Theorem 2.1 we have, with probability at least $1 - \frac{1}{m}$,

$$z_{\mathrm{Aff}}(\tilde B) \le \frac{1}{p(1-\epsilon)}\cdot z_{\mathrm{AR}}(\tilde B),$$

where $\epsilon = \frac{1}{p}\sqrt{\frac{\log m}{n}}$. Therefore, for constant $p$, affine policy gives a constant approximation to the adjustable problem (for example, a 2-approximation for $p = 1/2$).
Note that these performance bounds are in stark contrast with the worst-case performance bound $O(\sqrt{m})$ for affine policies, which is tight. For these random instances, the performance is significantly better. We would like to note that the above distributions are very commonly used to generate instances for testing the performance of affine policies and exhibit good empirical performance. Here, we give a theoretical justification of the good empirical performance of affine policies on such instances, thereby closing the gap between the worst-case bound of $O(\sqrt{m})$ and the observed empirical performance. We discuss the intuition and the proof of Theorem 2.1 in the following subsections.
2.1.1 Preliminaries

In order to prove Theorem 2.1, we need to introduce certain preliminary results. We first introduce the following formulation of the adjustable problem (1), based on ideas in Bertsimas and de Ruiter [7]:

$$z^{d}_{\mathrm{AR}}(B) = \min_{x}\; c^T x + \max_{w\in W}\;\min_{\lambda(w)}\;\big({-(Ax)^T w} + r^T \lambda(w)\big) \quad \text{s.t.}\quad R^T \lambda(w) \ge w \;\;\forall w\in W, \quad x\in\mathbb{R}^n_+,\;\; \lambda(w)\in\mathbb{R}^L_+\;\;\forall w\in W, \tag{4}$$

where the set $W$ is defined as

$$W = \{w \in \mathbb{R}^m_+ \mid B^T w \le d\}. \tag{5}$$
We show that the above problem is an equivalent formulation of (1).

Lemma 2.2. Let $z_{\mathrm{AR}}(B)$ be as defined in (1) and $z^{d}_{\mathrm{AR}}(B)$ as defined in (4). Then $z_{\mathrm{AR}}(B) = z^{d}_{\mathrm{AR}}(B)$.

The proof follows from [7]. For completeness, we present it in Appendix A. Reformulation (4) can be interpreted as a new two-stage adjustable problem over the dualized uncertainty set $W$ with recourse decision $\lambda(w)$. Following [7], we refer to (4) as the dualized formulation and to (1) as the primal formulation. Bertsimas and de Ruiter [7] show that even the affine approximations of (1) and (4) (where recourse decisions are restricted to be affine functions of the respective uncertainties) are equivalent. In particular, we have the following lemma, which is a restatement of Theorem 2 in [7].

Lemma 2.3 (Theorem 2 in Bertsimas and de Ruiter [7]). Let $z^{d}_{\mathrm{Aff}}(B)$ be the objective value of (4) when $\lambda(w)$ is restricted to be an affine function of $w$, and let $z_{\mathrm{Aff}}(B)$ be as defined in (2). Then $z^{d}_{\mathrm{Aff}}(B) = z_{\mathrm{Aff}}(B)$.
Bertsimas and Goyal [8] show that affine policies are optimal for the adjustable problem (1) when the uncertainty set $U$ is a simplex. In fact, optimality of affine policies for simplex uncertainty sets holds for a more general formulation than the one considered in [8]. In particular, we have the following lemma.

Lemma 2.4. Suppose the set $W$ is a simplex, i.e., the convex hull of $m+1$ affinely independent points. Then affine policy is optimal for the adjustable problem (4), i.e., $z^{d}_{\mathrm{Aff}}(B) = z^{d}_{\mathrm{AR}}(B)$.

The proof proceeds along similar lines as in [8]. For completeness, we provide it in Appendix A. In fact, if the uncertainty set is not a simplex but can be approximated by a simplex within a small scaling factor, affine policies can still be shown to give a good approximation. In particular, we have the following lemma.

Lemma 2.5. Denote by $W$ the dualized uncertainty set as defined in (5), and suppose there exist a simplex $S$ and a scaling factor $\kappa \ge 1$ such that $S \subseteq W \subseteq \kappa\cdot S$. Then $z^{d}_{\mathrm{AR}}(B) \le z^{d}_{\mathrm{Aff}}(B) \le \kappa\cdot z^{d}_{\mathrm{AR}}(B)$. Furthermore, $z_{\mathrm{AR}}(B) \le z_{\mathrm{Aff}}(B) \le \kappa\cdot z_{\mathrm{AR}}(B)$.

The proof of Lemma 2.5 is presented in Appendix A.
2.1.2 Proof of Theorem 2.1

We consider instances of problem (1) where the $\tilde B_{ij}$ are i.i.d. according to a bounded distribution with support in $[0,b]$ and $\mathbb{E}[\tilde B_{ij}] = \mu$ for all $i\in[m]$, $j\in[n]$. Denote the dualized uncertainty set $\tilde W = \{w\in\mathbb{R}^m_+ \mid \tilde B^T w \le \bar d\cdot e\}$. Our performance bound is based on showing that $\tilde W$ can be sandwiched between two simplices with a small scaling factor. In particular, consider the following simplex:

$$S = \left\{w\in\mathbb{R}^m_+ \;\Big|\; \sum_{i=1}^{m} w_i \le \frac{\bar d}{b}\right\}. \tag{6}$$

We will show that $S \subseteq \tilde W \subseteq \frac{b}{\mu(1-\epsilon)}\cdot S$ with probability at least $1 - \frac{1}{m}$, where $\epsilon = \frac{b}{\mu}\sqrt{\frac{\log m}{n}}$.
First, we show that $S \subseteq \tilde W$. Consider any $w \in S$. For any $i = 1, \ldots, n$,

$$\sum_{j=1}^{m} \tilde B_{ji}\, w_j \;\le\; b\sum_{j=1}^{m} w_j \;\le\; \bar d.$$

The first inequality holds because all components of $\tilde B$ are upper bounded by $b$, and the second one follows from $w \in S$. Hence $\tilde B^T w \le \bar d e$, and consequently $S \subseteq \tilde W$.

Now, we show that the other inclusion holds with high probability. Consider any $w \in \tilde W$. We have $\tilde B^T w \le \bar d\cdot e$. Summing up all the inequalities and dividing by $n$, we get

$$\sum_{j=1}^{m}\left(\frac{\sum_{i=1}^{n}\tilde B_{ji}}{n}\right) w_j \;\le\; \bar d. \tag{7}$$

Using Hoeffding's inequality [18] (see Appendix B) with deviation $\delta = b\sqrt{\log m/n}$, we have for each $j$

$$\mathbb{P}\left(\frac{\sum_{i=1}^{n}\tilde B_{ji}}{n} \ge \mu - \delta\right) \ge 1 - \exp\left(-\frac{2n\delta^2}{b^2}\right) = 1 - \frac{1}{m^2},$$

and combining over $j = 1, \ldots, m$ gives

$$\mathbb{P}\left(\frac{\sum_{i=1}^{n}\tilde B_{ji}}{n} \ge \mu - \delta \;\;\forall j = 1,\ldots,m\right) \ge \left(1 - \frac{1}{m^2}\right)^m \ge 1 - \frac{1}{m},$$
where the last inequality follows from Bernoulli's inequality. Therefore, with probability at least $1 - \frac{1}{m}$ we have

$$\sum_{j=1}^{m} w_j \;\le\; \frac{1}{\mu-\delta}\sum_{j=1}^{m}\left(\frac{\sum_{i=1}^{n}\tilde B_{ji}}{n}\right) w_j \;\le\; \frac{\bar d}{\mu-\delta} \;=\; \frac{b}{\mu(1-\epsilon)}\cdot\frac{\bar d}{b},$$

where the second inequality follows from (7) and $\epsilon = \delta/\mu = \frac{b}{\mu}\sqrt{\frac{\log m}{n}}$. Note that for $m$ and $n$ sufficiently large, we have $\mu - \delta > 0$. Then $w \in \frac{b}{\mu(1-\epsilon)}\cdot S$ for any $w \in \tilde W$, and consequently $S \subseteq \tilde W \subseteq \frac{b}{\mu(1-\epsilon)}\cdot S$ with probability at least $1 - 1/m$. Finally, we apply the result of Lemma 2.5 to conclude. ∎
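The heart of this proof is easy to check numerically (our illustration, for the uniform case $b = 1$, $\mu = 1/2$, $\bar d = 1$): the row averages of $\tilde B$ concentrate around $\mu$, which pins $\tilde W$ between the two simplices.

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 2000
b, mu, dbar = 1.0, 0.5, 1.0
B = rng.uniform(0.0, b, size=(m, n))

delta = b * np.sqrt(np.log(m) / n)       # Hoeffding deviation
row_means = B.mean(axis=1)               # (1/n) * sum_i B_ji for each j
assert row_means.min() >= mu - delta     # event of probability >= 1 - 1/m

# Hence sum_j w_j <= dbar / (mu - delta) for every w in
# W = {w >= 0 : B^T w <= dbar * e}, i.e.
# S subseteq W subseteq (b / (mu - delta)) * S with S = {w >= 0 : sum w <= dbar / b}.
print("scaling factor:", b / (mu - delta))   # close to b / mu = 2 for large m, n
```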
2.2 Unbounded distributions

While the approximation bound in Theorem 2.1 leads to a good approximation for many distributions, the ratio $b/\mu$ can be significantly large in general. We can tighten the analysis by using the concentration properties of distributions, and we can extend the analysis even to the case of distributions with unbounded support and sub-gaussian tails. In this section, we consider the special case where the $\tilde B_{ij}$ are i.i.d. absolute values of standard Gaussians, i.e., distributed according to the folded normal distribution, and show a logarithmic approximation bound for affine policies. In particular, we have the following theorem.

Theorem 2.6. Consider the two-stage adjustable problem (1) where, for all $i\in[m]$, $j\in[n]$, $\tilde B_{ij} = |\tilde G_{ij}|$ and the $\tilde G_{ij}$ are i.i.d. standard Gaussians. For $n$ and $m$ sufficiently large, we have with probability at least $1 - \frac{1}{m}$,

$$z_{\mathrm{Aff}}(\tilde B) \le \kappa\cdot z_{\mathrm{AR}}(\tilde B),$$

where $\kappa = O\big(\sqrt{\log m + \log n}\big)$.

The proof of Theorem 2.6 is presented in Appendix C. We can extend the analysis and show a similar bound for the class of distributions with sub-gaussian tails. The bound of $O\big(\sqrt{\log m + \log n}\big)$ depends on the dimension of the problem, unlike in the case of the uniform bounded distribution, but it is significantly better than the worst-case bound of $O(\sqrt{m})$ [8] for general instances. Furthermore, this bound holds for all uncertainty sets with high probability. We would like to note, though, that the bounds are not necessarily tight. In fact, in our numerical experiments, where the uncertainty set is a budget of uncertainty set, we observe that affine policies are near-optimal.
3 Family of worst-case distributions: perturbation of i.i.d. coefficients

For any $m$ sufficiently large, the authors in [8] present an instance where affine policy is a factor $\Omega(m^{1/2-\delta})$ away from the optimal adjustable solution. The parameters of the instance in [8] were carefully chosen to achieve this gap. In this section, we show that the family of worst-case instances is not a measure-zero set. In fact, we exhibit a distribution and an uncertainty set such that a random instance from that distribution achieves a worst-case gap of $\Omega(\sqrt{m})$ with high probability. The coefficients $\tilde B_{ij}$ in our bad family of instances are independent but not identically distributed. The instance is given as follows:
instance can be given as follows.
n = m, A = 0, c = 0, d = e
1
U = conv (0, e1 , . . . , em , ? 1 , . . . , ? m ) where ? i = p (e ei ) 8i 2 [m].
m
?
1
if i = j
?ij =
B
?ij are i.i.d. uniform[0, 1].
p1 ? u
?
if i 6= j where for all i 6= j, u
m
(8)
ij
Theorem 3.1. For the instance defined in (8), we have with probability at least $1 - 1/m$,

$$z_{\mathrm{Aff}}(\tilde B) = \Omega(\sqrt{m})\cdot z_{\mathrm{AR}}(\tilde B).$$
We present the proof of Theorem 3.1 in Appendix D. As a byproduct, we also tighten the lower bound on the performance of affine policies to $\Omega(\sqrt{m})$, improving upon the lower bound of $\Omega(m^{1/2-\delta})$ in [8]. We would like to note that both the uncertainty set and the distribution of coefficients in our instance (8) are carefully chosen to achieve the worst-case gap. Our analysis suggests that to obtain bad instances for affine policies, one needs to generate instances using a structured distribution as above, and it may not be easy to obtain bad instances in a completely random setting.
4 Performance of affine policy: Empirical study
In this section, we present a computational study to test the empirical performance of affine policy
for the two-stage adjustable problem (1) on random instances.
Experimental setup. We consider two classes of distributions for generating random instances: i) the coefficients of $\tilde{B}$ are i.i.d. uniform$[0, 1]$, and ii) the coefficients of $\tilde{B}$ are absolute values of i.i.d. standard Gaussians. We consider the following budget of uncertainty set:
$$U = \left\{ h \in [0, 1]^m \;\middle|\; \sum_{i=1}^{m} h_i \le \sqrt{m} \right\}. \qquad (9)$$
Note that the set (9) is widely used in both theory and practice, and arises naturally as a consequence of concentration of sums of independent uncertain demand requirements. We would also like to note that the adjustable problem over this budget of uncertainty set $U$ is hard to approximate within a factor better than $O(\log n)$ [16]. We consider $n = m$, $d = e$. Also, we consider $c = 0$, $A = 0$. We restrict to this case in order to compute the optimal adjustable solution in a reasonable time by solving a single Mixed Integer Program (MIP). For the general problem, computing the optimal adjustable solution requires solving a sequence of MIPs, each one of which is significantly challenging to solve. We would like to note, though, that our analysis does not depend on the first-stage cost $c$ and matrix $A$, and the affine policy can be computed efficiently even without this assumption. We consider values of $m$ from 10 to 50 and consider 20 instances for each value of $m$. We report the ratio $r = z_{\mathsf{Aff}}(\tilde{B})/z_{\mathsf{AR}}(\tilde{B})$ in Table 1. In particular, for each value of $m$, we report the average ratio $r_{\mathsf{avg}}$, the maximum ratio $r_{\mathsf{max}}$, the running time of the adjustable policy $T_{\mathsf{AR}}(s)$ and the running time of the affine policy $T_{\mathsf{Aff}}(s)$. We first give a compact LP formulation for the affine problem (2) and a compact MIP formulation for the separation of the adjustable problem (1).
LP formulations for the affine policies. The affine problem (2) can be reformulated as follows:
$$z_{\mathsf{Aff}}(\tilde{B}) = \min\left\{ c^T x + z \;\middle|\; \begin{array}{ll} z \ge d^T(Ph + q) & \forall h \in U \\ Ax + \tilde{B}(Ph + q) \ge h & \forall h \in U \\ Ph + q \ge 0 & \forall h \in U \\ x \in \mathbb{R}^n_+ & \end{array} \right\}.$$
Note that this formulation has infinitely many constraints, but we can write a compact LP formulation using standard techniques from duality. For example, the first constraint is equivalent to $z - d^Tq \ge \max\{d^TPh \mid Rh \le r, h \ge 0\}$. By taking the dual of the maximization problem, the constraint becomes $z - d^Tq \ge \min\{r^Tv \mid R^Tv \ge P^Td, v \ge 0\}$. We can then drop the min and introduce $v$ as a variable; hence we obtain the following linear constraints: $z - d^Tq \ge r^Tv$, $R^Tv \ge P^Td$ and $v \ge 0$. We can apply the same techniques to the other constraints. The complete LP formulation and its proof of correctness are presented in Appendix E.
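The dualization step can be verified numerically. The sketch below uses hypothetical data (ours, not the paper's) and assumes $U = \{h \ge 0 : Rh \le r\}$ as in the text; it checks that the primal maximum equals the dual minimum:

```python
# Numeric sanity check of the LP duality step above: strong duality gives
#   max { d^T P h | Rh <= r, h >= 0 } = min { r^T v | R^T v >= P^T d, v >= 0 }.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, k = 6, 4                                  # h in R^m, R has k rows
R = rng.uniform(0.0, 1.0, size=(k, m))       # positive entries keep the primal bounded
r = rng.uniform(1.0, 2.0, size=k)
P = rng.uniform(0.0, 1.0, size=(m, m))
d = rng.uniform(0.0, 1.0, size=m)
c = P.T @ d                                  # objective d^T P h = c^T h

# Primal: maximize c^T h  <=>  minimize -c^T h  s.t. Rh <= r, h >= 0.
primal = linprog(-c, A_ub=R, b_ub=r, bounds=[(0, None)] * m)

# Dual: minimize r^T v  s.t. R^T v >= c, v >= 0  (rewritten as -R^T v <= -c).
dual = linprog(r, A_ub=-R.T, b_ub=-c, bounds=[(0, None)] * k)

print(-primal.fun, dual.fun)                 # the two optimal values coincide
```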
Mixed Integer Program formulation for the adjustable problem (1). For the adjustable problem (1), we show that the separation problem (10) can be formulated as a mixed integer program. The separation problem can be formulated as follows: given $\hat{x}$ and $\hat{z}$, decide whether
$$\max\left\{(h - A\hat{x})^T w \;\middle|\; w \in W, h \in U\right\} > \hat{z}. \qquad (10)$$
The correctness of formulation (10) follows from equation (11) in the proof of Lemma 2.2 in Appendix A. The constraints in (10) are linear, but the objective function contains a bilinear term, $h^Tw$. We linearize this using a standard digitized reformulation. In particular, we consider finite-bit representations of the continuous variables $h_i$ and $w_i$ to the desired accuracy, and introduce additional binary variables $\alpha_{ik}, \beta_{ik}$, where $\alpha_{ik}$ and $\beta_{ik}$ represent the $k$-th bits of $h_i$ and $w_i$ respectively. Now, for any $i \in [m]$, $h_i \cdot w_i$ can be expressed as a bilinear expression with products of binary variables $\alpha_{ik} \cdot \beta_{ij}$, which can be linearized using an additional variable $\gamma_{ikj}$ and the standard linear inequalities: $\gamma_{ikj} \le \beta_{ij}$, $\gamma_{ikj} \le \alpha_{ik}$, $\gamma_{ikj} + 1 \ge \alpha_{ik} + \beta_{ij}$. The complete MIP formulation and the proof of correctness are presented in Appendix E.
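As a quick illustration of why the digitized reformulation is exact, the following brute-force check (ours, not the paper's) confirms that the three linear inequalities pin $\gamma$ to the product $\alpha \cdot \beta$ for binary $\alpha, \beta$:

```python
# Brute-force check that the standard linearization used above is exact for
# binary variables: with alpha, beta in {0, 1}, the constraints
#   gamma <= alpha,  gamma <= beta,  gamma >= alpha + beta - 1,  gamma >= 0
# force gamma = alpha * beta.
import itertools

for alpha, beta in itertools.product([0, 1], repeat=2):
    lo = max(0, alpha + beta - 1)      # tightest lower bound on gamma
    hi = min(alpha, beta)              # tightest upper bound on gamma
    assert lo == hi == alpha * beta    # the bounds pin gamma to the product
print("linearization is exact on all four binary assignments")
```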
For general $A \ne 0$, we need to solve a sequence of MIPs to find the optimal adjustable solution. In
order to compute the optimal adjustable solution in a reasonable time, we assume A = 0, c = 0 in
our experimental setting so that we only need to solve one MIP.
Results. In our experiments, we observe that the empirical performance of the affine policy is near-optimal. In particular, the performance is significantly better than the theoretical performance bounds implied by Theorem 2.1 and Theorem 2.6. For instance, Theorem 2.1 implies that the affine policy is a 2-approximation with high probability for random instances from a uniform distribution. However, in our experiments, we observe that the optimality gap for affine policies is at most 4% (i.e. an approximation ratio of at most 1.04). The same observation holds for Gaussian distributions as well, where Theorem 2.6 gives an approximation bound of $O(\sqrt{\log(mn)})$. We would like to remark that we are not able to report the ratio $r$ for large values of $m$ because the adjustable problem is computationally very challenging: for $m \ge 40$, the MIP does not solve within a time limit of 3 hours for most instances. On the other hand, the affine policy scales very well, and the average running time is a few seconds even for large values of $m$. This demonstrates the power of affine policies, which can be computed efficiently and give good approximations for a large class of instances.
(a) Uniform

m     r_avg   r_max   T_AR(s)   T_Aff(s)
10    1.01    1.03     10.55      0.01
20    1.02    1.04    110.57      0.23
30    1.01    1.02    761.21      1.29
50     **      **        **      14.92

(b) Folded Normal

m     r_avg   r_max   T_AR(s)   T_Aff(s)
10    1.00    1.03     12.95      0.01
20    1.01    1.03    217.08      0.39
30    1.01    1.03    594.15      1.15
50     **      **        **      13.87
Table 1: Comparison of the performance and computation time of the affine policy and the optimal adjustable policy for uniform and folded normal distributions. For 20 instances, we compute $z_{\mathsf{Aff}}(\tilde{B})/z_{\mathsf{AR}}(\tilde{B})$ and present the average and max ratios. Here, $T_{\mathsf{AR}}(s)$ denotes the running time for the adjustable policy and $T_{\mathsf{Aff}}(s)$ denotes the running time for the affine policy, in seconds. ** denotes the cases where we hit the time limit of 3 hours. These results were obtained using Gurobi 7.0.2 on a 16-core server with a 2.93 GHz processor and 56 GB RAM.
References
[1] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust optimization. Princeton University Press, 2009.
[2] A. Ben-Tal, A. Goryashko, E. Guslitzer, and A. Nemirovski. Adjustable robust solutions of uncertain linear programs. Mathematical Programming, 99(2):351–376, 2004.
[3] A. Ben-Tal and A. Nemirovski. Robust convex optimization. Mathematics of Operations Research, 23(4):769–805, 1998.
[4] A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1–14, 1999.
[5] A. Ben-Tal and A. Nemirovski. Robust optimization: methodology and applications. Mathematical Programming, 92(3):453–480, 2002.
[6] D. Bertsimas, D. Brown, and C. Caramanis. Theory and applications of robust optimization. SIAM Review, 53(3):464–501, 2011.
[7] D. Bertsimas and F. J. de Ruiter. Duality in two-stage adaptive linear optimization: Faster computation and stronger bounds. INFORMS Journal on Computing, 28(3):500–511, 2016.
[8] D. Bertsimas and V. Goyal. On the power and limitations of affine policies in two-stage adaptive optimization. Mathematical Programming, 134(2):491–531, 2012.
[9] D. Bertsimas, V. Goyal, and X. Sun. A geometric characterization of the power of finite adaptability in multistage stochastic and adaptive optimization. Mathematics of Operations Research, 36(1):24–54, 2011.
[10] D. Bertsimas and M. Sim. Robust discrete optimization and network flows. Mathematical Programming, Series B, 98:49–71, 2003.
[11] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52(2):35–53, 2004.
[12] F. Chung and L. Lu. Concentration inequalities and martingale inequalities: a survey. Internet Mathematics, 3(1):79–127, 2006.
[13] G. Dantzig. Linear programming under uncertainty. Management Science, 1:197–206, 1955.
[14] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM Journal on Matrix Analysis and Applications, 18:1035–1064, 1997.
[15] O. El Housni and V. Goyal. Piecewise static policies for two-stage adjustable robust linear optimization. Mathematical Programming, pages 1–17, 2017.
[16] U. Feige, K. Jain, M. Mahdian, and V. Mirrokni. Robust combinatorial optimization with exponential scenarios. Lecture Notes in Computer Science, 4513:439–453, 2007.
[17] D. Goldfarb and G. Iyengar. Robust portfolio selection problems. Mathematics of Operations Research, 28(1):1–38, 2003.
[18] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[19] P. Kall and S. Wallace. Stochastic programming. Wiley, New York, 1994.
[20] A. Prékopa. Stochastic programming. Kluwer Academic Publishers, Dordrecht, Boston, 1995.
[21] A. Shapiro. Stochastic programming approach to optimization under uncertainty. Mathematical Programming, Series B, 112(1):183–220, 2008.
[22] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on stochastic programming: modeling and theory. Society for Industrial and Applied Mathematics, 2009.
[23] A. Soyster. Convex programming with set-inclusive constraints and applications to inexact linear programming. Operations Research, 21(5):1154–1157, 1973.
6,702 | 7,062 | A Unified Approach to Interpreting Model
Predictions
Scott M. Lundberg
Paul G. Allen School of Computer Science
University of Washington
Seattle, WA 98105
[email protected]
Su-In Lee
Paul G. Allen School of Computer Science
Department of Genome Sciences
University of Washington
Seattle, WA 98105
[email protected]
Abstract
Understanding why a model makes a certain prediction can be as crucial as the
prediction's accuracy in many applications. However, the highest accuracy for large
modern datasets is often achieved by complex models that even experts struggle to
interpret, such as ensemble or deep learning models, creating a tension between
accuracy and interpretability. In response, various methods have recently been
proposed to help users interpret the predictions of complex models, but it is often
unclear how these methods are related and when one method is preferable over
another. To address this problem, we present a unified framework for interpreting
predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature
an importance value for a particular prediction. Its novel components include: (1)
the identification of a new class of additive feature importance measures, and (2)
theoretical results showing there is a unique solution in this class with a set of
desirable properties. The new class unifies six existing methods, notable because
several recent methods in the class lack the proposed desirable properties. Based
on insights from this unification, we present new methods that show improved
computational performance and/or better consistency with human intuition than
previous approaches.
1 Introduction
The ability to correctly interpret a prediction model's output is extremely important. It engenders appropriate user trust, provides insight into how a model may be improved, and supports understanding of the process being modeled. In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones. However, the growing availability of big data has increased the benefits of using complex models, bringing to the forefront the trade-off between accuracy and interpretability of a model's output. A
bringing to the forefront the trade-off between accuracy and interpretability of a model?s output. A
wide variety of different methods have been recently proposed to address this issue [5, 8, 9, 3, 4, 1].
But an understanding of how these methods relate and when one method is preferable to another is
still lacking.
Here, we present a novel unified approach to interpreting model predictions.1 Our approach leads to
three potentially surprising results that bring clarity to the growing space of methods:
1. We introduce the perspective of viewing any explanation of a model?s prediction as a model itself,
which we term the explanation model. This lets us define the class of additive feature attribution
methods (Section 2), which unifies six current methods.
1 https://github.com/slundberg/shap
2. We then show that game theory results guaranteeing a unique solution apply to the entire class of
additive feature attribution methods (Section 3) and propose SHAP values as a unified measure of
feature importance that various methods approximate (Section 4).
3. We propose new SHAP value estimation methods and demonstrate that they are better aligned
with human intuition as measured by user studies and more effectually discriminate among model
output classes than several existing methods (Section 5).
2 Additive Feature Attribution Methods
The best explanation of a simple model is the model itself; it perfectly represents itself and is easy to
understand. For complex models, such as ensemble methods or deep networks, we cannot use the
original model as its own best explanation because it is not easy to understand. Instead, we must use a
simpler explanation model, which we define as any interpretable approximation of the original model.
We show below that six current explanation methods from the literature all use the same explanation
model. This previously unappreciated unity has interesting implications, which we describe in later
sections.
Let f be the original prediction model to be explained and g the explanation model. Here, we focus
on local methods designed to explain a prediction f (x) based on a single input x, as proposed in
LIME [5]. Explanation models often use simplified inputs $x'$ that map to the original inputs through a mapping function $x = h_x(x')$. Local methods try to ensure $g(z') \approx f(h_x(z'))$ whenever $z' \approx x'$. (Note that $h_x(x') = x$ even though $x'$ may contain less information than $x$, because $h_x$ is specific to the current input $x$.)
Definition 1 Additive feature attribution methods have an explanation model that is a linear
function of binary variables:
$$g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad (1)$$
where $z' \in \{0, 1\}^M$, $M$ is the number of simplified input features, and $\phi_i \in \mathbb{R}$.
Methods with explanation models matching Definition 1 attribute an effect $\phi_i$ to each feature, and summing the effects of all feature attributions approximates the output $f(x)$ of the original model.
Many current methods match Definition 1, several of which are discussed below.
2.1 LIME
The LIME method interprets individual model predictions based on locally approximating the model
around a given prediction [5]. The local linear explanation model that LIME uses adheres to Equation
1 exactly and is thus an additive feature attribution method. LIME refers to simplified inputs $x'$ as "interpretable inputs," and the mapping $x = h_x(x')$ converts a binary vector of interpretable inputs
into the original input space. Different types of hx mappings are used for different input spaces. For
bag-of-words text features, $h_x$ converts a vector of 1's or 0's (present or not) into the original word
count if the simplified input is one, or zero if the simplified input is zero. For images, hx treats the
image as a set of super pixels; it then maps 1 to leaving the super pixel as its original value and 0
to replacing the super pixel with an average of neighboring pixels (this is meant to represent being
missing).
To find $\phi$, LIME minimizes the following objective function:
$$\xi = \underset{g \in \mathcal{G}}{\arg\min} \; L(f, g, \pi_{x'}) + \Omega(g). \qquad (2)$$
Faithfulness of the explanation model $g(z')$ to the original model $f(h_x(z'))$ is enforced through the loss $L$ over a set of samples in the simplified input space, weighted by the local kernel $\pi_{x'}$. $\Omega$ penalizes the complexity of $g$. Since in LIME $g$ follows Equation 1 and $L$ is a squared loss, Equation
2 can be solved using penalized linear regression.
2.2 DeepLIFT
DeepLIFT was recently proposed as a recursive prediction explanation method for deep learning [8, 7]. It attributes to each input $x_i$ a value $C_{\Delta x_i \Delta y}$ that represents the effect of that input being set to a reference value as opposed to its original value. This means that for DeepLIFT, the mapping $x = h_x(x')$ converts binary values into the original inputs, where 1 indicates that an input takes its original value, and 0 indicates that it takes the reference value. The reference value, though chosen by the user, represents a typical uninformative background value for the feature.
DeepLIFT uses a "summation-to-delta" property that states:
$$\sum_{i=1}^{n} C_{\Delta x_i \Delta o} = \Delta o, \qquad (3)$$
where $o = f(x)$ is the model output, $\Delta o = f(x) - f(r)$, $\Delta x_i = x_i - r_i$, and $r$ is the reference input. If we let $\phi_i = C_{\Delta x_i \Delta o}$ and $\phi_0 = f(r)$, then DeepLIFT's explanation model matches Equation 1 and
is thus another additive feature attribution method.
2.3 Layer-Wise Relevance Propagation
The layer-wise relevance propagation method interprets the predictions of deep networks [1]. As noted by Shrikumar et al., this method is equivalent to DeepLIFT with the reference activations of all neurons fixed to zero. Thus, $x = h_x(x')$ converts binary values into the original input space, where 1 means that an input takes its original value, and 0 means an input takes the value 0. Layer-wise relevance propagation's explanation model, like DeepLIFT's, matches Equation 1.
2.4 Classic Shapley Value Estimation
Three previous methods use classic equations from cooperative game theory to compute explanations
of model predictions: Shapley regression values [4], Shapley sampling values [9], and Quantitative
Input Influence [3].
Shapley regression values are feature importances for linear models in the presence of multicollinearity. This method requires retraining the model on all feature subsets $S \subseteq F$, where $F$ is the set of all features. It assigns an importance value to each feature that represents the effect on the model prediction of including that feature. To compute this effect, a model $f_{S \cup \{i\}}$ is trained with that feature present, and another model $f_S$ is trained with the feature withheld. Then, predictions from the two models are compared on the current input, $f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S)$, where $x_S$ represents the values of the input features in the set $S$. Since the effect of withholding a feature depends on other features in the model, the preceding differences are computed for all possible subsets $S \subseteq F \setminus \{i\}$. The Shapley values are then computed and used as feature attributions. They are a weighted average of all possible differences:
$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]. \qquad (4)$$
For Shapley regression values, $h_x$ maps 1 or 0 to the original input space, where 1 indicates the input is included in the model, and 0 indicates exclusion from the model. If we let $\phi_0 = f_{\varnothing}(\varnothing)$, then the Shapley regression values match Equation 1 and are hence an additive feature attribution method.
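For reference, Equation 4 can be implemented directly by enumerating subsets. The sketch below is illustrative only (it is exponential in the number of features, and `f_subset` stands in for the retrained model $f_S$, which is our naming):

```python
# Direct (exponential-time) implementation of Equation 4: f_subset(S) plays
# the role of f_S(x_S), the model evaluated on the feature subset S.
from itertools import combinations
from math import factorial

def shapley_values(f_subset, n_features):
    F = list(range(n_features))
    phi = [0.0] * n_features
    for i in F:
        rest = [j for j in F if j != i]
        for size in range(len(rest) + 1):
            for S in combinations(rest, size):
                w = factorial(len(S)) * factorial(n_features - len(S) - 1)
                w /= factorial(n_features)
                phi[i] += w * (f_subset(set(S) | {i}) - f_subset(set(S)))
    return phi

# Example: for an exactly additive "model", the per-feature terms are recovered.
vals = {0: 2.0, 1: -1.0, 2: 0.5}
f = lambda S: sum(vals[j] for j in S)
print(shapley_values(f, 3))   # [2.0, -1.0, 0.5]
```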
Shapley sampling values are meant to explain any model by: (1) applying sampling approximations
to Equation 4, and (2) approximating the effect of removing a variable from the model by integrating
over samples from the training dataset. This eliminates the need to retrain the model and allows fewer
than $2^{|F|}$ differences to be computed. Since the explanation model form of Shapley sampling values
is the same as that for Shapley regression values, it is also an additive feature attribution method.
Quantitative input influence is a broader framework that addresses more than feature attributions.
However, as part of its method it independently proposes a sampling approximation to Shapley values
that is nearly identical to Shapley sampling values. It is thus another additive feature attribution
method.
3 Simple Properties Uniquely Determine Additive Feature Attributions
A surprising attribute of the class of additive feature attribution methods is the presence of a single
unique solution in this class with three desirable properties (described below). While these properties
are familiar to the classical Shapley value estimation methods, they were previously unknown for
other additive feature attribution methods.
The first desirable property is local accuracy. When approximating the original model f for a specific
input x, local accuracy requires the explanation model to at least match the output of f for the
simplified input $x'$ (which corresponds to the original input $x$).
Property 1 (Local accuracy)
$$f(x) = g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i x'_i. \qquad (5)$$
The explanation model $g(x')$ matches the original model $f(x)$ when $x = h_x(x')$.
The second property is missingness. If the simplified inputs represent feature presence, then missingness requires features missing in the original input to have no impact. All of the methods described in
Section 2 obey the missingness property.
Property 2 (Missingness)
$$x'_i = 0 \implies \phi_i = 0. \qquad (6)$$
Missingness constrains features where $x'_i = 0$ to have no attributed impact.
The third property is consistency. Consistency states that if a model changes so that some simplified
input's contribution increases or stays the same regardless of the other inputs, that input's attribution
should not decrease.
Property 3 (Consistency) Let $f_x(z') = f(h_x(z'))$ and $z' \setminus i$ denote setting $z'_i = 0$. For any two models $f$ and $f'$, if
$$f'_x(z') - f'_x(z' \setminus i) \ge f_x(z') - f_x(z' \setminus i) \qquad (7)$$
for all inputs $z' \in \{0, 1\}^M$, then $\phi_i(f', x) \ge \phi_i(f, x)$.
Theorem 1 Only one possible explanation model g follows Definition 1 and satisfies Properties 1, 2,
and 3:
$$\phi_i(f, x) = \sum_{z' \subseteq x'} \frac{|z'|!\,(M - |z'| - 1)!}{M!} \left[ f_x(z') - f_x(z' \setminus i) \right], \qquad (8)$$
where $|z'|$ is the number of non-zero entries in $z'$, and $z' \subseteq x'$ represents all $z'$ vectors whose non-zero entries are a subset of the non-zero entries in $x'$.
Theorem 1 follows from combined cooperative game theory results, where the values $\phi_i$ are known
as Shapley values [6]. Young (1985) demonstrated that Shapley values are the only set of values
that satisfy three axioms similar to Property 1, Property 3, and a final property that we show to be
redundant in this setting (see Supplementary Material). Property 2 is required to adapt the Shapley
proofs to the class of additive feature attribution methods.
Under Properties 1-3, for a given simplified input mapping hx , Theorem 1 shows that there is only one
possible additive feature attribution method. This result implies that methods not based on Shapley
values violate local accuracy and/or consistency (methods in Section 2 already respect missingness).
The following section proposes a unified approach that improves previous methods, preventing them
from unintentionally violating Properties 1 and 3.
4 SHAP (SHapley Additive exPlanation) Values
We propose SHAP values as a unified measure of feature importance. These are the Shapley values
of a conditional expectation function of the original model; thus, they are the solution to Equation
Figure 1: SHAP (SHapley Additive exPlanation) values attribute to each feature the change in the
expected model prediction when conditioning on that feature. They explain how to get from the
base value E[f (z)] that would be predicted if we did not know any features to the current output
f (x). This diagram shows a single ordering. When the model is non-linear or the input features are
not independent, however, the order in which features are added to the expectation matters, and the
SHAP values arise from averaging the $\phi_i$ values across all possible orderings.
8, where $f_x(z') = f(h_x(z')) = E[f(z) \mid z_S]$, and $S$ is the set of non-zero indexes in $z'$ (Figure 1). Based on Sections 2 and 3, SHAP values provide the unique additive feature importance measure that adheres to Properties 1-3 and uses conditional expectations to define simplified inputs. Implicit in this definition of SHAP values is a simplified input mapping, $h_x(z') = z_S$, where $z_S$ has missing values for features not in the set $S$. Since most models cannot handle arbitrary patterns of missing input values, we approximate $f(z_S)$ with $E[f(z) \mid z_S]$. This definition of SHAP values is designed to closely align with the Shapley regression, Shapley sampling, and quantitative input influence feature attributions, while also allowing for connections with LIME, DeepLIFT, and layer-wise relevance propagation.
The exact computation of SHAP values is challenging. However, by combining insights from current
additive feature attribution methods, we can approximate them. We describe two model-agnostic
approximation methods, one that is already known (Shapley sampling values) and another that is
novel (Kernel SHAP). We also describe four model-type-specific approximation methods, two of
which are novel (Max SHAP, Deep SHAP). When using these methods, feature independence and model linearity are two optional assumptions simplifying the computation of the expected values (note that $\bar{S}$ is the set of features not in $S$):
$$f(h_x(z')) = E[f(z) \mid z_S] \qquad \text{SHAP explanation model simplified input mapping} \qquad (9)$$
$$= E_{z_{\bar{S}} \mid z_S}[f(z)] \qquad \text{expectation over } z_{\bar{S}} \mid z_S \qquad (10)$$
$$\approx E_{z_{\bar{S}}}[f(z)] \qquad \text{assume feature independence (as in [9, 5, 7, 3])} \qquad (11)$$
$$\approx f([z_S, E[z_{\bar{S}}]]). \qquad \text{assume model linearity} \qquad (12)$$
4.1 Model-Agnostic Approximations
If we assume feature independence when approximating conditional expectations (Equation 11), as
in [9, 5, 7, 3], then SHAP values can be estimated directly using the Shapley sampling values method
[9] or equivalently the Quantitative Input Influence method [3]. These methods use a sampling
approximation of a permutation version of the classic Shapley value equations (Equation 8). Separate
sampling estimates are performed for each feature attribution. While reasonable to compute for a
small number of inputs, the Kernel SHAP method described next requires fewer evaluations of the
original model to obtain similar approximation accuracy (Section 5).
Kernel SHAP (Linear LIME + Shapley values)
Linear LIME uses a linear explanation model to locally approximate f , where local is measured in the
simplified binary input space. At first glance, the regression formulation of LIME in Equation 2 seems
very different from the classical Shapley value formulation of Equation 8. However, since linear
LIME is an additive feature attribution method, we know the Shapley values are the only possible
solution to Equation 2 that satisfies Properties 1-3 (local accuracy, missingness and consistency). A
natural question to pose is whether the solution to Equation 2 recovers these values. The answer
depends on the choice of loss function $L$, weighting kernel $\pi_{x'}$ and regularization term $\Omega$. The LIME
choices for these parameters are made heuristically; using these choices, Equation 2 does not recover
the Shapley values. One consequence is that local accuracy and/or consistency are violated, which in
turn leads to unintuitive behavior in certain circumstances (see Section 5).
Below we show how to avoid heuristically choosing the parameters in Equation 2 and how to find the
loss function $L$, weighting kernel $\pi_{x'}$, and regularization term $\Omega$ that recover the Shapley values.
Theorem 2 (Shapley kernel) Under Definition 1, the specific forms of $\pi_{x'}$, $L$, and $\Omega$ that make solutions of Equation 2 consistent with Properties 1 through 3 are:
$$\Omega(g) = 0,$$
$$\pi_{x'}(z') = \frac{M - 1}{\binom{M}{|z'|}\,|z'|\,(M - |z'|)},$$
$$L(f, g, \pi_{x'}) = \sum_{z' \in Z} \left[ f(h_x^{-1}(z')) - g(z') \right]^2 \pi_{x'}(z'),$$
where $|z'|$ is the number of non-zero elements in $z'$.
The proof of Theorem 2 is shown in the Supplementary Material.

It is important to note that $\pi_{x'}(z') = \infty$ when $|z'| \in \{0, M\}$, which enforces $\phi_0 = f_x(\varnothing)$ and $f(x) = \sum_{i=0}^{M} \phi_i$. In practice, these infinite weights can be avoided during optimization by analytically eliminating two variables using these constraints.

Since $g(z')$ in Theorem 2 is assumed to follow a linear form, and $L$ is a squared loss, Equation 2 can still be solved using linear regression. As a consequence, the Shapley values from game theory can be computed using weighted linear regression.2 Since LIME uses a simplified input mapping
that is equivalent to the approximation of the SHAP mapping given in Equation 12, this enables
regression-based, model-agnostic estimation of SHAP values. Jointly estimating all SHAP values
using regression provides better sample efficiency than the direct use of classical Shapley equations
(see Section 5).
The intuitive connection between linear regression and Shapley values is that Equation 8 is a difference
of means. Since the mean is also the best least squares point estimate for a set of data points, it is
natural to search for a weighting kernel that causes linear least squares regression to recapitulate
the Shapley values. This leads to a kernel that distinctly differs from previous heuristically chosen
kernels (Figure 2A).
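A minimal sketch of this regression-based estimation for small $M$ is given below. It is our illustration, not the reference implementation: all $2^M$ masks are enumerated, and the two infinite-weight points are handled with a large finite weight rather than by eliminating variables.

```python
# Kernel SHAP for small M via weighted least squares. fx maps a binary mask
# z' to f(h_x(z')). The all-zeros and all-ones masks get a large stand-in
# weight for the infinite Shapley kernel weight.
import itertools
import numpy as np
from math import comb

def kernel_shap(fx, M, big=1e9):
    masks = list(itertools.product([0, 1], repeat=M))
    Z = np.array(masks, dtype=float)
    y = np.array([fx(z) for z in masks])
    w = np.empty(len(masks))
    for t, z in enumerate(masks):
        s = sum(z)
        if s == 0 or s == M:
            w[t] = big                               # stand-in for infinity
        else:
            w[t] = (M - 1) / (comb(M, s) * s * (M - s))
    X = np.hstack([np.ones((len(masks), 1)), Z])     # intercept column = phi_0
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(sw * X, sw[:, 0] * y, rcond=None)
    return beta                                      # [phi_0, phi_1, ..., phi_M]

# Check against an exactly additive fx: the coefficients are recovered.
fx = lambda z: 1.0 + 2.0 * z[0] - 3.0 * z[1] + 0.5 * z[2]
print(kernel_shap(fx, 3).round(4))                   # ~[1.0, 2.0, -3.0, 0.5]
```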
4.2 Model-Specific Approximations
While Kernel SHAP improves the sample efficiency of model-agnostic estimations of SHAP values, by
restricting our attention to specific model types, we can develop faster model-specific approximation
methods.
Linear SHAP
For linear models, if we assume input feature independence (Equation 11), SHAP values can be
approximated directly from the model's weight coefficients.
Corollary 1 (Linear SHAP) Given a linear model $f(x) = \sum_{j=1}^{M} w_j x_j + b$: $\phi_0(f, x) = b$ and $\phi_i(f, x) = w_i(x_i - E[x_i])$.
This follows from Theorem 2 and Equation 11, and it has been previously noted by Štrumbelj and Kononenko [9].
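In code, the corollary reduces to a closed form. The sketch below uses synthetic data of our own making; note that the base value $E[f(x)] = b + w^\top E[x]$ reduces to $b$ when the features are centered, matching the corollary.

```python
# Linear SHAP: phi_i = w_i * (x_i - E[x_i]) for f(x) = w.x + b with
# independent features. E[x] is estimated from background data.
import numpy as np

rng = np.random.default_rng(2)
w, b = np.array([0.5, -2.0, 1.5]), 3.0
X_bg = rng.normal(size=(1000, 3))      # background data used to estimate E[x]
x = np.array([1.0, 0.0, -1.0])

mean = X_bg.mean(axis=0)
phi0 = b + w @ mean                    # base value E[f(x)]
phi = w * (x - mean)                   # per-feature SHAP values
print(phi0 + phi.sum(), w @ x + b)     # local accuracy: both equal f(x)
```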
Low-Order SHAP
Since linear regression using Theorem 2 has complexity $O(2^M + M^3)$, it is efficient for small values of $M$ if we choose an approximation of the conditional expectations (Equation 11 or 12).
2 During the preparation of this manuscript, we discovered this parallels an equivalent constrained quadratic minimization formulation of Shapley values proposed in econometrics [2].
Figure 2: (A) The Shapley kernel weighting is symmetric when all possible $z'$ vectors are ordered by cardinality; there are $2^{15}$ vectors in this example. This is distinctly different from previous heuristically chosen kernels. (B) Compositional models such as deep neural networks are comprised of many simple components. Given analytic solutions for the Shapley values of the components, fast approximations for the full model can be made using DeepLIFT's style of back-propagation.
Max SHAP
Using a permutation formulation of Shapley values, we can calculate the probability that each input
will increase the maximum value over every other input. Doing this on a sorted order of input values
lets us compute the Shapley values of a max function with $M$ inputs in $O(M^2)$ time instead of $O(M 2^M)$. See the Supplementary Material for the full algorithm.
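The paper's own $O(M^2)$ algorithm is in its supplement; as an illustration of why the max function admits fast exact Shapley values, the sketch below uses a well-known closed form for $v(S) = \max_{i \in S} x_i$ with $v(\varnothing) = 0$ and nonnegative inputs. These are assumptions of this sketch, not necessarily the paper's construction.

```python
# Exact Shapley values of the max game v(S) = max_{i in S} x_i, v(empty) = 0,
# for nonnegative inputs: sorting ascending, the k-th smallest input
# accumulates (x_(j) - x_(j-1)) / (number of players still "alive") over j <= k.
import numpy as np

def max_shapley(xs):
    xs = np.asarray(xs, dtype=float)
    order = np.argsort(xs)                  # ascending
    M = len(xs)
    phi = np.zeros(M)
    prev, acc = 0.0, 0.0
    for j, idx in enumerate(order):         # j = 0..M-1
        acc += (xs[idx] - prev) / (M - j)   # M - j players remain at this level
        phi[idx] = acc
        prev = xs[idx]
    return phi

phi = max_shapley([3.0, 1.0, 2.0])
print(phi, phi.sum())                       # attributions sum to max = 3.0
```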
Deep SHAP (DeepLIFT + Shapley values)
While Kernel SHAP can be used on any model, including deep models, it is natural to ask whether
there is a way to leverage extra knowledge about the compositional nature of deep networks to improve
computational performance. We find an answer to this question through a previously unappreciated
connection between Shapley values and DeepLIFT [8]. If we interpret the reference value in Equation
3 as representing E[x] in Equation 12, then DeepLIFT approximates SHAP values assuming that
the input features are independent of one another and the deep model is linear. DeepLIFT uses a
linear composition rule, which is equivalent to linearizing the non-linear components of a neural
network. Its back-propagation rules defining how each component is linearized are intuitive but were
heuristically chosen. Since DeepLIFT is an additive feature attribution method that satisfies local
accuracy and missingness, we know that Shapley values represent the only attribution values that
satisfy consistency. This motivates our adapting DeepLIFT to become a compositional approximation
of SHAP values, leading to Deep SHAP.
Deep SHAP combines SHAP values computed for smaller components of the network into SHAP
values for the whole network. It does so by recursively passing DeepLIFT's multipliers, now defined in terms of SHAP values, backwards through the network as in Figure 2B:
$$m_{x_j f_3} = \frac{\phi_i(f_3, x)}{x_j - E[x_j]} \qquad (13)$$
$$m_{y_i f_j} = \frac{\phi_i(f_j, y)}{y_i - E[y_i]} \quad \forall j \in \{1, 2\} \qquad (14)$$
$$m_{y_i f_3} = \sum_{j=1}^{2} m_{y_i f_j}\, m_{x_j f_3} \qquad \text{chain rule} \qquad (15)$$
$$\phi_i(f_3, y) \approx m_{y_i f_3}(y_i - E[y_i]) \qquad \text{linear approximation} \qquad (16)$$
Since the SHAP values for the simple network components can be efficiently solved analytically
if they are linear, max pooling, or an activation function with just one input, this composition
rule enables a fast approximation of values for the whole model. Deep SHAP avoids the need to
heuristically choose ways to linearize components. Instead, it derives an effective linearization from
the SHAP values computed for each component. The max function offers one example where this
leads to improved attributions (see Section 5).
Figure 3: Comparison of three additive feature attribution methods: Kernel SHAP (using a debiased
lasso), Shapley sampling values, and LIME (using the open source implementation). Feature
importance estimates are shown for one feature in two models as the number of evaluations of the
original model function increases. The 10th and 90th percentiles are shown for 200 replicate estimates
at each sample size. (A) A decision tree model using all 10 input features is explained for a single
input. (B) A decision tree using only 3 of 100 input features is explained for a single input.
5 Computational and User Study Experiments
We evaluated the benefits of SHAP values using the Kernel SHAP and Deep SHAP approximation
methods. First, we compared the computational efficiency and accuracy of Kernel SHAP vs. LIME
and Shapley sampling values. Second, we designed user studies to compare SHAP values with
alternative feature importance allocations represented by DeepLIFT and LIME. As might be expected,
SHAP values prove more consistent with human intuition than other methods that fail to meet
Properties 1-3 (Section 2). Finally, we use MNIST digit image classification to compare SHAP with
DeepLIFT and LIME.
5.1 Computational Efficiency
Theorem 2 connects Shapley values from game theory with weighted linear regression. Kernel SHAP
uses this connection to compute feature importance. This leads to more accurate estimates with fewer
evaluations of the original model than previous sampling-based estimates of Equation 8, particularly
when regularization is added to the linear model (Figure 3). Comparing Shapley sampling, SHAP, and
LIME on both dense and sparse decision tree models illustrates both the improved sample efficiency
of Kernel SHAP and that values from LIME can differ significantly from SHAP values that satisfy
local accuracy and consistency.
5.2 Consistency with Human Intuition
Theorem 1 provides a strong incentive for all additive feature attribution methods to use SHAP
values. Both LIME and DeepLIFT, as originally demonstrated, compute different feature importance
values. To validate the importance of Theorem 1, we compared explanations from LIME, DeepLIFT,
and SHAP with user explanations of simple models (using Amazon Mechanical Turk). Our testing
assumes that good model explanations should be consistent with explanations from humans who
understand that model.
We compared LIME, DeepLIFT, and SHAP with human explanations for two settings. The first
setting used a sickness score that was higher when only one of two symptoms was present (Figure 4A).
The second used a max allocation problem to which DeepLIFT can be applied. Participants were told
a short story about how three men made money based on the maximum score any of them achieved
(Figure 4B). In both cases, participants were asked to assign credit for the output (the sickness score
or money won) among the inputs (i.e., symptoms or players). We found a much stronger agreement
between human explanations and SHAP than with other methods. SHAP?s improved performance for
max functions addresses the open problem of max pooling functions in DeepLIFT [7].
5.3 Explaining Class Differences
As discussed in Section 4.2, DeepLIFT's compositional approach suggests a compositional approximation of SHAP values (Deep SHAP). These insights, in turn, improve DeepLIFT, and a new version
Figure 4: Human feature impact estimates are shown as the most common explanation given among
30 (A) and 52 (B) random individuals, respectively. (A) Feature attributions for a model output value
(sickness score) of 2. The model output is 2 when fever and cough are both present, 5 when only
one of fever or cough is present, and 0 otherwise. (B) Attributions of profit among three men, given
according to the maximum number of questions any man got right. The first man got 5 questions
right, the second 4 questions, and the third got none right, so the profit is $5.
Orig. DeepLift
New DeepLift
SHAP
LIME
Explain 8 Explain 3
Masked
(B)
60
Change in log-odds
Input
(A)
50
40
30
20
Orig. DeepLift
New DeepLift
SHAP
LIME
Figure 5: Explaining the output of a convolutional network trained on the MNIST digit dataset. Orig.
DeepLIFT has no explicit Shapley approximations, while New DeepLIFT seeks to better approximate
Shapley values. (A) Red areas increase the probability of that class, and blue areas decrease the
probability. Masked removes pixels in order to go from 8 to 3. (B) The change in log odds when
masking over 20 random images supports the use of better estimates of SHAP values.
includes updates to better match Shapley values [7]. Figure 5 extends DeepLIFT?s convolutional
network example to highlight the increased performance of estimates that are closer to SHAP values.
The pre-trained model and Figure 5 example are the same as those used in [7], with inputs normalized
between 0 and 1. Two convolution layers and 2 dense layers are followed by a 10-way softmax
output layer. Both DeepLIFT versions explain a normalized version of the linear layer, while SHAP
(computed using Kernel SHAP) and LIME explain the model's output. SHAP and LIME were both
run with 50k samples (Supplementary Figure 1); to improve performance, LIME was modified to use
single pixel segmentation over the digit pixels. To match [7], we masked 20% of the pixels chosen to
switch the predicted class from 8 to 3 according to the feature attribution given by each method.
6 Conclusion
The growing tension between the accuracy and interpretability of model predictions has motivated
the development of methods that help users interpret predictions. The SHAP framework identifies
the class of additive feature importance methods (which includes six previous methods) and shows
there is a unique solution in this class that adheres to desirable properties. The thread of unity that
SHAP weaves through the literature is an encouraging sign that common principles about model
interpretation can inform the development of future methods.
We presented several different estimation methods for SHAP values, along with proofs and experiments showing that these values are desirable. Promising next steps involve developing faster
model-type-specific estimation methods that make fewer assumptions, integrating work on estimating
interaction effects from game theory, and defining new explanation model classes.
Acknowledgements
This work was supported by a National Science Foundation (NSF) DBI-135589, NSF CAREER
DBI-155230, American Cancer Society 127332-RSG-15-097-01-TBG, National Institute of Health
(NIH) AG049196, and NSF Graduate Research Fellowship. We would like to thank Marco Ribeiro,
Erik ?trumbelj, Avanti Shrikumar, Yair Zick, the Lee Lab, and the NIPS reviewers for feedback that
has significantly improved this work.
References
[1] Sebastian Bach et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation". In: PLoS One 10.7 (2015), e0130140.
[2] A. Charnes et al. "Extremal principle solutions of games in characteristic function form: core, Chebychev and Shapley value generalizations". In: Econometrics of Planning and Efficiency 11 (1988), pp. 123–133.
[3] Anupam Datta, Shayak Sen, and Yair Zick. "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems". In: Security and Privacy (SP), 2016 IEEE Symposium on. IEEE. 2016, pp. 598–617.
[4] Stan Lipovetsky and Michael Conklin. "Analysis of regression in game theory approach". In: Applied Stochastic Models in Business and Industry 17.4 (2001), pp. 319–330.
[5] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier". In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM. 2016, pp. 1135–1144.
[6] Lloyd S. Shapley. "A value for n-person games". In: Contributions to the Theory of Games 2.28 (1953), pp. 307–317.
[7] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. "Learning Important Features Through Propagating Activation Differences". In: arXiv preprint arXiv:1704.02685 (2017).
[8] Avanti Shrikumar et al. "Not Just a Black Box: Learning Important Features Through Propagating Activation Differences". In: arXiv preprint arXiv:1605.01713 (2016).
[9] Erik Štrumbelj and Igor Kononenko. "Explaining prediction models and individual predictions with feature contributions". In: Knowledge and Information Systems 41.3 (2014), pp. 647–665.
[10] H. Peyton Young. "Monotonic solutions of cooperative games". In: International Journal of Game Theory 14.2 (1985), pp. 65–72.
6,703 | 7,063 | Stochastic Approximation
for Canonical Correlation Analysis
Raman Arora
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21204
[email protected]
Teodor V. Marinov
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21204
[email protected]
Poorya Mianjy
Dept. of Computer Science
Johns Hopkins University
Baltimore, MD 21204
[email protected]
Nathan Srebro
TTI-Chicago
Chicago, Illinois 60637
[email protected]
Abstract
We propose novel first-order stochastic approximation algorithms for canonical
correlation analysis (CCA). Algorithms presented are instances of inexact matrix
stochastic gradient (MSG) and inexact matrix exponentiated gradient (MEG), and
achieve $\epsilon$-suboptimality in the population objective in $\mathrm{poly}(\frac{1}{\epsilon})$ iterations. We also
consider practical variants of the proposed algorithms and compare them with other
methods for CCA both theoretically and empirically.
1 Introduction
Canonical Correlation Analysis (CCA) [11] is a ubiquitous statistical technique for finding maximally
correlated linear components of two sets of random variables. CCA can be posed as the following
stochastic optimization problem: given a pair of random vectors $(x, y) \in \mathbb{R}^{d_x} \times \mathbb{R}^{d_y}$, with some (unknown) joint distribution $\mathcal{D}$, find the $k$-dimensional subspaces where the projections of $x$ and $y$ are maximally correlated, i.e. find matrices $\tilde{U} \in \mathbb{R}^{d_x \times k}$ and $\tilde{V} \in \mathbb{R}^{d_y \times k}$ that
$$\text{maximize } \mathbb{E}_{x,y}[x^\top \tilde{U} \tilde{V}^\top y] \quad \text{subject to} \quad \tilde{U}^\top \mathbb{E}_x[xx^\top]\tilde{U} = I_k, \;\; \tilde{V}^\top \mathbb{E}_y[yy^\top]\tilde{V} = I_k. \qquad (1)$$
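For reference, the empirical version of Problem (1) has a standard whitening-plus-SVD solution. The sketch below is illustrative (it is not the paper's algorithm), and the small ridge term reflects the assumption $C_x \succeq r_x I$, $C_y \succeq r_y I$ made later in this section:

```python
# Empirical CCA: whiten each view, take the SVD of the whitened cross-
# covariance, and map the singular vectors back. Columns of X and Y are
# paired, centered samples.
import numpy as np

def cca(X, Y, k, r=1e-3):
    n = X.shape[1]
    Cx = X @ X.T / n + r * np.eye(X.shape[0])
    Cy = Y @ Y.T / n + r * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n
    Wx = np.linalg.inv(np.linalg.cholesky(Cx))    # acts as Cx^{-1/2} up to rotation
    Wy = np.linalg.inv(np.linalg.cholesky(Cy))
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy.T)
    # U_tilde = Wx^T U_k satisfies U_tilde^T Cx U_tilde = I_k (same for V).
    return Wx.T @ U[:, :k], Wy.T @ Vt[:k].T, s[:k]

rng = np.random.default_rng(0)
Z = rng.normal(size=(2, 500))                      # shared latent signal
X = np.vstack([Z, rng.normal(size=(3, 500))])      # view 1: 5 dims
Y = np.vstack([Z[::-1], rng.normal(size=(2, 500))])  # view 2: 4 dims
U, V, corr = cca(X - X.mean(1, keepdims=True), Y - Y.mean(1, keepdims=True), k=2)
print(corr)                                        # top canonical correlations ~ 1
```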
CCA-based techniques have recently met with success at unsupervised representation learning where
multiple "views" of data are used to learn improved representations for each of the views [3, 5, 13, 23]. The different views often contain complementary information, and CCA-based "multiview"
representation learning methods can take advantage of this information to learn features that are
useful for understanding the structure of the data and that are beneficial for downstream tasks.
Unsupervised learning techniques leverage unlabeled data which is often plentiful. Accordingly,
in this paper, we are interested in first-order stochastic Approximation (SA) algorithms for solving
Problem (1) that can easily scale to very large datasets. A stochastic approximation algorithm is an
iterative algorithm, where in each iteration a single sample from the population is used to perform an
update, as in stochastic gradient descent (SGD), the classic SA algorithm.
There are several computational challenges associated with solving Problem (1). A first challenge
stems from the fact that Problem (1) is non-convex. Nevertheless, akin to related spectral methods
such as principal component analysis (PCA), the solution to CCA can be given in terms of a
generalized eigenvalue problem. In other words, despite being non-convex, CCA admits a tractable
algorithm. In particular, numerical techniques based on power iteration method and its variants can
be applied to these problems to find globally optimal solutions. Much recent work, therefore, has
focused on analyzing optimization error for power iteration method for the generalized eigenvalue
problem [1, 8, 24]. However, these analyses are on numerical (empirical) optimization error for
finding left and right singular vectors of a fixed given matrix based on empirical estimates of the covariance matrices, and not on the population $\epsilon$-suboptimality (i.e., a bound in terms of the population objective) of Problem (1), which is the focus here.
The second challenge, which is our main concern here, presents itself when designing first-order stochastic
approximation algorithms for CCA. The main difficulty here, compared to PCA, and most other
machine learning problems, is that the constraints also involve stochastic quantities that depend on the
unknown distribution D. Put differently, the CCA objective does not decompose over samples. To see
this, consider the case for k = 1. The CCA problem
as maximizing
pbe posed equivalently
p
? > then
? can
T
T
>
>
>
>
the correlation objective ?(u x, v y) = Ex,y u xy v /( Ex [u xx u] Ey [v yy> v]). This
yields an unconstrained optimization problem. However, the objective is no longer an expectation,
but is instead a ratio of expectations. If we were to solve the empirical version of this problem,
it is easy to check that the objective ties all the samples together. This departs significantly from the typical stochastic approximation scenario. Crucially, with a single sample, it is not possible to get an unbiased estimate of the gradient of the objective $\rho(u^\top x, v^\top y)$. Therefore, we consider a first-order oracle that provides inexact estimates of the gradient with a norm bound on the additive noise, and
focus on inexact proximal gradient descent algorithms for CCA.
Finally, it can be shown that the CCA problem given in Problem (1) is ill-posed if the population auto-covariance matrices $\mathbb{E}_x[xx^\top]$ or $\mathbb{E}_y[yy^\top]$ are ill-conditioned. This observation follows from the fact that if there exists a direction in the kernel of $\mathbb{E}_x[xx^\top]$ or $\mathbb{E}_y[yy^\top]$ in which $x$ and $y$
exhibit non-zero covariance, then the objective of Problem (1) is unbounded. We would like to avoid
recovering such directions of spurious correlation and therefore assume that the smallest eigenvalues
of the auto-covariance matrices and their empirical estimates are bounded below by some positive
constant. Formally, we assume that $C_x \succeq r_x I$ and $C_y \succeq r_y I$. This is the typical assumption made in
analyzing CCA [1, 7, 8].
1.1 Notation
Scalars, vectors and matrices are represented by normal, Roman and capital Roman letters respectively, e.g. $x$, $\mathrm{x}$, and $X$. $I_k$ denotes the identity matrix of size $k \times k$, where we drop the subscript whenever the size is clear from the context. The $\ell_2$-norm of a vector $x$ is denoted by $\|x\|$. For any matrix $X$, the spectral norm, nuclear norm, and Frobenius norm are represented by $\|X\|_2$, $\|X\|_*$, and $\|X\|_F$ respectively. The trace of a square matrix $X$ is denoted by $\mathrm{Tr}(X)$. Given two matrices $X \in \mathbb{R}^{k \times d}$, $Y \in \mathbb{R}^{k \times d}$, the standard inner product between the two is given as $\langle X, Y\rangle = \mathrm{Tr}(X^\top Y)$; we use the two notations interchangeably. For symmetric matrices $X$ and $Y$, we say $X \preceq Y$ if $Y - X$ is positive semi-definite (PSD). Let $x \in \mathbb{R}^{d_x}$ and $y \in \mathbb{R}^{d_y}$ denote two sets of centered random variables jointly distributed as $\mathcal{D}$ with corresponding auto-covariance matrices $C_x = \mathbb{E}_x[xx^\top]$, $C_y = \mathbb{E}_y[yy^\top]$, and cross-covariance matrix $C_{xy} = \mathbb{E}_{(x,y)}[xy^\top]$, and define $d := \max\{d_x, d_y\}$. Finally, $X \in \mathbb{R}^{d_x \times n}$ and $Y \in \mathbb{R}^{d_y \times n}$ denote data matrices with $n$ corresponding samples from view 1 and view 2, respectively.
1.2 Problem Formulation
Given paired samples $(x_1, y_1), \dots, (x_T, y_T)$, drawn i.i.d. from $\mathcal{D}$, the goal is to find a maximally correlated subspace of $\mathcal{D}$, i.e. in terms of the population objective. A simple change of variables in Problem (1), with $U = C_x^{1/2}\widetilde{U}$ and $V = C_y^{1/2}\widetilde{V}$, yields the following equivalent problem:
\[
\text{maximize } \mathrm{Tr}\big(U^\top C_x^{-1/2} C_{xy}\, C_y^{-1/2} V\big) \quad \text{s.t.} \quad U^\top U = I, \;\; V^\top V = I. \tag{2}
\]
To ensure that Problem (2) is well-posed, we assume that $r := \min\{r_x, r_y\} > 0$, where $r_x = \lambda_{\min}(C_x)$ and $r_y = \lambda_{\min}(C_y)$ are the smallest eigenvalues of the population auto-covariance matrices. Furthermore, we assume that with probability one, for $(x, y) \sim \mathcal{D}$, we have $\max\{\|x\|^2, \|y\|^2\} \le B$. Let $\Phi \in \mathbb{R}^{d_x \times k}$ and $\Psi \in \mathbb{R}^{d_y \times k}$ denote the top-$k$ left and right singular vectors, respectively, of the population cross-covariance matrix of the whitened views $T := C_x^{-1/2} C_{xy}\, C_y^{-1/2}$. It is easy to check that the optimum of Problem (1) is achieved at $\widetilde{U}^* = C_x^{-1/2}\Phi$, $\widetilde{V}^* = C_y^{-1/2}\Psi$. Therefore, a natural
approach, given a training dataset, is to estimate empirical auto-covariance and cross-covariance matrices to compute $\widehat{T}$, an empirical estimate of $T$; matrices $U^*$ and $V^*$ can then be estimated using the top-$k$ left and right singular vectors of $\widehat{T}$. This approach is referred to as sample average approximation (SAA) or empirical risk minimization (ERM).
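To make the SAA baseline concrete, here is a minimal numpy sketch of this whitening-plus-SVD procedure; the ridge term `reg` is an assumption added only to keep the empirical covariances invertible, mirroring the regularization used in the experiments of Section 4.

```python
import numpy as np

def cca_saa(X, Y, k, reg=0.1):
    """SAA/ERM for CCA: whiten each view with the empirical auto-covariances,
    then take the top-k SVD of the whitened cross-covariance T_hat.
    X: (dx, n) and Y: (dy, n) are centered data matrices."""
    n = X.shape[1]
    Cx = X @ X.T / n + reg * np.eye(X.shape[0])
    Cy = Y @ Y.T / n + reg * np.eye(Y.shape[0])
    Cxy = X @ Y.T / n

    def inv_sqrt(C):
        s, V = np.linalg.eigh(C)
        return (V / np.sqrt(s)) @ V.T   # V diag(1/sqrt(s)) V^T

    Wx, Wy = inv_sqrt(Cx), inv_sqrt(Cy)
    T_hat = Wx @ Cxy @ Wy
    Phi, sig, Psi_t = np.linalg.svd(T_hat)
    # map back to the original coordinates: U = Cx^{-1/2} Phi_k, V = Cy^{-1/2} Psi_k
    return Wx @ Phi[:, :k], Wy @ Psi_t[:k, :].T, sig[:k]
```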
In this paper, we consider the following equivalent re-parameterization of Problem (2) given by the variable substitution $M = UV^\top$, also referred to as lifting. Find $M \in \mathbb{R}^{d_x \times d_y}$ that
\[
\text{maximize } \langle M,\, C_x^{-1/2} C_{xy}\, C_y^{-1/2}\rangle \;\; \text{s.t.} \;\; \sigma_i(M) \in \{0, 1\}, \; i = 1, \dots, \min\{d_x, d_y\}, \;\; \mathrm{rank}(M) \le k. \tag{3}
\]
We are interested in designing SA algorithms that, for any bounded distribution $\mathcal{D}$ with the minimum eigenvalue of the auto-covariance matrices bounded below by $r$, are guaranteed to find an $\epsilon$-suboptimal solution on the population objective (3), from which we can extract a good solution for Problem (1).
1.3 Related Work
There has been a flurry of recent work on scalable approaches to the empirical CCA problem,
i.e. methods for numerical optimization of the empirical CCA objective on a fixed data set [1, 8, 14,
15, 24]. These are typically batch approaches which use the entire data set at each iteration, either for
performing a power iteration [1, 8] or for optimizing the alternative empirical objective [14, 15, 24]:
\[
\text{minimize } \frac{1}{2n}\big\|\widetilde{U}^\top X - \widetilde{V}^\top Y\big\|_F^2 + \lambda_x \|\widetilde{U}\|_F^2 + \lambda_y \|\widetilde{V}\|_F^2 \quad \text{s.t.} \quad \widetilde{U}^\top C_{x,n}\widetilde{U} = I, \;\; \widetilde{V}^\top C_{y,n}\widetilde{V} = I, \tag{4}
\]
where $C_{x,n}$ and $C_{y,n}$ are the empirical estimates of the covariance matrices for the $n$ samples stacked in the matrices $X \in \mathbb{R}^{d_x \times n}$ and $Y \in \mathbb{R}^{d_y \times n}$, using alternating least squares [14], projected gradient descent (AppGrad, [15]) or alternating SVRG combined with shift-and-invert pre-conditioning [24].
However, all the works above focus only on the empirical problem, and can all be seen as instances of the SAA (ERM) approach to the stochastic optimization (learning) problem (1). In particular, the analyses in these works bound suboptimality on the training objective, not the population objective (1).
The only relevant work we are aware of that studies algorithms for CCA as a population problem is a
parallel work by [7]. However, there are several key differences. First, the objective considered in [7]
is different from ours. The focus in [7] is on finding a solution U, V that is very similar (has high
alignment with) the optimal population solution U? , V? . In order for this to be possible, [7] must rely
on an "eigengap" between the singular values of the cross-correlation matrix Cxy . In contrast, since
we are only concerned with finding a solution that is good in terms of the population objective (2),
we need not, and do not, depend on such an eigengap. If there is no eigengap in the cross-correlation
matrix, the population optimal solution is not well-defined, but that is fine for us ? we are happy to
return any optimal (or nearly optimal) solution.
Furthermore, given such an eigengap, the emphasis in [7] is on the guaranteed overall runtime of their
method. Their core algorithm is very efficient in terms of runtime, but is not a streaming algorithm
and cannot be viewed as an SA algorithm. They do also provide a streaming version, which is runtime
and memory efficient, but is still not a ?natural? SA algorithm, in that it does not work by making
a small update to the solution at each iteration. In contrast, here we present a more ?natural? SA
algorithm and put more emphasis on its iteration complexity, i.e. the number of samples processed.
We do provide polynomial runtime guarantees, but rely on a heuristic capping in order to achieve
good runtime performance in practice.
Finally, [7] only consider obtaining the top correlated direction ($k = 1$) and it is not clear how to extend their approach to Problem (1) of finding the top $k > 1$ correlated directions. Our methods handle the general problem, with $k > 1$, naturally, and all our guarantees are valid for any number of desired directions $k$.
1.4 Contributions
The goal in this paper is to directly optimize the CCA "population objective" based on i.i.d. draws from the population rather than capturing the sample, i.e. the training objective. This view justifies
and favors stochastic approximation approaches that are far from optimal on the sample but are
essentially as good as the sample average approximation approach on the population. Such a view
has been advocated in supervised machine learning [6, 18]; here, we carry over the same view to the
rich world of unsupervised learning. The main contributions of the paper are as follows.
- We give a convex relaxation of the CCA optimization problem. We present two stochastic approximation algorithms for solving the resulting problem. These algorithms work in a streaming setting, i.e. they process one sample at a time, requiring only a single pass through the data, and can easily scale to large datasets.
- The proposed algorithms are instances of inexact stochastic mirror descent with the choice of potential function being the Frobenius norm and the von Neumann entropy, respectively. Prior work on inexact proximal gradient descent suggests a lower bound on the size of the noise required to guarantee convergence for inexact updates [16]. While that condition is violated here for the CCA problem, we give a tighter analysis of our algorithms with noisy gradients establishing sub-linear convergence rates.
- We give precise iteration complexity bounds for our algorithms, i.e. we give upper bounds on the iterations needed to guarantee a user-specified $\epsilon$-suboptimality (w.r.t. the population) for CCA. These bounds do not depend on the eigengap in the cross-correlation matrix. To the best of our knowledge this is the first such characterization of CCA in terms of generalization.
- We show empirically that the proposed algorithms outperform existing state-of-the-art methods for CCA on a real dataset. We make our implementation of the proposed algorithms and existing competing techniques available online.¹
2 Matrix Stochastic Gradient for CCA (MSG-CCA)
Problem (3) is a non-convex optimization problem, however, it admits a simple convex relaxation.
Taking the convex hull of the constraint set in Problem (3) gives the following convex relaxation:
\[
\text{maximize } \langle M,\, C_x^{-1/2} C_{xy}\, C_y^{-1/2}\rangle \quad \text{s.t.} \quad \|M\|_2 \le 1, \;\; \|M\|_* \le k. \tag{5}
\]
While our updates are designed for Problem (5), our algorithm returns a rank-$k$ solution, through a simple rounding procedure ([27, Algorithm 4]; see more details below), which has the same objective in expectation. This allows us to guarantee $\epsilon$-suboptimality of the output of the algorithm on the original non-convex Problem (3), and equivalently Problem (2).
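The rounding step can be understood as dependent randomized rounding of the singular values of the averaged iterate. The sketch below illustrates the idea under the simplifying assumption that the values lie in $[0,1]$ and sum to an integer $k$; it is not the exact routine of [27], and the helper name `round_probabilities` is ours.

```python
import numpy as np

def round_probabilities(s, rng=np.random.default_rng()):
    """Round s in [0,1]^d with integer sum k to a 0/1 vector with the same
    sum and E[output] = s, by moving mass between pairs of fractional
    entries; a linear objective <M, g> is then preserved in expectation."""
    s = np.asarray(s, dtype=float).copy()
    while True:
        frac = np.flatnonzero((s > 1e-12) & (s < 1 - 1e-12))
        if len(frac) < 2:
            return np.round(s)
        i, j = frac[0], frac[1]
        a = min(1 - s[i], s[j])   # how much mass can move from j to i
        b = min(s[i], 1 - s[j])   # how much mass can move from i to j
        if rng.random() < b / (a + b):
            s[i] += a; s[j] -= a  # each branch saturates at least one entry
        else:
            s[i] -= b; s[j] += b
```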
Similar relaxations have been considered previously to design stochastic approximation (SA) algorithms for principal component analysis (PCA) [2] and partial least squares (PLS) [4]. These SA algorithms are instances of stochastic gradient descent, a popular choice for convex learning problems. However, designing similar updates for the CCA problem is challenging since the gradient
of the CCA objective (see Problem (5)) w.r.t. $M$ is $g := C_x^{-1/2} C_{xy}\, C_y^{-1/2}$, and it is not at all clear how one can design an unbiased estimator, $g_t$, of the gradient $g$ unless one knows the marginal distributions of $x$ and $y$. Therefore, we consider an instance of the inexact proximal gradient method [16] which requires access to a first-order oracle with noisy estimates, $\partial_t$, of $g_t$. We show that an oracle with a bound on $\mathbb{E}[\sum_{t=1}^{T} \|g_t - \partial_t\|]$ of $O(\sqrt{T})$ ensures convergence of the proximal gradient method. Furthermore, we propose a first-order oracle with the above property which instantiates the inexact gradient as
\[
\partial_t := W_{x,t}\, x_t y_t^\top\, W_{y,t} \approx g_t, \tag{6}
\]
where $W_{x,t}, W_{y,t}$ are empirical estimates of the whitening transformations based on the training data seen until time $t$. This leads to the following stochastic inexact gradient update:
\[
M_{t+1} = P_F(M_t + \eta_t\, \partial_t), \tag{7}
\]
where PF is the projection operator onto the constraint set of Problem (5).
Algorithm 1 provides the pseudocode for the proposed method, which we term inexact matrix stochastic gradient method for CCA (MSG-CCA). At each iteration, we receive a new sample $(x_t, y_t)$ and update the empirical estimates of the whitening transformations, which define the inexact gradient $\partial_t$. This is followed by a gradient update with step size $\eta$, and projection onto the set of constraints of Problem (5) with respect to the Frobenius norm through the operator $P_F(\cdot)$ [2]. After $T$ iterations, the algorithm returns a rank-$k$ matrix after a simple rounding procedure [27].

1 https://www.dropbox.com/sh/dkz4zgkevfyzif3/AABK9JlUvIUYtHvLPCBXLlpha?dl=0
Algorithm 1 Matrix Stochastic Gradient for CCA (MSG-CCA)
Input: training data $\{(x_t, y_t)\}_{t=1}^{T}$, step size $\eta$, auxiliary training data $\{(x'_i, y'_i)\}_{i=1}^{\tau}$
Output: $\widehat{M}$
1: Initialize: $M_1 \leftarrow 0$, $C_{x,0} \leftarrow \frac{1}{\tau}\sum_{i=1}^{\tau} x'_i x_i'^\top$, $C_{y,0} \leftarrow \frac{1}{\tau}\sum_{i=1}^{\tau} y'_i y_i'^\top$
2: for $t = 1, \dots, T$ do
3:   $C_{x,t} \leftarrow \frac{t+\tau-1}{t+\tau}\, C_{x,t-1} + \frac{1}{t+\tau}\, x_t x_t^\top$, $\quad W_{x,t} \leftarrow C_{x,t}^{-1/2}$
4:   $C_{y,t} \leftarrow \frac{t+\tau-1}{t+\tau}\, C_{y,t-1} + \frac{1}{t+\tau}\, y_t y_t^\top$, $\quad W_{y,t} \leftarrow C_{y,t}^{-1/2}$
5:   $\partial_t \leftarrow W_{x,t}\, x_t y_t^\top\, W_{y,t}$
6:   $M_{t+1} \leftarrow P_F(M_t + \eta\, \partial_t)$   % projection given in [2]
7: end for
8: $\bar{M} = \frac{1}{T}\sum_{t=1}^{T} M_t$
9: $\widehat{M} = \mathrm{rounding}(\bar{M})$   % Algorithm 2 in [27]
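A compact numpy rendering of one pass of the loop above is sketched below; the singular-value projection onto the constraint set of Problem (5) is implemented here by a bisection-based shift-and-clip in the spirit of [2], not the exact routine of that paper.

```python
import numpy as np

def project_sv(s, k, iters=60):
    """Project singular values onto {0 <= s_i <= 1, sum_i s_i <= k}."""
    if np.clip(s, 0, 1).sum() <= k:
        return np.clip(s, 0, 1)
    lo, hi = 0.0, s.max()   # find shift c with sum clip(s - c, 0, 1) = k
    for _ in range(iters):
        c = (lo + hi) / 2
        lo, hi = (c, hi) if np.clip(s - c, 0, 1).sum() > k else (lo, c)
    return np.clip(s - c, 0, 1)

def msg_cca_step(M, x, y, Cx, Cy, t, eta, k, tau):
    """One iteration of Algorithm 1 (lines 3-6)."""
    w = 1.0 / (t + tau)
    Cx = (1 - w) * Cx + w * np.outer(x, x)
    Cy = (1 - w) * Cy + w * np.outer(y, y)
    inv_sqrt = lambda C: (lambda s, V: (V / np.sqrt(s)) @ V.T)(*np.linalg.eigh(C))
    grad = inv_sqrt(Cx) @ np.outer(x, y) @ inv_sqrt(Cy)          # eqn (6)
    U, s, Vt = np.linalg.svd(M + eta * grad, full_matrices=False)
    return (U * project_sv(s, k)) @ Vt, Cx, Cy                    # eqn (7)
```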
We denote the empirical estimates of the auto-covariance matrices based on the first $t$ samples by $C_{x,t}$ and $C_{y,t}$. Our analysis of MSG-CCA follows a two-step procedure. First, we show that the empirical estimates of the whitening transform matrices, i.e. $W_{x,t} := C_{x,t}^{-1/2}$, $W_{y,t} := C_{y,t}^{-1/2}$, guarantee that the expected error in the "inexact" estimate, $\partial_t$, converges to zero as $O(1/\sqrt{t})$. Next, we show that the resulting noisy stochastic gradient method converges to the optimum as $O(1/\sqrt{T})$. In what follows, we will denote the true whitening transforms by $W_x := C_x^{-1/2}$ and $W_y := C_y^{-1/2}$.
Since Algorithm 1 requires inverting empirical auto-covariance matrices, we need to ensure that the smallest eigenvalues of $C_{x,t}$ and $C_{y,t}$ are bounded away from zero. Our first technical result shows that this happens with high probability for all iterates.
Lemma 2.1. With probability $1 - \delta$ with respect to training data drawn i.i.d. from $\mathcal{D}$, it holds uniformly for all $t$ that $\lambda_{\min}(C_{x,t}) \ge \frac{r_x}{2}$ and $\lambda_{\min}(C_{y,t}) \ge \frac{r_y}{2}$ whenever
\[
\tau \ge \max\left\{ \Big(\log\tfrac{1}{1-c_x}\Big)^{-1} \log\Big(\tfrac{2d_x}{\delta}\Big)\log(2d_x),\;\; \Big(\log\tfrac{1}{1-c_y}\Big)^{-1} \log\Big(\tfrac{2d_y}{\delta}\Big)\log(2d_y) \right\}.
\]
Here $c_x = \frac{r_x^2/3}{6B^2 + B r_x}$ and $c_y = \frac{r_y^2/3}{6B^2 + B r_y}$.
We denote by $\mathcal{A}_t$ the event that for all $j = 1, \dots, t-1$ the empirical auto-covariance matrices $C_{x,j}$ and $C_{y,j}$ have their smallest eigenvalues bounded from below by $\frac{r_x}{2}$ and $\frac{r_y}{2}$, respectively. Lemma 2.1 above guarantees that this event occurs with probability at least $1 - \delta$, as long as there are $\tau = \Omega\big(\frac{B^2}{r^2}\log\big(\frac{2d}{\log(1/(1-\delta))}\big)\big)$ samples in the auxiliary dataset.
Lemma 2.2. Assume that the event $\mathcal{A}_t$ occurs, and that with probability one, for $(x, y) \sim \mathcal{D}$, we have $\max\{\|x\|^2, \|y\|^2\} \le B$. Then, for $\beta := \frac{8B^2\sqrt{2\log(d)}}{r^2}$, the following holds for all $t$:
\[
\mathbb{E}_{\mathcal{D}}\big[\|g_t - \partial_t\|_2 \,\big|\, \mathcal{A}_t\big] \le \frac{\beta}{\sqrt{t}}.
\]
The result above bounds the size of the expected noise in the estimate of the inexact gradient. Not
surprisingly, the error decays as our estimates of the whitening transformation improve with more
data. Moreover, the rate at which the error decreases is sufficient to bound the suboptimality of the
MSG-CCA algorithm even with noisy biased stochastic gradients.
Theorem 2.3. After $T$ iterations of MSG-CCA (Algorithm 1) with step size $\eta = \frac{\sqrt{2k}}{G\sqrt{T}}$, auxiliary sample of size $\tau = \Omega\big(\frac{B^2}{r^2}\log(2d\sqrt{T})\big)$, and initializing $M_1 = 0$, the following holds:
\[
\langle M^*,\, C_x^{-1/2} C_{xy}\, C_y^{-1/2}\rangle - \mathbb{E}\big[\langle \widehat{M},\, C_x^{-1/2} C_{xy}\, C_y^{-1/2}\rangle\big] \le \frac{\sqrt{2k}\,G + 2k\beta + kB/r}{\sqrt{T}}, \tag{8}
\]
where the expectation is with respect to the i.i.d. samples and rounding, $\beta$ is as defined in Lemma 2.2, $\widehat{M}$ is the rank-$k$ output of MSG-CCA, $M^*$ is the optimum of (3), and $G = \frac{2B}{\sqrt{r_x r_y}}$.
While Theorem 2.3 gives a bound on the objective of Problem (3), it implies a bound on the original CCA objective of Problem (1). In particular, given a rank-$k$ factorization of $\widehat{M} := UV^\top$, such that $U^\top U = I_k$ and $V^\top V = I_k$, we construct
\[
\widehat{U} = C_{x,T}^{-1/2}\, U, \qquad \widehat{V} := C_{y,T}^{-1/2}\, V. \tag{9}
\]
We then have the following generalization bound.
Theorem 2.4. After $T$ iterations of MSG-CCA (Algorithm 1) with step size $\eta = \frac{\sqrt{2k}}{G\sqrt{T}}$, auxiliary sample of size $\tau = \Omega\big(\frac{B^2}{r^2}\log(2dT)\big)$, and initializing $M_1 = 0$, the following holds:
\[
\mathrm{Tr}(U_*^\top C_{xy} V_*) - \mathbb{E}\big[\mathrm{Tr}(\widehat{U}^\top C_{xy} \widehat{V})\big] \le \frac{\sqrt{2k}\,G + 2k\beta}{\sqrt{T}} + \frac{kB}{rT} + \frac{2kB}{r^2}\left(\sqrt{\frac{2B^2\log(d)}{T}} + \frac{2B\log(d)}{3T}\right),
\]
\[
\mathbb{E}\big[\|\widehat{U}^\top C_x \widehat{U} - I\|_2\big] \le \frac{B}{r_x^2}\left(\sqrt{\frac{2B^2\log(d_x)}{T}} + \frac{2B\log(d_x)}{3T}\right) + \frac{B+1}{T},
\]
\[
\mathbb{E}\big[\|\widehat{V}^\top C_y \widehat{V} - I\|_2\big] \le \frac{B}{r_y^2}\left(\sqrt{\frac{2B^2\log(d_y)}{T}} + \frac{2B\log(d_y)}{3T}\right) + \frac{B+1}{T},
\]
where the expectation is with respect to the i.i.d. samples and rounding, the pair $(U_*, V_*)$ is the optimum of (1), $(\widehat{U}, \widehat{V})$ are the factors (defined in (9)) of the rank-$k$ output of MSG-CCA, $r := \min\{r_x, r_y\}$, $d := \max\{d_x, d_y\}$, $\beta$ is as given in Lemma 2.2, and $G = \frac{2B}{\sqrt{r_x r_y}}$.
All proofs are deferred to the Appendix in the supplementary material. A few remarks are in order.
Convexity: In our design and analysis of MSG-CCA, we have leveraged the following observations:
(i) since the objective is linear, an optimum of (5) is always attained at an extreme point, corresponding
to an optimum of (3); (ii) the exact convex relaxation (5) is tractable (this is not often the case for
non-convex problems); and (iii) although (5) might also have optima not on extreme points, we have
an efficient randomized method, called rounding, to extract from any feasible point of (5) a solution
of (3) that has the same value in expectation [27].
Eigengap-free bound: Theorems 2.3 and 2.4 do not require an eigengap in the cross-correlation matrix $C_{xy}$, and in particular the error bound, and thus the implied iteration complexity to achieve a desired suboptimality, does not depend on an eigengap.
Comparison with [7]: It is not straightforward to compare with the results of [7]. As discussed in Section 1.3, the authors of [7] consider only the case $k = 1$ and their objective is different from ours. They seek $(u, v)$ that have high alignment with the optimal $(u^*, v^*)$ as measured through the alignment $\ell(\bar{u}, \bar{v}) := \frac{1}{2}\big(\bar{u}^\top C_x u^* + \bar{v}^\top C_y v^*\big)$. Furthermore, the analysis in [7] is dependent on the eigengap $\Delta = \sigma_1 - \sigma_2$ between the top two singular values $\sigma_1, \sigma_2$ of the population cross-correlation matrix $T$. Nevertheless, one can relate their objective $\ell(u, v)$ to ours and ask what their guarantees ensure in terms of our objective, namely achieving $\epsilon$-suboptimality for Problem (3). For the case $k = 1$, and in the presence of an eigengap $\Delta$, the method of [7] can be used to find an $\epsilon$-suboptimal solution to Problem (3) with $O\big(\frac{\log(d)}{\epsilon\,\Delta^2}\big)$ samples.
Capped MSG-CCA: Although MSG-CCA comes with good theoretical guarantees, the computational cost per iteration can be $O(d^3)$. Therefore, we consider a practical variant of MSG-CCA that explicitly controls the rank of the iterates. To ensure computational efficiency, we recommend imposing a hard constraint on the rank of the iterates of MSG-CCA, following an approach similar to previous works on PCA [2] and PLS [4]:
\[
\text{maximize } \langle M,\, C_x^{-1/2} C_{xy}\, C_y^{-1/2}\rangle \;\; \text{s.t.} \;\; \|M\|_2 \le 1, \; \|M\|_* \le k, \; \mathrm{rank}(M) \le K. \tag{10}
\]
For estimates of the whitening transformations, at each iteration, we set the smallest $d - K$ eigenvalues of the covariance matrices to a constant (of the order of the estimated smallest eigenvalue of the covariance matrix). This allows us to efficiently compute the whitening transformations since the covariance matrices decompose into a sum of a low-rank matrix and a scaled identity matrix, bringing down the computational cost per iteration to $O(dK^2)$. We observe empirically on a real dataset (see Section 4) that this procedure, along with capping the rank of MSG iterates, does not hurt the convergence of MSG-CCA.
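A sketch of the low-rank-plus-identity trick: if the capped covariance estimate is kept as a rank-$K$ eigendecomposition $V\,\mathrm{diag}(s)\,V^\top$ with the remaining $d-K$ eigenvalues set to a constant $\sigma$, the whitening transform applies in $O(dK)$ per vector. The function below is a hypothetical helper illustrating this, not the authors' implementation.

```python
import numpy as np

def capped_inv_sqrt(V, s, sigma):
    """Apply C^{-1/2} for C = V diag(s) V^T + sigma (I - V V^T), where
    V in R^{d x K} has orthonormal columns. Since C has eigenvalues s on
    span(V) and sigma on its complement,
      C^{-1/2} = V diag(1/sqrt(s) - 1/sqrt(sigma)) V^T + I / sqrt(sigma)."""
    coeff = 1.0 / np.sqrt(s) - 1.0 / np.sqrt(sigma)
    return lambda x: (V * coeff) @ (V.T @ x) + x / np.sqrt(sigma)
```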
3 Matrix Exponentiated Gradient for CCA (MEG-CCA)
In this section, we consider matrix multiplicative weight updates for CCA. Multiplicative weights
method is a generic algorithmic technique in which one updates a distribution over a set of interest
by iteratively multiplying the probability mass of elements [12]. In our setting, the set is that of $k$-dimensional (paired) subspaces of $\mathbb{R}^d$, and the multiplicative algorithm is an instance of matrix exponentiated
gradient (MEG) update. A motivation for considering MEG is the fact that for related problems,
including principal component analysis (PCA) and partial least squares (PLS), MEG has been shown
to yield fast optimistic rates [4, 22, 26]. Unfortunately we are not able to recover such optimistic
rates for CCA as the error in the inexact gradient decreases too slowly.
Our development of MEG requires the symmetrization of Problem (3). Recall that $g := C_x^{-1/2} C_{xy}\, C_y^{-1/2}$. Consider the following symmetric matrix $C := \begin{pmatrix} 0 & g \\ g^\top & 0 \end{pmatrix}$ of size $d \times d$, where $d = d_x + d_y$. The matrix $C$ is referred to as the self-adjoint dilation of the matrix $g$ [20]. Given the SVD of $g = U\Sigma V^\top$ with no repeated singular values, the eigen-decomposition of $C$ is given as
\[
C = \frac{1}{2}\begin{pmatrix} U & U \\ V & -V \end{pmatrix}\begin{pmatrix} \Sigma & 0 \\ 0 & -\Sigma \end{pmatrix}\begin{pmatrix} U & U \\ V & -V \end{pmatrix}^{\!\top}.
\]
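A quick numerical sanity check of the dilation property (a sketch; the dimensions are arbitrary):

```python
import numpy as np

# The positive eigenvalues of the self-adjoint dilation of a rectangular g
# are exactly the singular values of g.
dx, dy = 5, 3
g = np.random.randn(dx, dy)
C = np.block([[np.zeros((dx, dx)), g],
              [g.T, np.zeros((dy, dy))]])
eigs = np.linalg.eigvalsh(C)
svals = np.linalg.svd(g, compute_uv=False)
assert np.allclose(np.sort(eigs)[-dy:], np.sort(svals))
```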
In other words, the top-$k$ left and right singular vectors of $C_x^{-1/2} C_{xy}\, C_y^{-1/2}$, which comprise the CCA solution we seek, are encoded in the top and bottom rows, respectively, of the top-$k$ eigenvectors of its dilation. This suggests the following scaled re-parameterization of Problem (3): find $M \in \mathbb{R}^{d \times d}$ that
\[
\text{maximize } \langle M, C\rangle \;\; \text{s.t.} \;\; \lambda_i(M) \in \{0, 1\}, \; i = 1, \dots, d, \;\; \mathrm{rank}(M) = k. \tag{11}
\]
As in Section 2, we take the convex hull of the constraint set to get a convex relaxation to Problem (11).
\[
\text{maximize } \langle M, C\rangle \quad \text{s.t.} \quad M \succeq 0, \;\; \|M\|_2 \le 1, \;\; \mathrm{Tr}(M) = k. \tag{12}
\]
Stochastic mirror descent on Problem (12) with the choice of potential function being the quantum relative entropy gives the following updates [4, 27]:
\[
\widehat{M}_t = \frac{\exp\big(\log(M_{t-1}) + \eta C_t\big)}{\mathrm{Tr}\big(\exp\big(\log(M_{t-1}) + \eta C_t\big)\big)}, \qquad M_t = P\big(\widehat{M}_t\big), \tag{13}
\]
where $C_t$ is the self-adjoint dilation of the unbiased instantaneous gradient $g_t$, and $P$ denotes the Bregman projection [10] onto the convex set of constraints in Problem (12). As discussed in Section 2, we only need an inexact gradient estimate $\widetilde{C}_t$ of $C_t$ with a bound on $\mathbb{E}[\sum_{t=1}^{T}\|C_t - \widetilde{C}_t\|]$ of $O(\sqrt{T})$. Setting $\widetilde{C}_t$ to be the self-adjoint dilation of $\partial_t$, defined in Section 2, guarantees such a bound.
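A sketch of one MEG-CCA update follows. The Bregman projection onto the constraints of Problem (12) under the quantum relative entropy acts on the eigenvalues by capping at 1 and rescaling the uncapped mass, in the spirit of [25, 27]; the trace normalization of eqn. (13) is folded into this projection, and the iterate is assumed strictly positive definite so that its matrix logarithm exists.

```python
import numpy as np
from scipy.linalg import expm, logm

def cap_and_rescale(s, k):
    """Project eigenvalues onto {0 <= s_i <= 1, sum_i s_i = k}: cap entries
    at 1 and rescale the rest so the total is k (entropic projection)."""
    s = np.asarray(s, dtype=float).copy()
    capped = np.zeros(len(s), dtype=bool)
    while True:
        scale = (k - capped.sum()) / s[~capped].sum()
        snew = np.where(capped, 1.0, s * scale)
        over = (snew > 1 + 1e-12) & ~capped
        if not over.any():
            return np.minimum(snew, 1.0)
        capped |= over

def meg_cca_step(M, C_tilde, eta, k):
    """One MEG-CCA update, eqn (13), with the projection in the eigenbasis."""
    A = expm(logm(M) + eta * C_tilde)   # multiplicative update
    s, V = np.linalg.eigh(A)
    return (V * cap_and_rescale(s, k)) @ V.T
```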
Lemma 3.1. Assume that the event $\mathcal{A}_t$ occurs, that $g_t - \partial_t$ has no repeated singular values, and that with probability one, for $(x, y) \sim \mathcal{D}$, we have $\max\{\|x\|^2, \|y\|^2\} \le B$. Then, for $\beta$ defined in Lemma 2.2, we have that
\[
\mathbb{E}_{x_t, y_t}\big[\langle M_{t-1} - M^*,\; C_t - \widetilde{C}_t\rangle \,\big|\, \mathcal{A}_t\big] \le \frac{2k\beta}{\sqrt{t}},
\]
where $M^*$ is the optimum of Problem (11).
Using the bound above, we can bound the suboptimality gap in the population objective between the
true rank-k CCA solution and the rank-k solution returned by MEG-CCA.
Theorem 3.2. After $T$ iterations of MEG-CCA (see Algorithm 2 in the Appendix) with step size $\eta = \frac{1}{G}\log\big(1 + \sqrt{\frac{2\log(d)}{GT}}\big)$, auxiliary sample of size $\tau = \Omega\big(\frac{B^2}{r^2}\log(2d\sqrt{T})\big)$, and initializing $M_0 = \frac{1}{d} I$, the following holds:
\[
\langle M^*, C\rangle - \mathbb{E}\big[\langle \widehat{M}, C\rangle\big] \le 2k\sqrt{\frac{G^2\log(d)}{T}} + \frac{2k\beta}{\sqrt{T}},
\]
where the conditional expectation is taken with respect to the distribution and the internal randomization of the algorithm, $M^*$ is the optimum of Problem (11), $\widehat{M}$ is the rank-$k$ output of MEG-CCA after rounding, $G = \frac{2B}{\sqrt{r_x r_y}}$, and $\beta$ is defined in Lemma 2.2.
All of our remarks regarding the latent convexity of the problem and practical variants from Section 2 apply to MEG-CCA as well. We note, however, that without additional assumptions, such as an eigengap for $T$, we are not able to recover projections onto the canonical subspaces as done in Theorem 2.4.
4 Experiments
We provide experimental results for our proposed methods, in particular we compare capped-MSG
which is the practical variant of Algorithm 1 with capping as defined in equation (10), and MEG
(Algorithm 2 in the Appendix), on a real dataset, Mediamill [19], consisting of paired observations
of videos and corresponding commentary. We compare our algorithms against CCALin of [8], ALS
CCA of [24],² and SAA, which is denoted by "batch" in Figure 1. All of the comparisons are given in terms of the CCA objective as a function of either CPU runtime or the number of iterations. The target dimensionality in our experiments is $k \in \{1, 2, 4\}$. The choice of $k$ is dictated largely by the fact that the spectrum of the Mediamill dataset decays exponentially. To ensure that the problem is well-conditioned, we add $\lambda I$ for $\lambda = 0.1$ to the empirical estimates of the covariance matrices on the Mediamill dataset. For both MSG and MEG we set the step size at iteration $t$ to be $\eta_t = \frac{0.1}{\sqrt{t}}$.
Mediamill is a multiview dataset consisting of n = 10, 000 corresponding videos and text annotations with labels representing semantic concepts [19]. The image view consists of 120-dimensional
visual features extracted from representative frames selected from videos, and the textual features are
100-dimensional. We give the competing algorithms, both CCALin and ALS CCA, the advantage of the knowledge of the eigengap at $k$. In particular, we estimate the spectrum of the matrix $\widehat{T}$ for the Mediamill dataset and set the gap-dependent parameters in CCALin and ALS CCA accordingly.
We note, however, that estimating the eigengap to set the parameters is impractical in real scenarios.
Both CCALin and ALS CCA will therefore require additional tuning compared to MSG and MEG
algorithms proposed here. In the experiments, we observe that CCALin and ALS CCA outperform
MEG and capped-MSG when recovering the top CCA component, in terms of progress per-iteration.
However, capped-MSG is the best in terms of the overall runtime. The plots are shown in Figure 1.
5 Discussion
We study CCA as a stochastic optimization problem and show that it is efficiently learnable by providing analysis for two stochastic approximation algorithms. In particular, the proposed algorithms achieve $\epsilon$-suboptimality in the population objective in $O(1/\epsilon^2)$ iterations.
Note that both of our algorithms, MSG-CCA in Algorithm 1 and MEG-CCA in Algorithm 2 in Appendix B, are instances of the inexact proximal-gradient method which was studied in [16]. In particular, both algorithms receive a noisy gradient $\partial_t = g_t + E_t$ at iteration $t$ and perform exact proximal steps (Bregman projections in equations (7) and (13)). The main result in [16] provides an $O(E^2/T)$ convergence rate, where $E = \sum_{t=1}^{T}\|E_t\|$ is the partial sum of the errors in the gradients. It is shown that $E = o(\sqrt{T})$ is a necessary condition to obtain convergence. However, for the CCA problem that we are considering in this paper, our Lemma A.6 shows that $E = O(\sqrt{T})$. In fact, it is easy to see that $E = \Omega(\sqrt{T})$. Our analysis yields $O(\frac{1}{\sqrt{T}})$ convergence rates for both Algorithms 1 and 2. This perhaps warrants further investigation into the more general problem of inexact proximal gradient methods.
In empirical comparisons, we found the capped version of the proposed MSG algorithm to outperform other methods, including MEG, in terms of the overall runtime needed to reach an $\epsilon$-suboptimal solution. Future work will focus on gaining a better theoretical understanding of capped MSG.
² We run ALS only for $k = 1$ as the algorithm and the current implementation from the authors do not handle $k > 1$.
[Figure 1: six panels, (a) k = 1, (b) k = 2, (c) k = 4, plotting the CCA objective against iteration (top row) and against runtime in seconds (bottom row) for CAPPED-MSG, MEG, CCALin, batch, ALSCCA, and the maximum objective.]
Figure 1: Comparisons of CCA-Lin, CCA-ALS, MSG, and MEG for CCA optimization on the MediaMill
dataset, in terms of the objective value as a function of iteration (top) and as a function of CPU runtime (bottom).
Acknowledgements
This research was supported in part by NSF BIGDATA grant IIS-1546482.
References
[1] Z. Allen-Zhu and Y. Li. Doubly accelerated methods for faster CCA and generalized eigendecomposition. In Proceedings of the 34th International Conference on Machine Learning, ICML, 2017. Full version available at http://arxiv.org/abs/1607.06017.
[2] R. Arora, A. Cotter, and N. Srebro. Stochastic optimization of PCA with capped MSG. In Advances in Neural Information Processing Systems, NIPS, 2013.
[3] R. Arora and K. Livescu. Multi-view CCA-based acoustic features for phonetic recognition across speakers and domains. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 7135–7139. IEEE, 2013.
[4] R. Arora, P. Mianjy, and T. Marinov. Stochastic optimization for multiview representation learning using partial least squares. In Proceedings of The 33rd International Conference on Machine Learning, ICML, pages 1786–1794, 2016.
[5] A. Benton, R. Arora, and M. Dredze. Learning multiview embeddings of twitter users. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, volume 2, pages 14–19, 2016.
[6] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, pages 161–168, 2008.
[7] C. Gao, D. Garber, N. Srebro, J. Wang, and W. Wang. Stochastic canonical correlation analysis. arXiv preprint arXiv:1702.06533, 2017.
[8] R. Ge, C. Jin, P. Netrapalli, A. Sidford, et al. Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis. In International Conference on Machine Learning, pages 2741–2750, 2016.
[9] S. Golden. Lower bounds for the Helmholtz function. Physical Review, 137(4B):B1127, 1965.
[10] M. Herbster and M. K. Warmuth. Tracking the best linear predictor. Journal of Machine Learning Research, 1(Sep):281–309, 2001.
[11] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
[12] S. Kale. Efficient algorithms using the multiplicative weights update method. Princeton University, 2007.
[13] E. Kidron, Y. Y. Schechner, and M. Elad. Pixels that sound. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 88–95. IEEE, 2005.
[14] Y. Lu and D. P. Foster. Large scale canonical correlation analysis with iterative least squares. In Advances in Neural Information Processing Systems, pages 91–99, 2014.
[15] Z. Ma, Y. Lu, and D. Foster. Finding linear structure in large datasets with scalable canonical correlation analysis. In Proceedings of The 32nd International Conference on Machine Learning, pages 169–178, 2015.
[16] M. Schmidt, N. L. Roux, and F. R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In Advances in Neural Information Processing Systems, pages 1458–1466, 2011.
[17] B. A. Schmitt. Perturbation bounds for matrix square roots and pythagorean sums. Linear Algebra and its Applications, 174:215–227, 1992.
[18] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In Proceedings of the 25th International Conference on Machine Learning, pages 928–935. ACM, 2008.
[19] C. G. Snoek, M. Worring, J. C. Van Gemert, J.-M. Geusebroek, and A. W. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In Proceedings of the 14th ACM International Conference on Multimedia, pages 421–430. ACM, 2006.
[20] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[21] J. A. Tropp et al. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 8(1-2):1–230, 2015.
[22] K. Tsuda, G. Rätsch, and M. K. Warmuth. Matrix exponentiated gradient updates for on-line learning and Bregman projection. In Journal of Machine Learning Research, pages 995–1018, 2005.
[23] A. Vinokourov, N. Cristianini, and J. Shawe-Taylor. Inferring a semantic representation of text via cross-language correlation analysis. In Advances in Neural Information Processing Systems, pages 1497–1504, 2003.
[24] W. Wang, J. Wang, D. Garber, and N. Srebro. Efficient globally convergent stochastic optimization for canonical correlation analysis. In Advances in Neural Information Processing Systems, pages 766–774, 2016.
[25] M. K. Warmuth and D. Kuzmin. Online variance minimization. In Learning Theory, pages 514–528. Springer, 2006.
[26] M. K. Warmuth and D. Kuzmin. Randomized PCA algorithms with regret bounds that are logarithmic in the dimension. In NIPS'06, 2006.
[27] M. K. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9(10), 2008.
6,704 | 7,064 | Resurrecting the sigmoid in deep learning through
dynamical isometry: theory and practice
Jeffrey Pennington
Google Brain
Samuel S. Schoenholz
Google Brain
Surya Ganguli
Applied Physics, Stanford University and Google Brain
Abstract
It is well known that weight initialization in deep networks can have a dramatic
impact on learning speed. For example, ensuring the mean squared singular value
of a network?s input-output Jacobian is O(1) is essential for avoiding exponentially
vanishing or exploding gradients. Moreover, in deep linear networks, ensuring that
all singular values of the Jacobian are concentrated near 1 can yield a dramatic
additional speed-up in learning; this is a property known as dynamical isometry.
However, it is unclear how to achieve dynamical isometry in nonlinear deep networks. We address this question by employing powerful tools from free probability
theory to analytically compute the entire singular value distribution of a deep
network?s input-output Jacobian. We explore the dependence of the singular value
distribution on the depth of the network, the weight initialization, and the choice of
nonlinearity. Intriguingly, we find that ReLU networks are incapable of dynamical
isometry. On the other hand, sigmoidal networks can achieve isometry, but only
with orthogonal weight initialization. Moreover, we demonstrate empirically that
deep nonlinear networks achieving dynamical isometry learn orders of magnitude
faster than networks that do not. Indeed, we show that properly-initialized deep
sigmoidal networks consistently outperform deep ReLU networks. Overall, our
analysis reveals that controlling the entire distribution of Jacobian singular values
is an important design consideration in deep learning.
1 Introduction
Deep learning has achieved state-of-the-art performance in many domains, including computer
vision [1], machine translation [2], human games [3], education [4], and neurobiological modelling [5,
6]. A major determinant of success in training deep networks lies in appropriately choosing the
initial weights. Indeed the very genesis of deep learning rested upon the initial observation that
unsupervised pre-training provides a good set of initial weights for subsequent fine-tuning through
backpropagation [7]. Moreover, seminal work in deep learning suggested that appropriately-scaled
Gaussian weights can prevent gradients from exploding or vanishing exponentially [8], a condition
that has been found to be necessary to achieve reasonable learning speeds [9].
These random weight initializations were primarily driven by the principle that the mean squared
singular value of a deep network?s Jacobian from input to output should remain close to 1. This
condition implies that on average, a randomly chosen error vector will preserve its norm under
backpropagation; however, it provides no guarantees on the worst case growth or shrinkage of an error
vector. A stronger requirement one might demand is that every Jacobian singular value remain close
to 1. Under this stronger requirement, every single error vector will approximately preserve its norm,
and moreover all angles between different error vectors will be preserved. Since error information
backpropagates faithfully and isometrically through the network, this stronger requirement is called
dynamical isometry [10].
A theoretical analysis of exact solutions to the nonlinear dynamics of learning in deep linear networks
[10] revealed that weight initializations satisfying dynamical isometry yield a dramatic increase in
learning speed compared to initializations that do not. For such linear networks, orthogonal weight
initializations achieve dynamical isometry, and, remarkably, their learning time, measured in number
of learning epochs, becomes independent of depth. In contrast, random Gaussian initializations do
not achieve dynamical isometry, nor do they achieve depth-independent training times.
It remains unclear, however, how these results carry over to deep nonlinear networks. Indeed,
empirically, a simple change from Gaussian to orthogonal initializations in nonlinear networks has
yielded mixed results [11], raising important theoretical and practical questions. First, how does the
entire distribution of singular values of a deep network?s input-output Jacobian depend upon the depth,
the statistics of random initial weights, and the shape of the nonlinearity? Second, what combinations
of these ingredients can achieve dynamical isometry? And third, among the nonlinear networks
that have neither vanishing nor exploding gradients, do those that in addition achieve dynamical
isometry also achieve much faster learning compared to those that do not? Here we answer these
three questions, and we provide a detailed summary of our results in the discussion.
2 Theoretical Results
In this section we derive expressions for the entire singular value density of the input-output Jacobian
for a variety of nonlinear networks in the large-width limit. We compute the mean squared singular
value of $J$ (or, equivalently, the mean eigenvalue of $JJ^T$), and deduce a rescaling that sets it equal to 1. We then examine two metrics that help quantify the conditioning of the Jacobian: $s_{\max}$, the maximum singular value of $J$ (or, equivalently, $\lambda_{\max}$, the maximum eigenvalue of $JJ^T$); and $\sigma^2_{JJ^T}$, the variance of the eigenvalue distribution of $JJ^T$. If $\lambda_{\max} \gg 1$ and $\sigma^2_{JJ^T} \gg 1$ then the Jacobian is ill-conditioned and we expect the learning dynamics to be slow.
2.1 Problem setup
Consider an $L$-layer feed-forward neural network of width $N$ with synaptic weight matrices $W^l \in \mathbb{R}^{N \times N}$, bias vectors $b^l$, pre-activations $h^l$ and post-activations $x^l$, with $l = 1, \dots, L$. The feed-forward dynamics of the network are governed by
\[
x^l = \phi(h^l), \qquad h^l = W^l x^{l-1} + b^l, \tag{1}
\]
where $\phi : \mathbb{R} \to \mathbb{R}$ is a pointwise nonlinearity and the input is $h^0 \in \mathbb{R}^N$. Now consider the input-output Jacobian $J \in \mathbb{R}^{N \times N}$ given by
\[
J = \frac{\partial x^L}{\partial h^0} = \prod_{l=1}^{L} D^l W^l. \tag{2}
\]
Here $D^l$ is a diagonal matrix with entries $D^l_{ij} = \phi'(h^l_i)\,\delta_{ij}$. The input-output Jacobian $J$ is closely related to the backpropagation operator mapping output errors to weight matrices at a given layer; if the former is well-conditioned, then the latter tends to be well-conditioned for all weight layers. We therefore wish to understand the entire singular value spectrum of $J$ for deep networks with randomly initialized weights and biases.
In particular, we will take the biases $b^l_i$ to be drawn i.i.d. from a zero-mean Gaussian with standard deviation $\sigma_b$. For the weights, we will consider two random matrix ensembles: (1) random Gaussian weights in which each $W^l_{ij}$ is drawn i.i.d. from a Gaussian with variance $\sigma_w^2/N$, and (2) random orthogonal weights, drawn from a uniform distribution over scaled orthogonal matrices obeying $(W^l)^\top W^l = \sigma_w^2 I$.
2.2 Review of signal propagation
The random matrices $D^l$ in eqn. (2) depend on the empirical distribution of pre-activations $h^l$ entering the nonlinearity in eqn. (1). The propagation of this empirical distribution through different layers $l$ was studied in [12]. There, it was shown that in the large-$N$ limit this empirical distribution converges to a Gaussian with zero mean and variance $q^l$, where $q^l$ obeys a recursion relation induced by the dynamics in eqn. (1),
\[
q^l = \sigma_w^2 \int \mathcal{D}h\; \phi\big(\sqrt{q^{l-1}}\, h\big)^2 + \sigma_b^2, \tag{3}
\]
with initial condition $q^0 = \frac{1}{N}\sum_{i=1}^{N}(h_i^0)^2$, and where $\mathcal{D}h = \frac{dh}{\sqrt{2\pi}}\exp\!\big(-\frac{h^2}{2}\big)$ denotes the standard Gaussian measure. This recursion has a fixed point obeying
\[
q^* = \sigma_w^2 \int \mathcal{D}h\; \phi\big(\sqrt{q^*}\, h\big)^2 + \sigma_b^2. \tag{4}
\]
If the input $h^0$ is chosen so that $q^0 = q^*$, then we start at the fixed point, and the distribution of $D^l$ becomes independent of $l$. Also, if we do not start at the fixed point, in many scenarios we rapidly approach it in a few layers (see [12]), so for large $L$, assuming $q^l = q^*$ at all depths $l$ is a good approximation in computing the spectrum of $J$.
Another important quantity governing signal propagation through deep networks [12, 13] is
\[
\chi = \frac{1}{N}\,\Big\langle \mathrm{Tr}\big[(DW)^\top DW\big] \Big\rangle = \sigma_w^2 \int \mathcal{D}h\; \big[\phi'\big(\sqrt{q^*}\, h\big)\big]^2, \tag{5}
\]
where $\phi'$ is the derivative of $\phi$. Here $\chi$ is the mean of the distribution of squared singular values of the matrix $DW$, when the pre-activations are at their fixed point distribution with variance $q^*$. As shown in [12, 13] and Fig. 1, $\chi(\sigma_w, \sigma_b)$ separates the $(\sigma_w, \sigma_b)$ plane into two phases, chaotic and ordered, in which gradients exponentially explode or vanish, respectively. Indeed, the mean squared singular value of $J$ was shown simply to be $\chi^L$ in [12, 13], so $\chi = 1$ is a critical line of initializations with neither vanishing nor exploding gradients.
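For concreteness, the fixed point $q^*$ of eqn. (4) and $\chi$ of eqn. (5) can be computed numerically with Gauss-Hermite quadrature; the sketch below does so for $\phi = \tanh$ (the parameter values are arbitrary).

```python
import numpy as np

def q_star_and_chi(phi, dphi, sw2, sb2, iters=100):
    """Iterate the variance map, eqn (3), to its fixed point q*, eqn (4),
    then evaluate chi, eqn (5), via Gauss-Hermite quadrature for N(0,1)."""
    h, w = np.polynomial.hermite_e.hermegauss(101)
    w = w / w.sum()                       # normalize to a standard Gaussian
    q = 1.0
    for _ in range(iters):
        q = sw2 * np.sum(w * phi(np.sqrt(q) * h) ** 2) + sb2
    chi = sw2 * np.sum(w * dphi(np.sqrt(q) * h) ** 2)
    return q, chi

# example: a tanh network slightly into the chaotic regime
q, chi = q_star_and_chi(np.tanh, lambda h: 1.0 / np.cosh(h) ** 2,
                        sw2=1.5 ** 2, sb2=0.05 ** 2)
```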
[Figure 1: phase diagram in the $(\sigma_w, \sigma_b)$ plane, with the ordered phase ($\chi(\sigma_w, \sigma_b) < 1$, vanishing gradients) separated from the chaotic phase ($\chi(\sigma_w, \sigma_b) > 1$, exploding gradients) by the critical line, colored by the value of $q^*$.]
Figure 1: Order-chaos transition when $\phi(h) = \tanh(h)$. The critical line $\chi(\sigma_w, \sigma_b) = 1$ determines the boundary between two phases [12, 13]: (a) a chaotic phase when $\chi > 1$, where forward signal propagation expands and folds space in a chaotic manner and back-propagated gradients exponentially explode, and (b) an ordered phase when $\chi < 1$, where forward signal propagation contracts space in an ordered manner and back-propagated gradients exponentially vanish. The value of $q^*$ along the critical line separating the two phases is shown as a heatmap.
2.3 Free probability, random matrix theory and deep networks
While the previous section revealed that the mean squared singular value of $J$ is $\chi^L$, we would like to obtain more detailed information about the entire singular value distribution of $J$, especially when $\chi = 1$. Since eqn. (2) consists of a product of random matrices, free probability [14, 15, 16] becomes relevant to deep learning as a powerful tool to compute the spectrum of $J$, as we now review.
In general, given a random matrix $X$, its limiting spectral density is defined as
\[
\rho_X(\lambda) \equiv \Big\langle \frac{1}{N}\sum_{i=1}^{N} \delta(\lambda - \lambda_i) \Big\rangle_X, \tag{6}
\]
where $\langle \cdot \rangle_X$ denotes the mean with respect to the distribution of the random matrix $X$. Also,
\[
G_X(z) \equiv \int_{\mathbb{R}} \frac{\rho_X(t)}{z - t}\, dt, \qquad z \in \mathbb{C} \setminus \mathbb{R}, \tag{7}
\]
is the definition of the Stieltjes transform of $\rho_X$, which can be inverted using
\[
\rho_X(\lambda) = -\frac{1}{\pi} \lim_{\epsilon \to 0^+} \operatorname{Im} G_X(\lambda + i\epsilon). \tag{8}
\]
[Figure 2: panels (a)-(d) showing singular value spectra at depths L = 2, 8, 32, 128 for linear Gaussian, ReLU orthogonal, and hard-tanh orthogonal networks.]
Figure 2: Examples of deep spectra at criticality for different nonlinearities at different depths.
Excellent agreement is observed between empirical simulations of networks of width 1000 (dashed
lines) and theoretical predictions (solid lines). ReLU and hard tanh are with orthogonal weights,
and linear is with Gaussian weights. Gaussian linear and orthogonal ReLU have similarly-shaped
distributions, especially for large depths, where poor conditioning and many large singular values are
observed. On the other hand, orthogonal hard tanh is much better conditioned.
The Stieltjes transform $G_X$ is related to the moment generating function $M_X$,
\[
M_X(z) \equiv z\,G_X(z) - 1 = \sum_{k=1}^{\infty} \frac{m_k}{z^k}, \tag{9}
\]
where $m_k$ is the $k$th moment of the distribution $\rho_X$, $m_k = \int d\lambda\, \rho_X(\lambda)\, \lambda^k = \frac{1}{N}\langle \operatorname{tr} X^k \rangle_X$. In turn, we denote the functional inverse of $M_X$ by $M_X^{-1}$, which by definition satisfies $M_X(M_X^{-1}(z)) = M_X^{-1}(M_X(z)) = z$. Finally, the S-transform [14, 15] is defined as
\[
S_X(z) = \frac{1+z}{z\,M_X^{-1}(z)}. \tag{10}
\]
The utility of the S-transform arises from its behavior under multiplication. Specifically, if A and
B are two freely-independent random matrices, then the S-transform of the product random matrix
ensemble AB is simply the product of their S-transforms,
\[
S_{AB}(z) = S_A(z)\,S_B(z). \tag{11}
\]
Our first main result will be to use eqn. (11) to write down an implicit definition of the spectral density of $JJ^T$. To do this we first note that (see Result 1 of the supplementary material),
\[
S_{JJ^T} = \prod_{l=1}^{L} S_{W_l W_l^T}\, S_{D_l^2} = S_{WW^T}^{L}\, S_{D^2}^{L}, \tag{12}
\]
where we have used the identical distribution of the weights to define $S_{WW^T} = S_{W_l W_l^T}$ for all $l$, and we have also used the fact that the pre-activations are distributed independently of depth as $h^l \sim \mathcal{N}(0, q^*)$, which implies that $S_{D_l^2} = S_{D^2}$ for all $l$.
Eqn. (12) provides a method to compute the spectrum $\rho_{JJ^T}(\lambda)$. Starting from $\rho_{W^T W}(\lambda)$ and $\rho_{D^2}(\lambda)$, we compute their respective S-transforms through the sequence of equations eqns. (7), (9), and (10), take the product in eqn. (12), and then reverse the sequence of steps to go from $S_{JJ^T}$ to $\rho_{JJ^T}(\lambda)$ through the inverses of eqns. (10), (9), and (8). Thus we must calculate the S-transforms of $WW^T$ and $D^2$, which we attack next for specific nonlinearities and weight ensembles in the following sections. In principle, this procedure can be carried out numerically for an arbitrary choice of nonlinearity, but we postpone this investigation to future work.
2.4 Linear networks
As a warm-up, we first consider a linear network in which $J = \prod_{l=1}^{L} W_l$. Since criticality ($\chi = 1$ in eqn. (5)) implies $\sigma_w^2 = 1$, and eqn. (4) reduces to $q^* = \sigma_w^2 q^* + \sigma_b^2$, the only critical point is $(\sigma_w, \sigma_b) = (1, 0)$. The case of orthogonal weights is simple: $J$ is also orthogonal, and all its singular values are 1, thereby achieving perfect dynamical isometry. Gaussian weights behave very differently.
The squared singular values $s_i^2$ of $J$ equal the eigenvalues $\lambda_i$ of $JJ^T$, which is a product Wishart matrix, whose spectral density was recently computed in [17]. The resulting singular value density of $J$ is given by
\[
\rho(s(\phi)) = \frac{2}{\pi}\sqrt{\frac{\sin^{3}(\phi)\,\sin^{L-2}(L\phi)}{\sin^{L+1}((L+1)\phi)}}, \qquad s(\phi) = \sqrt{\frac{\sin^{L+1}((L+1)\phi)}{\sin(\phi)\,\sin^{L}(L\phi)}}, \qquad \phi \in \big(0, \tfrac{\pi}{L+1}\big). \tag{13}
\]
Fig. 2(a) demonstrates a match between this theoretical density and the empirical density obtained
from numerical simulations of random linear networks. As the depth increases, this density becomes
highly anisotropic, both concentrating about zero and developing an extended tail.
Note that $\phi = \pi/(L+1)$ corresponds to the minimum singular value $s_{\min} = 0$, while $\phi = 0$ corresponds to the maximum eigenvalue, $\lambda_{\max} = s^2_{\max} = L^{-L}(L+1)^{L+1}$, which, for large $L$, scales as $\lambda_{\max} \sim eL$. Both eqn. (13) and the methods of Section 2.5 yield the variance of the eigenvalue distribution of $JJ^T$ to be $\sigma^2_{JJ^T} = L$. Thus for linear Gaussian networks, both $s^2_{\max}$ and $\sigma^2_{JJ^T}$ grow linearly with depth, signalling poor conditioning and the breakdown of dynamical isometry.
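These contrasting behaviors are easy to verify numerically; a minimal sketch (finite $N$, so the agreement is approximate):

```python
import numpy as np

N, L = 500, 16
J_orth, J_gauss = np.eye(N), np.eye(N)
for _ in range(L):
    Q, _ = np.linalg.qr(np.random.randn(N, N))   # (approximately Haar) orthogonal
    J_orth = Q @ J_orth
    J_gauss = (np.random.randn(N, N) / np.sqrt(N)) @ J_gauss

print(np.linalg.svd(J_orth, compute_uv=False).ptp())   # ~0: every singular value is 1
lam = np.linalg.svd(J_gauss, compute_uv=False) ** 2
print(lam.mean(), lam.var())                           # ~1 and ~L, as derived above
```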
2.5 ReLU and hard-tanh networks
We first discuss the criticality conditions (finite $q^*$ in eqn. (4) and $\chi = 1$ in eqn. (5)) in these two nonlinear networks. For both networks, since the slope of the nonlinearity $\phi'(h)$ only takes the values 0 and 1, $\chi$ in eqn. (5) reduces to $\chi = \sigma_w^2\, p(q^*)$, where $p(q^*)$ is the probability that a given neuron is in the linear regime with $\phi'(h) = 1$. As discussed above, we take the large-width limit in which the distribution of the pre-activations $h$ is a zero-mean Gaussian with variance $q^*$. We therefore find that for ReLU, $p(q^*) = \frac{1}{2}$ is independent of $q^*$, whereas for hard-tanh, $p(q^*) = \int_{-1}^{1} dh\, \frac{e^{-h^2/2q^*}}{\sqrt{2\pi q^*}} = \operatorname{erf}\!\big(1/\sqrt{2 q^*}\big)$ depends on $q^*$. In particular, it approaches 1 as $q^* \to 0$.
Thus for ReLU, $\chi = 1$ if and only if $\sigma_w^2 = 2$, in which case eqn. (4) reduces to $q^* = \frac{1}{2}\sigma_w^2 q^* + \sigma_b^2$, implying that the only critical point is $(\sigma_w, \sigma_b) = (\sqrt{2}, 0)$. For hard-tanh, in contrast, $\chi = \sigma_w^2\, p(q^*)$, where $p(q^*)$ itself depends on $\sigma_w$ and $\sigma_b$ through eqn. (4), and so the criticality condition $\chi = 1$ yields a curve in the $(\sigma_w, \sigma_b)$ plane similar to that shown for the tanh network in Fig. 1. As one moves along this curve in the direction of decreasing $\sigma_w$, the curve approaches the point $(\sigma_w, \sigma_b) = (1, 0)$ with $q^*$ monotonically decreasing towards 0, i.e. $q^* \to 0$ as $\sigma_w \to 1$.
The critical ReLU network and the one-parameter family of critical hard-tanh networks have neither vanishing nor exploding gradients, due to $\chi = 1$. Nevertheless, the entire singular value spectrum of $J$ of these networks can behave very differently. From eqn. (12), this spectrum depends on the nonlinearity $\phi(h)$ through $S_{D^2}$ in eqn. (10), which in turn only depends on the distribution of eigenvalues of $D^2$, or equivalently, the distribution of squared derivatives $\phi'(h)^2$. As we have seen, this distribution is a Bernoulli distribution with parameter $p(q^*)$: $\rho_{D^2}(z) = (1 - p(q^*))\,\delta(z) + p(q^*)\,\delta(z-1)$. Inserting this distribution into the sequence eqn. (7), eqn. (9), eqn. (10) then yields
\[
G_{D^2}(z) = \frac{1 - p(q^*)}{z} + \frac{p(q^*)}{z - 1}, \qquad M_{D^2}(z) = \frac{p(q^*)}{z - 1}, \qquad S_{D^2}(z) = \frac{z + 1}{z + p(q^*)}. \tag{14}
\]
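Eqn. (14) is mechanical to verify; a short sympy sketch:

```python
import sympy as sp

z, p, w = sp.symbols('z p w', positive=True)
G = (1 - p) / z + p / (z - 1)                  # eqn (7) for the Bernoulli rho_{D^2}
M = sp.simplify(z * G - 1)                     # eqn (9): p / (z - 1)
Minv = sp.solve(sp.Eq(M.subs(z, w), z), w)[0]  # functional inverse: (z + p) / z
S = sp.simplify((1 + z) / (z * Minv))          # eqn (10)
print(S)                                       # (z + 1) / (z + p)
```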
To complete the calculation of $S_{JJ^T}$ in eqn. (12), we must also compute $S_{WW^T}$. We do this for Gaussian and orthogonal weights in the next two subsections.
2.5.1 Gaussian weights
We re-derive the well-known expression for the S-transform of products of random Gaussian matrices
2
with variance w
in Example 3 of the supplementary material. The result is SW W T = w 2 (1 + z) 1 ,
which, when combined with eqn. (14) for SD2 , eqn. (12) for SJJ T , and eqn. (10) for MX 1 (z), yields
z+1
L 2L
z + p(q ? )
(15)
w .
z
Using eqn. (15) and eqn. (9), we can define a polynomial that the Stieltjes transform G satisfies,
SJJ T (z) =
w
2L
(z + p(q ? ))
2L
w G(Gz
L
,
+ p(q ? )
MJJ1T (z) =
1)L
(Gz
1) = 0 .
(16)
The correct root of this equation is the one for which G ? 1/z as z ! 1 [16]. From eqn. (8), the
spectral density is obtained from the imaginary part of G( + i?) as ? ! 0+ .
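A hedged numerical recipe for carrying this out: once z = λ + iε is fixed, eqn. (16) is a degree-(L + 1) polynomial in G, so a generic root finder applies; the physical branch (G ∼ 1/z at large λ) can be tracked inward by continuity. The root-tracking heuristic below is our own, not the paper's.

```python
# Hedged sketch: recover the spectral density of J J^T for a critical Gaussian
# ReLU network (p(q*) = 1/2, sigma_w^2 = 2) by solving eqn. (16) for G and
# taking rho(lam) = |Im G(lam + i*eps)| / pi. Branch selection by continuity
# from G ~ 1/z at large lam is our own heuristic.
import numpy as np
from scipy.special import comb

L, p, sw2 = 4, 0.5, 2.0
c = sw2 ** L

def roots_at(z):
    # c * G * (z*G + (p-1))^L - (z*G - 1) = 0, as a polynomial in G
    coeffs = np.zeros(L + 2, dtype=complex)          # degree L+1 ... 0
    for k in range(L + 1):                           # c*C(L,k) z^k (p-1)^(L-k) G^(k+1)
        coeffs[L - k] += c * comb(L, k) * z**k * (p - 1) ** (L - k)
    coeffs[L] -= z                                   # -z*G
    coeffs[L + 1] += 1.0                             # +1
    return np.roots(coeffs)

lams = np.linspace(60.0, 1e-3, 2000)                 # sweep from outside the support inward
eps, G_prev, dens = 1e-9, None, []
for lam in lams:
    rts = roots_at(lam + 1j * eps)
    target = 1.0 / lam if G_prev is None else G_prev
    G_prev = rts[np.argmin(np.abs(rts - target))]
    dens.append(abs(G_prev.imag) / np.pi)

# Integral equals the weight of the continuous part only: for ReLU the
# spectrum also carries an atom at zero, so the printed mass is below 1.
print(np.trapz(dens[::-1], lams[::-1]))
```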
[Figure 3 panels (a–d); annotated parameter values: q* = 64, q* = 1/64, L = 1, L = 1024]
Figure 3: The max singular value s_max of J versus L and q* for Gaussian (a,c) and orthogonal (b,d) weights, with ReLU (dashed) and hard-tanh (solid) networks. For Gaussian weights and for both ReLU and hard-tanh, s_max grows with L for all q* (see a,c) as predicted in eqn. (17). In contrast, for orthogonal hard-tanh, but not orthogonal ReLU, at small enough q*, s_max can remain O(1) even at large L (see b,d) as predicted in eqn. (22). In essence, at fixed small q*, if p(q*) is the large fraction of neurons in the linear regime, s_max only grows with L after L > p/(1 − p) (see d). As q* → 0, p(q*) → 1 and the hard-tanh networks look linear. Thus the lowest curve in (a) corresponds to the prediction of linear Gaussian networks in eqn. (13), while the lowest curve in (b) is simply 1, corresponding to linear orthogonal networks.
The positions of the spectral edges, namely the locations of the minimum and maximum eigenvalues of JJ^T, can be deduced from the values of z for which the imaginary part of the root of eqn. (16) vanishes, i.e. when the discriminant of the polynomial in eqn. (16) vanishes. After a detailed but unenlightening calculation, we find, for large L,

λ_max = s²_max = (σ²_w p(q*))^L [ (e/p(q*)) L + O(1) ].   (17)

Recalling that χ = σ²_w p(q*), we find exponential growth in λ_max if χ > 1 and exponential decay if χ < 1. Moreover, even at criticality when χ = 1, λ_max still grows linearly with depth.
Next, we obtain the variance σ²_{JJ^T} of the eigenvalue density of JJ^T by computing its first two moments m₁ and m₂. We employ the Lagrange inversion theorem [18],

M_{JJ^T}(z) = m₁/z + m₂/z² + ···,   M⁻¹_{JJ^T}(z) = m₁/z + m₂/m₁ + ···,   (18)

which relates the expansions of the moment generating function M_{JJ^T}(z) and its functional inverse M⁻¹_{JJ^T}(z). Substituting this expansion for M⁻¹_{JJ^T}(z) into eqn. (15), expanding the right hand side, and equating the coefficients of z, we find,

m₁ = (σ²_w p(q*))^L,   m₂ = (σ²_w p(q*))^{2L} (L + p(q*))/p(q*).   (19)
Both moments generically either exponentially grow or vanish. However, even at criticality, when χ = σ²_w p(q*) = 1, the variance σ²_{JJ^T} = m₂ − m₁² = L/p(q*) still exhibits linear growth with depth. Note that p(q*) is the fraction of neurons operating in the linear regime, which is always less than 1. Thus for both ReLU and hard-tanh networks, no choice of Gaussian initialization can ever prevent this linear growth, both in σ²_{JJ^T} and λ_max, implying that even critical Gaussian initializations will always lead to a failure of dynamical isometry at large depth for these networks.
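The moment formulas in eqn. (19) can be sanity-checked by direct simulation. The sketch below (our own) models each D_l as an independent Bernoulli(p) diagonal matrix, which matches the free-probability calculation though not the exact coupling between D and W in a trained network; at criticality it should reproduce m₁ = 1 and eigenvalue variance L/p.

```python
# Hedged Monte Carlo check of eqn. (19) at criticality (chi = sigma_w^2 p = 1).
# Assumption: D_l modeled as independent diagonal Bernoulli(p) matrices.
import numpy as np

rng = np.random.default_rng(1)
N, L, p, trials = 400, 6, 0.5, 10
sw = np.sqrt(1.0 / (p * N))                  # sigma_w^2 = 1/p  =>  chi = 1
m1s, vars_ = [], []
for _ in range(trials):
    J = np.eye(N)
    for _ in range(L):
        W = rng.normal(0.0, sw, size=(N, N))
        D = np.diag((rng.random(N) < p).astype(float))
        J = D @ W @ J
    ev = np.linalg.svd(J, compute_uv=False) ** 2
    m1s.append(ev.mean()); vars_.append(ev.var())

print(f"m1  ~ {np.mean(m1s):.3f}  (theory 1)")
print(f"var ~ {np.mean(vars_):.2f}  (theory L/p = {L / p:.1f})")
```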
2.5.2 Orthogonal weights

For orthogonal W, we have WW^T = I, and the S-transform is S_I = 1 (see Example 2 of the supplementary material). After scaling by σ_w, we have S_{WW^T} = S_{σ²_w I} = σ_w^{−2} S_I = σ_w^{−2}. Combining this with eqn. (14) and eqn. (12) yields S_{JJ^T}(z) and, through eqn. (10), yields M⁻¹_{JJ^T}:

S_{JJ^T}(z) = σ_w^{−2L} ((z + 1)/(z + p(q*)))^L,   M⁻¹_{JJ^T}(z) = ((z + 1)/z) ((z + 1)/(z + p(q*)))^{−L} σ_w^{2L}.   (20)

Now, combining eqn. (20) and eqn. (9), we obtain a polynomial that the Stieltjes transform G satisfies:

σ_w^{2L} G (Gz + p(q*) − 1)^L − (zG)^L (Gz − 1) = 0.   (21)
[Figure 4 panels (a–d): SGD, Momentum, RMSProp, ADAM]
Figure 4: Learning dynamics, measured by generalization performance on a test set, for networks of depth 200 and width 400 trained on CIFAR10 with different optimizers. Blue is tanh with σ²_w = 1.05, red is tanh with σ²_w = 2, and black is ReLU with σ²_w = 2. Solid lines are orthogonal and dashed lines are Gaussian initialization. The relative ordering of curves robustly persists across optimizers, and is strongly correlated with the degree to which dynamical isometry is present at initialization, as measured by s_max in Fig. 3. Networks with s_max closer to 1 learn faster, even though all networks are initialized critically with χ = 1. The most isometric network (orthogonal tanh with small σ²_w) trains several orders of magnitude faster than the least isometric ReLU network.
From this we can extract the eigenvalue and singular value densities of JJ^T and J, respectively, through eqn. (8). Figs. 2(b) and 2(c) demonstrate an excellent match between our theoretical predictions and numerical simulations of random networks. We find that at modest depths, the singular values are peaked near s_max, but at larger depths, the distribution both accumulates mass at 0 and spreads out, developing a growing tail. Thus at fixed critical values of σ_w and σ_b, both deep ReLU and hard-tanh networks have ill-conditioned Jacobians, even with orthogonal weight matrices.

As above, we can obtain the maximum eigenvalue of JJ^T by determining the values of z for which the discriminant of the polynomial in eqn. (21) vanishes. This calculation yields,
λ_max = s²_max = (σ²_w p(q*))^L (1 − p(q*)) L^L / ( p(q*) (L − 1)^{L−1} ).   (22)

For large L, λ_max either exponentially explodes or decays, except at criticality when χ = σ²_w p(q*) = 1, where it behaves as λ_max = ((1 − p(q*))/p(q*)) e (L − 1/2) + O(L⁻¹). Also, as above, we can compute the variance σ²_{JJ^T} by expanding M⁻¹_{JJ^T} in eqn. (20) and applying eqn. (18). At criticality, we find σ²_{JJ^T} = ((1 − p(q*))/p(q*)) L for large L. Now the large-L asymptotic behavior of both λ_max and σ²_{JJ^T} depends crucially on p(q*), the fraction of neurons in the linear regime.

For ReLU networks, p(q*) = 1/2, and we see that λ_max and σ²_{JJ^T} grow linearly with depth and dynamical isometry is unachievable in ReLU networks, even for critical orthogonal weights. In contrast, for hard-tanh networks, p(q*) = erf(1/√(2q*)). Therefore, one can always move along the critical line in the (σ_w, σ_b) plane towards the point (1, 0), thereby reducing q*, increasing p(q*), and decreasing, to an arbitrarily small value, the prefactor (1 − p(q*))/p(q*) controlling the linear growth of both λ_max and σ²_{JJ^T}. So unlike either ReLU networks or Gaussian networks, one can achieve dynamical isometry up to depth L by choosing q* small enough so that p(q*) ≈ 1 − 1/L. In essence, this strategy increases the fraction of neurons operating in the linear regime, enabling orthogonal hard-tanh nets to mimic the successful dynamical isometry achieved by orthogonal linear nets. However, this strategy is unavailable for orthogonal ReLU networks. A demonstration of these results is shown in Fig. 3.
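Concretely, the prescription p(q*) ≈ 1 − 1/L can be inverted in closed form since p(q*) = erf(1/√(2q*)); the helper below (our own) returns the required q* for a target depth.

```python
# Hedged sketch: pick q* so that p(q*) = erf(1/sqrt(2 q*)) ~ 1 - 1/L, the
# condition in the text for orthogonal hard-tanh networks to keep the
# linear-growth prefactor (1 - p)/p of order 1/L up to depth L.
import numpy as np
from scipy.special import erfinv

def qstar_for_depth(L):
    # erf(1/sqrt(2 q*)) = 1 - 1/L  =>  q* = 1 / (2 * erfinv(1 - 1/L)^2)
    return 1.0 / (2.0 * erfinv(1.0 - 1.0 / L) ** 2)

for L in [10, 100, 1000]:
    q = qstar_for_depth(L)
    print(f"L={L:<5} q*={q:.4f}  prefactor (1-p)/p = {1 / (L - 1):.4g}")
```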
3 Experiments

Having established a theory of the entire singular value distribution of J, and in particular of when dynamical isometry is present or not, we now provide empirical evidence that the presence or absence of this isometry can have a large impact on training speed. In our first experiment, summarized in Fig. 4, we compare three different classes of critical neural networks: (1) tanh with small σ²_w = 1.05 and σ²_b = 2.01 × 10⁻⁵; (2) tanh with large σ²_w = 2 and σ²_b = 0.104; and (3) ReLU with σ²_w = 2 and σ²_b = 2.01 × 10⁻⁵. In each case σ_b is chosen appropriately to achieve critical initial conditions at the
[Figure 5 panels (a–d); annotated parameter values: L = 10, L = 300, q* = 64, q* = 1/64]
Figure 5: Empirical measurements of SGD training time τ, defined as the number of steps to reach p ≈ 0.25 accuracy, for orthogonal tanh networks. In (a), curves reflect different depths L at fixed small q* = 0.025. Intriguingly, they all collapse onto a single universal curve when the learning rate η is rescaled by L and τ is rescaled by 1/√L. This implies the optimal learning rate is O(1/L), and remarkably, the optimal learning time τ grows only as O(√L). (b) Now different curves reflect different q* at fixed L = 200, revealing that smaller q*, associated with increased dynamical isometry in J, enables faster training times by allowing a larger optimal learning rate η. (c) τ as a function of L for a few values of q*. (d) τ as a function of q* for a few values of L. We see qualitative agreement of (c,d) with Fig. 3(b,d), suggesting a strong connection between τ and s_max.
boundary between order and chaos [12, 13], with χ = 1. All three of these networks have a mean squared singular value of 1, with neither vanishing nor exploding gradients in the infinite width limit. These experiments therefore probe the specific effect of dynamical isometry, or the entire shape of the spectrum of J, on learning. We also explore the degree to which more sophisticated optimizers can overcome poor initializations. We compare SGD, Momentum, RMSProp [19], and ADAM [20]. We train networks of depth L = 200 and width N = 400 for 10⁵ steps with a batch size of 10³. We additionally average our results over 30 different instantiations of the network to reduce noise. For each nonlinearity, initialization, and optimizer, we obtain the optimal learning rate through grid search. For SGD and SGD+Momentum we consider logarithmically spaced rates between [10⁻⁴, 10⁻¹] in steps of 10^{0.1}; for ADAM and RMSProp we explore the range [10⁻⁷, 10⁻⁴] at the same step size. To choose the optimal learning rate we select a threshold accuracy p and measure the first step when performance exceeds p. Our qualitative conclusions are fairly independent of p. Here we report results on a version of CIFAR10.¹
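For completeness, a minimal sketch (ours, not the paper's training code) of the scaled orthogonal initialization used throughout: W = σ_w Q with Q Haar-distributed, obtained from the QR decomposition of a Gaussian matrix with the standard sign correction.

```python
# Minimal sketch of scaled orthogonal initialization: W = sigma_w * Q with
# Q Haar-orthogonal, via QR of a Gaussian matrix plus a sign fix so that Q
# is Haar-distributed rather than QR-biased.
import numpy as np

def orthogonal_init(n, sigma_w, rng):
    A = rng.normal(size=(n, n))
    Q, R = np.linalg.qr(A)
    Q *= np.sign(np.diag(R))          # multiply columns by sign(diag(R))
    return sigma_w * Q

rng = np.random.default_rng(0)
W = orthogonal_init(400, np.sqrt(1.05), rng)        # tanh case, sigma_w^2 = 1.05
print(np.allclose(W.T @ W, 1.05 * np.eye(400)))     # W^T W = sigma_w^2 * I
```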
Based on our theory, we expect the performance advantage of orthogonal over Gaussian initializations to be significant in case (1) and somewhat negligible in cases (2) and (3). This prediction is verified in Fig. 4 (blue solid and dashed learning curves are well-separated, compared to red and black cases). Furthermore, the extent of dynamical isometry at initialization strongly predicts the speed of learning. The effect is large, with the most isometric case (orthogonal tanh with small σ²_w) learning faster than the least isometric case (ReLU networks) by several orders of magnitude. Moreover, these conclusions robustly persist across all optimizers. Intriguingly, in the case where dynamical isometry helps the most (tanh with small σ²_w), the effect of initialization (orthogonal versus Gaussian) has a much larger impact on learning speed than the choice of optimizer.
These insights suggest a more quantitative analysis of the relation between dynamical isometry and learning speed for orthogonal tanh networks, summarized in Fig. 5. We focus on SGD, given the lack of a strong dependence on optimizer. Intriguingly, Fig. 5(a) demonstrates the optimal training time is O(√L) and so grows sublinearly with depth L. Also, Fig. 5(b) reveals that increased dynamical isometry enables faster training by making available larger (i.e. faster) learning rates. Finally, Fig. 5(c,d) and their similarity to Fig. 3(b,d) suggest a strong positive correlation between training time and the max singular value of J. Overall, these results suggest that dynamical isometry is correlated with learning speed, and controlling the entire distribution of Jacobian singular values may be an important design consideration in deep learning.
In Fig. 6, we explore the relationship between dynamical isometry and performance going beyond initialization by studying the evolution of singular values throughout training. We find that if dynamical isometry is present at initialization, it persists for some time into training. Intriguingly,

¹ We use the standard CIFAR10 dataset augmented with random flips and crops, and random saturation, brightness, and contrast perturbations.
[Figure 6 panels (a–d); annotated values: SGD steps 10¹–10³, q* = 1/64, q* = 32, t = 0]
Figure 6: Singular value evolution of J for orthogonal tanh networks during SGD training. (a) The average distribution, over 30 networks with q* = 1/64, at different SGD steps. (b) A measure of eigenvalue ill-conditioning of JJ^T (⟨λ⟩²/⟨λ²⟩ ≤ 1, with equality if and only if ρ(λ) = δ(λ − λ₀)) over number of SGD steps for different initial q*. Interestingly, the optimal q* that best maintains dynamical isometry in later stages of training is not simply the smallest q*. (c) Test accuracy as a function of SGD step for those q* considered in (b). (d) Generalization accuracy as a function of initial q*. Together (b,c,d) reveal that the optimal nonzero q*, that best maintains dynamical isometry into training, also yields the fastest learning and best generalization accuracy.
perfect dynamical isometry at initialization (q* = 0) is not the best choice for preserving isometry throughout training; instead, some small but nonzero value of q* appears optimal. Moreover, both learning speed and generalization accuracy peak at this nonzero value. These results bolster the relationship between dynamical isometry and performance beyond simply the initialization.
4 Discussion
In summary, we have employed free probability theory to analytically compute the entire distribution
of Jacobian singular values as a function of depth, random initialization, and nonlinearity shape.
This analytic computation yielded several insights into which combinations of these ingredients
enable nonlinear deep networks to achieve dynamical isometry. In particular, deep linear Gaussian
networks cannot; the maximum Jacobian singular value grows linearly with depth even if the second
moment remains 1. The same is true for both orthogonal and Gaussian ReLU networks. Thus
the ReLU nonlinearity destroys the dynamical isometry of orthogonal linear networks. In contrast,
orthogonal, but not Gaussian, sigmoidal networks can achieve dynamical isometry; as the depth
increases, the max singular value can remain O(1) in the former case but grows linearly in the latter.
Thus orthogonal sigmoidal networks rescue the failure of dynamical isometry in ReLU networks.
Correspondingly, we demonstrate, on CIFAR-10, that orthogonal sigmoidal networks can learn orders
of magnitude faster than ReLU networks. This performance advantage is robust to the choice of a
variety of optimizers, including SGD, momentum, RMSProp and ADAM. Orthogonal sigmoidal
networks moreover have sublinear learning times with depth. While not as fast as orthogonal
linear networks, which have depth independent training times [10], orthogonal sigmoidal networks
have training times growing as the square root of depth. Finally, dynamical isometry, if present at
initialization, persists for a large amount of time during training. Moreover, isometric initializations
with longer persistence times yield both faster learning and better generalization.
Overall, these results yield the insight that the shape of the entire distribution of a deep network?s
Jacobian singular values can have a dramatic effect on learning speed; only controlling the second
moment, to avoid exponentially vanishing and exploding gradients, can leave significant performance
advantages on the table. Moreover, by pursuing the design principle of tightly concentrating the entire
distribution around 1, we reveal that very deep feedfoward networks, with sigmoidal nonlinearities,
can actually outperform ReLU networks, the most popular type of nonlinear deep network used today.
In future work, it would be interesting to extend our methods to other types of networks, including for
example skip connections, or convolutional architectures. More generally, the performance advantage
in learning that accompanies dynamical isometry suggests it may be interesting to explicitly optimize
this property in reinforcement learning based searches over architectures [21].
Acknowledgments
S.G. thanks the Simons, McKnight, James S. McDonnell, and Burroughs Wellcome Foundations and
the Office of Naval Research for support.
References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144, 2016.
[3] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[4] Chris Piech, Jonathan Bassen, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J Guibas, and Jascha Sohl-Dickstein. Deep knowledge tracing. In Advances in Neural Information Processing Systems, pages 505–513, 2015.
[5] Daniel LK Yamins, Ha Hong, Charles F Cadieu, Ethan A Solomon, Darren Seibert, and James J DiCarlo. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
[6] Lane McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen Baccus. Deep learning models of the retinal response to natural scenes. In Advances in Neural Information Processing Systems, pages 1369–1377, 2016.
[7] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[8] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9, pages 249–256, 2010.
[9] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning, pages 1310–1318, 2013.
[10] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR 2014, 2013.
[11] Dmytro Mishkin and Jiri Matas. All you need is a good init. CoRR, abs/1511.06422, 2015.
[12] B. Poole, S. Lahiri, M. Raghu, J. Sohl-Dickstein, and S. Ganguli. Exponential expressivity in deep neural networks through transient chaos. Neural Information Processing Systems, 2016.
[13] S. S. Schoenholz, J. Gilmer, S. Ganguli, and J. Sohl-Dickstein. Deep information propagation. International Conference on Learning Representations (ICLR), 2017.
[14] Roland Speicher. Multiplicative functions on the lattice of non-crossing partitions and free convolution. Mathematische Annalen, 298(1):611–628, 1994.
[15] Dan V Voiculescu, Ken J Dykema, and Alexandru Nica. Free random variables. Number 1. American Mathematical Society, 1992.
[16] Terence Tao. Topics in random matrix theory, volume 132. American Mathematical Society, Providence, RI, 2012.
[17] Thorsten Neuschel. Plancherel–Rotach formulae for average characteristic polynomials of products of Ginibre random matrices and the Fuss–Catalan distribution. Random Matrices: Theory and Applications, 3(01):1450003, 2014.
[18] Joseph Louis Lagrange. Nouvelle méthode pour résoudre les problèmes indéterminés en nombres entiers. Chez Haude et Spener, Libraires de la Cour & de l'Académie royale, 1770.
[19] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning, lecture 6a: Overview of mini-batch gradient descent.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[21] Barret Zoph and Quoc V. Le. Neural architecture search with reinforcement learning. CoRR, abs/1611.01578, 2016.
6,705 | 7,065 | Sample and Computationally Efficient Learning Algorithms under S-Concave Distributions
Maria-Florina Balcan
Machine Learning Department
Carnegie Mellon University, USA
[email protected]
Hongyang Zhang
Machine Learning Department
Carnegie Mellon University, USA
[email protected]
Abstract
We provide new results for noise-tolerant and sample-efficient learning algorithms
under s-concave distributions. The new class of s-concave distributions is a broad
and natural generalization of log-concavity, and includes many important additional
distributions, e.g., the Pareto distribution and t-distribution. This class has been
studied in the context of efficient sampling, integration, and optimization, but
much remains unknown about the geometry of this class of distributions and their
applications in the context of learning. The challenge is that unlike the commonly
used distributions in learning (uniform or more generally log-concave distributions),
this broader class is not closed under the marginalization operator and many such
distributions are fat-tailed. In this work, we introduce new convex geometry
tools to study the properties of s-concave distributions and use these properties
to provide bounds on quantities of interest to learning including the probability
of disagreement between two halfspaces, disagreement outside a band, and the
disagreement coefficient. We use these results to significantly generalize prior
results for margin-based active learning, disagreement-based active learning, and
passive learning of intersections of halfspaces. Our analysis of geometric properties
of s-concave distributions might be of independent interest to optimization more
broadly.
1 Introduction
Developing provable learning algorithms is one of the central challenges in learning theory. The study
of such algorithms has led to significant advances in both the theory and practice of passive and active
learning. In the passive learning model, the learning algorithm has access to a set of labeled examples
sampled i.i.d. from some unknown distribution over the instance space and labeled according to
some underlying target function. In the active learning model, however, the algorithm can access
unlabeled examples and request labels of its own choice, and the goal is to learn the target function
with significantly fewer labels. In this work, we study both learning models in the case where the
underlying distribution belongs to the class of s-concave distributions.
Prior work on noise-tolerant and sample-efficient algorithms mostly relies on the assumption that
the distribution over the instance space is log-concave [1, 11, 7]. A distribution is log-concave
if the logarithm of its density is a concave function. The assumption of log-concavity has been
made for a few purposes: for computational efficiency reasons and for sample efficiency reasons.
For computational efficiency reasons, it was made to obtain a noise-tolerant algorithm even for
seemingly simple decision surfaces like linear separators. These simple algorithms exist for noiseless
scenarios, e.g., via linear programming [27], but they are notoriously hard once we have noise [14,
24, 18]; This is why progress on noise-tolerant algorithms has focused on uniform [21, 25] and
log-concave distributions [4]. Other concept spaces, like intersections of halfspaces, even have no
computationally efficient algorithm in the noise-free settings that works under general distributions,
but there has been nice progress under uniform and log-concave distributions [26]. For sample
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
efficiency reasons, in the context of active learning, we need distributional assumptions in order to
obtain label complexity improvements [15]. The most concrete and general class for which prior
work obtains such improvements is when the marginal distribution over instance space satisfies
log-concavity [30, 7]. In this work, we provide a broad generalization of all above results, showing
how they extend to s-concave distributions (s < 0). A distribution with density f (x) is s-concave
if f (x)s is a concave function. We identify key properties of these distributions that allow us to
simultaneously extend all above results.
How general and important is the class of s-concave distributions? The class of s-concave
distributions is very broad and contains many well-known (classes of) distributions as special cases.
For example, when s → 0, s-concave distributions reduce to log-concave distributions. Furthermore,
the s-concave class contains infinitely many fat-tailed distributions that do not belong to the class of
log-concave distributions, e.g., Cauchy, Pareto, and t distributions, which have been widely applied in
the context of theoretical physics and economics, but much remains unknown about how the provable
learning algorithms, such as active learning of halfspaces, perform under these realistic distributions.
We also compare s-concave distributions with nearly-log-concave distributions, a slightly broader
class of distributions than log-concavity. A distribution with density f (x) is nearly-log-concave if
for any ? ? [0, 1], x1 , x2 ? Rn , we have f (?x1 + (1 ? ?)x2 ) ? e?0.0154 f (x1 )? f (x2 )1?? [7]. The
class of s-concave distributions includes many important extra distributions which do not belong to
the nearly-log-concave distributions: a nearly-log-concave distribution must have sub-exponential
tails (see Theorem 11, [7]), while the tail probability of an s-concave distribution might decay much
slower (see Theorem 1 (6)). We also note that efficient sampling, integration and optimization
algorithms for s-concave distributions have been well understood [12, 22]. Our analysis of s-concave
distributions bridges these algorithms to the strong guarantees of noise-tolerant and sample-efficient
learning algorithms.
1.1 Our Contributions
Structural Results. We study various geometric properties of s-concave distributions. These properties serve as the structural results for many provable learning algorithms, e.g., margin-based active
learning [7], disagreement-based active learning [28, 20], learning intersections of halfspaces [26],
etc. When s → 0, our results exactly reduce to those for log-concave distributions [7, 2, 4]. Below,
we state our structural results informally:
Theorem 1 (Informal). Let D be an isotropic s-concave distribution in Rⁿ. Then there exist closed-form functions γ(s, m), f₁(s, n), f₂(s, n), f₃(s, n), f₄(s, n), and f₅(s, n) such that

1. (Weakly Closed under Marginal) The marginal of D over m arguments (or cumulative distribution function, CDF) is isotropic γ(s, m)-concave. (Theorems 3, 4)
2. (Lower Bound on Hyperplane Disagreement) For any two unit vectors u and v in Rⁿ, f₁(s, n)θ(u, v) ≤ Pr_{x∼D}[sign(u·x) ≠ sign(v·x)], where θ(u, v) is the angle between u and v. (Theorem 12)
3. (Probability of Band) There is a function d(s, n) such that for any unit vector w ∈ Rⁿ and any 0 < t ≤ d(s, n), we have f₂(s, n)t < Pr_{x∼D}[|w·x| ≤ t] ≤ f₃(s, n)t. (Theorem 11)
4. (Disagreement outside Margin) For any absolute constant c₁ > 0 and any function f(s, n), there exists a function f₄(s, n) > 0 such that Pr_{x∼D}[sign(u·x) ≠ sign(v·x) and |v·x| ≥ f₄(s, n)θ(u, v)] ≤ c₁f(s, n)θ(u, v). (Theorem 13)
5. (Variance in 1-D Direction) There is a function d(s, n) such that for any unit vectors u and a in Rⁿ such that ‖u − a‖ ≤ r and for any 0 < t ≤ d(s, n), we have E_{x∼D_{u,t}}[(a·x)²] ≤ f₅(s, n)(r² + t²), where D_{u,t} is the conditional distribution of D over the set {x : |u·x| ≤ t}. (Theorem 14)
6. (Tail Probability) We have Pr[‖x‖ > √n t] ≤ [1 − cst/(1 + ns)]^{(1+ns)/s}. (Theorem 5)

If s → 0 (i.e., the distribution is log-concave), then γ(s, m) → 0 and the functions f(s, n), f₁(s, n), f₂(s, n), f₃(s, n), f₄(s, n), f₅(s, n), and d(s, n) are all absolute constants.
To prove Theorem 1, we introduce multiple new techniques, e.g., extension of Prekopa-Leindler
theorem and reduction to baseline function (see the supplementary material for our techniques),
which might be of independent interest to optimization more broadly.
Margin Based Active Learning: We apply our structural results to margin-based active learning of
a halfspace w* under any isotropic s-concave distribution for both realizable and adversarial noise
Table 1: Comparisons with prior distributions for margin-based active learning, disagreement-based active learning, and Baum's algorithm.

              Margin (Efficient, Noise)        Disagreement                          Baum's
Prior Work    uniform [3], log-concave [4]     uniform [19], nearly-log-concave [7]  symmetric [8], log-concave [26]
Ours          s-concave                        s-concave                             s-concave
models. In the realizable case, the instance X is drawn from an isotropic s-concave distribution and the label Y = sign(w*·X). In the adversarial noise model, an adversary can corrupt any ν (≤ O(ε)) fraction of labels. For both cases, we show that there exists a computationally efficient algorithm that outputs a linear separator w_T such that Pr_{x∼D}[sign(w_T·x) ≠ sign(w*·x)] ≤ ε (see Theorems 15 and 16). The label complexity w.r.t. 1/ε improves exponentially over the passive learning scenario under s-concave distributions, though the underlying distribution might be fat-tailed. To the best of our knowledge, this is the first result concerning the computationally-efficient, noise-tolerant margin-based active learning under the broader class of s-concave distributions. Our work solves an open problem proposed by Awasthi et al. [4] about exploring wider classes of distributions for provable active learning algorithms.
Disagreement Based Active Learning: We apply our results to agnostic disagreement-based active
learning under s-concave distributions. The key to the analysis is estimating the disagreement
coefficient, a distribution-dependent measure of complexity that is used to analyze certain types of
active learning algorithms, e.g., the CAL [13] and A2 algorithm [5]. We work out the disagreement
coefficient under isotropic s-concave distribution (see Theorem 17). By composing it with the
existing work on active learning [16], we obtain a bound on label complexity under the class of
s-concave distributions. As far as we are aware, this is the first result concerning disagreementbased active learning that goes beyond log-concave distributions. Our bounds on the disagreement
coefficient match the best known results for the much less general case of log-concave distributions [7];
Furthermore, they apply to the s-concave case where we allow an arbitrary number of discontinuities,
a case not captured by [17].
Learning Intersections of Halfspaces: Baum?s algorithm is one of the most famous algorithms
for learning the intersections of halfspaces. The algorithm was first proposed by Baum [8] under
symmetric distribution, and later extended to log-concave distribution by Klivans et al. [26] as these
distributions are almost symmetric. In this paper, we show that approximate symmetry also holds for
the case of s-concave distributions. With this, we work out the label complexity of Baum?s algorithm
under the broader class of s-concave distributions (see Theorem 18), and advance the state-of-the-art
results (see Table 1).
We provide lower bounds to partially show the tightness of our analysis. Our results can be potentially
applied to other provable learning algorithms as well [23, 29, 9], which might be of independent
interest. We discuss our techniques and other related papers in the supplementary material.
2 Preliminary
Before proceeding, we define some notations and clarify our problem setup in this section.
Notations: We will use capital or lower-case letters to represent random variables, D to represent an s-concave distribution, and D_{u,t} to represent the conditional distribution of D over the set {x : |u·x| ≤ t}. We define the sign function as sign(x) = +1 if x ≥ 0 and −1 otherwise. We denote by B(α, β) = ∫₀¹ t^{α−1}(1 − t)^{β−1} dt the beta function, and Γ(α) = ∫₀^∞ t^{α−1}e^{−t} dt the gamma function. We will consider a single norm for the vectors in Rⁿ, namely, the 2-norm denoted by ‖x‖. We will frequently use μ (or μ_f, μ_D) to represent the measure of the probability distribution D with density function f. The notation ball(w*, t) represents the set {w ∈ Rⁿ : ‖w − w*‖ ≤ t}. For convenience, the symbol ⊕ slightly differs from the ordinary addition +: for f = 0 or g = 0, {f^s ⊕ g^s}^{1/s} = 0; otherwise, ⊕ and + are the same. For u, v ∈ Rⁿ, we define the angle between them as θ(u, v).
2.1 From Log-Concavity to S-Concavity
We begin with the definition of s-concavity. There are slight differences among the definitions of
s-concave density, s-concave distribution, and s-concave measure.
Definition 1 (S-Concave (Density) Function, Distribution, Measure). A function f : Rⁿ → R₊ is s-concave, for −∞ ≤ s ≤ 1, if f(λx + (1 − λ)y) ≥ (λf(x)^s + (1 − λ)f(y)^s)^{1/s} for all λ ∈ [0, 1] and all x, y ∈ Rⁿ.¹ A probability distribution D is s-concave if its density function is s-concave. A probability measure μ is s-concave if μ(λA + (1 − λ)B) ≥ [λμ(A)^s + (1 − λ)μ(B)^s]^{1/s} for any sets A, B ⊆ Rⁿ and λ ∈ [0, 1].
Special classes of s-concave functions include concavity (s = 1), harmonic-concavity (s = −1), quasi-concavity (s = −∞), etc. The conditions in Definition 1 are progressively weaker as s becomes smaller: s₁-concave densities (distributions, measures) are s₂-concave if s₁ ≥ s₂. Thus one can verify [12]: concave (s = 1) ⊊ log-concave (s = 0) ⊊ s-concave (s < 0) ⊊ quasi-concave (s = −∞).
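The defining inequality is straightforward to test numerically. The sketch below (our own) checks it on random pairs for an unnormalized Cauchy density, which is −1-concave because f^{−1} = 1 + x² is convex; normalization constants do not affect s-concavity.

```python
# Hedged numerical check of Definition 1 on random pairs for s < 0.
import numpy as np

def is_s_concave(f, s, lo, hi, trials=20000, seed=0):
    # Test f(lam x + (1-lam) y) >= (lam f(x)^s + (1-lam) f(y)^s)^(1/s)
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(lo, hi, (2, trials))
    lam = rng.random(trials)
    lhs = f(lam * x + (1 - lam) * y)
    rhs = (lam * f(x)**s + (1 - lam) * f(y)**s) ** (1.0 / s)
    return bool(np.all(lhs >= rhs - 1e-12))

cauchy = lambda x: 1.0 / (1.0 + x**2)    # unnormalized Cauchy density
print(is_s_concave(cauchy, s=-1.0, lo=-5, hi=5))   # True: 1/f = 1 + x^2 is convex
```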
3 Structural Results of S-Concave Distributions: A Toolkit

In this section, we develop geometric properties of s-concave distributions. The challenge is that unlike the commonly used distributions in learning (uniform or more generally log-concave distributions), this broader class is not closed under the marginalization operator and many such distributions are fat-tailed. To address this issue, we introduce several new techniques. We first introduce the extension of the Prekopa-Leindler inequality so as to reduce the high-dimensional problem to the one-dimensional case. We then reduce the resulting one-dimensional s-concave function to a well-defined baseline function, and explore the geometric properties of that baseline function. We summarize our high-level proof ideas briefly by the following figure.

n-D s-concave --(Extension of Prekopa-Leindler)--> 1-D γ-concave --(Baseline Function)--> 1-D f(x) = c(1 + γx)^{1/γ}

3.1 Marginal Distribution and Cumulative Distribution Function
We begin with the analysis of the marginal distribution, which forms the basis of other geometric
properties of s-concave distributions (s ? 0). Unlike the (nearly) log-concave distribution where the
marginal remains (nearly) log-concave, the class of s-concave distributions is not closed under the
marginalization operator. To study the marginal, our primary tool is the theory of convex geometry.
Specifically, we will use an extension of the Prékopa–Leindler inequality developed by Brascamp and
Lieb [10], which allows for a characterization of the integral of s-concave functions.
Theorem 2 ([10], Thm 3.3). Let 0 < λ < 1, and let H_s, G₁, and G₂ be non-negative integrable functions on R^m such that H_s(λx + (1 − λ)y) ≥ [λG₁(x)^s ⊕ (1 − λ)G₂(y)^s]^{1/s} for every x, y ∈ R^m. Then ∫_{R^m} H_s(x)dx ≥ [λ(∫_{R^m} G₁(x)dx)^γ + (1 − λ)(∫_{R^m} G₂(x)dx)^γ]^{1/γ} for s ≥ −1/m, with γ = s/(1 + ms).
Building on this, the following theorem plays a key role in our analysis of the marginal distribution.

Theorem 3 (Marginal). Let f(x, y) be an s-concave density on a convex set K ⊆ R^{n+m} with s ≥ −1/m. Denote by K|_{Rⁿ} = {x ∈ Rⁿ : ∃y ∈ R^m s.t. (x, y) ∈ K}. For every x in K|_{Rⁿ}, consider the section K(x) ≜ {y ∈ R^m : (x, y) ∈ K}. Then the marginal density g(x) ≜ ∫_{K(x)} f(x, y)dy is γ-concave on K|_{Rⁿ}, where γ = s/(1 + ms). Moreover, if f(x, y) is isotropic, then g(x) is isotropic.
Similar to the marginal, the CDF of an s-concave distribution might not remain in the same class. This is in sharp contrast to log-concave distributions. The following theorem studies the CDF of an s-concave distribution.

Theorem 4. The CDF of an s-concave distribution in Rⁿ is γ-concave, where γ = s/(1 + ns) and s ≥ −1/n.

Theorems 3 and 4 serve as the bridge that connects high-dimensional s-concave distributions to one-dimensional γ-concave distributions. With them, we are able to reduce the high-dimensional problem to the one-dimensional one.
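Theorem 3 can be checked numerically in the smallest case n = m = 1. The sketch below (our own) takes the s-concave density f(x, y) ∝ (1 + x² + y²)^{1/s} with s = −1/4, computes the marginal by quadrature, and verifies that g^γ is convex on a grid for γ = s/(1 + s) = −1/3, which is the γ-concavity the theorem guarantees.

```python
# Hedged numerical check of Theorem 3 with n = m = 1: for the s-concave
# density f(x, y) ~ (1 + x^2 + y^2)^(1/s), s = -1/4, the marginal should be
# gamma-concave with gamma = s/(1 + s) = -1/3, i.e. g^gamma should be convex.
import numpy as np
from scipy.integrate import quad

s = -0.25
gamma = s / (1 + s)
f = lambda x, y: (1 + x**2 + y**2) ** (1 / s)
g = lambda x: quad(lambda y: f(x, y), -np.inf, np.inf)[0]

xs = np.linspace(-3, 3, 61)
gg = np.array([g(x) for x in xs]) ** gamma
second_diff = gg[:-2] - 2 * gg[1:-1] + gg[2:]   # discrete second derivative
print("g^gamma convex on the grid:", bool(np.all(second_diff >= -1e-9)))
```

Here the marginal is in fact g(x) ∝ (1 + x²)^{−7/2}, which is even −2/7-concave, so the theorem's guarantee is met with room to spare in this example.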
3.2 Fat-Tailed Density
Tail probability is one of the most distinct characteristics of s-concave distributions compared to
(nearly) log-concave distributions. While it can be shown that the (nearly) log-concave distribution
¹ When s → 0, we note that lim_{s→0}(λf(x)^s + (1 − λ)f(y)^s)^{1/s} = exp(λ log f(x) + (1 − λ) log f(y)). In this case, f(x) is known to be log-concave.
has an exponentially small tail (Theorem 11, [7]), the tail of an s-concave distribution is fat, as proved
by the following theorem.
Theorem 5 (Tail Probability). Let x come from an isotropic distribution over Rⁿ with an s-concave density. Then for every t ≥ 16, we have Pr[‖x‖ > √n t] ≤ [1 − cst/(1 + ns)]^{(1+ns)/s}, where c is an absolute constant.
Theorem 5 is almost tight for s < 0. To see this, consider X drawn from a one-dimensional Pareto distribution with density f(x) = (−1 − 1/s)^{−1/s} x^{1/s} for x ≥ (s + 1)/(−s). It can be easily seen that Pr[X > t] = (−st/(s + 1))^{(s+1)/s} for t ≥ (s + 1)/(−s), which matches Theorem 5 up to an absolute constant factor.
3.3 Geometry of S-Concave Distributions
We now investigate the geometry of s-concave distributions. We first consider one-dimensional s-concave distributions: we provide bounds on the density of centroid-centered halfspaces (Lemma 6)
and range of the density function (Lemma 7). Building upon these, we develop geometric properties
of high-dimensional s-concave distributions by reducing the distributions to the one-dimensional
case based on marginalization (Theorem 3).
3.3.1 One-Dimensional Case
We begin with the analysis of one-dimensional halfspaces. To bound the probability, a normal
technique is to bound the centroid region and the tail region separately. However, the challenge is that
the s-concave distribution is fat-tailed (Theorem 5). So while the probability of a one-dimensional
halfspace is bounded below by an absolute constant for log-concave distributions, such a probability
for s-concave distributions decays as s (≤ 0) becomes smaller. The following lemma captures such
an intuition.
Lemma 6 (Density of Centroid-Centered Halfspaces). Let X be drawn from a one-dimensional distribution with s-concave density for −1/2 ≤ s ≤ 0. Then Pr(X ≥ E X) ≥ (1 + γ)^{−1/γ} for γ = s/(1 + s).
We also study the image of a one-dimensional s-concave density. The following condition for s > −1/3 is for the existence of the second-order moment.

Lemma 7. Let g : R → R₊ be an isotropic s-concave density function with s > −1/3. (a) For all x, g(x) ≤ (1 + s)/(1 + 3s); (b) We have g(0) ≥ 1/(3(1 + γ)^{3/γ}), where γ = s/(s + 1).
3.3.2 High-Dimensional Case

We now move on to the high-dimensional case (n ≥ 2). In the following, we will assume −1/(2n + 3) ≤ s ≤ 0. Though this working range of s vanishes as n becomes larger, it is almost the broadest range of s that we can hopefully achieve: Chandrasekaran et al. [12] showed a lower bound of s ≥ −1/(n − 1) if one requires the s-concave distribution to have good geometric properties. In addition, we can see from Theorem 3 that if s < −1/(n − 1), the marginal of an s-concave distribution might not even exist; such a case does happen for certain s-concave distributions with s < −1/(n − 1), e.g., the Cauchy distribution. So our range of s is almost tight up to a 1/2 factor.
We start our analysis with the density of centroid-centered halfspaces in high-dimensional spaces.

Lemma 8 (Density of Centroid-Centered Halfspaces). Let f : Rⁿ → R₊ be an s-concave density function, and let H be any halfspace containing its centroid. Then ∫_H f(x)dx ≥ (1 + γ)^{−1/γ} for γ = s/(1 + ns).

Proof. W.l.o.g., we assume H is orthogonal to the first axis. By Theorem 3, the first marginal of f is s/(1 + (n − 1)s)-concave. Then by Lemma 6, ∫_H f(x)dx ≥ (1 + γ)^{−1/γ}, where γ = s/(1 + ns).
The following theorem is an extension of Lemma 7 to high-dimensional spaces. The proofs basically
reduce the n-dimensional density to its first marginal by Theorem 3, and apply Lemma 7 to bound
the image.
Theorem 9 (Bounds on Density). Let f : Rⁿ → R₊ be an isotropic s-concave density. Then:

(a) Let d(s, n) = (1 + β)^{−1/β}(1 + 3γ)/(3 + 3γ), where γ = s/(1 + (n − 1)s) and β = γ/(1 + γ). For any u ∈ Rⁿ such that ‖u‖ ≤ d(s, n), we have f(u) ≥ [(‖u‖/d)((2 − 2^{−(n+1)s})^{−1} − 1) + 1]^{1/s} f(0).
(b) f(x) ≤ f(0)[((1 + γ)/(1 + 3γ))^s √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} − 1]^{1/s} for every x.
(c) There exists an x ∈ Rⁿ such that f(x) > (4eπ)^{−n/2}.
(d) (4eπ)^{−n/2}[((1 + γ)/(1 + 3γ))√(3(1 + γ)^{3/γ}) 2^{n−1+1/s} − 1]^{−1/s} < f(0) ≤ (2 − 2^{−(n+1)s})^{1/s} nΓ(n/2)/(2π^{n/2}dⁿ).
(e) f(x) ≤ (2 − 2^{−(n+1)s})^{1/s} (nΓ(n/2)/(2π^{n/2}dⁿ)) [((1 + γ)/(1 + 3γ))√(3(1 + γ)^{3/γ}) 2^{n−1+1/s} − 1]^{1/s} for every x.
(f) For any line ℓ through the origin, ∫_ℓ f ≤ (2 − 2^{−ns})^{1/s} (n − 1)Γ((n − 1)/2)/(2π^{(n−1)/2}d^{n−1}).
Theorem 9 provides uniform bounds on the density function. To obtain a more refined upper bound on the image of s-concave densities, we have the following lemma. The proof is built upon Theorem 9.

Lemma 10 (More Refined Upper Bound on Densities). Let f : Rⁿ → R₊ be an isotropic s-concave density. Then f(x) ≤ γ₁(n, s)(1 − sγ₂(n, s)‖x‖)^{1/s} for every x ∈ Rⁿ, where

γ₁(n, s) = (2 − 2^{−(n+1)s})^{1/s}(1 − s)^{−1/s} (nΓ(n/2)/(2π^{n/2}dⁿ)) [((1 + γ)/(1 + 3γ))^s √(3(1 + γ)^{3/γ}) 2^{n−1+1/s} − 1]^{1/s},

γ₂(n, s) = (2 − 2^{−ns})^{−1/s} ((n − 1)Γ((n − 1)/2)/(2π^{(n−1)/2}d^{n−1})) [(a(n, s) + (1 − s)γ₁(n, s)^s)^{1+1/s} − a(n, s)^{1+1/s}]^s / (γ₁(n, s)^s (1 + s)(1 − s)),

a(n, s) = (4eπ)^{−ns/2}[((1 + γ)/(1 + 3γ))√(3(1 + γ)^{3/γ}) 2^{n−1+1/s} − 1]^{−s}, β = γ/(1 + γ), γ = s/(1 + (n − 1)s), and d = (1 + β)^{−1/β}(1 + 3γ)/(3 + 3γ).
We also give an absolute bound on the measure of a band.

Theorem 11 (Probability inside Band). Let D be an isotropic s-concave distribution in Rⁿ. Denote by f₃(s, n) = 2(1 + ns)/(1 + (n + 2)s). Then for any unit vector w, Pr_{x∼D}[|w·x| ≤ t] ≤ f₃(s, n)t. Moreover, if t ≤ d(s, n) ≜ ((1 + β)^{−1/β}/(1 + 2γ)) (1 + 3γ)/(3 + 3γ), where γ = s/(1 + (n − 1)s), then Pr_{x∼D}[|w·x| ≤ t] > f₂(s, n)t, where f₂(s, n) = 2(2 − 2^{−2γ})^{−1/γ}(4eπ)^{−1/2}[2((1 + 3γ)/(1 + γ))√(3(3 + 3γ)/(1 + 2γ)) − 1]^{−1/β}.
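As a hedged empirical illustration of Theorem 11, the sketch below estimates Pr[|w·x| ≤ t] under a multivariate Student-t rescaled to identity covariance — a fat-tailed s-concave distribution, with the degrees of freedom chosen so that s = −2/(ν + n) falls in the working range −1/(2n + 3) ≤ s ≤ 0 — and checks that the band mass scales linearly in t.

```python
# Hedged Monte Carlo illustration of Theorem 11: under an isotropic
# multivariate Student-t (an s-concave stand-in), Pr[|w.x| <= t] is
# approximately linear in t for small t, so Pr/t is roughly constant.
import numpy as np

rng = np.random.default_rng(0)
n, dof, m = 10, 40, 500000          # dof chosen so s = -2/(dof+n) >= -1/(2n+3)
g = rng.normal(size=(m, n))
x = g / np.sqrt(rng.chisquare(dof, size=m) / dof)[:, None]   # multivariate t
x /= np.sqrt(dof / (dof - 2))       # rescale to identity covariance (isotropy)
w = np.zeros(n); w[0] = 1.0
for t in [0.01, 0.02, 0.05, 0.1]:
    frac = np.mean(np.abs(x @ w) <= t)
    print(f"t={t:<5} Pr[|w.x|<=t]={frac:.4f}  Pr/t={frac / t:.3f}")
```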
To analyze the problem of learning linear separators, we are interested in studying the disagreement between the hypothesis of the output and the hypothesis of the target. The following theorem captures such a characteristic under s-concave distributions.

Theorem 12 (Probability of Disagreement). Assume D is an isotropic s-concave distribution in Rⁿ. Then for any two unit vectors u and v in Rⁿ, we have d_D(u, v) = Pr_{x∼D}[sign(u·x) ≠ sign(v·x)] ≥ f₁(s, n)θ(u, v), where f₁(s, n) = c(2 − 2^{−3α})^{−1/α} ((1 + γ)/(1 + 3γ)) [1 + √(3(1 + γ)^{3/γ}) 2^{1+1/γ} − 1]^{−1} (1 + β)^{−2/β} (1 + 3γ)/(3 + 3γ), c is an absolute constant, α = s/(1 + (n − 2)s), γ = s/(1 + (n − 1)s), and β = s/(1 + ns).
Due to space constraints, all missing proofs are deferred to the supplementary material.
4 Applications: Provable Algorithms under S-Concave Distributions
In this section, we show that many algorithms that work under log-concave distributions behave well
under s-concave distributions by applying the above-mentioned geometric properties. For simplicity,
we will frequently use the notations in Theorem 1.
4.1 Margin Based Active Learning
We first investigate margin-based active learning under isotropic s-concave distributions in both
realizable and adversarial noise models. The algorithm (see Algorithm 1) follows a localization
technique: It proceeds in rounds, aiming to cut the error down by half in each round in the margin [6].
4.1.1 Relevant Properties of S-Concave Distributions
The analysis requires more refined geometric properties as below. Theorem 13 basically claims that
the error mostly concentrates in a band, and Theorem 14 guarantees that the variance in any 1-D
direction cannot be too large. We defer the detailed proofs to the supplementary material.
Algorithm 1 Margin Based Active Learning under S-Concave Distributions
Input: Parameters b_k, τ_k, r_k, m_k, κ, and T as in Theorem 16.
1: Draw m₁ examples from D, label them and put them into W.
2: For k = 1, 2, ..., T
3:   Find v_k ∈ ball(w_{k−1}, r_k) to approximately minimize the hinge loss over W s.t. ‖v_k‖ ≤ 1: ℓ_{τ_k}(v_k, W) ≤ min_{w ∈ ball(w_{k−1}, r_k) ∩ ball(0, 1)} ℓ_{τ_k}(w, W) + κ/8.
4:   Normalize v_k, yielding w_k = v_k/‖v_k‖; clear the working set W.
5:   While m_{k+1} additional data points are not labeled
6:     Draw sample x from D.
7:     If |w_k·x| ≥ b_k, reject x; else ask for the label of x and put it into W.
Output: Hypothesis w_T.
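A simplified sketch of the loop follows. It is our own stripped-down rendering, not a faithful implementation of Theorem 16's schedule: we drop the ball constraint ball(w_{k−1}, r_k), use scikit-learn's hinge-loss minimizer in place of step 3, simply halve the band width each round, and sample from a heavy-tailed product distribution as a stand-in for an s-concave D.

```python
# Simplified margin-based active learning sketch (our own, realizable case).
import numpy as np
from sklearn.svm import LinearSVC

def margin_based_al(sample, oracle, w0, rounds=6, m=300, b0=1.0):
    w = w0 / np.linalg.norm(w0)
    b = b0
    for _ in range(rounds):
        X, y = [], []
        while len(X) < m:                    # label only points in the band
            x = sample()
            if abs(w @ x) <= b:
                X.append(x); y.append(oracle(x))
        clf = LinearSVC(C=1.0, loss='hinge', max_iter=10000)
        clf.fit(np.array(X), np.array(y))    # hinge-loss minimization step
        v = clf.coef_.ravel()
        w = v / np.linalg.norm(v)
        b /= 2.0                             # halve the band each round
    return w

rng = np.random.default_rng(0)
n = 20
w_star = np.ones(n) / np.sqrt(n)
sample = lambda: rng.standard_t(df=60, size=n)   # heavy-tailed stand-in draw
oracle = lambda x: int(np.sign(w_star @ x))
w_hat = margin_based_al(sample, oracle, rng.normal(size=n) + 3 * w_star)
print("angle to w*:", np.arccos(np.clip(w_hat @ w_star, -1, 1)))
```

The angle to w* should shrink roughly geometrically with the rounds, mirroring the halving of the error region that the analysis formalizes.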
Theorem 13 (Disagreement outside Band). Let u and v be two vectors in Rⁿ and assume that θ(u, v) = α < π/2. Let D be an isotropic s-concave distribution. Then for any absolute constant c₁ > 0 and any function f₁(s, n) > 0, there exists a function f₄(s, n) > 0 such that Pr_{x∼D}[sign(u·x) ≠ sign(v·x) and |v·x| ≥ f₄(s, n)α] ≤ c₁f₁(s, n)α, where f₄(s, n) = 4γ₁(2, γ)B(−1/γ − 3, 3)/(πc₁f₁(s, n)γ³γ₂(2, γ)³), B(·, ·) is the beta function, γ = s/(1 + (n − 2)s), and γ₁(2, γ) and γ₂(2, γ) are given by Lemma 10.

Theorem 14 (1-D Variance). Assume that D is isotropic s-concave. For d given by Theorem 9(a), there is an absolute constant C₀ such that for all 0 < t ≤ d and for all a such that ‖u − a‖ ≤ r and ‖a‖ ≤ 1, E_{x∼D_{u,t}}[(a·x)²] ≤ f₅(s, n)(r² + t²), where f₅(s, n) = 16 + C₀ · 8γ₁(2, γ)B(−1/γ − 3, 2)/(f₂(s, n)γ₂(2, γ)³(γ + 1)γ²), γ₁(2, γ), γ₂(2, γ), and f₂(s, n) are given by Lemma 10 and Theorem 11, and γ = s/(1 + (n − 2)s).
4.1.2 Realizable Case

We show that margin-based active learning works under s-concave distributions in the realizable case.

Theorem 15. In the realizable case, let D be an isotropic s-concave distribution in Rⁿ. Then for 0 < ε < 1/4, δ > 0, and an absolute constant c, there is an algorithm (see the supplementary material) that runs in T = ⌈log_c(1/ε)⌉ iterations, requires

m_k = O( (f₃ min{2^{−k}f₄f₁^{−1}, d}/2^{−k}) ( n log(f₃ min{2^{−k}f₄f₁^{−1}, d}/2^{−k}) + log((1 + s)k/δ) ) )

labels in the k-th round, and outputs a linear separator of error at most ε with probability at least 1 − δ. In particular, when s → 0 (a.k.a. log-concave), we have m_k = O(n + log(k/δ)).

By Theorem 15, we see that the algorithm of margin-based active learning under s-concave distributions works almost as well as under log-concave distributions in the realizable case, improving exponentially w.r.t. the variable 1/ε over passive learning algorithms.
4.1.3
Efficient Learning with Adversarial Noise
e over Rn ? {+1, ?1} such
In the adversarial noise model, an adversary can choose any distribution P
n
that the marginal D over R is s-concave but an ? fraction of labels can be flipped adversarially. The
analysis builds upon an induction technique where in each round we do hinge loss minimization in
the band and cut down the 0/1 loss by half. The algorithm was previously analyzed in [3, 4] for the
special class of log-concave distributions. In this paper, we analyze it for the much more general
class of s-concave distributions.
Theorem 16. Let D be an isotropic s-concave distribution in Rⁿ over x, and let the label y obey the adversarial noise model. If the rate ν of adversarial noise satisfies ν < c0 ε for some absolute constant c0, then for 0 < ε < 1/4, δ > 0, and an absolute constant c, Algorithm 1 runs in T = ⌈log(c/ε)⌉ iterations and outputs a linear separator wT such that Pr_{x∼D}[sign(wT · x) ≠ sign(w* · x)] ≤ ε with probability at least 1 − δ. The label complexity in the k-th round is

mk = O( ([b_{k−1} s + τk(1 + ns)(1 − (κ/(√n(k + k²)))^{s/(1+ns)}) + τk s]² / (κ² τk² s²)) n (n + log((k + k²)/δ)) ),

where κ = max{ f3 τk / (f2 min{b_{k−1}, d}), b_{k−1}^{1/2} f5^{−1/2} }, τk = κ f1^{−2} f2 f3 f4² f5 2^{−(k−1)}, and bk = min{Θ(2^{−k} f4 f1^{−1}), d}. In particular, if s → 0, mk = O(n log(n/ε)(n + log(k/δ))).
By Theorem 16, the label complexity of margin-based active learning improves exponentially over that of passive learning w.r.t. 1/ε, even under fat-tailed s-concave distributions and the challenging adversarial noise model.
4.2 Disagreement Based Active Learning
We apply our results to the analysis of disagreement-based active learning under s-concave distributions. The key is estimating the disagreement coefficient, a measure of complexity of active learning problems that can be used to bound the label complexity [19]. Recall the definition of the disagreement coefficient w.r.t. classifier w*, precision ε, and distribution D as follows. For any r > 0, define ball_D(w, r) = {u ∈ H : d_D(u, w) ≤ r}, where d_D(u, w) = Pr_{x∼D}[(u · x)(w · x) < 0]. Define the disagreement region as DIS(H) = {x : ∃u, v ∈ H s.t. (u · x)(v · x) < 0}. Let the Alexander capacity be cap_{w*,D}(r) = Pr_D(DIS(ball_D(w*, r)))/r. The disagreement coefficient is defined as θ_{w*,D}(ε) = sup_{r≥ε}[cap_{w*,D}(r)]. Below, we state our results on the disagreement coefficient under isotropic s-concave distributions.
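Since the Alexander capacity is a probability over a disagreement region, it can be estimated by simple Monte Carlo whenever samples from D are available. The sketch below is a rough, hypothetical estimator for linear separators: the hypothesis ball is approximated by rejection-sampled perturbations of w*, which is a heuristic for illustration and not the construction used in the proofs.

import numpy as np

def capacity_estimate(w_star, sample_D, r, n_hyp=200, n_x=2000, seed=0):
    # Monte Carlo estimate of cap_{w*,D}(r) = Pr_D[DIS(ball_D(w*, r))] / r.
    rng = np.random.default_rng(seed)
    X = np.stack([sample_D() for _ in range(n_x)])  # draws x ~ D
    ball, tries = [], 0
    while len(ball) < n_hyp and tries < 100 * n_hyp:
        tries += 1
        u = w_star + r * rng.standard_normal(w_star.shape)
        u = u / np.linalg.norm(u)
        if np.mean((X @ u) * (X @ w_star) < 0) <= r:  # empirical d_D(u, w*) <= r
            ball.append(u)
    U = np.stack(ball)
    # x lies in DIS(ball) iff some hypothesis in the ball disagrees with w*
    # (w* itself is in the ball, so pairwise disagreement reduces to this check).
    dis = np.any((X @ U.T) * (X @ w_star)[:, None] < 0, axis=1)
    return dis.mean() / r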
Theorem 17 (Disagreement Coefficient). Let D be an isotropic s-concave distribution over Rⁿ. For any w* and r > 0, the disagreement coefficient is

θ_{w*,D}(ε) = O( (√n (1 + ns)²) / (s(1 + (n + 2)s) f1(s, n)) (1 − ε^{s/(1+ns)}) ).

In particular, when s → 0 (a.k.a. log-concave), θ_{w*,D}(ε) = O(√n log(1/ε)).
Our bounds on the disagreement coefficient match the best known results for the much less general case of log-concave distributions [7]; furthermore, they apply to the s-concave case, where we allow an arbitrary number of discontinuities, a case not captured by [17]. The result immediately implies concrete bounds on the label complexity of disagreement-based active learning algorithms, e.g., CAL [13] and A² [5]. For instance, by composing it with the result from [16], we obtain a bound of

Õ( n^{3/2} (1 + ns)² / (s(1 + (n + 2)s) f(s)) (1 − ε^{s/(1+ns)}) (log(1/ε) + OPT²/ε²) )

for agnostic active learning under an isotropic s-concave distribution D. Namely, it suffices to output a halfspace with error at most OPT + ε, where OPT = min_w err_D(w).
4.3 Learning Intersections of Halfspaces
Baum [8] provided a polynomial-time algorithm for learning intersections of halfspaces w.r.t. symmetric distributions. Later, Klivans [26] extended the result by showing that the algorithm works under any distribution D as long as µ_D(E) = µ_D(−E) for any set E. In this section, we show that it is possible to learn intersections of halfspaces under the broader class of s-concave distributions.
Theorem 18. In the PAC realizable case, there is an algorithm (see the supplementary material) that outputs a hypothesis h of error at most ε with probability at least 1 − δ under isotropic s-concave distributions. The label complexity is M(ε/2, δ/4, n²) + max{2m2/ε, (2/ε²) log(4/δ)}, where M(ε, δ, n) is defined by M(ε, δ, n) = O((n/ε)(log(1/ε) + log(1/δ))), m2 = M(max{κ/(4eKm1), ε/2}, δ/4, n),

K = γ1(3, γ) B(−1/γ − 3, 3)/(−γ γ2(3, γ))³ h(γ) d^{3+1/γ},   d = (1 + γ)^{−1/γ} (1 + 3γ)/(3 + 3γ),
h(γ) = [d^{−1}((2 − 2^{−4α})^{−1} − 1) + 1 − (4eπ)^{−1/2}] √(3(1 + γ)^{3/γ} 2^{2+1/γ}),

α = (1 + 2γ)/(1 + 3γ), β = γ/(1 + γ), and γ = s/(1 + (n − 3)s). In particular, if s → 0 (a.k.a. log-concave), K is an absolute constant.
5 Lower Bounds
In this section, we give information-theoretic lower bounds on the label complexity of passive and active learning of homogeneous halfspaces under s-concave distributions.
Theorem 19. For a fixed value −1/(2n+3) ≤ s ≤ 0 we have: (a) For any s-concave distribution D in Rⁿ whose covariance matrix is of full rank, the sample complexity of learning origin-centered linear separators under D in the passive learning scenario is Ω(nf1(s, n)/ε); (b) The label complexity of active learning of linear separators under s-concave distributions is Ω(n log(f1(s, n)/ε)).
If the covariance matrix of D is not of full rank, then the intrinsic dimension is less than n. So our lower bounds essentially apply to all s-concave distributions. According to Theorem 19, it is possible to have an exponential improvement of label complexity w.r.t. 1/ε over passive learning by active sampling, even though the underlying distribution is a fat-tailed s-concave distribution. This observation is captured by Theorems 15 and 16.
6 Conclusions
In this paper, we study the geometric properties of s-concave distributions. Our work advances the state-of-the-art results on margin-based active learning, disagreement-based active learning, and learning intersections of halfspaces w.r.t. the distributions over the instance space. When s → 0, our results reduce to the best-known results for log-concave distributions. The geometric properties of s-concave distributions can potentially be applied to other learning algorithms, which might be of independent interest more broadly.
Acknowledgements. This work was supported in part by grants NSF-CCF 1535967, NSF CCF-1422910, NSF CCF-1451177, a Sloan Fellowship, and a Microsoft Research Fellowship.
References
[1] D. Applegate and R. Kannan. Sampling and integration of near log-concave functions. In ACM Symposium on Theory of Computing, pages 156–163, 1991.
[2] P. Awasthi, M.-F. Balcan, N. Haghtalab, and H. Zhang. Learning and 1-bit compressed sensing under asymmetric noise. In Annual Conference on Learning Theory, pages 152–192, 2016.
[3] P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. In ACM Symposium on Theory of Computing, pages 449–458, 2014.
[4] P. Awasthi, M.-F. Balcan, and P. M. Long. The power of localization for efficiently learning linear separators with noise. Journal of the ACM, 63(6):50, 2017.
[5] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Journal of Computer and System Sciences, 75(1):78–89, 2009.
[6] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Annual Conference on Learning Theory, pages 35–50, 2007.
[7] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In Annual Conference on Learning Theory, pages 288–316, 2013.
[8] E. B. Baum. A polynomial time algorithm that learns two hidden unit nets. Neural Computation, 2(4):510–522, 1990.
[9] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In International Conference on Machine Learning, pages 49–56, 2009.
[10] H. J. Brascamp and E. H. Lieb. On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation. Journal of Functional Analysis, 22(4):366–389, 1976.
[11] C. Caramanis and S. Mannor. An inequality for nearly log-concave distributions with applications to learning. IEEE Transactions on Information Theory, 53(3):1043–1057, 2007.
[12] K. Chandrasekaran, A. Deshpande, and S. Vempala. Sampling s-concave functions: The limit of convexity based isoperimetry. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 420–433. 2009.
[13] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.
[14] A. Daniely. Complexity theoretic limitations on learning halfspaces. In ACM Symposium on Theory of Computing, pages 105–117, 2016.
[15] S. Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems, volume 17, pages 337–344, 2004.
[16] S. Dasgupta, D. J. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, pages 353–360, 2007.
[17] E. Friedman. Active learning for smooth problems. In Annual Conference on Learning Theory, 2009.
[18] V. Guruswami and P. Raghavendra. Hardness of learning halfspaces with noise. SIAM Journal on Computing, 39(2):742–765, 2009.
[19] S. Hanneke. A bound on the label complexity of agnostic active learning. In International Conference on Machine Learning, pages 353–360, 2007.
[20] S. Hanneke et al. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014.
[21] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. A. Servedio. Agnostically learning halfspaces. SIAM Journal on Computing, 37(6):1777–1805, 2008.
[22] A. T. Kalai and S. Vempala. Simulated annealing for convex optimization. Mathematics of Operations Research, 31(2):253–266, 2006.
[23] D. M. Kane, S. Lovett, S. Moran, and J. Zhang. Active classification with comparison queries. arXiv preprint arXiv:1704.03564, 2017.
[24] A. Klivans and P. Kothari. Embedding hard learning problems into Gaussian space. International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, 28:793–809, 2014.
[25] A. R. Klivans, P. M. Long, and R. A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research, 10:2715–2740, 2009.
[26] A. R. Klivans, P. M. Long, and A. K. Tang. Baum's algorithm learns intersections of halfspaces with respect to log-concave distributions. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 588–600. 2009.
[27] R. A. Servedio. Efficient algorithms in computational learning theory. PhD thesis, Harvard University, 2001.
[28] L. Wang. Smoothness, disagreement coefficient, and the label complexity of agnostic active learning. Journal of Machine Learning Research, 12(Jul):2269–2292, 2011.
[29] S. Yan and C. Zhang. Revisiting perceptron: Efficient and label-optimal active learning of halfspaces. arXiv preprint arXiv:1702.05581, 2017.
[30] C. Zhang and K. Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
6,706 | 7,066 | Scalable Variational Inference for Dynamical Systems
Nico S. Gorbach*
Dept. of Computer Science
ETH Zurich
[email protected]
Stefan Bauer*
Dept. of Computer Science
ETH Zurich
[email protected]
Joachim M. Buhmann
Dept. of Computer Science
ETH Zurich
[email protected]
Abstract
Gradient matching is a promising tool for learning parameters and state dynamics
of ordinary differential equations. It is a grid free inference approach, which,
for fully observable systems is at times competitive with numerical integration.
However, for many real-world applications, only sparse observations are available
or even unobserved variables are included in the model description. In these cases
most gradient matching methods are difficult to apply or simply do not provide
satisfactory results. That is why, despite the high computational cost, numerical
integration is still the gold standard in many applications. Using an existing gradient
matching approach, we propose a scalable variational inference framework which
can infer states and parameters simultaneously, offers computational speedups,
improved accuracy and works well even under model misspecifications in a partially
observable system.
1 Introduction
Parameter estimation for ordinary differential equations (ODEs) is challenging due to the high computational cost of numerical integration. In recent years, gradient matching techniques have established themselves as successful tools [e.g. Babtie et al., 2014] to circumvent the high computational cost of numerical integration for parameter and state estimation in ordinary differential equations. Gradient matching is based on minimizing the difference between the interpolated slopes and the time derivatives of the state variables in the ODEs. First steps go back to spline-based methods [Varah, 1982, Ramsay et al., 2007], where coefficients and parameters are estimated in an iterated two-step procedure. Often cubic B-splines are used as basis functions, while more advanced approaches [Niu
focus on the application for systems biology is provided in Macdonald and Husmeier [2015]. It is
unfortunately not straightforward to extend spline based approaches to include unobserved variables
since they usually require full observability of the system. Moreover, these methods critically
depend on the estimation of smoothing parameters, which are difficult to estimate when only sparse
observations are available. As a solution for both problems, Gaussian process (GP) regression was
proposed in Calderhead et al. [2008] and further improved in Dondelinger et al. [2013]. While both
Bayesian approaches work very well for fully observable systems, they (opposite to splines) cannot
simultaneously infer parameters and unobserved states and perform poorly when only combinations
of variables are observed or the differential equations contain unobserved variables. Unfortunately
this is the case for most practical applications [e.g. Barenco et al., 2006].
Related work. Archambeau et al. [2008] proposed variational inference to approximate the true
process of the dynamical system by a time-varying linear system. Their approach was later significantly extended [Ruttor et al., 2013, Ruttor and Opper, 2010, Vrettas et al., 2015]. However, similar to
[Lyons et al., 2012] they study parameter estimation in stochastic dynamical systems while our work
* The first two authors contributed equally to this work.
focuses on deterministic systems. In addition, they use the Euler-Maruyama discretization, whereas
our approach is grid free. Wang and Barber [2014] propose an approach based on a belief network
but as discussed in the controversy of mechanistic modelling [Macdonald et al., 2015], this leads to
an intrinsic identifiability problem.
Our contributions. Our proposal is a scalable variational inference based framework which can infer
states and parameters simultaneously, offers significant runtime improvements, improved accuracy
and works well even in the case of partially observable systems. Since it is based on simplistic
mean-field approximations it offers the opportunity for significant future improvements. We illustrate
the potential of our work by analyzing a system of up to 1000 states in less than 400 seconds on a
standard laptop².
2 Deterministic Dynamical Systems
A deterministic dynamical system is represented by a set of K ordinary differential equations (ODEs) with model parameters θ that describe the evolution of K states x(t) = [x1(t), x2(t), ..., xK(t)]ᵀ such that:

ẋ(t) = dx(t)/dt = f(x(t), θ).    (1)
A sequence of observations, y(t), is usually contaminated by some measurement error, which we assume to be normally distributed with zero mean and a variance for each of the K states, i.e. E ∼ N(0, D), with D_ik = σk² δik. Thus for N distinct time points the overall system may be summarized as:

Y = X + E,    (2)

where X = [x(t1), ..., x(tN)] = [x1, ..., xK]ᵀ, Y = [y(t1), ..., y(tN)] = [y1, ..., yK]ᵀ, and xk = [xk(t1), ..., xk(tN)]ᵀ is the k-th state sequence and yk = [yk(t1), ..., yk(tN)]ᵀ are the observations. Given the observations Y and the description of the dynamical system (1), the aim is to estimate both state variables X and parameters θ. While numerical integration can be used for both problems, its computational cost is prohibitive for large systems, which motivates the grid free method outlined in section 3.
3 GP based Gradient Matching
Gaussian process based gradient matching was originally motivated in Calderhead et al. [2008] and further developed in Dondelinger et al. [2013]. Assuming a Gaussian process prior on state variables such that:

p(X | φ) := ∏k N(0, C_{φk})    (3)

where C_{φk} is a covariance matrix defined by a given kernel with hyper-parameters φk, the k-th element of φ, we obtain a posterior distribution over state variables (from (2)):

p(X | Y, φ, σ) = ∏k N(µk(yk), Σk),    (4)

where µk(yk) := σk⁻²(σk⁻² I + C_{φk}⁻¹)⁻¹ yk and Σk⁻¹ := σk⁻² I + C_{φk}⁻¹.
Assuming that the covariance function C_{φk} is differentiable and using the closure property under differentiation of Gaussian processes, the conditional distribution over state derivatives is:

p(Ẋ | X, φ) = ∏k N(ẋk | mk, Ak),    (5)

² All experiments were run on a 2.5 GHz Intel Core i7 MacBook.
where the mean and covariance are given by:

mk := 'C_{φk} C_{φk}⁻¹ xk,   Ak := C''_{φk} − 'C_{φk} C_{φk}⁻¹ C'_{φk},    (6)

where C''_{φk} denotes the auto-covariance of each state derivative, and C'_{φk} and 'C_{φk} denote the cross-covariances between a state and its derivative.
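For a concrete covariance function, all quantities in (4)-(6) reduce to a few matrix operations. The numpy sketch below assumes an RBF kernel k(t, t') = exp(-(t - t')² / (2 ell²)), whose derivative kernels 'C and C'' have the closed forms written in the comments; the jitter term is a standard numerical stabilizer and not part of the model.

import numpy as np

def gp_moments(t, y, sigma, ell=1.0, jitter=1e-6):
    # Moments of eqs. (4)-(6) for a single state under an RBF kernel.
    dt = t[:, None] - t[None, :]
    C = np.exp(-dt**2 / (2 * ell**2))          # C_phi
    dC = -dt / ell**2 * C                      # 'C : d k(t_i, t_j) / d t_i
    ddC = (1.0 / ell**2 - dt**2 / ell**4) * C  # C'': d^2 k / (d t_i d t_j)
    C = C + jitter * np.eye(len(t))
    Cinv = np.linalg.inv(C)
    Sigma = np.linalg.inv(np.eye(len(t)) / sigma**2 + Cinv)  # eq. (4)
    mu = Sigma @ y / sigma**2
    m = dC @ Cinv @ mu                         # eq. (6), evaluated at the mean
    A = ddC - dC @ Cinv @ dC.T                 # eq. (6)
    return mu, Sigma, m, A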
Assuming additive, normally distributed noise with state-specific error variance γk in (1), we have:

p(Ẋ | X, θ, γ) = ∏k N(ẋk | fk(X, θ), γk I).    (7)
A product of experts approach combines the ODE-informed distribution of state derivatives (distribution (7)) with the smoothed distribution of state derivatives (distribution (5)):

p(Ẋ | X, θ, γ, φ) ∝ p(Ẋ | X, φ) p(Ẋ | X, θ, γ).    (8)

The motivation for the product of experts is that the multiplication implies that both the data fit and the ODE response have to be satisfied at the same time in order to achieve a high value of p(Ẋ | X, θ, γ, φ). This is contrary to a mixture model, i.e. a normalized addition, where a high value for one expert, e.g. overfitting the data while neglecting the ODE response or vice versa, is acceptable.
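Both factors in (8) are Gaussian in ẋk, so their product is, up to normalization, again Gaussian. The following display is the standard completion-of-squares identity behind the integration in (9); Σk* is local shorthand for the product covariance, distinct from Σk in (4):

\mathcal{N}(\dot{x}_k \mid m_k, A_k)\,
\mathcal{N}\big(\dot{x}_k \mid f_k(X,\theta),\, \gamma_k I\big)
\;\propto\;
\mathcal{N}\Big(\dot{x}_k \,\Big|\, \Sigma_k^{*}\big(A_k^{-1} m_k + \gamma_k^{-1} f_k(X,\theta)\big),\, \Sigma_k^{*}\Big),
\qquad
\Sigma_k^{*} := \big(A_k^{-1} + \gamma_k^{-1} I\big)^{-1}.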
The proposed methodology in Calderhead et al. [2008] is to analytically integrate out Ẋ:

p(θ | X, φ, γ) = Z_θ⁻¹(X) p(θ) ∫ p(Ẋ | X, φ) p(Ẋ | X, θ, γ) dẊ
              = Z_θ⁻¹(X) p(θ) ∏k N(fk(X, θ) | mk, Λk⁻¹),    (9)
with Λk⁻¹ := Ak + γk I and Z_θ(X) as the normalization, which depends on the states X. Calderhead et al. [2008] infer the parameters θ by first sampling the states (i.e. X ∼ p(X | Y, φ, σ)) followed by sampling the parameters given the states (i.e. θ, γ ∼ p(θ, γ | X, φ, σ)). In this setup, sampling X is independent of θ, which implies that θ and γ have no influence on the inference of the state variables. The desired feedback loop was closed by Dondelinger et al. [2013] through sampling from the joint posterior p(θ | X, φ, γ, σ, Y). Since sampling the states only provides their values at discrete time points, Calderhead et al. [2008] and Dondelinger et al. [2013] require the existence of an external ODE solver to obtain continuous trajectories of the state variables. For simplicity, we derived the approach assuming full observability. However, the approach has the advantage (as opposed to splines) that the assumption of full observability can be relaxed to include only observations of combinations of states by replacing (2) with Y = AX + E, where A encodes the linear relationship between observations and states. In addition, unobserved states can be naturally included in the inference by simply using the prior on state variables (3) [Calderhead et al., 2008].
4 Variational Inference for Gradient Matching by Exploiting Local Linearity in ODEs
For subsequent sections we consider only models of the form (1) with reactions based on mass-action kinetics, which are given by:

fk(x(t), θ) = ∑_i θki ∏_{j∈Mki} xj    (10)

with Mki ⊆ {1, ..., K} describing the state variables in each factor of the equation, i.e. the functions are linear in the parameters and contain arbitrarily large products of monomials of the states. The motivation for the restriction to this functional class is twofold. First, this formulation includes models which exhibit periodicity as well as high nonlinearity and especially physically realistic reactions in systems biology [Schillings et al., 2015].
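To make the functional class (10) concrete, the snippet below encodes each fk as a weighted sum of monomials given by index sets Mki, using the Lotka-Volterra system analyzed later in Section 5.1 as the example. The representation and variable names are illustrative, not taken from the paper.

import numpy as np

def f(x, theta, monomials):
    # monomials[k] is a list of index tuples M_ki; theta[k] the matching weights.
    return np.array([
        sum(th * np.prod(x[list(M)]) for th, M in zip(theta[k], monomials[k]))
        for k in range(len(monomials))
    ])

# Lotka-Volterra written in the form of (10):
monomials = [[(0,), (0, 1)],   # x1_dot: monomials x1 and x1*x2
             [(1,), (0, 1)]]   # x2_dot: monomials x2 and x1*x2
theta = [[2.0, -1.0],          # theta1*x1 - theta2*x1*x2
         [-4.0, 1.0]]          # -theta3*x2 + theta4*x1*x2
print(f(np.array([3.0, 5.0]), theta, monomials))  # -> [-9. -5.]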
Second, the true joint posterior over all unknowns is given by:

p(θ, X | Y, φ, γ, σ) = p(θ | X, φ, γ) p(X | Y, φ, σ)
                     = Z_θ⁻¹(X) p(θ) ∏k N(fk(X, θ) | mk, Λk⁻¹) N(xk | µk(Y), Σk),
where the normalization of the parameter posterior (9), Z_θ(X), depends on the states X. The dependence is nontrivial and induced by the nonlinear couplings of the states X, which make the inference (e.g. by integration) challenging in the first place. Previous approaches ignore the dependence of Z_θ(X) on the states X by setting Z_θ(X) equal to one [Dondelinger et al., 2013, equation 20]. We determine Z_θ(X) analytically by exploiting the local linearity of the ODEs, as shown in section 4.1 (and section 7 in the supplementary material). More precisely, for mass-action kinetics (10), we can rewrite the ODEs as a linear combination in an individual state or as a linear combination in the ODE parameters.³ We thus achieve superior performance over existing gradient matching approaches, as shown in the experimental section 5.
4.1 Mean-field Variational Inference
To infer the parameters θ, we want to find the maximum a posteriori estimate (MAP):

θ* := argmax_θ ln p(θ | Y, φ, γ, σ) = argmax_θ ln ∫ p(θ | X, φ, γ) p(X | Y, φ, σ) dX,    (11)

where the integrand is the joint posterior p(θ, X | Y, φ, γ, σ).
However, the integral in (11) is intractable in most cases due to the strong couplings induced by the nonlinear ODEs f, which appear in the term p(θ | X, φ, γ) (equation 9). We therefore use mean-field variational inference to establish variational lower bounds that are analytically tractable, by decoupling the state variables from the ODE parameters as well as decoupling the state variables from each other. Before explaining the mechanism behind mean-field variational inference, we first observe that, due to the model assumption (10), the true conditional distributions p(θ | X, Y, φ, γ, σ) and p(xu | θ, X₋ᵤ, Y, φ, γ, σ) are Gaussian distributed, where X₋ᵤ denotes all states excluding state xu (i.e. X₋ᵤ := {x ∈ X | x ≠ xu}). For didactical reasons, we write the true conditional distributions in canonical form:

p(θ | X, Y, φ, γ, σ) = h(θ) exp( ηθ(X, Y, φ, γ, σ)ᵀ t(θ) − aθ(ηθ(X, Y, φ, γ, σ)) )
p(xu | θ, X₋ᵤ, Y, φ, γ, σ) = h(xu) exp( ηu(θ, X₋ᵤ, Y, φ, γ, σ)ᵀ t(xu) − au(ηu(θ, X₋ᵤ, Y, φ, γ, σ)) )    (12)

where h(·) and a(·) are the base measure and log-normalizer, and η(·) and t(·) are the natural parameter and sufficient statistics.
The decoupling is induced by designing a variational distribution Q(θ, X) which is restricted to the family of factorial distributions:

Q := { Q : Q(θ, X) = q(θ | λ) ∏u q(xu | ψu) },    (13)

where λ and ψu are the variational parameters. The particular form of q(θ | λ) and q(xu | ψu) is designed to be in the same exponential family as the true conditional distributions in equation (12):

q(θ | λ) := h(θ) exp( λᵀ t(θ) − aθ(λ) )
q(xu | ψu) := h(xu) exp( ψuᵀ t(xu) − au(ψu) )

³ For mass-action kinetics as in (10), the ODEs are nonlinear in all states but linear in a single state as well as linear in all ODE parameters.
To find the optimal factorial distribution we minimize the Kullback-Leibler divergence between the variational and the true posterior distribution:

Q̂ := argmin_{Q(θ,X)∈Q} KL( Q(θ, X) ‖ p(θ, X | Y, φ, γ, σ) )
   = argmin_{Q(θ,X)∈Q} E_Q[log Q(θ, X)] − E_Q[log p(θ, X | Y, φ, γ, σ)]
   = argmax_{Q(θ,X)∈Q} L_Q(λ, ψ),    (14)
where Q̂ is the proxy distribution and L_Q(λ, ψ) is the ELBO (evidence lower bound), which depends on the variational parameters λ and ψ. Maximizing the ELBO w.r.t. λ is equivalent to maximizing the following lower bound:

Lθ(λ) := E_Q[log p(θ | X, Y, φ, γ, σ)] − E_Q[log q(θ | λ)]
       = E_Q[ηθ]ᵀ ∇λ aθ(λ) − λᵀ ∇λ aθ(λ),

where we substitute the true conditionals given in equation (12) and ∇λ is the gradient operator. Similarly, maximizing the ELBO w.r.t. latent state xu, we have:

Lx(ψu) := E_Q[log p(xu | θ, X₋ᵤ, Y, φ, γ, σ)] − E_Q[log q(xu | ψu)]
        = E_Q[ηu]ᵀ ∇ψu au(ψu) − ψuᵀ ∇ψu au(ψu).
Given the assumptions we made about the true posterior and the variational distribution (i.e. that each
true conditional is in an exponential family and that the corresponding variational distribution is in
the same exponential family) we can optimize each coordinate in closed form.
To maximize the ELBO we set the gradient w.r.t. the variational parameters to zero:

∇λ Lθ(λ) = ∇²λ aθ(λ) (E_Q[ηθ] − λ) = 0,

which is zero when:

λ̂ = E_Q[ηθ].    (15)

Similarly, the optimal variational parameters of the states are given by:

ψ̂u = E_Q[ηu].    (16)
Since the true conditionals are Gaussian distributed, the expectations over the natural parameters are given by:

E_Q[ηθ] = ( E_Q[Ωθ⁻¹ rθ], −½ E_Q[Ωθ⁻¹] ),   E_Q[ηu] = ( E_Q[Ωu⁻¹ ru], −½ E_Q[Ωu⁻¹] ),    (17)

where rθ and Ωθ are the mean and covariance of the true conditional distribution over ODE parameters. Similarly, ru and Ωu are the mean and covariance of the true conditional distribution over states. The variational parameters in equation (17) are derived analytically in the supplementary material (section 7). The coordinate ascent approach (where each step is analytically tractable) for estimating states and parameters is summarized in Algorithm 1.
Algorithm 1 Mean-field coordinate ascent for GP Gradient Matching
1: Initialization of the proxy moments ψu and λ.
2: repeat
3:   Given the proxy over ODE parameters q(θ | λ̂), calculate the proxy over the individual states q(xu | ψ̂u) ∀u, by computing its moments ψ̂u = E_Q[ηu].
4:   Given the proxy over the individual states q(xu | ψ̂u), calculate the proxy over ODE parameters q(θ | λ̂) by computing its moments λ̂ = E_Q[ηθ].
5: until convergence or the maximum number of iterations is exceeded.
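The following Python skeleton mirrors the alternating structure of Algorithm 1. The two moment updates implement the closed-form expectations of equation (17) and are model-specific, so they are passed in as assumed callables rather than implemented here; moments are assumed to be numpy arrays.

import numpy as np

def coordinate_ascent(eta, psi, update_state_moments, update_theta_moments,
                      max_iter=100, tol=1e-6):
    # eta: moment vector of q(theta); psi: list of per-state moment vectors.
    # Both update callables are assumed to compute the expectations in eq. (17).
    for _ in range(max_iter):
        psi = [update_state_moments(u, eta, psi) for u in range(len(psi))]  # step 3
        eta_new = update_theta_moments(psi)                                 # step 4
        if np.max(np.abs(eta_new - eta)) < tol:                             # step 5
            return eta_new, psi
        eta = eta_new
    return eta, psi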
Assuming that the maximal number of states in each equation of (10) is constant (which is, to the best of our knowledge, the case for any reasonable dynamical system), the computational complexity of the algorithm is linear in the number of states, O(N · K), per iteration. This result is experimentally supported by figure 5, where we analyze a system of up to 1000 states in less than 400 seconds.
5 Experiments
In order to provide a fair comparison to existing approaches, we test our approach on two small to
medium sized ODE models, which have been extensively studied in the same parameter settings
before [e.g. Calderhead et al., 2008, Dondelinger et al., 2013, Wang and Barber, 2014]. Additionally,
we show the scalability of our approach on a large-scale partially observable system which has so far
been infeasible to analyze with existing gradient matching methods due to the number of unobserved
states.
5.1 Lotka-Volterra
[Figure 1 spans three panels: predator-prey populations over time, boxplots of the estimated ODE parameter values, and runtimes in seconds for mean-field GM, AGM, and splines; see the caption below.]
Figure 1: Lotka-Volterra: Given few noisy observations (red stars), simulated with a variance of
σ² = 0.25, the leftmost plot shows the inferred state dynamics using our variational mean-field
method (mean-field GM, median runtime 4.7sec). Estimated mean and standard deviation for one
random data initialization using our approach are illustrated in the left-center plot. The implemented
spline method (splines, median runtime 48sec) was based on Niu et al. [2016] and the adaptive
gradient matching (AGM) is the approach proposed by Dondelinger et al. [2013]. Boxplots in the
leftmost, right-center and rightmost plot illustrate the variance in the state and parameter estimations
over 10 independent datasets.
The ODEs f(X, θ) of the Lotka-Volterra system [Lotka, 1978] are given by:

ẋ1 := θ1 x1 − θ2 x1 x2
ẋ2 := −θ3 x2 + θ4 x1 x2
The above system is used to study predator-prey interactions and exhibits periodicity and nonlinearity at the same time. We used the same ODE parameters as in Dondelinger et al. [2013] (i.e. θ1 = 2, θ2 = 1, θ3 = 4, θ4 = 1) to simulate the data over an interval [0, 2] with a sampling interval of 0.1. The predator species (i.e. x1) was initialized to 3 and the prey species (i.e. x2) was initialized to 5. Mean-field variational inference for gradient matching was performed on a simulated dataset with additive Gaussian noise with variance σ² = 0.25. The radial basis function kernel was used to capture the covariance between a state at different time points.
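The data-generation step just described can be reproduced in a few lines; the snippet below is one possible sketch using scipy, with the random seed chosen arbitrarily.

import numpy as np
from scipy.integrate import odeint

def lv(x, t, th1=2.0, th2=1.0, th3=4.0, th4=1.0):
    x1, x2 = x
    return [th1 * x1 - th2 * x1 * x2, -th3 * x2 + th4 * x1 * x2]

t = np.arange(0.0, 2.0 + 1e-9, 0.1)           # interval [0, 2], step 0.1
X = odeint(lv, [3.0, 5.0], t)                 # predator x1 = 3, prey x2 = 5
Y = X + np.random.default_rng(0).normal(scale=np.sqrt(0.25), size=X.shape)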
As shown in figure 1, our method performs significantly better than all other methods at a fraction of the computational cost. The poor performance in accuracy of Niu et al. [2016] can
be explained by the significantly lower number
of samples and higher noise level, compared
to the simpler setting of their experiments. In
order to show the potential of our work we decided to follow the more difficult and established
experimental settings used in [e.g. Calderhead
et al., 2008, Dondelinger et al., 2013, Wang and
Barber, 2014]. This illustrates the difficulty of
Figure 2: Lotka-Volterra: Given only observations (red stars) until time t = 2, the state trajectories are inferred including the unobserved time points up to time t = 4. The typical patterns of the Lotka-Volterra system for the predator and prey species are recovered. The shaded blue area shows the uncertainty around the inferred state trajectories.
spline-based gradient matching methods when only few observations are available. We estimated the smoothing parameter λ in the proposal of Niu et al. [2016] using leave-one-out cross-validation. While their method can in principle achieve the same runtime (e.g. using 10-fold CV) as our method, the performance for parameter estimation is significantly worse already when using leave-one-out cross-validation, where the median parameter estimate over ten independent data initializations is completely off for three out of four parameters (figure 1). Adaptive gradient matching (AGM) [Dondelinger et al., 2013] would eventually converge to the true parameter values, but at roughly 100 times the runtime it achieves significantly worse accuracy than our approach (figure 1). In figure 2 we additionally show that the mechanism of the Lotka-Volterra system is correctly inferred even when including unobserved time intervals.
5.2 Protein Signalling Transduction Pathway
In the following we only compare with the current state of the art in GP based gradient matching
[Dondelinger et al., 2013], since spline methods are in general difficult or inapplicable for partially observable systems. In addition, already in the case of a simpler system and more data points (e.g.
figure 1), splines were not competitive (in accuracy) with the approach of Dondelinger et al. [2013].
Figure 3: For the noise level of σ² = 0.1, the leftmost and left-center plots show the performance of Dondelinger et al. [2013] (AGM) for inferring the state trajectory of state S. The red curve in all plots is the ground truth, while the inferred trajectories of AGM are plotted in green (left and left-center plots) and ours in blue (right and right-center plots). While in the scenario of the leftmost and right-center plots observations are available (red stars) and both approaches work well, the approach of Dondelinger et al. [2013] (AGM) is significantly off in inferring the same state when it is unobserved but all other parameters remain the same (left-center plot), while our approach infers similar dynamics in both scenarios.
The chemical kinetics for the protein signalling transduction pathway are governed by a combination of mass-action kinetics and Michaelis-Menten kinetics:

Ṡ = −k1 · S − k2 · S · R + k3 · RS
ḋS = k1 · S
Ṙ = −k2 · S · R + k3 · RS + V · Rpp/(Km + Rpp)
ṘS = k2 · S · R − k3 · RS − k4 · RS
Ṙpp = k4 · RS − V · Rpp/(Km + Rpp)
For a detailed description of the system and its biological interpretation we refer to Vyshemirsky and Girolami [2008]. While the mass-action kinetics in the protein transduction pathway satisfy our constraints on the functional form of the ODEs (10), the Michaelis-Menten kinetics do not, since they give rise to the ratio of states Rpp/(Km + Rpp). We therefore define the following latent variables:

x1 := S, x2 := dS, x3 := R, x4 := RS, x5 := Rpp/(Km + Rpp)
θ1 := k1, θ2 := k2, θ3 := k3, θ4 := k4, θ5 := V
The transformation is motivated by the fact that in the new system, all states only appear as monomials, as required in (10). Our variable transformation includes an inherent error (e.g. by replacing Ṙpp = k4 · RS − V · Rpp/(Km + Rpp) with ẋ5 = θ4 · x4 − θ5 · x5), but despite such a misspecification, our method estimates four out of five parameters correctly (figure 4). Once more, we use the same ODE parameters as in Dondelinger et al. [2013], i.e. k1 = 0.07, k2 = 0.6, k3 = 0.05, k4 = 0.3, V = 0.017. The data was sampled over the interval [0, 100] with time points at t = [0, 1, 2, 4, 5, 7, 10, 15, 20, 30, 40, 50, 60, 80, 100]. Parameters were inferred in two experiments with additive Gaussian noise with variances σ² = 0.01 and σ² = 0.1.
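A sketch of the transformed right-hand side: after substituting x5 := Rpp/(Km + Rpp), every equation is a sum of monomials as required by (10), at the price of the deliberately misspecified x5 dynamics discussed above. The default parameter values are those stated in the text.

import numpy as np

def protein_rhs(x, t, k1=0.07, k2=0.6, k3=0.05, k4=0.3, V=0.017):
    S, dS, R, RS, x5 = x
    return np.array([
        -k1 * S - k2 * S * R + k3 * RS,   # dS/dt
        k1 * S,                           # d(dS)/dt
        -k2 * S * R + k3 * RS + V * x5,   # dR/dt
        k2 * S * R - k3 * RS - k4 * RS,   # d(RS)/dt
        k4 * RS - V * x5,                 # dx5/dt (deliberately misspecified)
    ])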
Even for a misspecified model containing a systematic error, the ranking according to parameter values is preserved, as indicated in figure 4. While the approach of Dondelinger et al. [2013] converges much slower (again a factor of 100 in runtime) to the true values of the parameters (for a fully observable system), it is significantly off if state S is unobserved, and it is more sensitive to the introduction of noise than our approach (figure 3). Our method infers similar dynamics for the fully and partially observable system, as shown in figure 3, and remains unchanged in its estimation accuracy after the introduction of unobserved variables (even with its inherent bias); it performs well even in comparison to numerical integration (figure 4). Plots for the additional state dynamics are shown in the supplementary material (section 6).
[Figure 4 spans three panels showing the RMSE of the ODE parameters k1-k4 for mean-field GM, AGM, and Bayesian numerical integration; see the caption below.]
Figure 4: From left to right, the plots represent three different inference settings of increasing difficulty using the protein transduction pathway as an example. The left plot shows the results for a fully observable system and a small noise level (σ² = 0.01). Due to the violation of the functional form assumption our approach has an inherent bias and Dondelinger et al. [2013] (AGM) performs better, while Bayesian numerical integration (Bayes num. int.) serves as a gold standard and performs best. The middle plot shows the same system with an increased noise level of σ² = 0.1. Due to many outliers we only show the median over ten independent runs and adjust the scale for the middle and right plots. In the right plot state S was unobserved while the noise level was kept at σ² = 0.1 (the estimate for k3 of AGM is at 18 and outside the limits of the plot). Initializing numerical integration with our result (Bayes num. int. mf.) achieves the best results and significantly lowers the estimation error (right plot).
5.3 Scalability
To show the true scalability of our approach we apply it to the Lorenz 96 system, which consists of equations of the form:

fk(x(t), θ) = (x_{k+1} − x_{k−2}) x_{k−1} − xk + θ,    (18)

where θ is a scalar forcing parameter, x₋₁ = x_{K−1}, x₀ = x_K and x_{K+1} = x₁ (with K being the number of states in the deterministic system (1)). The Lorenz 96 system can be seen as a minimalistic weather model [Lorenz and Emanuel, 1998] and is often used with an additional diffusion term as a reference model for stochastic systems [e.g. Vrettas et al., 2015]. It offers a flexible framework for increasing the number of states in the inference problem, and in our experiments we use between 125 and 1000 states. Due to its dimensionality, the Lorenz 96 system has so far not been analyzed using gradient matching methods; to additionally increase the difficulty of the inference problem, we randomly selected one third of the states to be unobserved. We simulated data setting θ = 8 with an observation noise of σ² = 1, using 32 equally spaced observations between zero and four seconds. Due to its scaling properties, our approach is able to infer a system with 1000 states within less than 400 seconds (right plot in figure 5). We can visually conclude that unobserved states are approximately correctly inferred and that the approximation error is independent of the dimensionality of the problem (right plot in figure 5).
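The cyclic structure of (18) makes the right-hand side a one-liner; the sketch below uses np.roll so that the boundary conventions x₋₁ = x_{K−1}, x₀ = x_K, x_{K+1} = x₁ hold automatically.

import numpy as np

def lorenz96(x, theta=8.0):
    # Entry k of np.roll(x, 1) is x_{k-1}; np.roll(x, -1) gives x_{k+1};
    # np.roll(x, 2) gives x_{k-2}, matching the cyclic indexing of (18).
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + theta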
[Figure 5 spans two panels: on the left, an unobserved state trajectory together with the RMSE reduction of the ODE parameter over iterations; on the right, the scaling of mean-field gradient matching for Lorenz 96, showing runtime in seconds and the average RMSE of unobserved states against the number of ODEs; see the caption below.]
Figure 5: The left plot shows the improved mechanistic modelling and the reduction of the root median squared error (RMSE) with each iteration of our algorithm. The ground truth for an unobserved state is plotted in red, while the thin gray lines correspond to the inferred state trajectories in each iteration of the algorithm (the first flat thin gray line being the initialization). The blue line is the inferred state trajectory of the unobserved state after convergence. The right plot shows the scaling of our algorithm with the dimensionality in the states. The red curve is the runtime in seconds, whereas the blue curve is the corresponding RMSE (right plot).
Due to space limitations, we show additional experiments for various dynamical systems in the fields of fluid dynamics, electrical engineering, systems biology and neuroscience only in the supplementary material in section 8.
6 Discussion
Numerical integration is a major bottleneck, due to its computational cost, for large scale estimation of parameters and states, e.g. in systems biology. However, it still serves as the gold standard for practical applications. Techniques based on gradient matching offer a computationally appealing and successful shortcut for parameter inference, but they are difficult to extend to include unobserved variables in the model description or are unable to keep the performance level they attain on fully observed systems. However, most real-world applications are only partially observed. Provided that state variables appear as monomials in the ODE, we offer a simple, yet powerful inference framework that is scalable, significantly outperforms existing approaches in runtime and accuracy, and performs well in the case of sparse observations even for partially observable systems. Many nonlinear and periodic ODEs, e.g. the Lotka-Volterra system, already fulfill our assumptions. The empirically demonstrated robustness of our model to misspecification, even in the case of additional partial observability, already indicates that a relaxation of the functional form assumption might be possible in future research.
Acknowledgements
This research was partially supported by the Max Planck ETH Center for Learning Systems and the
SystemsX.ch project SignalX.
References
Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, and John S. Shawe-Taylor. Variational inference for diffusion processes. Neural Information Processing Systems (NIPS), 2008.
Ann C. Babtie, Paul Kirk, and Michael P. H. Stumpf. Topological sensitivity analysis for systems biology. Proceedings of the National Academy of Sciences, 111(52):18507–18512, 2014.
Martino Barenco, Daniela Tomescu, Daniel Brewer, Robin Callard, Jaroslav Stark, and Michael Hubank. Ranked prediction of p53 targets using hidden variable dynamic modeling. Genome Biology, 7(3):R25, 2006.
Ben Calderhead, Mark Girolami, and Neil D. Lawrence. Accelerating Bayesian inference over nonlinear differential equations with Gaussian processes. Neural Information Processing Systems (NIPS), 2008.
Frank Dondelinger, Maurizio Filippone, Simon Rogers, and Dirk Husmeier. ODE parameter inference using adaptive gradient matching with Gaussian processes. International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
Edward N. Lorenz and Kerry A. Emanuel. Optimal sites for supplementary weather observations: Simulation with a small model. Journal of the Atmospheric Sciences, 55(3):399–414, 1998.
Alfred J. Lotka. The growth of mixed populations: two species competing for a common food supply. In The Golden Age of Theoretical Ecology: 1923–1940, pages 274–286. Springer, 1978.
Simon Lyons, Amos J. Storkey, and Simo Särkkä. The coloured noise expansion and parameter estimation of diffusion processes. Neural Information Processing Systems (NIPS), 2012.
Benn Macdonald and Dirk Husmeier. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis. Frontiers in Bioengineering and Biotechnology, 3, 2015.
Benn Macdonald, Catherine F. Higham, and Dirk Husmeier. Controversy in mechanistic modelling with Gaussian processes. International Conference on Machine Learning (ICML), 2015.
Mu Niu, Simon Rogers, Maurizio Filippone, and Dirk Husmeier. Fast inference in nonlinear dynamical systems using gradient matching. International Conference on Machine Learning (ICML), 2016.
Jim O. Ramsay, Giles Hooker, David Campbell, and Jiguo Cao. Parameter estimation for differential equations: a generalized smoothing approach. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(5):741–796, 2007.
Andreas Ruttor and Manfred Opper. Approximate parameter inference in a stochastic reaction-diffusion model. AISTATS, 2010.
Andreas Ruttor, Philipp Batz, and Manfred Opper. Approximate Gaussian process inference for the drift function in stochastic differential equations. Neural Information Processing Systems (NIPS), 2013.
Claudia Schillings, Mikael Sunnåker, Jörg Stelling, and Christoph Schwab. Efficient characterization of parametric uncertainty of complex (bio)chemical networks. PLoS Computational Biology, 11(8):e1004457, 2015.
Klaas Enno Stephan, Lars Kasper, Lee M. Harrison, Jean Daunizeau, Hanneke E. M. den Ouden, Michael Breakspear, and Karl J. Friston. Nonlinear dynamic causal models for fMRI. NeuroImage, 42(2):649–662, 2008. doi: 10.1016/j.neuroimage.2008.04.262. URL http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2636907/.
James M. Varah. A spline least squares method for numerical parameter estimation in differential equations. SIAM Journal on Scientific and Statistical Computing, 3(1):28–46, 1982.
Michail D. Vrettas, Manfred Opper, and Dan Cornford. Variational mean-field algorithm for efficient inference in large systems of stochastic differential equations. Physical Review E, 91(1):012148, 2015.
Vladislav Vyshemirsky and Mark A. Girolami. Bayesian ranking of biochemical system models. Bioinformatics, 24(6):833–839, 2008.
Yali Wang and David Barber. Gaussian processes for Bayesian estimation in ordinary differential equations. International Conference on Machine Learning (ICML), 2014.
6,707 | 7,067 | Context Selection for Embedding Models
Li-Ping Liu*
Tufts University
Francisco J. R. Ruiz
Columbia University
University of Cambridge
Susan Athey
Stanford University
David M. Blei
Columbia University
Abstract
Word embeddings are an effective tool to analyze language. They have been
recently extended to model other types of data beyond text, such as items in
recommendation systems. Embedding models consider the probability of a target
observation (a word or an item) conditioned on the elements in the context (other
words or items). In this paper, we show that conditioning on all the elements in the
context is not optimal. Instead, we model the probability of the target conditioned
on a learned subset of the elements in the context. We use amortized variational
inference to automatically choose this subset. Compared to standard embedding
models, this method improves predictions and the quality of the embeddings.
1 Introduction
Word embeddings are a powerful model to capture latent semantic structure of language. They
can capture the co-occurrence patterns of words (Bengio et al., 2006; Mikolov et al., 2013a,b,c;
Pennington et al., 2014; Mnih and Kavukcuoglu, 2013; Levy and Goldberg, 2014; Vilnis and
McCallum, 2015; Arora et al., 2016), which allows for reasoning about word usage and meaning
(Harris, 1954; Firth, 1957; Rumelhart et al., 1986). The ideas of word embeddings have been extended
to other types of high-dimensional data beyond text, such as items in a supermarket or movies in
a recommendation system (Liang et al., 2016; Barkan and Koenigstein, 2016), with the goal of
capturing the co-occurrence patterns of objects. Here, we focus on exponential family embeddings
(EFE) (Rudolph et al., 2016), a method that encompasses many existing methods for embeddings and
opens the door to bringing expressive probabilistic modeling (Bishop, 2006; Murphy, 2012) to the
problem of learning distributed representations.
In embedding models, the object of interest is the conditional probability of a target given its context.
For instance, in text, the target corresponds to a word in a given position and the context are the words
in a window around it. For an embedding model of items in a supermarket, the target corresponds to
an item in a basket and the context are the other items purchased in the same shopping trip.
In this paper, we show that conditioning on all elements of the context is not optimal. Intuitively,
this is because not all objects (words or items) necessarily interact with each other, though they may
appear together as target/context pairs. For instance, in shopping data, the probability of purchasing
chocolates should be independent of whether bathroom tissue is in the context, even if the latter is
actually purchased in the same shopping trip.
With this in mind, we build a generalization of the EFE model (Rudolph et al., 2016) that relaxes
the assumption that the target depends on all elements in the context. Rather, our model considers
that the target depends only on a subset of the elements in the context. We refer to our approach as
∗ Li-Ping Liu's contribution was made when he was a postdoctoral researcher at Columbia University.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
context selection for exponential family embeddings (CS-EFE). Specifically, we introduce a binary
hidden vector to indicate which elements the target depends on. By inferring the indicator vector, the
embedding model is able to use more related context elements to fit the conditional distribution, and
the resulting learned vectors capture more about the underlying item relations.
The introduction of the indicator comes at the price of solving this inference problem. Most embedding
tasks have a large number of target/context pairs and require a fast solution to the inference problem.
To avoid solving the inference problem separately for all target/context pairs, we use amortized
variational inference (Dayan et al., 1995; Gershman and Goodman, 2014; Korattikara et al., 2015;
Kingma and Welling, 2014; Rezende et al., 2014; Mnih and Gregor, 2014). We design a shared neural
network structure to perform inference for all pairs. One difficulty here is that the varied sizes of the
contexts require varied input and output sizes for the shared structure. We overcome this problem
with a binning technique, which we detail in Section 2.3.
Our contributions are as follows. First, we develop a model that allows conditioning on a subset of the
elements in the context in an EFE model. Second, we develop an efficient inference algorithm for the
CS - EFE model, based on amortized variational inference, which can automatically infer the subset of
elements in the context that are most relevant to predict the target. Third, we run a comprehensive
experimental study on three datasets, namely, MovieLens for movie recommendations, eBird-PA for
bird watching events, and grocery data for shopping behavior. We found that CS-EFE consistently
outperforms EFE in terms of held-out predictive performance on the three datasets. For MovieLens,
we also show that the embedding representations of the CS - EFE model have higher quality.
2 The Model
Our context selection procedure builds on models based on embeddings. We adopt the formalism
of exponential family embeddings (EFE) (Rudolph et al., 2016), which extend the ideas of word
embeddings to other types of data such as count or continuous-valued data. We briefly review the
EFE model in Section 2.1. We then describe our model in Section 2.2, and we put forward an efficient
inference procedure in Section 2.3.
2.1 Exponential Family Embeddings
In exponential family embeddings (EFE), we have a collection of J objects, such as words (in text
applications) or movies (in a recommendation problem). Our goal is to learn a vector representation
of these objects based on their co-occurrence patterns.
Let us consider a dataset represented as a (typically sparse) N × J matrix X, where rows are
datapoints and columns are objects. For example, in text applications each row corresponds to a
location in the text, and it is a one-hot vector that represents the word appearing in that location. In
movie data, each entry xnj indicates the rating of movie j for user n.
The EFE model learns the vector representation of objects based on the conditional probability of each
observation, conditioned on the observations in its context. The context cnj = [(n1 , j1 ), (n2 , j2 ), . . .]
gives the indices of the observations that appear in the conditional probability distribution of xnj .
The definition of the context varies across applications. In text, it corresponds to the set of words in a
fixed-size window centered at location n. In movie recommendation, cnj corresponds to the set of
movies rated by user n, excluding j.
In EFE, we represent each object j with two vectors: an embedding vector ρ_j and a context vector α_j. These two vectors interact in the conditional probability distribution of each observation x_nj as follows. Given the context c_nj and the corresponding observations x_{c_nj} indexed by c_nj, the
distribution for xnj is in the exponential family,
    p(x_nj | x_{c_nj}; α, ρ) = ExpFam( t(x_nj), η_j(x_{c_nj}; α, ρ) ),   (1)

where t(x_nj) is the sufficient statistic of the exponential family distribution, and η_j(x_{c_nj}; α, ρ) is its natural parameter. The natural parameter is set to

    η_j(x_{c_nj}; α, ρ) = g( ρ_j^(0) + (1/|c_nj|) ρ_j^⊤ Σ_{k=1}^{|c_nj|} x_{n_k j_k} α_{j_k} ),   (2)
where |c_nj| is the number of elements in the context, and g(·) is the link function (which depends on the application and plays the same role as in generalized linear models). We consider a slightly different form for η_j(x_{c_nj}; α, ρ) than in the original EFE paper by including the intercept terms ρ_j^(0). We also average the elements in the context. These choices generally improve the model performance.
The vectors ρ_j and α_j (and the intercepts) are found by maximizing the pseudo-likelihood, i.e., the product of the conditional probabilities in Eq. 1 for each observation x_nj.
2.2 Context Selection for Exponential Family Embeddings
The base EFE model assumes that all objects in the context cnj play a role in the distribution of xnj
through Eq. 2. This is often an unrealistic assumption. The probability of purchasing chocolates
should not depend on the context vector of bathroom tissue, even when the latter is actually in the
context. Put formally, there are domains where the elements in the context interact selectively in the probability of x_nj.
We now develop our context selection for exponential family embeddings (CS-EFE) model, which selects a subset of the elements in the context for the embedding model, so that the natural parameter only depends on objects that are truly related to the target object. For each pair (n, j), we introduce a hidden binary vector b_nj ∈ {0, 1}^{|c_nj|} that indicates which elements in the context c_nj should be considered in the distribution for x_nj. Thus, we set the natural parameter as

    η_j(x_{c_nj}, b_nj; α, ρ) = g( ρ_j^(0) + (1/B_nj) ρ_j^⊤ Σ_{k=1}^{|c_nj|} b_njk x_{n_k j_k} α_{j_k} ),   (3)

where B_nj = Σ_k b_njk is the number of non-zero elements of b_nj.
The prior distribution. We assign a prior to b_nj, such that B_nj ≥ 1 and

    p(b_nj; π_nj) ∝ Π_k (π_njk)^{b_njk} (1 − π_njk)^{1 − b_njk}.   (4)
The constraint B_nj ≥ 1 states that at least one element in the context needs to be selected. For values of b_nj satisfying the constraint, their probabilities are proportional to those of independent Bernoulli variables, with hyperparameters π_njk. If π_njk is small for all k (near 0), then the distribution approaches a categorical distribution. If a few π_njk values are large (near 1), then the constraint B_nj ≥ 1 becomes less relevant and the distribution approaches a product of Bernoulli distributions.
The scale of the probabilities π_nj has an impact on the number of elements to be selected as the context. We let
    π_njk ≡ π_nj = β min(1, γ/|c_nj|),   (5)

where β ∈ (0, 1) is a global parameter to be learned, and γ is a hyperparameter. The value of γ controls the average number of elements to be selected. If γ tends to infinity and we hold β fixed to 1, then we recover the basic EFE model.
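For concreteness, here is a small Python sketch of the log-probability under Eqs. 4-5; the closed-form normalizer (one minus the probability of the all-zeros vector) follows from the constraint B_nj ≥ 1. Function and variable names are our illustrative assumptions.

```python
import numpy as np

def log_prior(b, beta, gamma):
    """log p(b; pi) for the truncated Bernoulli prior of Eqs. 4-5."""
    K = len(b)
    pi = beta * min(1.0, gamma / K)                 # Eq. 5: shared pi_nj
    assert b.sum() >= 1, "the prior puts zero mass on the all-zeros vector"
    log_unnorm = np.sum(b * np.log(pi) + (1 - b) * np.log1p(-pi))
    log_norm = np.log1p(-(1.0 - pi) ** K)           # log(1 - P(b = 0))
    return log_unnorm - log_norm
```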
The objective function. We form the objective function L as the (regularized) pseudo log-likelihood.
After marginalizing out the variables bnj , it is
    L = L_reg + Σ_{n,j} log Σ_{b_nj} p(x_nj | x_{c_nj}, b_nj; α, ρ) p(b_nj; π_nj),   (6)

where L_reg is the regularization term. Following Rudolph et al. (2016), we use ℓ2-regularization over the embedding and context vectors.
It is computationally difficult to marginalize out the context selection variables bnj , particularly
when the cardinality of the context cnj is large. We address this issue in the next section.
2.3 Inference
We now show how to maximize the objective function in Eq. 6. We propose an algorithm based on amortized variational inference, which shares a global inference network across all local variables b_nj. Here, we describe the inference method in detail.
Variational inference. In variational inference, we introduce a variational distribution q(b_nj; φ_nj), parameterized by φ_nj ∈ R^{|c_nj|}, and we maximize a lower bound L̃ ≤ L of the objective in Eq. 6,

    L̃ = L_reg + Σ_{n,j} E_{q(b_nj; φ_nj)}[ log p(x_nj | x_{c_nj}, b_nj; α, ρ) + log p(b_nj; π_nj) − log q(b_nj; φ_nj) ].   (7)
Maximizing this bound with respect to the variational parameters φ_nj corresponds to minimizing the Kullback-Leibler divergence from the posterior of b_nj to the variational distribution q(b_nj; φ_nj)
(Jordan et al., 1999; Wainwright and Jordan, 2008). Variational inference was also used for EFE by
Bamler and Mandt (2017).
The properties of this maximization problem make this approach hard in our case. First, there is no closed-form solution, even if we use a mean-field variational distribution. Second, the large size of the dataset requires fast online training of the model. Generally, we cannot fit each q(b_nj; φ_nj) individually by solving a set of optimization problems, nor even store φ_nj for later use.
To address the former problem, we use black-box variational inference (Ranganath et al., 2014),
which approximates the expectations via Monte Carlo to obtain noisy gradients of the variational
lower bound. To tackle the latter, we use amortized inference (Gershman and Goodman, 2014; Dayan
et al., 1995), which has the advantage that we do not need to store or optimize local variables.
Amortization. Amortized inference avoids the optimization of the parameter φ_nj for each local variational distribution q(b_nj; φ_nj); instead, it fits a shared structure to calculate each local parameter φ_nj. Specifically, we consider a function f(·) that inputs the target observation x_nj, the context elements x_{c_nj} and indices c_nj, and the model parameters, and outputs a variational distribution for b_nj. Let a_nj = [x_nj, c_nj, x_{c_nj}, α, ρ, π_nj] be the set of inputs of f(·), and let φ_nj ∈ R^{|c_nj|} be its output, such that φ_nj = f(a_nj) is a vector containing the logits of the variational distribution,

    q(b_njk = 1; φ_njk) = sigmoid(φ_njk),   with φ_njk = [f(a_nj)]_k.   (8)
Similarly to previous work (Korattikara et al., 2015; Kingma and Welling, 2014; Rezende et al., 2014;
Mnih and Gregor, 2014), we let f(·) be a neural network, parameterized by W. The key in amortized inference is to design the network and learn its parameters W.
Network design. Typical neural networks transform fixed-length inputs into fixed-length outputs.
However, in our case, we face variable-size inputs and outputs. First, the output of the function f(·) for q(b_nj; φ_nj) has length equal to the context size |c_nj|, which varies across target/context pairs. Second, the length of the local variables a_nj also varies, because the length of x_{c_nj} depends on the number of elements in the context. We propose a network design that addresses these challenges.
To overcome the difficulty of the varying output sizes, we split the computation of each component φ_njk of φ_nj into |c_nj| separate tasks. Each task computes the logit φ_njk using a shared function f(·), φ_njk = f(a_njk). The input a_njk contains information about a_nj and depends on the index k.
We now need to specify how we form the input a_njk. A naïve approach would be to represent the
indices of the context items and their corresponding counts as a sparse vector, but this would require
a network with a very large input size. Moreover, most of the weights of this large network would not
be used (nor trained) in the computation of ?njk , since only a small subset of them would be assigned
a non-zero input.
Instead, in this work we use a two-step process to build an input vector a_njk that has fixed length regardless of the context size |c_nj|. In Step 1, we transform the original input a_nj = [x_nj, c_nj, x_{c_nj}, α, ρ, π_nj] into a vector of reduced dimensionality that preserves the relevant information (we define "relevant" below). In Step 2, we transform the vector of reduced dimensionality into a fixed-length vector.
For Step 1, we first need to determine which information is relevant. For that, we inspect the posterior
for b_nj,

    p(b_nj | x_nj, x_{c_nj}; α, ρ, π_nj) ∝ p(x_nj | x_{c_nj}, b_nj; α, ρ) p(b_nj; π_nj) = p(x_nj | s_nj, b_nj) p(b_nj; π_nj).   (9)
We note that the dependence on x_{c_nj}, α, and ρ comes through the scores s_nj, a vector of length |c_nj| that contains, for each element, the inner product of the corresponding embedding and context vector, scaled by the context observation,

    s_njk = x_{n_k j_k} ρ_j^⊤ α_{j_k}.   (10)

Figure 1: Representation of the amortized inference network that outputs the variational parameter for the context selection variable b_njk. The input has fixed size regardless of the context size, and it is formed by the score s_njk (Eq. 10), the prior parameter π_nj, the target observation x_nj, and a histogram (L bins) of the scores s_njk' (for k' ≠ k).
Therefore, the scores s_nj are sufficient: f(·) does not need the raw embedding vectors as input, but rather the scores s_nj ∈ R^{|c_nj|}. We have thus reduced the dimensionality of the input.
For Step 2, we need to transform the scores s_nj ∈ R^{|c_nj|} into a fixed-length vector that the neural network f(·) can take as input. We represent this vector and the full neural network structure in Figure 1. The transformation is carried out differently for each value of k. For the network that
outputs the variational parameter φ_njk, we let the k-th score s_njk be directly one of the inputs. The reason is that the k-th score s_njk is more related to φ_njk, because the network that outputs φ_njk ultimately indicates the probability that b_njk takes value 1, i.e., φ_njk indicates whether to include the k-th element as part of the context in the computation of the natural parameter in Eq. 3. All other scores (s_njk' for k' ≠ k) have the same relation to φ_njk, and their permutations give the same posterior. We bin these scores (s_njk', for k' ≠ k) into L bins, therefore obtaining a fixed-length vector. Instead of using bins with hard boundaries, we use Gaussian-shaped kernels. We denote by
μ_ℓ and σ_ℓ the mean and width of each Gaussian kernel, and we denote by h_nj^(k) ∈ R^L the binned variables, such that

    h_njℓ^(k) = Σ_{k' = 1, k' ≠ k}^{|c_nj|} exp( −(s_njk' − μ_ℓ)² / σ_ℓ² ).   (11)
Finally, for φ_njk = f(a_njk) we form a neural network that takes as input the score s_njk, the binned variables h_nj^(k), which summarize the information of the scores (s_njk' : k' ≠ k), as well as the target observation x_nj and the prior probability π_nj. That is, a_njk = [s_njk, h_nj^(k), x_nj, π_nj].
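A NumPy sketch of how the fixed-length input a_njk can be assembled from the scores of Eqs. 10-11; the bin centers, kernel widths, and all names here are illustrative assumptions.

```python
import numpy as np

def inference_input(k, scores, x_target, pi, centers, widths):
    """Build a_njk = [s_njk, h_nj^(k), x_nj, pi_nj] for context element k.

    scores  -- the vector s_nj of Eq. 10, one score per context element
    centers -- the L bin centers mu_l; widths -- the kernel widths sigma_l
    """
    others = np.delete(scores, k)                       # s_njk' for k' != k
    # Eq. 11: soft histogram with Gaussian-shaped kernels
    h = np.exp(-((others[:, None] - centers) ** 2) / widths ** 2).sum(axis=0)
    return np.concatenate(([scores[k]], h, [x_target, pi]))
```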
Variational updates. We denote by W the parameters of the network (all weights and biases). To perform inference, we need to iteratively update W, together with α, ρ, and β, to maximize Eq. 7, where φ_nj is the output of the network f(·). We follow a variational expectation maximization (EM) algorithm. In the M step, we take a gradient step with respect to the model parameters (α, ρ, and β). In the E step, we take a gradient step with respect to the network parameters (W). We obtain the (noisy) gradient with respect to W using the score function method as in black-box variational inference (Paisley et al., 2012; Mnih and Gregor, 2014; Ranganath et al., 2015), which allows rewriting the gradient of Eq. 7 as an expectation with respect to the variational distribution,
    ∇_W L̃ = Σ_{n,j} E_{q(b_nj; W)}[ ( log p(x_nj | s_nj, b_nj) + log p(b_nj; π_nj) − log q(b_nj; W) ) ∇_W log q(b_nj; W) ].

Then, we can estimate the gradient via Monte Carlo by drawing samples from q(b_nj; W).
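A minimal PyTorch sketch of this Monte Carlo estimator for one (n, j) term; it returns a surrogate loss whose gradient matches the expression above. The sampler ignores the B_nj ≥ 1 truncation for brevity, and `log_joint_fn` is a hypothetical placeholder for log p(x_nj | s_nj, b) + log p(b; π_nj).

```python
import torch

def score_function_loss(logits, log_joint_fn, num_samples=8):
    """Surrogate loss for the score-function gradient of the lower bound."""
    q = torch.distributions.Bernoulli(logits=logits)
    b = q.sample((num_samples,))            # samples from q(b; W); ignores B >= 1
    log_q = q.log_prob(b).sum(-1)           # log q(b; W), one value per sample
    with torch.no_grad():                   # the bracketed term carries no gradient
        signal = log_joint_fn(b) - log_q    # log p(x|s,b) + log p(b) - log q(b)
    return -(signal * log_q).mean()         # minimizing this ascends the bound
```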
3 Empirical Study
We study the performance of context selection on three different application domains: movie recommendations, ornithology, and market basket analysis. On these domains, we show that context
selection improves predictions. For the movie data, we also show that the learned embeddings are
more interpretable; and for the market basket analysis, we provide a motivating example of the
variational probabilities inferred by the network.
Data. MovieLens: We consider the MovieLens-100K dataset (Harper and Konstan, 2015), which
contains ratings of movies on a scale from 1 to 5. We only keep those ratings with value 3 or more
(and we subtract 2 from all ratings, so that the counts are between 0 and 3). We remove users who
rated less than 20 movies and movies that were rated fewer than 50 times, yielding a dataset with 943
users and 811 movies. The average number of non-zeros per user is 82.2. We set aside 9% of the data
for validation and 10% for test.
eBird-PA: The eBird data (Munson et al., 2015; Sullivan et al., 2009) contains information about a
set of bird observation events. Each datum corresponds to a checklist of counts of 213 bird species
reported from each event. The values of the counts range from zero to hundreds. Some extraordinarily
large counts are treated as outliers and set to the mean of positive counts of that species. Bird
observations in the subset eBird-PA are from a rectangular area that mostly overlaps Pennsylvania
and the period from day 180 to day 210 of years from 2002 to 2014. There are 22, 363 checklists in
the data and 213 unique species. The average number of non-zeros per checklist is 18.3. We split the
data into train (67%), test (26%), and validation (7%) sets.
Market-Basket: This dataset contains purchase records of more than 3,000 customers at an anonymous supermarket. We aggregate the purchases of one month at the category level, i.e., we combine
all individual UPC (Universal Product Code) items into item categories. This yields 45, 615 purchases
and 364 unique items. The average basket size is of 12.5 items. We split the data into training (86%),
test (5%), and validation (9%) sets.
Models. We compare the base exponential family embeddings (EFE) model (Rudolph et al., 2016)
with our context selection procedure. We implement the amortized inference network described in
Section 2.3,² for different values of the prior hyperparameter γ (Eq. 5) (see below).
For the movie data, in which the ratings range from 0 to 3, we use a binomial conditional distribution (Eq. 1) with 3 trials, and we use an identity link function for the natural parameter η_j (Eq. 2), which is the logit of the binomial probability. For the eBird-PA and Market-Basket data, which contain counts, we consider a Poisson conditional distribution and use the link function³ g(·) = log softplus(·) for
movies rated by the same user in MovieLens; the set of other birds in the same checklist on eBird-PA;
and the rest of items in the same market basket.
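As a numerically stable sketch of this link (an assumed helper, not the released code), the Poisson log-rate can be computed as:

```python
import numpy as np

def poisson_log_rate(x):
    """g(x) = log softplus(x), mapping the linear predictor to the log-rate."""
    softplus = np.logaddexp(0.0, x)   # log(1 + exp(x)) without overflow
    return np.log(softplus)
```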
Experimental setup. We explore different values for the dimensionality K of the embedding vectors.
In our tables of results, we report the values that performed best in the validation set (there was no
qualitative difference in the relative performance between the methods for the non-reported results).
We use negative sampling (Rudolph et al., 2016) with a ratio of 1/10 of positive (non-zero) versus
negative samples. We use stochastic gradient descent to maximize the objective function, adaptively
setting the stepsize with Adam (Kingma and Ba, 2015), and we use the validation log-likelihood to
assess convergence. We consider unit-variance ℓ2-regularization, and the weight of the regularization
term is fixed to 1.0.
In the context selection for exponential family embeddings (CS-EFE) model, we set the number of
hidden units to 30 and 15 for each of the hidden layers, and we consider 40 bins to form the histogram.
(We have also explored other settings of the network, obtaining very similar results.) We believe
that the network layers can adapt to different settings of the bins as long as they pick up essential
information of the scores. In this work, we place these 40 bins equally spaced by a distance of 0.2
and set the width to 0.1.
² The code is in the github repo: https://github.com/blei-lab/context-selection-embedding
³ The softplus function is defined as softplus(x) = log(1 + exp(x)).
                                     CS-EFE (this paper)
K     Baseline: EFE            γ = 20         γ = 50         γ = 100        γ = ∞
      (Rudolph et al., 2016)
10    -1.06 (0.01)             -1.00 (0.01)   -1.03 (0.01)   -1.03 (0.01)   -1.03 (0.01)
50    -1.06 (0.01)             -0.97 (0.01)   -0.99 (0.01)   -1.00 (0.01)   -1.01 (0.01)
(a) MovieLens-100K.

                                     CS-EFE (this paper)
K     Baseline: EFE            γ = 2          γ = 5          γ = 10         γ = ∞
      (Rudolph et al., 2016)
50    -1.74 (0.01)             -1.34 (0.01)   -1.33 (0.00)   -1.51 (0.01)   -1.34 (0.01)
100   -1.74 (0.01)             -1.34 (0.00)   -1.33 (0.00)   -1.31 (0.00)   -1.31 (0.01)
(b) eBird-PA.

                                     CS-EFE (this paper)
K     Baseline: EFE            γ = 2          γ = 5          γ = 10         γ = ∞
      (Rudolph et al., 2016)
50    -0.632 (0.003)           -0.626 (0.003) -0.623 (0.003) -0.625 (0.003) -0.628 (0.003)
100   -0.633 (0.003)           -0.630 (0.003) -0.623 (0.003) -0.626 (0.003) -0.628 (0.003)
(c) Market-Basket.

Table 1: Test log-likelihood for the three considered datasets. Our CS-EFE models consistently outperform the baseline for different values of the prior hyperparameter γ. The numbers in brackets indicate the standard errors.
In our experiments, we vary the hyperparameter γ in Eq. 5 to check how the expected context size (see Section 2.2) impacts the results. For the MovieLens dataset, we choose γ ∈ {20, 50, 100, ∞}, while for the other two datasets we choose γ ∈ {2, 5, 10, ∞}.
Results: Predictive performance. We compare the methods in terms of predictive pseudo log-likelihood on the test set. We calculate the marginal log-likelihood in the same way as Rezende et al. (2014). We report the average test log-likelihood on the three datasets in Table 1. The numbers are the average predictive log-likelihood per item, together with the standard errors in brackets. We compare the predictions of our models (in each setting) with the baseline EFE method using a paired t-test, obtaining that all our results are better than the baseline at a significance level p = 0.05. In the table we only bold the best performance across different settings of γ.
The results show that our method outperforms the baseline on all three datasets. The improvement over the baseline is most significant on the eBird-PA dataset. We can also see that the prior parameter γ has some impact on the model's performance.
Evaluation: Embedding quality. We also study how context selection affects the quality of the
embedding vectors of the items. In the MovieLens dataset, each movie has up to 3 genre labels. We
calculate movie similarities by their genre labels and check whether the similarities derived from the
embedding vectors are consistent with genre similarities.
More in detail, let g_j ∈ {0, 1}^G be a binary vector containing the genre labels for each movie j, where G = 19 is the number of genres. We define the similarity between two genre vectors, g_j and g_j', as the number of common genres normalized by the larger number of genres,
    sim(g_j, g_j') = ( g_j^⊤ g_j' ) / max( 1^⊤ g_j, 1^⊤ g_j' ),   (12)
where 1 is a vector of ones. In an analogous manner, we define the similarity of two embedding
vectors as their cosine distance.
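Both similarity measures are one-liners; a sketch assuming binary NumPy genre vectors and real-valued embedding vectors (names are illustrative):

```python
import numpy as np

def genre_similarity(g1, g2):
    """Eq. 12: shared genres normalized by the larger genre count.
    Assumes each movie has at least one genre label."""
    return (g1 @ g2) / max(g1.sum(), g2.sum())

def embedding_similarity(r1, r2):
    """Cosine similarity between two embedding vectors."""
    return (r1 @ r2) / (np.linalg.norm(r1) * np.linalg.norm(r2))
```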
We now compute the similarities of each movie to all other movies, according to both definitions
of similarity (based on genre and based on embeddings). For each query movie, we provide two
correlation metrics between both lists. The first metric is simply Spearman's correlation between the
two ranked lists. For the second metric, we rank the movies based on the embedding similarity only,
and we calculate the average genre similarity of the top 5 movies. Finally, we average both metrics
across all possible query movies, and we report the results in Table 2.
                                      CS-EFE (this paper)
Metric         Baseline: EFE            γ = 20    γ = 50    γ = 100    γ = ∞
               (Rudolph et al., 2016)
Spearman's         0.066                 0.108     0.090     0.082      0.076
mean-sim@5         0.272                 0.328     0.317     0.299      0.289

Table 2: Correlation between the embedding vectors and the movie genre. The embedding vectors found with our CS-EFE model exhibit higher correlation with movie genres.
                        Target: Taco shells    Target: Cat food dry
Taco shells                     –                      0.220
Hispanic salsa                0.309                    0.206
Tortilla                      0.287                    0.225
Hispanic canned food          0.315                    0.173
Cat food dry                  0.219                      –
Cat food wet                  0.185                    0.297
Cat litter                    0.151                    0.347
Pet supplies                  0.221                    0.312

Table 3: Approximate posterior probabilities of the CS-EFE model for a basket with eight items broken down into two unrelated clusters. The left column represents a basket of eight items of two types, and then we take one item of each type as target in the other two columns. For a Mexican food target, the posterior probabilities of the items in the Mexican type are larger compared to the probabilities in the pet type, and vice-versa.
From this result, we can see that the similarity of the embedding vectors obtained by our model is
more consistent with the genre similarity. (We have also computed the top-1 and top-10 similarities,
which supports the same conclusion.) The result suggests a small number of context items are actually
better for learning relations of movies.
Evaluation: Posterior checking. To get more insight into the variational posterior distribution that our
model provides, we form a heterogeneous market basket that contains two types of items: Mexican
food, and pet-related products. In particular, we form a basket with four items of each of those types,
and we compute the variational distribution (i.e., the output of the neural network) for two different
target items from the basket. Intuitively, the Mexican food items should have higher probabilities
when the target item is also in the same type, and similarly for the pet food.
We fit the CS-EFE model with γ = 2 on the Market-Basket data. We report the approximate posterior
probabilities in Table 3, for two query items (one from each type). As expected, the probabilities for
the items of the same type as the target are higher, indicating that their contribution to the context
will be higher.
4 Conclusion
The standard exponential family embeddings (EFE) model finds vector representations by fitting
the conditional distributions of objects conditioned on their contexts. In this work, we show that
choosing a subset of the elements in the context can improve performance when the objects in the
subset are truly related to the object to be modeled. As a consequence, the embedding vectors can
reflect co-occurrence relations with higher fidelity compared with the base embedding model.
We formulate the context selection problem as a Bayesian inference problem by using a hidden binary
vector to indicate which objects to select from each context set. This leads to a difficult inference
problem due to the (large) scale of the problems we face. We develop a fast inference algorithm by
leveraging amortization and stochastic gradients. The varying length of the binary context selection
vectors poses further challenges in our amortized inference algorithm, which we address using a
binning technique. We fit our model on three datasets from different application domains, showing
its superiority over the EFE model.
There are still many directions to explore to further improve the performance of the proposed context selection for exponential family embeddings (CS-EFE). First, we can apply the context selection technique to text data. Though the neighboring words of each target word are more likely to be the "correct" context, we can still combine the context selection technique with the ordering in which
words appear in the context, hopefully leading to better word representations. Second, we can explore
variational inference schemes that do not rely on mean-field, improving the inference network to
capture more complex variational distributions.
Acknowledgments
This work is supported by NSF IIS-1247664, ONR N00014-11-1-0651, DARPA PPAML FA8750-14-2-0009, DARPA SIMPLEX N66001-15-C-4032, the Alfred P. Sloan Foundation, and the John Simon Guggenheim Foundation. Francisco J. R. Ruiz is supported by the EU H2020 programme (Marie Skłodowska-Curie grant agreement 706760). We also acknowledge the support of NVIDIA
Corporation with the donation of two GPUs used for this research.
References
Arora, S., Li, Y., Liang, Y., and Ma, T. (2016). RAND-WALK: A latent variable model approach to word
embeddings. Transactions of the Association for Computational Linguistics, 4.
Bamler, R. and Mandt, S. (2017). Dynamic word embeddings. In International Conference in Machine Learning.
Barkan, O. and Koenigstein, N. (2016). Item2Vec: Neural item embedding for collaborative filtering. In IEEE
International Workshop on Machine Learning for Signal Processing.
Bengio, Y., Schwenk, H., Sen?cal, J.-S., Morin, F., and Gauvain, J.-L. (2006). Neural probabilistic language
models. In Innovations in Machine Learning. Springer.
Bishop, C. M. (2006). Pattern Recognition and Machine Learning (Information Science and Statistics). SpringerVerlag New York, Inc., Secaucus, NJ, USA.
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The Helmholtz machine. Neural Computation,
7(5):889–904.
Firth, J. R. (1957). A synopsis of linguistic theory 1930-1955. In Studies in Linguistic Analysis (special volume
of the Philological Society), volume 1952–1959.
Gershman, S. J. and Goodman, N. D. (2014). Amortized inference in probabilistic reasoning. In Proceedings of
the Thirty-Sixth Annual Conference of the Cognitive Science Society.
Harper, F. M. and Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on
Interactive Intelligent Systems (TiiS), 5(4):19.
Harris, Z. S. (1954). Distributional structure. Word, 10(2–3):146–162.
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to variational methods
for graphical models. Machine Learning, 37(2):183–233.
Kingma, D. P. and Ba, J. L. (2015). Adam: A method for stochastic optimization. In International Conference
on Learning Representations.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In International Conference on
Learning Representations.
Korattikara, A., Rathod, V., Murphy, K. P., and Welling, M. (2015). Bayesian dark knowledge. In Advances in
Neural Information Processing Systems.
Levy, O. and Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Advances in
Neural Information Processing Systems.
Liang, D., Altosaar, J., Charlin, L., and Blei, D. M. (2016). Factorization meets the item embedding: Regularizing
matrix factorization with item co-occurrence. In ACM Conference on Recommender System.
Mikolov, T., Chen, K., Corrado, G. S., and Dean, J. (2013a). Efficient estimation of word representations in
vector space. International Conference on Learning Representations.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words
and phrases and their compositionality. In Advances in Neural Information Processing Systems.
Mikolov, T., Yih, W.-t., and Zweig, G. (2013c). Linguistic regularities in continuous space word representations. In Conference of the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies.
9
Mnih, A. and Gregor, K. (2014). Neural variational inference and learning in belief networks. In International
Conference on Machine Learning.
Mnih, A. and Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation.
In Advances in Neural Information Processing Systems.
Munson, M. A., Webb, K., Sheldon, D., Fink, D., Hochachka, W. M., Iliff, M., Riedewald, M., Sorokina, D.,
Sullivan, B., Wood, C., and Kelling, S. (2015). The eBird reference dataset.
Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
Paisley, J. W., Blei, D. M., and Jordan, M. I. (2012). Variational Bayesian inference with stochastic search. In
International Conference on Machine Learning.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In
Conference on Empirical Methods on Natural Language Processing.
Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Artificial Intelligence
and Statistics.
Ranganath, R., Tang, L., Charlin, L., and Blei, D. M. (2015). Deep exponential families. In Artificial Intelligence
and Statistics.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference
in deep generative models. In International Conference on Machine Learning.
Rudolph, M., Ruiz, F. J. R., Mandt, S., and Blei, D. M. (2016). Exponential family embeddings. In Advances in
Neural Information Processing Systems.
Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating
errors. Nature, 323(9):533–536.
Sullivan, B., Wood, C., Iliff, M. J., Bonney, R. E., Fink, D., and Kelling, S. (2009). eBird: A citizen-based bird
observation network in the biological sciences. Biological Conservation, 142:2282–2292.
Vilnis, L. and McCallum, A. (2015). Word representations via Gaussian embedding. In International Conference
on Learning Representations.
Wainwright, M. J. and Jordan, M. I. (2008). Graphical models, exponential families, and variational inference.
Foundations and Trends in Machine Learning, 1(1–2):1–305.
Working hard to know your neighbor's margins:
Local descriptor learning loss
Anastasiya Mishchuk1, Dmytro Mishkin2, Filip Radenović2, Jiří Matas2
1
Szkocka Research Group, Ukraine
[email protected]
2
Visual Recognition Group, CTU in Prague
{mishkdmy, filip.radenovic, matas}@cmp.felk.cvut.cz
Abstract
We introduce a loss for metric learning, which is inspired by the Lowe?s matching
criterion for SIFT. We show that the proposed loss, that maximizes the distance
between the closest positive and closest negative example in the batch, is better
than complex regularization methods; it works well for both shallow and deep
convolution network architectures. Applying the novel loss to the L2Net CNN
architecture results in a compact descriptor named HardNet. It has the same
dimensionality as SIFT (128) and shows state-of-art performance in wide baseline
stereo, patch verification and instance retrieval benchmarks.
1 Introduction
Many computer vision tasks rely on finding local correspondences, e.g. image retrieval [1, 2],
panorama stitching [3], wide baseline stereo [4], 3D-reconstruction [5, 6]. Despite the growing
number of attempts to replace complex classical pipelines with end-to-end learned models, e.g., for
image matching [7], camera localization [8], the classical detectors and descriptors of local patches
are still in use, due to their robustness, efficiency and their tight integration. Moreover, reformulating
the task, which is solved by the complex pipeline as a differentiable end-to-end process is highly
challenging.
As a first step towards end-to-end learning, hand-crafted descriptors like SIFT [9, 10] or detectors [9, 11, 12] have been replace with learned ones, e.g., LIFT [13], MatchNet [14] and DeepCompare [15]. However, these descriptors have not gained popularity in practical applications despite
good performance in the patch verification task. Recent studies have confirmed that SIFT and its
variants (RootSIFT-PCA [16], DSP-SIFT [17]) significantly outperform learned descriptors in image
matching and small-scale retrieval [18], as well as in 3D-reconstruction [19]. One of the conclusions
made in [19] is that current local patches datasets are not large and diverse enough to allow the
learning of a high-quality widely-applicable descriptor.
In this paper, we focus on descriptor learning and, using a novel method, train a convolutional neural
network (CNN), called HardNet. We additionally show that our learned descriptor significantly
outperforms both hand-crafted and learned descriptors in real-world tasks like image retrieval and two
view matching under extreme conditions. For the training, we use the standard patch correspondence
data thus showing that the available datasets are sufficient for going beyond the state of the art.
2 Related work
Classical SIFT local feature matching consists of two parts: finding nearest neighbors and comparing
the first to second nearest neighbor distance ratio threshold for filtering false positive matches. To
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
best of our knowledge, no work in local descriptor learning fully mimics such strategy as the learning
objective.
Simonyan and Zisserman [20] proposed a simple filter plus pooling scheme learned with convex
optimization to replace the hand-crafted filters and poolings in SIFT. Han et al. [14] proposed a two-stage siamese architecture: one part for embedding and one for two-patch similarity. The latter network improved
matching performance, but prevented the use of fast approximate nearest neighbor algorithms like
kd-tree [21]. Zagoruyko and Komodakis [15] have independently presented similar siamese-based
method which explored different convolutional architectures. Simo-Serra et al [22] harnessed hard-negative mining with a relatively shallow architecture that exploited pair-based similarity.
The three following papers have most closely followed the classical SIFT matching scheme. Balntas et al [23] used a triplet margin loss and a triplet distance loss, with random sampling of the
patch triplets. They show the superiority of the triplet-based architecture over a pair-based one, although, unlike SIFT matching or our work, they sampled negatives randomly. Choy et al [7] calculate the
distance matrix for mining positive as well as negative examples, followed by pairwise contrastive
loss.
Tian et al [24] use n matching pairs in batch for generating n² − n negative samples and require that
the distance to the ground truth matchings is minimum in each row and column. No other constraint
on the distance or distance ratio is enforced. Instead, they propose a penalty for the correlation
of the descriptor dimensions and adopt deep supervision [25] by using intermediate feature maps
for matching. Given its state-of-the-art performance, we have adopted the L2Net [24] architecture as the base for our descriptor. We show that it is possible to learn an even more powerful descriptor with a significantly simpler learning objective, without the need for the two auxiliary loss terms.
[Figure 1 panels: a batch of input patches (A_i, P_i); their descriptors (a_i, p_i); the distance matrix D = cdist(a, p) with entries d(a_i, p_j); and the final triplet (one of n in the batch), formed with the hardest of the two negative candidates.]
Figure 1: Proposed sampling procedure. First, patches are described by the current network, then a distance matrix is calculated. The closest non-matching descriptor (shown in red) is selected for each a_i and p_i patch from the positive pair (green), respectively. Finally, among the two negative candidates the hardest one is chosen. All operations are done in a single forward pass.
3 The proposed descriptor

3.1 Sampling and loss
Our learning objective mimics the SIFT matching criterion. The process is shown in Figure 1. First, a
batch X = (Ai , Pi )i=1..n of matching local patches is generated, where A stands for the anchor and
P for the positive. The patches Ai and Pi correspond to the same point on 3D surface. We make sure
that in batch X , there is exactly one pair originating from a given 3D point.
Second, the 2n patches in X are passed through the network shown in Figure 2.
The L2 pairwise distance matrix D = cdist(a, p) of size n × n is calculated, where d(a_i, p_j) = √(2 − 2 a_i^⊤ p_j), i = 1..n, j = 1..n, and a_i and p_j denote the descriptors of patches A_i and P_j respectively.
Next, for each matching pair a_i and p_i, the closest non-matching descriptors, i.e. the 2nd nearest neighbors, are found respectively:

a_i – anchor descriptor, p_i – positive descriptor,
p_{j_min} – closest non-matching descriptor to a_i, where j_min = arg min_{j=1..n, j≠i} d(a_i, p_j),
a_{k_min} – closest non-matching descriptor to p_i, where k_min = arg min_{k=1..n, k≠i} d(a_k, p_i).
Our goal is to minimize the distance between the matching descriptor and closest non-matching
descriptor. These n triplet distances are fed into the triplet margin loss:
1 X
L=
max (0, 1 + d(ai , pi ) ? min (d(ai , pjmin ), d(akmin , pi )))
(1)
n i=1,n
where min (d(ai , pjmin ), d(akmin , pi ) is pre-computed during the triplet construction.
The distance matrix calculation is done on the GPU, and the only overhead compared to random triplet sampling is the distance matrix calculation and taking the minimum over rows and columns. Moreover, compared to the usual learning with triplets, our scheme needs only a two-stream CNN, not three, which results in 30% less memory consumption and computation.

Unlike in [24], neither deep supervision for intermediate layers is used, nor a constraint on the correlation of descriptor dimensions. We experienced no significant over-fitting.
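A compact PyTorch sketch of the whole sampling-and-loss step of this section (a minimal reillustration, not the authors' released code; the constant that masks the diagonal is an assumption):

```python
import torch

def hardnet_loss(a, p, margin=1.0, eps=1e-8):
    """a, p: (n, 128) L2-normalized anchor/positive descriptors,
    exactly one pair per 3D point in the batch."""
    # pairwise L2 distances; d(ai, pj) = sqrt(2 - 2 ai.pj) for unit vectors
    d = torch.sqrt(2.0 - 2.0 * a @ p.t() + eps)
    pos = d.diag()                                   # d(ai, pi)
    # mask matching pairs so the min picks the closest *non-matching* descriptor
    mask = torch.eye(d.size(0), device=d.device) * 10.0
    neg_a = (d + mask).min(dim=1).values             # d(ai, p_jmin)
    neg_p = (d + mask).min(dim=0).values             # d(a_kmin, pi)
    hardest_neg = torch.min(neg_a, neg_p)            # hardest of the two candidates
    return torch.clamp(margin + pos - hardest_neg, min=0).mean()  # Eq. 1
```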
3.2 Model architecture
[Figure 2 layers for a 32x32 grayscale input: 3x3 conv, pad 1 (32) + BN + ReLU; 3x3 conv, pad 1 (32) + BN + ReLU; 3x3 conv, pad 1, stride 2 (64) + BN + ReLU; 3x3 conv, pad 1 (64) + BN + ReLU; 3x3 conv, pad 1, stride 2 (128) + BN + ReLU; 3x3 conv, pad 1 (128) + BN + ReLU; 8x8 conv (128) + BN + L2Norm → 128-D descriptor.]
Figure 2: The architecture of our network, adopted from L2Net [24]. Each convolutional layer is
followed by batch normalization and ReLU, except the last one. Dropout regularization is used before
the last convolution layer.
The HardNet architecture, Figure 2, is identical to L2Net [24]. Padding with zeros is applied to
all convolutional layers, to preserve the spatial size, except to the final one. There are no pooling
layers, since we found that they decrease performance of the descriptor. That is why the spatial size is
reduced by strided convolutions. Batch normalization [26] layer followed by ReLU [27] non-linearity
is added after each layer, except the last one. Dropout [28] regularization with 0.1 dropout rate is
applied before the last convolution layer. The output of the network is L2 normalized to produce
128-D descriptor with unit-length. Grayscale input patches with size 32 × 32 pixels are normalized
by subtracting the per-patch mean and dividing by the per-patch standard deviation.
Optimization is done by stochastic gradient descent with learning rate of 0.1, momentum of 0.9 and
weight decay of 0.0001. Learning rate was linearly decayed to zero within 10 epochs for the most of
the experiments in this paper. Training is done with PyTorch library [29].
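For reference, a PyTorch sketch of the Figure 2 architecture as just described; standard details the paper leaves implicit (e.g., bias-free convolutions before batch normalization) are our assumptions:

```python
import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(cout), nn.ReLU())

class HardNet(nn.Module):
    """L2Net-style network of Figure 2: 32x32 grayscale patch -> 128-D unit descriptor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_bn_relu(1, 32), conv_bn_relu(32, 32),
            conv_bn_relu(32, 64, stride=2), conv_bn_relu(64, 64),
            conv_bn_relu(64, 128, stride=2), conv_bn_relu(128, 128),
            nn.Dropout(0.1),                       # dropout before the last conv
            nn.Conv2d(128, 128, 8, bias=False),    # final 8x8 conv, no padding
            nn.BatchNorm2d(128))                   # no ReLU after the last layer
    def forward(self, x):                          # x: (n, 1, 32, 32), per-patch normalized
        d = self.features(x).view(x.size(0), -1)
        return F.normalize(d, p=2, dim=1)          # L2-normalize to unit length
```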
3.3 Model training
UBC Phototour [3], also known as Brown dataset. It consists of three subsets: Liberty, Notre Dame
and Yosemite with about 400k normalized 64x64 patches in each. Keypoints were detected by DoG
detector and verified by 3D model.
Test set consists of 100k matching and non-matching pairs for each sequence. Common setup is to
train the descriptor on one subset and test on the two others. The metric is the false positive rate (FPR) at the point of 0.95 true positive recall. It was found out by Michel Keller that the [14] and [23] evaluation procedures report FDR (false discovery rate) instead of FPR (false positive rate). To avoid misinterpretation of the results, we decided to provide both FPR and FDR rates and re-estimated the scores for a direct comparison. Results are shown in Table 1. The proposed descriptor outperforms its competitors, with training augmentation or without it. We haven't included results on multiscale patch sampling or the so-called "center-surrounding" architecture for two reasons. First, architectural choices are beyond the scope of the current paper. Second, it was already shown in [24, 30] that "center-surrounding" consistently improves results on the Brown dataset for different descriptors, while it hurts matching performance on other, more realistic setups, e.g., on the Oxford-Affine [31] dataset.
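Since the FPR/FDR confusion comes up repeatedly, here is a short Python sketch of how both rates follow from the distances of matching and non-matching pairs at the 95% recall operating point (a hypothetical helper, not evaluation code from the benchmark):

```python
import numpy as np

def fpr95_and_fdr95(pos_dist, neg_dist):
    """FPR and FDR at the distance threshold giving 95% true positive recall."""
    thr = np.percentile(pos_dist, 95)     # 95% of matching pairs fall below thr
    fp = np.sum(neg_dist <= thr)          # non-matching pairs accepted
    tp = np.sum(pos_dist <= thr)          # matching pairs accepted
    fpr = fp / len(neg_dist)              # false positive rate
    fdr = fp / max(1, fp + tp)            # false discovery rate
    return fpr, fdr
```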
In the rest of the paper we use the descriptor trained on the Liberty sequence, which is a common practice, to allow a fair comparison. TFeat [23] and L2Net [24] use the same dataset for training.
Table 1: Patch correspondence verification performance on the Brown dataset. We report the false positive rate at a true positive rate equal to 95% (FPR95). Some papers report the false discovery rate (FDR) instead of FPR due to a bug in the source code. For consistency we provide FPR, either obtained from the original article or re-estimated from the given FDR (marked with *). The best results are in bold.
Training              Notredame  Yosemite    Liberty  Yosemite    Liberty  Notredame      Mean
Test                       Liberty               Notredame             Yosemite        FDR    FPR

SIFT [9]                   29.84                   22.53                 27.29           –    26.55
MatchNet* [14]         7.04     11.47          3.82      5.65       11.6      8.7      7.74   8.05
TFeat-M* [23]          7.39     10.31          3.06      3.8        8.06      7.24     6.47   6.64
L2Net [24]             3.64      5.29          1.15      1.62       4.43      3.30      –     3.24
HardNet (ours)         3.06      4.27          0.96      1.4        3.04      2.53      –     2.54

Augmentation: flip, 90° random rotation
GLoss+ [30]            3.69      4.91          0.77      1.14       3.09      2.67     1.97   2.71
DC2ch2st+ [15]         4.85      7.2           1.9       2.11       5.00      4.10     3.00   4.19
L2Net+ [24]            2.36      4.7           0.72      1.29       2.57      1.71      –     2.23
HardNet+ (ours)        2.28      3.25          0.57      0.96       2.13      2.22      –     1.90

3.4 Exploring the batch size influence
We study the influence of mini-batch size on the final descriptor performance. It is known that
small mini-batches are beneficial to faster convergence and better generalization [32], while large
batches allow better GPU utilization. Our loss function design should benefit from seeing more
hard negative patches to learn to distinguish them from true positive patches. We report the results
for batch sizes 16, 64, 128, 512, 1024, 2048. We trained the model described in Section 3.2 using the Liberty sequence of the Brown dataset. Results are shown in Figure 3. As expected, model performance improves with increasing mini-batch size, as more examples are seen to get harder negatives. However, increasing the batch size beyond 512 does not bring a significant benefit.
4 Empirical evaluation
Recently, Balntas et al. [23] showed that good performance on patch verification task on Brown dataset
does not always mean good performance in the nearest neighbor setup and vice versa. Therefore, we
have extensively evaluated learned descriptors on real-world tasks like two view matching and image
retrieval.
We have selected RootSIFT [10], TFeat-M* [23], and L2Net [24] for direct comparison with our
descriptor, as they show the best results on a variety of datasets.
[Figure 3 plot: FPR over training epochs 1-8, one curve per batch size (16, 64, 128, 512, 1024, 2048).]
Figure 3: Influence of the batch size on descriptor performance. The metric is false
positive rate (FPR) at true positive rate
equal to 95%, averaged over Notredame
and Yosemite validation sequences.
4.1
HardNet
HardNet+
L2Net
L2Net+
RootSIFT
SIFT
TFeat?M*
10 3
number?of?distractors
10 4
Figure 4: Patch retrieval descriptor performance
(mAP) vs. the number of distractors, evaluated on
HPatches dataset.
Patch descriptor evaluation
HPatches [18] is a recent dataset for local patch descriptor evaluation. It consists of 116 sequences of
6 images. The dataset is split into two parts: viewpoint (59 sequences with significant viewpoint
change) and illumination (57 sequences with significant illumination change, both natural and
artificial). Keypoints are detected by DoG, Hessian and Harris detectors in the reference image and
reprojected to the rest of the images in each sequence with 3 levels of geometric noise: Easy, Hard,
and Tough variants. The HPatches benchmark defines three tasks: patch correspondence verification,
image matching and small-scale patch retrieval. We refer the reader to the HPatches paper [18] for a
detailed protocol of each task.
Results are shown in Figure 5. L2Net and HardNet show similar performance on the patch
verification task, with a small advantage of HardNet. On the matching task, even the non-augmented
version of HardNet outperforms the augmented version L2Net+ by a noticeable margin. The
difference is larger in the Tough and Hard setups. Illumination sequences are more challenging
than the geometric ones, for all the descriptors. We have also trained a network with the TFeat architecture, but
with the proposed loss function; it is denoted HardTFeat. It outperforms the original version in matching
and retrieval, while being on par with it on the patch verification task.
In patch retrieval, the relative performance of the descriptors is similar to the matching problem: HardNet
beats L2Net+. Both descriptors significantly outperform the previous state of the art, showing the
superiority of the selected deep CNN architecture over the shallow TFeat model.
Figure 5: Left to right: verification, matching and retrieval results on the HPatches dataset. Marker
color indicates the level of geometrical noise: Easy, Hard and Tough. Marker type indicates
the experimental setup. DiffSeq and SameSeq show the source of negative examples for the
verification task. Viewpt and Illum indicate the type of sequences for matching. None of the
descriptors is trained on HPatches. Overall mean mAP [%]: patch verification: rSIFT 58.53,
HardTFeat 81.32, TFeat-M* 81.90, L2Net 84.46, HardNet 86.19, L2Net+ 86.69, HardNet+ 87.12;
image matching: rSIFT 27.22, TFeat-M* 32.64, HardTFeat 38.07, L2Net 40.82, L2Net+ 45.04,
HardNet 48.24, HardNet+ 50.38; patch retrieval: rSIFT 42.49, TFeat-M* 52.03, HardTFeat 55.12,
L2Net 59.64, L2Net+ 63.37, HardNet 65.26, HardNet+ 66.82.
Table 2: Comparison of the loss functions and sampling strategies on the HPatches matching task;
the mean mAP is reported. CPR stands for the regularization penalty of the correlation between
descriptor channels, as proposed in [24]. Hard negative mining is performed once per epoch. Best
results are in bold. HardNet uses the hardest-in-batch sampling and the triplet margin loss.

Sampling / Loss              Softmin   Triplet margin   Contrastive   Contrastive
                                          m = 1            m = 1         m = 2
Random                        0.349      overfit           0.007         0.279
Hard negative mining          0.391      overfit           0.055         0.444
Random + CPR                    -         0.286            0.083           -
Hard negative mining + CPR      -         0.346              -             -
Hardest in batch (ours)       0.474      0.482               -           0.482
We also ran another patch retrieval experiment, varying the number of distractors (non-matching
patches) in the retrieval dataset. The results are shown in Figure 4. TFeat descriptor performance,
which is comparable to L2Net in the presence of a low number of distractors, degrades quickly as
the size of the database grows. At about 10,000 distractors its performance drops below SIFT. This
experiment explains why TFeat performs relatively poorly on the Oxford5k [33] and Paris6k [34]
benchmarks, which contain around 12M and 15M distractors, respectively; see Section 4.4 for more
details. Performance of HardNet decreases only slightly, for both the augmented and plain versions, and the
difference in mAP to the other descriptors grows with the increasing complexity of the task.
4.2 Ablation study
For a better understanding of the significance of the sampling strategy and the loss function, we conduct
the experiments summarized in Table 2. We train our HardNet model (the architecture is exactly the same as
the L2Net model), change one parameter at a time and evaluate its impact.
The following sampling strategies are compared: random, the proposed "hardest-in-batch", and the
"classical" hard negative mining, i.e., selecting in each epoch the closest negatives from the full
training set. The following loss functions are tested: softmin on distances, triplet margin with margin
m = 1, and contrastive with margins m = 1 and m = 2. The last is the maximum possible distance for
unit-normed descriptors. Mean mAP on the HPatches matching task is shown in Table 2.
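For concreteness, a minimal PyTorch-style sketch of the three compared losses on precomputed descriptor distances is given below; this is our illustration of the standard formulas, not the released training code, and the exact softmin variant used in the compared papers may differ slightly.

import torch
import torch.nn.functional as F

def triplet_margin(d_pos, d_neg, m=1.0):
    # d_pos = d(a, p), d_neg = d(a, n), one entry per anchor in the batch.
    return torch.clamp(m + d_pos - d_neg, min=0).mean()

def contrastive(d_pos, d_neg, m=1.0):
    # Positives are pulled together; negatives only matter while d(a, n) < m.
    return (d_pos.pow(2) + torch.clamp(m - d_neg, min=0).pow(2)).mean()

def softmin(d_pos, d_neg):
    # -log of the soft min-assignment over (d_pos, d_neg): log(1 + exp(d_pos - d_neg)).
    return F.softplus(d_pos - d_neg).mean()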
The proposed "hardest-in-batch" clearly outperforms all the other sampling strategies for all loss
functions, and it is the main reason for HardNet's good performance. Random sampling and
"classical" hard negative mining led to huge overfitting: the training loss was low, but test performance
was poor and varied several times from run to run. This behavior was observed with all loss functions.
Similar results for random sampling were reported in [24]. The poor results of hard negative mining
("hardest-in-the-training-set") are surprising. We guess that this is due to dataset label noise: the
mined "hard negatives" are actually positives. Visual inspection confirms this.
Figure 6: Contribution to the gradient magnitude from the positive and negative examples. Horizontal
and vertical axes show the distance from the anchor (a) to the negative (n) and positive (p) examples,
respectively. The softmin loss gradient quickly decreases when d(a, n) > d(a, p), unlike the triplet
margin loss. For the contrastive loss, negative examples with d(a, n) > m contribute zero to the
gradient. The triplet margin loss and the contrastive loss with a big margin behave very similarly.
We were able to get
reasonable results with random sampling and hard negative mining only with the additional correlation
penalty on descriptor channels (CPR), as proposed in [24].
Regarding the loss functions, softmin gave the most stable results across all sampling strategies, but
it is marginally outperformed by the contrastive and triplet margin losses for our strategy. One possible
explanation is that the triplet margin loss and the contrastive loss with a large margin have a constant
non-zero derivative w.r.t. both positive and negative samples; see Figure 6. In the case of the contrastive
loss with a small margin, many negative examples are not used in the optimization (zero derivatives),
while the softmin derivatives become small once the distance to the positive example is smaller than
the distance to the negative one.
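Combining the pieces above, here is a minimal sketch (our paraphrase of the described procedure, not the released implementation) of the "hardest-in-batch" strategy with the triplet margin loss; the published procedure also mines negatives among the anchors, which is omitted here for brevity.

import torch

def hardest_in_batch_triplet(anchors, positives, margin=1.0):
    # anchors, positives: (n, 128) L2-normalized descriptors; row i of each forms a matching pair.
    d = torch.cdist(anchors, positives)               # (n, n) pairwise L2 distances
    d_pos = d.diagonal()                              # distances of the n matching pairs
    d_masked = d + 1e6 * torch.eye(d.size(0), device=d.device)   # exclude the matching pair
    d_neg = d_masked.min(dim=1).values                # closest non-matching patch per anchor
    return torch.clamp(margin + d_pos - d_neg, min=0).mean()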
4.3 Wide baseline stereo
To validate descriptor generalization and the ability to operate in extreme conditions, we tested the descriptors
on the W1BS dataset [4]. It consists of 40 image pairs with one particular extreme change between
the images:
Appearance (A): difference in appearance due to seasonal or weather change, occlusions, etc.;
Geometry (G): difference in scale, camera and object position;
Illumination (L): significant difference in intensity, wavelength of light source;
Sensor (S): difference in sensor data (IR, MRI).
Moreover, local features in the W1BS dataset are detected with MSER [35], Hessian-Affine [11] (in
the implementation from [36]) and FOCI [37] detectors. They fire on different local structures than DoG.
Note that DoG patches were used for the training of the descriptors. Another significant difference to
the HPatches setup is the absence of geometrical noise: all patches are perfectly reprojected to
the target image in the pair. The testing protocol is the same as for the HPatches matching task.
Results are shown in Figure 7. HardNet and L2Net perform comparably: the former performs better
on images with geometrical and appearance changes, while the latter works a bit better on map2photo and
visible-vs-infrared pairs. Both outperform SIFT, but only by a small margin. However, considering
the significant amount of domain shift, the descriptors perform very well, while TFeat loses badly
to SIFT. HardTFeat significantly outperforms the original TFeat descriptor on the W1BS dataset,
showing the superiority of the proposed loss.
Good performance on the patch matching and verification tasks does not automatically lead to better
performance in practice, e.g., to more images registered. Therefore we also compared the descriptors in a
wide baseline stereo setup with two metrics: the number of successfully matched image pairs and the average
number of inliers per matched pair, following the matcher comparison protocol from [4]. The only
change to the original protocol is that the first fast matching steps with the ORB detector and descriptor were
removed, as we are comparing "SIFT-replacement" descriptors.
Figure 7: Descriptor evaluation on the W1BS patch dataset; mean area under the precision-recall curve (mAUC) is reported. Letters denote the nuisance factor. A: appearance; G: viewpoint/geometry; L: illumination; S: sensor; map2photo: satellite photo vs. map.
The results are shown in Table 3. Results on the Edge Foci (EF) [37], Extreme view [38] and Oxford
Affine [11] datasets are saturated: all the descriptors are good enough for matching all image pairs.
HardNet has a slight advantage in the number of inliers per image. The rest of the datasets (SymB [39],
GDB [40], WxBS [4] and LTLL [41]) have one thing in common: image pairs are either from a
domain other than photographs (e.g., drawing to drawing) or cross-domain (e.g., drawing to photo). Here HardNet
outperforms the learned descriptors and is on par with hand-crafted RootSIFT. We would like to note
that HardNet was trained neither for the different-domain nor for the cross-domain scenario; therefore, such
results show its generalization ability.
Table 3: Comparison of the descriptors on wide baseline stereo within the MODS matcher [4] on wide
baseline stereo datasets. The number of matched image pairs and the average number of inliers are reported.
The number in the header is the number of image pairs in the dataset.

              EF (33)      EVD (15)     OxAff (40)    SymB (46)    GDB (22)     WxBS (37)    LTLL (172)
Descriptor    #    inl.    #    inl.    #    inl.     #    inl.    #    inl.    #    inl.    #      inl.
RootSIFT      33   32      15   34      40   169      45   43      21   52      11   93      123    27
TFeat-M*      32   30      15   37      40   265      40   45      16   72      10   62      96     29
L2Net+        33   34      15   34      40   304      43   46      19   78      9    51      127    26
HardNet+      33   35      15   41      40   316      44   47      21   75      11   54      127    31
4.4 Image retrieval
We evaluate our method, and compare it against the related ones, on the practical application of image
retrieval with local features. Standard image retrieval datasets are used for the evaluation, i.e., the
Oxford5k [33] and Paris6k [34] datasets. Both datasets contain a set of images (5062 for Oxford5k
and 6300 for Paris6k) depicting 11 different landmarks together with distractors. For each of the
11 landmarks there are 5 different query regions defined by a bounding box, constituting 55 query
regions per dataset. The performance is reported as mean average precision (mAP) [33].
In the first experiment, for each image in the dataset, multi-scale Hessian-affine features [31] are
extracted. Exactly the same features are described by ours and all related methods, each of them
producing a 128-D descriptor per feature. Then, k-means with approximate nearest neighbor [21] is
used to learn a 1 million word visual vocabulary on an independent dataset, that is, when evaluating on
Oxford5k, the vocabulary is learned with descriptors of Paris6k and vice versa. All descriptors of the
testing dataset are assigned to the corresponding vocabulary, so finally, an image is represented by
the histogram of visual word occurrences, i.e., the bag-of-words (BoW) [1] representation, and an
inverted file is used for an efficient search. Additionally, spatial verification (SV) [33] and standard
query expansion (QE) [34] are used to re-rank and refine the search results. Comparison with the
related work on patch description is presented in Table 4. HardNet+ and L2Net+ perform comparably
across both datasets and all settings, with slightly better performance of HardNet+ on average across
all results (average mAP 69.5 vs. 69.1). RootSIFT, which was the best performing descriptor in
image retrieval for a long time, falls behind with an average mAP of 66.0 across all results.
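A minimal sketch of the BoW representation step described above (ours, with a toy vocabulary size and exact k-means instead of the approximate nearest neighbor of [21]); the resulting histograms can then be ranked against a query histogram, typically through an inverted file over the nonzero entries.

import numpy as np
from sklearn.cluster import KMeans

def build_bow(descriptors_per_image, n_words=1024):
    # descriptors_per_image: list of (n_i, 128) arrays, one per database image.
    vocab = KMeans(n_clusters=n_words, n_init=1).fit(np.vstack(descriptors_per_image))
    hists = []
    for desc in descriptors_per_image:
        words = vocab.predict(desc)
        h = np.bincount(words, minlength=n_words).astype(float)
        hists.append(h / (np.linalg.norm(h) + 1e-12))   # L2-normalized BoW histogram
    return vocab, np.stack(hists)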
Table 4: Performance (mAP) evaluation on bag-of-words (BoW) image retrieval. A vocabulary
consisting of 1M visual words is learned on an independent dataset, that is, when evaluating on Oxford5k,
the vocabulary is learned with features of Paris6k and vice versa. SV: spatial verification. QE: query
expansion. The best results are highlighted in bold. All the descriptors except SIFT and HardNet++
were learned on the Liberty sequence of the Brown dataset [3]. HardNet++ is trained on the union of the Brown and
HPatches [18] datasets.

                    Oxford5k                      Paris6k
Descriptor          BoW    BoW+SV   BoW+QE        BoW    BoW+SV   BoW+QE
TFeat-M* [23]       46.7   55.6     72.2          43.8   51.8     65.3
RootSIFT [10]       55.1   63.0     78.4          59.3   63.7     76.4
L2Net+ [24]         59.8   67.7     80.4          63.0   66.6     77.2
HardNet             59.0   67.6     83.2          61.4   67.4     77.5
HardNet+            59.8   68.8     83.0          61.0   67.0     77.5
HardNet++           60.8   69.6     84.5          65.0   70.3     79.1
Table 5: Performance (mAP) comparison with the state-of-the-art image retrieval with local features.
The vocabulary is learned on an independent dataset, that is, when evaluating on Oxford5k, the vocabulary
is learned with features of Paris6k and vice versa. All presented results are with spatial verification
and query expansion. VS: vocabulary size. SA: single assignment. MA: multiple assignments. The
best results are highlighted in bold.

                             Oxford5k        Paris6k
Method                 VS      SA     MA      SA     MA
SIFT-BoW [36]          1M      78.4   82.2    -      -
SIFT-BoW-fVocab [46]   16M     74.0   84.9    73.6   82.4
RootSIFT-HQE [43]      65k     85.3   88.0    81.3   82.8
HardNet++-HQE          65k     86.8   88.3    82.8   84.9
We also trained a HardNet++ version, using all training data available at the moment: the union of the Brown
and HPatches datasets, instead of just the Liberty sequence from Brown as for HardNet+. It shows the
benefit of having more training data and performs best in all setups.
Finally, we compare our descriptor with the state-of-the-art image retrieval approaches that use local
features. For fairness, all methods presented in Table 5 use the same local feature detector as described
before, learn the vocabulary on an independent dataset, and use spatial verification (SV) and query
expansion (QE). In our case (HardNet++-HQE), a visual vocabulary of 65k visual words is learned,
with the additional Hamming embedding (HE) [42] technique that further refines descriptor assignments
with a 128-bit binary signature. We follow the same procedure as the RootSIFT-HQE [43] method, by
replacing RootSIFT with our learned HardNet++ descriptor. Specifically, we use: (i) weighting of the
votes as a decreasing function of the Hamming distance [44]; (ii) burstiness suppression [44]; (iii)
multiple assignments of features to visual words [34, 45]; and (iv) QE with feature aggregation [43].
All parameters are set as in [43]. The performance of our method is the best reported on both
Oxford5k and Paris6k when learning the vocabulary on an independent dataset (mAP 89.1 was
reported [10] on Oxford5k by learning it on the same dataset comprising the relevant images), and
using the same amount of features (mAP 89.4 was reported [43] on Oxford5k when using twice as
many local features, i.e., 22M compared to the 12.5M used here).
5 Conclusions
We proposed a novel loss function for learning a local image descriptor that relies on hard negative
mining within a mini-batch and the maximization of the distance between the closest positive and
closest negative patches. The proposed sampling strategy outperforms classical hard-negative mining
and random sampling for the softmin, triplet margin and contrastive losses.
The resulting descriptor is compact (it has the same dimensionality as SIFT, 128), shows state-of-the-art
performance on standard matching, patch verification and retrieval benchmarks, and is
fast to compute on a GPU. The training source code and the trained convnets are available at
https://github.com/DagnyT/hardnet.
Acknowledgements
The authors were supported by the Czech Science Foundation Project GACR P103/12/G084, the
Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center, the
CTU student grant SGS17/185/OHK3/3T/13, and the MSMT LL1303 ERC-CZ grant. Anastasiya
Mishchuk was supported by the Szkocka Research Group Grant.
References
[1] Josef Sivic and Andrew Zisserman. Video Google: A text retrieval approach to object matching in videos. In International Conference on Computer Vision (ICCV), pages 1470–1477, 2003.
[2] Filip Radenovic, Giorgos Tolias, and Ondrej Chum. CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. In European Conference on Computer Vision (ECCV), pages 3–20, 2016.
[3] Matthew Brown and David G. Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision (IJCV), 74(1):59–73, 2007.
[4] Dmytro Mishkin, Jiri Matas, Michal Perdoch, and Karel Lenc. WxBS: Wide baseline stereo generalizations. arXiv:1504.06603, 2015.
[5] Johannes L. Schonberger, Filip Radenovic, Ondrej Chum, and Jan-Michael Frahm. From single image query to detailed 3D reconstruction. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5126–5134, 2015.
[6] Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 4104–4113, 2016.
[7] Christopher B. Choy, JunYoung Gwak, Silvio Savarese, and Manmohan Chandraker. Universal correspondence network. In Advances in Neural Information Processing Systems, pages 2414–2422, 2016.
[8] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In International Conference on Computer Vision (ICCV), 2015.
[9] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision (IJCV), 60(2):91–110, 2004.
[10] Relja Arandjelovic and Andrew Zisserman. Three things everyone should know to improve object retrieval. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 2911–2918, 2012.
[11] Krystian Mikolajczyk and Cordelia Schmid. Scale & affine invariant interest point detectors. International Journal of Computer Vision (IJCV), 60(1):63–86, 2004.
[12] Ethan Rublee, Vincent Rabaud, Kurt Konolige, and Gary Bradski. ORB: An efficient alternative to SIFT or SURF. In International Conference on Computer Vision (ICCV), pages 2564–2571, 2011.
[13] Kwang Moo Yi, Eduard Trulls, Vincent Lepetit, and Pascal Fua. LIFT: Learned invariant feature transform. In European Conference on Computer Vision (ECCV), pages 467–483, 2016.
[14] Xufeng Han, T. Leung, Y. Jia, R. Sukthankar, and A. C. Berg. MatchNet: Unifying feature and metric learning for patch-based matching. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 3279–3286, 2015.
[15] Sergey Zagoruyko and Nikos Komodakis. Learning to compare image patches via convolutional neural networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[16] Andrei Bursuc, Giorgos Tolias, and Herve Jegou. Kernel local descriptors with implicit rotation matching. In ACM International Conference on Multimedia Retrieval, 2015.
[17] Jingming Dong and Stefano Soatto. Domain-size pooling in local descriptors: DSP-SIFT. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5097–5106, 2015.
[18] Vassileios Balntas, Karel Lenc, Andrea Vedaldi, and Krystian Mikolajczyk. HPatches: A benchmark and evaluation of handcrafted and learned local descriptors. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[19] Johannes L. Schonberger, Hans Hardmeier, Torsten Sattler, and Marc Pollefeys. Comparative evaluation of hand-crafted and learned local features. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[20] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Descriptor learning using convex optimisation. In European Conference on Computer Vision (ECCV), pages 243–256, 2012.
[21] Marius Muja and David G. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration. In International Conference on Computer Vision Theory and Application (VISSAPP), pages 331–340, 2009.
[22] Edgar Simo-Serra, Eduard Trulls, Luis Ferraz, Iasonas Kokkinos, Pascal Fua, and Francesc Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. In International Conference on Computer Vision (ICCV), pages 118–126, 2015.
[23] Vassileios Balntas, Edgar Riba, Daniel Ponsa, and Krystian Mikolajczyk. Learning local feature descriptors with triplets and shallow convolutional neural networks. In British Machine Vision Conference (BMVC), 2016.
[24] Yurun Tian, Bin Fan, and Fuchao Wu. L2-Net: Deep learning of discriminative patch descriptor in Euclidean space. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
[25] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. In Artificial Intelligence and Statistics, pages 562–570, 2015.
[26] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[27] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML), pages 807–814, 2010.
[28] Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.
[29] PyTorch. http://pytorch.org.
[30] Vijay Kumar B. G., Gustavo Carneiro, and Ian Reid. Learning local image descriptors with deep siamese and triplet convolutional networks by minimising global loss functions. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 5385–5394, 2016.
[31] Krystian Mikolajczyk, Tinne Tuytelaars, Cordelia Schmid, Andrew Zisserman, Jiri Matas, Frederik Schaffalitzky, Timor Kadir, and Luc Van Gool. A comparison of affine region detectors. International Journal of Computer Vision (IJCV), 65(1):43–72, 2005.
[32] D. Randall Wilson and Tony R. Martinez. The general inefficiency of batch training for gradient descent learning. Neural Networks, 16(10):1429–1451, 2003.
[33] James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Object retrieval with large vocabularies and fast spatial matching. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007.
[34] James Philbin, Ondrej Chum, Michael Isard, Josef Sivic, and Andrew Zisserman. Lost in quantization: Improving particular object retrieval in large scale image databases. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2008.
[35] Jiri Matas, Ondrej Chum, Martin Urban, and Tomas Pajdla. Robust wide baseline stereo from maximally stable extremal regions. In British Machine Vision Conference (BMVC), pages 384–393, 2002.
[36] Michal Perdoch, Ondrej Chum, and Jiri Matas. Efficient representation of local geometry for large scale object retrieval. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 9–16, 2009.
[37] C. Lawrence Zitnick and Krishnan Ramnath. Edge foci interest points. In International Conference on Computer Vision (ICCV), pages 359–366, 2011.
[38] Dmytro Mishkin, Jiri Matas, and Michal Perdoch. MODS: Fast and robust method for two-view matching. Computer Vision and Image Understanding, 141:81–93, 2015. doi: https://doi.org/10.1016/j.cviu.2015.08.005.
[39] Daniel C. Hauagge and Noah Snavely. Image matching using local symmetry features. In Computer Vision and Pattern Recognition (CVPR), pages 206–213, 2012.
[40] Gehua Yang, Charles V. Stewart, Michal Sofka, and Chia-Ling Tsai. Registration of challenging image pairs: Initialization, estimation, and decision. Pattern Analysis and Machine Intelligence (PAMI), 29(11):1973–1989, 2007.
[41] Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. Location recognition over large time lags. Computer Vision and Image Understanding, 139:21–28, 2015. ISSN 1077-3142. doi: https://doi.org/10.1016/j.cviu.2015.05.016.
[42] Herve Jegou, Matthijs Douze, and Cordelia Schmid. Improving bag-of-features for large scale image search. International Journal of Computer Vision (IJCV), 87(3):316–336, 2010.
[43] Giorgos Tolias and Herve Jegou. Visual query expansion with or without geometry: refining local descriptors by feature aggregation. Pattern Recognition, 47(10):3466–3476, 2014.
[44] Herve Jegou, Matthijs Douze, and Cordelia Schmid. On the burstiness of visual elements. In Computer Vision and Pattern Recognition (CVPR), pages 1169–1176, 2009.
[45] Herve Jegou, Cordelia Schmid, Hedi Harzallah, and Jakob Verbeek. Accurate image search using the contextual dissimilarity measure. Pattern Analysis and Machine Intelligence (PAMI), 32(1):2–11, 2010.
[46] Andrej Mikulik, Michal Perdoch, Ondřej Chum, and Jiří Matas. Learning vocabularies over a fine quantization. International Journal of Computer Vision (IJCV), 103(1):163–175, 2013.
Accelerated Stochastic Greedy Coordinate Descent by Soft Thresholding Projection onto Simplex
Chaobing Song, Shaobo Cui, Yong Jiang, Shu-Tao Xia
Tsinghua University
{songcb16,shaobocui16}@mails.tsinghua.edu.cn
{jiangy, xiast}@sz.tsinghua.edu.cn *
Abstract
In this paper we study the well-known greedy coordinate descent (GCD) algorithm
to solve ℓ1-regularized problems and improve GCD by the two popular strategies:
Nesterov's acceleration and stochastic optimization. Firstly, based on an ℓ1-norm
square approximation, we propose a new rule for greedy selection which is nontrivial to solve but convex; then an efficient algorithm called "SOft ThreshOlding
PrOjection (SOTOPO)" is proposed to exactly solve an ℓ1-regularized ℓ1-norm
square approximation problem, which is induced by the new rule. Based on the
new rule and the SOTOPO algorithm, the Nesterov's acceleration and stochastic
optimization strategies are then successfully applied to the GCD algorithm. The resulting algorithm, called accelerated stochastic greedy coordinate descent (ASGCD),
has the optimal convergence rate O(√(1/ε)); meanwhile, it reduces the iteration
complexity of greedy selection up to a factor of the sample size. Both theoretically and
empirically, we show that ASGCD has better performance for high-dimensional
and dense problems with sparse solutions.
1 Introduction
In large-scale convex optimization, first-order methods are widely used due to their cheap iteration
cost. In order to improve the convergence rate and reduce the iteration cost further, two important
strategies are used in first-order methods: Nesterov's acceleration and stochastic optimization.
Nesterov's acceleration refers to the technique that uses an algebraic trick to accelerate first-order
algorithms, while stochastic optimization refers to methods that sample one training
example or one dual coordinate at random from the training data in each iteration. Assume the
objective function F(x) is convex and smooth. Let F* = min_{x∈ℝ^d} F(x) be the optimal value. In
order to find an approximate solution x that satisfies F(x) − F* ≤ ε, the vanilla gradient descent
method needs O(1/ε) iterations, while after applying the Nesterov's acceleration scheme [18],
the resulting accelerated full gradient method (AFG) [18] only needs O(√(1/ε)) iterations, which is
optimal for first-order algorithms [18]. Meanwhile, assume F(x) is also a finite sum of n sample
convex functions. By sampling one training example, the resulting stochastic gradient descent (SGD)
and its variants [15, 25, 1] can reduce the iteration complexity by a factor of the sample size. As an
alternative to SGD, randomized coordinate descent (RCD) can also reduce the iteration complexity
by a factor of the sample size [17] and obtain the optimal convergence rate O(√(1/ε)) by Nesterov's
acceleration [16, 14]. The development of gradient descent and RCD raises an interesting problem:
can the Nesterov's acceleration and stochastic optimization strategies be used to improve other
existing first-order algorithms?
* This work is supported by the National Natural Science Foundation of China under grant Nos. 61771273,
61371078.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we answer this question partly by studying coordinate descent with Gauss-Southwell
selection, i.e., greedy coordinate descent (GCD). GCD is widely used for solving sparse optimization
problems in machine learning [24, 11, 19]. If an optimization problem has a sparse solution, it is
more suitable than its counterpart RCD. However, the theoretical convergence rate is still O(1/ε).
Meanwhile, if the iteration complexity is comparable, GCD will be preferable to RCD [19]. However,
in the general case, in order to do an exact Gauss-Southwell selection, computing the full gradient
beforehand is necessary, which causes GCD to have much higher iteration complexity than RCD. To be
concrete, in this paper we consider the well-known nonsmooth ℓ1-regularized problem:
  min_{x∈ℝ^d} { F(x) def= f(x) + λ‖x‖₁ = (1/n) ∑_{j=1}^{n} f_j(x) + λ‖x‖₁ },   (1)

where λ ≥ 0 is a regularization parameter and f(x) = (1/n) ∑_{j=1}^{n} f_j(x) is a smooth convex function that is
a finite average of n smooth convex functions f_j(x). Given samples {(a_1, b_1), (a_2, b_2), ..., (a_n, b_n)}
with a_j ∈ ℝ^d (j ∈ [n] def= {1, 2, ..., n}), if each f_j(x) = f_j(a_j^T x, b_j), then (1) is an ℓ1-regularized
empirical risk minimization (ℓ1-ERM) problem. For example, if b_j ∈ ℝ and f_j(x) = (1/2)(b_j − a_j^T x)²,
(1) is Lasso; if b_j ∈ {−1, 1} and f_j(x) = log(1 + exp(−b_j a_j^T x)), ℓ1-regularized logistic regression
is obtained.
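For later reference, here is a minimal sketch (ours) of these two component losses and their gradients; the function names are illustrative.

import numpy as np

def lasso_fj(x, a, b):          # f_j(x) = (1/2)(b - a^T x)^2
    return 0.5 * (b - a @ x) ** 2

def lasso_grad_fj(x, a, b):     # grad f_j(x) = -(b - a^T x) a
    return -(b - a @ x) * a

def logistic_fj(x, a, b):       # f_j(x) = log(1 + exp(-b a^T x)), with b in {-1, +1}
    return np.log1p(np.exp(-b * (a @ x)))

def logistic_grad_fj(x, a, b):  # grad f_j(x) = -b a / (1 + exp(b a^T x))
    return -b * a / (1.0 + np.exp(b * (a @ x)))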
In the above nonsmooth case, the Gauss-Southwell rule has 3 different variants [19, 24]: GS-s, GS-r
and GS-q. The GCD algorithm with all 3 rules can be viewed as the following procedure: in
each iteration, based on a quadratic approximation of f(x) in (1), one minimizes a surrogate objective
function under the constraint that the direction vector used for the update has at most 1 nonzero entry.
The resulting problems under the 3 rules are easy to solve but are nonconvex due to the cardinality
constraint on the direction vector. However, when using Nesterov's acceleration scheme, convexity is needed
for the derivation of the optimal convergence rate O(√(1/ε)) [18]. Therefore, it is impossible to
accelerate GCD by the Nesterov's acceleration scheme under the existing 3 rules.
In this paper, we propose a novel variant of the Gauss-Southwell rule by using an ℓ1-norm square
approximation of f(x) rather than a quadratic approximation. The new rule involves an ℓ1-regularized
ℓ1-norm square approximation problem, which is nontrivial to solve but is convex. To exactly
solve this challenging problem, we propose an efficient SOft ThreshOlding PrOjection (SOTOPO)
algorithm. The SOTOPO algorithm has O(d + |Q| log |Q|) cost, where it is often the case that |Q| ≪ d.
The complexity result O(d + |Q| log |Q|) is better than the O(d log d) of its counterpart SOPOPO [20],
which is a Euclidean projection method.
Then, based on the new rule and SOTOPO, we accelerate GCD to attain the optimal convergence rate
O(√(1/ε)) by combining a delicately selected mirror descent step. Meanwhile, we show that it is not
necessary to compute the full gradient beforehand: sampling one training example and computing a noisy
gradient rather than the full gradient is enough to perform greedy selection. This stochastic optimization
technique reduces the iteration complexity of greedy selection by a factor of the sample size. The
final result is an accelerated stochastic greedy coordinate descent (ASGCD) algorithm.
Assume x* is an optimal solution of (1). Assume that each f_j(x) (for all j ∈ [n]) is L_p-smooth w.r.t.
‖·‖_p (p = 1, 2), i.e., for all x, y ∈ ℝ^d,

  ‖∇f_j(x) − ∇f_j(y)‖_q ≤ L_p ‖x − y‖_p,   (2)

where if p = 1, then q = ∞; if p = 2, then q = 2.
In order to find an x that satisfies F(x) − F(x*) ≤ ε, ASGCD needs O(√(C L₁) ‖x*‖₁ / √ε) iterations (see
(16)), where C is a function of d that varies slowly over d and is upper bounded by log²(d). For
high-dimensional and dense problems with sparse solutions, ASGCD has better performance than the
state of the art. Experiments demonstrate the theoretical result.
Notations: Let [d] denote the set {1, 2, ..., d}. Let ℝ₊ denote the set of nonnegative real numbers. For
x ∈ ℝ^d, let ‖x‖_p = (∑_{i=1}^{d} |x_i|^p)^{1/p} (1 ≤ p < ∞) denote the ℓp-norm and ‖x‖_∞ = max_{i∈[d]} |x_i|
denote the ℓ∞-norm of x. For a vector x, let dim(x) denote the dimension of x and x_i the i-th
element of x. For a gradient vector ∇f(x), let ∇_i f(x) denote the i-th element of ∇f(x). For a set
S, let |S| denote the cardinality of S. Denote the simplex Δ_d = {θ ∈ ℝ₊^d : ∑_{i=1}^{d} θ_i = 1}.
2 The SOTOPO algorithm
The proposed SOTOPO algorithm aims to solve the proposed new rule, i.e., minimizing the following
ℓ1-regularized ℓ1-norm square approximation problem,

  h̄ def= arg min_{g∈ℝ^d} { ⟨∇f(x), g⟩ + (1/(2η))‖g‖₁² + λ‖x + g‖₁ },   (3)
  x̄ def= x + h̄,   (4)

where x denotes the current iterate, η a step size, g the variable to optimize, h̄ the direction vector for the
update and x̄ the next iterate. The number of nonzero entries of h̄ determines how many coordinates
will be updated in this iteration. Unlike the quadratic approximation used in the GS-s, GS-r and GS-q
rules, in the new rule the coordinate(s) to update is implicitly selected by the sparsity-inducing
property of the ℓ1-norm square ‖g‖₁² rather than by the cardinality constraint ‖g‖₀ ≤ 1 [19, 24].
By [8, §9.4.2], when the nonsmooth term λ‖x + g‖₁ in (1) does not exist, the minimizer of the
ℓ1-norm square approximation (i.e., ℓ1-norm steepest descent) is equivalent to GCD. When λ‖x + g‖₁
exists, generally, there may be one or more coordinates to update under this new rule. Because of the
sparsity-inducing property of ‖g‖₁² and λ‖x + g‖₁, both the direction vector h̄ and the iterate
x̄ are sparse. In addition, (3) is an unconstrained problem and thus is feasible.
2.1 A variational reformulation and its properties
(3) involves the nonseparable nonsmooth term ‖g‖₁² and the nonsmooth term λ‖x + g‖₁. Because
there are two nonsmooth terms, it seems difficult to solve (3) directly. However, by the variational
identity ‖g‖₁² = inf_{θ∈Δ_d} ∑_{i=1}^{d} g_i²/θ_i in [5]², Lemma 1 shows that we can transform the
original nonseparable and nonsmooth problem into a separable and smooth optimization problem on
a simplex.

Lemma 1. By defining

  J(g, θ) def= ⟨∇f(x), g⟩ + (1/(2η)) ∑_{i=1}^{d} g_i²/θ_i + λ‖x + g‖₁,   (5)
  ḡ(θ) def= arg min_{g∈ℝ^d} J(g, θ),  J(θ) def= J(ḡ(θ), θ),   (6)
  θ̄ def= arg inf_{θ∈Δ_d} J(θ),   (7)

where ḡ(θ) is a vector function, the minimization problem (3) to find h̄ is equivalent to the
problem (7) to find θ̄, with the relation h̄ = ḡ(θ̄). Meanwhile, ḡ(θ) and J(θ) in (6) are both coordinate
separable, with the expressions

  ∀i ∈ [d], ḡ_i(θ) = ḡ_i(θ_i) = sign(x_i − ηθ_i ∇_i f(x)) · max{0, |x_i − ηθ_i ∇_i f(x)| − ηθ_i λ} − x_i,   (8)
  J(θ) = ∑_{i=1}^{d} J_i(θ_i), where J_i(θ_i) def= ∇_i f(x) · ḡ_i(θ_i) + (1/(2η)) ḡ_i²(θ_i)/θ_i + λ|x_i + ḡ_i(θ_i)|.   (9)
In Lemma 1, (8) is obtained by the iterative soft thresholding operator [7]. By Lemma 1, we can
reformulate (3) into the problem (5), which is about the two parameters g and θ. Then, by joint
convexity, we swap the optimization order of g and θ. Fixing θ and optimizing with respect to (w.r.t.)
g, we get a closed form of ḡ(θ), which is a vector function of θ. Substituting ḡ(θ) into J(g, θ),
we get the problem (7) about θ. Finally, the optimal solution h̄ of (3) can be obtained by h̄ = ḡ(θ̄).
The explicit expression of each J_i(θ_i) can be given by substituting (8) into (9). Because θ ∈ Δ_d, we
have 0 ≤ θ_i ≤ 1 for all i ∈ [d]. In the following Lemma 2, it is observed that the derivative J_i'(θ_i) can
be a constant or have a piecewise structure, which is the key to deducing the SOTOPO algorithm.
² The infima can be replaced by minimization if the convention "0/0 = 0" is used.
Lemma 2. Assume that for all i ∈ [d], J_i'(0) and J_i'(1) have been computed. Denote r_{i1} def=
|x_i|/√(−2ηJ_i'(0)) and r_{i2} def= |x_i|/√(−2ηJ_i'(1)). Then J_i'(θ_i) belongs to one of the 4 cases:

  (case a): J_i'(θ_i) = 0 for 0 ≤ θ_i ≤ 1;
  (case b): J_i'(θ_i) = J_i'(0) < 0 for 0 ≤ θ_i ≤ 1;
  (case c): J_i'(θ_i) = J_i'(0) for 0 ≤ θ_i ≤ r_{i1}, and J_i'(θ_i) = −x_i²/(2ηθ_i²) for r_{i1} < θ_i ≤ 1;
  (case d): J_i'(θ_i) = J_i'(0) for 0 ≤ θ_i ≤ r_{i1}, J_i'(θ_i) = −x_i²/(2ηθ_i²) for r_{i1} < θ_i < r_{i2},
            and J_i'(θ_i) = J_i'(1) for r_{i2} ≤ θ_i ≤ 1.

Although the formulation of J_i'(θ_i) is complicated, by summarizing the properties of the 4 cases in
Lemma 2 we have Corollary 1.
Corollary 1. For all i ∈ [d] and 0 ≤ θ_i ≤ 1, if the derivative J_i'(θ_i) is not always 0, then J_i'(θ_i) is a
non-decreasing, continuous function with value always less than 0.
Corollary 1 shows that, except for the trivial (case a), for all i ∈ [d], whichever of (case b),
(case c) or (case d) J_i'(θ_i) belongs to, they all share the same group of properties, which makes a consistent iterative
procedure possible for all the cases. The different formulations in the four cases mainly have an impact
on the stopping criteria of SOTOPO.
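A direct transcription of (8) and (9) into code may help in following the derivation; this is our sketch, with the convention 0/0 = 0 handled explicitly, and numerically evaluating J on points of the simplex reproduces the behavior described in Corollary 1.

import numpy as np

def g_bar(theta, x, grad, lam, eta):
    # Coordinate-wise soft-thresholding solution (8) of min_g J(g, theta).
    u = x - eta * theta * grad
    return np.sign(u) * np.maximum(0.0, np.abs(u) - eta * theta * lam) - x

def J_of_theta(theta, x, grad, lam, eta):
    # Separable objective (9); theta is a point of the simplex Delta_d.
    g = g_bar(theta, x, grad, lam, eta)
    ratio = np.divide(g ** 2, theta, out=np.zeros_like(g), where=theta > 0)
    return grad @ g + ratio.sum() / (2 * eta) + lam * np.abs(x + g).sum()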
2.2 The property of the optimal solution
The Lagrangian of the problem (7) is

  L(θ, γ, ζ) def= J(θ) + γ(∑_{i=1}^{d} θ_i − 1) − ⟨ζ, θ⟩,   (10)

where γ ∈ ℝ is a Lagrange multiplier and ζ ∈ ℝ₊^d is a vector of non-negative Lagrange multipliers.
Due to the coordinate separable property of J(θ) in (9), it follows that ∂J(θ)/∂θ_i = J_i'(θ_i). Then the
KKT condition of (10) can be written as

  ∀i ∈ [d]:  J_i'(θ_i) + γ − ζ_i = 0,  ζ_i θ_i = 0,  and  ∑_{i=1}^{d} θ_i = 1.   (11)
By reformulating the KKT condition (11), we have Lemma 3.
Lemma 3. If (γ̄, ζ̄, θ̄) is a stationary point of (10), then θ̄ is an optimal solution of (7). Meanwhile,
denote S def= {i : θ̄_i > 0} and T def= {j : θ̄_j = 0}; then the KKT condition can be formulated as

  ∑_{i∈S} θ̄_i = 1;
  for all j ∈ T, θ̄_j = 0;   (12)
  for all i ∈ S, γ̄ = −J_i'(θ̄_i) ≥ max_{j∈T} {−J_j'(0)}.

By Lemma 3, if the set S in Lemma 3 is known beforehand, then we can compute θ̄ by simply
applying the equations in (12). Therefore, finding the optimal solution θ̄ is equivalent to finding the
set of the nonzero elements of θ̄.
2.3 The soft thresholding projection algorithm
In Lemma 3, for each i ∈ [d] with θ̄_i > 0, it is shown that the negative derivative −J_i'(θ̄_i) is equal to a
single variable γ̄. Therefore, a much simpler problem can be obtained if we know the coordinates of
these positive elements. At first glance, it seems difficult to identify these coordinates, because the
number of potential subsets of coordinates is clearly exponential in the dimension d. However, the
property clarified by Lemma 2 enables an efficient procedure for identifying the nonzero elements of
θ̄. Lemma 4 is a key tool in deriving this procedure.
Lemma 4 (Nonzero element identification). Let θ̄ be an optimal solution of (7). Let s and t be two
coordinates such that J_s'(0) < J_t'(0). If θ̄_s = 0, then θ̄_t must be 0 as well; equivalently, if θ̄_t > 0,
then θ̄_s must be greater than 0 as well.
Lemma 4 shows that if we sort u def= −∇J(0) such that u_{i1} ≥ u_{i2} ≥ ··· ≥ u_{id}, where {i_1, i_2, ..., i_d}
is a permutation of [d], then the set S in Lemma 3 is of the form {i_1, i_2, ..., i_ϱ}, where 1 ≤ ϱ ≤ d.
If ϱ is obtained, then we can use the fact that for all j ∈ [ϱ],

  J_{i_j}'(θ̄_{i_j}) = −γ̄  and  ∑_{j=1}^{ϱ} θ̄_{i_j} = 1   (13)

to compute γ̄. Therefore, by Lemma 4, we can efficiently identify the nonzero elements of the optimal
solution θ̄ after a sort operation, which costs O(d log d). However, based on Lemmas 2 and 3, the sort
cost O(d log d) can be further reduced by the following Lemma 5.
Lemma 5 (Efficient identification). Assume θ̄ and S are given in Lemma 3. Then for all i ∈ S,

  −J_i'(0) ≥ max_{j∈[d]} {−J_j'(1)}.   (14)
By Lemma 5, before ordering u, we can filter out all the coordinates i that satisfy −J_i'(0) <
max_{j∈[d]} {−J_j'(1)}. Based on Lemmas 4 and 5, we propose the SOft ThreshOlding PrOjection
(SOTOPO) algorithm in Alg. 1 to efficiently obtain an optimal solution θ̄. In step 1, by Lemma 5,
we find the quantities v_m, i_m and Q. In step 2, by Lemma 4, we sort the elements {−J_i'(0) | i ∈ Q}.
In step 3, because S in Lemma 3 is of the form {i_1, i_2, ..., i_ϱ}, we search for the quantity ϱ from
1 to |Q| + 1 until a stopping criterion is met. In Alg. 1, ϱ or ϱ − 1 may be the number of nonzero
elements of θ̄. In step 4, we compute the γ̄ in Lemma 3 according to the conditions. In step 5,
the optimal θ̄ and the corresponding h̄, x̄ are given.
Algorithm 1  x̄ = SOTOPO(∇f(x), x, λ, η)

1. Find
   (v_m, i_m) def= (max_{i∈[d]} {−J_i'(1)}, arg max_{i∈[d]} {−J_i'(1)}),  Q def= {i ∈ [d] : −J_i'(0) > v_m}.
2. Sort {−J_i'(0) | i ∈ Q} such that −J_{i1}'(0) ≥ −J_{i2}'(0) ≥ ··· ≥ −J_{i|Q|}'(0), where
   {i_1, i_2, ..., i_{|Q|}} is a permutation of the elements in Q. Denote
   v def= (−J_{i1}'(0), −J_{i2}'(0), ..., −J_{i|Q|}'(0), v_m),  and  i_{|Q|+1} def= i_m, v_{|Q|+1} def= v_m.
3. For j ∈ [|Q| + 1], denote R_j = {i_k | k ∈ [j]}. Search from 1 to |Q| + 1 to find the quantity
   ϱ def= min{ j ∈ [|Q| + 1] : −J_{i_j}'(0) = −J_{i_j}'(1)  or  ∑_{l∈R_j} |x_l| ≥ √(2ηv_j)  or  j = |Q| + 1 }.
4. The γ̄ in Lemma 3 is given by
   γ̄ = (∑_{l∈R_{ϱ−1}} |x_l|)² / (2η)   if ∑_{l∈R_{ϱ−1}} |x_l| ≥ √(2ηv_ϱ);
   γ̄ = v_ϱ                             otherwise.
5. Then the θ̄ in Lemma 3 and its corresponding h̄, x̄ in (3) and (4) are obtained by
   (θ̄_l, h̄_l, x̄_l) = ( |x_l|/√(2ηγ̄), −x_l, 0 )                                   if l ∈ R_ϱ\{i_ϱ};
   (θ̄_l, h̄_l, x̄_l) = ( 1 − ∑_{k∈R_ϱ\{i_ϱ}} θ̄_k, ḡ_l(θ̄_l), x_l + ḡ_l(θ̄_l) )      if l = i_ϱ;
   (θ̄_l, h̄_l, x̄_l) = (0, 0, x_l)                                                  if l ∈ [d]\R_ϱ.
In Theorem 1, we give the main result about the SOTOPO algorithm.
Theorem 1. The SOTOPO algorithm in Alg. 1 computes the exact minimizer h̄, x̄ of the ℓ1-regularized
ℓ1-norm square approximation problem in (3) and (4).
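Because Alg. 1 has many branches, a brute-force reference solver is convenient for testing it on random instances; the sketch below (ours, assuming the CVXPY package is available) solves (3) directly, and its output can be compared with SOTOPO's.

import cvxpy as cp
import numpy as np

def sotopo_reference(grad, x, lam, eta):
    # Directly solve (3): min_g <grad, g> + ||g||_1^2 / (2 eta) + lam * ||x + g||_1.
    g = cp.Variable(len(x))
    obj = grad @ g + cp.square(cp.norm1(g)) / (2 * eta) + lam * cp.norm1(x + g)
    cp.Problem(cp.Minimize(obj)).solve()
    return g.value  # the direction h_bar; the next iterate is x + g.value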
The SOTOPO algorithm seems complicated but is indeed efficient. The dominant operations in Alg.
1 are steps 1 and 2, with a total cost of O(d + |Q| log |Q|). To show the effect of the complexity
reduction achieved by Lemma 5, we give the following fact.
Proposition 1. For the optimization problem defined in (5)-(7), where λ is the regularization parameter of the original problem (1), we have

  0 ≤ max_{i∈[d]} {√(−2J_i'(0)/η)} − max_{j∈[d]} {√(−2J_j'(1)/η)} ≤ 2λ.   (15)

Assume v_m is defined in step 1 of Alg. 1. By Proposition 1, for all i ∈ Q,

  √(−2J_i'(0)/η) ≤ max_{k∈[d]} {√(−2J_k'(0)/η)} ≤ max_{j∈[d]} {√(−2J_j'(1)/η)} + 2λ = √(2v_m/η) + 2λ.

Therefore, at least the coordinates j that satisfy √(−2J_j'(0)/η) > √(2v_m/η) + 2λ will not be contained in
Q. In practice, this can considerably reduce the sort complexity.
Remark 1. SOTOPO can be viewed as an extension of the SOPOPO algorithm [20], changing the
objective function from a Euclidean distance to the more general function J(θ) in (9). It should be noted
that Lemma 5 does not have a counterpart in the case where the objective function is a Euclidean distance
[20]. In addition, an extension of the randomized median finding algorithm [12], with linear time in
our setting, also deserves research. Due to the limited space, it is left for further discussion.
3 The ASGCD algorithm
Now we can come back to our motivation, i.e., accelerating GCD to obtain the optimal convergence
rate O(1/√ε) by Nesterov's acceleration and reducing the complexity of greedy selection by stochastic optimization. The main idea is that, although like any (block) coordinate descent algorithm the
proposed new rule, i.e., minimizing the problem in (3), performs an update on one or several coordinates,
it is a generalized proximal gradient descent problem based on the ℓ1-norm. Therefore this rule can be
applied within the existing Nesterov's acceleration and stochastic optimization framework "Katyusha"
[1] if it can be solved efficiently. The final result is the accelerated stochastic greedy coordinate
descent (ASGCD) algorithm, which is described in Alg. 2.
Algorithm 2  ASGCD

  ν = log(d) − 1 − √((log(d) − 1)² − 1);
  p = 1 + ν,  q = p/(p − 1),  C = d^{2ν/(1+ν)}/ν;
  z_0 = y_0 = x̄_0 = ϑ_0 = 0;
  τ_2 = 1/2,  m = ⌈n/b⌉,  η = 1/((1 + 2(n − b)/(b(n − 1))) L₁);
  for s = 0, 1, 2, ..., S − 1, do
    1. τ_{1,s} = 2/(s + 4),  α_s = η/(τ_{1,s} C);
    2. μ_s = ∇f(x̄_s);
    3. for l = 0, 1, ..., m − 1, do
       (a) k = (sm) + l;
       (b) randomly sample a mini batch B of size b from {1, 2, ..., n} with equal probability;
       (c) x_{k+1} = τ_{1,s} z_k + τ_2 x̄_s + (1 − τ_{1,s} − τ_2) y_k;
       (d) ∇̃_{k+1} = μ_s + (1/b) ∑_{j∈B} (∇f_j(x_{k+1}) − ∇f_j(x̄_s));
       (e) y_{k+1} = SOTOPO(∇̃_{k+1}, x_{k+1}, λ, η);
       (f) (z_{k+1}, ϑ_{k+1}) = pCOMID(∇̃_{k+1}, ϑ_k, q, λ, α_s);
    end for
    4. x̄_{s+1} = (1/m) ∑_{l=1}^{m} y_{sm+l};
  end for
  Output: x̄_S
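Step 3(d) of Alg. 2 is the standard variance-reduced (SVRG-style) gradient estimator; the following is a minimal sketch (ours) of that single step, where grad_fj(j, x) is assumed to return ∇f_j(x).

import numpy as np

def vr_gradient(grad_fj, x, x_snapshot, full_grad_snapshot, batch):
    # Unbiased estimate of grad f(x): E[estimate] = grad f(x),
    # since E[grad f_j(x) - grad f_j(x_snapshot)] = grad f(x) - full_grad_snapshot.
    est = np.array(full_grad_snapshot, dtype=float)
    for j in batch:
        est += (grad_fj(j, x) - grad_fj(j, x_snapshot)) / len(batch)
    return est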
Algorithm 3  (x̃, ϑ̃) = pCOMID(g, ϑ, q, λ, α)

1. ∀i ∈ [d], ϑ̃_i = sign(ϑ_i − αg_i) · max{0, |ϑ_i − αg_i| − αλ};
2. ∀i ∈ [d], x̃_i = sign(ϑ̃_i) |ϑ̃_i|^{q−1} / ‖ϑ̃‖_q^{q−2};
3. Output: x̃, ϑ̃.
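In code, Alg. 3 is a soft-thresholding step on the mirror variable ϑ followed by the p-norm link function; a minimal sketch (ours):

import numpy as np

def pcomid(g, theta, q, lam, alpha):
    # Step 1: soft thresholding in the mirror (dual) variable.
    u = theta - alpha * g
    theta_new = np.sign(u) * np.maximum(0.0, np.abs(u) - alpha * lam)
    # Step 2: p-norm link function mapping theta back to the primal variable.
    nq = np.linalg.norm(theta_new, q)
    if nq == 0.0:
        return np.zeros_like(theta), theta_new
    x_new = np.sign(theta_new) * np.abs(theta_new) ** (q - 1) / nq ** (q - 2)
    return x_new, theta_new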
In Alg. 2, the gradient descent step 3(e) is solved by the proposed SOTOPO algorithm, while the
mirror descent step 3(f) is solved by the COMID algorithm with p-norm divergence [13, Sec. 7.2].
We denote this mirror descent step as pCOMID in Alg. 3. All other parts are standard steps in the
Katyusha framework, except for some parameter settings. For example, instead of the custom setting
p = 1 + 1/log(d) [21, 13], the particular choice p = 1 + ν (ν is defined in Alg. 2) is used to minimize
C = d^{2ν/(1+ν)}/ν. C varies slowly over d and is upper bounded by log²(d). Meanwhile, the mirror
descent step size α_s depends on the extra constant C. Furthermore, the step size η = 1/((1 + 2(n − b)/(b(n − 1))) L₁) is used, where L₁ is defined
in (2). Finally, unlike [1, Alg. 2], we keep the batch size b as an algorithm parameter to cover both the
stochastic case b < n and the deterministic case b = n. To the best of our knowledge, the existing
GCD algorithms are deterministic; therefore, by setting b = n, we can compare with the existing
GCD algorithms better.
Based on the efficient SOTOPO algorithm, ASGCD has nearly the same iteration complexity as
the standard form [1, Alg. 2] of Katyusha. Meanwhile, we have the following convergence rate.
Theorem 2. If each f_j(x) (j ∈ [n]) is convex and L₁-smooth in (2), and x* is an optimum of the
ℓ1-regularized problem (1), then ASGCD satisfies

  E[F(x̄_S)] − F(x*) ≤ (4/(S + 3)²) · (1 + (1 + 2β(b))/(2m)) · C L₁ ‖x*‖₁² = O(C L₁ ‖x*‖₁² / S²),   (16)

where β(b) = (n − b)/(b(n − 1)), and S, b, m and C are given in Alg. 2. In other words, ASGCD achieves an
ε-additive error (i.e., E[F(x̄_S)] − F(x*) ≤ ε) using at most O(√(C L₁) ‖x*‖₁ / √ε) iterations.
In Table 1, we give the convergence rates of the existing algorithms and ASGCD for solving the ℓ1-regularized problem (1). In the first column, "Acc" and "Non-Acc" denote whether the corresponding
algorithms are Nesterov-accelerated or not, respectively; "Primal" and "Dual" denote whether the corresponding algorithms solve the primal problem (1) or its regularized dual problem [22], respectively;
ℓ2-norm and ℓ1-norm denote whether the theoretical guarantee is based on the ℓ2-norm or the ℓ1-norm, respectively.
In terms of the ℓ2-norm based guarantee, Katyusha and APPROX give the state-of-the-art convergence rate
O(√L₂ ‖x*‖₂ / √ε). In terms of the ℓ1-norm based guarantee, GCD gives the state-of-the-art convergence rate
O(L₁ ‖x*‖₁² / ε), which is only applicable for the smooth case λ = 0 in (1). When λ > 0, the generalized
GS-r, GS-s and GS-q rules generally have worse theoretical guarantees than GCD [19]. In contrast, the
bound of ASGCD in this paper is O(√(L₁ log d) ‖x*‖₁ / √ε), which can be viewed as an accelerated version
of the ℓ1-norm based guarantee O(L₁ ‖x*‖₁² / ε). Meanwhile, because the bound depends on ‖x*‖₁ rather
than ‖x*‖₂ and on L₁ rather than L₂ (L₁ and L₂ are defined in (2)), for the ℓ1-ERM problem, if the
samples are high-dimensional, dense and the regularization parameter λ is relatively large, then it is
possible that L₁ ≪ L₂ (in the extreme case, L₂ = dL₁ [11]) and ‖x*‖₁ ≈ ‖x*‖₂. In this case, the
ℓ1-norm based guarantee O(√(L₁ log d) ‖x*‖₁ / √ε) of ASGCD is better than the ℓ2-norm based guarantee
O(√L₂ ‖x*‖₂ / √ε) of Katyusha and APPROX. Finally, whether the log d factor in the bound of ASGCD
(which also appears in the COMID [13] analysis) is necessary deserves further research.
Remark 2. When the batch size b = n, ASGCD is a deterministic algorithm. In this case, we can use
a better smoothness constant T₁ that satisfies ‖∇f(x) − ∇f(y)‖_∞ ≤ T₁ ‖x − y‖₁ rather than L₁ [1].
Remark 3. The necessity of computing the full gradient beforehand is the main bottleneck of GCD
in applications [19]. There exists some work [11] that avoids the computation of the full gradient by
performing an approximate greedy selection. However, the method in [11] needs preprocessing and an
incoherence condition on the dataset, and is somewhat complicated. Contrary to [11], the proposed
ASGCD algorithm reduces the complexity of greedy selection by a factor up to n in terms of the
amortized cost by simply applying the existing stochastic variance reduction framework.
Table 1: Convergence rates on ℓ₁-regularized empirical risk minimization problems. (For GCD, the
convergence rate applies for λ = 0.)

| Algorithm type | Paper | Convergence rate |
|---|---|---|
| Non-Acc, Primal, ℓ₂-norm | SAGA [10] | O(L₂ ‖x*‖₂² / ε) |
| Acc, Primal, ℓ₂-norm | Katyusha [1], APPROX [14] | O(√(L₂) ‖x*‖₂ / √ε) |
| Acc, Dual, ℓ₂-norm | ACC-SDCA [23], SPDC [26], APCG [16] | O(√(L₂) ‖x*‖₂ log(1/ε) / √ε) |
| Non-Acc, Primal, ℓ₁-norm | GCD [3] | O(L₁ ‖x*‖₁² / ε) |
| Acc, Primal, ℓ₁-norm | ASGCD (this paper) | O(√(L₁ log d) ‖x*‖₁ / √ε) |
4 Experiments
In this section, we use numerical experiments to demonstrate the theoretical results in Section 3
and to show the empirical performance of ASGCD with batch size b = 1 and of its deterministic version
with b = n (in Fig. 1 they are denoted ASGCD (b = 1) and ASGCD (b = n), respectively). In
addition, following the practice of measuring data access rather than CPU time [21] and the recent SGD
and RCD literature [15, 16, 1], we use data access, i.e., the number of times the algorithm
accesses the data matrix, to measure algorithm performance. To show the effect of Nesterov's
acceleration, we compare ASGCD (b = n) with the non-accelerated greedy coordinate descent
with the GS-q rule, i.e., coordinate gradient descent (CGD) [24]. To show the effect of both Nesterov's
acceleration and the stochastic optimization strategy, we compare ASGCD (b = 1) with Katyusha
[1, Alg. 2]. To show the effect of the proposed new rule in Section 2, which is based on the ℓ₁-norm
square approximation, we compare ASGCD (b = n) with the ℓ₂-norm based proximal accelerated
full gradient (AFG) method implemented in the linear coupling framework [4]. Meanwhile, as a benchmark
of stochastic optimization for problems with finite-sum structure, we also show the performance
of the proximal stochastic variance reduced gradient (SVRG) method [25]. In addition, based on [1] and our
experiments, we find that Katyusha [1, Alg. 2] has the best empirical performance in general for
the ℓ₁-regularized problem (1); therefore other well-known state-of-the-art algorithms, such as APCG
[16] and accelerated SDCA [23], are not included in the experiments.
The datasets are obtained from LIBSVM data [9] and summarized in Table 2. All the algorithms are
used to solve the following lasso problem

  min_{x∈ℝᵈ} { F(x) = f(x) + λ‖x‖₁ = (1/(2n)) ‖b − Ax‖₂² + λ‖x‖₁ }   (17)

on the 3 datasets, where A = (a₁, a₂, …, aₙ)ᵀ = (h₁, h₂, …, h_d) ∈ ℝ^{n×d}, with each aⱼ ∈ ℝᵈ
representing a sample vector and hᵢ ∈ ℝⁿ representing a feature vector, and b ∈ ℝⁿ is the prediction
vector.
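For reference, a minimal NumPy encoding of the objective (17) and of the gradient of its smooth part; this is a sketch of ours, not the authors' implementation:

```python
import numpy as np

def lasso_objective(A, b, lam):
    """F(x) = (1/(2n)) * ||b - A x||_2^2 + lam * ||x||_1, as in (17)."""
    n = A.shape[0]
    def F(x):
        r = b - A @ x
        return 0.5 / n * (r @ r) + lam * np.abs(x).sum()
    def grad_f(x):          # gradient of the smooth part f only
        return A.T @ (A @ x - b) / n
    return F, grad_f
```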
Table 2: Characteristics of three real datasets.

| Dataset name | # samples n | # features d |
|---|---|---|
| Leukemia | 38 | 7129 |
| Gisette | 6000 | 5000 |
| Mnist | 60000 | 780 |
For ASGCD (b = 1) and Katyusha [1, Alg. 2], we can use the tight smoothness constants L₁ =
max_{j∈[n], i∈[d]} |a_{j,i}|² and L₂ = max_{j∈[n]} ‖aⱼ‖₂², respectively, in their implementations.
Figure 1: Comparing ASGCD (b = 1) and ASGCD (b = n) with CGD, SVRG, AFG and Katyusha on Lasso. [Six panels: datasets Leu, Gisette and Mnist, for λ = 10⁻² and λ = 10⁻⁶; each panel plots the log loss against the number of passes over the data, with curves for CGD, AFG, ASGCD (b = n), SVRG, Katyusha and ASGCD (b = 1).]
For ASGCD (b = n) and AFG, the better smoothness constants T₁ = max_{i∈[d]} ‖hᵢ‖₂²/n and
T₂ = ‖A‖₂²/n are used, respectively. The learning rates of CGD and SVRG are tuned over
{10⁻⁶, 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹}.
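These four constants can be computed directly from the data matrix; the sketch below is our reading of the (garbled) text, in particular the interpretation of T₂ as the squared spectral norm of A over n is an assumption:

```python
import numpy as np

def smoothness_constants(A):
    n = A.shape[0]
    L1 = np.max(A ** 2)                    # max_{j,i} a_{j,i}^2
    L2 = np.max((A ** 2).sum(axis=1))      # max_j ||a_j||_2^2
    T1 = np.max((A ** 2).sum(axis=0)) / n  # max_i ||h_i||_2^2 / n
    T2 = np.linalg.norm(A, 2) ** 2 / n     # ||A||_2^2 / n (spectral norm; assumed)
    return L1, L2, T1, T2
```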
Table 3: Factor rates (r₁, r₂) for the 6 cases.

| λ | Leu | Gisette | Mnist |
|---|---|---|---|
| 10⁻² | (0.85, 1.33) | (0.88, 0.74) | (5.85, 3.02) |
| 10⁻⁶ | (1.45, 2.27) | (3.51, 2.94) | (5.84, 3.02) |
We use λ = 10⁻⁶ and λ = 10⁻² in the experiments. In addition, for each case (dataset, λ), AFG is
used to find an optimum x* with sufficient accuracy.

The performance of the 6 algorithms is plotted in Fig. 1. We use the log loss log(F(x_k) − F(x*)) on the
y-axis; the x-axis denotes the number of times the algorithm accesses the data matrix A. For example, ASGCD
(b = n) accesses A once in each iteration, while ASGCD (b = 1) accesses A twice in an entire outer
iteration. For each case (dataset, λ), we compute the rates
(r₁, r₂) = (√(C L₁) ‖x*‖₁ / (√(L₂) ‖x*‖₂), √(C T₁) ‖x*‖₁ / (√(T₂) ‖x*‖₂)) in Table 3.
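The factor rates can be computed directly; the formula below is our reading of the garbled expression above:

```python
import numpy as np

def factor_rates(x_star, C, L1, L2, T1, T2):
    n1, n2 = np.abs(x_star).sum(), np.linalg.norm(x_star)
    r1 = np.sqrt(C * L1) * n1 / (np.sqrt(L2) * n2)   # ASGCD (b=1) vs. Katyusha
    r2 = np.sqrt(C * T1) * n1 / (np.sqrt(T2) * n2)   # ASGCD (b=n) vs. AFG
    return r1, r2
```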
First, because of the acceleration effect, ASGCD (b = n) is always better than the
non-accelerated CGD algorithm. Second, by comparing ASGCD (b = 1) with Katyusha and ASGCD
(b = n) with AFG, we find that for the cases (Leu, 10⁻²), (Leu, 10⁻⁶) and (Gisette, 10⁻²), ASGCD
(b = 1) dominates Katyusha [1, Alg. 2] and ASGCD (b = n) dominates AFG. The theoretical
analysis in Section 3 shows that if r₁ is relatively small, such as around 1, then ASGCD (b = 1)
will be better than [1, Alg. 2]. For the other 3 cases, [1, Alg. 2] and AFG are better. The consistency
between Table 3 and Fig. 1 supports the theoretical analysis.
References
[1] Zeyuan Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. ArXiv e-prints,
abs/1603.05953, 2016.
[2] Zeyuan Allen-Zhu, Zhenyu Liao, and Lorenzo Orecchia. Spectral sparsification and regret minimization
beyond matrix multiplicative updates. In Proceedings of the Forty-Seventh Annual ACM Symposium on
Theory of Computing, pages 237–245. ACM, 2015.
[3] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and
mirror descent. ArXiv e-prints, abs/1407.1537, July 2014.
[4] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror
descent. ArXiv e-prints, abs/1407.1537, July 2014.
[5] Francis Bach, Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, et al. Optimization with sparsity-inducing
penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
[6] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for
trace norms. Inventiones Mathematicae, 115(1):463–482, 1994.
[7] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems.
SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[8] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] Chih-Chung Chang. LIBSVM: Introduction and benchmarks. http://www.csie.ntu.edu.tw/~cjlin/libsvm,
2000.
[10] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with
support for non-strongly convex composite objectives. In Advances in Neural Information Processing
Systems, pages 1646–1654, 2014.
[11] Inderjit S. Dhillon, Pradeep K. Ravikumar, and Ambuj Tewari. Nearest neighbor based greedy coordinate
descent. In Advances in Neural Information Processing Systems, pages 2160–2168, 2011.
[12] John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the
l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine
Learning, pages 272–279. ACM, 2008.
[13] John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, and Ambuj Tewari. Composite objective mirror descent.
In COLT, pages 14–26, 2010.
[14] Olivier Fercoq and Peter Richtárik. Accelerated, parallel, and proximal coordinate descent. SIAM Journal
on Optimization, 25(4):1997–2023, 2015.
[15] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction.
In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[16] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method. In Advances
in Neural Information Processing Systems, pages 3059–3067, 2014.
[17] Yu Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM
Journal on Optimization, 22(2):341–362, 2012.
[18] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer Science
& Business Media, 2013.
[19] Julie Nutini, Mark Schmidt, Issam H. Laradji, Michael Friedlander, and Hoyt Koepke. Coordinate
descent converges faster with the Gauss-Southwell rule than random selection. In Proceedings of the 32nd
International Conference on Machine Learning (ICML-15), pages 1632–1641, 2015.
[20] Shai Shalev-Shwartz and Yoram Singer. Efficient learning of label ranking by soft projections onto
polyhedra. Journal of Machine Learning Research, 7(Jul):1567–1599, 2006.
[21] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for l1-regularized loss minimization. Journal
of Machine Learning Research, 12(Jun):1865–1892, 2011.
[22] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss
minimization. Journal of Machine Learning Research, 14(Feb):567–599, 2013.
[23] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, pages 64–72, 2014.
[24] Paul Tseng and Sangwoon Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1):387–423, 2009.
[25] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction.
SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[26] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk
minimization. In Proceedings of the 32nd International Conference on Machine Learning, volume 951,
page 2015, 2015.
[27] Shuai Zheng and James T. Kwok. Fast-and-light stochastic ADMM. In The 25th International Joint
Conference on Artificial Intelligence (IJCAI-16), New York City, NY, USA, 2016.
| 7069 |@word version:2 norm:30 seems:3 nd:2 bn:1 delicately:1 sgd:3 reduction:4 necessity:1 tuned:1 existing:7 current:1 comparing:2 written:1 must:2 john:2 axk22:1 numerical:1 additive:1 cheap:1 enables:1 update:6 stationary:1 greedy:15 selected:2 intelligence:1 amir:1 xk:4 steepest:1 clarified:1 firstly:1 simpler:1 zhang:5 atj:3 mathematical:1 direct:1 symposium:1 ik:1 director:1 a2j:1 introductory:1 theoretically:1 indeed:1 p1:2 nonseparable:2 decreasing:1 cpu:1 cardinality:3 notation:1 bounded:2 gisette:2 spectively:1 medium:1 minimizes:1 finding:3 sparsification:1 guarantee:7 firstorder:1 exactly:2 preferable:1 k2:13 demonstrates:1 grant:1 positive:1 before:1 t1:3 tsinghua:3 infima:1 id:1 jiang:1 incoherence:1 twice:1 china:1 challenging:1 ji0:25 limited:1 practice:1 block:1 regret:1 procedure:4 sdca:2 ri2:3 empirical:5 attain:1 composite:2 projection:8 boyd:1 word:1 get:3 onto:3 selection:10 operator:1 nb:1 risk:3 applying:3 impossible:1 optimize:1 equivalent:3 deterministic:4 lagrangian:1 www:1 convex:11 identifying:2 rule:20 eukemia:1 deriving:1 vandenberghe:1 cgd:5 his:1 hd:1 coordinate:33 updated:1 qq:1 ui2:1 exact:2 zhenyu:1 sparsityinducing:1 us:1 olivier:1 programming:1 trick:1 element:11 amortized:1 trend:1 yk1:1 kxk1:5 observed:1 csie:1 solved:3 ykp:1 ordering:1 eu:1 richt:1 yk:2 pd:3 convexity:3 complexity:12 nesterov:16 raise:1 solving:1 tight:1 algebra:1 predictive:1 eric:1 efficiency:1 swap:1 accelerate:3 joint:2 afg:9 derivation:1 fast:3 kp:1 artificial:1 jk0:1 shalev:6 widely:2 solve:10 otherwise:1 gi:4 transform:1 noisy:1 final:2 propose:4 j2:3 inducing:2 kh:1 cl1:3 convergence:13 ijcai:1 optimum:2 r1:2 incremental:1 converges:1 i2s:1 coupling:3 fixing:1 nearest:1 ij:2 pp1:1 keith:1 solves:1 p2:1 implemented:1 involves:2 come:1 convention:1 met:1 direction:3 gi2:1 filter:1 stochastic:26 kb:1 proposition:2 im:3 extension:2 around:1 exp:1 bj:5 claim:1 substituting:2 achieves:1 a2:2 applicable:1 label:1 re6:1 successfully:1 tool:1 city:1 minimization:10 clearly:1 always:3 aim:1 rather:7 pn:1 avoid:1 shrinkage:1 koepke:1 corollary:3 polyhedron:1 mainly:1 summarizing:1 comid:2 dim:1 stopping:2 carlen:1 relation:1 i1:4 tao:1 arg:4 dual:6 colt:1 denoted:1 development:1 art:4 g2r:1 equal:2 once:1 beach:1 sampling:2 progressive:1 yu:1 icml:2 nearly:1 simplex:3 nonsmooth:8 t2:2 piecewise:1 randomly:1 resulted:4 national:1 divergence:1 beck:1 replaced:1 n1:3 ab:3 huge:1 zheng:1 custom:1 zhaosong:1 rodolphe:1 pradeep:1 light:1 primal:3 beforehand:4 maxi2:3 unification:2 necessary:3 euclidean:3 yuchen:1 plotted:1 theoretical:7 column:1 soft:7 teboulle:1 cover:1 deserves:1 cost:7 entry:2 subset:1 ri1:5 kq:1 uniform:1 seventh:1 johnson:1 answer:1 varies:2 atyusha:1 proximal:7 considerably:1 st:1 international:4 randomized:2 siam:4 vm:7 hoyt:1 michael:1 concrete:1 slowly:2 worse:1 chung:1 combing:1 potential:1 b2:1 sec:1 summarized:1 satisfy:2 ranking:1 depends:2 multiplicative:1 h1:1 closed:1 francis:2 sort:6 complicated:3 parallel:1 shai:6 jul:1 simon:1 minimize:1 square:9 accuracy:1 variance:4 characteristic:1 efficiently:3 identify:2 identification:2 lu:1 acc:8 mathematicae:1 inventiones:1 james:1 dataset:4 popular:1 leu:3 knowledge:1 jenatton:1 back:1 appears:1 higher:1 rimal:4 katyusha:13 rie:1 formulation:2 strongly:1 furthermore:1 shuai:1 until:1 glance:1 logistic:1 aj:2 usa:2 effect:5 name:1 k22:1 multiplier:2 lgorithm:1 counterpart:3 regularization:3 reformulating:1 nonzero:7 dhillon:1 i2:9 noted:1 kak:1 criterion:2 generalized:2 yun:1 demonstrate:2 performs:1 l1:13 
allen:4 fj:9 duchi:2 variational:2 novel:1 empirically:1 ji:11 gcd:20 volume:2 belong:1 lieven:1 cambridge:1 smoothness:1 rd:6 vanilla:1 unconstrained:1 pm:1 approx:3 consistency:1 access:6 jj0:4 deduce:1 feb:1 dominant:1 recent:1 optimizing:1 inf:2 belongs:1 nonconvex:1 inequality:1 greater:1 somewhat:1 zeyuan:4 forty:1 july:2 ii:1 stephen:1 full:7 rj:2 reduces:3 smooth:10 faster:1 bach:2 long:1 lin:4 ravikumar:1 a1:2 impact:1 prediction:1 variant:3 regression:1 basic:1 liao:1 chandra:1 arxiv:3 iteration:20 ui1:1 addition:5 derivate:3 median:1 extra:1 unlike:2 pass:6 ascent:2 induced:1 sangwoon:1 orecchia:3 contrary:1 easy:1 enough:2 lasso:3 gk1:6 reduce:4 idea:1 cn:2 bottleneck:1 whether:1 expression:2 defazio:1 ultimate:2 accelerating:2 penalty:1 song:1 lieb:1 peter:1 york:1 cause:1 jj:1 remark:3 generally:1 tewari:3 reduced:2 http:1 exist:1 qihang:1 sign:3 group:1 key:2 four:1 reformulation:1 uid:1 changing:1 krf:1 libsvm:3 lacoste:1 imaging:1 sum:2 inverse:1 chih:1 kaj:1 comparable:1 def:23 bound:3 hi:1 pct:1 ntn:1 quadratic:3 nonnegative:1 g:10 nontrivial:2 annual:1 constraint:3 ri:4 pcl:1 yong:1 min:4 fercoq:1 performing:1 separable:4 relatively:2 according:1 ball:2 cui:1 ate:1 lp:2 tw:1 erm:2 southwell:5 equation:1 cjlin:1 needed:1 know:1 singer:3 whichever:1 end:2 yurii:1 issam:1 studying:1 operation:2 kwok:1 spectral:1 ype:1 alternative:1 batch:4 schmidt:1 deserved:1 original:2 denotes:3 log2:2 yoram:3 k1:9 objective:6 question:1 quantity:3 print:3 strategy:5 surrogate:1 gradient:24 distance:2 maxj2:3 outer:1 mail:1 tseng:1 trivial:1 reformulate:1 mini:1 minimizing:2 equivalently:1 difficult:2 shu:1 trace:1 negative:2 implementation:1 perform:1 upper:2 datasets:3 sm:1 benchmark:2 finite:3 nist:2 descent:29 defining:1 rn:3 sharp:1 kgk21:5 nip:1 beyond:1 sparsity:2 l2r:2 ambuj:3 rf:5 max:8 suitable:1 natural:1 ual:1 regularized:17 business:1 zhu:4 representing:2 scheme:3 improve:3 x2i:2 lorenzo:3 julien:2 axis:2 jun:1 literature:1 l2:10 friedlander:1 loss:10 lecture:1 permutation:2 interesting:1 shaobo:1 foundation:2 h2:1 rik:1 elliott:1 consistent:1 xiao:3 thresholding:7 kxkp:1 share:1 course:1 supported:1 svrg:4 neighbor:1 sparse:5 julie:1 xia:1 dimension:3 apcg:2 preprocessing:1 approximate:2 implicitly:1 sz:1 kkt:3 mairal:1 b1:1 xi:9 thep:1 shwartz:6 continuous:1 iterative:4 search:2 table:7 zk:2 ca:1 alg:18 meanwhile:10 marc:1 vj:1 dense:3 main:3 motivation:1 s2:1 paul:1 dl1:1 fig:3 referred:2 rcd:6 ny:1 tong:4 explicit:1 saga:2 exponential:1 xl:5 kxk2:2 hrf:2 x2rd:2 theorem:3 z0:1 rk:1 k21:1 r2:1 x:3 dominates:2 exists:2 mnist:1 mirror:6 kx:27 spdc:1 simply:2 lagrange:2 kxk:2 contained:1 inderjit:1 chang:1 springer:1 nutini:1 minimizer:2 satisfies:4 acm:3 obozinski:1 viewed:3 identity:1 formulated:1 acceleration:15 admm:1 feasible:1 jt0:1 included:1 except:2 reducing:1 laradji:1 tushar:1 lemma:32 called:2 total:1 partly:1 gauss:5 aaron:1 guillaume:1 support:1 mark:1 accelerated:14 |
6,710 | 707 | A Knowledge-Based Model of Geometry Learning
Geoffrey Towell
Siemens Corporate Research
755 College Road East
Princeton, NJ 08540
Richard Lehrer
Educational Psychology
University of Wisconsin
1025 West Johnson St.
Madison, WI 53706
[email protected]
[email protected]
Abstract
We propose a model of the development of geometric reasoning in children that
explicitly involves learning. The model uses a neural network that is initialized
with an understanding of geometry similar to that of second-grade children.
Through the presentation of a series of examples, the model is shown to develop
an understanding of geometry similar to that of fifth-grade children who were
trained using similar materials.
1 Introduction
One of the principal problems in instructing children is to develop sequences of examples
that help children acquire useful concepts. In this endeavor it is often useful to have a
model of how children learn the material, for a good model can guide an instructor towards
particularly effective examples. In short, good models of learning help a teacher maximize
the utility of the example presented.
The particular problem with which we are concerned is learning about conventional
concepts in geometry, like those involved in identifying, and recognizing similarities and
differences among, shapes. This is a difficult subject to teach because children (and adults)
have a complex set of informal rules for geometry (that are often at odds with conventional
rules). Hence, instruction must supplant this informal geometry with a common formalism.
To be efficient in their instruction, teachers need a model of geometric learning which, at
the very least:
1. can represent children's understanding of geometry prior to instruction,
2. can describe how understanding changes as a result of instruction,
3. can predict the effect of differing instructional sequences.
In this paper we describe a neural network based model that has these properties.
An extant model of geometry learning, the "van Hiele model" [6] represents children's
understanding as purely perceptual -- appearances dominate reasoning. However, our
research suggests that children's reasoning is better characterized as a mix of perception
and rules. Moreover, unlike the model we propose, the van Hiele model can neither be used
to test the effectiveness of instruction prior to trying that instruction on children nor can it
be used to describe how understanding changes as a result of a specific type of instruction.
Briefly, our model uses a set of rules derived from interviews with first and second
grade children [1, 2], to produce a stereotypical informal conception of geometry. These
rules, described in more detail in Section 2.1, give our model an explicit representation
of pre-instructional geometry understanding. The rules are then translated into a neural
network using the KBANN algorithm [3]. As a neural network, our model can test the effect
of differing instructional sequences by simply training two instances with different sets of
examples. The experiments in Section 3 take advantage of this ability of our model; they
show that it is able to accurately model the effect of two different sets of instruction.
2 A New Model
This section describes the initial state of our model and its implementation as a neural
network. The initial state of the model is intended to reproduce the decision processes
of a typical child prior to instruction. The methodology used to derive this information
and a brief description of this information both are in the first subsection. In addition,
this subsection contains a small experiment that shows the accuracy of the initial state of
the model. In the next subsection, we briefly describe the translation of those rules into a
neural network.
2.1 The initial state of the model
Our model is based upon interviews with children in first and second grade [1, 2]. In these
interviews, children were presented with sets of three figures such as the triad in Figure 1.
They were asked which pair of the three figures is the most similar and why they made
their decision. These interviews revealed that, prior to instruction, children base judgments
of similarity upon the seven attributes in Table 1.
For the triad discrimination task, children find ways in which a pair is similar that is not
shared by the other two pairs. For instance, B and C in Figure 1.2 are both pointy but A
is not. As a result, the modal response of children prior to instruction is that {B C} is the
most similar pair. This decision making process is described by the rules in Table 2.
In addition to the rules in Table 2, we include in our initial model a set of rules that describe
templates for standard geometric shapes. This addition is based upon interviews with
children which suggest that they know the names of shapes such as triangles and squares,
and that they associate with each name a small set of templates. Initially, children treat
these shape names as having no more importance than any of the attributes in Table 1. So,
our model initially treats shape names exactly as one of those attributes. Over time children
learn that the names of shapes are very important because they are diagnostic (the name
indicates properties). Our hope was that the model would make a similar transition so that
the shape names would become sufficient for similarity determination.
Note that the rules in Table 2 do not always yield a unique decision. Rather, there are
Table 1: Attributes used by children prior to instruction.

| Attribute name | Possible values | Attribute name | Possible values |
|---|---|---|---|
| Tilt | 0, 10, 20, 30, 40 | Slant | yes, no |
| Area | small, medium, large | Shape | skinny, medium, fat |
| Pointy | yes, no | Direction | ←, →, ↑, ↓ |
| 2 long & short | yes, no | | |
Table 2: Rules for similarity judgment in the triad discrimination task.

1. IF fig-val(fig1?, att?) = fig-val(fig2?, att?) THEN same-att-value(fig1?, fig2?, att?).
2. IF not(same-att-value(fig1?, fig3?, att?)) AND fig1? ≠ fig3? AND fig2? ≠ fig3? THEN unq-sim(fig1?, fig2?, att?).
3. IF c(unq-sim(fig1?, fig2?, att?)) > c(unq-sim(fig1?, fig3?, att?)) AND c(unq-sim(fig1?, fig2?, att?)) > c(unq-sim(fig2?, fig3?, att?)) AND fig1? ≠ fig3? AND fig2? ≠ fig3? THEN most-similar(fig1?, fig2?).

Labels followed by a '?' indicate variables. fig-val(fig?, att?) returns the value of att? in fig?. c() counts the number of instances.
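The rules of Table 2 translate almost directly into code. The sketch below is our own rendering; the figure encoding and attribute names are made up for illustration:

```python
from itertools import combinations

def most_similar(figures, attributes):
    """Rules of Table 2: pick the pair with the most attribute values that are
    shared by the pair but not by the third figure."""
    names = list(figures)
    def unique_sims(a, b):
        other = next(n for n in names if n not in (a, b))
        return sum(1 for att in attributes
                   if figures[a][att] == figures[b][att]        # same-att-value
                   and figures[a][att] != figures[other][att])  # unique to the pair
    counts = {pair: unique_sims(*pair) for pair in combinations(names, 2)}
    best = max(counts.values())
    return [pair for pair, c in counts.items() if c == best]    # ties are possible

# toy triad in the spirit of the text: B and C are both pointy, A is not
figs = {"A": {"pointy": "no",  "area": "small", "shape": "fat"},
        "B": {"pointy": "yes", "area": "small", "shape": "skinny"},
        "C": {"pointy": "yes", "area": "large", "shape": "skinny"}}
print(most_similar(figs, ["pointy", "area", "shape"]))  # [('B', 'C')]
```

Returning all tied pairs mirrors the observation below that the rules do not always yield a unique decision.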
Figure 1: Triads used to test learning. [Nine triads, numbered 1-9, each showing three shapes labeled A, B and C.]
triads for which these rules cannot decide which pair is most similar. This is not often
the case for a particular child, who usually finds one attribute more salient than another.
Yet, frequently when the rules cannot uniquely identify the most similar pair, a classroom
of children is equally divided. Hence, the model may not accurately predict an individual
response, but is it usually correct at identifying the modal responses.
To verify the accuracy of the initial state of our model, we used the set of nine testing triads
shown in Figure 1 which were developed for the interviews with children. As shown in
Table 3, the model matches very nicely responses obtained from a separate sample of 48
second grade children. Thus, we believe that we have a valid point from which to start.
2.2 The translation of rule sets into a neural network
We translate rules sets into neural networks using the KBANN algorithm [3] which uses a
set of hierarchically-structured rules in the form of propositional Horn clauses to set the
topology and initial weights of an artificial neural network. Because the rules in Table 2 are
Table 3: Initial responses by the model.

| Triad Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Initial Model | BC | BC | AC | AC | BC | AB/BC | AC | AD/BC | AC/BC |
| Second Grade Children | BC | BC | AC | AC | BC | AB/BC | AC | AD | AC/BC |

Answers in the "initial model" row indicate the responses generated by the initial rules. More than one response in a column indicates that the rules could not differentiate among two pairs.
Answers in the "second grade" row are the modal responses of second grade children. More than one answer in a column indicates that equal numbers of children judged the pairs most similar.
Table 4: Properties used to describe figures.

| Property name | Values | Property name | Values |
|---|---|---|---|
| Convex | Yes, No | # Pairs Equal Opposite Angles | 0, 1, 2, 3, 4 |
| # Sides | 3, 4, 5, 6, 8 | # Pairs Opposite Sides Equal | 0, 1, 2, 3, 4 |
| # Angles | 3, 4, 5, 6, 8 | # Pairs Parallel Sides | 0, 1, 2, 3, 4 |
| All Sides Equal | Yes, No | Adjacent Angles = 180 | Yes, No |
| # Right Angles | 0, 1, 2, 3, 4 | # Lines of Symmetry | 0, 1, 2, 3, 4, 5, 6, 8 |
| All Angles Equal | Yes, No | # Equal Sides | 0, 2, 3, 4, 5, 6, 8 |
| # Equal Angles | 0, 2, 3, 4, 5, 6, 8 | | |
not in propositional form, they must be expanded before they can be accepted by KBANN.
The expansion turns a simple set of three rules into an ugly set of approximately 100 rules.
Figure 2 is a high-level view of the structure of the neural network that results from the
rules. In this implementation we present all three figures at the same time and all decisions
are made in parallel. Hence, the rules described above must be repeated at least three
times. In the neural network that results from the rule translation, these repeated rules are
not independent. Rather they are linked so that modifications of the rules are shared across
every pairing. Thus, the network cannot learn a rule which applies only to one pair.
Finally, the model begins with the set of 13 properties listed in Table 4 in addition to the
attributes of Table 1. (Note that we use "attribute" to refer to the informal, visual features
in Table 1 and "property" to refer to the symbolic features in Table 4.) As a result, each
figure is described to the model as a 74 position vector (18 positions encode the attributes;
the remaining 56 positions encode the properties).
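The encoding itself is a standard one-hot scheme, one block per attribute or property. A sketch follows; the exact 18 + 56 layout used by the authors is not spelled out, so the schema below is hypothetical:

```python
import numpy as np

def one_hot_encode(figure, schema):
    """Concatenate one one-hot block per feature; schema maps a feature name
    to its ordered list of possible values."""
    blocks = []
    for feat, values in schema.items():
        block = np.zeros(len(values))
        block[values.index(figure[feat])] = 1.0
        blocks.append(block)
    return np.concatenate(blocks)

schema = {"pointy": ["yes", "no"], "num_sides": [3, 4, 5, 6, 8]}
print(one_hot_encode({"pointy": "yes", "num_sides": 4}, schema))
# -> [1. 0. 0. 1. 0. 0. 0.]
```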
3 An Experiment Using the Model
One of the points we made in the introduction is that a useful model of geometry learning
should be able to predict the effect of instruction. The experiment reported in this section
tests this facet of our model. Briefly, this experiment trains two instances of our model
using different sets of data. We then compare the instances to children who have been
trained using a set of problems similar to one of those used to train the model. Our results
show that the two instances learn quite different things. Moreover, the instance trained
witn material similar to the children predicts the children's responses on test problems with
a high level of accuracy.
[Network diagram: input units feed "Same" units, which feed "Unique" units, which feed the "most similar" output units for each pair of figures (e.g., AB).]
Boxes indicate one or more units.
Dashed boxes indicate units associated with duplicated rules
Dashed lines indicate one or more negatively weighted links.
Solid lines indicate one or more positively weighted links.
Figure 2: The structure of the neural network for our model.
3.1 Training the model
For this experiment, we developed two sets of training shapes. One set contains every
polygon in a fifth-grade math textbook [4] (Figure 3). The other set consists of 81 items
which might be produced by a child using a modified version of LOGO (Figure 4). Here
we assume that one of the effects of learning geometry with a tool like LOGO is simply to
increase the extent and range of possible examples. A collection of 33 triads were selected
from each set to train the model. 1 Training consisted of repeated presentations of each of
the 33 triads until the network correctly identified the most similar pair for each triad.
3.2 Tests of the model
In this section, we test the ability of the model to accurately predict the effects of instruction.
We do this by comparing the two trained instances of the model to the modal responses
of fifth graders who had used LOGO for two weeks. In those two weeks, the children had
generates many (but not all) of the figures in Figure 4. Hence, we expected that the instance
1 In choosing the same number of triads for each training set, we are being very generous to the
textbook. In reality, not only do children see more figures when using LOGO, they are also able to
make many more contrasts between figures. Hence, it might be more accurate to make the LOGO
training set much larger than the textbook training set.
Figure 3: Representative textbook shapes.
Figure 4: Representative shapes encountered using a modified version of LOGO.
of the model trained using triads drawn from Figure 4 would better predict the responses
of these children than the other instance of the model.
Clearly, the results in Table 5 verify our expectations. The LOGO-trained model agrees with
the modal responses of children on an average of six examples, while the textbook-trained
model agrees on an average of three examples. The respective binomial probabilities of six
and three matches are 0.024 and 0.249. These probabilities suggest that the match between
the LOGO-trained model and the children is unlikely to have occurred by chance. On
the other hand, the instance of the model trained by the textbook examples has the most
probable outcome from simple random guessing. Thus, we conclude that the LOGO-trained
model is a good predictor of children's learning when using LOGO.
In addition, whereas the textbook-trained model was no better than chance at estimating
the conventional response, the LOGO-trained model matched convention on an average
of seven triads. Interestingly, on both triads where the LOGO-trained model did not
match convention, it could not due to lack of appropriate information. For triad 3,
convention matches the trapezoid with the parallelogram rather than either of these with the
quadrilateral, because the trapezoid and the parallelogram both have some pairs of parallel
lines. The model, however, has only information about the number of pairs of parallel
lines. On the basis of this feature, the three figures are equally dissimilar. For triad 7, the
other triad for which the LOGO-trained model did not match convention, the conventional
pairing matches two obtuse triangles. However, the model has no information about angles
other than number and number of right angles. Hence, it could not possibly get this triad
correct (at least not for the right reason). We expect that correcting these minor weaknesses
will improve the model's ability to make the conventional response.
Table 5: Responses after learning by trained instances of the model and children.

| Triad Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|
| Textbook Trained | AB/BC | BC | AC | AB | AC | BC | AC | AB | AC |
| LOGO Trained | AB/BC | AB | ?? | AB | AB | AB | AB | AB | BC |
| Fifth Grade Children | AB/BC | AB | AC/AB | BC | AB | AB/BC | AC | AB/BC | BC |
| Convention | AB | AB | BC | AB | AB | AB | BC | AB | BC |

Responses by the model are the modal responses over 500 trials.
?? indicates that the model was unable to select among the pairings.
The success of our model in the prediction experiment led us to investigate the reasons
underlying the answers generated by its two instances. In so doing we hoped to gain
an understanding of the networks' reasoning processes. Such an understanding would
be invaluable in the design of instruction for it would allow the selection of examples
that fill specific learning deficits. Unfortunately, trained neural networks are often nearly
impossible to comprehend. However, using tools such as those described by Towell and
Shavlik [5], we believe that we developed a reasonably clear understanding of the effects
of each set of training examples.
The LOGO-trained model made comprehensive adjustments of its initial conditions. Of the
eight attributes, it attends to only size and 2 long & short after training. While learning
to ignore most of the attributes, the model also learned to pay attention to several of the
properties. In particular, number of angles, number of sides, all angles equal, all sides
equal, and number of pairs of opposite sides parallel all were important to the network
after training. Thus, the LOGO-trained instance of the model made a significant transition in
its basis for geometric reasoning. Sadly, in making this transition, the declarative clarity of
the initial rules was lost. Hence, it is impossible to precisely state the rules that the trained
model used to make its final decisions.
By contrast, the textbook-trained instance of the model failed to learn that most of
the attributes were unimportant. Instead, the model simply learned that several of the
properties were also important. As a result, reasons for answers on the test set often seemed
schizophrenic. For instance, in responding BC on test triad 2, the network attributed
the decision to similarities in: area, pointiness, point-direction, number of sides, number
of angles, number of right angle and all angles equal. Given this combination, it is
not surprising that the example is answered incorrectly. This result suggests that typical
textbooks may accentuate the importance of conventional properties, but they provide little
grist for abandoning the mill of informal attributes.
3.3 Discussion
This experiment demonstrated the utility of our model in several ways. First, it showed
that the model is sensitive to differences in training set. Of itself, this is neither a surprising
nor interesting conclusion. What is important about the difference in learning is that the
model trained in a manner similar to a classroom of fifth grade children made responses to
the test set that were quite similar to those of fifth grade children.
In addition to making different responses to the test set, the two trained instances of the
model appeared to learn different things. In particular, the LOGO-trained instance essentially
replaced its initial knowledge with something much more like the formal geometry. On
the other hand, the textbook-trained instance simply added several concepts from formal
geometry to the informal concept with which it was initialized. An improved transition from
informal to formal geometry is one of the advantages claimed for LOGO based instruction
[2]. Hence, the difference between the two instances of the model agrees with observation
of children.
This result suggests that our model is able to predict the effect of differing instructional
sequences. A further test of this hypothesis would be to use our model to design a
set of instruction materials. This could be done by starting with an apparently good set of
materials, training the model, examining its deficiencies and revising the training materials
appropriately. Our hypothesis is that a set of materials so constructed would be superior to
the materials normally used in classrooms. Testing of this hypothesis is one of our major
directions for future research.
4 Conclusions
In this paper we have described a model of the initial stages of geometry learning by
elementary school children. This model is initialized using a set of rules based upon
interviews with first and second grade children. This set of rules is shown to accurately
predict the responses of second grade children on a hard set of similarity determination
problems.
Given that we have a valid starting point for our model, we test it by training those rules,
after re-representing them in a neural network, with two different sets of training materials.
Each instance of the model is analyzed in two ways. First, they are compared, on an
independent set of testing examples, to fifth grade children who had been trained using
materials similar to one of the model's training sets. This comparison showed that the
model trained with materials similar to the children accurately reproduced the responses of
the children. The second analysis involved examining the model after training to determine
what it had learned. Both instances of the model learned to attend to the properties that were
not mentioned in the initial rules. The model trained with the richer (LOGO-based) training
set also learned that the informal attributes were relatively unimportant. Conversely, the
model trained with the textbook-based training examples merely added information about
properties to the pre-existing information. Therefore, we believe that the model we have
described has the potential to become a valuable tool for teachers.
References
[1] R. Lehrer, W. Knight, M. Love, and L. Sancilio. Software to link action and description in
pre-proof geometry. Presented at the Annual Meeting of the American Educational Research
Association, 1989.
[2] R. Lehrer, L. Randle, and L. Sancilio. Learning preproof geometry with LOGO. Cognition and
Instruction, 6:159--184, 1989.
[3] M. O. Noordewier, G. G. Towell, and J. W. Shavlik. Training knowledge-based neural networks
to recognize genes in DNA sequences. In Advances in Neural Information Processing Systems,
volume 3, pages 530--536, Denver, CO, 1991. Morgan Kaufmann.
[4] M. A. Sobel, editor. Mathematics. McGraw-Hill, New York, 1987.
[5] G. G. Towell and J. W. Shavlik. Interpretation of artificial neural networks: Mapping knowledge-based neural networks into rules. In Advances in Neural Information Processing Systems,
volume 4, pages 977--984, Denver, CO, 1991. Morgan Kaufmann.
[6] P. M. van Hiele. Structure and Insight. Academic Press, New York, 1986.
| 707 |@word trial:1 version:2 briefly:3 instruction:18 solid:1 initial:17 series:1 contains:2 att:13 bc:16 interestingly:1 quadrilateral:1 existing:1 com:1 adj:1 comparing:1 surprising:2 yet:1 must:3 shape:11 discrimination:2 selected:1 item:1 short:3 math:1 constructed:1 become:2 pairing:2 consists:1 manner:1 expected:1 nor:2 frequently:1 love:1 grade:15 little:1 begin:1 estimating:1 moreover:2 matched:1 medium:2 what:2 textbook:12 developed:3 revising:1 differing:3 nj:1 every:2 exactly:1 fat:1 unit:2 normally:1 before:1 attend:1 treat:2 approximately:1 might:2 logo:20 suggests:3 conversely:1 lehrer:8 co:3 range:1 abandoning:1 unique:2 horn:1 testing:3 lost:1 area:2 fig3:7 instructor:1 road:1 pre:3 suggest:2 symbolic:1 get:1 cannot:3 selection:1 judged:1 impossible:2 conventional:6 demonstrated:1 rll:1 educational:2 attention:1 starting:2 convex:1 identifying:2 correcting:1 rule:36 stereotypical:1 insight:1 dominate:1 fill:1 us:3 hypothesis:3 associate:1 particularly:1 predicts:1 sadly:1 triad:20 knight:1 valuable:1 mentioned:1 asked:1 trained:30 kbann:3 purely:1 upon:4 negatively:1 basis:2 triangle:2 translated:1 polygon:1 train:3 effective:1 describe:6 artificial:2 choosing:1 outcome:1 quite:2 richer:1 larger:1 bci:1 ability:3 itself:1 final:1 reproduced:1 differentiate:1 sequence:5 advantage:2 interview:7 vhile:1 propose:2 translate:1 description:2 knowledgebased:1 produce:1 help:2 derive:1 develop:2 ac:12 attends:1 school:1 minor:1 sim:5 involves:1 indicate:6 convention:5 direction:3 correct:2 attribute:15 material:11 probable:1 elementary:1 cognition:1 predict:7 week:2 mapping:1 major:1 generous:1 label:1 infonnation:2 sensitive:1 agrees:3 tool:3 weighted:2 hope:1 clearly:1 always:1 modified:2 rather:3 encode:2 derived:1 indicates:4 contrast:2 unlikely:1 initially:1 reproduce:1 among:3 development:1 equal:10 having:1 nicely:1 represents:1 nearly:1 future:1 richard:1 recognize:1 comprehensive:1 individual:1 replaced:1 geometry:21 intended:1 skinny:1 ab:18 investigate:1 weakness:1 analyzed:1 sobel:1 accurate:1 obtuse:1 respective:1 initialized:3 re:1 instance:22 formalism:1 column:2 facet:1 predictor:1 noordewier:1 recognizing:1 examining:2 johnson:1 reported:1 answer:5 teacher:3 st:1 extant:1 possibly:1 american:1 return:1 potential:1 explicitly:1 ad:1 view:1 linked:1 doing:1 apparently:1 start:1 parallel:5 square:1 accuracy:3 kaufmann:2 who:5 judgment:2 yield:1 identify:1 yes:7 accurately:5 produced:1 c7:1 involved:2 associated:1 attributed:1 proof:1 gain:1 duplicated:1 knowledge:6 subsection:3 classroom:3 methodology:1 modal:6 response:21 improved:1 grader:1 box:2 done:1 stage:1 until:1 hand:2 lack:1 believe:3 name:11 effect:8 concept:4 verify:2 consisted:1 hence:8 comprehend:1 ll:1 uniquely:1 trying:1 hill:1 invaluable:1 reasoning:5 common:1 superior:1 clause:1 denver:2 tilt:1 volume:2 association:1 occurred:1 interpretation:1 refer:2 significant:1 slant:1 mathematics:1 had:4 similarity:6 base:1 something:1 showed:2 claimed:1 success:1 meeting:1 morgan:2 determine:1 maximize:1 dashed:2 corporate:1 mix:1 match:7 characterized:1 determination:2 academic:1 long:2 divided:1 equally:2 prediction:1 essentially:1 expectation:1 represent:1 accentuate:1 addition:6 whereas:1 appropriately:1 unlike:1 subject:1 thing:2 effectiveness:1 odds:1 revealed:1 conception:1 concerned:1 psychology:1 topology:1 opposite:3 identified:1 six:2 utility:2 york:2 nine:1 action:1 useful:3 clear:1 listed:1 unimportant:2 band:1 dna:1 diagnostic:1 towell:8 correctly:1 salient:1 drawn:1 wisc:1 clarity:1 neither:2 
merely:1 angle:13 decide:1 parallelogram:2 decision:7 pay:1 followed:1 encountered:1 annual:1 precisely:1 deficiency:1 software:1 generates:1 answered:1 expanded:1 relatively:1 structured:1 combination:1 describes:1 across:1 wi:1 making:3 modification:1 instructional:4 turn:1 count:1 know:1 hiele:2 informal:8 eight:1 schizophrenic:1 appropriate:1 binomial:1 remaining:1 include:1 responding:1 madison:1 added:2 guessing:1 separate:1 link:3 unable:1 deficit:1 seven:2 extent:1 reason:3 declarative:1 trapezoid:2 acquire:1 difficult:1 unfortunately:1 teach:1 implementation:2 design:2 observation:1 incorrectly:1 propositional:2 pair:16 instructing:1 learned:5 adult:1 able:4 usually:2 perception:1 appeared:1 representing:1 improve:1 brief:1 prior:6 geometric:4 understanding:10 val:3 wisconsin:1 expect:1 interesting:1 geoffrey:1 sufficient:1 editor:1 translation:3 row:2 guide:1 side:9 ugly:1 allow:1 shavlik:3 formal:3 template:2 fifth:7 van:3 transition:4 valid:2 seemed:1 made:6 collection:1 ignore:1 mcgraw:1 gene:1 anew:1 conclude:1 why:1 table:17 reality:1 learn:6 reasonably:1 symmetry:1 expansion:1 complex:1 did:2 hierarchically:1 child:50 repeated:3 positively:1 fig:5 west:1 representative:2 vms:1 position:3 explicit:1 perceptual:1 specific:2 importance:2 hoped:1 mill:1 simply:5 appearance:1 visual:1 failed:1 adjustment:1 applies:1 chance:2 presentation:2 endeavor:1 towards:1 shared:2 change:2 hard:1 typical:2 principal:1 accepted:1 siemens:2 east:1 pointy:2 select:1 college:1 dissimilar:1 princeton:1 |
6,711 | 7,070 | Multi-Task Learning for Contextual Bandits
Aniket Anand Deshmukh
Department of EECS
University of Michigan Ann Arbor
Ann Arbor, MI 48105
[email protected]
Urun Dogan
Microsoft Research
Cambridge CB1 2FB, UK
[email protected]
Clayton Scott
Department of EECS
University of Michigan Ann Arbor
Ann Arbor, MI 48105
[email protected]
Abstract
Contextual bandits are a form of multi-armed bandit in which the agent has access
to predictive side information (known as the context) for each arm at each time step,
and have been used to model personalized news recommendation, ad placement,
and other applications. In this work, we propose a multi-task learning framework
for contextual bandit problems. Like multi-task learning in the batch setting, the
goal is to leverage similarities in contexts for different arms so as to improve the
agent's ability to predict rewards from contexts. We propose an upper confidence
bound-based multi-task learning algorithm for contextual bandits, establish a corresponding regret bound, and interpret this bound to quantify the advantages of
learning in the presence of high task (arm) similarity. We also describe an effective
scheme for estimating task similarity from data, and demonstrate our algorithm?s
performance on several data sets.
1 Introduction
A multi-armed bandit (MAB) problem is a sequential decision making problem where, at each time
step, an agent chooses one of several "arms," and observes some reward for the choice it made. The
reward for each arm is random according to a fixed distribution, and the agent?s goal is to maximize
its cumulative reward [4] through a combination of exploring different arms and exploiting those
arms that have yielded high rewards in the past [15, 11].
The contextual bandit problem is an extension of the MAB problem where there is some side
information, called the context, associated to each arm [12]. Each context determines the distribution
of rewards for the associated arm. The goal in contextual bandits is still to maximize the cumulative
reward, but now leveraging the contexts to predict the expected reward of each arm. Contextual
bandits have been employed to model various applications like news article recommendation [7],
computational advertisement [9], website optimization [20] and clinical trials [19]. For example, in
the case of news article recommendation, the agent must select a news article to recommend to a
particular user. The arms are articles and contextual features are features derived from the article and
the user. The reward is based on whether a user reads the recommended article.
One common approach to contextual bandits is to fix the class of policy functions (i.e., functions from
contexts to arms) and try to learn the best function with time [13, 18, 16]. Most algorithms estimate
rewards either separately for each arm, or have one single estimator that is applied to all arms. In
contrast, our approach is to adopt the perspective of multi-task learning (MTL). The intuition is that
some arms may be similar to each other, in which case it should be possible to pool the historical
data for these arms to estimate the mapping from context to rewards more rapidly. For example, in
the case of news article recommendation, there may be thousands of articles, and some of those are
bound to be similar to each other.
Problem 1 Contextual Bandits
for t = 1, ..., T do
Observe context x_{a,t} ∈ ℝᵈ for all arms a ∈ [N], where [N] = {1, …, N}
Choose an arm a_t ∈ [N]
Receive a reward r_{a_t,t} ∈ ℝ
Improve arm selection strategy based on the new observation (x_{a_t,t}, a_t, r_{a_t,t})
end for
The contextual bandit problem is formally stated in Problem 1. The total T-trial reward is defined as
Σ_{t=1}^{T} r_{a_t,t} and the optimal T-trial reward as Σ_{t=1}^{T} r_{a*_t,t}, where r_{a_t,t} is the reward of the selected arm
a_t at time t and a*_t is the arm with maximum reward at trial t. The goal is to find an algorithm that
minimizes the T-trial regret

  R(T) = Σ_{t=1}^{T} r_{a*_t,t} − Σ_{t=1}^{T} r_{a_t,t}.
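In code, the regret is a one-line comparison against the per-round best arm. This is a sketch of ours; `all_rewards` assumes the counterfactual rewards of every arm are observable, which holds in simulation only:

```python
import numpy as np

def cumulative_regret(chosen_rewards, all_rewards):
    """R(T): all_rewards is a T x N array of every arm's reward at each round;
    chosen_rewards is the length-T vector of rewards actually received."""
    return np.sum(np.max(all_rewards, axis=1)) - np.sum(chosen_rewards)
```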
We focus on upper confidence bound (UCB) type algorithms for the remainder of the paper. A UCB
strategy is a simple way to represent the exploration and exploitation tradeoff. For each arm, there is
an upper bound on reward, comprised of two terms. The first term is a point estimate of the reward,
and the second term reflects the confidence in the reward estimate. The strategy is to select the arm
with maximum UCB. The second term dominates when the agent is not confident about its reward
estimates, which promotes exploration. On the other hand, when all the confidence terms are small,
the algorithm exploits the best arm(s) [2].
In the popular UCB type contextual bandit algorithm called Lin-UCB, the expected reward of an
arm is modeled as a linear function of the context, E[r_{a,t} | x_{a,t}] = x_{a,t}ᵀ θ*_a, where r_{a,t} is the reward of
arm a at time t and x_{a,t} is the context of arm a at time t. To select the best arm, one estimates θ_a
for each arm independently using the data for that particular arm [13]. In the language of multi-task
learning, each arm is a task, and Lin-UCB learns each task independently.
In the theoretical analysis of Lin-UCB [7] and its kernelized version Kernel-UCB [18], θ_a is
replaced by θ, and the goal is to learn one single estimator using data from all the arms. In other
words, the data from the different arms are pooled together and viewed as coming from a single task.
These two approaches, independent and pooled learning, are two extremes, and reality often lies
somewhere in between. In the MTL approach, we seek to pool some tasks together, while learning
others independently.
We present an algorithm motivated by this idea and call it kernelized multi-task learning UCB
(KMTL-UCB). Our main contributions are proposing a UCB type multi-task learning algorithm
for contextual bandits, establishing a regret bound and interpreting the bound to reveal the impact
of increased task similarity, introducing a technique for estimating task similarities on the fly, and
demonstrating the effectiveness of our algorithm on several datasets.
This paper is organized as follows. Section 2 describes related work and in Section 3 we propose
a UCB algorithm using multi-task learning. Regret analysis is presented in Section 4, and our
experimental findings are reported in Section 5. We conclude in Section 6.
2 Related Work
A UCB strategy is a common approach to quantify the exploration/exploitation tradeoff. At each
time step t, and for each arm a, a UCB strategy estimates a reward r̂_{a,t} and a one-sided confidence
interval above r̂_{a,t} with width ŵ_{a,t}. The term ucb_{a,t} = r̂_{a,t} + ŵ_{a,t} is called the UCB index or just
UCB. Then at each time step t, the algorithm chooses the arm a with the highest UCB.
In contextual bandits, the idea is to view learning the mapping x ↦ r as a regression problem.
Lin-UCB uses a linear regression model while Kernel-UCB uses a nonlinear regression model drawn
from the reproducing kernel Hilbert space (RKHS) of a symmetric and positive definite (SPD) kernel.
Either of these two regression models could be applied in either the independent setting or the pooled
setting. In the independent setting, the regression function for each arm is estimated separately. This
was the approach adopted by Li et al. [13] with a linear model. Regret analysis for both Lin-UCB
and Kernel-UCB adopted the pooled setting [7, 18]. Kernel-UCB in the independent setting has not
previously been considered to our knowledge, although the algorithm would just be a kernelized
version of Li et al. [13]. We will propose a methodology that extends the above four combinations
of setting (independent and pooled) and regression model (linear and nonlinear). Gaussian Process
UCB (GP-UCB) uses a Gaussian prior on the regression function and is a Bayesian equivalent of
Kernel-UCB [16].
There are some contextual bandit setups that incorporate multi-task learning. In Lin-UCB with Hybrid
Linear Models the estimated reward consists of two linear terms, one that is arm-specific and another
that is common to all arms [13]. Gang of bandits [5] uses a graph structure (e.g., a social network) to
transfer the learning from one user to another for personalized recommendation. Collaborative filtering
bandits [14] is a similar technique which clusters the users based on context. Contextual Gaussian
Process UCB (CGP-UCB) builds on GP-UCB and has many elements in common with our framework
[10]. We defer a more detailed comparison to CGP-UCB until later.
3 KMTL-UCB
We propose an alternate regression model that includes the independent and pooled settings as special
cases. Our approach is inspired by work on transfer and multi-task learning in the batch setting
[3, 8]. Intuitively, if two arms (tasks) are similar, we can pool the data for those arms to train better
predictors for both.
Formally, we consider regression functions of the form

  f : X̃ → Y,

where X̃ = Z × X, Z is what we call the task similarity space, X is the context space, and
Y ⊆ ℝ is the reward space. Every context x_a ∈ X is associated with an arm descriptor z_a ∈ Z, and
we define x̃_a = (z_a, x_a) to be the augmented context. Intuitively, z_a is a variable that can be used to
determine the similarity between different arms. Examples of Z and z_a will be given below.
Let k̃ be a SPD kernel on X̃. In this work we focus on kernels of the form

  k̃((z, x), (z′, x′)) = k_Z(z, z′) k_X(x, x′),   (1)

where k_X is a SPD kernel on X, such as a linear or Gaussian kernel if X = ℝᵈ, and k_Z is a kernel on
Z (examples given below). Let H_k̃ be the RKHS of functions f : X̃ → ℝ associated to k̃. Note that
a product kernel is just one option for k̃, and other forms may be worth exploring.
3.1 Upper Confidence Bound
Instead of learning regression estimates for each arm separately, we effectively learn regression
estimates for all arms at once by using all the available training data. Let N be the total number
of distinct arms that the algorithm has to choose from. Define [N] = {1, …, N} and let the observed
contexts at time t be x_{a,t}, ∀a ∈ [N]. Let n_{a,t} be the number of times the algorithm has selected arm
a up to and including time t, so that Σ_{a=1}^{N} n_{a,t} = t. Define sets t_a = {τ < t : a_τ = a}, where a_τ is
the arm selected at time τ. Notice that |t_a| = n_{a,t−1} for all a. We solve the following problem at
time t:

  f̂_t = argmin_{f ∈ H_k̃} (1/N) Σ_{a=1}^{N} (1/n_{a,t−1}) Σ_{τ ∈ t_a} (f(x̃_{a,τ}) − r_{a,τ})² + λ‖f‖²_{H_k̃},   (2)

where x̃_{a,τ} is the augmented context of arm a at time τ, and r_{a,τ} is the reward of the arm a selected at
time τ. This problem (2) is a variant of kernel ridge regression. Applying the representer theorem [17],
the optimal f can be expressed as f = Σ_{a′=1}^{N} Σ_{τ′ ∈ t_{a′}} α_{a′,τ′} k̃(·, x̃_{a′,τ′}), which yields the solution
(detailed derivation is in the supplementary material)

  f̂_t(x̃) = k̃_{t−1}(x̃)ᵀ (η_{t−1} K̃_{t−1} + λI)⁻¹ η_{t−1} y_{t−1},   (3)

where K̃_{t−1} is the (t−1) × (t−1) kernel matrix on the augmented data [x̃_{a_τ,τ}]_{τ=1}^{t−1}, k̃_{t−1}(x̃) =
[k̃(x̃, x̃_{a_τ,τ})]_{τ=1}^{t−1} is a vector of kernel evaluations between x̃ and the past data, y_{t−1} = [r_{a_τ,τ}]_{τ=1}^{t−1} are
all observed rewards, and η_{t−1} is the (t−1) × (t−1) diagonal matrix η_{t−1} = diag[1/n_{a_τ,t−1}]_{τ=1}^{t−1}.
When x̃ = x̃_{a,t}, we write k̃_{a,t} = k̃_{t−1}(x̃_{a,t}). With only minor modifications to the argument in Valko
et al. [18], we have the following:
Lemma 1. Suppose the rewards [r_{a_τ,τ}]_{τ=1}^{T} are independent random variables with means
E[r_{a_τ,τ} | x̃_{a_τ,τ}] = f*(x̃_{a_τ,τ}), where f* ∈ H_k̃ and ‖f*‖_{H_k̃} ≤ c. Let α = √(log(2TN/δ)/2) and λ > 0.
With probability at least 1 − δ/T, we have that ∀a ∈ [N]

  |f̂_t(x̃_{a,t}) − f*(x̃_{a,t})| ≤ w_{a,t} := (α + c√λ) s_{a,t},   (4)

where s_{a,t} = λ^{−1/2} √( k̃(x̃_{a,t}, x̃_{a,t}) − k̃_{a,t}ᵀ (η_{t−1} K̃_{t−1} + λI)⁻¹ η_{t−1} k̃_{a,t} ).
The result in Lemma 1 motivates the UCB
ucba,t = f?t (?
xa,t ) + wa,t
and inspires Algorithm 1.
Algorithm 1 KMTL-UCB
Input: $\beta \in \mathbb{R}_+$
for t = 1, ..., T do
  Update the (product) kernel matrix $\tilde{K}_{t-1}$ and $\eta_{t-1}$.
  Observe context features at time t: $x_{a,t}$ for each $a \in [N]$.
  Determine the arm descriptor $z_a$ for each $a \in [N]$ to get the augmented context $\tilde{x}_{a,t}$.
  for all a at time t do
    $p_{a,t} \leftarrow \hat{f}_t(\tilde{x}_{a,t}) + \beta s_{a,t}$
  end for
  Choose arm $a_t = \arg\max_a p_{a,t}$, observe a real-valued payoff $r_{a_t,t}$, and update $y_t$.
  Output: $a_t$
end for

Before an arm has been selected at least once, $\hat{f}_t(\tilde{x}_{a,t})$ and the second term in $s_{a,t}$, i.e., $\tilde{k}_{a,t}^T (\eta_{t-1} \tilde{K}_{t-1} + \lambda I)^{-1} \eta_{t-1} \tilde{k}_{a,t}$, are taken to be 0. In that case, the algorithm only uses the first term of $s_{a,t}$, i.e., $\sqrt{\tilde{k}(\tilde{x}_{a,t}, \tilde{x}_{a,t})}$, to form the UCB.
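The inner loop of Algorithm 1 then reduces to a few lines; the sketch below (hypothetical helper names, not from the paper) computes the width $s_{a,t}$ of Lemma 1 and the optimistic arm choice:

```python
import numpy as np

def ucb_width(k_aa, k_vec, K, counts, lam=0.1):
    # s_{a,t} = lam^{-1/2} sqrt( k~(x~,x~) - k~_{a,t}^T (eta K~ + lam I)^{-1} eta k~_{a,t} )
    eta = np.diag(1.0 / np.asarray(counts, dtype=float))
    inner = k_vec @ np.linalg.solve(eta @ K + lam * np.eye(len(k_vec)), eta @ k_vec)
    return np.sqrt(max(k_aa - inner, 0.0) / lam)

def select_arm(f_hat_vals, widths, beta=1.0):
    # a_t = argmax_a  f_hat_t(x~_{a,t}) + beta * s_{a,t}
    p = np.asarray(f_hat_vals) + beta * np.asarray(widths)
    return int(np.argmax(p))
```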
3.2 Choice of Task Similarity Space and Kernel
To illustrate the flexibility of our framework, we present the following three options for $\mathcal{Z}$ and $k_Z$:
1. Independent: $\mathcal{Z} = \{1, \ldots, N\}$, $k_Z(a, a') = \mathbb{1}_{a = a'}$. The augmented context for a context $x_a$ from arm $a$ is just $(a, x_a)$.
2. Pooled: $\mathcal{Z} = \{1\}$, $k_Z \equiv 1$. The augmented context for a context $x_a$ for arm $a$ is just $(1, x_a)$.
3. Multi-Task: $\mathcal{Z} = \{1, \ldots, N\}$ and $k_Z$ is a PSD matrix reflecting arm/task similarities. If this matrix is unknown, it can be estimated as discussed below.
Algorithm 1 with the first two choices specializes to the independent and pooled settings mentioned
previously. In either setting, choosing a linear kernel for kX leads to Lin-UCB, while a more general
kernel essentially gives rise to Kernel-UCB. We will argue that the multi-task setting facilitates
learning when there is high task similarity.
We also introduce a fourth option for $\mathcal{Z}$ and $k_Z$ that allows task similarity to be estimated when it is unknown. In particular, we are inspired by the kernel transfer learning framework of Blanchard et al. [3]. Thus, we define the arm similarity space to be $\mathcal{Z} = \mathcal{P}_{\mathcal{X}}$, the set of all probability distributions on $\mathcal{X}$. We further assume that contexts for arm $a$ are drawn from a probability measure $P_a$. Given a context $x_a$ for arm $a$, we define its augmented context to be $(P_a, x_a)$.
To define a kernel on $\mathcal{Z} = \mathcal{P}_{\mathcal{X}}$, we use the same construction described in [3], originally introduced by Steinwart and Christmann [6]. In particular, in our experiments we use a Gaussian-like kernel
$$k_Z(P_a, P_{a'}) = \exp\big(-\|\Psi(P_a) - \Psi(P_{a'})\|^2 / 2\sigma_Z^2\big), \qquad (5)$$
where $\Psi(P) = \int k_X'(\cdot, x)\, dP(x)$ is the kernel mean embedding of a distribution $P$. This embedding is defined by yet another SPD kernel $k_X'$ on $\mathcal{X}$, which could be different from the $k_X$ used to define $\tilde{k}$. We may estimate $\Psi(P_a)$ via $\Psi(\hat{P}_a) = \frac{1}{n_{a,t-1}} \sum_{\tau \in t_a} k_X'(\cdot, x_{a_\tau,\tau})$, which leads to an estimate of $k_Z$.
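A direct way to turn this estimate into code is to approximate each squared RKHS distance by averaged kernel evaluations; the sketch below is our own reading of the construction (kernel choices and names are assumptions):

```python
import numpy as np

def rbf_gram(A, B, sigma=1.0):
    # pairwise Gaussian kernel matrix between rows of A and rows of B
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def estimate_task_similarity(contexts_by_arm, sigma_x=1.0, sigma_z=1.0):
    """Estimate k_Z(P_a, P_a') of Eq. (5) from the observed contexts of each arm.

    ||Psi(P_a) - Psi(P_a')||^2 = <Psi_a,Psi_a> - 2<Psi_a,Psi_a'> + <Psi_a',Psi_a'>,
    and each inner product is estimated by the mean of pairwise kernel values.
    """
    N = len(contexts_by_arm)
    KZ = np.zeros((N, N))
    for a in range(N):
        for b in range(N):
            Xa, Xb = contexts_by_arm[a], contexts_by_arm[b]
            ip = lambda U, V: rbf_gram(U, V, sigma_x).mean()
            dist2 = ip(Xa, Xa) - 2 * ip(Xa, Xb) + ip(Xb, Xb)
            KZ[a, b] = np.exp(-dist2 / (2 * sigma_z**2))
    return KZ
```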
4 Theoretical Analysis
To simplify the analysis we consider a modified version of the original problem (2):
$$\hat{f}_t = \arg\min_{f \in \mathcal{H}_{\tilde{k}}} \frac{1}{N} \sum_{a=1}^{N} \sum_{\tau \in t_a} \big(f(\tilde{x}_{a,\tau}) - r_{a,\tau}\big)^2 + \lambda \|f\|_{\mathcal{H}_{\tilde{k}}}^2. \qquad (6)$$
In particular, this modified problem omits the terms $\frac{1}{n_{a,t-1}}$, as they obscure the analysis. In practice, these terms should be incorporated.
In this case $s_{a,t} = \lambda^{-1/2} \sqrt{\tilde{k}(\tilde{x}_{a,t}, \tilde{x}_{a,t}) - \tilde{k}_{a,t}^T (\tilde{K}_{t-1} + \lambda I)^{-1} \tilde{k}_{a,t}}$. Under this assumption Kernel-UCB is exactly KMTL-UCB with $k_Z \equiv 1$. On the other hand, KMTL-UCB can be viewed as a special case of Kernel-UCB on the augmented context space $\tilde{\mathcal{X}}$. Thus, the regret analysis of Kernel-UCB applies to KMTL-UCB, but it does not reveal the potential gains of multi-task learning.
We present an interpretable regret bound that reveals the benefits of MTL. We also establish a lower
bound on the UCB width that decreases as task similarity increases (presented in the supplementary
file).
4.1 Analysis of SupKMTL-UCB
It is not trivial to analyze Algorithm 1 because the reward at time t is dependent on the past rewards.
We follow the same strategy originally proposed in [1] and used in [7, 18] which uses SupKMTL-UCB
as a master algorithm, and BaseKMTL-UCB (which is called by SupKMTL-UCB) to get estimates of
reward and width. SupKMTL-UCB builds mutually exclusive subsets of [T ] such that rewards in any
subset are independent. This guarantees that the independence assumption of Lemma 1 is satisfied.
We describe these algorithms in a supplementary section because of space constraints.
Theorem 1. Assume that $r_{a,t} \in [0, 1]$, $\forall a \in [N]$, $T \ge 1$, $\|f^*\|_{\mathcal{H}_{\tilde{k}}} \le c$, $\tilde{k}(\tilde{x}, \tilde{x}) \le c_{\tilde{k}}$, $\forall \tilde{x} \in \tilde{\mathcal{X}}$, and the task similarity matrix $K_Z$ is known. With probability at least $1 - \delta$, SupKMTL-UCB satisfies
$$R(T) \le 2\sqrt{T} + 10\Bigg(\sqrt{\frac{\log\big(2TN(\log(T)+1)/\delta\big)}{2}} + c\sqrt{\lambda}\Bigg)\sqrt{2m \log g([T])}\,\sqrt{T}\,\lceil \log(T) \rceil = O\Big(\sqrt{T \log(g([T]))}\Big)$$
where $g([T]) = \frac{\det(\tilde{K}_{T+1} + \lambda I)}{\lambda^{T+1}}$ and $m = \max\big(1, \frac{c_{\tilde{k}}}{\lambda}\big)$.
Note that this theorem assumes that task similarity is known. In the experiments for real datasets
using the approach discussed in subsection 3.2 we estimate the task similarity from the available data.
4.2 Interpretation of Regret Bound
The following theorems help us interpret the regret bound by looking at
$$g([T]) = \frac{\det(\tilde{K}_{T+1} + \lambda I)}{\lambda^{T+1}} = \prod_{t=1}^{T+1} \frac{\lambda_t + \lambda}{\lambda},$$
where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{T+1}$ are the eigenvalues of the kernel matrix $\tilde{K}_{T+1}$.
As mentioned above, the regret bound of Kernel-UCB applies to our method, and we are able to recover this bound as a corollary of Theorem 1. In the case of Kernel-UCB, $\tilde{K}_t = K_{X_t}$, $\forall t \in [T]$, as all arm estimators are assumed to be the same. We define the effective rank of $\tilde{K}_{T+1}$ in the same way as [18] defines the effective dimension of the kernel feature space.

Definition 1. The effective rank of $\tilde{K}_{T+1}$ is defined to be $r := \min\{j : j \lambda \log T \ge \sum_{i=j+1}^{T+1} \lambda_i\}$.

In the following result, the notation $\tilde{O}$ hides logarithmic terms.

Corollary 1. $\log(g([T])) \le r \log\Big(\frac{2(T+1) c_{\tilde{k}} + r\lambda - r\lambda \log T}{r\lambda}\Big)$, and therefore $R(T) = \tilde{O}(\sqrt{rT})$.
However, beyond recovering a known bound, Theorem 1 can also be interpreted to reveal the potential gains of multi-task learning. To interpret the regret bound in Theorem 1, we make a further assumption that after time $t$, $n_{a,t} = \frac{t}{N}$ for all $a \in [N]$. For simplicity define $n_t = n_{a,t}$. Let $\circ$ denote the Hadamard product, $\otimes$ denote the Kronecker product, and $\mathbb{1}_n \in \mathbb{R}^n$ be the vector of ones. Let $K_{X_t} = [k_X(x_{a_\tau,\tau}, x_{a_{\tau'},\tau'})]_{\tau,\tau'=1}^{t}$ be the $t \times t$ kernel matrix on contexts, $K_{Z_t} = [k_Z(z_{a_\tau}, z_{a_{\tau'}})]_{\tau,\tau'=1}^{t}$ be the associated $t \times t$ kernel matrix based on arm similarity, and $K_Z = [k_Z(z_a, z_{a'})]_{a,a'=1}^{N}$ be the $N \times N$ arm/task similarity matrix between the $N$ arms, where $x_{a_\tau,\tau}$ is the observed context and $z_{a_\tau}$ is the associated arm descriptor. Using eqn. (1), we can write $\tilde{K}_t = K_{Z_t} \circ K_{X_t}$. We rearrange the sequence of $x_{a_\tau,\tau}$ to get the re-ordered set $[x_{a,\tau}]_{a \in [N], \tau \in t_a}$ such that elements $(a-1)n_t$ to $a n_t$ belong to arm $a$. Define $\tilde{K}_t^r$, $K_{X_t}^r$ and $K_{Z_t}^r$ to be the rearranged kernel matrices based on this re-ordered set. Notice that we can write $\tilde{K}_t^r = (K_Z \otimes \mathbb{1}_{n_t}\mathbb{1}_{n_t}^T) \circ K_{X_t}^r$, and the eigenvalues $\lambda(\tilde{K}_t)$ and $\lambda(\tilde{K}_t^r)$ are equal. To summarize, we have
$$\tilde{K}_t = K_{Z_t} \circ K_{X_t}, \qquad \lambda(\tilde{K}_t) = \lambda\big((K_Z \otimes \mathbb{1}_{n_t}\mathbb{1}_{n_t}^T) \circ K_{X_t}^r\big). \qquad (7)$$
Theorem 2. Let the rank of the matrix $K_{X_{T+1}}$ be $r_x$ and the rank of the matrix $K_Z$ be $r_z$. Then
$$\log(g([T])) \le r_z r_x \log\Big(\frac{(T+1) c_{\tilde{k}} + \lambda}{\lambda}\Big).$$
This means that when the rank of the task similarity matrix is low, which reflects a high degree of inter-task similarity, the regret bound is tighter. For comparison, note that when all tasks are independent, $r_z = N$, and when all tasks are the same (pooled), $r_z = 1$. In the case of Lin-UCB [7], where all arm estimators are assumed to be the same and $k_X$ is a linear kernel, the regret bound in Theorem 1 evaluates to $\tilde{O}(\sqrt{dT})$, where $d$ is the dimension of the context space. In the original Lin-UCB algorithm [13], where all arm estimators are different, the regret bound would be $\tilde{O}(\sqrt{NdT})$.
We can further comment on $g([T])$ when all distinct tasks (arms) are similar to each other with task similarity equal to $\mu$. Thus define $K_Z(\mu) := (1 - \mu) I_N + \mu \mathbb{1}_N \mathbb{1}_N^T$ and $\tilde{K}_t^r(\mu) = (K_Z(\mu) \otimes \mathbb{1}_{n_t}\mathbb{1}_{n_t}^T) \circ K_{X_t}^r$.

Theorem 3. Let $g_\mu([T]) = \frac{\det(\tilde{K}_{T+1}^r(\mu) + \lambda I)}{\lambda^{T+1}}$. If $\mu_1 \le \mu_2$ then $g_{\mu_1}([T]) \ge g_{\mu_2}([T])$.
This shows that when there is more task similarity, the regret bound is tighter.
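Theorem 3 is easy to check numerically; the following self-contained sketch (synthetic contexts and parameter choices are our own assumptions) prints $\log g_\mu([T])$ for increasing $\mu$, and the printed values should be non-increasing:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_t, d, lam = 4, 5, 3, 0.1           # arms, contexts per arm, context dim, ridge

# contexts re-ordered so rows (a-1)*n_t .. a*n_t - 1 belong to arm a
X = rng.normal(size=(N * n_t, d))
d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
KX = np.exp(-d2 / 2.0)                   # Gaussian context kernel K_X^r

def log_g(mu):
    KZ = (1 - mu) * np.eye(N) + mu * np.ones((N, N))    # K_Z(mu)
    Kt = np.kron(KZ, np.ones((n_t, n_t))) * KX          # (K_Z(mu) kron 1 1^T) o K_X^r
    T = Kt.shape[0]
    _, logdet = np.linalg.slogdet(Kt + lam * np.eye(T))
    return logdet - T * np.log(lam)                     # log g_mu([T])

print([round(log_g(mu), 2) for mu in (0.0, 0.3, 0.6, 0.9)])  # non-increasing
```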
4.3 Comparison with CGP-UCB
CGP-UCB transfers the learning from one task to another by leveraging additional known task-specific context variables [10], similar in spirit to KMTL-UCB. Indeed, with slight modifications, KMTL-UCB can be viewed as a frequentist analogue of CGP-UCB, and similarly CGP-UCB could be modified to address our setting. Furthermore, the term $g([T])$ appearing in our regret bound is equivalent to an information gain term used to analyze CGP-UCB. In the agnostic case of CGP-UCB, where there is no assumption of a Gaussian prior on decision functions, their regret bound is $O(\log(g([T]))\sqrt{T})$, while their regret bound matches ours when they adopt a GP prior on $f^*$. Thus, our primary contributions with respect to CGP-UCB are to provide a tighter regret bound in the agnostic case, and a technique for estimating task similarity, which is critical for real-world applications.
5 Experiments
We test our algorithm on synthetic data and some multi-class classification datasets. In the case of
multi-class datasets, the number of arms N is the number of classes and the reward is 1 if we predict
the correct class, otherwise it is 0. We separate the data into two parts - validation set and test set.
We use Gaussian kernels throughout and pre-select the bandwidth of the kernels using five-fold cross-validation
on a holdout validation set, and we use $\lambda = 0.1$ for all experiments. Then we run the algorithm on
the test set 10 times (with different sequences of streaming data) and report the mean regret. For the
synthetic data, we compare Kernel-UCB in the independent setting (Kernel-UCB-Ind) and pooled
setting (Kernel-UCB-Pool), KMTL-UCB with known task similarity, and KMTL-UCB-Est which
estimates task similarity on the fly. For the real datasets in the multi-class classification setting, we
compare Kernel-UCB-Ind and KMTL-UCB-Est. In this case, the pooled setting is not valid because
xa,t is the same for all arms (only za differs) and KMTL-UCB is not valid because the task similarity
matrix is unknown. We also report the confidence intervals for these results in the supplementary
material.
5.1 Synthetic News Article Data
Suppose an agent has access to a pool of articles and their context features. The agent then sees a user along with his/her features for which it needs to recommend an article. Based on the user features and the article features, the algorithm gets a combined context $x_{a,t}$. The user context $x_{u,t} \in \mathbb{R}^2$, $\forall t$, is randomly drawn from an ellipse centered at $(0,0)$ with major axis length 1 and minor axis length 0.5. Let $x_{u,t}[1]$ be the minor axis coordinate and $x_{u,t}[2]$ the major axis coordinate. The article context $x_{art,t}$ is any angle $\theta \in [0, \frac{\pi}{2}]$. To get the overall summary $x_{a,t}$ of user and article, the user context $x_{u,t}$ is rotated by $x_{art,t}$. Rewards for each article are defined based on the minor axis: $r_{a,t} = 1.0 - (x_{u,t}[1] - \frac{a}{N} + 0.5)^2$.
Figure 1: Synthetic Data
Figure 1 shows one such example for 4 different arms. The color code describes the reward, the two axes show the information about the user context, and theta is the article context. We take $N = 5$. For KMTL-UCB, we use a Gaussian kernel on $x_{art,t}$ to get the task similarity.
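For reproducibility, a sketch of this data-generating process might look as follows (the ellipse sampling and the exact rotation convention are our assumptions where the text is ambiguous):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                            # number of articles (arms)

def draw_user():
    # user context: a point on an ellipse centered at (0, 0),
    # minor axis length 0.5, major axis length 1
    phi = rng.uniform(0, 2 * np.pi)
    return np.array([0.5 * np.cos(phi), 1.0 * np.sin(phi)])

def rotate(x, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ x

thetas = rng.uniform(0, np.pi / 2, size=N)       # article contexts x_{art,t}
x_u = draw_user()                                # user context x_{u,t}
contexts = [rotate(x_u, th) for th in thetas]    # combined contexts x_{a,t}
rewards = [1.0 - (x_u[0] - a / N + 0.5) ** 2 for a in range(N)]
print(rewards)
```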
The results of this experiment are shown in Figure 1. As one can see, Kernel-UCB-Pool performs the
worst. That means for this setting combining all the data and learning a single estimator is not efficient.
KMTL-UCB beats the other methods in all 10 runs, and Kernel-UCB-Ind and KMTL-UCB-Est
perform equally well.
5.2 Multi-class Datasets
In the case of multi-class classification, each class is an arm and the features of an example for which
the algorithm needs to recommend a class are the contexts. We consider the following datasets:
Digits (N = 10, d = 64), Letter (N = 26, d = 16), MNIST (N = 10, d = 780 ), Pendigits
(N = 10, d = 16), Segment (N = 7, d = 19) and USPS (N = 10, d = 256). Empirical mean regrets
are shown in Figure 2. KMTL-UCB-Est performs the best in three of the datasets and performs
equally well in the other three datasets. Figure 3 shows the estimated task similarity (re-ordered to reveal block structure), and one can see the effect of the estimated task similarity matrix on the
empirical regret in Figure 2. For the Digits, Segment and MNIST datasets, there is significant
inter-task similarity. For Digits and Segment datasets, KMTL-UCB-Est is the best in all 10 runs of
the experiment while for MNIST, KMTL-UCB-Est is better for all but 1 run.
Figure 2: Results on Multiclass Datasets - Empirical Mean Regret
Figure 3: Estimated Task Similarity for Real Datasets
6 Conclusions and future work
We present a multi-task learning framework in the contextual bandit setting and describe a way to
estimate task similarity when it is not given. We give theoretical analysis, interpret the regret bound,
and support the theoretical analysis with extensive experiments. In the supplementary material we
establish a lower bound on the UCB width, and argue that it decreases as task similarity increases.
Our proposal to estimate the task similarity matrix using the arm similarity space Z = PX can be
extended in different ways. For example, we could also incorporate previously observed rewards
into Z. This would alleviate a potential problem with our approach, namely, that some contexts
may have been selected when they did not yield a high reward. Additionally, by estimating the task
similarity matrix, we are estimating arm-specific information. In the case of multiclass classification,
kZ reflects information that represents the various classes. A natural extension is to incorporate
methods for representation learning into the MTL bandit setting.
References
[1] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3(Nov):397-422, 2002.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
[3] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. In Advances in Neural Information Processing Systems, pages 2178-2186, 2011.
[4] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Machine Learning, 5(1):1-122, 2012.
[5] N. Cesa-Bianchi, C. Gentile, and G. Zappella. A gang of bandits. In Advances in Neural Information Processing Systems, pages 737-745, 2013.
[6] A. Christmann and I. Steinwart. Universal kernels on non-standard input spaces. In Advances in Neural Information Processing Systems, pages 406-414, 2010.
[7] W. Chu, L. Li, L. Reyzin, and R. E. Schapire. Contextual bandits with linear payoff functions. In AISTATS, 2011.
[8] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109-117. ACM, 2004.
[9] S. Kale, L. Reyzin, and R. E. Schapire. Non-stochastic bandit slate problems. In Advances in Neural Information Processing Systems, pages 1054-1062, 2010.
[10] A. Krause and C. S. Ong. Contextual Gaussian process bandit optimization. In Advances in Neural Information Processing Systems, pages 2447-2455, 2011.
[11] V. Kuleshov and D. Precup. Algorithms for multi-armed bandit problems. arXiv preprint arXiv:1402.6028, 2014.
[12] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems, pages 817-824, 2008.
[13] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661-670. ACM, 2010.
[14] S. Li, A. Karatzoglou, and C. Gentile. Collaborative filtering bandits. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 539-548. ACM, 2016.
[15] H. Robbins. Some aspects of the sequential design of experiments. In Herbert Robbins Selected Papers, pages 169-177. Springer, 1985.
[16] N. Srinivas, A. Krause, M. Seeger, and S. M. Kakade. Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1015-1022, 2010.
[17] I. Steinwart and A. Christmann. Support Vector Machines. Springer Science & Business Media, 2008.
[18] M. Valko, N. Korda, R. Munos, I. Flaounas, and N. Cristianini. Finite-time analysis of kernelised contextual bandits. In Uncertainty in Artificial Intelligence, page 654. Citeseer, 2013.
[19] S. S. Villar, J. Bowden, and J. Wason. Multi-armed bandit models for the optimal design of clinical trials: benefits and challenges. Statistical Science, 30(2):199, 2015.
[20] J. White. Bandit Algorithms for Website Optimization. O'Reilly Media, Inc., 2012.
6,712 | 7,071 | Learning to Prune Deep Neural Networks via
Layer-wise Optimal Brain Surgeon
Xin Dong
Nanyang Technological University, Singapore
[email protected]
Shangyu Chen
Nanyang Technological University, Singapore
[email protected]
Sinno Jialin Pan
Nanyang Technological University, Singapore
[email protected]
Abstract
How to develop slim and accurate deep neural networks has become crucial for real-world applications, especially for those employed in embedded systems. Though
previous work along this research line has shown some promising results, most
existing methods either fail to significantly compress a well-trained deep network
or require a heavy retraining process for the pruned deep network to re-boost its
prediction performance. In this paper, we propose a new layer-wise pruning method
for deep neural networks. In our proposed method, parameters of each individual
layer are pruned independently based on second order derivatives of a layer-wise
error function with respect to the corresponding parameters. We prove that the
final prediction performance drop after pruning is bounded by a linear combination
of the reconstructed errors caused at each layer. By controlling layer-wise errors
properly, one only needs to perform a light retraining process on the pruned network
to resume its original prediction performance. We conduct extensive experiments
on benchmark datasets to demonstrate the effectiveness of our pruning method
compared with several state-of-the-art baseline methods. Codes of our work are
released at: https://github.com/csyhhu/L-OBS.
1 Introduction
Intuitively, deep neural networks [1] can approximate predictive functions of arbitrary complexity
well when they are of a huge amount of parameters, i.e., a lot of layers and neurons. In practice, the
size of deep neural networks has been being tremendously increased, from LeNet-5 with less than
1M parameters [2] to VGG-16 with 133M parameters [3]. Such a large number of parameters not
only make deep models memory intensive and computationally expensive, but also urge researchers
to dig into redundancy of deep neural networks. On one hand, in neuroscience, recent studies point
out that there are significant redundant neurons in human brain, and memory may have relation with
vanishment of specific synapses [4]. On the other hand, in machine learning, both theoretical analysis
and empirical experiments have shown the evidence of redundancy in several deep models [5, 6].
Therefore, it is possible to compress deep neural networks without or with little loss in prediction by
pruning parameters with carefully designed criteria.
However, finding an optimal pruning solution is NP-hard because the search space for pruning
is exponential in terms of parameter size. Recent work mainly focuses on developing efficient
algorithms to obtain a near-optimal pruning solution [7, 8, 9, 10, 11]. A common idea behind most
existing approaches is to select parameters for pruning based on certain criteria, such as increase in
training error, magnitude of the parameter values, etc. As most of the existing pruning criteria are
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
designed heuristically, there is no guarantee that prediction performance of a deep neural network
can be preserved after pruning. Therefore, a time-consuming retraining process is usually needed to
boost the performance of the trimmed neural network.
Instead of spending effort on the whole deep network, a layer-wise pruning method, Net-Trim, was
proposed to learn sparse parameters by minimizing reconstructed error for each individual layer [6].
A theoretical analysis is provided that the overall performance drop of the deep network is bounded by
the sum of reconstructed errors for each layer. In this way, the pruned deep network has a theoretical
guarantee on its error. However, as Net-Trim adopts the $\ell_1$-norm to induce sparsity for pruning, it fails to
obtain a high compression ratio compared with other methods [9, 11].
In this paper, we propose a new layer-wise pruning method for deep neural networks, aiming to
achieve the following three goals: 1) For each layer, parameters can be highly compressed after
pruning, while the reconstructed error is small. 2) There is a theoretical guarantee on the overall
prediction performance of the pruned deep neural network in terms of reconstructed errors for each
layer. 3) After the deep network is pruned, only a light retraining process is required to resume its
original prediction performance.
To achieve our first goal, we borrow an idea from some classic pruning approaches for shallow neural
networks, such as optimal brain damage (OBD) [12] and optimal brain surgeon (OBS) [13]. These
classic methods approximate a change in the error function via functional Taylor Series, and identify
unimportant weights based on second order derivatives. Though these approaches have proven to
be effective for shallow neural networks, it remains challenging to extend them for deep neural
networks because of the high computational cost on computing second order derivatives, i.e., the
inverse of the Hessian matrix over all the parameters. In this work, as we restrict the computation on
second order derivatives w.r.t. the parameters of each individual layer only, i.e., the Hessian matrix is
only over parameters for a specific layer, the computation becomes tractable. Moreover, we utilize
characteristics of back-propagation for fully-connected layers in well-trained deep networks to further
reduce computational complexity of the inverse operation of the Hessian matrix.
To achieve our second goal, based on the theoretical results in [6], we provide a proof on the bound
of performance drop before and after pruning in terms of the reconstructed errors for each layer.
With such a layer-wise pruning framework using second-order derivatives for trimming parameters
for each layer, we empirically show that after significantly pruning parameters, there is only a little
drop of prediction performance compared with that before pruning. Therefore, only a light retraining
process is needed to resume the performance, which achieves our third goal.
The contributions of this paper are summarized as follows. 1) We propose a new layer-wise pruning
method for deep neural networks, which is able to significantly trim networks and preserve the
prediction performance of networks after pruning with a theoretical guarantee. In addition, with the
proposed method, a time-consuming retraining process for re-boosting the performance of the pruned
network is waived. 2) We conduct extensive experiments to verify the effectiveness of our proposed
method compared with several state-of-the-art approaches.
2 Related Works and Preliminary
Pruning methods have been widely used for model compression in early neural networks [7] and
modern deep neural networks [6, 8, 9, 10, 11]. In the past, with relatively small size of training data,
pruning is crucial to avoid overfitting. Classical methods include OBD and OBS. These methods
aim to prune parameters with the least increase of error approximated by second order derivatives.
However, computation of the Hessian inverse over all the parameters is expensive. In OBD, the
Hessian matrix is restricted to be a diagonal matrix to make it computationally tractable. However,
this approach implicitly assumes parameters have no interactions, which may hurt the pruning
performance. Different from OBD, OBS makes use of the full Hessian matrix for pruning. It obtains
better performance while is much more computationally expensive even using Woodbury matrix
identity [14], which is an iterative method to compute the Hessian inverse. For example, using OBS
on VGG-16 naturally requires to compute inverse of the Hessian matrix with a size of 133M ? 133M.
Regarding pruning for modern deep models, Han et al. [9] proposed to delete unimportant parameters
based on magnitude of their absolute values, and retrain the remaining ones to recover the original
prediction performance. This method achieves considerable compression ratio in practice. However,
as pointed out by pioneer research work [12, 13], parameters with low magnitude of their absolute
values can be necessary for low error. Therefore, magnitude-based approaches may eliminate
wrong parameters, resulting in a big prediction performance drop right after pruning, and poor
robustness before retraining [15]. Though some variants have tried to find better magnitude-based
criteria [16, 17], the significant drop of prediction performance after pruning still remains. To avoid
pruning wrong parameters, Guo et al. [11] introduced a mask matrix to indicate the state of network
connection for dynamically pruning after each gradient descent step. Jin et al. [18] proposed an
iterative hard thresholding approach to re-activate the pruned parameters after each pruning phase.
Besides Net-trim, which is a layer-wise pruning method discussed in the previous section, there
is some other work proposed to induce sparsity or low-rank approximation on certain layers for
pruning [19, 20]. However, as the $\ell_0$-norm or the $\ell_1$-norm sparsity-inducing regularization term
increases the difficulty of optimization, the pruned deep neural networks using these methods either
obtain much smaller compression ratio [6] compared with direct pruning methods or require retraining
of the whole network to prevent accumulation of errors [10].
Optimal Brain Surgeon As our proposed layer-wise pruning method is an extension of OBS on
deep neural networks, we briefly review the basics of OBS here. Consider a network in terms of parameters $w$ trained to a local minimum in error. The functional Taylor series of the error w.r.t. $w$ is: $\delta E = \big(\frac{\partial E}{\partial w}\big)^\top \delta w + \frac{1}{2} \delta w^\top H \delta w + O(\|\delta w\|^3)$, where $\delta$ denotes a perturbation of a corresponding variable, $H \equiv \partial^2 E / \partial w^2 \in \mathbb{R}^{m \times m}$ is the Hessian matrix, where $m$ is the number of parameters, and $O(\|\delta w\|^3)$ is the third- and all higher-order terms. For a network trained to a local minimum in error, the first term vanishes, and the term $O(\|\delta w\|^3)$ can be ignored. In OBS, the goal is to set one of the parameters to zero, denoted by $w_q$ (scalar), to minimize $\delta E$ in each pruning iteration. The resultant optimization problem is written as follows,
$$\min_q \frac{1}{2} \delta w^\top H \delta w, \quad \text{s.t.} \quad e_q^\top \delta w + w_q = 0, \qquad (1)$$
where $e_q$ is the unit selecting vector whose $q$-th element is 1 and otherwise 0. As shown in [21], the optimization problem (1) can be solved by the Lagrange multipliers method. Note that a computational bottleneck of OBS is to calculate and store the non-diagonal Hessian matrix and its inverse, which makes it impractical for pruning deep models, which usually have a huge number of parameters.
3 Layer-wise Optimal Brain Surgeon
3.1 Problem Statement
Given a training set of $n$ instances, $\{(x_j, y_j)\}_{j=1}^{n}$, and a well-trained deep neural network of $L$ layers (excluding the input layer)¹, denote the input and the output of the whole deep neural network by $X = [x_1, \ldots, x_n] \in \mathbb{R}^{d \times n}$ and $Y \in \mathbb{R}^{n \times 1}$, respectively. For a layer $l$, we denote the input and output of the layer by $Y^{l-1} = [y_1^{l-1}, \ldots, y_n^{l-1}] \in \mathbb{R}^{m_{l-1} \times n}$ and $Y^l = [y_1^l, \ldots, y_n^l] \in \mathbb{R}^{m_l \times n}$, respectively, where $y_i^l$ can be considered as a representation of $x_i$ in layer $l$, and $Y^0 = X$, $Y^L = Y$, and $m_0 = d$. Using one forward-pass step, we have $Y^l = \sigma(Z^l)$, where $Z^l = W_l^\top Y^{l-1}$ with $W_l \in \mathbb{R}^{m_{l-1} \times m_l}$ being the matrix of parameters for layer $l$, and $\sigma(\cdot)$ is the activation function. For convenience in presentation and proof, we define the activation function $\sigma(\cdot)$ as the rectified linear unit (ReLU) [22]. We further denote by $\Theta_l \in \mathbb{R}^{m_{l-1} m_l \times 1}$ the vectorization of $W_l$. For a well-trained neural network, $Y^l$, $Z^l$ and $\Theta_l$ are all fixed matrices and contain most of the information of the neural network. The goal of pruning is to set the values of some elements in $\Theta_l$ to be zero.
3.2 Layer-Wise Error
During layer-wise pruning in layer $l$, the input $Y^{l-1}$ is fixed to be the same as in the well-trained network. Suppose we set the $q$-th element of $\Theta_l$, denoted by $\Theta_{l[q]}$, to be zero, and get a new parameter vector, denoted by $\hat{\Theta}_l$. With $Y^{l-1}$, we obtain a new output for layer $l$, denoted by $\hat{Y}^l$. Consider the root of the mean square error between $\hat{Y}^l$ and $Y^l$ over the whole training data as the layer-wise error:
$$\varepsilon^l = \sqrt{\frac{1}{n} \sum_{j=1}^{n} (\hat{y}_j^l - y_j^l)^\top (\hat{y}_j^l - y_j^l)} = \frac{1}{\sqrt{n}} \big\|\hat{Y}^l - Y^l\big\|_F, \qquad (2)$$
where $\|\cdot\|_F$ is the Frobenius norm.
¹For simplicity in presentation, we suppose the neural network is a feed-forward (fully-connected) network. In Section 3.4, we will show how to extend our method to filter layers in Convolutional Neural Networks.
Note that for any single parameter pruning, one can compute its error $\varepsilon_q^l$, where $1 \le q \le m_{l-1} m_l$, and use it as a pruning criterion. This idea has been adopted by some existing methods [15]. However, in this way, for each parameter at each layer, one has to pass the whole training data once to compute its error measure, which is very computationally expensive. A more efficient approach is to make use of the second order derivatives of the error function to help identify the importance of each parameter.
We first define an error function $E(\cdot)$ as
$$E^l = E(\hat{Z}^l) = \frac{1}{n} \big\|\hat{Z}^l - Z^l\big\|_F^2, \qquad (3)$$
where $Z^l$ is the outcome of the weighted-sum operation right before performing the activation function $\sigma(\cdot)$ at layer $l$ of the well-trained neural network, and $\hat{Z}^l$ is the outcome of the weighted-sum operation after pruning at layer $l$. Note that $Z^l$ is considered as the desired output of layer $l$ before activation. The following lemma shows that the layer-wise error is bounded by the error defined in (3).

Lemma 3.1. With the error function (3) and $Y^l = \sigma(Z^l)$, the following holds: $\varepsilon^l \le \sqrt{E(\hat{Z}^l)}$.

Therefore, finding parameters whose deletion (setting to zero) minimizes (2) can be translated into finding parameters whose deletion minimizes the error function (3). Following [12, 13], the error function can be approximated by a functional Taylor series as follows,
$$E(\hat{Z}^l) - E(Z^l) = \Delta E^l = \Big(\frac{\partial E^l}{\partial \Theta_l}\Big)^\top \Delta\Theta_l + \frac{1}{2} \Delta\Theta_l^\top H_l \Delta\Theta_l + O\big(\|\Delta\Theta_l\|^3\big), \qquad (4)$$
where $\Delta$ denotes a perturbation of a corresponding variable, $H_l \equiv \partial^2 E^l / \partial \Theta_l^2$ is the Hessian matrix w.r.t. $\Theta_l$, and $O(\|\Delta\Theta_l\|^3)$ is the third- and all higher-order terms. It can be proven that with the error function defined in (3), the first (linear) term evaluated at the well-trained parameters and $O(\|\Delta\Theta_l\|^3)$ are equal to 0.
?E l
function defined in (3), the first (linear) term ??l?l =??l and O(k??l k3 ) are equal to 0.
Suppose every time one aims to find a parameter ?l[q] to set to be zero such that the change ?E l is
minimal. Similar to OBS, we can formulate it as the following optimization problem:
1
min ??l > Hl ??l , s.t. e>
(5)
q ??l + ?l[q] = 0,
q 2
where eq is the unit selecting vector whose q-th element is 1 and otherwise 0. By using the Lagrange
multipliers method as suggested in [21], we obtain the closed-form solutions of the optimal parameter
pruning and the resultant minimal change in the error function as follows,
?l[q]
1 (?l[q] )2
l
??l = ? ?1 H?1
e
,
and
L
=
?E
=
.
(6)
q
q
2 [H?1
[Hl ]qq l
l ]qq
Here Lq is referred to as the sensitivity of parameter ?l[q] . Then we select parameters to prune based
on their sensitivity scores instead of their magnitudes. As mentioned in section 2, magnitude-based
criteria which merely consider the numerator in (6) is a poor estimation of sensitivity of parameters.
Moreover, in (6), as the inverse Hessian matrix over the training data is involved, it is able to capture
data distribution when measuring sensitivities of parameters.
After pruning the parameter $\Theta_{l[q]}$ with the smallest sensitivity, the parameter vector is updated via $\hat{\Theta}_l = \Theta_l + \Delta\Theta_l$. With Lemma 3.1 and (6), we have that the layer-wise error for layer $l$ is bounded by
$$\varepsilon_q^l \le \sqrt{E(\hat{Z}^l)} = \sqrt{E(\hat{Z}^l) - E(Z^l)} = \sqrt{\Delta E^l} = \frac{|\Theta_{l[q]}|}{\sqrt{2 [H_l^{-1}]_{qq}}}. \qquad (7)$$
Note that the first equality is obtained from the fact that $E(Z^l) = 0$. It is worth mentioning that though we merely focus on layer $l$, the Hessian matrix is still a square matrix of size $m_{l-1} m_l \times m_{l-1} m_l$. However, we will show how to significantly reduce the computation of $H_l^{-1}$ for each layer in Section 3.4.
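For a single weight, the closed-form update (6) is a few lines of numpy; the sketch below (our own toy illustration, with a random positive-definite Hessian) prunes the least-sensitive parameter and adjusts the rest:

```python
import numpy as np

def obs_prune_one(theta, H_inv):
    """One step of Eqs. (5)-(6): pick q with the smallest sensitivity L_q
    and apply the closed-form compensation to the remaining weights."""
    diag = np.diag(H_inv)
    L = 0.5 * theta**2 / diag                    # L_q = theta_q^2 / (2 [H^-1]_qq)
    q = int(np.argmin(L))
    delta = -(theta[q] / diag[q]) * H_inv[:, q]  # Delta Theta = -theta_q/[H^-1]_qq * H^-1 e_q
    return theta + delta, q, L[q]

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = A @ A.T + 1e-3 * np.eye(6)                   # toy positive-definite Hessian
theta = rng.normal(size=6)
theta_new, q, Lq = obs_prune_one(theta, np.linalg.inv(H))
print(q, abs(theta_new[q]) < 1e-12)              # pruned coordinate is (numerically) zero
```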
3.3 Layer-Wise Error Propagation and Accumulation
So far, we have shown how to prune parameters for each layer, and estimate their introduced errors independently. However, our aim is to control the consistency of the network's final output $Y^L$ before and after pruning. To do this, in the following, we show how the layer-wise errors propagate to the final output layer, and that the accumulated error over multiple layers will not explode.

Theorem 3.2. Given a deep network pruned via the layer-wise pruning introduced in Section 3.2, where each layer has its own layer-wise error $\varepsilon^l$ for $1 \le l \le L$, the accumulated error of the ultimate network output $\tilde{\varepsilon}^L = \frac{1}{\sqrt{n}} \|\tilde{Y}^L - Y^L\|_F$ obeys:
$$\tilde{\varepsilon}^L \le \sum_{k=1}^{L-1} \Big( \prod_{l=k+1}^{L} \|\hat{\Theta}_l\|_F \sqrt{E^k} \Big) + \sqrt{E^L}, \qquad (8)$$
where $\tilde{Y}^l = \sigma(\hat{W}_l^\top \tilde{Y}^{l-1})$, for $2 \le l \le L$, denotes the "accumulated pruned output" of layer $l$, and $\tilde{Y}^1 = \sigma(\hat{W}_1^\top X)$.
Theorem 3.2 shows that: 1) the layer-wise error for a layer $l$ will be scaled by the continued multiplication of the parameters' Frobenius norms over the following layers when it propagates to the final output, i.e., the $L - l$ layers after the $l$-th layer; 2) the final error of the ultimate network output is bounded by the weighted sum of the layer-wise errors. The proof of Theorem 3.2 can be found in the Appendix.
Consider a general case with (6) and (8): the parameter $\Theta_{l[q]}$ that has the smallest sensitivity in layer $l$ is pruned by the $i$-th pruning operation, and this finally adds $\prod_{k=l+1}^{L} \|\hat{\Theta}_k\|_F \sqrt{E^l}$ to the ultimate network output error. It is worth mentioning that although the layer-wise error seems to be scaled by a quite large product factor, $S_l = \prod_{k=l+1}^{L} \|\hat{\Theta}_k\|_F$, when it propagates to the final layer, this scaling is still tractable in practice because the ultimate network output is also scaled by the same product factor compared with the output of layer $l$. For example, we can easily estimate the norm of the ultimate network output via $\|Y^L\|_F \approx S_1 \|Y^1\|_F$. If one pruning operation in the 1st layer causes the layer-wise error $\sqrt{E^1}$, then the relative ultimate output error is
$$\tilde{\varepsilon}_r^L = \frac{\|\tilde{Y}^L - Y^L\|_F}{\|Y^L\|_F} \approx \frac{\sqrt{E^1}}{\|\frac{1}{\sqrt{n}} Y^1\|_F}.$$
Thus, we can see that even though $S_1$ may be quite large, the relative ultimate output error would still be about $\sqrt{E^1} / \|\frac{1}{\sqrt{n}} Y^1\|_F$, which is controllable in practice, especially since most modern deep networks adopt a maxout layer [23] as the ultimate output. Actually, $S_0$ is called the network gain, representing the ratio of the magnitude of the network output to the magnitude of the network input.
3.4 The Proposed Algorithm
3.4.1 Pruning on Fully-Connected Layers
To selectively prune parameters, our approach needs to compute the inverse Hessian matrix at each
layer to measure the sensitivities of each parameter of the layer, which is still computationally
expensive though tractable. In this section, we present an efficient algorithm that can reduce the size
of the Hessian matrix and thus speed up computation on its inverse.
For each layer $l$, according to the definition of the error function used in Lemma 3.1, the first derivative of the error function with respect to $\hat{\Theta}_l$ is $\frac{\partial E^l}{\partial \hat{\Theta}_l} = -\frac{1}{n} \sum_{j=1}^{n} \frac{\partial \hat{z}_j^l}{\partial \hat{\Theta}_l} (z_j^l - \hat{z}_j^l)$, where $\hat{z}_j^l$ and $z_j^l$ are the $j$-th columns of the matrices $\hat{Z}^l$ and $Z^l$, respectively, and the Hessian matrix is defined as:
$$H_l \equiv \frac{\partial^2 E^l}{\partial (\hat{\Theta}_l)^2} = \frac{1}{n} \sum_{j=1}^{n} \Bigg( \frac{\partial \hat{z}_j^l}{\partial \hat{\Theta}_l} \Big(\frac{\partial \hat{z}_j^l}{\partial \hat{\Theta}_l}\Big)^\top - \frac{\partial^2 \hat{z}_j^l}{\partial (\hat{\Theta}_l)^2} (z_j^l - \hat{z}_j^l)^\top \Bigg).$$
Note that in most cases $\hat{z}_j^l$ is quite close to $z_j^l$, so we simply ignore the term containing $z_j^l - \hat{z}_j^l$. Even in the late stage of pruning, when this difference is not small, we can still ignore the corresponding term [13]. For a layer $l$ that has $m_l$ output units, $z_j^l = [z_{1j}^l, \ldots, z_{m_l j}^l]$, the Hessian matrix can be calculated via
$$H_l = \frac{1}{n} \sum_{j=1}^{n} H_l^j = \frac{1}{n} \sum_{j=1}^{n} \sum_{i=1}^{m_l} \frac{\partial z_{ij}^l}{\partial \Theta_l} \Big(\frac{\partial z_{ij}^l}{\partial \Theta_l}\Big)^\top, \qquad (9)$$
Figure 1: Illustration of the shape of the Hessian. For feed-forward neural networks, unit $z_1$ gets its activation via forward propagation: $z = W^\top y$, where $W \in \mathbb{R}^{4 \times 3}$, $y = [y_1, y_2, y_3, y_4]^\top \in \mathbb{R}^{4 \times 1}$, and $z = [z_1, z_2, z_3]^\top \in \mathbb{R}^{3 \times 1}$. Then the Hessian matrix of $z_1$ w.r.t. all parameters, $H^{[z_1]} \in \mathbb{R}^{12 \times 12}$, has elements that are zero except for those corresponding to $W_{\cdot 1}$ (the 1st column of $W$), which form the block $H_{11} \in \mathbb{R}^{4 \times 4}$. $H^{[z_2]}$ and $H^{[z_3]}$ are similar. More importantly, $H^{-1} = \operatorname{diag}(H_{11}^{-1}, H_{22}^{-1}, H_{33}^{-1})$, and $H_{11} = H_{22} = H_{33}$. As a result, one only needs to compute $H_{11}^{-1}$ to obtain $H^{-1}$, which significantly reduces computational complexity.
where the Hessian matrix for a single instance $j$ at layer $l$, $H_l^j$, is a block diagonal square matrix of size $m_{l-1} m_l$. Specifically, the gradient of the first output unit $z_{1j}^l$ w.r.t. $\Theta_l$ is $\frac{\partial z_{1j}^l}{\partial \Theta_l} = \big[\frac{\partial z_{1j}^l}{\partial w_1}, \ldots, \frac{\partial z_{1j}^l}{\partial w_{m_l}}\big]$, where $w_i$ is the $i$-th column of $W_l$. As $z_{1j}^l$ is the layer output before the activation function, its gradient is simple to calculate; more importantly, all output units' gradients are equal to the layer input: $\frac{\partial z_{ij}^l}{\partial w_k} = y_j^{l-1}$ if $k = i$, and $\frac{\partial z_{ij}^l}{\partial w_k} = 0$ otherwise. An illustrated example is shown in Figure 1, where we omit the scripts $j$ and $l$ for simplicity in presentation.
It can be shown that the diagonal blocks $H_{l_{ii}}^j \in \mathbb{R}^{m_{l-1} \times m_{l-1}}$ of the block diagonal square matrix $H_l^j$, where $1 \le i \le m_l$, are all equal to $\Psi_j^l = y_j^{l-1} (y_j^{l-1})^\top$, and the inverse Hessian matrix $H_l^{-1}$ is also a block diagonal square matrix with its diagonal blocks being $\big(\frac{1}{n} \sum_{j=1}^{n} \Psi_j^l\big)^{-1}$. In addition, normally $\Psi^l = \frac{1}{n} \sum_{j=1}^{n} \Psi_j^l$ is degenerate and its pseudo-inverse can be calculated recursively via the Woodbury matrix identity [13]:
$$\big(\Psi_{j+1}^l\big)^{-1} = \big(\Psi_j^l\big)^{-1} - \frac{\big(\Psi_j^l\big)^{-1} y_{j+1}^{l-1} \big(y_{j+1}^{l-1}\big)^\top \big(\Psi_j^l\big)^{-1}}{n + \big(y_{j+1}^{l-1}\big)^\top \big(\Psi_j^l\big)^{-1} y_{j+1}^{l-1}},$$
where $\Psi_t^l = \frac{1}{t} \sum_{j=1}^{t} \Psi_j^l$ with $(\Psi_0^l)^{-1} = \alpha I$, $\alpha \in [10^4, 10^8]$, and $(\Psi^l)^{-1} = (\Psi_n^l)^{-1}$. The size of $\Psi^l$ is then reduced to $m_{l-1}$, and the computational complexity of calculating $H_l^{-1}$ is $O(n m_{l-1}^2)$.
To make the estimated minimal change of the error function optimal in (6), the layer-wise Hessian matrices need to be exact. Since the layer-wise Hessian matrices only depend on the corresponding layer inputs, they are always able to be exact even after several pruning operations. The only quantity we need to control is the layer-wise error $\varepsilon^l$. Note that there may be a "pruning inflection point" after which the layer-wise error would drop dramatically. In practice, the user can incrementally increase the number of pruned parameters based on the sensitivity $L_q$, and make a trade-off between the pruning ratio and the performance drop to set a proper tolerable error threshold or pruning ratio.
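A compact numpy rendering of the Woodbury recursion above is given below (a minimal sketch under our own naming; alpha is taken from the paper's stated range):

```python
import numpy as np

def psi_pseudo_inverse(Y_prev, alpha=1e6):
    """Recursive (pseudo-)inverse of Psi^l = (1/n) sum_j y_j y_j^T via the
    Woodbury identity. Y_prev: (m_{l-1}, n) matrix of layer-l inputs."""
    m, n = Y_prev.shape
    Psi_inv = alpha * np.eye(m)              # (Psi_0^l)^{-1} = alpha I
    for j in range(n):
        y = Y_prev[:, j:j + 1]               # column y_j^{l-1}
        Py = Psi_inv @ y
        Psi_inv = Psi_inv - (Py @ Py.T) / (n + float(y.T @ Py))
    return Psi_inv                           # approximates (Psi^l)^+

# sanity check against a direct (pseudo-)inverse
rng = np.random.default_rng(0)
Y = rng.normal(size=(8, 200))
print(np.allclose(psi_pseudo_inverse(Y), np.linalg.pinv(Y @ Y.T / 200), atol=1e-2))
```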
The procedure of our pruning algorithm for a fully-connected layer $l$ is summarized as follows.
Step 1: Get the layer input $Y^{l-1}$ from a well-trained deep network.
Step 2: Calculate the Hessian matrix $H_{l_{ii}}$, for $i = 1, \ldots, m_l$, and its pseudo-inverse over the dataset, and get the whole pseudo-inverse of the Hessian matrix.
Step 3: Compute the optimal parameter change $\Delta\Theta_l$ and the sensitivity $L_q$ for each parameter at layer $l$. Set the tolerable error threshold $\epsilon$.
Step 4: Pick the parameter $\Theta_{l[q]}$ with the smallest sensitivity score.
Step 5: If $\sqrt{L_q} \le \epsilon$, prune the parameter $\Theta_{l[q]}$, get new parameter values via $\hat{\Theta}_l = \Theta_l + \Delta\Theta_l$, and then repeat Step 4; otherwise stop pruning.
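Putting Steps 1-5 together for one fully-connected layer, a simplified sketch follows (our own code, not the authors' release; it prunes a fixed fraction per output unit instead of using an explicit threshold, and clamping already-pruned weights to zero is a simplification of the constrained update):

```python
import numpy as np

def prune_fc_layer(W, Y_prev, frac=0.5, alpha=1e6):
    """Simplified L-OBS for one fully-connected layer.
    W: (m_prev, m_l) weights; Y_prev: (m_prev, n) layer inputs."""
    m_prev, n = Y_prev.shape
    Psi_inv = np.linalg.pinv(Y_prev @ Y_prev.T / n + (1.0 / alpha) * np.eye(m_prev))
    d = np.diag(Psi_inv)                     # [H_l^{-1}]_qq, shared by every block
    W_new, mask = W.copy(), np.zeros_like(W, dtype=bool)
    per_col = int(frac * W.shape[0])
    for i in range(W.shape[1]):              # one Hessian block per output unit
        for _ in range(per_col):
            L = np.where(mask[:, i], np.inf, 0.5 * W_new[:, i]**2 / d)  # Eq. (6)
            q = int(np.argmin(L))
            W_new[:, i] -= (W_new[q, i] / d[q]) * Psi_inv[:, q]         # compensate
            mask[q, i] = True
            W_new[mask[:, i], i] = 0.0       # keep pruned weights at exactly zero
    return W_new

rng = np.random.default_rng(0)
W, Y = rng.normal(size=(16, 8)), rng.normal(size=(16, 64))
print(np.mean(prune_fc_layer(W, Y) == 0.0))  # about half of the weights are zero
```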
3.4.2 Pruning on Convolutional Layers
It is straightforward to generalize our method to a convolutional layer and its variants if we vectorize the filters of each channel and consider them as special fully-connected layers that have multiple inputs (patches) from a single instance. Consider a vectorized filter $w_i$ of channel $i$, $1 \le i \le m_l$: it acts similarly to parameters which are connected to the same output unit in a fully-connected layer. However, the difference is that for a single input instance $j$, every filter step of a sliding window across it will extract a patch $C_j^{n'}$ from the input volume. Similarly, each pixel $z_{ij}^{l_{n'}}$ in the 2-dimensional activation map that gives the response to each patch corresponds to one output unit in a fully-connected layer. Hence, for convolutional layers, (9) is generalized as
$$H_l = \frac{1}{n} \sum_{j=1}^{n} \sum_{i=1}^{m_l} \sum_{n'} \frac{\partial z_{ij}^{l_{n'}}}{\partial [w_1, \ldots, w_{m_l}]} \left( \frac{\partial z_{ij}^{l_{n'}}}{\partial [w_1, \ldots, w_{m_l}]} \right)^\top,$$
where $H_l$ is a block diagonal square matrix whose diagonal blocks are all the same. Then, we can slightly revise the computation of the Hessian matrix and extend the algorithm for fully-connected layers to convolutional layers.
Note that the accumulated error of the ultimate network output can be linearly bounded by the layer-wise errors as long as the model is feed-forward. Thus, L-OBS is a general pruning method, compatible with most feed-forward neural networks whose layer-wise Hessians can be computed expediently with slight modifications. However, if models have sizable layers like ResNet-101, L-OBS may not be economical because of the computational cost of the Hessian, which will be studied in our future work.
4 Experiments
In this section, we verify the effectiveness of our proposed Layer-wise OBS (L-OBS) using various
architectures of deep neural networks in terms of compression ratio (CR), error rate before retraining,
and the number of iterations required for retraining to resume satisfactory performance. CR is defined
as the ratio of the number of preserved parameters to that of the original parameters; lower is better.
We compare L-OBS with the following pruning approaches: 1) random pruning, 2) OBD [12], 3) LWC [9], 4) DNS [11], and 5) Net-Trim [6]. The deep architectures used for
experiments include: LeNet-300-100 [2] and LeNet-5 [2] on the MNIST dataset, CIFAR-Net2 [24]
on the CIFAR-10 dataset, AlexNet [25] and VGG-16 [3] on the ImageNet ILSVRC-2012 dataset. For
experiments, we first well-train the networks, and apply various pruning approaches on networks to
evaluate their performance. The retraining batch size, crop method and other hyper-parameters are
under the same setting as used in LWC. Note that to make comparisons fair, we do not adopt any
other pruning related methods like Dropout or sparse regularizers on MNIST. In practice, L-OBS can
work well along with these techniques as shown on CIFAR-10 and ImageNet.
4.1 Overall Comparison Results
The overall comparison results are shown in Table 1. In the first set of experiments, we prune each layer of the well-trained LeNet-300-100 with compression ratios 6.7%, 20% and 65%, achieving a slightly better overall compression ratio (7%) than LWC (8%). Under a comparable compression ratio, L-OBS suffers a much smaller drop of performance (before retraining) and needs much lighter retraining compared with LWC, whose performance is almost ruined by pruning. The classic pruning approach OBD is also compared, though we observe that the Hessian matrices of most modern deep models are strongly non-diagonal in practice. Besides the relatively heavy cost of obtaining the second derivatives via the chain rule, OBD suffers a drastic drop of performance when directly applied to modern deep models.
To properly prune each layer of LeNet-5, we increase the tolerable error threshold from a relatively small initial value to incrementally prune more parameters, monitor the model performance, and stop pruning and fix the threshold once we encounter the "pruning inflection point" mentioned in Section 3.4. In practice, we prune each layer of LeNet-5 with compression ratios 54%, 43%, 6% and 25%, and retrain the pruned model with much fewer iterations than other methods (around 1:1000).
²A revised AlexNet for CIFAR-10 containing three convolutional layers and two fully-connected layers.
Table 1: Overall comparison results. (For iterative L-OBS, err. after pruning regards the last pruning stage.)

| Method | Networks | Original error | CR | Err. after pruning | Re-Error | #Re-Iters. |
|---|---|---|---|---|---|---|
| Random | LeNet-300-100 | 1.76% | 8% | 85.72% | 2.25% | 3.50 x 10^5 |
| OBD | LeNet-300-100 | 1.76% | 8% | 86.72% | 1.96% | 8.10 x 10^4 |
| LWC | LeNet-300-100 | 1.76% | 8% | 81.32% | 1.95% | 1.40 x 10^5 |
| DNS | LeNet-300-100 | 1.76% | 1.8% | - | 1.99% | 3.40 x 10^4 |
| L-OBS | LeNet-300-100 | 1.76% | 7% | 3.10% | 1.82% | 510 |
| L-OBS (iterative) | LeNet-300-100 | 1.76% | 1.5% | 2.43% | 1.96% | 643 |
| OBD | LeNet-5 | 1.27% | 8% | 86.72% | 2.65% | 2.90 x 10^5 |
| LWC | LeNet-5 | 1.27% | 8% | 89.55% | 1.36% | 9.60 x 10^4 |
| DNS | LeNet-5 | 1.27% | 0.9% | - | 1.36% | 4.70 x 10^4 |
| L-OBS | LeNet-5 | 1.27% | 7% | 3.21% | 1.27% | 740 |
| L-OBS (iterative) | LeNet-5 | 1.27% | 0.9% | 2.04% | 1.66% | 841 |
| LWC | CIFAR-Net | 18.57% | 9% | 87.65% | 19.36% | 1.62 x 10^5 |
| L-OBS | CIFAR-Net | 18.57% | 9% | 21.32% | 18.76% | 1020 |
| DNS | AlexNet (Top-1/Top-5 err.) | 43.30/20.08% | 5.7% | - | 43.91/20.72% | 7.30 x 10^5 |
| LWC | AlexNet (Top-1/Top-5 err.) | 43.30/20.08% | 11% | 76.14/57.68% | 44.06/20.64% | 5.04 x 10^6 |
| L-OBS | AlexNet (Top-1/Top-5 err.) | 43.30/20.08% | 11% | 50.04/26.87% | 43.11/20.01% | 1.81 x 10^4 |
| DNS | VGG-16 (Top-1/Top-5 err.) | 31.66/10.12% | 7.5% | - | 63.38/38.69% | 1.07 x 10^6 |
| LWC | VGG-16 (Top-1/Top-5 err.) | 31.66/10.12% | 7.5% | 73.61/52.64% | 32.43/11.12% | 2.35 x 10^7 |
| L-OBS (iterative) | VGG-16 (Top-1/Top-5 err.) | 31.66/10.12% | 7.5% | 37.32/14.82% | 32.02/10.97% | 8.63 x 10^4 |
As DNS retrains the pruned
network after every pruning operation, we are not able to report its error rate of the pruned network
before retraining. However, as can be seen, similar to LWC, the total number of iterations used by
DNS for rebooting the network is very large compared with L-OBS. Results of retraining iterations
of DNS are reported from [11] and the other experiments are implemented based on TensorFlow [26].
In addition, in scenarios requiring a high pruning ratio, L-OBS can be flexibly adapted into an iterative version, which performs pruning and light retraining alternately to obtain a higher pruning ratio at a relatively higher pruning cost. With two iterations of pruning and retraining, L-OBS is able to achieve the same pruning ratio as DNS with much lighter total retraining: 643 iterations on LeNet-300-100 and 841 iterations on LeNet-5.
Regarding the comparison experiments on CIFAR-Net, we first well-train it to achieve a testing error of 18.57% with Dropout and Batch-Normalization. We then prune the well-trained network with LWC and L-OBS, and obtain similar results to those on the other network architectures. We also observe that LWC and other retraining-required methods always require a much smaller learning rate in retraining. This is because the representation capability of the pruned networks, which have much fewer parameters, is damaged during pruning, based on the principle that the number of parameters is an important factor for representation capability. However, L-OBS can still adopt the original learning rate to retrain the pruned networks. Under this consideration, L-OBS not only ensures a warm start for retraining, but also finds the important connections (parameters) and preserves the representation capability of the pruned network instead of ruining the model with pruning.
Regarding AlexNet, L-OBS achieves an overall compression ratio of 11% without loss of accuracy, with 2.9 hours on 48 Intel Xeon(R) CPU E5-1650 cores to compute Hessians and 3.1 hours on an NVIDIA Titan X GPU to retrain the pruned model (i.e., 18.1K iterations). The computational cost of the Hessian inverse in L-OBS is negligible compared with that of the heavy retraining in other methods. This claim can also be supported by an analysis of time complexity. As mentioned in Section 3.4, the time complexity of calculating $H_l^{-1}$ is $O(n m_{l-1}^2)$. Assume that neural networks are retrained via SGD; then the approximate time complexity of retraining is $O(IdM)$, where $d$ is the size of the mini-batch, and $M$ and $I$ are the total numbers of parameters and iterations, respectively. Considering that $M \approx \sum_{l=1}^{L} m_{l-1}^2$, and that retraining in other methods always requires millions of iterations ($Id \gg n$) as shown in the experiments, the complexity of calculating the Hessian (inverse) in L-OBS is quite economical. More interestingly, there is a trade-off between compression ratio and pruning (including retraining) cost. Compared with other methods, L-OBS is able to provide fast compression: it prunes AlexNet to 16% of its original size without substantively impacting accuracy (pruned top-5 error 20.98%) even without any retraining. We further apply L-OBS to VGG-16, which has 138M parameters. To achieve a more promising compression ratio, we perform pruning and retraining alternately twice. As can be seen from the table, L-OBS achieves an overall compression ratio of 7.5% without loss of accuracy, taking 10.2 hours in total on 48 Intel Xeon(R) CPU E5-1650 cores to compute the Hessian inverses and 86.3K iterations to retrain the pruned model.
Figure 2: (a) Top-5 test accuracy of L-OBS on ResNet-50 under different compression ratios. (b) Memory comparison between L-OBS and Net-Trim on MNIST.
Table 2: Comparison of Net-Trim and Layer-wise OBS on the second layer of LeNet-300-100.

| Method | $\tilde{\varepsilon}_r^2$ | Pruned Error | CR |
|---|---|---|---|
| Net-Trim | 0.13 | 13.24% | 19% |
| L-OBS | 0.70 | 11.34% | 3.4% |
| L-OBS | 0.71 | 10.83% | 3.8% |
| Net-Trim | 0.62 | 28.45% | 7.4% |
| L-OBS | 0.37 | 4.56% | 7.4% |
| Net-Trim | 0.71 | 47.69% | 4.2% |
We also apply L-OBS on ResNet-50 [27]. To the best of our knowledge, this is the first work to perform pruning on ResNet. We perform pruning on all the layers: all layers share the same compression ratio, and we change this compression ratio in each experiment. The results are shown in Figure 2(a). As we can see, L-OBS is able to maintain ResNet's accuracy (above 85%) when the compression ratio is larger than or equal to 45%.
4.2 Comparison between L-OBS and Net-Trim
As our proposed L-OBS is inspired by Net-Trim, which adopts the $\ell_1$-norm to induce sparsity, we conduct comparison experiments between these two methods. In Net-Trim, networks are pruned by formulating layer-wise pruning as an optimization problem: $\min_{W_l} \|W_l\|_1$ s.t. $\|\sigma(W_l^\top Y^{l-1}) - Y^l\|_F \le \epsilon^l$, where $\epsilon^l$ corresponds to $\tilde{\varepsilon}_r^l \|Y^l\|_F$ in L-OBS. Due to the memory limitation of Net-Trim, we only prune the middle layer of LeNet-300-100 with L-OBS and Net-Trim under the same setting. As shown in Table 2, under the same pruned error rate, the compression achieved by L-OBS exceeds that of Net-Trim by about six times. In addition, Net-Trim encounters an explosion of memory and time on large-scale datasets and large-size parameters. Specifically, the space complexity of the positive semidefinite matrix $Q$ in the quadratic constraints used in Net-Trim for optimization is $O(2n m_l^2 m_{l-1})$. For example, $Q$ requires about 65.7 GB for 1,000 samples on MNIST, as illustrated in Figure 2(b). Moreover, Net-Trim is designed for multi-layer perceptrons, and it is not clear how to deploy it on convolutional layers.
5 Conclusion
We have proposed a novel L-OBS pruning framework to prune parameters based on second-order
derivative information of the layer-wise error function, and provided a theoretical guarantee on the
overall error in terms of the reconstructed errors for each layer. Our proposed L-OBS can prune a
considerable number of parameters with a tiny drop of performance and can reduce or even omit retraining.
More importantly, it identifies and preserves the really important parts of networks when pruning,
compared with previous methods, which may help to dive into the nature of neural networks.
Acknowledgements
This work is supported by NTU Singapore Nanyang Assistant Professorship (NAP) grant
M4081532.020, Singapore MOE AcRF Tier-2 grant MOE2016-T2-2-060, and Singapore MOE
AcRF Tier-1 grant 2016-T1-001-159.
References
[1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[2] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[4] Luisa de Vivo, Michele Bellesi, William Marshall, Eric A Bushong, Mark H Ellisman, Giulio Tononi, and Chiara Cirelli. Ultrastructural evidence for synaptic scaling across the wake/sleep cycle. Science, 355(6324):507–510, 2017.
[5] Misha Denil, Babak Shakibi, Laurent Dinh, Nando de Freitas, et al. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[6] A. Aghasi, N. Nguyen, and J. Romberg. Net-trim: A layer-wise convex pruning of deep neural networks. Journal of Machine Learning Research, 2016.
[7] Russell Reed. Pruning algorithms—a survey. IEEE Transactions on Neural Networks, 4(5):740–747, 1993.
[8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[9] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
[10] Yi Sun, Xiaogang Wang, and Xiaoou Tang. Sparsifying neural network connections for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4856–4864, 2016.
[11] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient DNNs. In Advances in Neural Information Processing Systems, pages 1379–1387, 2016.
[12] Yann LeCun, John S Denker, Sara A Solla, Richard E Howard, and Lawrence D Jackel. Optimal brain damage. In NIPS, volume 2, pages 598–605, 1989.
[13] Babak Hassibi, David G Stork, et al. Second order derivatives for network pruning: Optimal brain surgeon. Advances in Neural Information Processing Systems, pages 164–171, 1993.
[14] Thomas Kailath. Linear Systems, volume 156. Prentice-Hall, Englewood Cliffs, NJ, 1980.
[15] Nikolas Wolfe, Aditya Sharma, Lukas Drude, and Bhiksha Raj. The incredible shrinking neural network: New perspectives on learning representations through the lens of pruning. arXiv preprint arXiv:1701.04465, 2017.
[16] Hengyuan Hu, Rui Peng, Yu-Wing Tai, and Chi-Keung Tang. Network trimming: A data-driven neuron pruning approach towards efficient deep architectures. arXiv preprint arXiv:1607.03250, 2016.
[17] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016.
[18] Xiaojie Jin, Xiaotong Yuan, Jiashi Feng, and Shuicheng Yan. Training skinny deep neural networks with iterative hard thresholding methods. arXiv preprint arXiv:1607.05423, 2016.
[19] Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.
[20] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 806–814, 2015.
[21] R Tyrrell Rockafellar. Convex Analysis. Princeton Landmarks in Mathematics, 1997.
[22] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In AISTATS, volume 15, page 275, 2011.
[23] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, and Yoshua Bengio. Maxout networks. ICML (3), 28:1319–1327, 2013.
[24] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[25] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[26] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.
[27] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
Accelerated First-order Methods for Geodesically
Convex Optimization on Riemannian Manifolds
Yuanyuan Liu1, Fanhua Shang1∗, James Cheng1, Hong Cheng2, Licheng Jiao3
1
Dept. of Computer Science and Engineering, The Chinese University of Hong Kong
2
Dept. of Systems Engineering and Engineering Management,
The Chinese University of Hong Kong, Hong Kong
3
Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education,
School of Artificial Intelligence, Xidian University, China
{yyliu, fhshang, jcheng}@cse.cuhk.edu.hk; [email protected];
[email protected]
Abstract
In this paper, we propose an accelerated first-order method for geodesically convex
optimization, which is the generalization of the standard Nesterov's accelerated
method from Euclidean space to nonlinear Riemannian space. We first derive two
equations and obtain two nonlinear operators for geodesically convex optimization
instead of the linear extrapolation step in Euclidean space. In particular, we analyze
the global convergence properties of our accelerated method for geodesically
strongly-convex problems, which show that our method improves the convergence
rate from O((1 − μ/L)^k) to O((1 − √(μ/L))^k). Moreover, our method also improves
the global convergence rate on geodesically general convex problems from O(1/k)
to O(1/k²). Finally, we give a specific iterative scheme for matrix Karcher mean
problems, and validate our theoretical results with experiments.
1 Introduction
In this paper, we study the following Riemannian optimization problem:
min f(x)  such that  x ∈ X ⊆ M,   (1)
where (M, ρ) denotes a Riemannian manifold with the Riemannian metric ρ, X ⊆ M is a nonempty,
compact, geodesically convex set, and f: X → R is geodesically convex (G-convex) and geodesically
L-smooth (G-L-smooth). Here, G-convex functions may be non-convex in the usual Euclidean space
but convex along the manifold, and thus can be solved by a global optimization solver. [5] presented
G-convexity and G-convex optimization on geodesic metric spaces, though without any attention
to global complexity analysis. As discussed in [11], the topic of "geometric programming" may be
viewed as a special case of G-convex optimization. [25] developed theoretical tools to recognize
and generate G-convex functions as well as cone theoretic fixed point optimization algorithms.
However, none of these three works provided a global convergence rate analysis for their algorithms.
Very recently, [31] provided the global complexity analysis of first-order algorithms for G-convex
optimization, and designed the following Riemannian gradient descent rule:
x_{k+1} = Exp_{x_k}(−η grad f(x_k)),
where k is the iterate index, Exp_{x_k} is an exponential map at x_k (see Section 2 for details), η is a
step-size or learning rate, and grad f(x_k) is the Riemannian gradient of f at x_k ∈ X.
∗ Corresponding author.
In this paper, we extend the Nesterov?s accelerated gradient descent method [19] from Euclidean
space to nonlinear Riemannian space. Below, we first introduce the Nesterov?s method and its variants
for convex optimization on Euclidean space, which can be viewed as a special case of our method,
when M = Rd , and % is the Euclidean inner product. Nowadays many real-world applications involve
large data sets. As data sets and problems are getting larger in size, accelerating first-order methods
is of both practical and theoretical interests. The earliest first-order method for minimizing a convex
function f is perhaps the gradient method. Thirty years ago, Nesterov [19] proposed an accelerated
function f is perhaps the gradient method. Thirty years ago, Nesterov [19] proposed an accelerated
gradient method, which takes the following form: starting with x_0 and y_0 = x_0, and for any k ≥ 1,
x_k = y_{k-1} − η∇f(y_{k-1}),   y_k = x_k + γ_k(x_k − x_{k-1}),   (2)
where 0 ≤ γ_k ≤ 1 is the momentum parameter. For a fixed step-size η = 1/L, where L is the
Lipschitz constant of ∇f, this scheme with γ_k = (k−1)/(k+2) exhibits the optimal convergence
rate, f(x_k) − f(x*) ≤ O(L‖x_0 − x*‖²/k²), for general convex (or non-strongly convex) problems [20],
where x* is any minimizer of f. In contrast, standard gradient descent methods can only achieve
a convergence rate of O(1/k). We can see that this improvement relies on the introduction of the
momentum term γ_k(x_k − x_{k-1}) as well as the particularly tuned coefficient (k−1)/(k+2) ≈ 1 − 3/k.
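For reference, scheme (2) can be written in a few lines of NumPy; `grad_f` is an assumed gradient oracle, and the loop is a minimal sketch rather than a tuned implementation:

```python
import numpy as np

def nesterov_agd(grad_f, x0, L, num_iters):
    """Nesterov's scheme (2) for general convex f with step-size 1/L."""
    eta = 1.0 / L
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    for k in range(1, num_iters + 1):
        x = y - eta * grad_f(y)                  # gradient step at y_{k-1}
        gamma = (k - 1.0) / (k + 2.0)            # tuned momentum coefficient
        y = x + gamma * (x - x_prev)             # linear extrapolation step
        x_prev = x
    return x_prev
```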
Inspired by the success of Nesterov's momentum, there has been much work on the development
of first-order accelerated methods; see [2, 8, 21, 26, 27] for example. In addition, for strongly convex
problems and setting γ_k ≡ (1 − √(μ/L))/(1 + √(μ/L)), Nesterov's accelerated gradient method attains
a convergence rate of O((1 − √(μ/L))^k), while standard gradient descent methods achieve a linear
convergence rate of O((1 − μ/L)^k). It is then natural to ask whether our accelerated method in
nonlinear Riemannian space has the same convergence rates as its Euclidean space counterparts (e.g.,
Nesterov's accelerated method [20]).
1.1 Motivation and Challenges
Zhang and Sra [31] proposed an efficient Riemannian gradient descent (RGD) method, which
attains the convergence rates of O((1 − μ/L)^k) and O(1/k) for geodesically strongly-convex and
geodesically convex problems, respectively. Hence, there still remain gaps in convergence rates
between RGD and Nesterov's accelerated method.
As discussed in [31], a long-standing question is whether the famous Nesterov accelerated gradient
descent algorithm has a counterpart in nonlinear Riemannian spaces. Compared with standard
gradient descent methods in Euclidean space, Nesterov's accelerated gradient method involves a
linear extrapolation step: y_k = x_k + γ_k(x_k − x_{k-1}), which can improve its convergence rates for both
strongly convex and non-strongly convex problems. It is clear that ψ_k(x) := f(y_k) + ⟨∇f(y_k), x − y_k⟩
is a linear function in Euclidean space, while its counterpart in nonlinear space, e.g., ψ_k(x) :=
f(y_k) + ⟨grad f(y_k), Exp^{-1}_{y_k}(x)⟩_{y_k}, is a nonlinear function, where Exp^{-1}_{y_k} is the inverse of the
exponential map Exp_{y_k}, and ⟨·,·⟩_y is the inner product (see Section 2 for details). Therefore, in
nonlinear Riemannian spaces, there is no trivial analogy of such a linear extrapolation step. In
other words, although Riemannian geometry provides tools that enable generalization of the Euclidean
algorithms mentioned above [1], we must overcome some fundamental geometric hurdles to analyze
the global convergence properties of our accelerated method as in [31].
1.2 Contributions
To answer the above-mentioned open problem in [31], in this paper we propose a general accelerated
first-order method for nonlinear Riemannian spaces, which is in essence the generalization of the
standard Nesterov's accelerated method. We summarize the key contributions of this paper as follows.
• We first present a general Nesterov-style accelerated iterative scheme in nonlinear Riemannian
spaces, where the linear extrapolation step in (2) is replaced by a nonlinear operator. Furthermore, we derive two equations and obtain two corresponding nonlinear operators for
both geodesically strongly-convex and geodesically convex cases, respectively.
• We provide the global convergence analysis of our accelerated algorithms, which shows
that our algorithms attain the convergence rates of O((1 − √(μ/L))^k) and O(1/k²) for
geodesically strongly-convex and geodesically convex objectives, respectively.
• Finally, we present a specific iterative scheme for matrix Karcher mean problems. Our
experimental results verify the effectiveness and efficiency of our accelerated method.
2 Notation and Preliminaries
We first introduce some key notations and definitions about Riemannian geometry (see [23, 30] for
details). A Riemannian manifold (M, ρ) is a real smooth manifold M equipped with a Riemannian
metric ρ. Let ⟨w₁, w₂⟩_x = ρ_x(w₁, w₂) denote the inner product of w₁, w₂ ∈ T_x M; and the norm of
w ∈ T_x M is defined as ‖w‖_x = √(ρ_x(w, w)), where the metric ρ induces an inner product structure
in each tangent space T_x M associated with every x ∈ M. A geodesic is a constant-speed curve
γ: [0, 1] → M that is locally distance minimizing. Let y ∈ M and w ∈ T_x M; then an exponential
map y = Exp_x(w): T_x M → M maps w to y on M, such that there is a geodesic γ with γ(0) = x,
γ(1) = y and γ'(0) = w. If there is a unique geodesic between any two points in X ⊆ M, the
exponential map has inverse Exp^{-1}_x: X → T_x M, i.e., w = Exp^{-1}_x(y), and the geodesic is the unique
shortest path with ‖Exp^{-1}_x(y)‖_x = ‖Exp^{-1}_y(x)‖_y = d(x, y), where d(x, y) is the geodesic distance
between x, y ∈ X. Parallel transport Γ_x^y: T_x M → T_y M maps a vector w ∈ T_x M to Γ_x^y w ∈ T_y M,
and preserves inner products and norms, that is, ⟨w₁, w₂⟩_x = ⟨Γ_x^y w₁, Γ_x^y w₂⟩_y and ‖w₁‖_x = ‖Γ_x^y w₁‖_y,
where w₁, w₂ ∈ T_x M.
For any x, y ∈ X and any geodesic γ with γ(0) = x, γ(1) = y and γ(t) ∈ X for t ∈ [0, 1] such
that f(γ(t)) ≤ (1 − t)f(x) + t f(y), then f is geodesically convex (G-convex), and an equivalent
definition is formulated as follows:
f(y) ≥ f(x) + ⟨grad f(x), Exp^{-1}_x(y)⟩_x,
where grad f(x) is the Riemannian gradient of f at x. A function f: X → R is called geodesically
μ-strongly convex (μ-strongly G-convex) if for any x, y ∈ X, the following inequality holds:
f(y) ≥ f(x) + ⟨grad f(x), Exp^{-1}_x(y)⟩_x + (μ/2)‖Exp^{-1}_x(y)‖²_x.
A differentiable function f is geodesically L-smooth (G-L-smooth) if its gradient is L-Lipschitz, i.e.,
f(y) ≤ f(x) + ⟨grad f(x), Exp^{-1}_x(y)⟩_x + (L/2)‖Exp^{-1}_x(y)‖²_x.
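As a concrete illustration of these maps (our example, not the paper's), on the unit sphere S^{d-1} ⊂ R^d both the exponential map and its inverse have simple closed forms:

```python
import numpy as np

def sphere_exp(x, w):
    """Exp_x(w) on the unit sphere; x is a unit vector, w a tangent at x."""
    nw = np.linalg.norm(w)
    if nw < 1e-12:
        return x
    return np.cos(nw) * x + np.sin(nw) * (w / nw)

def sphere_log(x, y):
    """Exp_x^{-1}(y): the tangent at x pointing along the geodesic to y."""
    p = y - np.dot(x, y) * x                   # project y onto T_x S^{d-1}
    norm_p = np.linalg.norm(p)
    if norm_p < 1e-12:
        return np.zeros_like(x)
    d = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))  # geodesic distance
    return d * (p / norm_p)
```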
An Accelerated Method for Geodesically Convex Optimization
In this section, we propose a general acceleration method for geodesically convex optimization,
which can be viewed as a generalization of the famous Nesterov?s accelerated method from Euclidean
space to Riemannian space. Nesterov?s accelerated method involves a linear extrapolation step as
in (2), while in nonlinear Riemannian spaces, we do not have a simple way to find an analogy to
such a linear extrapolation. Therefore, some standard analysis techniques do not work in nonlinear
space. Motivated by this, we derive two equations to bridge the gap for both geodesically stronglyconvex and geodesically convex cases, and then generalized Nesterov?s algorithms are proposed for
geodesically convex optimization by solving these two equations.
We first propose to replace the classical Nesterov scheme in (2) with the following update rules for
geodesically convex optimization in Riemannian space:
x_k = Exp_{y_{k-1}}(−η grad f(y_{k-1})),   y_k = S(y_{k-1}, x_k, x_{k-1}),   (3)
where y_k, x_k ∈ X, S denotes a nonlinear operator, and y_k = S(y_{k-1}, x_k, x_{k-1}) can be obtained by
solving the two proposed equations (see (4) and (5) below, which can be used to deduce the key
analysis tools for our convergence analysis) for the strongly G-convex and general G-convex cases, respectively. Different from the Riemannian gradient descent rule (e.g., x_{k+1} = Exp_{x_k}(−η grad f(x_k))),
the Nesterov accelerated technique is introduced into our update rule for y_k. Compared with the
Nesterov scheme in (2), the main difference is the update rule for y_k. That is, our update rule for y_k
is an implicit iteration process, as shown below, while that of (2) is an explicit one.
Figure 1: Illustration of geometric interpretation for Equations (4) and (5).
Algorithm 1 Accelerated method for strongly G-convex optimization
Input: μ, L
Initialize: x₀, y₀, η.
1: for k = 1, 2, . . . , K do
2:   Compute the gradient at y_{k-1}: g_{k-1} = grad f(y_{k-1});
3:   x_k = Exp_{y_{k-1}}(−η g_{k-1});
4:   y_k = S(y_{k-1}, x_k, x_{k-1}) by solving (4).
5: end for
Output: x_K
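A schematic rendering of this loop in code may help; `exp_map`, `grad_f`, and `solve_yk` (a solver for the nonlinear equation (4)) are placeholders supplied by the user, not part of the paper's released code:

```python
def accelerated_g_convex(exp_map, grad_f, solve_yk, x0, y0, eta, num_iters):
    """Sketch of Algorithm 1; all manifold primitives are assumed given."""
    x_prev, y_prev = x0, y0
    for k in range(1, num_iters + 1):
        g = grad_f(y_prev)                 # g_{k-1} = grad f(y_{k-1})
        x = exp_map(y_prev, -eta * g)      # x_k = Exp_{y_{k-1}}(-eta g_{k-1})
        y = solve_yk(y_prev, x, x_prev)    # y_k from equation (4)
        x_prev, y_prev = x, y
    return x_prev
```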
3.1 Geodesically Strongly Convex Cases
We first design the following equation with respect to y_k ∈ X for the μ-strongly G-convex case:
(1 − √(μ/L)) Γ_{y_k}^{y_{k-1}} Exp^{-1}_{y_k}(x_k) − β Γ_{y_k}^{y_{k-1}} grad f(y_k) = (1 − √(μ/L))^{3/2} Exp^{-1}_{y_{k-1}}(x_{k-1}),   (4)
where β = 4/√(μL) − 1/L > 0. Figure 1(a) illustrates the geometric interpretation of the proposed
equation (4) for the strongly G-convex case, where u_k = (1 − √(μ/L)) Exp^{-1}_{y_k}(x_k), v_k = −β grad f(y_k),
and w_{k-1} = (1 − √(μ/L))^{3/2} Exp^{-1}_{y_{k-1}}(x_{k-1}). The vectors u_k, v_k ∈ T_{y_k}M are parallel transported
to T_{y_{k-1}}M, and the sum of their parallel translations is equal to w_{k-1} ∈ T_{y_{k-1}}M, which means
that equation (4) holds. We design an accelerated first-order algorithm for solving geodesically
strongly-convex problems, as shown in Algorithm 1. In real applications, the proposed equation
(4) can be manipulated into simpler forms. For example, we will give a specific equation for the
problem of averaging real symmetric positive definite matrices below.
3.2 Geodesically Convex Cases
Let f be G-convex and G-L-smooth, and let the diameter of X be bounded by D (i.e., max_{x,y∈X} d(x, y) ≤ D); the variable y_k ∈ X can be obtained by solving the following equation:
Γ_{y_k}^{y_{k-1}} [ (k/(α−1)) Exp^{-1}_{y_k}(x_k) − D ĝ_k ] = ((k−1)/(α−1)) Exp^{-1}_{y_{k-1}}(x_{k-1}) − D ĝ_{k-1} + ((k+α−2)η/(α−1)) g_{k-1},   (5)
where g_{k-1} = grad f(y_{k-1}), ĝ_k = g_k/‖g_k‖_{y_k}, and α ≥ 3 is a given constant. Figure 1(b)
illustrates the geometric interpretation of the proposed equation (5) for the G-convex case, where
u_k = (k/(α−1)) Exp^{-1}_{y_k}(x_k) − D ĝ_k, and v_{k-1} = ((k+α−2)η/(α−1)) g_{k-1}. We also present an accelerated first-order
algorithm for solving geodesically convex problems, as shown in Algorithm 2.
3.3 Key Lemmas
For the Nesterov accelerated scheme in (2) with γ_k = (k−1)/(k+2) (for example, the general convex case)
in Euclidean space, the following result in [3, 20] plays a key role in the convergence analysis of
Nesterov's accelerated algorithm:
⟨∇f(y_k), z_k − x*⟩ − ((k+2)η/4) ‖∇f(y_k)‖²_2 = (1/((k+2)η)) ( ‖z_k − x*‖²_2 − ‖z_{k+1} − x*‖²_2 ),   (6)
where z_k = ((k+2)/2) y_k − (k/2) x_k (so that z_{k+1} = z_k − ((k+2)η/2)∇f(y_k), and (6) follows by expanding ‖z_{k+1} − x*‖²_2).
Algorithm 2 Accelerated method for general G-convex optimization
Input: L, D, α
Initialize: x₀, y₀, η.
1: for k = 1, 2, . . . , K do
2:   Compute the gradient at y_{k-1}: g_{k-1} = grad f(y_{k-1}) and ĝ_{k-1} = g_{k-1}/‖g_{k-1}‖_{y_{k-1}};
3:   x_k = Exp_{y_{k-1}}(−η g_{k-1});
4:   y_k = S(y_{k-1}, x_k, x_{k-1}) by solving (5).
5: end for
Output: x_K
Correspondingly, we can also obtain the following analysis tools
for our convergence analysis using the proposed equations (4) and (5). In other words, the following
equations (7) and (8) can be viewed as the Riemannian space counterparts of (6).
Lemma 1 (Strongly G-convex). If f: X → R is geodesically μ-strongly convex and G-L-smooth,
and {y_k} satisfies equation (4), and z_k is defined as follows:
z_k = (1 − √(μ/L)) Exp^{-1}_{y_k}(x_k) ∈ T_{y_k}M.
Then the following results hold:
Γ_{y_k}^{y_{k-1}}(z_k − β grad f(y_k)) = (1 − √(μ/L))^{1/2} z_{k-1},
−⟨grad f(y_k), z_k⟩_{y_k} + (β/2)‖grad f(y_k)‖²_{y_k} = (1/(2β))(1 − √(μ/L))‖z_{k-1}‖²_{y_{k-1}} − (1/(2β))‖z_k‖²_{y_k}.   (7)
For general G-convex objectives, we have the following result.
Lemma 2 (General G-convex). If f: X → R is G-convex and G-L-smooth, the diameter of X is
bounded by D, and {y_k} satisfies equation (5), and z_k is defined as
z_k = (k/(α−1)) Exp^{-1}_{y_k}(x_k) − D ĝ_k ∈ T_{y_k}M.
Then the following results hold:
Γ_{y_{k+1}}^{y_k} z_{k+1} = z_k + ((k+α−1)η/(α−1)) grad f(y_k),
⟨grad f(y_k), −z_k⟩_{y_k} − ((k+α−1)η/(2(α−1)))‖grad f(y_k)‖²_{y_k} = ((α−1)/(2η(k+α−1)))( ‖z_k‖²_{y_k} − ‖z_{k+1}‖²_{y_{k+1}} ).   (8)
The proofs of Lemmas 1 and 2 are provided in the Supplementary Materials.
4 Convergence Analysis
In this section, we analyze the global convergence properties of the proposed algorithms (i.e.,
Algorithms 1 and 2) for both geodesically strongly convex and general convex problems.
Lemma 3. If f: X → R is G-convex and G-L-smooth for any x ∈ X, and {x_k} is the sequence
produced by Algorithm 1 or 2 with η ≤ 1/L, then the following result holds:
f(x_{k+1}) ≤ f(x) + ⟨grad f(y_k), −Exp^{-1}_{y_k}(x)⟩_{y_k} − (η/2)‖grad f(y_k)‖²_{y_k}.
The proof of this lemma can be found in the Supplementary Materials. For the geodesically strongly
convex case, we have the following result.
Theorem 1 (Strongly G-convex). Let x* be the optimal solution of Problem (1), and {x_k} be the
sequence produced by Algorithm 1. If f: X → R is geodesically μ-strongly convex and G-L-smooth,
then the following result holds:
f(x_{k+1}) − f(x*) ≤ (1 − √(μ/L))^k [ f(x_0) − f(x*) + (1/(2η))(1 − √(μ/L))‖z_0‖²_{y_0} ],
where z_0 is defined in Lemma 1.
Table 1: Comparison of convergence rates for geodesically convex optimization algorithms.

Algorithms | Strongly G-convex and smooth | General G-convex and smooth
RGD [31]   | O((1 − min{1/c, μ/L})^k)     | O(1/k)
RSGD [31]  | O((√c + c)/(√c + k))         | O(1/√k)
Ours       | O((1 − √(μ/L))^k)            | O(1/k²)
The proof of Theorem 1 can be found in the Supplementary Materials. From this theorem, we can see
that the proposed algorithm attains a linear convergence rate of O((1 − √(μ/L))^k) for geodesically
strongly convex problems, which is the same as that of its Euclidean space counterparts and significantly faster than that of non-accelerated algorithms such as [31] (i.e., O((1 − μ/L)^k)), as shown in
Table 1. For the geodesically non-strongly convex case, we have the following result.
Theorem 2 (General G-convex). Let {x_k} be the sequence produced by Algorithm 2. If f: X → R
is G-convex and G-L-smooth, and the diameter of X is bounded by D, then
f(x_{k+1}) − f(x*) ≤ ((α−1)²/(2η(k+α−2)²)) ‖z_0‖²_{y_0},
where z_0 = −D ĝ_0, as defined in Lemma 2.
The proof of Theorem 2 can be found in the Supplementary Materials. Theorem 2 shows that for
general G-convex objectives, our acceleration method improves the theoretical convergence rate from
O(1/k) (e.g., RGD [31]) to O(1/k²), which matches the optimal rate for general convex settings in
Euclidean space. Please see the details in Table 1, where the parameter c is defined in [31].
5 Application for Matrix Karcher Mean Problems
In this section, we give a specific accelerated scheme for a type of conic geometric optimization
problem [25], namely the matrix Karcher mean problem. Specifically, the loss function of the Karcher
mean problem for a set of N symmetric positive definite (SPD) matrices {W_i}_{i=1}^N is defined as
f(X) := (1/(2N)) Σ_{i=1}^N ‖log(X^{-1/2} W_i X^{-1/2})‖²_F,   (9)
where X ∈ P := {Z ∈ R^{d×d} : Z = Z^⊤ ≻ 0}. The loss function f is known to be non-convex
in Euclidean space but geodesically 2N-strongly convex. The inner product of two tangent vectors at
a point X on the manifold is given by
⟨ξ, χ⟩_X = tr(ξ X^{-1} χ X^{-1}),  ξ, χ ∈ T_X P,   (10)
where tr(·) is the trace of a real square matrix. For any matrices X, Y ∈ P, the Riemannian distance
is defined as follows:
d(X, Y) = ‖log(X^{-1/2} Y X^{-1/2})‖_F.
5.1 Computation of Y_k
For the accelerated update rules in (3) for Algorithm 1, we need to compute Y_k by solving
equation (4). However, for the specific problem in (9) with the inner product in (10), we can derive a
simpler form for Y_k, as shown below. We first give the following properties:
Property 1. For the loss function f in (9) with the inner product in (10), we have
1. Exp^{-1}_{Y_k}(X_k) = Y_k^{1/2} log(Y_k^{-1/2} X_k Y_k^{-1/2}) Y_k^{1/2};
2. grad f(Y_k) = (1/N) Σ_{i=1}^N Y_k^{1/2} log(Y_k^{1/2} W_i^{-1} Y_k^{1/2}) Y_k^{1/2};
3. ⟨grad f(Y_k), Exp^{-1}_{Y_k}(X_k)⟩_{Y_k} = ⟨U, V⟩;
4. ‖grad f(Y_k)‖²_{Y_k} = ‖U‖²_F,
where U = (1/N) Σ_{i=1}^N log(Y_k^{1/2} W_i^{-1} Y_k^{1/2}) ∈ R^{d×d}, and V = log(Y_k^{-1/2} X_k Y_k^{-1/2}) ∈ R^{d×d}.
Proof. In this part, we only provide the proof of Result 1 in Property 1; the proofs of the other
results are provided in the Supplementary Materials. The inner product (10) on the Riemannian
manifold leads to the following exponential map:
Exp_X(ξ_X) = X^{1/2} exp(X^{-1/2} ξ_X X^{-1/2}) X^{1/2},   (11)
where ξ_X ∈ T_X P denotes a tangent vector under this geometry, and tangent vectors ξ_X are expressed
as follows (see [17] for details):
ξ_X = X^{1/2} sym(Δ) X^{1/2},  Δ ∈ R^{d×d},
where sym(·) extracts the symmetric part of its argument, that is, sym(A) = (A^⊤ + A)/2. Then we
can set Exp^{-1}_{Y_k}(X_k) = Y_k^{1/2} sym(Δ_{X_k}) Y_k^{1/2} ∈ T_{Y_k}P. By the definition of Exp^{-1}_{Y_k}(X_k), we have
Exp_{Y_k}(Exp^{-1}_{Y_k}(X_k)) = X_k, that is,
Exp_{Y_k}(Y_k^{1/2} sym(Δ_{X_k}) Y_k^{1/2}) = X_k.   (12)
Using (11) and (12), we have
sym(Δ_{X_k}) = log(Y_k^{-1/2} X_k Y_k^{-1/2}) ∈ R^{d×d}.
Therefore, we have
Exp^{-1}_{Y_k}(X_k) = Y_k^{1/2} sym(Δ_{X_k}) Y_k^{1/2} = Y_k^{1/2} log(Y_k^{-1/2} X_k Y_k^{-1/2}) Y_k^{1/2} = −Y_k log(X_k^{-1} Y_k),
where the last equality holds due to the fact that log(X^{-1} Y X) = X^{-1} log(Y) X.
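Putting (9), (11) and Property 1 together, a plain (non-accelerated) Riemannian gradient descent baseline for the Karcher mean can be sketched in a few lines; the SciPy matrix functions, step-size, and arithmetic-mean initialization are our assumptions for illustration, not the paper's code:

```python
import numpy as np
from scipy.linalg import logm, expm, sqrtm, inv

def karcher_mean_rgd(Ws, eta=0.5, num_iters=50):
    """RGD for (9): X <- Exp_X(-eta * grad f(X)) under the metric (10)."""
    X = sum(Ws) / len(Ws)                        # arithmetic-mean initialization
    for _ in range(num_iters):
        Xh = np.real(sqrtm(X))                   # X^{1/2}
        # X^{-1/2} grad f(X) X^{-1/2} = (1/N) sum_i log(X^{1/2} W_i^{-1} X^{1/2})
        M = sum(np.real(logm(Xh @ inv(W) @ Xh)) for W in Ws) / len(Ws)
        # Exp_X(xi) = X^{1/2} exp(X^{-1/2} xi X^{-1/2}) X^{1/2}, with xi = -eta*grad
        X = Xh @ expm(-eta * M) @ Xh
    return X
```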
Result 3 in Property 1 shows that the inner product of two tangent vectors at Y_k is equal to the
Euclidean inner product of the two matrices U, V ∈ R^{d×d}. Thus, we can reformulate (4) as follows:
(1 − √(μ/L)) log(Y_k^{-1/2} X_k Y_k^{-1/2}) − (β/N) Σ_{i=1}^N log(Y_k^{1/2} W_i^{-1} Y_k^{1/2}) = (1 − √(μ/L))^{3/2} log(Y_{k-1}^{-1/2} X_{k-1} Y_{k-1}^{-1/2}),   (13)
where β = 4/√(μL) − 1/L. Then Y_k can be obtained by solving (13). From a numerical perspective,
log(Y_k^{1/2} W_i^{-1} Y_k^{1/2}) can be approximated by log(Y_{k-1}^{1/2} W_i^{-1} Y_{k-1}^{1/2}), and then Y_k is given by
Y_k = X_k^{1/2} exp( −[ (1 − √(μ/L))^{1/2} log(Y_{k-1}^{-1/2} X_{k-1} Y_{k-1}^{-1/2}) + (αβ/N) Σ_{i=1}^N log(Y_{k-1}^{1/2} W_i^{-1} Y_{k-1}^{1/2}) ] ) X_k^{1/2},   (14)
where α = 1/(1 − √(μ/L)), and Y_k ∈ P.
6 Experiments
In this section, we validate the performance of our accelerated method for averaging SPD matrices
under the Riemannian metric, e.g., the matrix Karcher mean problem (9), and also compare our
method against the state-of-the-art methods: Riemannian gradient descent (RGD) [31] and limited-memory Riemannian BFGS (LRBFGS) [29]. The matrix Karcher mean problem has been widely
applied to many real-world applications such as elasticity [18], radar signal and image processing [6,
15, 22], and medical imaging [9, 7, 13]. In fact, this problem is geodesically strongly convex, but
non-convex in Euclidean space.
Other methods for solving this problem include the relaxed Richardson iteration algorithm [10],
the approximated joint diagonalization algorithm [12], and Riemannian stochastic gradient descent
(RSGD) [31]. Since all three methods achieve similar performance to RGD, especially in data
science applications where N is large and relatively small optimization error is not required [31], we
only report the experimental results of RGD. The step-size η of both RGD and LRBFGS is selected
with a line search method as in [29] (see [29] for details), while η of our accelerated method is set to
1/L. For all algorithms, we initialize X using the arithmetic mean of the data set as in [29].
Figure 2: Comparison of RGD, LRBFGS and our accelerated method for solving geodesically
strongly convex Karcher mean problems on data sets with d = 100 (the first row) and d = 200 (the
second row). The vertical axis represents the distance dist(X*, X_k) in log scale, and the horizontal axis denotes the
number of iterations (left) or running time in seconds (right).
The input synthetic data are random SPD matrices of size 100×100 or 200×200 generated by using
the technique in [29] or the matrix mean toolbox [10], and all matrices are explicitly normalized so
that their norms are all equal to 1. We report the experimental results of RGD, LRBFGS and our
accelerated method on the two data sets in Figure 2, where N is set to 100, and the condition number
C of each matrix {W_i}_{i=1}^N is set to 10². Figure 2 shows the evolution of the distance between the
exact Karcher mean and the current iterate (i.e., dist(X*, X_k)) of the methods with respect to the number
of iterations and running time (seconds), where X* is the exact Karcher mean. We can observe that
our method consistently converges much faster than RGD, which empirically verifies our theoretical
result in Theorem 1 that our accelerated method has a much faster convergence rate than RGD.
Although LRBFGS outperforms our method in terms of number of iterations, our accelerated method
converges much faster than LRBFGS in terms of running time.
7 Conclusions
In this paper, we proposed a general Nesterov-style accelerated gradient method for nonlinear Riemannian
space, which is a generalization of the famous Nesterov accelerated method for Euclidean space.
We derived two equations and presented two accelerated algorithms for geodesically strongly-convex
and general convex optimization problems, respectively. In particular, our theoretical results show
that our accelerated method attains the same convergence rates as the standard Nesterov accelerated
method in Euclidean space for both the strongly G-convex and G-convex cases. Finally, we presented a
special iteration scheme for solving matrix Karcher mean problems, which are in essence non-convex
in Euclidean space, and the numerical results verify the efficiency of our accelerated method.
We can extend our accelerated method to the stochastic setting using variance reduction techniques [14,
16, 24, 28], and apply our method to solve more geodesically convex problems in the future, e.g.,
the general G-convex problem with a non-smooth regularization term as in [4]. In addition, we
can replace the exponential mapping by computationally cheaper retractions together with corresponding
theoretical guarantees [31]. An interesting direction of future work is to design accelerated schemes
for non-convex optimization in Riemannian space.
Acknowledgments
This research is supported in part by Grants (CUHK 14206715 & 14222816) from the Hong Kong
RGC, the Major Research Plan of the National Natural Science Foundation of China (Nos. 91438201
and 91438103), and the National Natural Science Foundation of China (No. 61573267).
References
[1] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2009.
[2] Z. Allen-Zhu. Katyusha: The first direct acceleration of stochastic gradient methods. In STOC, pages 1200–1205, 2017.
[3] H. Attouch and J. Peypouquet. The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim., 26:1824–1834, 2015.
[4] D. Azagra and J. Ferrera. Inf-convolution and regularization of convex functions on Riemannian manifolds of nonpositive curvature. Rev. Mat. Complut., 2006.
[5] M. Bacak. Convex Analysis and Optimization in Hadamard Spaces. Walter de Gruyter GmbH & Co KG, 2014.
[6] F. Barbaresco. New foundation of radar Doppler signal processing based on advanced differential geometry of symmetric spaces: Doppler matrix CFAR radar application. In RADAR, 2009.
[7] P. G. Batchelor, M. Moakher, D. Atkinson, F. Calamante, and A. Connelly. A rigorous framework for diffusion tensor calculus. Magn. Reson. Med., 53:221–225, 2005.
[8] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci., 2(1):183–202, 2009.
[9] R. Bhatia. Positive Definite Matrices, Princeton Series in Applied Mathematics. Princeton University Press, Princeton, NJ, 2007.
[10] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra Appl., 438:1700–1710, 2013.
[11] S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi. A tutorial on geometric programming. Optim. Eng., 8:67–127, 2007.
[12] M. Congedo, B. Afsari, A. Barachant, and M. Moakher. Approximate joint diagonalization and geometric mean of symmetric positive definite matrices. PLoS ONE, 10:e0121423, 2015.
[13] P. T. Fletcher and S. Joshi. Riemannian geometry for the statistical analysis of diffusion tensor data. Signal Process., 87:250–262, 2007.
[14] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[15] J. Lapuyade-Lahorgue and F. Barbaresco. Radar detection using Siegel distance between autoregressive processes, application to HF and X-band radar. In RADAR, 2008.
[16] Y. Liu, F. Shang, and J. Cheng. Accelerated variance reduced stochastic ADMM. In AAAI, pages 2287–2293, 2017.
[17] G. Meyer, S. Bonnabel, and R. Sepulchre. Regression on fixed-rank positive semidefinite matrices: A Riemannian approach. J. Mach. Learn. Res., 12:593–625, 2011.
[18] M. Moakher. On the averaging of symmetric positive-definite tensors. J. Elasticity, 82:273–296, 2006.
[19] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27:372–376, 1983.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publ., Boston, 2004.
[21] Y. Nesterov. Gradient methods for minimizing composite functions. Math. Program., 140:125–161, 2013.
[22] X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66:41–66, 2006.
[23] P. Petersen. Riemannian Geometry. Springer-Verlag, New York, 2016.
[24] F. Shang. Larger is better: The effect of learning rates enjoyed by stochastic optimization with progressive variance reduction. arXiv:1704.04966, 2017.
[25] S. Sra and R. Hosseini. Conic geometric optimization on the manifold of positive definite matrices. SIAM J. Optim., 25(1):713–739, 2015.
[26] W. Su, S. Boyd, and E. J. Candes. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. J. Mach. Learn. Res., 17:1–43, 2016.
[27] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. 2008.
[28] L. Xiao and T. Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM J. Optim., 24(4):2057–2075, 2014.
[29] X. Yuan, W. Huang, P.-A. Absil, and K. Gallivan. A Riemannian limited-memory BFGS algorithm for computing the matrix geometric mean. Procedia Computer Science, 80:2147–2157, 2016.
[30] H. Zhang, S. Reddi, and S. Sra. Riemannian SVRG: Fast stochastic optimization on Riemannian manifolds. In NIPS, pages 4592–4600, 2016.
[31] H. Zhang and S. Sra. First-order methods for geodesically convex optimization. In COLT, pages 1617–1638, 2016.
Selective Classification for Deep Neural Networks
Yonatan Geifman
Computer Science Department
Technion – Israel Institute of Technology
[email protected]
Ran El-Yaniv
Computer Science Department
Technion – Israel Institute of Technology
[email protected]
Abstract
Selective classification techniques (also known as the reject option) have not yet been
considered in the context of deep neural networks (DNNs). These techniques
can potentially improve DNNs' prediction performance significantly by trading off
coverage. In this paper we propose a method to construct a selective classifier
given a trained neural network. Our method allows a user to set a desired risk
given a trained neural network. Our method allows a user to set a desired risk
level. At test time, the classifier rejects instances as needed, to grant the desired
risk (with high probability). Empirical results over CIFAR and ImageNet convincingly demonstrate the viability of our method, which opens up possibilities to
operate DNNs in mission-critical applications. For example, using our method an
unprecedented 2% error in top-5 ImageNet classification can be guaranteed with
probability 99.9%, and almost 60% test coverage.
1 Introduction
While self-awareness remains an elusive, hard-to-define concept, a rudimentary kind of self-awareness,
which is much easier to grasp, is the ability to know what you don't know, which can make you
smarter. The subfield dealing with such capabilities in machine learning is called selective prediction
(also known as prediction with a reject option), which has been around for 60 years [1, 5]. The main
motivation for selective prediction is to reduce the error rate by abstaining from prediction when in
doubt, while keeping coverage as high as possible. An ultimate manifestation of selective prediction
is a classifier equipped with a 'dial' that allows for precise control of the desired true error rate
(which should be guaranteed with high probability), while keeping the coverage of the classifier as
high as possible.
Many present and future tasks performed by (deep) predictive models can be dramatically enhanced
by high quality selective prediction. Consider, for example, autonomous driving. Since we cannot
rely on the advent of 'singularity', where AI is superhuman, we must manage with standard machine
learning, which sometimes errs. But what if our deep autonomous driving network were capable of
knowing that it doesn't know how to respond in a certain situation, disengaging itself in advance and
alerting the human driver (hopefully not sleeping at that time) to take over? There are plenty of other
mission-critical applications that would likewise greatly benefit from effective selective prediction.
The literature on the reject option is quite extensive and mainly discusses rejection mechanisms for
various hypothesis classes and learning algorithms, such as SVM, boosting, and nearest-neighbors
[8, 13, 3]. The reject option has rarely been discussed in the context of neural networks (NNs), and
so far has not been considered for deep NNs (DNNs). Existing NN works consider a cost-based
rejection model [2, 4], whereby the costs of misclassification and abstaining must be specified, and a
rejection mechanism is optimized for these costs. The proposed mechanism for classification is based
on applying a carefully selected threshold on the maximal neuronal response of the softmax layer.
We call this mechanism softmax response (SR). The cost model can be very useful when we can
quantify the involved costs, but in many applications of interest meaningful costs are hard to reason about.
(Imagine trying to set up appropriate rejection/misclassification costs for disengaging an autopilot
driving system.) Here we consider the alternative risk-coverage view for selective classification
discussed in [5].
Ensemble techniques have been considered for selective (and confidence-rated) prediction, where
rejection mechanisms are typically based on the ensemble statistics [18, 7]. However, such techniques
are presently hard to realize in the context of DNNs, for which it could be very costly to train
sufficiently many ensemble members. Recently, Gal and Ghahramani [9] proposed an ensemble-like
method for measuring uncertainty in DNNs, which bypasses the need to train several ensemble
members. Their method works via sampling multiple dropout applications of the forward pass to
perturb the network prediction randomly. While this Monte-Carlo dropout (MC-dropout) technique
was not mentioned in the context of selective prediction, it can be directly applied as a viable selective
prediction method using a threshold, as we discuss here.
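As a hedged illustration of how MC-dropout yields a confidence-rate function, one can average T stochastic forward passes and score a point by the (negative) variance of its predicted class probability; the helper below assumes a `stochastic_forward` callable that keeps dropout active at test time, and is not taken from the paper:

```python
import numpy as np

def mc_dropout_confidence(stochastic_forward, x, T=100):
    """Higher return value = more confident; a sketch, not the paper's code."""
    probs = np.stack([stochastic_forward(x) for _ in range(T)])  # (T, k)
    top = probs.mean(axis=0).argmax()          # predicted class
    return -float(probs[:, top].var())         # low variance -> high confidence
```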
In this paper we consider classification tasks, and our goal is to learn a selective classifier (f, g),
where f is a standard classifier and g is a rejection function. The selective classifier has to allow
full guaranteed control over the true risk. The ideal method should be able to classify samples in
production with any desired level of risk with the optimal coverage rate. It is reasonable to assume
that this optimal performance can only be obtained if the pair (f, g) is trained together. As a first step,
however, we consider a simpler setting where a (deep) neural classifier f is already given, and our
goal is to learn a rejection function g that will guarantee with high probability a desired error rate.
To this end, we consider the above two known techniques for rejection (SR and MC-dropout), and
devise a learning method that chooses an appropriate threshold that ensures the desired risk. For a
given classifier f , confidence level ?, and desired risk r? , our method outputs a selective classifier
(f, g) whose test error will be no larger than r? with probability of at least 1 ? ?.
Using the well-known VGG-16 architecture, we apply our method on CIFAR-10, CIFAR-100 and
ImageNet (on ImageNet we also apply the ResNet-50 architecture). We show that both SR and
dropout lead to extremely effective selective classification. On both the CIFAR datasets, these two
mechanisms achieve nearly identical results. However, on ImageNet, the simpler SR mechanism
is significantly superior. More importantly, we show that almost any desirable risk level can be
guaranteed with a surprisingly high coverage. For example, an unprecedented 2% error in top-5
ImageNet classification can be guaranteed with probability 99.9%, and almost 60% test coverage.
2 Problem Setting
We consider a standard multi-class classification problem. Let X be some feature space (e.g., raw
image data) and Y a finite label set, Y = {1, 2, 3, . . . , k}, representing k classes. Let P(X, Y) be
a distribution over X × Y. A classifier f is a function f: X → Y, and the true risk of f w.r.t. P
is R(f|P) = E_{P(X,Y)}[ℓ(f(x), y)], where ℓ: Y × Y → R⁺ is a given loss function, for example
the 0/1 error. Given a labeled set S_m = {(x_i, y_i)}_{i=1}^m ⊆ (X × Y)^m sampled i.i.d. from P(X, Y), the
empirical risk of the classifier f is r̂(f|S_m) = (1/m) Σ_{i=1}^m ℓ(f(x_i), y_i).
A selective classifier [5] is a pair (f, g), where f is a classifier, and g : X ? {0, 1} is a selection
function, which serves as a binary qualifier for f as follows,
(f, g)(x) = f(x) if g(x) = 1, and (f, g)(x) = "don't know" if g(x) = 0.
Thus, the selective classifier abstains from prediction at a point x iff g(x) = 0. The performance
of a selective classifier is quantified using coverage and risk. Fixing P, coverage is defined to be
φ(f, g) = E_P[g(x)], the probability mass of the non-rejected region in X. The selective risk of
(f, g) is
R(f, g) = E_P[ℓ(f(x), y) g(x)] / φ(f, g).   (1)
Clearly, the risk of a selective classifier can be traded off for coverage. The entire performance profile
of such a classifier can be specified by its risk-coverage curve, defined to be risk as a function of
coverage [5].
Consider the following problem. We are given a classifier f, a training sample S_m, a confidence
parameter δ > 0, and a desired risk target r* > 0. Our goal is to use S_m to create a selection function
g such that the selective risk of (f, g) satisfies

Pr_{S_m} {R(f, g) > r*} < δ,    (2)

where the probability is over training samples S_m, sampled i.i.d. from the unknown underlying
distribution P. Among all classifiers satisfying (2), the best ones are those that maximize the coverage.
For a fixed f and a given class G (which will be discussed below), our goal in this paper is to select
g ∈ G such that the selective risk R(f, g) satisfies (2) while the coverage φ(f, g) is maximized.
3 Selection with Guaranteed Risk Control
In this section, we present a general technique for constructing a selection function with guaranteed
performance, based on a given classifier f and a confidence-rate function κ_f : X → R+ for f.
We do not assume anything about κ_f; the interpretation is that κ_f ranks instances by confidence,
in the sense that if κ_f(x1) ≤ κ_f(x2), for x1, x2 ∈ X, then the confidence in the prediction f(x1)
is not higher than the confidence in the prediction f(x2). In this section we are not concerned with
the question of what makes a good κ_f (which is discussed in Section 4); our goal is to generate
a selection function g with guaranteed performance for a given κ_f.
For the remainder of this paper, the loss function ℓ is taken to be the standard 0/1 loss (unless
explicitly mentioned otherwise). Let S_m = {(x_i, y_i)}_{i=1}^m ⊆ (X × Y)^m be a training set,
assumed to be sampled i.i.d. from an unknown distribution P(X, Y). Given also are a confidence
parameter δ > 0 and a desired risk target r* > 0. Based on S_m, our goal is to learn a selection
function g such that the selective risk of the classifier (f, g) satisfies (2).
For θ > 0, we define the selection function g_θ : X → {0, 1} as

g_θ(x) = g_θ(x | κ_f) ≜ 1,  if κ_f(x) ≥ θ;
                        0,  otherwise.    (3)
For any selective classifier (f, g), we define its empirical selective risk with respect to the labeled
sample S_m,

r̂(f, g | S_m) ≜ [ (1/m) Σ_{i=1}^m ℓ(f(x_i), y_i) g(x_i) ] / φ̂(f, g | S_m),

where φ̂ is the empirical coverage, φ̂(f, g | S_m) ≜ (1/m) Σ_{i=1}^m g(x_i). For any selection function g,
denote by g(S_m) the g-projection of S_m, g(S_m) ≜ {(x, y) ∈ S_m : g(x) = 1}.
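As a concrete rendering of these definitions (our own sketch; the function and variable names are not from the paper), the following NumPy snippet evaluates g_θ of (3) together with the empirical coverage and empirical selective risk on a labeled sample:

```python
import numpy as np

def empirical_selective_stats(losses, kappa, theta):
    """Empirical coverage and selective risk of the selector g_theta.

    losses : np.ndarray of shape (m,), 0/1 losses ell(f(x_i), y_i)
    kappa  : np.ndarray of shape (m,), confidence rates kappa_f(x_i)
    theta  : threshold; a point is accepted iff kappa >= theta (eq. (3))
    """
    g = (kappa >= theta).astype(float)
    coverage = g.mean()                      # empirical coverage phi-hat
    if coverage == 0.0:
        return 0.0, float("nan")             # vacuous selector: risk undefined
    risk = (losses * g).mean() / coverage    # empirical selective risk r-hat
    return coverage, risk
```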
The selection with guaranteed risk (SGR) learning algorithm appears in Algorithm 1. The algorithm
receives as input a classifier f, a confidence-rate function κ_f, a confidence parameter δ > 0, a target
risk r*,¹ and a training set S_m. The algorithm performs a binary search to find the optimal bound
guaranteeing the required risk with sufficient confidence. The SGR algorithm outputs a selective
classifier (f, g) and a risk bound b*. In the rest of this section we analyze the SGR algorithm. We
make use of the following lemma, which gives the tightest possible numerical generalization bound
for a single classifier, based on a test over a labeled sample.
Lemma 3.1 (Gascuel and Caraux, 1992, [10]) Let P be any distribution and consider a classifier
f whose true error w.r.t. P is R(f | P). Let 0 < δ < 1 be given and let r̂(f | S_m) be the empirical
error of f w.r.t. the labeled set S_m, sampled i.i.d. from P. Let B*(r̂, δ, S_m) be the solution b of
the following equation:

Σ_{j=0}^{⌊m·r̂(f|S_m)⌋} C(m, j) b^j (1 − b)^{m−j} = δ,    (4)

where C(m, j) denotes the binomial coefficient. Then, Pr_{S_m} {R(f | P) > B*(r̂, δ, S_m)} < δ.
We emphasize that the numerical bound of Lemma 3.1 is the tightest possible in this setting. As
discussed in [10], analytic bounds derived using, e.g., Hoeffding's inequality (or other concentration
inequalities) approximate this numerical bound and incur some slack.
¹ Whenever the triplet S_m, δ, and r* is infeasible, the algorithm will return a vacuous solution with zero
coverage.
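Equation (4) is the binomial CDF evaluated at ⌊m·r̂⌋, which decreases monotonically in b, so it can be solved to machine precision by bisection. The sketch below is our illustration (the paper does not prescribe an implementation) and uses SciPy's binomial CDF:

```python
import numpy as np
from scipy.stats import binom

def solve_bstar(r_hat, delta, m, tol=1e-12):
    """Solve eq. (4): find b with Binom(m, b).cdf(floor(m * r_hat)) = delta."""
    j = int(np.floor(m * r_hat))
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        b = 0.5 * (lo + hi)
        # The CDF decreases in b; a CDF above delta means b is still too small.
        if binom.cdf(j, m, b) > delta:
            lo = b
        else:
            hi = b
    return 0.5 * (lo + hi)
```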
Algorithm 1 Selection with Guaranteed Risk (SGR)
1: SGR(f, κ_f, δ, r*, S_m)
2: Sort S_m according to κ_f(x_i), x_i ∈ S_m (and now assume w.l.o.g. that indices reflect this ordering).
3: z_min = 1; z_max = m
4: for i = 1 to k = ⌈log₂ m⌉ do
5:    z = ⌈(z_min + z_max)/2⌉
6:    θ = κ_f(x_z)
7:    g_i = g_θ   {see (3)}
8:    r̂_i = r̂(f, g_i | S_m)
9:    b*_i = B*(r̂_i, δ/⌈log₂ m⌉, g_i(S_m))   {see Lemma 3.1}
10:   if b*_i < r* then
11:      z_max = z
12:   else
13:      z_min = z
14:   end if
15: end for
16: Output: (f, g_k) and the bound b*_k.
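Putting the pieces together, here is a compact NumPy paraphrase of Algorithm 1 (our rendering, not the authors' code), reusing solve_bstar from the sketch above; it returns the last feasible threshold found by the bisection:

```python
import numpy as np

def sgr(losses, kappa, delta, r_star):
    """Selection with Guaranteed Risk (Algorithm 1): returns (theta, b_star)."""
    m = len(losses)
    order = np.argsort(kappa)                 # sort by confidence rate
    losses, kappa = losses[order], kappa[order]
    k = int(np.ceil(np.log2(m)))
    z_min, z_max = 0, m - 1                   # zero-indexed search interval
    theta, b_star = kappa[-1], 1.0            # vacuous fallback
    for _ in range(k):
        z = (z_min + z_max + 1) // 2
        g = kappa >= kappa[z]
        cov = g.mean()
        r_hat = (losses * g).mean() / cov if cov > 0 else 0.0
        b = solve_bstar(r_hat, delta / k, int(g.sum()))
        if b < r_star:                        # feasible: try a lower threshold
            theta, b_star = kappa[z], b
            z_max = z
        else:                                 # infeasible: raise the threshold
            z_min = z
    return theta, b_star
```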
For any selection function g, let P_g(X, Y) be the projection of P over g; that is, P_g(X, Y) ≜
P(X, Y | g(X) = 1). The following theorem is a uniform convergence result for the SGR procedure.

Theorem 3.2 (SGR) Let S_m be a given labeled set, sampled i.i.d. from P, and consider an application
of the SGR procedure. For k = ⌈log₂ m⌉, let (f, g_i) and b*_i, i = 1, . . . , k, be the selective
classifiers and bounds computed by SGR in its ith iteration. Then,

Pr_{S_m} {∃i : R(f | P_{g_i}) > B*(r̂_i, δ/k, g_i(S_m))} < δ.
Proof Sketch: For any i = 1, . . . , k, let m_i = |g_i(S_m)| be the random variable giving the number of
accepted examples from S_m on the ith iteration of SGR. For any fixed value of 0 ≤ m_i ≤ m, by
Lemma 3.1, applied with the projected distribution P_{g_i}(X, Y) and a sample S_{m_i} consisting of
m_i examples drawn from the product distribution (P_{g_i})^{m_i},

Pr_{S_{m_i} ∼ (P_{g_i})^{m_i}} {R(f | P_{g_i}) > B*(r̂_i, δ/k, g_i(S_m))} < δ/k.    (5)

The sampling distribution of m_i labeled examples in SGR is determined by the following process:
sample a set S_m of m examples from the product distribution P^m and then use g_i to filter S_m,
resulting in a (random) number m_i of examples. Therefore, the left-hand side of (5) equals

Pr_{S_m ∼ P^m} {R(f | P_{g_i}) > B*(r̂_i, δ/k, g_i(S_m)) | g_i(S_m) = m_i}.

Clearly,

R(f | P_{g_i}) = E_{P_{g_i}} [ℓ(f(x), y)] = E_P [ℓ(f(x), y) g_i(x)] / φ(f, g_i) = R(f, g_i).

Therefore,

Pr_{S_m} {R(f, g_i) > B*(r̂_i, δ/k, g_i(S_m))}
   = Σ_{n=0}^{m} Pr_{S_m} {R(f, g_i) > B*(r̂_i, δ/k, g_i(S_m)) | g_i(S_m) = n} · Pr{g_i(S_m) = n}
   ≤ (δ/k) Σ_{n=0}^{m} Pr{g_i(S_m) = n} = δ/k.

An application of the union bound completes the proof.
4 Confidence-Rate Functions for Neural Networks
Consider a classifier f, assumed to be trained for some unknown distribution P. In this section we
consider two confidence-rate functions κ_f, based on previous work [9, 2]. We note that an ideal
confidence-rate function κ_f(x) for f should reflect true loss monotonicity: given (x1, y1) ∼ P and
(x2, y2) ∼ P, we would like the following to hold: κ_f(x1) ≤ κ_f(x2) if and only if ℓ(f(x1), y1) ≥
ℓ(f(x2), y2). Obviously, one cannot expect to have an ideal κ_f. Given a confidence-rate function
κ_f, a useful way to analyze its effectiveness is to draw the risk-coverage curve of its induced rejection
function, g_θ(x | κ_f), as defined in (3). This risk-coverage curve shows the relationship between θ
and R(f, g_θ). For example, see Figure 2(a), where two (nearly identical) risk-coverage curves are
plotted. While the confidence-rate functions we consider are not ideal, they will be shown empirically
to be extremely effective.²
The first confidence-rate function we consider has been around in the NN folklore for years, and is
explicitly mentioned by [2, 4] in the context of the reject option. It works as follows: given any
neural network classifier f(x) whose last layer is a softmax, we denote by f(x | j) the soft response
output for the jth class. The confidence-rate function is defined as κ_f(x) ≜ max_{j∈Y} f(x | j).
We call this function softmax response (SR).

Softmax responses are often treated as probabilities (responses are positive and sum to 1), but some
authors criticize this approach [9]. Noting that, for our purposes, the ideal confidence-rate function
should only provide a coherent ranking rather than absolute probability values, softmax responses are
potentially good candidates for relative confidence rates.
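In code, the SR rate is a one-liner over the network's softmax outputs (our sketch):

```python
import numpy as np

def softmax_response(probs):
    """SR confidence rate; probs has shape (n, k) with rows summing to 1."""
    return probs.max(axis=1)
```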
We are not familiar with a rigorous explanation for SR, but it can be intuitively motivated by observing
neuron activations. For example, Figure 1 depicts average response values of every neuron in the
second-to-last layer for true positives and false positives for the class "8" in the MNIST dataset (and
qualitatively similar behavior occurs in all MNIST classes). The x-axis corresponds to neuron indices
in that layer (1-128); and the y-axis shows the average responses, where green squares are averages of
true positives, boldface squares highlight strong responses, and red circles correspond to the average
response of false positives. It is evident that the true positive activation response in the active neurons
is much higher than the false positive, which is expected to be reflected in the final softmax layer
response. Moreover, it can be seen that the large activation values are spread over many neurons,
indicating that the confidence signal arises due to numerous patterns detected by neurons in this layer.
Qualitatively similar behavior can be observed in deeper layers.
Figure 1: Average response values of neuron activations for class "8" on the MNIST dataset; green
squares: true positives; red circles: false positives.
The MC-dropout technique we consider was recently proposed to quantify uncertainty in neural
networks [9]. To estimate uncertainty for a given instance x, we run a number of feed-forward
iterations over x, each applied with dropout in the last fully connected layer. Uncertainty is taken as
the variance in the responses of the neuron corresponding to the most probable class. We take the
negative of this uncertainty as the MC-dropout confidence rate.
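A framework-agnostic sketch of this rate (ours) looks as follows, where stochastic_forward is a placeholder for any predict call that keeps dropout active in the last fully connected layer:

```python
import numpy as np

def mc_dropout_confidence(stochastic_forward, x, n_iter=100):
    """Negative variance of the top-class response under dropout sampling."""
    # Each call returns softmax outputs of shape (n, k) with dropout applied.
    samples = np.stack([stochastic_forward(x) for _ in range(n_iter)])
    top = samples.mean(axis=0).argmax(axis=1)           # most probable class
    var_top = samples[:, np.arange(len(top)), top].var(axis=0)
    return -var_top                                     # higher means more confident
```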
5 Empirical Results
In Section 4 we introduced the SR and MC-dropout confidence-rate functions, defined for a given
model f. We trained VGG models [17] for CIFAR-10, CIFAR-100, and ImageNet. For each of these
models f, we considered both the SR and MC-dropout confidence-rate functions κ_f and the induced
rejection function g_θ(x | κ_f). In Figure 2 we present the risk-coverage curves obtained for each of the
three datasets. These curves were obtained by computing a validation risk and coverage for many θ
values. It is evident that the risk-coverage profiles of SR and MC-dropout are nearly identical on both
CIFAR datasets. For the ImageNet set we plot the curves corresponding to the top-1 (dashed curves)
and top-5 tasks (solid curves). On this dataset, we see that SR is significantly better than MC-dropout
on both tasks. For example, in the top-1 task at 60% coverage, the SR rejection has 10% error while
the MC-dropout rejection incurs more than 20% error. Most importantly, these risk-coverage curves
show that selective classification can potentially be used to dramatically reduce the error on all three
datasets. Due to the relative advantage of SR, in the rest of our experiments we focus only on the SR
rating.

² While Theorem 3.2 always holds, we note that if κ_f is severely skewed (far from ideal), the bound of the
resulting selective classifier can be far from the target risk.
Figure 2: Risk-coverage curves for (a) CIFAR-10, (b) CIFAR-100, and (c) ImageNet (top-1 task:
dashed curves; top-5 task: solid curves); SR method in blue and MC-dropout in red.
We now report on experiments with our SGR routine, applying it to each of the three datasets to
construct high-probability risk-controlled selective classifiers.
Table 1: Risk control results for CIFAR-10 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.01              | 0.0079     | 0.7822         | 0.0092    | 0.7856        | 0.0099
0.02              | 0.0160     | 0.8482         | 0.0149    | 0.8466        | 0.0199
0.03              | 0.0260     | 0.8988         | 0.0261    | 0.8966        | 0.0298
0.04              | 0.0362     | 0.9348         | 0.0380    | 0.9318        | 0.0399
0.05              | 0.0454     | 0.9610         | 0.0486    | 0.9596        | 0.0491
0.06              | 0.0526     | 0.9778         | 0.0572    | 0.9784        | 0.0600
5.1 Selective Guaranteed Risk for CIFAR-10
We now consider CIFAR-10; see [14] for details. We used the VGG-16 architecture [17] and adapted
it to the CIFAR-10 dataset by adding massive dropout, exactly as described in [15]. We used data
augmentation comprising horizontal flips, vertical and horizontal shifts, and rotations, and trained
using SGD with momentum of 0.9, initial learning rate of 0.1, and weight decay of 0.0005. We
multiplicatively dropped the learning rate by 0.5 every 25 epochs, and trained for 250 epochs. With
this setting we reached validation accuracy of 93.54%, and used the resulting network f10 as the basis
for our selective classifier.

We applied the SGR algorithm to f10 with the SR confidence-rate function, where the training
set for SGR, S_m, was taken to be one half of the standard CIFAR-10 validation set, split randomly
into two equal parts. The other half, which was not consumed by SGR for training, was reserved for
testing the resulting bounds. Thus, the training and test sets each contained approximately 5,000
samples. We applied the SGR routine with several desired risk values r*, and obtained, for each
such r*, a corresponding selective classifier and risk bound b*. All our applications of the SGR routine
(for this dataset and the rest) were with a particularly small confidence level δ = 0.001.³ We then
applied these selective classifiers to the reserved test set and computed, for each selective classifier,
the test risk and test coverage. The results are summarized in Table 1, where we also include the train
risk and train coverage computed, for each selective classifier, over the training set.
Observing the results in Table 1, we see that the risk bound b* is always very close to the target risk
r*. Moreover, the test risk is always bounded above by b*, as required. We compared this result to a
basic baseline in which the threshold is defined to be the value that maximizes coverage while keeping
the train error smaller than r*. For this simple baseline we found that in over 50% of the cases (1000
random train/test splits), the bound r* was violated over the test set, with a mean violation of 18%
relative to the requested r*. Finally, we see that it is possible to guarantee with this method an
amazingly small 1% error while covering more than 78% of the domain.
5.2 Selective Guaranteed Risk for CIFAR-100
Using the same VGG architecture (now adapted to 100 classes), we trained a model for CIFAR-100
while applying the same data augmentation routine as in the CIFAR-10 experiment. Following
precisely the same experimental design as in the CIFAR-10 case, we obtained the results of Table 2.
Table 2: Risk control results for CIFAR-100 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.02              | 0.0119     | 0.2010         | 0.0187    | 0.2134        | 0.0197
0.05              | 0.0425     | 0.4286         | 0.0413    | 0.4450        | 0.0499
0.10              | 0.0927     | 0.5736         | 0.0938    | 0.5952        | 0.0998
0.15              | 0.1363     | 0.6546         | 0.1327    | 0.6752        | 0.1498
0.20              | 0.1872     | 0.7650         | 0.1810    | 0.7778        | 0.1999
0.25              | 0.2380     | 0.8716         | 0.2395    | 0.8826        | 0.2499
Here again, SGR generated tight bounds, very close to the desired target risk, and the bounds were
never violated by the true risk. Also, we see again that it is possible to dramatically reduce the risk
with only a moderate compromise in coverage. While the architecture we used is not state of the art,
at a coverage of 67% we easily surpassed the best known result for CIFAR-100, which currently
stands at 18.85% error using the wide residual network architecture [19]. It is very likely that by
using the wide residual network architecture ourselves we could obtain significantly better results.
5.3 Selective Guaranteed Risk for ImageNet
We used an already-trained ImageNet VGG-16 model based on ILSVRC2014 [16]. We repeated the
same experimental design, but now the sizes of the training and test sets were approximately 25,000.
The SGR results for both the top-1 and top-5 classification tasks are summarized in Tables 3 and 4,
respectively. We also implemented the RESNET-50 architecture [12] in order to see if qualitatively
similar results can be obtained with a different architecture. The RESNET-50 results for the ImageNet
top-1 and top-5 classification tasks are summarized in Tables 5 and 6, respectively.
Table 3: SGR results for the ImageNet dataset using VGG-16 top-1 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.02              | 0.0161     | 0.2355         | 0.0131    | 0.2322        | 0.0200
0.05              | 0.0462     | 0.4292         | 0.0446    | 0.4276        | 0.0500
0.10              | 0.0964     | 0.5968         | 0.0948    | 0.5951        | 0.1000
0.15              | 0.1466     | 0.7164         | 0.1467    | 0.7138        | 0.1500
0.20              | 0.1937     | 0.8131         | 0.1949    | 0.8154        | 0.2000
0.25              | 0.2441     | 0.9117         | 0.2445    | 0.9120        | 0.2500
³ With this small δ and the small number of reported experiments (6-7 lines in each table), we did not perform
a Bonferroni correction (which can easily be added).
Table 4: SGR results for the ImageNet dataset using VGG-16 top-5 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.01              | 0.0080     | 0.3391         | 0.0078    | 0.3341        | 0.0100
0.02              | 0.0181     | 0.5360         | 0.0179    | 0.5351        | 0.0200
0.03              | 0.0281     | 0.6768         | 0.0290    | 0.6735        | 0.0300
0.04              | 0.0381     | 0.7610         | 0.0379    | 0.7586        | 0.0400
0.05              | 0.0481     | 0.8263         | 0.0496    | 0.8262        | 0.0500
0.06              | 0.0563     | 0.8654         | 0.0577    | 0.8668        | 0.0600
0.07              | 0.0663     | 0.9093         | 0.0694    | 0.9114        | 0.0700
Table 5: SGR results for the ImageNet dataset using RESNET-50 top-1 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.02              | 0.0161     | 0.2613         | 0.0164    | 0.2585        | 0.0199
0.05              | 0.0462     | 0.4906         | 0.0474    | 0.4878        | 0.0500
0.10              | 0.0965     | 0.6544         | 0.0988    | 0.6502        | 0.1000
0.15              | 0.1466     | 0.7711         | 0.1475    | 0.7676        | 0.1500
0.20              | 0.1937     | 0.8688         | 0.1955    | 0.8677        | 0.2000
0.25              | 0.2441     | 0.9634         | 0.2451    | 0.9614        | 0.2500
These results show that even for the challenging ImageNet dataset, with both the VGG and RESNET
architectures, our selective classifiers are extremely effective; with an appropriate coverage compromise,
our classifier easily surpasses the best known results for ImageNet. Not surprisingly, RESNET, which
is known to achieve better results than VGG on this set, preserves its advantage relative to VGG
across all r* values.
6 Concluding Remarks
We presented an algorithm for learning a selective classifier whose risk can be fully controlled and
guaranteed with high confidence. Our empirical study validated this algorithm on challenging image
classification datasets, and showed that guaranteed risk control is achievable. Our methods can be
used immediately by deep learning practitioners, helping them cope with mission-critical tasks.
We believe that our work is only the first significant step in this direction, and many research questions
are left open. The starting point in our approach is a trained neural classifier f (supposedly trained to
optimize risk under full coverage). While the rejection mechanisms we considered were extremely
effective, it might be possible to identify superior mechanisms for a given classifier f . We believe,
however, that the most challenging open question would be to simultaneously train both the classifier
f and the selection function g to optimize coverage for a given risk level. Selective classification
is intimately related to active learning in the context of linear classifiers [6, 11]. It would be very
interesting to explore this potential relationship in the context of (deep) neural classification. In this
paper we only studied selective classification under the 0/1 loss. It would be of great importance
to extend our techniques to other loss functions, specifically to regression, and to fully control
false-positive and false-negative rates.

Table 6: SGR results for the ImageNet dataset using RESNET-50 top-5 for δ = 0.001

Desired risk (r*) | Train risk | Train coverage | Test risk | Test coverage | Risk bound (b*)
0.01              | 0.0080     | 0.3796         | 0.0085    | 0.3807        | 0.0099
0.02              | 0.0181     | 0.5938         | 0.0189    | 0.5935        | 0.0200
0.03              | 0.0281     | 0.7122         | 0.0273    | 0.7096        | 0.0300
0.04              | 0.0381     | 0.8180         | 0.0358    | 0.8158        | 0.0400
0.05              | 0.0481     | 0.8856         | 0.0464    | 0.8846        | 0.0500
0.06              | 0.0581     | 0.9256         | 0.0552    | 0.9231        | 0.0600
0.07              | 0.0663     | 0.9508         | 0.0629    | 0.9484        | 0.0700
This work has many applications. In general, any classification task where a controlled risk is critical
would benefit from using our methods. An obvious example is that of medical applications, where
utmost precision is required and rejections should be handled by human experts. In such applications
the existence of performance guarantees, as we propose here, is essential. Financial investment
applications are also obvious, where there are a great many opportunities from which one should
cherry-pick the most certain ones. A more futuristic application is that of robotic sales representatives,
where it could be extremely harmful if the bot were to try to answer questions it does not fully
understand.
Acknowledgments
This research was supported by The Israel Science Foundation (grant No. 1890/14).
References
[1] Chao K Chow. An optimum character recognition system using decision functions. IRE Transactions on Electronic Computers, (4):247–254, 1957.
[2] Luigi Pietro Cordella, Claudio De Stefano, Francesco Tortorella, and Mario Vento. A method for improving classification reliability of multilayer perceptrons. IEEE Transactions on Neural Networks, 6(5):1140–1147, 1995.
[3] Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri. Boosting with abstention. In Advances in Neural Information Processing Systems, pages 1660–1668, 2016.
[4] Claudio De Stefano, Carlo Sansone, and Mario Vento. To reject or not to reject: that is the question - an answer in case of neural classifiers. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 30(1):84–94, 2000.
[5] R. El-Yaniv and Y. Wiener. On the foundations of noise-free selective classification. Journal of Machine Learning Research, 11:1605–1641, 2010.
[6] Ran El-Yaniv and Yair Wiener. Active learning via perfect selective classification. Journal of Machine Learning Research (JMLR), 13(Feb):255–279, 2012.
[7] Yoav Freund, Yishay Mansour, and Robert E Schapire. Generalization bounds for averaged classifiers. Annals of Statistics, pages 1698–1722, 2004.
[8] Giorgio Fumera and Fabio Roli. Support vector machines with embedded reject option. In Pattern Recognition with Support Vector Machines, pages 68–82. Springer, 2002.
[9] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of The 33rd International Conference on Machine Learning, pages 1050–1059, 2016.
[10] O. Gascuel and G. Caraux. Distribution-free performance bounds with the resubstitution error estimate. Pattern Recognition Letters, 13:757–764, 1992.
[11] R. Gelbhart and R. El-Yaniv. The relationship between agnostic selective classification and active learning. ArXiv e-prints, January 2017.
[12] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[13] Martin E Hellman. The nearest neighbor classification rule with a reject option. IEEE Transactions on Systems Science and Cybernetics, 6(3):179–185, 1970.
[14] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009.
[15] Shuying Liu and Weihong Deng. Very deep convolutional neural network based image classification using small training sample size. In Pattern Recognition (ACPR), 2015 3rd IAPR Asian Conference on, pages 730–734. IEEE, 2015.
[16] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[17] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[18] Kush R Varshney. A risk bound for ensemble classification with a reject option. In Statistical Signal Processing Workshop (SSP), 2011 IEEE, pages 769–772. IEEE, 2011.
[19] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016.
Addison J. Hu?
Department of Statistics and Data Science
Yale University
New Haven, CT 06520
[email protected]
Sahand N. Negahban
Department of Statistics and Data Science
Yale University
New Haven, CT 06520
[email protected]
Abstract
The inverse covariance matrix provides considerable insight for understanding
statistical models in the multivariate setting. In particular, when the distribution over
variables is assumed to be multivariate normal, the sparsity pattern in the inverse
covariance matrix, commonly referred to as the precision matrix, corresponds to
the adjacency matrix representation of the Gauss-Markov graph, which encodes
conditional independence statements between variables. Minimax results under the
spectral norm have previously been established for covariance matrices, both sparse
and banded, and for sparse precision matrices. We establish minimax estimation
bounds for estimating banded precision matrices under the spectral norm. Our
results greatly improve upon the existing bounds; in particular, we find that the
minimax rate for estimating banded precision matrices matches that of estimating
banded covariance matrices. The key insight in our analysis is that we are able to
obtain barely-noisy estimates of k × k subblocks of the precision matrix by inverting
slightly wider blocks of the empirical covariance matrix along the diagonal. Our
theoretical results are complemented by experiments demonstrating the sharpness
of our bounds.
1 Introduction
Imposing structure is crucial to performing statistical estimation in the high-dimensional regime
where the number of observations can be much smaller than the number of parameters. In estimating
graphical models, a long line of work has focused on understanding how to impose sparsity on the
underlying graph structure.
Sparse edge recovery is generally not easy for an arbitrary distribution. However, for Gaussian
graphical models, it is well known that the graphical structure is encoded in the inverse of the
covariance matrix, Σ^{−1} = Ω, commonly referred to as the precision matrix [12, 14, 3]. Therefore,
accurate recovery of the precision matrix is paramount to understanding the structure of the graphical
model. As a consequence, a great deal of work has focused on sparse recovery of precision matrices
under the multivariate normal assumption [8, 4, 5, 17, 16]. Beyond revealing the graph structure, the
precision matrix also turns out to be highly useful in a variety of applications, including portfolio
optimization, speech recognition, and genomics [12, 23, 18].
Although there has been a rich literature exploring the sparse precision matrix setting for Gaussian
graphical models, less work has emphasized understanding the estimation of precision matrices
under additional structural assumptions, with some exceptions for block structured sparsity [10] or
bandability [1]. One would hope that extra structure should allow us to obtain more statistically
efficient solutions. In this work, we focus on the case of bandable precision matrices, which capture
* Addison graduated from Yale in May 2017. Up-to-date contact information may be found at http://huisaddison.com/.
a sense of locality between variables. Bandable matrices arise in a number of time-series contexts
and have applications in climatology, spectroscopy, fMRI analysis, and astronomy [9, 20, 15]. For
example, in the time-series setting, we may assume that edges between variables Xi , Xj are more
likely when i is temporally close to j, as is the case in an auto-regressive process. The precision and
covariance matrices corresponding to distributions with this property are referred to as bandable, or
tapering. We will discuss the details of this model in the sequel.
Past work: Previous work has explored the estimation of both bandable covariance and precision
matrices [6, 15]. Closely related work includes the estimation of sparse precision and covariance
matrices [3, 17, 4]. Asymptotically-normal entrywise precision estimates as well as minimax rates
for operator norm recovery of sparse precision matrices have also been established [16]. A line of
work developed concurrently to our own establishes a matching minimax lower bound [13].
When considering an estimation technique, a powerful criterion for evaluating whether the technique
performs optimally in terms of convergence rate is minimaxity. Past work has established minimax
rates of convergence for sparse covariance matrices, bandable covariance matrices, and sparse
precision matrices [7, 6, 4, 17].
The technique for estimating bandable covariance matrices proposed in [6] is shown to achieve the
optimal rate of convergence. However, no such theoretical guarantees have been shown for the
bandable precision estimator proposed in recent work for estimating sparse and smooth precision
matrices that arise from cosmological data [15].
Of note is the fact that the minimax rate of convergence for estimating sparse covariance matrices
matches the minimax rate of convergence of estimating sparse precision matrices. In this paper,
we introduce an adaptive estimator and show that it achieves the optimal rate of convergence when
estimating bandable precision matrices from the banded parameter space (3). We find, satisfyingly,
that analogous to the sparse case, in which the minimax rate of convergence enjoys the same rate for
both precision and covariance matrices, the minimax rate of convergence for estimating bandable
precision matrices matches the minimax rate of convergence for estimating bandable covariance
matrices that has been established in the literature [6].
Our contributions: Our goal is to estimate a banded precision matrix based on n i.i.d. observations.
We consider a parameter space of precision matrices Ω with a power-law decay structure nearly
identical to the bandable covariance matrices considered for covariance matrix estimation [6]. We
present a simple-to-implement algorithm for estimating the precision matrix. Furthermore, we show
that the algorithm is minimax optimal with respect to the spectral norm. The upper and lower bounds
given in Section 3 together imply the following optimal rate of convergence for estimating bandable
precision matrices under the spectral norm. Informally, our results show the following bound for
recovering a banded precision matrix with bandwidth k.
Theorem 1.1 (Informal). The minimax risk for estimating the precision matrix Ω over the class P_α
given in (3) satisfies

inf_{Ω̂} sup_{P_α} E ‖Ω̂ − Ω‖² ≍ (k + log p) / n,    (1)

where this bound is achieved by the tapering estimator Ω̂_k as defined in Equation (7).
An important point to note, which is shown more precisely in the sequel, is that the rate of convergence
as compared to sparse precision matrix recovery is improved by a factor of min(k log p, k²).
We establish a minimax upper bound by detailing an algorithm for obtaining an estimator given
observations x1, . . . , xn and a pre-specified bandwidth k, and studying the resultant estimator's risk
properties under the spectral norm. We show that an estimator using our algorithm with the optimal
choice of bandwidth attains the minimax rate of convergence with high probability.

To establish the optimality of our estimation routine, we derive a minimax lower bound to show
that the rate of convergence cannot be improved beyond that of our estimator. The lower bound is
established by constructing subparameter spaces of (3) and applying testing arguments through Le
Cam's method and Assouad's lemma [22, 6].

To supplement our analysis, we conduct numerical experiments to explore the performance of our
estimator in the finite sample setting. The numerical experiments confirm that even in the finite
sample case, our proposed estimator exhibits the minimax rate of convergence.
The remainder of the paper is organized as follows. In Section 2, we detail the exact model setting
and introduce a blockwise inversion technique for precision matrix estimation. In Section 3, theorems
establishing the minimaxity of our estimator under the spectral norm are presented. An upper bound
on the estimator's risk is given in high probability with the help of a result from set packing. The
minimax lower bound is derived by way of a testing argument. Both bounds are accompanied by
their proofs. Finally, in Section 4, our estimator is subjected to numerical experiments. Formal proofs
of the theorems may be found in the longer version of the paper [11].
Notation: We will now collect notation that will be used throughout the remaining sections. Vectors
will be denoted by lower-case x while matrices are upper-case A. The spectral (operator) norm of a
matrix is defined to be ‖A‖ = sup_{‖x‖₂=1, ‖y‖₂=1} ⟨Ax, y⟩, while the matrix ℓ₁ norm of a symmetric
matrix A ∈ R^{m×m} is defined to be ‖A‖₁ = max_j Σ_{i=1}^m |A_ij|.
2 Background and problem set-up
In this section we present details of our model and the estimation procedure. Consider observations
of the form x1, . . . , xn ∈ R^p drawn from a distribution with precision matrix Ω_{p×p} and zero
mean; the goal is to estimate the unknown matrix Ω_{p×p} based on the observations {x_i}_{i=1}^n.
Given a random sample of p-variate observations x1, . . . , xn drawn from a multivariate distribution
with population covariance Σ = Σ_{p×p}, our procedure is based on a tapering estimator derived from
blockwise estimates for estimating the precision matrix Ω_{p×p} = Σ^{−1}.
The maximum likelihood estimator of Σ is

Σ̂ = (σ̂_ij)_{1≤i,j≤p} = (1/n) Σ_{l=1}^{n} (x_l − x̄)(x_l − x̄)^⊤,    (2)

where x̄ is the empirical mean of the vectors x_i. We will construct estimators of the precision matrix
Ω = Σ^{−1} by inverting blocks of Σ̂ along the diagonal and averaging over the resultant subblocks.
Throughout this paper we adhere to the convention that ω_ij refers to the ij-th element of a matrix Ω.
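In NumPy, the centered maximum likelihood estimator (2) is only a couple of lines (our sketch):

```python
import numpy as np

def empirical_covariance(X):
    """MLE of Sigma from eq. (2); X has shape (n, p), one observation per row."""
    Xc = X - X.mean(axis=0)          # subtract the empirical mean x-bar
    return (Xc.T @ Xc) / X.shape[0]  # normalize by n, not n - 1
```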
Consider the parameter space F_α, with associated probability measure P_α, given by

F_α = F_α(M_0, M) = { Ω : max_j Σ_i { |ω_ij| : |i − j| ≥ k } ≤ M k^{−α} for all k, λ_i(Ω) ∈ [M_0^{−1}, M_0] },    (3)

where λ_i(Ω) denotes the ith eigenvalue of Ω, with λ_i ≤ λ_j for all i ≤ j. We also constrain
α > 0, M > 0, M_0 > 0. Observe that this parameter space is nearly identical to that given in
Equation (3) of [6]. We take on an additional assumption on the minimum eigenvalue of Ω ∈ F_α,
which is used in the technical arguments where the risk of estimating Ω under the spectral norm is
bounded in terms of the error of estimating Σ = Ω^{−1}.

Observe that the parameter space intuitively dictates that the magnitude of the entries of Ω decays
according to a power law as we move away from the diagonal. As with the parameter space for
bandable covariance matrices given in [6], we may understand α in (3) as a rate of decay for the
precision entries ω_ij as they move away from the diagonal; it can also be understood in terms of the
smoothness parameter in nonparametric estimation [19]. As will be discussed in Section 3, the
optimal choice of k depends on both n and the decay rate α.
2.1 Estimation procedure
We now detail the algorithm for obtaining minimax estimates for bandable Ω, which is also given as
pseudo-code² in Algorithm 1.

² In the pseudo-code, we adhere to the NumPy conventions: (1) arrays are zero-indexed; (2) slicing an
array arr with the operation arr[a:b] includes the element indexed at a and excludes the element indexed at
b; and (3) if b is greater than the length of the array, only elements up to the terminal element are included,
with no errors.

The algorithm is inspired by the tapering procedure introduced by Cai, Zhang, and Zhou [6] in the
case of covariance matrices, with modifications in order to estimate the precision matrix. Estimating
the precision matrix introduces new difficulties as we do not have direct access to estimates of the
elements of the precision matrix. For a given integer k, 1 ≤ k ≤ p, we construct a tapering estimator
as follows. First, we calculate the maximum likelihood estimator for the covariance, as given in
Equation (2). Then, for all integers 1 − m ≤ l ≤ p and m ≥ 1, we define the matrices with square
blocks of size at most 3m along the diagonal:

Σ̂^{(3m)}_{l−m} = (σ̂_ij · 1{l−m ≤ i < l+2m, l−m ≤ j < l+2m})_{p×p}.    (4)

For each Σ̂^{(3m)}_{l−m}, we replace the nonzero block with its inverse to obtain Ω̂^{(3m)}_{l−m}. For a given l, we
refer to the individual entries of this intermediate matrix as follows:

Ω̂^{(3m)}_{l−m} = (ω̃^{l}_{ij} · 1{l−m ≤ i < l+2m, l−m ≤ j < l+2m})_{p×p}.    (5)

For each l, we then keep only the central m × m subblock of Ω̂^{(3m)}_{l−m} to obtain the blockwise estimate
Ω̂^{(m)}_{l}:

Ω̂^{(m)}_{l} = (ω̃^{l}_{ij} · 1{l ≤ i < l+m, l ≤ j < l+m})_{p×p}.    (6)

Note that this notation allows for l < 0 and l + m > p; in each case, this out-of-bounds indexing
allows us to cleanly handle corner cases where the subblocks are smaller than m × m.
For a given bandwidth k (assume k is divisible by 2), we calculate these blockwise estimates for both
m = k and m = k/2. Finally, we construct our estimator by averaging over the block matrices:

Ω̂_k = (2/k) [ Σ_{l=1−k}^{p} Ω̂^{(k)}_{l} − Σ_{l=1−k/2}^{p} Ω̂^{(k/2)}_{l} ].    (7)

We note that within k/2 entries of the diagonal, each entry is effectively the sum of k/2 estimates, and
as we move from k/2 to k from the diagonal, each entry is progressively the sum of one fewer estimate.
Therefore, within k/2 of the diagonal, the entries are not tapered; and from k/2 to k of the diagonal,
the entries are linearly tapered to zero. The analysis of this estimator makes careful use of this tapering
schedule and the fact that our estimator is constructed through the average of block matrices of size
at most k × k.
2.2 Implementation details

The naive algorithm performs O(p + k) inversions of square matrices with size at most 3k. This
method can be sped up considerably through an application of the Woodbury matrix identity and
the Schur complement relation [21, 2]. Doing so reduces the computational complexity of the
algorithm from O(pk³) to O(pk²). We discuss the details of the modified algorithm and its
computational complexity below.
Suppose we have Ω̂^{(3m)}_{l−m} and are interested in obtaining Ω̂^{(3m)}_{l−m+1}. We observe that the nonzero block
of Ω̂^{(3m)}_{l−m+1} corresponds to the inverse of the nonzero block of Σ̂^{(3m)}_{l−m+1}, which differs by only one
row and one column from Σ̂^{(3m)}_{l−m}, the matrix for which the inverse of the nonzero block corresponds
to Ω̂^{(3m)}_{l−m}, which we have already computed. We may understand the movement from Ω̂^{(3m)}_{l−m}
to Ω̂^{(3m)}_{l−m+1} (using Σ̂^{(3m)}_{l−m+1}, to which we already have direct access) as two rank-1 updates. Let us
view the nonzero blocks of Σ̂^{(3m)}_{l−m} and Ω̂^{(3m)}_{l−m} as the block matrices:

NonZero(Σ̂^{(3m)}_{l−m}) = [ A   B  ;  B^⊤   C ],   A ∈ R^{1×1}, B ∈ R^{1×(3m−1)}, C ∈ R^{(3m−1)×(3m−1)},

NonZero(Ω̂^{(3m)}_{l−m}) = [ Ã   B̃  ;  B̃^⊤   C̃ ],   Ã ∈ R^{1×1}, B̃ ∈ R^{1×(3m−1)}, C̃ ∈ R^{(3m−1)×(3m−1)}.

The Schur complement relation tells us that, given Σ̂^{(3m)}_{l−m} and Ω̂^{(3m)}_{l−m}, we may trivially compute
C^{−1} as follows:

C^{−1} = (C̃^{−1} + B^⊤ A^{−1} B)^{−1} = C̃ − (C̃ B^⊤ B C̃) / (A + B C̃ B^⊤),    (8)
by the Woodbury matrix identity, which gives an efficient algorithm for computing the inverse of
a matrix subject to a low-rank (in this case, rank-1) perturbation. This allows us to move from the
inverse of a matrix in R^{3m×3m} to the inverse of a matrix in R^{(3m−1)×(3m−1)} where a row and column
have been removed. A nearly identical argument allows us to move from the R^{(3m−1)×(3m−1)} matrix
to an R^{3m×3m} matrix where a row and column have been appended, which gives us the desired block
of Ω̂^{(3m)}_{l−m+1}.

With this modification to the algorithm, we need only compute the inverse of a square matrix of width
2m at the beginning of the routine; thereafter, every subsequent block inverse may be computed
through simple rank-one matrix updates.

Algorithm 1 Blockwise Inversion Technique

function FitBlockwise(Σ̂, k)
    Ω̂ ← 0_{p×p}
    for l ∈ [1 − k, p) do
        Ω̂ ← Ω̂ + BlockInverse(Σ̂, k, l)
    end for
    for l ∈ [1 − ⌊k/2⌋, p) do
        Ω̂ ← Ω̂ − BlockInverse(Σ̂, ⌊k/2⌋, l)
    end for
    return (2/k) · Ω̂   ▷ scaling from (7)
end function

function BlockInverse(Σ̂, m, l)
    ▷ Obtain the 3m × 3m block inverse.
    s ← max{l − m, 0}
    f ← min{p, l + 2m}
    M ← (Σ̂[s:f, s:f])^{−1}
    ▷ Preserve the central m × m block of the inverse.
    s ← m + min{l − m, 0}
    N ← M[s:s+m, s:s+m]
    ▷ Restore the block inverse to the appropriate indices of P ← 0_{p×p}.
    s ← max{l, 0}
    f ← min{l + m, p}
    P[s:f, s:f] ← N
    return P
end function
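In code, the update of Equation (8) is a single outer-product correction; the sketch below (ours, with names of our choosing) exploits the symmetry of C̃, so no matrix inversion is performed:

```python
import numpy as np

def schur_downdate(C_tilde, A, B):
    """C^{-1} from eq. (8): C_tilde is symmetric, A is a scalar, B a vector."""
    u = C_tilde @ B               # equals C~ B^T and, by symmetry, (B C~)^T
    denom = A + B @ u             # scalar A + B C~ B^T
    return C_tilde - np.outer(u, u) / denom
```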
2.3 Complexity details

We now detail the factor-of-k improvement in computational complexity provided by the application
of the Woodbury matrix identity and the Schur complement relation introduced in Section 2.2. Recall
that the naive implementation of Algorithm 1 involves O(p + k) inversions of square matrices of size
at most 3k, each of which costs O(k³). Therefore, the overall complexity of the naive algorithm is
O(pk³), as k < p.

Now, consider the Woodbury-Schur-improved algorithm. The initial single inversion of a 2k × 2k
matrix costs O(k³). Thereafter, we perform O(p + k) updates of the form given in Equation (8).
These updates require only vector-matrix operations, so the update cost at each iteration is O(k²).
It follows that the overall complexity of the amended algorithm is O(pk²).
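To make the procedure concrete, here is a short NumPy sketch of the naive O(pk³) version of Algorithm 1 (our illustration; the function names are ours). Unlike a literal transcription of the pseudo-code, it computes the offset of the central block within the window explicitly, so the out-of-range cases l < 0 and l + m > p are clipped correctly, and it applies the 2/k scaling of Equation (7):

```python
import numpy as np

def block_inverse(S, m, l):
    """Central m x m block of the inverse of the (at most) 3m x 3m window at l."""
    p = S.shape[0]
    lo, hi = max(l - m, 0), min(l + 2 * m, p)
    M = np.linalg.inv(S[lo:hi, lo:hi])    # assumes the window is invertible
    s, f = max(l, 0), min(l + m, p)       # rows/cols of the central block
    c = s - lo                            # offset of the central block within M
    P = np.zeros_like(S)
    P[s:f, s:f] = M[c:c + (f - s), c:c + (f - s)]
    return P

def fit_blockwise(S, k):
    """Tapering precision estimator of eq. (7); S is the empirical covariance."""
    p = S.shape[0]
    Omega = np.zeros_like(S)
    for l in range(1 - k, p):
        Omega += block_inverse(S, k, l)
    for l in range(1 - k // 2, p):
        Omega -= block_inverse(S, k // 2, l)
    return (2.0 / k) * Omega
```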
3 Rate optimality under the spectral norm

Here we present the results that establish the rate optimality of the above estimator under the spectral
norm. For symmetric matrices A, the spectral norm, which corresponds to the largest singular value
of A, coincides with the ℓ₂-operator norm. We establish optimality by first deriving an upper bound
in high probability using the blockwise inversion estimator defined in Section 2.1. We then give
a matching lower bound in expectation by carefully constructing two sets of multivariate normal
distributions and then applying Assouad's lemma and Le Cam's method.
3.1 Upper bound under the spectral norm

In this section we derive a risk upper bound for the tapering estimator defined in (7) under the operator
norm. We assume the distribution of the x_i's is subgaussian; that is, there exists ρ > 0 such that

P{ |v^⊤ (x_i − E x_i)| > t } ≤ e^{−t²ρ/2}    (9)

for all t > 0 and ‖v‖₂ = 1. Let P_α = P_α(M_0, M, ρ) denote the set of distributions of x_i that satisfy
(3) and (9).
Theorem 3.1. The tapering estimator Ω̂_k, defined in (7), of the precision matrix Ω_{p×p} with
p > n^{1/(2α+1)} satisfies

sup_{P_α} P( ‖Ω̂_k − Ω‖² > C (k + log p)/n + C k^{−2α} ) = O(p^{−15})    (10)

with k = o(n), log p = o(n), and a universal constant C > 0.

In particular, the estimator Ω̂ = Ω̂_k with k = n^{1/(2α+1)} satisfies

sup_{P_α} P( ‖Ω̂ − Ω‖² > C n^{−2α/(2α+1)} + C (log p)/n ) = O(p^{−15}).    (11)

Given the result in Equation (10), it is easy to show that setting k = n^{1/(2α+1)} yields the optimal rate
by balancing the sizes of the inside-taper and outside-taper terms, which gives Equation (11).
The proof of this theorem, which is given in the supplementary material, relies on the fact that when
we invert a 3k × 3k block, the difference between the central k × k block and the corresponding
k × k block that would have been obtained by inverting the full matrix has a negligible contribution
to the risk. As a result, we are able to take concentration bounds on the operator norm of subgaussian
matrices, customarily used for bounding the norm of the difference of covariance matrices, and apply
them instead to differences of precision matrices to obtain our result.

The key insight is that we can relate the spectral norm of a k × k subblock produced by our estimator
to the spectral norm of the corresponding k × k subblock of the covariance matrix, which allows us
to apply concentration bounds from classical random matrix theory. Moreover, it turns out that if we
apply the tapering schedule induced by the construction of our estimator to the population parameter
Ω ∈ F_α, we may express the tapered population Ω as a sum of block matrices in exactly the same
way that our estimator is expressed as a sum of block matrices.
In particular, the tapering schedule is presented next. Suppose a population precision matrix Ω ∈ F_α.
Then, we denote the tapered version of Ω by Ω_A, and construct

Ω_A = (ω_ij · v_ij)_{p×p},    Ω_B = (ω_ij · (1 − v_ij))_{p×p},

where the tapering coefficients are given by

v_ij = 1,                  for |i − j| < k/2,
     = 2 − |i−j|/(k/2),    for k/2 ≤ |i − j| < k,
     = 0,                  for |i − j| ≥ k.

We then handle the risk of estimating the inside-taper Ω_A and the risk of estimating the outside-taper
Ω_B separately.

Because our estimator and the population parameter are both averages over k × k block matrices
along the diagonal, we may then take a union bound over the high-probability bounds on the spectral
norm deviation for the k × k subblocks to obtain a high-probability bound on the risk of our estimator.
We refer the reader to the longer version of the paper for further details [11].
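For intuition, the taper coefficients v_ij can be materialized directly, and the decomposition Ω = Ω_A + Ω_B then holds entrywise (our sketch):

```python
import numpy as np

def taper_weights(p, k):
    """Matrix of coefficients v_ij: 1 within k/2 of the diagonal, then a linear
    decay that reaches 0 at distance k."""
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.clip(2.0 - d / (k / 2.0), 0.0, 1.0)
```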
6
3.2
Lower bound under the spectral norm
In Section 3.1, we established Theorem 3.1, which states that our estimator achieves the rate of
2?
1
convergence n? 2?+1 under the spectral norm by using the optimal choice of k = n 2?+1 . Next we
demonstrate a matching lower bound, which implies that the upper bound established in Equation
(11) is tight up to constant factors.
Specifically, for the estimation of precision matrices in the parameter space given by Equation (3),
the following minimax lower bound holds.
Theorem 3.2. The minimax risk for estimating the precision matrix ? over P? under the operator
norm satisfies:
2
2?
log p
?
inf sup E
?
? ?
? cn? 2?+1 + c
(12)
? P?
n
?
As in many information-theoretic lower bounds, we first identify a subset of our parameter space that
captures most of the complexity of the full space. We then establish an information-theoretic limit
on estimating parameters from this subspace, which yields a valid minimax lower bound over the
original set.

Specifically, for our particular parameter space F_α, we identify two subparameter spaces, F_11 and
F_12. The first, F_11, is a collection of 2^k matrices with varying levels of density. To this collection,
we apply Assouad's lemma to obtain a lower bound with rate n^{−2α/(2α+1)}. The second, F_12, is a
collection of diagonal matrices, to which we apply Le Cam's method to derive a lower bound with
rate (log p)/n. The rate given in Theorem 3.2 is therefore a lower bound on the minimax rate for
estimating the union (F_11 ∪ F_12) = F_1 ⊂ F_α. The full details of the subparameter space
construction and derivation of lower bounds may be found in the full-length version of the paper [11].
4 Experimental results
We implemented the blockwise inversion technique in NumPy and ran simulations on synthetic
datasets. Our experiments confirm that even in the finite sample case, the blockwise inversion
technique achieves the theoretical rates. In the experiments, we draw observations from a multivariate
normal distribution with precision parameter Ω ∈ F_α, as defined in (3). Following [6], for given
constants α, ρ, p, we consider precision matrices Ω = (ω_ij)_{1≤i,j≤p} of the form

ω_ij = 1,                  for 1 ≤ i = j ≤ p,
     = ρ |i − j|^{−α−1},   for 1 ≤ i ≠ j ≤ p.    (13)

Though the precision matrices considered in our experiments are Toeplitz, our estimator does not
take advantage of this knowledge. We choose ρ = 0.6 to ensure that the matrices generated are
non-negative definite.
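The synthetic matrices of (13) and the corresponding Gaussian samples can be produced as follows (our sketch):

```python
import numpy as np

def synthetic_precision(p, alpha, rho=0.6):
    """Toeplitz precision matrix from eq. (13)."""
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p))).astype(float)
    np.fill_diagonal(d, 1.0)          # placeholder to avoid 0 ** (negative)
    Omega = rho * d ** (-alpha - 1.0)
    np.fill_diagonal(Omega, 1.0)      # omega_ii = 1 per eq. (13)
    return Omega

def sample_observations(Omega, n, seed=0):
    """Draw n i.i.d. mean-zero observations with precision Omega."""
    rng = np.random.default_rng(seed)
    Sigma = np.linalg.inv(Omega)
    return rng.multivariate_normal(np.zeros(Omega.shape[0]), Sigma, size=n)
```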
In applying the tapering estimator as defined in (7), we choose the bandwidth to be k = ⌊n^{1/(2α+1)}⌋,
which gives the optimal rate of convergence, as established in Theorem 3.1.

In our experiments, we varied α, n, and p. For our first set of experiments, we allowed α to take
values in {0.2, 0.3, 0.4, 0.5}, n to take values in {250, 500, 750, 1000}, and p to take values in
{100, 200, 300, 400}. Each setting was run for five trials, and the averages are plotted with error
bars to show variability between experiments. We observe in Figure 1a that the spectral norm error
increases linearly as log p increases, confirming the (log p)/n term in the rate of convergence.

Building upon the experimental results from the first set of simulations, we ran an additional set
of trials for the α = 0.2, p = 400 case, with n ∈ {11000, 3162, 1670}. These sample sizes were
chosen so that in Figure 1b there is overlap between the error plots for α = 0.2 and the other α
regimes.³ As with Figure 1a, Figure 1b confirms the minimax rate of convergence given in Theorem
3.1. Namely, we see that plotting the error with respect to n^{−2α/(2α+1)} results in linear plots with
almost identical slopes. We note that in both plots there is a small difference in the behavior for the
case α = 0.2. This observation can be attributed to the fact that for such a slow decay of the precision
matrix bandwidth, we have a more subtle interplay between the bias and variance terms presented in
the theorems above.

³ For the α = 0.2, p = 400 case, we omit the settings where n ∈ {250, 500, 750} from Figure 1b to
improve the clarity of the plot.

[Figure 1: Experimental results. Panel (a) (setting: n = 1000) plots spectral norm error against
log p; panel (b) (setting: p = 400) plots mean spectral norm error against n^{−2α/(2α+1)}; each panel
shows curves for α ∈ {0.2, 0.3, 0.4, 0.5}. The plotted error grows linearly as a function of log p and
n^{−2α/(2α+1)}, respectively, matching the theoretical results; however, the linear relationship is less
clear in the α = 0.2 case, due to the subtle interplay of the error terms.]
5 Discussion
In this paper we have presented minimax upper and lower bounds for estimating banded precision matrices after observing n samples drawn from a p-dimensional subgaussian distribution. Furthermore,
we have provided a computationally efficient algorithm that achieves the optimal rate of convergence
for estimating a banded precision matrix under the operator norm. Theorems 3.1 and 3.2 together
establish that the minimax rate of convergence for estimating precision matrices over the parameter
space Fα given in Equation (3) is n^{−2α/(2α+1)} + (log p)/n, where α dictates the bandwidth of the precision
matrix.
The rate achieved in this setting parallels the results established for estimating a bandable covariance
matrix [6]. As in that result, we observe that different regimes dictate which term dominates in the
rate of convergence. In the setting where log p is of a lower order than n^{1/(2α+1)}, the n^{−2α/(2α+1)} term
dominates, and the rate of convergence is determined by the smoothness parameter α. However, when
log p is much larger than n^{1/(2α+1)}, p has a much greater influence on the minimax rate of convergence.
Overall, we have shown the performance gains that may be obtained through added structural
constraints. An interesting line of future work will be to explore algorithms that uniformly exhibit
a smooth transition between fully banded models and sparse models on the precision matrix. Such
methods could adapt to the structure and allow for mixtures between banded and sparse precision
matrices. Another interesting direction would be in understanding how dependencies between the n
observations will influence the error rate of the estimator.
Finally, the results presented here apply to the case of subgaussian random variables. Unfortunately,
moving away from the Gaussian setting in general breaks the connection between precision matrices
and graph structure. Hence, a fruitful line of work will be to also develop methods that can be applied
to estimating the banded graphical model structure with general exponential family observations.
Acknowledgements
We would like to thank Harry Zhou for stimulating discussions regarding matrix estimation problems.
SN acknowledges funding from NSF Grant DMS 1723128.
References
[1] P. J. Bickel and Y. R. Gel. Banded regularization of autocovariance matrices in application to parameter estimation and forecasting of time series. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(5):711–728, 2011.
[2] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, UK, 2004.
[3] T. T. Cai, W. Liu, and X. Luo. A Constrained L1 Minimization Approach to Sparse Precision Matrix Estimation. arXiv:1102.2233 [stat], February 2011.
[4] T. T. Cai, W. Liu, and H. H. Zhou. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Ann. Statist., 44(2):455–488, 04 2016.
[5] T. T. Cai, Z. Ren, H. H. Zhou, et al. Estimating structured high-dimensional covariance and precision matrices: Optimal rates and adaptive estimation. Electronic Journal of Statistics, 10(1):1–59, 2016.
[6] T. T. Cai, C.-H. Zhang, and H. H. Zhou. Optimal rates of convergence for covariance matrix estimation. The Annals of Statistics, 38(4):2118–2144, August 2010.
[7] T. T. Cai and H. H. Zhou. Optimal rates of convergence for sparse covariance matrix estimation. Ann. Statist., 40(5):2389–2420, 10 2012.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical Lasso. Biostatistics, 2007.
[9] K. J. Friston, P. Jezzard, and R. Turner. Analysis of functional MRI time-series. Human Brain Mapping, 1(2):153–171, 1994.
[10] M. J. Hosseini and S.-I. Lee. Learning sparse Gaussian graphical models with overlapping blocks. In Advances in Neural Information Processing Systems, pages 3808–3816, 2016.
[11] A. J. Hu and S. N. Negahban. Minimax Estimation of Bandable Precision Matrices. arXiv:1710.07006v1, 2017.
[12] S. L. Lauritzen. Graphical Models. Oxford Statistical Science Series. Clarendon Press, Oxford, 1996.
[13] K. Lee and J. Lee. Estimating Large Precision Matrices via Modified Cholesky Decomposition. arXiv:1707.01143 [stat], July 2017.
[14] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[15] N. Padmanabhan, M. White, H. H. Zhou, and R. O'Connell. Estimating sparse precision matrices. Monthly Notices of the Royal Astronomical Society, 460(2):1567–1576, 2016.
[16] Z. Ren, T. Sun, C.-H. Zhang, and H. H. Zhou. Asymptotic normality and optimalities in estimation of large Gaussian graphical models. The Annals of Statistics, 43(3):991–1026, June 2015.
[17] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
[18] G. Saon and J. T. Chien. Bayesian sensing hidden Markov models for speech recognition. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5056–5059, May 2011.
[19] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008.
[20] H. Visser and J. Molenaar. Trend estimation and regression analysis in climatological time series: an application of structural time series models and the Kalman filter. Journal of Climate, 8(5):969–979, 1995.
[21] M. A. Woodbury. Inverting modified matrices. Statistical Research Group, Memo. Rep. no. 42. Princeton University, Princeton, N. J., 1950.
[22] B. Yu. Assouad, Fano and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435. Springer-Verlag, Berlin, 1997.
[23] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
Monte-Carlo Tree Search by Best Arm Identification
Emilie Kaufmann
CNRS & Univ. Lille, UMR 9189 (CRIStAL), Inria SequeL
Lille, France
[email protected]
Wouter M. Koolen
Centrum Wiskunde & Informatica,
Science Park 123, 1098 XG Amsterdam, The Netherlands
[email protected]
Abstract
Recent advances in bandit tools and techniques for sequential learning are steadily
enabling new applications and are promising the resolution of a range of challenging related problems. We study the game tree search problem, where the goal is to
quickly identify the optimal move in a given game tree by sequentially sampling its
stochastic payoffs. We develop new algorithms for trees of arbitrary depth, that operate by summarizing all deeper levels of the tree into confidence intervals at depth
one, and applying a best arm identification procedure at the root. We prove new
sample complexity guarantees with a refined dependence on the problem instance.
We show experimentally that our algorithms outperform existing elimination-based
algorithms and match previous special-purpose methods for depth-two trees.
1 Introduction
We consider two-player zero-sum turn-based interactions, in which the sequence of possible successive moves is represented by a maximin game tree T . This tree models the possible actions sequences
by a collection of MAX nodes, that correspond to states in the game in which player A should take
action, MIN nodes, for states in the game in which player B should take action, and leaves which
specify the payoff for player A. The goal is to determine the best action at the root for player A. For
deterministic payoffs this search problem is primarily algorithmic, with several powerful pruning
strategies available [20]. We look at problems with stochastic payoffs, which in addition present a
major statistical challenge.
Sequential identification questions in game trees with stochastic payoffs arise naturally as robust
versions of bandit problems. They are also a core component of Monte Carlo tree search (MCTS)
approaches for solving intractably large deterministic tree search problems, where an entire sub-tree
is represented by a stochastic leaf in which randomized play-out and/or evaluations are performed [4].
A play-out consists in finishing the game with some simple, typically random, policy and observing
the outcome for player A.
For example, MCTS is used within the AlphaGo system [21], and the evaluation of a leaf position
combines supervised learning and (smart) play-outs. While MCTS algorithms for Go have now
reached expert human level, such algorithms remain very costly, in that many (expensive) leaf
evaluations or play-outs are necessary to output the next action to be taken by the player. In this
paper, we focus on the sample complexity of Monte-Carlo Tree Search methods, about which very
little is known. For this purpose, we work under a simplified model for MCTS already studied by
[22], and that generalizes the depth-two framework of [10].
1.1 A simple model for Monte-Carlo Tree Search
We start by fixing a game tree T, in which the root is a MAX node. Letting L be the set of leaves
of this tree, for each ℓ ∈ L we introduce a stochastic oracle O_ℓ that represents the leaf evaluation or
play-out performed when this leaf is reached by an MCTS algorithm. In this model, we do not try
to optimize the evaluation or play-out strategy, but we rather assume that the oracle O_ℓ produces
i.i.d. samples from an unknown distribution whose mean μ_ℓ is the value of the position ℓ. To ease the
presentation, we focus on binary oracles (indicating the win or loss of a play-out), in which the oracle
O_ℓ is a Bernoulli distribution with unknown mean μ_ℓ (the probability of player A winning the game
in the corresponding state). Our algorithms can be used without modification in case the oracle is a
distribution bounded in [0, 1].
For each node s in the tree, we denote by C(s) the set of its children and by P(s) its parent. The root
is denoted by s_0. The value (for player A) of any node s is recursively defined by V_ℓ = μ_ℓ if ℓ ∈ L and

V_s = max_{c∈C(s)} V_c if s is a MAX node,  V_s = min_{c∈C(s)} V_c if s is a MIN node.

The best move is the action at the root with highest value,

s* = argmax_{s∈C(s_0)} V_s.
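For concreteness, a minimal Python sketch of this recursion follows; the tree encoding (a leaf is a float, an internal node a (kind, children) pair) is our own illustrative choice, not the paper's notation.

def value(node):
    # V_l = mu_l at a leaf; otherwise the max (MAX node) or min (MIN node) of the children.
    if isinstance(node, float):
        return node
    kind, children = node
    vals = [value(child) for child in children]
    return max(vals) if kind == "max" else min(vals)

# A small depth-two instance: a MAX root over three MIN nodes.
tree = ("max", [("min", [0.45, 0.45, 0.45]),
                ("min", [0.50, 0.35, 0.55]),
                ("min", [0.40, 0.30, 0.60])])
children = tree[1]
best = max(range(len(children)), key=lambda i: value(children[i]))  # index of s*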
To identify s* (or an ε-close move), an MCTS algorithm sequentially selects paths in the game tree
and calls the corresponding leaf oracle. At round t, a leaf L_t ∈ L is chosen by this adaptive sampling
rule, after which a sample X_t ∼ O_{L_t} is collected. We consider here the same PAC learning framework
as [22, 10], in which the strategy also requires a stopping rule, after which leaves are no longer
evaluated, and a recommendation rule that outputs upon stopping a guess ŝ_τ ∈ C(s_0) for the best
move of player A.
Given a risk level δ and some accuracy parameter ε ≥ 0, our goal is to have a recommendation ŝ_τ ∈ C(s_0)
whose value is within ε of the value of the best move, with probability larger than 1 − δ, that is,

P(V(s_0) − V(ŝ_τ) ≤ ε) ≥ 1 − δ.

An algorithm satisfying this property is called (ε, δ)-correct. The main challenge is to design
(ε, δ)-correct algorithms that use as few leaf evaluations τ as possible.
Related work The model we introduce for Monte-Carlo Tree Search is very reminiscent of a
stochastic bandit model. In those, an agent repeatedly selects one out of several probability distributions, called arms, and draws a sample from the chosen distribution. Bandits models have been
studied since the 1930s [23], mostly with a focus on regret minimization, where the agent aims to
maximize the sum of the samples collected, which are viewed as rewards [18]. In the context of
MCTS, a sample corresponds to a win or a loss in one play-out, and maximizing the number of
successful play-outs (that correspond to simulated games) may be at odds with identifying quickly
the next best action to take at the root. In that, our best action identification problem is closer to a
so-called Best Arm Identification (BAI) problem.
The goal in the standard BAI problem is to find quickly and accurately the arm with highest mean.
The BAI problem in the fixed-confidence setting [7] is the special case of our simple model for a tree
of depth one. For deeper trees, rather than finding the best arm (i.e. leaf), we are interested in finding
the best action at the root. As the best root action is a function of the means of all leaves, this is a
more structured problem.
Bandit algorithms, and more recently BAI algorithms have been successfully adapted to tree search.
Building on the UCB algorithm [2], a regret minimizing algorithm, variants of the UCT algorithm
[17] have been used for MCTS in growing trees, leading to successful AIs for games. However, there
are only very weak theoretical guarantees for UCT. Moreover, observing that maximizing the number
of successful play-outs is not the target, recent work rather tried to leverage tools from the BAI
literature. In [19, 6] Sequential Halving [14] is used for exploring game trees. The latter algorithm is
a state-of-the-art algorithm for the fixed-budget BAI problem [1], in which the goal is to identify the
best arm with the smallest probability of error based on a given budget of draws. The proposed SHOT
(Sequential Halving applied tO Trees) algorithm [6] is compared empirically to the UCT approach
of [17], showing improvements in some cases. A hybrid approach mixing SHOT and UCT is also
studied [19], still without sample complexity guarantees.
In the fixed-confidence setting, [22] develop the first sample complexity guarantees in the model we
consider. The proposed algorithm, FindTopWinner is based on uniform sampling and eliminations,
an approach that may be related to the Successive Eliminations algorithm [7] for fixed-confidence
BAI in bandit models. FindTopWinner proceeds in rounds, in which the leaves that have not been
eliminated are sampled repeatedly until the precision of their estimates doubled. Then the tree is
pruned of every node whose estimated value differs significantly from the estimated value of its
parent, which leads to the possible elimination of several leaves. For depth-two trees, [10] propose
an elimination procedure that is not round-based. In this simpler setting, an algorithm that exploits
confidence intervals is also developed, inspired by the LUCB algorithm for fixed-confidence BAI
[13]. Some variants of the proposed M-LUCB algorithm appear to perform better in simulations than
elimination based algorithms. We now investigate this trend further in deeper trees, both in theory
and in practice.
Our Contribution. In this paper, we propose a generic architecture, called BAI-MCTS, that builds
on a Best Arm Identification (BAI) algorithm and on confidence intervals on the node values in order
to solve the best action identification problem in a tree of arbitrary depth. In particular, we study two
specific instances, UGapE-MCTS and LUCB-MCTS, that rely on confidence-based BAI algorithms
[8, 13]. We prove that these are (ε, δ)-correct and give a high-probability upper bound on their
sample complexity. Both our theoretical and empirical results improve over the elimination-based
state-of-the-art algorithm, FindTopWinner [22].
2 BAI-MCTS algorithms
We present a generic class of algorithms, called BAI-MCTS, that combines a BAI algorithm with
an exploration of the tree based on confidence intervals on the node values. Before introducing the
algorithm and two particular instances, we first explain how to build such confidence intervals, and
also introduce the central notion of representative child and representative leaf.
2.1 Confidence intervals and representative nodes
For each leaf ℓ ∈ L, using the past observations from this leaf we may build a confidence interval

I_ℓ(t) = [L_ℓ(t), U_ℓ(t)],

where U_ℓ(t) (resp. L_ℓ(t)) is an Upper Confidence Bound (resp. a Lower Confidence Bound) on the
value V(ℓ) = μ_ℓ. The specific confidence interval we shall use will be discussed later.
These confidence intervals are then propagated upwards in the tree using the following construction. For each internal node s, we recursively define I_s(t) = [L_s(t), U_s(t)] with

L_s(t) = max_{c∈C(s)} L_c(t) for a MAX node s,  L_s(t) = min_{c∈C(s)} L_c(t) for a MIN node s,
U_s(t) = max_{c∈C(s)} U_c(t) for a MAX node s,  U_s(t) = min_{c∈C(s)} U_c(t) for a MIN node s.
Note that these intervals are the tightest possible on the parent under the sole assumption that the
child confidence intervals are all valid. A similar construction was used in the OMS algorithm of [3]
in a different context. It is easy to convince oneself (or prove by induction, see Appendix B.1) that
the accuracy of the confidence intervals is preserved under this construction, as stated below.
Proposition 1. Let t ∈ N. One has ∩_{ℓ∈L} (μ_ℓ ∈ I_ℓ(t)) ⊆ ∩_{s∈T} (V_s ∈ I_s(t)).
We now define the representative child c_s(t) of an internal node s as

c_s(t) = argmax_{c∈C(s)} U_c(t) if s is a MAX node,  c_s(t) = argmin_{c∈C(s)} L_c(t) if s is a MIN node,

and the representative leaf ℓ_s(t) of a node s ∈ T, which is the leaf obtained when going down the
tree by always selecting the representative child:

ℓ_s(t) = s if s ∈ L,  ℓ_s(t) = ℓ_{c_s(t)}(t) otherwise.
The confidence intervals in the tree represent the statistically plausible values in each node, hence the
representative child can be interpreted as an "optimistic move" in a MAX node and a "pessimistic
move" in a MIN node (assuming we play against the best possible adversary). This is reminiscent of
the behavior of the UCT algorithm [17]. The construction of the confidence intervals and associated
representative children are illustrated in Figure 1.
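The two constructions above translate directly into code. In the sketch below the node attributes (is_leaf, kind, children) and the helper leaf_interval, which returns a leaf interval of the form introduced later in Section 3.1, are assumed rather than taken from the paper.

def interval(node, t):
    # [L_s(t), U_s(t)], propagated upwards from the leaf intervals.
    if node.is_leaf:
        return leaf_interval(node, t)        # assumed helper computing (1)
    lows, ups = zip(*(interval(c, t) for c in node.children))
    if node.kind == "max":
        return max(lows), max(ups)
    return min(lows), min(ups)

def representative_leaf(node, t):
    # Go down the tree, always selecting the representative child.
    while not node.is_leaf:
        if node.kind == "max":               # c_s(t) = argmax_c U_c(t)
            node = max(node.children, key=lambda c: interval(c, t)[1])
        else:                                # c_s(t) = argmin_c L_c(t)
            node = min(node.children, key=lambda c: interval(c, t)[0])
    return node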
[Figure 1 shows the interval construction for a MAX node: panel (a) the children's intervals,
panel (b) the parent's induced interval with the representative child highlighted in red.]
Figure 1: Construction of confidence interval and representative child (in red) for a MAX node.

Input: a BAI algorithm
Initialization: t = 0.
while not BAIStop({s ∈ C(s_0)}) do
    R_{t+1} = BAIStep({s ∈ C(s_0)})
    Sample the representative leaf L_{t+1} = ℓ_{R_{t+1}}(t)
    Update the information about the arms.
    t = t + 1.
end
Output: BAIReco({s ∈ C(s_0)})
Figure 2: The BAI-MCTS architecture

2.2 The BAI-MCTS architecture
In this section we present the generic BAI-MCTS algorithm, whose sampling rule combines two
ingredients: a best arm identification step which selects an action at the root, followed by a confidence
based exploration step, that goes down the tree starting from this depth-one node in order to select
the representative leaf for evaluation.
The structure of a BAI-MCTS algorithm is presented in Figure 2. The algorithm depends on a Best
Arm Identification (BAI) algorithm, and uses the three components of this algorithm:
- the sampling rule BAIStep(S) selects an arm in the set S
- the stopping rule BAIStop(S) returns True if the algorithm decides to stop
- the recommendation rule BAIReco(S) selects an arm as a candidate for the best arm
In BAI-MCTS, the arms are the depth-one nodes, hence the information needed by the BAI algorithm
to make a decision (e.g. BAIStep for choosing an arm, or BAIStop for stopping) is information
about depth-one nodes, that has to be updated at the end of each round (last line in the while loop).
Different BAI algorithms may require different information, and we now present two instances that
rely on confidence intervals (and empirical estimates) for the value of the depth-one nodes.
2.3 UGapE-MCTS and LUCB-MCTS
Several Best Arm Identification algorithms may be used within BAI-MCTS, and we now present
two variants, that are respectively based on the UGapE [8] and the LUCB [13] algorithms. These
two algorithms are very similar in that they exploit confidence intervals and use the same stopping
rule, however the LUCB algorithm additionally uses the empirical means of the arms, which within
BAI-MCTS requires defining an estimate V̂_s(t) of the value of the depth-one nodes.
The generic structure of the two algorithms is similar. At round t + 1 two promising depth-one nodes
are computed, that we denote by b_t and c_t. Among these two candidates, the node whose confidence
interval is the largest (that is, the most uncertain node) is selected:

R_{t+1} = argmax_{i∈{b_t, c_t}} [U_i(t) − L_i(t)].

Then, following the BAI-MCTS architecture, the representative leaf of R_{t+1} (computed by going
down the tree) is sampled: L_{t+1} = ℓ_{R_{t+1}}(t). The algorithm stops whenever the confidence intervals
of the two promising arms overlap by less than ε:

τ = inf{t ∈ N : U_{c_t}(t) − L_{b_t}(t) < ε},

and it recommends ŝ_τ = b_τ.
In both algorithms that we detail below, b_t represents a guess for the best depth-one node, while c_t is
an "optimistic" challenger, that has the maximal possible value among the other depth-one nodes.
Both nodes need to be explored enough in order to discover the best depth-one action quickly.
UGapE-MCTS. In UGapE-MCTS, introducing for each depth-one node the index

B_s(t) = max_{s'∈C(s_0)\{s}} U_{s'}(t) − L_s(t),

the promising depth-one nodes are defined as

b_t = argmin_{a∈C(s_0)} B_a(t) and c_t = argmax_{b∈C(s_0)\{b_t}} U_b(t).

LUCB-MCTS. In LUCB-MCTS, the promising depth-one nodes are defined as

b_t = argmax_{a∈C(s_0)} V̂_a(t) and c_t = argmax_{b∈C(s_0)\{b_t}} U_b(t),

where V̂_s(t) = μ̂_{ℓ_s(t)}(t) is the empirical mean of the representative leaf of node s. Note that several
alternative definitions of V̂_s(t) may be proposed (such as the middle of the confidence interval I_s(t),
or max_{a∈C(s)} V̂_a(t)), but our choice is crucial for the analysis of LUCB-MCTS, given in Appendix C.
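One round of either instance can be sketched as follows; here U, L and V_hat are dictionaries mapping each depth-one node to its current upper bound, lower bound and representative-leaf empirical mean, and are assumed to be maintained elsewhere.

def promising_nodes(arms, U, L, V_hat, rule="ugape"):
    # Return the pair (b_t, c_t) among the depth-one nodes.
    if rule == "ugape":
        # b_t minimizes B_s(t) = max_{s' != s} U_{s'}(t) - L_s(t).
        def index(s):
            return max(U[a] for a in arms if a != s) - L[s]
        b = min(arms, key=index)
    else:  # "lucb": b_t maximizes the empirical value of the representative leaf
        b = max(arms, key=lambda a: V_hat[a])
    c = max((a for a in arms if a != b), key=lambda a: U[a])
    return b, c

def round_step(arms, U, L, V_hat, eps, rule="ugape"):
    b, c = promising_nodes(arms, U, L, V_hat, rule)
    if U[c] - L[b] < eps:                        # stopping rule defining tau
        return ("stop", b)                       # recommend s_hat = b
    r = max((b, c), key=lambda a: U[a] - L[a])   # most uncertain candidate
    return ("sample", r)                         # sample its representative leaf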
3 Analysis of UGapE-MCTS
In this section we first prove that UGapE-MCTS and LUCB-MCTS are both (ε, δ)-correct. Then we
give in Theorem 3 a high-probability upper bound on the number of samples used by UGapE-MCTS.
A similar upper bound is obtained for LUCB-MCTS in Theorem 9, stated in Appendix C.
3.1 Choosing the Confidence Intervals
From now on, we assume that the confidence intervals on the leaves are of the form

L_ℓ(t) = μ̂_ℓ(t) − √(β(N_ℓ(t), δ) / (2N_ℓ(t)))  and  U_ℓ(t) = μ̂_ℓ(t) + √(β(N_ℓ(t), δ) / (2N_ℓ(t))).   (1)

β(s, δ) is some exploration function, that can be tuned to have a δ-PAC algorithm, as expressed in
the following lemma, whose proof can be found in Appendix B.2.

Lemma 2. If δ ≤ max(0.1|L|, 1), for the choice

β(s, δ) = ln(|L|/δ) + 3 ln ln(|L|/δ) + (3/2) ln(ln s + 1)   (2)

both UGapE-MCTS and LUCB-MCTS satisfy P(V(s*) − V(ŝ_τ) ≤ ε) ≥ 1 − δ.
An interesting practical feature of these confidence intervals is that they only depend on the local
number of draws N_ℓ(t), whereas most of the BAI algorithms use exploration functions that depend
on the number of rounds t. Hence the only confidence intervals that need to be updated at round t are
those of the ancestors of the selected leaf, which can be done recursively.
Moreover, β(s, δ) scales with ln(ln(s)), and not ln(s), leveraging some tools recently introduced to
obtain tighter confidence intervals [12, 15]. The union bound over L (that may be an artifact of our
current analysis) however makes the exploration function of Lemma 2 still a bit over-conservative
and in practice, we recommend the use of β(s, δ) = ln(ln(es)/δ).
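In code, the recommended practical rate and the resulting Hoeffding-type leaf interval (1) read as follows; the clipping to [0, 1] is our own addition, valid here because the oracles are bounded.

import math

def beta(s, delta):
    # Practical exploration rate ln(ln(e * s) / delta) recommended above.
    return math.log(math.log(math.e * s) / delta)

def leaf_interval(mean, n, delta):
    # Interval (1) for a leaf with empirical mean `mean` after n draws.
    if n == 0:
        return 0.0, 1.0
    width = math.sqrt(beta(n, delta) / (2.0 * n))
    return max(0.0, mean - width), min(1.0, mean + width)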
Finally, similar correctness results (with slightly larger exploration functions) may be obtained for
confidence intervals based on the Kullback-Leibler divergence (see [5]), which are known to lead to
better performance in standard best arm identification problems [16] and also depth-two tree search
problems [10]. However, the sample complexity analysis is much more intricate, hence we stick to
the above Hoeffding-based confidence intervals for the next section.
3.2 Complexity term and sample complexity guarantees

We first introduce some notation. Recall that s* is the optimal action at the root, identified with
the depth-one node satisfying V(s*) = V(s_0), and define the second-best depth-one node as
s*_2 = argmax_{s∈C(s_0)\{s*}} V_s. Recall P(s) denotes the parent of a node s different from the root. Introducing
furthermore the set Anc(s) of all the ancestors of a node s, we define the complexity term by

H_ε(μ) := Σ_{ℓ∈L} 1/(Δ_ℓ² ∨ Δ_*² ∨ ε²),  where  Δ_* := V(s*) − V(s*_2)  and  Δ_ℓ := max_{s∈Anc(ℓ)\{s_0}} |V_s − V(P(s))|.   (3)

The intuition behind these squared terms in the denominator is the following. We will sample a leaf ℓ
until we either prune it (by determining that it or one of its ancestors is a bad move), prune everyone
else (this happens for leaves below the optimal arm), or reach the required precision ε.
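When the exact values are available (for instance in simulations), H_ε(μ) can be computed directly from (3). In the sketch below, the arguments V (exact node values), root, depth_one_children, leaves and path_from_root (the nodes s_1, . . . , s_D from the first move down to the leaf, root excluded) are assumed helpers, not part of the paper.

def complexity(V, root, depth_one_children, leaves, path_from_root, eps):
    # H_eps(mu) from (3).
    top = sorted(depth_one_children, key=V, reverse=True)
    delta_star = V(top[0]) - V(top[1])           # Delta_* = V(s*) - V(s*_2)
    total = 0.0
    for leaf in leaves:
        path = [root] + path_from_root(leaf)     # s_0, s_1, ..., s_D = leaf
        delta_leaf = max(abs(V(c) - V(p)) for p, c in zip(path, path[1:]))
        total += 1.0 / max(delta_leaf, delta_star, eps) ** 2
    return total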
Theorem 3. Let δ ≤ min(1, 0.1|L|). UGapE-MCTS using the exploration function (2) is such that,
with probability larger than 1 − δ, V(s*) − V(ŝ_τ) < ε and, letting Δ_{ℓ,ε} = Δ_ℓ ∨ Δ_* ∨ ε,

τ ≤ 8H_ε(μ) ln(|L|/δ) + Σ_ℓ (16/Δ_{ℓ,ε}²) ln ln(1/Δ_{ℓ,ε}²)
    + 8H_ε(μ) [3 ln ln(|L|/δ) + 2 ln ln(8e ln(|L|/δ) + 24e ln ln(|L|/δ))] + 1.
Remark 4. If β(N_a(t), δ) is changed to β(t, δ), one can still prove (ε, δ)-correctness and furthermore upper bound the expectation of τ. However the algorithm becomes less efficient to implement,
since after each leaf observation, ALL the confidence intervals have to be updated. In practice, this
change lowers the probability of error but does not affect significantly the number of play-outs used.
3.3 Comparison with previous work
To the best of our knowledge¹, the FindTopWinner algorithm [22] is the only algorithm from the
literature designed to solve the best action identification problem in any-depth trees. The number of
play-outs of this algorithm is upper bounded with high probability by
Σ_{ℓ: Δ_ℓ > 2ε} ((32/Δ_ℓ²) ln(16|L|/(Δ_ℓ δ)) + 1) + Σ_{ℓ: Δ_ℓ ≤ 2ε} ((8/ε²) ln(8|L|/(εδ)) + 1).
One can first note the improvement in the constant in front of the leading term in ln(1/δ), as well as
the presence of the ln ln(1/Δ_{ℓ,ε}²) second order term, that is unavoidable in a regime in which the gaps are
small [12]. The most interesting improvement is in the control of the number of draws of 2ε-optimal
leaves (such that Δ_ℓ ≤ 2ε). In UGapE-MCTS, the number of draws of such leaves is at most of order
(ε² ∨ Δ_*²)^{−1} ln(1/δ), which may be significantly smaller than ε^{−2} ln(1/δ) if there is a gap in the best
and second best value. Moreover, unlike FindTopWinner and M-LUCB [10] in the depth two case,
UGapE-MCTS can also be used when ε = 0, with provable guarantees.
Regarding the algorithms themselves, one can note that M-LUCB, an extension of LUCB suited for
depth-two trees, does not belong to the class of BAI-MCTS algorithms. Indeed, it has a "reversed"
structure, first computing the representative leaf for each depth-one node: ∀s ∈ C(s_0), R_{s,t} = ℓ_s(t),
and then performing a BAI step over the representative leaves: L̃_{t+1} = BAIStep(R_{s,t}, s ∈ C(s_0)).
This alternative architecture can also be generalized to deeper trees, and was found to have empirical
performance similar to BAI-MCTS. M-LUCB, which will be used as a benchmark in Section 4, also
distinguishes itself from LUCB-MCTS by the fact that it uses an exploration rate that depends on the
global time, β(t, δ), and that b_t is the empirical maximin arm (which can be different from the arm
maximizing V̂_s). This alternative choice is not yet supported by theoretical guarantees in deeper trees.
Finally, the exploration step of a BAI-MCTS algorithm bears some similarity with the UCT algorithm
[17], as it goes down the tree choosing alternatively the move that yields the highest UCB or the
lowest LCB. However, the behavior of BAI-MCTS is very different at the root, where the first move is
selected using a BAI algorithm. Another key difference is that BAI-MCTS relies on exact confidence
¹ In a recent paper, [11] independently proposed the LUCBMinMax algorithm, that differs from UGapE-MCTS and LUCB-MCTS only by the way the best guess b_t is picked. The analysis is very similar to ours,
but features some refined complexity measure, in which Δ_ℓ (that is, the maximal distance between consecutive
ancestors of the leaf, see (3)) is replaced by the maximal distance between any ancestors of that leaf. Similar
results could be obtained for our two algorithms following the same lines.
intervals: each interval I_s(t) is shown to contain with high probability the corresponding value V_s,
whereas UCT uses more heuristic confidence intervals, based on the number of visits of the parent
node, and aggregating all the samples from descendant nodes. Using UCT in our setting is not obvious
as it would require to define a suitable stopping rule, hence we don't include a comparison with this
algorithm in Section 4. A hybrid comparison between UCT and FindTopWinner is proposed in
[22], providing UCT with the random number of samples used by the fixed-confidence algorithm.
It is shown that FindTopWinner has the advantage for hard trees that require many samples. Our
experiments show that our algorithms in turn always dominate FindTopWinner.
3.4 Proof of Theorem 3
Letting E_t = ∩_{ℓ∈L}(μ_ℓ ∈ I_ℓ(t)) and E = ∩_{t∈N} E_t, we upper bound τ assuming the event E holds,
using the following key result, which is proved in Appendix D.

Lemma 5. Let t ∈ N. E_t ∩ (τ > t) ∩ (L_{t+1} = ℓ) ⊆ (N_ℓ(t) ≤ 8β(N_ℓ(t), δ)/(Δ_ℓ² ∨ Δ_*² ∨ ε²)).
An intuition behind this result is the following. First, using that the selected leaf ℓ is a representative
leaf, it can be seen that the confidence intervals from s_D = ℓ to s_0 are nested (Lemma 11). Hence if
E_t holds, V(s_k) ∈ I_ℓ(t) for all k = 1, . . . , D, which permits to lower bound the width of this interval
(and thus upper bound N_ℓ(t)) as a function of the V(s_k) (Lemma 12). Then Lemma 13 exploits the
mechanism of UGapE to further relate this width to Δ_* and ε.
Another useful tool is the following lemma, that will allow to leverage the particular form of the
exploration function β to obtain an explicit upper bound on N_ℓ(τ).

Lemma 6. Let β(s) = C + (3/2) ln(1 + ln(s)) and define S = sup{s ≥ 1 : aβ(s) ≥ s}. Then
S ≤ aC + 2a ln(1 + ln(aC)).
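The bound of Lemma 6 is easy to check numerically. The scan below (our own sanity check) finds the exact S for one pair (a, C) and compares it with aC + 2a ln(1 + ln(aC)); a single scan suffices because β grows much slower than s.

import math

def beta_c(s, C):
    return C + 1.5 * math.log(1.0 + math.log(s))

def largest_fixed_point(a, C):
    # Largest integer s >= 1 with a * beta(s) >= s.
    s = 1
    while a * beta_c(s, C) >= s:
        s += 1
    return s - 1

a, C = 8.0, math.log(10.0)
bound = a * C + 2.0 * a * math.log(1.0 + math.log(a * C))
print(largest_fixed_point(a, C), "<=", round(bound, 2))   # prints: 36 <= 40.25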
This result is a consequence of Theorem 16 stated in Appendix F, that uses the fact that for
C ≥ −ln(0.1) and a ≥ 8, it holds that

(3/2) · (1 + ln(aC + 2a ln(1 + ln(aC)))) / (1 + ln(aC)) ≤ 1.7995564 ≤ 2.
On the event E, letting τ_ℓ be the last instant before τ at which the leaf ℓ has been played before
stopping, one has N_ℓ(τ − 1) = N_ℓ(τ_ℓ), which satisfies by Lemma 5

N_ℓ(τ_ℓ) ≤ 8β(N_ℓ(τ_ℓ), δ) / (Δ_ℓ² ∨ Δ_*² ∨ ε²).

Applying Lemma 6 with a = a_ℓ = 8/(Δ_ℓ² ∨ Δ_*² ∨ ε²) and C = ln(|L|/δ) + 3 ln ln(|L|/δ) leads to

N_ℓ(τ − 1) ≤ a_ℓ (C + 2 ln(1 + ln(a_ℓ C))).

Letting Δ_{ℓ,ε} = Δ_ℓ ∨ Δ_* ∨ ε and summing over arms, we find

τ = 1 + Σ_ℓ N_ℓ(τ − 1)
  ≤ 1 + Σ_ℓ (8/Δ_{ℓ,ε}²) [ln(|L|/δ) + 3 ln ln(|L|/δ) + 2 ln ln((8e/Δ_{ℓ,ε}²)(ln(|L|/δ) + 3 ln ln(|L|/δ)))]
  ≤ 1 + Σ_ℓ (8/Δ_{ℓ,ε}²) [ln(|L|/δ) + 2 ln ln(1/Δ_{ℓ,ε}²)] + 8H_ε(μ) [3 ln ln(|L|/δ) + 2 ln ln(8e ln(|L|/δ) + 24e ln ln(|L|/δ))].

To conclude the proof, we remark that from the proof of Lemma 2 (see Appendix B.2) it follows that
on E, V(s*) − V(ŝ_τ) < ε and that E holds with probability larger than 1 − δ.
4 Experimental Validation
In this section we evaluate the performance of our algorithms in three experiments. We evaluate
on the depth-two benchmark tree from [10], a new depth-three tree and the random tree ensemble
from [22]. We compare to the FindTopWinner algorithm from [22] in all experiments, and in the
depth-two experiment we include the M-LUCB algorithm from [10]. Its relation to BAI-MCTS is
discussed in Section 3.3. For our BAI-MCTS algorithms and for M-LUCB we use the exploration
rate β(s, δ) = ln(|L|/δ) + ln(ln(s) + 1) (a stylized version of Lemma 2 that works well in practice), and
we use the KL refinement of the confidence intervals (1). To replicate the experiment from [22], we
supply all algorithms with δ = 0.1 and ε = 0.01. For comparing with [10] we run all algorithms with
ε = 0 and δ = 0.1|L| (undoing the conservative union bound over leaves; this excessive choice, which
might even exceed one, does not cause a problem, as the algorithms depend on δ/|L| = 0.1). In none of
our experiments does the observed error rate exceed 0.1.
Figure 3 shows the benchmark tree from [10, Section 5] and the performance of four algorithms on it.
We see that the special-purpose depth-two M-LUCB performs best, very closely followed by both
our new arbitrary-depth LUCB-MCTS and UGapE-MCTS methods. All three use significantly fewer
samples than FindTopWinner. Figure 4 (displayed in Appendix A for the sake of readability) shows
a full 3-way tree of depth 3 with leaves drawn uniformly from [0, 1]. Our algorithms outperform
the previous state of the art by an order of magnitude. Finally, we replicate the experiment from
[22, Section 4]. To make the comparison as fair as possible, we use the proven exploration rate from
(2). On 10K full 10-ary trees of depth 3 with Bernoulli leaf parameters drawn uniformly at random
from [0, 1] the average numbers of samples are: LUCB-MCTS 141811, UGapE-MCTS 142953 and
FindTopWinner 2254560. To closely follow the original experiment, we do apply the union bound
over leaves to all algorithms, which are run with ε = 0.01 and δ = 0.1. We did not observe any error
from any algorithm (even though we allow 10%). Our BAI-MCTS algorithms deliver an impressive
15-fold reduction in samples.
[Figure 3 depicts the depth-two benchmark tree: a MAX root over three MIN nodes, with nine
Bernoulli leaves whose means are shown at the leaves and, below each leaf, the average number of
pulls for each of the four algorithms.]
Figure 3: The 3 × 3 tree of depth 2 that is the benchmark in [10]. Shown below the leaves are the
average numbers of pulls for 4 algorithms: LUCB-MCTS (0.89% errors, 2460 samples), UGapE-MCTS (0.94%, 2419), FindTopWinner (0%, 17097) and M-LUCB (0.14%, 2399). All counts are
averages over 10K repetitions with ε = 0 and δ = 0.1 · 9.
5 Lower bounds and discussion
Given a tree T, an MCTS model is parameterized by the leaf values, μ := (μ_ℓ)_{ℓ∈L}, which determine the
best root action: s* = s*(μ). For λ ∈ [0, 1]^{|L|}, we define Alt(μ) = {λ ∈ [0, 1]^{|L|} : s*(λ) ≠ s*(μ)}.
Using the same technique as [9] for the classic best arm identification problem, one can establish the
following (non explicit) lower bound. The proof is given in Appendix E.
Theorem 7. Assume ε = 0. Any δ-correct algorithm satisfies

E_μ[τ] ≥ T*(μ) d(δ, 1 − δ),  where  T*(μ)^{−1} := sup_{w∈Σ_{|L|}} inf_{λ∈Alt(μ)} Σ_{ℓ∈L} w_ℓ d(μ_ℓ, λ_ℓ)   (4)

with Σ_k = {w ∈ [0, 1]^k : Σ_{i=1}^k w_i = 1} and d(x, y) = x ln(x/y) + (1 − x) ln((1 − x)/(1 − y)) is the
binary Kullback-Leibler divergence.
This result is however not directly amenable for comparison with our upper bounds, as the optimization problem defined in Theorem 7 is not easy to solve. Note that d(δ, 1 − δ) ≥ ln(1/(2.4δ)) [15], thus
our upper bounds have the right dependency in δ. For depth-two trees with K (resp. M) actions for
player A (resp. B), we can moreover prove the following result, that suggests an intriguing behavior.
Lemma 8. Assume ε = 0 and consider a tree of depth two with μ = (μ_{i,j})_{1≤i≤K, 1≤j≤M} such that
∀(i, j), μ_{1,1} > μ_{i,1}, μ_{i,1} < μ_{i,j}. The supremum in the definition of T*(μ)^{−1} can be restricted to

Σ̃_{K,M} := {w ∈ Σ_{K×M} : w_{i,j} = 0 if i ≥ 2 and j ≥ 2}

and

T*(μ)^{−1} = max_{w∈Σ̃_{K,M}} min_{i=2,...,K; a=1,...,M} [w_{1,a} d(μ_{1,a}, (w_{1,a}μ_{1,a} + w_{i,1}μ_{i,1})/(w_{1,a} + w_{i,1})) + w_{i,1} d(μ_{i,1}, (w_{1,a}μ_{1,a} + w_{i,1}μ_{i,1})/(w_{1,a} + w_{i,1}))].
It can be extracted from the proof of Theorem 7 (see Appendix E) that the vector w*(μ) that attains
the supremum in (4) represents the average proportions of selections of leaves by any algorithm
matching the lower bound. Hence, the sparsity pattern of Lemma 8 suggests that matching algorithms
should draw many of the leaves much less than O(ln(1/δ)) times. This hints at the exciting prospect
of optimal stochastic pruning, at least in the asymptotic regime δ → 0.
As an example, we numerically solve the lower bound optimization problem (which is a concave
maximization problem) for μ corresponding to the benchmark tree displayed in Figure 3 to obtain

T*(μ) = 259.9  and  w* = ((0.3633, 0.1057, 0.0532), (0.3738, 0, 0), (0.1040, 0, 0)).

With δ = 0.1 we find kl(δ, 1 − δ) = 1.76 and the lower bound is E_μ[τ] ≥ 456.9. We see that there is a
potential improvement of at least a factor 4.
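A rough way to reproduce such numbers is to plug the expression of Lemma 8 into a generic solver. The sketch below is our own, using scipy; it indexes the optimal row of player A as row 0, assumes (as in the lemma) that every suboptimal row attains its minimum in its first column, and maximizes the nonsmooth min with SLSQP, which is adequate for a sanity check but not a robust solver. On the instance of Figure 3 it should approximately recover T*(μ) ≈ 260.

import numpy as np
from scipy.optimize import minimize

def kl(x, y):
    # Binary Kullback-Leibler divergence d(x, y).
    return x * np.log(x / y) + (1.0 - x) * np.log((1.0 - x) / (1.0 - y))

def t_inverse(mu, w_top, w_side):
    # Objective of Lemma 8: w_top weighs leaves (1, a); w_side weighs leaves (i, 1), i >= 2.
    vals = []
    for a in range(mu.shape[1]):
        for i in range(1, mu.shape[0]):
            wa, wi = w_top[a], w_side[i - 1]
            m = (wa * mu[0, a] + wi * mu[i, 0]) / (wa + wi)
            vals.append(wa * kl(mu[0, a], m) + wi * kl(mu[i, 0], m))
    return min(vals)

def solve_lower_bound(mu):
    K, M = mu.shape
    dim = M + K - 1
    x0 = np.full(dim, 1.0 / dim)
    res = minimize(lambda x: -t_inverse(mu, x[:M], x[M:]), x0,
                   bounds=[(1e-6, 1.0)] * dim,
                   constraints={"type": "eq", "fun": lambda x: x.sum() - 1.0})
    return 1.0 / (-res.fun), res.x               # T*(mu) and the sparse weights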
Future directions An (asymptotically) optimal algorithm for BAI called Track-and-Stop was
developed by [9]. It maintains the empirical proportions of draws close to w*(μ̂), adding forced
exploration to ensure μ̂ → μ. We believe that developing this line of ideas for MCTS would result in
a major advance in the quality of tree search algorithms. The main challenge is developing efficient
solvers for the general optimization problem (4). For now, even the sparsity pattern revealed by
Lemma 8 for depth two does not give rise to efficient solvers. We also do not know how this sparsity
pattern evolves for deeper trees, let alone how to compute w*(μ).
Acknowledgments. Emilie Kaufmann acknowledges the support of the French Agence Nationale
de la Recherche (ANR), under grant ANR-16-CE40-0002 (project BADASS). Wouter Koolen acknowledges support from the Netherlands Organization for Scientific Research (NWO) under Veni
grant 639.021.439.
References
[1] J-Y. Audibert, S. Bubeck, and R. Munos. Best Arm Identification in Multi-armed Bandits. In Proceedings of the 23rd Conference on Learning Theory, 2010.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[3] L. Buşoniu, R. Munos, and E. Páll. An analysis of optimistic, best-first search for minimax sequential decision making. In ADPRL14, 2014.
[4] C. Browne, E. Powley, D. Whitehouse, S. Lucas, P. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–49, 2012.
[5] O. Cappé, A. Garivier, O-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 41(3):1516–1541, 2013.
[6] T. Cazenave. Sequential halving applied to trees. IEEE Transactions on Computational Intelligence and AI in Games, 7(1):102–105, 2015.
[7] E. Even-Dar, S. Mannor, and Y. Mansour. Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems. Journal of Machine Learning Research, 7:1079–1105, 2006.
[8] V. Gabillon, M. Ghavamzadeh, and A. Lazaric. Best Arm Identification: A Unified Approach to Fixed Budget and Fixed Confidence. In Advances in Neural Information Processing Systems, 2012.
[9] A. Garivier and E. Kaufmann. Optimal best arm identification with fixed confidence. In Proceedings of the 29th Conference On Learning Theory (COLT), 2016.
[10] A. Garivier, E. Kaufmann, and W.M. Koolen. Maximin action identification: A new bandit framework for games. In Proceedings of the 29th Conference On Learning Theory, 2016.
[11] Ruitong Huang, Mohammad M. Ajallooeian, Csaba Szepesvári, and Martin Müller. Structured best arm identification with fixed confidence. In 28th International Conference on Algorithmic Learning Theory (ALT), 2017.
[12] K. Jamieson, M. Malloy, R. Nowak, and S. Bubeck. lil'UCB: an Optimal Exploration Algorithm for Multi-Armed Bandits. In Proceedings of the 27th Conference on Learning Theory, 2014.
[13] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In International Conference on Machine Learning (ICML), 2012.
[14] Z. Karnin, T. Koren, and O. Somekh. Almost optimal Exploration in multi-armed bandits. In International Conference on Machine Learning (ICML), 2013.
[15] E. Kaufmann, O. Cappé, and A. Garivier. On the Complexity of Best Arm Identification in Multi-Armed Bandit Models. Journal of Machine Learning Research, 17(1):1–42, 2016.
[16] E. Kaufmann and S. Kalyanakrishnan. Information complexity in bandit subset selection. In Proceedings of the 26th Conference On Learning Theory, 2013.
[17] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the 17th European Conference on Machine Learning, ECML'06, pages 282–293, Berlin, Heidelberg, 2006. Springer-Verlag.
[18] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[19] T. Pepels, T. Cazenave, M. Winands, and M. Lanctot. Minimizing simple and cumulative regret in Monte-Carlo tree search. In Computer Games Workshop, ECAI, 2014.
[20] Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie de Bruin. Best-first fixed-depth minimax algorithms. Artificial Intelligence, 87(1):255–293, 1996.
[21] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016.
[22] K. Teraoka, K. Hatano, and E. Takimoto. Efficient sampling method for Monte Carlo tree search problem. IEICE Transactions on Information and Systems, pages 392–398, 2014.
[23] W.R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25:285–294, 1933.
Group Additive Structure Identification for Kernel
Nonparametric Regression
Pan Chao
Department of Statistics
Purdue University
West Lafayette, IN 47906
[email protected]
Michael Zhu
Department of Statistics, Purdue University
West Lafayette, IN 47906
Center for Statistical Science
Department of Industrial Engineering
Tsinghua University, Beijing, China
[email protected]
Abstract
The additive model is one of the most popularly used models for high dimensional
nonparametric regression analysis. However, its main drawback is that it neglects
possible interactions between predictor variables. In this paper, we reexamine the
group additive model proposed in the literature, and rigorously define the intrinsic
group additive structure for the relationship between the response variable Y and
the predictor vector X, and further develop an effective structure-penalized kernel
method for simultaneous identification of the intrinsic group additive structure
and nonparametric function estimation. The method utilizes a novel complexity
measure we derive for group additive structures. We show that the proposed method
is consistent in identifying the intrinsic group additive structure. Simulation study
and real data applications demonstrate the effectiveness of the proposed method as
a general tool for high dimensional nonparametric regression.
1 Introduction
Regression analysis is popularly used to study the relationship between a response variable Y and a vector of predictor variables X. Linear and logistic regression are arguably the two most popularly used regression tools in practice, and both postulate explicit parametric models on f(X) = E[Y|X] as a function of X. When no parametric model can be imposed, nonparametric regression analysis can instead be performed. On the one hand, nonparametric regression analysis is flexible and not susceptible to model mis-specification; on the other hand, it suffers from a number of well-known drawbacks, especially in high dimensional settings. Firstly, the asymptotic error rate of nonparametric regression deteriorates quickly as the dimension of X increases. [16] shows that, under some regularity conditions, the optimal asymptotic error rate for estimating a d-times differentiable function is O(n^{−d/(2d+p)}), where p is the dimensionality of X. Secondly, the resulting fitted nonparametric function is often complicated and difficult to interpret.
To overcome the drawbacks of high dimensional nonparametric regression, one popularly used approach is to impose the additive structure [5] on f(X), that is, to assume that f(X) = f_1(X_1) + ⋯ + f_p(X_p), where f_1, ..., f_p are p unspecified univariate functions. Thanks to the additive structure, the nonparametric estimation of f, or equivalently of the individual f_i's for 1 ≤ i ≤ p, becomes efficient and does not suffer from the curse of dimensionality. Furthermore, the interpretability of the resulting model is also much improved.
The key drawback of the additive model is that it does not allow interactions between the predictor variables. To address this limitation, functional ANOVA models were proposed to accommodate higher order interactions; see [4] and [13]. For example, by neglecting interactions of order higher than 2, the functional ANOVA model can be written as

  f(X) = Σ_{i=1}^p f_i(X_i) + Σ_{1 ≤ i < j ≤ p} f_{ij}(X_i, X_j),

with some marginal constraints. Another higher order interaction model,

  f(X) = Σ_{d=1}^D Σ_{1 ≤ i_1 < ⋯ < i_d ≤ p} f_{i_1⋯i_d}(X_{i_1}, ..., X_{i_d}),

is proposed by [6]. This model considers all interactions of order up to D, and is estimated by Kernel Ridge Regression (KRR) [10] with the elementary symmetric polynomial (ESP) kernel. Both models assume the existence of possible interactions between any two or more predictor variables. This can lead to a serious problem, namely that the number of nonparametric functions that need to be estimated quickly increases as the number of predictor variables increases.
There exists another direction to generalize the additive model. When proposing the Optimal Kernel Group Transformation (OKGT) method for nonparametric regression, [11] considers the additive structure of predictor variables in groups instead of individual predictor variables. Let G := {u_j}_{j=1}^d be an index partition of the predictor variables, that is, u_j ∩ u_k = ∅ if j ≠ k and ∪_{j=1}^d u_j = {1, ..., p}. Let X^{u_j} = {X_k : k ∈ u_j} for j = 1, ..., d. Then {X_1, ..., X_p} = X^{u_1} ∪ ⋯ ∪ X^{u_d}. For any function f(X), if there exists an index partition G = {u_1, ..., u_d} such that

  f(X) = f_{u_1}(X^{u_1}) + ... + f_{u_d}(X^{u_d}),   (1)

where f_{u_1}(X^{u_1}), ..., f_{u_d}(X^{u_d}) are d unspecified nonparametric functions, then f(X) is said to admit the group additive structure G. We also refer to (1) as a group additive model for f(X). It is clear that the usual additive model is a special case with G = {(1), ..., (p)}.
Suppose X_{j_1} and X_{j_2} are two predictor variables. Intuitively, if X_{j_1} and X_{j_2} interact with each other, then they must appear in the same group in a reasonable group additive structure of f(X). On the other hand, if X_{j_1} and X_{j_2} belong to two different groups, then they do not interact with each other. Therefore, in terms of accommodating interactions, the group additive model can be considered as lying in the middle between the original additive model and the functional ANOVA or higher order interaction models. When the group sizes are small, for example all less than or equal to 3, the group additive model can maintain the estimation efficiency and interpretability of the original additive model while avoiding the problems of a high order model discussed earlier.
However, in [11], two important issues are not addressed. First, the group additive structure may not be unique, which leads to a nonidentifiability problem for the group additive model (see the discussion in Section 2.1). Second, [11] did not propose a systematic approach to identify the group additive structure. In this paper, we intend to resolve these two issues. To address the first issue, we rigorously define the intrinsic group additive structure for any square integrable function, which in some sense is the minimal group additive structure among all correct group additive structures. To address the second issue, we propose a general approach to simultaneously identify the intrinsic group additive structure and estimate the nonparametric functions using kernel methods and Reproducing Kernel Hilbert Spaces (RKHSs). For a given group additive structure G = {u_1, ..., u_d}, we first define the corresponding direct sum RKHS as H_G = H_{u_1} ⊕ ⋯ ⊕ H_{u_d}, where H_{u_j} is the usual RKHS for the variables in u_j only, for j = 1, ..., d. Based on results on the capacity measure of RKHSs in the literature, we derive a tractable capacity measure of the direct sum RKHS H_G, which is further used as the complexity measure of G. Then, the identification of the intrinsic group additive structure and the estimation of the nonparametric functions can be performed through the following minimization problem:

  (f̂, Ĝ) = argmin_{f ∈ H_G, G} (1/n) Σ_{i=1}^n (y_i − f(x_i))² + λ‖f‖²_{H_G} + μ C(G).

We show that when the novel complexity measure C(G) of group additive structures is used, the minimizer Ĝ is consistent for the intrinsic group additive structure. We further develop two algorithms, one using exhaustive search and the other employing a stepwise approach, for identifying true additive group structures in the small-p and large-p scenarios. Extensive simulation studies and real data applications show that the proposed method can successfully recover the true additive group structures in a variety of model settings.
There exists a connection between our proposed group additive model and graphical models ([2], [7]). This is especially true when a sparse block structure is imposed [9]. However, a key difference exists. Consider the following example: Y = sin(X_1 + X_2² + X_3) + cos(X_4 + X_5 + X_6²) + ε. A graphical model typically considers the conditional dependence (CD) structure among all of the variables, including X_1, ..., X_6 and Y, which is more complex than the group additive (GA) structure {(X_1, X_2, X_3), (X_4, X_5, X_6)}. The CD structure, once known, can be further examined to infer the GA structure. In this paper, we instead propose methods that directly target the GA structure rather than the more complex CD structure.
The rest of the paper is organized as follows. In Section 2, we rigorously formulate the problem of Group Additive Structure Identification (GASI) for nonparametric regression and propose the structural penalty method to solve it. In Section 3, we prove the selection consistency of the method. We report experimental results based on simulation studies and real data applications in Sections 4 and 5. Section 6 concludes the paper with a discussion.
2 Method
2.1 Group Additive Structures
In the Introduction, we discussed that the group additive structure for f(X) may not be unique. Here is an example. Consider the model Y = 2 + 3X_1 + 1/(1 + X_2² + X_3²) + arcsin((X_4 + X_5)/2) + ε, where ε is a zero-mean error independent of X. According to our definition, this model admits the group additive structure G_0 = {(1), (2, 3), (4, 5)}. Let G_1 = {(1, 2, 3), (4, 5)} and G_2 = {(1, 4, 5), (2, 3)}. The model can also be said to admit G_1 and G_2. However, there exists a major difference between G_0 and the pair G_1, G_2: while the groups in G_0 cannot be further divided into subgroups, both G_1 and G_2 contain groups that can be further split. We define the following partial order between group structures to characterize this difference and their relationship.
Definition 1. Let G and G′ be two group additive structures. If for every group u ∈ G there is a group v ∈ G′ such that u ⊆ v, then G is called a sub group additive structure of G′. This relation is denoted as G ⪯ G′. Equivalently, G′ is a super group additive structure of G, denoted as G′ ⪰ G.
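To make Definition 1 concrete, here is a minimal sketch in Python of how group additive structures can be represented as partitions and the relation G ⪯ G′ checked. The helper names and 0-based indexing are ours, for illustration only.

from itertools import chain

# A group additive structure is a partition of {0, ..., p-1},
# represented as a set of frozensets (the groups u_j).
def make_structure(*groups):
    G = set(frozenset(u) for u in groups)
    all_idx = list(chain.from_iterable(G))
    assert len(all_idx) == len(set(all_idx)), "groups must be disjoint"
    return G

def is_sub_structure(G, G_prime):
    # Definition 1: G <= G' iff every group u in G lies inside some group v in G'.
    return all(any(u <= v for v in G_prime) for u in G)

# The example above, 0-based: G0 precedes G1 and G2; G1, G2 are incomparable.
G0 = make_structure([0], [1, 2], [3, 4])
G1 = make_structure([0, 1, 2], [3, 4])
G2 = make_structure([0, 3, 4], [1, 2])
assert is_sub_structure(G0, G1) and is_sub_structure(G0, G2)
assert not is_sub_structure(G1, G2) and not is_sub_structure(G2, G1)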
In the previous example, G_0 is a sub group additive structure of both G_1 and G_2. However, the order between G_1 and G_2 is not defined. Let X := [0, 1]^p be the p-dimensional unit cube for the predictor variables X, with distribution P_X. For a group of predictor variables u, we define the space of square integrable functions L²_u(X) := {g ∈ L²_{P_X}(X) : g(X) = f_u(X^u)}; that is, L²_u contains the functions that depend only on the variables in group u. Then the group additive model f(X) = Σ_{j=1}^d f_{u_j}(X^{u_j}) is a member of the direct sum function space L²_G(X) := ⊕_{u∈G} L²_u(X). Let |u| be the cardinality of the group u. If u is the only group in a group additive structure and |u| = p, then L²_u = L²_G and f is a fully nonparametric function.
The following proposition shows that the order of two different group additive structures is preserved
by their corresponding square integrable function spaces.
Proposition 1. Let G1 and G2 be two group additive structures. If G1 ? G2 , then L2G1 ? L2G2 .
Furthermore, if X1 , . . . , Xp are independent and G1 6= G2 , then L2G1 ? L2G2 .
Definition 2. Let f(X) be a square integrable function. For a group additive structure G, if there is a function f_G ∈ L²_G such that f_G = f, then G is called an amiable group additive structure for f.
In the example discussed at the beginning of this subsection, G_0, G_1 and G_2 are all amiable group structures. So amiable group structures may not be unique.
Proposition 2. Suppose G is an amiable group additive structure for f. If there is a second group additive structure G′ such that G ⪯ G′, then G′ is also amiable for f.
We denote the collection of all amiable group structures for f(X) by 𝒢^a, which is partially ordered and complete. Therefore, there exists a minimal group additive structure in 𝒢^a, which is the most concise group additive structure for the target function. We state this result as a theorem.
Theorem 1. Let 𝒢^a be the set of amiable group additive structures for f. There is a unique minimal group additive structure G* ∈ 𝒢^a such that G* ⪯ G for all G ∈ 𝒢^a, where the order is given by Definition 1. G* is called the intrinsic group additive structure for f.
For statistical modeling, G* achieves the greatest dimension reduction for the relationship between Y and X. It induces the smallest function space which includes the model. In general, the intrinsic group structure can help mitigate the curse of dimensionality while improving both the efficiency and interpretability of high dimensional nonparametric regression.
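As a side note on Theorem 1, under the order of Definition 1 the greatest lower bound of two structures is their common refinement (the non-empty pairwise intersections); this partition-lattice operation is what makes a minimal element well defined. A one-line sketch, building on the helpers above (ours, not the paper's):

def meet(G1, G2):
    # Greatest lower bound under Definition 1: the common refinement.
    return {u & v for u in G1 for v in G2 if u & v}

# In the earlier example, refining G1 by G2 recovers the intrinsic structure G0.
assert meet(G1, G2) == G0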
2.2 Kernel Method with Known Intrinsic Group Additive Structure
Suppose the intrinsic group additive structure for f(X) = E[Y|X] is known to be G* = {u_j}_{j=1}^d, that is, f(X) = f_{u_1}(X^{u_1}) + ⋯ + f_{u_d}(X^{u_d}). Then we use the kernel method to estimate the functions f_{u_1}, f_{u_2}, ..., f_{u_d}. Let (K_{u_j}, H_{u_j}) be the kernel and its corresponding RKHS for the j-th group u_j. Using kernel methods amounts to solving

  f̂_{λ,G*} = argmin_{f_{G*} ∈ H_{G*}} { (1/n) Σ_{i=1}^n (y_i − f_{G*}(x_i))² + λ‖f_{G*}‖²_{H_{G*}} },   (2)

where H_{G*} := {f = Σ_{j=1}^d f_{u_j} : f_{u_j} ∈ H_{u_j}}. The subscripts of f̂ explicitly indicate its dependence on the group additive structure G* and the tuning parameter λ.
In general, an RKHS is usually smaller than the L² space defined on the same input domain, so it is not always true that f̂_{λ,G*} achieves f. However, one can choose universal kernels K_{u_j} so that their corresponding RKHSs are dense in the L² spaces (see [3], [15]). Using universal kernels allows f̂_{λ,G*} not only to achieve unbiasedness but also to recover the group additive structure of f(X). This is the fundamental reason for the consistency property of our proposed method for identifying the intrinsic group additive structure. Two examples of universal kernels are the Gaussian and Laplace kernels.
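To illustrate (2), below is a minimal sketch (Python with NumPy; the function names are ours) of kernel ridge regression with the additive Gaussian kernel K_G = Σ_j K_{u_j}: since H_G is a direct sum, the representer theorem reduces (2) to a single linear solve with the summed Gram matrix. Treat it as an illustration of the estimator, not the authors' implementation.

import numpy as np

def gaussian_gram(X, gamma=1.0):
    # K(x, t) = exp(-gamma * ||x - t||^2) on the given columns
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def group_additive_krr(X, y, groups, lam, gamma=1.0):
    # Solve (2) for a fixed structure G = `groups` (lists of column indices).
    n = X.shape[0]
    K = sum(gaussian_gram(X[:, u], gamma) for u in groups)  # K_G = sum_j K_{u_j}
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)     # dual ridge solve
    def predict(X_new):
        Kx = sum(np.exp(-gamma * (
            np.sum(X_new[:, u]**2, axis=1)[:, None]
            + np.sum(X[:, u]**2, axis=1)[None, :]
            - 2 * X_new[:, u] @ X[:, u].T)) for u in groups)
        return Kx @ alpha
    return alpha, predict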
2.3 Identification of Unknown Intrinsic Group Additive Structure
2.3.1 Penalization on Group Additive Structures
The success of the kernel method hinges on knowing the intrinsic group additive structure G*. In practice, however, G* is seldom known, and it may be of primary interest to identify G* while estimating a group additive model. Recall from Subsection 2.1 that G* exists and is unique. The other group additive structures belong to two categories, amiable and non-amiable.
Consider first an arbitrary non-amiable group additive structure G ∈ 𝒢 \ 𝒢^a. If G is used in place of G* in (2), the solution f̂_{λ,G}, as an estimator of f, will have a systematic bias, because the L² distance between any function f_G ∈ H_G and the true function f is bounded away from zero. In other words, using a non-amiable structure results in a poor fit of the model.
Next, consider an arbitrary amiable group additive structure G ∈ 𝒢^a used in (2). Because G is amiable, we have f_{G*} = f_G almost surely (in population) for the true function f(X). The bias of the resulting fitted function f̂_{λ,G} will vanish as the sample size increases. Although their asymptotic rates are in general different, at a fixed sample size n, goodness of fit alone cannot distinguish G from G*. The key difference between G* and G is their structural complexity: G* is the smallest among all amiable structures (i.e., G* ⪯ G for all G ∈ 𝒢^a).
Suppose a proper complexity measure of a group additive structure G can be defined (to be addressed in the next section) and is denoted C(G). We can then incorporate C(G) into (2) as an additional penalty term and change the kernel method to the following structure-penalized kernel method:
  (f̂_{λ,μ}, Ĝ) = argmin_{f_G ∈ H_G, G} { (1/n) Σ_{i=1}^n (y_i − f_G(x_i))² + λ‖f_G‖²_{H_G} + μ C(G) }.   (3)
It is clear that the only difference between (2) and (3) is the term μC(G). As discussed above, the intrinsic group additive structure G* can simultaneously achieve the goodness of fit represented by the first two terms in (3) and a small structural complexity penalty represented by the last term. Therefore, by properly choosing the tuning parameters, we expect Ĝ to be consistent, in the sense that the probability P(Ĝ = G*) increases to one as n increases (see the Theory section below). In the next section, we derive a tractable complexity measure for a group additive structure.
2.3.2 Complexity Measure of Group Additive Structure
It is tempting to propose an intuitive complexity measure C(·) for a group additive structure such that C(G_1) ≤ C(G_2) whenever G_1 ⪯ G_2. The intuition, however, breaks down, or at least becomes less clear, when the order between G_1 and G_2 cannot be defined. From Proposition 1, it is known that when G_1 ≺ G_2, we have L²_{G_1} ⊊ L²_{G_2}. It is not difficult to show that when G_1 ≺ G_2, we also have H_{K,G_1} ⊊ H_{K,G_2}. This observation motivates us to define the structural complexity measure of G through the capacity measure of its corresponding RKHS H_G.
There exist a number of different capacity measures for RKHSs in the literature, including entropy [18], VC dimensions [17], Rademacher complexity [1], and covering numbers ([14], [18]). After investigating and comparing different measures, we use the covering number to design a practically convenient complexity measure for group additive structures.
It is known that an RKHS H_K can be embedded in the continuous function space C(X) (see [12], [18]), with the inclusion mapping denoted I_K : H_K → C(X). Let H_{K,r} = {h ∈ H_K : ‖h‖_{H_K} ≤ r} be an r-ball in H_K and let the closure of I(H_{K,r}) in C(X) be denoted I(H_{K,r})‾. One way to measure the capacity of H_K is through the covering number of I(H_{K,r})‾ in C(X), denoted N(ε, I(H_{K,r}), d_∞), which is the smallest cardinality of a subset S of C(X) such that I(H_{K,r})‾ ⊆ ∪_{s∈S} {t ∈ C(X) : d_∞(t, s) ≤ ε}. Here ε is any small positive value and d_∞ is the sup-norm.
various upper bounds have been obtained in the literature. One such upper bound is presented below.
When K is a convolution kernel, i.e. K(x, t) = k(x ? t), and the Fourier transform of k decays
exponentially, then it is given in [18] that
r p+1
ln N , I(HK,r ), d? ? Ck,p ln
,
(4)
where Ck,p is a constant depending on the kernel function k and input dimension p. In particular,
when K is a Gaussian kernel, [18] has obtained more elaborate upper bounds.
The upper bound in (4) depends on r and through ln(r/). When ? 0 with r being fixed (e,g.
p+1
r = 1 when a unit ball is considered), (ln(r/))
dominates the upper bound. According to [8], the
growth rate of N (, IK ) or its logarithm can be viewed as a capacity measure of RKHS. So we use
p+1
(ln(r/))
as the capacity measure, which can be reparameterized as ?p+1 with ? = ln(r/). Let
p+1
C(Hk ) denote the capacity measure of Hk , which is defined as C(Hk ) = (ln(r/))
= ?()p+1 .
We know is the radius of a covering ball, which is the unit of measurement we use to quantify the
capacity. The capacity of two RKHSs with different input dimensions are easier to be differentiated
when is small. This gives an interpretation of ?.
We have defined a capacity measure for a general RKHS. In Problem (3), the model space H_G is a direct sum of RKHSs. Let G = {u_1, ..., u_d}; let H_G, H_{u_1}, ..., H_{u_d} be the RKHSs corresponding to G, u_1, ..., u_d, respectively; and let I_G, I_{u_1}, ..., I_{u_d} be the inclusion mappings of H_G, H_{u_1}, ..., H_{u_d} into C(X). Then we have the following proposition.
Proposition 3. Let G be a group additive structure and H_G the induced direct sum RKHS defined in (3). Then the covering number of H_G and the covering numbers of the H_{u_j} satisfy

  ln N(ε, I_G, d_∞) ≤ Σ_{j=1}^d ln N(ε/|G|, I_{u_j}, d_∞),   (5)

where |G| denotes the number of groups in G.
By applying Proposition 3 and using the parameterized upper bound, we have ln N(ε, I_G, d_∞) = O(Σ_{u∈G} ν(ε)^{|u|+1}). This rate can be used as the explicit expression of the complexity measure for group additive structures, that is, C(G) = Σ_{j=1}^d ν(ε)^{|u_j|+1}. Recall that there is another tuning parameter μ which controls the effect of the complexity of the group structure on the penalized risk. By absorbing the common factor ν (the +1 in the exponent) into μ and writing ω = ν(ε), we can further simplify the penalty's expression. Thus, we have the following explicit formulation for GASI:

  (f̂_{λ,μ}, Ĝ) = argmin_{f_G ∈ H_G, G} { (1/n) Σ_{i=1}^n (y_i − f_G(x_i))² + λ‖f_G‖²_{H_G} + μ Σ_{j=1}^d ω^{|u_j|} }.   (6)
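For concreteness, here is a small sketch (Python; our names, reusing gaussian_gram from the earlier sketch) of the structure penalty in (6) and of the fitted penalized risk used for model selection in (8) below; it illustrates the formulas under our assumed Gaussian kernel, not the authors' code.

import numpy as np

def structure_penalty(groups, omega):
    # sum_j omega^{|u_j|}; the weight mu is applied by the caller
    return sum(omega ** len(u) for u in groups)

def fitted_penalized_risk(X, y, groups, lam, mu, omega, gamma=1.0):
    # R_hat_G from (7) plus the structure penalty, as in (8)
    n = X.shape[0]
    K = sum(gaussian_gram(X[:, u], gamma) for u in groups)
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    f_hat = K @ alpha
    R_hat = np.mean((y - f_hat)**2) + lam * alpha @ K @ alpha  # ||f||^2 = a'Ka
    return R_hat + mu * structure_penalty(groups, omega)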
2.4 Estimation
We assume that the value of μ is given; in practice, μ can be tuned separately. If the values of λ and ω are also given, Problem (6) can be solved by the following two-step procedure.
First, when the group structure G is known, the f_u can be estimated by solving

  R̂_G = min_{f_G ∈ H_G} { (1/n) Σ_{i=1}^n (y_i − f_G(x_i))² + λ‖f_G‖²_{H_G} }.   (7)
Second, the optimal group structure is chosen to achieve both a small fitting error and a small complexity, i.e.,

  Ĝ = argmin_{G ∈ 𝒢} { R̂_G + μ Σ_{j=1}^d ω^{|u_j|} }.   (8)

The two-step procedure above is expected to identify the intrinsic group structure, that is, Ĝ = G*. Recall that a group structure belongs to one of three categories: intrinsic, amiable, or non-amiable. If G is non-amiable, then R̂_G is expected to be large, because G is a wrong structure which results in a biased estimate. If G is amiable, then although R̂_G is expected to be small, the complexity penalty of G is larger than that of G*. As a consequence, only G* can simultaneously achieve a small R̂_{G*} and a relatively small complexity. Therefore, when the sample size is large enough, we expect Ĝ = G* with high probability. If the values of λ and ω are not given, a separate validation set can be used to select the tuning parameters.
The two-step estimation is summarized in Algorithm 1. When a model contains a large number of predictor variables, such exhaustive search incurs a high computational cost. In order to apply GASI to a large model, we propose a backward stepwise algorithm, illustrated in Algorithm 2; a code sketch of the exhaustive variant follows the two algorithm boxes.
Algorithm 1: Exhaustive Search w/ Validation
1: Split data into training (T) and validation (V) sets.
2: for (λ, ω) in grid do
3:   for G ∈ 𝒢 do
4:     R̂_G, f̂_G ← solve (7) using G;
5:     Calculate the sum in (8), denoted R̂_G^{pen,λ,ω};
6:   end for
7:   Ĝ^{λ,ω} ← argmin_{G∈𝒢} R̂_G^{pen,λ,ω};
8:   ŷ^V ← f̂_{Ĝ^{λ,ω}}(x^V);
9:   e²_{Ĝ^{λ,ω}} ← ‖y^V − ŷ^V‖²;
10: end for
11: λ*, ω* ← argmin_{λ,ω} e²_{Ĝ^{λ,ω}};
12: Ĝ ← Ĝ^{λ*,ω*};
Algorithm 2: Basic Backward Stepwise
1: Start with the group structure {(1, ..., p)};
2: Solve (6) and obtain its minimum value R̂_G^{pen};
3: for each predictor variable j do
4:   G′ ← either split j off as a new group or add it to an existing group;
5:   Solve (6) and obtain its minimum value R̂_{G′}^{pen};
6:   if R̂_{G′}^{pen} < R̂_G^{pen} then
7:     Keep G′ as the new group structure;
8:   end if
9: end for
10: return G′;
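The exhaustive variant referenced above can be sketched as follows (Python; set partitions are enumerated recursively, so this is feasible only for small p). The helper names are ours, fitted_penalized_risk is the sketch from Section 2.3.2, and the validation loop of Algorithm 1 is omitted; this illustrates the selection rule (8) only.

def all_partitions(items):
    # Recursively enumerate all set partitions of a non-empty list `items`.
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for part in all_partitions(rest):
        for i in range(len(part)):                  # join an existing group
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part                      # or start a new group

def exhaustive_gasi(X, y, lam, mu, omega):
    # Minimize (8) over all group structures for fixed (lam, mu, omega).
    best, best_risk = None, float("inf")
    for groups in all_partitions(list(range(X.shape[1]))):
        risk = fitted_penalized_risk(X, y, groups, lam, mu, omega)
        if risk < best_risk:
            best, best_risk = groups, risk
    return best, best_risk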
? as a solution to (6) is consistent,
In this section, we prove that the estimated group additive structure G
?
?
that is the probability P (G = G ) goes to 1 as the sample size n goes to infinity. The proof and
supporting lemmas are included in the supplementary material.
2
? ) =
LetPR(fG ) = E[(Y ? f (X)) ] denote the population risk of a function f ? HG , and R(f
n
1
2
a
i=1 (yi ? f (xi )) be the empirical risk. First, we show that for any amiable structure G ? G ,
n
?
?
?
its minimized empirical risk R(fG ) converges in probability to the optimal population risk R(fG? )
achieved by the intrinsic group additive structure. Here f?G denotes the minimizer of Problem (7)
?
with the given G, and fG
? denotes the minimizer of the population risk when the intrinsic group
structure is used. The result is given below as Proposition 4.
Proposition 4. Let G? be the intrinsic group additive structure, G ? G a a given amiable group
?
structure, and HG? and HG the respective direct sum RKHSs. If f?G
? HG is the optimal solution of
6
ID
Model
Intrinsic Group Structure
x22
y = 2x1 +
+ x33 + sin(?x
4 ) + log(x5
1
3
= 1+x2 + arcsin x2 +x
+ arctan (x4
1
21
3
= arcsin x1 +x
+ arctan (x4
+
2
1+x2
2
+ 5) + |x6 | +
+ x 5 + x 6 )3 +
y
+ x 5 + x 6 )3 +
y = x1 ? x2 +nsin((x3 + x4 ) ? ?) + log(x5 ? x6 +
o 10) +
p
2
2
2
2
2
2
y = exp
x1 + x2 + x3 + x4 + x5 + x6 +
M1
M2
y
M3
M4
M5
{(1) , (2) , (3) , (4) , (5) , (6)}
{(1) , (2, 3) , (4, 5, 6)}
{(1, 3) , (2) , (4, 5, 6)}
{(1, 2) , (3, 4) , (5, 6)}
{(1, 2, 3, 4, 5, 6)}
Table 1: Selected models for the simulation study using the exhaustive search method and the corresponding
additive group structures.
Problem (7), then for any ε > 0, we have

  P(|R̂(f̂_G) − R(f*_{G*})| > ε) ≤ 12n · exp{ Σ_{u∈G} ln N(ε/(12|G|), H_u, d_∞) − ε²n/144 }
    + 12n · exp{ Σ_{u∈G} ln N(ε/(12|G|), H_u, d_∞) − n(ε/24 − λ_n‖f*_{G*}‖²/12) }.   (9)

Note that λ_n in (9) must be chosen such that ε/24 − λ_n‖f*_{G*}‖²/12 is positive. For a fixed p, the number of amiable group additive structures is finite. Using a Bonferroni-type argument, we can in fact obtain a uniform upper bound over all of the amiable group additive structures in 𝒢^a.
Theorem 2. Let 𝒢^a be the set of all amiable group structures. For any ε > 0 and n > 2/ε², we have

  P( sup_{G∈𝒢^a} |R̂(f̂_G) − R(f*_{G*})| > ε ) ≤ 12n|𝒢^a| [ exp{ max_{G∈𝒢^a} ln N(ε/12, H_G, d_∞) − ε²n/144 }
    + exp{ max_{G∈𝒢^a} ln N(ε/12, H_G, d_∞) − n(ε/24 − λ_n‖f*_{G*}‖²/12) } ].   (10)
Next we consider a non-amiable group additive structure G′ ∈ 𝒢 \ 𝒢^a. It turns out that R̂(f̂_{G′}) fails to converge to R(f*_{G*}), and |R̂(f̂_{G′}) − R(f*_{G*})| converges to a positive constant. Furthermore, because the number of non-amiable group additive structures is finite, we can show that |R̂(f̂_G) − R(f*_{G*})| is uniformly bounded away from zero with probability going to 1. We state the results below.
Theorem 3. (i) For a non-amiable group structure G ∈ 𝒢 \ 𝒢^a, there exists a constant C > 0 such that |R̂(f̂_G) − R(f*_{G*})| converges to C in probability. (ii) There exists a constant C̃ such that P(|R̂(f̂_G) − R(f*_{G*})| > C̃ for all G ∈ 𝒢 \ 𝒢^a) goes to 1 as n goes to infinity.
By combining Theorems 2 and 3, we can prove the consistency of our GASI method.
Theorem 4. Let λ_n√n → 0. By choosing a proper tuning parameter μ > 0 for the structural penalty, the estimated group structure Ĝ is consistent for the intrinsic group additive structure G*, that is, P(Ĝ = G*) goes to one as the sample size n goes to infinity.
4 Simulation
In this section, we evaluate the performance of GASI using synthetic data. Table 1 lists the five models we use. Observations of X are simulated independently from N(0, 1) in M1, Unif(−1, 1) in M2 and M3, and Unif(0, 2) in M4 and M5. The noise ε is i.i.d. N(0, 0.01²). The grid values of λ are equally spaced in [1e−10, 1/64] on a log scale, and each ω is an integer in [1, 10].
We first show that GASI has the ability to identify the intrinsic group additive structure. The two-step procedure is carried out for each (λ, ω) pair multiple times. If, for each model, there are (λ, ω) pairs for which the true group structure is often identified, then GASI has the power to identify true group structures. We also apply Algorithm 1, which uses an additional validation set to select the parameters. We simulate 100 different samples for each model. The frequency with which the true group structure is identified is calculated for each (λ, ω).
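For reproducibility, here is a sketch (Python; our names) of generating data from one of the models in Table 1, M2, under the stated design (Unif(−1, 1) covariates and N(0, 0.01²) noise):

import numpy as np

def simulate_M2(n=500, sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n, 6))            # M2 and M3 use Unif(-1, 1)
    y = (1 / (1 + X[:, 0]**2)
         + np.arcsin((X[:, 1] + X[:, 2]) / 2)      # argument stays in [-1, 1]
         + np.arctan((X[:, 3] + X[:, 4] + X[:, 5])**3)
         + sigma * rng.standard_normal(n))
    return X, y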
Model | Max freq. | λ | ω | Max freq. | λ | ω | Max freq. | λ | ω
M1 | 100 | 1.2500e-06 | 10 | 59 | 1.2500e-06 | 4 | 99 | 1.5625e-02 | 10
M2 | 97 | 1.2500e-06 | 8 | 89 | 1.2500e-06 | 7 | 70 | 1.3975e-04 | 9
M3 | 97 | 1.2500e-06 | 9 | 89 | 1.2500e-06 | 7 | 65 | 1.3975e-04 | 8
M4 | 100 | 1.2500e-06 | 7 | 99 | 1.2500e-06 | 4 | 1 | 1.3975e-04 | 8
M5 | 100 | 1.2500e-06 | 1 | 100 | 1.2500e-06 | 1 | 100 | 1.2500e-06 | 1
Table 2: Maximum frequencies with which the intrinsic group additive structures are identified for the five models, using the exhaustive search algorithm without parameter tuning (left panel), with parameter tuning (middle panel), and the stepwise algorithm (right panel). If different pairs share the same max frequency, a pair is chosen at random.
Figure 1: Estimated transformation functions for selected groups. Top-left: group (1, 6), top-right:
group (3), bottom-left: group (5, 8), bottom-right: group (10, 12).
In Table 2, we report the maximum frequency and the corresponding (λ, ω) for each model. The complete results are included in the supplementary material. It can be seen from the left panel that the intrinsic group additive structures can be successfully identified. When the parameters are tuned, the middle panel shows that the performance on Model 1 deteriorates. This might be caused by the estimation method (KRR used to solve Problem (7)) in the algorithm; it could also be affected by λ.
When the number of predictor variables increases, we use a backward stepwise algorithm. We apply Algorithm 2 to the same models. The results are reported in the right panel of Table 2. The true group structures are identified most of the time for Models 1, 2, 3 and 5. The result for Model 4 is not satisfying: since the stepwise algorithm is greedy, it is possible that the true group structure is never visited. Further research is needed to develop a better algorithm.
5 Real Data
In this section, we report the results of applying GASI to the Boston Housing data (another real data application is reported in the supplementary material). The data include 13 predictor variables used to predict the median house value. The sample size is 506. Our goal is to identify a probable group additive structure for the predictor variables. The backward algorithm is used, and the tuning parameters λ and ω are selected via 10-fold CV. The group structure that achieves the lowest average validation error is {(1, 6), (2, 11), (3), (4, 9), (5, 8), (7, 13), (10, 12)}, which is used for further investigation. The nonparametric functions for each group were then estimated using the whole data set. Because each group contains no more than two variables, the estimated functions can be visualized. Figure 1 shows selected results.
It is interesting to see some patterns emerge in the plots. For example, the top-left plot shows the function of the average number of rooms per dwelling and the per-capita crime rate by town. We can see the house value increases with more rooms and decreases as the crime rate increases. However, when the crime rate is low, smaller houses (4 or 5 rooms) seem to be preferred. The top-right plot shows that there is a change point in how house value relates to the size of non-retail business in the area. The value initially drops when the percentage of non-retail business is small, then increases at around 8%. The increase in value might be due to the high demand for housing from the employees of those businesses.
6 Discussion
We use the group additive model for nonparametric regression and propose an RKHS-complexity-penalty-based approach for identifying the intrinsic group additive structure. There are two main directions for future research. First, our penalty function is based on the covering number of RKHSs; it is of interest to know whether other, more effective penalty functions exist. Second, it is of great interest to further improve the proposed method and apply it to general high dimensional nonparametric regression.
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. The Journal of Machine Learning Research, 3:463–482, 2003.
[2] C. M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
[3] C. Carmeli, E. De Vito, A. Toigo, and V. Umanità. Vector valued reproducing kernel Hilbert spaces and universality. Analysis and Applications, 8(01):19–61, 2010.
[4] C. Gu. Smoothing spline ANOVA models, volume 297. Springer Science & Business Media, 2013.
[5] T. Hastie and R. Tibshirani. Generalized additive models. Statistical Science, pages 297–310, 1986.
[6] K. Kandasamy and Y. Yu. Additive approximations in high dimensional nonparametric regression via the SALSA. In Proceedings of the 33rd International Conference on Machine Learning - Volume 48, ICML'16, pages 69–78. JMLR.org, 2016.
[7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning. The MIT Press, 2009.
[8] T. Kühn. Covering numbers of Gaussian reproducing kernel Hilbert spaces. Journal of Complexity, 27(5):489–499, 2011.
[9] B. M. Marlin and K. P. Murphy. Sparse Gaussian graphical models with unknown block structure. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 705–712, New York, NY, USA, 2009. ACM.
[10] K. P. Murphy. Machine learning: a probabilistic perspective. MIT Press, 2012.
[11] C. Pan, Q. Huang, and M. Zhu. Optimal kernel group transformation for exploratory regression analysis and graphics. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 905–914. ACM, 2015.
[12] T. Poggio and C. Shelton. On the mathematical foundations of learning. American Mathematical Society, 39(1):1–49, 2002.
[13] J. O. Ramsay and B. W. Silverman. Applied functional data analysis: methods and case studies, volume 77. Citeseer, 2002.
[14] A. J. Smola and B. Schölkopf. Learning with kernels. Citeseer, 1998.
[15] I. Steinwart and A. Christmann. Support vector machines. Springer Science & Business Media, 2008.
[16] C. J. Stone. Optimal global rates of convergence for nonparametric regression. The Annals of Statistics, pages 1040–1053, 1982.
[17] V. Vapnik. The nature of statistical learning theory. Springer Science & Business Media, 2013.
[18] D.-X. Zhou. The covering number in learning theory. Journal of Complexity, 18(3):739–767, 2002.
6,718 | 7,077 | Fast, Sample-Efficient Algorithms for
Structured Phase Retrieval
Gauri Jagatap
Electrical and Computer Engineering
Iowa State University
Chinmay Hegde
Electrical and Computer Engineering
Iowa State University
Abstract
We consider the problem of recovering a signal x* ∈ R^n from magnitude-only measurements, y_i = |⟨a_i, x*⟩| for i = 1, 2, ..., m. Also known as the phase retrieval problem, it is a fundamental challenge in nano-, bio- and astronomical imaging systems, and in speech processing. The problem is ill-posed, and therefore additional assumptions on the signal and/or the measurements are necessary.
In this paper, we first study the case where the underlying signal x* is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s² log n) with Gaussian samples, which matches the best known existing results. It also demonstrates linear convergence in theory and practice, and requires no extra tuning parameters other than the signal sparsity level s.
We then consider the case where the underlying signal x* arises from structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size b and block sparsity k = s/b. For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths b = Ω(s), this bound equates to O(s log n). To our knowledge, this constitutes the first end-to-end linearly convergent algorithm for phase retrieval where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity level of the signal.
1 Introduction
1.1 Motivation
In this paper, we consider the problem of recovering a signal x* ∈ R^n from (possibly noisy) magnitude-only linear measurements. That is, for sampling vectors a_i ∈ R^n, if

  y_i = |⟨a_i, x*⟩|,   for i = 1, ..., m,   (1)

then the task is to recover x* using the measurements y and the sampling matrix A = [a_1 ... a_m]ᵀ. Problems of this kind arise in numerous scenarios in machine learning, imaging, and statistics. For example, the classical problem of phase retrieval is encountered in imaging systems such as diffraction imaging, X-ray crystallography, ptychography, and astronomy [1, 2, 3, 4, 5]. For such imaging systems, the optical sensors used for light acquisition can only record the intensity of the light waves but not their phase. In terms of our setup, the vector x* corresponds to an image (with a resolution of n pixels) and the measurements correspond to the magnitudes of its 2D Fourier coefficients. The goal is to stably recover the image x* using as few observations m as possible.
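For intuition, the following is a minimal sketch (Python with NumPy; variable names are ours) of generating a synthetic instance of model (1) with a Gaussian sampling matrix and an s-sparse signal; it is illustrative only.

import numpy as np

def sparse_phase_retrieval_instance(n, m, s, seed=0):
    # Draw an s-sparse x* in R^n and m magnitude-only Gaussian measurements.
    rng = np.random.default_rng(seed)
    x_star = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    x_star[support] = rng.standard_normal(s)
    A = rng.standard_normal((m, n))   # rows are the sampling vectors a_i
    y = np.abs(A @ x_star)            # y_i = |<a_i, x*>|, as in (1)
    return A, y, x_star

A, y, x_star = sparse_phase_retrieval_instance(n=1000, m=400, s=10)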
Despite the prevalence of several heuristic approaches [6, 7, 8, 9], it is generally accepted that (1) is a challenging nonlinear, ill-posed inverse problem in theory and practice. For generic a_i and x*, one can show that (1) is NP-hard by reduction from well-known combinatorial problems [10]. Therefore, additional assumptions on the signal x* and/or the measurement vectors a_i are necessary.
A recent line of breakthrough results [11, 12] has provided efficient algorithms for the case where the measurement vectors arise from certain multivariate probability distributions. The seminal paper by Netrapalli et al. [13] provides the first rigorous justification of classical heuristics for phase retrieval based on alternating minimization. However, all these newer results require an "overcomplete" set of observations, i.e., the number of observations m exceeds the problem dimension n (m = O(n) being the tightest evaluation of this bound [14]). This requirement can pose severe limitations on computation and storage, particularly when m and n are very large.
One way to mitigate the dimensionality issue is to use the fact that in practical applications, x* often obeys certain low-dimensional structural assumptions. For example, in imaging applications x* is s-sparse in some known basis, such as identity or wavelet. For transparency, we assume the canonical basis for sparsity throughout this paper. Similar structural assumptions form the core of sparse recovery and streaming algorithms [15, 16, 17], and it has been established that only O(s log(n/s)) samples are necessary for stable recovery of x*, which is information-theoretically optimal [18].
Several approaches for solving the sparsity-constrained version of (1) have been proposed, including alternating minimization [13], methods based on convex relaxation [19, 20, 21], and iterative thresholding [22, 23]. Curiously, all of the above techniques incur a sample complexity of Ω(s² log n) for stable recovery, which is quadratically worse than the information-theoretic limit [18] of O(s log(n/s)).¹ Moreover, most of these algorithms have quadratic (or worse) running time [19, 22], stringent assumptions on the nonzero signal coefficients [13, 23], and require several tuning parameters [22, 23].
Finally, for specific applications, more refined structural assumptions on x* are applicable. For example, point sources in astronomical images often produce clusters of nonzero pixels in a given image, while wavelet coefficients of natural images can often be organized as connected sub-trees. Algorithms that leverage such structured sparsity assumptions have been shown to achieve considerably improved sample complexity in statistical learning and sparse recovery problems using block sparsity [30, 31, 32, 33], tree sparsity [34, 30, 35, 36], clusters [37, 31, 38], and graph models [39, 38, 40]. However, these models have not been understood in the context of phase retrieval.
1.2 Our contributions
The contributions in this paper are two-fold. First, we provide a new, flexible algorithm for sparse phase retrieval that matches state-of-the-art methods from both a statistical and a computational viewpoint. Next, we show that it is possible to extend this algorithm to the case where the signal is block-sparse, thereby further lowering the sample complexity of stable recovery. Our work can be viewed as a first step towards a general framework for phase retrieval of structured signals from Gaussian samples.
Sparse phase retrieval. We first study the case where the underlying signal x* is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM.² Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP [41] algorithm for sparse recovery (CoSaMP also extends naturally to several sparsity models [30]). We prove that our algorithm achieves a sample complexity of O(s² log n) with Gaussian measurement vectors a_i in order to achieve linear convergence, matching the best among all existing results. An appealing feature of our algorithm is that it requires no extra a priori information other than the signal sparsity level s, and no assumptions on the nonzero signal coefficients. To our knowledge, this is the first algorithm for sparse phase retrieval that simultaneously achieves all of the above properties. We use CoPRAM as the basis to formulate a block-sparse extension (Block CoPRAM).
Block-sparse phase retrieval. We consider the case where the underlying signal x* arises from structured sparsity models, specifically block-sparse signals with uniform block size b (i.e., s nonzeros equally grouped into k = s/b blocks). For this problem, we design a recovery algorithm that we
¹ Exceptions to this rule are [24, 25, 26, 27, 28, 29], where very carefully crafted measurements a_i are used.
² We use the terms sparse phase retrieval and compressive phase retrieval interchangeably.
Table 1: Comparison of (Gaussian sample) sparse phase retrieval algorithms. Here, n, s, k = s/b denote signal length, sparsity, and block sparsity. O(·) hides polylogarithmic dependence on 1/ε.
Algorithm | Sample complexity | Running time | Assumptions | Parameters
AltMinSparse | O(s² log n + s² log³ s) | O(s²n log n) | x*_min ≈ (c/√s)‖x*‖₂ | none
ℓ1-PhaseLift | O(s² log n) | O(n³/ε²) | none | none
Thresholded WF | O(s² log n) | O(n² log n) | none | step size, two thresholds
SPARTA | O(s² log n) | O(s²n log n) | x*_min ≥ (c/√s)‖x*‖₂ | step size, threshold
CoPRAM | O(s² log n) | O(s²n log n) | none | none
Block CoPRAM | O(ks log n) | O(ksn log n) | none | none
call Block CoPRAM. We analyze this algorithm and show that leveraging block structure reduces the sample complexity for stable recovery to O(ks log n). For sufficiently large block lengths b = Ω(s), this bound equates to O(s log n). To our knowledge, this constitutes the first phase retrieval algorithm where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity s of the signal. A comparative description of the performance of our algorithms is presented in Table 1.
1.3 Techniques
Sparse phase retrieval. Our proposed algorithm, CoPRAM, is conceptually very simple. It integrates existing approaches in stable sparse recovery (specifically, the CoSaMP algorithm [41]) with the alternating minimization approach for phase retrieval proposed in [13].
A similar integration of sparse recovery with alternating minimization was also introduced in [13]; however, their approach only succeeds when the true support of the underlying signal is accurately identified during initialization, which can be unrealistic. Instead, CoPRAM permits the support of the estimate to evolve across iterations, and therefore can iteratively "correct" for any errors made during the initialization. Moreover, their analysis requires fresh samples for every new update of the estimate, while ours succeeds in the (more practical) setting of using all the available samples.
Our first challenge is to identify a good initial guess of the signal. As is the case with most non-convex techniques, CoPRAM requires an initial estimate x⁰ that is close to the true signal x*. The basic idea is to identify "important" coordinates by constructing suitable biased estimators of each signal coefficient, followed by a specific eigendecomposition. The initialization in CoPRAM is far simpler than the approaches in [22, 23], requiring no pre-processing of the measurements and no tuning parameters other than the sparsity level s. A drawback of the theoretical results of [23] is that they impose a requirement on the signal coefficients: min_{j∈S} |x*_j| = C‖x*‖₂/√s. However, this assumption disobeys the power-law decay observed in real-world signals. Our approach also differs from [22], where an initial support is estimated based on a parameter-dependent threshold value. Our analysis removes these requirements; we show that a coarse estimate of the support, coupled with the spectral technique in [22, 23], gives us a suitable initialization. A sample complexity of O(s² log n) is incurred for achieving this estimate, matching the best available previous methods.
Our next challenge is to show that, given a good initial guess, alternately estimating the phases and the nonzero coefficients (using CoSaMP) gives rapid convergence to the desired solution. To this end, we use the analysis of CoSaMP [41] and leverage a recent result by [42] to show a per-step decrease in the signal estimation error, using the generic chaining technique of [43, 44]. In particular, we show that any "phase errors" made in the initialization can be suitably controlled across different estimates.
Block-sparse phase retrieval. We use CoPRAM to establish its extension, Block CoPRAM, a novel phase retrieval strategy for block-sparse signals from Gaussian measurements. Again, the algorithm is based on a suitable initialization followed by an alternating minimization procedure, mirroring the steps in CoPRAM. To our knowledge, this is the first result for phase retrieval under more refined structured sparsity assumptions on the signal.
As above, the first stage consists of identifying a good initial guess of the solution. We proceed as in CoPRAM, isolating blocks of nonzero coordinates by constructing a biased estimator for the "mass" of each block. We prove that a good initialization can be achieved using this procedure with only O(ks log n) measurements. When the block size is large enough (b = Ω(s)), the sample complexity of the initialization is sub-quadratic in the sparsity level s and only a logarithmic factor above the information-theoretic limit O(s) [30]. In the second stage, we demonstrate a rapid descent to the desired solution. To this end, we replace the CoSaMP sub-routine in CoPRAM with the model-based CoSaMP algorithm of [30], specialized to block-sparse recovery. The analysis proceeds analogously as above. To our knowledge, this constitutes the first end-to-end algorithm for phase retrieval (from Gaussian samples) that demonstrates a sub-quadratic dependence on the sparsity level of the signal.
1.4 Prior work
The phase retrieval problem has received significant attention in the past few years. Convex methodologies to solve the problem in the lifted framework include PhaseLift and its variations [11, 45, 46, 47]. Most of these approaches suffer severely in terms of computational complexity. PhaseMax produces a convex relaxation of the phase retrieval problem similar to basis pursuit [48]; however, it is not empirically competitive. Non-convex algorithms typically rely on finding a good initial point, followed by minimizing a quadratic (Wirtinger Flow [12, 14, 49]) or modulus ([50, 51]) measurement loss function. Arbitrary initializations have been studied in a polynomial-time trust-region setting in [52].
Some of the convex approaches to sparse phase retrieval include [19, 53], which use a combination of trace-norm and ℓ1-norm relaxation. Constrained sensing vectors have been used [25] at optimal sample complexity O(s log(n/s)). Fourier measurements have been studied extensively in the convex [54] and non-convex [55] settings. More non-convex approaches for sparse phase retrieval include [13, 23, 22], which achieve Gaussian sample complexities of O(s² log n).
Structured sparsity models such as groups, blocks, clusters, and trees can be used to model real-world signals. Applications of such models have been developed for sparse recovery [30, 33, 39, 38, 40, 56, 34, 35, 36] as well as in high-dimensional optimization and statistical learning [32, 31]. However, to the best of our knowledge, there have been no rigorous results that explore the impact of structured sparsity models on the phase retrieval problem.
2 Paper organization and notation
The remainder of the paper is organized as follows. In Sections 3 and 4, we introduce the CoPRAM and Block CoPRAM algorithms respectively, and provide a theoretical analysis of their statistical performance. In Section 5, we present numerical experiments for our algorithms.
Standard notation for matrices (capital, bold: A, P, etc.), vectors (small, bold: x, y, etc.) and scalars (α, c, etc.) holds. Matrix and vector transposes are denoted by ᵀ (e.g., xᵀ and Aᵀ). The diagonal matrix form of a column vector y ∈ R^m is written diag(y) ∈ R^{m×m}. The operator card(S) denotes the cardinality of S. Elements of a are distributed according to the zero-mean standard normal distribution N(0, 1). The phase is denoted by sign(y) ≡ y/|y| for y ∈ R^m, and dist(x_1, x_2) ≡ min(‖x_1 − x_2‖₂, ‖x_1 + x_2‖₂) for x_1, x_2 ∈ R^n is the distance metric, up to a global phase factor (both x = x* and x = −x* satisfy y = |Ax|). The projection of a vector x ∈ R^n onto a set of coordinates S is written x_S ∈ R^n, with (x_S)_j = x_j for j ∈ S and 0 elsewhere. The projection of a matrix M ∈ R^{n×n} onto S is M_S ∈ R^{n×n}, with (M_S)_{ij} = M_{ij} for i, j ∈ S and 0 elsewhere. For faster algorithmic implementations, M_S can be stored as a truncated matrix in R^{s×s}, discarding all rows and columns corresponding to S^c. The element-wise product of two vectors y_1, y_2 ∈ R^m is written y_1 ∘ y_2. Unspecified large and small constants are denoted by C and δ respectively. The abbreviation whp denotes "with high probability".
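As a quick illustration of the phase-invariant distance just defined (a sketch; not from the paper):

import numpy as np

def dist(x1, x2):
    # distance up to global sign, since y = |Ax| cannot distinguish x from -x
    return min(np.linalg.norm(x1 - x2), np.linalg.norm(x1 + x2))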
3 Compressive phase retrieval
In this section, we propose a new algorithm for solving the sparse phase retrieval problem and analyze its performance. Later, we will show how to extend this algorithm to the case of more refined structural assumptions about the underlying sparse signal.
We first provide a brief outline of our proposed algorithm. It is clear that the sparse recovery version of (1) is highly non-convex and possibly has multiple local minima [22]. Therefore, as is typical in modern non-convex methods [13, 23, 57], we use a spectral technique to obtain a good initial estimate. Our technique is a modification of the initialization stages in [22, 23], but requires no tuning parameters or assumptions on the signal coefficients, except for the sparsity s. Once an appropriate initial
Algorithm 1 CoPRAM: Initialization.
input A, y, s
  Compute signal power: φ² = (1/m) Σ_{i=1}^m y_i².
  Compute signal marginals: M_jj = (1/m) Σ_{i=1}^m y_i² a_ij², for all j.
  Set Ŝ ← the indices j corresponding to the top-s values of M_jj.
  Set v₁ ← top singular vector of M_Ŝ = (1/m) Σ_{i=1}^m y_i² a_{iŜ} a_{iŜ}ᵀ ∈ R^{s×s}.
  Compute x⁰ ← φ v, where v ← v₁ on Ŝ and 0 on Ŝ^c.
output x⁰.
Algorithm 2 CoPRAM: Descent.
input A, y, x⁰, s, t₀
  Initialize x⁰ according to Algorithm 1.
  for t = 0, ..., t₀ − 1 do
    P^{t+1} ← diag(sign(Ax^t)),
    x^{t+1} ← CoSaMP((1/√m) A, (1/√m) P^{t+1} y, s, x^t).
  end for
output z ← x^{t₀}.
estimate is chosen, we then show that a simple alternating-minimization algorithm, based on the
algorithm in [13], will converge rapidly to the underlying true signal. We call our overall algorithm
Compressive Phase Retrieval with Alternating Minimization (CoPRAM), which is divided into two
stages: Initialization (Algorithm 1) and Descent (Algorithm 2).
3.1 Initialization
The high-level idea of the first stage of CoPRAM is as follows: we use the measurements $y_i$ to construct a biased estimator, the marginal $M_{jj}$ corresponding to the $j$-th signal coefficient, given by:
$$M_{jj} = \frac{1}{m}\sum_{i=1}^m y_i^2 a_{ij}^2, \quad \text{for } j \in \{1, \ldots, n\}. \qquad (2)$$
The marginals themselves do not directly produce signal coefficients, but the "weight" of each marginal identifies the true signal support. Then, a spectral technique based on [13, 23, 22] constructs an initial estimate $x^0$. To accurately estimate the support, earlier works [13, 23] assume that the magnitudes of the nonzero signal coefficients are all sufficiently large, i.e., $\Omega(\|x^*\|_2/\sqrt{s})$, which can be unrealistic, violating the power-decay law. Our analysis resolves this issue by relaxing the requirement of accurately identifying the support, without any tuning parameters, unlike [22]. We claim that a coarse estimate of the support is good enough, since the errors would correspond to small coefficients. Such "noise" in the signal estimate can be controlled with a sufficient number of samples. Instead, we show that a simple pruning step that rejects the smallest $n - s$ coordinates, followed by the spectral procedure of [23], gives us the initialization that we need. Concretely, if the elements
of $A$ are distributed as per the standard normal distribution $\mathcal{N}(0, 1)$, a weighted correlation matrix $M = \frac{1}{m}\sum_{i=1}^m y_i^2\, a_i a_i^\top$ can be constructed, having diagonal elements $M_{jj}$. The diagonal elements of the expectation matrix $\mathbb{E}[M]$ are then given by:
$$\mathbb{E}[M_{jj}] = \|x^*\|_2^2 + 2 x_j^{*2}, \qquad (3)$$
exhibiting a clear separation when analyzed for $j \in S$ and $j \in S^c$. We can hence claim that the signal marginals at locations on the diagonal of $M$ corresponding to $j \in S$ are larger, on average, than those for $j \in S^c$. Based on this, we evaluate the diagonal elements $M_{jj}$ and reject the $n - s$ coordinates corresponding to the smallest marginals to obtain a crude approximation $\hat{S}$ of the signal support. Using a spectral technique, we find an initial vector in the reduced space which is close to the true signal, if $m = O(s^2 \log n)$.
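The separation in (3) is easy to check numerically. The following short simulation is our own illustration (not from the paper); names and sizes are arbitrary:

```python
import numpy as np

# Empirically check E[M_jj] = ||x*||_2^2 + 2 x_j*^2 from (3) for a random
# s-sparse signal under Gaussian phaseless measurements y = |Ax*|.
rng = np.random.default_rng(0)
n, s, m = 100, 5, 200_000
x = np.zeros(n); x[:s] = rng.standard_normal(s)   # s-sparse signal
A = rng.standard_normal((m, n))                   # Gaussian sensing matrix
y = np.abs(A @ x)                                 # phaseless observations
M_diag = (y**2 @ A**2) / m                        # marginals M_jj, eq. (2)
print(M_diag[:s])         # approx. ||x||^2 + 2 x_j^2 on the support
print(M_diag[s:].mean())  # approx. ||x||^2 off the support
```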
Theorem 3.1. The initial estimate $x^0$, which is the output of Algorithm 1, is a small constant distance $\delta_0$ away from the true $s$-sparse signal $x^*$, i.e.,
$$\mathrm{dist}(x^0, x^*) \le \delta_0 \|x^*\|_2,$$
where $0 < \delta_0 < 1$, as long as the number of (Gaussian) measurements satisfies $m \ge C s^2 \log mn$, with probability greater than $1 - \frac{8}{m}$.
This theorem is proved via Lemmas C.1 through C.4 (Appendix C), and the argument proceeds as follows. We evaluate the marginals of the signal $M_{jj}$ in broadly two cases: $j \in S$ and $j \in S^c$. The key idea is to establish one of the following: (1) if the signal coefficients obey $\min_{j \in S} |x_j^*| = C\|x^*\|_2/\sqrt{s}$, then there exists a clear separation between the marginals $M_{jj}$ for $j \in S$ and $j \in S^c$, whp, and Algorithm 1 picks up the correct support (i.e. $\hat{S} = S$); (2) if there is no such restriction, even then the support $\hat{S}$ picked up in Algorithm 1 contains a bulk of the correct support $S$. The incorrect elements of $\hat{S}$ induce negligible error in estimating the initial vector. These approaches are illustrated in Figures 4 and 5 in Appendix C. The marginals $M_{jj} < \theta$, whp, for $j \in S^c$, while those of a big chunk $j \in S^+ \equiv \{j \in S : x_j^{*2} \ge 15\sqrt{(\log mn)/m}\,\|x^*\|_2^2\}$ lie above the threshold $\theta$ (Lemmas C.1 and C.2). The identification of the support $\hat{S}$ (which provably contains a significant chunk $S^+$ of the true support $S$) is used to construct the truncated correlation matrix $M_{\hat{S}}$. The top singular vector of this matrix $M_{\hat{S}}$ gives us a good initial estimate $x^0$.
The final step of Algorithm 1 requires a scaling of the normalized vector $v_1$ by a factor $\phi$, which conserves the power in the signal (Lemma F.1 in Appendix F), whp, where $\phi^2$ is defined as
$$\phi^2 = \frac{1}{m}\sum_{i=1}^m y_i^2. \qquad (4)$$
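Putting the steps of Algorithm 1 together, a minimal numpy sketch might look as follows. This is our own illustration, not the authors' (MATLAB) code; names and conventions are ours:

```python
import numpy as np

def copram_init(A, y, s):
    # Algorithm 1 (CoPRAM initialization), as a compact sketch.
    m, n = A.shape
    phi2 = np.mean(y**2)                       # signal power, eq. (4)
    M_diag = (y**2 @ A**2) / m                 # marginals M_jj, eq. (2)
    S_hat = np.sort(np.argsort(M_diag)[-s:])   # support of top-s marginals
    A_S = A[:, S_hat]
    M_S = (A_S * (y**2)[:, None]).T @ A_S / m  # truncated correlation matrix
    _, V = np.linalg.eigh(M_S)                 # symmetric PSD: eigenvectors
    x0 = np.zeros(n)
    x0[S_hat] = np.sqrt(phi2) * V[:, -1]       # scale top eigenvector by phi
    return x0
```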
3.2 Descent to optimal solution
After obtaining an initial estimate $x^0$, we construct a method to accurately recover $x^*$. For this, we adapt the alternating minimization approach from [13]. The observation model (1) can be restated as:
$$\mathrm{sign}(\langle a_i, x^* \rangle) \cdot y_i = \langle a_i, x^* \rangle \quad \text{for } i \in \{1, 2, \ldots, m\}.$$
We introduce the phase vector $p \in \mathbb{R}^m$ containing the (unknown) signs of the measurements, i.e., $p_i = \mathrm{sign}(\langle a_i, x \rangle)$ $\forall i$, and the phase matrix $P = \mathrm{diag}(p)$. Then our measurement model gets modified as $P^* y = A x^*$, where $P^*$ is the true phase matrix. We then minimize the loss function composed of the variables $x$ and $P$,
$$\min_{\|x\|_0 \le s,\ P \in \mathcal{P}} \|Ax - Py\|_2^2. \qquad (5)$$
Here $\mathcal{P}$ is the set of all diagonal matrices in $\mathbb{R}^{m \times m}$ with diagonal entries constrained to be in $\{-1, 1\}$. Hence the problem stated above is not convex. Instead, we alternate between estimating $P$ and $x$ as follows: (1) if we fix the signal estimate $x$, then the minimizer $P$ is given in closed form as $P = \mathrm{diag}(\mathrm{sign}(Ax))$; we call this the phase estimation step; (2) if we fix the phase matrix $P$, the sparse vector $x$ can be obtained by solving the signal estimation step:
$$\min_{x,\ \|x\|_0 \le s} \|Ax - Py\|_2^2. \qquad (6)$$
We employ the CoSaMP [41] algorithm to (approximately) solve the non-convex problem (6). We do not need to explicitly obtain the minimizer for (6) but only show a sufficient descent criterion, which we achieve by performing a careful analysis of the CoSaMP algorithm. For analysis reasons, we require that the entries of the input sensing matrix are distributed according to $\mathcal{N}(0, 1/m)$. This can be achieved by scaling down the inputs to CoSaMP, $A$ and $P_{t+1}y$, by a factor of $\sqrt{m}$ (see the x-update step of Algorithm 2). Another distinction is that we use a "warm start" CoSaMP routine for each iteration, where the initial guess of the solution to (6) is given by the current signal estimate.
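A compact sketch of this descent loop, including a bare-bones warm-started CoSaMP [41] inner solver, is given below. This is our own simplified illustration (the authors' implementation is in MATLAB), and all names are ours:

```python
import numpy as np

def cosamp(A, y, s, x0, iters=10):
    # Minimal CoSaMP (Needell & Tropp [41]), warm-started from x0.
    x = x0.copy()
    for _ in range(iters):
        proxy = A.T @ (y - A @ x)                          # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]         # 2s largest entries
        T = np.union1d(omega, np.flatnonzero(x))           # merged support
        b = np.zeros_like(x)
        b[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]  # least squares on T
        keep = np.argsort(np.abs(b))[-s:]                  # prune to s entries
        x = np.zeros_like(x)
        x[keep] = b[keep]
    return x

def copram_descent(A, y, x0, s, t0=30):
    # Algorithm 2: alternate phase estimation and sparse signal estimation.
    m = A.shape[0]
    x = x0.copy()
    for _ in range(t0):
        p = np.sign(A @ x)                                 # phase estimation
        x = cosamp(A / np.sqrt(m), p * y / np.sqrt(m), s, x)  # signal estimation
    return x
```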
We now analyze our proposed descent scheme. We obtain the following theoretical result:
Theorem 3.2. Given an initialization $x^0$ satisfying Algorithm 1, if the number of (Gaussian) measurements satisfies $m \ge C s \log \frac{n}{s}$, then the iterates of Algorithm 2 satisfy:
$$\mathrm{dist}(x^{t+1}, x^*) \le \rho_0\, \mathrm{dist}(x^t, x^*), \qquad (7)$$
where $0 < \rho_0 < 1$ is a constant, with probability greater than $1 - e^{-\gamma m}$, for a positive constant $\gamma$.
The proof of this theorem can be found in Appendix E.
4 Block-sparse phase retrieval
The analysis of the proofs mentioned so far, as well as experimental results, suggests that we can reduce the sample complexity of successful sparse phase retrieval by exploiting further structural information about the signal. Block-sparse signals $x^*$ can be said to follow a sparsity model $\mathcal{M}_{s,b}$, where $\mathcal{M}_{s,b}$ describes the set of all block-sparse signals with $s$ non-zeros being grouped into uniform predetermined blocks of size $b$, such that the block sparsity is $k = s/b$. We use the index set $j_b \in \{1, 2, \ldots, k\}$ to denote block indices. We introduce the concept of block marginals, a block analogue of the signal marginals, which can be analyzed to crudely estimate the block support of the signal in consideration. We use this formulation, along with the alternating minimization approach that uses model-based CoSaMP [30], to descend to the optimal solution.
4.1 Initialization
Analogous to the concept of marginals defined above, we introduce block marginals $M_{j_b j_b}$, where $M_{jj}$ is defined as in (2). For a block index $j_b$, we define:
$$M_{j_b j_b} = \sum_{j \in j_b} M_{jj}^2, \qquad (8)$$
to develop the initialization stage of our Block CoPRAM algorithm. Similar to the proof approach of CoPRAM, we evaluate the block marginals and use the top-$k$ such marginals to obtain a crude approximation $\hat{S}_b$ of the true block support $S_b$. This support can be used to construct the truncated correlation matrix $M_{\hat{S}_b}$. The top singular vector of this matrix $M_{\hat{S}_b}$ gives a good initial estimate $x^0$ (Algorithm 3, Appendix A) for the Block CoPRAM algorithm (Algorithm 4, Appendix A). Through the evaluation of block marginals, we proceed to prove that the sample complexity required for a good initial estimate (and subsequently, successful signal recovery of block-sparse signals) is given by $O(ks \log n)$. This essentially reduces the sample complexity of signal recovery by a factor equal to the block length $b$ over the sample complexity required for standard sparse phase retrieval.
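As a small illustration (ours, not the authors' code), the block marginals of (8), as reconstructed above, can be computed from the per-coordinate marginals by aggregating within each uniform block:

```python
import numpy as np

def block_marginals(M_diag, b):
    # Aggregate per-coordinate marginals M_jj within uniform blocks of
    # length b, following eq. (8) as reconstructed above.
    n_blocks = len(M_diag) // b              # total number of blocks, n/b
    return (M_diag**2).reshape(n_blocks, b).sum(axis=1)

# The top-k blocks (k = s/b) then give the crude block-support estimate of
# Algorithm 3, e.g. np.argsort(block_marginals(M_diag, b))[-k:].
```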
Theorem 4.1. The initial vector $x^0$, which is the output of Algorithm 3, is a small constant distance $\delta_b$ away from the true signal $x^* \in \mathcal{M}_{s,b}$, i.e.,
$$\mathrm{dist}(x^0, x^*) \le \delta_b \|x^*\|_2,$$
where $0 < \delta_b < 1$, as long as the number of (Gaussian) measurements satisfies $m \ge C \frac{s^2}{b} \log mn$, with probability greater than $1 - \frac{8}{m}$.
The proof can be found in Appendix D, and carries forward intuitively from the proof of the
compressive phase-retrieval framework.
4.2 Descent to optimal solution
For the descent of Block CoPRAM to optimal solution, the phase-estimation step is the same as that
in CoPRAM. For the signal estimation step, we attempt to solve the same minimization as in (6),
except with the additional constraint that the signal $x^*$ is block sparse,
$$\min_{x \in \mathcal{M}_{s,b}} \|Ax - Py\|_2^2, \qquad (9)$$
where $\mathcal{M}_{s,b}$ describes the block sparsity model. In order to approximate the solution to (9), we use the model-based CoSaMP approach of [30]. This is a straightforward specialization of the CoSaMP algorithm and has been shown to achieve improved sample complexity over existing approaches for standard sparse recovery.
Similar to Theorem 3.2 above, we obtain the following result (the proof can be found in Appendix E):
Theorem 4.2. Given an initialization $x^0$ satisfying Algorithm 3, if the number of (Gaussian) measurements satisfies $m \ge C\left(s + \frac{s}{b} \log \frac{n}{s}\right)$, then the iterates of Algorithm 4 satisfy:
$$\mathrm{dist}(x^{t+1}, x^*) \le \rho_b\, \mathrm{dist}(x^t, x^*), \qquad (10)$$
where $0 < \rho_b < 1$ is a constant, with probability greater than $1 - e^{-\gamma m}$, for a positive constant $\gamma$.
The analysis so far has been made for uniform blocks of size $b$. However, the same algorithm can be extended to the case of sparse signals with non-uniform blocks or clusters (see Appendix A).
[Figure 1 here: three panels plotting probability of recovery against the number of samples m. Panels (a) sparsity s = 20 and (b) sparsity s = 30 compare CoPRAM, Block CoPRAM, ThWF, and SPARTA; panel (c) shows Block CoPRAM with s = 20 and block lengths b = 20, 10, 5, 2, 1 (i.e. k = 1, 2, 4, 10, 20).]
Figure 1: Phase transitions for a signal of length n = 3,000, sparsity s, and block length b: (a) s = 20, b = 5; (b) s = 30, b = 5; and (c) s = 20, b = 20, 10, 5, 2, 1 (Block CoPRAM only).
5 Experiments
We explore the performance of CoPRAM and Block CoPRAM on synthetic data. All numerical experiments were conducted using MATLAB 2016a on a computer with an Intel Xeon CPU at 3.3GHz and 8GB RAM. The nonzero elements of the unit-norm vector $x^* \in \mathbb{R}^{3000}$ are generated from $\mathcal{N}(0, 1)$. We repeated each of the experiments (fixed $n, s, b, m$) in Figure 1 (a) and (b) for 50, and in Figure 1 (c) for 200, independent Monte Carlo trials. For our simulations, we compared our algorithms CoPRAM and Block CoPRAM with Thresholded Wirtinger flow (Thresholded WF or ThWF) [22] and SPARTA [23]. The parameters for these algorithms were carefully chosen as per the descriptions in their respective papers.
For the first experiment, we generated phase transition plots by evaluating the probability of empirical successful recovery, i.e., the number of successful trials out of 50. The recovery probability for the four algorithms is displayed in Figure 1. It can be noted that increasing the sparsity of the signal shifts the phase transitions to the right. However, the phase transition for Block CoPRAM has a less apparent shift (suggesting that the sample complexity m has a sub-quadratic dependence on s). We see that Block CoPRAM exhibits the lowest sample complexity for the phase transitions in both cases (a) and (b) of Figure 1.
For the second experiment, we study the variation of the phase transition with block length for Block CoPRAM (Figure 1(c)). For this experiment we fixed a signal of length n = 3,000 and sparsity s = 20, giving block sparsity k = 1 for a block length of b = 20. We observe that the phase transitions improve with increasing block length. At block sparsity $\frac{s}{b} = \frac{20}{10} = 2$ (for large $b \approx s$), we observe a saturation effect, and the regime of the experiment is very close to the information-theoretic limit.
Several additional phase transition diagrams can be found in Figure 2 in Appendix B. The running times of our algorithms compare favorably with those of Thresholded WF and SPARTA (see Table 2 in
Appendix B). We also show that Block CoPRAM is more robust to noisy Gaussian measurements, in
comparison to CoPRAM and SPARTA (see Figure 3 in Appendix B).
References
[1] Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, and M. Segev. Phase retrieval with application to optical imaging: a contemporary overview. IEEE Sig. Proc. Mag., 32(3):87–109, 2015.
[2] R. Millane. Phase retrieval in crystallography and optics. JOSA A, 7(3):394–411, 1990.
[3] A. Maiden and J. Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 109(10):1256–1262, 2009.
[4] R. Harrison. Phase problem in crystallography. JOSA A, 10(5):1046–1055, 1993.
[5] J. Miao, T. Ishikawa, Q. Shen, and T. Earnest. Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes. Annu. Rev. Phys. Chem., 59:387–410, 2008.
[6] R. Gerchberg and W. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35(237), 1972.
[7] J. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758–2769, 1982.
[8] S. Marchesini. Phase retrieval and saddle-point optimization. JOSA A, 24(10):3289–3296, 2007.
[9] K. Nugent, A. Peele, H. Chapman, and A. Mancuso. Unique phase recovery for nonperiodic objects. Physical Review Letters, 91(20):203902, 2003.
[10] M. Fickus, D. Mixon, A. Nelson, and Y. Wang. Phase retrieval from very few measurements. Linear Alg. Appl., 449:475–499, 2014.
[11] E. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66(8):1241–1274, 2013.
[12] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inform. Theory, 61(4):1985–2007, 2015.
[13] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2796–2804, 2013.
[14] Y. Chen and E. Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 739–747, 2015.
[15] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.
[16] D. Needell, J. Tropp, and R. Vershynin. Greedy signal recovery review. In Proc. Asilomar Conf. Sig. Sys. Comput., pages 1048–1050. IEEE, 2008.
[17] E. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.
[18] K. Do Ba, P. Indyk, E. Price, and D. Woodruff. Lower bounds for sparse recovery. In Proc. ACM Symp. Discrete Alg. (SODA), pages 1190–1197, 2010.
[19] H. Ohlsson, A. Yang, R. Dong, and S. Sastry. CPRL: An extension of compressive sensing to the phase retrieval problem. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 1367–1375, 2012.
[20] Y. Chen, Y. Chi, and A. Goldsmith. Exact and stable covariance estimation from quadratic sampling via convex programming. IEEE Trans. Inform. Theory, 61(7):4034–4059, 2015.
[21] K. Jaganathan, S. Oymak, and B. Hassibi. Sparse phase retrieval: Convex algorithms and limitations. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 1022–1026. IEEE, 2013.
[22] T. Cai, X. Li, and Z. Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat., 44(5):2221–2251, 2016.
[23] G. Wang, L. Zhang, G. Giannakis, M. Akcakaya, and J. Chen. Sparse phase retrieval via truncated amplitude flow. arXiv preprint arXiv:1611.07641, 2016.
[24] M. Iwen, A. Viswanathan, and Y. Wang. Robust sparse phase retrieval made easy. Appl. Comput. Harmon. Anal., 42(1):135–142, 2017.
[25] S. Bahmani and J. Romberg. Efficient compressive phase retrieval with constrained sensing vectors. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 523–531, 2015.
[26] H. Qiao and P. Pal. Sparse phase retrieval using partial nested Fourier samplers. In Proc. IEEE Global Conf. Signal and Image Processing (GlobalSIP), pages 522–526. IEEE, 2015.
[27] S. Cai, M. Bakshi, S. Jaggi, and M. Chen. SUPER: Sparse signals with unknown phases efficiently recovered. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 2007–2011. IEEE, 2014.
[28] D. Yin, R. Pedarsani, X. Li, and K. Ramchandran. Compressed sensing using sparse-graph codes for the continuous-alphabet setting. In Proc. Allerton Conf. on Comm., Contr., and Comp., pages 758–765. IEEE, 2016.
[29] R. Pedarsani, D. Yin, K. Lee, and K. Ramchandran. PhaseCode: Fast and efficient compressive phase retrieval based on sparse-graph codes. IEEE Trans. Inform. Theory, 2017.
[30] R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Trans. Inform. Theory, 56(4):1982–2001, Apr. 2010.
[31] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. J. Machine Learning Research, 12(Nov):3371–3412, 2011.
[32] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Stat. Soc. Stat. Meth., 68(1):49–67, 2006.
[33] Y. Eldar, P. Kuppinger, and H. Bolcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Sig. Proc., 58(6):3042–3054, 2010.
[34] M. Duarte, C. Hegde, V. Cevher, and R. Baraniuk. Recovery of compressible signals from unions of subspaces. In Proc. IEEE Conf. Inform. Science and Systems (CISS), March 2009.
[35] C. Hegde, P. Indyk, and L. Schmidt. A fast approximation algorithm for tree-sparse recovery. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), June 2014.
[36] C. Hegde, P. Indyk, and L. Schmidt. Nearly linear-time model-based compressive sensing. In Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), July 2014.
[37] V. Cevher, P. Indyk, C. Hegde, and R. Baraniuk. Recovery of clustered sparse signals from compressive measurements. In Proc. Sampling Theory and Appl. (SampTA), May 2009.
[38] C. Hegde, P. Indyk, and L. Schmidt. A nearly linear-time framework for graph-structured sparsity. In Proc. Int. Conf. Machine Learning (ICML), July 2015.
[39] V. Cevher, M. Duarte, C. Hegde, and R. Baraniuk. Sparse signal recovery using Markov Random Fields. In Adv. Neural Inf. Proc. Sys. (NIPS), Dec. 2008.
[40] C. Hegde, P. Indyk, and L. Schmidt. Approximation-tolerant model-based compressive sensing. In Proc. ACM Symp. Discrete Alg. (SODA), Jan. 2014.
[41] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26(3):301–321, 2009.
[42] M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.
[43] M. Talagrand. The generic chaining: upper and lower bounds of stochastic processes. Springer Science & Business Media, 2006.
[44] S. Dirksen. Tail bounds via generic chaining. Electronic J. Probability, 20, 2015.
[45] D. Gross, F. Krahmer, and R. Kueng. Improved recovery guarantees for phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal., 42(1):37–64, 2017.
[46] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal., 39(2):277–299, 2015.
[47] I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47–81, 2015.
[48] T. Goldstein and C. Studer. PhaseMax: Convex phase retrieval via basis pursuit. arXiv preprint arXiv:1610.07531, 2016.
[49] H. Zhang and Y. Liang. Reshaped Wirtinger flow for solving quadratic system of equations. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2622–2630, 2016.
[50] G. Wang and G. Giannakis. Solving random systems of quadratic equations via truncated generalized gradient flow. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 568–576, 2016.
[51] K. Wei. Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study. Inverse Problems, 31(12):125008, 2015.
[52] J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 2379–2383. IEEE, 2016.
[53] X. Li and V. Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM J. Math. Anal., 45(5):3019–3033, 2013.
[54] K. Jaganathan, S. Oymak, and B. Hassibi. Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 1473–1477. IEEE, 2012.
[55] Y. Shechtman, A. Beck, and Y. C. Eldar. GESPAR: Efficient phase retrieval of sparse signals. IEEE Trans. Sig. Proc., 62(4):928–938, 2014.
[56] C. Hegde, P. Indyk, and L. Schmidt. Fast algorithms for structured sparsity. Bulletin of the EATCS, 1(117):197–228, Oct. 2015.
[57] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980–2998, 2010.
[58] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Ann. Stat., pages 1302–1338, 2000.
[59] C. Davis and W. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. Num. Anal., 7(1):1–46, 1970.
[60] V. Bentkus. An inequality for tail probabilities of martingales with differences bounded from one side. J. Theoretical Prob., 16(1):161–173, 2003.
6,719 | 7,078 | Hash Embeddings for Efficient Word Representations
Dan Svenstrup
Department for Applied Mathematics and Computer Science
Technical University of Denmark (DTU)
2800 Lyngby, Denmark
[email protected]
Jonas Meinertz Hansen
FindZebra
Copenhagen, Denmark
[email protected]
Ole Winther
Department for Applied Mathematics and Computer Science
Technical University of Denmark (DTU)
2800 Lyngby, Denmark
[email protected]
Abstract
We present hash embeddings, an efficient method for representing words in a
continuous vector form. A hash embedding may be seen as an interpolation between
a standard word embedding and a word embedding created using a random hash
function (the hashing trick). In hash embeddings each token is represented by
k d-dimensional embeddings vectors and one k dimensional weight vector. The
final d dimensional representation of the token is the product of the two. Rather
than fitting the embedding vectors for each token these are selected by the hashing
trick from a shared pool of B embedding vectors. Our experiments show that
hash embeddings can easily deal with huge vocabularies consisting of millions
of tokens. When using a hash embedding there is no need to create a dictionary
before training nor to perform any kind of vocabulary pruning after training. We
show that models trained using hash embeddings exhibit at least the same level
of performance as models trained using regular embeddings across a wide range
of tasks. Furthermore, the number of parameters needed by such an embedding
is only a fraction of what is required by a regular embedding. Since standard
embeddings and embeddings constructed using the hashing trick are actually just
special cases of a hash embedding, hash embeddings can be considered an extension
and improvement over the existing regular embedding types.
1 Introduction
Contemporary neural networks rely on loss functions that are continuous in the model?s parameters
in order to be able to compute gradients for training. For this reason, any data that we wish to feed
through the network, even data that is of a discrete nature in its original form will be translated into a
continuous form. For textual input it often makes sense to represent each distinct word or phrase with
a dense real-valued vector in Rn . These word vectors are trained either jointly with the rest of the
model, or pre-trained on a large corpus beforehand.
For large datasets the size of the vocabulary can easily be in the order of hundreds of thousands,
adding millions or even billions of parameters to the model. This problem can be especially severe
when n-grams are allowed as tokens in the vocabulary. For example, the pre-trained Word2Vec
vectors from Google (Mih?ltz, 2016) has a vocabulary consisting of 3 million words and phrases.
This means that even though the embedding size is moderately small (300 dimensions), the total
number of parameters is close to one billion.
The embedding size problem caused by a large vocabulary can be solved in several ways. Each of the
methods has some advantages and some drawbacks:
1. Ignore infrequent words. In many cases, the majority of a text is made up of a small subset
of the vocabulary, and most words will only appear very few times (Zipf's law (Manning
et al., 1999)).
By ignoring anything but most frequent words, and sometimes stop words as well, it is
possible to preserve most of the text while drastically reducing the number of embedding
vectors and parameters. However, for any given task, there is a risk of removing too much
or to little. Many frequent words (besides stop words) are unimportant and sometimes even
stop words can be of value for a particular task (e.g. a typical stop word such as ?and? when
training a model on a corpus of texts about logic). Conversely, for some problems (e.g.
specialized domains such as medical search) rare words might be very important.
2. Remove non-discriminative tokens after training. For some models it is possible to
perform efficient feature pruning based on e.g. entropy (Stolcke, 2000) or by only retaining
the K tokens with highest norm (Joulin et al., 2016a). This reduction in vocabulary size can
lead to a decrease in performance, but in some cases it actually avoids some over-fitting
and increases performance (Stolcke, 2000). For many models, however, such pruning is not
possible (e.g. for on-line training algorithms).
3. Compress the embedding vectors. Lossy compression techniques can be employed to
reduce the amount of memory needed to store embedding vectors. One such method is
quantization, where each vector is replaced by an approximation which is constructed as a
sum of vectors from a previously determined set of centroids (Joulin et al., 2016a; Jegou
et al., 2011; Gray and Neuhoff, 1998).
For some problems, such as online learning, the need for creating a dictionary before training can be a nuisance. This is often solved with feature hashing, where a hash function is used to assign each token $w \in \mathcal{T}$ to one of a fixed set of "buckets" $\{1, 2, \ldots, B\}$, each of which has its own embedding vector. Since the goal of hashing is to reduce the dimensionality of the token space $\mathcal{T}$, we normally have that $B \ll |\mathcal{T}|$. This results in many tokens "colliding" with each other because they are assigned to the same bucket. When multiple tokens collide, they will get the same vector representation, which prevents the model from distinguishing between the tokens. Even though some information is lost when tokens collide, the method often works surprisingly well in practice (Weinberger et al., 2009).
One obvious improvement to the feature hashing method described above would be to learn an
optimal hash function where important tokens do not collide. However, since a hash function has
a discrete codomain, it is not easy to optimize using e.g. gradient based methods used for training
neural networks (Kulis and Darrell, 2009).
The method proposed in this article is an extension of feature hashing where we use k hash functions
instead of a single hash function, and then use k trainable parameters for each word in order to choose
the "best" hash function for the tokens (or actually the best combination of hash functions). We call the resulting embedding a hash embedding. As we explain in section 3, embeddings constructed by
both feature hashing and standard embeddings can be considered special cases of hash embeddings.
A hash embedding is an efficient hybrid between a standard embedding and an embedding created
using feature hashing, i.e. a hash embedding has all of the advantages of the methods described
above, but none of the disadvantages:
? When using hash embeddings there is no need for creating a dictionary beforehand and the
method can handle a dynamically expanding vocabulary.
? A hash embedding has a mechanism capable of implicit vocabulary pruning.
? Hash embeddings are based on hashing but has a trainable mechanism that can handle
problematic collisions.
? Hash embeddings perform something similar to product quantization. But instead of all of
the tokens sharing a single small codebook, each token has access to a few elements in a
very large codebook.
Using a hash embedding typically results in a reduction of parameters of several orders of magnitude.
Since the bulk of the model parameters often resides in the embedding layer, this reduction of
[Figure 1 diagram: the input token "horse" is mapped by the hash functions H1("horse"), ..., Hk("horse") to k component vectors, which are weighted by the importance parameters p1_horse, ..., pk_horse and summed into the hash vector e_"horse".]
Figure 1: Illustration of how to build the hash vector for the word "horse". The optional step of concatenating the vector of importance parameters to $\hat{e}_{\text{"horse"}}$ has been omitted. The size of the component vectors in the illustration is d = 4.
parameters opens up for e.g. a wider use of ensemble methods or a larger dimensionality of word vectors.
2 Related Work
Argerich et al. (2016) proposed a type of embedding that is based on hashing and word co-occurrence
and demonstrates that correlations between those embedding vectors correspond to the subjective
judgement of word similarity by humans. Ultimately, it is a clever reduction in the embedding sizes
of word co-occurrence based embeddings.
Reisinger and Mooney (2010) and since then Huang et al. (2012) have used multiple different word
embeddings (prototypes) for the same words for representing different possible meanings of the same
words. Conversely, Bai et al. (2009) have experimented with hashing and treating words that co-occur
frequently as the same feature in order to reduce dimensionality.
Huang et al. (2013) have used bags of either bi-grams or tri-grams of letters of input words to create
feature vectors that are somewhat robust to new words and minor spelling differences.
Another approach employed by Zhang et al. (2015); Xiao and Cho (2016); Conneau et al. (2016)
is to use inputs that represent sub-word units such as syllables or individual characters rather than
words. This generally moves the task of finding meaningful representations of the text from the
input embeddings into the model itself and increases the computational cost of running the models
(Johnson and Zhang, 2016). Johansen et al. (2016) used a hierarchical encoding technique to do
machine translation with character inputs while keeping computational costs low.
3 Hash Embeddings
In the following we will go through the step by step construction of a vector representation for a
token w ? T using hash embeddings. The following steps are also illustrated in fig. 1:
1. Use k different functions $H_1, \ldots, H_k$ to choose k component vectors for the token w from a predefined pool of B shared component vectors.
2. Combine the chosen component vectors from step 1 as a weighted sum: $\hat{e}_w = \sum_{i=1}^k p_w^i H_i(w)$, where $p_w = (p_w^1, \ldots, p_w^k)^\top \in \mathbb{R}^k$ are called the importance parameters for w.
3. Optional: The vector of importance parameters for the token $p_w$ can be concatenated with $\hat{e}_w$ in order to construct the final hash vector $e_w$.
The full translation of a token to a hash vector can be written in vector notation ($\oplus$ denotes the concatenation operator):
$$c_w = (H_1(w), H_2(w), \ldots, H_k(w))^\top$$
$$p_w = (p_w^1, \ldots, p_w^k)^\top$$
$$\hat{e}_w = p_w^\top c_w$$
$$e_w^\top = \hat{e}_w^\top \oplus p_w^\top \quad \text{(optional)}$$
The token to component vector functions $H_i$ are implemented by $H_i(w) = E_{D_2^i(D_1(w))}$, where
• $D_1 : \mathcal{T} \to \{1, \ldots, K\}$ is a token to id function.
• $D_2^i : \{1, \ldots, K\} \to \{1, \ldots, B\}$ is an id to bucket (hash) function.
• $E$ is a $B \times d$ matrix.
If creating a dictionary beforehand is not a problem, we can use an enumeration (dictionary) of the tokens as $D_1$. If, on the other hand, it is inconvenient (or impossible) to use a dictionary because of the size of $\mathcal{T}$, we can simply use a hash function $D_1 : \mathcal{T} \to \{1, \ldots, K\}$.
The importance parameter vectors $p_w$ are represented as rows in a $K \times k$ matrix $P$, and the token to importance vector mapping is implemented by $w \to P_{\hat{D}(w)}$. $\hat{D}$ can be either equal to $D_1$, or we can use a different hash function. In the rest of the article we will use $\hat{D} = D_1$, and leave the case where $\hat{D} \neq D_1$ to future work.
Based on the description above we see that the construction of hash embeddings requires the
following:
1. A trainable embedding matrix $E$ of size $B \times d$, where each of the B rows is a component vector of length d.
2. A trainable matrix $P$ of importance parameters of size $K \times k$, where each of the K rows is a vector of k scalar importance parameters.
3. k different hash functions $H_1, \ldots, H_k$ that each uniformly assign one of the B component vectors to each token $w \in \mathcal{T}$.
The total number of trainable parameters in a hash embedding is thus equal to $B \cdot d + K \cdot k$, which should be compared to a standard embedding where the number of trainable parameters is $K \cdot d$. The number of hash functions k and buckets B can typically be chosen quite small without degrading performance, and this is what can give a huge reduction in the number of parameters (we typically use $k = 2$ and choose K and B s.t. $K > 10 \cdot B$).
From the description above we also see that the computational overhead of using hash embeddings instead of standard embeddings is just a matrix multiplication of a $1 \times k$ matrix (importance parameters) with a $k \times d$ matrix (component vectors). When using small values of k, the computational overhead
is therefore negligible. In our experiments, hash embeddings were actually marginally faster to train
than standard embedding types for large vocabulary problems1 . However, since the embedding layer
is responsible for only a negligible fraction of the computational complexity of most models, using
hash embeddings instead of regular embeddings should not make any difference for most models.
Furthermore, when using hash embeddings it is not necessary to create a dictionary before training
nor to perform vocabulary pruning after training. This can also reduce the total training time.
Note that in the special case where the number of hash functions is k = 1, and all importance
parameters are fixed to p1w = 1 for all tokens w ? T , hash embeddings are equivalent to using
the hashing trick. If furthermore the number of component vectors is set to B = |T | and the hash
function h1 (w) is the identity function, hash embeddings are equivalent to standard embeddings.
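For concreteness, the construction above can be sketched in a few lines of numpy. This is our own illustration, not the authors' implementation: Python's built-in hash is only a stand-in for proper hash functions (it is not stable across runs), and the initialization scheme is arbitrary:

```python
import numpy as np

class HashEmbedding:
    # Minimal sketch of a hash embedding lookup: K importance rows,
    # a shared pool of B component vectors, and k hash functions.
    def __init__(self, K, B, d, k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.E = 0.1 * rng.standard_normal((B, d))  # B x d component vectors
        self.P = rng.standard_normal((K, k))        # K x k importance params
        self.K, self.B, self.k = K, B, k

    def _h(self, x, i, mod):
        return hash((i, x)) % mod  # stand-in hash; not stable across runs

    def __call__(self, token):
        idx = self._h(token, -1, self.K)            # D1: token -> id
        p = self.P[idx]                             # importance parameters p_w
        C = np.stack([self.E[self._h(idx, i, self.B)]
                      for i in range(self.k)])      # k x d chosen components
        return p @ C                                # weighted sum, shape (d,)

# emb = HashEmbedding(K=10_000_000, B=1_000_000, d=20)
# vec = emb("horse")
```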
1
the small performance difference was observed when using Keras with a Tensorflow backend on a GeForce
GTX TITAN X with 12 GB of memory and a Nvidia GeForce GTX 660 with 2GB memory. The performance
penalty when using standard embeddings for large vocabulary problems can possibly be avoided by using a
custom embedding layer, but we have not pursued this further.
4 Hashing theory
Theorem 4.1. Let $h : \mathcal{T} \to \{1, \ldots, K\}$ be a hash function. Then the probability $p_{col}$ that $w_0 \in \mathcal{T}$ collides with one or more other tokens is given by
$$p_{col} = 1 - (1 - 1/K)^{|\mathcal{T}|-1}. \qquad (1)$$
For large K we have the approximation
$$p_{col} \approx 1 - e^{-|\mathcal{T}|/K}. \qquad (2)$$
The expected number of tokens in collision $C_{tot}$ is given by
$$C_{tot} = |\mathcal{T}|\, p_{col}. \qquad (3)$$
Proof. This is a simple variation of the "birthday problem".
When using hashing for dimensionality reduction, collisions are unavoidable, which is the main disadvantage of feature hashing. This is counteracted by hash embeddings in two ways:
First of all, for choosing the component vectors for a token $w \in \mathcal{T}$, hash embeddings use k independent uniform hash functions $h_i : \mathcal{T} \to \{1, \ldots, B\}$ for $i = 1, \ldots, k$. The combination of multiple hash functions approximates a single hash function with a much larger range $h : \mathcal{T} \to \{1, \ldots, B^k\}$, which drastically reduces the risk of total collisions. With a vocabulary of $|\mathcal{T}| = 100$M, $B = 1$M different component vectors and just $k = 2$ instead of 1, the chance of a given token colliding with at least one other token in the vocabulary is reduced from approximately $1 - \exp(-10^8/10^6) \approx 1$ to approximately $1 - \exp(-10^8/10^{12}) \approx 0.0001$. Using more hash functions will further reduce the number of collisions.
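These numbers follow directly from the approximation (2); a quick check (our own snippet):

```python
import math

# Collision probability via (2): p_col ~ 1 - exp(-|T| / K), where the
# effective range is K = B for k = 1 and K = B**k for k = 2.
T, B = 10**8, 10**6
for k in (1, 2):
    p_col = 1 - math.exp(-T / B**k)
    print(f"k={k}: p_col ~ {p_col:.6f}, expected collisions ~ {T * p_col:.0f}")
```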
Second, only a small number of the tokens in the vocabulary are usually important for the task at hand. The purpose of the importance parameters is to implicitly prune unimportant words by setting their importance parameters close to 0. This would reduce the expected number of collisions to $|\mathcal{T}_{imp}|\,(1 - \exp(-|\mathcal{T}_{imp}|/B))$, where $\mathcal{T}_{imp} \subset \mathcal{T}$ is the set of important words for the given task. The weighting with the component vectors will further be able to separate the colliding tokens in the k-dimensional subspace spanned by their k d-dimensional embedding vectors.
Note that hash embeddings consist of two layers of hashing. In the first layer each token is simply
translated to an integer in {1, . . . , K} by a dictionary or a hash function D1 . If D1 is a dictionary,
there will of course not be any collisions in the first layer. If D1 is a random hash function then
the expected number of tokens in collision will be given by equation 3. These collisions cannot be
avoided, and the expected number of collisions can only be decreased by increasing K. Increasing the
vocabulary size by 1 introduces d parameters in standard embeddings and only k in hash embeddings.
The typical d ranges from 10 to 300, and k is in the range 1-3. This means that even when the
embedding size is kept small, the parameter savings can be huge. In (Joulin et al., 2016b) for example,
the embedding size is chosen to be as small as 10. In order to go from a bi-gram model to a general
n-gram model, the number of buckets is increased from $K = 10^7$ to $K = 10^8$. This increase of
buckets requires an additional 900 million parameters when using standard embeddings, but less than
200 million when using hash embeddings with the default of k = 2 hash functions. I.e. even when
the embedding size is kept extremely small, the parameter savings can be huge.
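The arithmetic behind these numbers is easy to reproduce; the following sketch is ours, with an assumed pool size of B = 10^6 for the hash-embedding case:

```python
# Parameter counts when the number of buckets grows from K = 1e7 to 1e8,
# with embedding size d = 10 and k = 2 hash functions.
d, k, B = 10, 2, 10**6
for K in (10**7, 10**8):
    std = K * d           # standard embedding: K x d parameters
    hsh = B * d + K * k   # hash embedding: B x d + K x k parameters
    print(K, std, hsh)
# Going 1e7 -> 1e8 adds 900M standard parameters but only 180M hash parameters.
```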
5 Experiments
We benchmark hash embeddings with and without dictionaries on text classification tasks.
5.1 Data and preprocessing
We evaluate hash embeddings on 7 different datasets in the form introduced by Zhang et al. (2015)
for various text classification tasks including topic classification, sentiment analysis, and news
categorization. All of the datasets are balanced so the samples are distributed evenly among the
classes. An overview of the datasets can be seen in table 1. Significant previous results are listed in
table 2. We use the same experimental protocol as in (Zhang et al., 2015).
We do not perform any preprocessing besides removing punctuation. The models are trained on
snippets of text that are created by first converting each text to a sequence of n-grams, and from this
list a training sample is created by randomly selecting between 4 and 100 consecutive n-grams as
input. This may be seen as input drop-out and helps the model avoid overfitting. When testing we use
the entire document as input. The snippet/document-level embedding is obtained by simply adding
up the word-level embeddings.
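As a minimal sketch (ours), the snippet/document-level embedding is just the sum over tokens, where "emb" is assumed to be a HashEmbedding-style lookup as sketched earlier:

```python
import numpy as np

# Document embedding as described above: the sum of token-level vectors.
def document_vector(tokens, emb):
    return np.sum([emb(t) for t in tokens], axis=0)
```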
Table 1: Datasets used in the experiments. See (Zhang et al., 2015) for a complete description.

Dataset                 #Train  #Test  #Classes  Task
AG's news               120k    7.6k   4         English news categorization
DBPedia                 450k    70k    14        Ontology classification
Yelp Review Polarity    560k    38k    2         Sentiment analysis
Yelp Review Full        560k    50k    5         Sentiment analysis
Yahoo! Answers          650k    60k    10        Topic classification
Amazon Review Full      3000k   650k   5         Sentiment analysis
Amazon Review Polarity  3600k   400k   2         Sentiment analysis
5.2 Training
All the models are trained by minimizing the cross entropy using the stochastic gradient descent-based Adam method (Kingma and Ba, 2014) with a learning rate of 0.001. We use early
stopping with a patience of 10, and use 5% of the training data as validation data. All models
were implemented using Keras with TensorFlow backend. The training was performed on a Nvidia
GeForce GTX TITAN X with 12 GB of memory.
5.3 Hash embeddings without a dictionary
In this experiment we compare the use of a standard hashing trick embedding with a hash embedding.
The hash embeddings use K = 10M different importance parameter vectors, k = 2 hash functions,
and B = 1M component vectors of dimension d = 20. This adds up to 40M parameters for the hash
embeddings. For the standard hashing trick embeddings, we use an architecture almost identical to
the one used in (Joulin et al., 2016b). As in (Joulin et al., 2016b) we only consider bi-grams. We use
one layer of hashing with 10M buckets and an embedding size of 20. This requires 200M parameters.
The document-level embedding input is passed through a single fully connected layer with softmax
activation.
The performance of the model when using each of the two embedding types can be seen in the left
side of table 2. We see that even though hash embeddings require 5 times less parameters compared to
standard embeddings, they perform at least as well as standard embeddings across all of the datasets,
except for DBPedia where standard embeddings perform a tiny bit better.
5.4 Hash embeddings using a dictionary
In this experiment we limit the vocabulary to the 1M most frequent n-grams for n < 10. Most of the
tokens are uni-grams and bi-grams, but also many tokens of higher order are present in the vocabulary.
We use embedding vectors of size d = 200. The hash embeddings use k = 2 hash functions and the
bucket size B is chosen by cross-validation among [500, 10K, 50K, 100K, 150K]. The maximum
number of words for the standard embeddings is chosen by cross-validation among [10K, 25K, 50K,
300K, 500K, 1M]. We use a more complex architecture than in the experiment above, consisting of
an embedding layer (standard or hash) followed by three dense layers with 1000 hidden units and
ReLU activations, ending in a softmax layer. We use batch normalization (Ioffe and Szegedy, 2015)
as regularization between all of the layers.
The parameter savings for this problem are not as great as in the experiment without a dictionary, but
the hash embeddings still use 3 times less parameters on average compared to a standard embedding.
As can be seen in table 2, the more complex models actually achieve a worse result than the simple
model described above. This could be caused by either an insufficient number of words in the
vocabulary or by overfitting. Note however, that the two models have access to the same vocabulary,
and the vocabulary can therefore only explain the general drop in performance, not the performance
difference between the two types of embedding. This seems to suggest that using hash embeddings
have a regularizing effect on performance.
When using a dictionary in the first layer of hashing, each vector of importance parameters will correspond directly to a unique phrase. In table 4 we see the phrases corresponding to the largest/smallest
(absolute) importance values. As we would expect, large absolute values of the importance parameters
correspond to important phrases. Also note that some of the n-grams contain information that e.g.
the bi-gram model above would not be able to capture. For example, the bi-gram model would not be
able to tell whether 4 or 5 stars had been given on behalf of the sentence "I gave it 4 stars instead of 5 stars", but the general n-gram model would.
5.5 Ensemble of hash embeddings
The number of buckets for a hash embedding can be chosen quite small without severely affecting
performance. B = 500–10,000 buckets is typically sufficient in order to obtain a performance almost at par with the best results. In the experiments using a dictionary, only about 3M parameters are required in the layers on top of the embedding, while $kK + Bd = 2\mathrm{M} + B \cdot 200$ are required
in the embedding itself. This means that we can choose to train an ensemble of models with small
bucket sizes instead of a large model, while at the same time use the same amount of parameters (and
the same training time since models can be trained in parallel). Using an ensemble is particularly
useful for hash embeddings: even though collisions are handled effectively by the word importance
parameters, there is still a possibility that a few of the important words have to use suboptimal
embedding vectors. When using several models in an ensemble this can more or less be avoided since
different hash functions can be chosen for each hash embedding in the ensemble.
We use an ensemble consisting of 10 models and combine the models using soft voting. Each model uses B = 50,000 and d = 200. The architecture is the same as in the previous section, except that
models with one to three hidden layers are used instead of just ten models with three hidden layers.
This was done in order to diversify the models. The total number of parameters in the ensemble is
approximately 150M. This should be compared to both the standard embedding model in section 5.3
and the standard embedding model in section 5.4 (when using the full vocabulary), both of which
require approximately 200M parameters.
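Soft voting itself is a one-liner; a minimal sketch (ours), assuming "models" is a list of fitted Keras models with softmax outputs:

```python
import numpy as np

# Soft voting: average the class probabilities predicted by the ensemble
# members, then take the argmax.
def soft_vote(models, x):
    probs = np.mean([m.predict(x) for m in models], axis=0)
    return probs.argmax(axis=-1)
```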
Table 2: Test accuracy (in %) for the selected datasets
                Without dictionary                 With dictionary
                Shallow network (section 5.3)      Deep network (section 5.4)
                Hash emb.   Std. emb.              Hash emb.   Std. emb.   Ensemble
AG              92.4        92.0                   91.5        91.7        92.0
Amazon full     60.0        58.3                   59.4        58.5        60.5
DBPedia         98.5        98.6                   98.7        98.6        98.8
Yahoo           72.3        72.3                   71.3        65.8        72.9
Yelp full       63.8        62.6                   62.6        61.4        62.9
Amazon pol      94.4        94.2                   94.7        93.6        94.7
Yelp pol        95.9        95.5                   95.8        95.0        95.7
6 Future Work
Hash embeddings are complementary to other state-of-the-art methods as it addresses the problem
of large vocabularies. An attractive possibility is to use hash-embeddings to create a word-level
embedding to be used in a context sensitive model such as wordCNN.
As noted in section 3, we have used the same token to id function $D_1$ for both the component vectors and the importance parameters. This means that words that hash to the same bucket in the first layer get both identical component vectors and importance parameters. This effectively means that those words become indistinguishable to the model. If we instead use a different token to id function $\hat{D}$ for the importance parameters, we severely reduce the chance of "total collisions".
Table 3: State-of-the-art test accuracy in %. The table is split between BOW embedding approaches
(bottom) and more complex rnn/cnn approaches (top). The best result in each category for each
dataset is bolded.
                                       AG    DBP   Yelp P  Yelp F  Yah A  Amz F  Amz P
char-CNN (Zhang et al., 2015)          87.2  98.3  94.7    62.0    71.2   59.5   94.5
char-CRNN (Xiao and Cho, 2016)         91.4  98.6  94.5    61.8    71.7   59.2   94.1
VDCNN (Conneau et al., 2016)           91.3  98.7  95.7    64.7    73.4   63.0   95.7
wordCNN (Johnson and Zhang, 2016)      93.4  99.2  97.1    67.6    75.2   63.8   96.2
Discr. LSTM (Yogatama et al., 2017)    92.1  98.7  92.6    59.6    73.7   -      -
Virt. adv. net. (Miyato et al., 2016)  -     99.2  -       -       -      -      -
fastText (Joulin et al., 2016b)        92.5  98.6  95.7    63.9    72.3   60.2   94.6
BoW (Zhang et al., 2015)               88.8  96.6  92.2    58.0    68.9   54.6   90.4
n-grams (Zhang et al., 2015)           92.0  98.6  95.6    56.3    68.5   54.3   92.0
n-grams TFIDF (Zhang et al., 2015)     92.4  98.7  95.4    54.8    68.5   52.4   91.5
Hash embeddings (no dict.)             92.4  98.5  95.9    63.8    72.3   60.0   94.4
Hash embeddings (dict.)                91.5  98.7  95.8    62.5    71.9   59.4   94.7
Hash embeddings (dict., ensemble)      92.0  98.8  95.7    62.9    72.9   60.5   94.7
Table 4: Words in the vocabulary with the highest/lowest importance parameters.

                Important tokens                          Unimportant tokens
Yelp polarity   What_a_joke, not_a_good_experience,       The_service_was, got_a_cinnamon,
                Great_experience, wanted_to_love,         15_you_can, while_touching,
                and_lacking, Awful, by_far_the_worst      and_that_table, style_There_is
Amazon full     gave_it_4, it_two_stars_because,          that_my_wife_and_I, the_state_I,
                4_stars_instead_of_5, 4_stars,            power_back_on, years_and_though,
                four_stars, gave_it_two_stars             you_want_a_real_good
If we instead use a different token-to-id function D̂ for the importance parameters, we severely reduce the chance of "total collisions". Our initial findings
indicate that using a different hash function for the index of the importance parameters gives a small
but consistent improvement compared to using the same hash function.
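A sketch of this decoupling (the helper token_to_id and its seeds are hypothetical stand-ins for D1 and D̂):

```python
import numpy as np

def token_to_id(token, K, seed):
    # stand-in for a token-to-id hash; `seed` distinguishes D1 from D_hat
    return hash((seed, token)) % K

def embed(token, H, E, P, K):
    i1 = token_to_id(token, K, seed=1)  # D1: row of the component-vector hash table H
    i2 = token_to_id(token, K, seed=2)  # D_hat: independent row for importance parameters
    return P[i2] @ E[H[:, i1]]          # a "total collision" now needs both hashes to collide
```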
In this article we have represented word vectors using a weighted sum of component vectors. However,
other aggregation methods are possible. One such method is simply to concatenate the (weighted)
component vectors. The resulting kd-dimensional vector is then equivalent to a weighted sum of
orthogonal vectors in R^{kd}.
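The two aggregations can be contrasted in a few lines (our illustration; the equivalence holds because each weighted component occupies its own block of coordinates):

```python
import numpy as np

def aggregate_sum(weights, comps):     # (k,), (k, d) -> (d,)
    return weights @ comps

def aggregate_concat(weights, comps):  # (k,), (k, d) -> (k*d,)
    # each weighted component keeps its own coordinate block, so this is a
    # weighted sum of k mutually orthogonal vectors in R^{kd}
    return (weights[:, None] * comps).ravel()
```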
Finally, it might be interesting to experiment with pre-training lean, high-quality hash vectors that
could be distributed as an alternative to word2vec vectors, which require around 3.5 GB of space for
almost a billion parameters.
7 Conclusion
We have described an extension and improvement to standard word embeddings and made
empirical comparisons between hash embeddings and standard embeddings across a wide range of
classification tasks. Our experiments show that the performance of hash embeddings is always on par
with that of standard embeddings, and in most cases better.
We have shown that hash embeddings can easily deal with huge vocabularies, and we have shown
that hash embeddings can be used both with and without a dictionary. This is particularly useful for
problems such as online learning where a dictionary cannot be constructed before training.
Our experiments also suggest that hash embeddings have an inherent regularizing effect on performance. When using a standard method of regularization (such as L1 or L2 regularization), we start
with the full parameter space and regularize parameters by pushing some of them closer to 0. This
is in contrast to regularization using hash embeddings where the number of parameters (number of
buckets) determines the degree of regularization. Thus parameters not needed by the model will not
have to be added in the first place.
The hash embedding models used in this article achieve equal or better performance than previous
bag-of-words models using standard embeddings. Furthermore, in 5 of 7 datasets, the performance of
hash embeddings is in the top 3 of the state of the art.
References
Argerich, L., Zaffaroni, J. T., and Cano, M. J. (2016). Hash2vec, feature hashing for word embeddings. CoRR,
abs/1608.08940.
Bai, B., Weston, J., Grangier, D., Collobert, R., Sadamasa, K., Qi, Y., Chapelle, O., and Weinberger, K. (2009).
Supervised semantic indexing. In Proceedings of the 18th ACM conference on Information and knowledge
management, pages 187–196. ACM.
Conneau, A., Schwenk, H., Barrault, L., and LeCun, Y. (2016). Very deep convolutional networks for natural
language processing. CoRR, abs/1606.01781.
Gray, R. M. and Neuhoff, D. L. (1998). Quantization. IEEE Trans. Inf. Theor., 44(6):2325–2383.
Huang, E. H., Socher, R., Manning, C. D., and Ng, A. Y. (2012). Improving word representations via global
context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for
Computational Linguistics: Long Papers - Volume 1, ACL '12, pages 873–882, Stroudsburg, PA, USA.
Association for Computational Linguistics.
Huang, P.-S., He, X., Gao, J., Deng, L., Acero, A., and Heck, L. (2013). Learning deep structured semantic
models for web search using clickthrough data. In Proceedings of the 22nd ACM International Conference on
Information and Knowledge Management (CIKM), pages 2333–2338.
Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal
covariate shift. CoRR, abs/1502.03167.
Jégou, H., Douze, M., and Schmid, C. (2011). Product quantization for nearest neighbor search. IEEE Trans.
Pattern Anal. Mach. Intell., 33(1):117–128.
Johansen, A. R., Hansen, J. M., Obeid, E. K., Sønderby, C. K., and Winther, O. (2016). Neural machine
translation with characters and hierarchical encoding. CoRR, abs/1610.06550.
Johnson, R. and Zhang, T. (2016). Convolutional neural networks for text categorization: Shallow word-level vs.
deep character-level. CoRR, abs/1609.00718.
Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., and Mikolov, T. (2016a). Fasttext.zip: Compressing
text classification models. CoRR, abs/1612.03651.
Joulin, A., Grave, E., Bojanowski, P., and Mikolov, T. (2016b). Bag of tricks for efficient text classification.
CoRR, abs/1607.01759.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Kulis, B. and Darrell, T. (2009). Learning to hash with binary reconstructive embeddings. In Bengio, Y.,
Schuurmans, D., Lafferty, J. D., Williams, C. K. I., and Culotta, A., editors, Advances in Neural Information
Processing Systems 22, pages 1042–1050. Curran Associates, Inc.
Manning, C. D., Schütze, H., et al. (1999). Foundations of statistical natural language processing, volume 999.
MIT Press.
Miháltz, M. (2016). Google's trained word2vec model in python. https://github.com/mmihaltz/
word2vec-GoogleNews-vectors. Accessed: 2017-02-08.
Miyato, T., Dai, A. M., and Goodfellow, I. (2016). Virtual adversarial training for semi-supervised text
classification. stat, 1050:25.
Reisinger, J. and Mooney, R. J. (2010). Multi-prototype vector-space models of word meaning. In Human
Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for
Computational Linguistics, HLT '10, pages 109–117, Stroudsburg, PA, USA. Association for Computational
Linguistics.
Stolcke, A. (2000). Entropy-based pruning of backoff language models. CoRR, cs.CL/0006025.
Weinberger, K. Q., Dasgupta, A., Attenberg, J., Langford, J., and Smola, A. J. (2009). Feature hashing for large
scale multitask learning. CoRR, abs/0902.2206.
Xiao, Y. and Cho, K. (2016). Efficient character-level document classification by combining convolution and
recurrent layers. CoRR, abs/1602.00367.
Yogatama, D., Dyer, C., Ling, W., and Blunsom, P. (2017). Generative and discriminative text classification with
recurrent neural networks. arXiv preprint arXiv:1703.01898.
Zhang, X., Zhao, J. J., and LeCun, Y. (2015). Character-level convolutional networks for text classification.
CoRR, abs/1509.01626.
Online Learning for Multivariate Hawkes Processes
Yingxiang Yang†
Jalal Etesami†
Niao He‡
Negar Kiyavash†‡
University of Illinois at Urbana-Champaign
Urbana, IL 61801
{yyang172,etesami2,niaohe,kiyavash}@illinois.edu
Abstract
We develop a nonparametric and online learning algorithm that estimates the
triggering functions of a multivariate Hawkes process (MHP). The approach we
take approximates the triggering function f_{i,j}(t) by functions in a reproducing
kernel Hilbert space (RKHS), and maximizes a time-discretized version of the
log-likelihood, with Tikhonov regularization. Theoretically, our algorithm achieves
an O(log T ) regret bound. Numerical results show that our algorithm offers a
competing performance to that of the nonparametric batch learning algorithm, with
a run time comparable to parametric online learning algorithms.
1 Introduction
Multivariate Hawkes processes (MHPs) are counting processes where an arrival in one dimension can
affect the arrival rates of other dimensions. They were originally proposed to statistically model the
arrival patterns of earthquakes [16]. However, MHP's ability to capture mutual excitation between
dimensions of a process also makes it a popular model in many other areas, including high frequency
trading [3], modeling neural spike trains [24], and modeling diffusion in social networks [28], and
capturing causality [12, 18].
For a p-dimensional MHP, the intensity function of the i-th dimension takes the following form:
$$\lambda_i(t) = \mu_i + \sum_{j=1}^{p} \int_0^t f_{i,j}(t-\tau)\, dN_j(\tau), \qquad (1)$$
where the constant μ_i is the base intensity of the i-th dimension, N_j(t) counts the number of arrivals
in the j-th dimension within [0, t], and f_{i,j}(t) is the triggering function that embeds the underlying
causal structure of the model. In particular, one arrival in the j-th dimension at time τ will affect the
intensity of the arrivals in the i-th dimension at time t by the amount f_{i,j}(t − τ) for t > τ. Therefore,
learning the triggering function is the key to learning an MHP model. In this work, we consider the
problem of estimating the f_{i,j}(t)s using nonparametric online learning techniques.
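As an illustration of Eq. (1), a direct and unoptimized evaluation of the intensity for a given event history could look as follows (all names are ours; the triggering functions are passed as callables):

```python
import numpy as np

def intensity(i, t, mu, F, events):
    """lambda_i(t) = mu_i + sum_j sum_{tau in events[j], tau < t} f_{i,j}(t - tau)."""
    lam = mu[i]
    for j, taus in enumerate(events):
        lam += sum(F[i][j](t - tau) for tau in taus if tau < t)
    return lam

# e.g. a 2-dimensional MHP with exponential triggering functions:
F = [[lambda s: 0.8 * np.exp(-1.2 * s), lambda s: 0.1 * np.exp(-2.0 * s)],
     [lambda s: 0.0 * s,                lambda s: 0.5 * np.exp(-1.0 * s)]]
print(intensity(0, 2.0, mu=[0.05, 0.05], F=F, events=[[0.3, 1.1], [0.7]]))
```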
1.1 Motivations
Why nonparametric? Most of the existing works consider exponential triggering functions:
$$f_{i,j}(t) = \alpha_{i,j}\, e^{-\beta_{i,j} t}\, \mathbb{1}\{t > 0\}, \qquad (2)$$
where α_{i,j} is unknown while β_{i,j} is given a priori. Under this assumption, learning f_{i,j}(t) is
equivalent to learning a real number, α_{i,j}. However, there are many scenarios where (2) fails to
† Department of Electrical and Computer Engineering. ‡ Department of Industrial and Enterprise Systems
Engineering. This work was supported in part by MURI grant ARMY W911NF-15-1-0479 and ONR grant
W911NF-15-1-0479.
describe the correct mutual influence pattern between dimensions. For example, [20] and [11] have
reported delayed and bell-shaped triggering functions when applying the MHP model to neural spike
train datasets. Moreover, when the f_{i,j}(t)s are not exponential, or when the β_{i,j}s are inaccurate, the formulation
in (2) is prone to model mismatch [15].
Why online learning? There are many reasons to consider an online framework. (i) Batch learning
algorithms do not scale well due to high computational complexity [15]. (ii) The data can be costly
to observe, and can be streaming in nature, for example, in criminology.
The above concerns motivate us to design an online learning algorithm in the nonparametric regime.
1.2 Related Works
Earlier works on learning the triggering functions can be largely categorized into three classes.
Batch and parametric. The simplest way to learn the triggering functions is to assume that they
possess a parametric form, e.g. (2), and learn the coefficients. The most widely used estimators
include the maximum likelihood estimator [23], and the minimum mean-square error estimator [2].
These estimators can also be generalized to the high dimensional case when the coefficient matrix is
sparse and low-rank [2]. More generally, one can assume that fi,j (t)s lie within the span of a given
P|S|
set of basis functions S = {e1 (t), . . . , e|S| (t)}: fi,j (t) = i=1 ci ei (t), where ei (t)s have a given
parametric form [13, 27]. The state-of-the-art of such algorithms is [27], where |S| is adaptively
chosen, which sometimes requires a significant portion of the data to determine the optimal S.
Batch and nonparametric. A more sophisticated approach towards finding the set S is explored in
[29], where the coefficients and the basis functions are iteratively updated and refined. Unlike [27],
where the basis functions take a predetermined form, [29] updates the basis functions by solving a
set of Euler-Lagrange equations in the nonparametric regime. However, the formulation of [29] is
nonconvex, and therefore optimality is not guaranteed. The method also requires more than 10^5
arrivals for each dimension in order to obtain good results, on networks of fewer than 5 dimensions.
Another way to estimate the f_{i,j}(t)s nonparametrically is proposed in [4], which solves a set of p Wiener-Hopf systems, each of dimension at least p². The algorithm works well on small dimensions; however,
it requires inverting a p² × p² matrix, which is costly, if not altogether infeasible, when p is large.
Online and parametric. To the best of our knowledge, learning the triggering functions in an online
setting seems largely unexplored. Under the assumption that fi,j (t)s are exponential, [15] proposes
an online algorithm using gradient descent, exploiting the evolutionary dynamics of the intensity
function. The time axis is discretized into small intervals, and the updates are performed at the end of
each interval. While the authors provide the online solution to the parametric case, their work cannot
readily extend to the nonparametric setting where the triggering functions are not exponential, mainly
because the evolutionary dynamics of the intensity functions no longer hold. Learning triggering
functions nonparametrically remains an open problem.
1.3 Challenges and Our Contributions
Designing an online algorithm in the nonparametric regime is not without its challenges: (i) It is
not clear how to represent fi,j (t)s. In this work, we relate fi,j (t) to an RKHS. (ii) Although online
learning with kernels is a well studied subject in other scenarios [19], a typical choice of loss function
for learning an MHP usually involves the integral of fi,j (t)s, which prevents the direct application of
the representer theorem. (iii) The outputs of the algorithm at each step require a projection step to
ensure positivity of the intensity function. This requires solving a quadratic programming problem,
which can be computationally expensive. How to circumvent this computational complexity issue is
another challenge of this work.
In this paper, we design, to the best of our knowledge, the first online learning algorithm for the
triggering functions in the nonparametric regime. In particular, we tackle the challenges mentioned
above, and the only assumption we make is that the triggering functions fi,j (t)s are positive, have
a decreasing tail, and that they belong to an RKHS. Theoretically, our algorithm achieves a regret
bound of O(log T ), and numerical experiments show that our approach outperforms the previous
approaches despite the fact that they are tailored to a less general setting. In particular, our algorithm
attains a similar performance to the nonparametric batch learning maximum likelihood estimators
while reducing the run time extensively.
1.4 Notations
Prior to discussing our results, we introduce the basic notation used in the paper. Detailed notation
will be introduced along the way. For a p-dimensional MHP, we denote the intensity function of
the i-th dimension by λ_i(t). We use λ(t) to denote the vector of intensity functions, and we use
F = [f_{i,j}(t)] to denote the matrix of triggering functions. The i-th row of F is denoted by f_i. The
number of arrivals in the i-th dimension up to t is denoted by the counting process N_i(t). We set
N(t) = \sum_{i=1}^{p} N_i(t). The estimates of these quantities are denoted by their "hatted" versions. The
arrival time of the n-th event in the j-th dimension is denoted by τ_{j,n}. Lastly, define ⌊x⌋_y = y⌊x/y⌋.
2 Problem Formulation
In this section, we introduce our assumptions and definitions followed by the formulation of the loss
function. We omit the basics on MHPs and instead refer the readers to [22] for details.
Assumption 2.1. We assume that the constant base intensity μ_i is lower bounded by a given threshold
μ_min > 0. We also assume bounded and stationary increments for the MHP [16, 9]: for any t, z > 0,
N_i(t) − N_i(t − z) ≤ κ_z = O(z). See Appendix A for more details.
Definition 2.1. Suppose that {t_k}_{k=0}^∞ is an arbitrary time sequence with t_0 = 0 and sup_{k≥1}(t_k − t_{k−1}) ≤ δ ≤ 1. Let Δ_f : [0, ∞) → [0, ∞) be a continuous and bounded function such that
lim_{t→∞} Δ_f(t) = 0. Then f(x) satisfies the decreasing tail property with tail function Δ_f(t) if
$$\sum_{k=m}^{\infty} (t_k - t_{k-1}) \sup_{x \in (t_{k-1},\, t_k]} |f(x)| \le \Delta_f(t_{m-1}), \qquad \forall m > 0.$$
Assumption 2.2. Let H be an RKHS associated with a kernel K(·, ·) that satisfies K(x, x) ≤ 1.
Let L^1[0, ∞) be the space of functions for which the absolute value is Lebesgue integrable. For
any i, j ∈ {1, ..., p}, we assume that f_{i,j}(t) ∈ H and f_{i,j}(t) ∈ L^1[0, ∞), with both f_{i,j}(t) and
df_{i,j}(t)/dt satisfying the decreasing tail property of Definition 2.1.
Assumption 2.1 is common and has been adopted in the existing literature [22]. It ensures that the MHP
is not "explosive" by assuming that N(t)/t is bounded. Assumption 2.2 restricts the tail behaviors
of both f_{i,j}(t) and df_{i,j}(t)/dt. Complicated as it may seem, functions with exponentially decaying
tails satisfy this assumption, as is illustrated by the following example (see Appendix B for a proof):
Example 1. The functions f_1(t) = exp{−βt}1{t > 0} and f_2(t) = exp{−(t − γ)²}1{t > 0} satisfy
Assumption 2.2 with tail functions β^{−1} exp{−β(t − δ)} and √(2π) erfc((t − δ)/√2 − γ) exp{γ²/2}.
2.1 A Discretized Loss Function for Online Learning
A common approach for learning the parameters of an MHP is to perform regularized maximum
likelihood estimation. As such, we introduce a loss function comprised of the negative of the log-likelihood function and a penalty term to enforce desired structural properties, e.g. sparsity of the
triggering matrix F or smoothness of the triggering functions (see, e.g., [2, 29, 27]). The negative of
the log-likelihood function of an MHP over a time interval [0, t] is given by
$$L_t(\lambda) := -\sum_{i=1}^{p}\left(\int_0^t \log \lambda_i(\tau)\, dN_i(\tau) - \int_0^t \lambda_i(\tau)\, d\tau\right). \qquad (3)$$
Let {τ_1, ..., τ_{N(t)}} denote the arrival times of all the events within [0, t] and let {t_0, ..., t_{M(t)}} be
a finite partition of the time interval [0, t] such that t_0 = 0 and t_{k+1} := min_{τ_i > t_k} {⌊t_k⌋_δ + δ, τ_i}.
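A sketch of how such a partition can be constructed from the arrival times (our code, under the convention above):

```python
import numpy as np

def build_partition(arrivals, T, delta):
    """t_0 = 0; t_{k+1} is the smaller of the next delta-grid point
    floor(t_k/delta)*delta + delta and the first arrival after t_k, so
    intervals never exceed delta and every arrival becomes a grid point."""
    taus = np.sort(np.asarray(arrivals))
    ts = [0.0]
    while ts[-1] < T:
        t = ts[-1]
        nxt = np.floor(t / delta) * delta + delta
        later = taus[taus > t]
        if later.size and later[0] < nxt:
            nxt = later[0]
        ts.append(min(nxt, T))
    return np.array(ts)

# build_partition([0.37, 0.52, 1.9], T=2.0, delta=0.5)
# -> [0.0, 0.37, 0.5, 0.52, 1.0, 1.5, 1.9, 2.0]
```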
Using this partitioning, it is straightforward to see that the function in (3) can be written as
$$L_t(\lambda) = \sum_{i=1}^{p}\sum_{k=1}^{M(t)}\left(\int_{t_{k-1}}^{t_k} \lambda_i(\tau)\, d\tau - x_{i,k}\log \lambda_i(t_k)\right) := \sum_{i=1}^{p} L_{i,t}(\lambda_i), \qquad (4)$$
where x_{i,k} := N_i(t_k) − N_i(t_{k−1}). By the definition of t_k, we know that x_{i,k} ∈ {0, 1}. In order to
learn the f_{i,j}(t)s using an online kernel method, we require a result similar to the representer theorem in
[25] that specifies the form of the optimizer. This theorem requires the regularized version of the
loss in (4) to be a function of only the f_{i,j}(t)s. However, due to the integral part, L_t(λ) is a function of
both the f_{i,j}(t)s and their integrals, which prevents us from applying the representer theorem directly. To
resolve this issue, several approaches can be applied, such as adjusting the Hilbert space as proposed
in [14] in the context of Poisson processes, or approximating the log-likelihood function as in [15]. Here,
we adopt a method similar to [15] and approximate (4) by discretizing the integral:
$$L_t^{(\delta)}(\lambda) := \sum_{i=1}^{p}\sum_{k=1}^{M(t)}\big((t_k - t_{k-1})\lambda_i(t_k) - x_{i,k}\log \lambda_i(t_k)\big) := \sum_{i=1}^{p} L_{i,t}^{(\delta)}(\lambda_i). \qquad (5)$$
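A direct transcription of (5) for a single dimension i might read (a sketch with our naming):

```python
import math

def discretized_nll(ts, x, lam):
    """L_{i,t}^{(delta)} of Eq. (5): x[k] in {0, 1} counts arrivals in
    (t_{k-1}, t_k], and lam(t) evaluates the (estimated) intensity at t."""
    total = 0.0
    for k in range(1, len(ts)):
        lk = lam(ts[k])
        total += (ts[k] - ts[k - 1]) * lk - x[k] * math.log(lk)
    return total
```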
Intuitively, if δ is small enough and the triggering functions are bounded, it is reasonable to expect
that L_{i,t}^{(δ)}(λ) is close to L_{i,t}(λ). Below, we characterize the accuracy of the above discretization and
also of the truncation of the intensity function. First, we require the following definition.
Definition 2.2. We define the truncated intensity function as follows:
$$\lambda_i^{(z)}(t) := \mu_i + \sum_{j=1}^{p}\int_0^t \mathbb{1}\{t-\tau < z\}\, f_{i,j}(t-\tau)\, dN_j(\tau). \qquad (6)$$
Proposition 1. Under Assumptions 2.1 and 2.2, for any i ∈ {1, ..., p}, we have
$$\Big|L_{i,t}^{(\delta)}\big(\lambda_i^{(z)}\big) - L_{i,t}(\lambda_i)\Big| \le (1 + \kappa_1 \mu_{\min}^{-1})\, N(t-z)\,\Delta(z) + \delta\, N(t)\,\Delta'(0),$$
where μ_min is the lower bound for μ_i, κ_1 is the upper bound for N_i(t) − N_i(t − 1) from Assumption
2.1, while Δ and Δ' are two tail functions that uniformly capture the decreasing tail property of all
f_{i,j}(t)s and all df_{i,j}(t)/dts, respectively.
The first term in the bound characterizes the approximation error when one truncates λ_i(t) to
λ_i^{(z)}(t). The second term describes the approximation error caused by the discretization. When
z = ∞, λ_i^{(z)}(t) = λ_i(t), and the approximation error is contributed solely by the discretization. Note
that, in many cases, a small enough truncation error can be obtained by setting a relatively small z.
For example, for f_{i,j}(t) = exp{−3t}1{t > 0}, setting z = 10 would result in a truncation error less
than 10^{−13}. Meanwhile, truncating λ_i(t) greatly simplifies the procedure of computing its value.
Hence, in our algorithm, we focus on λ_i^{(z)} instead of λ_i.
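To verify this example explicitly, note that the truncated tail mass is $\int_z^{\infty} e^{-3t}\,dt = e^{-3z}/3$, which at z = 10 equals $e^{-30}/3 \approx 3.1 \times 10^{-14} < 10^{-13}$.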
In the following, we consider the regularized instantaneous loss function with Tikhonov regularization for the f_{i,j}(t)s and μ_i:
$$l_{i,k}(\lambda_i) := (t_k - t_{k-1})\lambda_i(t_k) - x_{i,k}\log \lambda_i(t_k) + \frac{\zeta_i}{2}\mu_i^2 + \sum_{j=1}^{p}\frac{\zeta_{i,j}}{2}\|f_{i,j}\|_{\mathcal{H}}^2, \qquad (7)$$
and aim at producing a sequence of estimates {λ̂_i(t_k)}_{k=1}^{M(t)} of λ_i(t) with minimal regret:
$$\sum_{k=1}^{M(t)} l_{i,k}\big(\hat\lambda_i(t_k)\big) - \min_{\mu_i \ge \mu_{\min},\; f_{i,j}(t) \ge 0} \sum_{k=1}^{M(t)} l_{i,k}\big(\lambda_i(t_k)\big). \qquad (8)$$
Each regularized instantaneous loss function in (7) is jointly strongly convex with respect to the f_{i,j}s and
μ_i. Combined with the representer theorem in [25], the minimizer of (8) is a linear combination of
a finite set of kernels. In addition, by setting ζ_{i,j} = O(1), our algorithm achieves β-stability with
β = O((ζ_{i,j} t)^{−1}), which is typical for a learning algorithm in an RKHS (Theorem 22 of [8]).
3 Online Learning for MHPs
We introduce our NonParametric OnLine Estimation for MHP (NPOLE-MHP) in Algorithm 1. The
most important components of the algorithm are (i) the computation of the gradients and (ii) the
projections in lines 6 and 8.
Algorithm 1 NonParametric OnLine Estimation for MHP (NPOLE-MHP)
1: input: a sequence of step sizes {η_k}_{k=1}^∞ and a set of regularization coefficients ζ_{i,j}, along with
   positive values of μ_min, z and δ. output: μ̂^{(M(t))} and F̂^{(M(t))}.
2: Initialize f̂_{i,j}^{(0)} and μ̂_i^{(0)} for all i, j.
3: for k = 0, ..., M(t) − 1 do
4:   Observe the interval [t_k, t_{k+1}), and compute x_{i,k} for i ∈ {1, ..., p}.
5:   for i = 1, ..., p do
6:     Set μ̂_i^{(k+1)} ← max{ μ̂_i^{(k)} − η_{k+1} ∂_{μ_i} l_{i,k}(λ_i^{(z)}(μ̂_i^{(k)}, f̂_i^{(k)})), μ_min }.
7:     for j = 1, ..., p do
8:       Set f̂_{i,j}^{(k+1/2)} ← f̂_{i,j}^{(k)} − η_{k+1} ∇_{f_{i,j}} l_{i,k}(λ_i^{(z)}(μ̂_i^{(k)}, f̂_i^{(k)})), and f̂_{i,j}^{(k+1)} ← Π[f̂_{i,j}^{(k+1/2)}].
9:     end for
10:   end for
11: end for
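To make the update concrete, the following is a deliberately simplified sketch of one iteration of Algorithm 1 for a fixed dimension i, with each f̂_{i,j} stored as an explicit kernel expansion; all names are ours, and the positivity projection Π of Section 3.1 is omitted:

```python
def npole_mhp_step(k, ts, x_ik, mu_hat, coeffs, centers, events,
                   kern, eta, zeta, zeta_mu, mu_min, z):
    """One NPOLE-MHP update for a fixed dimension i (illustrative sketch).
    f_hat_{i,j} is kept as a kernel expansion: lists coeffs[j], centers[j]."""

    def f_hat(j, s):
        return sum(c * kern(u, s) for c, u in zip(coeffs[j], centers[j]))

    t_k, t_prev = ts[k], ts[k - 1]
    recent = [[tau for tau in taus if t_k - z <= tau < t_k] for taus in events]

    # truncated intensity lambda_i^{(z)}(t_k), Eq. (6)
    lam = mu_hat + sum(f_hat(j, t_k - tau)
                       for j, taus in enumerate(recent) for tau in taus)

    chi = (t_k - t_prev) - x_ik / lam   # shared part of both gradients

    # base-intensity step with projection onto [mu_min, infinity)
    mu_hat = max(mu_hat - eta * (chi + zeta_mu * mu_hat), mu_min)

    # functional step, Eq. (9): f <- (1 - eta*zeta) f - eta*chi * sum_n K(t_k - tau_n, .)
    for j, taus in enumerate(recent):
        coeffs[j] = [(1.0 - eta * zeta) * c for c in coeffs[j]]
        for tau in taus:
            centers[j].append(t_k - tau)
            coeffs[j].append(-eta * chi)
    # the positivity projection Pi[.] of Section 3.1 would follow here (omitted)
    return mu_hat
```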
For the partial derivative with respect to μ_i, recall the definition of l_{i,k} in
(7) and λ_i^{(z)} in (6). Since λ_i^{(z)} is a linear function of μ_i, we have
$$\partial_{\mu_i} l_{i,k}\big(\lambda_i^{(z)}(\hat\mu_i^{(k)}, \hat f_i^{(k)})\big) = (t_k - t_{k-1}) - x_{i,k}\Big[\lambda_i^{(z)}\big(\hat\mu_i^{(k)}, \hat f_i^{(k)}\big)\Big]^{-1} + \zeta_i \hat\mu_i^{(k)} := \chi_k + \zeta_i \hat\mu_i^{(k)},$$
where χ_k is the simplified notation for the first two terms. Upon performing gradient descent, the
algorithm makes sure that μ̂_i^{(k+1)} ≥ μ_min, which further ensures that λ_i^{(z)}(μ̂_i^{(k+1)}, f̂_i^{(k+1)}) ≥ μ_min.
For the update step of f̂_{i,j}^{(k)}(t), notice that l_{i,k} is also a linear function with respect to f_{i,j}. Since
∇_{f_{i,j}} f_{i,j}(x) = K(x, ·), which holds true due to the reproducing property of the kernel, we thus have
$$\nabla_{f_{i,j}} l_{i,k}\big(\lambda_i^{(z)}(\hat\mu_i^{(k)}, \hat f_i^{(k)})\big) = \chi_k \sum_{\tau_{j,n}\,\in\,[t_k - z,\; t_k)} K(t_k - \tau_{j,n}, \cdot) + \zeta_{i,j}\,\hat f_{i,j}^{(k)}(\cdot). \qquad (9)$$
Once again, a projection Π[·] is necessary to ensure that the estimated triggering functions are positive.
3.1 Projection of the Triggering Functions
For any kernel, the projection step for a triggering function can be executed by solving a quadratic
programming problem: min ‖f − f̂_{i,j}^{(k+1/2)}‖²_H subject to f ∈ H and f(t) ≥ 0. Ideally, the positivity
constraint has to hold for every t > 0, but in order to simplify computation, one can approximate the
solution by relaxing the constraint such that f(t) ≥ 0 holds for only a finite set of ts within [0, z].
Semi-Definite Programming (SDP). When the reproducing kernel is polynomial, the problem is
much simpler. The projection step can be formulated as an SDP problem [26] as follows:
Proposition 2. Let S = ∪_{r≤k} {t_r − τ_{j,n} : t_r − z ≤ τ_{j,n} < t_r} be the set of the t_r − τ_{j,n}s. Let
K(x, y) = (1 + xy)^{2d} and K'(x, y) = (1 + xy)^{d} be two polynomial kernels with d ≥ 1. Furthermore,
let K and G denote the Gramian matrices whose (i, j)-th elements correspond to K(s, s') and K'(s, s'), with s
and s' being the i-th and j-th elements in S. Suppose that a ∈ R^{|S|} is the coefficient vector such
that f̂_{i,j}^{(k+1/2)}(·) = Σ_{s∈S} a_s K(s, ·), and that the projection step returns f̂_{i,j}^{(k+1)}(·) = Σ_{s∈S} b*_s K(s, ·).
Then the coefficient vector b* can be obtained by
$$b^* = \operatorname*{argmin}_{b\,\in\,\mathbb{R}^{|S|}} \; -2a^{\top}Kb + b^{\top}Kb, \qquad \text{s.t. } G\,\mathrm{diag}(b) + \mathrm{diag}(b)\,G \succeq 0. \qquad (10)$$
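Assuming a convex optimization layer such as cvxpy is available, the projection (10) can be prototyped as below; this is a sketch rather than the authors' implementation, and the psd_wrap call and explicit symmetrization are our own numerical safeguards:

```python
import cvxpy as cp
import numpy as np

def project_coefficients(a, K_mat, G_mat):
    """Solve (10): min_b -2 a^T K b + b^T K b  s.t.  G diag(b) + diag(b) G is PSD."""
    b = cp.Variable(a.shape[0])
    obj = cp.Minimize(cp.quad_form(b, cp.psd_wrap(K_mat)) - 2 * a @ K_mat @ b)
    M = G_mat @ cp.diag(b) + cp.diag(b) @ G_mat
    # M is symmetric in exact arithmetic; symmetrize for the solver
    prob = cp.Problem(obj, [(M + M.T) / 2 >> 0])
    prob.solve()
    return b.value
```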
Non-convex approach. Alternatively, we can assume that f_{i,j}(t) = g_{i,j}²(t) where g_{i,j}(t) ∈ H. By
minimizing the loss with respect to g_{i,j}(t), one can naturally guarantee that f_{i,j}(t) ≥ 0. This method
was adopted in [14] for estimating the intensity function of non-homogeneous Poisson processes.
While this approach breaks the convexity of the loss function, it works relatively well when the
initialization is close to the global minimum. It is also interestingly related to a line of recent works
on non-convex SDP [6], as well as phase retrieval with Wirtinger flow [10]. Deriving guarantees on
the regret bound and convergence performance is a future direction implied by the results of this work.
4 Theoretical Properties
We now discuss the theoretical properties of NPOLE-MHP. We start with defining the regret.
Definition 4.1. The regret of Algorithm 1 at time t is given by
$$R_t^{(\delta)}\big(\lambda_i^{(z)}(\mu_i, f_i)\big) := \sum_{k=1}^{M(t)}\Big(l_{i,k}\big(\lambda_i^{(z)}(\hat\mu_i^{(k)}, \hat f_i^{(k)})\big) - l_{i,k}\big(\lambda_i^{(z)}(\mu_i, f_i)\big)\Big),$$
where μ̂_i^{(k)} and f̂_i^{(k)} denote the estimated base intensity and the triggering functions, respectively.
Theorem 1. Suppose that the observations are generated from a p-dimensional MHP that satisfies
Assumptions 2.1 and 2.2. Let ζ = min_{i,j}{ζ_{i,j}, ζ_i}, and η_k = 1/(ζk + b) for some positive constant
b. Then
$$R_t^{(\delta)}\big(\lambda_i^{(z)}(\mu_i, f_i)\big) \le C_1\,(1 + \log M(t)),$$
where C_1 = 2(1 + pκ_z²) ζ^{−1} |δ − μ_min^{−1}|².
The regret bound of Theorem 1 resembles the regret bound for a typical online learning algorithm
with a strongly convex loss function (see, for example, Theorem 3.3 of [17]). When ζ, δ and μ_min^{−1}
are fixed, C_1 = O(p), which is intuitive as one needs to update p functions at a time. Note that
the regret in Definition 4.1 encodes the performance of Algorithm 1 by comparing its loss with the
approximated loss. Below, we compare the loss of Algorithm 1 with the original loss in (4).
Corollary 1. Under the same assumptions as Theorem 1, we have
$$\sum_{k=1}^{M(t)} \Big(l_{i,k}\big(\lambda_i^{(z)}(\hat\mu_i^{(k)}, \hat f_i^{(k)})\big) - l_{i,k}\big(\lambda_i(\mu_i, f_i)\big)\Big) \le C_1\,[1 + \log M(t)] + C_2\, N(t), \qquad (11)$$
where C_1 is defined in Theorem 1 and C_2 = (1 + κ_1 μ_min^{−1})Δ(z) + δΔ'(0).
Note that the C_2 N(t) term is due to the discretization and truncation steps, and it can be made arbitrarily small for
a given t by setting δ small enough and z large enough.
Computational Complexity. Since the f̂_i s can be estimated in parallel, we restrict our analysis to the
case of a fixed i ∈ {1, ..., p} in a single iteration. For each iteration, the computational complexity
comes from evaluating the intensity function and from the projection. Since the number of arrivals within
the interval [t_k − z, t_k) is bounded by pκ_z, with κ_z = O(1) for fixed z, evaluating the intensity costs O(p²)
operations. For the projection in each step, one can truncate the number of kernels used to represent
f_{i,j}(t) to O(1) with controllable error (Proposition 1 of [19]), and therefore the computation
cost is O(1). Hence, the per-iteration computation cost of NPOLE-MHP is O(p²). By comparison,
parametric online algorithms (DMD, OGD of [15]) also require O(p²) operations per iteration,
while the batch learning algorithms (MLE-SGLP, MLE of [27]) require O(p² t³) operations.
5 Numerical Experiments
We evaluate the performance of NPOLE-MHP on both synthetic and real data, from multiple aspects:
(i) visual assessment of the goodness-of-fit compared to the ground truth; (ii) the "average L1 error",
defined as the average of Σ_{i=1}^{p} Σ_{j=1}^{p} ‖f_{i,j} − f̂_{i,j}‖_{L^1[0,z]} over multiple trials; (iii) scalability over
both the dimension p and the time horizon T. For benchmarks, we compare NPOLE-MHP's performance
to that of online parametric algorithms (DMD and OGD of [15]) and nonparametric batch learning
algorithms (MLE-SGLP and MLE of [27]).
5.1 Synthetic Data
Consider a 5-dimensional MHP with μ_i = 0.05 for all dimensions. We set the triggering functions as
$$F = \begin{bmatrix}
e^{-2.5t} & (1+\cos(\pi t))\,e^{-t}/2 & 0 & 0.6e^{-3t} + 0.4e^{-3(t-1)^2} & 0 \\
2^{-5t} & 2e^{-3t} & 0 & 0 & 0 \\
0 & 0 & e^{-5t^2} & 0 & 0 \\
0 & 0 & t\,e^{-5(t-1)^2} & 0 & e^{-4t} \\
0 & 0 & e^{-10(t-1)^2} & 0 & e^{-3t}
\end{bmatrix}.$$
[Figure 1 shows, for each of f_{2,2}(t), f_{3,2}(t) and f_{1,4}(t), the true f_{i,j}(t) together with the estimates produced by NPOLE-MHP, DMD, OGD, MLE-SGLP and MLE.]
Figure 1: Performances of different algorithms for estimating F. The complete set of results can be found
in Appendix F. For each subplot, the horizontal axis covers [0, z] and the vertical axis covers [0, 1].
The performances are similar between DMD and OGD, and between MLE and MLE-SGLP.
The design of F allows us to test NPOLE-MHP's ability to detect (i) exponential triggering
functions with various decay rates; (ii) zero functions; and (iii) functions with delayed peaks and tail
behaviors different from an exponential function.
Goodness-of-fit. We run NPOLE-MHP over a set of data with T = 10^5 and around 4 × 10^4 events
for each dimension. The parameters are chosen by grid search over a small portion of the data, and the
parameters of the benchmark algorithms are fine-tuned (see Appendix F for details). In particular, we
set the discretization level δ = 0.05, the window size z = 3, the step size η_k = (kδ/20 + 100)^{−1}, and
the regularization coefficients ζ_{i,j} ≡ ζ = 10^{−8}. The performances of NPOLE-MHP and the benchmarks
are shown in Figure 1. We see that NPOLE-MHP captures the shape of the function much better than
the DMD and OGD algorithms with mismatched forms of the triggering functions. It is especially
visible for f_{1,4}(t) and f_{2,2}(t). In fact, our algorithm scores a similar performance to the batch
learning MLE estimator, which is optimal for any given set of data. We next plot the average loss
per iteration for this dataset in Figure 2. On the left-hand side, the loss is high due to initialization;
however, the effect of initialization quickly diminishes as the number of events increases.
Run time comparison. The simulation of the DMD and OGD algorithms took 2 minutes combined
on a Macintosh with two 6-core Intel Xeon processors at 2.4 GHz, while NPOLE-MHP took 3 minutes.
The batch learning algorithms MLE-SGLP and MLE in [27] each took about 1.5 hours. Therefore,
our algorithm achieves performance similar to batch learning algorithms with a run time close to
that of parametric online learning algorithms.
Effects of the hyperparameters δ, ζ_{i,j}, and η_k. We investigate the sensitivity of NPOLE-MHP
with respect to the hyperparameters, measuring the "average L1 error" defined at the beginning
of this section. We independently generate 100 sets of data with the same parameters, and a
smaller T = 10^4 for faster data generation. The result is shown in Table 1. For NPOLE-MHP,
we fix η_k = 1/(k/2000 + 10). MLE and MLE-SGLP score around 1.949 with 5/5 inner/outer
rounds of iterations. NPOLE-MHP's performance is robust when the regularization coefficient and
the discretization level are sufficiently small. It surpasses MLE and MLE-SGLP on large datasets, in
which case the iterations of MLE and MLE-SGLP are limited due to computational considerations. As
δ increases, the error decreases at first before rising drastically, a phenomenon caused by the mismatch
between the loss functions. For the step size, the error varies under different choices of η_k, which can
be selected via grid search on a small portion of the data, like most other online algorithms.
5.2 Real Data: Inferring Impact Between News Agencies with Memetracker Data
We test the performance of NPOLE-MHP on the memetracker data [21], which collects from the
internet a set of popular phrases, including their content, the time they were posted, and the url
address of the articles that included them. We study the relationship between different news agencies,
modeling the data with a p-dimensional MHP where each dimension corresponds to a news website.
Unlike [15], which conducted a similar experiment using all of the data, we focus on only the 20
Table 1: Effect of hyperparameters δ and ζ, measured by the "average L1 error".

          Regularization log10 ζ
  δ       -8      -6      -4      -2      0
  0.01    1.83    1.83    1.84    4.15    4.64
  0.05    1.86    1.86    1.86    3.10    4.64
  0.1     1.92    1.92    1.88    2.73    4.64
  0.5     4.80    4.80    4.64    2.19    4.62
  1       5.73    5.73    5.58    2.38    4.59

Table 2: Average CPU-time for estimating one triggering function (seconds).

  Dimension p   Horizon T (days)
                1.8     3.6     5.4
  20            3.9     9.1     15.3
  40            4.6     10.4    17.0
  60            4.6     10.2    16.7
  80            4.5     10.0    16.4
  100           4.5     9.7     15.9

[Figure 2: Effect of discretization in NPOLE-MHP. Average loss per iteration over the time axis t for δ = 0.05 with the true f_{i,j}(t)s, NPOLE-MHP with δ = 0.05, 0.10 and 0.50, and DMD with δ = 0.05.]

[Figure 3: Cumulative loss on memetracker data of 20 dimensions, comparing NPOLE-MHP, DMD and OGD.]
websites that are most active, using 18 days of data. We plot the cumulative losses in Figure 3, using
a window size of 3 hours, an update interval δ = 0.2 seconds, and a step size η_k = 1/(kζ + 800)
with ζ = 10^{−10} for NPOLE-MHP. For DMD and OGD, we set η_k = 5/√(T/δ). The result shows
that NPOLE-MHP accumulates a smaller loss per step compared to OGD and DMD.
Scalability and generalization error. Finally, we evaluate the scalability of NPOLE-MHP using
the average CPU-time for estimating one triggering function. The result in Table 2 shows that the
computation cost of NPOLE-MHP scales almost linearly with the dimension and the data size. When
scaling the data to 100 dimensions and 2 × 10^5 events, NPOLE-MHP scores an average 0.01 loss per
iteration on both training and test data, while OGD and DMD scored 0.005 on training data and 0.14
on test data. This shows a much better generalization performance for NPOLE-MHP.
6 Conclusion
We developed a nonparametric method for learning the triggering functions of a multivariate Hawkes
process (MHP) given time series observations. To formulate the instantaneous loss function, we
adopted the method of discretizing the time axis into small intervals of length at most δ, and we
derived the corresponding upper bound on the approximation error. From this point, we proposed an
online learning algorithm, NPOLE-MHP, based on the framework of online kernel learning, which
exploits the inter-arrival time statistics under the MHP setup. Theoretically, we derived the regret
bound for NPOLE-MHP, which is O(log T) when the time horizon T is known a priori, and we
showed that the per-iteration cost of NPOLE-MHP is O(p²). Numerically, we compared NPOLE-MHP's performance with parametric online learning algorithms and nonparametric batch learning
algorithms. Results on both synthetic and real data showed that we are able to achieve similar
performance to that of the nonparametric batch learning algorithms with a run time comparable to the
parametric online learning algorithms.
References
[1] Emmanuel Bacry, Khalil Dayri, and Jean-François Muzy. Non-parametric kernel estimation for symmetric Hawkes processes. Application to high frequency financial data. The European Physical Journal B - Condensed Matter and Complex Systems, 85(5):1–12, 2012.
[2] Emmanuel Bacry, Stéphane Gaïffas, and Jean-François Muzy. A generalization error bound for sparse and low-rank multivariate Hawkes processes, 2015.
[3] Emmanuel Bacry, Iacopo Mastromatteo, and Jean-François Muzy. Hawkes processes in finance. Market Microstructure and Liquidity, 1(01):1550005, 2015.
[4] Emmanuel Bacry and Jean-François Muzy. First- and second-order statistics characterization of Hawkes processes and non-parametric estimation. IEEE Transactions on Information Theory, 62(4):2184–2202, 2016.
[5] J Andrew Bagnell and Amir-massoud Farahmand. Learning positive functions in a Hilbert space, 2015.
[6] Srinadh Bhojanapalli, Anastasios Kyrillidis, and Sujay Sanghavi. Dropping convexity for faster semi-definite optimization. Conference on Learning Theory, pages 530–582, 2016.
[7] Jacek Bochnak, Michel Coste, and Marie-Françoise Roy. Real algebraic geometry, volume 36. Springer Science & Business Media, 2013.
[8] Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.
[9] Pierre Brémaud and Laurent Massoulié. Stability of nonlinear Hawkes processes. The Annals of Probability, pages 1563–1588, 1996.
[10] Emmanuel J Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[11] Michael Eichler, Rainer Dahlhaus, and Johannes Dueck. Graphical modeling for multivariate Hawkes processes with nonparametric link functions. Journal of Time Series Analysis, 38(2):225–242, 2017.
[12] Jalal Etesami and Negar Kiyavash. Directed information graphs: A generalization of linear dynamical graphs. In American Control Conference (ACC), 2014, pages 2563–2568. IEEE, 2014.
[13] Jalal Etesami, Negar Kiyavash, Kun Zhang, and Kushagra Singhal. Learning network of multivariate Hawkes processes: A time series approach. Conference on Uncertainty in Artificial Intelligence, 2016.
[14] Seth Flaxman, Yee Whye Teh, and Dino Sejdinovic. Poisson intensity estimation with reproducing kernels. International Conference on Artificial Intelligence and Statistics, 2017.
[15] Eric C Hall and Rebecca M Willett. Tracking dynamic point processes on networks. IEEE Transactions on Information Theory, 62(7):4327–4346, 2016.
[16] Alan G Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
[17] Elad Hazan et al. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157–325, 2016.
[18] Sanggyun Kim, Christopher J Quinn, Negar Kiyavash, and Todd P Coleman. Dynamic and succinct statistical analysis of neuroscience data. Proceedings of the IEEE, 102(5):683–698, 2014.
[19] Jyrki Kivinen, Alexander J Smola, and Robert C Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165–2176, 2004.
[20] Michael Krumin, Inna Reutsky, and Shy Shoham. Correlation-based analysis and generation of multiple spike trains using Hawkes models with an exogenous input. Frontiers in Computational Neuroscience, 4, 2010.
[21] Jure Leskovec, Lars Backstrom, and Jon Kleinberg. Meme-tracking and the dynamics of the news cycle. International Conference on Knowledge Discovery and Data Mining, pages 497–506, 2009.
[22] Thomas Josef Liniger. Multivariate Hawkes processes. PhD thesis, Eidgenössische Technische Hochschule ETH Zürich, 2009.
[23] Tohru Ozaki. Maximum likelihood estimation of Hawkes' self-exciting point processes. Annals of the Institute of Statistical Mathematics, 31(1):145–155, 1979.
[24] Patricia Reynaud-Bouret, Sophie Schbath, et al. Adaptive estimation for Hawkes processes; application to genome analysis. The Annals of Statistics, 38(5):2781–2822, 2010.
[25] Bernhard Schölkopf, Ralf Herbrich, and Alex J Smola. A generalized representer theorem. International Conference on Computational Learning Theory, pages 416–426, 2001.
[26] Lieven Vandenberghe and Stephen Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
[27] Hongteng Xu, Mehrdad Farajtabar, and Hongyuan Zha. Learning Granger causality for Hawkes processes. International Conference on Machine Learning, 48:1717–1726, 2016.
[28] Shuang-Hong Yang and Hongyuan Zha. Mixture of mutually exciting processes for viral diffusion. International Conference on Machine Learning, 28:1–9, 2013.
[29] Ke Zhou, Hongyuan Zha, and Le Song. Learning triggering kernels for multi-dimensional Hawkes processes. International Conference on Machine Learning, 28:1301–1309, 2013.
| 7079 |@word trial:1 version:3 rising:1 polynomial:2 seems:1 open:1 simulation:1 elisseeff:1 tr:4 memetracker:3 series:3 score:3 tuned:1 rkhs:5 interestingly:1 outperforms:1 existing:2 discretization:7 comparing:2 written:1 readily:1 numerical:3 partition:1 visible:1 predetermined:1 shape:1 plot:2 update:5 stationary:1 intelligence:2 selected:1 website:2 amir:1 beginning:1 coleman:1 core:1 detecting:1 characterization:1 herbrich:1 simpler:1 zhang:1 along:2 enterprise:1 direct:1 c2:2 ozaki:1 farahmand:1 mhp:47 introduce:4 theoretically:3 market:1 behavior:2 sdp:3 multi:1 discretized:3 decreasing:4 resolve:1 cpu:2 window:2 estimating:5 underlying:1 moreover:1 maximizes:1 notation:4 bounded:6 bhojanapalli:1 medium:1 argmin:1 developed:1 finding:1 nj:1 guarantee:2 dueck:1 unexplored:1 every:1 tackle:1 finance:1 biometrika:1 control:1 partitioning:1 grant:2 omit:1 producing:1 positive:5 before:1 engineering:2 todd:1 despite:1 accumulates:1 laurent:1 solely:1 initialization:3 studied:1 resembles:1 collect:1 relaxing:1 co:1 limited:1 kfi:2 statistically:1 bi:7 averaged:1 directed:1 earthquake:1 regret:10 definite:2 procedure:1 area:1 bell:1 shoham:1 eth:1 projection:9 boyd:1 cannot:1 close:3 ga:1 context:1 influence:1 applying:2 yee:1 equivalent:1 straightforward:1 urich:1 truncating:1 convex:5 independently:1 formulate:1 ke:1 estimator:6 kushagra:1 deriving:1 vandenberghe:1 financial:1 ralf:1 stability:3 increment:1 updated:1 annals:3 suppose:3 programming:4 homogeneous:1 ogd:10 designing:1 olivier:1 element:2 roy:1 expensive:1 satisfying:1 approximated:1 trend:1 muri:1 electrical:1 capture:3 ensures:2 news:4 cycle:1 decrease:1 mentioned:1 agency:2 convexity:2 complexity:4 meme:1 ideally:1 dynamic:5 motivate:1 solving:3 upon:1 f2:3 eric:1 basis:4 seth:1 bouret:1 various:1 train:3 describe:1 artificial:2 refined:1 ossische:1 jean:4 widely:1 elad:1 loglikelihood:1 ability:2 statistic:4 gi:3 jointly:1 online:30 sequence:3 took:3 combining:1 achieve:1 intuitive:1 khalil:1 scalability:3 olkopf:1 exploiting:1 convergence:1 macintosh:1 tk:35 develop:1 inferencing:1 andrew:1 measured:1 solves:1 p2:8 ois:4 involves:1 trading:1 come:1 direction:1 correct:1 lars:1 kb:2 require:5 f1:3 fix:1 generalization:5 microstructure:1 proposition:3 frontier:1 hold:4 around:2 sufficiently:1 ground:1 hall:1 exp:6 k2h:2 achieves:4 optimizer:1 adopt:1 estimation:8 diminishes:1 condensed:1 aim:1 zhou:1 jalal:3 corollary:1 rainer:1 derived:2 focus:2 rank:2 likelihood:7 mainly:1 reynaud:1 greatly:1 industrial:1 attains:1 kim:1 streaming:1 inaccurate:1 josef:1 issue:2 denoted:4 priori:2 proposes:1 art:1 initialize:1 mutual:2 once:1 f3:1 shaped:1 beach:1 representer:5 jon:1 future:1 ephane:1 sanghavi:1 simplify:1 franc:5 delayed:2 phase:2 geometry:1 lebesgue:1 explosive:1 investigate:1 mining:1 patricia:1 mixture:1 semidefinite:1 coste:1 integral:4 partial:1 necessary:1 xy:2 desired:1 causal:1 theoretical:2 minimal:1 leskovec:1 xeon:1 modeling:4 earlier:1 cover:2 w911nf:2 goodness:2 measuring:1 phrase:1 cost:5 surpasses:1 singhal:1 euler:1 kl1:1 technische:1 comprised:1 shuang:1 conducted:1 characterize:1 reported:1 hochschule:1 varies:1 synthetic:3 combined:1 adaptively:1 st:2 peak:1 sensitivity:1 international:6 siam:1 michael:2 together:1 quickly:1 again:1 thesis:1 positivity:2 american:1 derivative:1 return:1 michel:1 li:20 coefficient:8 matter:1 satisfy:2 caused:2 performed:1 break:1 exogenous:1 hazan:1 sup:1 portion:3 characterizes:1 decaying:2 start:1 complicated:1 parallel:1 candes:1 zha:3 contribution:1 il:1 square:1 
ni:8 accuracy:1 largely:2 correspond:1 t3:1 interarrival:1 processor:1 acc:1 definition:10 frequency:2 pp:1 naturally:1 associated:1 proof:1 dataset:1 adjusting:1 popular:2 recall:1 knowledge:3 hilbert:3 sophisticated:1 originally:1 dt:2 day:2 formulation:4 strongly:2 mar:1 furthermore:1 smola:2 lastly:1 correlation:1 hand:1 horizontal:1 ei:2 christopher:1 nonlinear:1 assessment:1 nonparametrically:2 usa:1 effect:4 xiaodong:1 true:3 regularization:6 hence:2 symmetric:1 iteratively:1 illustrated:1 round:1 self:2 ffas:1 hawkes:18 excitation:1 hong:1 generalized:2 whye:1 complete:1 l1:5 jacek:1 instantaneous:3 consideration:1 fi:50 common:2 viral:1 physical:1 eichler:1 exponentially:1 volume:1 extend:1 he:1 approximates:1 tail:10 belong:1 numerically:1 willett:1 significant:1 refer:1 lieven:1 smoothness:1 sujay:1 grid:2 mathematics:1 illinois:2 soltanolkotabi:1 dino:1 longer:1 base:3 multivariate:8 recent:1 showed:2 scenario:2 tikhonov:2 nonconvex:1 onr:1 discretizing:2 discussing:1 integrable:1 minimum:2 subplot:1 determine:1 signal:1 ii:5 semi:2 multiple:3 stephen:1 anastasios:1 champaign:1 alan:1 faster:2 offer:1 long:1 retrieval:2 e1:1 mle:17 impact:1 basic:2 poisson:3 iteration:10 kernel:15 sometimes:1 represent:2 tailored:1 limt:1 sejdinovic:1 c1:5 addition:1 fine:1 interval:8 sch:1 unlike:2 posse:1 sure:1 subject:2 flow:2 seem:1 structural:1 yang:2 counting:2 wirtinger:2 iii:3 enough:3 affect:2 fit:2 competing:1 triggering:28 restrict:1 inner:1 simplifies:1 tm:2 kyrillidis:1 br:1 t0:3 url:1 penalty:1 song:1 algebraic:1 generally:1 clear:1 detailed:1 johannes:1 amount:1 nonparametric:18 extensively:1 ph:1 simplest:1 generate:1 specifies:1 restricts:1 massoud:1 andr:1 notice:1 estimated:3 neuroscience:2 per:6 dropping:1 emaud:1 key:1 threshold:1 marie:1 diffusion:2 graph:2 run:6 uncertainty:1 massouli:1 farajtabar:1 almost:1 reader:1 reasonable:1 appendix:4 scaling:1 comparable:2 capturing:1 bound:11 internet:1 guaranteed:1 followed:1 quadratic:2 constraint:2 alex:1 btk:1 encodes:1 bousquet:1 kleinberg:1 aspect:1 span:1 optimality:1 min:14 performing:1 relatively:2 department:2 truncate:1 combination:1 describes:1 smaller:2 backstrom:1 intuitively:1 computationally:1 equation:1 mutually:2 remains:1 discus:1 count:1 granger:1 know:1 end:4 adopted:3 operation:3 hatted:1 observe:2 enforce:1 fbi:14 quinn:1 pierre:1 batch:12 original:1 thomas:1 include:1 ensure:2 graphical:1 log10:1 liniger:1 exploit:1 emmanuel:5 especially:1 erfc:1 approximating:1 implied:1 quantity:1 spike:3 parametric:13 costly:2 rt:2 inna:1 niao:1 bagnell:1 mehrdad:1 evolutionary:2 gradient:3 link:1 bacry:4 outer:1 reason:1 assuming:1 length:1 gramian:1 relationship:1 mini:1 minimizing:1 setup:1 executed:1 truncates:1 kun:1 robert:1 relate:1 dahlhaus:1 dnj:2 negative:2 design:3 unknown:1 perform:1 teh:1 contributed:1 upper:2 vertical:1 observation:2 etesami:3 datasets:2 urbana:2 finite:3 benchmark:3 descent:2 t:1 truncated:1 defining:1 kiyavash:5 reproducing:4 arbitrary:2 intensity:16 rebecca:1 introduced:1 inverting:1 c3:1 dts:1 hour:2 nip:1 address:1 able:1 jure:1 dynamical:1 pattern:2 mismatch:2 usually:1 yc:1 regime:4 sparsity:1 challenge:4 below:2 muzy:4 including:2 max:1 event:5 business:1 circumvent:1 regularized:4 kivinen:1 axis:6 flaxman:1 prior:1 literature:1 discovery:1 review:1 kf:1 hongteng:1 loss:24 negar:4 expect:1 generation:2 shy:1 foundation:1 eidgen:1 s0:2 article:1 exciting:4 row:1 prone:1 supported:1 truncation:4 infeasible:1 drastically:1 side:1 mismatched:1 institute:1 absolute:1 sparse:2 dni:1 
ghz:1 liquidity:1 dimension:24 evaluating:2 cumulative:3 genome:1 fb:2 author:1 made:1 adaptive:1 simplified:1 social:1 transaction:4 approximate:2 bernhard:1 global:1 active:1 hongyuan:3 xi:7 thep:1 alternatively:1 spectrum:1 continuous:1 search:2 why:2 table:4 nature:1 learn:3 robust:1 ca:1 controllable:1 williamson:1 european:1 meanwhile:1 posted:1 complex:1 diag:2 linearly:1 motivation:1 hyperparameters:3 arrival:11 scored:1 succinct:1 categorized:1 xu:1 causality:2 intel:1 embeds:1 fails:1 dfi:3 exponential:6 lie:1 dmd:11 mahdi:1 srinadh:1 theorem:12 minute:2 krumin:1 explored:1 concern:1 ci:1 phd:1 te:1 horizon:3 lt:4 army:1 visual:1 lagrange:1 prevents:2 tracking:2 supk:1 springer:1 corresponds:1 minimizer:1 satisfies:3 truth:1 formulated:1 jyrki:1 towards:1 content:1 included:1 typical:3 reducing:1 uniformly:1 sophie:1 oise:1 alexander:1 evaluate:2 phenomenon:1 |
An Object-Oriented Framework for the Simulation of Neural Nets
A. Linden
Th. Sudbrak
Ch. Tietz
F. Weber
German National Research Center for Computer Science
D-5205 Sankt Augustin 1, Germany
Abstract
The field of software simulators for neural networks has been expanding very rapidly in the last years but their importance is still
being underestimated. They must provide increasing levels of assistance for the design, simulation and analysis of neural networks.
With our object-oriented framework (SESAME) we intend to show
that very high degrees of transparency, manageability and flexibility for complex experiments can be obtained. SESAME's basic design philosophy is inspired by the natural way in which researchers
explain their computational models. Experiments are performed
with networks of building blocks, which can be extended very easily. Mechanisms have been integrated to facilitate the construction
and analysis of very complex architectures. Among these mechanisms are the automatic configuration of building blocks for an
experiment and multiple inheritance at run-time.
1 Introduction
In recent years a lot of work has been put into the development of simulation
systems for neural networks [1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12]. Unfortunately their
importance has been largely underestimated. In future, software environments will
provide increasing levels of assistance for the design, simulation and analysis of
neural networks as well as for other pattern and signal processing architectures. Yet
large improvements are still necessary in order to fulfill the growing demands of the
research community. Despite the existence of at least 100 software simulators, only
very few of them can deal with, e. g., multiple learning paradigms and applications, or very large experiments.
In this paper we describe an object-oriented framework for the simulation of neural
networks and try to illustrate its flexibility, transparency and extendability. The
prototype called SESAME has been implemented using C++ (on UNIX workstations
running X-Windows) and currently consists of about 39,000 lines of code, implementing over 80 classes for neural network algorithms, pattern handling, graphical
output and other utilities.
2
Philosophy of Design
The main objective of SESAME is to allow for arbitrary combinations of different
learning and pattern processing paradigms (e. g. supervised, unsupervised, self-supervised or reinforcement learning) and different application domains (e. g. pattern recognition, vision, speech or control). To some degree the design of SESAME
has been based on the observation that many researchers explain their neural information processing systems (NIPS) with block-diagrams. Such a block diagram
consists of a group of primitive elements (building blocks). Each building block has
inputs and outputs and a functional relationship between them. Connections describe the flow of data between the building blocks. Scripts related to the building
blocks specify the flow of control. Complex NIPS are constructed from a library
of building blocks (possibly themselves whole NIPS), which are interconnected via
uniform communication links.
3
SESAME Design and Features
All building blocks share a list of common components. They all have insites and
outsites that build the endpoints of communication links. Datafields contain the
data (e. g. weight matrices or activation vectors) which is sent over the links. Action functions process input from the insites, update the internal state and compute
appropriate outputs, e. g. performing weight updates and propagating activation or
error vectors. Command functions provide a uniform user interface for all building blocks. Scripts control the execution of action or command functions or other
scripts. They may contain conditional statements and loops as control structures. Furthermore, a symbol table allows run-time access to parameters of the building block, such as learning rates, sizes, data ranges, etc. Many other internal data structures and routines are provided for the administration and maintenance of building
blocks.
The description of an experiment may be divided into the functional description of the building blocks (which can be done either in C++ or in the high-level description language, see below), the connection topology of the building blocks used, the control flow defined by scripts, and a set of parameters for each of the building blocks.
Design Highlights
3.1
User Interface
The user interface is text oriented and may be used interactively as well as script
driven. This implies that any command that the user may choose interactively can
also be used in a command file that is called non-interactively. This allows the easy
adaption of arbitrary user interface structures from a primitive batch interface for
large offline simulatiors to a fancy graphical user interface for online experiments.
Another consequence is that experiments are specified in the same command language that is used for the user interface. The user may thus easily switch from
description files from previously saved experiments to the interactive manipulation
of already loaded ones. Since the complete structure of an experiment is accessible
at runtime, this not only means manipulation of parameters but also includes any
imaginable modification of the experiment topology. The experienced user can, for
example, include new building blocks for experiment observation or statistical evaluation and connect them to any point of the communication structure. Deletion of
building blocks is possible, as well as modifying control scripts. The complete state
of the experiment (i. e. the current values of all relevant data) can be saved for later
experiments.
3.2
Hierarchies
In SESAME we distinguish two kinds of building blocks: terminal and non-terminal
blocks. Non-terminal building blocks are used to structure a complex experiment
into hierarchies of abstract building blocks containing substructures of an experiment that may themselves contain hierarchies of substructures. Terminal building
blocks provide the data structures and primitive functions that are used in scripts
(of non-terminal blocks) to compose the network algorithms. A non-terminal building block hides its internal structure and provides abstract sites and scripts as an
interface to its internals. Therefore it appears as a terminal building block to the
outside and may be used as such for the construction of an experiment. This construction is equivalent to the building of a single non-terminal building block (the
Experiment) that encloses the complete experiment structure.
3.3
Construction of New Building Blocks
The functionality of SESAME can be extended using two different approaches. New
terminal building blocks can be programmed deriving from existing C++ classes or
new non-terminal building blocks may be assembled by using previously defined
building blocks:
3.3.1
Programming New Terminal Building Blocks
Terminal building blocks can be designed by derivation from already existing C++
classes. The complete administration structure and possible predefined properties
are inherited from the parent classes. In order to add new properties - e. g. new
action functions, symbols, datafields, insites or outsites - a set of basic operations is provided by the framework. One should note that new algorithms and structures can be added to a class without any changes to the framework of SESAME.
3.3.2
Composing New Non-Terminal Building Blocks
Non-terminal building blocks can be combined from libraries of already designed
terminal or non-terminal blocks. See for an example Figure 1, where a set of building
blocks build a multilayer net which can be collapsed into one building block and
reused in other contexts. Here insites and outsites define an interface between
building blocks on adjacent levels of the experiment hierarchy. The flow of data
inside the new building block is controlled by scripts that call action functions or
scripts of its components. Such an abstract building block may be saved in a library
for reuse. Even whole experiments can be collapsed to one building block leaving
a lot of possibilities for the experimenter to cope with very large and complicated
experiments.
3.3.3
Deriving New Non-Terminal Building Blocks
A powerful mechanism for organizing very complex experiments and allowing high
degrees of flexibility and reuse is offered by the concept of inheritance. The basic
mechanism executes the description of the parent building block and thereafter
the description of the refinements for the derived block. All this may be done
interactively, thus additional refinements can be added at runtime. Even the set of
formal parameters of a block may be inherited and/or refined. Multiple inheritance
is also possible.
For an example consider a general function approximator which may be used at
many points in a more complex architecture. It can be implemented as an abstract
base building block, only supplying basic structure such as inputs and outputs and basic operations such as "propagate input" and "train". Derivations of it then implement the
algorithm and structure actually used. Statistical routines, visualization facilities,
pattern handling and other utilities can be added as further specializations to a
basic function approximator.
3.3.4
Parameters and Generic Building Blocks
A building block may also define formal parameters that allow the user to configure it at the time of its instantiation or inclusion into some other non-terminal
building block. Thus non-terminal building blocks can be generic. They may be
parameterized with types for interior building blocks, names of scripts etc. With
this mechanism a multilayer net can be created with an arbitrary type of node or
weight layers.
3.4
Autoconfiguration
When a user defines an experiment, only parameters that are really important
must be specified. Redundant parameters, which depend on parameters of other
building blocks, can often be determined automatically. In SESAME this is done via
a constraint satisfaction process. Not only does this mechanism avoid specification
of redundant information and check experiment parameters for consistency, but it
also enables the construction of generic structures. Communication links between
outsites and insites of building blocks check data for matching types. Building blocks
impose additional constraints on the data formats of their own sites. Constraints are
formed upon information about the base types, dimensions, sizes and ranges of the
data sent between the sites. The primary source of information are the parameters
given to the building blocks at the time of their instantiation. After building the
whole experiment, a propagation mechanism iteratively tries to complete missing
information in order to satisfy all constraints. Thus information which is determined
in one building block of the experiment may spread all over the experiment topology.
As an example one can think of a building block which loads patterns from a
file. The dimensionality of these patterns may be used automatically to configure
building blocks holding weight layers for a multilayer network.
This autoconfiguration can be considered as finding the unique solution of a set of equations where three cases may occur: inconsistency (contradiction between two information sources at one site), deadlock (insufficient information for a site) or success (unique solution). Inconsistencies indicate an erroneous design. Deadlocks indicate that the user has missed something.
3.5
Experiment Observation
Graphical output, file I/O or statistical analysis are usually not performed within
the normal building blocks which comprise the network algorithms. These features
are built into specialized utility building blocks that can be integrated at any point
of the experiment topology, even during experiment runs.
4
Classes of Building Blocks
SESAME supports a rich taxonomy of building blocks for experiment construction:
For neural networks one can use building blocks for complete node and weight layers
to construct multilayer networks. This granularity was chosen to allow for a more
efficient way of computation than with building blocks that contain single neurons
only. This level of abstraction still captures enough flexibility for many paradigms
of NIPS. However, terminal building blocks for complete classes of neural nets are also provided if efficiency is the first demand.
Mathematical building blocks perform arithmetic, trigonometric or more general
mathematical transformations, as scaling and normalization. Building blocks for
coding provide functionality to encode or decode patterns.
Utility building blocks provide access to the filesystem, where not only input or
output files can be dealt with but also other UNIX processes by means of pipes.
Others simply store structured or unstructured patterns to make them randomly
accessible.
Graphical building blocks can be used to display any kind of data no matter if
weight matrices, activation or error vectors are involved. This is a consequence of
the abstract view of combining building blocks with different functionality but a
uniform data interface. There are special building blocks for analysis which allow
for clustering, averaging, error analysis, plotting and other statistical evaluations.
Finally simulations (cart pole, robot-arms etc.) can also be incorporated into building blocks. Real-world applications or other software packages can be accessed via
specialized interface blocks.
5
Examples
Some illustrative examples for experiments can be found in [?] and many additional
and more complex examples in the SESAME documentation. The full documentation as well as the software are available via ftp (see below).
Here we sketch only briefly, how paradigms and applications from different domains
can be easily glued together as a natural consequence of the design of SESAME.
Figure 2 shows part of an experiment in which a robot arm is controlled via
a modified Kohonen feature map and a potential field path planner. The three
building blocks, workspace, map and planner form the main part of the experiment.
Workspace contains the simulation for the controlled robot arm and its graphical
display and map contains the feature map that is used to transform the map coordinates proposed by planner to robot arm configurations. The map has been trained
in another experiment to map the configuration space of the robot arm and the
planner may have stored the positions of obstacles with respect to the map coordinates in still another experiment. The configuration and obstacle map have been
saved as the results of the earlier experiments and are reused here. The map was
taken from a library that contains different flavors of feature maps in form of nonterminal building blocks and hides the details of its complicated inner structure.
The Views help to visualize the experiment and the Buffers are used to provide
start values for the experiment runs. A Subtractor is shown that generates control
inputs for the workspace by simply performing vector subtraction on subsequently
proposed state vectors for the robot arm simulation.
6
Epilogue
We designed an object-oriented neural network simulator to cope with the increasing demands imposed by the current lines of research. Our implementation offers
a high degree of flexibility for the experimental setup. Building blocks may be
combined to build complex experiments in short development cycles. The simulator framework provides mechanisms to detect errors in the experiment setup and
to provide parameters for generic subexperiments. A prototype was built, that is
in use as our main research tool for neural network experiments and is constantly
refined. Future developments are still necessary, e. g. to provide a graphical interface and more elegant mechanisms for the reuse of predefined building blocks.
Further research issues are the parallelization of SESAME and the compilation of
experiment parts to optimize their performance.
The software and its preliminary documentation can be obtained via ftp at
ftp.gmd.de in the directory gmd/as/sesame. Unfortunately we cannot provide
professional support at this moment.
Acknowledgments go to the numerous programmers and users of SESAME for all
the work, valuable discussions and hints.
An Object-Oriented Framework for the Simulation of Neural Nets
References
[1] B. Angeniol and P. Treleaven. The PYGMALION neural network programming environment. In R. Eckmiller, editor, Advanced Neural Computers, pages 167-175, Amsterdam, 1990. Elsevier Science Publishers B. V. (North-Holland).
[2] N. Goddard, K. Lynne, T. Mintz, and L. Bukys. Rochester connectionist simulator. Technical Report TR-233 (revised), Computer Science Dept, University
of Rochester, 1989.
[3] G. L. Heileman, H. K. Brown, and M. Georgiopoulos. Simulation of artificial neural network models using an object-oriented software paradigm. In Proceedings of the International Joint Conference on Neural Networks, pages II-133 - II-136, Washington, DC, 1990.
[4] NeuralWare Inc. NeuralWorks Professional II user manual. 1989.
[5] T. T. Kraft. AN Spec tutorial workbook. San Diego, CA, 1990.
[6] T. Lange, J. B. Hodges, M. Fuenmayor, and L. Belyaev. Descartes: Development environment for simulating hybrid connectionist architectures. In Proceedings of the Eleventh Annual Conference of the Cognitive Science Society,
Ann Arbor, MI, August 1989, 1989.
[7] A. Linden and C. Tietz. Combining multiple neural network paradigms and applications using SESAME. In Proceedings of the International Joint Conference
on Neural Networks IJCNN - Baltimore. IEEE, 1992.
[8] Y. Miyata. A user's guide to Sun Net version 5.6 - a tool for constructing,
running, and looking into a PDP network. 1990.
[9] J. M. J. Murre and S. E. Kleynenberg. The MetaNet network environment
for the development of modular neural networks. In Proceedings of the International Neural Network Conference, Paris, 1990, pages 717 - 720. IEEE,
1990.
[10] M. A. Wilson, U. S. Bhalla, J. D. Uhley, and J. M. Bower. GENESIS: A system
for simulating neural networks. In David S. Touretzky, editor, Advances in
Neural Information Processing Systems I, pages 485-492. Morgan Kaufmann,
1988. Collected papers of the IEEE Conference on Neural Information Processing Systems - Natural and Synthetic, Denver, CO, November 1988.
[11] A. Zell, N. Mache, T. Sommer, and T. Korb. Recent developments of the SNNS neural network simulator. In SPIE Conference on Applications of Artificial Neural Networks. Universität Stuttgart, April 1991.
Figure 1: Integration of several terminal building blocks into a non-terminal building block with the Backpropagation example.
Figure 2: Robot arm control with a hybrid controller
Alexandre Drouin
D?partement d?informatique et de g?nie logiciel
Universit? Laval, Qu?bec, Canada
[email protected]
Toby Dylan Hocking
McGill Genome Center
McGill University, Montr?al, Canada
[email protected]
Fran?ois Laviolette
D?partement d?informatique et de g?nie logiciel
Universit? Laval, Qu?bec, Canada
[email protected]
Abstract
Learning a regression function using censored or interval-valued output data is an
important problem in fields such as genomics and medicine. The goal is to learn a
real-valued prediction function, and the training output labels indicate an interval of
possible values. Whereas most existing algorithms for this task are linear models,
in this paper we investigate learning nonlinear tree models. We propose to learn
a tree by minimizing a margin-based discriminative objective function, and we
provide a dynamic programming algorithm for computing the optimal solution in
log-linear time. We show empirically that this algorithm achieves state-of-the-art
speed and prediction accuracy in a benchmark of several data sets.
1
Introduction
In the typical supervised regression setting, we are given set of learning examples, each associated
with a real-valued output. The goal is to learn a predictor that accurately estimates the outputs,
given new examples. This fundamental problem has been extensively studied and has given rise to
algorithms such as Support Vector Regression (Basak et al., 2007). A similar, but far less studied,
problem is that of interval regression, where each learning example is associated with an interval $(\underline{y}_i, \overline{y}_i)$, indicating a range of acceptable output values, and the expected predictions are real numbers. Interval-valued outputs arise naturally in fields such as computational biology and survival analysis. In the latter setting, one is interested in predicting the time until some adverse event, such as death, occurs. The available information is often limited, giving rise to outputs that are said to be either un-censored ($-\infty < \underline{y}_i = \overline{y}_i < \infty$), left-censored ($-\infty = \underline{y}_i < \overline{y}_i < \infty$), right-censored ($-\infty < \underline{y}_i < \overline{y}_i = \infty$), or interval-censored ($-\infty < \underline{y}_i < \overline{y}_i < \infty$) (Klein and Moeschberger, 2005).
For instance, right censored data occurs when all that is known is that an individual is still alive after
a period of time. Another recent example is from the field of genomics, where interval regression
was used to learn a penalty function for changepoint detection in DNA copy number and ChIP-seq
data (Rigaill et al., 2013). Despite the ubiquity of this type of problem, there are surprisingly few
existing algorithms that have been designed to learn from such outputs, and most are linear models.
Decision tree algorithms have been proposed in the 1980s with the pioneering work of Breiman et al.
(1984) and Quinlan (1986). Such algorithms rely on a simple framework, where trees are grown by
recursive partitioning of leaves, each time maximizing some task-specific criterion. Advantages of
these algorithms include the ability to learn non-linear models from both numerical and categorical
data of various scales, and having a relatively low training time complexity. In this work, we extend
the work of Breiman et al. (1984) to learning non-linear interval regression tree models.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
1.1
Contributions and organization
Our first contribution is Section 3, in which we propose a new decision tree algorithm for interval
regression. We propose to partition leaves using a margin-based hinge loss, which yields a sequence of
convex optimization problems. Our second contribution is Section 4, in which we propose a dynamic
programming algorithm that computes the optimal solution to all of these problems in log-linear time.
In Section 5 we show that our algorithm achieves state-of-the-art prediction accuracy in several real
and simulated data sets. In Section 6 we discuss the significance of our contributions and propose
possible future research directions. An implementation is available at https://git.io/mmit.
2
Related work
The bulk of related work comes from the field of survival analysis. Linear models for censored
outputs have been extensively studied under the name accelerated failure time (AFT) models (Wei,
1992). Recently, L1-regularized variants have been proposed to learn from high-dimensional data
(Cai et al., 2009; Huang et al., 2005). Nonlinear models for censored data have also been studied,
including decision trees (Segal, 1988; Molinaro et al., 2004), Random Forests (Hothorn et al., 2006)
and Support Vector Machines (Pölsterl et al., 2016). However, most of these algorithms are limited to
the case of right-censored and un-censored data. In contrast, in the interval regression setting, the data
are either left, right or interval-censored. To the best of our knowledge, the only existing nonlinear
model for this setting is the recently proposed Transformation Tree of Hothorn and Zeileis (2017).
Another related method, which shares great similarity with ours, is the L1-regularized linear models
of Rigaill et al. (2013). Like our proposed algorithm, their method optimizes a convex loss function
with a margin hyperparameter. Nevertheless, one key limitation of their algorithm is that it is limited
to modeling linear patterns, whereas our regression tree algorithm is not.
3
Problem

3.1
Learning from interval outputs

Let $S \stackrel{\text{def}}{=} \{(\mathbf{x}_1, \mathbf{y}_1), \dots, (\mathbf{x}_n, \mathbf{y}_n)\} \sim D^n$ be a data set of $n$ learning examples, where $\mathbf{x}_i \in \mathbb{R}^p$ is a feature vector, $\mathbf{y}_i = (\underline{y}_i, \overline{y}_i)$, with $\underline{y}_i, \overline{y}_i \in \mathbb{R}$ and $\underline{y}_i < \overline{y}_i$, are the lower and upper limits of a target interval, and $D$ is an unknown data generating distribution. In the interval regression setting, a predicted value is only considered erroneous if it is outside of the target interval.

Formally, let $\ell : \mathbb{R} \to \mathbb{R}$ be a function and define $\phi_\ell(x) \stackrel{\text{def}}{=} \ell[(x)_+]$ as its corresponding hinge loss, where $(x)_+$ is the positive part function, i.e. $(x)_+ = x$ if $x > 0$ and $(x)_+ = 0$ otherwise. In this work, we will consider two possible hinge loss functions: the linear one, where $\ell(x) = x$, and the squared one where $\ell(x) = x^2$. Our goal is to find a function $h : \mathbb{R}^p \to \mathbb{R}$ that minimizes the expected error on data drawn from $D$:

$$\operatorname*{minimize}_h \; \mathbb{E}_{(\mathbf{x}_i, \mathbf{y}_i) \sim D} \; \phi_\ell(-h(\mathbf{x}_i) + \underline{y}_i) + \phi_\ell(h(\mathbf{x}_i) - \overline{y}_i).$$

Notice that, if $\ell(x) = x^2$, this is a generalization of the mean squared error to interval outputs. Moreover, this can be seen as a surrogate to a zero-one loss that measures if a predicted value lies within the target interval (Rigaill et al., 2013).

3.2
Maximum margin interval trees

We will seek an interval regression tree model $T : \mathbb{R}^p \to \mathbb{R}$ that minimizes the total hinge loss on data set $S$:

$$C(T) \stackrel{\text{def}}{=} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in S} \Big[ \phi_\ell\big(-T(\mathbf{x}_i) + \underline{y}_i + \epsilon\big) + \phi_\ell\big(T(\mathbf{x}_i) - \overline{y}_i + \epsilon\big) \Big], \qquad (1)$$

where $\epsilon \in \mathbb{R}^+_0$ is a hyperparameter introduced to improve regularity (see supplementary material for details).
[Figure 1: two panels plotting cost against feature value $x_{ij}$, showing the interval limits, the threshold $\delta$, the margin $\epsilon$, and the predicted values $\mu_0, \mu_1, \mu_2$ for the parent leaf $\tau_0$ and the child leaves $\tau_1$ ($x_{ij} \le \delta$) and $\tau_2$ ($x_{ij} > \delta$).]

Figure 1: An example partition of leaf $\tau_0$ into leaves $\tau_1$ and $\tau_2$.
A decision tree is an arrangement of nodes and leaves. The leaves are responsible for making predictions, whereas the nodes guide the examples to the leaves based on the outcome of some boolean-valued rules (Breiman et al., 1984). Let $\tilde{T}$ denote the set of leaves in a decision tree $T$. Each leaf $\tau \in \tilde{T}$ is associated with a set of examples $S_\tau \subseteq S$, for which it is responsible for making predictions. The sets $S_\tau$ obey the following properties: $S = \bigcup_{\tau \in \tilde{T}} S_\tau$ and $S_\tau \cap S_{\tau'} \neq \emptyset \Leftrightarrow \tau = \tau'$. Hence, the contribution of a leaf $\tau$ to the total loss of the tree $C(T)$, given that it predicts $\mu \in \mathbb{R}$, is

$$C_\tau(\mu) \stackrel{\text{def}}{=} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in S_\tau} \phi_\ell(-\mu + \underline{y}_i + \epsilon) + \phi_\ell(\mu - \overline{y}_i + \epsilon) \qquad (2)$$

and the optimal predicted value for the leaf is obtained by minimizing this function over all $\mu \in \mathbb{R}$.
As in the CART algorithm (Breiman et al., 1984), our tree growing algorithm relies on recursive partitioning of the leaves. That is, at any step of the tree growing algorithm, we obtain a new tree $T'$ from $T$ by selecting a leaf $\tau_0 \in \tilde{T}$ and dividing it into two leaves $\tau_1, \tau_2 \in \tilde{T}'$, s.t. $S_{\tau_0} = S_{\tau_1} \cup S_{\tau_2}$ and $\tau_0 \notin \tilde{T}'$. This partitioning results from applying a boolean-valued rule $r : \mathbb{R}^p \to \mathbb{B}$ to each example $(\mathbf{x}_i, \mathbf{y}_i) \in S_{\tau_0}$ and sending it to $\tau_1$ if $r(\mathbf{x}_i) = \text{True}$ and to $\tau_2$ otherwise. The rules that we consider are threshold functions on the value of a single feature, i.e., $r(\mathbf{x}_i) \stackrel{\text{def}}{=} \text{``} x_{ij} \le \delta \text{''}$. This is illustrated in Figure 1. According to Equation (2), for any such rule, we have that the total hinge losses for the examples that are sent to $\tau_1$ and $\tau_2$ are

$$C_{\tau_1}(\mu) = \overleftarrow{C_{\tau_0}}(\mu \,|\, j, \delta) \stackrel{\text{def}}{=} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in S_{\tau_0} :\, x_{ij} \le \delta} \phi_\ell(-\mu + \underline{y}_i + \epsilon) + \phi_\ell(\mu - \overline{y}_i + \epsilon) \qquad (3)$$

$$C_{\tau_2}(\mu) = \overrightarrow{C_{\tau_0}}(\mu \,|\, j, \delta) \stackrel{\text{def}}{=} \sum_{(\mathbf{x}_i, \mathbf{y}_i) \in S_{\tau_0} :\, x_{ij} > \delta} \phi_\ell(-\mu + \underline{y}_i + \epsilon) + \phi_\ell(\mu - \overline{y}_i + \epsilon). \qquad (4)$$

The best rule is the one that leads to the smallest total cost $C(T')$. This rule, as well as the optimal predicted values for $\tau_1$ and $\tau_2$, are obtained by solving the following optimization problem:

$$\operatorname*{argmin}_{j, \delta, \mu_1, \mu_2} \; \overleftarrow{C_{\tau_0}}(\mu_1 \,|\, j, \delta) + \overrightarrow{C_{\tau_0}}(\mu_2 \,|\, j, \delta). \qquad (5)$$

In the next section we propose a dynamic programming algorithm for this task.
4
Algorithm

First note that, for a given $j, \delta$, the optimization separates into two convex minimization sub-problems, which each amount to minimizing a sum of convex loss functions:

$$\min_{j, \delta, \mu_1, \mu_2} \Big[ \overleftarrow{C_\tau}(\mu_1 \,|\, j, \delta) + \overrightarrow{C_\tau}(\mu_2 \,|\, j, \delta) \Big] = \min_{j, \delta} \Big[ \min_{\mu_1} \overleftarrow{C_\tau}(\mu_1 \,|\, j, \delta) + \min_{\mu_2} \overrightarrow{C_\tau}(\mu_2 \,|\, j, \delta) \Big]. \qquad (6)$$
We will show that if there exists an efficient dynamic program $\Theta$ which, given any set of hinge loss functions defined over $\mu$, computes their sum and returns the minimum value, along with a minimizing value of $\mu$, the minimization problem of Equation (6) can be solved efficiently.

Observe that, although there is a continuum of possible values for $\delta$, we can limit the search to the values of feature $j$ that are observed in the data (i.e., $\delta \in \{x_{ij}\,;\, i = 1, \dots, n\}$), since all other values do not lead to different configurations of $S_{\tau_1}$ and $S_{\tau_2}$. Thus, there are at most $n_j \le n$ unique thresholds to consider for each feature. Let these thresholds be $\delta_{j,1} < \dots < \delta_{j,n_j}$. Now, consider $\Delta_{j,k}$ as the set that contains all the losses $\phi_\ell(-\mu + \underline{y}_i + \epsilon)$ and $\phi_\ell(\mu - \overline{y}_i + \epsilon)$ for which we have $(\mathbf{x}_i, \mathbf{y}_i) \in S_{\tau_0}$ and $x_{ij} = \delta_{j,k}$. Since we now only consider a finite number of $\delta$-values, it follows from Equation (3) that one can obtain $\overleftarrow{C_\tau}(\mu_1 \,|\, j, \delta_{j,k})$ from $\overleftarrow{C_\tau}(\mu_1 \,|\, j, \delta_{j,k-1})$ by adding all the losses in $\Delta_{j,k}$. Similarly, one can also obtain $\overrightarrow{C_\tau}(\mu_1 \,|\, j, \delta_{j,k})$ from $\overrightarrow{C_\tau}(\mu_1 \,|\, j, \delta_{j,k-1})$ by removing all the losses in $\Delta_{j,k}$ (see Equation (4)). This, in turn, implies that $\min_\mu \overleftarrow{C_\tau}(\mu \,|\, j, \delta_{j,k}) = \Theta(\Delta_{j,1} \cup \dots \cup \Delta_{j,k})$ and $\min_\mu \overrightarrow{C_\tau}(\mu \,|\, j, \delta_{j,k}) = \Theta(\Delta_{j,k+1} \cup \dots \cup \Delta_{j,n_j})$.

Hence, the cost associated with a split on each threshold $\delta_{j,k}$ is given by:

$$\begin{aligned}
\delta_{j,1} &: \quad \Theta(\Delta_{j,1}) + \Theta(\Delta_{j,2} \cup \dots \cup \Delta_{j,n_j}) \\
&\;\;\vdots \\
\delta_{j,i} &: \quad \Theta(\Delta_{j,1} \cup \dots \cup \Delta_{j,i}) + \Theta(\Delta_{j,i+1} \cup \dots \cup \Delta_{j,n_j}) \\
&\;\;\vdots \\
\delta_{j,n_j-1} &: \quad \Theta(\Delta_{j,1} \cup \dots \cup \Delta_{j,n_j-1}) + \Theta(\Delta_{j,n_j})
\end{aligned} \qquad (7)$$
and the best threshold is the one with the smallest cost. Note that, in contrast with the other thresholds, $\delta_{j,n_j}$ need not be considered, since it leads to an empty leaf. Note also that, since $\Theta$ is a dynamic program, one can efficiently compute Equation (7) by using $\Theta$ twice, from the top down for the first column and from the bottom up for the second. Below, we propose such an algorithm.
4.1
Definitions

A general expression for the hinge losses $\phi_\ell(-\mu + \underline{y}_i + \epsilon)$ and $\phi_\ell(\mu - \overline{y}_i + \epsilon)$ is $\phi_\ell(s_i(\mu - y_i) + \epsilon)$, where $s_i = -1$ or $1$ respectively. Now, choose any convex function $\ell : \mathbb{R} \to \mathbb{R}$ and let

$$P_t(\mu) \stackrel{\text{def}}{=} \sum_{i=1}^{t} \phi_\ell(s_i(\mu - y_i) + \epsilon) \qquad (8)$$

be a sum of $t$ hinge loss functions. In this notation, $\Theta(\Delta_{j,1} \cup \dots \cup \Delta_{j,i}) = \min_\mu P_t(\mu)$, where $t = |\Delta_{j,1} \cup \dots \cup \Delta_{j,i}|$.

Observation 1. Each of the $t$ hinge loss functions has a breakpoint at $y_i - s_i \epsilon$, where it transitions from a zero function to a non-zero one if $s_i = 1$ and the converse if $s_i = -1$.

For the sake of simplicity, we will now consider the case where these breakpoints are all different; the generalization is straightforward, but would needlessly complexify the presentation (see the supplementary material for details). Now, note that $P_t(\mu)$ is a convex piecewise function that can be uniquely represented as:

$$P_t(\mu) = \begin{cases}
p_{t,1}(\mu) & \text{if } \mu \in (-\infty, b_{t,1}] \\
\quad\vdots \\
p_{t,i}(\mu) & \text{if } \mu \in (b_{t,i-1}, b_{t,i}] \\
\quad\vdots \\
p_{t,t+1}(\mu) & \text{if } \mu \in (b_{t,t}, \infty)
\end{cases} \qquad (9)$$

where we will call $p_{t,i}$ the $i$th piece of $P_t$ and $b_{t,i}$ the $i$th breakpoint of $P_t$ (see Figure 2 for an example). Observe that each piece $p_{t,i}$ is the sum of all the functions that are non-zero on the interval $(b_{t,i-1}, b_{t,i}]$. It therefore follows from Observation 1 that

$$p_{t,i}(\mu) = \sum_{j=1}^{t} \ell[s_j(\mu - y_j) + \epsilon]\; I[(s_j = -1 \wedge b_{t,i-1} < y_j + \epsilon) \vee (s_j = 1 \wedge y_j - \epsilon < b_{t,i})] \qquad (10)$$

where $I[\cdot]$ is the (Boolean) indicator function, i.e., $I[\text{True}] = 1$ and $0$ otherwise.
[Figure 2: three panels showing $P_1(\mu)$ before and after optimization and $P_2(\mu)$, with pieces $p_{t,i}(\mu)$, breakpoints $b_{t,i}$, difference functions $f_{t,i}(\mu)$ and the pointed piece $M_t(\mu)$.]

Figure 2: First two steps of the dynamic programming algorithm for the data $y_1 = 4, s_1 = 1, y_2 = 1, s_2 = -1$ and margin $\epsilon = 1$, using the linear hinge loss ($\ell(x) = x$). Left: The algorithm begins by creating a first breakpoint at $b_{1,1} = y_1 - \epsilon = 3$, with corresponding function $f_{1,1}(\mu) = \mu - 3$. At this time, we have $j_1 = 2$ and thus $b_{1,j_1} = \infty$. Note that the cost $p_{1,1}$ before the first breakpoint is not yet stored by the algorithm. Middle: The optimization step is to move the pointer to the minimum ($J_1 = j_1 - 1$) and update the cost function, $M_1(\mu) = p_{1,2}(\mu) - f_{1,1}(\mu)$. Right: The algorithm adds the second breakpoint at $b_{2,1} = y_2 + \epsilon = 2$ with $f_{2,1}(\mu) = \mu - 2$. The cost at the pointer is not affected by the new data point, so the pointer does not move.
Lemma 1. For any $i \in \{1, \dots, t\}$, we have that $p_{t,i+1}(\mu) = p_{t,i}(\mu) + f_{t,i}(\mu)$, where $f_{t,i}(\mu) = s_k\, \ell[s_k(\mu - y_k) + \epsilon]$ for some $k \in \{1, \dots, t\}$ such that $y_k - s_k \epsilon = b_{t,i}$.

Proof. The proof relies on Equation (10) and is detailed in the supplementary material.
4.2
Minimizing a sum of hinge losses by dynamic programming

Our algorithm works by recursively adding a hinge loss to the total function $P_t(\mu)$, each time keeping track of the minima. To achieve this, we use a pointer $J_t$, which points to the rightmost piece of $P_t(\mu)$ that contains a minimum. Since $P_t(\mu)$ is a convex function of $\mu$, we know that this minimum is global. In the algorithm, we refer to the segment $p_{t,J_t}$ as $M_t$ and the essence of the dynamic programming update is moving $J_t$ to its correct position after a new hinge loss is added to the sum.

At any time step $t$, let $B_t = \{(b_{t,1}, f_{t,1}), \dots, (b_{t,t}, f_{t,t}) \,|\, b_{t,1} < \dots < b_{t,t}\}$ be the current set of breakpoints ($b_{t,i}$) together with their corresponding difference functions ($f_{t,i}$). Moreover, assume the convention $b_{t,0} = -\infty$ and $b_{t,t+1} = \infty$, which are defined, but not stored in $B_t$.

The initialization ($t = 0$) is

$$B_0 = \{\}, \quad J_0 = 1, \quad M_0(\mu) = 0. \qquad (11)$$

Now, at any time step $t > 0$, start by inserting the new breakpoint and difference function. Hence,

$$B_t = B_{t-1} \cup \{(y_t - s_t \epsilon,\; s_t\, \ell[s_t(\mu - y_t) + \epsilon])\}. \qquad (12)$$

Recall that, by definition, the set $B_t$ remains sorted after the insertion. Let $j_t \in \{1, \dots, t+1\}$ be the updated value for the previous minimum pointer ($J_{t-1}$) after adding the $t$th hinge loss (i.e., the index of $b_{t-1, J_{t-1}}$ in the sorted set of breakpoints at time $t$). It is obtained by adding 1 if the new breakpoint is before $J_{t-1}$ and 0 otherwise. In other words,

$$j_t = J_{t-1} + I[y_t - s_t \epsilon < b_{t-1, J_{t-1}}]. \qquad (13)$$

If there is no minimum of $P_t(\mu)$ in piece $p_{t,j_t}$, we must move the pointer from $j_t$ to its final position $J_t \in \{1, \dots, t+1\}$, where $J_t$ is the index of the rightmost function piece that contains a minimum:

$$J_t = \max_{i \in \{1, \dots, t+1\}} i, \;\text{ s.t. }\; (b_{t,i-1}, b_{t,i}] \cap \{x \in \mathbb{R} \,|\, P_t(x) = \min_\mu P_t(\mu)\} \neq \emptyset. \qquad (14)$$

See Figure 2 for an example. The minimum after optimization is in piece $M_t$, which is obtained by adding or subtracting a series of difference functions $f_{t,i}$. Hence, applying Lemma 1 multiple times,
[Figure 3: two panels; left, the max and average number of pointer moves $m$ as a function of the number of outputs $n$ for the linear and squared hinge losses on changepoint/UCI data sets; right, training time in seconds on simulated data sets with up to $10^7$ outputs.]
Figure 3: Empirical evaluation of the expected O(n(m + log n)) time complexity for n data points
and m pointer moves per data point. Left: max and average number of pointer moves m over all real
and simulated data sets we considered (median line and shaded quartiles over all features, margin
parameters, and data sets of a given size). We also observed m = O(1) pointer moves on average for
both the linear and squared hinge loss. Right: timings in seconds are consistent with the expected
O(n log n) time complexity.
we obtain:

$$M_t(\mu) \stackrel{\text{def}}{=} p_{t,J_t}(\mu) = p_{t,j_t}(\mu) + \begin{cases}
0 & \text{if } j_t = J_t \\
\sum_{i=j_t}^{J_t - 1} f_{t,i}(\mu) & \text{if } j_t < J_t \\
-\sum_{i=J_t}^{j_t - 1} f_{t,i}(\mu) & \text{if } J_t < j_t
\end{cases} \qquad (15)$$

Then, the optimization problem can be solved using $\min_\mu P_t(\mu) = \min_{\mu \in (b_{t,J_t - 1},\, b_{t,J_t}]} M_t(\mu)$. The proof of this statement is available in the supplementary material, along with detailed pseudocode and implementation details.
4.3
Complexity analysis
The $\ell$ functions that we consider are $\ell(x) = x$ and $\ell(x) = x^2$. Notice that any such function can be encoded by three coefficients $a, b, c \in \mathbb{R}$. Therefore, summing two functions amounts to summing
their respective coefficients and takes time O(1). The set of breakpoints Bt can be stored using any
data structure that allows sorted insertions in logarithmic time (e.g., a binary search tree).
Assume that we have n hinge losses. Inserting a new breakpoint at Equation (12) takes O(log n)
time. Updating the jt pointer at Equation (13) takes O(1). In contrast, the complexity of finding the
new pointer position Jt and updating Mt at Equations (14) and (15) varies depending on the nature
of $\ell$. For the case where $\ell(x) = x$, we are guaranteed that $J_t$ is at distance at most one from $j_t$. This is
demonstrated in Theorem 2 of the supplementary material. Since we can sum two functions in O(1)
time, we have that the worst case time complexity of the linear hinge loss algorithm is O(n log n).
However, for the case where $\ell(x) = x^2$, the worst case could involve going through the $n$ breakpoints. Hence, the worst case time complexity of the squared hinge loss algorithm is $O(n^2)$. Nevertheless, in
Section 5.1, we show that, when tested on a variety of real-world data sets, the algorithm achieved a
time complexity of O(n log n) in this case also.
Finally, the space complexity of this algorithm is $O(n)$, since a list of $n$ breakpoints ($b_{t,i}$) and difference functions ($f_{t,i}$) must be stored, along with the coefficients ($a, b, c \in \mathbb{R}$) of $M_t$. Moreover, it follows from Lemma 1 that the function pieces $p_{t,i}$ need not be stored, since they can be recovered using the $b_{t,i}$ and $f_{t,i}$.
5
Results

5.1
Empirical evaluation of time complexity
We performed two experiments to evaluate the expected O(n(m + log n)) time complexity for n
interval limits and m pointer moves per limit. First, we ran our algorithm (MMIT) with both squared
[Figure 4: three panels plotting the learned function $f(x)$ against the signal feature $x$ for the patterns $f(x) = |x|$, $f(x) = \sin(x)$ and $f(x) = x/5$, showing the lower and upper interval limits and the models learned by MMIT and L1-Linear.]
Figure 4: Predictions of MMIT (linear hinge loss) and the L1-regularized linear model of Rigaill
et al. (2013) (L1-Linear) for simulated data sets.
and linear hinge loss solvers on a variety of real-world data sets of varying sizes (Rigaill et al., 2013;
Lichman, 2013), and recorded the number of pointer moves. We plot the average and max pointer
moves over a wide range of margin parameters, and all possible feature orderings (Figure 3, left). In
agreement with our theoretical result (supplementary material, Theorem 2), we observed a maximum
of one move per interval limit for the linear hinge loss. On average we observed that the number of
moves does not increase with data set size, even for the squared hinge loss. These results suggest that
the number of pointer moves per limit is generally constant m = O(1), so we expect an overall time
complexity of O(n log n) in practice, even for the squared hinge loss. Second, we used the limits of
the target intervals in the neuroblastoma changepoint data set (see Section 5.3) to simulate data sets
from $n = 10^3$ to $n = 10^7$ limits. We recorded the time required to run the solvers (Figure 3, right),
and observed timings which are consistent with the expected O(n log n) complexity.
5.2
MMIT recovers a good approximation in simulations with nonlinear patterns
We demonstrate one key limitation of the margin-based interval regression algorithm of Rigaill
et al. (2013) (L1-Linear): it is limited to modeling linear patterns. To achieve this, we created three
simulated data sets, each containing 200 examples and 20 features. Each data set was generated in
such a way that the target intervals followed a specific pattern $f : \mathbb{R} \to \mathbb{R}$ according to a single feature,
which we call the signal feature. The width of the intervals and a small random shift around the true
value of f were determined randomly. The details of the data generation protocol are available in the
supplementary material. MMIT (linear hinge loss) and L1-Linear were trained on each data set, using
cross-validation to choose the hyperparameter values. The resulting data sets and the predictions of
each algorithm are illustrated in Figure 4. As expected, L1-Linear fails to fit the non-linear patterns,
but achieves a near perfect fit for the linear pattern. In contrast, MMIT learns stepwise approximations
of the true functions, which results from each leaf predicting a constant value. Notice the fluctuations
in the models of both algorithms, which result from using irrelevant features.
5.3
Empirical evaluation of prediction accuracy
In this section, we compare the accuracy of predictions made by MMIT and other learning algorithms
on real and simulated data sets.
Evaluation protocol To evaluate the accuracy of the algorithms, we performed 5-fold cross-validation and computed the mean squared error (MSE) with respect to the intervals in each of the five testing sets (Figure 5). For a data set $S = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^n$ with $\mathbf{x}_i \in \mathbb{R}^p$ and $\mathbf{y}_i \in \overline{\mathbb{R}}^2$, and for a model $h : \mathbb{R}^p \to \mathbb{R}$, the MSE is given by

$$\text{MSE}(h, S) = \frac{1}{n} \sum_{i=1}^{n} \Big( [h(\mathbf{x}_i) - \underline{y}_i]^2\, I[h(\mathbf{x}_i) < \underline{y}_i] + [h(\mathbf{x}_i) - \overline{y}_i]^2\, I[h(\mathbf{x}_i) > \overline{y}_i] \Big). \qquad (16)$$
[Figure 5: dot plots of test error for MMIT-S, MMIT-L, Interval-CART, TransfoTree, L1-Linear and Constant on seven panels: changepoint neuroblastoma (n=3418, p=117, 0% finite, 16% up), changepoint histone (n=935, p=26, 47% finite, 32% up), UCI triazines (n=186, p=60, 93% finite, 50% up), UCI servo (n=167, p=19, 92% finite, 50% up), and simulated linear, sin and abs (n=200, p=20); x-axis: log10(mean squared test error) in 5-fold CV, one point per fold.]
Figure 5: MMIT testing set mean squared error is comparable to, or better than, other interval
regression algorithms in seven real and simulated data sets. Five-fold cross-validation was used to
compute 5 test error values (points) for each model in each of the data sets (panel titles indicate data
set source, name, number of observations=n, number of features=p, proportion of intervals with finite
limits and proportion of all interval limits that are upper limits).
At each step of the cross-validation, another cross-validation (nested within the former) was used
to select the hyperparameters of each algorithm based on the training data. The hyperparameters
selected for MMIT are available in the supplementary material.
Algorithms The linear and squared hinge loss variants of Maximum Margin Interval Trees (MMITL and MMIT-S) were compared to two state-of-the-art interval regression algorithms: the marginbased L1-regularized linear model of Rigaill et al. (2013) (L1-Linear) and the Transformation Trees
of Hothorn and Zeileis (2017) (TransfoTree). Moreover, two baseline methods were included in the
comparison. To provide an upper bound for prediction error, we computed the trivial model that
ignores all features and just learns a constant function h(x) = ? that minimizes the MSE on the
training data (Constant). To demonstrate the importance of using a loss function designed for interval
regression, we also considered the CART algorithm (Breiman et al., 1984). Specifically, CART was
used to fit a regular regression tree on a transformed training set, where each interval regression
example (x, [y, y]) was replaced by two real-valued regression examples with features x and labels
y + and y ? . This algorithm, which we call Interval-CART, uses a margin hyperparameter and
minimizes a squared loss with respect to the interval limits. However, in contrast with MMIT, it does
not take the structure of the interval regression problem into account, i.e., it ignores the fact that no
cost should be incurred for values predicted inside the target intervals.
Results in changepoint data sets The problem in the first two data sets is to learn a penalty
function for changepoint detection in DNA copy number and ChIP-seq data (Hocking et al., 2013;
Rigaill et al., 2013), two significant interval regression problems from the field of genomics. For the
neuroblastoma data set, all methods, except the constant model, perform comparably. Interval-CART
achieves the lowest error for one fold, but L1-Linear is the overall best performing method. For the
histone data set, the margin-based models clearly outperform the non-margin-based models: Constant
and TransfoTree. MMIT-S achieves the lowest error on one of the folds. Moreover, MMIT-S tends
to outperform MMIT-L, suggesting that a squared loss is better suited for this task. Interestingly,
MMIT-S outperforms Interval-CART, which also uses a squared loss, supporting the importance of
using a loss function adapted to the interval regression problem.
Results in UCI data sets The next two data sets are regression problems taken from the UCI
repository (Lichman, 2013). For the sake of our comparison, the real-valued outputs in these data
sets were transformed into censored intervals, using a protocol that we detail in the supplementary
material. For the difficult triazines data set, all methods struggle to surpass the Constant model.
Nevertheless, some achieve lower errors for one fold. For the servo data set, the margin-based tree
models: MMIT-S, MMIT-L, and Interval-CART perform comparably and outperform the other
models. This highlights the importance of developing non-linear models for interval regression and suggests a positive effect of the margin hyperparameter on accuracy.
Results in simulated data sets The last three data sets are the simulated data sets discussed in the
previous section. As expected, the L1-linear model tends to outperform the others on the linear data set.
However, surprisingly, on a few folds, the MMIT-L and Interval-CART models were able to achieve
low test errors. For the non-linear data sets (sin and abs), MMIT-S, MMIT-L and Interval-Cart clearly
outperform the TransfoTree, L1-linear and Constant models. Observe that the TransfoTree algorithm
achieves results comparable to those of L1-linear which, in Section 5.2, has been shown to learn
a roughly constant model in these situations. Hence, although these data sets are simulated, they
highlight situations where this non-linear interval regression algorithm fails to yield accurate models,
but where MMITs do not.
Results for more data sets are available in the supplementary material.
6
Discussion and conclusions
We proposed a new margin-based decision tree algorithm for the interval regression problem. We
showed that it could be trained by solving a sequence of convex sub-problems, for which we proposed
a new dynamic programming algorithm. We showed empirically that the latter?s time complexity is
log-linear in the number of intervals in the data set. Hence, like classical regression trees (Breiman
et al., 1984), our tree growing algorithm?s time complexity is linear in the number of features and
log-linear in the number of examples. Moreover, we studied the prediction accuracy in several real
and simulated data sets, showing that our algorithm is competitive with other linear and nonlinear
models for interval regression.
This initial work on Maximum Margin Interval Trees opens a variety of research directions, which
we will explore in future work. We will investigate learning ensembles of MMITs, such as random
forests. We also plan to extend the method to learning trees with non-constant leaves. This will
increase the smoothness of the models, which, as observed in Figure 4, tend to have a stepwise nature.
Moreover, we plan to study the average time complexity of the dynamic programming algorithm.
Assuming a certain regularity in the data generating distribution, we should be able to bound the
number of pointer moves and justify the time complexity that we observed empirically. In addition,
we will study the conditions in which the proposed MMIT algorithm is expected to surpass methods
that do not exploit the structure of the target intervals, such as the proposed Interval-CART method.
Intuitively, one weakness of Interval-CART is that it does not properly model left and right-censored
intervals, for which it favors predictions that are near the finite limits. Finally, we plan to extend
the dynamic programming algorithm to data with un-censored outputs. This will make Maximum
Margin Interval Trees applicable to survival analysis problems, where they should rank among the
state of the art.
Reproducibility
? Implementation: https://git.io/mmit
? Experimental code: https://git.io/mmit-paper
? Data: https://git.io/mmit-data
The versions of the software used in this work are also provided in the supplementary material.
Acknowledgements

We are grateful to Ulysse Côté-Allard, Mathieu Blanchette, Pascal Germain, Sébastien Giguère, Gaël Letarte, Mario Marchand, and Pier-Luc Plante for their insightful comments and suggestions. This work was supported by the National Sciences and Engineering Research Council of Canada, through an Alexander Graham Bell Canada Graduate Scholarship Doctoral Award awarded to AD and a Discovery Grant awarded to FL (#262067).
References

Basak, D., Pal, S., and Patranabis, D. C. (2007). Support vector regression. Neural Information Processing - Letters and Reviews, 11(10), 203-224.

Breiman, L., Friedman, J., Stone, C. J., and Olshen, R. A. (1984). Classification and regression trees. CRC Press.

Cai, T., Huang, J., and Tian, L. (2009). Regularized estimation for the accelerated failure time model. Biometrics, 65, 394-404.

Hocking, T. D., Schleiermacher, G., Janoueix-Lerosey, I., Boeva, V., Cappo, J., Delattre, O., Bach, F., and Vert, J.-P. (2013). Learning smoothing models of copy number profiles using breakpoint annotations. BMC Bioinformatics, 14(1), 164.

Hothorn, T. and Zeileis, A. (2017). Transformation Forests. arXiv:1701.02110.

Hothorn, T., Bühlmann, P., Dudoit, S., Molinaro, A., and van der Laan, M. J. (2006). Survival ensembles. Biostatistics, 7(3), 355-373.

Huang, J., Ma, S., and Xie, H. (2005). Regularized estimation in the accelerated failure time model with high dimensional covariates. Technical Report 349, University of Iowa Department of Statistics and Actuarial Science.

Klein, J. P. and Moeschberger, M. L. (2005). Survival analysis: techniques for censored and truncated data. Springer Science & Business Media.

Lichman, M. (2013). UCI machine learning repository.

Molinaro, A. M., Dudoit, S., and van der Laan, M. J. (2004). Tree-based multivariate regression and density estimation with right-censored data. Journal of Multivariate Analysis, 90, 154-177.

Pölsterl, S., Navab, N., and Katouzian, A. (2016). An Efficient Training Algorithm for Kernel Survival Support Vector Machines. arXiv:1611.07054.

Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1(1), 81-106.

Rigaill, G., Hocking, T., Vert, J.-P., and Bach, F. (2013). Learning sparse penalties for change-point detection using max margin interval regression. In Proc. 30th ICML, pages 172-180.

Segal, M. R. (1988). Regression trees for censored data. Biometrics, pages 35-47.

Wei, L. (1992). The accelerated failure time model: a useful alternative to the Cox regression model in survival analysis. Stat Med, 11(14-15), 1871-9.
| 7080 |@word repository:2 version:1 middle:1 cox:1 proportion:2 triazine:2 open:1 seek:1 simulation:1 git:4 recursively:1 initial:1 configuration:1 contains:3 series:1 selecting:1 lichman:3 ours:1 interestingly:1 rightmost:2 outperforms:2 existing:3 current:1 recovered:1 si:6 yet:1 aft:1 must:2 numerical:1 partition:2 j1:6 designed:2 plot:1 update:2 leaf:20 histone:2 selected:1 ith:2 pointer:18 node:2 org:1 five:2 dn:1 along:3 inside:1 expected:9 roughly:1 p1:6 growing:3 solver:2 project:1 begin:1 moreover:7 notation:1 panel:1 provided:1 biostatistics:1 lowest:2 medium:1 argmin:1 minimizes:4 finding:1 transformation:3 nj:9 universit:2 partitioning:3 converse:1 grant:1 yn:1 positive:2 before:3 engineering:1 timing:3 tends:2 limit:21 io:4 struggle:1 despite:1 fluctuation:1 twice:1 initialization:1 studied:5 doctoral:1 suggests:1 shaded:1 limited:4 range:2 graduate:1 tian:1 unique:1 responsible:2 yj:3 testing:2 recursive:2 practice:1 j0:1 empirical:3 bell:1 vert:2 word:1 regular:1 suggest:1 ga:1 applying:2 demonstrated:1 center:1 maximizing:1 yt:3 straightforward:1 convex:8 simplicity:1 m2:1 rule:6 updated:1 mcgill:2 target:7 pt:25 programming:9 us:2 agreement:1 updating:2 bec:2 predicts:1 observed:7 bottom:1 ft:9 solved:2 worst:3 ordering:1 servo:2 yk:2 ran:1 complexity:17 nie:2 insertion:2 covariates:1 dynamic:11 trained:2 grateful:1 solving:2 segment:1 f2:3 chip:2 various:1 represented:1 grown:1 actuarial:1 informatique:2 outside:1 outcome:1 encoded:1 supplementary:11 valued:8 lsterl:2 otherwise:4 ability:1 favor:1 statistic:1 final:1 advantage:1 sequence:2 cai:2 propose:7 subtracting:1 j2:2 inserting:2 uci:6 reproducibility:1 achieve:4 crossvalidation:1 regularity:2 empty:1 francois:1 generating:2 perfect:1 depending:1 stat:1 b0:1 p2:4 dividing:1 ois:1 predicted:7 indicate:2 come:1 implies:1 convention:1 direction:2 correct:1 quartile:1 material:11 crc:1 f1:3 generalization:2 around:1 considered:4 great:1 gigu:1 changepoint:7 m0:1 achieves:6 continuum:1 smallest:2 estimation:3 proc:1 applicable:1 label:2 title:1 council:1 navab:1 minimization:2 clearly:2 breiman:7 varying:1 properly:1 rank:1 contrast:5 baseline:1 bt:31 going:1 transformed:2 interested:1 overall:2 among:1 classification:1 pascal:1 plan:3 art:4 smoothing:1 field:5 having:1 beach:1 biology:1 bmc:1 icml:1 breakpoint:9 future:2 others:1 partement:2 piecewise:1 report:1 few:2 randomly:1 national:1 individual:1 replaced:1 ab:2 friedman:1 detection:3 montr:1 organization:1 investigate:2 evaluation:4 weakness:1 logiciel:2 accurate:1 censored:17 respective:1 biometrics:2 tree:33 re:1 theoretical:1 instance:1 column:1 modeling:2 boolean:3 hlmann:1 cost:9 predictor:1 pal:1 stored:5 varies:1 st:5 density:1 fundamental:1 together:1 squared:14 recorded:2 containing:1 huang:3 choose:2 creating:1 return:1 account:1 suggesting:1 segal:2 de:2 b2:3 coefficient:3 ad:1 piece:7 performed:2 mario:1 start:1 competitive:1 annotation:1 contribution:5 minimize:1 square:3 ni:1 accuracy:7 efficiently:2 ensemble:2 yield:2 accurately:1 comparably:2 delattre:1 definition:2 failure:4 pier:1 naturally:1 associated:4 proof:3 recovers:1 plante:1 recall:1 knowledge:1 alexandre:2 supervised:1 xie:1 wei:2 just:1 until:1 nonlinear:5 usa:1 effect:1 name:2 true:4 y2:2 former:1 hence:7 death:1 illustrated:2 sin:3 width:1 uniquely:1 essence:1 ulaval:2 criterion:1 stone:1 demonstrate:2 l1:15 recently:2 pseudocode:1 mt:6 laval:2 empirically:3 extend:3 discussed:1 m1:2 refer:1 significant:1 cv:1 smoothness:1 similarly:1 moving:1 f0:2 similarity:1 add:1 multivariate:2 
recent:1 showed:2 optimizes:1 irrelevant:1 awarded:2 certain:1 binary:1 allard:1 blanchette:1 yi:39 der:2 seen:1 minimum:9 period:1 signal:4 multiple:1 technical:1 cross:4 long:1 bach:2 award:1 prediction:13 variant:2 regression:33 arxiv:2 kernel:1 achieved:1 whereas:3 addition:1 interval:64 median:1 source:1 comment:1 cart:12 tend:1 med:1 sent:1 call:3 near:2 split:1 variety:3 fit:3 shift:1 expression:1 penalty:3 generally:1 useful:1 detailed:2 involve:1 amount:2 extensively:2 dna:2 tth:1 http:4 outperform:4 xij:9 notice:3 track:1 per:5 klein:2 bulk:1 hyperparameter:5 affected:1 key:2 nevertheless:2 threshold:7 drawn:1 sum:7 run:1 letter:1 fran:1 seq:2 decision:7 acceptable:1 comparable:2 graham:1 bound:2 def:10 breakpoints:6 guaranteed:1 followed:1 fl:1 fold:8 marchand:1 adapted:1 alive:1 x2:4 software:1 sake:2 speed:1 simulate:1 min:10 hocking:5 performing:1 relatively:1 department:1 according:2 marginbased:1 qu:2 making:2 s1:1 intuitively:1 taken:1 equation:9 remains:1 discus:1 turn:1 know:1 sending:1 available:6 obey:1 observe:3 ubiquity:1 alternative:1 rp:6 top:1 include:1 quinlan:2 hinge:25 log10:1 laviolette:2 medicine:1 exploit:1 giving:1 scholarship:1 neuroblastoma:3 classical:1 objective:1 move:15 arrangement:1 added:1 occurs:2 surrogate:1 said:1 needlessly:1 distance:1 separate:1 simulated:14 seven:1 trivial:1 induction:1 assuming:1 code:1 index:2 minimizing:5 difficult:1 olshen:1 statement:1 rise:2 implementation:3 unknown:1 perform:2 upper:5 observation:3 benchmark:1 finite:12 supporting:1 truncated:1 situation:2 y1:3 zeileis:3 canada:5 introduced:1 germain:1 required:1 nip:1 able:2 below:1 pattern:6 pioneering:1 program:2 including:1 max:5 event:1 business:1 rely:1 regularized:6 predicting:2 indicator:1 improve:1 mathieu:1 created:1 categorical:1 genomics:3 review:1 acknowledgement:1 discovery:1 loss:37 molinaro:3 expect:1 highlight:2 generation:1 limitation:2 suggestion:1 validation:4 iowa:1 incurred:1 consistent:2 share:1 ift:1 surprisingly:2 last:1 copy:3 keeping:1 supported:1 guide:1 wide:1 sparse:1 van:2 xn:1 transition:1 world:2 genome:1 basak:2 computes:2 ignores:2 made:1 far:1 sj:3 global:1 b1:3 summing:2 discriminative:1 xi:20 un:3 search:2 sk:3 hothorn:5 learn:9 nature:2 ca:3 forest:3 mse:4 protocol:3 significance:1 s2:1 toby:2 arise:1 n2:1 hyperparameters:2 profile:1 x1:1 sub:2 position:3 fails:2 dylan:1 lie:1 learns:2 removing:1 down:1 erroneous:1 theorem:2 specific:2 bastien:1 jt:33 showing:1 insightful:1 list:1 survival:7 exists:1 stepwise:2 adding:5 importance:3 te:4 margin:20 suited:1 logarithmic:1 explore:1 springer:1 nested:1 relies:2 ma:1 goal:3 presentation:1 sorted:3 dudoit:2 luc:1 adverse:1 change:1 included:1 typical:1 determined:1 specifically:1 except:1 justify:1 surpass:2 laan:2 lemma:3 total:5 experimental:1 indicating:1 formally:1 select:1 support:4 latter:2 alexander:1 bioinformatics:1 accelerated:4 evaluate:2 tested:1 |
DropoutNet: Addressing Cold Start
in Recommender Systems
Maksims Volkovs
layer6.ai
[email protected]
Guangwei Yu
layer6.ai
[email protected]
Tomi Poutanen
layer6.ai
[email protected]
Abstract
Latent models have become the default choice for recommender systems due to
their performance and scalability. However, research in this area has primarily focused on modeling user-item interactions, and few latent models have been developed for cold start. Deep learning has recently achieved remarkable success showing excellent results for diverse input types. Inspired by these results we propose
a neural network based latent model called DropoutNet to address the cold start
problem in recommender systems. Unlike existing approaches that incorporate additional content-based objective terms, we instead focus on the optimization and
show that neural network models can be explicitly trained for cold start through
dropout. Our model can be applied on top of any existing latent model, effectively
providing cold start capabilities and the full power of deep architectures. Empirically
we demonstrate state-of-the-art accuracy on publicly available benchmarks. Code
is available at https://github.com/layer6ai-labs/DropoutNet.
1 Introduction
Popularity of online content delivery services, e-commerce, and social web has highlighted an important challenge of surfacing relevant content to consumers. Recommender systems have proven
to be effective tools for this task, receiving increasingly more attention. One common approach to
building accurate recommender models is collaborative filtering (CF). CF is a method of making
predictions about an individual's preferences based on the preference information from other users.
CF has been shown to work well across various domains [19], and many successful web-services
such as Netflix, Amazon and YouTube use CF to deliver highly personalized recommendations to
their users.
The majority of the existing approaches in CF can be divided into two categories: neighbor-based
and model-based. Model-based approaches, and in particular latent models, are typically the preferred choice since they build compact representations of the data and achieve high accuracy. These
representations are optimized for fast retrieval and can be scaled to handle millions of users in
real-time. For these reasons we concentrate on latent approaches in this work. Latent models are
typically learned by applying a variant of low rank approximation to the target preference matrix.
As such, they work well when lots of preference information is available but start to degrade in
highly sparse settings. The most extreme case of sparsity known as cold start occurs when no preference information is available for a given user or item. In such cases, the only way a personalized
recommendation can be generated is by incorporating additional content information. Base latent
approaches cannot incorporate content, so a number of hybrid models have been proposed [3, 21, 22]
to combine preference and content information. However, most hybrid methods introduce additional
objective terms considerably complicating learning and inference. Moreover, the content part of the
objective is typically generative [21, 9, 22] forcing the model to "explain" the content rather than
use it to maximize recommendation accuracy.
Recently, deep learning has achieved remarkable success in areas such as computer vision [15, 11],
speech [12, 10] and natural language processing [5, 16]. In all of these areas end-to-end deep neural network (DNN) models achieve state-of-the-art accuracy with virtually no feature engineering.
These results suggest that deep learning should also be highly effective at modeling content for recommender systems. However, while there has been some recent progress in applying deep learning
to CF [7, 22, 6, 23], little investigation has been done on using deep learning to address the cold start
problem.
In this work we propose a model to address this gap. Our approach is based on the observation that
cold start is equivalent to the missing data problem where preference information is missing. Hence,
instead of adding additional objective terms to model content, we modify the learning procedure to
explicitly condition the model for the missing input. The key idea behind our approach is that by
applying dropout [18] to input mini-batches, we can train DNNs to generalize to missing input. By
selecting an appropriate amount of dropout we show that it is possible to learn a DNN-based latent
model that performs comparably to state-of-the-art on warm start while significantly outperforming
it on cold start. The resulting model is simpler than most hybrid approaches and uses a single
objective function, jointly optimizing all components to maximize recommendation accuracy.
An additional advantage of our approach is that it can be applied on top of any existing latent model
to provide/enhance its cold start capability. This requires virtually no modification to the original
model thus minimizing the implementation barrier for any production environment that's already
running latent models. In the following sections we give a detailed description of our approach and
show empirical results on publicly available benchmarks.
2 Framework
In a typical CF problem we have a set of N users U = {u_1, ..., u_N} and a set of M items V = {v_1, ..., v_M}. The users' feedback for the items can be represented by an N × M preference matrix R where R_uv is the preference for item v by user u. R_uv can be either explicitly provided by the user in the form of rating, like/dislike etc., or inferred from implicit interactions such as views, plays and purchases. In the explicit setting R typically contains graded relevance (e.g., 1-5 ratings), while in the implicit setting R is often binary; we consider both cases in this work. When no preference information is available R_uv = 0. We use U(v) = {u ∈ U | R_uv ≠ 0} to denote the set of users that expressed preference for v, and V(u) = {v ∈ V | R_uv ≠ 0} to denote the set of items that u expressed preference for. In cold start no preference information is available and we formally define cold start when V(u) = ∅ and U(v) = ∅ for a given user u and item v.
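To ground these definitions, here is a minimal sketch on a toy preference matrix (the matrix values and helper names are ours, for illustration only):

```python
import numpy as np

# Toy implicit preference matrix R: rows are users, columns are items.
R = np.array([
    [1, 0, 1, 0],
    [0, 0, 0, 0],   # user 1 has no observed preferences
    [1, 1, 0, 0],
])

def U_of(v):
    """U(v): users that expressed preference for item v."""
    return set(np.nonzero(R[:, v])[0])

def V_of(u):
    """V(u): items that user u expressed preference for."""
    return set(np.nonzero(R[u, :])[0])

# Cold start: V(u) or U(v) is empty.
print([u for u in range(R.shape[0]) if not V_of(u)])  # [1] -> cold start user
print([v for v in range(R.shape[1]) if not U_of(v)])  # [3] -> cold start item
```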
Additionally, in many domains we often have access to content information for both users and
items. For items, this information can come in the form of text, audio or images/video. For users we
could have profile information (age, gender, location, device etc.), and social media data (Facebook,
Twitter etc.). This data can provide highly useful signal for recommender models, and is particularly
effective in sparse and cold start settings where little or no preference information is available.
After applying relevant transformations most content information can be represented by fixed-length
feature vectors. We use Φ^U and Φ^V to denote the content features for users and items respectively, where Φ^U_u (Φ^V_v) is the content feature vector for user u (item v). When content is missing the corresponding feature vector is set to 0. The goal is to use the preference information R together with content Φ^U and Φ^V to learn an accurate and robust recommendation model. Ideally this model
should handle all stages of the user/item journey: from cold start, to early stage sparse preferences,
to a late stage well-defined preference profile.
3 Relevant Work
A number of hybrid latent approaches have been proposed to address cold start in CF. One of the
more popular models is the collaborative topic regression (CTR) [21] which combines latent Dirichlet allocation (LDA) [4] and weighted matrix factorization (WMF) [13]. CTR interpolates between
LDA representations in cold start and WMF when preferences are available. Recently, several related approaches have been proposed. Collaborative topic Poisson factorization (CTPF) [8] uses a
similar interpolation architecture but replaces both LDA and WMF components with Poisson factorization [9]. Collaborative deep learning (CDL) [22] is another approach with analogous architecture
where LDA is replaced with a stacked denoising autoencoder [20].
Figure 1: DropoutNet architecture diagram. For each user u, the preference U_u and content Φ^U_u inputs are first passed through the corresponding DNNs f_U and f_{Φ^U}. Top layer activations are then concatenated together and passed to the fine-tuning network f_U which outputs the latent representation Û_u. Items are handled in a similar fashion with f_V, f_{Φ^V} and f_V to produce V̂_v. All components are optimized jointly with back-propagation and then kept fixed during inference. Retrieval is done in the new latent space using Û and V̂ that replace the original representations U and V.
While these models achieve highly competitive performance, they also share several disadvantages.
First, they incorporate both preference and content components into the objective function making it highly complex. CDL for example, contains four objective terms and requires tuning three
combining weights in addition to WMF and autoencoder parameters. This makes it challenging to
tune these models on large datasets where every parameter setting experiment is expensive and time
consuming. Second, the formulation of each model assumes cold start items and is not applicable
to cold start users. Most online services have to frequently incorporate new users and items and
thus require models that can handle both. In principle it is possible to derive an analogous model
for users and jointly optimize both models. However, this would require an even more complex
objective, nearly doubling the number of free parameters. One of the main questions that we aim to address with this work is whether we can develop a simpler cold start model that is applicable to both users and items.
In addition to CDL, a number of approaches have been proposed to leverage DNNs for CF. One of the earlier approaches, DeepMusic [7], aimed to predict latent representations learned by a latent model using a content-only DNN. Recently, [6] described YouTube's two-stage recommendation
model that takes as input user session (recent plays and searches) and profile information. Latent
representations for items in a given session are averaged, concatenated with profile information, and
passed to a DNN which outputs a session-dependent latent representation for the user. Averaging
the items addresses the variable length input problem but can lose temporal aspects of the session. To more accurately model how users' preferences change over time a recurrent neural network (RNN)
approach has been proposed by [23]. RNN is applied sequentially to one item at a time, and after all
items are processed hidden layer activations are used as latent representation.
Many of these models show clear benefits of applying deep architectures to CF. However, few investigate cold start and sparse setting performance when content information is available. Arguably, we
expect deep learning to be the most beneficial in these scenarios due to its excellent generalization
to various content types. Our proposed approach aims to leverage this advantage and is most similar
to [6]. We also use latent representations as preference feature input for users and items, and combine them with content to train a hybrid DNN-based model. But unlike [6] which focuses primarily
on warm start users, we develop analogous models for both users and items, and then show how
these models can be trained to explicitly handle cold start.
4 Our Approach
In this section we describe the architecture of our model that we call DropoutNet, together with
learning and inference procedures. We begin with input representation. Our aim is to develop a
model that is able to handle both cold and warm start scenarios. Consequently, input to the model
needs to contain content and preference information. One option is to directly use rows and columns
of R in their raw form. However, these become prohibitively large as the number of users and
items grows. Instead, we take a similar approach to [6] and [23], and use latent representations as
preference input. Latent models typically approximate the preference matrix with a product of low
rank matrices U and V:
$R_{uv} \approx U_u V_v^T \qquad (1)$
where U_u and V_v are the latent representations for user u and item v respectively. Both U and V are dense and low dimensional with rank D ≪ min(N, M). Noting the strong performance of latent approaches on a wide range of CF datasets, it is adequate to assume that the latent representations accurately summarize preference information about users and items. Moreover, low input dimensionality significantly reduces model complexity for DNNs since the activation size of the first hidden layer is directly proportional to the input size. Given these advantages we set the input to [U_u, Φ^U_u] and [V_v, Φ^V_v] for each user u and item v respectively.
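As a concrete sketch of this input construction (all shapes and values below are illustrative placeholders, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 1000, 500, 200            # users, items, latent rank; D << min(N, M)
U = rng.normal(size=(N, D))         # user factors from the input latent model (e.g. WMF)
V = rng.normal(size=(M, D))         # item factors
Phi_U = rng.normal(size=(N, 831))   # user content features
Phi_V = rng.normal(size=(M, 2738))  # item content features

def user_input(u):
    return np.concatenate([U[u], Phi_U[u]])   # [U_u, Phi^U_u]

def item_input(v):
    return np.concatenate([V[v], Phi_V[v]])   # [V_v, Phi^V_v]

score_uv = U[3] @ V[7]   # input-model score, R_uv ~ U_u V_v^T per Equation 1
```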
4.1 Model Architecture
Given the joint preference-content input we propose to apply a DNN model to map it into a new
latent space that incorporates both content and preference information. Formally, preference U_u and content Φ^U_u inputs are first passed through the corresponding DNNs f_U and f_{Φ^U}. Top layer activations are then concatenated together and passed to the fine-tuning network f_U which outputs the latent representation Û_u. Items are handled in a similar fashion with f_V, f_{Φ^V} and f_V to produce V̂_v. We use separate components for preference and content inputs to handle complex structured content such as images that can't be directly concatenated with preference input in raw form. Another advantage of using a split architecture is that it allows using any of the publicly available (or proprietary) pre-trained models for f_{Φ^U} and/or f_{Φ^V}. Training can then be significantly accelerated by updating only the last few layers of each pre-trained network. For domains such as vision where models can exceed 100 layers [11], this can effectively reduce the training time from days to hours. Note that when content input is "compatible" with preference representations we remove f_U and f_{Φ^U}, and directly apply f_U to the concatenated input [U_u, Φ^U_u]. To avoid notation clutter we omit the sub-networks and use f_U and f_V to denote user and item models in subsequent sections.
During training all components are optimized jointly with back-propagation. Once the model is trained we fix it, and make forward passes to map U → Û and V → V̂. All retrieval is then done using Û and V̂, with relevance scores estimated as before by ŝ_uv = Û_u V̂_v^T. Figure 1 shows the full model architecture with both user and item components.
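A minimal sketch of the user tower's forward pass with single-layer stand-ins for each component (the weights are random placeholders; the paper's networks are deeper and trained with back-propagation):

```python
import numpy as np

rng = np.random.default_rng(1)
D, C, H = 200, 831, 500                      # latent rank, content size, hidden size
W_pref = 0.01 * rng.normal(size=(D, H))      # stand-in for the preference net f_U
W_cont = 0.01 * rng.normal(size=(C, H))      # stand-in for the content net f_{Phi^U}
W_fine = 0.01 * rng.normal(size=(2 * H, D))  # stand-in for the fine-tuning net

def user_tower(U_u, Phi_u):
    h = np.concatenate([np.tanh(U_u @ W_pref),     # top-layer preference activations
                        np.tanh(Phi_u @ W_cont)])  # top-layer content activations
    return np.tanh(h @ W_fine)                     # output: U_hat_u

U_hat_u = user_tower(rng.normal(size=D), rng.normal(size=C))
# The item tower is built the same way; a retrieval score is U_hat_u @ V_hat_v.
```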
4.2 Training For Cold Start
During training we aim to generalize the model to cold start while preserving warm start accuracy.
We discussed that existing hybrid models approach this problem by adding additional objective terms
and training the model to fall-back on content representations when preferences are not available.
However, this complicates learning by forcing the implementer to balance multiple objective terms
in addition to training content representations. Moreover, the content part of the objective is typically
generative forcing the model to explain the observed data instead of using it to maximize recommendation accuracy. This can waste capacity by modeling content aspects that are not useful for
recommendations.
We take a different approach and borrow ideas from denoising autoencoders [20] by training the
model to reconstruct the input from its corrupted version. The goal is to learn a model that would
still produce accurate representations when parts of the input are missing. To achieve this we propose
an objective to reproduce the relevance scores after the input is passed through the model:
$O = \sum_{u,v}\left(U_u V_v^T - f_U(U_u, \Phi^U_u)\, f_V(V_v, \Phi^V_v)^T\right)^2 = \sum_{u,v}\left(U_u V_v^T - \hat{U}_u \hat{V}_v^T\right)^2 \qquad (2)$
O minimizes the difference between scores produced by the input latent model and DNN. When all
input is available this objective is trivially minimized by setting the content weights to 0 and learning
an identity function for the preference input. This is a desirable property for reasons discussed below.
In cold start either U_u or V_v (or both) is missing, so our main idea is to train for this by applying
input dropout [18]. We use stochastic mini-batch optimization and randomly sample user-item pairs
to compute gradients and update the model. In each mini-batch a fraction of users and items is
selected at random and their preference inputs are set to 0 before passing the mini-batch to the
model. For "dropped out" pairs the model thus has to reconstruct the relevance scores without
seeing the preference input:
user cold start: $O_{uv} = \left(U_u V_v^T - f_U(0, \Phi^U_u)\, f_V(V_v, \Phi^V_v)^T\right)^2$
item cold start: $O_{uv} = \left(U_u V_v^T - f_U(U_u, \Phi^U_u)\, f_V(0, \Phi^V_v)^T\right)^2 \qquad (3)$
Training with dropout has a two-fold effect: pairs with dropout encourage the model to only use
content information, while pairs without dropout encourage it to ignore content and simply reproduce preference input. The net effect is balanced between these two extremes. The model learns to
reproduce the accuracy of the input latent model when preference data is available while also generalizing to cold start. Dropout thus has a similar effect to hybrid preference-content interpolation
objectives but with a much simpler architecture that is easy to optimize. An additional advantage of
using dropout is that it was originally developed as a way of regularizing the model. We observe a
similar effect here, finding that additional regularization is rarely required even for deeper and more
complex models.
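A minimal sketch of the per-pair loss with input dropout, following Equation 3 (the towers below are trivial stand-ins for the trained networks, and the independent coin flips are our simplification of the mini-batch selection):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 200
user_tower = lambda U_u, Phi_u: U_u   # trivial stand-ins for the trained DNNs
item_tower = lambda V_v, Phi_v: V_v

def pair_loss(U_u, Phi_u, V_v, Phi_v, p_drop=0.5):
    target = U_u @ V_v                # score from the input latent model
    if rng.random() < p_drop:         # user dropout: hide the preference input
        U_u = np.zeros_like(U_u)
    if rng.random() < p_drop:         # item dropout
        V_v = np.zeros_like(V_v)
    pred = user_tower(U_u, Phi_u) @ item_tower(V_v, Phi_v)
    return (target - pred) ** 2

print(pair_loss(rng.normal(size=D), None, rng.normal(size=D), None))
```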
There are interesting parallels between our model and areas such as denoising autoencoders [20] and dimensionality reduction [17]. Analogous to denoising autoencoders, our model is trained to reproduce the input from a noisy version. The noise comes in the form of dropout that fully removes a subset of input dimensions. However, instead of reconstructing the actual uncorrupted input we minimize pairwise distances between points in the original and reconstructed spaces. Considering relevance scores S = {U_u V_v^T | u ∈ U, v ∈ V} and Ŝ = {Û_u V̂_v^T | u ∈ U, v ∈ V} as sets of points in one dimensional space, the goal is to preserve the relative ordering between the points in Ŝ produced by our model and the original set S. We focus on reconstructing distances because it gives greater flexibility, allowing the model to learn an entirely new latent space rather than tying it to a representation learned by another model. This objective is analogous to many popular dimensionality reduction models that project the data to a low dimensional space where relative distances between points are preserved [17]. In fact, many of the objective functions developed for dimensionality reduction can also be used here.

Algorithm 1: Learning Algorithm
Input: R, U, V, Φ^U, Φ^V
Initialize: user model f_U, item model f_V
repeat {DNN optimization}
    sample mini-batch B = {(u_1, v_1), ..., (u_k, v_k)}
    for each (u, v) ∈ B do
        apply one of:
        1. leave as is
        2. user dropout: [U_u, Φ^U_u] → [0, Φ^U_u]
        3. item dropout: [V_v, Φ^V_v] → [0, Φ^V_v]
        4. user transform: [U_u, Φ^U_u] → [mean_{v∈V(u)} V_v, Φ^U_u]
        5. item transform: [V_v, Φ^V_v] → [mean_{u∈U(v)} U_u, Φ^V_v]
    end for
    update f_V, f_U using B
until convergence
Output: f_V, f_U
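A minimal sketch of the five per-pair input transforms from Algorithm 1 (the helper name and the uniform choice over options are our own simplifications):

```python
import numpy as np

rng = np.random.default_rng(3)

def transform_inputs(u, v, U, V, R):
    """Return the (possibly modified) preference inputs for one sampled pair."""
    U_u, V_v = U[u].copy(), V[v].copy()
    mode = rng.choice(["as_is", "user_dropout", "item_dropout",
                       "user_transform", "item_transform"])
    if mode == "user_dropout":
        U_u[:] = 0.0
    elif mode == "item_dropout":
        V_v[:] = 0.0
    elif mode == "user_transform":          # Equation 4: mean over V(u)
        items = np.nonzero(R[u, :])[0]
        U_u = V[items].mean(axis=0) if items.size else np.zeros_like(U_u)
    elif mode == "item_transform":          # symmetric: mean over U(v)
        users = np.nonzero(R[:, v])[0]
        V_v = U[users].mean(axis=0) if users.size else np.zeros_like(V_v)
    return U_u, V_v                         # content inputs are never modified

U, V = rng.normal(size=(4, 3)), rng.normal(size=(5, 3))
R = (rng.random(size=(4, 5)) < 0.3).astype(int)
print(transform_inputs(1, 2, U, V, R))
```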
A drawback of the objective in Equation 2 is that it depends on the input latent model and thus its
accuracy. However, empirically we found this objective to work well producing robust models. The
main advantages are that, first, it is simple to implement and has no additional free parameters to tune
making it easy to apply to large datasets. Second, in mini-batch mode, N × M unique user-item pairs
can be sampled to update the networks. Even for moderate size datasets the number of pairs is in
the billions making it significantly easier to train large DNNs without over-fitting. The performance
is particularly robust on sparse implicit datasets commonly found in CF where R is binary and over
99% sparse. In this setting training with mini-batches sampled from raw R requires careful tuning
to avoid oversampling 0's, and to avoid getting stuck in bad local optima.
4.3 Inference
Once training is completed, we fix the model and make forward passes to infer new latent representations. Ideally we would apply the model continuously throughout all stages of the user (item) journey, starting from cold start, to first few interactions and finally to an established preference profile. However, to update the latent representation Û_u as we observe first preferences from a cold start user u, we need to infer the input preference vector U_u. As many leading latent models use
complex non-convex objectives, updating latent representations with new preferences is a non-trivial
task that requires iterative optimization. To avoid this we use a simple trick by representing each
user as a weighted sum of items that the user interacted with until the input latent model is retrained.
Formally, given a cold start user u that has generated a new set of interactions V(u), we approximate
U_u with the average latent representations of the items in V(u):
$U_u \approx \frac{1}{|V(u)|} \sum_{v \in V(u)} V_v \qquad (4)$
Using this approximation, we then make a forward pass through the user DNN to get the updated
representation: Û_u = f_U(mean_{v∈V(u)} V_v, Φ^U_u). This procedure can be used continuously in near
real-time as new data is collected until the input latent model is re-trained. Cold start items are
handled in a similar way by using averages of user representations. Distribution of representations
obtained via this approximation can deviate from the one produced by the input latent model. We
explicitly train for this using a similar idea to dropout for cold start. Throughout learning preference
input for a randomly chosen subset of users and items in each mini-batch is replaced with Equation 4.
We alternate between dropout and this transformation and control for the relative frequency of each
transformation (i.e., dropout fraction). Algorithm 1 outlines the full learning procedure.
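A minimal sketch of this inference procedure for a cold start user accumulating interactions (the trivial lambda stands in for the trained user DNN):

```python
import numpy as np

def infer_user(interacted_items, V, Phi_u, user_tower):
    """Equation 4 followed by a forward pass through the user DNN."""
    if len(interacted_items) == 0:                  # pure cold start
        U_u = np.zeros(V.shape[1])
    else:
        U_u = V[list(interacted_items)].mean(axis=0)
    return user_tower(U_u, Phi_u)

V = np.random.default_rng(4).normal(size=(500, 200))
Phi_u = np.zeros(831)
tower = lambda U_u, Phi_u: U_u                      # stand-in for the trained f_U
for seen in ([], [3], [3, 17]):                     # representation updates without
    U_hat_u = infer_user(seen, V, Phi_u, tower)     # re-training any model
```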
5 Experiments
To validate the proposed approach, we conducted extensive experiments on two publicly available
datasets: CiteULike [21] and the ACM RecSys 2017 challenge dataset [2]. These datasets are
chosen because they contain content information, allowing cold start evaluation. We implemented
Algorithm 1 using the TensorFlow library [1]. All experiments were conducted on a server with
20-core Intel Xeon E5-2630 CPU, Nvidia Titan X GPU and 128GB of RAM. We compare
our model against leading CF approaches including WMF [13], CTR [21], DeepMusic [7] and
CDL [22] described in Section 3. For all baselines except DeepMusic, we use the code released by
respective authors, and extensively tune each model to find an optimal setting of hyper-parameters.
For DeepMusic we use a modified version of the model replacing the objective function from [7]
with Equation 2 which we found to work better. To make comparison fair we use the same DNN
architecture (number of hidden layers and layer size) for DeepMusic and our models.
All DNN models are trained with mini batches of size 100, fixed learning rate and momentum of
0.9. Algorithm 1 is applied directly to the mini batches, and we alternate between applying dropout,
and inference transforms. Denoting the dropout rate by d, for each batch we randomly select d × batch size users and items. Then for batch 1 we apply dropout to selected users and items,
for batch 2 inference transform and so on. We found this procedure to work well across different
datasets and use it in all experiments.
5.1 CiteULike
At CiteULike, registered users create scientific article libraries and save them for future reference.
The goal is to leverage these libraries to recommend relevant new articles to each user. We use a
subset of the CiteULike data with 5,551 users, 16,980 articles and 204,986 observed user-article
pairs. This is a binary problem with R(u, v) = 1 if article v is in u's library and R(u, v) = 0
otherwise. R is over 99.8% sparse with each user collecting an average of 37 articles. In addition to
preference data, we also have article content information in the form of title and abstract. To make
the comparison fair we follow the approach of [21] and use the same vocabulary of top 8,000 words
selected by tf-idf. This produces the 16,980 × 8,000 item content matrix Φ^V; since no user content is available Φ^U is dropped from the model. For all evaluation we use Fold 1 from [21] (results on
other folds are nearly identical) and report results of the test set from this fold. We modify warm
start evaluation and measure accuracy by generating recommendations from the full set of 16,980
articles for each user (excluding training interactions). This makes the problem more challenging,
and provides a better evaluation of model performance. Cold start evaluation is the same as in [21],
we remove a subset of 3396 articles from the training data and then generate recommendations from
these articles at test time.
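For reference, a minimal sketch of the recall@100 metric used throughout these evaluations (our own implementation of the standard definition, not code from the paper):

```python
import numpy as np

def recall_at_k(scores, test_items, train_items, k=100):
    """Fraction of held-out items retrieved in the user's top-k recommendations."""
    scores = scores.copy()
    scores[list(train_items)] = -np.inf      # exclude training interactions
    top_k = np.argpartition(-scores, k)[:k]  # indices of the k highest scores
    return len(set(top_k) & set(test_items)) / len(test_items)

scores = np.random.default_rng(5).normal(size=16980)  # U_hat_u @ V_hat.T for one user
print(recall_at_k(scores, test_items={5, 42}, train_items={7}))
```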
Figure 2: CiteULike warm and cold start results for dropout rates between 0 and 1.

Table 1: CiteULike recall@100 warm and cold start test set results.

Method          Warm Start   Cold Start
WMF [13]        0.592        --
CTR [21]        0.597        0.589
DeepMusic [7]   0.371        0.601
CDL [22]        0.603        0.573
DN-WMF          0.593        0.636
DN-CDL          0.598        0.629
We fix rank D = 200 for all models to stay consistent with the setup used in [21]. For our model
we found that 1-hidden layer architectures with 500 hidden units and tanh activations gave good
performance and going deeper did not significantly improve results. To train the model for cold start
we apply dropout to preference input as outlined in Section 4.2. Here, we only apply dropout to item
preferences since only item content is available. Figure 2 shows warm and cold start recall@100
accuracy for dropout rate (probability to drop) between 0 and 1. From the figure we see an interesting pattern where warm start accuracy remains virtually unchanged, decreasing by less than 1%
until dropout reaches 0.7 where it rapidly degrades. Cold start accuracy on the other hand, steadily
increases with dropout. Moreover, without dropout cold start performance is poor and even dropout
of 0.1 improves it by over 60%. This indicates that there is a region of dropout values where significant gains in cold start accuracy can be achieved without losses on warm start. Similar patterns were
observed on other datasets and further validate that the proposed approach of applying dropout for
cold start generalization achieves the desired effect.
Warm and cold start recall@100 results are shown in Table 1. To verify that our model can be
trained in conjunction with any existing latent model, we trained two versions denoted DN-WMF
and DN-CDL, that use WMF and CDL as input preference models respectively. Both models were
trained with preference input dropout rate of 0.5. From the table we see that most baselines produce
similar results on warm start which is expected since virtually all of these models use WMF objective
to model R. One exception is DeepMusic that performs significantly worse than other baselines.
This can be attributed to the fact that in DeepMusic item latent representations are functions of
content only and thus lack preference information. DN-WMF and DN-CDL on the other hand,
perform comparably to the best baseline indicating that adding preference information as input into
the model significantly improves performance over content only models like DeepMusic. Moreover,
as Figure 2 suggests, even aggressive dropout of 0.5 does not affect warm start performance, and our model is still able to recover the accuracy of the input latent model.
Cold start results are more diverse; as expected, the best cold start baseline is DeepMusic. Unlike CTR
and CDL that have unsupervised and semi-supervised content components, DeepMusic is end-to-end supervised, and can thus learn representations that are better tailored to the target retrieval task.
We also see that DN-WMF outperforms all baselines, improving recall@100 by 6% over the best
baseline. This indicates that incorporating preference information as input during training can also
improve cold start generalization. Moreover, WMF can't be applied to cold start so our model
effectively adds cold start capability to WMF with excellent generalization and without affecting
performance on warm start. A similar pattern can be seen for DN-CDL that improves the cold start performance of CDL by almost 10% without affecting warm start.
5.2 RecSys
The ACM RecSys 2017 dataset was released as part of the ACM RecSys 2017 Challenge [2]. It's
a large scale data collection of user-job interactions from the career oriented social network XING
(European analog of LinkedIn). Importantly, this is one of the only publicly available datasets that
contains both user and item content information enabling cold start evaluation on both. In total
there are 1.5M users, 1.3M jobs and over 300M interactions. Interactions are divided into six types
{impression, click, bookmark, reply, delete, recruiter}, and each interaction is recorded with the
corresponding type and timestamp. In addition, for users we have access to profile information such
as education, work experience, location and current position. Similarly, for items we have industry,
Figure 3: RecSys warm start (Figure 3(a)), user cold start (Figure 3(b)) and item cold start (Figure 3(c)) results. All figures show test set recall for truncations 50 to 500 in increments of 50. Code
release by the authors of CTR and CDL is only applicable to item cold start so these baselines are
excluded from user cold start evaluation.
location, title/tags, career level and other related information; see [2] for full description of the data.
After cleaning and transforming all categorical inputs into 1-of-n representation we ended up with
831 user features and 2738 item features forming Φ^U and Φ^V respectively.
We process the interaction data by removing duplicate interactions (i.e. multiple clicks on the
same item) and deletes, and collapse remaining interactions into a single binary matrix R where
R(u, v) = 1 if user u interacted with job v and R(u, v) = 0 otherwise. We then split the data
forward in time using interactions from the last two weeks as the test set. To evaluate both warm
and cold start scenarios simultaneously, test set interactions are further split into three groups: warm
start, user cold start and item cold start. The three groups contain approximately 426K, 159K and
184K interactions respectively with a total of 42,153 cold start users and 49,975 cold start items;
training set contains 18.7M interactions. Cold start users and items are obtained by removing all
training interactions for randomly selected subsets of users and items. The goal is to train a single
model that is able to handle all three tasks. This simulates real-world scenarios for many online services like XING where new users and items are added daily and need to be recommended together
with existing users and items. We set rank D = 200 for all models and in all experiments train
our model (denoted DN-WMF) using latent representations from WMF. During training we alternate between applying dropout and inference approximation (see Section 4.3) for users and items in
each mini-batch with a rate of 0.5. For CTR and CDL the code released by respective authors only
supports item cold start so we evaluate these models on warm start and item cold start tasks only.
Table 2: RecSys recall@100 warm, user cold start and item cold start results for different DNN architectures. We use tanh activations and batch norm in each layer.

Network Architecture    Warm    User    Item
WMF                     0.426   --      --
400                     0.421   0.211   0.234
800 × 400               0.420   0.229   0.255
800 × 800 × 400         0.412   0.231   0.265

To find the appropriate DNN architecture we conduct extensive experiments using increasingly deeper DNNs. We follow the approach of [6] and use a pyramid structure where the network gradually compresses the input with each successive layer. For all architectures we use fully connected layers with batch norm [14] and tanh activation functions; other activation functions such as ReLU and sigmoid produced significantly worse results. All models were trained using WMF as the input latent model, however note that WMF cannot be applied to either user or item cold
start. Table 2 shows warm start, user cold start, and item cold start recall at 100 results as the number
of layers is increased from one to four. From the table we see that up to three layers, the accuracy on
both cold start tasks steadily improves with each additional layer while the accuracy on warm start
remains approximately the same. These results suggest that deeper architectures are highly useful
for this task. We use the three layer model in all experiments.
RecSys results are shown in Figure 3. From warm start results in Figure 3(a) we see a similar pattern
where all baselines perform comparably except DeepMusic, suggesting that content only models are
unlikely to perform well on warm start. User and item cold start results are shown in Figures 3(b)
and 3(c) respectively. From the figures we see that DeepMusic is the best performing baseline
significantly beating the next best baseline CTR on the item cold start. We also see that DN-WMF
significantly outperforms DeepMusic with over 50% relative improvement for most truncations.
This is despite the fact that DeepMusic was trained using the same 3-layer architecture and the
same objective function as DN-WMF. These results further indicate that incorporating preference
information as input into the model is highly important even when the end goal is cold start.
User inference results are shown in Figure 4. We randomly selected a subset of 10K cold start users that have
at least 5 training interactions. Note that all training interactions were removed for these users during training to
simulate cold start. For each of the selected users we then
incorporate training interactions one at a time into the
model in chronological order using the inference procedure outlined in Section 4.3. Resulting latent representations are tested on the test set. Figure 4 shows recall@100
results as number of interactions is increased from 0 (cold
start) to 5. We compare with WMF by applying a similar procedure from Equation 4 to WMF representations.

Figure 4: User inference results as number of interactions is increased from 0 (cold start) to 5.

From the figure it is seen that our model is able to seamlessly transition from cold start to preferences without re-
training. Moreover, even though our model uses WMF as
input it is able to significantly outperform WMF at all interaction sizes. Item inference results are
similar and are omitted. These results indicate that training with inference approximations achieves
the desired effect allowing our model to transition from cold start to first few preferences without
re-training and with excellent generalization.
6 Conclusion
We presented DropoutNet, a deep neural network model for cold start in recommender systems.
DropoutNet applies input dropout during training to condition for missing preference information.
Optimization with missing data forces the model to leverage preference and content information
without explicitly relying on both being present. This leads to excellent generalization on both warm
and cold start scenarios. Moreover, unlike existing approaches that typically have complex multi-term objective functions, our objective only has a single term and is easy to implement and optimize.
DropoutNet can be applied on top of any existing latent model, effectively providing cold-start capabilities and leveraging the full power of deep architectures for content modeling. Empirically, we
demonstrate state-of-the-art results on two public benchmarks. Future work includes investigating
objective functions that directly incorporate preference information with the aim of improving warm
start accuracy beyond the input latent model. We also plan to explore different DNN architectures
for both user and item models to better leverage diverse content types.
References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro,
Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale
machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467,
2016.
[2] F. Abel, Y. Deldjo, M. Elahi, and D. Kohlsdorf. Recsys challenge 2017. http://2017.recsyschallenge.com, 2017.
[3] D. Agarwal and B.-C. Chen. Regression-based latent factor models. In Conference on Knowledge Discovery and Data Mining, 2009.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. Journal of Machine
Learning Research, 3, 2003.
[5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 2011.
[6] P. Covington, J. Adams, and E. Sargin. Deep neural networks for youtube recommendations.
In ACM Recommender Systems, 2016.
[7] A. Van den Oord, S. Dieleman, and B. Schrauwen. Deep content-based music recommendation. In Neural Information Processing Systems, 2013.
[8] P. Gopalan, J. M. Hofman, and D. M. Blei. Scalable recommendation with poisson factorization. arXiv:1311.1704, 2013.
[9] P. K. Gopalan, L. Charlin, and D. Blei. Content-based recommendations with poisson factorization. In Neural Information Processing Systems, 2014.
[10] A. Graves, A.-R. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural
networks. In Conference on Acoustics, Speech, and Signal Processing, 2013.
[11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385, 2015.
[12] G. E. Hinton, L. Deng, D. Yu, G. Dahl, A.-R. Mohamed, and N. Jaitly. Deep neural networks
for acoustic modeling in speech recognition: The shared views of four research groups. IEEE
Signal Processing, 2012.
[13] Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for implicit feedback datasets. In
International Conference on Data Engineering, 2008.
[14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. In International Conference on Machine Learning, 2015.
[15] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In Neural Information Processing Systems, 2012.
[16] Q. V. Le and T. Mikolov. Distributed representations of sentences and documents. In International Conference on Machine Learning, 2014.
[17] L. V. D. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning
Research, 2008.
[18] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research,
2014.
[19] X. Su and T. M. Khoshgoftaar. A survey of collaborative filtering techniques. Advances in
Artificial Intelligence, 2009.
[20] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
Journal of Machine Learning Research, 2010.
[21] C. Wang and D. M. Blei. Collaborative topic modeling for recommending scientific articles.
In Conference on Knowledge Discovery and Data Mining, 2011.
[22] H. Wang, N. Wang, and D.-Y. Yeung. Collaborative deep learning for recommender systems.
In Conference on Knowledge Discovery and Data Mining, 2015.
[23] C.-Y. Wu, A. Ahmed, A. Beutel, A. Smola, and H. Jing. Recurrent recommender networks. In
Conference on Web Search and Data Mining, 2017.
A simple neural network module
for relational reasoning
Adam Santoro*
[email protected]
Mateusz Malinowski
[email protected]
David Raposo*
[email protected]
Razvan Pascanu
[email protected]
David G.T. Barrett
[email protected]
Peter Battaglia
[email protected]
Timothy Lillicrap
DeepMind
London, United Kingdom
[email protected]
Abstract
Relational reasoning is a central component of generally intelligent behavior, but
has proven difficult for neural networks to learn. In this paper we describe how to
use Relation Networks (RNs) as a simple plug-and-play module to solve problems
that fundamentally hinge on relational reasoning. We tested RN-augmented networks on three tasks: visual question answering using a challenging dataset called
CLEVR, on which we achieve state-of-the-art, super-human performance; text-based question answering
about dynamic physical systems. Then, using a curated dataset called Sort-ofCLEVR we show that powerful convolutional networks do not have a general
capacity to solve relational questions, but can gain this capacity when augmented
with RNs. Thus, by simply augmenting convolutions, LSTMs, and MLPs with
RNs, we can remove computational burden from network components that are
not well-suited to handle relational reasoning, reduce overall network complexity,
and gain a general ability to reason about the relations between entities and their
properties.
1 Introduction
The ability to reason about the relations between entities and their properties is central to generally
intelligent behavior (Figure 1) [10, 7]. Consider a child proposing a race between the two trees in the
park that are furthest apart: the pairwise distances between every tree in the park must be inferred and
compared to know where to run. Or, consider a reader piecing together evidence to predict the culprit
in a murder-mystery novel: each clue must be considered in its broader context to build a plausible
narrative and solve the mystery.
Symbolic approaches to artificial intelligence are inherently relational [16, 5]. Practitioners define
the relations between symbols using the language of logic and mathematics, and then reason about
these relations using a multitude of powerful methods, including deduction, arithmetic, and algebra.
But symbolic approaches suffer from the symbol grounding problem and are not robust to small
∗ Equal contribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 panels: an example CLEVR image; non-relational question: "What is the size of the brown sphere?"; relational question: "Are there any rubber things that have the same size as the yellow metallic cylinder?"]
Figure 1: An illustrative example from the CLEVR dataset of relational reasoning. An image
containing four objects is shown alongside non-relational and relational questions. The relational
question requires explicit reasoning about the relations between the four objects in the image, whereas
the non-relational question requires reasoning about the attributes of a particular object.
task and input variations [5]. Other approaches, such as those based on statistical learning, build
representations from raw data and often generalize across diverse and noisy conditions [12]. However,
a number of these approaches, such as deep learning, often struggle in data-poor problems where the
underlying structure is characterized by sparse but complex relations [3, 11]. Our results corroborate
these claims, and further demonstrate that seemingly simple relational inferences are remarkably
difficult for powerful neural network architectures such as convolutional neural networks (CNNs)
and multi-layer perceptrons (MLPs).
Here, we explore "Relation Networks" (RN) as a general solution to relational reasoning in neural
networks. RNs are architectures whose computations focus explicitly on relational reasoning [18].
Although several other models supporting relation-centric computation have been proposed, such
as Graph Neural Networks, Gated Graph Sequence Neural Networks, and Interaction Networks,
[20, 13, 2], RNs are simpler, more exclusively focused on general relation reasoning, and easier to
integrate within broader architectures. Moreover, RNs require minimal oversight to construct their
input, and can be applied successfully to tasks even when provided with relatively unstructured inputs
coming from CNNs and LSTMs. We applied an RN-augmented architecture to CLEVR [7], a recent
visual question answering (QA) dataset on which state-of-the-art approaches have struggled due to the
demand for rich relational reasoning. Our networks vastly outperformed the best generally-applicable
visual QA architectures, and achieve state-of-the-art, super-human performance. RNs also solve
CLEVR from state descriptions, highlighting their versatility in regards to the form of their input. We
also applied an RN-based architecture to the bAbI text-based QA suite [22] and solved 18/20 of the
subtasks. Finally, we trained an RN to make challenging relational inferences about complex physical
systems and motion capture data. The success of RNs across this set of substantially dissimilar task
domains is testament to the general utility of RNs for solving problems that require relation reasoning.
2 Relation Networks
An RN is a neural network module with a structure primed for relational reasoning. The design
philosophy behind RNs is to constrain the functional form of a neural network so that it captures the
core common properties of relational reasoning. In other words, the capacity to compute relations is
baked into the RN architecture without needing to be learned, just as the capacity to reason about
spatial, translation invariant properties is built-in to CNNs, and the capacity to reason about sequential
dependencies is built into recurrent neural networks.
In its simplest form the RN is a composite function:
RN(O) = f_φ( Σ_{i,j} g_θ(o_i, o_j) ),        (1)
where the input is a set of "objects" O = {o_1, o_2, ..., o_n}, o_i ∈ R^m is the i-th object, and f_φ and g_θ
are functions with parameters φ and θ, respectively. For our purposes, f_φ and g_θ are MLPs, and the
parameters are learnable synaptic weights, making RNs end-to-end differentiable. We call the output
of g_θ a "relation"; therefore, the role of g_θ is to infer the ways in which two objects are related, or if
they are even related at all.
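To make Eq. (1) concrete, here is a minimal sketch of an RN module in PyTorch. The class name, layer sizes, and the batched construction of all object pairs are our own illustrative choices rather than the exact experimental architecture.

```python
import torch
import torch.nn as nn

class RelationNetwork(nn.Module):
    """Minimal RN: RN(O) = f_phi( sum over (i,j) of g_theta(o_i, o_j) )."""

    def __init__(self, obj_dim, hidden=256, out_dim=29):
        super().__init__()
        # g_theta consumes a concatenated object pair, hence 2 * obj_dim inputs.
        self.g = nn.Sequential(nn.Linear(2 * obj_dim, hidden), nn.ReLU(),
                               nn.Linear(hidden, hidden), nn.ReLU())
        self.f = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                               nn.Linear(hidden, out_dim))

    def forward(self, objects):              # objects: (batch, n, obj_dim)
        b, n, d = objects.shape
        # Build all n^2 ordered pairs (o_i, o_j) by broadcasting.
        oi = objects.unsqueeze(2).expand(b, n, n, d)
        oj = objects.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([oi, oj], dim=-1).reshape(b, n * n, 2 * d)
        relations = self.g(pairs)            # one shared g_theta for every pair
        return self.f(relations.sum(dim=1))  # order-invariant sum, then f_phi
```

Because the sum commutes, permuting the rows of `objects` leaves the output unchanged, which is exactly the set-invariance property discussed below.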
RNs have three notable strengths: they learn to infer relations, they are data efficient, and they operate
on a set of objects (a particularly general and versatile input format) in a manner that is order
invariant.
RNs learn to infer relations. The functional form in Equation 1 dictates that an RN should consider
the potential relations between all object pairs. This implies that an RN is not necessarily privy to
which object relations actually exist, nor to the actual meaning of any particular relation. Thus, RNs
must learn to infer the existence and implications of object relations.
In graph theory parlance, the input can be thought of as a complete and directed graph whose nodes
are objects and whose edges denote the object pairs whose relations should be considered. Although
we focus on this "all-to-all" version of the RN throughout this paper, this RN definition can be
adjusted to consider only some object pairs. Similar to Interaction Networks [2], to which RNs are
related, RNs can take as input a list of only those pairs that should be considered, if this information
is available. This information could be explicit in the input data, or could perhaps be extracted by
some upstream mechanism.
RNs are data efficient. RNs use a single function g_θ to compute each relation. This can be thought
of as a single function operating on a batch of object pairs, where each member of the batch is a
particular object-object pair from the same object set. This mode of operation encourages greater
generalization for computing relations, since g_θ is encouraged not to over-fit to the features of any
particular object pair. Consider how an MLP would learn the same function. An MLP would receive
all objects from the object set simultaneously as its input. It must then learn and embed n² (where n
is the number of objects) identical functions within its weight parameters to account for all possible
object pairings. This quickly becomes intractable as the number of objects grows. Therefore, the cost
of learning a relation function n² times using a single feedforward pass per sample, as in an MLP, is
replaced by the cost of n² feedforward passes per object set (i.e., for each possible object pair in the
set) and learning a relation function just once, as in an RN.
RNs operate on a set of objects. The summation in Equation 1 ensures that the RN is invariant to
the order of objects in its input, respecting the property that sets are order invariant. Although we
used summation, other commutative operators, such as max and average pooling, can be used
instead.
3 Tasks
We applied RN-augmented networks to a variety of tasks that hinge on relational reasoning. To
demonstrate the versatility of these networks we chose tasks from a number of different domains,
including visual QA, text-based QA, and dynamic physical systems.
3.1 CLEVR
In visual QA a model must learn to answer questions about an image (Figure 1). This is a challenging
problem domain because it requires high-level scene understanding [1, 14]. Architectures must
perform complex relational reasoning, spatial and otherwise, over the features in the visual inputs,
language inputs, and their conjunction. However, the majority of visual QA datasets require reasoning
in the absence of fully specified word vocabularies, and perhaps more perniciously, a vast and
complicated knowledge of the world that is not available in the training data. They also contain
ambiguities and exhibit strong linguistic biases that allow a model to learn answering strategies that
exploit those biases, without reasoning about the visual input [1, 15, 19].
To control for these issues, and to distill the core challenges of visual QA, the CLEVR visual QA
dataset was developed [7]. CLEVR contains images of 3D-rendered objects, such as spheres and
cylinders (Figure 2). Each image is associated with a number of questions that fall into different
categories. For example, query attribute questions may ask "What is the color of the sphere?",
while compare attribute questions may ask "Is the cube the same material as the cylinder?".
For our purposes, an important feature of CLEVR is that many questions are explicitly relational in
nature. Remarkably, powerful QA architectures [24] are unable to solve CLEVR, presumably because
they cannot handle core relational aspects of the task. For example, as reported in the original paper a
model comprised of ResNet-101 image embeddings with LSTM question processing and augmented
with stacked attention modules vastly outperformed other models at an overall performance of 68.5%
(compared to 52.3% for the next best, and 92.6% human performance) [7]. However, for compare
attribute and count questions (i.e., questions heavily involving relations across objects), the
model performed little better than the simplest baseline, which answered questions solely based on
the probability of answers in the training set for a given question category (Q-type baseline).
We used two versions of the CLEVR dataset: (i) the pixel version, in which images were represented
in standard 2D pixel form. (ii) a state description version, in which images were explicitly represented
by state description matrices containing factored object descriptions. Each row in the matrix contained
the features of a single object: 3D coordinates (x, y, z); color (r, g, b); shape (cube, cylinder, etc.);
material (rubber, metal, etc.); size (small, large, etc.). When we trained our models, we used either
the pixel version or the state description version, depending on the experiment, but not both together.
3.2 Sort-of-CLEVR
To explore our hypothesis that the RN architecture is better suited to general relational reasoning as
compared to more standard neural architectures, we constructed a dataset similar to CLEVR that we
call "Sort-of-CLEVR"². This dataset separates relational and non-relational questions.
Sort-of-CLEVR consists of images of 2D colored shapes along with questions and answers about the
images. Each image has a total of 6 objects, where each object is a randomly chosen shape (square
or circle). We used 6 colors (red, blue, green, orange, yellow, gray) to unambiguously identify each
object. Questions are hard-coded as fixed-length binary strings to reduce the difficulty involved
with natural language question-word processing, and thereby remove any confounding difficulty
with language parsing. For each image we generated 10 relational questions and 10 non-relational
questions. Examples of relational questions are: "What is the shape of the object that is farthest from
the gray object?"; and "How many objects have the same shape as the green object?". Examples of
non-relational questions are: "What is the shape of the gray object?"; and "Is the blue object on the
top or bottom of the scene?". The dataset is also visually simple, reducing complexities involved in
image processing.
3.3 bAbI
bAbI is a pure text-based QA dataset [22]. There are 20 tasks, each corresponding to a particular
type of reasoning, such as deduction, induction, or counting. Each question is associated with a set
of supporting facts. For example, the facts "Sandra picked up the football" and "Sandra went to
the office" support the question "Where is the football?" (answer: "office"). A model succeeds on
a task if its performance surpasses 95%. Many memory-augmented neural networks have reported
impressive results on bAbI. When training jointly on all tasks using 10K examples per task, Memory
Networks pass 14/20, DNC 16/20, Sparse DNC 19/20, and EntNet 16/20 (the authors of EntNets
report state-of-the-art at 20/20; however, unlike previously reported results this was not done with
joint training on all tasks, where they instead achieve 16/20) [23, 4, 17, 6].
3.4 Dynamic physical systems
We developed a dataset of simulated physical mass-spring systems using the MuJoCo physics engine
[21]. Each scene contained 10 colored balls moving on a table-top surface. Some of the balls moved
independently, free to collide with other balls and the barrier walls. Other randomly selected ball
pairs were connected by invisible springs or a rigid constraint. These connections prevented the balls
from moving independently, due to the force imposed through the connections. Input data consisted
of state descriptions matrices, where each ball was represented as a row in a matrix with features
² The "Sort-of-CLEVR" dataset will be made publicly available online.
representing the rgb color values of each object and their spatial coordinates (x and y) across 16
sequential time steps.
The introduction of random links between balls created an evolving physical system with a variable
number of "systems" of connected balls (where "systems" refers to connected graphs with balls as
nodes and connections between balls as edges). We defined two separate tasks: 1) infer the existence
or absence of connections between balls when only observing their color and coordinate positions
across multiple sequential frames, and 2) count the number of systems on the table-top, again when
only observing each ball's color and coordinate position across multiple sequential frames.
Both of these tasks involve reasoning about the relative positions and velocities of the balls to infer
whether they are moving independently, or whether their movement is somehow dependent on the
movement of other balls through invisible connections. For example, if the distance between two
balls remains similar across frames, then it can be inferred that there is a connection between them.
The first task makes these inferences explicit, while the second task demands that this reasoning
occur implicitly, which is much more difficult. For further information on all tasks, including videos
of the dynamic systems, see the supplementary information.
4 Models
In their simplest form RNs operate on objects, and hence do not explicitly operate on images or
natural language. A central contribution of this work is to demonstrate the flexibility with which
relatively unstructured inputs, such as CNN or LSTM embeddings, can be considered as a set of
objects for an RN. As we describe below, we require minimal oversight in factorizing the RN?s input
into a set of objects.
[Figure 2 diagram: a CNN converts the input image into final feature maps; feature-map cells become objects, which are paired and combined with the LSTM embedding of the question ("What size is the cylinder that is left of the brown metal thing that is left of the big sphere?"); each pair passes through the g_θ-MLP, the results are summed element-wise, and the f_φ-MLP produces the answer ("small").]
Figure 2: Visual QA architecture. Questions are processed with an LSTM to produce a question
embedding, and images are processed with a CNN to produce a set of objects for the RN. Objects
(three examples illustrated here in yellow, red, and blue) are constructed using feature-map vectors
from the convolved image. The RN considers relations across all pairs of objects, conditioned on the
question embedding, and integrates all these relations to answer the question.
Dealing with pixels. We used a CNN to parse pixel inputs into a set of objects. The CNN took
images of size 128 × 128 and convolved them through four convolutional layers to k feature maps of
size d × d, where k is the number of kernels in the final convolutional layer. We remained agnostic as
to what particular image features should constitute an object. So, after convolving the image, each of
the d² k-dimensional cells in the d × d feature maps was tagged with a coordinate (from the range
(−1, 1) for each of the x- and y-coordinates)³ indicating its relative spatial position, and was treated
as an object for the RN (see Figure 2). This means that an "object" could comprise the background, a
particular physical object, a texture, conjunctions of physical objects, etc., which affords the model
great flexibility in the learning process.
³ We also experimented without this tagging, and achieved performance of 88% on the validation set.
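As a sketch of this object construction (the function name and shapes are ours), each feature-map cell is concatenated with its relative (x, y) tag and the map is flattened into a set of d² objects:

```python
import torch

def feature_map_to_objects(fmap):
    """Turn a CNN feature map (batch, k, d, d) into d*d objects of size k+2,
    tagging each cell with its (x, y) position scaled to (-1, 1)."""
    b, k, d, _ = fmap.shape
    coords = torch.linspace(-1.0, 1.0, d)
    ys, xs = torch.meshgrid(coords, coords, indexing="ij")
    tag = torch.stack([xs, ys]).unsqueeze(0).expand(b, 2, d, d)
    objs = torch.cat([fmap, tag], dim=1)      # (b, k+2, d, d)
    return objs.flatten(2).transpose(1, 2)    # (b, d*d, k+2)
```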
Conditioning RNs with question embeddings. The existence and meaning of an object-object
relation should be question dependent. For example, if a question asks about a large sphere, then the
relations between small cubes are probably irrelevant. So, we modified the RN architecture such that
g_θ could condition its processing on the question: a = f_φ( Σ_{i,j} g_θ(o_i, o_j, q) ). To get the question
embedding q, we used the final state of an LSTM that processed question words. Question words
were assigned unique integers, which were then used to index a learnable lookup table that provided
embeddings to the LSTM. At each time-step, the LSTM received a single word embedding as input,
according to the syntax of the English-encoded question.
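A hedged sketch of this conditioning, reusing the pair tensor from the RN sketch above; the argument layout is our own:

```python
import torch

def conditioned_relations(pairs, q, g, f):
    """a = f_phi( sum over (i,j) of g_theta(o_i, o_j, q) ).
    pairs: (b, n*n, 2*obj_dim); q: (b, q_dim) final LSTM state; g, f: MLPs
    whose input sizes account for the appended question embedding."""
    q_rep = q.unsqueeze(1).expand(-1, pairs.size(1), -1)  # copy q to each pair
    relations = g(torch.cat([pairs, q_rep], dim=-1))
    return f(relations.sum(dim=1))
```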
Dealing with state descriptions. We can provide state descriptions directly into the RN, since state
descriptions are pre-factored object representations. Question processing can proceed as before:
questions pass through an LSTM using a learnable lookup embedding for individual words, and the
final state of the LSTM is concatenated to each object-pair.
Dealing with natural language. For the bAbI suite of tasks the natural language inputs must be
transformed into a set of objects. This is a distinctly different requirement from visual QA, where
objects were defined as spatially distinct regions in convolved feature maps. So, we first took the
20 sentences in the support set that were immediately prior to the probe question. Then, we tagged
these sentences with labels indicating their relative position in the support set, and processed each
sentence word-by-word with an LSTM (with the same LSTM acting on each sentence independently).
We note that this setup invokes minimal prior knowledge, in that we delineate objects as sentences,
whereas previous bAbI models processed all word tokens from all support sentences sequentially.
It's unclear how much of an advantage this prior knowledge provides, since period punctuation also
unambiguously delineates sentences for the token-by-token processing models. The final state of the
sentence-processing-LSTM is considered to be an object. Similar to visual QA, a separate LSTM
produced a question embedding, which was appened to each object pair as input to the RN. Our
model was trained on the joint version of bAbI (all 20 tasks simultaneously), using the full dataset of
10K examples per task.
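A sketch of the object construction for bAbI (names and shapes are ours): the last 20 support sentences are each run through one shared LSTM, and the final state is concatenated with a learnable position tag.

```python
import torch
import torch.nn as nn

def babi_objects(sentences, lstm, pos_embed):
    """sentences: list of (len_i, word_dim) word-embedding tensors;
    lstm: a shared nn.LSTM(word_dim, hidden); pos_embed: (20, p) tags."""
    objs = []
    for pos, sent in enumerate(sentences[-20:]):
        _, (h, _) = lstm(sent.unsqueeze(1))   # input shape (len, batch=1, word_dim)
        objs.append(torch.cat([h[-1, 0], pos_embed[pos]]))
    return torch.stack(objs)                  # (n_sentences, hidden + p)
```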
Model configuration details. For the CLEVR-from-pixels task we used: 4 convolutional layers,
each with 24 kernels, ReLU non-linearities, and batch normalization; a 128-unit LSTM for question
processing; 32-unit word-lookup embeddings; a four-layer MLP consisting of 256 units per layer with
ReLU non-linearities for g_θ; and a three-layer MLP consisting of 256, 256 (with 50% dropout), and
29 units with ReLU non-linearities for f_φ. The final layer was a linear layer that produced logits for a
softmax over the answer vocabulary. The softmax output was optimized with a cross-entropy loss
function using the Adam optimizer with a learning rate of 2.5e−4. We used size-64 mini-batches
and distributed training with 10 workers synchronously updating a central parameter server. The
configurations for the other tasks are similar, and can be found in the supplementary information.
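For reference, the hyperparameters above transcribed into a plain dictionary (the key names are ours):

```python
# Hyperparameters for the CLEVR-from-pixels model, as stated in the text.
CLEVR_CONFIG = dict(
    conv_layers=4, conv_kernels=24,          # ReLU + batch normalization
    question_lstm_units=128, word_embedding_dim=32,
    g_theta_mlp=(256, 256, 256, 256),        # four layers, ReLU
    f_phi_mlp=(256, 256, 29),                # 50% dropout on the middle 256 layer
    optimizer="Adam", learning_rate=2.5e-4,
    batch_size=64, workers=10,               # synchronous distributed training
)
```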
We'd like to emphasize the simplicity of our overall model architecture compared to the visual QA
architectures used on CLEVR thus far, which use ResNet or VGG embeddings, sometimes with
fine-tuning, very large LSTMs for language encoding, and further processing modules, such as
stacked or iterative attention, or large fully connected layers (upwards of 4000 units, often) [7].
5 Results
5.1 CLEVR from pixels
Our model achieved state-of-the-art performance on CLEVR at 95.5%, exceeding the best model
trained only on the pixel images and questions at the time of the dataset's publication by 27%, and
surpassing human performance in the task (see Table 1 and Figure 3).
These results, in particular those obtained in the compare attribute and count categories, are
a testament to the ability of our model to do relational reasoning. In fact, it is in these categories that
state-of-the-art models struggle most. Furthermore, the relative simplicity of the network components
used in our model suggests that the difficulty of the CLEVR task lies in its relational reasoning
demands, not on the language or the visual processing.
Many CLEVR questions involve computing and comparing more than one relation; for example,
consider the question: "There is a big thing on the right side of the big rubber cylinder that is behind
Model           | Overall | Count | Exist | Compare Numbers | Query Attribute | Compare Attribute
Human           | 92.6    | 86.7  | 96.6  | 86.5            | 95.0            | 96.0
Q-type baseline | 41.8    | 34.6  | 50.2  | 51.0            | 36.0            | 51.3
LSTM            | 46.8    | 41.7  | 61.1  | 69.8            | 36.8            | 51.8
CNN+LSTM        | 52.3    | 43.7  | 65.2  | 67.1            | 49.3            | 53.0
CNN+LSTM+SA     | 68.5    | 52.2  | 71.1  | 73.5            | 85.3            | 52.3
CNN+LSTM+SA*    | 76.6    | 64.4  | 82.7  | 77.4            | 82.6            | 75.4
CNN+LSTM+RN     | 95.5    | 90.1  | 97.8  | 93.6            | 97.9            | 97.1
* Our implementation, with optimized hyperparameters and trained end-to-end using the same
CNN as in our RN model. We also tagged coordinates, which did not improve performance.
Table 1: Results on CLEVR from pixels. Performances of our model (RN) and previously reported
models [8], measured as accuracy on the test set and broken down by question category.
the large cylinder to the right of the tiny yellow rubber thing; what is its shape?", which has three
spatial relations ("right side", "behind", "right of"). On such questions, our model achieves 93%
performance, indicating that the model can handle complex relational reasoning.
Results using privileged training information. A more recent study reports overall performance
of 96.9% on CLEVR, but uses additional supervisory signals on the functional programs used to
generate the CLEVR questions [8]. It is not possible for us to directly compare this to our work since
we do not use these additional supervision signals. Nonetheless, our approach greatly outperforms
a version of their model that was not trained with these extra signals, and even a version of their
model trained using 9K ground-truth programs. Thus, RNs can achieve very competitive, and even
super-human results under much weaker and more natural assumptions, and even in situations when
functional programs are unavailable.
[Figure 3 residue: grouped bar charts of test accuracy, one group per question category (overall; count; exist; compare numbers: more than / less than / equal; query attribute: color / size / shape / material; compare attribute: size / shape / material / color), with bars for Human, CNN+LSTM+RN, CNN+LSTM+SA, CNN+LSTM, LSTM, and the Q-type baseline.]
Figure 3: Results on CLEVR from pixels. The RN-augmented model outperformed all other models
and exhibited super-human performance overall. In particular, it solved "compare attribute" questions,
which trouble all other models because they heavily depend on relational reasoning.
5.2 CLEVR from state descriptions
To demonstrate that the RN is robust to the form of its input, we trained our model on the state
description matrix version of the CLEVR dataset. The model achieved an accuracy of 96.4%. This
result demonstrates the generality of the RN module, showing its capacity to learn and reason
about object relations while being agnostic to the kind of inputs it receives, i.e., to the particular
representation of the object features to which it has access. Therefore, RNs are not necessarily
restricted to visual problems, and can thus be applied in very different contexts, and to different tasks
that require relational reasoning.
5.3 Sort-of-CLEVR from pixels
The results so far led us to hypothesize that the difficulty in solving CLEVR lies in its heavy emphasis
on relational reasoning, contrary to previous claims that the difficulty lies in question parsing [9].
However, the questions in the CLEVR dataset are not categorized based on the degree to which they
may be relational, making it hard to assess our hypothesis. Therefore, we use the Sort-of-CLEVR
dataset which we explicitly designed to seperate out relational and non-relational questions (see
Section 3.2).
We find that a CNN augmented with an RN achieves an accuracy above 94% for both relational and
non-relational questions. However, a CNN augmented with an MLP only reached this performance
on the non-relational questions, plateauing at 63% on the relational questions. This strongly indicates
that models lacking a dedicated relational reasoning component struggle, or may even be completely
incapable of solving tasks that require very simple relational reasoning. Augmenting these models
with a relational module, like the RN, is sufficient to overcome this hurdle.
A simple "closest-to" or "furthest-from" relation is particularly revealing of a CNN+MLP's lack
of general reasoning capabilities (52.3% success). For these relations a model must gauge the
distances between each object, and then compare each of these distances. Moreover, depending on
the images, the relevant distance could be quite small in magnitude, or quite large, further increasing
the combinatoric difficulty of this task.
5.4 bAbI
Our model succeeded on 18/20 tasks. Notably, it succeeded on the basic induction task (2.1%
total error), which proved difficult for the Sparse DNC (54%), DNC (55.1%), and EntNet (52.1%).
Also, our model did not catastrophically fail in any of the tasks: for the 2 tasks that it failed (the
"two supporting facts" and "three supporting facts" tasks), it missed the 95% threshold by 3.1%
and 11.5%, respectively. We also note that the model we evaluated was chosen based on overall
performance on a withheld validation set, using a single seed. That is, we did not run multiple replicas
with the best hyperparameter settings (as was done in other models, such as the Sparse DNC, which
demonstrated performance fluctuations with a standard deviation of more than ±3 tasks passed for
the best choice of hyperparameters).
5.5 Dynamic physical systems
Finally, we trained our model on two tasks requiring reasoning about the dynamics of balls moving
along a surface. In the connection inference task, our model correctly classified all the connections in
93% of the sample scenes in the test set. In the counting task, the RN achieved similar performance,
reporting the correct number of connected systems for 95% of the test scene samples. In comparison,
an MLP with comparable number of parameters was unable to perform better than chance for both
tasks. Moreover, using this task to learn to infer relations results in transfer to unseen motion capture
data, where RNs predict the connections between body joints of a walking human (see Supplementary
Material for experimental details and example videos).
6 Discussion and Conclusions
RNs are powerful, versatile, and simple neural network modules with the capacity for relational
reasoning. The performance of RN-augmented networks on CLEVR is especially notable; they
significantly improve upon current general purpose, state-of-the-art models (upwards of 25%),
indicating that previous architectures lacked a fundamental, general capacity to reason about relations.
Moreover, these results unveil an important distinction between the often confounded notions of
processing and reasoning. Powerful visual QA architectures contain components, such as ResNets,
which are highly capable visual processors capable of detecting complicated textures and forms.
However, as demonstrated by CLEVR, they lack an ability to reason about the features they detect.
RNs can easily exploit foreknowledge of the relations that should be computed for a particular task.
Indeed, especially in circumstances with strong computational constraints, bounding the otherwise
quadratic complexity of the number of relations could be advantageous. Attentional mechanisms
could reduce the number of objects fed as input to the RN, and hence reduce the number of relations
that need to be considered. Or, using an additional down-sampling convolutional or pooling layer
could further reduce the number of objects provided as input to the RN; indeed, max-pooling to 4 × 4
feature maps reduces the total number of objects, and hence computed relations, and results in 87%
performance on the validation set.
RNs have a flexible input format: a set of objects. Our results show that, strikingly, the set of objects
does not need to be cleverly pre-factored. RNs learn to deal with "object" representations provided
by CNNs and LSTMs, presumably by influencing the content and form of the object representations
via the gradients they propagate.
In future work it would be interesting to apply RNs to relational reasoning across highly abstract
entities (for example, decisions in hierarchical reinforcement learning tasks). Relation reasoning is a
central component of generally intelligent behavior, and so, we expect the RN to be a simple-to-use,
useful and widely used neural module.
Acknowledgments
We would like to thank Murray Shanahan, Ari Morcos, Scott Reed, Daan Wierstra, and many others
on the DeepMind team, for critical feedback and discussions.
References
[1] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick,
and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
[2] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for
learning about objects, relations and physics. In NIPS, 2016.
[3] Marta Garnelo, Kai Arulkumaran, and Murray Shanahan. Towards deep symbolic reinforcement learning.
arXiv:1609.05518, 2016.
[4] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska,
Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing
using a neural network with dynamic external memory. Nature, 2016.
[5] Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346,
1990.
[6] Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. Tracking the world state
with recurrent entity networks. In ICLR, 2017.
[7] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross
Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In
CVPR, 2017.
[8] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence
Zitnick, and Ross Girshick. Inferring and executing programs for visual reasoning. arXiv:1705.03633,
2017.
[9] Kushal Kafle and Christopher Kanan.
arXiv:1703.09684, 2017.
An analysis of visual question answering algorithms.
[10] Charles Kemp and Joshua B Tenenbaum. The discovery of structural form. Proceedings of the National
Academy of Sciences, 105(31):10687-10692, 2008.
[11] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines
that learn and think like people. arXiv:1604.00289, 2016.
[12] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[13] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks.
ICLR, 2016.
[14] Mateusz Malinowski and Mario Fritz. A multi-world approach to question answering about real-world
scenes based on uncertain input. In NIPS, 2014.
[15] Mateusz Malinowski, Marcus Rohrbach, and Mario Fritz. Ask your neurons: A deep learning approach to
visual question answering. arXiv:1605.02697, 2016.
[16] Allen Newell. Physical symbol systems. Cognitive science, 4(2):135-183, 1980.
[17] Jack Rae, Jonathan J Hunt, Ivo Danihelka, Timothy Harley, Andrew W Senior, Gregory Wayne, Alex
Graves, and Tim Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In
NIPS, 2016.
[18] David Raposo, Adam Santoro, David Barrett, Razvan Pascanu, Timothy Lillicrap, and Peter Battaglia.
Discovering objects and their relations from entangled scene representations. arXiv:1702.05068, 2017.
[19] Mengye Ren, Ryan Kiros, and Richard Zemel. Image question answering: A visual semantic embedding
model and a new dataset. In NIPS, 2015.
[20] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The
graph neural network model. IEEE Transactions on Neural Networks, 2009.
[21] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In
IROS, 2012.
[22] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. Towards ai-complete question
answering: A set of prerequisite toy tasks. arXiv:1502.05698, 2015.
[23] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR, 2015.
[24] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image
question answering. In CVPR, 2016.
6,725 | 7,083 | Q-LDA: Uncovering Latent Patterns in Text-based
Sequential Decision Processes
Jianshu Chen∗, Chong Wang†, Lin Xiao∗, Ji He‡, Lihong Li† and Li Deng‡
∗ Microsoft Research, Redmond, WA, USA
{jianshuc,lin.xiao}@microsoft.com
† Google Inc., Kirkland, WA, USA
{chongw,lihong}@google.com
‡ Citadel LLC, Seattle/Chicago, USA
{Ji.He,Li.Deng}@citadel.com
Abstract
In sequential decision making, it is often important and useful for end users to
understand the underlying patterns or causes that lead to the corresponding decisions. However, typical deep reinforcement learning algorithms seldom provide
such information due to their black-box nature. In this paper, we present a probabilistic model, Q-LDA, to uncover latent patterns in text-based sequential decision
processes. The model can be understood as a variant of latent topic models that
are tailored to maximize total rewards; we further draw an interesting connection
between an approximate maximum-likelihood estimation of Q-LDA and the celebrated Q-learning algorithm. We demonstrate in the text-game domain that our
proposed method not only provides a viable mechanism to uncover latent patterns
in decision processes, but also obtains state-of-the-art rewards in these games.
1 Introduction
Reinforcement learning [21] plays an important role in solving sequential decision making problems,
and has seen considerable successes in many applications [16, 18, 20]. With these methods, however,
it is often difficult to understand or examine the underlying patterns or causes that lead to the sequence
of decisions. Being more interpretable to end users can provide more insights to the problem itself
and be potentially useful for downstream applications based on these results [5].
To investigate new approaches to uncovering underlying patterns of a text-based sequential decision
process, we use text games (also known as interactive fictions) [11, 19] as the experimental domain.
Specifically, we focus on choice-based and hypertext-based games studied in the literature [11],
where both the action space and the state space are characterized in natural languages. At each time
step, the decision maker (i.e., agent) observes one text document (i.e., observation text) that describes
the current observation of the game environment, and several text documents (i.e., action texts) that
characterize different possible actions that can be taken. Based on the history of these observations,
the agent selects one of the provided actions and the game transits to a new state with an immediate
reward. This game continues until the agent reaches a final state and receives a terminal reward.
In this paper, we present a probabilistic model called Q-LDA that is tailored to maximize total
rewards in a decision process. Specially, observation texts and action texts are characterized by two
separate topic models, which are variants of latent Dirichlet allocation (LDA) [4]. In each topic
model, topic proportions are chained over time to model the dependencies for actions or states. And
The work was done while Chong Wang, Ji He, Lihong Li and Li Deng were at Microsoft Research.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
these proportions are partially responsible for generating the immediate/terminal rewards. We also
show an interesting connection between the maximum-likelihood parameter estimation of the model
and the Q-learning algorithm [22, 18]. We empirically demonstrate that our proposed method not
only provides a viable mechanism to uncover latent patterns in decision processes, but also obtains
state-of-the-art performance in these text games.
Contribution. The main contribution of this paper is to seamlessly integrate topic modeling with
Q-learning to uncover the latent patterns and interpretable causes in text-based sequential decision-making processes. Contemporary deep reinforcement learning models and algorithms can seldom
provide such information due to their black-box nature. To the best of our knowledge, there is no
prior work that can achieve this and learn the topic model in an end-to-end fashion to maximize the
long-term reward.
Related work. Q-LDA uses variants of LDA to capture observation and action texts in text-based
decision processes. In this model, the dependence of immediate reward on the topic proportions
is similar to supervised topic models [3], and the chaining of topic proportions over time to model
long-term dependencies on previous actions and observations is similar to dynamic topic models [6].
The novelty in our approach is that the model is estimated in a way that aims to maximize long-term
reward, thus producing near-optimal policies; hence it can also be viewed as a topic-model-based
reinforcement-learning algorithm. Furthermore, we show an interesting connection to the DQN
variant of Q-learning [18]. The text-game setup used in our experiment is most similar to previous
work [11] in that both observations and actions are described by natural languages, leading to
challenges in both representation and learning. The main difference from that previous work is that
those authors treat observation-texts as Markovian states. In contrast, our model is more general,
capturing both partial observability and long-term dependence on observations that are common
in many text-based decision processes such as dialogues. Finally, the choice of reward function in
Q-LDA shares similarity with that in Gaussian process temporal difference methods [9].
Organization. Section 2 describes the details of our probabilistic model, and draws a connection
to the Q-learning algorithm. Section 3 presents an end-to-end learning algorithm that is based on
mirror descent back-propagation. Section 4 demonstrates the empirical performance of our model,
and we conclude with discussions and future work in Section 5.
2 A Probabilistic Model for Text-based Sequential Decision Processes
In this section, we first describe text games as an example of sequential decision processes. Then, we
describe our probabilistic model, and relate it to a variant of Q-learning.
2.1 Sequential decision making in text games
Text games are an episodic task that proceeds in discrete time steps t ∈ {1, ..., T}, where the length
T may vary across different episodes. At time step t, the agent receives a text document of N words²
describing the current observation of the environment: w_t^S ≜ {w_{t,n}^S}_{n=1}^N. We call these words
observation text. The agent also receives A_t text documents, each of which describes a possible
action that the agent can take. We denote them by w_t^a ≜ {w_{t,n}^a}_{n=1}^N with a ∈ {1, ..., A_t}, where A_t
is the number of feasible actions and it could vary over time. We call these texts action texts. After the
agent takes one of the provided actions, the environment transits to time t + 1 with a new state and an
immediate reward r_t; both dynamics and reward generation may be stochastic and unknown. The new
state then reveals a new observation text w_{t+1}^S and several action texts w_{t+1}^a for a ∈ {1, ..., A_{t+1}}.
The transition continues until the end of the game at step T, when the agent receives a terminal reward
r_T. The reward r_T depends on the ending of the story in the text game: a good ending leads to a large
positive reward, while bad endings lead to negative rewards.
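A sketch of one episode under this protocol; the `env`/`agent` interfaces are hypothetical and only illustrate the flow of texts, actions, and rewards:

```python
def run_episode(env, agent):
    """Play one text-game episode; returns the cumulative reward.
    In the setting studied here, non-zero reward arrives only at the final step T."""
    obs_text, action_texts = env.reset()
    history, total_reward, done = [], 0.0, False
    while not done:
        a = agent.act(history, obs_text, action_texts)  # index into action_texts
        (obs_text, action_texts), reward, done = env.step(a)
        history.append(a)
        total_reward += reward
    return total_reward
```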
The goal of the agent is to maximize its cumulative reward by acting optimally in the environment.
At step t, given all observation texts w_{1:t}^S, all action texts w_{1:t}^A ≜ {w_{1:t}^a : ∀a}, previous actions
a_{1:t−1} and rewards r_{1:t−1}, the agent is to find a policy, π(a_t | w_{1:t}^S, w_{1:t}^A, a_{1:t−1}, r_{1:t−1}), a conditional
² For notation simplicity, we assume all texts have the same length N.
Figure 1: Graphical model representation for the studied sequential decision process. The bottom
section shows the observation topic models, which share the same topics in Φ_S, but the topic
distributions θ_t^S change with time t. The top section shows the action topic models, sharing the
same action topics in Φ_A, but with time-varying topic distribution θ_t^a for each a ∈ {1, ..., A_t}. The middle
section shows the dependence of variables between consecutive time steps. There are no plates for
the observation text (bottom part of the figure) because there is only one observation text document
at each time step. We follow the standard notation for graphical models by using shaded circles as
observables. Since the topic distributions θ_t^S and θ_t^a and the Dirichlet parameters α_t^S and α_t^A (except
α_1^S and α_1^A) are not observable, we need to use their MAP estimate to make end-to-end learning
feasible; see Section 3 for details. The figure characterizes the general case where rewards appear at
each time step, while in our experiments the (non-zero) rewards only appear at the end of the games.
probability of selecting action a_t, that maximizes the expected long-term reward E{ Σ_{τ=t}^T γ^{τ−t} r_τ },
where γ ∈ (0, 1) is a discount factor. In this paper, for simplicity of exposition, we focus on problems
where the reward is nonzero only in the final step T. While our algorithm can be generalized to the
general case (with greater complexity), this special case is an important case of RL (e.g., [20]). As a
result, the policy is independent of r_{1:t−1} and its form is simplified to π(a_t | w_{1:t}^S, w_{1:t}^A, a_{1:t−1}).
The problem setup is similar to previous work [11] in that both observations and actions are described
by natural languages. For actions described by natural languages, the action space is inherently
discrete and large due to the exponential complexity with respect to sentence length. This is
different from most reinforcement learning problems where the action spaces are either small
or continuous. Here, we take a probabilistic modeling approach to this challenge: the observed
variables (observation texts, action texts, selected actions, and rewards) are assumed to be generated
from a probabilistic latent variable model. By examining these latent variables, we aim to uncover
the underlying patterns that lead to the sequence of the decisions. We then show how the model is
related to Q-learning, so that estimation of the model leads to reward maximization.
2.2 The Q-LDA model
The graphical representation of our model, Q-LDA, is depicted in Figure 1. It has two instances of
topic models, one for observation texts and the other for action texts. The basic idea is to chain the
topic proportions (the θ's in the figure) in a way such that they can influence the topic proportions in the
future, thus capturing long-term effects of actions. Details of the generative models are as follows.
For the observation topic model, we use the columns of Φ_S ∼ Dir(β_S)³ to denote the topics for
the observation texts. For the action topic model, we use the columns of Φ_A ∼ Dir(β_A) to denote
the topics for the action texts. We assume these topics do not change over time. Given the initial
topic proportion Dirichlet parameters, α_1^S and α_1^A for observation and action texts respectively,
Q-LDA proceeds sequentially from t = 1 to T as follows (see Figure 1 for all latent variables).
³ Φ_S is a word-by-topic matrix. Each column is drawn from a Dirichlet distribution with hyper-parameter β_S,
representing the word-emission probabilities of the corresponding topic. Φ_A is similarly defined.
1. Draw observation text w_t^S as follows:
   (a) Draw observation topic proportions θ_t^S ∼ Dir(α_t^S).
   (b) Draw all words for the observation text w_t^S ∼ LDA(w_t^S | θ_t^S, Φ_S), where LDA(·)
       denotes the standard LDA generative process given its topic proportion θ_t^S and topics
       Φ_S [4]. The latent variable z_{t,n}^S indicates the topic for the word w_{t,n}^S.
2. For a = 1, ..., A_t, draw action text w_t^a as follows:
   (a) Draw action topic proportions θ_t^a ∼ Dir(α_t^A).
   (b) Draw all words for the a-th action text using w_t^a ∼ LDA(w_t^a | θ_t^a, Φ_A), where the latent
       variable z_{t,n}^a indicates the topic for the word w_{t,n}^a.
3. Draw the action: a_t ∼ π_b(a_t | w_{1:t}^S, w_{1:t}^A, a_{1:t−1}), where π_b is an exploration policy for data
   collection. It could be chosen in different ways, as discussed in the experiment Section 4.
   After model learning is finished, a greedy policy may be used instead (c.f., Section 3).
4. The immediate reward r_t is generated according to a Gaussian distribution with mean
   function μ_r(θ_t^S, θ_t^{a_t}, U) and variance σ_r²:

   r_t ∼ N( μ_r(θ_t^S, θ_t^{a_t}, U), σ_r² ).        (1)

   Here, we defer the definitions of μ_r(θ_t^S, θ_t^{a_t}, U) and its parameter U to the next section,
   where we draw a connection between likelihood-based learning and Q-learning.
5. Compute the topic proportion Dirichlet parameters for the next time step t + 1 as

   α_{t+1}^S = σ( W_SS θ_t^S + W_SA θ_t^{a_t} + α_1^S ),   α_{t+1}^A = σ( W_AS θ_t^S + W_AA θ_t^{a_t} + α_1^A ),        (2)

   where σ(x) ≜ max{x, ε} with ε being a small positive number (e.g., 10⁻⁶), a_t is the action
   selected by the agent at time t, and {W_SS, W_SA, W_AS, W_AA} are the model parameters to
   be learned. Note that, besides θ_t^S, the only topic proportion from {θ_t^a}_{a=1}^{A_t} that will influence
   α_{t+1}^S and α_{t+1}^A is θ_t^{a_t}, i.e., the one corresponding to the chosen action a_t. Furthermore, since
   θ_t^S and θ_t^{a_t} are generated according to Dir(α_t^S) and Dir(α_t^A), respectively, α_{t+1}^S and α_{t+1}^A
   are (implicitly) chained over time via θ_t^S and θ_t^{a_t} (c.f. Figure 1).
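A small sketch of the chaining in Eq. (2); the function and key names are ours:

```python
import numpy as np

def next_alphas(theta_S, theta_a, alpha1_S, alpha1_A, W, eps=1e-6):
    """Chain the Dirichlet parameters to step t+1 per Eq. (2).
    W is a dict of the learned matrices W_SS, W_SA, W_AS, W_AA; theta_a is
    the topic proportion of the chosen action a_t."""
    sigma = lambda x: np.maximum(x, eps)  # sigma(x) = max{x, eps}
    alpha_S = sigma(W["SS"] @ theta_S + W["SA"] @ theta_a + alpha1_S)
    alpha_A = sigma(W["AS"] @ theta_S + W["AA"] @ theta_a + alpha1_A)
    return alpha_S, alpha_A
```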
This generative process defines a joint distribution p(·) among all random variables depicted in
Figure 1. Running this generative process (steps 1 to 5 above) for T steps until the game ends
produces one episode of the game. Now suppose we already have M episodes. In this paper, we
choose to directly learn the conditional distribution of the rewards given other observations. By
learning the model in a discriminative manner [2, 7, 12, 15, 23], we hope to make better predictions
of the rewards for different actions, from which the agent could obtain the best policy for taking
actions. This can be obtained by applying Bayes rule to the joint distribution defined by the generative
process. Let Λ denote all model parameters: Λ = {Φ^S, Φ^A, U, W_{SS}, W_{SA}, W_{AS}, W_{AA}}. We have
the following loss function
    min_Λ { −ln p(Λ) − Σ_{i=1}^M ln p( r_{1:T_i} | w_{1:T_i}^S, w_{1:T_i}^A, a_{1:T_i}, Λ ) },        (3)
where p(Λ) denotes a prior distribution of the model parameters (e.g., Dirichlet parameters over
Φ^S and Φ^A), and T_i denotes the length of the i-th episode. Let K_S and K_A denote the numbers of
topics for the observation texts and action texts, and let V_S and V_A denote the vocabulary sizes for
the observation texts and action texts, respectively. Then, the total number of learnable parameters
for Q-LDA is: V_S × K_S + V_A × K_A + K_A × K_S + (K_S + K_A)².
We note that a good model learned through Eq. (3) may predict the values of rewards well, but might
not imply the best policy for the game. Next, we show that by defining the appropriate mean function
for the rewards, μ_r(θ_t^S, θ_t^{a_t}, U), we can achieve both. This closely resembles Q-learning [21, 22],
allowing us to effectively learn the policy in an iterative fashion.
2.3 From Q-LDA to Q-learning
Before relating Q-LDA to Q-learning, we first give a brief introduction to the latter. Q-learning [22,
18] is a reinforcement learning algorithm for finding an optimal policy in a Markov decision process
(MDP) described by (S, A, P, r, γ), where S is a state space, A is an action space, and γ ∈ (0, 1)
is a discount factor. Furthermore, P defines a transition probability p(s′|s, a) for going to the next
state s′ ∈ S from the current state s ∈ S after taking action a ∈ A, and r(s, a) is the immediate
reward corresponding to this transition. A policy π(a|s) in an MDP is defined to be the probability
of taking action a at state s. Let s_t and a_t be the state and action at time t, and let r_t = r(s_t, a_t) be
the immediate reward at time t. An optimal policy is the one that maximizes the expected long-term
discounted reward E{ Σ_{t=1}^{+∞} γ^{t−1} r_t }. Q-learning seeks to find the optimal policy by estimating the Q-function,
Q(s, a), defined as the expected long-term discounted reward for taking action a at state s and then
following an optimal policy thereafter. It satisfies the Bellman equation [21]
    Q(s, a) = E{ r(s, a) + γ · max_b Q(s′, b) | s, a },                          (4)
and directly gives the optimal action for any state s: arg max_a Q(s, a).
Q-learning solves for Q(s, a) iteratively based on observed state transitions. The basic Q-learning [22]
requires storing and updating the values of Q(s, a) for all state–action pairs in S × A, which is
not practical when S and A are large. This is especially true in our text games, where they can be
exponentially large. Hence, Q(s, a) is usually approximated by a parametric function Q_θ(s, a) (e.g.,
neural networks [18]), in which case the model parameter θ is updated by:
    θ ← θ + η · ∇_θ Q_θ · ( d_t − Q_θ(s_t, a_t) ),                              (5)
where d_t ≜ r_t + γ · max_{a′} Q_{θ′}(s_{t+1}, a′) if s_t is nonterminal and d_t ≜ r_t otherwise, and θ′ denotes
a delayed version of the model parameter updated periodically [18]. The update rule (5) may be
understood as applying stochastic gradient descent (SGD) to a regression loss function J(θ) ≜
E[ (d_t − Q_θ(s, a))² ]. Thus, d_t is the target, computed from r_t and Q_{θ′}, for the prediction Q_θ(s_t, a_t).
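For readers unfamiliar with this update, a minimal sketch of (5) with a linear Q-function follows; the feature vector, learning rate, and all numbers are illustrative assumptions, not part of the model above.

```python
import numpy as np

def q_update(theta, grad_q, q_value, d_t, lr=0.1):
    """One SGD step on the squared TD error (d_t - Q_theta)^2,
    i.e. Eq. (5): theta <- theta + lr * grad Q * (d_t - Q)."""
    return theta + lr * grad_q * (d_t - q_value)

# Toy linear approximator: Q_theta(s, a) = theta . phi(s, a).
phi = np.array([1.0, 0.5, -0.2])   # assumed feature vector phi(s_t, a_t)
theta = np.zeros(3)
r_t, gamma = 1.0, 0.9
max_next_q = 0.7                   # max_a' Q_{theta'}(s_{t+1}, a') from the frozen copy
d_t = r_t + gamma * max_next_q     # TD target (d_t = r_t at terminal steps)
q_value = theta @ phi              # prediction; grad_theta Q = phi for a linear model
theta = q_update(theta, phi, q_value, d_t)
print(theta)
```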
We are now ready to define the mean reward function μ_r in Q-LDA. First, we model the Q-function
by Q(θ_t^S, θ_t^a) = (θ_t^a)^T U θ_t^S, where U is the same parameter as the one in (1).⁴ This is different from
typical deep RL approaches, where black-box models like neural networks are used. In order to
connect our probabilistic model to Q-learning, we define the mean reward function as follows,
    μ_r(θ_t^S, θ_t^{a_t}, U) = Q(θ_t^S, θ_t^{a_t}) − γ · E{ max_b Q(θ_{t+1}^S, θ_{t+1}^b) | θ_t^S, θ_t^{a_t} }.        (6)
Note that μ_r remains a function of θ_t^S and θ_t^{a_t}, since the second term in the above expression is
a conditional expectation given θ_t^S and θ_t^{a_t}. The definition of the mean reward function in Eq. (6)
has a strong relationship with the Bellman equation (4) in Q-learning; it relates the long-term reward
Q(θ_t^S, θ_t^{a_t}) to the mean immediate reward μ_r in the same manner as the Bellman equation (4). To
see this, we move the second term on the right-hand side of (6) to the left, and make the identification
that μ_r corresponds to E{r(s, a)}, since both of them represent the mean immediate reward. The
resulting equation shares the same form as the Bellman equation (4). With the mean function μ_r defined
above, we show in Appendix B that the loss function (3) can be approximated by the one below using
the maximum a posteriori (MAP) estimates of θ_t^S and θ_t^{a_t} (denoted as θ̂_t^S and θ̂_t^{a_t}, respectively):
    min_Λ { −ln p(Φ^S | β^S) − ln p(Φ^A | β^A) + Σ_{i=1}^M Σ_{t=1}^{T_i} (1/(2σ_r²)) [ d_t − Q(θ̂_t^S, θ̂_t^{a_t}) ]² },      (7)
where d_t = r_t + γ · max_b Q(θ̂_{t+1}^S, θ̂_{t+1}^b) for t < T_i and d_t = r_t for t = T_i. Observe that the first two
terms in (7) are regularization terms coming from the Dirichlet priors over Φ^S and Φ^A, and the third
term shares a similar form as the cost J(θ) in Q-learning; it can also be interpreted as a regression
problem for estimating the Q-function, where the target d_t is constructed in a similar manner as in
Q-learning. Therefore, optimizing the discriminative objective (3) leads to a variant of Q-learning.
After learning is finished, we can obtain the greedy policy by taking the action that maximizes the
Q-function estimate in any given state.
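To make the correspondence concrete, the sketch below evaluates the third (regression) term of (7) for a single episode, assuming the per-step Q-values have already been computed (e.g., by Algorithm 2 below); all toy numbers are illustrative only.

```python
import numpy as np

def qlda_td_loss(q_chosen, q_next_all, rewards, gamma, sigma_r):
    """Third term of Eq. (7) for one episode.
    q_chosen[t]   : Q(theta_t^S, theta_t^{a_t}) for the chosen action
    q_next_all[t] : array of Q(theta_{t+1}^S, theta_{t+1}^b) over feasible b
    """
    T = len(rewards)
    loss = 0.0
    for t in range(T):
        if t < T - 1:
            d_t = rewards[t] + gamma * np.max(q_next_all[t])
        else:
            d_t = rewards[t]                 # terminal step: d_t = r_t
        loss += (d_t - q_chosen[t]) ** 2 / (2 * sigma_r ** 2)
    return loss

# Toy episode of length 3 with 2 feasible actions at each non-terminal step.
print(qlda_td_loss(q_chosen=np.array([0.2, 0.5, 1.0]),
                   q_next_all=[np.array([0.4, 0.6]), np.array([0.9, 1.1]), None],
                   rewards=[0.0, 0.0, 30.0], gamma=0.9, sigma_r=1.0))
```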
We also note that we have used the MAP estimates of θ_t^S and θ_t^{a_t} due to the intractable marginalization
of the latent variables [14]. Other more advanced approximation techniques, such as Markov chain
Monte Carlo (MCMC) [1] and variational inference [13], can also be used, and we leave these
explorations as future work.
3 End-to-end Learning by Mirror Descent Back Propagation
⁴ The intuition of choosing Q(·, ·) to be this form is that we want θ_t^S to be aligned with the θ_t^a of the correct
action (large Q-value), and to be misaligned with the θ_t^a of the wrong actions (small Q-value). The introduction
of U allows the number and the meaning of topics for the observations and actions to be different.
Algorithm 1 The training algorithm by mirror descent back propagation
1: Input: D (number of experience replays), J (number of SGD updates), and learning rate.
2: Randomly initialize the model parameters.
3: for m = 1, ..., D do
4:    Interact with the environment using a behavior policy π_b^m(a_t | x_{1:t}^S, x_{1:t}^A, a_{1:t−1}) to collect M
      episodes of data {w_{1:T_i}^S, w_{1:T_i}^A, a_{1:T_i}, r_{1:T_i}}_{i=1}^M and add them to the replay memory 𝒟.
5:    for j = 1, ..., J do
6:       Randomly sample an episode from 𝒟.
7:       For the sampled episode, compute θ̂_t^S, θ̂_t^a and Q(θ̂_t^S, θ̂_t^a) with a = 1, ..., A_t and
         t = 1, ..., T_i according to Algorithm 2.
8:       For the sampled episode, compute the stochastic gradients of (7) with respect to Λ using
         back propagation through the computational graph defined in Algorithm 2.
9:       Update {U, W_{SS}, W_{SA}, W_{AS}, W_{AA}} by stochastic gradient descent and update {Φ^S, Φ^A}
         using stochastic mirror descent.
10:   end for
11: end for
Algorithm 2 The recursive MAP inference for one episode
1: Input: α_1^S, α_1^A, L, δ, x_t^S, {x_t^a : a = 1, ..., A_t} and a_t, for all t = 1, ..., T_i.
2: Initialization: α̂_1^S = α_1^S and α̂_1^A = α_1^A
3: for t = 1, ..., T_i do
4:    Compute θ̂_t^S by repeating θ̂_t^S ← (1/C) · θ̂_t^S ⊙ exp( δ [ Φ_S^T ( x_t^S ⊘ (Φ_S θ̂_t^S) ) + (α̂_t^S − 1) ⊘ θ̂_t^S ] )
      for L times, with initialization θ̂_t^S ∝ 1, where C is a normalization factor.
5:    Compute θ̂_t^a for each a = 1, ..., A_t by repeating θ̂_t^a ← (1/C) · θ̂_t^a ⊙ exp( δ [ Φ_A^T ( x_t^a ⊘ (Φ_A θ̂_t^a) ) + (α̂_t^A − 1) ⊘ θ̂_t^a ] )
      for L times, with initialization θ̂_t^a ∝ 1, where C is a normalization factor.
6:    Compute α̂_{t+1}^S and α̂_{t+1}^A from θ̂_t^S and θ̂_t^{a_t} according to (11).
7:    Compute the Q-values: Q(θ̂_t^S, θ̂_t^a) = (θ̂_t^a)^T U θ̂_t^S for a = 1, ..., A_t.
8: end for
In this section, we develop an end-to-end learning algorithm for Q-LDA by minimizing the loss
function given in (7). As shown in the previous section, solving (7) leads to a variant of Q-learning,
so our algorithm could be viewed as a reinforcement-learning algorithm for the proposed model.
We consider learning our model with experience replay [17], a widely used technique in recent state-of-the-art systems [18]. Specifically, the learning process consists of multiple stages, and at each stage,
the agent interacts with the environment using a fixed exploration policy π_b(a_t | x_{1:t}^S, x_{1:t}^A, a_{1:t−1}) to
collect M episodes of data {w_{1:T_i}^S, w_{1:T_i}^A, a_{1:T_i}, r_{1:T_i}}_{i=1}^M and saves them into a replay memory 𝒟.
(We will discuss the choice of π_b in Section 4.) Under the assumption of the generative model Q-LDA,
our objective is to update our estimates of the model parameters in Λ using 𝒟; the updating process
may take several randomized passes over the data in 𝒟. A stage of such a learning process is called one
replay. Once a replay is done, we let the agent use a new behavior policy π_b′ to collect more episodes,
add them to 𝒟, and continue to update Λ from the augmented 𝒟. This process repeats for multiple
stages, and the model parameters learned from the previous stage will be used as the initialization
for the next stage. Therefore, we can focus on learning at a single stage, which was formulated in
Section 2 as one of solving the optimization problem (7). Note that the objective (7) is a function of
the MAP estimates of θ_t^S and θ_t^{a_t}. Therefore, we start with a recursion for computing θ̂_t^S and θ̂_t^{a_t} and
then introduce our learning algorithm for Λ.
3.1 Recursive MAP inference by mirror descent
The MAP estimates, θ̂_t^S and θ̂_t^a, for the topic proportions θ_t^S and θ_t^a are defined as
    (θ̂_t^S, θ̂_t^a) = arg max_{θ_t^S, θ_t^a} p( θ_t^S, θ_t^a | w_{1:t}^S, w_{1:t}^A, a_{1:t−1} ).        (8)
Solving for the exact solution is, however, intractable. We instead develop an approximate algorithm
that recursively estimates θ̂_t^S and θ̂_t^a. To develop the algorithm, we rely on the following result, whose
proof is deferred to Appendix A.
Proposition 1. The MAP estimates in (8) could be approximated by recursively solving the problems:
    θ̂_t^S = arg max_{θ_t^S} { ln p(x_t^S | θ_t^S, Φ^S) + ln p(θ_t^S | α̂_t^S) },                              (9)
    θ̂_t^a = arg max_{θ_t^a} { ln p(x_t^a | θ_t^a, Φ^A) + ln p(θ_t^a | α̂_t^A) },   a ∈ {1, ..., A_t},        (10)
where x_t^S and x_t^a are the bag-of-words vectors for the observation text w_t^S and the a-th action text
w_t^a, respectively. To compute α̂_t^S and α̂_t^A, we begin with α̂_1^S = α_1^S and α̂_1^A = α_1^A and update their
values for the next t + 1 time step according to
    α̂_{t+1}^S = σ( W_{SS} θ̂_t^S + W_{SA} θ̂_t^{a_t} + α_1^S ),   α̂_{t+1}^A = σ( W_{AS} θ̂_t^S + W_{AA} θ̂_t^{a_t} + α_1^A ).      (11)
Note from (9)–(10) that, for given α̂_t^S and α̂_t^A, the solution of θ_t^S and θ_t^a now becomes A_t + 1 decoupled
sub-problems, each of which has the same form as the MAP inference problem of Chen et al. [8].
Therefore, we solve each sub-problem in (9)–(10) using their mirror descent inference algorithm, and
then use (11) to compute the Dirichlet parameters at the next time step. The overall MAP inference
procedure is summarized in Algorithm 2. We further remark that, after obtaining θ̂_t^S and θ̂_t^a, the
Q-value for the t-th step is readily estimated by:
    E{ Q(θ_t^S, θ_t^a) | w_{1:t}^S, w_{1:t}^A, a_{1:t−1} } ≈ Q(θ̂_t^S, θ̂_t^a),   a ∈ {1, ..., A_t},        (12)
where we approximate the conditional expectation using the MAP estimates. After learning is finished,
the agent may extract a greedy policy for any state s by taking the action arg max_a Q(θ̂^S, θ̂^a). It
is known that if the learned Q-function is close to the true Q-function, such a greedy policy is
near-optimal [21].
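A minimal sketch of the inner mirror-descent loop of Algorithm 2 (step 4) is given below, following the multiplicative update reconstructed above; the step size `delta` and iteration count `L` are assumed values.

```python
import numpy as np

def map_topic_proportions(x, Phi, alpha_hat, L=20, delta=0.01):
    """Approximate argmax of ln p(x | theta, Phi) + ln p(theta | alpha_hat)
    by L multiplicative (mirror-descent) steps on the simplex.
    x: bag-of-words count vector (V,); Phi: word-by-topic matrix (V, K)."""
    K = Phi.shape[1]
    theta = np.full(K, 1.0 / K)                  # initialization proportional to 1
    for _ in range(L):
        grad = Phi.T @ (x / (Phi @ theta)) + (alpha_hat - 1.0) / theta
        theta = theta * np.exp(delta * grad)     # multiplicative (exponentiated) step
        theta /= theta.sum()                     # the normalization factor C
    return theta

V, K = 6, 3
rng = np.random.default_rng(1)
Phi = rng.dirichlet(np.ones(V), size=K).T        # columns on the word simplex
x = rng.integers(0, 4, size=V).astype(float)     # toy bag-of-words counts
print(map_topic_proportions(x, Phi, alpha_hat=np.ones(K)))
```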
3.2 End-to-end learning by backpropagation
The training loss (7) for each learning stage has the form of a finite sum over M episodes. Each term
inside the summation depends on θ̂_t^S and θ̂_t^{a_t}, which in turn depend on all the model parameters in Λ
via the computational graph defined by Algorithm 2 (see Appendix E for a diagram of the graph).
Therefore, we can learn the model parameters in Λ by sampling an episode from the data, computing
the corresponding stochastic gradient of (7) by back-propagation on the computational graph given
in Algorithm 2, and updating Λ by stochastic gradient/mirror descent. More details are found in
Algorithm 1, and Appendix E.4 gives the gradient formulas.
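As an illustration of step 9 of Algorithm 1, a single stochastic mirror-descent step on the simplex-constrained topic matrices can be sketched as follows; the entropic mirror map gives a multiplicative update, and the learning rate here is an assumed value.

```python
import numpy as np

def mirror_descent_columns(Phi, grad, lr=0.05):
    """One stochastic mirror-descent step on a word-by-topic matrix whose
    columns live on the probability simplex: multiplicative update followed
    by column-wise renormalization (entropic mirror map)."""
    Phi_new = Phi * np.exp(-lr * grad)      # grad: d(loss)/d(Phi), same shape
    return Phi_new / Phi_new.sum(axis=0, keepdims=True)

rng = np.random.default_rng(2)
Phi = rng.dirichlet(np.ones(5), size=3).T   # 5 words x 3 topics
grad = rng.normal(size=Phi.shape)           # stands in for the backprop gradient of (7)
print(mirror_descent_columns(Phi, grad).sum(axis=0))  # columns still sum to 1
```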
4 Experiments
In this section, we use two text games from [11] to evaluate our proposed model and demonstrate the
idea of interpreting the decision making processes: (i) "Saving John" and (ii) "Machine of Death"
(see Appendix C for a brief introduction of the two games).⁵ The action spaces of both games are
defined by natural language and the feasible actions change over time, which is a setting that Q-LDA
is designed for. We choose to use the same experiment setup as [11] in order to have a fair comparison
with their results. For example, at each m-th experience-replay learning (see Algorithm 1), we use the
softmax action selection rule [21, pp. 30–31] as the exploration policy to collect data (see Appendix
E.3 for more details). We collect M = 200 episodes of data (about 3K time steps in "Saving John"
and 16K in "Machine of Death") at each of D = 20 experience replays, which amounts to a total
of 4,000 episodes. At each experience replay, we update the model with 10 epochs before the next
replay. Appendix E provides additional experimental details.
We first evaluate the performance of the proposed Q-LDA model by the long-term rewards it receives
when applied to the two text games. Similar to [11], we repeat our experiments five times with
different random initializations. Table 1 summarizes the means and standard deviations of the rewards
⁵ The simulators are obtained from https://github.com/jvking/text-games
Table 1: The average rewards (higher is better) and standard deviations of different models on the two
tasks. For DRRN and MA-DQN, the number of topics becomes the number of hidden units per layer.

Tasks             # topics   Q-LDA        DRRN (1-layer)   DRRN (2-layer)   MA-DQN (2-layer)
Saving John       20         18.8 (0.3)   17.1 (0.6)       18.4 (0.1)       4.9 (3.2)
                  50         18.6 (0.6)   18.3 (0.2)       18.5 (0.3)       9.0 (3.2)
                  100        19.1 (0.6)   18.2 (0.2)       18.7 (0.4)       7.1 (3.1)
Machine of Death  20         19.9 (0.8)   7.2 (1.5)        9.2 (2.1)        2.8 (0.9)
                  50         18.7 (2.1)   8.4 (1.3)        10.7 (2.7)       4.3 (0.9)
                  100        17.5 (2.4)   8.7 (0.9)        11.2 (0.6)       5.2 (1.2)
on the two games. We include the results of the Deep Reinforcement Relevance Network (DRRN)
proposed in [11] with different numbers of hidden layers. In [11], there are several variants of DQN (deep
Q-network) baselines, among which MA-DQN (max-action DQN) is the best performing one. We
therefore only include the results of MA-DQN. Table 1 shows that Q-LDA outperforms all other
approaches on both tasks, especially "Machine of Death", where Q-LDA even beats the DRRN
models by a large margin. The gain of Q-LDA on "Saving John" is smaller, as both Q-LDA and
DRRN are approaching the upper bound of the reward, which is 20. "Machine of Death" was believed
to be a more difficult task due to its stochastic nature and larger state and action spaces [11]; there, the
upper bound on the reward is 30. (See Tables 4–5 for the definition of the rewards for different story
endings.) Therefore, Q-LDA gets much closer to the upper bound than any other method, although
there may still be room for improvement. Finally, our experiments follow the standard online RL
setup: after a model is updated based on the data observed so far, it is tested on newly generated
episodes. Therefore, the numbers reported in Table 1 are not evaluated on the training dataset, so
they truthfully reflect the actual average reward of the learned models.
We now proceed to demonstrate the analysis of the latent patterns of the decision making process
using one example episode of "Machine of Death". In this episode, the game starts with the player
wandering in a shopping mall after the peak hour has ended. The player approaches a machine that
prints a death card after inserting a coin. The death card hints at how the player will die in the future. In
one of the story developments, the player's death is related to a man called Bon Jovi. The player is
so scared that he tries to combat a cardboard standee of Bon Jovi. He reveals his concern to a
friend named Rachel, and with her help he finally overcomes his fear and maintains his friendship.
This episode reaches a good ending and receives the highest possible reward of 30 in this game.
In Figure 2, we show the evolution of the topic proportions for the four most active topics (shown in
Table 2)⁶ for both the observation texts and the selected actions' texts. We note from Figure 2 that the
most dominant observation topic and action topic at the beginning of the episode are "wander at mall"
and "action at mall", respectively, which is not surprising since the episode starts in a mall scenario.
The topics related to "mall" quickly die off after the player starts the death machine. Afterwards, the
most salient observation topic becomes "meet Bon Jovi" and then "combat" (t = 8). This is because
after the activation of the death machine, the story enters a scenario where the player tries to combat
a cardboard standee. Towards the end of the episode, the observation topic "converse w/ rachel" and
the topic "kitchen & chat" corresponding to the selected action reach their peaks and then decay right
before the end of the story, where the action topic "relieve" climbs up to its peak. This is consistent
with the story ending, where the player chooses to overcome his fear after chatting with Rachel. In
Appendix D, we show the observation and the action texts in the above stages of the story.
Finally, another interesting observation is about the matrix U. Since the Q-function value is computed
from (θ̂_t^a)^T U θ̂_t^S, the (i, j)-th element of the matrix U measures the positive/negative correlation
between the i-th action topic and the j-th observation topic. In Figure 2(c), we show the values of the
learned matrix U for the four observation topics and the four action topics in Table 2. Interestingly,
the largest value (39.5) of U is the (1, 2)-th element, meaning that the action topic "relieve" and the
state topic "converse w/ rachel" have a strong positive contribution to a high long-term reward, which is
what happens at the end of the story.
⁶ In practice, we observe that some topics are never or rarely activated during the learning process. This is
especially true when the number of topics becomes large (e.g., 100). Therefore, we only show the most active
topics. This might also explain why the performance improvement is marginal when the number of topics grows.
Table 2: The four most active topics for the observation texts and the action texts, respectively.

Observation Topics
1: combat              minutes, lights, firearm, shoulders, whiff, red, suddenly, huge, rendition
2: converse w/ rachel  rachel, tonight, grabs, bar, towards, happy, believing, said, moonlight
3: meet Bon Jovi       small, jovi, bon, door, next, dog, insists, room, wrapped, standees
4: wander at mall      ended, catcher, shopping, peak, wrapped, hanging, attention, door

Action Topics
1: relieve             leave, get, gotta, go, hands, away, maybe, stay, ability, turn, easy, rachel
2: kitchen & chat      wait, tea, look, brisk, classics, oysters, kitchen, turn, chair, moment
3: operate the machine coin, insert, west, cloth, desk, apply, dollars, saying, hands, touch, tell
4: action at mall      alarm, machine, east, ignore, take, shot, oysters, win, gaze, bestowed
[Figure 2 appears here. Panels: (a) observation topic proportions θ_t^S and (b) selected-action topic proportions θ_t^{a_t}, each plotted over time steps t = 1, ..., 15 for Topics 1–4 of Table 2; (c) learned values of the matrix U, whose largest entry is 39.5.]
Figure 2: The evolution of the most active topics in "Machine of Death."
5 Conclusion
We proposed a probabilistic model, Q-LDA, to uncover latent patterns in text-based sequential
decision processes. The model can be viewed as a latent topic model, which chains the topic
proportions over time. Interestingly, by modeling the mean function of the immediate reward in a
special way, we showed that discriminative learning of Q-LDA using its likelihood is closely related
to Q-learning. Thus, our approach could also be viewed as a Q-learning variant for sequential topic
models. We evaluate Q-LDA on two text-game tasks, demonstrating state-of-the-art rewards in these
games. Furthermore, we showed our method provides a viable approach to finding interesting latent
patterns in such decision processes.
Acknowledgments
The authors would like to thank all the anonymous reviewers for their constructive feedback.
References
[1] Christophe Andrieu, Nando De Freitas, Arnaud Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50(1):5–43, 2003.
[2] C. M. Bishop and J. Lasserre. Generative or discriminative? Getting the best of both worlds. Bayesian Statistics, 8:3–24, 2007.
[3] D. M. Blei and J. D. McAuliffe. Supervised topic models. In Proc. NIPS, pages 121–128, 2007.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[5] David M. Blei. Probabilistic topic models. Communications of the ACM, 55(4):77–84, 2012.
[6] David M. Blei and John D. Lafferty. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113–120. ACM, 2006.
[7] G. Bouchard and B. Triggs. The tradeoff between generative and discriminative classifiers. In Proc. COMPSTAT, pages 721–728, 2004.
[8] Jianshu Chen, Ji He, Yelong Shen, Lin Xiao, Xiaodong He, Jianfeng Gao, Xinying Song, and Li Deng. End-to-end learning of LDA by mirror-descent back propagation over a deep architecture. In Proc. NIPS, pages 1765–1773, 2015.
[9] Yaakov Engel, Shie Mannor, and Ron Meir. Reinforcement learning with Gaussian processes. In Proceedings of the Twenty-Second International Conference on Machine Learning (ICML-05), pages 201–208, 2005.
[10] Matthew Hausknecht and Peter Stone. Deep recurrent Q-learning for partially observable MDPs. In Proc. AAAI-SDMIA, November 2015.
[11] Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. Deep reinforcement learning with a natural language action space. In Proc. ACL, 2016.
[12] A. Holub and P. Perona. A discriminative framework for modelling object classes. In Proc. IEEE CVPR, volume 1, pages 664–671, 2005.
[13] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. In Learning in Graphical Models, pages 105–161. Springer, 1998.
[14] Michael Irwin Jordan. Learning in Graphical Models, volume 89. Springer Science & Business Media, 1998.
[15] S. Kapadia. Discriminative Training of Hidden Markov Models. PhD thesis, University of Cambridge, 1998.
[16] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(1):1334–1373, 2016.
[17] Long-Ji Lin. Reinforcement learning for robots using neural networks. Technical report, DTIC Document, 1993.
[18] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015.
[19] Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. In Proc. EMNLP, 2015.
[20] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks and tree search. Nature, 529:484–489, 2016.
[21] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, 1998.
[22] Christopher Watkins and Peter Dayan. Q-learning. Machine Learning, 8(3-4):279–292, 1992.
[23] Oksana Yakhnenko, Adrian Silvescu, and Vasant Honavar. Discriminatively trained Markov model for sequence classification. In Proc. IEEE ICDM, 2005.
6,726 | 7,084 | Online Reinforcement Learning in Stochastic Games
Yi-Te Hong
Institute of Information Science
Academia Sinica, Taiwan
[email protected]
Chen-Yu Wei
Institute of Information Science
Academia Sinica, Taiwan
[email protected]
Chi-Jen Lu
Institute of Information Science
Academia Sinica, Taiwan
[email protected]
Abstract
We study online reinforcement learning in average-reward stochastic games (SGs).
An SG models a two-player zero-sum game in a Markov environment, where state
transitions and one-step payoffs are determined simultaneously by a learner and
an adversary. We propose the UCSG algorithm that achieves a sublinear regret
compared to the game value when competing with an arbitrary opponent. This
result improves previous ones under the same setting. The regret bound has a
dependency on the diameter, which is an intrinsic value related to the mixing
property of SGs. If we let the opponent play an optimistic best response to the
learner, UCSG finds an ε-maximin stationary policy with a sample complexity of
Õ(poly(1/ε)), where ε is the gap to the best policy.
1 Introduction
Many real-world scenarios (e.g., markets, computer networks, board games) can be cast as multi-agent
systems. The framework of Multi-Agent Reinforcement Learning (MARL) aims at learning to act in
such systems. While in traditional reinforcement learning (RL) problems, Markov decision processes
(MDPs) are widely used to model a single agent's interaction with the environment, stochastic games
(SGs, [32]), as an extension of MDPs, are able to describe multiple agents' simultaneous interaction
with the environment. In this view, SGs are most well-suited to model MARL problems [24].
In this paper, two-player zero-sum SGs are considered. These games proceed like MDPs, with the
exception that in each state, both players select their own actions simultaneously¹, which jointly
determine the transition probabilities and their rewards. The zero-sum property requires that the
two players' payoffs sum to zero. Thus, while one player (Player 1) wants to maximize his/her total
reward, the other (Player 2) would like to minimize that amount. Similar to the case of MDPs, the
reward can be discounted or undiscounted, and the game can be episodic or non-episodic.
In the literature, SGs are typically learned under two different settings, and we will call them online
and offline settings, respectively. In the offline setting, the learner controls both players in a centralized
manner, and the goal is to find the equilibrium of the game [33, 21, 30]. This is also known as finding
the worst-case optimality for each player (a.k.a. maximin or minimax policy). In this case, we
care about the sample complexity, i.e., how many samples are required to estimate the worst-case
optimality such that the error is below some threshold. In the online setting, the learner controls only
one of the players, and plays against an arbitrary opponent [24, 4, 5, 8, 31]. In this case, we care
¹ Turn-based SGs, like Go, are special cases: in each state, one player's action set contains only a null action.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
about the learner's regret, i.e., the difference between some benchmark measure and the learner's total
reward earned in the learning process. This benchmark can be defined as the total reward when both
players play optimal policies [5], or when Player 1 plays the best stationary response to Player 2 [4].
Some of the above online-setting algorithms can find the equilibrium simply through self-play.
Most previous results on offline sample complexity consider discounted SGs. Their bounds depend
heavily on the chosen discount factor [33, 21, 30, 31]. However, as noted in [5, 19], the discounted
setting might not be suitable for SGs that require long-term planning, because only finite steps are
relevant in the reward function it defines. This paper, to the best of our knowledge, is the first to give
an offline sample complexity bound of order Õ(poly(1/ε)) in the average-reward (undiscounted
and non-episodic) setting, where ε is the error parameter. A major difference between our algorithm
and previous ones is that the two players play asymmetric roles in our algorithm: by focusing on
finding only one player's worst-case optimal policy at a time, the sampling can be rather efficient.
This resembles but strictly extends [13]'s methods in finding the maximin action in a two-stage game.
In the online setting, we are only aware of [5]'s R-MAX algorithm that deals with average-reward SGs
and provides a regret bound. Considering a similar scenario and adopting the same regret definition,
we significantly improve their bounds (see Appendix A for details). Another difference between our
algorithm and theirs is that ours is able to output a currently best stationary policy at any stage in the
learning process, while theirs only produces a T̄-step fixed-horizon policy for some input parameter
T̄. The former could be more natural since the worst-case optimal policy is itself a stationary policy.
The techniques used in this paper are most related to RL for MDPs based on the optimism principle
[2, 19, 9] (see Appendix A). The optimism principle built on concentration inequalities automatically
strikes a balance between exploitation and exploration, eliminating the need to manually adjust the
learning rate or the exploration ratio. However, when importing analysis from MDPs to SGs, we
face the challenge caused by the opponent's uncontrollability and non-stationarity. This prevents the
learner from freely exploring the state space and makes previous analysis that relies on the stationary
distribution's perturbation analysis [2] useless. In this paper, we develop a novel way to replace the
opponent's non-stationary policy with a stationary one in the analysis (introduced in Section 5.1),
which facilitates the use of techniques based on perturbation analysis. We hope that this technique
can benefit future analysis concerning non-stationary agents in MARL.
One related topic is the robust MDP problem [29, 17, 23]. It is an MDP where some state-action
pairs have adversarial rewards and transitions. It is often assumed in robust MDP that the adversarial
choices by the environment are not directly observable by the Player, but in our SG setting, we
assume that the actions of Player 2 can be observed. However, there are still difficulties in SG that
are not addressed by previous works on robust MDP.
Here we compare our work to [23], a recent work on learning robust MDP. In their setting, there are
adversarial and stochastic state-action pairs, and their proposed OLRM2 algorithm tries to distinguish
them. Under the scenario where the environment is fully adversarial, which is the counterpart to our
setting, the worst-case transitions and rewards are all revealed to the learner, and what the learner
needs to do is to perform a maximin planning. In our case, however, the worst-case transitions and
rewards are still to be learned, and the opponent's arbitrary actions may hinder the learner from learning
this information. We would say that the contribution of [23] is orthogonal to ours.
Other lines of research that are related to SGs are on MDPs with adversarially changing reward
functions [11, 27, 28, 10] and with adversarially changing transition probabilities [35, 1]. The
assumptions in these works have several differences with ours, and therefore their results are not
comparable to our results. However, they indeed provide other viewpoints about learning in stochastic
games.
2 Preliminaries
Game Models and Policies. An SG is a 4-tuple M = (S, A, r, p). S denotes the state space and
A = A¹ × A² the players' joint action space. We denote S = |S| and A = |A|. The game starts
from an initial state s₁. Suppose at time t the players are at state s_t. After the players play the joint
actions (a_t¹, a_t²), Player 1 receives the reward r_t = r(s_t, a_t¹, a_t²) ∈ [0, 1] from Player 2, and both
players visit state s_{t+1} following the transition probability p(·|s_t, a_t¹, a_t²). For simplicity, we consider
deterministic rewards as in [3]. The extension to the stochastic case is straightforward. We shorten our
notation by a := (a¹, a²) or a_t := (a_t¹, a_t²), and use abbreviations such as r(s_t, a_t) and p(·|s_t, a_t).
Without loss of generality, players are assumed to determine their actions based on the history. A
policy π_t at time t maps the history up to time t, H_t = (s₁, a₁, r₁, ..., s_t) ∈ ℋ_t, to a probability
distribution over actions. Such policies are called history-dependent policies, whose class is denoted
by Π^HR. On the other hand, a stationary policy, whose class is denoted by Π^SR, selects actions as a
function of the current state. For either class, joint policies (π¹, π²) are often written as π.
Average Return and the Game Value. Let the players play joint policy π. Define the T-step total
reward as R_T(M, π, s) := Σ_{t=1}^T r(s_t, a_t), where s₁ = s, and the average reward as ρ(M, π, s) :=
lim_{T→∞} (1/T) E[ R_T(M, π, s) ], whenever the limit exists. In fact, the game value exists² [26]:
    ρ*(M, s) := sup_{π¹} inf_{π²} lim_{T→∞} (1/T) E[ R_T(M, π¹, π², s) ].
If ρ(M, π, s) or ρ*(M, s) does not depend on the initial state s, we simply write ρ(M, π) or ρ*(M).
The Bias Vector. For a stationary policy π, the bias vector h(M, π, ·) is defined, for each coordinate
s, as
    h(M, π, s) := E[ Σ_{t=1}^∞ ( r(s_t, a_t) − ρ(M, π, s) ) | s₁ = s, a_t ∼ π(·|s_t) ].        (1)
The bias vector satisfies the Bellman equation: ∀s ∈ S,
    ρ(M, π, s) + h(M, π, s) = r(s, π) + Σ_{s′} p(s′|s, π) h(M, π, s′),
where r(s, π) := E_{a∼π(·|s)}[r(s, a)] and p(s′|s, π) := E_{a∼π(·|s)}[p(s′|s, a)].
The vector h(M, π, ·) describes the relative advantage among states under model M and (joint) policy
π. The advantage (or disadvantage) of state s compared to state s′ under policy π is defined as the
difference between the accumulated rewards with initial states s and s′, which, from (1), converges
to the difference h(M, π, s) − h(M, π, s′) asymptotically. For ease of notation, the span of a
vector v is defined as sp(v) := max_i v_i − min_i v_i. Therefore, if a model, together with any policy,
induces large sp(h), then this model will be difficult to learn, because visiting a bad state costs a lot
in the learning process. As shown in [3] for the MDP case, the regret has an inevitable dependency
on sp(h(M, π*, ·)), where π* is the optimal policy.
On the other hand, sp(h(M, π, ·)) is closely related to the mean first passage time under the Markov
chain induced by M and π. Actually, we have sp(h(M, π, ·)) ≤ T^π(M) := max_{s,s′} T^π_{s→s′}(M),
where T^π_{s→s′}(M) denotes the expected time to reach state s′ starting from s when the model is M and
the player(s) follow the (joint) policy π. This fact is intuitive, and the proof can be seen at Remark
M.1.
Notations. In order to save space, we often write equations in vector or matrix form. We use
vector inequalities: if u, v ∈ ℝⁿ, then u ≤ v ⟺ u_i ≤ v_i ∀i = 1, ..., n. For a general matrix game
with matrix G of size n × m, we denote the value of the game as val G := max_{p∈Δ_n} min_{q∈Δ_m} pᵀGq =
min_{q∈Δ_m} max_{p∈Δ_n} pᵀGq, where Δ_k is the probability simplex of dimension k. In SGs, given the estimated
value function u(s′) ∀s′, we often need to solve the following matrix game equation:
    v(s) = max_{a¹∼π¹(·|s)} min_{a²∼π²(·|s)} { r(s, a¹, a²) + Σ_{s′} p(s′|s, a¹, a²) u(s′) },
and this is abbreviated with the vector form v = val{r + Pu}. We also use solve₁ G and solve₂ G to
denote the optimal solutions of p and q. In addition, the indicator function is denoted by 1{·} or 1_{·}.
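For reference, val G and solve₁ G can be computed by the standard linear program; the sketch below uses scipy as an implementation choice, which is an assumption of ours and not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game(G):
    """Return (val G, solve1 G): max_p min_q p^T G q via the standard LP.
    Variables: (p_1, ..., p_n, v); maximize v s.t. G^T p >= v, p in simplex."""
    n, m = G.shape
    c = np.concatenate([np.zeros(n), [-1.0]])          # linprog minimizes -v
    A_ub = np.hstack([-G.T, np.ones((m, 1))])          # -(G^T p)_j + v <= 0, all j
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # sum(p) = 1
    b_eq = [1.0]
    bounds = [(0, None)] * n + [(None, None)]          # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n]

G = np.array([[1.0, -1.0], [-1.0, 1.0]])               # matching pennies
val, p = matrix_game(G)
print(val, p)                                          # ~0.0, [0.5, 0.5]
```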
² Unlike in one-player MDPs, the sup and inf in the definition of ρ*(M, s) are not necessarily attainable.
Moreover, players may not have stationary optimal policies.
3 Problem Settings and Results Overview
We assume that the game proceeds for T steps. In order to have meaningful regret bounds (i.e., sub-
linear in T), we must make some assumptions on the SG model itself. Our two different assumptions
are:
Assumption 1. max_{π¹∈Π^SR} max_{π²∈Π^SR} max_{s,s′} T^{π¹,π²}_{s→s′}(M) ≤ D.
Assumption 2. max_{π²∈Π^SR} max_{s,s′} min_{π¹∈Π^SR} T^{π¹,π²}_{s→s′}(M) ≤ D.
Why we make these assumptions is as follows. Consider an SG model where the opponent (Player 2)
has some way to lock the learner (Player 1) into some bad state. The best strategy for the learner might
be to totally avoid, if possible, entering that state. However, in the early stage of the learning process,
the learner won't know this, and he/she will have a certain probability of visiting that state and getting
locked in. This will cause linear regret to the learner. Therefore, we assume the following: whatever
policy the opponent executes, the learner always has some way to reach any state within some bounded
time. This is essentially our Assumption 2.
Assumption 1 is the stronger one; it actually implies that under any policies executed by the players
(not necessarily stationary, see Remark M.2), every state is visited within an average of D steps. We
find that under this assumption, the asymptotic regret can be improved. This assumption also has a
flavor similar to the conditions required for the convergence of Q-learning-type algorithms: they require
that every state be visited infinitely often. See [18] for example.
These assumptions define notions of diameter that are specific to the SG model. It is known
that under Assumption 1 or Assumption 2, both players have optimal stationary policies, and the
game value is independent of the initial state. Thus we can simply write ρ*(M, s) as ρ*(M). For a
proof of these facts, please refer to Theorem E.1 in the appendix.
3.1 Two Settings and Results Overview
We focus on training Player 1 and discuss two settings. In the online setting, Player 1 competes with
an arbitrary Player 2. The regret is defined as
    Reg_T^{(on)} = Σ_{t=1}^T ( ρ*(M) − r(s_t, a_t) ).
In the offline setting, we control both Player 1's and Player 2's actions, and find Player 1's maximin
policy. The sample complexity is defined as
    L_ε = Σ_{t=1}^T 1{ ρ*(M) − min_{π²} ρ(M, π_t¹, π²) > ε },
where π_t¹ is the stationary policy being executed by Player 1 at time t. This definition is similar to those
in [20, 19] for one-player MDPs. By the definition of L_ε, if we have an upper bound for L_ε and run
the algorithm for T > L_ε steps, there is some t such that π_t¹ is ε-optimal. We will explain how to
pick this t in Section 7 and Appendix L.
It turns out that we can use almost the same algorithm to handle these two settings. Since learning in
the online setting is more challenging, from now on we will mainly focus on the online setting, and
leave the discussion about the offline setting at the end of the paper. Our results can be summarized
by the following two theorems.
Theorem 3.1. Under Assumption 1, UCSG achieves Reg_T^{(on)} = Õ(D³S⁵A + DS√(AT)) w.h.p.³
Theorem 3.2. Under Assumption 2, UCSG achieves Reg_T^{(on)} = Õ(∛(DS²AT²)) w.h.p.
4 Upper Confidence Stochastic Game Algorithm (UCSG)
³ We write "with high probability, g = Õ(f)" or "w.h.p., g = Õ(f)" to indicate "with probability ≥ 1 − δ,
g = f₁ O(f) + f₂ δ", where f₁, f₂ are some polynomials of log D, log S, log A, log T, log(1/δ).
Algorithm 1 UCSG
Input: S, A = A¹ × A², T.
Initialization: t = 1.
for phase k = 1, 2, ... do
   t_k = t.
   1. Initialize phase k: v_k(s, a) = 0, n_k(s, a) = max{ 1, Σ_{τ=1}^{t_k−1} 1_{(s_τ, a_τ)=(s, a)} },
      n_k(s, a, s′) = Σ_{τ=1}^{t_k−1} 1_{(s_τ, a_τ, s_{τ+1})=(s, a, s′)}, p̂_k(s′|s, a) = n_k(s, a, s′)/n_k(s, a), ∀s, a, s′.
   2. Update the confidence set: M_k = { M̃ : ∀s, a, p̃(·|s, a) ∈ P_k(s, a) }, where
      P_k(s, a) := CONF₁(p̂_k(·|s, a), n_k(s, a)) ∩ CONF₂(p̂_k(·|s, a), n_k(s, a)).
   3. Optimistic planning: (M_k¹, π_k¹) = MAXIMIN-EVI(M_k, γ_k), where γ_k := 1/√t_k.
   4. Execute policies:
   repeat
      Draw a_t¹ ∼ π_k¹(·|s_t); observe the reward r_t and the next state s_{t+1}.
      Set v_k(s_t, a_t) = v_k(s_t, a_t) + 1 and t = t + 1.
   until ∃(s, a) such that v_k(s, a) = n_k(s, a)
end for
Definitions of confidence regions:
    CONF₁(p̂, n) := { p̃ ∈ [0, 1]^S : ‖p̃ − p̂‖₁ ≤ √( 2S ln(1/δ₁) / n ) },   δ₁ = δ / (2S²A log₂ T),
    CONF₂(p̂, n) := { p̃ ∈ [0, 1]^S : ∀i, |√(p̃_i(1 − p̃_i)) − √(p̂_i(1 − p̂_i))| ≤ √( 2 ln(6/δ₁) / (n − 1) ),
        |p̃_i − p̂_i| ≤ min{ √( ln(6/δ₁) / (2n) ), √( 2 p̂_i(1 − p̂_i) ln(6/δ₁) / n ) + (7 / (3(n − 1))) ln(6/δ₁) } }.
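As an illustration, membership in P_k(s, a) = CONF₁ ∩ CONF₂ can be tested coordinate-wise as in the following sketch, a direct transcription of the reconstructed definitions above (n ≥ 2 is assumed, and the toy inputs are illustrative).

```python
import numpy as np

def in_confidence_set(p_tilde, p_hat, n, delta1):
    """Check p_tilde in CONF1 ∩ CONF2 for one (s, a) pair with n >= 2 samples."""
    S = len(p_hat)
    # CONF1: L1 ball around the empirical distribution.
    if np.abs(p_tilde - p_hat).sum() > np.sqrt(2 * S * np.log(1 / delta1) / n):
        return False
    ln6 = np.log(6 / delta1)
    # CONF2, first condition: coordinate-wise bound on standard deviations.
    std_gap = np.abs(np.sqrt(p_tilde * (1 - p_tilde)) - np.sqrt(p_hat * (1 - p_hat)))
    if np.any(std_gap > np.sqrt(2 * ln6 / (n - 1))):
        return False
    # CONF2, second condition: min of Hoeffding-type and Bernstein-type bounds.
    mean_bound = np.minimum(np.sqrt(ln6 / (2 * n)),
                            np.sqrt(2 * p_hat * (1 - p_hat) * ln6 / n)
                            + 7 * ln6 / (3 * (n - 1)))
    return bool(np.all(np.abs(p_tilde - p_hat) <= mean_bound))

p_hat = np.array([0.5, 0.3, 0.2])
print(in_confidence_set(np.array([0.45, 0.35, 0.2]), p_hat, n=100, delta1=1e-3))
```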
The Upper Confidence Stochastic Game algorithm (UCSG) (Algorithm 1) extends UCRL2 [19],
using the optimism principle to balance exploitation and exploration. It proceeds in phases (indexed
by k), and only changes the learner's policy π_k¹ at the beginning of each phase. The length of each
phase is not fixed a priori, but depends on the statistics of past observations.
In the beginning of each phase k, the algorithm estimates the transition probabilities using the empirical
frequencies p̂_k(·|s, a) observed in previous phases (Step 1). With these empirical frequencies, it can
then create a confidence region P_k(s, a) for each transition probability. The transition probabilities
lying in the confidence regions constitute a set of plausible stochastic game models M_k, to which the
true model M belongs with high probability (Step 2). Then, Player 1 optimistically picks one
model M_k¹ from M_k, and finds the optimal (stationary) policy π_k¹ under this model (Step 3). Finally,
Player 1 executes the policy π_k¹ for a while, until some (s, a)-pair's number of occurrences is doubled
during this phase (Step 4). The count v_k(s, a) records the number of steps the (s, a)-pair is observed
in phase k; it is reset to zero in the beginning of every phase.
In Step 3, to pick an optimistic model and a policy is to pick M_k¹ ∈ M_k and π_k¹ ∈ Π^SR such that ∀s,
    min_{π²} ρ(M_k¹, π_k¹, π², s) ≥ max_{M̃∈M_k} ρ*(M̃, s) − γ_k,        (2)
where γ_k denotes the error parameter for MAXIMIN-EVI. The LHS of (2) is well-defined because
Player 2 has a stationary optimal policy under the MDP induced by M_k¹ and π_k¹. Roughly speaking,
(2) says that min_{π²} ρ(M_k¹, π_k¹, π², s) should approximate max_{M̃∈M_k, π¹} min_{π²} ρ(M̃, π¹, π², s) by an error
no more than γ_k. That is, (M_k¹, π_k¹) are picked optimistically in M_k × Π^SR considering the most
adversarial opponent.
4.1 Extended SG and Maximin-EVI
The calculation of M_k¹ and π_k¹ involves the technique of Extended Value Iteration (EVI), which also
appears in [19] as a one-player version.
Consider the following SG, named M⁺. Let the state space S and Player 2's action space A² remain
the same as in M. Let A⁺¹, p⁺(·|·, ·, ·), r⁺(·, ·, ·) be Player 1's action set, the transition kernel, and
the reward function of M⁺, such that for any a¹ ∈ A¹ and a² ∈ A² and any admissible transition
probability p̃(·|s, a¹, a²) ∈ P_k(s, a¹, a²), there is an action a⁺¹ ∈ A⁺¹ such that p⁺(·|s, a⁺¹, a²) =
p̃(·|s, a¹, a²) and r⁺(s, a⁺¹, a²) = r(s, a¹, a²). In other words, Player 1 selecting an action in A⁺¹
is equivalent to selecting an action in A¹ and simultaneously selecting an admissible transition
probability in the confidence region P_k(·, ·).
Suppose that M ∈ M_k; then the extended SG M⁺ satisfies Assumption 2, because the true model
M is embedded in M⁺. By Theorem E.1 in Appendix E, it has a constant game value ρ*(M⁺)
independent of the initial state, and satisfies a Bellman equation of the form val{r + Pf} = ρ · e + f,
for some bounded function f(·), where e stands for the all-one constant vector. With the above
conditions, we can use value iteration with the Schweitzer transform (a.k.a. aperiodic transform) [34]
to solve for the optimal policy in the extended SG M⁺. We call this procedure MAXIMIN-EVI. For the
details of MAXIMIN-EVI, please refer to Appendix F. We only summarize the result with the following
lemma.
Lemma 4.1. Suppose the true model M ∈ M_k. Then the estimated model M_k¹ and stationary policy
π_k¹ output by MAXIMIN-EVI in Step 3 satisfy
    ∀s,  min_{π²} ρ(M_k¹, π_k¹, π², s) ≥ max_{π¹} min_{π²} ρ(M, π¹, π², s) − γ_k.
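As a concrete illustration of the value iteration underlying MAXIMIN-EVI, one sweep on a (small, fully enumerated) extended game can be sketched as below; it reuses the matrix_game routine from the sketch in Section 2, and the aperiodicity parameter kappa is an assumed value, not one prescribed by the paper.

```python
import numpy as np

def maximin_vi_sweep(r, P, f, kappa=0.5):
    """One value-iteration sweep f <- val{r + P'f} with the Schweitzer
    (aperiodic) transform P' = kappa*I + (1 - kappa)*P applied state-wise.
    r[s]: payoff matrix over (a1, a2); P[s][i][j]: next-state distribution.
    Relies on matrix_game() from the matrix-game sketch in Section 2."""
    S = len(r)
    f_new = np.empty(S)
    for s in range(S):
        n, m = r[s].shape
        G = np.array([[r[s][i, j] + kappa * f[s] + (1 - kappa) * (P[s][i][j] @ f)
                       for j in range(m)] for i in range(n)])
        f_new[s], _ = matrix_game(G)   # inner matrix game solved by LP
    return f_new
```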
Before diving into the analysis under the two assumptions, we first establish the following fact.
Lemma 4.2. With high probability, the true model M ∈ M_k for all phases k.
It is proved in Appendix D. With Lemma 4.2, we can fairly assume M ∈ M_k in most of our analysis.
5 Analysis under Assumption 1
In this section, we import analysis techniques from one-player MDPs [2, 19, 22, 9]. We also develop
some techniques that deal with non-stationary opponents.
We model Player 2's behavior in the most general way, i.e., assuming it uses a history-dependent
randomized policy. Let H_t = (s₁, a₁, r₁, ..., s_{t−1}, a_{t−1}, r_{t−1}, s_t) ∈ ℋ_t be the history up to s_t; then
we assume π_t² to be a mapping from ℋ_t to a distribution over A². We will simply write π_t²(·) and
hide its dependency on H_t inside the subscript t. A similar definition applies to π_t¹(·). With abuse
of notation, we denote by k(t) the phase in which step t lies, and thus our algorithm uses policy
π_t¹(·) = π_{k(t)}¹(·|s_t). The notations π_t¹ and π_k¹ are used interchangeably. Let T_k := t_{k+1} − t_k be the
length of phase k. We decompose the regret in phase k in the following way:
    Δ_k := T_k ρ*(M) − Σ_{t=t_k}^{t_{k+1}−1} r(s_t, a_t) = Σ_{n=1}^{4} Δ_k^{(n)},        (3)
in which we define
    Δ_k^{(1)} = T_k ( ρ*(M) − min_{π²} ρ(M_k¹, π_k¹, π², s_{t_k}) ),
    Δ_k^{(2)} = T_k ( min_{π²} ρ(M_k¹, π_k¹, π², s_{t_k}) − ρ(M_k¹, π_k¹, π̄_k², s_{t_k}) ),
    Δ_k^{(3)} = T_k ( ρ(M_k¹, π_k¹, π̄_k², s_{t_k}) − ρ(M, π_k¹, π̄_k²) ),
    Δ_k^{(4)} = T_k ρ(M, π_k¹, π̄_k²) − Σ_{t=t_k}^{t_{k+1}−1} r(s_t, a_t),
where π̄_k² is some stationary policy of Player 2 which will be defined later. Since the actions of
Player 2 are arbitrary, π̄_k² is imaginary and only exists in the analysis. Note that under Assumption 1, any
stationary policy pair over M induces an irreducible Markov chain, so we do not need to specify the
initial state for ρ(M, π_k¹, π̄_k²) in (3). Among the four terms, Δ_k^{(2)} is clearly non-positive, and Δ_k^{(1)},
by optimism, can be bounded using Lemma 4.1. It now remains to bound Δ_k^{(3)} and Δ_k^{(4)}.
5.1 Bounding Σ_k Δ_k^{(3)} and Σ_k Δ_k^{(4)}
The Introduction of π̄_k². Δ_k^{(3)} and Δ_k^{(4)} involve the artificial policy π̄_k², which is a stationary policy
that replaces Player 2's non-stationary policy in the analysis. This replacement costs some constant
regret but facilitates the use of perturbation analysis in regret bounding. The selection of π̄_k² is based
on the principle that the behavior (e.g., the total number of visits to some (s, a)) of the Markov chain
induced by M, π_k¹, π̄_k² should be close to the empirical statistics. Intuitively, π̄_k² can be defined as
    π̄_k²(a²|s) := ( Σ_{t=t_k}^{t_{k+1}−1} 1_{s_t=s} π_t²(a²) ) / ( Σ_{t=t_k}^{t_{k+1}−1} 1_{s_t=s} ).        (4)
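For intuition, π̄_k² is simply the visit-weighted average of Player 2's per-step action distributions; the following sketch computes it from one phase's logged trajectory (all inputs are illustrative).

```python
import numpy as np

def empirical_opponent_policy(states, action_probs, S, A2):
    """Compute pi-bar_k^2 of Eq. (4) from one phase's trajectory.
    states[t]: s_t; action_probs[t]: Player 2's distribution pi_t^2 over A2.
    Returns an (S, A2) array; rows of unvisited states stay undefined (nan)."""
    policy = np.full((S, A2), np.nan)
    for s in range(S):
        idx = [t for t, st in enumerate(states) if st == s]
        if idx:                       # otherwise pi-bar is undefined at s
            policy[s] = np.mean([action_probs[t] for t in idx], axis=0)
    return policy

states = [0, 1, 0, 2]
probs = [np.array([0.9, 0.1]), np.array([0.5, 0.5]),
         np.array([0.7, 0.3]), np.array([0.2, 0.8])]
print(empirical_opponent_policy(states, probs, S=4, A2=2))  # state 3: undefined
```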
Note two things, however. First, since we need the actual trajectory in defining this policy, it can only
be defined after phase k has ended. Second, π̄_k² can be undefined, because the denominator of (4) can
be zero. However, this will not happen in too many steps. Actually, we have
Lemma 5.1. Σ_k T_k 1{π̄_k² not well-defined} ≤ Õ(DSA) with high probability.
Before describing how we bound the regret with the help of π̄_k² and the perturbation analysis, we
establish the following lemma:
Lemma 5.2. We say the transition probability at time step t is α-accurate if |p_k¹(s′|s_t, π_t) −
p(s′|s_t, π_t)| ≤ α ∀s′, where p_k¹ denotes the transition kernel of M_k¹. We let B_t(α) = 1 if the
transition probability at time t is α-accurate; otherwise B_t(α) = 0. Then for any state s, with high
probability, Σ_{t=1}^T 1_{s_t=s} 1_{B_t(α)=0} ≤ Õ(A/α²).
Now we are able to sketch the logic behind our proofs. Let us assume that π̃_k^2 models π_k^2 quite well, i.e., the expected frequency of every state-action pair induced by M, π_k^1, π̃_k^2 is close to the empirical frequency induced by M, π_k^1, π_k^2. Then clearly, Δ_k^{(4)} is close to zero in expectation. The term Δ_k^{(3)} now becomes the difference of average reward between two Markov reward processes with slightly different transition probabilities. This term has a counterpart in [19] as a single-player version. Using similar analysis, we can prove that the dominant term of Δ_k^{(3)} is proportional to sp(h(M_k^1, π_k^1, π̃_k^2, ·)). In the single-player case, [19] can directly claim that sp(h(M_k^1, π_k^1, ·)) ≤ D (see their Remark 8), but unfortunately, this is not the case in the two-player version.⁴
To continue, we resort to the perturbation analysis for the mean first passage times (developed in Appendix C). Lemma 5.2 shows that M_k^1 will not be far from M for too many steps. Then Theorem C.9 in Appendix C tells us that if M_k^1 is close enough to M, T^{π_k^1, π̃_k^2}(M_k^1) can be bounded by 2T^{π_k^1, π̃_k^2}(M). As Remark M.1 implies that sp(h(M_k^1, π_k^1, π̃_k^2, ·)) ≤ T^{π_k^1, π̃_k^2}(M_k^1) and Assumption 1 guarantees that T^{π_k^1, π̃_k^2}(M) ≤ D, we have sp(h(M_k^1, π_k^1, π̃_k^2, ·)) ≤ T^{π_k^1, π̃_k^2}(M_k^1) ≤ 2T^{π_k^1, π̃_k^2}(M) ≤ 2D.
The above approach leads to Lemma 5.3, which is a key in our analysis. We first define some notations. Under Assumption 1, any pair of stationary policies induces an irreducible Markov chain, which has a unique stationary distribution. If the policy pair π = (π^1, π^2) is executed, we denote its stationary distribution by ν(M, π^1, π^2, ·) = ν(M, π, ·). Besides, denote v_k(s) := Σ_{t=t_k}^{t_{k+1}-1} 1_{s_t=s}. We say a phase k is benign if the following hold true: the true model M lies in M_k, π̃_k^2 is well-defined, sp(h(M_k^1, π_k^1, π̃_k^2, ·)) ≤ 2D, and ν(M, π_k^1, π̃_k^2, s) ≤ 2v_k(s)/T_k for all s. We can show the following:
Lemma 5.3. Σ_k T_k 1{phase k is not benign} ≤ Õ(D³S⁵A) with high probability.
Finally, for benign phases, we have the following two lemmas.
Lemma 5.4. Σ_k Δ_k^{(4)} 1{π̃_k^2 is well-defined} ≤ Õ(D√(ST) + DSA) with high probability.
⁴The argument in [19] is simple: suppose that h(M_k^1, π_k^1, s) − h(M_k^1, π_k^1, s′) > D; by the communicating assumption, there is a path from s′ to s with expected time no more than D. Thus a policy that first goes from s′ to s within D steps and then executes π_k^1 will outperform π_k^1 at s′. This leads to a contradiction. In two-player SGs, with a similar argument, we can also show that sp(h(M_k^1, π_k^1, π_k^{2*}, ·)) ≤ D, where π_k^{2*} is the best response to π_k^1 under M_k^1. However, since Player 2 is uncontrollable, his/her policy π_k^2 (or π̃_k^2) can be quite different from π_k^{2*}, and thus sp(h(M_k^1, π_k^1, π̃_k^2, ·)) ≤ D does not necessarily hold true.
Lemma 5.5. Σ_k Δ_k^{(3)} 1{phase k is benign} ≤ Õ(DS√(AT) + DS²A) with high probability.
(1)
Proof of Theorem 3.1. The regret proof starts from the decomposition of (3). ?k is bounded with
?
P (1) P
P (2)
?
the help of Lemma 4.1: k ?k ? k Tk / tk = O( T ). k ?k ? 0 by definition. Then with
?
(3)
(4)
? 3 S 5 A + DS AT ).
Lemma 5.1, 5.3, 5.4, and 5.5, we can bound ?k and ?k by O(D
6 Analysis under Assumption 2
In Section 5, the main ingredient of the regret analysis lies in bounding the span of the bias vector, sp(h(M_k^1, π_k^1, π̃_k^2, ·)). However, the same approach does not work here because under the weaker Assumption 2, we do not have a bound on the mean first passage time under arbitrary policy pairs. Hence we adopt the approach of approximating the average-reward SG problem by a sequence of finite-horizon SGs: on a high level, first, with the help of Assumption 2, we approximate the T multiple of the original average-reward SG game value (i.e., the total reward in hindsight) with the sum of those of H-step episodic SGs; second, we resort to [9]'s results to bound the H-step SGs' sample complexity and translate it into regret.
Approximation by repeated episodic SGs. For the approximation, the quantity H does not appear in UCSG but only in the analysis. The horizon T is divided into episodes each with length H. Index episodes with i = 1, ..., T/H, and denote episode i's first time step by τ_i. We say i ∈ ph(k) if all H steps of episode i lie in phase k. Define the H-step expected reward under joint policy π with initial state s as V_H(M, π, s) := E[Σ_{t=1}^H r_t | a_t ∼ π, s_1 = s]. Now we decompose the regret in phase k as
$$\Delta_k := T_k \rho^* - \sum_{t=t_k}^{t_{k+1}-1} r(s_t, a_t) \le \sum_{n=1}^{6} \Delta_k^{(n)}, \qquad (5)$$
where
$$\begin{aligned}
\Delta_k^{(1)} &= \sum_{i \in ph(k)} H\Big(\rho^* - \min_{\pi^2} \rho(M_k^1, \pi_k^1, \pi^2, s_{\tau_i})\Big), \\
\Delta_k^{(2)} &= \sum_{i \in ph(k)} \Big(H \min_{\pi^2} \rho(M_k^1, \pi_k^1, \pi^2, s_{\tau_i}) - \min_{\pi^2} V_H(M_k^1, \pi_k^1, \pi^2, s_{\tau_i})\Big), \\
\Delta_k^{(3)} &= \sum_{i \in ph(k)} \Big(\min_{\pi^2} V_H(M_k^1, \pi_k^1, \pi^2, s_{\tau_i}) - V_H(M_k^1, \pi_k^1, \pi_i^2, s_{\tau_i})\Big), \\
\Delta_k^{(4)} &= \sum_{i \in ph(k)} \Big(V_H(M_k^1, \pi_k^1, \pi_i^2, s_{\tau_i}) - V_H(M, \pi_k^1, \pi_i^2, s_{\tau_i})\Big), \\
\Delta_k^{(5)} &= \sum_{i \in ph(k)} \Big(V_H(M, \pi_k^1, \pi_i^2, s_{\tau_i}) - \sum_{t=\tau_i}^{\tau_{i+1}-1} r(s_t, a_t)\Big), \qquad \Delta_k^{(6)} = 2H.
\end{aligned}$$
Here, π_i^2 denotes Player 2's policy in episode i, which may be non-stationary. Δ_k^{(6)} comes from the possible two incomplete episodes in phase k. Δ_k^{(1)} is related to the tolerance level we set for the MAXIMIN-EVI algorithm: Δ_k^{(1)} ≤ T_k ε_k = T_k/√t_k. Δ_k^{(2)} is an error caused by approximating an infinite-horizon SG by a repeated episodic H-step SG (with possibly different initial states). Δ_k^{(3)} is clearly non-positive. It remains to bound Δ_k^{(2)}, Δ_k^{(4)} and Δ_k^{(5)}.
Lemma 6.1. By the Azuma-Hoeffding inequality, Σ_k Δ_k^{(5)} ≤ Õ(√(HT)) with high probability.
Lemma 6.2. Under Assumption 2, Σ_k Δ_k^{(2)} ≤ TD/H + Σ_k T_k ε_k.
(4)
From sample complexity to regret bound. As the main contributor of regret, ?k corresponds
to the inaccuracy in the transition probability estimation. Here we largely reuse [9]?s results where
they consider one-player episodic MDP with a fixed initial state distribution. Their main lemma
1
states that the number of episodes
in phases such that |VH (Mk , ?k , s0 ) ? VH (M, ?k , s0 )| > ?
2
2
2
? H S A/? , where s0 is their initial state in each episode. In other words,
will not exceed O
P Tk
1
2
? 2 2
k H 1{|VH (Mk , ?k , s0 ) ? VH (M, ?k , s0 )| > ?} = O(H S A/? ). Note that their proof allows
?k to be an arbitrarily selected non-stationary policy for phase k.
8
We can directly utilize their analysis, which we summarize as Theorem K.1 in the appendix. While their algorithm has an input ε, this input can be removed without affecting the bounds. This means that the PAC bounds hold for arbitrarily selected ε. With the help of Theorem K.1, we have
Lemma 6.3. Σ_k Δ_k^{(4)} ≤ Õ(S√(HAT) + HS²A) with high probability.
Proof of Theorem 3.2. With the decomposition (5) and the help of Lemmas 6.1, 6.2, and 6.3, the regret is bounded by Õ(TD/H + S√(HAT) + S²AH) = Õ((DS²AT²)^{1/3}) by selecting H = max{D, (D²T/(S²A))^{1/3}}.
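As a quick numeric illustration of this choice of H (toy values, not from the paper), one can check that it balances the TD/H and S√(HAT) terms, both of order (DS²AT²)^{1/3}:

```python
import math

# Toy check that H = max(D, (D^2 T / (S^2 A))^(1/3)) balances T*D/H
# against S*sqrt(H*A*T), so the total is of order (D S^2 A T^2)^(1/3).
def regret_terms(D, S, A, T):
    H = max(D, (D ** 2 * T / (S ** 2 * A)) ** (1.0 / 3.0))
    return H, T * D / H, S * math.sqrt(H * A * T), S ** 2 * A * H

D, S, A, T = 10, 20, 5, 10 ** 8   # illustrative values only
H, td_over_h, s_sqrt_hat, s2ah = regret_terms(D, S, A, T)
print(H, td_over_h, s_sqrt_hat, s2ah, (D * S ** 2 * A * T ** 2) ** (1.0 / 3.0))
```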
7 Sample Complexity of Offline Training
In Section 3.1, we defined L_ε to be the sample complexity of Player 1's maximin policy. In our offline version of UCSG, in each phase k we let both players each select their own optimistic policy. After Player 1 has optimistically selected π_k^1, Player 2 then optimistically selects his policy π_k^2 based on the known π_k^1. Specifically, the model-policy pair (M_k^2, π_k^2) is obtained by another extended value iteration on the extended MDP under fixed π_k^1, where Player 2's action set is extended. By setting the stopping threshold also as ε_k, we have
$$\rho(M_k^2, \pi_k^1, \pi_k^2, s) \;\le\; \min_{\tilde{M} \in \mathcal{M}_k} \min_{\pi^2} \rho(\tilde{M}, \pi_k^1, \pi^2, s) + \epsilon_k \qquad (6)$$
when value iteration halts. With this selection rule, we are able to obtain the following theorems.
Theorem 7.1. Under Assumption 1, UCSG achieves L_ε = Õ(D³S⁵A + D²S²A/ε²) w.h.p.
Theorem 7.2. Let Assumption 2 hold, and further assume that max_{s,s′} min_{π¹∈Π_SR} min_{π²∈Π_SR} T^{π¹,π²}_{s→s′}(M) ≤ D. Then UCSG achieves L_ε = Õ(DS²A/ε³) w.h.p.
The algorithm can output a single stationary policy for Player 1 with the following guarantee: if we run the offline version of UCSG for T > L_ε steps, the algorithm can output a single stationary policy that is ε-optimal. We show how to output this policy in the proofs of Theorems 7.1 and 7.2.
8 Open Problems
In this work, we obtain regrets of Õ(D³S⁵A + DS√(AT)) and Õ((DS²AT²)^{1/3}) under different mixing assumptions. A natural open problem is how to improve these bounds in both the asymptotic and constant terms. A lower bound for them can be inherited from the one-player MDP setting, which is Ω(√(DSAT)) [19].
Another open problem is whether we can still learn the SG if we further weaken the assumptions to max_{s,s′} min_{π¹} min_{π²} T^{π¹,π²}_{s→s′} ≤ D. We have argued that if we only have this assumption, in general we cannot get sublinear regret in the online setting. However, it is still possible to obtain polynomial-time offline sample complexity if the two players cooperate to explore the state-action space.
Acknowledgments
We would like to thank all the anonymous reviewers who devoted their time to reviewing this work and giving us valuable feedback. We would like to give special thanks to the reviewer who reviewed this work's previous version in ICML; your detailed check of our proofs greatly improved the quality of this paper.
References
[1] Yasin Abbasi, Peter L Bartlett, Varun Kanade, Yevgeny Seldin, and Csaba Szepesvári. Online learning in markov decision processes with adversarially chosen transition probability
distributions. In Advances in Neural Information Processing Systems, 2013.
[2] Peter Auer and Ronald Ortner. Logarithmic online regret bounds for undiscounted reinforcement
learning. In Advances in Neural Information Processing Systems, 2007.
[3] Peter L Bartlett and Ambuj Tewari. Regal: A regularization based algorithm for reinforcement
learning in weakly communicating mdps. In Proceedings of Conference on Uncertainty in
Artificial Intelligence. AUAI Press, 2009.
[4] Michael Bowling and Manuela Veloso. Rational and convergent learning in stochastic games.
In International Joint Conference on Artificial Intelligence, 2001.
[5] Ronen I Brafman and Moshe Tennenholtz. R-max-a general polynomial time algorithm for
near-optimal reinforcement learning. Journal of Machine Learning Research, 2002.
[6] S?bastien Bubeck and Aleksandrs Slivkins. The best of both worlds: Stochastic and adversarial
bandits. In Conference on Learning Theory, 2012.
[7] Grace E Cho and Carl D Meyer. Markov chain sensitivity measured by mean first passage times.
Linear Algebra and Its Applications, 2000.
[8] Vincent Conitzer and Tuomas Sandholm. Awesome: A general multiagent learning algorithm
that converges in self-play and learns a best response against stationary opponents. Machine
Learning, 2007.
[9] Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, 2015.
[10] Travis Dick, Andras Gyorgy, and Csaba Szepesvari. Online learning in markov decision
processes with changing cost sequences. In Proceedings of International Conference of Machine
Learning, 2014.
[11] Eyal Even-Dar, Sham M Kakade, and Yishay Mansour. Online markov decision processes.
Mathematics of Operations Research, 2009.
[12] Awi Federgruen. On n-person stochastic games by denumerable state space. Advances in
Applied Probability, 1978.
[13] Aurélien Garivier, Emilie Kaufmann, and Wouter M Koolen. Maximin action identification: A new bandit framework for games. In Conference on Learning Theory, pages 1028–1050, 2016.
[14] Arie Hordijk. Dynamic programming and markov potential theory. MC Tracts, 1974.
[15] Jeffrey J Hunter. Generalized inverses and their application to applied probability problems.
Linear Algebra and Its Applications, 1982.
[16] Jeffrey J Hunter. Stationary distributions and mean first passage times of perturbed markov
chains. Linear Algebra and Its Applications, 2005.
[17] Garud N. Iyengar. Robust dynamic programming. Math. Oper. Res., 30(2):257–280, 2005.
[18] Tommi Jaakkola, Michael I Jordan, and Satinder P Singh. On the convergence of stochastic
iterative dynamic programming algorithms. Neural computation, 1994.
[19] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement
learning. Journal of Machine Learning Research, 2010.
[20] Sham Machandranath Kakade et al. On the sample complexity of reinforcement learning. PhD
thesis, University of London London, England, 2003.
[21] Michail G Lagoudakis and Ronald Parr. Value function approximation in zero-sum markov
games. In Proceedings of Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 2002.
[22] Tor Lattimore and Marcus Hutter. Pac bounds for discounted mdps. In International Conference
on Algorithmic Learning Theory. Springer, 2012.
[23] Shiau Hong Lim, Huan Xu, and Shie Mannor. Reinforcement learning in robust markov decision
processes. Math. Oper. Res., 41(4):1325–1353, 2016.
[24] Michael L Littman. Markov games as a framework for multi-agent reinforcement learning. In
Proceedings of International Conference of Machine Learning, 1994.
[25] A Maurer and M Pontil. Empirical bernstein bounds and sample variance penalization. In
Conference on Learning Theory, 2009.
[26] J-F Mertens and Abraham Neyman. Stochastic games. International Journal of Game Theory,
1981.
[27] Gergely Neu, Andras Antos, András György, and Csaba Szepesvári. Online markov decision
processes under bandit feedback. In Advances in Neural Information Processing Systems, 2010.
[28] Gergely Neu, András György, and Csaba Szepesvári. The adversarial stochastic shortest path
problem with unknown transition probabilities. In AISTATS, 2012.
[29] Arnab Nilim and Laurent El Ghaoui. Robust control of markov decision processes with uncertain
transition matrices. Math. Oper. Res., 53(5):780–798, 2005.
[30] Julien Perolat, Bruno Scherrer, Bilal Piot, and Olivier Pietquin. Approximate dynamic programming for two-player zero-sum markov games. In Proceedings of International Conference of
Machine Learning, 2015.
[31] HL Prasad, Prashanth LA, and Shalabh Bhatnagar. Two-timescale algorithms for learning nash
equilibria in general-sum stochastic games. In Proceedings of the 2015 International Conference
on Autonomous Agents and Multiagent Systems. International Foundation for Autonomous
Agents and Multiagent Systems, 2015.
[32] Lloyd S Shapley. Stochastic games. Proceedings of the National Academy of Sciences, 1953.
[33] Csaba Szepesvári and Michael L Littman. Generalized markov decision processes: Dynamic-programming and reinforcement-learning algorithms. In Proceedings of International Conference of Machine Learning, 1996.
[34] J Van der Wal. Successive approximations for average reward markov games. International
Journal of Game Theory, 1980.
[35] Jia Yuan Yu and Shie Mannor. Arbitrarily modulated markov decision processes. In Proceedings
of Conference on Decision and Control. IEEE, 2009.
Position-based Multiple-play Bandit Problem with
Unknown Position Bias
Junpei Komiyama
The University of Tokyo
[email protected]
Junya Honda
The University of Tokyo / RIKEN
[email protected]
Akiko Takeda
The Institute of Statistical Mathematics / RIKEN
[email protected]
Abstract
Motivated by online advertising, we study a multiple-play multi-armed bandit
problem with position bias that involves several slots and the latter slots yield
fewer rewards. We characterize the hardness of the problem by deriving an asymptotic regret bound. We propose the Permutation Minimum Empirical Divergence
(PMED) algorithm and derive its asymptotically optimal regret bound. Because
of the uncertainty of the position bias, the optimal algorithm for such a problem
requires non-convex optimizations that are different from usual partial monitoring and semi-bandit problems. We propose a cutting-plane method and related
bi-convex relaxation for these optimizations by using auxiliary variables.
1 Introduction
One of the most important industries related to computer science is online advertising. In the United
States, 72.5 billion dollars was spent on online advertising [19] in 2016. Most online advertising is
viewed on web pages during Internet browsing. A web-site owner has a set of possible advertisements
(ads): some of them are more attractive than others, and the owner would like to maximize the
attention of visiting users. One of the observable metrics of the user attention is the number of
clicks on the ads. By considering each ad (resp. click) to be an arm (resp. reward) and assuming
only one slot is available for advertisements, the maximization of clicks boils down to the so-called
multi-armed bandit problem, where the arm with the largest expected reward is sought.
When two or more ad slots are available on the web page, the problem boils down to a multiple-play
multi-armed bandit problem. Several variants of the multiple play bandit problem and its extension
called semi-bandit problem have been considered in the literature. Arguably, the simplest is one
assuming that an ad receives equal clicks regardless of its position [2, 24]. In practice, ads receive
less clicks when they are placed at bottom slots; this is so-called position bias.
A well-known model that explains position bias is the cascade model [23], which assumes that the
users? attention goes from top to bottom until they lose interest. While this model explains position
bias in early positions well [10], a drawback to the cascade model when it is applied to the bandit
setting [26] is that the order of the allocated ads does not affect the reward, which is not very natural.
To resolve this issue, Combes et al. [8] introduced a weight for each slot that corresponds to the
reward obtained by clicking on that slot. However, no principled way of defining the weight has been
described.
An extension of the cascade model, called the dependent click model (DCM) [14], addresses these
issues by admitting multiple clicks of a user. In DCM, each slot is associated with a probability that
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
the user loses interest in the following ads if the current ad is interesting. While the algorithm in
Katariya et al. [21] cleverly exploits this structure, it still depends on the cascade assumption, and
as a result it discards some of the feedback on the latter slots, which reduces the efficiency of the
algorithm. Moreover, the reward in DCM does not exactly correspond to the number of clicks.
Lagrée et al. [27] studied a position-based model (PBM) where each slot has its own discount
factor on the number of clicks. PBM takes the order of the shown ads into consideration. However,
the algorithms proposed in Lagrée et al. [27] are "half-online" in the sense that the value of an ad is
adaptively estimated, whereas the values of the slots are estimated by using an off-line dataset. Such
an off-line computation is not very handy since the click trend varies depending on the day and hour
[1]. Moreover, a significant portion of online advertisements is sold via ad networks [34]. As a result,
advertisers have to deal with thousands of web pages to show their ads. Taking these aspects into
consideration, pre-computing position bias for each web page limits the use of these algorithms.
To address this issue, we provide a way to allocate advertisements in a fully online manner by
considering "PBM under Uncertainty of position bias" (PBMU). One of the challenges when the
uncertainty of a position-based factor is taken into account is that, when some ad appears to have a
small click through rate (CTR, the probability of click) in some slot, we cannot directly attribute it to
either the arm or the slot. In this sense, several combinations of ads and slots need to be examined to
estimate both the ad-based and position-based model parameters.
Note also that an extension of the non-stochastic bandit approach [3] to multiple-play, such as
the ordered slate model [20], is general enough to deal with PBMU. However, algorithms based
on the non-stochastic approach do not always perform well in compensation for its generality.
Another extension of multi-armed bandit problems is the partial monitoring problem [31, 4] that
admits the case in which the parameters are not directly observable. However, partial monitoring is
inefficient at solving bandit problems: a K-armed bandit problem with binary rewards corresponds to
a partial monitoring problem with 2K possible outcomes. As a result, the existing partial monitoring
algorithms, such as the ones in [33, 25], are not practical even for a moderate number of arms.
Besides, the computation of a feasible solution in PBMU requires non-convex optimizations as we
will see in Section 5. This implies that PBMU cannot directly be converted into the partial monitoring
where such a non-convex optimization does not appear [25].
The contributions of this paper are as follows: First, we study the position-based bandit model with
uncertainty (PBMU) and derive a regret lower bound (Section 3). Second, we propose an algorithm
that efficiently utilizes feedback (Section 4). One of the challenges in the multiple-play bandit
problem is that there is an exponentially large number of possible sequences of arms to allocate at
each round. We reduce the number of candidates by using a bipartite matching algorithm that runs in
a polynomial time to the number of arms. The performance of the proposed algorithm is verified in
Section 6. Third, a slightly modified version of the algorithm is analyzed in Section 7. This algorithm
has a regret upper bound that matches the lower bound. Finally, we reveal that the lower bound is
related to a linear optimization problem with an infinite number of constraints. Such an optimization
problem appears in many versions of the bandit problem [9, 25, 12]. We propose an optimization
method that reduces it to a finite-constraint linear optimization based on a version of the cutting-plane
method (Section 5). Related non-convex optimizations that are characteristic to PBMU are solved by
using bi-convex relaxation. Such optimization methods are of interest in solving even larger classes
of bandit problems.
2 Problem Setup
Let K be the number of arms (ads) and L < K be the number of slots. Each arm i ∈ [K] = {1, 2, ..., K} is associated with a distinct parameter θ*_i ∈ (0, 1), and each slot l ∈ [L] is associated with a parameter κ*_l ∈ (0, 1]. At each round t = 1, 2, ..., T, the system selects L arms I(t) = (I_1(t), ..., I_L(t)) and receives a corresponding binary reward (click or non-click) for each slot. The reward of the l-th slot is i.i.d. drawn from a Bernoulli distribution Ber(μ*_{I_l(t),l}), where μ*_{i,l} = θ*_i κ*_l. Although the slot-based parameters are unknown, it is natural that the ads receive more clicks when they are placed at early slots: we assume κ*_1 > κ*_2 > ··· > κ*_L > 0 and this order is known.
Note that this model is redundant: a model with μ*_{i,l} = θ*_i κ*_l is equivalent to the model with μ*_{i,l} = (θ*_i κ*_1)(κ*_l / κ*_1). Therefore, without loss of generality, we assume κ*_1 = 1. In summary,
this model involves K + L parameters {θ*_i}_{i∈[K]} and {κ*_l}_{l∈[L]}, and the number of rounds T. The parameters except for κ*_1 = 1 are unknown to the system. Let N_{i,l}(t) be the number of rounds before the t-th round at which arm i was in slot l (i.e., N_{i,l}(t) = Σ_{t′=1}^{t−1} 1{i = I_l(t′)}, where 1{E} is 1 if E holds and 0 otherwise). In the following, we abbreviate arm i in slot l to "pair (i, l)". Let μ̂_{i,l}(t) be the empirical mean of the reward of pair (i, l) after the first t − 1 rounds.
The goal of the system is to maximize the cumulative reward by using some sophisticated algorithm. Without loss of generality, we can assume θ*_1 > θ*_2 > θ*_3 > ··· > θ*_K. The algorithm cannot exploit this ordering. In this model, allocating arms with larger expected rewards to earlier slots increases the expected reward: as a result, allocating arms 1, 2, ..., L to slots 1, 2, ..., L maximizes the expected reward. A quantity called (pseudo-)regret is defined as
$$\mathrm{Reg}(T) = \sum_{t=1}^{T} \sum_{i \in [L]} (\theta^*_i - \theta^*_{I_i(t)})\, \kappa^*_i,$$
and E[Reg(T)] is used for evaluating the performance of an algorithm. Let Δ_{i,l} = θ*_l κ*_l − θ*_i κ*_l. The regret can be alternatively represented as Reg(T) = Σ_{(i,l)∈[K]×[L]} Δ_{i,l} N_{i,l}(T). The regret increases unless I(t) = (1, 2, ..., L).
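For concreteness, a minimal simulator of this reward model and the per-round regret might look as follows (a sketch; the parameter values mirror the synthetic setup of Section 6, and the names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.95, 0.8, 0.65, 0.5, 0.35])  # arm parameters theta*_i
kappa = np.array([1.0, 0.6])                    # position bias kappa*_l (kappa_1 = 1)
L = len(kappa)

def play_round(allocation):
    """allocation: tuple of L distinct 0-indexed arms; returns the binary click
    of each slot, drawn as Ber(mu*_{i,l}) with mu*_{i,l} = theta_i * kappa_l."""
    return rng.binomial(1, theta[list(allocation)] * kappa)

def expected_round_regret(allocation):
    """Per-round regret against the optimal allocation (arms 0, ..., L-1)."""
    return float(np.sum(theta[:L] * kappa - theta[list(allocation)] * kappa))

print(play_round((2, 0)), expected_round_regret((2, 0)))
```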
3 Regret Lower Bound
Here, we derive an asymptotic regret lower bound when T → ∞. In the context of the standard multi-armed bandit problem, Lai and Robbins [28] derived a regret lower bound for strongly consistent
algorithms, and it is followed by many extensions, such as the one for multi-parameter distributions
[6] and the ones for Markov decision processes [13, 7]. Intuitively, a strongly consistent algorithm is
?uniformly good? in the sense that it works well with any set of model parameters. Their result was
extended to the multiple-play [2] and PBM [27] cases. We further extend it to the case of PBMU.
0
Let 𝒯_all = {(θ′_1, ..., θ′_K) ∈ (0, 1)^K} and 𝒦_all = {(κ′_1, ..., κ′_L) : 1 = κ′_1 > κ′_2 > ··· > κ′_L > 0} be the sets of all possible values of the parameters of the arms and slots, respectively. Let (1), ..., (K) be a permutation of 1, ..., K and 𝒯_{(1),...,(L)} be the subset of 𝒯_all such that the i-th best arm is (i). Namely,
$$\mathcal{T}_{(1),\dots,(L)} = \Big\{ (\theta'_1, \dots, \theta'_K) \in (0,1)^K : \theta'_{(1)} > \theta'_{(2)} > \dots > \theta'_{(L)},\ \forall_{i \notin \{(1),\dots,(L)\}}\ \theta'_i < \theta'_{(L)} \Big\},$$
and 𝒯^c_{(1),...,(L)} = 𝒯_all \ 𝒯_{(1),...,(L)}. An algorithm is strongly consistent if E[Reg(T)] = o(T^a) for any a > 0 given any instance of the bandit problem with parameters {θ′_i}_{i∈[K]} ∈ 𝒯_all, {κ′_l} ∈ 𝒦_all.
The following lemma, whose proof is in Appendix F, lower-bounds the number of draws on the pairs
of arms and slots.
Lemma 1. (Lower bound on the number of draws) The following inequality holds for N_{i,l}(T) of any strongly consistent algorithm:
$$\forall \{\theta'_i\} \in \mathcal{T}^c_{1,\dots,L},\ \{\kappa'_l\} \in \mathcal{K}_{all}: \quad \sum_{(i,l) \in [K] \times [L]} \mathbb{E}[N_{i,l}(T)]\, d_{KL}(\theta^*_i \kappa^*_l,\, \theta'_i \kappa'_l) \ \ge\ \log T - o(\log T),$$
where d_KL(p, q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) is the KL divergence between two Bernoulli distributions.
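For reference, the Bernoulli KL divergence of Lemma 1 can be computed as follows (a small sketch; the clipping of q is our own guard against the divergence blowing up at the boundary, echoing the restriction q ∈ [ε, 1 − ε] used later in Section 5):

```python
import math

def d_kl(p, q, eps=1e-12):
    """Bernoulli KL divergence d_KL(p, q) as defined in Lemma 1."""
    q = min(max(q, eps), 1.0 - eps)  # keep q away from {0, 1}
    if p <= 0.0:
        return math.log(1.0 / (1.0 - q))
    if p >= 1.0:
        return math.log(1.0 / q)
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
```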
Such a divergence-based bound appears in many stochastic bandit problems. However, unlike other bandit problems, the argument inside the KL divergence is a product of parameters θ′_i κ′_l: while d_KL(·, θ′_i κ′_l) is convex in θ′_i κ′_l, it is not convex in the parameter space {θ′_i}, {κ′_l}. Therefore, finding a set of parameters that minimizes Σ_{i,l} d_KL(μ_{i,l}, θ′_i κ′_l) is non-convex, which makes PBMU difficult. Furthermore, we can formalize the regret lower bound as follows. Let
$$\mathcal{Q} = \Big\{ \{q_{i,l}\} \in [0,\infty)^{[K]\times[K]} : \forall_{i \in [K-1]} \sum_{l \in [K]} q_{i,l} = \sum_{l \in [K]} q_{i+1,l},\ \ \forall_{l \in [K-1]} \sum_{i \in [K]} q_{i,l} = \sum_{i \in [K]} q_{i,l+1} \Big\}.$$
Intuitively, {q_{i,l}} for l ≤ L corresponds to the draws of arm i in slot l, and {q_{i,l}} for l > L corresponds to the non-draws of arm i, as we will see later. The following quantities characterize the minimum amount of exploration for consistency:
$$\mathcal{R}_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}) = \Big\{ \{q_{i,l}\} \in \mathcal{Q} : \inf_{\substack{\{\theta'_i\} \in \mathcal{T}^c_{(1),\dots,(L)},\ \{\kappa'_l\} \in \mathcal{K}_{all} : \\ \forall_{i \in [L]}\ \theta'_i \kappa'_i = \theta_i \kappa_i}} \ \sum_{(i,l) \in [K] \times [L] : i \ne (l)} q_{i,l}\, d_{KL}(\mu_{i,l}, \theta'_i \kappa'_l) \ge 1 \Big\}. \qquad (1)$$
Equality (1) states that drawing each pair (i, l) for N_{i,l} = q_{i,l} log T times suffices to reduce the risk that the true parameter is {θ′_i}, {κ′_l}, for any parameters {θ′_i}, {κ′_l} such that {θ′_i} ∈ 𝒯^c_{(1),...,(L)} and θ′_i κ′_i = θ_i κ_i for any i ∈ [L]. Note that the constraint θ′_i κ′_i = θ_i κ_i corresponds to the fact that drawing an optimal list of arms does not increase the regret: intuitively, this corresponds to the fact that the true parameter of the best arm is obtained for free in the regret lower bound of the standard bandit problem¹.
X
?
C(1),...,(L)
({?i,l }, {?i }, {?l }) =
inf
?i,l qi,l ,
{qi,l }?R(1),...,(L) ({?i,l },{?i },{?l })
(i,l)?[K]?[L]
the set of optimal solutions of which is denoted by
R?(1),...,(L) ({?i,l }, {?i }, {?l }) = {qi,l } ? R(1),...,(L) ({?i,l }, {?i }, {?l }) :
?
?i,l qi,l = C(1),...,(L)
({?i,l }, {?i }, {?l }) . (2)
X
(i,l)?[K]?[L]
The value C*_{1,...,L} log T is the minimum possible regret such that the minimum divergence of {θ*_i}, {κ*_l} from any {θ′_i}, {κ′_l} is larger than log T. Using Lemma 1 yields the following regret lower bound, whose proof is also in Appendix F.
Theorem 2. The regret of a strongly consistent algorithm is lower bounded as follows:
$$\mathbb{E}[\mathrm{Reg}(T)] \ \ge\ C^*_{1,\dots,L}(\{\mu^*_{i,l}\}, \{\theta^*_i\}, \{\kappa^*_l\}) \log T - o(\log T).$$
Remark 3. N_{i,l} = (log T)/d_KL(θ*_i κ*_i, θ*_j κ*_i) for j = min(i − 1, L) satisfies the conditions in Lemma 1, which means that the regret lower bound in Theorem 2 is O(K log T/Δ) = O(K log T), where Δ = min_{i≠j, l≠m} |θ*_i − θ*_j||κ*_l − κ*_m|.
4 Algorithm
Our algorithm, called Permutation Minimum Empirical Divergence (PMED), is closely related to the
optimization we discussed in Section 3.
4.1 PMED Algorithm
We denote a list of L arms that are drawn at each round as an L-allocation. For example, (3, 2, 1, 5) is a 4-allocation, which corresponds to allocating arms 3, 2, 1, 5 to slots 1, 2, 3, 4, respectively. Like the Deterministic Minimum Empirical Divergence (DMED) algorithm [17] for the single-play multi-armed bandit problem, Algorithm 1 selects arms by using a loop. L_C = L_C(t) is the set of L-allocations in the current loop, and L_N = L_N(t) is the set of L-allocations that are to be drawn in the next loop. Note that |L_N| ≥ 1 always holds at the end of each loop, so that at least one element is put into L_C.
¹The infimum should take parameters θ′_i κ′_i ≠ θ_i κ_i into consideration. However, such parameters can be removed without increasing the regret, and thus the infimum over θ′_i κ′_i = θ_i κ_i suffices. This can be understood because the regret bound of the standard K-armed bandit problem with expectation μ_i of each arm is Σ_{i=2}^K (log T)/d_KL(μ_i, μ_1): arm 1 is drawn without increasing the regret, and thus the estimation of μ_1 can be arbitrarily accurate. In our case, placing arms 1, ..., L into slots 1, ..., L does not increase the regret, and thus the estimation of the product parameter θ_i κ_i for each i ∈ [L] is very accurate.
Algorithm 1 PMED and PMED-Hinge Algorithms
1: Input: α > 0; β > 0 (for PMED-Hinge); f(n) = γ/√n with γ > 0 (for PMED-Hinge).
2: L_N ← ∅. L_C ← {v_1^mod, ..., v_K^mod}.
3: while t ≤ T do
4:   for each v_m^mod : m ∈ [K] do
5:     If there exists some pair (i, l) ∈ v_m^mod such that N_{i,l}(t) < α√(log t), then put v_m^mod into L_N.
6:   end for
7:   Compute the MLE {θ̂_i(t)}_{i=1}^K, {κ̂_l(t)}_{l=1}^L as
       arg min_{{θ_i},{κ_l}} Σ_{(i,l)∈[K]×[L]} N_{i,l}(t) d_KL(μ̂_{i,l}(t), θ_i κ_l)   (PMED)
       arg min_{{θ_i},{κ_l}} Σ_{(i,l)∈[K]×[L]} N_{i,l}(t) (d_KL(μ̂_{i,l}(t), θ_i κ_l) − f(N_{i,l}(t)))_+.   (PMED-Hinge)
8:   if Algorithm is PMED-Hinge then
9:     If |θ̂_i(t) − θ̂_j(t)| < β/(log log t) for some i ≠ j or |κ̂_l(t) − κ̂_m(t)| < β/(log log t) for some l ≠ m, then put all of v_1^mod, ..., v_K^mod into L_N.
10:    If ∪_{(i,l)∈[K]×[L]} {d_KL(μ̂_{i,l}(t), θ̂_i(t)κ̂_l(t)) > f(N_{i,l}(t))} holds, then put all of v_1^mod, ..., v_K^mod into L_N.
11:  end if
12:  Compute {q_{i,l}} ∈ R*_{1̂(t),...,L̂(t)}({μ̂_{i,l}(t)}, {θ̂_i(t)}, {κ̂_l(t)})   (PMED)
       {q_{i,l}} ∈ R^{*,H}_{1̂(t),...,L̂(t)}({μ̂_{i,l}(t)}, {θ̂_i(t)}, {κ̂_l(t)}, {f(N_{i,l}(t))}).   (PMED-Hinge)
13:  Ñ_{i,l} ← q_{i,l} log t for each (i, l) ∈ [K] × [K].
14:  Decompose Ñ_{i,l} = Σ_v c_v^req e_v, where each e_v is a permutation matrix and c_v^req > 0, by using Algorithm 2.
15:  r_{i,l} ← N_{i,l}(t).
16:  for each permutation matrix e_v do
17:    c_v^aff ← min( c_v^req, max_c{ c > 0 : min_{(i,l)∈[K]×[L]} (r_{i,l} − c e_{v,i,l}) ≥ 0 } ).
18:    Let (v_1, ..., v_L) be the L-allocation corresponding to e_v. If c_v^aff < c_v^req and there exists a pair (v_l, l) that is in none of the L-allocations in L_N, then put (v_1, ..., v_L) into L_N.
19:    r_{i,l} ← r_{i,l} − c_v^aff e_{v,i,l}.
20:  end for
21:  Select I(t) ∈ L_C in an arbitrary fixed order. L_C ← L_C \ {I(t)}.
22:  Put (1̂(t), ..., L̂(t)) into L_N.
23:  If L_C = ∅ then L_C ← L_N, L_N ← ∅.
24: end while
put into LC . There are three lines where L-allocations are put into LN without duplication: Lines 5,
18, and 22. We explain each of these lines below.
mod
Line 5 is a uniform exploration over all pairs (i, l). For m ? [K], let vm
be an L-allocation
(1 + modK (m), 1 + modK (1 + m), . . . , 1 + modK (L + m ? 1)), where modK (x) is the minimum
mod
non-negative integer among {x?cK : c ? N}. From the definition of vm
, any pair (i,
?l) ? [K]?[L]
mod
mod
belongs to exactly one of v1 , . . . , vK . If some pair (i, l) is not allocated ? log t times, a
corresponding L-allocation is put into LN . This exploration stabilizes the estimators.
?i,l }i?[K],l?[K] is
Line 18 and related routines are based on the optimal amount of explorations. {N
?
calculated by plugging in the maximum likelihood estimator (MLE) ({?i }i?[K] , {?
?l }l?[L] ) into the
?i,l } is a set of K ? K variables2 , the algorithm needs
optimization problem of Inequality (2). As {N
to convert it into a set of L-allocations to put them into LN . This is done by decomposing it into a set
of permutation matrices, which we will explain in Section 4.2.
Line 22 is for exploitation: If no pair is put to LN by Line 5 or Line 18 and LC is empty, then Line
?
22 puts arms (?
1(t), . . . , L(t))
of the top-L largest {??i (t)} (with ties broken arbitrarily) into LN .
2
?i,l } are sets of K 2 variables.
K ? K is not a typo of K ? L: {qi,l } and {N
5
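To illustrate the covering property used in Line 5, the following sketch (names are ours) constructs v_m^mod and verifies that every pair (i, l) ∈ [K] × [L] is covered exactly once:

```python
def v_mod(m, K, L):
    """The L-allocation v_m^mod = (1 + mod_K(m), ..., 1 + mod_K(L + m - 1))."""
    return tuple(1 + (m + j) % K for j in range(L))

K, L = 5, 2
cover = {}
for m in range(1, K + 1):
    for slot, arm in enumerate(v_mod(m, K, L), start=1):
        cover.setdefault((arm, slot), []).append(m)
# every pair (arm, slot) in [K] x [L] appears in exactly one v_m^mod
assert len(cover) == K * L and all(len(ms) == 1 for ms in cover.values())
```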
Algorithm 2 Permutation Matrix Decomposition
1: Input: Ñ_{i,l}.
2: N̄_{i,l} ← Ñ_{i,l}.
3: while N̄_{i,l} > 0 for some (i, l) ∈ [K] × [K] do
4:   Find a permutation matrix e_v such that e_{v,i,l} = 1 ⇒ N̄_{i,l} > 0 for any (i, l).
5:   Let c_v^req = max{ c > 0 : min_{(i,l)∈[K]×[K]} (N̄_{i,l} − c e_{v,i,l}) ≥ 0 }.
6:   N̄_{i,l} ← N̄_{i,l} − c_v^req e_{v,i,l} for each (i, l) ∈ [K] × [K].
7: end while
8: Output {c_v^req, e_v}.
Figure 1: A permutation matrix with K = 4, where entry (i, l) = 1 for (i, l) ∈ {(1, 1), (2, 3), (3, 2), (4, 4)} and 0 elsewhere (left). If L = 2, this matrix corresponds to allocating arm 1 in slot 1 and arm 3 in slot 2 (right).
4.2 Permutation Matrix and Allocation Strategy
In this section, we discuss the way to convert {Ñ_{i,l}} = {q_{i,l} log t}, the estimated optimal amount of exploration, into L-allocations. A permutation matrix is a square matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere (Figure 1, left). There are K! permutation matrices, since they correspond to orderings of K elements. Therefore, even though {q_{i,l}} can obviously be decomposed into a linear combination of permutation matrices, it is not clear how to compute such a decomposition without enumerating the set of all permutation matrices, which is exponentially large in K. Algorithm 2 solves this problem: let N̄_{i,l} be a temporary variable that is initialized by Ñ_{i,l} at the beginning. In each iteration, it subtracts a scalar multiple of a permutation matrix e_v whose entries e_{v,i,l} of value 1 correspond to N̄_{i,l} > 0 (Line 6 in Algorithm 2). This boils down to finding a perfect matching in a bipartite graph where the left (resp. right) nodes correspond to rows (resp. columns) and an edge between nodes i and l is spanned if N̄_{i,l} > 0. Although a naive greedy approach fails in such a matching problem (c.f. Appendix A), a maximum matching in a bipartite graph can be computed by the Hopcroft–Karp algorithm [18] in O(K^{2.5}) time, and Theorem 4 below ensures that the maximum matching is always perfect:
Theorem 4. (Existence of a perfect matching) For any {N̄_{i,l}} ∈ [K] × [K] with N̄_{i,l} ≥ 0 and Σ_{(i,l)} N̄_{i,l} > 0 such that the sums of each row and column are equal, there exists a permutation matrix e_v such that N̄_{i,l} > 0 for all (i, l) ∈ [K] × [K] with e_{v,i,l} = 1.
The proof of Theorem 4 is in Appendix E. Each subtraction increases the number of 0 entries in N̄_{i,l} (Line 5 in Algorithm 2); Algorithm 2 runs in O(K^{4.5}) time by computing at most O(K²) perfect matching sub-problems, and as a result it decomposes Ñ_{i,l} into a positive linear combination of permutation matrices. The main algorithm checks whether each entry of the permutation matrices is sufficiently explored (Line 18 in Algorithm 1), and draws an L-allocation corresponding to a permutation matrix (Figure 1, right) if it is under-explored.
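For concreteness, here is a runnable sketch of Algorithm 2 (assuming numpy; function names are ours). It uses Kuhn's simple augmenting-path matching instead of Hopcroft–Karp, which keeps the code short at the cost of the O(K^{2.5}) per-matching guarantee:

```python
import numpy as np

def find_perfect_matching(support):
    """Augmenting-path matching on a K x K boolean matrix; returns
    match[l] = i (row i matched to column l). Theorem 4 guarantees a
    perfect matching when all row sums equal all column sums."""
    K = support.shape[0]
    match = [-1] * K  # column l -> row i

    def try_row(i, seen):
        for l in range(K):
            if support[i, l] and not seen[l]:
                seen[l] = True
                if match[l] == -1 or try_row(match[l], seen):
                    match[l] = i
                    return True
        return False

    for i in range(K):
        if not try_row(i, [False] * K):
            return None
    return match

def permutation_decomposition(N, tol=1e-9):
    """Algorithm 2: decompose N (non-negative, equal row/column sums)
    into a positive combination sum_v c_v^req * e_v of permutation matrices."""
    N = np.array(N, dtype=float)
    terms = []
    while N.max() > tol:
        match = find_perfect_matching(N > tol)
        e = np.zeros_like(N)
        for l, i in enumerate(match):
            e[i, l] = 1.0
        c = min(N[i, l] for l, i in enumerate(match))  # the c_v^req of Line 5
        terms.append((c, e))
        N = N - c * e
    return terms
```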
5 Optimizations
This section discusses two optimizations that appear in Algorithm 1, namely, the MLE computation (Line 7) and the computation of the optimal solution (Line 12).
The MLE (Line 7) is the solution of a bi-convex optimization: the optimization of {θ_i} (resp. {κ_l}) is convex when we view {κ_l} (resp. {θ_i}) as a constant. Therefore, off-the-shelf tools for optimizing convex functions (e.g., Newton's method) are applicable to alternately optimizing {θ_i} and {κ_l}. Assuming that each convex optimization yields an optimal value, such an alternate optimization monotonically decreases the objective function and thus converges. Note that a local minimum obtained by bi-convex optimization is not always a global minimum due to its non-convex nature.
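As an illustration of the alternating scheme (a minimal sketch, not the paper's Newton-based implementation), each one-dimensional convex subproblem below is solved by bisection on its monotone derivative:

```python
import numpy as np

def kl_grad(mu, x):
    # derivative of d_KL(mu, x) with respect to x: (x - mu) / (x (1 - x))
    return (x - mu) / (x * (1.0 - x))

def argmin_1d(weights, mu, scale):
    """Minimize sum_j weights[j] * d_KL(mu[j], t * scale[j]) over t in (0, 1)
    by bisection on the derivative (each subproblem is convex in t)."""
    lo, hi = 1e-6, (1.0 - 1e-6) / scale.max()
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if np.sum(weights * scale * kl_grad(mu, mid * scale)) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def fit_mle(mu_hat, N, iters=30):
    """Alternating minimization of the PMED objective on Line 7:
    min over {theta_i}, {kappa_l} of sum N[i,l] * d_KL(mu_hat[i,l], theta_i kappa_l),
    with kappa_1 pinned to 1. Only a local minimum is guaranteed."""
    K, L = mu_hat.shape
    theta, kappa = np.full(K, 0.5), np.linspace(1.0, 0.6, L)
    for _ in range(iters):
        for i in range(K):
            theta[i] = argmin_1d(N[i], mu_hat[i], kappa)
        for l in range(1, L):  # kappa[0] = 1 by normalization
            kappa[l] = min(argmin_1d(N[:, l], mu_hat[:, l], theta), 1.0)
    return theta, kappa
```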
Algorithm 3 Cutting-plane method for obtaining {q_{i,l}} on Line 12 of Algorithm 1
1: Input: the number of iterations S, nominal constraint {θ_i^{(0)}} ∈ 𝒯^c_{1̂(t),...,L̂(t)}.
2: for s = 1, 2, ..., S do
3:   Find {q_{i,l}^{(s)}} ∈ arg min_{{q_{i,l}}∈𝒬} Σ_{(i,l)∈[K]×[L]} Δ_{i,l} q_{i,l} such that
       Σ_{(i,l)∈[K]×[L]: i≠l̂(t)} q_{i,l} d_KL(μ̂_{i,l}(t), θ′_i · κ̂_l(t)θ̂_l(t)/θ′_l) ≥ 1
     for all {θ′_i} ∈ {{θ_i^{(0)}}, {θ_i^{(1)}}, ..., {θ_i^{(s−1)}}}.
4:   Find {θ_i^{(s)}} ∈ arg min_{{θ′_i}} Σ_{(i,l)∈[K]×[L]} q_{i,l}^{(s)} d_KL(μ̂_{i,l}(t), θ′_i · κ̂_l(t)θ̂_l(t)/θ′_l).
5: end for
Although the computation of the optimal solution (Line 12) involves {θ′_i} and {κ′_l}, the constraint eliminates the latter variables as κ′_i = θ̂_i(t)κ̂_i(t)/θ′_i. This optimization is a linear semi-infinite program (LSIP) in {q_{i,l}}, which is a linear program (LP) with an infinite set of linear constraints parameterized by {θ′_i}. Algorithm 3 is the cutting-plane method with a pessimistic oracle [29] that reduces the LSIP to finite-constraint LPs. At each iteration s, it adds a new constraint {θ_i^{(s)}} ∈ 𝒯^c_{1̂(t),...,L̂(t)} that is "hardest" in the sense that it minimizes the sum of divergences (Line 4 in Algorithm 3). The following theorem guarantees the convergence of the algorithm when the exactly hardest constraint is found.
Theorem 5. (Convergence of the cutting-plane method, Mutapcic and Boyd [29, Section 5.2]) Assume that there exists a constant C such that the constraint f({θ′_i}) = Σ_{(i,l)∈[K]×[L]} q_{i,l}^{(s)} d_KL(μ̂_{i,l}(t), θ′_i · κ̂_l(t)θ̂_l(t)/θ′_l) is Lipschitz continuous, i.e., |f({θ_i^{(1)}}) − f({θ_i^{(2)}})| ≤ C ||{θ_i^{(1)}} − {θ_i^{(2)}}||, where the norm ||·|| is any Lp norm. Then, Algorithm 3 converges to its optimal solution as S → ∞.
Although the Lipschitz continuity assumption does not hold as d_KL(p, q) approaches infinity when q is close to 0 or 1, by restricting q to some region [ε, 1 − ε], Lipschitz continuity can be guaranteed for some C = C(ε). Theorem 5 assumes the availability of an exact solution to the hardest constraint, which is generally hard to find since this objective is non-convex in nature. Still, we can obtain a fair solution for the following reasons. First, although the space 𝒯^c_{1̂(t),...,L̂(t)} is not convex, it suffices to consider separately, for each l ∈ [K] \ {1}, the convex subspace of parameters {θ′_i} ∈ (0, 1)^K satisfying θ′_{1̂(t)} ≥ ··· ≥ θ′_{X̂(t)} = θ′_{l̂(t)}, where X = min(L, l − 1), because the hardest constraint is always in one of these subspaces (which follows from the convexity of the objective function). Second, the following bi-convex relaxation can be used: let β′_1, ..., β′_L be auxiliary variables that correspond to 1/θ′_1, ..., 1/θ′_L. Namely, we optimize the relaxed objective function Σ_{(i,l)∈[K]×[L]} q_{i,l}^{(s)} d_KL(μ̂_{i,l}(t), θ′_i β′_l θ̂_l(t)κ̂_l(t)) + λ Σ_{i∈[L]} (θ′_i β′_i − 1)², where λ > 0 is a penalty parameter. The convexity of the KL divergence implies that this objective is a bi-convex function of {θ′_i} and {β′_l}, and thus an alternate optimization is effective. Setting λ → ∞ induces a solution in which β′_i is equal to 1/θ′_i ([30, Theorem 17.1]). Our algorithm starts with a small value of λ; then it gradually increases λ.
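A sketch of one master LP of Algorithm 3 under these conventions (assuming scipy; function names are ours, and the caller is assumed to zero out the divergence entries for i = l̂(t)). The reduction of q to K × K variables with the two families of equality constraints follows the definition of 𝒬:

```python
import numpy as np
from scipy.optimize import linprog

def master_lp(delta, kl_list, K, L):
    """Minimize sum_{i, l<=L} delta[i,l] * q[i,l] over q in Q, subject to one
    constraint sum kl[i,l] * q[i,l] >= 1 per accumulated hardest parameter
    (each kl is a (K, L) array of divergences). q is a K x K matrix,
    flattened row-major; columns l > L carry no cost and no divergence."""
    n = K * K
    c = np.zeros(n)
    for i in range(K):
        c[i * K: i * K + L] = delta[i, :L]
    A_eq, b_eq = [], []
    for i in range(K - 1):          # consecutive row sums equal
        row = np.zeros(n)
        row[i * K:(i + 1) * K] = 1.0
        row[(i + 1) * K:(i + 2) * K] = -1.0
        A_eq.append(row); b_eq.append(0.0)
    for l in range(K - 1):          # consecutive column sums equal
        row = np.zeros(n)
        row[l::K] += 1.0
        row[l + 1::K] -= 1.0
        A_eq.append(row); b_eq.append(0.0)
    A_ub, b_ub = [], []
    for kl in kl_list:              # -sum kl * q <= -1  <=>  sum >= 1
        row = np.zeros(n)
        for i in range(K):
            row[i * K: i * K + L] = -kl[i, :L]
        A_ub.append(row); b_ub.append(-1.0)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n)
    return res.x.reshape(K, K) if res.success else None
```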
6 Experiment
To evaluate the empirical performance of the proposed algorithms, we conducted computer simulations with synthetic and real-world datasets. The compared algorithms are MP-TS [24], dcmKL-UCB
[21], PBM-PIE [27], and PMED (proposed in this paper). MP-TS is an algorithm based on Thompson
sampling [32] that ignores position bias: it draws the top-L arms on the basis of posterior sampling,
and the posterior is calculated without considering position bias. DcmKL-UCB is a KL-UCB [11]
Figure 2: Regret-round log-log plots of the algorithms. (a) Synthetic; (b) Real-world (Tencent).
based algorithm that works under the DCM assumption. PBM-PIE is an algorithm that allocates the top-(L − 1) slots greedily and allocates the L-th arm based on the KL-UCB bound. Note that PBM-PIE requires an estimation of {κ*_l}; here, a bi-convex optimization is used to estimate it³. We did not test PBM-TS [27], which is another algorithm for PBM, mainly because its regret bound has not been derived yet. However, its regret appears to be asymptotically optimal when {κ*_l} are known (Figure 1(a) in Lagrée et al. [27]), and thus it does not explore sufficiently when there is uncertainty in the position bias. We set α = 10 for PMED. We used the Gurobi LP solver⁴ for solving the LPs. To speed up the computation, we skipped the bi-convex and LP optimizations in most rounds with large t and used the result of the last computation. We used Newton's method (resp. a gradient method) for computing the MLE (resp. the hardest constraint) in Algorithm 3.
Synthetic data: This simulation was designed to check the consistency of the algorithms, and it involved 5 arms with (θ_1, ..., θ_5) = (0.95, 0.8, 0.65, 0.5, 0.35) and 2 slots with (κ_1, κ_2) = (1, 0.6). The experimental results are shown on the left of Figure 2. The results are averaged over 100 runs. LB is the simulated value of the regret lower bound in Section 3. While the regret of PMED converges, the other algorithms suffer a 100-times or larger regret than LB at T = 10⁷, which implies that these algorithms are not consistent under our model.
Real-world data: Following the existing work [24, 27], we used the KDD Cup 2012 track 2 dataset [22], which involves session logs of soso.com, a search engine owned by Tencent. Each of the 150M lines from the log contains the user ID, the query, an ad, a slot in {1, 2, 3} at which the ad was displayed, and a binary reward (click/no-click). Following Lagrée et al. [27], we extracted the 8 major queries. Using the click logs of the queries, the CTRs and position bias were estimated so as to maximize the likelihood by using the bi-convex optimization in Section 4. Note that the number of arms and parameters are slightly different from the ones reported previously [27]. For the sake of completeness, we show the parameters in Appendix C. We conducted 100 runs for each query, and the right figure in Figure 2 shows the regret averaged over the 8 queries. Although the gap between PMED and the existing algorithms is not as drastic as with the synthetic parameters, the existing algorithms suffer larger regret than PMED.
7 Analysis
³The bi-convex optimization is identical to the one used for obtaining the MLE in PMED.
⁴http://www.gurobi.com
Although the authors conjecture that PMED is optimal, it is hard to analyze it directly. The technically hardest part arises from the case in which the divergence of each action is small but not yet fully converged. To circumvent this difficulty, we devised a modified algorithm called PMED-Hinge (Algorithm 1) that involves extra exploration. In particular, we modify the optimization problem as follows. Let
$$\mathcal{R}^H_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\}) = \Big\{ \{q_{i,l}\} \in \mathcal{Q} : \inf_{\substack{\{\theta'_i\} \in \mathcal{T}^c_{(1),\dots,(L)},\ \{\kappa'_l\} \in \mathcal{K}_{all} : \\ \forall_{l \in [L]}\ d_{KL}(\mu_{(l),l},\, \theta'_{(l)} \kappa'_l) \le \delta_{(l),l}}} \ \sum_{(i,l) \in [K] \times [L] : i \ne (l)} q_{i,l} \big( d_{KL}(\mu_{i,l}, \theta'_i \kappa'_l) - \delta_{i,l} \big)_+ \ge 1 \Big\},$$
where (x)_+ = max(x, 0). Moreover, let
$$C^{*,H}_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\}) = \inf_{\{q_{i,l}\} \in \mathcal{R}^H_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\})} \ \sum_{(i,l) \in [K] \times [L]} \Delta_{i,l}\, q_{i,l},$$
the set of optimal solutions of which is
$$\mathcal{R}^{*,H}_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\}) = \Big\{ \{q_{i,l}\} \in \mathcal{R}^H_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\}) : \sum_{(i,l) \in [K] \times [L]} \Delta_{i,l}\, q_{i,l} = C^{*,H}_{(1),\dots,(L)}(\{\mu_{i,l}\}, \{\theta_i\}, \{\kappa_l\}, \{\delta_{i,l}\}) \Big\}.$$
The necessity of the additional terms in PMED-Hinge is discussed in Appendix B. The following theorem, whose proof is in Appendix G, derives a regret upper bound that matches the lower bound in Theorem 2.
Theorem 6. (Asymptotic optimality of PMED-Hinge) Assume that the solution of the optimal exploration R^{*,H}_{1,...,L}({μ*_{i,l}}, {θ*_i}, {κ*_l}, {δ_{i,l}}) restricted to l ≤ L is unique at ({μ*_{i,l}}, {θ*_i}, {κ*_l}, {0}). For any α > 0, β > 0, and γ > 0, the regret of PMED-Hinge is bounded as:
$$\mathbb{E}[\mathrm{Reg}(T)] \ \le\ C^*_{1,\dots,L}(\{\mu^*_{i,l}\}, \{\theta^*_i\}, \{\kappa^*_l\}) \log T + o(\log T).$$
Note that the assumption on the uniqueness of the solution in Theorem 6 is required to achieve the optimal coefficient on the log T factor. It is not very difficult to derive an O(log T) regret even when the uniqueness condition is not satisfied. Although our regret bound is not finite-time, the only asymptotic part of the analysis comes from the optimal constant on top of the log T term (Lemma 11 in the Appendix), and it is not very hard to derive an O(log T) finite-time regret bound.
8 Conclusion
By providing a regret lower bound and an algorithm with a matching regret bound, we gave the first
complete characterization of a position-based multiple-play multi-armed bandit problem where the
quality of the arms and the discount factor of the slots are unknown. We provided a way to solve the optimization problems related to the algorithm, which is of independent interest and is potentially applicable to other bandit problems.
9 Acknowledgements
The authors gratefully acknowledge Kohei Komiyama for discussions on permutation matrices and sincerely thank the anonymous reviewers for their useful comments. This work was supported in part by JSPS KAKENHI Grant Numbers 17K12736, 16H00881, and 15K00031, and an Inamori Foundation Research Grant.
References
[1] D. Agarwal, B.-C. Chen, and P. Elango. Spatio-temporal models for estimating click-through rate. In WWW, pages 21–30, 2009.
[2] V. Anantharam, P. Varaiya, and J. Walrand. Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays-part i: I.i.d. rewards. Automatic Control, IEEE Transactions on, 32(11):968–976, 1987.
[3] P. Auer, Y. Freund, and R. E. Schapire. The non-stochastic multi-armed bandit problem. SIAM Journal on Computing, 2002.
[4] G. Bartók, D. P. Foster, D. Pál, A. Rakhlin, and C. Szepesvári. Partial monitoring - classification, regret bounds, and algorithms. Math. Oper. Res., 39(4):967–997, 2014.
[5] S. Bubeck. Bandits Games and Clustering Foundations. Theses, Université des Sciences et Technologie de Lille - Lille I, June 2010.
[6] A. Burnetas and M. Katehakis. Optimal adaptive policies for sequential allocation problems. Advances in Applied Mathematics, 17(2):122–142, 1996.
[7] A. Burnetas and M. Katehakis. Optimal adaptive policies for markov decision processes. Math. Oper. Res., 22(1):222–255, Feb. 1997.
[8] R. Combes, S. Magureanu, A. Proutière, and C. Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proceedings of the 2015 ACM SIGMETRICS, pages 231–244, 2015.
[9] R. Combes, M. S. Talebi, A. Proutière, and M. Lelarge. Combinatorial bandits revisited. In NIPS, pages 2116–2124, 2015.
[10] N. Craswell, O. Zoeter, M. J. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In WSDM, pages 87–94, 2008.
[11] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, pages 359–376, 2011.
[12] A. Garivier and E. Kaufmann. Optimal best arm identification with fixed confidence. In COLT, pages 998–1027, 2016.
[13] T. L. Graves and T. L. Lai. Asymptotically efficient adaptive choice of control laws in controlled markov chains. SIAM Journal on Control and Optimization, 35(3):715–743, 1997.
[14] F. Guo, C. Liu, and Y. M. Wang. Efficient multiple-click models in web search. In WSDM, pages 124–131, 2009.
[15] P. Hall. On representatives of subsets. Journal of the London Mathematical Society, s1-10(1):26–30, 1935.
[16] W. W. Hogan. Point-to-set maps in mathematical programming. SIAM Review, 15(3):591–603, 1973.
[17] J. Honda and A. Takemura. An Asymptotically Optimal Bandit Algorithm for Bounded Support Models. In COLT, pages 67–79, 2010.
[18] J. E. Hopcroft and R. M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM Journal on Computing, 2(4):225–231, 1973.
[19] Interactive Advertising Bureau. IAB internet advertising revenue report - 2016 full year results.,
2017.
[20] S. Kale, L. Reyzin, and R. E. Schapire. Non-stochastic bandit slate problems. In NIPS, pages
1054?1062, 2010.
[21] S. Katariya, B. Kveton, C. Szepesv?ri, and Z. Wen. DCM bandits: Learning to rank with
multiple clicks. In ICML, pages 1215?1224, 2016.
[22] KDD cup 2012 track 2, 2012.
[23] D. Kempe and M. Mahdian. A cascade model for externalities in sponsored search. In WINE,
pages 585?596, 2008.
[24] J. Komiyama, J. Honda, and H. Nakagawa. Optimal regret analysis of thompson sampling in
stochastic multi-armed bandit problem with multiple plays. In ICML, pages 1152?1161, 2015.
[25] J. Komiyama, J. Honda, and H. Nakagawa. Regret lower bound and optimal algorithm in finite
stochastic partial monitoring. In NIPS, pages 1792?1800, 2015.
[26] B. Kveton, C. Szepesv?ri, Z. Wen, and A. Ashkan. Cascading bandits: Learning to rank in the
cascade model. In ICML, pages 767?776, 2015.
[27] P. Lagr?e, C. Vernade, and O. Capp?. Multiple-play bandits in the position-based model. In
NIPS, pages 1597?1605, 2016.
[28] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in
Applied Mathematics, 6(1):4?22, 1985.
[29] A. Mutapcic and S. P. Boyd. Cutting-set methods for robust convex optimization with pessimizing oracles. Optimization Methods and Software, 24(3):381?406, 2009.
[30] J. Nocedal and S. Wright. Numerical Optimization. Springer Series in Operations Research and
Financial Engineering. Springer New York, 2nd edition, 2006.
[31] A. Piccolboni and C. Schindelhauer. Discrete prediction games with arbitrary feedback and
loss. In COLT 2001 and EuroCOLT 2001, pages 208?223, 2001.
[32] W. R. Thompson. On The Likelihood That One Unknown Probability Exceeds Another In View
Of The Evidence Of Two Samples. Biometrika, 25:285?294, 1933.
[33] H. P. Vanchinathan, G. Bart?k, and A. Krause. Efficient partial monitoring with prior information.
In NIPS, pages 1691?1699, 2014.
[34] S. Yuan, J. Wang, and X. Zhao. Real-time bidding for online advertising: Measurement and
analysis. In Proceedings of the Seventh International Workshop on Data Mining for Online
Advertising, ADKDD ?13, pages 3:1?3:8. ACM, 2013.
11
| 7085 |@word exploitation:1 version:3 polynomial:1 norm:2 nd:1 simulation:2 decomposition:1 necessity:1 liu:1 contains:1 series:1 united:1 ramsey:1 existing:4 current:2 com:2 yet:2 numerical:1 kdd:2 plot:1 designed:1 sponsored:1 bart:2 half:1 fewer:1 greedy:1 plane:5 akiko:1 beginning:1 completeness:1 characterization:1 node:2 honda:5 math:2 lipchitz:3 revisited:1 elango:1 mathematical:2 lagr:5 katehakis:2 yuan:1 owner:2 inside:1 manner:1 expected:4 hardness:1 multi:9 mahdian:1 wsdm:2 eurocolt:1 decomposed:1 resolve:1 armed:10 considering:3 increasing:2 provided:1 estimating:1 moreover:4 bounded:4 maximizes:1 what:1 minimizes:2 finding:2 guarantee:1 pseudo:1 temporal:2 pmed:23 interactive:1 tie:1 exactly:4 universit:1 biometrika:1 control:3 grant:2 appear:2 arguably:1 positive:1 before:1 t1:1 understood:1 variables2:1 local:1 limit:1 engineering:1 modify:1 schindelhauer:1 id:1 studied:1 examined:1 bi:10 averaged:2 practical:1 unique:1 kveton:2 practice:1 regret:40 handy:1 empirical:5 kohei:1 cascade:6 matching:8 boyd:2 pre:1 confidence:1 dmed:1 cannot:3 close:1 put:11 context:1 risk:1 optimize:1 equivalent:1 deterministic:1 www:2 reviewer:1 map:1 go:1 attention:3 regardless:1 kale:1 convex:25 thompson:3 estimator:2 solver4:1 rule:2 cascading:1 deriving:1 spanned:1 creq:5 financial:1 resp:8 pt:2 play:12 nominal:1 user:6 exact:1 programming:3 trend:1 element:2 bottom:2 solved:1 wang:2 thousand:1 region:1 ensures:1 ordering:2 decrease:1 removed:1 principled:1 broken:1 convexity:2 reward:16 technologie:1 hogan:1 solving:3 technically:1 bipartite:4 efficiency:1 req:2 basis:1 matchings:1 capp:2 bidding:1 isp:1 hopcroft:2 slate:2 represented:1 riken:2 distinct:1 effective:1 london:1 query:5 outcome:1 whose:4 larger:5 drawing:2 otherwise:1 online:9 obviously:1 sequence:1 propose:4 product:2 maximal:1 loop:4 reyzin:1 achieve:1 takeda:1 billion:1 convergence:2 empty:1 perfect:4 converges:3 spent:1 derive:5 depending:1 ac:2 stat:1 tall:4 mutapcic:2 solves:1 auxiliary:2 involves:5 implies:3 come:1 drawback:1 tokyo:3 attribute:1 mini6:1 stochastic:8 closely:1 exploration:7 explains:2 suffices:2 decompose:1 anonymous:1 pessimistic:1 extension:5 hold:5 sufficiently:2 considered:1 hall:1 wright:1 stabilizes:1 major:1 sought:1 early:2 wine:1 uniqueness:2 estimation:3 applicable:2 lose:1 combinatorial:1 robbins:2 largest:2 tool:1 always:5 sigmetrics:1 modified:2 ck:1 shelf:1 karp:2 derived:2 l0:3 june:1 vk:4 kakenhi:1 bernoulli:2 likelihood:3 check:2 mainly:1 rank:3 skipped:1 greedily:1 sense:4 dollar:1 dependent:1 i0:34 vl:3 bandit:36 selects:2 i1:1 issue:3 among:1 classification:1 colt:4 denoted:1 kempe:1 equal:3 beach:1 sampling:3 identical:1 placing:1 lille:2 hardest:6 icml:3 problem1:1 others:1 report:1 wen:2 divergence:10 interest:4 mining:1 analyzed:1 admitting:1 chain:1 allocating:4 accurate:2 edge:1 partial:9 allocates:2 unless:1 taylor:1 initialized:1 re:4 instance:1 industry:1 earlier:1 column:3 maximization:1 subset:2 entry:4 uniform:1 jsps:1 conducted:2 seventh:1 characterize:1 reported:1 burnetas:2 varies:1 ctrs:1 synthetic:4 adaptively:1 st:1 international:1 siam:4 off:3 vm:5 talebi:1 ctr:1 thesis:1 satisfied:1 pbm:9 inefficient:1 zhao:1 oper:2 account:1 converted:1 de:2 availability:1 coefficient:1 mp:2 ad:20 depends:1 later:1 view:2 tion:1 analyze:1 characterizes:1 portion:1 start:1 zoeter:1 contribution:1 il:3 ni:18 square:1 iab:1 kaufmann:1 characteristic:1 efficiently:1 yield:3 correspond:3 identification:1 none:1 advertising:8 monitoring:9 converged:1 maxc:1 explain:2 ashkan:1 
definition:1 lelarge:1 modk:4 involved:1 associated:3 proof:4 boil:4 dataset:2 kall:5 formalize:1 routine:1 sophisticated:1 auer:1 appears:4 day:1 done:1 though:2 strongly:5 generality:3 furthermore:1 until:1 receives:3 web:6 combes:3 continuity:2 infimum:2 quality:1 reveal:1 indicated:1 usa:1 true:2 piccolboni:1 equality:1 deal:2 attractive:1 round:9 during:1 game:2 complete:1 consideration:3 jp:2 exponentially:2 extend:1 discussed:2 significant:1 multiarmed:2 measurement:1 cup:2 cv:3 automatic:1 consistency:2 mathematics:3 i6:3 session:1 gratefully:1 add:1 feb:1 posterior:2 own:2 optimizing:2 moderate:1 inf:4 discard:1 belongs:1 inequality:2 binary:3 arbitrarily:1 minimum:9 additional:1 relaxed:1 vanchinathan:1 subtraction:1 maximize:3 advertiser:1 redundant:1 monotonically:1 semi:3 ii:1 multiple:14 full:1 reduces:2 exceeds:1 match:2 long:1 lai:3 devised:1 mle:6 dkl:17 plugging:1 controlled:1 qi:33 prediction:1 variant:1 n5:1 metric:1 expectation:1 externality:1 iteration:3 agarwal:1 c1:3 receive:1 whereas:1 szepesv:3 separately:1 krause:1 allocated:2 extra:1 typo:1 unlike:1 eliminates:1 comment:1 duplication:1 mod:11 integer:1 enough:1 affect:1 gave:1 click:22 reduce:2 t0:2 whether:1 motivated:1 allocate:2 penalty:1 suffer:2 york:1 remark:1 action:1 generally:1 useful:1 clear:1 amount:3 discount:2 induces:1 simplest:1 lsip:2 http:1 schapire:2 walrand:1 it3:1 estimated:4 track:2 discrete:1 drawn:4 ce:1 verified:1 garivier:2 nocedal:1 v1:4 asymptotically:6 relaxation:3 graph:3 year:1 convert:2 sum:2 run:4 parameterized:1 uncertainty:5 utilizes:1 draw:6 decision:2 appendix:8 bound:29 internet:2 followed:1 guaranteed:1 oracle:2 constraint:13 infinity:1 aff:1 junya:1 ri:7 software:1 sake:1 katariya:2 aspect:1 speed:1 argument:1 min:9 optimality:1 conjecture:1 alternate:2 combination:3 cleverly:1 slightly:2 lp:6 s1:1 intuitively:3 gradually:1 restricted:1 taken:1 ln:17 previously:1 sincerely:1 discus:2 drastic:1 end:7 available:2 decomposing:1 operation:1 komiyama:5 existence:1 bureau:1 assumes:2 top:5 clustering:1 hinge:10 l6:1 newton:2 exploit:2 ism:1 society:1 objective:5 quantity:2 strategy:1 craswell:1 usual:1 visiting:1 gradient:1 subspace:2 thank:1 simulated:1 evaluate:1 reason:1 assuming:3 besides:1 providing:1 setup:1 difficult:2 pie:3 potentially:1 info:1 negative:1 policy:2 unknown:5 perform:1 upper:2 markov:3 sold:1 datasets:1 finite:5 acknowledge:1 compensation:1 t:3 displayed:1 magureanu:1 defining:1 extended:1 arbitrary:3 lb:2 introduced:1 pair:10 namely:3 kl:6 gurobi:2 required:1 varaiya:1 engine:1 hour:1 prouti:2 nip:6 alternately:1 address:2 beyond:1 below:2 ev:14 challenge:2 max:2 natural:2 difficulty:1 circumvent:1 abbreviate:1 arm:35 naive:1 review:1 literature:1 acknowledgement:1 prior:1 multiplication:1 asymptotic:4 graf:1 freund:1 fully:2 loss:3 permutation:20 law:1 takemura:1 interesting:1 suf:1 allocation:16 revenue:1 foundation:2 vernade:1 consistent:6 foster:1 laroche:1 row:3 elsewhere:1 summary:1 placed:2 last:1 free:1 supported:1 bias:13 ber:1 institute:1 taking:1 feedback:3 calculated:2 evaluating:1 cumulative:1 world:3 ignores:1 author:2 adaptive:4 subtracts:1 transaction:1 observable:2 cutting:6 global:1 spatio:1 alternatively:1 continuous:1 search:3 decomposes:1 nature:2 robust:1 ca:1 obtaining:2 tencent:2 did:1 main:1 rh:3 edition:1 fair:1 site:1 representative:1 lc:10 fails:1 position:22 sub:1 clicking:1 candidate:1 third:1 advertisement:4 down:4 theorem:13 list:2 explored:2 admits:1 rakhlin:1 evidence:1 derives:1 exists:4 workshop:1 restricting:1 
sequential:1 browsing:1 gap:1 chen:1 explore:1 bubeck:1 ordered:1 scalar:1 soso:1 springer:2 corresponds:10 loses:1 satisfies:1 owned:1 acm:2 dcm:5 slot:35 viewed:1 goal:1 feasible:1 hard:3 infinite:3 except:1 uniformly:1 nakagawa:2 lemma:5 called:7 experimental:2 ucb:5 select:1 junpei:2 support:1 guo:1 latter:3 arises:1 anantharam:1 reg:6 |
6,728 | 7,086 | Active Exploration for Learning
Symbolic Representations
Garrett Andersen
PROWLER.io
Cambridge, United Kingdom
[email protected]
George Konidaris
Department of Computer Science
Brown University
[email protected]
Abstract
We introduce an online active exploration algorithm for data-efficiently learning
an abstract symbolic model of an environment. Our algorithm is divided into two
parts: the first part quickly generates an intermediate Bayesian symbolic model
from the data that the agent has collected so far, which the agent can then use along
with the second part to guide its future exploration towards regions of the state
space that the model is uncertain about. We show that our algorithm outperforms
random and greedy exploration policies on two different computer game domains.
The first domain is an Asteroids-inspired game with complex dynamics but basic
logical structure. The second is the Treasure Game, with simpler dynamics but
more complex logical structure.
1
Introduction
Much work has been done in artificial intelligence and robotics on how high-level state abstractions
can be used to significantly improve planning [19]. However, building these abstractions is difficult,
and consequently, they are typically hand-crafted [15, 13, 7, 4, 5, 6, 20, 9].
A major open question is then the problem of abstraction: how can an intelligent agent learn high-level models that can be used to improve decision making, using only noisy observations from its high-dimensional sensor and actuation spaces? Recent work [11, 12] has shown how to automatically
generate symbolic representations suitable for planning in high-dimensional, continuous domains.
This work is based on the hierarchical reinforcement learning framework [1], where the agent has
access to high-level skills that abstract away the low-level details of control. The agent then learns
representations for the (potentially abstract) effect of using these skills. For instance, opening a door
is a high-level skill, while knowing that opening a door typically allows one to enter a building would
be part of the representation for this skill. The key result of that work was that the symbols required to
determine the probability of a plan succeeding are directly determined by characteristics of the skills
available to an agent. The agent can learn these symbols autonomously by exploring the environment,
which removes the need to hand-design symbolic representations of the world.
It is therefore possible to learn the symbols by naively collecting samples from the environment,
for example by random exploration. However, in an online setting the agent shall be able to use
its previously collected data to compute an exploration policy which leads to better data efficiency.
We introduce such an algorithm, which is divided into two parts: the first part quickly generates
an intermediate Bayesian symbolic model from the data that the agent has collected so far, while
the second part uses the model plus Monte-Carlo tree search to guide the agent?s future exploration
towards regions of the state space that the model is uncertain about. We show that our algorithm is
significantly more data-efficient than more naive methods in two different computer game domains.
The first domain is an Asteroids-inspired game with complex dynamics but basic logical structure.
The second is the Treasure Game, with simpler dynamics but more complex logical structure.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Background
As a motivating example, imagine deciding the route you are going to take to the grocery store;
instead of planning over the various sequences of muscle contractions that you would use to complete
the trip, you would consider a small number of high-level alternatives such as whether to take one
route or another. You also would avoid considering how your exact low-level state affected your
decision making, and instead use an abstract (symbolic) representation of your state with components
such as whether you are at home or at work, whether you have to get gas, whether there is traffic, etc.
This simplification reduces computational complexity, and allows for increased generalization over
past experiences. In the following sections, we introduce the frameworks that we use to represent the
agent's high-level skills, and symbolic models for those skills.
2.1 Semi-Markov Decision Processes
We assume that the agent's environment can be described by a semi-Markov decision process (SMDP), given by a tuple D = (S, O, R, P, γ), where S ⊆ R^d is a d-dimensional continuous state space; O(s) returns a set of temporally extended actions, or options [19], available in state s ∈ S; R(s', t, s, o) and P(s', t | s, o) are the reward received and probability of termination in state s' ∈ S after t time steps following the execution of option o ∈ O(s) in state s ∈ S; and γ ∈ (0, 1] is a discount factor. In this paper, we are not concerned with the time taken to execute o, so we use P(s' | s, o) = ∫ P(s', t | s, o) dt.
An option o is given by three components: π_o, the option policy that is executed when the option is invoked; I_o, the initiation set consisting of the states where the option can be executed from; and β_o(s) ∈ [0, 1], the termination condition, which returns the probability that the option will terminate upon reaching state s. Learning models for the initiation set, rewards, and transitions for each option allows the agent to reason about the effect of its actions in the environment. To learn these option models, the agent has the ability to collect observations of the forms (s, O(s)) when entering a state s, and (s, o, s', r, t) upon executing option o from s.
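To make the data-collection interface concrete, the Python sketch below shows one plausible way to represent an option and to log both observation types. The paper provides no code; all class and function names here, including the gym-style env.step interface, are our own illustrative assumptions.

    # A minimal sketch, assuming env.step(action) returns (next_state, reward).
    # Option fields mirror (pi_o, I_o, beta_o); ObservationLog holds both record types.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple
    import numpy as np

    @dataclass
    class Option:
        policy: Callable[[np.ndarray], int]         # pi_o: maps state to low-level action
        initiation: Callable[[np.ndarray], bool]    # I_o: is the option executable here?
        termination: Callable[[np.ndarray], float]  # beta_o(s) in [0, 1]

    @dataclass
    class ObservationLog:
        availability: List[Tuple] = field(default_factory=list)  # (s, O(s)) records
        transitions: List[Tuple] = field(default_factory=list)   # (s, o, s', r, t) records

    def execute_option(env, s, o, log, rng):
        """Run option o from s until beta_o fires; log the (s, o, s', r, t) record."""
        s_cur, r_total, t = s, 0.0, 0
        while True:
            s_cur, r = env.step(o.policy(s_cur))
            r_total, t = r_total + r, t + 1
            if rng.random() < o.termination(s_cur):
                break
        log.transitions.append((s, o, s_cur, r_total, t))
        return s_cur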
2.2 Abstract Representations for Planning
We are specifically interested in learning option models which allow the agent to easily evaluate the
success probability of plans. A plan is a sequence of options to be executed from some starting state,
and it succeeds if and only if it is able to be run to completion (regardless of the reward). Thus, a plan
{o_1, o_2, ..., o_n} with starting state s succeeds if and only if s ∈ I_{o_1} and the termination state of each option (except for the last) lies in the initiation set of the following option, i.e., s' ∼ P(s' | s, o_1) ∈ I_{o_2}, s'' ∼ P(s'' | s', o_2) ∈ I_{o_3}, and so on.
Recent work [11, 12] has shown how to automatically generate a symbolic representation that supports
such queries, and is therefore suitable for planning. This work is based on the idea of a probabilistic
symbol, a compact representation of a distribution over infinitely many continuous, low-level states.
For example, a probabilistic symbol could be used to classify whether or not the agent is currently in
front of a door, or one could be used to represent the state that the agent would find itself in after
executing its "open the door" option. In both cases, using probabilistic symbols also allows the agent
to be uncertain about its state.
The following two probabilistic symbols are provably sufficient for evaluating the success probability of any plan [12]: the probabilistic precondition, Pre(o) = P(s ∈ I_o), which expresses the probability that an option o can be executed from each state s ∈ S, and the probabilistic image operator:

    Im(o, Z) = ∫_S P(s' | s, o) Z(s) P(I_o | s) ds / ∫_S Z(s) P(I_o | s) ds,
which represents the distribution over termination states if an option o is executed from a distribution
over starting states Z. These symbols can be used to compute the probability that each successive
option in the plan can be executed, and these probabilities can then be multiplied to compute the
overall success probability of the plan; see Figure 1 for a visual demonstration of a plan of length 2.
Figure 1: Determining the probability that a plan consisting of two options can be executed from a starting distribution Z_0. (a): Z_0 is contained in Pre(o_1), so o_1 can definitely be executed. (b): Executing o_1 from Z_0 leads to distribution over states Im(o_1, Z_0). (c): Im(o_1, Z_0) is not completely contained in Pre(o_2), so the probability of being able to execute o_2 is less than 1. Note that Pre is a set and Im is a distribution, and the agent typically has uncertain models for them.

Subgoal Options Unfortunately, it is difficult to model Im(o, Z) for arbitrary options, so we focus on restricted types of options. A subgoal option [17] is a special class of option where the distribution over termination states (referred to as the subgoal) is independent of the distribution over starting
states that it was executed from, e.g. if you make the decision to walk to your kitchen, the end result
will be the same regardless of where you started from.
For subgoal options, the image operator can be replaced with the effects distribution: Eff(o) = Im(o, Z), ∀Z(S), the resulting distribution over states after executing o from any start distribution Z(S). Planning with a set of subgoal options is simple because for each ordered pair of options o_i and o_j, it is possible to compute and store G(o_i, o_j), the probability that o_j can be executed immediately after executing o_i: G(o_i, o_j) = ∫_S Pre(o_j, s) Eff(o_i)(s) ds.
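For concreteness, the sketch below shows how G and a plan's success probability could be estimated once Pre and a sampler for Eff are available. The interfaces are assumptions for illustration, and Monte Carlo is only one way to approximate the integral above.

    # A sketch, assuming pre(o, s) returns P(s in I_o) and eff_sampler(o, rng)
    # draws termination states from Eff(o); G[i, j] approximates the integral.
    import numpy as np

    def estimate_G(options, pre, eff_sampler, n_samples=1000, rng=None):
        rng = rng or np.random.default_rng(0)
        n = len(options)
        G = np.zeros((n, n))
        for i, o_i in enumerate(options):
            draws = [eff_sampler(o_i, rng) for _ in range(n_samples)]
            for j, o_j in enumerate(options):
                G[i, j] = np.mean([pre(o_j, s) for s in draws])
        return G

    def plan_success_probability(plan, s0, options, pre, G):
        """plan: list of option indices; multiplies per-step execution probabilities."""
        p = pre(options[plan[0]], s0)
        for i, j in zip(plan[:-1], plan[1:]):
            p *= G[i, j]
        return p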
We use the following two generalizations of subgoal options: abstract subgoal options model the
more general case where executing an option leads to a subgoal for a subset of the state variables
(called the mask), leaving the rest unchanged. For example, walking to the kitchen leaves the amount
of gas in your car unchanged. More formally, the state vector can be partitioned into two parts
s = [a, b], such that executing o leaves the agent in state s' = [a, b'], where P(b') is independent of
the distribution over starting states. The second generalization is the (abstract) partitioned subgoal
option, which can be partitioned into a finite number of (abstract) subgoal options. For instance, an
option for opening doors is not a subgoal option because there are many doors in the world, however
it can be partitioned into a set of subgoal options, with one for every door.
The subgoal (and abstract subgoal) assumptions propose that the exact state from which option
execution starts does not really affect the options that can be executed next. This is somewhat
restrictive and often does not hold for options as given, but can hold for options once they have been
partitioned. Additionally, the assumptions need only hold approximately in practice.
3 Online Active Symbol Acquisition
Previous approaches for learning symbolic models from data [11, 12] used random exploration.
However, real world data from high-level skills is very expensive to collect, so it is important to use a
more data-efficient approach. In this section, we introduce a new method for learning abstract models
data-efficiently. Our approach maintains a distribution over symbolic models which is updated
after every new observation. This distribution is used to choose the sequence of options that in
expectation maximally reduces the amount of uncertainty in the posterior distribution over models.
Our approach has two components: an active exploration algorithm which takes as input a distribution
over symbolic models and returns the next option to execute, and an algorithm for quickly building
a distribution over symbolic models from data. The second component is an improvement upon
previous approaches in that it returns a distribution and is fast enough to be updated online, both of
which we require.
3.1 Fast Construction of a Distribution over Symbolic Option Models
Now we show how to construct a more general model than G that can be used for planning with
abstract partitioned subgoal options. The advantages of our approach versus previous methods are
that our algorithm is much faster, and the resulting model is Bayesian, both of which are necessary
for the active exploration algorithm introduced in the next section.
Recall that the agent can collect observations of the forms (s, o, s') upon executing option o from s,
and (s, O(s)) when entering a state s, where O(s) is the set of available options in state s. Given
a sequence of observations of this form, the first step of our approach is to find the factors [12],
partitions of state variables that always change together in the observed data. For example, consider a robot which has options for moving to the nearest table and picking up a glass on an adjacent table. Moving to a table changes the x and y coordinates of the robot without changing the joint angles of the robot's arms, while picking up a glass does the opposite. Thus, the x and y coordinates and the arm joint angles of the robot belong to different factors. Splitting the state space into factors reduces the number of potential masks (see end of Section 2.2) because we assume that if state variables i and j always change together in the observations, then this will always occur, e.g. we assume that moving to the table will never move the robot's arms.[1]
Finding the Factors Compute the set of observed masks M from the (s, o, s') observations: each observation's mask is the subset of state variables that differ substantially between s and s'. Since we work in continuous, stochastic domains, we must detect the difference between minor random noise (independent of the action) and a substantial change in a state variable caused by action execution. In principle this requires modeling action-independent and action-dependent differences, and distinguishing between them, but this is difficult to implement. Fortunately we have found that in practice allowing some noise and having a simple threshold is often effective, even in more noisy and complex domains. For each state variable i, let M_i ⊆ M be the subset of the observed masks that contain i. Two state variables i and j belong to the same factor f ∈ F if and only if M_i = M_j. Each factor f is given by a set of state variables and thus corresponds to a subspace S_f. The factors are updated after every new observation.
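A sketch of this grouping step follows; the threshold rule and the exact data structures are our assumptions, matching the simple-threshold heuristic described above.

    # A sketch of factor finding: variables i, j share a factor iff M_i = M_j,
    # where M_i is the set of observed masks containing i.
    from collections import defaultdict

    def observed_mask(s, s_next, eps=1e-3):
        """Simple threshold rule: variables that changed by more than eps."""
        return frozenset(i for i, (a, b) in enumerate(zip(s, s_next))
                         if abs(a - b) > eps)

    def find_factors(masks, d):
        """masks: iterable of frozensets of changed-variable indices; d: state dim."""
        M = {i: frozenset(m for m in masks if i in m) for i in range(d)}
        groups = defaultdict(list)
        for i in range(d):
            groups[M[i]].append(i)
        return list(groups.values())  # each group of variable indices is one factor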
Let S* be the set of states that the agent has observed and let S*_f be the projection of S* onto the subspace S_f for some factor f, e.g. in the previous example there is an S*_f which consists of the set of observed robot (x, y) coordinates. It is important to note that the agent's observations come only from executing partitioned abstract subgoal options. This means that S*_f consists only of abstract subgoals, because for each s ∈ S*, s_f was either unchanged from the previous state, or changed to another abstract subgoal. In the robot example, all (x, y) observations must be adjacent to a table because the robot can only execute an option that terminates with it adjacent to a table or one that does not change its (x, y) coordinates. Thus, the states in S* can be imagined as a collection of abstract subgoals for each of the factors. Our next step is to build a set of symbols for each factor to represent its abstract subgoals, which we do using unsupervised clustering.
Finding the Symbols For each factor f ∈ F, we find the set of symbols Z^f by clustering S*_f. Let Z^f(s_f) be the corresponding symbol for state s and factor f. We then map the observed states s ∈ S* to their corresponding symbolic states s_d = {Z^f(s_f), ∀f ∈ F}, and the observations (s, O(s)) and (s, o, s') to (s_d, O(s)) and (s_d, o, s'_d), respectively.
In the robot example, the (x, y) observations would be clustered around tables that the robot could
travel to, so there would be a symbol corresponding to each table.
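A sketch of symbol construction is given below. The paper does not prescribe a particular clustering algorithm for this step, so DBSCAN is an illustrative choice, and the parameter values are assumptions.

    # A sketch: cluster each factor's projected observations; cluster labels
    # then serve as that factor's symbols.
    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_symbols(states, factor, eps=0.5):
        """states: (n, d) array of observed low-level states; factor: variable indices."""
        labels = DBSCAN(eps=eps).fit_predict(states[:, factor])
        return labels  # labels[k] is the symbol Z^f(s_f) for the k-th observation

    def to_symbolic_states(states, factors, eps=0.5):
        per_factor = [find_symbols(states, f, eps) for f in factors]
        return [tuple(lbl[k] for lbl in per_factor) for k in range(len(states))]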
We want to build our models within the symbolic state space S^d. Thus we define the symbolic precondition, Pre(o, s_d), which returns the probability that the agent can execute an option from some symbolic state, and the symbolic effects distribution for a subgoal option o, Eff(o), which maps to a subgoal distribution over symbolic states. For example, the robot's "move to the nearest table" option maps the robot's current (x, y) symbol to the one which corresponds to the nearest table.
The next step is to partition the options into abstract subgoal options (in the symbolic state space),
e.g. we want to partition the "move to the nearest table" option in the symbolic state space so that the
symbolic states in each partition have the same nearest table.
Partitioning the Options For each option o, we initialize the partitioning P^o so that each symbolic state starts in its own partition. We use independent Bayesian sparse Dirichlet-categorical models [18] for the symbolic effects distribution of each option partition.[2] We then perform Bayesian Hierarchical Clustering [8] to merge partitions which have similar symbolic effects distributions.[3]
Footnotes:
1. The factors assumption is not strictly necessary as we can assign each state variable to its own factor. However, using this uncompressed representation can lead to an exponential increase in the size of the symbolic state space and a corresponding increase in the sample complexity of learning the symbolic models.
2. We use sparse Dirichlet-categorical models because there are a combinatorial number of possible symbolic state transitions, but we expect that each partition has non-zero probability for only a small number of them.
3. We use the closed-form solutions for Dirichlet-multinomial models provided by the paper.
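To illustrate, one partition's effects model could look like the sketch below; the concentration parameter alpha and the restriction of the support to outcomes observed so far are modelling assumptions made for illustration, not prescriptions from the paper.

    # A sketch of a sparse Dirichlet-categorical effects model for one partition.
    from collections import Counter
    import numpy as np

    class SparseDirichletCategorical:
        def __init__(self, alpha=0.1):
            self.alpha = alpha
            self.counts = Counter()   # symbolic outcome -> count

        def update(self, outcome):
            self.counts[outcome] += 1

        def posterior_mean(self):
            n, k = sum(self.counts.values()), len(self.counts)
            return {o: (c + self.alpha) / (n + k * self.alpha)
                    for o, c in self.counts.items()}

        def sample(self, rng):
            outcomes = list(self.counts)
            probs = rng.dirichlet([self.counts[o] + self.alpha for o in outcomes])
            return dict(zip(outcomes, probs))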
Algorithm 1 Fast Construction of a Distribution over Symbolic Option Models
1: Find the set of observed masks M.
2: Find the factors F.
3: ∀f ∈ F, find the set of symbols Z^f.
4: Map the observed states s ∈ S* to symbolic states s_d ∈ S*^d.
5: Map the observations (s, O(s)) and (s, o, s') to (s_d, O(s)) and (s_d, o, s'_d).
6: ∀o ∈ O, initialize P^o and perform Bayesian Hierarchical Clustering on it.
7: ∀o ∈ O, find A^o and F̂^o.
There is a special case where the agent has observed that an option o was available in some symbolic states S^d_a, but has yet to actually execute it from any s_d ∈ S^d_a. These are not included in the Bayesian Hierarchical Clustering; instead we have a special prior for the partition of o that they belong to. After completing the merge step, the agent has a partitioning P^o for each option o. Our prior is that with probability q_o,[4] each s_d ∈ S^d_a belongs to the partition p^o ∈ P^o which contains the symbolic states most similar to s_d, and with probability 1 − q_o each s_d belongs to its own partition. To determine the partition which is most similar to some symbolic state, we first find A^o, the smallest subset of factors which can still be used to correctly classify P^o. We then map each s_d ∈ S^d_a to the most similar partition by trying to match s_d masked by A^o with a masked symbolic state already in one of the partitions. If there is no match, s_d is placed in its own partition.
Our final consideration is how to model the symbolic preconditions. The main concern is that many
factors are often irrelevant for determining if some option can be executed. For example, whether or
not you have keys in your pocket does not affect whether you can put on your shoe.
Modeling the Symbolic Preconditions Given an option o and subset of factors F^o, let S^d_{F^o} be the symbolic state space projected onto F^o. We use independent Bayesian Beta-Bernoulli models for the symbolic precondition of o in each masked symbolic state s^d_{F^o} ∈ S^d_{F^o}. For each option o, we use Bayesian model selection to find the subset of factors F̂^o which maximizes the likelihood of the symbolic precondition models.
The final result is a distribution over symbolic option models H, which consists of the combined sets of independent symbolic precondition models {Pre(o, s^d_{F̂^o}); ∀o ∈ O, ∀s^d_{F̂^o} ∈ S^d_{F̂^o}} and independent symbolic effects distribution models {Eff(o, p^o); ∀o ∈ O, ∀p^o ∈ P^o}.
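The sketch below illustrates the Beta-Bernoulli precondition models and factor-subset selection by marginal likelihood. The exhaustive search over small subsets is our illustrative choice, not a prescription from the paper, and obs uses an assumed record format.

    # A sketch: one Beta-Bernoulli model per masked symbolic state; subsets of
    # factors are scored by the log marginal likelihood of success/failure counts.
    from itertools import combinations
    from math import lgamma

    def log_marginal(succ, fail, a=1.0, b=1.0):
        """Log marginal likelihood of counts under a Beta(a, b) prior."""
        return (lgamma(a + b) - lgamma(a) - lgamma(b)
                + lgamma(a + succ) + lgamma(b + fail)
                - lgamma(a + b + succ + fail))

    def score_subset(obs, subset):
        """obs: list of (symbolic_state_tuple, executed_ok) pairs, ok in {0, 1}."""
        buckets = {}
        for sd, ok in obs:
            key = tuple(sd[f] for f in subset)
            s, f = buckets.get(key, (0, 0))
            buckets[key] = (s + ok, f + (1 - ok))
        return sum(log_marginal(s, f) for s, f in buckets.values())

    def select_factor_subset(obs, n_factors, max_size=3):
        best, best_score = (), float("-inf")
        for k in range(max_size + 1):
            for subset in combinations(range(n_factors), k):
                sc = score_subset(obs, subset)
                if sc > best_score:
                    best, best_score = subset, sc
        return best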
The complete procedure is given in Algorithm 1. A symbolic option model h ∈ H can be sampled by drawing parameters for each of the Bernoulli and categorical distributions from the corresponding Beta and sparse Dirichlet distributions, and drawing outcomes for each q_o. It is also possible to consider distributions over other parts of the model such as the symbolic state space and/or a more complicated one for the option partitionings, which we leave for future work.
3.2 Optimal Exploration
In the previous section we have shown how to efficiently compute a distribution over symbolic option
models H. Recall that the ultimate purpose of H is to compute the success probabilities of plans
(see Section 2.2). Thus, the quality of H is determined by the accuracy of its predicted plan success
probabilities, and efficiently learning H corresponds to selecting the sequence of observations which
maximizes the expected accuracy of H. However, it is difficult to calculate the expected accuracy of
H over all possible plans, so we define a proxy measure to optimize which is intended to represent
the amount of uncertainty in H. In this section, we introduce our proxy measure, followed by an
algorithm for finding the exploration policy which optimizes it. The algorithm operates in an online
manner, building H from the data collected so far, using H to select an option to execute, updating
H with the new observation, and so on.
First we define the standard deviation σ_H, the quantity we use to represent the amount of uncertainty in H. To define the standard deviation, we need to also define the distance and mean.
4. This is a user-specified parameter.
We define the distance K from h_2 ∈ H to h_1 ∈ H to be the sum of the Kullback-Leibler (KL) divergences[5] between their individual symbolic effect distributions plus the sum of the KL divergences between their individual symbolic precondition distributions:[6]

    K(h_1, h_2) = Σ_{o∈O} [ Σ_{s^d_{F̂^o} ∈ S^d_{F̂^o}} D_KL(Pre_{h_1}(o, s^d_{F̂^o}) || Pre_{h_2}(o, s^d_{F̂^o}))
                           + Σ_{p^o ∈ P^o} D_KL(Eff_{h_1}(o, p^o) || Eff_{h_2}(o, p^o)) ].
We define the mean, E[H], to be the symbolic option model such that each Bernoulli symbolic precondition and categorical symbolic effects distribution is equal to the mean of the corresponding Beta or sparse Dirichlet distribution:

    ∀o ∈ O, ∀p^o ∈ P^o:  Eff_{E[H]}(o, p^o) = E_{h∼H}[Eff_h(o, p^o)],
    ∀o ∈ O, ∀s^d_{F̂^o} ∈ S^d_{F̂^o}:  Pre_{E[H]}(o, s^d_{F̂^o}) = E_{h∼H}[Pre_h(o, s^d_{F̂^o})].
The standard deviation σ_H is then simply σ_H = E_{h∼H}[K(h, E[H])]. This represents the expected amount of information which is lost if E[H] is used to approximate H.
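In code, σ_H can be estimated by Monte Carlo as in the brief sketch below, assuming H exposes closed-form Beta/Dirichlet means via H.mean() and sampling via H.sample(rng); these method names are our assumptions.

    # A sketch of a Monte Carlo estimate of sigma_H.
    def estimate_sigma(H, distance_K, rng, n_samples=100):
        mean_model = H.mean()
        return sum(distance_K(H.sample(rng), mean_model)
                   for _ in range(n_samples)) / n_samples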
Now we define the optimal exploration policy for the agent, which aims to maximize the expected reduction in σ_H after H is updated with new observations. Let H(w) be the posterior distribution over symbolic models when H is updated with symbolic observations w (the partitioning is not updated, only the symbolic effects distribution and symbolic precondition models), and let W(H, i, π) be the distribution over symbolic observations drawn from the posterior of H if the agent follows policy π for i steps. We define the optimal exploration policy π* as:

    π* = argmax_π ( σ_H − E_{w∼W(H,i,π)}[σ_{H(w)}] ).
For the convenience of our algorithm, we rewrite the second term by switching the order of the expectations: E_{w∼W(H,i,π)}[E_{h∼H(w)}[K(h, E[H(w)])]] = E_{h∼H}[E_{w∼W(h,i,π)}[K(h, E[H(w)])]].
Note that the objective function is non-Markovian because H is continuously updated with the agent's new observations, which changes σ_H. This means that π* is non-stationary, so Algorithm 2 approximates π* in an online manner using Monte-Carlo tree search (MCTS) [3] with the UCT tree policy [10]. π_T is the combined tree and rollout policy for MCTS, given tree T.
There is a special case when the agent simulates the observation of a previously unobserved transition,
which can occur under the sparse Dirichlet-categorical model. In this case, the amount of information
gained is very large, and furthermore, the agent is likely to transition to a novel symbolic state. Rather
than modeling the unexplored state space, instead, if an unobserved transition is encountered during
an MCTS update, it immediately terminates with a large bonus to the score, a similar approach to
that of the R-max algorithm [2]. The form of the bonus is -zg, where g is the depth that the update
terminated and z is a constant. The bonus reflects the opportunity cost of not experiencing something
novel as quickly as possible, and in practice it tends to dominate (as it should).
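The per-rollout score used in Algorithm 2 below can be written compactly as in the following sketch; H.updated_with(w) is an assumed helper returning the posterior H(w), and the other interfaces follow the earlier sketches.

    # A sketch of the rollout score: information gained about H from the
    # simulated observations w, minus the depth penalty z*g paid when a rollout
    # terminates early (e.g., on an unobserved transition).
    def rollout_score(h, H, w, g, z, distance_K):
        before = distance_K(h, H.mean())
        after = distance_K(h, H.updated_with(w).mean())
        return before - after - z * g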
4 The Asteroids Domain
The Asteroids domain is shown in Figure 2a and was implemented using the physics simulator pybox2d.
The agent controls a ship by either applying a thrust in the direction it is facing or applying a torque
in either direction. The goal of the agent is to be able to navigate the environment without colliding
with any of the four "asteroids." The agent's starting location is next to asteroid 1. The agent is given
the following 6 options (see Appendix A for additional details):
1. move-counterclockwise and move-clockwise: the ship moves from the current face it is
adjacent to, to the midpoint of the face which is counterclockwise/clockwise on the same
asteroid from the current face. Only available if the ship is at an asteroid.
Footnotes:
5. The KL divergence has previously been used in other active exploration scenarios [16, 14].
6. Similarly to other active exploration papers, we define the distance to depend only on the transition models and not the reward models.
Algorithm 2 Optimal Exploration
Input: Number of remaining option executions i.
1: while i ≥ 0 do
2:    Build H from observations (Algorithm 1).
3:    Initialize tree T for MCTS.
4:    while number of updates < threshold do
5:       Sample a symbolic model h ∼ H.
6:       Do an MCTS update of T with dynamics given by h.
7:       Terminate current update if depth g is ≥ i, or an unobserved transition is encountered.
8:       Store simulated observations w ∼ W(h, g, π_T).
9:       Score = K(h, E[H]) − K(h, E[H(w)]) − zg.
10:   end while
11:   return most visited child of root node.
12:   Execute corresponding option; update observations; i--.
13: end while
2. move-to-asteroid-1, move-to-asteroid-2, move-to-asteroid-3, and move-to-asteroid-4:
the ship moves to the midpoint of the closest face of asteroid 1-4 to which it has an
unobstructed path. Only available if the ship is not already at the asteroid and an unobstructed
path to some face exists.
Exploring with these options results in only one factor (for the entire state space), with symbols
corresponding to each of the 35 asteroid faces as shown in Figure 2a.
Figure 2: (a): The Asteroids Domain, and the 35 symbols which can be encountered while exploring
with the provided options. (b): The Treasure Game domain. Although the game screen is drawn
using large image tiles, sprite movement is at the pixel level.
Results We tested the performance of three exploration algorithms: random, greedy, and our
algorithm. For the greedy algorithm, the agent first computes the symbolic state space using steps
1-5 of Algorithm 1, and then chooses the option with the lowest execution count from its current
symbolic state. The hyperparameter settings that we use for our algorithm are given in Appendix A.
Figures 3a, 3b, and 3c show the percentage of time that the agent spends on exploring asteroids 1, 3,
and 4, respectively. The random and greedy policies have difficulty escaping asteroid 1, and are rarely
able to reach asteroid 4. On the other hand, our algorithm allocates its time much more proportionally.
Figure 3d shows the number of symbolic transitions that the agent has not observed (out of 115
possible).7 As we discussed in Section 3, the number of unobserved symbolic transitions is a good
representation of the amount of information that the models are missing from the environment.
Our algorithm significantly outperforms random and greedy exploration. Note that these results are
using an uninformative prior and the performance of our algorithm could be significantly improved by
7. We used Algorithm 1 to build symbolic models from the data gathered by each exploration algorithm.
[Figure 3: four bar-chart panels (a)-(d); y-axes: fraction of time spent on asteroids 1, 3, and 4, and number of unobserved symbolic transitions; x-axis: option executions (200-2000); series: random, greedy, MCTS. See caption below.]
Figure 3: Simulation results for the Asteroids domain. Each bar represents the average of 100 runs.
The error bars represent a 99% confidence interval for the mean. (a), (b), (c): The fraction of time that
the agent spends on asteroids 1, 3, and 4, respectively. The greedy and random exploration policies
spend significantly more time than our algorithm exploring asteroid 1 and significantly less time
exploring asteroids 3 and 4. (d): The number of symbolic transitions that the agent has not observed
(out of 115 possible). The greedy and random policies require 2-3 times as many option executions
to match the performance of our algorithm.
starting with more information about the environment. To try to give additional intuition, in Appendix
A we show heatmaps of the (x, y) coordinates visited by each of the exploration algorithms.
5 The Treasure Game Domain
The Treasure Game [12], shown in Figure 2b, features an agent in a 2D, 528 × 528 pixel video-game-like world, whose goal is to obtain treasure and return to its starting position on a ladder at the top of the screen. The 9-dimensional state space is given by the x and y positions of the agent, key, and treasure, the angles of the two handles, and the state of the lock.
The agent is given 9 options: go-left, go-right, up-ladder, down-ladder, jump-left, jump-right, down-right, down-left, and interact. See Appendix A for a more detailed description of the options and the environment dynamics. Given these options, the 7 factors with their corresponding number of symbols are: player-x, 10; player-y, 9; handle1-angle, 2; handle2-angle, 2; key-x and key-y, 3; bolt-locked, 2; and goldcoin-x and goldcoin-y, 2.
Results We tested the performance of the same three algorithms: random, greedy, and our algorithm.
Figure 4a shows the fraction of time that the agent spends without having the key and with the lock
still locked. Figures 4b and 4c show the number of times that the agent obtains the key and treasure,
respectively. Figure 4d shows the number of unobserved symbolic transitions (out of 240 possible).
Again, our algorithm performs significantly better than random and greedy exploration. The data
[Figure 4: four bar-chart panels (a)-(d); y-axes: fraction of time spent, number of times the key/treasure was obtained, and number of unobserved symbolic transitions; x-axis: option executions (200-2000); series: random, greedy, MCTS. See caption below.]
Figure 4: Simulation results for the Treasure Game domain. Each bar represents the average of 100
runs. The error bars represent a 99% confidence interval for the mean. (a): The fraction of time
that the agent spends without having the key and with the lock still locked. The greedy and random
exploration policies spend significantly more time than our algorithm exploring without the key and
with the lock still locked. (b), (c): The average number of times that the agent obtains the key and
treasure, respectively. Our algorithm obtains both the key and treasure significantly more frequently
than the greedy and random exploration policies. There is a discrepancy between the number of times
that our agent obtains the key and the treasure because there are more symbolic states where the
agent can try the option that leads to a reset than where it can try the option that leads to obtaining
the treasure. (d): The number of symbolic transitions that the agent has not observed (out of 240
possible). The greedy and random policies require 2-3 times as many option executions to match the
performance of our algorithm.
from our algorithm has much better coverage, and thus leads to more accurate symbolic models. For
instance in Figure 4c you can see that random and greedy exploration did not obtain the treasure
after 200 executions; without that data the agent would not know that it should have a symbol that
corresponds to possessing the treasure.
6 Conclusion
We have introduced a two-part algorithm for data-efficiently learning an abstract symbolic representation of an environment which is suitable for planning with high-level skills. The first part of the
algorithm quickly generates an intermediate Bayesian symbolic model directly from data. The second
part guides the agent's exploration towards areas of the environment that the model is uncertain about.
This algorithm is useful when the cost of data collection is high, as is the case in most real world
artificial intelligence applications. Our results show that the algorithm is significantly more data
efficient than using more naive exploration policies.
7 Acknowledgements
This research was supported in part by the National Institutes of Health under award number
R01MH109177. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The content is solely the
responsibility of the authors and does not necessarily represent the official views of the National
Institutes of Health.
References
[1] A.G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems, 13(4):341-379, 2003.
[2] R.I. Brafman and M. Tennenholtz. R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3(Oct):213-231, 2002.
[3] C.B. Browne, E. Powley, D. Whitehouse, S.M. Lucas, P.I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte-Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1-43, 2012.
[4] S. Cambon, R. Alami, and F. Gravot. A hybrid approach to intricate motion, manipulation and task planning. International Journal of Robotics Research, 28(1):104-126, 2009.
[5] J. Choi and E. Amir. Combining planning and motion planning. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 4374-4380, 2009.
[6] C. Dornhege, M. Gissler, M. Teschner, and B. Nebel. Integrating symbolic and geometric planning for mobile manipulation. In IEEE International Workshop on Safety, Security and Rescue Robotics, November 2009.
[7] E. Gat. On three-layer architectures. In D. Kortenkamp, R.P. Bonnasso, and R. Murphy, editors, Artificial Intelligence and Mobile Robots. AAAI Press, 1998.
[8] K.A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the 22nd International Conference on Machine Learning, pages 297-304. ACM, 2005.
[9] L. Kaelbling and T. Lozano-Pérez. Hierarchical planning in the Now. In Proceedings of the IEEE Conference on Robotics and Automation, 2011.
[10] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In Machine Learning: ECML 2006, pages 282-293. Springer, 2006.
[11] G.D. Konidaris, L.P. Kaelbling, and T. Lozano-Perez. Constructing symbolic representations for high-level planning. In Proceedings of the Twenty-Eighth Conference on Artificial Intelligence, pages 1932-1940, 2014.
[12] G.D. Konidaris, L.P. Kaelbling, and T. Lozano-Perez. Symbol acquisition for probabilistic high-level planning. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, pages 3619-3627, 2015.
[13] C. Malcolm and T. Smithers. Symbol grounding via a hybrid architecture in an autonomous assembly system. Robotics and Autonomous Systems, 6(1-2):123-144, 1990.
[14] S.A. Mobin, J.A. Arnemann, and F. Sommer. Information-based learning by agents in unbounded state spaces. In Advances in Neural Information Processing Systems, pages 3023-3031, 2014.
[15] N.J. Nilsson. Shakey the robot. Technical report, SRI International, April 1984.
[16] L. Orseau, T. Lattimore, and M. Hutter. Universal knowledge-seeking agents for stochastic environments. In International Conference on Algorithmic Learning Theory, pages 158-172. Springer, 2013.
[17] D. Precup. Temporal Abstraction in Reinforcement Learning. PhD thesis, Department of Computer Science, University of Massachusetts Amherst, 2000.
[18] N.F.Y. Singer. Efficient Bayesian parameter estimation in large discrete domains. In Advances in Neural Information Processing Systems 11, page 417. MIT Press, 1999.
[19] R.S. Sutton, D. Precup, and S.P. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181-211, 1999.
[20] J. Wolfe, B. Marthi, and S.J. Russell. Combined task and motion planning for mobile manipulation. In International Conference on Automated Planning and Scheduling, 2010.
6,729 | 7,087 | Clone MCMC: Parallel High-Dimensional Gaussian
Gibbs Sampling
Andrei-Cristian Bărbos
IMS Laboratory
Univ. Bordeaux - CNRS - BINP
[email protected]
François Caron
Department of Statistics
University of Oxford
[email protected]
Jean-François Giovannelli
IMS Laboratory
Univ. Bordeaux - CNRS - BINP
[email protected]
Arnaud Doucet
Department of Statistics
University of Oxford
[email protected]
Abstract
We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution. Similarly to
Hogwild methods, our approach does not target the original Gaussian distribution
of interest, but an approximation to it. Contrary to Hogwild methods, a single parameter allows us to trade bias for variance. We show empirically that our method
is very flexible and performs well compared to Hogwild-type algorithms.
1 Introduction
Sampling high-dimensional distributions is notoriously difficult in the presence of strong dependence
between the different components. The Gibbs sampler proposes a simple and generic approach, but
may be slow to converge, due to its sequential nature. A number of recent papers have advocated
the use of so-called "Hogwild Gibbs samplers", which perform conditional updates in parallel,
without synchronizing the outputs. Although the corresponding algorithms do not target the correct
distribution, this class of methods has shown to give interesting empirical results, in particular for
Latent Dirichlet Allocation models [1, 2] and Gaussian distributions [3].
In this paper, we focus on the simulation of high-dimensional Gaussian distributions. In numerous
applications, such as computer vision, satellite imagery, medical imaging, tomography or weather
forecasting, simulation of high-dimensional Gaussians is needed for prediction, or as part of a Markov
chain Monte Carlo (MCMC) algorithm. For example, [4] simulate high dimensional Gaussian
random fields for prediction of hydrological and meteorological quantities. For posterior inference
via MCMC in a hierarchical Bayesian model, elementary blocks of a Gibbs sampler often require to
simulate high-dimensional Gaussian variables. In image processing, the typical number of variables
(pixels/voxels) is of the order of 10^6 to 10^9. Due to this large size, Cholesky factorization is not
applicable; see for example [5] or [6].
In [7, 8] the sampling problem is recast as an optimisation one: a sample is obtained by minimising a
perturbed quadratic criterion. The cost of the algorithm depends on the choice of the optimisation
technique. Exact resolution is prohibitively expensive so an iterative solver with a truncated number
of iterations is typically used [5] and the distribution of the samples one obtains is unknown.
In this paper, we propose an alternative class of iterative algorithms for approximately sampling
high-dimensional Gaussian distributions. The class of algorithms we propose borrows ideas from
optimization and linear solvers. Similarly to Hogwild algorithms, our sampler does not target the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
distribution of interest but an approximation to this distribution. A single scalar parameter allows
us to tune both the error and the convergence rate of the Markov chain, allowing to trade variance
for bias. We show empirically that the method is very flexible and performs well compared to
Hogwild algorithms. Its performance are illustrated on a large-scale image inpainting-deconvolution
application.
The rest of the article is organized as follows. In Section 2, we review the matrix splitting techniques
that have been used to propose novel algorithms to sample high-dimensional normals. In Section
3, we present our novel methodology. Section 4 provides the intuition for such a scheme, which
we refer to as clone MCMC, and discusses some generalization of the idea to non-Gaussian target
distributions. We compare empirically Hogwild and our methodology on a variety of simulated
examples in Section 5. The application to image inpainting-deconvolution is developed in Section 6.
2
Background on matrix splitting and Hogwild Gaussian sampling
We consider a d-dimensional Gaussian random variable X with mean μ and positive definite covariance matrix Σ. The probability density function of X, evaluated at x = (x_1, . . . , x_d)^T, is

π(x) ∝ exp( −(1/2)(x − μ)^T Σ^{-1} (x − μ) ) ∝ exp( −(1/2) x^T J x + h^T x )

where J = Σ^{-1} is the precision matrix and h = Jμ the potential vector. Typically, the pair (h, J) is available, and the objective is to estimate (μ, Σ) or to simulate from π. For moderate-size or sparse precision matrices, the standard method for exact simulation from π is based on the Cholesky decomposition of Σ, which has computational complexity O(d^3) in the most general case [9]. If d is very large, the cost of Cholesky decomposition becomes prohibitive and iterative methods are favoured due to their smaller cost per iteration and low memory requirements. A principled iterative approach to draw samples approximately distributed from π is the single-site Gibbs sampler, which simulates a Markov chain (X^{(i)})_{i=1,2,...} with stationary distribution π by updating each variable in turn from its conditional distribution. A complete update of the d variables can be written in matrix form as

X^{(i+1)} = −(D + L)^{-1} L^T X^{(i)} + (D + L)^{-1} Z^{(i+1)},    Z^{(i+1)} ∼ N(h, D)    (1)

where D is the diagonal part of J and L is the strictly lower triangular part of J. Equation (1) highlights the connection between the Gibbs sampler and linear iterative solvers, as

E[X^{(i+1)} | X^{(i)} = x] = −(D + L)^{-1} L^T x + (D + L)^{-1} h

is the expression of the Gauss-Seidel linear iterative solver update for the system Jμ = h given the pair (h, J). The single-site Gaussian Gibbs sampler can therefore be interpreted as a stochastic version of the Gauss-Seidel linear solver. This connection has been noted by [10] and [11], and later exploited by [3] to analyse the Hogwild Gibbs sampler and by [6] to derive a family of Gaussian Gibbs samplers.
The Gauss-Seidel iterative solver is just a particular example of a larger class of matrix splitting solvers [12]. In general, consider the linear system Jμ = h and the matrix splitting J = M − N, where M is invertible. Gauss-Seidel corresponds to setting M = D + L and N = −L^T. More generally, [6] established that the Markov chain with transition

X^{(i+1)} = M^{-1} N X^{(i)} + M^{-1} Z^{(i+1)},    Z^{(i+1)} ∼ N(h, M^T + N)    (2)

admits π as stationary distribution if and only if the associated iterative solver with update

x^{(i+1)} = M^{-1} N x^{(i)} + M^{-1} h

is convergent, that is, if and only if ρ(M^{-1}N) < 1, where ρ denotes the spectral radius. Using this result, [6] built on the large literature on linear iterative solvers in order to derive generalized Gibbs samplers with the correct Gaussian target distribution, extending the approaches proposed by [10, 11, 13].

The practicality of the iterative samplers with transition (2) and matrix splitting (M, N) depends on
• How easy it is to solve the system Mx = r for any r,
• How easy it is to sample from N(0, M^T + N).

As noted by [6], there is a necessary trade-off here. The Jacobi splitting M = D would lead to a simple solution to the linear system, but sampling from a Gaussian distribution with covariance matrix M^T + N would be as complicated as solving the original sampling problem. The Gauss-Seidel splitting M = D + L provides an interesting trade-off as Mx = r can be solved by substitution and M^T + N = D is diagonal. The method of successive over-relaxation (SOR) uses a splitting M = ω^{-1} D + L with an additional tuning parameter ω > 0. In both the SOR and Gauss-Seidel cases, the system Mx = r can be solved by substitution in O(d^2), but the resolution of the linear system cannot be parallelized.
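To make the splitting viewpoint concrete, the following is a minimal NumPy sketch of the generic transition (2); the splitting matrix M (e.g., Gauss-Seidel M = D + L) is supplied by the caller, and all names are ours rather than from the paper. The Hogwild block-1 update discussed next corresponds to M = D but with the wrong noise covariance, which is what biases its stationary distribution.

```python
import numpy as np

def splitting_sampler(J, h, M, n_iter, rng=np.random.default_rng(0)):
    """Simulate X_{i+1} = M^{-1} N X_i + M^{-1} Z_i with Z_i ~ N(h, M^T + N),
    where N = M - J (transition (2)).  Assumes rho(M^{-1} N) < 1 and that
    M^T + N is positive definite (true e.g. for Gauss-Seidel, where it is D)."""
    d = J.shape[0]
    N = M - J
    C = np.linalg.cholesky(M.T + N)        # factorizes the noise covariance only
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        z = h + C @ rng.standard_normal(d)
        x = np.linalg.solve(M, N @ x + z)  # triangular solve for Gauss-Seidel/SOR
        samples[i] = x
    return samples
```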
All the methods discussed so far asymptotically sample from the correct target distribution. The Hogwild Gaussian Gibbs sampler does not, but its properties can also be analyzed using techniques from the linear iterative solver literature, as demonstrated by [3]. For simplicity of exposure, we focus here on the Hogwild sampler with blocks of size 1. In this case, the Hogwild algorithm simulates a Markov chain with transition

X^{(i+1)} = M_Hog^{-1} N_Hog X^{(i)} + M_Hog^{-1} Z^{(i+1)},    Z^{(i+1)} ∼ N(h, M_Hog)

where M_Hog = D and N_Hog = −(L + L^T). This update is highly amenable to parallelization as M_Hog is diagonal, thus one can easily solve the system M_Hog x = r and sample from N(0, M_Hog). [3] showed that if ρ(M_Hog^{-1} N_Hog) < 1, the Markov chain admits N(μ, Σ̃) as stationary distribution, where

Σ̃ = (I + M_Hog^{-1} N_Hog)^{-1} Σ.

The above approach can be generalized to blocks of larger sizes. However, beyond the block size, the Hogwild sampler does not have any tunable parameter allowing us to modify its incorrect stationary distribution. Depending on the computational budget, we may want to trade bias for variance. In the next Section, we describe our approach, which offers such flexibility.
3
High-dimensional Gaussian sampling
Let J = M − N be a matrix splitting, with M positive semi-definite. Consider the Markov chain (X^{(i)})_{i=1,2,...} with initial state X^{(0)} and transition

X^{(i+1)} = M^{-1} N X^{(i)} + M^{-1} Z^{(i+1)},    Z^{(i+1)} ∼ N(h, 2M).    (3)

The following theorem shows that, if the corresponding iterative solver converges, the Markov chain converges to a Gaussian distribution with the correct mean and an approximate covariance matrix.

Theorem 1. If ρ(M^{-1}N) < 1, the Markov chain (X^{(i)})_{i=1,2,...} defined by (3) has stationary distribution N(μ, Σ̃) where

Σ̃ = 2(I + M^{-1}N)^{-1} Σ = (I − (1/2) M^{-1} Σ^{-1})^{-1} Σ.
Proof. The equivalence between the convergence of the iterative linear solvers and their stochastic counterparts was established in [6, Theorem 1]. The mean μ̃ of the stationary distribution verifies the recurrence

μ̃ = M^{-1} N μ̃ + M^{-1} Σ^{-1} μ

hence

(I − M^{-1}N) μ̃ = M^{-1} Σ^{-1} μ    ⇒    μ̃ = μ

as Σ^{-1} = M − N. For the covariance matrix, consider the 2d-dimensional random variable

(Y_1, Y_2) ∼ N( (μ, μ), [ M/2  −N/2 ; −N/2  M/2 ]^{-1} ).    (4)

Then using standard manipulations of multivariate Gaussians and the inversion lemma on block matrices we obtain

Y_1 | Y_2 ∼ N(M^{-1} N Y_2, 2M^{-1}),    Y_2 | Y_1 ∼ N(M^{-1} N Y_1, 2M^{-1})

and

Y_1 ∼ N(μ, Σ̃),    Y_2 ∼ N(μ, Σ̃).
The above proof is not constructive, and we give in Section 4 the intuition behind the choice of the
transition and the name clone MCMC.
We will focus here on the following matrix splitting

M = D + 2ηI,    N = 2ηI − L − L^T    (5)

where η ≥ 0. Under this matrix splitting, M is a diagonal matrix and an iteration only involves a matrix-vector multiplication of computational cost O(d^2). This operation can be easily parallelized. Each update has thus the same computational complexity as the Hogwild algorithm. We have

Σ̃ = (I − (1/2)(D + 2ηI)^{-1} Σ^{-1})^{-1} Σ.
Since M^{-1} → 0 and M^{-1}N → I as η → ∞, we have

lim_{η→∞} Σ̃ = Σ,    lim_{η→∞} ρ(M^{-1}N) = 1.

The parameter η is an easily interpretable tuning parameter for the method: as η increases, the stationary distribution of the Markov chain becomes closer to the target distribution, but the samples become more correlated.
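For concreteness, here is a minimal NumPy sketch of transition (3) under the splitting (5); since M is diagonal, the update is elementwise and hence trivially parallelizable. The code is ours, not from the paper.

```python
import numpy as np

def clone_mcmc_gaussian(J, h, eta, n_iter, rng=np.random.default_rng(0)):
    """Chain (3) with M = D + 2*eta*I and N = 2*eta*I - L - L^T, i.e. N = M - J."""
    d = J.shape[0]
    m = np.diag(J) + 2.0 * eta                 # diagonal of M
    N = np.diag(m) - J                         # N = M - J
    # optional sanity check (O(d^3); drop for large d): rho(M^{-1} N) < 1
    assert np.max(np.abs(np.linalg.eigvals(N / m[:, None]))) < 1
    x = np.zeros(d)
    samples = np.empty((n_iter, d))
    for i in range(n_iter):
        z = h + np.sqrt(2.0 * m) * rng.standard_normal(d)  # Z ~ N(h, 2M)
        x = (N @ x + z) / m                                 # X = M^{-1}(N X + Z)
        samples[i] = x
    return samples
```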
For example, consider the target precision matrix J = Σ^{-1} with J_ii = 1, J_ij = −1/(d + 1) for i ≠ j and d = 1000. The proposed sampler is run for different values of η in order to estimate the covariance matrix Σ. Let

Σ̂ = (1/n_s) Σ_{i=1}^{n_s} (X^{(i)} − μ̂)(X^{(i)} − μ̂)^T

be the estimated covariance matrix, where μ̂ = (1/n_s) Σ_{i=1}^{n_s} X^{(i)} is the estimated mean. Figure 1(a) reports the bias term ‖Σ − Σ̃‖, the variance term ‖Σ̃ − Σ̂‖ and the overall error ‖Σ − Σ̂‖ as a function of η, using n_s = 10000 samples and 100 replications, with ‖·‖ the ℓ2 (Frobenius) norm. As η increases, the bias term decreases while the variance term increases, yielding an optimal value at η ≈ 10. Figure 1(b-c) shows the estimation error for the mean and covariance matrix as a function of η, for different sample sizes.
The following theorem gives a sufficient condition for the Markov chain to converge for any value η.

Theorem 2. Let M = D + 2ηI, N = 2ηI − L − L^T. A sufficient condition for ρ(M^{-1}N) < 1 for all η ≥ 0 is that Σ^{-1} is strictly diagonally dominant.

Proof. M is non singular, hence

det(M^{-1}N − λI) = 0  ⇔  det(N − λM) = 0.

Σ^{-1} = M − N is diagonally dominant, hence λM − N = (λ − 1)M + M − N is also diagonally dominant for any λ ≥ 1. From Gershgorin's theorem, a diagonally dominant matrix is nonsingular, so det(N − λM) ≠ 0 for all λ ≥ 1. We conclude that ρ(M^{-1}N) < 1.
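The sufficient condition is cheap to check numerically; a small helper (ours, for illustration):

```python
import numpy as np

def strictly_diagonally_dominant(J):
    """True iff |J_ii| > sum_{j != i} |J_ij| for every row i."""
    J = np.asarray(J)
    diag = np.abs(np.diag(J))
    off = np.abs(J).sum(axis=1) - diag
    return bool(np.all(diag > off))
```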
Figure 1: Influence of the tuning parameter η on the estimation error (panel (a): n_s = 20000; panels (b)-(c): estimation errors for the covariance matrix and the mean).

Figure 2: Influence of the sample size on the estimation error (panels (a)-(b): estimation errors for the covariance matrix and the mean).
4
Clone MCMC
We now provide some intuition on the construction given in Section 3, and justify the name given to the method. The joint pdf of (Y_1, Y_2) on R^{2d} defined in (4) with matrix splitting (5) can be expressed as

π̃_η(y_1, y_2) ∝ exp{ −(η/2)(y_1 − y_2)^T (y_1 − y_2) }
             × exp{ −(1/4)(y_1 − μ)^T D (y_1 − μ) − (1/4)(y_1 − μ)^T (L + L^T)(y_2 − μ) }
             × exp{ −(1/4)(y_2 − μ)^T D (y_2 − μ) − (1/4)(y_2 − μ)^T (L + L^T)(y_1 − μ) }

We can interpret the joint pdf above as having cloned the original random variable X into two dependent random variables Y_1 and Y_2. The parameter η tunes the correlation between the two variables, and π̃_η(y_1 | y_2) = Π_{k=1}^d π̃_η(y_{1k} | y_2), which allows for straightforward parallelization of the method. As η → ∞, the clones become more and more correlated, with corr(Y_1, Y_2) → 1 and π̃_η(y_1) → π(y_1).
The idea can be generalized further to pairwise Markov random fields. Consider the target distribution

π(x) ∝ exp{ −Σ_{1≤i≤j≤d} ψ_ij(x_i, x_j) }

for some potential functions ψ_ij, 1 ≤ i ≤ j ≤ d. The clone pdf is

π̃(y_1, y_2) ∝ exp{ −(η/2)(y_1 − y_2)^T (y_1 − y_2) − (1/2) Σ_{1≤i≤j≤d} (ψ_ij(y_{1i}, y_{2j}) + ψ_ij(y_{2i}, y_{1j})) }

where

π̃(y_1 | y_2) = Π_{k=1}^d π̃(y_{1k} | y_2).

Assuming π̃ is a proper pdf, we have π̃(y_1) → π(y_1) as η → ∞.
Figure 3: Estimation error for the covariance matrix Σ_1 for fixed computation time (panels: 10s, 80s, 120s), d = 1000.

Figure 4: Estimation error for the covariance matrix Σ_2 for fixed computation time (panels: 10s, 80s, 120s), d = 1000.
5
Comparison with Hogwild and Gibbs sampling
In this section, we provide an empirical comparison of the proposed approach with the Gibbs sampler
and Hogwild algorithm, using the splitting (5). Note that in order to provide a fair comparison
between the algorithms, we only consider the single-site Gibbs sampling and block-1 Hogwild
algorithms, whose updates are respectively given in Equations (1) and (2). Versions of all three
algorithms could also be developed with blocks of larger sizes.
We consider the following two precision matrices: Σ_1^{-1} is the d × d tridiagonal matrix with diagonal entries 1 + α² (and 1 in the two corner entries) and off-diagonal entries −α, and Σ_2^{-1} is the d × d banded matrix with unit diagonal, entries 0.3 on the first off-diagonals, and entries 0.15 on the second off-diagonals,

where for the first precision matrix we have α = 0.95. Experiments are run on GPU with 2688
CUDA cores. In order to compare the algorithms, we run each algorithm for a fixed execution time
(10s, 80s and 120s). Computation time per iteration for Hogwild and Clone MCMC are similar, and
they return a similar number of samples. The computation time per iteration of the Gibbs sampling is
much higher, due to the lack of parallelization, and it returns fewer samples. For Hogwild and Clone MCMC, we report both the approximation error ‖Σ − Σ̃‖ and the estimation error ‖Σ − Σ̂‖. For Gibbs, only the estimation error is reported.

Figures 3 and 4 show that, for a range of values of η, our method outperforms both Hogwild and Gibbs, whatever the execution time. As the computational budget increases, the optimal value for η increases.
6
Application to image inpainting-deconvolution
In order to demonstrate the usefulness of the approach, we consider an application to image inpainting-deconvolution. Let

Y = THX + B,    B ∼ N(0, Σ_b)    (6)
Figure 5: Deconvolution-interpolation results. Panels: (a) true image, (b) observed image, (c) posterior mean (optimization), (d) posterior mean (clone MCMC).
be the observation model, where Y ∈ R^n is the observed image, X ∈ R^d is the true image, B ∈ R^n is the noise component, H ∈ R^{d×d} is the convolution matrix and T ∈ R^{n×d} is the truncation matrix. The observation noise is assumed to be independent of X with Σ_b^{-1} = γ_b I and γ_b = 10^{-2}. Assume

X ∼ N(0, Σ_x)

with

Σ_x^{-1} = γ_0 1_d 1_d^T + γ_1 CC^T

and C is the block-Toeplitz convolution matrix corresponding to the 2D Laplacian filter and γ_0 = γ_1 = 10^{-2}.
The objective is to sample from the posterior distribution

X | Y = y ∼ N(μ_{x|y}, Σ_{x|y})

where

Σ_{x|y}^{-1} = H^T T^T Σ_b^{-1} T H + Σ_x^{-1},
μ_{x|y} = Σ_{x|y} H^T T^T Σ_b^{-1} y.
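At this scale the posterior precision is only available through matrix-vector products (convolutions and masking), so the clone MCMC update is best written matrix-free. A rough sketch, with our own names and assuming the diagonal of the precision has been precomputed:

```python
import numpy as np

def clone_mcmc_matrix_free(Jmul, diag_J, h, eta, n_iter,
                           rng=np.random.default_rng(0)):
    """Transition (3) with splitting (5), using only products x -> J @ x.

    Jmul   : callable computing J @ x (e.g. via FFT convolutions + masking)
    diag_J : precomputed diagonal of J, so that M = diag_J + 2*eta (diagonal)
    """
    m = diag_J + 2.0 * eta
    x = np.zeros_like(h)
    out = np.empty((n_iter, h.size))
    for i in range(n_iter):
        z = h + np.sqrt(2.0 * m) * rng.standard_normal(h.size)  # Z ~ N(h, 2M)
        Nx = m * x - Jmul(x)          # N x = M x - J x, since N = M - J
        x = (Nx + z) / m              # X = M^{-1}(N x + Z), fully elementwise
        out[i] = x
    return out
```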
The true unobserved image is of size 1000 × 1000, hence the posterior distribution corresponds to a random variable of size d = 10^6. We have considered that 20% of the pixels are not observed. The true image is given in Figure 5(a); the observed image is given in Figure 5(b).

In this high-dimensional setting with d = 10^6, direct sampling via Cholesky decomposition or standard single-site Gibbs algorithm are not applicable. We have implemented the block-1 Hogwild algorithm. However, in this scenario the algorithm diverges, which is certainly due to the fact that the spectral radius of M_Hog^{-1} N_Hog is greater than 1.
We run our clone MCMC algorithm for n_s = 19000 samples, out of which the first 4000 were discarded as burn-in samples, using as initialization the observed image, with missing entries padded with zero. The tuning parameter η is set to 1. Figure 5(c) contains the reconstructed image that was
obtained by numerically maximizing the posterior distribution using gradient ascent. We shall take
this image as reference when evaluating the reconstructed image computed as the posterior mean
from the drawn samples. The reconstructed image is given in Figure 5(d).
If we compare the restored image with the one obtained by the optimization approach we can
immediately see that the two images are visually very similar. This observation is further reinforced
by the top plot from Figure 6 where we have depicted the same line of pixels from both images. The
line of pixels that is displayed is indicated by the blue line segments in Figure 5(d). The traces in
grey represent the 99% credible intervals. We can see that for most of the pixels, if not for all for that
matter, the estimated value lies well within the 99% credible intervals. The bottom plot from Figure
6 displays the estimated image together with the true image for the same line of pixels, showing an
accurate estimation of the true image. Figure 7 shows traces of the Markov chains for 4 selected
pixels. Their exact position is indicated in Figure 5(b). The red marker corresponds to an observed
pixel from a region having a mid-grey tone. The green marker corresponds to an observed pixel from
a white tone region. The dark blue marker corresponds to an observed pixel from dark tone region.
Figure 6: Line of pixels from the restored image
Figure 7: Markov chains for selected pixels, clone MCMC
The cyan marker corresponds to an observed pixel from a region having a tone between mid-grey and
white.
The choice of η can be a sensitive issue for the practical implementation of the algorithm. We observed empirically convergence of our algorithm for any value η greater than 0.075. This is a clear advantage over Hogwild, as our approach is applicable in settings where Hogwild is not, as it diverges, and it offers an interesting way of controlling the bias/variance trade-off. We plan to investigate methods to automatically choose the tuning parameter η in future work.
References
[1] D. Newman, P. Smyth, M. Welling, and A. Asuncion. Distributed inference for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 1081–1088, 2008.

[2] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.

[3] M. Johnson, J. Saunderson, and A. Willsky. Analyzing Hogwild parallel Gaussian Gibbs sampling. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2715–2723. Curran Associates, Inc., 2013.

[4] Y. Gel, A. E. Raftery, T. Gneiting, C. Tebaldi, D. Nychka, W. Briggs, M. S. Roulston, and V. J. Berrocal. Calibrated probabilistic mesoscale weather field forecasting: The geostatistical output perturbation method. Journal of the American Statistical Association, 99(467):575–590, 2004.

[5] C. Gilavert, S. Moussaoui, and J. Idier. Efficient Gaussian sampling for solving large-scale inverse problems using MCMC. Signal Processing, IEEE Transactions on, 63(1):70–80, January 2015.

[6] C. Fox and A. Parker. Accelerated Gibbs sampling of normal distributions using matrix splittings and polynomials. Bernoulli, 23(4B):3711–3743, 2017.

[7] G. Papandreou and A. L. Yuille. Gaussian sampling by local perturbations. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1858–1866. Curran Associates, Inc., 2010.

[8] F. Orieux, O. Féron, and J. F. Giovannelli. Sampling high-dimensional Gaussian distributions for general linear inverse problems. IEEE Signal Processing Letters, 19(5):251–254, 2012.

[9] H. Rue. Fast sampling of Gaussian Markov random fields. Journal of the Royal Statistical Society: Series B, 63(2):325–338, 2001.

[10] S.L. Adler. Over-relaxation method for the Monte Carlo evaluation of the partition function for multiquadratic actions. Physical Review D, 23(12):2901, 1981.

[11] P. Barone and A. Frigessi. Improving stochastic relaxation for Gaussian random fields. Probability in the Engineering and Informational Sciences, 4(03):369–389, 1990.

[12] G. Golub and C. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, Maryland 21218-4363, Fourth edition, 2013.

[13] G.O. Roberts and S.K. Sahu. Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. Journal of the Royal Statistical Society: Series B, 59(2):291–317, 1997.
Fair Clustering Through Fairlets
Flavio Chierichetti
Dipartimento di Informatica
Sapienza University
Rome, Italy
Ravi Kumar
Google Research
1600 Amphitheatre Parkway
Mountain View, CA 94043
Silvio Lattanzi
Google Research
76 9th Ave
New York, NY 10011
Sergei Vassilvitskii
Google Research
76 9th Ave
New York, NY 10011
Abstract
We study the question of fair clustering under the disparate impact doctrine, where
each protected class must have approximately equal representation in every cluster. We formulate the fair clustering problem under both the k-center and the
k-median objectives, and show that even with two protected classes the problem
is challenging, as the optimum solution can violate common conventions?for
instance a point may no longer be assigned to its nearest cluster center!
En route we introduce the concept of fairlets, which are minimal sets that satisfy
fair representation while approximately preserving the clustering objective. We
show that any fair clustering problem can be decomposed into first finding good
fairlets, and then using existing machinery for traditional clustering algorithms.
While finding good fairlets can be NP-hard, we proceed to obtain efficient approximation algorithms based on minimum cost flow.
We empirically demonstrate the price of fairness by quantifying the value of fair
clustering on real-world datasets with sensitive attributes.
1
Introduction
From self driving cars, to smart thermostats, and digital assistants, machine learning is behind many
of the technologies we use and rely on every day. Machine learning is also increasingly used to
aid with decision making?in awarding home loans or in sentencing recommendations in courts of
law (Kleinberg et al. , 2017a). While the learning algorithms are not inherently biased, or unfair, the
algorithms may pick up and amplify biases already present in the training data that is available to
them. Thus a recent line of work has emerged on designing fair algorithms.
The first challenge is to formally define the concept of fairness, and indeed recent work shows that
some natural conditions for fairness cannot be simultaneously achieved (Kleinberg et al. , 2017b;
Corbett-Davies et al. , 2017). In our work we follow the notion of disparate impact as articulated
by Feldman et al. (2015), following the Griggs v. Duke Power Co. US Supreme Court case.
Informally, the doctrine codifies the notion that not only should protected attributes, such as race and
gender, not be explicitly used in making decisions, but even after the decisions are made they should
not be disproportionately different for applicants in different protected classes. In other words, if
an unprotected feature, for example, height, is closely correlated with a protected feature, such as
gender, then decisions made based on height may still be unfair, as they can be used to effectively
discriminate based on gender.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: A colorblind k-center clustering algorithm would group points a, b, c into one cluster, and x, y, z into a second cluster, with centers at a and z respectively. A fair clustering algorithm, on the other hand, may give a partition indicated by the dashed line. Observe that in this case a point is no longer assigned to its nearest cluster center. For example x is assigned to the same cluster as a even though z is closer.

While much of the previous work deals with supervised learning, in this work we consider the most common unsupervised learning problem, that of clustering. In modern machine learning systems, clustering is often used for feature engineering, for instance augmenting each example in the dataset with the id of the cluster it belongs to in an effort to bring expressive power to simple learning methods. In this way we want to make sure that the features that are generated are fair themselves. As in standard clustering literature, we are given a set X of points lying in some metric space, and our goal is to find a partition of X into k different clusters, optimizing a particular objective function. We assume that the coordinates of each point x ∈ X are unprotected; however each point also has a color, which identifies its protected class. The notion of disparate impact and fair representation then translates to that of color balance in each cluster. We
study the two color case, where each point is either red
or blue, and show that even this simple version has a lot of underlying complexity. We formalize
these views and define a fair clustering objective that incorporates both fair representation and the
traditional clustering cost; see Section 2 for exact definitions.
A clustering algorithm that is colorblind, and thus does not take a protected attribute into its decision
making, may still result in very unfair clusterings; see Figure 1. This means that we must explicitly
use the protected attribute to find a fair solution. Moreover, this implies that a fair clustering solution
could be strictly worse (with respect to an objective function) than a colorblind solution.
Finally, the example in Figure 1 also shows the main technical hurdle in looking for fair clusterings.
Unlike the classical formulation where every point is assigned to its nearest cluster center, this may
no longer be the case. Indeed, a fair clustering is defined not just by the position of the centers, but
also by an assignment function that assigns a cluster label to each input.
Our contributions. In this work we show how to reduce the problem of fair clustering to that of
classical clustering via a pre-processing step that ensures that any resulting solution will be fair.
In this way, our approach is similar to that of Zemel et al. (2013), although we formulate the
first step as an explicit combinatorial problem, and show approximation guarantees that translate to
approximation guarantees on the optimal solution. Specifically we:
(i) Define fair variants of classical clustering problems such as k-center and k-median;
(ii) Define the concepts of fairlets and fairlet decompositions, which encapsulate minimal fair
sets;
(iii) Show that any fair clustering problem can be reduced to first finding a fairlet decomposition,
and then using the classical (not necessarily fair) clustering algorithm;
(iv) Develop approximation algorithms for finding fair decompositions for a large range of
fairness values, and complement these results with NP-hardness; and
(v) Empirically quantify the price of fairness, i.e., the ratio of the cost of traditional clustering
to the cost of fair clustering.
Related work. Data clustering is a classic problem in unsupervised learning that takes on many
forms, from partition clustering, to soft clustering, hierarchical clustering, spectral clustering, among
many others. See, for example, the books by Aggarwal & Reddy (2013); Xu & Wunsch (2009) for
an extensive list of problems and algorithms. In this work, we focus our attention on the k-center and
k-median problems. Both of these problems are NP-hard but have known efficient approximation
algorithms. The state of the art approaches give a 2-approximation for k-center (Gonzalez, 1985) and a (1 + √3 + ε)-approximation for k-median (Li & Svensson, 2013).
Unlike clustering, the exploration of fairness in machine learning is relatively nascent. There are
two broad lines of work. The first is in codifying what it means for an algorithm to be fair. See
for example the work on statistical parity (Luong et al. , 2011; Kamishima et al. , 2012), disparate
impact (Feldman et al. , 2015), and individual fairness (Dwork et al. , 2012). More recent work
by Corbett-Davies et al. (2017) and Kleinberg et al. (2017b) also shows that some of the desired
properties of fairness may be incompatible with each other.
A second line of work takes a specific notion of fairness and looks for algorithms that achieve fair
outcomes. Here the focus has largely been on supervised learning (Luong et al. , 2011; Hardt et al.
, 2016) and online (Joseph et al. , 2016) learning. The direction that is most similar to our work is
that of learning intermediate representations that are guaranteed to be fair, see for example the work
by Zemel et al. (2013) and Kamishima et al. (2012). However, unlike their work, we give strong
guarantees on the relationship between the quality of the fairlet representation, and the quality of
any fair clustering solution.
In this paper we use the notion of fairness known as disparate impact and introduced by Feldman
et al. (2015). This notion is also closely related to the p%-rule as a measure for fairness. The
p%-rule is a generalization of the 80%-rule advocated by the US Equal Employment Opportunity Commission (Biddle, 2006) and was used in a recent paper on mechanisms for fair classification (Zafar et al., 2017). In particular our paper addresses an open question of Zafar et al. (2017), presenting a framework to solve an unsupervised learning task respecting the p%-rule.
2
Preliminaries
Let X be a set of points in a metric space equipped with a distance function d : X² → R_{≥0}. For an integer k, let [k] denote the set {1, . . . , k}.
We first recall standard concepts in clustering. A k-clustering C is a partition of X into k disjoint subsets, C_1, . . . , C_k, called clusters. We can evaluate the quality of a clustering C with different objective functions. In the k-center problem, the goal is to minimize

φ(X, C) = max_{C∈C} min_{c∈C} max_{x∈C} d(x, c),

and in the k-median problem, the goal is to minimize

ψ(X, C) = Σ_{C∈C} min_{c∈C} Σ_{x∈C} d(x, c).
A clustering C can be equivalently described via an assignment function α : X → [k]. The points in cluster C_i are simply the pre-image of i under α, i.e., C_i = {x ∈ X | α(x) = i}.

Throughout this paper we assume that each point in X is colored either red or blue; let χ : X → {RED, BLUE} denote the color of a point. For a subset Y ⊆ X and for c ∈ {RED, BLUE}, let c(Y) = {x ∈ Y | χ(x) = c} and let #c(Y) = |c(Y)|.
We first define a natural notion of balance.

Definition 1 (Balance). For a subset ∅ ≠ Y ⊆ X, the balance of Y is defined as:

balance(Y) = min( #RED(Y) / #BLUE(Y), #BLUE(Y) / #RED(Y) ) ∈ [0, 1].

The balance of a clustering C is defined as:

balance(C) = min_{C∈C} balance(C).
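A direct computation of these quantities (our helper functions, not part of the paper):

```python
def balance(points, color):
    """balance(Y) = min(#RED/#BLUE, #BLUE/#RED); 0 if Y is monochromatic."""
    red = sum(1 for p in points if color[p] == 'RED')
    blue = len(points) - red
    if red == 0 or blue == 0:
        return 0.0
    return min(red / blue, blue / red)

def clustering_balance(clusters, color):
    """balance(C) = minimum balance over the clusters of C."""
    return min(balance(c, color) for c in clusters)
```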
A subset with an equal number of red and blue points has balance 1 (perfectly balanced) and a monochromatic subset has balance 0 (fully unbalanced). To gain more intuition about the notion of balance, we investigate some basic properties that follow from its definition.

Lemma 2 (Combination). Let Y, Y′ ⊆ X be disjoint. If C is a clustering of Y and C′ is a clustering of Y′, then balance(C ∪ C′) = min(balance(C), balance(C′)).

It is easy to see that for any clustering C of X, we have balance(C) ≤ balance(X). In particular, if X is not perfectly balanced, then no clustering of X can be perfectly balanced. We next show an interesting converse, relating the balance of X to the balance of a well-chosen clustering.

Lemma 3. Let balance(X) = b/r for some integers 1 ≤ b ≤ r such that gcd(b, r) = 1. Then there exists a clustering Y = {Y_1, . . . , Y_m} of X such that (i) |Y_j| ≤ b + r for each Y_j ∈ Y, i.e., each cluster is small, and (ii) balance(Y) = b/r = balance(X).
Fairness and fairlets. Balance encapsulates a specific notion of fairness, where a clustering with a monochromatic cluster (i.e., fully unbalanced) is considered unfair. We call the clustering Y as described in Lemma 3 a (b, r)-fairlet decomposition of X and call each cluster Y ∈ Y a fairlet.

Equipped with the notion of balance, we now revisit the clustering objectives defined earlier. The objectives do not consider the color of the points, so they can lead to solutions with monochromatic clusters. We now extend them to incorporate fairness.

Definition 4 ((t, k)-fair clustering problems). In the (t, k)-fair center (resp., (t, k)-fair median) problem, the goal is to partition X into C such that |C| = k, balance(C) ≥ t, and φ(X, C) (resp. ψ(X, C)) is minimized.

Traditional formulations of k-center and k-median eschew the notion of an assignment function. Instead it is implicit through a set {c_1, . . . , c_k} of centers, where each point is assigned to its nearest center, i.e., α(x) = arg min_{i∈[k]} d(x, c_i). Without fairness as an issue, they are equivalent formulations; however, with fairness, we need an explicit assignment function (see Figure 1).
Missing proofs are deferred to the full version of the paper.
3
Fairlet decomposition and fair clustering
At first glance, the fair version of a clustering problem appears harder than its vanilla counterpart.
In this section we prove, interestingly, a reduction from the former to the latter. We do this by first
clustering the original points into small clusters preserving the balance, and then applying vanilla
clustering on these smaller clusters instead of on the original points.
As noted earlier, there are different ways to partition the input to obtain a fairlet decomposition. We
will show next that the choice of the partition directly impacts the approximation guarantees of the
final clustering algorithm.
Before proving our reduction we need to introduce some additional notation. Let Y = {Y_1, . . . , Y_m} be a fairlet decomposition. For each cluster Y_j, we designate an arbitrary point y_j ∈ Y_j as its center. Then for a point x, we let β : X → [1, m] denote the index of the fairlet to which it is mapped. We are now ready to define the cost of a fairlet decomposition.

Definition 5 (Fairlet decomposition cost). For a fairlet decomposition, we define its k-median cost as Σ_{x∈X} d(x, y_{β(x)}), and its k-center cost as max_{x∈X} d(x, y_{β(x)}). We say that a (b, r)-fairlet decomposition is optimal if it has minimum cost among all (b, r)-fairlet decompositions.

Since (X, d) is a metric, we have from the triangle inequality that for any other point c ∈ X,

d(x, c) ≤ d(x, y_{β(x)}) + d(y_{β(x)}, c).

Now suppose that we aim to obtain a (t, k)-fair clustering of the original points X. (As we observed earlier, necessarily t ≤ balance(X).) To solve the problem we can cluster instead the centers of each fairlet, i.e., the set {y_1, . . . , y_m} = Y, into k clusters. In this way we obtain a set of centers {c_1, . . . , c_k} and an assignment function α_Y : Y → [k].

We can then define the overall assignment function as α(x) = α_Y(y_{β(x)}) and denote the clustering induced by α as C_α. From the definition of Y and the property of fairlets and balance, we get that balance(C_α) = t. We now need to bound its cost. Let Ỹ be a multiset, where each y_i appears |Y_i| number of times.

Lemma 6. ψ(X, C_α) ≤ ψ(X, Y) + ψ(Ỹ, C_α) and φ(X, C_α) ≤ φ(X, Y) + φ(Ỹ, C_α).

Therefore in both cases we can reduce the fair clustering problem to the problem of finding a good fairlet decomposition and then solving the vanilla clustering problem on the centers of the fairlets. We refer to ψ(X, Y) and φ(X, Y) as the k-median and k-center costs of the fairlet decomposition.
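The reduction itself is mechanical; a sketch in Python, with the fairlet decomposition and the vanilla k-clustering routine passed in as black boxes (names are ours):

```python
def fair_clustering(points, color, k, decompose, cluster):
    """Reduce fair clustering to vanilla clustering via fairlets (Lemma 6).

    decompose : returns (fairlets, fairlet_centers), preserving balance
    cluster   : vanilla k-clustering of the centers, returns
                (centers, assign_center) with assign_center[y] in [k]
    """
    fairlets, fairlet_centers = decompose(points, color)
    centers, assign_center = cluster(fairlet_centers, k)
    assignment = {}
    for fairlet, y in zip(fairlets, fairlet_centers):
        for x in fairlet:
            assignment[x] = assign_center[y]   # inherit the fairlet's cluster
    return centers, assignment
```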
4
Algorithms
In the previous section we presented a reduction from the fair clustering problem to the regular
counterpart. In this section we use it to design efficient algorithms for fair clustering.
We first focus on the k-center objective and show in Section 4.3 how to adapt the reasoning to solve
the k-median objective. We begin with the most natural case in which we require the clusters to be
perfectly balanced, and give efficient algorithms for the (1, k)-fair center problem. Then we analyze
the more challenging (t, k)-fair center problem for t < 1. Let B = BLUE(X), R = RED(X).
4.1
Fair k-center warmup: (1, 1)-fairlets
Suppose balance(X) = 1, i.e., |R| = |B|, and we wish to find a perfectly balanced clustering. We
now show how we can obtain it using a good (1, 1)-fairlet decomposition.
Lemma 7. An optimal (1, 1)-fairlet decomposition for k-center can be found in polynomial time.
Proof. To find the best decomposition, we first relate this question to a graph covering problem. Consider a bipartite graph G = (B ∪ R, E) where we create an edge e = (b_i, r_j) with weight w_ij = d(b_i, r_j) between any bichromatic pair of nodes. In this case a decomposition into fairlets corresponds to some perfect matching in the graph. Each edge in the matching represents a fairlet, Y_i. Let Y = {Y_i} be the set of edges in the matching.

Observe that the k-center cost φ(X, Y) is exactly the cost of the maximum weight edge in the matching, therefore our goal is to find a perfect matching that minimizes the weight of the maximum edge. This can be done by defining a threshold graph G_τ that has the same nodes as G but only those edges of weight at most τ. We then look for the minimum τ for which the corresponding graph has a perfect matching, which can be done by (binary) searching through the O(n²) values.

Finally, for each fairlet (edge) Y_i we can arbitrarily set one of the two nodes as the center, y_i.
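This bottleneck-matching search is easy to implement with an off-the-shelf assignment solver as the feasibility oracle; a sketch (ours, the paper does not prescribe an implementation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_one_fairlets_kcenter(D):
    """D[i, j] = d(b_i, r_j) with |B| = |R|.  Returns (tau, matching), where
    tau is the smallest threshold admitting a perfect matching in G_tau."""
    thresholds = np.unique(D)                 # the O(n^2) candidate values
    lo, hi, best = 0, len(thresholds) - 1, None
    while lo <= hi:                           # binary search over thresholds
        mid = (lo + hi) // 2
        tau = thresholds[mid]
        forbidden = (D > tau).astype(float)   # cost 1 on edges absent from G_tau
        rows, cols = linear_sum_assignment(forbidden)
        if forbidden[rows, cols].sum() == 0:  # a perfect matching within G_tau
            best = (tau, list(zip(rows, cols)))
            hi = mid - 1
        else:
            lo = mid + 1
    return best
```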
Since any fair solution to the clustering problem induces a set of minimal fairlets (as described in
Lemma 3), the cost of the fairlet decomposition found is at most the cost of the clustering solution.
Lemma 8. Let Y be the partition found above, and let φ*_t be the cost of the optimal (t, k)-fair center clustering. Then φ(X, Y) ≤ φ*_t.

This, combined with the fact that the best approximation algorithm for k-center yields a 2-approximation (Gonzalez, 1985), gives us the following.
Theorem 9. The algorithm that first finds fairlets and then clusters them is a 3-approximation for
the (1, k)-fair center problem.
4.2
Fair k-center: (1, t′)-fairlets

Now, suppose that instead we look for a clustering with balance t < 1. In this section we assume t = 1/t′ for some integer t′ > 1. We show how to extend the intuition in the matching construction above to find approximately optimal (1, t′)-fairlet decompositions for integral t′ > 1.
In this case, we transform the problem into a minimum cost flow (MCF) problem.¹ Let τ > 0 be a parameter of the algorithm. Given the points B, R, and an integer t′, we construct a directed graph H_τ = (V, E). Its node set V is composed of two special nodes β and ρ, all of the nodes in B ∪ R, and t′ additional copies for each node v ∈ B ∪ R. More formally,

V = {β, ρ} ∪ B ∪ R ∪ { b_i^j | b_i ∈ B and j ∈ [t′] } ∪ { r_i^j | r_i ∈ R and j ∈ [t′] }.

The directed edges of H_τ are as follows:

(i) A (β, ρ) edge with cost 0 and capacity min(|B|, |R|).
(ii) A (β, b_i) edge for each b_i ∈ B, and an (r_i, ρ) edge for each r_i ∈ R. All of these edges have cost 0 and capacity t′ − 1.
(iii) For each b_i ∈ B and for each j ∈ [t′], a (b_i, b_i^j) edge, and for each r_i ∈ R and for each j ∈ [t′], an (r_i^j, r_i) edge. All of these edges have cost 0 and capacity 1.
(iv) Finally, for each b_i ∈ B, r_j ∈ R and for each 1 ≤ k, ℓ ≤ t′, a (b_i^k, r_j^ℓ) edge with capacity 1. The cost of this edge is 1 if d(b_i, r_j) ≤ τ and ∞ otherwise.

To finish the description of this MCF instance, we now specify supply and demand at every node. Each node in B has a supply of 1, each node in R has a demand of 1, β has a supply of |R|, and ρ has a demand of |B|. Every other node has zero supply and demand. In Figure 2 we show an example of this construction for a small graph.
¹ Given a graph with edge costs and capacities, a source, and a sink, the goal is to push a given amount of flow from source to sink, respecting flow conservation at nodes and capacity constraints on the edges, at the least possible cost.
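A sketch of this construction on top of networkx's network-simplex solver; node names are ours and the infinite cost is approximated by a large constant:

```python
import networkx as nx

def fairlet_mcf(B, R, d, t_prime, tau):
    """Build and solve the MCF instance H_tau of Section 4.2 (a sketch)."""
    BIG = 10**9                                  # stands in for the infinite cost
    G = nx.DiGraph()
    G.add_node('beta', demand=-len(R))           # beta supplies |R| units
    G.add_node('rho', demand=len(B))             # rho demands |B| units
    G.add_edge('beta', 'rho', capacity=min(len(B), len(R)), weight=0)
    for i in range(len(B)):
        G.add_node(('b', i), demand=-1)          # each blue node supplies 1
        G.add_edge('beta', ('b', i), capacity=t_prime - 1, weight=0)
        for j in range(t_prime):
            G.add_edge(('b', i), ('b', i, j), capacity=1, weight=0)
    for i in range(len(R)):
        G.add_node(('r', i), demand=1)           # each red node demands 1
        G.add_edge(('r', i), 'rho', capacity=t_prime - 1, weight=0)
        for j in range(t_prime):
            G.add_edge(('r', i, j), ('r', i), capacity=1, weight=0)
    for i in range(len(B)):
        for j in range(len(R)):
            w = 1 if d(B[i], R[j]) <= tau else BIG
            for k in range(t_prime):
                for l in range(t_prime):
                    G.add_edge(('b', i, k), ('r', j, l), capacity=1, weight=w)
    return nx.min_cost_flow(G)   # nonzero bichromatic flows encode the fairlets
```

For the k-median variant of Section 4.3, the only change is to set the weight of the copy-to-copy edges to d(B[i], R[j]) itself.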
Figure 2: The construction of the MCF instance for the bipartite graph for t′ = 2. Note that the only nodes with positive demands or supplies are β, ρ, b_1, b_2, b_3, r_1, and r_2, and all the dotted edges have cost 0.

The MCF problem can be solved in polynomial time and since all of the demands and capacities are integral, there exists an optimal solution that sends integral flow on each edge. In our case, the solution is a set of edges of H_τ that have non-zero flow, and the total flow on the (β, ρ) edge.

In the rest of this section we assume for simplicity that any two distinct elements of the metric are at a positive distance apart, and we show that starting from a solution to the described MCF instance we can build a low cost (1, t′)-fairlet decomposition. We start by showing that every (1, t′)-fairlet decomposition can be used to construct a feasible solution for the MCF instance, and then prove that an optimal solution for the MCF instance can be used to obtain a (1, t′)-fairlet decomposition.
Lemma 10. Let Y be a (1, t′)-fairlet decomposition of cost C for the (1/t′, k)-fair center problem. Then it is possible to construct a feasible solution of cost 2C to the MCF instance.

Proof. We begin by building a feasible solution and then bound its cost. Consider each fairlet in the (1, t′)-fairlet decomposition.

Suppose the fairlet contains 1 red node and c blue nodes, with c ≤ t′, i.e., the fairlet is of the form {r_1, b_1, . . . , b_c}. For any such fairlet we send a unit of flow from each node b_i to b_i^1, for i ∈ [c], and a unit of flow from nodes b_1^1, . . . , b_c^1 to nodes r_1^1, . . . , r_1^c. Furthermore we send a unit of flow from each of r_1^1, . . . , r_1^c to r_1 and c − 1 units of flow from r_1 to ρ. Note that in this way we saturate the demands of all nodes in this fairlet.

Similarly, suppose the fairlet contains c red nodes and 1 blue node, with c ≤ t′, i.e., the fairlet is of the form {r_1, . . . , r_c, b_1}. For any such fairlet, we send c − 1 units of flow from β to b_1. Then we send a unit of flow from b_1 to each of b_1^1, . . . , b_1^c and a unit of flow from nodes b_1^1, . . . , b_1^c to nodes r_1^1, . . . , r_c^1. Furthermore we send a unit of flow from each of r_1^1, . . . , r_c^1 to the nodes r_1, . . . , r_c. Note that also in this case we saturate all the demands of nodes in this fairlet.

Since every node v ∈ B ∪ R is contained in a fairlet, all of the demands of these nodes are satisfied. Hence, the only nodes that can still have unsatisfied demand are β and ρ, but we can use the direct edge (β, ρ) to route the excess demand, since the total demand is equal to the total supply. In this way we obtain a feasible solution for the MCF instance starting from a (1, t′)-fairlet decomposition.

To bound the cost of the solution note that the only edges with positive cost in the constructed solution are the edges between nodes b_i^j and r_k^ℓ. Furthermore such an edge is part of the solution only if the nodes b_i and r_k are contained in the same fairlet F. Given that the k-center cost of the fairlet decomposition is C, the cost of the edges between nodes in F in the constructed feasible solution for the MCF instance is at most 2 times this cost. The claim follows.

Now we show that given an optimal solution for the MCF instance of cost C, we can construct a (1, t′)-fairlet decomposition of cost no bigger than C.

Lemma 11. Let Y be an optimal solution of cost C to the MCF instance. Then it is possible to construct a (1, t′)-fairlet decomposition for the (1/t′, k)-fair center problem of cost at most C.

Combining Lemma 10 and Lemma 11 yields the following.

Lemma 12. By reducing the (1, t′)-fairlet decomposition problem to an MCF problem, it is possible to compute a 2-approximation for the optimal (1, t′)-fairlet decomposition for the (1/t′, k)-fair center problem.

Note that the cost of a (1, t′)-fairlet decomposition is necessarily smaller than the cost of a (1/t′, k)-fair clustering. Our main theorem follows.
[Figure 3 plots omitted: six panels, Bank / Census / Diabetes under the k-center and k-median objectives; each panel plots Fair Cost, Unfair Cost, and Fairlet Cost against the number of clusters (left axis), together with Fair Balance and Unfair Balance (right axis).]
Figure 3: Empirical performance of the classical and fair clustering median and center algorithms
on the three datasets. The cost of each solution is on the left axis, and its balance on the right axis.
Theorem 13. The algorithm that first finds fairlets and then clusters them is a 4-approximation for
the (1/t′, k)-fair center problem for any positive integer t′.
4.3
Fair k-median
The results in the previous section can be modified to yield results for the (t, k)-fair median problem
with minor changes that we describe below.
For the perfectly balanced case, as before, we look for a perfect matching on the bichromatic graph. Unlike the k-center case, we let the weight of a (b_i, r_j) edge be the distance between the two points. Our goal is to find a perfect matching of minimum total cost, since that exactly represents the cost of the fairlet decomposition. Since the best known approximation for k-median is 1 + √3 + ε (Li & Svensson, 2013), we have:

Theorem 14. The algorithm that first finds fairlets and then clusters them is a (2 + √3 + ε)-approximation for the (1, k)-fair median problem.

To find (1, t′)-fairlet decompositions for integral t′ > 1, we again resort to MCF and create an instance as in the k-center case, but for each b_i ∈ B, r_j ∈ R, and for each 1 ≤ k, ℓ ≤ t′, we set the cost of the edge (b_i^k, r_j^ℓ) to d(b_i, r_j).

Theorem 15. The algorithm that first finds fairlets and then clusters them is a (t′ + 1 + √3 + ε)-approximation for the (1/t′, k)-fair median problem for any positive integer t′.
4.4
Hardness
We complement our algorithmic results with a discussion of computational hardness for fair clustering. We show that the question of finding a good fairlet decomposition is itself computationally
hard. Thus, ensuring fairness causes hardness, regardless of the underlying clustering objective.
Theorem 16. For each fixed t′ ≥ 3, finding an optimal (1, t′)-fairlet decomposition is NP-hard. Also, finding the minimum cost (1/t′, k)-fair median clustering is NP-hard.
5
Experiments
In this section we illustrate our algorithm by performing experiments on real data. The goal of our
experiments is two-fold: first, we show that traditional algorithms for k-center and k-median tend
to produce unfair clusters; second, we show that by using our algorithms one can obtain clusters
that respect the fairness guarantees. We show that in the latter case, the cost of the solution tends to
converge to the cost of the fairlet decomposition, which serves as a lower bound on the cost of the
optimal solution.
Datasets. We consider 3 datasets from the UCI repository (Lichman, 2013) for experimentation.
Diabetes. This dataset² represents the outcomes of patients pertaining to diabetes. We chose numeric
attributes such as age, time in hospital, to represent points in the Euclidean space and gender as the
sensitive dimension, i.e., we aim to balance gender. We subsampled the dataset to 1000 records.
Bank. This dataset³ contains one record for each phone call in a marketing campaign run by a Portuguese banking institution (Moro et al., 2014). Each record contains information about the client that was contacted by the institution. We chose numeric attributes such as age, balance, and duration to represent points in the Euclidean space; we aim to cluster so as to balance married and not married clients. We subsampled the dataset to 1000 records.
Census. This dataset⁴ contains the census records extracted from the 1994 US census (Kohavi, 1996). Each record contains information about individuals including education, occupation, hours worked per week, etc. We chose numeric attributes such as age, fnlwgt, education-num, capital-gain and hours-per-week to represent points in the Euclidean space, and we aim to cluster the dataset so as to balance gender. We subsampled the dataset to 600 records.
Algorithms. We implement the flow-based fairlet decomposition algorithm as described in Section 4. To solve the k-center problem we augment it with the greedy furthest point algorithm due
to Gonzalez (1985), which is known to obtain a 2-approximation. To solve the k-median problem
we use the single swap algorithm due to Arya et al. (2004), which also gets a 5-approximation in
the worst case, but performs much better in practice (Kanungo et al., 2002).
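For reference, Gonzalez's greedy furthest-point procedure is only a few lines (a sketch in NumPy, run here on the fairlet centers):

```python
import numpy as np

def gonzalez_k_center(points, k):
    """Greedy furthest-point 2-approximation for k-center (Gonzalez, 1985)."""
    points = np.asarray(points, dtype=float)
    centers = [0]                                  # arbitrary starting point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # furthest from chosen centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return centers                                 # indices of chosen centers
```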
Results. Figure 3 shows the results for k-center on the three datasets, together with the corresponding results for the k-median objective. In all of the cases, we run with t′ = 2, that is, we aim for a balance of at least 0.5 in each cluster.
Observe that the balance of the solutions produced by the classical algorithms is very low, and in
four out of the six cases, the balance is 0 for larger values of k, meaning that the optimal solution
has monochromatic clusters. Moreover, this is not an isolated incident, for instance the k-median
instance of the Bank dataset has three monochromatic clusters starting at k = 12. Finally, left
unchecked, the balance in all datasets keeps decreasing as the clustering becomes more discriminative, with increased k.
On the other hand the fair clustering solutions maintain a balanced solution even as k increases.
Not surprisingly, the balance comes with a corresponding increase in cost, and the fair solutions are
costlier than their unfair counterparts. In each plot we also show the cost of the fairlet decomposition,
which represents the limit of the cost of the fair clustering; in all of the scenarios the overall cost of
the clustering converges to the cost of the fairlet decomposition.
6
Conclusions
In this work we initiate the study of fair clustering algorithms. Our main result is a reduction of
fair clustering to classical clustering via the notion of fairlets. We gave efficient approximation
algorithms for finding fairlet decompositions, and proved lower bounds showing that fairness can
introduce a computational bottleneck. An immediate future direction is to tighten the gap between
lower and upper bounds by improving the approximation ratio of the decomposition algorithms, or
giving stronger hardness results. A different avenue is to extend these results to situations where
the protected class is not binary, but can take on multiple values. Here there are multiple challenges
including defining an appropriate version of fairness.
² https://archive.ics.uci.edu/ml/datasets/diabetes
³ https://archive.ics.uci.edu/ml/datasets/Bank+Marketing
⁴ https://archive.ics.uci.edu/ml/datasets/adult
References
Aggarwal, Charu C., & Reddy, Chandan K. 2013. Data Clustering: Algorithms and Applications. 1st edn. Chapman & Hall/CRC.
Arya, Vijay, Garg, Naveen, Khandekar, Rohit, Meyerson, Adam, Munagala, Kamesh, & Pandit, Vinayaka. 2004. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3), 544-562.
Biddle, Dan. 2006. Adverse Impact and Test Validation: A Practitioner's Guide to Valid and Defensible Employment Testing. Gower Publishing, Ltd.
Corbett-Davies, Sam, Pierson, Emma, Feller, Avi, Goel, Sharad, & Huq, Aziz. 2017. Algorithmic Decision Making and the Cost of Fairness. Pages 797-806 of: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '17. New York, NY, USA: ACM.
Dwork, Cynthia, Hardt, Moritz, Pitassi, Toniann, Reingold, Omer, & Zemel, Richard. 2012. Fairness through awareness. Pages 214-226 of: ITCS.
Feldman, Michael, Friedler, Sorelle A., Moeller, John, Scheidegger, Carlos, & Venkatasubramanian, Suresh. 2015. Certifying and removing disparate impact. Pages 259-268 of: KDD.
Gonzalez, T. 1985. Clustering to minimize the maximum intercluster distance. TCS, 38, 293-306.
Hardt, Moritz, Price, Eric, & Srebro, Nati. 2016. Equality of opportunity in supervised learning. Pages 3315-3323 of: NIPS.
Joseph, Matthew, Kearns, Michael, Morgenstern, Jamie H., & Roth, Aaron. 2016. Fairness in learning: Classic and contextual bandits. Pages 325-333 of: NIPS.
Kamishima, Toshihiro, Akaho, Shotaro, Asoh, Hideki, & Sakuma, Jun. 2012. Fairness-aware classifier with prejudice remover regularizer. Pages 35-50 of: ECML/PKDD.
Kanungo, Tapas, Mount, David M., Netanyahu, Nathan S., Piatko, Christine D., Silverman, Ruth, & Wu, Angela Y. 2002. An efficient k-means clustering algorithm: Analysis and implementation. PAMI, 24(7), 881-892.
Kleinberg, Jon, Lakkaraju, Himabindu, Leskovec, Jure, Ludwig, Jens, & Mullainathan, Sendhil. 2017a. Human decisions and machine predictions. Working Paper 23180. NBER.
Kleinberg, Jon M., Mullainathan, Sendhil, & Raghavan, Manish. 2017b. Inherent trade-offs in the fair determination of risk scores. In: ITCS.
Kohavi, Ron. 1996. Scaling up the accuracy of naive-Bayes classifiers: A decision-tree hybrid. Pages 202-207 of: KDD.
Li, Shi, & Svensson, Ola. 2013. Approximating k-median via pseudo-approximation. Pages 901-910 of: STOC.
Lichman, M. 2013. UCI Machine Learning Repository.
Luong, Binh Thanh, Ruggieri, Salvatore, & Turini, Franco. 2011. k-NN as an implementation of situation testing for discrimination discovery and prevention. Pages 502-510 of: KDD.
Moro, Sérgio, Cortez, Paulo, & Rita, Paulo. 2014. A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62, 22-31.
Xu, Rui, & Wunsch, Don. 2009. Clustering. Wiley-IEEE Press.
Zafar, Muhammad Bilal, Valera, Isabel, Gomez-Rodriguez, Manuel, & Gummadi, Krishna P. 2017. Fairness constraints: Mechanisms for fair classification. Pages 259-268 of: AISTATS.
Zemel, Richard S., Wu, Yu, Swersky, Kevin, Pitassi, Toniann, & Dwork, Cynthia. 2013. Learning fair representations. Pages 325-333 of: ICML.
6,731 | 7,089 | Polynomial time algorithms for dual volume sampling
Chengtao Li
MIT
[email protected]
Stefanie Jegelka
MIT
[email protected]
Suvrit Sra
MIT
[email protected]
Abstract
We study dual volume sampling, a method for selecting k columns from an $n \times m$ short and wide matrix ($n \le k \le m$) such that the probability of selection is proportional to the volume spanned by the rows of the induced submatrix. This method
was proposed by Avron and Boutsidis (2013), who showed it to be a promising
method for column subset selection and its multiple applications. However, its
wider adoption has been hampered by the lack of polynomial time sampling algorithms. We remove this hindrance by developing an exact (randomized) polynomial
time sampling algorithm as well as its derandomization. Thereafter, we study
dual volume sampling via the theory of real stable polynomials and prove that its
distribution satisfies the "Strong Rayleigh" property. This result has numerous
consequences, including a provably fast-mixing Markov chain sampler that makes
dual volume sampling much more attractive to practitioners. This sampler is closely
related to classical algorithms for popular experimental design methods that are to
date lacking theoretical analysis but are known to empirically work well.
1 Introduction
A variety of applications share the core task of selecting a subset of columns from a short, wide
matrix A with n rows and m > n columns. The criteria for selecting these columns typically aim at
preserving information about the span of A while generating a well-conditioned submatrix. Classical
and recent examples include experimental design, where we select observations or experiments [38];
preconditioning for solving linear systems and constructing low-stretch spanning trees (here A
is a version of the node-edge incidence matrix and we select edges in a graph) [6, 4]; matrix
approximation [11, 13, 24]; feature selection in k-means clustering [10, 12]; sensor selection [25]
and graph signal processing [14, 41].
In this work, we study a randomized approach that holds promise for all of these applications. This
approach relies on sampling columns of A according to a probability distribution defined over its submatrices: the probability of selecting a set S of k columns from A, with $n \le k \le m$, is
$$P(S; A) \propto \det(A_S A_S^\top), \qquad (1.1)$$
where $A_S$ is the submatrix consisting of the selected columns. This distribution is reminiscent of volume sampling, where $k < n$ columns are selected with probability proportional to the determinant $\det(A_S^\top A_S)$ of a $k \times k$ matrix, i.e., the squared volume of the parallelepiped spanned by the selected columns. (Volume sampling does not apply to $k > n$ as the involved determinants vanish.) In contrast, $P(S; A)$ uses the determinant of an $n \times n$ matrix and uses the volume spanned by the rows formed by the selected columns. Hence we refer to $P(S; A)$-sampling as dual volume sampling (DVS).
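To make the definition concrete, the following brute-force sketch samples from (1.1) by enumerating all k-subsets. It is exponential in m and serves only to illustrate the distribution; the function name and interface are ours, not part of the algorithms developed below.

```python
import itertools
import numpy as np

def dvs_brute_force(A, k, rng=np.random.default_rng()):
    """Sample S with P(S) proportional to det(A_S A_S^T), |S| = k, n <= k <= m."""
    n, m = A.shape
    subsets = list(itertools.combinations(range(m), k))
    weights = np.array([np.linalg.det(A[:, S] @ A[:, S].T) for S in subsets])
    probs = weights / weights.sum()
    return subsets[rng.choice(len(subsets), p=probs)]
```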
Contributions. Despite the ostensible similarity between volume sampling and DVS, and despite
the many practical implications of DVS outlined below, efficient algorithms for DVS are not known
and were raised as open questions in [6]. In this work, we make two key contributions:
• We develop polynomial-time randomized sampling algorithms and their derandomization for DVS. Surprisingly, our proofs require only elementary (but involved) matrix manipulations.
• We establish that $P(S; A)$ is a Strongly Rayleigh measure [8], a remarkable property that
captures a specific form of negative dependence. Our proof relies on the theory of real stable
polynomials, and the ensuing result implies a provably fast-mixing, practical MCMC sampler.
Moreover, this result implies concentration properties for dual volume sampling.
In parallel with our work, [16] also proposed a polynomial time sampling algorithm that works
efficiently in practice. Our work goes on to further uncover the hitherto unknown "Strong Rayleigh"
property of DVS, which has important consequences, including those noted above.
1.1 Connections and implications.
The selection of $k \ge n$ columns from a short and wide matrix has many applications. Our algorithms
for DVS hence have several implications and connections; we note a few below.
Experimental design. The theory of optimal experiment design explores several criteria for selecting
the set of columns (experiments) S. Popular choices are
S 2 argminS?{1,...,m} J(AS ), with
J(AS ) = kA?S kF = k(AS A>
S)
J(AS ) = kA?S k2 (E-optimal design) , J(AS ) =
1
kF (A-optimal design) ,
log det(AS A>
S ) (D-optimal design).
(1.2)
Here, $A^\dagger$ denotes the Moore-Penrose pseudoinverse of A, and the minimization ranges over all S such that $A_S$ has full row rank n. A-optimal design, for instance, is statistically optimal for linear regression [38].
Finding an optimal solution for these design problems is NP-hard, and most discrete algorithms use local search [33]. Avron and Boutsidis [6, Theorem 3.1] show that dual volume sampling yields an approximation guarantee for both A- and E-optimal design: if S is sampled from $P(S; A)$, then
$$\mathbb{E}\big[\|A_S^\dagger\|_F^2\big] \le \frac{m-n+1}{k-n+1}\,\|A^\dagger\|_F^2; \qquad \mathbb{E}\big[\|A_S^\dagger\|_2^2\big] \le \Big(1 + \frac{n(m-k)}{k-n+1}\Big)\,\|A^\dagger\|_2^2. \qquad (1.3)$$
Avron and Boutsidis [6] provide a polynomial time sampling algorithm only for the case k = n. Our
algorithms achieve the bound (1.3) in expectation, and the derandomization in Section 2.3 achieves
the bound deterministically. Wang et al. [43] recently (in parallel) achieved approximation bounds
for A-optimality via a different algorithm combining convex relaxation and a greedy method. Other
methods include leverage score sampling [30] and predictive length sampling [45].
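As a side note, the quantities bounded in (1.3) are straightforward to evaluate for a fixed subset; a small sketch (helper name ours):

```python
import numpy as np

def design_objectives(A, S):
    """Return (||A_S^+||_F^2, ||A_S^+||_2^2) for a column subset S of A."""
    pinv = np.linalg.pinv(A[:, list(S)])
    fro2 = np.linalg.norm(pinv, 'fro') ** 2          # A-optimality value
    spec2 = np.linalg.norm(pinv, 2) ** 2             # E-optimality value
    return fro2, spec2
```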
Low-stretch spanning trees and applications. Objectives (1.2) also arise in the construction of low-stretch spanning trees, which have important applications in graph sparsification, preconditioning and solving symmetric diagonally dominant (SDD) linear systems [40], among others [18]. In the node-edge incidence matrix $\Pi \in \mathbb{R}^{n \times m}$ of an undirected graph G with n nodes and m edges, the column corresponding to edge $(u, v)$ is $\sqrt{w(u,v)}\,(e_u - e_v)$. Let $\Pi = U \Sigma Y$ be the SVD of $\Pi$ with $Y \in \mathbb{R}^{(n-1) \times m}$. The stretch of a spanning tree T in G is then given by $St_T(G) = \|Y_T^{-1}\|_F^2$ [6]. In those applications, we hence search for a set of edges with low stretch.
Network controllability. The problem of sampling $k \ge n$ columns in a matrix also arises in network controllability. For example, Zhao et al. [44] consider selecting control nodes S (under
certain constraints) over time in complex networks to control a linear time-invariant network. After
transforming the problem into a column subset selection problem from a short and wide controllability
matrix, the objective becomes essentially an E-optimal design problem, for which the authors use
greedy heuristics.
Notation. From a matrix $A \in \mathbb{R}^{n \times m}$ with $m \ge n$ columns, we sample a set $S \subseteq [m]$ of k columns ($n \le k \le m$), where $[m] := \{1, 2, \dots, m\}$. We denote the singular values of A by $\{\sigma_i(A)\}_{i=1}^{n}$, in decreasing order. We will assume A has full row rank $r(A) = n$, so $\sigma_n(A) > 0$. We also assume that $r(A_S) = r(A) = n$ for every $S \subseteq [m]$ where $|S| \ge n$. By $e_k(A)$, we denote the k-th elementary symmetric polynomial of A, i.e., the k-th coefficient of the characteristic polynomial $\det(\lambda I - A) = \sum_{j=0}^{N} (-1)^j e_j(A)\, \lambda^{N-j}$.
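For illustration, $e_k(A)$ can be read off the characteristic polynomial numerically; a minimal sketch, assuming a symmetric input so the eigenvalues are real:

```python
import numpy as np

def elem_sym_poly(M, k):
    """e_k(M): k-th elementary symmetric polynomial of the eigenvalues of a
    symmetric matrix M, read off the characteristic polynomial."""
    # np.poly gives the coefficients of prod_i (x - lam_i) = sum_j c_j x^{N-j},
    # where c_j = (-1)^j e_j; undo the sign to recover e_k.
    coeffs = np.poly(np.linalg.eigvalsh(M))
    return (-1.0) ** k * coeffs[k]
```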
2 Polynomial-time Dual Volume Sampling
We describe in this section our method to sample from the distribution P (S; A). Our first method
relies on the key insight that, as we show, the marginal probabilities for DVS can be computed in
polynomial time. To demonstrate this, we begin with the partition function and then derive marginals.
2
2.1 Marginals
The partition function has a conveniently simple closed form, which follows from the Cauchy-Binet
formula and was also derived in [6].
Lemma 1 (Partition Function [6]). For $A \in \mathbb{R}^{n \times m}$ with $r(A) = n$ and $n \le |S| = k \le m$, we have
$$Z_A := \sum_{|S|=k,\, S \subseteq [m]} \det(A_S A_S^\top) = \binom{m-n}{k-n} \det(A A^\top).$$
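Lemma 1 is easy to verify numerically on small instances; a sketch (dimensions chosen arbitrarily):

```python
import itertools
from math import comb
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 3, 6, 4
A = rng.standard_normal((n, m))

Z = sum(np.linalg.det(A[:, S] @ A[:, S].T)
        for S in itertools.combinations(range(m), k))
closed_form = comb(m - n, k - n) * np.linalg.det(A @ A.T)
assert np.isclose(Z, closed_form)   # matches Lemma 1
```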
Next, we will need the marginal probability $P(T \subseteq S; A) = \sum_{S : T \subseteq S} P(S; A)$ that a given set $T \subseteq [m]$ is a subset of the random set S. In the following theorem, the set $T^c = [m] \setminus T$ denotes the (set) complement of T, and $Q^\perp$ denotes the orthogonal complement of Q.
Theorem 2 (Marginals). Let $T \subseteq [m]$, $|T| \le k$, and $\varepsilon > 0$. Let $A_T = Q \Sigma V^\top$ be the singular value decomposition of $A_T$ where $Q \in \mathbb{R}^{n \times r(A_T)}$, and $Q^\perp \in \mathbb{R}^{n \times (n - r(A_T))}$. Further define the matrices
$$B = (Q^\perp)^\top A_{T^c} \in \mathbb{R}^{(n - r(A_T)) \times (m - |T|)},$$
$$C = \operatorname{diag}\!\Big(\tfrac{1}{\sqrt{\sigma_1^2(A_T) + \varepsilon}},\ \tfrac{1}{\sqrt{\sigma_2^2(A_T) + \varepsilon}},\ \dots\Big)\, Q^\top A_{T^c} \in \mathbb{R}^{r(A_T) \times (m - |T|)}.$$
Let $Q_B \operatorname{diag}(\sigma_i^2(B))\, Q_B^\top$ be the eigenvalue decomposition of $B^\top B$ where $Q_B \in \mathbb{R}^{|T^c| \times r(B)}$. Moreover, let $W = [I_{T^c};\ C]$ and $\gamma = e_{k - |T| - r(B)}\big(W (Q_B^\perp)(Q_B^\perp)^\top W^\top\big)$. Then the marginal probability of T in DVS is
$$P(T \subseteq S; A) = \frac{\Big[\prod_{i=1}^{r(A_T)} \sigma_i^2(A_T)\Big] \cdot \Big[\prod_{j=1}^{r(B)} \sigma_j^2(B)\Big] \cdot \gamma}{Z_A}.$$
We prove Theorem 2 via a perturbation argument that connects DVS to volume sampling. Specifically, observe that for $\varepsilon > 0$ and $|S| \ge n$ it holds that
$$\det(A_S A_S^\top + \varepsilon I_n) = \varepsilon^{\,n-k} \det(A_S^\top A_S + \varepsilon I_k) = \varepsilon^{\,n-k} \det\!\left(\begin{pmatrix} A_S \\ \sqrt{\varepsilon}\,(I_m)_S \end{pmatrix}^{\!\top} \begin{pmatrix} A_S \\ \sqrt{\varepsilon}\,(I_m)_S \end{pmatrix}\right). \qquad (2.1)$$
Carefully letting $\varepsilon \to 0$ bridges volumes with "dual" volumes. The technical remainder of the proof further relates this equality to singular values, and exploits properties of characteristic polynomials. A similar argument yields an alternative proof of Lemma 1. We show the proofs in detail in Appendix A and B respectively.
Complexity. The numerator of $P(T \subseteq S; A)$ in Theorem 2 requires $O(mn^2)$ time to compute the first term, $O(mn^2)$ to compute the second and $O(m^3)$ to compute the third. The denominator takes $O(mn^2)$ time, amounting in a total time of $O(m^3)$ to compute the marginal probability.
2.2 Sampling
The marginal probabilities derived above directly yield a polynomial-time exact DVS algorithm. Instead of k-sets, we sample ordered k-tuples $\vec{S} = (s_1, \dots, s_k) \in [m]^k$. We denote the k-tuple variant of the DVS distribution by $\vec{P}(\cdot\,; A)$:
$$\vec{P}\big((s_j = i_j)_{j=1}^{k}; A\big) = \frac{1}{k!}\, P(\{i_1, \dots, i_k\}; A) = \prod_{j=1}^{k} P(s_j = i_j \mid s_1 = i_1, \dots, s_{j-1} = i_{j-1}; A).$$
Sampling $\vec{S}$ is now straightforward. At the j-th step we sample $s_j$ via $P(s_j = i_j \mid s_1 = i_1, \dots, s_{j-1} = i_{j-1}; A)$; these probabilities are easily obtained from the marginals in Theorem 2.
Corollary 3. Let $T = \{i_1, \dots, i_{t-1}\}$, and $P(T \subseteq S; A)$ as in Theorem 2. Then,
$$P(s_t = i \mid s_1 = i_1, \dots, s_{t-1} = i_{t-1}; A) = \frac{P(T \cup \{i\} \subseteq S; A)}{(k - t + 1)\, P(T \subseteq S; A)}.$$
As a result, it is possible to draw an exact dual volume sample in time $O(km^4)$.
The full proof may be found in the appendix. The running time claim follows since the sampling algorithm invokes $O(mk)$ computations of marginal probabilities, each costing $O(m^3)$ time.
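The chain-rule structure behind Corollary 3 can be illustrated with a brute-force variant that computes each conditional by enumeration instead of via Theorem 2; it is exponential in m, but mirrors the sequential logic of the $O(km^4)$ algorithm. Names and interface are ours.

```python
import itertools
import numpy as np

def dvs_sequential(A, k, rng=np.random.default_rng()):
    """Draw s_1, ..., s_k one at a time from the chain-rule conditionals.

    Marginals are computed by brute-force enumeration (illustration only);
    the polynomial-time algorithm replaces this step with Theorem 2.
    """
    n, m = A.shape

    def unnorm_marginal(T):
        # P'(T subset of S), up to the constant Z_A
        rest = [j for j in range(m) if j not in T]
        return sum(np.linalg.det(A[:, sorted(T + list(extra))] @
                                 A[:, sorted(T + list(extra))].T)
                   for extra in itertools.combinations(rest, k - len(T)))

    S = []
    for _ in range(k):
        cand = [j for j in range(m) if j not in S]
        # the (k - t + 1) factor in Corollary 3 is constant over i, so it cancels
        w = np.array([unnorm_marginal(S + [j]) for j in cand])
        S.append(cand[rng.choice(len(cand), p=w / w.sum())])
    return sorted(S)
```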
Remark. A potentially more efficient approximate algorithm could be derived by noting the relations between volume sampling and DVS. Specifically, we add a small perturbation to DVS as in
Equation 2.1 to transform it into a volume sampling problem, and apply random projection for more
efficient volume sampling as in [17]. Please refer to Appendix C for more details.
2.3 Derandomization
Next, we derandomize the above sampling algorithm to deterministically select a subset that satisfies the bound (1.3) for the Frobenius norm, thereby answering another question in [6]. The key insight for derandomization is that conditional expectations can be computed in polynomial time, given the marginals in Theorem 2:
Corollary 4. Let $(i_1, \dots, i_{t-1}) \in [m]^{t-1}$ be such that the marginal distribution satisfies $\vec{P}(s_1 = i_1, \dots, s_{t-1} = i_{t-1}; A) > 0$. The conditional expectation can be expressed as
$$\mathbb{E}\big[\|A_S^\dagger\|_F^2 \mid s_1 = i_1, \dots, s_{t-1} = i_{t-1}\big] = \frac{\sum_{j=1}^{n} P'\big(\{i_1, \dots, i_{t-1}\} \subseteq S \mid S \sim P(S; A_{[n] \setminus \{j\}})\big)}{P'\big(\{i_1, \dots, i_{t-1}\} \subseteq S \mid S \sim P(S; A)\big)},$$
where $P'$ are the unnormalized marginal distributions, and it can be computed in $O(nm^3)$ time.
!
Corollary 4 enables a greedy derandomization procedure. Starting with the empty tuple S 0 = ;, in
!
the ith iteration, we greedily select j ? 2 argmaxj E[kA?S[j k2F | (s1 , . . . , si ) = S i 1 j] and append
!
!
!
it to our selection: S i = S i 1 j. The final set is the non-ordered version Sk of S k . Theorem 5
shows that this greedy procedure succeeds, and implies a deterministic version of the bound (1.3).
Theorem 5. The greedy derandomization selects a column set S satisfying
$$\|A_S^\dagger\|_F^2 \le \frac{m-n+1}{k-n+1}\, \|A^\dagger\|_F^2; \qquad \|A_S^\dagger\|_2^2 \le \frac{n(m-n+1)}{k-n+1}\, \|A^\dagger\|_2^2.$$
In the proof, we construct a greedy algorithm. In each iteration, the algorithm computes, for each
column that has not yet been selected, the expectation conditioned on this column being included
in the current set. Then it chooses the element with the lowest conditional expectation to actually
be added to the current set. This greedy inclusion of elements will only decrease the conditional
expectation, thus retaining the bound in Theorem 5. The detailed proof is deferred to Appendix E.
Complexity. Each iteration of the greedy selection requires $O(nm^3)$ time to compute $O(m)$ conditional expectations. Thus, the total running time for k iterations is $O(knm^4)$. The approximation bound for the spectral norm is slightly worse than that in (1.3), but is of the same order if $k = O(n)$.
3 Strong Rayleigh Property and Fast Markov Chain Sampling
Next, we investigate DVS more deeply and discover that it possesses a remarkable structural property,
namely, the Strongly Rayleigh (SR) [8] property. This property has proved remarkably fruitful
in a variety of recent contexts, including recent progress in approximation algorithms [23], fast
sampling [2, 27], graph sparsification [22, 39], extensions to the Kadison-Singer problem [1], and
certain concentration of measure results [37], among others.
For DVS, the SR property has two major consequences: it leads to a fast mixing practical MCMC
sampler, and it implies results on concentration of measure.
Strongly Rayleigh measures. SR measures were introduced in the landmark paper of Borcea et al. [8], who develop a rich theory of negatively associated measures. In particular, we say that a probability measure $\mu : 2^{[n]} \to \mathbb{R}_+$ is negatively associated if $\int F \, d\mu \int G \, d\mu \ge \int FG \, d\mu$ for F, G increasing functions on $2^{[n]}$ with disjoint support. This property reflects a "repelling" nature of $\mu$, a property that occurs more broadly across probability, combinatorics, physics, and other fields; see [36, 8, 42] and references therein. The negative association property turns out to be quite subtle in general; the class of SR measures captures a strong notion of negative association and provides a framework for analyzing such measures.
Specifically, SR measures are defined via their connection to real stable polynomials [36, 8, 42]. A multivariate polynomial $f \in \mathbb{C}[z]$ where $z \in \mathbb{C}^m$ is called real stable if all its coefficients are real and $f(z) \ne 0$ whenever $\operatorname{Im}(z_i) > 0$ for $1 \le i \le m$. A measure is called an SR measure if its multivariate generating polynomial $f_\mu(z) := \sum_{S \subseteq [n]} \mu(S) \prod_{i \in S} z_i$ is real stable. Notable examples of SR measures are Determinantal Point Processes [31, 29, 9, 26], balanced matroids [19, 37], Bernoullis conditioned on their sum, among others. It is known (see [8, pg. 523]) that the class of SR measures is exponentially larger than the class of determinantal measures.
3.1 Strong Rayleigh Property of DVS
Theorem 6 establishes the SR property for DVS and is the main result of this section. Here and in the following, we use the notation $z^S = \prod_{i \in S} z_i$.
Theorem 6. Let $A \in \mathbb{R}^{n \times m}$ and $n \le k \le m$. Then the multiaffine polynomial
$$p(z) := \sum_{|S|=k,\, S \subseteq [m]} \det(A_S A_S^\top) \prod_{i \in S} z_i = \sum_{|S|=k,\, S \subseteq [m]} \det(A_S A_S^\top)\, z^S, \qquad (3.1)$$
is real stable. Consequently, $P(S; A)$ is an SR measure.
The proof of Theorem 6 relies on key properties of real stable polynomials and SR measures established in [8]. Essentially, the proof demonstrates that the generating polynomial of $P(S^c; A)$ can be obtained by applying a few carefully chosen stability preserving operations to a polynomial that we know to be real stable. Stability, although easily destroyed, is closed under several operations noted in the important proposition below.
Proposition 7 (Prop. 2.1 [8]). Let $f : \mathbb{C}^m \to \mathbb{C}$ be a stable polynomial. The following properties preserve stability: (i) Substitution: $f(\mu, z_2, \dots, z_m)$ for $\mu \in \mathbb{R}$; (ii) Differentiation: $\partial^S f(z_1, \dots, z_m)$ for any $S \subseteq [m]$; (iii) Diagonalization: $f(z, z, z_3, \dots, z_m)$ is stable, and hence $f(z, z, \dots, z)$; and (iv) Inversion: $z_1 \cdots z_n f(z_1^{-1}, \dots, z_n^{-1})$.
In addition, we need the following two propositions for proving Theorem 6.
Proposition 8 (Prop. 2.4 [7]). Let B be Hermitian, $z \in \mathbb{C}^m$ and $A_i$ ($1 \le i \le m$) be Hermitian semidefinite matrices. Then, the following polynomial is stable:
$$f(z) := \det\Big(B + \sum_i z_i A_i\Big). \qquad (3.2)$$
Proposition 9. For $n \le |S| \le m$ and $L := A^\top A$, we have $\det(A_S A_S^\top) = e_n(L_{S,S})$.
Proof. Let $Y = \operatorname{Diag}([y_i]_{i=1}^{m})$ be a diagonal matrix. Using the Cauchy-Binet identity we have
$$\det(A Y A^\top) = \sum_{|T|=n,\, T \subseteq [m]} \det\big((AY)_{:,T}\big) \det\big((A^\top)_{T,:}\big) = \sum_{|T|=n,\, T \subseteq [m]} \det(A_T^\top A_T)\, y^T.$$
Thus, when $Y = I_S$, the (diagonal) indicator matrix for S, we obtain $A Y A^\top = A_S A_S^\top$. Consequently, in the summation above only terms with $T \subseteq S$ survive, yielding
$$\det(A_S A_S^\top) = \sum_{|T|=n,\, T \subseteq S} \det(A_T^\top A_T) = \sum_{|T|=n,\, T \subseteq S} \det(L_{T,T}) = e_n(L_{S,S}).$$
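Proposition 9 can likewise be checked numerically; a small sketch (instance sizes arbitrary):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 6
A = rng.standard_normal((n, m))
L = A.T @ A
S = [0, 2, 3, 5]

lhs = np.linalg.det(A[:, S] @ A[:, S].T)
# e_n(L_{S,S}) equals the sum of n x n principal minors of L restricted to S
rhs = sum(np.linalg.det(L[np.ix_(T, T)])
          for T in itertools.combinations(S, n))
assert np.isclose(lhs, rhs)   # Proposition 9
```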
We are now ready to sketch the proof of Theorem 6.
Proof (Theorem 6). Notationally, it is more convenient to prove that the "complement" polynomial $p_c(z) := \sum_{|S|=k,\, S \subseteq [m]} \det(A_S A_S^\top)\, z^{S^c}$ is stable; subsequently, an application of Prop. 7-(iv) yields stability of (3.1). Using matrix notation $W = \operatorname{Diag}(w_1, \dots, w_m)$, $Z = \operatorname{Diag}(z_1, \dots, z_m)$, our starting stable polynomial (this stability follows from Prop. 8) is
$$h(z, w) := \det(L + W + Z), \qquad w \in \mathbb{C}^m,\ z \in \mathbb{C}^m,$$
which can be expanded as
$$h(z, w) = \sum_{S \subseteq [m]} \det(W_S + L_S)\, z^{S^c} = \sum_{S \subseteq [m]} \Big(\sum_{T \subseteq S} w^{S \setminus T} \det(L_{T,T})\Big) z^{S^c}.$$
Thus, $h(z, w)$ is real stable in 2m variables, indexed below by S and R where $R := S \setminus T$. Instead of the form above, we can sum over $S, R \subseteq [m]$ but then have to constrain the support to the case when $S^c \cap T = \emptyset$ and $S^c \cap R = \emptyset$. In other words, we may write (using Iverson brackets $[\![\cdot]\!]$, with $T = S \setminus R$)
$$h(z, w) = \sum_{S, R \subseteq [m]} [\![S^c \cap R = \emptyset \wedge S^c \cap T = \emptyset]\!]\, \det(L_{T,T})\, z^{S^c} w^R. \qquad (3.3)$$
Next, we truncate polynomial (3.3) at degree $(m-k) + (k-n) = m-n$ by restricting $|S^c \cup R| = m-n$. By [8, Corollary 4.18] this truncation preserves stability, whence
$$H(z, w) := \sum_{\substack{S, R \subseteq [m] \\ |S^c \cup R| = m-n}} [\![S^c \cap R = \emptyset]\!]\, \det(L_{S \setminus R,\, S \setminus R})\, z^{S^c} w^R,$$
is also stable. Using Prop. 7-(iii), setting $w_1 = \dots = w_m = y$ retains stability; thus
$$g(z, y) := H(z, (\underbrace{y, y, \dots, y}_{m \text{ times}})) = \sum_{\substack{S, R \subseteq [m] \\ |S^c \cup R| = m-n}} [\![S^c \cap R = \emptyset]\!]\, \det(L_{S \setminus R,\, S \setminus R})\, z^{S^c} y^{|R|} = \sum_{S \subseteq [m]} \Big(\sum_{|T|=n,\, T \subseteq S} \det(L_{T,T})\Big)\, y^{|S|-|T|} z^{S^c} = \sum_{S \subseteq [m]} e_n(L_{S,S})\, y^{|S|-n} z^{S^c},$$
is also stable. Next, differentiating $g(z, y)$, $k-n$ times with respect to y and evaluating at 0 preserves stability (Prop. 7-(ii) and (i)). In doing so, only terms corresponding to $|S| = k$ survive, resulting in
$$\frac{\partial^{k-n}}{\partial y^{k-n}}\, g(z, y)\Big|_{y=0} = (k-n)! \sum_{|S|=k,\, S \subseteq [m]} e_n(L_{S,S})\, z^{S^c} = (k-n)! \sum_{|S|=k,\, S \subseteq [m]} \det(A_S A_S^\top)\, z^{S^c},$$
which is just $p_c(z)$ (up to a constant); here, the last equality follows from Prop. 9. This establishes stability of $p_c(z)$ and hence of $p(z)$. Since $p(z)$ is in addition multiaffine, it is the generating polynomial of an SR measure, completing the proof.
3.2 Implications: MCMC
The SR property of $P(S; A)$ established in Theorem 6 implies a fast mixing Markov chain for sampling S. The states for the Markov chain are all sets of cardinality k. The chain starts with a randomly-initialized active set S, and in each iteration we swap an element $s_{\text{in}} \in S$ with an element $s_{\text{out}} \notin S$ with a specific probability determined by the probability of the current and proposed set. The stationary distribution of this chain is the one induced by DVS, by a simple detailed-balance argument. The chain is shown in Algorithm 1.
Algorithm 1 Markov Chain for Dual Volume Sampling
Input: $A \in \mathbb{R}^{n \times m}$ the matrix of interest, k the target cardinality, T the number of steps
Output: $S \sim P(S; A)$
  Initialize $S \subseteq [m]$ such that $|S| = k$ and $\det(A_S A_S^\top) > 0$
  for i = 1 to T do
    draw $b \in \{0, 1\}$ uniformly
    if b = 1 then
      Pick $s_{\text{in}} \in S$ and $s_{\text{out}} \in [m] \setminus S$ uniformly at random
      $q(s_{\text{in}}, s_{\text{out}}, S) \leftarrow \min\big\{1,\ \det(A_{S \cup \{s_{\text{out}}\} \setminus \{s_{\text{in}}\}} A_{S \cup \{s_{\text{out}}\} \setminus \{s_{\text{in}}\}}^\top) \,/\, \det(A_S A_S^\top)\big\}$
      $S \leftarrow S \cup \{s_{\text{out}}\} \setminus \{s_{\text{in}}\}$ with probability $q(s_{\text{in}}, s_{\text{out}}, S)$
    end if
  end for
The convergence of the Markov chain is measured via its mixing time: the mixing time of the chain indicates the number of iterations t that we must perform (starting from $S_0$) before we can consider $S_t$ as an approximately valid sample from $P(S; A)$. Formally, if $\delta_{S_0}(t)$ is the total variation distance between the distribution of $S_t$ and $P(S; A)$ after t steps, then
$$\tau_{S_0}(\varepsilon) := \min\{t : \delta_{S_0}(t') \le \varepsilon,\ \forall t' \ge t\}$$
is the mixing time to sample from a distribution $\varepsilon$-close to $P(S; A)$ in terms of total variation distance. We say that the chain mixes fast if $\tau_{S_0}$ is polynomial in the problem size.
The fast mixing result for Algorithm 1 is a corollary of Theorem 6 combined with a recent result of [3] on fast-mixing Markov chains for homogeneous SR measures. Theorem 10 states this precisely.
Theorem 10 (Mixing time). The mixing time of the Markov chain shown in Algorithm 1 is given by
$$\tau_{S_0}(\varepsilon) \le 2k(m-k)\big(\log P(S_0; A)^{-1} + \log \varepsilon^{-1}\big).$$
Proof. Since $P(S; A)$ is k-homogeneous SR by Theorem 6, the chain constructed for sampling S following that in [3] mixes in $\tau_{S_0}(\varepsilon) \le 2k(m-k)(\log P(S_0; A)^{-1} + \log \varepsilon^{-1})$ time.
Implementation. To implement Algorithm 1 we need to compute the transition probabilities $q(s_{\text{in}}, s_{\text{out}}, S)$. Let $T = S \setminus \{s_{\text{in}}\}$ and assume $r(A_T) = n$. By the matrix determinant lemma we have the acceptance ratio
$$\frac{\det(A_{S \cup \{s_{\text{out}}\} \setminus \{s_{\text{in}}\}} A_{S \cup \{s_{\text{out}}\} \setminus \{s_{\text{in}}\}}^\top)}{\det(A_S A_S^\top)} = \frac{1 + A_{\{s_{\text{out}}\}}^\top (A_T A_T^\top)^{-1} A_{\{s_{\text{out}}\}}}{1 + A_{\{s_{\text{in}}\}}^\top (A_T A_T^\top)^{-1} A_{\{s_{\text{in}}\}}}.$$
Thus, the transition probabilities can be computed in $O(n^2 k)$ time. Moreover, one can further accelerate this algorithm by using the quadrature techniques of [28] to compute lower and upper bounds on this acceptance ratio to determine early acceptance or rejection of the proposed move.
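For illustration, here is a direct translation of Algorithm 1 into code. This sketch recomputes log-determinants from scratch for clarity; an efficient implementation would instead use the $O(n^2 k)$ rank-one ratio above. The initialization to the first k columns is an assumption for the sketch (it requires that submatrix to be nonsingular).

```python
import numpy as np

def dvs_mcmc(A, k, steps, rng=np.random.default_rng()):
    """Exchange walk of Algorithm 1; its stationary law is P(S; A)."""
    n, m = A.shape
    S = list(range(k))       # assumes det(A_S A_S^T) > 0 for the first k columns
    logdet = lambda idx: np.linalg.slogdet(A[:, idx] @ A[:, idx].T)[1]
    cur = logdet(S)
    for _ in range(steps):
        if rng.random() < 0.5:                      # b = 0: lazy step, stay put
            continue
        i = int(rng.integers(k))                    # position of s_in inside S
        s_out = int(rng.choice([j for j in range(m) if j not in S]))
        proposal = S[:i] + S[i + 1:] + [s_out]
        new = logdet(proposal)
        if np.log(rng.random()) < new - cur:        # accept w.p. min{1, det ratio}
            S, cur = proposal, new
    return sorted(S)
```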
Initialization. A remaining question is initialization. Since the mixing time involves $\log P(S_0; A)^{-1}$, we need to start with $S_0$ such that $P(S_0; A)$ is sufficiently bounded away from 0. We show in Appendix F that by a simple greedy algorithm, we are able to initialize S such that $\log P(S; A)^{-1} \le \log\big(2^n k! \binom{m}{k}\big) = O(k \log m)$, and the resulting running time for Algorithm 1 is $\tilde{O}(k^3 n^2 m)$, which is linear in the size of the data set m and is efficient when k is not too large.
3.3 Further implications and connections
Concentration. Pemantle and Peres [37] show concentration results for strong Rayleigh measures.
As a corollary of our Theorem 6 together with their results, we directly obtain tail bounds for DVS.
Algorithms for experimental design. Widely used, classical algorithms for finding an approximate optimal design include Fedorov's exchange algorithm [20, 21] (a greedy local search) and simulated annealing [34]. Both methods start with a random initial set S, and greedily or randomly exchange a column $i \in S$ with a column $j \notin S$. Apart from very expensive running times, they are known to work well in practice [35, 43]. Yet so far there is no theoretical analysis, or a principled way of determining when to stop the greedy search.
Curiously, our MCMC sampler is essentially a randomized version of Fedorov's exchange method. The two methods can be connected by a unified, simulated annealing view, where we define $P(S; A) \propto \exp\{\log\det(A_S A_S^\top)/\gamma\}$ with temperature parameter $\gamma$. Driving $\gamma$ to zero essentially recovers Fedorov's method, while our results imply fast mixing for $\gamma = 1$, together with approximation guarantees. Through this lens, simulated annealing may be viewed as initializing Fedorov's method with the fast-mixing sampler. In practice, we observe that letting $\gamma < 1$ improves the approximation results, which opens interesting questions for future work.
4 Experiments
We report selection performance of DVS on real regression data (CompAct, CompAct(s), Abalone and Bank32NH1) for experimental design. We use 4,000 samples from each dataset for estimation. We compare against various baselines, including uniform sampling (Unif), leverage score sampling (Lev) [30], predictive length sampling (PL) [45], the sampling (Smpl)/greedy (Greedy) selection methods in [43] and Fedorov's exchange algorithm [20]. We initialize the MCMC sampler with Kmeans++ [5] for DVS and run for 10,000 iterations, which empirically yields selections that are sufficiently good. We measure performance via (1) the prediction error $\|y - X\hat{\beta}\|$, and (2) running times. Figure 1 shows the results for these measures with sample sizes k varying from 60 to 200. Further experiments (including for the interpolation $\gamma < 1$) may be found in the appendix.
1 http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html
[Figure 1: three panels comparing Unif, Lev, PL, Smpl, Greedy, DVS and Fedorov: Prediction Error vs. k, Running Time (seconds) vs. k, and a Time-Error Trade-off plot.]
Figure 1: Results on the CompAct(s) dataset. Results are the median of 10 runs, except Greedy and Fedorov. Note that Unif, Lev, PL and DVS use less than 1 second to finish experiments.
In terms of prediction error, DVS performs well and is comparable with Lev. Its strength compared
to the greedy and relaxation methods (Smpl, Greedy, Fedorov) is running time, leading to good
time-error tradeoffs. These tradeoffs are illustrated in Figure 1 for k = 120.
In other experiments (shown in Appendix G) we observed that in some cases, the optimization and
greedy methods (Smpl, Greedy, Fedorov) yield better results than sampling, however with much
higher running times. Hence, given time-error tradeoffs, DVS may be an interesting alternative in
situations where time is a very limited resource and results are needed quickly.
5 Conclusion
In this paper, we study the problem of DVS and develop an exact (randomized) polynomial time
sampling algorithm as well as its derandomization. We further study dual volume sampling via
the theory of real-stable polynomials and prove that its distribution satisfies the "Strong Rayleigh" property. This result has remarkable consequences, especially because it implies a provably fast-mixing Markov chain sampler that makes dual volume sampling much more attractive to practitioners. Finally, we observe connections to classical, computationally more expensive experimental design methods (Fedorov's method and SA); together with our results here, these could be a first step towards
a better theoretical understanding of those methods.
Acknowledgement
This research was supported by NSF CAREER award 1553284, NSF grant IIS-1409802, DARPA
grant N66001-17-1-4039, DARPA FunLoL grant (W911NF-16-1-0551) and a Siebel Scholar Fellowship. The views, opinions, and/or findings contained in this article are those of the author and should
not be interpreted as representing the official views or policies, either expressed or implied, of the
Defense Advanced Research Projects Agency or the Department of Defense.
References
[1] N. Anari and S. O. Gharan. The Kadison-Singer problem for strongly Rayleigh measures and
applications to asymmetric TSP. arXiv:1412.1143, 2014.
[2] N. Anari and S. O. Gharan. Effective-resistance-reducing flows and asymmetric TSP. In IEEE
Symposium on Foundations of Computer Science (FOCS), 2015.
[3] N. Anari, S. O. Gharan, and A. Rezaei. Monte Carlo Markov chain algorithms for sampling
strongly Rayleigh distributions and determinantal point processes. In COLT, pages 23-26, 2016.
[4] M. Arioli and I. S. Duff. Preconditioning of linear least-squares problems by identifying basic
variables. SIAM J. Sci. Comput., 2015.
[5] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings
of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, 2007.
[6] H. Avron and C. Boutsidis. Faster subset selection for matrices and applications. SIAM Journal
on Matrix Analysis and Applications, 34(4):1464-1499, 2013.
[7] J. Borcea and P. Brändén. Applications of stable polynomials to mixed determinants: Johnson's conjectures, unimodality, and symmetrized Fischer products. Duke Mathematical Journal, pages 205-223, 2008.
[8] J. Borcea, P. Brändén, and T. Liggett. Negative dependence and the geometry of polynomials. Journal of the American Mathematical Society, 22:521-567, 2009.
[9] A. Borodin. Determinantal point processes. arXiv:0911.1153, 2009.
[10] C. Boutsidis and M. Magdon-Ismail. Deterministic feature selection for k-means clustering. IEEE Transactions on Information Theory, pages 6099-6110, 2013.
[11] C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In SODA, pages 968-977, 2009.
[12] C. Boutsidis, A. Zouzias, M. W. Mahoney, and P. Drineas. Stochastic dimensionality reduction for k-means clustering. arXiv preprint arXiv:1110.2897, 2011.
[13] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column-based matrix reconstruction. SIAM Journal on Computing, pages 687-717, 2014.
[14] S. Chen, R. Varma, A. Sandryhaila, and J. Kovačević. Discrete signal processing on graphs: Sampling theory. IEEE Transactions on Signal Processing, 63(24):6510-6523, 2015.
[15] A. Çivril and M. Magdon-Ismail. On selecting a maximum volume sub-matrix of a matrix and related problems. Theoretical Computer Science, pages 4801-4811, 2009.
[16] M. Derezinski and M. K. Warmuth. Unbiased estimates for linear regression via volume
sampling. Advances in Neural Information Processing Systems (NIPS), 2017.
[17] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection.
In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages
329-338. IEEE, 2010.
[18] M. Elkin, Y. Emek, D. A. Spielman, and S.-H. Teng. Lower-stretch spanning trees. SIAM
Journal on Computing, 2008.
[19] T. Feder and M. Mihail. Balanced matroids. In Symposium on Theory of Computing (STOC),
pages 26-38, 1992.
[20] V. Fedorov. Theory of optimal experiments. Preprint 7 lsm, Moscow State University, 1969.
[21] V. Fedorov. Theory of optimal experiments. Academic Press, 1972.
[22] A. Frieze, N. Goyal, L. Rademacher, and S. Vempala. Expanders via random spanning trees.
SIAM Journal on Computing, 43(2):497-513, 2014.
[23] S. O. Gharan, A. Saberi, and M. Singh. A randomized rounding approach to the traveling
salesman problem. In IEEE Symposium on Foundations of Computer Science (FOCS), pages
550-559, 2011.
[24] N. Halko, P.-G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic
algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217-288,
2011.
[25] S. Joshi and S. Boyd. Sensor selection via convex optimization. IEEE Transactions on Signal
Processing, pages 451-462, 2009.
[26] A. Kulesza and B. Taskar. Determinantal Point Processes for machine learning, volume 5.
Foundations and Trends in Machine Learning, 2012.
[27] C. Li, S. Jegelka, and S. Sra. Fast mixing markov chains for strongly Rayleigh measures, DPPs,
and constrained sampling. In Advances in Neural Information Processing Systems (NIPS), 2016.
[28] C. Li, S. Sra, and S. Jegelka. Gaussian quadrature for matrix inverse forms with applications.
In ICML, pages 1766-1775, 2016.
[29] R. Lyons. Determinantal probability measures. Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 98(1):167-212, 2003.
[30] P. Ma, M. Mahoney, and B. Yu. A statistical perspective on algorithmic leveraging. In Journal
of Machine Learning Research (JMLR), 2015.
[31] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied
Probability, 7(1), 1975.
[32] A. Magen and A. Zouzias. Near optimal dimensionality reductions that preserve volumes. In
Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques,
pages 523-534. Springer, 2008.
[33] A. J. Miller and N.-K. Nguyen. A Fedorov exchange algorithm for D-optimal design. Journal of the Royal Statistical Society, 1994.
[34] M. D. Morris and T. J. Mitchell. Exploratory designs for computational experiments. Journal
of Statistical Planning and Inference, 43:381-402, 1995.
[35] N.-K. Nguyen and A. J. Miller. A review of some exchange algorithms for constructing discrete optimal designs. Computational Statistics and Data Analysis, 14:489-498, 1992.
[36] R. Pemantle. Towards a theory of negative dependence. Journal of Mathematical Physics, 41:1371-1390, 2000.
[37] R. Pemantle and Y. Peres. Concentration of Lipschitz functionals of determinantal and other
strong Rayleigh measures. Combinatorics, Probability and Computing, 23:140-160, 2014.
[38] F. Pukelsheim. Optimal design of experiments. SIAM, 2006.
[39] D. Spielman and N. Srivastava. Graph sparsification by effective resistances. SIAM J. Comput.,
40(6):1913-1926, 2011.
[40] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph
sparsification, and solving linear systems. In STOC, 2004.
[41] M. Tsitsvero, S. Barbarossa, and P. D. Lorenzo. Signals on graphs: Uncertainty principle and
sampling. IEEE Transactions on Signal Processing, 64(18):4845-4860, 2016.
[42] D. Wagner. Multivariate stable polynomials: theory and applications. Bulletin of the American
Mathematical Society, 48(1):53-84, 2011.
[43] Y. Wang, A. W. Yu, and A. Singh. On Computationally Tractable Selection of Experiments in
Regression Models. ArXiv e-prints, 2016.
[44] Y. Zhao, F. Pasqualetti, and J. Cortés. Scheduling of control nodes for improved network controllability. In 2016 IEEE 55th Conference on Decision and Control (CDC), pages 1859-1864, 2016.
[45] R. Zhu, P. Ma, M. W. Mahoney, and B. Yu. Optimal subsampling approaches for large sample
linear regression. arXiv preprint arXiv:1509.05111, 2015.
6,732 | 709 | The Computation of Stereo Disparity for
Transparent and for Opaque Surfaces
Suthep Madarasmi
Computer Science Department
University of Minnesota
Minneapolis, MN 55455
Daniel Kersten
Department of Psychology
University of Minnesota
Ting-Chuen Pong
Computer Science Department
University of Minnesota
Abstract
The classical computational model for stereo vision incorporates
a uniqueness inhibition constraint to enforce a one-to-one feature
match, thereby sacrificing the ability to handle transparency. Critics of the model disregard the uniqueness constraint and argue
that the smoothness constraint can provide the excitation support
required for transparency computation. However, this modification fails in neighborhoods with sparse features. We propose a
Bayesian approach to stereo vision with priors favoring cohesive
over transparent surfaces. The disparity and its segmentation into a
multi-layer "depth planes" representation are simultaneously computed. The smoothness constraint propagates support within each
layer, providing mutual excitation for non-neighboring transparent
or partially occluded regions. Test results for various random-dot
and other stereograms are presented.
1 INTRODUCTION
The horizontal disparity in the projection of a 3-D point in a parallel stereo imaging system can be used to compute depth through triangulation. As the number of
points in the scene increases, the correspondence problem increases in complexity
due to the matching ambiguity. Prior constraints on surfaces are needed to arrive
at a correct solution. Marr and Poggio [1976] use the smoothness constraint to resolve matching ambiguity and the uniqueness constraint to enforce a 1-to-1 match.
Their smoothness constraint tends to oversmooth at occluding boundaries and their
uniqueness assumption discourages the computation of stereo transparency for two
overlaid surfaces. Prazdny [1985] disregards the uniqueness inhibition term to enable transparency perception. However, their smoothness constraint is locally enforced and fails at providing excitation for spatially disjoint regions and for sparse
transparency.
More recently, Bayesian approaches have been used to incorporate prior constraints
(see [Clark and Yuille, 1990] for a review) for stereopsis while overcoming the problem of oversmoothing. Line processes are activated for disparity discontinuities to
mark the smoothness boundaries while the disparity is simultaneously computed.
A drawback of such methods is the lack of an explicit grouping of image sites
into piece-wise smooth regions. In addition, when presented with a stereogram of
overlaid (transparent) surfaces such as in the random-dot stereogram in figure 5,
multiple edges in the image are obtained while we clearly perceive two distinct,
overlaid surfaces. With edges as output, further grouping of overlapping surfaces
is impossible using the edges as boundaries. This suggests that surface grouping
should be performed simultaneously with disparity computation.
2 THE MULTI-LAYER REPRESENTATION
We propose a Bayesian approach to computing disparity and its segmentation that
uses a different output representation from the previous, edge-based methods. Our
representation was inspired by the observations of Nakayama et al. [1989] that mid-level processing such as the grouping of objects behind occluders is performed for objects within the same "depth plane".
As an example consider the stereogram of a floating square shown in figure 1a. The
edge-based segmentation method computes the disparity and marks the disparity
edges as shown in figure 1b. Our approach produces two types of output at each
pixel: a layer (depth plane) number and a disparity value for that layer. The goal
of the system is to place points that could have arisen from a single smooth surface
in the scene into one distinct layer. The output for our multi-surface representation
is shown in figure 1c. Note that the floating square has a unique layer label, namely
layer 4, and the background has another label of 2. Layers 1 and 3 have no data
support and are, therefore, inactive.
The rest of the pixels in each layer that have no data support obtain values by a
membrane fitting process using the computed disparity as anchors. The occluded
parts of surfaces are, thus, represented in each layer. In addition, disjoint regions of a
single surface due to occlusion are represented in a single layer. This representation
of occluded parts is an important difference between our representation and a similar
representation for segmentation by Darrell and Pentland [1991].
Figure 1: a) A gray scale display of a noisy stereogram depicting a floating square. b) Edge-based method: disparity computed and disparity discontinuity computed (panel annotation: disp. = 0). c) Multi-Surface method: disparity computed, surface grouping performed by layer assignment, and disparity for each layer filled in (panel annotations: disp. = 4, Layer 2, Layer 4).
3 ALGORITHM AND SIMULATION METHOD
We use Bayes' [1783] rule to compute the scene attribute, namely disparity u and
its layer assignment l for each layer:
p(u, l | dL, dR) = p(dL, dR | u, l) p(u, l) / p(dL, dR)
where dL and dR are the left and right intensity image data. Each constraint is expressed as a local cost function using the Markov Random Field (MRF) assumption
[Geman and Geman, 1984], that pixel values are conditional only on their nearest
neighbors. Using the Gibbs-MRF equivalence, the energy function can be written
as a probability function:
1 E(.,)
p(x) -e-"-
=Z
where Z is the normalizing constant, T is the temperature, E is the energy cost
function, and x is a random variable
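To make the Gibbs form concrete, the following Python sketch (our own illustration, not the authors' code) turns a list of energies over candidate hypotheses into a probability distribution; the example energies are placeholders:

import numpy as np

def gibbs_distribution(energies, temperature):
    # p(x) = exp(-E(x)/T) / Z over a finite set of candidate states.
    unnormalized = np.exp(-np.asarray(energies, dtype=float) / temperature)
    return unnormalized / unnormalized.sum()  # the sum plays the role of Z

# Toy usage: three candidate (disparity, layer) hypotheses with different energies.
probs = gibbs_distribution([0.1, 0.5, 2.0], temperature=1.0)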
Our energy constraints can be expressed as
E = λ_D V_D + λ_S V_S + λ_G V_G + λ_E V_E + λ_R V_R
where the λ's are the weighting factors and the V_D, V_S, V_G, V_E, V_R functions are
the data matching cost, the smoothness term, the gap term, the edge shape term,
and the disparity versus intensity edge coupling term, respectively.
The data matching constraint prefers matches with similar intensity and contrast:
V_D = Σ_{i=1}^{M} [ |d_i^L − d_k^R| + γ Σ_{j ∈ N_i} |(d_i^L − d_j^L) − (d_k^R − d_m^R)| ]
with the image indices k and m given by the ordered pairs k = (row(i), col(i) + u_{C_i,i}),
m = (row(j), col(j) + u_{C_i,j}), M is the number of pixels in the image, C_i is the layer
classification for site i, and u_{l,i} is the disparity at layer l. The γ weighs absolute
intensity versus contrast matching.
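A minimal sketch of this data term at a single site, assuming the left and right images are 2-D intensity arrays and, for brevity, using one disparity value for the site and its neighbors (the function and argument names are ours, not the authors'):

import numpy as np

def data_cost(dL, dR, i, neighbors, disparity, gamma):
    # V_D contribution at site i = (row, col): absolute intensity match plus a
    # gamma-weighted contrast (intensity-difference) match over the neighbors.
    r, c = i
    left = float(dL[r, c])
    right = float(dR[r, c + disparity])
    cost = abs(left - right)
    for (jr, jc) in neighbors:
        cost += gamma * abs((left - float(dL[jr, jc]))
                            - (right - float(dR[jr, jc + disparity])))
    return cost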
The λ_D is higher for points that belong to unambiguous features such as straight
vertical contours, so that ambiguous pixels rely more on their prior constraints.
Figure 2: Cost function V_S (cost vs. depth difference). a) The smoothness cost is quadratic until the disparity difference is high and an edge process is activated. b) In our simulations we use a threshold
below which the smoothness cost is scaled down and above which a different layer
assignment is accepted at a constant high cost.
Also, if neighboring pixels have a higher disparity than the current pixel and are in
a different layer, its λ_D is lowered since its corresponding point in the left image is
likely to be occluded.
The equation for the smoothness term is given by:
V_S = Σ_{i=1}^{M} Σ_{l=1}^{L} Σ_{j ∈ N_i} V_s(u_{l,i}, u_{l,j}) a_l
where N_i are the neighbors of i, V_s is the local smoothness potential, a_l is the
activity level for layer l defined by the percent of pixels belonging to layer l, and L
is the number of layers in the system. The local smoothness potential is given by:
V_s(a, b) = (a − b)^2 + μ Σ_k (Δ_k)^2   if (a − b)^2 < T,   and σ otherwise,
where μ is the weighting term between depth smoothness and directional derivative
smoothness, Δ_k is the difference operation in various directions k, and T
is the threshold. Instead of the commonly used quadratic smoothness function
graphed in figure 2a, we use the σ function graphed in figure 2b, which resembles
the Ising potential. This allows for some flexibility since λ_S is set rather high in
our simulations.
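A sketch of this thresholded potential, with the below-threshold branch combining depth and directional-derivative smoothness as described; all argument names are our own, and the exact branch expressions are a reconstruction rather than the authors' code:

def smoothness_potential(a, b, grad_diff, mu, threshold, sigma):
    # Thresholded potential of figure 2b: a quadratic cost below the
    # threshold, and a constant high cost sigma once the disparity
    # difference is large enough to accept a different layer assignment.
    depth_term = (a - b) ** 2
    if depth_term < threshold:
        return depth_term + mu * grad_diff ** 2
    return sigma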
The V_G term ensures a gap in the values of corresponding pixels between layers.
It ensures that if a site i belongs to layer C_i, then all points j neighboring i, for
each layer l, must have disparity values u_{l,j} different from u_{C_i,i}.
The edge or boundary shape constraint V_E incorporates two types of constraints:
a cohesive measure and a saliency measure. The costs for various neighborhood
configurations are given in figure 3.
The constraint V_R ensures that if there is no edge in intensity then there should be
no edge in the disparity. This is particularly important to avoid local minima for
gray scale images since there is so much ambiguity in the matching.
Figure 3: Cost function V_E. The costs associated with nearest-neighborhood layer label configurations (filled squares denote the same layer label, open squares a different layer label). a) Fully cohesive region (cost = 0, lowest). b) Two opaque regions with straight line boundary (cost = 0.2). c) Two opaque regions with diagonal line boundary (cost = 0.25). d) Opaque regions with no figural continuity (cost = 0.5). e) Transparent region with dense samplings (cost = 0.7). f) Transparent region with no other neighbors (cost = 1, highest).
Figure 4: Stereogram of floating cylinder shown in crossed and uncrossed disparity. Only disparity values in the active layers are shown. A wire-frame rendering for layer 3, which captures the cylinder, is shown (panels: layer labels; wire-frame plot of Layer 3).
The Gibbs Sampler [Geman and Geman, 1984] with simulated annealing is used
to compute the disparity and layer assignments. After each iteration of the Gibbs
Sampler, the missing values within each layer are filled-in using the disparity at the
available sites. A quadratic energy functional enforces smoothness of disparity and
of disparity difference in various directions. A gradient descent approach minimizes
this energy and the missing values are filled-in.
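One annealing sweep of such a sampler could look as follows; this is a schematic sketch rather than the authors' implementation, and local_energy stands in for the sum of the V terms around one site:

import numpy as np

def gibbs_sweep(disparity, layer, candidates, local_energy, temperature, rng):
    # One pass of the Gibbs Sampler: resample (disparity, layer) at every site
    # from the local conditional distribution implied by the energy function.
    rows, cols = disparity.shape
    for r in range(rows):
        for c in range(cols):
            energies = np.array([local_energy(r, c, u, l) for (u, l) in candidates])
            p = np.exp(-(energies - energies.min()) / temperature)
            p /= p.sum()
            u, l = candidates[rng.choice(len(candidates), p=p)]
            disparity[r, c], layer[r, c] = u, l
    return disparity, layer

# Simulated annealing: repeat sweeps while lowering the temperature,
# e.g. temperature *= 0.95 after each sweep.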
4 SIMULATION RESULTS
After normalizing each of the local costs to lie between 0 and 1, the values for the
weighting parameters used in decreasing order are: λ_S, λ_R, λ_D, λ_E, λ_G, with the λ_D
value moved to follow λ_G if a pixel is partially occluded. The results for a random-dot stereogram with a floating half-cylinder are shown in figure 4. Note that for
clarity only the visible pixels within each layer are displayed, though the remaining
pixels are filled-in. A wire-frame rendering for layer 3 is also provided.
Figure 5 is a random-dot stereogram with features from two transparent frontoparallel surfaces. The output consists primarily of two labels corresponding to
the foreground and the background. Note that when the stereogram is fused, the
percept is of two overlaid surfaces with various small, noisy regions of incorrect
matches.
Figure 6 is a random-dot stereogram depicting many planar-parallel surfaces. Note
Figure 5: Random-dot stereogram of two overlaid surfaces. Layers 1 and 4 are the mostly activated layers. Only 5 of the layers are shown here (panels: Layer 1, Layer 2, Layer 3).
Figure 6: Random-dot stereogram of multiple flat surfaces. Layer 4 captures two regions since they belong to the same surface (equal disparity) (panels: Layer 5; layer labels).
that there are two disjoint regions which are classified into the same layer since they
form a single surface.
A gray-scale stereogram depicting a floating square occluding the letter 'C' also
floating above the background is shown in figure 7. A feature-based matching
scheme is bound to fail here since locally one cannot correctly attribute the computed disparity at a matched corner of the rectangle, for example, to either the
rectangle, the background, or to both regions. Our V_R constraint forces the system
to attempt various matches until points with no intensity discontinuity have no
disparity discontinuity. Another important feature is that the two ends of the letter
'C' are in the same "depth plane" [Nakayama et al., 1989] and may later be merged
to complete the letter.
Figure 8 is a gray scale stereogram depicting 4 distant surfaces with planar disparity.
At occluding boundaries, the region corresponding to the further surface in the right
image has no corresponding region in the left image. A high λ_D would only force
these points to find an incorrect match and add to the system's errors. The λ_D
reduction factor for partially occluded points reduces the data matching requirement
for such points. This is crucial for obtaining correct matches especially since the
images are sparsely textured and the dependence on accurate information from the
textured regions is high.
A transparency example of a fence in front of a bill-board is given in figure 9. Note
6,733 | 7,090 | Hindsight Experience Replay
Marcin Andrychowicz*, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong,
Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel†, Wojciech Zaremba†
OpenAI
Abstract
Dealing with sparse rewards is one of the biggest challenges in Reinforcement
Learning (RL). We present a novel technique called Hindsight Experience Replay
which allows sample-efficient learning from rewards which are sparse and binary
and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of
implicit curriculum.
We demonstrate our approach on the task of manipulating objects with a robotic
arm. In particular, we run experiments on three different tasks: pushing, sliding,
and pick-and-place, in each case using only binary rewards indicating whether or
not the task is completed. Our ablation studies show that Hindsight Experience
Replay is a crucial ingredient which makes training possible in these challenging
environments. We show that our policies trained on a physics simulation can
be deployed on a physical robot and successfully complete the task. The video
presenting our experiments is available at https://goo.gl/SMrQnI.
1 Introduction
Reinforcement learning (RL) combined with neural networks has recently led to a wide range of
successes in learning policies for sequential decision-making problems. This includes simulated
environments, such as playing Atari games (Mnih et al., 2015), and defeating the best human player
at the game of Go (Silver et al., 2016), as well as robotic tasks such as helicopter control (Ng et al.,
2006), hitting a baseball (Peters and Schaal, 2008), screwing a cap onto a bottle (Levine et al., 2015),
or door opening (Chebotar et al., 2016).
However, a common challenge, especially for robotics, is the need to engineer a reward function
that not only reflects the task at hand but is also carefully shaped (Ng et al., 1999) to guide the
policy optimization. For example, Popov et al. (2017) use a cost function consisting of five relatively
complicated terms which need to be carefully weighted in order to train a policy for stacking a
brick on top of another one. The necessity of cost engineering limits the applicability of RL in the
real world because it requires both RL expertise and domain-specific knowledge. Moreover, it is
not applicable in situations where we do not know what admissible behaviour may look like. It is
therefore of great practical relevance to develop algorithms which can learn from unshaped reward
signals, e.g. a binary signal indicating successful task completion.
One ability humans have, unlike the current generation of model-free RL algorithms, is to learn
almost as much from achieving an undesired outcome as from the desired one. Imagine that you are
learning how to play hockey and are trying to shoot a puck into a net. You hit the puck but it misses
the net on the right side. The conclusion drawn by a standard RL algorithm in such a situation would
be that the performed sequence of actions does not lead to a successful shot, and little (if anything)
* [email protected]
† Equal advising.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
would be learned. It is however possible to draw another conclusion, namely that this sequence of
actions would be successful if the net had been placed further to the right.
In this paper we introduce a technique called Hindsight Experience Replay (HER) which allows the
algorithm to perform exactly this kind of reasoning and can be combined with any off-policy RL
algorithm. It is applicable whenever there are multiple goals which can be achieved, e.g. achieving
each state of the system may be treated as a separate goal. Not only does HER improve the sample
efficiency in this setting, but more importantly, it makes learning possible even if the reward signal is
sparse and binary. Our approach is based on training universal policies (Schaul et al., 2015a) which
take as input not only the current state, but also a goal state. The pivotal idea behind HER is to replay
each episode with a different goal than the one the agent was trying to achieve, e.g. one of the goals
which was achieved in the episode.
2 Background
2.1 Reinforcement Learning
We consider the standard reinforcement learning formalism consisting of an agent interacting with
an environment. To simplify the exposition we assume that the environment is fully observable.
An environment is described by a set of states S, a set of actions A, a distribution of initial states
p(s_0), a reward function r : S × A → R, transition probabilities p(s_{t+1} | s_t, a_t), and a discount factor
γ ∈ [0, 1].
A deterministic policy is a mapping from states to actions: π : S → A. Every episode starts with
sampling an initial state s_0. At every timestep t the agent produces an action based on the current state:
a_t = π(s_t). Then it gets the reward r_t = r(s_t, a_t) and the environment's new state is sampled from
the distribution p(· | s_t, a_t). A discounted sum of future rewards is called a return: R_t = Σ_{i=t}^{∞} γ^{i−t} r_i.
The agent's goal is to maximize its expected return E_{s_0}[R_0 | s_0]. The Q-function or action-value
function is defined as Q^π(s_t, a_t) = E[R_t | s_t, a_t].
Let π* denote an optimal policy, i.e. any policy π* s.t. Q^{π*}(s, a) ≥ Q^π(s, a) for every s ∈ S, a ∈ A
and any policy π. All optimal policies have the same Q-function which is called the optimal Q-function
and denoted Q*. It is easy to show that it satisfies the following equation, called the Bellman equation:
Q*(s, a) = E_{s' ∼ p(·|s,a)} [ r(s, a) + γ max_{a' ∈ A} Q*(s', a') ].
2.2 Deep Q-Networks (DQN)
Deep Q-Networks (DQN) (Mnih et al., 2015) is a model-free RL algorithm for discrete action
spaces. Here we sketch it only informally, see Mnih et al. (2015) for more details. In DQN we
maintain a neural network Q which approximates Q*. A greedy policy w.r.t. Q is defined as
π_Q(s) = argmax_{a ∈ A} Q(s, a). An ε-greedy policy w.r.t. Q is a policy which with probability ε takes
a random action (sampled uniformly from A) and takes the action π_Q(s) with probability 1 − ε.
During training we generate episodes using an ε-greedy policy w.r.t. the current approximation of
the action-value function Q. The transition tuples (st , at , rt , st+1 ) encountered during training are
stored in the so-called replay buffer. The generation of new episodes is interleaved with neural
network training. The network is trained using mini-batch gradient descent on the loss L which
encourages the approximated Q-function to satisfy the Bellman equation: L = E[(Q(s_t, a_t) − y_t)^2],
where y_t = r_t + γ max_{a' ∈ A} Q(s_{t+1}, a') and the tuples (s_t, a_t, r_t, s_{t+1}) are sampled from the replay
buffer1.
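A sketch of the target and loss computation on a sampled minibatch, assuming a hypothetical q_values function that returns Q(s, ·) for a batch of states as a 2-D array (names and interface are our own, not from the paper):

import numpy as np

def dqn_targets(rewards, next_states, q_values, gamma):
    # y_t = r_t + gamma * max_a' Q(s_{t+1}, a'); the targets are treated as
    # constants, so no gradient flows through this computation.
    return rewards + gamma * q_values(next_states).max(axis=1)

def dqn_loss(states, actions, targets, q_values):
    # L = E[(Q(s_t, a_t) - y_t)^2], estimated on the sampled minibatch.
    q_sa = q_values(states)[np.arange(len(actions)), actions]
    return np.mean((q_sa - targets) ** 2)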
2.3 Deep Deterministic Policy Gradients (DDPG)
Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) is a model-free RL algorithm
for continuous action spaces. Here we sketch it only informally, see Lillicrap et al. (2015) for more
details. In DDPG we maintain two neural networks: a target policy (also called an actor) π : S → A
and an action-value function approximator (called the critic) Q : S × A → R. The critic's job is to
approximate the actor's action-value function Q^π.
1 The targets y_t depend on the network parameters but this dependency is ignored during backpropagation.
Moreover, DQN uses the so-called target network to make the optimization procedure more stable but we omit it
here as it is not relevant to our results.
Episodes are generated using a behavioral policy which is a noisy version of the target policy, e.g.
π_b(s) = π(s) + N(0, 1). The critic is trained in a similar way as the Q-function in DQN but the
targets y_t are computed using actions outputted by the actor, i.e. y_t = r_t + γ Q(s_{t+1}, π(s_{t+1})).
The actor is trained with mini-batch gradient descent on the loss L_a = −E_s[Q(s, π(s))], where s
is sampled from the replay buffer. The gradient of L_a w.r.t. actor parameters can be computed by
backpropagation through the combined critic and actor networks.
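The corresponding critic targets and actor objective, again as a schematic sketch with hypothetical actor(s) and critic(s, a) functions (our own illustration, not the paper's implementation):

import numpy as np

def ddpg_critic_targets(rewards, next_states, actor, critic, gamma):
    # y_t = r_t + gamma * Q(s_{t+1}, pi(s_{t+1})): bootstrap with the actor's
    # action instead of a max over a continuous action space.
    return rewards + gamma * critic(next_states, actor(next_states))

def ddpg_actor_loss(states, actor, critic):
    # L_a = -E_s[Q(s, pi(s))]; minimizing it moves the actor towards actions
    # the critic currently scores highly.
    return -np.mean(critic(states, actor(states)))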
2.4 Universal Value Function Approximators (UVFA)
Universal Value Function Approximators (UVFA) (Schaul et al., 2015a) is an extension of DQN to
the setup where there is more than one goal we may try to achieve. Let G be the space of possible
goals. Every goal g ∈ G corresponds to some reward function r_g : S × A → R. Every episode starts
with sampling a state-goal pair from some distribution p(s_0, g). The goal stays fixed for the whole
episode. At every timestep the agent gets as input not only the current state but also the current goal,
so the policy has the form π : S × G → A, and it gets the reward r_t = r_g(s_t, a_t). The Q-function now depends not only on a
state-action pair but also on a goal: Q^π(s_t, a_t, g) = E[R_t | s_t, a_t, g]. Schaul et al. (2015a) show that in
this setup it is possible to train an approximator to the Q-function using direct bootstrapping from the
Bellman equation (just like in case of DQN) and that a greedy policy derived from it can generalize
to previously unseen state-action pairs. The extension of this approach to DDPG is straightforward.
3 Hindsight Experience Replay
3.1 A motivating example
Consider a bit-flipping environment with the state space S = {0, 1}^n and the action space A =
{0, 1, . . . , n − 1} for some integer n, in which executing the i-th action flips the i-th bit of the state.
For every episode we sample uniformly an initial state as well as a target state, and the policy gets a
reward of −1 as long as it is not in the target state, i.e. r_g(s, a) = −[s ≠ g].
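The environment is simple enough to state in a few lines; the following Python sketch is our own rendering of the description above, not the authors' code:

import numpy as np

class BitFlipEnv:
    # n-bit environment: action i flips bit i; the reward is -1 until the
    # state equals the goal, matching r_g(s, a) = -[s != g].
    def __init__(self, n, seed=0):
        self.n = n
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.state = self.rng.integers(0, 2, size=self.n)
        self.goal = self.rng.integers(0, 2, size=self.n)
        return self.state.copy(), self.goal.copy()

    def step(self, action):
        self.state[action] ^= 1
        done = bool(np.array_equal(self.state, self.goal))
        reward = 0.0 if done else -1.0
        return self.state.copy(), reward, done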
Standard RL algorithms are bound to fail in this environment for
n > 40 because they will never experience any reward other than −1.
Figure 1: Bit-flipping experiment.
Notice that using techniques for improving exploration (e.g. VIME
(Houthooft et al., 2016), count-based exploration (Ostrovski et al.,
2017) or bootstrapped DQN (Osband et al., 2016)) does not help
here because the real problem is not in lack of diversity of states
being visited, rather it is simply impractical to explore such a large
state space. The standard solution to this problem would be to use
a shaped reward function which is more informative and guides the
agent towards the goal, e.g. r_g(s, a) = −||s − g||^2. While using a
shaped reward solves the problem in our toy environment, it may be
difficult to apply to more complicated problems. We investigate the
results of reward shaping experimentally in Sec. 4.4.
Instead of shaping the reward we propose a different solution which does not require any domain
knowledge. Consider an episode with a state sequence s_1, . . . , s_T and a goal g ≠ s_1, . . . , s_T, which
implies that the agent received a reward of −1 at every timestep. The pivotal idea behind our approach
is to re-examine this trajectory with a different goal: while this trajectory may not help us learn
how to achieve the state g, it definitely tells us something about how to achieve the state sT . This
information can be harvested by using an off-policy RL algorithm and experience replay where we
replace g in the replay buffer by sT . In addition we can still replay with the original goal g left intact
in the replay buffer. With this modification at least half of the replayed trajectories contain rewards
different from −1 and learning becomes much simpler. Fig. 1 compares the final performance of
DQN with and without this additional replay technique which we call Hindsight Experience Replay
(HER). DQN without HER can only solve the task for n ≤ 13 while DQN with HER easily solves
the task for n up to 50. See Appendix A for the details of the experimental setup. Note that this
approach combined with powerful function approximators (e.g., deep neural networks) allows the
agent to learn how to achieve the goal g even if it has never observed it during training.
We more formally describe our approach in the following sections.
3.2 Multi-goal RL
We are interested in training agents which learn to achieve multiple different goals. We follow the
approach from Universal Value Function Approximators (Schaul et al., 2015a), i.e. we train policies
and value functions which take as input not only a state s ∈ S but also a goal g ∈ G. Moreover, we
show that training an agent to perform multiple tasks can be easier than training it to perform only
one task (see Sec. 4.3 for details) and therefore our approach may be applicable even if there is only
one task we would like the agent to perform (a similar situation was recently observed by Pinto and
Gupta (2016)).
We assume that every goal g ∈ G corresponds to some predicate f_g : S → {0, 1} and that the agent's
goal is to achieve any state s that satisfies f_g(s) = 1. In the case when we want to exactly specify the
desired state of the system we may use S = G and f_g(s) = [s = g]. The goals can also specify only
some properties of the state, e.g. suppose that S = R^2 and we want to be able to achieve an arbitrary
state with the given value of the x coordinate. In this case G = R and f_g((x, y)) = [x = g].
Moreover, we assume that given a state s we can easily find a goal g which is satisfied in this state.
More formally, we assume that there is given a mapping m : S → G s.t. ∀s ∈ S: f_{m(s)}(s) = 1. Notice
that this assumption is not very restrictive and can usually be satisfied. In the case where each goal
corresponds to a state we want to achieve, i.e. G = S and f_g(s) = [s = g], the mapping m is just the
identity. For the case of 2-dimensional states and 1-dimensional goals from the previous paragraph
this mapping is also very simple: m((x, y)) = x.
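For the x-coordinate example above, the predicate and the mapping are one-liners (our own illustration; names are ours):

def f_g(state, g):
    # Predicate for the 1-D goal example: any state whose x coordinate is g.
    x, y = state
    return 1 if x == g else 0

def m(state):
    # Mapping m : S -> G with f_{m(s)}(s) = 1: a state trivially satisfies
    # the goal given by its own x coordinate.
    x, y = state
    return x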
A universal policy can be trained using an arbitrary RL algorithm by sampling goals and initial states
from some distributions, running the agent for some number of timesteps and giving it a negative
reward at every timestep when the goal is not achieved, i.e. r_g(s, a) = −[f_g(s) = 0]. This does not
however work very well in practice because this reward function is sparse and not very informative.
In order to solve this problem we introduce the technique of Hindsight Experience Replay which is
the crux of our approach.
3.3 Algorithm
The idea behind Hindsight Experience Replay (HER) is very simple: after experiencing some episode
s_0, s_1, . . . , s_T we store in the replay buffer every transition s_t → s_{t+1} not only with the original
goal used for this episode but also with a subset of other goals. Notice that the goal being pursued
influences the agent?s actions but not the environment dynamics and therefore we can replay each
trajectory with an arbitrary goal assuming that we use an off-policy RL algorithm like DQN (Mnih
et al., 2015), DDPG (Lillicrap et al., 2015), NAF (Gu et al., 2016) or SDQN (Metz et al., 2017).
One choice which has to be made in order to use HER is the set of additional goals used for replay.
In the simplest version of our algorithm we replay each trajectory with the goal m(sT ), i.e. the goal
which is achieved in the final state of the episode. We experimentally compare different types and
quantities of additional goals for replay in Sec. 4.5. In all cases we also replay each trajectory with
the original goal pursued in the episode. See Alg. 1 for a more formal description of the algorithm.
HER may be seen as a form of implicit curriculum as the goals used for replay naturally shift from
ones which are simple to achieve even by a random agent to more difficult ones. However, in contrast
to explicit curriculum, HER does not require having any control over the distribution of initial
environment states. Not only does HER learn with extremely sparse rewards, in our experiments
it also performs better with sparse rewards than with shaped ones (See Sec. 4.4). These results are
indicative of the practical challenges with reward shaping, and that shaped rewards would often
constitute a compromise on the metric we truly care about (such as binary success/failure).
4 Experiments
The video presenting our experiments is available at https://goo.gl/SMrQnI.
4.1 Environments
The are no standard environments for multi-goal RL and therefore we created our own environments.
We decided to use manipulation environments based on an existing hardware robot to ensure that the
challenges we face correspond as closely as possible to the real world. In all experiments we use a
7-DOF Fetch Robotics arm which has a two-fingered parallel gripper. The robot is simulated using
the MuJoCo (Todorov et al., 2012) physics engine. The whole training procedure is performed in
the simulation but we show in Sec. 4.6 that the trained policies perform well on the physical robot
without any finetuning.
Policies are represented as Multi-Layer Perceptrons (MLPs) with Rectified Linear Unit (ReLU)
activation functions. Training is performed using the DDPG algorithm (Lillicrap et al., 2015) with
Algorithm 1 Hindsight Experience Replay (HER)
Given:
• an off-policy RL algorithm A,                      ▷ e.g. DQN, DDPG, NAF, SDQN
• a strategy S for sampling goals for replay,        ▷ e.g. S(s_0, . . . , s_T) = m(s_T)
• a reward function r : S × A × G → R.               ▷ e.g. r(s, a, g) = −[f_g(s) = 0]
Initialize A                                         ▷ e.g. initialize neural networks
Initialize replay buffer R
for episode = 1, M do
    Sample a goal g and an initial state s_0.
    for t = 0, T − 1 do
        Sample an action a_t using the behavioral policy from A:
            a_t ← π_b(s_t || g)                      ▷ || denotes concatenation
        Execute the action a_t and observe a new state s_{t+1}
    end for
    for t = 0, T − 1 do
        r_t := r(s_t, a_t, g)
        Store the transition (s_t || g, a_t, r_t, s_{t+1} || g) in R      ▷ standard experience replay
        Sample a set of additional goals for replay G := S(current episode)
        for g' ∈ G do
            r' := r(s_t, a_t, g')
            Store the transition (s_t || g', a_t, r', s_{t+1} || g') in R ▷ HER
        end for
    end for
    for t = 1, N do
        Sample a minibatch B from the replay buffer R
        Perform one step of optimization using A and minibatch B
    end for
end for
Adam (Kingma and Ba, 2014) as the optimizer. See Appendix A for more details and the values of all
hyperparameters.
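A compact sketch of the relabeling step of Algorithm 1 with the final strategy, assuming episodes are stored as lists of (s, a, s') tuples; the helper names are ours, not the released code:

def her_store_episode(episode, goal, replay_buffer, reward_fn, m):
    # Store each transition with the original goal and, additionally, with
    # the goal m(s_T) achieved at the end of the episode (strategy 'final').
    achieved = m(episode[-1][2])  # episode is a list of (s, a, s_next) tuples
    for (s, a, s_next) in episode:
        for g in (goal, achieved):
            r = reward_fn(s_next, g)  # e.g. -[f_g(s_next) = 0]
            replay_buffer.append(((s, g), a, r, (s_next, g)))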
We consider 3 different tasks:
1. Pushing. In this task a box is placed on a table in front of the robot and the task is to move
it to the target location on the table. The robot fingers are locked to prevent grasping. The
learned behaviour is a mixture of pushing and rolling.
2. Sliding. In this task a puck is placed on a long slippery table and the target position is outside
of the robot?s reach so that it has to hit the puck with such a force that it slides and then
stops in the appropriate place due to friction.
3. Pick-and-place. This task is similar to pushing but the target position is in the air and the
fingers are not locked. To make exploration in this task easier we recorded a single state in
which the box is grasped and start half of the training episodes from this state2 .
The images showing the tasks being performed can be found in Appendix C.
States: The state of the system is represented in the MuJoCo physics engine.
Goals: Goals describe the desired position of the object (a box or a puck depending on the task) with
some fixed tolerance ε, i.e. G = R^3 and f_g(s) = [|g − s_object| ≤ ε], where s_object is the position
of the object in the state s. The mapping from states to goals used in HER is simply m(s) = s_object.
Rewards: Unless stated otherwise we use binary and sparse rewards r(s, a, g) = −[f_g(s') = 0],
where s' is the state after the execution of the action a in the state s. We compare sparse and shaped
reward functions in Sec. 4.4.
State-goal distributions: For all tasks the initial position of the gripper is fixed, while the initial
position of the object and the target are randomized. See Appendix A for details.
Observations: In this paragraph relative means relative to the current gripper position. The policy is
2 This was necessary because we could not successfully train any policies for this task without using the
demonstration state. We have later discovered that training is possible without this trick if only the goal position
is sometimes on the table and sometimes in the air.
given as input the absolute position of the gripper, the relative position of the object and the target3 ,
as well as the distance between the fingers. The Q-function is additionally given the linear velocity of
the gripper and fingers as well as relative linear and angular velocity of the object. We decided to
restrict the input to the policy in order to make deployment on the physical robot easier.
Actions: None of the problems we consider require gripper rotation and therefore we keep it fixed.
Action space is 4-dimensional. Three dimensions specify the desired relative gripper position at
the next timestep. We use MuJoCo constraints to move the gripper towards the desired position but
Jacobian-based control could be used instead4 . The last dimension specifies the desired distance
between the 2 fingers which are position controlled.
Strategy S for sampling goals for replay: Unless stated otherwise HER uses replay with the goal
corresponding to the final state in each episode, i.e. S(s0 , . . . , sT ) = m(sT ). We compare different
strategies for choosing which goals to replay with in Sec. 4.5.
4.2 Does HER improve performance?
In order to verify if HER improves performance we evaluate DDPG with and without HER on all
3 tasks. Moreover, we compare against DDPG with count-based exploration5 (Strehl and Littman,
2005; Kolter and Ng, 2009; Tang et al., 2016; Bellemare et al., 2016; Ostrovski et al., 2017). For
HER we store each transition in the replay buffer twice: once with the goal used for the generation
of the episode and once with the goal corresponding to the final state from the episode (we call this
strategy final). In Sec. 4.5 we perform ablation studies of different strategies S for choosing goals
for replay, here we include the best version from Sec. 4.5 in the plot for comparison.
Figure 2: Multiple goals.
Figure 3: Single goal.
Fig. 2 shows the learning curves for all 3 tasks6 . DDPG without HER is unable to solve any of the
tasks7 and DDPG with count-based exploration is only able to make some progress on the sliding
task. On the other hand, DDPG with HER solves all tasks almost perfectly. It confirms that HER is a
crucial element which makes learning from sparse, binary rewards possible.
4.3 Does HER improve performance even if there is only one goal we care about?
In this section we evaluate whether HER improves performance in the case where there is only one
goal we care about. To this end, we repeat the experiments from the previous section but the goal
state is identical in all episodes.
From Fig. 3 it is clear that DDPG+HER performs much better than pure DDPG even if the goal state
is identical in all episodes. More importantly, comparing Fig. 2 and Fig. 3 we can also notice that
HER learns faster if training episodes contain multiple goals, so in practice it is advisable to train on
multiple goals even if we care only about one of them.
3 The target position is relative to the current object position.
4 The successful deployment on a physical robot (Sec. 4.6) confirms that our control model produces
movements which are reproducible on the physical robot despite not being fully physically
plausible.
5 We discretize the state space and use an intrinsic reward of the form α/√N, where α is a hyperparameter and N is the number of times the given state was visited. The discretization works as follows. We take the relative position of the box and the target and then discretize every coordinate using
a grid with a stepsize which is a hyperparameter. We have performed a hyperparameter search over
α ∈ {0.032, 0.064, 0.125, 0.25, 0.5, 1, 2, 4, 8, 16, 32} and stepsizes in {1cm, 2cm, 4cm, 8cm}. The best results were
obtained using α = 1 and a stepsize of 1cm and these are the results we report.
6 An episode is considered successful if the distance between the object and the goal at the end of the episode
is less than 7cm for pushing and pick-and-place and less than 20cm for sliding. The results are averaged across 5
random seeds and shaded areas represent one standard deviation.
7 We also evaluated DQN (without HER) on our tasks and it was not able to solve any of them.
Figure 4: Ablation study of different strategies for choosing additional goals for replay. The top row
shows the highest (across the training epochs) test performance and the bottom row shows the average
test performance across all training epochs. On the right top plot the curves for final, episode and
future coincide as all these strategies achieve perfect performance on this task.
4.4 How does HER interact with reward shaping?
So far we only considered binary rewards of the form r(s, a, g) = −[|g − s_object| > ε]. In this
section we check how the performance of DDPG with and without HER changes if we replace
this reward with one which is shaped. We considered reward functions of the form r(s, a, g) =
λ|g − s_object|^p − |g − s'_object|^p, where s' is the state of the environment after the execution of the
action a in the state s and λ ∈ {0, 1}, p ∈ {1, 2} are hyperparameters.
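The two reward families compared here, written out as a sketch (our own rendering; s_obj denotes the object position extracted from the state, and all names are ours):

import numpy as np

def sparse_reward(s_obj_next, g, eps):
    # Binary reward: 0 within tolerance eps of the goal, -1 otherwise.
    return 0.0 if np.linalg.norm(g - s_obj_next) <= eps else -1.0

def shaped_reward(s_obj, s_obj_next, g, lam, p):
    # Shaped variant lam * |g - s_obj|^p - |g - s_obj'|^p with lam in {0, 1}
    # and p in {1, 2}; it rewards moving the object closer to the goal.
    return (lam * np.linalg.norm(g - s_obj) ** p
            - np.linalg.norm(g - s_obj_next) ** p)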
Surprisingly neither DDPG, nor DDPG+HER was able to successfully solve any of the tasks with any
of these reward functions8 (learning curves can be found in Appendix D). Our results are consistent
with the fact that successful applications of RL to difficult manipulation tasks which does not use
demonstrations usually have more complicated reward functions than the ones we tried (e.g. Popov
et al. (2017)).
The following two reasons can cause shaped rewards to perform so poorly: (1) There is a huge
discrepancy between what we optimize (i.e. a shaped reward function) and the success condition (i.e.:
is the object within some radius from the goal at the end of the episode); (2) Shaped rewards penalize
for inappropriate behaviour (e.g. moving the box in a wrong direction) which may hinder exploration.
It can cause the agent to learn not to touch the box at all if it can not manipulate it precisely and we
noticed such behaviour in some of our experiments.
Our results suggest that domain-agnostic reward shaping does not work well (at least in the simple
forms we have tried). Of course for every problem there exists a reward which makes it easy (Ng
et al., 1999) but designing such shaped rewards requires a lot of domain knowledge and may in some
cases not be much easier than directly scripting the policy. This strengthens our belief that learning
from sparse, binary rewards is an important problem.
4.5 How many goals should we replay each trajectory with and how to choose them?
In this section we experimentally evaluate different strategies (i.e. S in Alg. 1) for choosing goals
to use with HER. So far the only additional goals we used for replay were the ones corresponding
to the final state of the environment and we will call this strategy final. Apart from it we consider
the following strategies: future – replay with k random states which come from the same episode
as the transition being replayed and were observed after it; episode – replay with k random
states coming from the same episode as the transition being replayed; random – replay with k
random states encountered so far in the whole training procedure. All of these strategies have a
hyperparameter k which controls the ratio of HER data to data coming from normal experience replay
in the replay buffer.
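The future strategy amounts to sampling, for the transition at timestep t, up to k goals achieved later in the same episode (a sketch of our own; m maps states to goals):

import random

def future_goals(episode_states, t, k, m):
    # Strategy 'future': sample k replay goals from states observed after
    # timestep t in the same episode, mapped to goal space by m.
    later = episode_states[t + 1:]
    return [m(random.choice(later)) for _ in range(k)] if later else []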
8 We also tried to rescale the distances, so that the range of rewards is similar as in the case of binary rewards,
clipping big distances and adding a simple (linear or quadratic) term encouraging the gripper to move towards
the object but none of these techniques have led to successful training.
Figure 5: The pick-and-place policy deployed on the physical robot.
The plots comparing different strategies and different values of k can be found in Fig. 4. We can
see from the plots that all strategies apart from random solve pushing and pick-and-place almost
perfectly regardless of the values of k. In all cases future with k equal 4 or 8 performs best and it
is the only strategy which is able to solve the sliding task almost perfectly. The learning curves for
future with k = 4 can be found in Fig. 2. It confirms that the most valuable goals for replay are the
ones which are going to be achieved in the near future9 . Notice that increasing the values of k above
8 degrades performance because the fraction of normal replay data in the buffer becomes very low.
4.6 Deployment on a physical robot
We took a policy for the pick-and-place task trained in the simulator (version with the future strategy
and k = 4 from Sec. 4.5) and deployed it on a physical fetch robot without any finetuning. The box
position was predicted using a separately trained CNN using raw fetch head camera images. See
Appendix B for details.
Initially the policy succeeded in 2 out of 5 trials. It was not robust to small errors in the box position
estimation because it was trained on perfect state coming from the simulation. After retraining the
policy with gaussian noise (std=1cm) added to observations10 the success rate increased to 5/5. The
video showing some of the trials is available at https://goo.gl/SMrQnI.
5 Related work
The technique of experience replay has been introduced in Lin (1992) and became very popular
after it was used in the DQN agent playing Atari (Mnih et al., 2015). Prioritized experience replay
(Schaul et al., 2015b) is an improvement to experience replay which prioritizes transitions in the
replay buffer in order to speed up training. It it orthogonal to our work and both approaches can be
easily combined.
Learning simultaneously policies for multiple tasks have been heavily explored in the context of
policy search, e.g. Schmidhuber and Huber (1990); Caruana (1998); Da Silva et al. (2012); Kober et al.
(2012); Devin et al. (2016); Pinto and Gupta (2016). Learning off-policy value functions for multiple
tasks was investigated by Foster and Dayan (2002) and Sutton et al. (2011). Our work is most heavily
based on Schaul et al. (2015a) who considers training a single neural network approximating multiple
value functions. Learning simultaneously to perform multiple tasks has been also investigated for
a long time in the context of Hierarchical Reinforcement Learning, e.g. Bakker and Schmidhuber
(2004); Vezhnevets et al. (2017).
Our approach may be seen as a form of implicit curriculum learning (Elman, 1993; Bengio et al.,
2009). While curriculum is now often used for training neural networks (e.g. Zaremba and Sutskever
(2014); Graves et al. (2016)), the curriculum is almost always hand-crafted. The problem of automatic
curriculum generation was approached by Schmidhuber (2004) who constructed an asymptotically
optimal algorithm for this problem using program search. Another interesting approach is PowerPlay
(Schmidhuber, 2013; Srivastava et al., 2013) which is a general framework for automatic task selection.
Graves et al. (2017) consider a setup where there is a fixed discrete set of tasks and empirically
evaluate different strategies for automatic curriculum generation in this settings. Another approach
investigated by Sukhbaatar et al. (2017) and Held et al. (2017) uses self-play between the policy and
a task-setter in order to automatically generate goal states which are on the border of what the current
policy can achieve. Our approach is orthogonal to these techniques and can be combined with them.
9 We have also tried replaying the goals which are close to the ones achieved in the near future but it has not
performed better than the future strategy.
10 The Q-function approximator was trained using exact observations. It does not have to be robust to noisy
observations because it is not used during the deployment on the physical robot.
6 Conclusions
We introduced a novel technique called Hindsight Experience Replay which makes possible applying
RL algorithms to problems with sparse and binary rewards. Our technique can be combined with an
arbitrary off-policy RL algorithm and we experimentally demonstrated that with DQN and DDPG.
We showed that HER allows training policies which push, slide and pick-and-place objects with a
robotic arm to the specified positions while the vanilla RL algorithm fails to solve these tasks. We
also showed that the policy for the pick-and-place task performs well on the physical robot without
any finetuning. As far as we know, it is the first time so complicated behaviours were learned using
only sparse, binary rewards.
Acknowledgments
We would like to thank Ankur Handa, Jonathan Ho, John Schulman, Matthias Plappert, Tim Salimans,
and Vikash Kumar for providing feedback on the previous versions of this manuscript. We would
also like to thank Rein Houthooft and the whole OpenAI team for fruitful discussions as well as
Bowen Baker for performing some additional experiments.
References
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin,
M., et al. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv
preprint arXiv:1603.04467.
Bakker, B. and Schmidhuber, J. (2004). Hierarchical reinforcement learning based on subgoal discovery and
subpolicy specialization. In Proc. of the 8-th Conf. on Intelligent Autonomous Systems, pages 438–445.
Bellemare, M., Srinivasan, S., Ostrovski, G., Schaul, T., Saxton, D., and Munos, R. (2016). Unifying count-based exploration and intrinsic motivation. In Advances in Neural Information Processing Systems, pages
1471–1479.
Bengio, Y., Louradour, J., Collobert, R., and Weston, J. (2009). Curriculum learning. In Proceedings of the 26th
annual international conference on machine learning, pages 41–48. ACM.
Caruana, R. (1998). Multitask learning. In Learning to learn, pages 95–133. Springer.
Chebotar, Y., Kalakrishnan, M., Yahya, A., Li, A., Schaal, S., and Levine, S. (2016). Path integral guided policy
search. arXiv preprint arXiv:1610.00529.
Da Silva, B., Konidaris, G., and Barto, A. (2012). Learning parameterized skills. arXiv preprint arXiv:1206.6398.
Devin, C., Gupta, A., Darrell, T., Abbeel, P., and Levine, S. (2016). Learning modular neural network policies
for multi-task and multi-robot transfer. arXiv preprint arXiv:1609.07088.
Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition,
48(1):71–99.
Foster, D. and Dayan, P. (2002). Structure in the space of value functions. Machine Learning, 49(2):325–346.
Graves, A., Bellemare, M. G., Menick, J., Munos, R., and Kavukcuoglu, K. (2017). Automated curriculum
learning for neural networks. arXiv preprint arXiv:1704.03003.
Graves, A., Wayne, G., Reynolds, M., Harley, T., Danihelka, I., Grabska-Barwińska, A., Colmenarejo, S. G.,
Grefenstette, E., Ramalho, T., Agapiou, J., et al. (2016). Hybrid computing using a neural network with
dynamic external memory. Nature, 538(7626):471–476.
Gu, S., Lillicrap, T., Sutskever, I., and Levine, S. (2016). Continuous deep q-learning with model-based
acceleration. arXiv preprint arXiv:1603.00748.
Held, D., Geng, X., Florensa, C., and Abbeel, P. (2017). Automatic goal generation for reinforcement learning
agents. arXiv preprint arXiv:1705.06366.
Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). Vime: Variational
information maximizing exploration. In Advances in Neural Information Processing Systems, pages
1109–1117.
Kingma, D. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kober, J., Wilhelm, A., Oztop, E., and Peters, J. (2012). Reinforcement learning to adjust parametrized motor
primitives to new situations. Autonomous Robots, 33(4):361–379.
Kolter, J. Z. and Ng, A. Y. (2009). Near-bayesian exploration in polynomial time. In Proceedings of the 26th
Annual International Conference on Machine Learning, pages 513–520. ACM.
Levine, S., Finn, C., Darrell, T., and Abbeel, P. (2015). End-to-end training of deep visuomotor policies. arXiv
preprint arXiv:1504.00702.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. (2015).
Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971.
Lin, L.-J. (1992). Self-improving reactive agents based on reinforcement learning, planning and teaching.
Machine learning, 8(3-4):293–321.
Metz, L., Ibarz, J., Jaitly, N., and Davidson, J. (2017). Discrete sequential prediction of continuous actions for
deep rl. arXiv preprint arXiv:1705.05035.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M.,
Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning.
Nature, 518(7540):529–533.
Ng, A., Coates, A., Diel, M., Ganapathi, V., Schulte, J., Tse, B., Berger, E., and Liang, E. (2006). Autonomous
inverted helicopter flight via reinforcement learning. Experimental Robotics IX, pages 363–372.
Ng, A. Y., Harada, D., and Russell, S. (1999). Policy invariance under reward transformations: Theory and
application to reward shaping. In ICML, volume 99, pages 278–287.
Osband, I., Blundell, C., Pritzel, A., and Van Roy, B. (2016). Deep exploration via bootstrapped dqn. In
Advances In Neural Information Processing Systems, pages 4026–4034.
Ostrovski, G., Bellemare, M. G., Oord, A. v. d., and Munos, R. (2017). Count-based exploration with neural
density models. arXiv preprint arXiv:1703.01310.
Peters, J. and Schaal, S. (2008). Reinforcement learning of motor skills with policy gradients. Neural networks,
21(4):682–697.
Pinto, L. and Gupta, A. (2016). Learning to push by grasping: Using multiple tasks for effective learning. arXiv
preprint arXiv:1609.09025.
Popov, I., Heess, N., Lillicrap, T., Hafner, R., Barth-Maron, G., Vecerik, M., Lampe, T., Tassa, Y., Erez, T., and
Riedmiller, M. (2017). Data-efficient deep reinforcement learning for dexterous manipulation. arXiv preprint
arXiv:1704.03073.
Schaul, T., Horgan, D., Gregor, K., and Silver, D. (2015a). Universal value function approximators. In
Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1312–1320.
Schaul, T., Quan, J., Antonoglou, I., and Silver, D. (2015b). Prioritized experience replay. arXiv preprint
arXiv:1511.05952.
Schmidhuber, J. (2004). Optimal ordered problem solver. Machine Learning, 54(3):211–254.
Schmidhuber, J. (2013). Powerplay: Training an increasingly general problem solver by continually searching
for the simplest still unsolvable problem. Frontiers in psychology, 4.
Schmidhuber, J. and Huber, R. (1990). Learning to generate focus trajectories for attentive vision. Institut für
Informatik.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou,
I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and
tree search. Nature, 529(7587):484–489.
Srivastava, R. K., Steunebrink, B. R., and Schmidhuber, J. (2013). First experiments with powerplay. Neural
Networks, 41:130–136.
Strehl, A. L. and Littman, M. L. (2005). A theoretical analysis of model-based interval estimation. In Proceedings
of the 22nd international conference on Machine learning, pages 856–863. ACM.
Sukhbaatar, S., Kostrikov, I., Szlam, A., and Fergus, R. (2017). Intrinsic motivation and automatic curricula via
asymmetric self-play. arXiv preprint arXiv:1703.05407.
Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. (2011). Horde: A
scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In The
10th International Conference on Autonomous Agents and Multiagent Systems-Volume 2, pages 761–768.
International Foundation for Autonomous Agents and Multiagent Systems.
Tang, H., Houthooft, R., Foote, D., Stooke, A., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P.
(2016). # exploration: A study of count-based exploration for deep reinforcement learning. arXiv preprint
arXiv:1611.04717.
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. (2017). Domain randomization for
transferring deep neural networks from simulation to the real world. arXiv preprint arXiv:1703.06907.
Todorov, E., Erez, T., and Tassa, Y. (2012). Mujoco: A physics engine for model-based control. In Intelligent
Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026?5033. IEEE.
Vezhnevets, A. S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D., and Kavukcuoglu, K. (2017).
Feudal networks for hierarchical reinforcement learning. arXiv preprint arXiv:1703.01161.
Zaremba, W. and Sutskever, I. (2014). Learning to execute. arXiv preprint arXiv:1410.4615.
6,734 | 7,091 | Stochastic and Adversarial Online Learning without
Hyperparameters
Ashok Cutkosky
Department of Computer Science
Stanford University
[email protected]
Kwabena Boahen
Department of Bioengineering
Stanford University
[email protected]
Abstract
Most online optimization algorithms focus on one of two things: performing well
in adversarial settings by adapting to unknown data parameters (such as Lipschitz
constants), typically achieving O(\sqrt{T}) regret, or performing well in stochastic
settings where they can leverage some structure in the losses (such as strong
convexity), typically achieving O(\log(T)) regret. Algorithms that focus on the
former problem hitherto achieved O(\sqrt{T}) in the stochastic setting rather than
O(\log(T)). Here we introduce an online optimization algorithm that achieves
O(\log^4(T)) regret in a wide class of stochastic settings while gracefully degrading
to the optimal O(\sqrt{T}) regret in adversarial settings (up to logarithmic factors).
Our algorithm does not require any prior knowledge about the data or tuning of
parameters to achieve superior performance.
1  Extending Adversarial Algorithms to Stochastic Settings
The online convex optimization (OCO) paradigm [1, 2] can be used to model a large number of
scenarios of interest, such as streaming problems, adversarial environments, or stochastic optimization.
In brief, an OCO algorithm plays T rounds of a game in which on each round the algorithm outputs
a vector w_t in some convex space W, and then receives a loss function \ell_t : W \to \mathbb{R} that
is convex. The algorithm's objective is to minimize regret, which is the total loss of all rounds
relative to w^\star, the minimizer of \sum_{t=1}^T \ell_t in W:

    R_T(w^\star) = \sum_{t=1}^T \ell_t(w_t) - \ell_t(w^\star)
OCO algorithms typically either make as few as possible assumptions about the `t while attempting
to perform well (adversarial settings), or assume that the `t have some particular structure that can
be leveraged to perform much better (stochastic settings). For the adversarial setting, the minimax
optimal regret is O(B L_{\max} \sqrt{T}), where B is the diameter of W and L_{\max} is the maximum
Lipschitz constant of the losses [3]. A wide variety of algorithms achieve this bound without prior
knowledge of one or both of B and L_{\max} [4, 5, 6, 7], resulting in hyperparameter-free algorithms.
In the stochastic setting, it was recently shown that for a class of problems (those satisfying the
so-called Bernstein condition), one can achieve regret O(d B L_{\max} \log(T)), where W \subseteq R^d,
using the MetaGrad algorithm [8, 9]. This approach requires knowledge of the parameter L_{\max}.
In this paper, we extend an algorithm for the parameter-free adversarial setting [7] to the stochastic
setting, achieving both optimal regret in adversarial settings as well as logarithmic regret in a wide
class of stochastic settings, without needing to tune parameters. Our class of stochastic settings is
those for which E[\partial \ell_t(w_t)] is aligned with w_t - w^\star, quantified by a value \alpha
that increases with increasing alignment. We call losses in this class \alpha-acutely convex, and show
that a single quadratic lower bound on the average loss is sufficient to ensure high \alpha.
This paper is organized as follows. In Section 2, we provide an overview of our approach. In Section
3, we give explicit pseudo-code and prove our regret bounds for the adversarial setting. In Section
4, we formally define \alpha-acute convexity and prove regret bounds for the acutely convex stochastic
setting. Finally, in Section 5, we give some motivating examples of acutely convex stochastic losses.
Section 6 concludes the paper.
2  Overview of Approach
Before giving the overview, we fix some notation. We assume our domain W is a closed convex subset of
a Hilbert space with 0 \in W. We write g_t for an arbitrary subgradient of \ell_t at w_t for all t,
which we denote by g_t \in \partial \ell_t(w_t). L_{\max} is the maximum Lipschitz constant of all the
\ell_t, and B is the diameter of the space W. The norm \|\cdot\| we use is the 2-norm:
\|w\| = \sqrt{w \cdot w}. We observe that since each \ell_t is convex, we have
R_T(w^\star) \le \sum_{t=1}^T g_t \cdot (w_t - w^\star). We will make heavy use of this inequality;
every regret bound we state will in fact be an upper bound on \sum_{t=1}^T g_t \cdot (w_t - w^\star).
Finally, we use a compressed sum notation g_{1:t} = \sum_{t'=1}^t g_{t'} (and similarly
\|g\|_{1:t} = \sum_{t'=1}^t \|g_{t'}\|), and we use \tilde{O} to suppress logarithmic terms in big-Oh
notation. All proofs omitted from the main text appear in the appendix.
Our algorithm works by trading off some performance in order to avoid knowledge of problem parameters.
Prior analysis of the MetaGrad algorithm [9] showed that any algorithm guaranteeing
R_T(w^\star) = \tilde{O}\big( \sqrt{\sum_{t=1}^T (g_t \cdot (w_t - w^\star))^2} \big) will obtain
logarithmic regret for stochastic settings satisfying the Bernstein condition. We will instead
guarantee the weaker regret bound:

    R_T(w^\star) \le \tilde{O}\Big( \sqrt{L_{\max} \sum_{t=1}^T \|g_t\| \, \|w_t - w^\star\|^2} \Big)    (1)

which we will show in turn implies \sqrt{T} regret in adversarial settings and logarithmic regret for
acutely convex stochastic settings. Although (1) is weaker than the MetaGrad regret bound, we can
obtain it without prior knowledge.
In order to come up with an algorithm that achieves the bound (1), we interpret it as the square root
of E[\|w - w^\star\|^2], where w takes on value w_t with probability proportional to \|g_t\|. This
allows us to use the bias-variance decomposition to write (1) as:

    R_T(w^\star) \le \tilde{O}\Big( \|\bar{w} - w^\star\| \sqrt{L_{\max} \|g\|_{1:T}}
        + \sqrt{\sum_{t=1}^T L_{\max} \|g_t\| \, \|w_t - \bar{w}\|^2} \Big)    (2)

where \bar{w} = \sum_{t=1}^T \|g_t\| w_t / \|g\|_{1:T}. Certain algorithms for unconstrained OCO can
achieve R_T(u) = \tilde{O}(\|u\| \sqrt{L_{\max} \|g\|_{1:T}}) simultaneously for all u \in W
[10, 6, 11, 7]. Thus if we knew \bar{w} ahead of time, we could translate the predictions of one such
algorithm by \bar{w} to obtain R_T(w^\star) \le \tilde{O}(\|\bar{w} - w^\star\| \sqrt{L_{\max} \|g\|_{1:T}}),
the bias term of (2). We do not know \bar{w}, but we can estimate it over time. Errors in the
estimation procedure will cause us to incur the variance term of (2). We implement this strategy by
modifying FreeRex [7], an unconstrained OCO algorithm that does not require prior knowledge of any
parameters.
Our modification to FreeRex is very simple: we set w_t = \hat{w}_t + \bar{w}_{t-1}, where \hat{w}_t is
the t-th output of FreeRex, and \bar{w}_{t-1} is (approximately) a weighted average of the previous
vectors w_1, \dots, w_{t-1}, with the weight of w_t equal to \|g_t\|. This \bar{w} offset can be
viewed as a kind of momentum term that accelerates us towards optimal points when the losses are
stochastic (which tends to cause correlated w_t and therefore large offsets), but has very little
effect when the losses are adversarial (which tends to cause uncorrelated w_t and therefore small
offsets).
3  FreeRexMomentum

In this section, we explicitly describe and analyze our algorithm, FreeRexMomentum, a modification of
FreeRex. FreeRex is a Follow-the-Regularized-Leader (FTRL) algorithm, which means that for all t,
there is some regularizer function \psi_t such that w_{t+1} = \mathrm{argmin}_W\, \psi_t(w) + g_{1:t} \cdot w.
Specifically, FreeRex uses \psi_t(w) = \frac{\sqrt{5}}{a_t \eta_t} \psi(a_t w), where
\psi(w) = (\|w\| + 1) \log(\|w\| + 1) - \|w\|, and \eta_t and a_t are specific numbers that grow over
time as specified in Algorithm 1. FreeRexMomentum's predictions are given by offsetting FreeRex's
predictions w_{t+1} by a momentum term \bar{w}_t = \sum_{t'=1}^{t-1} \|g_{t'}\| w_{t'} / (1 + \|g\|_{1:t}).
We accomplish this by shifting the regularizers \psi_t by \bar{w}_t, so that FreeRexMomentum is FTRL
with regularizers \psi_t(w - \bar{w}_t).
Algorithm 1 FreeRexMomentum
  Initialize: 1/\eta_0^2 \leftarrow 0, a_0 \leftarrow 0, w_1 \leftarrow 0, L_0 \leftarrow 0,
              \psi(w) = (\|w\| + 1) \log(\|w\| + 1) - \|w\|
  for t = 1 to T do
    Play w_t
    Receive subgradient g_t \in \partial \ell_t(w_t)
    L_t \leftarrow \max(L_{t-1}, \|g_t\|)    // L_t = \max_{t' \le t} \|g_{t'}\|
    1/\eta_t^2 \leftarrow \max\big( 1/\eta_{t-1}^2 + 2\|g_t\|^2,\ L_t \|g_{1:t}\| \big)
    a_t \leftarrow \max\big( a_{t-1},\ 1/(L_t \eta_t)^2 \big)
    \bar{w}_t \leftarrow \sum_{t'=1}^{t-1} \|g_{t'}\| w_{t'} / (1 + \|g\|_{1:t})
    w_{t+1} \leftarrow \mathrm{argmin}_{w \in W} \Big[ \frac{\sqrt{5}\, \psi(a_t (w - \bar{w}_t))}{a_t \eta_t} + g_{1:t} \cdot w \Big]
  end for
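To make the state updates concrete, here is a minimal NumPy sketch of Algorithm 1 (ours, not the
authors' code; all names are our own). It uses the unconstrained closed-form solution of the argmin,
derived later as equation (3) in Section 3.2, and is therefore only valid while the minimizer stays in
the interior of W:

import numpy as np

class FreeRexMomentum:
    """Minimal sketch of Algorithm 1 (ours, not the authors' code).

    The constrained argmin over W is replaced by the unconstrained
    closed-form solution of equation (3) in Section 3.2, so this sketch
    only covers the case where the minimizer stays in the interior of W.
    """

    def __init__(self, dim):
        self.w = np.zeros(dim)           # current prediction w_t
        self.inv_eta_sq = 0.0            # 1/eta_t^2
        self.a = 0.0                     # a_t
        self.L = 0.0                     # L_t = max_{t' <= t} ||g_{t'}||
        self.g_sum = np.zeros(dim)       # g_{1:t}, compressed sum of gradients
        self.norm_sum = 0.0              # ||g||_{1:t}, sum of gradient norms
        self.weighted_w = np.zeros(dim)  # sum_{t'} ||g_{t'}|| w_{t'} over past rounds

    def predict(self):
        return self.w

    def update(self, g):
        g = np.asarray(g, dtype=float)
        g_norm = np.linalg.norm(g)
        prev_w = self.w.copy()           # ||g_t|| pairs with the point w_t we just played
        self.g_sum += g
        self.norm_sum += g_norm
        self.L = max(self.L, g_norm)
        self.inv_eta_sq = max(self.inv_eta_sq + 2.0 * g_norm ** 2,
                              self.L * np.linalg.norm(self.g_sum))
        if self.inv_eta_sq > 0.0 and self.L > 0.0:
            eta = 1.0 / np.sqrt(self.inv_eta_sq)
            self.a = max(self.a, 1.0 / (self.L * eta) ** 2)
        # momentum term: weighted average of w_1 .. w_{t-1}
        w_bar = self.weighted_w / (1.0 + self.norm_sum)
        gs_norm = np.linalg.norm(self.g_sum)
        if gs_norm > 0.0 and self.a > 0.0:
            eta = 1.0 / np.sqrt(self.inv_eta_sq)
            step = (np.exp(eta * gs_norm / np.sqrt(5.0)) - 1.0) / self.a
            self.w = w_bar - step * self.g_sum / gs_norm
        else:
            self.w = w_bar
        self.weighted_w += g_norm * prev_w

On each round one would call predict(), suffer a loss, and feed the resulting subgradient to update().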
3.1  Regret Analysis
We leverage the description of FreeRexMomentum in terms of shifted regularizers to prove a regret
bound of the same form as (1) in four steps:

  1. From [7] Theorem 13, we bound the regret by

         R_T(w^\star) \le \sum_{t=1}^T g_t \cdot (w_t - w^\star)
             \le \psi_T(w^\star)
               + \sum_{t=1}^T \big[ \psi^+_{t-1}(w^+_{t+1}) - \psi^+_t(w^+_{t+1}) + g_t \cdot (w_t - w^+_{t+1}) \big]
               + \psi^+_T(w^\star) - \psi_T(w^\star)
               + \sum_{t=1}^{T-1} \big[ \psi^+_t(w^+_{t+2}) - \psi_t(w^+_{t+2}) \big]

     where \psi^+_t(w) = \frac{\sqrt{5}\, \psi(a_t(w - \bar{w}_{t-1}))}{a_t \eta_t} is a version of
     \psi_t shifted by \bar{w}_{t-1} instead of \bar{w}_t, and
     w^+_{t+1} = \mathrm{argmin}_W\, \psi^+_t(w) + g_{1:t} \cdot w. This breaks the regret out into two
     sums, one in which we have the term \psi^+_{t-1}(w^+_{t+1}) - \psi^+_t(w^+_{t+1}), for which the
     two different functions are shifted by the same amount, and one with the term
     \psi^+_t(w^+_{t+2}) - \psi_t(w^+_{t+2}), for which the functions are shifted differently, but the
     arguments are the same.
  2. Because \psi^+_{t-1} and \psi^+_t are shifted by the same amount, the regret analysis for FreeRex
     in [7] applies to the second line of the regret bound, yielding a quantity similar to
     \|w^\star - \bar{w}_T\| \sqrt{L_{\max} \|g\|_{1:T}}.
  3. Next, we analyze the third line. We show that \bar{w}_t - \bar{w}_{t-1} cannot be too big, and use
     this observation to bound the third line with a quantity similar to
     \sqrt{\sum_{t=1}^T L_{\max} \|g_t\| \|w_t - \bar{w}_T\|^2}. At this point we have enough results
     to prove a bound of the form (2) (see Theorem 1).
  4. Finally, we perform some algebraic manipulation on the bound from the first three steps to obtain
     a bound of the form (1) (see Corollary 2).
The details of Steps 1-3 are in the appendix, resulting in Theorem 1, stated below. Step 4 is carried
out in Corollary 2, which follows.

Theorem 1. Let \psi(w) = (\|w\|+1) \log(\|w\|+1) - \|w\|. Set L_t = \max_{t' \le t} \|g_{t'}\| and
Q_T = 2 \|g\|_{1:T} / L_{\max}. Define \eta_t and a_t as in the pseudo-code for FreeRexMomentum
(Algorithm 1). Then the regret of FreeRexMomentum is bounded by:

    \sum_{t=1}^T g_t \cdot (w_t - w^\star)
      \le \frac{\sqrt{5}}{Q_T \eta_T} \psi(Q_T (w^\star - \bar{w}_T)) + 405 L_{\max} + 2 L_{\max} B
          + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(B a_T + 1)
          + \sqrt{2 L_{\max} \Big( \|\bar{w}_T\|^2 + \sum_{t=1}^T \|g_t\| \|w_t - \bar{w}_T\|^2 \Big)
                  \Big( 2 + \log \frac{1 + \|g\|_{1:T}}{1 + \|g_1\|} \Big) \log(B a_T + 1)}
Corollary 2. Under the assumptions and notation of Theorem 1, the regret of FreeRexMomentum is
bounded by:

    \sum_{t=1}^T g_t \cdot (w_t - w^\star)
      \le 2\sqrt{5} \sqrt{L_{\max} \Big( \|w^\star\|^2 + \sum_{t=1}^T \|g_t\| \|w^\star - w_t\|^2 \Big)
                          \log(2BT + 1)\, (2 + \log(T))}
          + 405 L_{\max} + 2 L_{\max} B + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(2BT + 1)
Observe that since w_t and w^\star are both in W, \|w^\star\| and \|w_t - w^\star\| are both at most B,
so that Corollary 2 implies that FreeRexMomentum achieves \tilde{O}(B L_{\max} \sqrt{T}) regret in the
worst case, which is optimal up to logarithmic factors.
3.2  Efficient Implementation for L-infinity Balls

A careful reader may notice that the procedure for FreeRexMomentum involves computing
\mathrm{argmin}_W \big[ \frac{\sqrt{5}\, \psi(a_t(w - \bar{w}_t))}{a_t \eta_t} + g_{1:t} \cdot w \big],
which may not be easy if the solution w_{t+1} is on the boundary of W. When w_{t+1} is not on the
boundary of W, we have a closed-form update:

    w_{t+1} = \bar{w}_t - \frac{g_{1:t}}{a_t \|g_{1:t}\|}
              \Big( \exp\Big( \frac{\eta_t \|g_{1:t}\|}{\sqrt{5}} \Big) - 1 \Big)    (3)

However, when w_{t+1} lies on the boundary of W, it is not clear how to compute it for general W. In
this section we offer a simple strategy for the case that W is an L-infinity ball,
W = \prod_{i=1}^d [-b, b].
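As a quick sanity check on the closed form (3), the following one-dimensional snippet (ours, not from
the paper; the constants are arbitrary test values) compares it against a direct numerical
minimization of the FTRL objective using scipy:

import numpy as np
from scipy.optimize import minimize_scalar

# Sanity check (ours) that equation (3) minimizes the unconstrained FTRL
# objective of FreeRexMomentum in one dimension.
psi = lambda w: (abs(w) + 1) * np.log(abs(w) + 1) - abs(w)

a_t, eta_t, w_bar, g_sum = 2.0, 0.5, 0.3, 1.7   # arbitrary positive test values

objective = lambda w: np.sqrt(5) * psi(a_t * (w - w_bar)) / (a_t * eta_t) + g_sum * w

closed_form = w_bar - (g_sum / (a_t * abs(g_sum))) * (np.exp(eta_t * abs(g_sum) / np.sqrt(5)) - 1)
numeric = minimize_scalar(objective).x

print(closed_form, numeric)   # the two should agree to numerical precision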
In this setting, we can use the standard trick (e.g. see [12]) of running a separate copy of
FreeRexMomentum for each coordinate. That is, we observe that

    R_T(w^\star) \le \sum_{t=1}^T g_t \cdot (w_t - w^\star) = \sum_{i=1}^d \sum_{t=1}^T g_{t,i} (w_{t,i} - w^\star_i)    (4)

so that if we run an independent online learning algorithm on each coordinate, using the coordinates
of the gradients g_{t,i} as losses, then the total regret is at most the sum of the individual
regrets. More detailed pseudocode is given in Algorithm 2.

Coordinate-wise FreeRexMomentum is easily implementable in time O(d) per update because the
FreeRexMomentum update is easy to perform in one dimension: if the update (3) is outside the domain
[-b, b], simply set w_{t+1} to b or -b, whichever is closer to the unconstrained update. Therefore,
coordinate-wise FreeRexMomentum can be computed in O(d) time per update.

We bound the regret of coordinate-wise FreeRexMomentum using Corollary 2 and Equation (4), resulting
in the following Corollary.
Algorithm 2 Coordinate-Wise FreeRexMomentum
  Initialize: w_1 = 0, and d copies of FreeRexMomentum, F_1, ..., F_d, where each F_i uses domain
              W = [-b, b]
  for t = 1 to T do
    Play w_t, receive subgradient g_t
    for i = 1 to d do
      Give g_{t,i} to F_i
      Get w_{t+1,i} \in [-b, b] from F_i
    end for
  end for
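A minimal sketch of Algorithm 2 (ours, not the authors' code) instantiates d independent
one-dimensional units, each applying the closed-form update (3) and then clipping to [-b, b] as
described above:

import numpy as np

class FreeRex1D:
    """One-dimensional FreeRexMomentum unit (our sketch): closed-form
    update (3), clipped to [-b, b] as described in the text above."""

    def __init__(self, b):
        self.b = b
        self.w = 0.0
        self.inv_eta_sq = 0.0   # 1/eta_t^2
        self.a = 0.0            # a_t
        self.L = 0.0            # L_t
        self.g_sum = 0.0        # g_{1:t}
        self.norm_sum = 0.0     # ||g||_{1:t}
        self.weighted_w = 0.0   # sum of |g_{t'}| * w_{t'} over past rounds

    def update(self, g):
        prev_w = self.w
        self.g_sum += g
        self.norm_sum += abs(g)
        self.L = max(self.L, abs(g))
        self.inv_eta_sq = max(self.inv_eta_sq + 2.0 * g * g, self.L * abs(self.g_sum))
        if self.inv_eta_sq > 0.0 and self.L > 0.0:
            eta = 1.0 / np.sqrt(self.inv_eta_sq)
            self.a = max(self.a, 1.0 / (self.L * eta) ** 2)
        w_bar = self.weighted_w / (1.0 + self.norm_sum)
        if self.g_sum != 0.0 and self.a > 0.0:
            eta = 1.0 / np.sqrt(self.inv_eta_sq)
            w = w_bar - np.sign(self.g_sum) * (np.exp(eta * abs(self.g_sum) / np.sqrt(5.0)) - 1.0) / self.a
        else:
            w = w_bar
        self.w = float(np.clip(w, -self.b, self.b))   # clip to the 1-D domain
        self.weighted_w += abs(g) * prev_w

class CoordinatewiseFreeRexMomentum:
    """Algorithm 2: one independent FreeRex1D per coordinate."""

    def __init__(self, dim, b):
        self.units = [FreeRex1D(b) for _ in range(dim)]

    def predict(self):
        return np.array([u.w for u in self.units])

    def update(self, g):
        for unit, g_i in zip(self.units, g):
            unit.update(float(g_i))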
Corollary 3. The regret of coordinate-wise FreeRexMomentum is bounded by:

    \sum_{t=1}^T g_t \cdot (w_t - w^\star)
      \le 2\sqrt{5} \sqrt{d L_{\max} \Big( d \|w^\star\|^2 + \sum_{t=1}^T \|g_t\| \|w^\star - w_t\|^2 \Big)
                          \log(2Tb + 1)\, (2 + \log(T))}
          + 405 d L_{\max} + 2 L_{\max} d b + 3 d \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, b \log(2bT + 1)

4  Logarithmic Regret in Stochastic Problems
In this section we formally define \alpha-acute convexity and show that FreeRexMomentum achieves
logarithmic regret for \alpha-acutely convex losses. As a warm-up, we first consider the simplest case
in which the loss functions \ell_t are fixed, \ell_t = \ell for all t. After showing logarithmic
regret for this case, we will then generalize to more complicated stochastic settings.

Intuitively, an acutely convex loss function \ell is one for which the gradient g_t is aligned with
the vector w_t - w^\star, where w^\star = \mathrm{argmin}\, \ell, as defined below.

Definition 4. A convex function \ell is \alpha-acutely convex on a set W if \ell has a global minimum
at some w^\star \in W and for all w \in W, for all subgradients g \in \partial \ell(w), we have

    g \cdot (w - w^\star) \ge \alpha \|g\| \|w - w^\star\|^2
With this definition in hand, we can show logarithmic regret in the case where \ell_t = \ell for all
t, for some \alpha-acutely convex function \ell. From Corollary 2, with
w^\star = \mathrm{argmin}\, \ell, we have

    \sum_{t=1}^T g_t \cdot (w_t - w^\star)
      \le \tilde{O}\Big( \sqrt{L_{\max} \Big( \|w^\star\|^2 + \sum_{t=1}^T \|g_t\| \|w^\star - w_t\|^2 \Big)} \Big)
      \le \tilde{O}\Big( \sqrt{L_{\max} \Big( \|w^\star\|^2 + \frac{1}{\alpha} \sum_{t=1}^T g_t \cdot (w_t - w^\star) \Big)} \Big)    (5)
where the \tilde{O} notation suppresses terms whose dependence on T is at most O(\log^2(T)). Now we
need a small Proposition:
Proposition 5. If a, b, c and d are non-negative constants such that

    x \le a \sqrt{bx + c} + d

then

    x \le 4a^2 b + 2a \sqrt{c} + 2d
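The proof is deferred to the appendix; for convenience, here is our own short two-case derivation (not
taken from the paper):

% Our sketch of a proof of Proposition 5.
% Case 1: if x \le 2d, the claim holds trivially.
% Case 2: otherwise x - d \ge x/2, so
\frac{x}{2} \;\le\; x - d \;\le\; a\sqrt{bx + c}
\;\;\Longrightarrow\;\;
x^2 \;\le\; 4a^2 b\, x + 4a^2 c
\;\;\Longrightarrow\;\;
x \;\le\; 2a^2 b + \sqrt{4a^4 b^2 + 4a^2 c} \;\le\; 4a^2 b + 2a\sqrt{c},
% using \sqrt{u + v} \le \sqrt{u} + \sqrt{v}. Combining the two cases gives
% x \le 4a^2 b + 2a\sqrt{c} + 2d.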
Applying Proposition 5 to Equation (5) with x = \sum_{t=1}^T g_t \cdot (w_t - w^\star) yields

    R_T(w^\star) \le \tilde{O}\Big( \frac{L_{\max} \|w^\star\|}{\alpha} \Big)

where the \tilde{O} again suppresses logarithmic terms, now with dependence on T at most O(\log^4(T)).
Having shown that FreeRexMomentum achieves logarithmic regret on fixed \alpha-acutely convex losses,
we now generalize to stochastic losses. In order to do this we will necessarily have to make some
assumptions about the process generating the stochastic losses. We encapsulate these assumptions in a
stochastic version of \alpha-acute convexity, given below.

Definition 6. Suppose for all t, g_t is such that E[g_t | g_1, \dots, g_{t-1}] \in \partial \ell(w_t)
for some convex function \ell with minimum at w^\star. Then we say g_t is \alpha-acutely convex in
expectation if:

    E[g_t] \cdot (w_t - w^\star) \ge \alpha\, E[\|g_t\| \|w_t - w^\star\|^2]

where all expectations are conditioned on g_1, \dots, g_{t-1}.
Using this definition, a fairly straightforward calculation gives us the following result.

Theorem 7. Suppose g_t is \alpha-acutely convex in expectation and g_t is bounded,
\|g_t\| \le L_{\max}, with probability 1. Then FreeRexMomentum achieves expected regret:

    E[R_T(w^\star)] \le \tilde{O}\Big( \frac{L_{\max} \|w^\star\|}{\alpha} \Big)

Proof. Throughout this proof, all expectations are conditioned on prior subgradients. By Corollary 2
and Jensen's inequality we have

    E\Big[ \sum_{t=1}^T g_t \cdot (w_t - w^\star) \Big]
      \le E\Big[ 405 L_{\max} + 2 L_{\max} B + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(2BT + 1) \Big]
          + 2\sqrt{5}\, E\Big[ \sqrt{L_{\max} \Big( \|w^\star\|^2
              + \sum_{t=1}^T \|g_t\| \|w^\star - w_t\|^2 \Big) \log(2TB + 1)\, (2 + \log(T))} \Big]

      \le 405 L_{\max} + 2 L_{\max} B + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(2BT + 1)
          + 2\sqrt{5} \sqrt{L_{\max} \Big( \|w^\star\|^2
              + \sum_{t=1}^T E[\|g_t\| \|w^\star - w_t\|^2] \Big) \log(2TB + 1)\, (2 + \log(T))}

      \le 405 L_{\max} + 2 L_{\max} B + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(2BT + 1)
          + 2\sqrt{5} \sqrt{L_{\max} \Big( \|w^\star\|^2
              + \frac{1}{\alpha} \sum_{t=1}^T E[g_t \cdot (w_t - w^\star)] \Big) \log(2TB + 1)\, (2 + \log(T))}

Set R = E\big[ \sum_{t=1}^T g_t \cdot (w_t - w^\star) \big]. Then we have shown

    R \le 2\sqrt{5} \sqrt{\Big( L_{\max} \|w^\star\|^2 + \frac{R}{\alpha} \Big) \log(2TB + 1)\, (2 + \log(T))}
          + 405 L_{\max} + 2 L_{\max} B + 3 \sqrt{L_{\max} 2^{L_{\max}/(1+L_1)}}\, B \log(BT + 1)
      = \tilde{O}\Big( \sqrt{L_{\max} \|w^\star\|^2 + \frac{R}{\alpha}} \Big)

And now we use Proposition 5 to conclude:

    \sum_{t=1}^T E[g_t \cdot (w_t - w^\star)] = \tilde{O}\Big( \frac{L_{\max} \|w^\star\|}{\alpha} \Big)

as desired, where again \tilde{O} hides at most an O(\log^4(T)) dependence on T.

Exactly the same argument with an extra factor of d applies to the regret of FreeRexMomentum with
coordinate-wise updates.
5  Examples of \alpha-acute convexity in expectation
In this section, we show that \alpha-acute convexity in expectation is a condition that arises in
practice, justifying the relevance of our logarithmic regret bounds. To do this, we show that a
quadratic lower bound on the expected loss implies \alpha-acute convexity, demonstrating that acute
convexity is a weaker condition than strong convexity.

Proposition 8. Suppose E[g_t | g_1, \dots, g_{t-1}] \in \partial \ell(w_t) for some convex \ell such
that for some \mu > 0 and w^\star = \mathrm{argmin}\, \ell, \ell(w) - \ell(w^\star) \ge
\frac{\mu}{2} \|w - w^\star\|^2 for all w \in W. Suppose \|g\| \le L_{\max} with probability 1. Then
g_t is \frac{\mu}{2 L_{\max}}-acutely convex in expectation.

Proof. By convexity and the hypothesis of the proposition:

    E[g_t] \cdot (w_t - w^\star) \ge \ell(w_t) - \ell(w^\star) \ge \frac{\mu}{2} \|w_t - w^\star\|^2
        \ge \frac{\mu}{2 L_{\max}} E[\|g_t\| \|w_t - w^\star\|^2]
With Proposition 8, we see that FreeRexMomentum obtains logarithmic regret for any loss that is larger
than a quadratic, without requiring knowledge of the parameter \mu or the Lipschitz bound L_{\max}.
Further, this result requires only the expected loss \ell = E[\ell_t] to have a quadratic lower bound;
the individual losses \ell_t themselves need not do so.

The boundedness of W makes it surprisingly easy to have a quadratic lower bound. Although a quadratic
lower bound for a function \ell is easily implied by strong convexity, the quadratic lower bound is a
significantly weaker condition. For example, since W has diameter B, \|w\| \ge \frac{1}{B} \|w\|^2,
and so the absolute value is \frac{1}{B}-acutely convex, but not strongly convex. The following
Proposition shows that the existence of a quadratic lower bound is actually a local condition; so long
as the expected loss \ell has a quadratic lower bound in a neighborhood of w^\star, it must do so over
the entire space W:

Proposition 9. Suppose \ell : W \to \mathbb{R} is a convex function such that
\ell(w) - \ell(w^\star) \ge \frac{\mu}{2} \|w - w^\star\|^2 for all w with \|w - w^\star\| \le r. Then
\ell(w) - \ell(w^\star) \ge \min\big( \frac{\mu r}{2B}, \frac{\mu}{2} \big) \|w - w^\star\|^2 for all
w \in W.
Proof. We translate by w^\star to assume without loss of generality that w^\star = 0. Then the
statement is clear for \|w\| \le r. By convexity,

    \ell(w) - \ell(w^\star) \ge \frac{\|w\|}{r} \Big[ \ell\Big( \frac{r w}{\|w\|} \Big) - \ell(w^\star) \Big]
        \ge \frac{\mu r}{2} \|w\| \ge \frac{\mu r}{2B} \|w\|^2.
Finally, we provide a simple motivating example of an interesting problem we can solve with an
\alpha-acutely convex loss that is not strongly convex: computing the median.

Proposition 10. Let W = [a, b], and \ell_t(w) = |w - x_t|, where each x_t is drawn i.i.d. from some
fixed distribution with a continuous cumulative distribution function D, and assume D(x^\star) = 1/2.
Further, suppose |2D(w) - 1| \ge F |w - x^\star| for all |w - x^\star| \le G. Suppose
g_t = \ell_t'(w_t) for w_t \ne x_t, and g_t = \pm 1 with equal probability if w_t = x_t. Then g_t is
\min\big( \frac{FG}{b-a}, F \big)-acutely convex in expectation.

Proof. By a little calculation, E[g_t] = \ell'(w_t) = 2D(w_t) - 1, and E[|g_t|] = 1. Since
\ell'(x^\star) = 0, w^\star = x^\star (the median). For |w_t - x^\star| \ge G, we have
|2D(w_t) - 1| \ge FG, which gives E[g_t] \cdot (w_t - w^\star) \ge \frac{FG}{b-a} E[|g_t|]
(w_t - w^\star)^2. For |w_t - x^\star| \le G, we have
E[g_t] \cdot (w_t - w^\star) \ge F E[|g_t|] (w_t - w^\star)^2, so that g_t is
\min\big( \frac{FG}{b-a}, F \big)-acutely convex in expectation.
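The following small simulation (ours, not from the paper; the distribution and constants are arbitrary
choices) runs a single one-dimensional FreeRexMomentum learner, with update (3) clipped to W = [a, b],
on the stream of absolute losses |w - x_t|:

import numpy as np

rng = np.random.default_rng(0)
a_dom, b_dom = -5.0, 5.0                                # W = [a, b]
samples = rng.normal(loc=1.0, scale=2.0, size=20000)    # true median = 1.0

# State of a single 1-D FreeRexMomentum learner (Algorithm 1 with equation (3)).
w, inv_eta_sq, a_t, L, g_sum, norm_sum, weighted_w = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0

for x in samples:
    g = np.sign(w - x) if w != x else rng.choice([-1.0, 1.0])  # subgradient of |w - x|
    prev_w = w
    g_sum += g
    norm_sum += abs(g)
    L = max(L, abs(g))
    inv_eta_sq = max(inv_eta_sq + 2.0 * g * g, L * abs(g_sum))
    eta = 1.0 / np.sqrt(inv_eta_sq)
    a_t = max(a_t, 1.0 / (L * eta) ** 2)
    w_bar = weighted_w / (1.0 + norm_sum)
    if g_sum != 0.0:
        w = w_bar - np.sign(g_sum) * (np.exp(eta * abs(g_sum) / np.sqrt(5.0)) - 1.0) / a_t
    else:
        w = w_bar
    w = float(np.clip(w, a_dom, b_dom))                 # project onto [a, b]
    weighted_w += abs(g) * prev_w

print(w)   # should approach the true median (here 1.0)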
Proposition 10 shows that we can obtain low regret for an interesting stochastic problem without
curvature. The condition on the cumulative distribution function D is asking only that there be
positive density in a neighborhood of the median; it would be satisfied if D'(w) \ge F for |w| \le G.

If the expected loss \ell is \mu-strongly convex, we can apply Proposition 8 to see that \ell is
\frac{\mu}{2 L_{\max}}-acutely convex, and then use Theorem 7 to obtain a regret of
\tilde{O}(L_{\max}^2 \|w^\star\| / \mu). This is different from the usual regret bound of
\tilde{O}(L_{\max}^2 / \mu) obtained by Online Newton Step [13], which is due to an inefficiency in
using the weaker \alpha-alignment condition. Instead, arguing from the regret bound of Corollary 2
directly, we can recover the optimal regret bound:
Corollary 11. Suppose each \ell_t is an independent random variable with E[\ell_t] = \ell for some
\mu-strongly convex \ell with minimum at w^\star. Then the expected regret of FreeRexMomentum
satisfies

    E\Big[ \sum_{t=1}^T \ell(w_t) - \ell(w^\star) \Big] \le \tilde{O}(L_{\max}^2 / \mu)

where the \tilde{O} hides terms that are logarithmic in TB.
Proof. From strong convexity, we have

    \|w_t - w^\star\|^2 \le \frac{2}{\mu} (\ell(w_t) - \ell(w^\star))

Therefore, applying Corollary 2 we have

    E[R_T(w^\star)] = E\Big[ \sum_{t=1}^T \ell(w_t) - \ell(w^\star) \Big]
        \le \tilde{O}\Big( \sqrt{L_{\max}^2\, E\Big[ \sum_{t=1}^T \|w_t - w^\star\|^2 \Big]} \Big)
        \le \tilde{O}\Big( \sqrt{\frac{L_{\max}^2}{\mu}\, E[R_T(w^\star)]} \Big)

so that applying Proposition 5 we obtain the desired result.
As a result of Corollary 11, we see that FreeRexMomentum obtains logarithmic regret for
\alpha-aligned problems and also obtains the optimal (up to log factors) regret bound for
\mu-strongly-convex problems, all without requiring any knowledge of the parameters \alpha or \mu.
This stands in contrast to prior algorithms that adapt to user-supplied curvature information such as
Adaptive Gradient Descent [14] or (A, B)-prod [15].
6  Conclusions and Open Problems

We have presented an algorithm, FreeRexMomentum, that achieves both \tilde{O}(B L_{\max} \sqrt{T})
regret in adversarial settings and \tilde{O}(L_{\max} B / \alpha) regret in \alpha-acutely convex
stochastic settings, without requiring any prior information about any parameters. We further showed
that a quadratic lower bound on the expected loss implies acute convexity, so that while strong
convexity is sufficient for acute convexity, other important loss families such as the absolute loss
may also be acutely convex. Since FreeRexMomentum does not require prior information about any problem
parameters, it does not require any hyperparameter tuning to be assured of good convergence.
Therefore, the user need not actually know whether a particular problem is adversarial or acutely
convex and stochastic, or really much of anything at all about the problem, in order to use
FreeRexMomentum.

There are still many interesting open questions in this area. First, we would like to find an
efficient way to implement the FreeRexMomentum algorithm or some variant directly, without appealing
to coordinate-wise updates. This would enable us to remove the factor of d we incur by using
coordinate-wise updates. Second, our modification to FreeRex is extremely simple and intuitive, but
our analysis makes use of some of the internal logic of FreeRex. It is possible, however, that any
algorithm with sufficiently low regret can be modified in a similar way to achieve our results.
Finally, we observe that while \log^4(T) is much better than \sqrt{T} asymptotically, it turns out
that \log^4(T) > \sqrt{T} for T < 10^{11}, which casts the practical relevance of our logarithmic
bounds in doubt. Therefore we hope that this work serves as a starting point for either new analysis
or algorithm design that further simplifies and improves regret bounds.
References
[1] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In
Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928-936, 2003.
[2] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in
Machine Learning, 4(2):107-194, 2011.
[3] Jacob Abernethy, Peter L Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and
minimax lower bounds for online convex games. In Proceedings of the Nineteenth Annual Conference on
Computational Learning Theory, 2008.
[4] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic
optimization. In Conference on Learning Theory (COLT), 2010.
[5] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex
optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[6] Francesco Orabona and Dávid Pál. Coin betting and parameter-free online learning. In D. D. Lee,
M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information
Processing Systems 29, pages 577-585. Curran Associates, Inc., 2016.
[7] Ashok Cutkosky and Kwabena Boahen. Online learning without prior information. arXiv preprint
arXiv:1703.02629, 2017.
[8] Tim van Erven and Wouter M Koolen. MetaGrad: Multiple learning rates in online learning. In D. D.
Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information
Processing Systems 29, pages 3666-3674. Curran Associates, Inc., 2016.
[9] Wouter M Koolen, Peter Grünwald, and Tim van Erven. Combining adversarial guarantees and
stochastic fast rates in online learning. In Advances in Neural Information Processing Systems, pages
4457-4465, 2016.
[10] Francesco Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information
Processing Systems, pages 1806-1814, 2013.
[11] Ashok Cutkosky and Kwabena A Boahen. Online convex optimization with unconstrained domains and
losses. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in
Neural Information Processing Systems 29, pages 748-756. Curran Associates, Inc., 2016.
[12] Brendan McMahan and Matthew Streeter. No-regret algorithms for unconstrained online convex
optimization. In Advances in Neural Information Processing Systems, pages 2402-2410, 2012.
[13] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex
optimization. Machine Learning, 69(2):169-192, 2007.
[14] Peter L Bartlett, Elad Hazan, and Alexander Rakhlin. Adaptive online gradient descent. In NIPS,
volume 20, pages 65-72, 2007.
[15] Amir Sani, Gergely Neu, and Alessandro Lazaric. Exploiting easy data in online optimization. In
Advances in Neural Information Processing Systems, pages 810-818, 2014.
6,735 | 7,092 | Teaching Machines to Describe Images via Natural
Language Feedback
Huan Ling1 , Sanja Fidler1,2
University of Toronto1 , Vector Institute2
{linghuan,fidler}@cs.toronto.edu
Abstract
Robots will eventually be part of every household. It is thus critical to enable
algorithms to learn from and be guided by non-expert users. In this paper, we
bring a human in the loop, and enable a human teacher to give feedback to a
learning agent in the form of natural language. We argue that a descriptive sentence
can provide a much stronger learning signal than a numeric reward in that it can
easily point to where the mistakes are and how to correct them. We focus on
the problem of image captioning in which the quality of the output can easily be
judged by non-experts. In particular, we first train a captioning model on a subset
of images paired with human written captions. We then let the model describe new
images and collect human feedback on the generated descriptions. We propose a
hierarchical phrase-based captioning model, and design a feedback network that
provides reward to the learner by conditioning on the human-provided feedback.
We show that by exploiting descriptive feedback on new images our model learns
to perform better than when given human written captions on these images.
1  Introduction
In the era where A.I. is slowly finding its way into everyone's lives, be it in the form of social
bots [36, 2], personal assistants [24, 13, 32], or household robots [1], it becomes critical to allow
non-expert users
to teach and guide their robots [37, 18]. For example, if a household robot keeps bringing food served
on an ashtray thinking it?s a plate, one should ideally be able to educate the robot about its mistakes,
possibly without needing to dig into the underlying software.
Reinforcement learning has become a standard way of training artificial agents that interact with an
environment. There have been significant advances in a variety of domains such as games [31, 25],
robotics [17], and even fields like vision and NLP [30, 19]. RL agents optimize their action policies
so as to maximize the expected reward received from the environment. Training typically requires a
large number of episodes, particularly in environments with large action spaces and sparse rewards.
Several works explored the idea of incorporating humans in the learning process, in order to help
the reinforcement learning agent to learn faster [35, 12, 11, 6, 5]. In most cases, a human teacher
observes the agent act in an environment, and is allowed to give additional guidance to the learner.
This feedback typically comes in the form of a simple numerical (or "good"/"bad") reward which is
used to either shape the MDP reward [35] or directly shape the policy of the learner [5].
In this paper, we aim to exploit natural language as a way to guide an RL agent. We argue that a
sentence provides a much stronger learning signal than a numeric reward in that it can easily point
to where the mistakes occur and suggests how to correct them. Such descriptive feedback can thus
naturally facilitate solving the credit assignment problem as well as help guide exploration. Despite
its clear benefits, very few approaches have aimed at incorporating language in Reinforcement Learning.
In pioneering work, [22] translated natural language advice into a short program which was used to
bias action selection. While this is possible in limited domains such as in navigating a maze [22] or
learning to play a soccer game [15], it can hardly scale to the real scenarios with large action spaces
requiring versatile language feedback.
[Figure 1 example:
  Machine: ( a cat ) ( sitting ) ( on a sidewalk ) ( next to a street . )
  Human teacher feedback: "There is a dog on a sidewalk, not a cat." Type of mistake: wrong object.
  Selected mistake area: ( a cat ) ( sitting ) ( on a sidewalk ) ( next to a street . )
  Corrected caption: ( a dog ) ( sitting ) ( on a sidewalk ) ( next to a street . )]
Figure 1: Our model accepts feedback from a human teacher in the form of natural language. We generate
captions using the current snapshot of the model and collect feedback via AMT. The annotators are
requested to focus their feedback on a single word/phrase at a time. Phrases, indicated with brackets
in the example, are part of our captioning model's output. We also collect information about which
word the feedback applies to and its suggested correction. This information is used to train our
feedback network.
Here our goal is to allow a non-expert human teacher to give feedback to an RL agent in the form of
natural language, just as one would to a learning child. We focus on the problem of image captioning
in which the quality of the output can easily be judged by non-experts.
Towards this goal, we make several contributions. We propose a hierarchical phrase-based RNN as
our image captioning model, as it can be naturally integrated with human feedback. We design a web
interface which allows us to collect natural language feedback from human "teachers" for a snapshot of
our model, as in Fig. 1. We show how to incorporate this information in Policy Gradient RL [30], and
show that we can improve over RL that has access to the same amount of ground-truth captions. Our
code and data will be released (http://www.cs.toronto.edu/~linghuan/feedbackImageCaption/)
to facilitate more human-like training of captioning models.
2  Related Work
Several works incorporate human feedback to help an RL agent learn faster. [35] exploits humans
in the loop to teach an agent to cook in a virtual kitchen. The users watch the agent learn and
may intervene at any time to give a scalar reward. Reward shaping [26] is used to incorporate this
information in the MDP. [6] iterates between "practice", during which the agent interacts with the real
environment, and a critique session where a human labels any subset of the chosen actions as good or
bad. In [12], the authors compare different ways of incorporating human feedback, including reward
shaping, Q augmentation, action biasing, and control sharing. The same authors implement their
TAMER framework on a real robotic platform [11]. [5] proposes policy shaping which incorporates
right/wrong feedback by utilizing it as direct policy labels. These approaches mostly assume that
humans provide a numeric reward, unlike in our work where the feedback is given in natural language.
A few attempts have been made to advise an RL agent using language. [22]?s pioneering work
translated advice to a short program which was then implemented as a neural network. The units in
this network represent Boolean concepts, which recognize whether the observed state satisfies the
constraints given by the program. In such a case, the advice network will encourage the policy to
take the suggested action. [15] incorporated natural language advice for a RoboCup simulated soccer
task. They too translate the advice in a formal language which is then used to bias action selection.
Parallel to our work, [7] exploits textual advice to improve training time of the A3C algorithm in
playing an Atari game. Recently, [37, 18] incorporate human feedback to improve a text-based QA
agent. Our work shares similar ideas, but applies them to the problem of image captioning. In [27],
the authors incorporate human feedback in an active learning scenario, however not in an RL setting.
Captioning represents a natural way of showing that our algorithm understands a photograph to a
non-expert observer. This domain has received significant attention [8, 39, 10], achieving impressive
performance on standard benchmarks. Our phrase model shares the most similarity with [16],
but differs in that exploits attention [39], linguistic information, and RL to train. Several recent
approaches trained the captioning model with policy gradients in order to directly optimize for the
desired performance metrics [21, 30, 3]. We follow this line of work. However, to the best of our
knowledge, our work is the first to incorporate natural language feedback into a captioning model.
Figure 2: Our hierarchical phrase-based captioning model, composed of a phrase-RNN at
the top level, and a word-level RNN which outputs a sequence of words for each phrase. The
useful property of this model is that it directly
produces an output sentence segmented into
linguistic phrases. We exploit this information
while collecting and incorporating human feedback into the model. Our model also exploits
attention, and linguistic information (phrase
labels such as noun, preposition, verb, and conjunction phrase). Please see text for details.
Related to our efforts is also work on dialogue based visual representation learning [40, 41], however
this work tackles a simpler scenario, and employs a slightly more engineered approach.
We stress that our work differs from the recent efforts in conversation modeling [19] or visual
dialog [4] using Reinforcement Learning. These models aim to mimic human-to-human conversations
while in our work the human converses with and guides an artificial learning agent.
3  Our Approach
Our framework consists of a new phrase-based captioning model trained with Policy Gradients that
incorporates natural language feedback provided by a human teacher. While a number of captioning
methods exist, we design our own which is phrase-based, allowing for natural guidance by a non-expert. In particular, we argue that the strongest learning signal is provided when the feedback
describes one mistake at a time, e.g. a single wrong word or a phrase in a caption. An example can
be seen in Fig. 1. This is also how one most effectively teaches a learning child. To avoid parsing the
generated sentences at test time, we aim to predict phrases directly with our captioning model. We
first describe our phrase-based captioner, then describe our feedback collection process, and finally
propose how to exploit feedback as a guiding signal in policy gradient optimization.
3.1  Phrase-based Image Captioning
Our captioning model, forming the base of our approach, uses a hierarchical Recurrent Neural
Network, similar to [34, 14]. In [14], the authors use a two-level LSTM to generate paragraphs,
while [34] uses it to generate sentences as a sequence of phrases. The latter model shares a similar
overall structure as ours, however, our model additionally reasons about the type of phrases and
exploits the attention mechanism over the image.
The structure of our model is best explained through Fig. 2. The model receives an image as input and
outputs a caption. It is composed of a phrase RNN at the top level, and a word RNN that generates a
sequence of words for each phrase. One can think of the phrase RNN as providing a "topic" at each
time step, which instructs the word RNN what to talk about.
Following [39], we use a convolutional neural network in order to extract a set of feature vectors
a = (a1 , . . . , an ), with aj a feature in location j in the input image. We denote the hidden state of
the phrase RNN at time step t with ht , and ht,i to denote the i-th hidden state of the word RNN for
the t-th phrase. Computation in our model can be expressed with the following equations:
    h_t = f_phrase(h_{t-1}, l_{t-1}, c_{t-1}, e_{t-1})        h_{t,i} = f_word(h_{t,i-1}, c_t, w_{t,i})
    l_t = softmax(f_phrase-label(h_t))                        w_{t,i} = f_out(h_{t,i}, c_t, w_{t,i-1})
    c_t = f_att(h_t, l_t, a)                                  e_t = f_word-phrase(w_{t,1}, ..., w_{t,end})
    h_{t,0} = f_phrase-word(h_t, l_t, c_t)

    f_phrase: LSTM, dim 256                 f_word: LSTM, dim 256
    f_phrase-label: 3-layer MLP             f_out: deep decoder [28]
    f_att: 2-layer MLP with ReLU            f_word-phrase: mean + 3-layer MLP with ReLU
    f_phrase-word: 3-layer MLP with ReLU
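To illustrate how these pieces fit together, here is a rough single-example PyTorch sketch of the
decoding loop (ours, not the authors' code; dimensions, layer depths, and module names are simplified
assumptions rather than the paper's exact architecture):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative greedy decoding sketch (ours): a phrase LSTM proposes a
# topic/label/context, and a word LSTM decodes the phrase word by word.
D, K, FEAT, V = 256, 5, 512, 10000   # hidden dim, #phrase labels, image feat dim, vocab

phrase_cell = nn.LSTMCell(K + FEAT + D, D)           # f_phrase
label_head  = nn.Linear(D, K)                        # f_phrase-label (simplified)
att_score   = nn.Linear(D + K + FEAT, 1)             # attention over image regions
word_init   = nn.Linear(D + K + FEAT, D)             # f_phrase-word (simplified)
word_cell   = nn.LSTMCell(D + FEAT, D)               # f_word
embed       = nn.Embedding(V, D)
out_layer   = nn.Linear(D, V)                        # f_out (simplified decoder)

def attend(h, l, feats):
    q = torch.cat([h.expand(feats.size(0), -1), l.expand(feats.size(0), -1), feats], 1)
    w = F.softmax(att_score(q).squeeze(1), 0)        # one weight per region
    return (w.unsqueeze(1) * feats).sum(0, keepdim=True)   # context c_t, shape (1, FEAT)

feats = torch.randn(14 * 14, FEAT)                   # CNN features a_1 .. a_n
h, c = torch.zeros(1, D), torch.zeros(1, D)          # phrase LSTM state
l_prev, ctx_prev, e_prev = torch.zeros(1, K), torch.zeros(1, FEAT), torch.zeros(1, D)

for _ in range(3):                                   # decode three phrases greedily
    h, c = phrase_cell(torch.cat([l_prev, ctx_prev, e_prev], 1), (h, c))
    l = F.softmax(label_head(h), 1)                  # phrase label l_t (NP/PP/VP/CP/<EOS>)
    ctx = attend(h, l, feats)                        # image context c_t
    hw = word_init(torch.cat([h, l, ctx], 1))        # h_{t,0}
    cw = torch.zeros_like(hw)
    tok, words = torch.zeros(1, dtype=torch.long), []
    for _ in range(6):                               # words of this phrase, up to <EOP>
        hw, cw = word_cell(torch.cat([embed(tok), ctx], 1), (hw, cw))
        tok = out_layer(hw).argmax(1)                # greedy word choice
        words.append(int(tok))
    e_prev = embed(torch.tensor(words)).mean(0, keepdim=True)  # e_t: mean-pooled phrase
    l_prev, ctx_prev = l, ctx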
Table 1: Examples of collected feedback. Reference caption comes from the MLE model.

  Ref. caption:  ( a woman ) ( is sitting ) ( on a bench ) ( with a plate ) ( of food . )
  Feedback:      What the woman is sitting on is not visible.
  Corr. caption: ( a woman ) ( is sitting ) ( with a plate ) ( of food . )

  Ref. caption:  ( a man ) ( riding a motorcycle ) ( on a city street . )
  Feedback:      There is a man and a woman.
  Corr. caption: ( a man and a woman ) ( riding a motorcycle ) ( on a city street . )

  Ref. caption:  ( a horse ) ( is standing ) ( in a barn ) ( in a field . )
  Feedback:      There is no barn. There is a fence.
  Corr. caption: ( a horse ) ( is standing ) ( in a fence ) ( in a field . )

  Ref. caption:  ( a man ) ( is swinging a baseball bat ) ( on a field . )
  Feedback:      The baseball player is not swinging a bat.
  Corr. caption: ( a man ) ( is playing baseball ) ( on a field . )
Table 2: Statistics for our collected feedback information. The entries on the right show how many
times the feedback sentences mention the words to be corrected and the suggested correction.

  Num. of evaluated examples (annot. round 1)   9000
  Evaluated as containing errors                5150
  To ask for feedback (annot. round 2)          4174
  Avg. num. of feedback rounds per image        2.22
  Avg. num. of words in feedback sent.          8.04
  Avg. num. of words needing correction         1.52
  Avg. num. of modified words                   1.46

  Something should be replaced                  2999
    mistake word is in description              2664
    correct word is in description              2674
  Something is missing                           334
    missing word is in description               246
  Something should be removed                    841
    removed word is in description               779

  (feedback round: number of correction rounds for the same example; description: natural language
  feedback)
[Figure 3: two bar plots over caption quality categories (perfect, acceptable, grammar, minor error,
major error); the right panel is titled "evaluation after correction".]
Figure 3: Caption quality evaluation by the human annotators. Plot on the left shows evaluation for
captions generated with our reference model (MLE). The right plot shows evaluation of the
human-corrected captions (after completing at least one round of feedback).
As in [39], ct denotes a context vector obtained by applying the attention mechanism to the image.
This context vector essentially represents the image area that the model ?looks at? in order to generate
the t-th phrase. This information is passed to both the word-RNN as well as to the next hidden state
of the phrase-RNN. We found that computing two different context vectors, one passed to the phrase
and one to the word RNN, improves generation by 0.6 points (in weighted metric, see Table 4) mainly
helping the model to avoid repetition of words. Furthermore, we noticed that the quality of attention
significantly improves (1.5 points, Table 4) if we provide it with additional linguistic information. In
particular, at each time step t our phrase RNN also predicts a phrase label lt , following the standard
definition from the Penn Tree Bank. For each phrase, we predict one out of four possible phrase labels,
i.e., a noun (NP), preposition (PP), verb (VP), and a conjunction phrase (CP). We use additional
<EOS> token to indicate the end of the sentence. By conditioning on the NP label, we help the model
look at the objects in the image, while VP may focus on more global image information.
Above, wt,i denotes the i-th word output of the word-RNN in the t-th phrase, encoded with a one-hot
vector. Note that we use an additional <EOP> token in the word-RNN's vocabulary, which signals the
end-of-phrase. Further, et encodes the generated phrase via simple mean-pooling over the words,
which provides additional word-level context to the next phrase. Details about the choices of the
functions are given in the table. Following [39], we use a deep output layer [28] in the LSTM and
double stochastic attention.
Implementation details. To train our hierarchical model, we first process MS-COCO image
caption data [20] using the Stanford Core NLP toolkit [23]. We flatten each parse tree, separate a
sentence into parts, and label each part with a phrase label (<NP>, <PP>, <CP>, <VP>). To simplify
the phrase structure, we merge some NPs to its previous phrase label if it is not another NP.
Pre-training. We pre-train our model using the standard cross-entropy loss. We use the ADAM
optimizer [9] with learning rate 0.001. We discuss Policy Gradient optimization in Subsec. 3.4.
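As a rough illustration of the flattening step (ours; the paper uses the Stanford CoreNLP toolkit,
while this sketch assumes a bracketed constituency parse is already available and uses nltk, with a
crude fallback for tags outside the four phrase types):

from nltk import Tree

# Rough sketch (ours) of the parse-tree flattening described above: keep the
# top-level phrase chunks, then merge an NP into a preceding non-NP chunk.
KEEP = {"NP", "VP", "PP", "CP", "CONJP", "SBAR"}

def flatten(parse_str):
    tree = Tree.fromstring(parse_str)
    chunks = []
    for sub in tree[0]:                          # children of the root S node
        label = sub.label() if sub.label() in KEEP else "VP"   # crude fallback
        chunks.append([label, sub.leaves()])
    merged = []
    for label, words in chunks:
        if merged and label == "NP" and merged[-1][0] != "NP":
            merged[-1][1] += words               # merge NP into previous chunk
        else:
            merged.append([label, words])
    return merged

parse = "(ROOT (S (NP (DT a) (NN cat)) (VP (VBZ sits) (PP (IN on) (NP (DT a) (NN sidewalk)))) (. .)))"
print(flatten(parse))
# -> [['NP', ['a', 'cat']], ['VP', ['sits', 'on', 'a', 'sidewalk']], ['VP', ['.']]]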
3.2  Crowd-sourcing Human Feedback
We aim to bring a human in the loop when training the captioning model. Towards this, we create a
web interface that allows us to collect feedback information on a larger scale via AMT. Our interface
Figure 4: The architecture of our feedback network (FBN) that classifies each phrase (bottom left) in a sampled
sentence (top left) as either correct, wrong or not relevant, by conditioning on the feedback sentence.
is akin to that depicted in Fig. 1, and we provide further visualizations in the Appendix. We also
provide it online on our project page. In particular, we take a snapshot of our model and generate
captions for a subset of MS-COCO images [20] using greedy decoding. In our experiments, we take
the model trained with the MLE objective.
We do two rounds of annotation. In the first round, the annotator is shown a captioned image and
is asked to assess the quality of the caption, by choosing between: perfect, acceptable, grammar
mistakes only, minor or major errors. We asked the annotators to choose minor and major error if the
caption contained errors in semantics, i.e., indicating that the "robot" is not understanding the photo
correctly. We advised them to choose minor for small errors such as wrong or missing attributes or
awkward prepositions, and go with major errors whenever any object or action naming is wrong.
For the next (more detailed, and thus more costly) round of annotation, we only select captions which
are not marked as either perfect or acceptable in the first round. Since these captions contain errors,
the new annotator is required to provide detailed feedback about the mistakes. We found that some of
the annotators did not find errors in some of these captions, pointing to the annotator noise in the
process. The annotator is shown the generated caption, delineating different phrases with the "(" and
")" tokens. We ask the annotator to 1) choose the type of required correction, 2) write feedback in
natural language, 3) mark the type of mistake, 4) highlight the word/phrase that contains the mistake,
5) correct the chosen word/phrase, 6) evaluate the quality of the caption after correction. We allow
the annotator to submit the HIT after one correction even if her/his evaluation still points to errors.
However, we plea to the good will of the annotators to continue in providing feedback. In the latter
case, we reset the webpage, and replace the generated caption with their current correction.
The annotator first chooses the type of error, i.e., something "should be replaced", "is missing", or
"should be deleted". (S)he then writes a sentence providing feedback about the mistake and how
it should be corrected. We require that the feedback is provided sequentially, describing a single
mistake at a time. We do this by restricting the annotator to only select mistaken words within a single
phrase (in step 4). In 3), the annotator marks further details about the mistake, indicating whether it
corresponds to an error in object, action, attribute, preposition, counting, or grammar. For 4) and 5)
we let the annotator highlight the area of mistake in the caption, and replace it with a correction.
The statistics of the data is provided in Table 2, with examples shown in Table 1. An interesting fact
is that the feedback sentences in most cases mention both the wrong word from the caption, as well
as the correction word. Fig. 3 (left) shows evaluation of the caption quality of the reference (MLE)
model. Out of 9000 captions, 5150 are marked as containing errors (either semantic or grammar),
and we randomly choose 4174 for the second round of annotation (detailed feedback). Fig. 3 (left)
shows the quality of all the captions after correction, i.e. good reference captions as well as 4174
corrected captions as submitted by the annotators. Note that we only paid for one round of feedback,
thus some of the captions still contained errors even after correction. Interestingly, on average the
annotators still did 2.2 rounds of feedback per image (Table 2).
3.3  Feedback Network
Our goal is to incorporate natural language feedback into the learning process. The collected feedback
contains rich information about how the caption can be improved: it conveys the location of the mistake
and typically suggests how to correct it, as seen in Table 2. This provides a strong supervisory signal
which we want to exploit in our RL framework. In particular, we design a neural network which will
provide an additional reward based on the feedback sentence. We refer to it as the feedback network
(FBN). We first explain our feedback network, and then show how to integrate its output in RL.
Sampled caption        Feedback                                    Phrase          Prediction
A cat on a sidewalk.   There is a dog on a sidewalk not a cat.     A cat           wrong
A dog on a sidewalk.   There is a dog on a sidewalk not a cat.     A dog           correct
A cat on a sidewalk.   There is a dog on a sidewalk not a cat.     on a sidewalk   not relevant

Table 3: Example classification of each phrase in a newly sampled caption into correct/wrong/not-relevant, conditioned
on the feedback sentence. Notice that we do not need the image to judge the correctness/relevance of a phrase.
Note that RL training will require us to generate samples (captions) from the model. Thus, during
training, the sampled captions for each training image will change (i.e., they will differ from the reference
MLE caption for which we obtained feedback). The goal of the feedback network is to read a
newly sampled caption and judge the correctness of each phrase conditioned on the feedback. We
make our FBN depend only on text (and not on the image), making its learning task easier. In
particular, our FBN performs the following computation:
h^caption_t  = f_sent(h^caption_{t-1}, w^c_t)        (1)
h^feedback_t = f_sent(h^feedback_{t-1}, w^f_t)       (2)
q_i          = f_phrase(w^c_{i,1}, ..., w^c_{i,N})   (3)
o_i          = f_fbn(h^c_T, h^f_{T'}, q_i, m)        (4)

where f_sent is an LSTM (dim 256), f_phrase is a linear layer followed by mean pooling, and f_fbn is a
3-layer MLP with dropout and a 3-way softmax output.
Here, w^c_t and w^f_t denote the one-hot encodings of the words in the sampled caption and the feedback sentence,
respectively. By w^c_{i,*} we denote the words in the i-th phrase of the sampled caption. FBN thus encodes
both the caption and the feedback using an LSTM (with shared parameters), performs mean pooling over
the words in a phrase to represent the phrase i, and passes this information through a 3-layer MLP.
The MLP additionally accepts information about the mistake type (e.g., wrong object/action) encoded
as a one-hot vector m (denoted as "extra information" in Fig. 4). The output layer of the MLP is a
3-way classification layer that predicts whether the phrase i is correct, wrong, or not relevant (with
respect to the feedback sentence). An example output from FBN is shown in Table 3.
Implementation details. We train our FBN with the ground-truth data that we collected. In
particular, we use (reference, feedback, marked phrase in reference caption) as an example of a wrong
phrase, (corrected sentence, feedback, marked phrase in corrected caption) as an example of the
correct phrase, and treat the rest as the not relevant label. Reference here means the generated caption
that we collected feedback for, and marked phrase means the phrase that the annotator highlighted
in either the reference or the corrected caption. We use the standard cross-entropy loss to train our
model. We use ADAM [9] with learning rate 0.001, and a batch size of 256. When a reference
caption has several feedback sentences, we treat each one as independent training data.
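The following is a minimal PyTorch sketch of how such a feedback network could be implemented. All class and variable names are our own illustrative choices, not the authors' code; the paper only specifies an LSTM of dimension 256, a linear layer with mean pooling over phrase words, and a 3-layer MLP with dropout and a 3-way softmax.

import torch
import torch.nn as nn

class FeedbackNetwork(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=256,
                 num_mistake_types=6, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One LSTM with shared parameters encodes both caption and feedback.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.phrase_proj = nn.Linear(embed_dim, hidden_dim)
        self.mlp = nn.Sequential(  # 3-layer MLP with dropout
            nn.Linear(3 * hidden_dim + num_mistake_types, 256),
            nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes))  # logits: correct/wrong/not relevant

    def encode(self, tokens):
        # Eqs. (1)/(2): run the LSTM and keep the final hidden state h_T.
        _, (h, _) = self.lstm(self.embed(tokens))
        return h[-1]

    def forward(self, caption, feedback, phrase, mistake_type):
        h_c = self.encode(caption)                            # Eq. (1)
        h_f = self.encode(feedback)                           # Eq. (2)
        q = self.phrase_proj(self.embed(phrase)).mean(dim=1)  # Eq. (3)
        x = torch.cat([h_c, h_f, q, mistake_type], dim=-1)
        return self.mlp(x)                                    # Eq. (4)

Training then minimizes the standard cross-entropy between these logits and the correct/wrong/not-relevant labels built from the collected triples, e.g., with torch.optim.Adam(model.parameters(), lr=0.001) and batch size 256 as stated above.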
3.4 Policy Gradient Optimization using Natural Language Feedback
We follow [30, 29] to directly optimize for the desired image captioning metrics using the Policy
Gradient technique. For completeness, we briefly summarize it here [30].
One can think of a caption decoder as an agent following a parameterized policy p_θ that selects an
action at each time step. An "action" in our case requires choosing a word from the vocabulary (for
the word RNN) or a phrase label (for the phrase RNN). An "agent" (our captioning model) then
receives the reward after generating the full caption, i.e., the reward can be any of the automatic
metrics, their weighted sum [30, 21], or, in our case, will also include the reward from feedback.
The objective for learning the parameters of the model is the expected reward received when completing the caption w^s = (w^s_1, ..., w^s_T), where w^s_t is the word sampled from the model at time step t:

L(θ) = −E_{w^s∼p_θ}[r(w^s)]    (5)
To optimize this objective, we follow the REINFORCE algorithm [38], as also used in [30, 29]. The
gradient of (5) can be computed as

∇_θ L(θ) = −E_{w^s∼p_θ}[r(w^s) ∇_θ log p_θ(w^s)],    (6)
which is typically estimated by using a single Monte-Carlo sample:

∇_θ L(θ) ≈ −r(w^s) ∇_θ log p_θ(w^s)    (7)
We follow [30] to define the baseline b as the reward obtained by performing greedy decoding:

b = r(ŵ),    ŵ_t = arg max_{w_t} p(w_t | h_t)    (8)

∇_θ L(θ) ≈ −(r(w^s) − r(ŵ)) ∇_θ log p_θ(w^s)

Note that the baseline does not change the expected gradient but can drastically reduce its variance.
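A minimal sketch of this self-critical update is given below. The model.sample and model.greedy methods and their return conventions are hypothetical placeholders for whatever decoder is used; only the baseline-subtracted REINFORCE estimate itself follows Eqs. 6-8.

import torch

def policy_gradient_loss(model, image_feats, reward_fn):
    # Sample a caption w^s and record the summed log p_theta(w^s).
    sample_tokens, log_probs = model.sample(image_feats)
    with torch.no_grad():
        greedy_tokens = model.greedy(image_feats)  # baseline caption w_hat
        r_sample = reward_fn(sample_tokens)        # r(w^s)
        r_greedy = reward_fn(greedy_tokens)        # b = r(w_hat), Eq. (8)
    advantage = r_sample - r_greedy
    # Negative of the Monte-Carlo gradient estimate in Eq. (7), with baseline:
    return -(advantage * log_probs.sum(dim=-1)).mean()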
Reward. We define two different rewards, one at the sentence level (optimizing for a performance
metric), and one at the phrase level. We use human feedback information in both. We first define the
sentence reward with respect to a reference caption as a weighted sum of the BLEU scores:

r(w^s) = β Σ_i λ_i · BLEU_i(w^s, ref)    (9)
In particular, we choose λ_1 = λ_2 = 0.5, λ_3 = λ_4 = 1, and λ_5 = 0.3. As reference captions to compute
the reward, we either use the reference captions generated by a snapshot of our model which were
evaluated as not having minor or major errors, or the ground-truth captions. The details are given in the
experimental section. We weigh the reward by the caption quality as provided by the annotators. In
particular, β = 1 for perfect (or GT), 0.8 for acceptable, and 0.6 for grammar/fluency issues only.
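As an illustration, the sentence reward of Eq. 9 could be computed with NLTK's sentence-level BLEU as sketched below. The λ weights follow the paper; since the extraction does not make clear whether λ_5 multiplies a fifth BLEU order or ROUGE-L, this sketch covers BLEU-1 through BLEU-4 only.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

LAMBDAS = [0.5, 0.5, 1.0, 1.0]            # lambda_1 .. lambda_4
smooth = SmoothingFunction().method1

def sentence_reward(sample_tokens, reference_tokens, beta=1.0):
    # beta in {1.0, 0.8, 0.6} encodes the annotated caption quality.
    r = 0.0
    for i, lam in enumerate(LAMBDAS, start=1):
        weights = tuple([1.0 / i] * i)    # cumulative BLEU-i
        r += lam * sentence_bleu([reference_tokens], sample_tokens,
                                 weights=weights, smoothing_function=smooth)
    return beta * r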
We further incorporate the reward provided by the feedback network. In particular, our FBN allows
us to define the reward at the phrase level (thus helping with the credit assignment problem). Since
our generated sentence is segmented into phrases, i.e., w^s = w^p_1 w^p_2 ... w^p_P, where w^p_t denotes the
(sequence of words in the) t-th phrase, we define the combined phrase reward as:

r(w^p_t) = r(w^s) + λ_f f_fbn(w^s, feedback, w^p_t)    (10)
Note that FBN produces a classification of each phrase. We convert this into a reward by assigning
correct to 1, wrong to −1, and not relevant to 0. We do not weigh the reward by the confidence of the
network, which might be worth exploring in the future. Our final gradient takes the following form:

∇_θ L(θ) = −Σ_{p=1}^{P} (r(w^p) − r(ŵ^p)) ∇_θ log p_θ(w^p)    (11)
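Putting Eq. 10 together with this reward mapping gives, schematically, the per-phrase rewards below. Here fbn is assumed to be a callable returning class logits for a (caption, feedback, phrase) triple; the mistake-type input of Eq. 4 is omitted for brevity, and summing over feedback sentences follows the implementation details given next.

CLASS_REWARD = {0: 1.0, 1: -1.0, 2: 0.0}   # correct / wrong / not relevant

def phrase_rewards(sentence_r, fbn, caption, feedbacks, phrases, lam_f=1.0):
    rewards = []
    for phrase in phrases:
        fbn_r = 0.0                          # zero reward if no feedback
        for fb in feedbacks:                 # sum over feedback sentences
            cls = int(fbn(caption, fb, phrase).argmax(dim=-1))
            fbn_r += CLASS_REWARD[cls]
        rewards.append(sentence_r + lam_f * fbn_r)   # Eq. (10)
    return rewards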
Implementation details. We use Adam with learning rate 1e−6 and a batch size of 50. As in [29], we
follow an annealing schedule. We first optimize the cross-entropy loss for the first K epochs; then,
for the following t = 1, ..., T epochs, we use the cross-entropy loss for the first (P − floor(t/m))
phrases (where P denotes the number of phrases), and the policy gradient algorithm for the remaining
floor(t/m) phrases. We choose m = 5, as sketched below. When a caption has multiple feedback sentences, we take
the sum of the FBN's outputs (converted to rewards) as the reward for each phrase. When a sentence
does not have any feedback, we assign it a zero reward.
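The phrase count trained with the policy gradient under this schedule can be sketched as a one-liner; m = 5 follows the paper, the function name is ours.

def num_pg_phrases(epoch_t, num_phrases_P, m=5):
    # Phrases P - floor(t/m) + 1, ..., P are trained with policy gradient;
    # the first P - floor(t/m) phrases keep the cross-entropy loss.
    return min(num_phrases_P, epoch_t // m)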
4 Experimental Results
To validate our approach we use the MS-COCO dataset [20]. We use 82K images for training, 2K for
validation, and 4K for testing. In particular, we randomly chose the 2K validation and 4K test images from the
official validation split. To collect feedback, we randomly chose 7K images from the training set, as
well as all 2K images from our validation set. In all experiments, we report the performance on our (held-out)
test set. For all the models (including baselines) we used a pre-trained VGG [33] network to
extract image features. We use a word vocabulary size of 23,115.
Phrase-based captioning model. We analyze different instantiations of our phrase-based captioning model in Table 4, showing the importance of predicting phrase labels. To sanity-check our model we
compare it to a flat approach (word-RNN only) [39]. Overall, our model performs slightly worse
than [39] (by 0.66 points). However, the main strength of our model is that it allows a more natural
integration with feedback. Note that these results are reported for the models trained with MLE.
Feedback network. As reported in Table 2, our dataset of detailed feedback (descriptions) covers 4173 images. We randomly select 9/10 of them to serve as a training set for our
feedback network, and use the remaining 1/10 as our test set. The classification performance of our FBN
is reported in Table 5. We tried exploiting additional information in the network. The second line
reports the result for an FBN which also exploits the reference caption (for which the feedback was
written) as input, represented with an LSTM. The model in the third line uses the type of error, i.e.,
whether the phrase is "missing", "wrong", or "redundant". We found that using information about what
kind of mistake the reference caption had (e.g., corresponding to misnaming an object, action, etc.)
achieves the best performance. We use this model as our FBN in the following experiments.
                                  BLEU-1  BLEU-2  BLEU-3  BLEU-4  ROUGE-L  Weighted metric
flat (word level) with att        65.36   44.03   29.68   20.40   51.04    104.78
phrase with att.                  64.69   43.37   28.80   19.31   50.80    102.14
phrase with att + phrase label    65.46   44.59   29.36   19.25   51.40    103.64
phrase with 2 att + phrase label  65.37   44.02   29.51   19.91   50.90    104.12

Table 4: Comparing the performance of the flat captioning model [39] and different instantiations of our phrase-based captioning model. All these models were trained using the cross-entropy loss.
Feedback network                              Accuracy
no extra information                          73.30
use reference caption                         73.24
use "missing"/"wrong"/"redundant"             72.92
use "action"/"object"/"preposition"/etc.      74.66

Table 5: Classification results of our feedback network (FBN) on held-out feedback data. The
FBN predicts correct/wrong/not relevant for each phrase in a caption. See text for details.
RL with Natural Language Feedback. In Table 6 we report the performance of several instantiations of the RL models. All models have been pre-trained using the cross-entropy loss (MLE) on
the full MS-COCO training set. For the next rounds of training, all the models are trained only
on the 9K images that comprise our full evaluation+feedback dataset from Table 2. In particular,
we distinguish two cases. In the first, standard case, the "agent" has access to 5 captions for each
image. We experiment with different types of captions, e.g., ground-truth captions (provided by
MS-COCO), as well as feedback data. For a fair comparison, we ensure that each model has access
to (roughly) the same amount of data. This means that we count a feedback sentence as one source
of information, and a human-corrected reference caption as yet another source. We also exploit
reference (MLE) captions which were evaluated as correct, as well as corrected captions obtained
from the annotators. In particular, we tried two types of experiments. We define "C" captions as all
captions that were corrected by the annotators and were not evaluated as containing minor or major
errors, plus ground-truth captions for the rest of the images. For "A", we use all captions (including
reference MLE captions) that did not have minor or major errors, and GT for the rest. A detailed
break-down of these captions is reported in Table 7.
We first test a model using the standard cross-entropy loss, but which now also has access to the
corrected captions in addition to the 5 GT captions. This model (MLEC) is able to improve over the
original MLE model by 1.4 points. We then test the RL model by optimizing the metric with respect to the 5 GT
captions (as in [30]). This brings an additional point, achieving 2.4 over the MLE model. Our RL
agent with feedback is given access to 3 GT captions, the "C" captions, and the feedback sentences. We
show that this model outperforms the no-feedback baseline by 0.5 points. Interestingly, with "A"
captions we get an additional 0.3 boost. If our RL agent has access to 4 GT captions and feedback
descriptions, we achieve a total of 1.1 points over the baseline RL model and 3.5 over the MLE
model. Examples of generated captions are shown in Fig. 6.
We also conducted a human evaluation using AMT. In particular, Turkers are shown an image
captioned by the baseline RL and our method, and are asked to choose the better caption. As shown
in Fig. 5, our RL with feedback is preferred 4.7 percentage points more often than the RL baseline. We additionally count
how much human interaction is required for either the baseline RL or our approach. In particular,
we count every interaction with the keyboard as 1 click. In evaluation, choosing the quality of the
caption counts as 1 click, and for captions/feedback, every letter counts as a click. The main saving
comes from the first evaluation round, in which we only ask for the quality of captions. Overall,
almost half of the clicks are saved in our setting.
We also test a more realistic scenario, in which the models have access to either a single GT caption,
or, in our case, "C" (or "A") captions and feedback. This mimics a scenario in which the human teacher observes
the agent and either gives feedback about the agent's mistakes, or, if the agent's caption is completely
wrong, writes a new caption. Interestingly, RL performs better when provided with the corrected captions
than when given GT captions. Overall, our model outperforms the base RL (no
feedback) by 1.2 points. We note that our RL agents are trained (not counting pre-training) only on a
small (9K) subset of the full MS-COCO training set. Further improvements are thus possible.
Discussion. These experiments make an important point. Instead of giving the RL agent a completely new target (caption), a better strategy is to "teach" the agent about the mistakes it is making
and suggest a correction. Natural language thus offers itself as a rich modality for providing such
guidance, not only to humans but also to artificial agents.
Table 6: Comparison of our RL with feedback information to the baseline RL and MLE models.

                        BLEU-1  BLEU-2  BLEU-3  BLEU-4  ROUGE-L  Weighted metric
5 sent.
  MLE (5 GT)            65.37   44.02   29.51   19.91   50.90    104.12
  MLEC (5 GT + C)       66.85   45.19   29.89   19.79   51.20    105.58
  MLEC (5 GT + A)       66.14   44.87   30.17   20.27   51.32    105.47
  RLB (5 GT)            66.90   45.10   30.10   20.30   51.10    106.55
  RLF (3 GT + FB + C)   66.52   45.23   30.48   20.66   51.41    107.02
  RLF (3 GT + FB + A)   66.98   45.54   30.52   20.53   51.54    107.31
  RLF (4 GT + FB)       67.10   45.50   30.60   20.30   51.30    107.67
1 sent.
  RLB (1 GT)            65.68   44.58   29.81   19.97   51.07    104.93
  RLB (C)               65.84   44.64   30.01   20.23   51.06    105.50
  RLB (A)               65.81   44.58   29.87   20.24   51.28    105.31
  RLF (C + FB)          65.76   44.65   30.20   20.62   51.35    106.03
  RLF (A + FB)          66.23   45.00   30.15   20.34   51.58    106.12

GT: ground-truth captions; FB: feedback; MLEC (A)/(C): MLE model using five GT sentences + either C or A
captions (see text and Table 7); RLB: baseline RL (no feedback network); RLF: RL with feedback (here we
also use C or A captions as well as FBN).
     ground-truth  perfect  acceptable  grammar error only
A    3107          2661     2790        442
C    6326          1502     1502        234

Table 7: Detailed break-down of which captions were used as "A" or "C" in Table 6
for computing additional rewards in RL.
[Figure 5 shows two bar charts: (a) human preferences, with 47.7% for RLB vs. 52.3% for RLF; (b) the number of human "clicks" required, on an axis ranging from 0 to 400,000.]

Figure 5: (a) Human preferences: RL baseline vs. RL with feedback (our approach); (b) number of human
"clicks" required for the MLE/baseline RL and ours. A click is counted when an annotator hits the keyboard: in
evaluation, choosing the quality of the caption counts as 1 click, and for captions/feedback, every letter counts as
a click. The main saving comes from the first evaluation round, in which we only ask for the quality of captions.
MLE: ( a man ) ( walking ) ( in front of a building ) ( with a cell phone . )
RLB: ( a man ) ( is standing ) ( on a sidewalk ) ( with a cell phone . )
RLF: ( a man ) ( wearing a black suit ) ( and tie ) ( on a sidewalk . )

MLE: ( two giraffes ) ( are standing ) ( in a field ) ( in a field . )
RLB: ( a giraffe ) ( is standing ) ( in front of a large building . )
RLF: ( a giraffe ) ( is ) ( in a green field ) ( in a zoo . )

MLE: ( a clock tower ) ( with a clock ) ( on top . )
RLB: ( a clock tower ) ( with a clock ) ( on top of it . )
RLF: ( a clock tower ) ( with a clock ) ( on the front . )

MLE: ( two birds ) ( are standing ) ( on the beach ) ( on a beach . )
RLB: ( a group ) ( of birds ) ( are ) ( on the beach . )
RLF: ( two birds ) ( are standing ) ( on a beach ) ( in front of water . )

Figure 6: Qualitative examples of captions from the MLE and RLB models (baselines), and our RLF model.
5 Conclusion
In this paper, we enable a human teacher to provide feedback to the learning agent in the form of
natural language. We focused on the problem of image captioning. We proposed a hierarchical
phrase-based RNN as our captioning model, which allows for a natural integration with human feedback.
We crowd-sourced feedback for a snapshot of our model, and showed how to incorporate it in Policy
Gradient optimization. We showed that by exploiting descriptive feedback, our model learns to
perform better than when given independently written captions.
Acknowledgment
We gratefully acknowledge the support from NVIDIA for their donation of the GPUs used for this research. This
work was partially supported by NSERC. We also thank Relu Patrascu for infrastructure support.
References
[1] CMU's HERB robotic platform. http://www.cmu.edu/herb-robot/.
[2] Microsoft's Tay. https://twitter.com/tayandyou.
[3] Bo Dai, Dahua Lin, Raquel Urtasun, and Sanja Fidler. Towards diverse and natural image descriptions via
a conditional gan. In arXiv:1703.06029, 2017.
[4] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. Moura, D. Parikh, and D. Batra. Visual dialog. In
arXiv:1611.08669, 2016.
[5] Shane Griffith, Kaushik Subramanian, Jonathan Scholz, Charles L. Isbell, and Andrea Lockerd Thomaz.
Policy shaping: Integrating human feedback with reinforcement learning. In NIPS, 2013.
[6] K. Judah, S. Roy, A. Fern, and T. Dietterich. Reinforcement learning via practice and critique advice. In
AAAI, 2010.
[7] Russell Kaplan, Christopher Sauer, and Alexander Sosa. Beating atari with natural language guided
reinforcement learning. In arXiv:1704.05539, 2017.
[8] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR,
2015.
[9] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[10] Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. Unifying visual-semantic embeddings with
multimodal neural language models. CoRR, abs/1411.2539, 2014.
[11] W. Bradley Knox, Cynthia Breazeal, and Peter Stone. Training a robot via human feedback: A case study.
In International Conference on Social Robotics, 2013.
[12] W. Bradley Knox and Peter Stone. Reinforcement learning from simultaneous human and mdp reward. In
Intl. Conf. on Autonomous Agents and Multiagent Systems, 2012.
[13] Jacqueline Kory Westlund, Jin Joo Lee, Luke Plummer, Fardad Faridi, Jesse Gray, Matt Berlin, Harald
Quintus-Bosz, Robert Hartmann, Mike Hess, Stacy Dyer, Kristopher dos Santos, Sigurdur Orn Adalgeirsson, Goren Gordon, Samuel Spaulding, Marayna Martinez, Madhurima Das, Maryam Archie,
Sooyeon Jeong, and Cynthia Breazeal. Tega: A social robot. In International Conference on Human-Robot
Interaction, 2016.
[14] Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. A hierarchical approach for generating
descriptive image paragraphs. In CVPR, 2017.
[15] G. Kuhlmann, P. Stone, R. Mooney, and J. Shavlik. Guiding a reinforcement learner with natural language
advice: Initial results in robocup soccer. In AAAI Workshop on Supervisory Control of Learning and
Adaptive Systems, 2004.
[16] Remi Lebret, Pedro O. Pinheiro, and Ronan Collobert. Phrase-based image captioning. In
arXiv:1502.03671, 2015.
[17] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor
policies. J. Mach. Learn. Res., 17(1):1334?1373, 2016.
[18] Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc?Aurelio Ranzato, and Jason Weston. Dialogue learning
with human-in-the-loop. In arXiv:1611.09823, 2016.
[19] Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement
learning for dialogue generation. In arXiv:1606.01541, 2016.
[20] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár,
and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740-755. 2014.
[21] Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. Improved image captioning via
policy gradient optimization of spider. In arXiv:1612.00370, 2016.
[22] Richard Maclin and Jude W. Shavlik. Incorporating advice into agents that learn from reinforcements. In
National Conference on Artificial Intelligence, pages 694?699, 1994.
[23] Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David
McClosky. The Stanford CoreNLP natural language processing toolkit. In ACL System Demonstrations, 2014.
[24] Maja J. Matarić. Socially assistive robotics: Human augmentation vs. automation. Science Robotics, 2(4),
2017.
[25] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare,
Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie,
Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis
Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529?533, 2015.
[26] Andrew Y. Ng, Daishi Harada, and Stuart J. Russell. Policy invariance under reward transformations:
Theory and application to reward shaping. In ICML, pages 278?287, 1999.
[27] Amar Parkash and Devi Parikh. Attributes for classifier feedback. In European Conference on Computer
Vision (ECCV), 2012.
[28] Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent
neural networks. In Association for Computational Linguistics (ACL) System Demonstrations, pages 55?60,
2014.
[29] Marc?Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with
recurrent neural networks. In arXiv:1511.06732, 2015.
[30] Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical
sequence training for image captioning. In arXiv:1612.00563, 2016.
[31] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik
Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray
Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural networks
and tree search. Nature, 529(7587):484?489, 2016.
[32] Edgar Simo-Serra, Sanja Fidler, Francesc Moreno-Noguer, and Raquel Urtasun. Neuroaesthetics in fashion:
Modeling the perception of beauty. In CVPR, 2015.
[33] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[34] Ying Hua Tan and Chee Seng Chan. phi-lstm: A phrase-based hierarchical lstm model for image captioning.
In ACCV, 2016.
[35] A. Thomaz and C. Breazeal. Reinforcement learning with human teachers: Evidence of feedback and
guidance. In AAAI, 2006.
[36] Oriol Vinyals and Quoc Le. A neural conversational model. In arXiv:1506.05869, 2015.
[37] Jason Weston. Dialog-based language learning. In arXiv:1604.06045, 2016.
[38] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine Learning, 8(3-4):229-256, 1992.
[39] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard
Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention.
In ICML, 2015.
[40] Yanchao Yu, Arash Eshghi, and Oliver Lemon. Training an adaptive dialogue policy for interactive learning
of visually grounded word meanings. In Proc. of SIGDIAL, 2016.
[41] Yanchao Yu, Arash Eshghi, Gregory Mills, and Oliver Lemon. The burchak corpus: a challenge data set
for interactive learning of visually grounded word meanings. In Workshop on Vision and Language, 2017.
6,736 | 7,093 | Perturbative Black Box Variational Inference
Robert Bamler*
Disney Research
Pittsburgh, USA
Cheng Zhang*
Disney Research
Pittsburgh, USA
Manfred Opper
TU Berlin
Berlin, Germany
Stephan Mandt*
Disney Research
Pittsburgh, USA
firstname.lastname@{disneyresearch.com, tu-berlin.de}
Abstract
Black box variational inference (BBVI) with reparameterization gradients triggered
the exploration of divergence measures other than the Kullback-Leibler (KL) divergence, such as alpha divergences. These divergences can be tuned to be more
mass-covering (preventing overfitting in complex models), but are also often harder
to optimize using Monte-Carlo gradients. In this paper, we view BBVI with generalized divergences as a form of biased importance sampling. The choice of divergence
determines a bias-variance tradeoff between the tightness of the bound (low bias)
and the variance of its gradient estimators. Drawing on variational perturbation
theory of statistical physics, we use these insights to construct a new variational
bound which is tighter than the KL bound and more mass covering. Compared
to alpha-divergences, its reparameterization gradients have a lower variance. We
show in several experiments on Gaussian Processes and Variational Autoencoders
that the resulting posterior covariances are closer to the true posterior and lead to
higher likelihoods on held-out data.
1 Introduction
Variational inference (VI) (Jordan et al., 1999) provides a way to convert Bayesian inference to
optimization by minimizing a divergence measure. Recent advances of VI have been devoted to
scalability (Hoffman et al., 2013; Ranganath et al., 2014), divergence measures (Minka, 2005; Li and
Turner, 2016; Hernandez-Lobato et al., 2016), and structured variational distributions (Hoffman and
Blei, 2014; Ranganath et al., 2016).
While traditional stochastic variational inference (SVI) (Hoffman et al., 2013) was limited to conditionally conjugate Bayesian models, black box variational inference (Ranganath et al., 2014) (BBVI)
enables SVI on a large class of models by expressing the gradient as an expectation, and estimating it
by Monte-Carlo sampling. A similar version of this method relies on reparametrized gradients and
has a lower variance (Salimans and Knowles, 2013; Kingma and Welling, 2014; Rezende et al., 2014;
Ruiz et al., 2016). BBVI paved the way for approximate inference in complex and deep generative
models (Kingma and Welling, 2014; Rezende et al., 2014; Ranganath et al., 2015; Bamler and Mandt,
2017).
Before the advent of BBVI, divergence measures other than the KL divergence had been of limited
practical use due to their complexity in both mathematical derivation and computation (Minka, 2005),
but have since then been revisited. Alpha-divergences (Hernandez-Lobato et al., 2016; Dieng et al.,
2017; Li and Turner, 2016) achieve a better matching of the variational distribution to different regions
of the posterior and may be tuned to either fit its dominant mode or to cover its entire support. The
problem with reparameterizing the gradient of the alpha-divergence is, however, that the resulting
gradient estimates have large variances. It is therefore desirable to find other divergence measures
with low-variance reparameterization gradients.
* Equal contributions. First authorship determined by coin flip among the first two authors.
In this paper, we develop a new variational bound based on concepts from perturbation theory of
statistical physics. Our contributions are as follows.
• We establish a view of black box variational inference with generalized divergences as a form
of biased importance sampling (Section 3.1). The choice of divergence allows us to trade off
between a low-variance stochastic gradient with a loose bound, and a tight variational bound
with higher-variance Monte-Carlo gradients. As we explain below, importance sampling
and point estimation are at opposite sides of this spectrum.

• We use these insights to construct a new variational bound with favorable properties. Based
on perturbation theory of statistical physics, we derive a new variational bound (Section 3.2)
which has a small variance and which is tighter compared to the KL bound. The bound is
easy to optimize and contains perturbative corrections around the mean-field solution.
In our experiments (Section 4), we find that the new bound is more mass-covering than the KL bound,
while its variance is much smaller than that of alpha-divergences, which have a similar mass-covering effect.
2 Related work
Our approach is related to BBVI, VI with generalized divergences, and variational perturbation theory.
We thus briefly discuss related work in these three directions.
Black box variational inference (BBVI). BBVI has already been addressed in the introduction (Salimans and Knowles, 2013; Kingma and Welling, 2014; Rezende et al., 2014; Ranganath et al., 2014;
Ruiz et al., 2016); it enables variational inference for many models. Our work builds upon BBVI in that
BBVI makes a large class of new divergence measures between the posterior and the approximating
distribution tractable. Depending on the divergence measure, BBVI may suffer from high-variance
stochastic gradients. This is a practical problem that we aim to improve in this paper.
Generalized divergences measures. Our work connects to generalized information-theoretic divergences (Amari, 2012). Minka (2005) introduced a broad class of divergences for variational
inference, including alpha-divergences. Most of these divergences have been intractable in large-scale
applications until the advent of BBVI. In this context, alpha-divergences were first suggested by
Hernandez-Lobato et al. (2016) for local divergence minimization, and later for global minimization
by Li and Turner (2016) and Dieng et al. (2017). As we show in this paper, alpha-divergences have the
disadvantage of inducing high-variance gradients, since the ratio between posterior and variational
distribution enters polynomially instead of logarithmically. In contrast, our approach leads to a more
stable inference scheme in high dimensions.
Variational perturbation theory. Our work also relates to variational perturbation theory. Perturbation theory refers to a set of methods that aim to truncate a typically divergent power series to a
convergent series. In machine learning, these approaches have been addressed from an information-theoretic perspective by Tanaka (1999, 2000). Thouless-Anderson-Palmer (TAP) equations (Thouless
et al., 1977) are a special form of second-order perturbation theory and were originally developed in
statistical physics. TAP equations are aimed at including perturbative corrections to the mean-field
solution of Ising models. They have been adopted into Bayesian inference in (Plefka, 1982) and
were advanced by many authors (Kappen and Wiegerinck, 2001; Paquet et al., 2009; Opper et al.,
2013; Opper, 2015). In variational inference, perturbation theory yields extra terms to the mean-field
variational objective which are difficult to calculate analytically. This may be a reason why the methods
discussed are not widely adopted by practitioners. In this paper, we emphasize the ease of including
perturbative corrections in a black box variational inference framework. Furthermore, in contrast
to earlier formulations, our approach yields a strict lower bound to the marginal likelihood which
can be conveniently optimized. Our approach also differs from traditional variational perturbation
formulations, because variational perturbation theory (Kleinert, 2009) generally does not result in a
bound.
3 Method
In this section, we present our main contributions. We first present our view of black box variational
inference (BBVI) as a form of biased importance sampling in Section 3.1. With this view, we bridge
the gap between variational inference and importance sampling. In Section 3.2, we introduce our new
variational bound, and analyze its properties further in Section 3.3.
[Figure 1 plots three choices for f in Eq. 3 as functions of x.]

Figure 1: Different choices for f in Eq. 3. KLVI corresponds to f(x) = log(x) + const. (red), and
importance sampling to f(x) = x (black). Our proposed PVI bound uses f_{V0} (green) as specified in
Eq. 6, which lies between KLVI and importance sampling (we set V0 = 0 for PVI here).

[Figure 2 plots a bimodal target density p(z) together with Gaussian fits q(z) from PVI, KLVI (α → 1), and alpha-VI with α = 0.2 and α = 2.]

Figure 2: Behavior of different VI methods on fitting a univariate Gaussian to a bimodal target
distribution (black). PVI (proposed, green) covers more of the mass of the entire distribution than the
traditional KLVI (red). Alpha-VI is mode-seeking for large α and mass-covering for smaller α.

[Figure 3 plots the average gradient variance (log scale, spanning roughly 10^-7 to 10^23) against the number N of latent variables (1 to 200) for alpha-divergences with α = 0.2, 0.5, and 2, and for PVI.]

Figure 3: Sampling variance of the stochastic gradient (averaged over its components) in the optimum,
for alpha-divergences (orange, purple, gray) and the proposed PVI (green). The variance grows
exponentially with the latent dimension N for alpha-VI, and only algebraically for PVI.

3.1 Black Box Variational Inference as Biased Importance Sampling
Consider a probabilistic model with data x, latent variables z, and joint distribution p(x, z). We
are interested in the posterior distribution over the latent variables, p(z|x) = p(x, z)/p(x). This
involves the intractable marginal likelihood p(x). In variational inference (Jordan et al., 1999), we
instead minimize a divergence measure between a variational distribution q(z; λ) and the posterior.
Here, λ are the parameters of the variational distribution, and the task is to find the parameters λ* that
minimize the distance to the posterior. This is equivalent to maximizing a lower bound on the marginal
likelihood.

We call the difference between the log variational distribution and the log joint distribution the
interaction energy,

V(z; λ) = log q(z; λ) − log p(x, z).    (1)

We use V or V(z) interchangeably to denote V(z; λ) when more convenient. Using this notation,
the marginal likelihood is

p(x) = E_{q(z)}[e^{−V(z)}].    (2)
We call e^{−V(z)} = p(x, z)/q(z) the importance ratio, since sampling from q(z) to estimate the right-hand side of Eq. 2 is equivalent to importance sampling. As this is inefficient in high dimensions, we
resort to variational inference. To this end, let f(·) be any concave function defined on the positive
reals. We assume furthermore that f(x) ≤ x for all x > 0. Applying Jensen's inequality,
we can lower-bound the marginal likelihood,

p(x) ≥ f(p(x)) ≥ E_{q(z)}[f(e^{−V(z;λ)})] ≡ L_f(λ).    (3)
We call this bound the f-ELBO, in analogy to the evidence lower bound (ELBO) used in Kullback-Leibler variational inference (KLVI). Figure 1 shows exemplary choices of f. We maximize L_f(λ)
using reparameterization gradients, where the bound is not computed analytically, but rather its
gradients are estimated by sampling from q(z) (Kingma and Welling, 2014). This leads to a stochastic
gradient descent scheme, where the noise is a result of the Monte-Carlo estimation of the gradients.
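For concreteness, below is a minimal PyTorch sketch of this reparameterized Monte-Carlo estimator for a diagonal Gaussian q(z; λ); log_joint is an assumed user-supplied function returning log p(x, z) for a batch of samples. For f = log one would use −V directly rather than log(exp(−V)), and for f close to the identity the exp(−V) term can overflow, which is exactly the variance issue discussed here.

import torch

def f_elbo(f, log_joint, mu, log_sigma, num_samples=64):
    eps = torch.randn(num_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps              # reparameterization trick
    log_q = torch.distributions.Normal(
        mu, log_sigma.exp()).log_prob(z).sum(-1)
    V = log_q - log_joint(z)                    # interaction energy, Eq. 1
    return f(torch.exp(-V)).mean()              # Monte-Carlo estimate of Eq. 3

Maximizing the returned value by gradient ascent on mu and log_sigma (e.g., via torch.optim.Adam) implements BBVI for the chosen f.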
Our approach builds on the insight that black box variational inference is a type of biased importance
sampling, where we estimate a lower bound of the marginal likelihood by sampling from a proposal
distribution, iteratively improving this distribution. The approach is biased, since we do not estimate
the exact marginal likelihood but only a lower bound to this quantity. As we argue below, the introduced
bias allows us to estimate this bound more easily, because we decrease the variance of this estimator.
The choice of the function f thereby trades off between bias and variance in the following way:
• For f = id, the identity, we obtain importance sampling (see the black line in
Figure 1). In this case, Eq. 3 does not depend on the variational parameters, so there is
nothing to optimize and we can directly sample from any proposal distribution q. Since
the expectation under q of the importance ratio e^{−V(z)} gives the exact marginal likelihood,
there is no bias. For a large number of latent variables, the importance ratio e^{−V(z)} becomes
tightly peaked around the minimum of the interaction energy V, resulting in a very high
variance of this estimator. Importance sampling is therefore at one extreme end of the
bias-variance spectrum.
• For f = log, we obtain the familiar Kullback-Leibler (KL) bound (see the pink line
in Figure 1; here we add a constant of 1 for comparison, which does not influence the
optimization). Since f(e^{−V(z)}) = −V(z), the bound is

L_KL(λ) = E_{q(z)}[−V(z)] = E_{q(z)}[log p(x, z) − log q(z)].    (4)

The Monte-Carlo estimate of E_q[−V] has a much smaller variance than that of E_q[e^{−V}], implying
efficient learning (Bottou, 2010). However, by replacing e^{−V} with −V we introduce a bias.
We can further trade off less variance for more bias by dropping the entropy term. A flexible
enough variational distribution will then shrink to zero variance, which completely eliminates the
noise. This is equivalent to point estimation, and lies at the opposite end of the bias-variance
spectrum.
• Now, consider any f that lies between the logarithm and the identity, for example the green
line in Figure 1 (this is the bound that we will propose in Section 3.2). The more similar
f is to the identity, the less biased is our estimate of the marginal likelihood, but the larger
the variance. Conversely, the more f behaves like the logarithm, the easier it is to estimate
f(e^{−V(z)}) by sampling, while at the same time the bias grows.
One example of alternative divergences to the KL divergence that have been discussed in the literature
are alpha-divergences (Minka, 2005; Hernandez-Lobato et al., 2016; Li and Turner, 2016; Dieng
et al., 2017). Up to a constant, they correspond to the following choice of f:

f^{(α)}(e^{−V}) ∝ e^{−(1−α)V}.    (5)
The parameter α determines the distance to the importance sampling case (α = 0). As α approaches
1 from below, this bound leads to a better-behaved estimation problem for the Monte-Carlo gradient.
However, unless one takes the limit α → 1 (where the objective becomes the KL bound), V still
enters exponentially in the bound. As we show, this leads to a high variance of the gradient estimator
in high dimensions (see Figure 3, discussed below). The alpha-divergence bound is therefore similarly
hard to estimate as the marginal likelihood in importance sampling.
Our analysis relies on the observation that expectations of exponentials in V are difficult to estimate,
and expectations of polynomials in V are easy to estimate. We derive a new variational bound which
is a polynomial in V and at the same time results in a tighter bound than the KL-bound.
3.2 Perturbative Black Box Variational Inference
We now propose a new bound based on the considerations outlined above. This bound is tighter and
more mass-covering than the KL bound, and its gradients are easy to estimate via the reparameterization
approach. Since V never appears in the exponent, the reparameterization gradients have a lower
variance than for unbiased alpha divergences.
We construct a function f with a free parameter V0 , where V enters only polynomially:
f_{V0}(e^{−V}) = e^{−V0} [1 + (V0 − V) + (1/2)(V0 − V)² + (1/6)(V0 − V)³].    (6)
This function f is a third-order Taylor expansion of the importance ratio e^{−V} in the interaction energy
V around the reference energy V0 ∈ R. We introduce V0 so that any additive constant in the log-joint
distribution can be absorbed into V0. It is easy to see that f_{V0}(·) is concave for any choice of V0,
and that its graph lies below the identity function (see proof in Section 3.3). Thus, f_{V0}(·) meets all
conditions for Eq. 3 to hold, and we have p(x) ≥ L_{f_{V0}}(λ) for all V0 and λ. We find the optimal
values for the reference energy V0 and the variational parameters λ by simultaneously maximizing
L_{f_{V0}}(λ) over both V0 and λ. We call L_{f_{V0}}(λ) the perturbative variational lower bound and name the
method perturbative variational inference (PVI). The resulting variational lower bound is
L_{f_{V0}}(λ) = e^{−V0} E_q[1 + (V0 − V) + (1/2)(V0 − V)² + (1/6)(V0 − V)³].    (7)
Using the reparameterization gradient representation, we can easily take gradients with respect to
the variational parameters. The fact that V enters only polynomially and not exponentially leads to a
low-variance stochastic gradient. This is in contrast to the alpha-divergence bound (Eq. 5), where V
enters exponentially.
As a technical note, the factor e^{−V0} is not a function of the latent variables and does not contribute
to the variance; however, it may lead to numerical underflow or overflow in large models. This can
be easily mitigated by considering the surrogate objective L̃_{f_{V0}}(λ) ≡ e^{V0} L_{f_{V0}}(λ). The gradients
with respect to λ of L_{f_{V0}}(λ) and L̃_{f_{V0}}(λ) are equal up to a positive prefactor, so we can replace
the former with the latter in gradient descent. The gradient with respect to V0 satisfies ∂L_{f_{V0}}(λ)/∂V0 ∝
∂L̃_{f_{V0}}(λ)/∂V0 − L̃_{f_{V0}}(λ). Using the surrogate L̃_{f_{V0}}(λ) avoids numerical underflow or overflow, as
well as exponentially increasing or decreasing gradients.
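A minimal PyTorch sketch of this surrogate is shown below. The factor exp(V0.detach() − V0) is our own implementation device, not part of the paper: its forward value is 1, so the objective stays free of e^{−V0} under/overflow, while autograd then produces the V0-gradient ∂L̃/∂V0 − L̃ stated above (and the usual λ-gradient of L̃), i.e., the gradients of the true bound up to the positive prefactor e^{V0}.

import torch

def pvi_surrogate(log_joint, mu, log_sigma, V0, num_samples=64):
    eps = torch.randn(num_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps
    log_q = torch.distributions.Normal(
        mu, log_sigma.exp()).log_prob(z).sum(-1)
    V = log_q - log_joint(z)
    d = V0 - V
    # Third-order series of Eq. 7, without the e^{-V0} prefactor:
    L_tilde = (1 + d + d**2 / 2 + d**3 / 6).mean()
    # Forward value of this factor is 1; it only corrects the V0 gradient.
    return L_tilde * torch.exp(V0.detach() - V0)

# Usage sketch: make mu, log_sigma, V0 tensors with requires_grad=True and
# maximize jointly, e.g. opt = torch.optim.Adam([mu, log_sigma, V0]);
# loss = -pvi_surrogate(log_joint, mu, log_sigma, V0).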
Figure 1 shows several choices for f that correspond to different divergences. The red curve shows
the logarithm, corresponding to the typical KL-divergence bound, while the black line shows the
importance sampling case. Our function f_{V0} corresponds to the green curve. We see that it lies
between the importance sampling case and the KL-divergence case, and it therefore has a lower bias
than KLVI. Note that in this example we have set the reference energy V0 to zero.
In Figure 2, we fit a Gaussian distribution to a one-dimensional bimodal target distribution (black line)
using different divergences. Compared to KLVI (pink line), alpha-divergences are more mode-seeking
(purple line) for large values of α, and more mass-covering (orange line) for small α (Li and Turner,
2016). Our PVI bound (green line) achieves a mass-covering effect similar to alpha-divergences, but
with associated low-variance reparameterization gradients. This is also seen in Figure 3, discussed in
Section 4.2, which compares the gradient variances of alpha-VI and PVI as a function of the number of dimensions.
3.3 Theoretical Considerations
We conclude the presentation of the PVI bound by exploring several aspects theoretically. We
generalize the perturbative expansion to all odd orders and recover the KL-bound as the first order.
We also show that the proposed PVI method does not result in a trivial bound. In addition, we show
in the supplement that PVI implicitly minimizes a valid divergence from q to the true posterior.
Generalization to all odd orders. Eq. 6 defines f_{V0}(e^{−V}) as the third-order Taylor expansion of
e^{−V} in V around V0. We generalize this definition to a general order n, and define, for x > 0,

f^{(n)}_{V0}(x) ≡ e^{−V0} Σ_{k=0}^{n} (V0 + log x)^k / k!.    (8)

This includes Eq. 6 in the case n = 3. It turns out that L_{f^{(n)}_{V0}}(λ) is a lower bound for all odd n,
because f^{(n)}_{V0} is concave and lies below the identity function for all x. To see this, note that the second
derivative ∂²f^{(n)}_{V0}(x)/∂x² = −e^{−V0} (V0 + log x)^{n−1} / ((n−1)! x²) is non-positive everywhere for
odd n. Therefore, the function is concave. Next, consider the function g(x) = f^{(n)}_{V0}(x) − x, which
has a stationary point at x = x0 ≡ e^{−V0}. Since g is also concave, x0 is a global maximum, and thus
g(x) ≤ g(x0) = 0 for all x, implying that f^{(n)}_{V0}(x) ≤ x. Thus, for odd n, the function f^{(n)}_{V0} satisfies
all requirements for Eq. 3, and L_{f^{(n)}_{V0}}(λ) ≡ E_q[f^{(n)}_{V0}(e^{−V})] is a lower bound on the model evidence.
Note that an even order n does not lead to a valid concave function.
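The general surrogate in Eq. 8 is straightforward to evaluate; a small Python sketch (names ours):

import math

def f_n(x, V0, n):
    """Order-n surrogate f^(n)_{V0}(x) from Eq. 8; n should be odd."""
    assert n % 2 == 1, "even orders do not give a concave lower bound"
    a = V0 + math.log(x)
    return math.exp(-V0) * sum(a**k / math.factorial(k) for k in range(n + 1))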
First order: KLVI. For n = 1, the lower bound is

L_{f^{(1)}_{V0}}(λ) = e^{−V0} (1 + V0 − E_q[V]) = e^{−V0} (1 + V0 + E_q[log p(x, z) − log q(z; λ)]).    (9)

Maximizing this bound over λ is equivalent to maximizing the evidence lower bound (ELBO) of
traditional KLVI. Apart from a positive prefactor, the reference energy V0 has no influence on the
gradient of the first-order lower bound with respect to λ. This is why one can safely ignore V0 in
traditional KLVI. As a matter of fact, optimizing over V0 results here exactly in the KL bound.
Third order: Non-triviality of the bound. When we go beyond a first-order Taylor expansion, the
lower bound is no longer invariant under shifts of the interaction energy V, and we can no longer
ignore the reference energy V0. For n = 3, we obtain the proposed PVI lower bound; see Eq. 7.
[Figure 4 shows two panels, (a) KLVI and (b) PVI, each plotting the observations, the posterior mean, the analytic three-standard-deviation band, and the inferred three-standard-deviation band.]

Figure 4: Gaussian process regression on synthetic data (green dots). Three standard deviations are shown in
varying shades of orange. The blue dashed lines show three standard deviations of the true posterior. The red
dashed lines show the inferred three standard deviations using KLVI (a) and PVI (b). We can see that the results
from our proposed PVI are close to the analytic solution, while traditional KLVI underestimates the variances.
Method      Avg variance
Analytic    0.0415
KLVI        0.0176
PVI         0.0355

Table 1: Average variances across training examples in the synthetic data experiment. The
closer to the analytic solution, the better.
Data set    KLVI    PVI
Crab        0.22    0.11
Pima        0.245   0.240
Heart       0.148   0.1333
Sonar       0.212   0.1731

Table 2: Error rate of GP classification on the test set. The
lower the better. Our proposed PVI consistently obtains
better classification results.
Since the model evidence p(x) is always positive, a lower bound would be useless if it were negative. We
show that, once the inference algorithm has converged, the bound at the optimum is always positive.
At the optimum, all gradients vanish. By setting the derivative with respect to V0 of the right-hand
side of Eq. 7 to zero, we find that E_{q*}[(V0* − V)³] = 0, where q* ≡ q(z; λ*) and V0* denote the
variational distribution and the reference energy at the optimum, respectively. This means that the
third-order term of the lower bound vanishes at the optimum. We rewrite the remaining terms as
follows, which shows that the bound at the optimum is always positive:

L_{V0*}(λ*) = e^{−V0*} E_{q*}[1 + (V0* − V) + (1/2)(V0* − V)²] = (e^{−V0*}/2) E_{q*}[(1 + V0* − V)² + 1] > 0.
4
Experiments
We evaluate PVI with different models. First we investigate the behavior of the new bound in a
controlled setup of Gaussian processes on synthetic data (Section 4.1). We then evaluate PVI based
on a classification task using Gaussian processes classifiers, where we use data from the UCI machine
learning repository (Section 4.2). This is a Bayesian non-conjugate setup where black box inference
is required. Finally, we use an experiment with the variational autoencoder (VAE) to explore our
approach on a deep generative model (Section 4.3). This experiment is carried out on MNIST data.
Across all the experiments, PVI demonstrates advantages based on different metrics.
4.1 GP Regression on Synthetic Data
In this section, we inspect the inference behavior using a synthetic data set with Gaussian processes
(GP). We generate the data according to a Gaussian noise distribution centered around a mixture of
sinusoids, and sample 50 data points (green dots in Figure 4). We then use a GP to model the data, thus assuming the generative process $f \sim \mathcal{GP}(0, K)$ and $y_i \sim \mathcal{N}(f_i, \epsilon)$.
We first compute an analytic solution of the posterior of the GP (three standard deviations shown in
blue dashed lines) and compare it to approximate posteriors obtained by KLVI (Figure 4 (a)) and the
proposed PVI (Figure 4 (b)). The results from PVI are almost identical to the analytic solution. In
contrast, KLVI underestimates the posterior variance. This is consistent with Table 1, which shows
the average diagonal variances. PVI results are much closer to the exact posterior variances.
4.2 Gaussian Process Classification
We evaluate the performance of PVI and KLVI on a GP classification task. Since the model is non-conjugate, no analytical baseline is available in this case. We model the data with the following generative process:
$$f \sim \mathcal{GP}(0, K), \qquad z_i = \sigma(f_i), \qquad y_i \sim \mathrm{Bern}(z_i).$$
Above, $K$ is the GP kernel, $\sigma$ indicates the sigmoid function, and $\mathrm{Bern}$ indicates the Bernoulli distribution. We furthermore use the Matérn-3/2 kernel,
$$K_{ij} = k(x_i, x_j) = s^2 \Big(1 + \frac{\sqrt{3}\, r_{ij}}{l}\Big) \exp\Big(-\frac{\sqrt{3}\, r_{ij}}{l}\Big), \qquad r_{ij} = \sqrt{(x_i - x_j)^\top (x_i - x_j)}.$$
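A minimal NumPy sketch of this kernel (variable names are ours; the paper does not provide code):

```python
import numpy as np

def matern32(X1, X2, s=1.0, l=1.0):
    # pairwise Euclidean distances r_ij = ||x_i - x_j||
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    r = np.sqrt(np.maximum(d2, 0.0))
    a = np.sqrt(3.0) * r / l
    return s ** 2 * (1.0 + a) * np.exp(-a)

X = np.random.randn(50, 2)
K = matern32(X, X, s=1.0, l=np.sqrt(X.shape[1]) / 2)   # settings from the text
print(K.shape)   # (50, 50), symmetric and positive semi-definite
```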
Data. We use four data sets from the UCI machine learning repository, suitable for binary classification:
Crab (200 datapoints), Pima (768 datapoints), Heart (270 datapoints), and Sonar (208 datapoints). We
randomly split each of the data sets into two halves: one half is used for training and the other half is used for testing. We set the hyperparameters $s = 1$ and $l = \sqrt{D}/2$ throughout all experiments, where $D$ is the dimensionality of the input $x$.
Table 2 shows the classification performance (error rate) for these data sets. Our proposed PVI
consistently performs better than the traditional KLVI.
Convergence speed comparison. We also carry out a comparison in terms of speed of convergence,
focusing on PVI and alpha-divergence VI. Our results indicate that the smaller variance of the
reparameterization gradient leads to faster convergence of the optimization algorithm.
We train the GP classifier from Section 4.2 on the Sonar UCI data set using a constant learning rate.
Figure 5 shows the test log-likelihood under the posterior mean as a function of training iterations.
We split the data set into equally sized training, validation, and test sets. We then tune the learning
rate and the number of Monte Carlo samples per gradient step to obtain optimal performance on the
validation set after minimizing the alpha-divergence with a fixed budget of random samples. We use
α = 0.5 here; smaller values of α lead to even slower convergence. We optimize the PVI lower bound
using the same learning rate and number of Monte Carlo samples. The final test error rate is 22% on
an approximately balanced data set. PVI converges an order of magnitude faster.
Figure 3 in Section 3 provides more insight into the scaling of the gradient variance. Here, we fit GP regression models on synthetically generated data by maximizing the PVI lower bound and the alpha-VI lower bound with $\alpha \in \{0.2, 0.5, 2\}$. We generate a separate synthetic data set for each $N \in \{1, \ldots, 200\}$ by drawing $N$ random data points around a sinusoidal curve. For each $N$, we fit a one-dimensional GP regression with PVI and alpha-VI, respectively, using the same data set for both methods. The variational distribution is a fully factorized Gaussian with $N$ latent variables. After convergence, we estimate the sampling variance of the gradient of each lower bound with respect to the posterior mean. We calculate the empirical variance of the gradient based on $10^5$ samples from $q$, and we average over the $N$ coordinates. Figure 3 shows the average sampling variance as a function of $N$ on a logarithmic scale. The variance of the gradient of the alpha-VI bound grows exponentially in the number of latent variables. By contrast, we find only algebraic growth for PVI.
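The empirical variance estimate can be sketched as follows; the toy objective `bound` is a hypothetical stand-in for the PVI or alpha-VI lower bound, and the procedure (many reparameterized gradient samples, variance averaged over coordinates) mirrors the description above:

```python
import torch

def bound(mu, eps):
    z = mu + eps                  # reparameterization with unit scale (toy choice)
    return -(z ** 2).sum()        # stand-in objective; swap in PVI / alpha-VI here

def gradient_variance(mu, n_samples=1000):
    grads = []
    for _ in range(n_samples):
        mu_ = mu.clone().requires_grad_(True)
        bound(mu_, torch.randn_like(mu_)).backward()
        grads.append(mu_.grad)
    g = torch.stack(grads)        # (n_samples, dim)
    return g.var(dim=0).mean()    # empirical variance, averaged over coordinates

print(gradient_variance(torch.zeros(5)))
```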
4.3 Variational Autoencoder
We experiment on Variational Autoencoders (VAEs), and we compare the PVI and the KLVI bound in
terms of predictive likelihoods on held-out data (Kingma and Welling, 2014). Autoencoders compress
unlabeled training data into low-dimensional representations by fitting an encoder-decoder model that maps the data to itself. These models are prone to learning the identity function when
the hyperparameters are not carefully tuned, or when the network is too expressive, especially for
a moderately sized training set. VAEs are designed to partially avoid this problem by estimating
the uncertainty that is associated with each data point in the latent space. It is therefore important
that the inference method does not underestimate posterior variances. We show that, for small data
sets, training a VAE by maximizing the PVI lower bound leads to higher predictive likelihoods than
maximizing the KLVI lower bound.
We train the VAE on the MNIST data set of handwritten digits (LeCun et al., 1998). We build on
the publicly available implementation by Burda et al. (2016) and also use the same architecture and
hyperparameters, with L = 2 stochastic layers and K = 5 samples from the variational distribution
per gradient step. The model has 100 latent units in the first stochastic layer and 50 latent units in the
second stochastic layer.
[Figures 5 and 6, plot area: the left panel shows normalized test log-likelihood under the posterior mean versus training iteration (0 to 8 × 10^4) for PVI (proposed) and α-VI with α = 0.5; the right panel shows test-set log-likelihood versus training-set size (10^2 to 10^5, log scale) for PVI (proposed) and KLVI.]

Figure 5: Test log-likelihood (normalized by the number of test points) as a function of training iterations using GP classification on the Sonar data set. PVI converges faster than alpha-VI even though we tuned the number of Monte Carlo samples per training step (100) and the constant learning rate ($10^{-5}$) so as to maximize the performance of alpha-VI on a validation set.

Figure 6: Predictive likelihood of a VAE trained on different sizes of the data. The training data are randomly sampled subsets of the MNIST training set. The higher value the better. Our proposed PVI method outperforms KLVI mainly when the size of the training data set is small. The fewer the training data, the more advantage PVI obtains.
The VAE model factorizes over all data points. We train it by stochastically maximizing the sum
of the PVI lower bounds for all data points using a minibatch size of 20. The VAE amortizes the
gradient signal across data points by training inference networks. The inference networks express the
mean and variance of the variational distribution as a function of the data point. We add an additional
inference network that learns the mapping from a data point to the reference energy V0 . Here, we use
a network with four fully connected hidden layers of 200, 200, 100, and 50 units, respectively.
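A sketch of this additional inference network (the layer sizes follow the text; the 784-dimensional MNIST input and the stand-in code are our assumptions, not the authors' implementation):

```python
import torch

v0_net = torch.nn.Sequential(
    torch.nn.Linear(784, 200), torch.nn.ReLU(),
    torch.nn.Linear(200, 200), torch.nn.ReLU(),
    torch.nn.Linear(200, 100), torch.nn.ReLU(),
    torch.nn.Linear(100, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 1),
)
x = torch.rand(20, 784)        # a minibatch of flattened MNIST images
V0 = v0_net(x).squeeze(-1)     # one reference energy per data point
print(V0.shape)                # torch.Size([20])
```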
MNIST contains 60,000 training images. To test our approach on smaller-scale data where Bayesian
uncertainty matters more, we evaluate the test likelihood after training the model on randomly
sampled fractions of the training set. We use the same training schedules as in the publicly available
implementation, keeping the total number of training iterations independent of the size of the training
set. Unlike the original implementation, we shuffle the training set before each training epoch, as this turns out to increase performance for both our method and the baseline.
Figure 6 shows the predictive log-likelihood of the whole test set, where the VAE is trained on random
subsets of different sizes of the training set. We use the same subset to train with PVI and KLVI for
each training set size. PVI leads to a higher predictive likelihood than traditional KLVI on subsets of
the data. We explain this finding with our observation that the variational distributions obtained from
PVI capture more of the posterior variance. As the size of the training set grows (and the posterior uncertainty decreases), the performance of KLVI catches up with PVI.
As a potential explanation for why PVI converges to the KLVI result for large training sets, we note that $\mathbb{E}_{q^*}[(V_0^* - V)^3] = 0$ at the optimal variational distribution $q^*$ and reference energy $V_0^*$ (see Section 3.3). If $V$ becomes a symmetric random variable (such as a Gaussian) in the limit of a large training set, then this implies that $\mathbb{E}_{q^*}[V] = V_0^*$, and PVI reduces to KLVI for large training sets.
5 Conclusion
We first presented a view on black box variational inference as a form of biased importance sampling,
where we can trade off bias against variance through the choice of divergence. Bias refers to the deviation
of the bound from the true marginal likelihood, and variance refers to its reparameterization gradient
estimator. We then propose a new variational bound that connects to variational perturbation theory,
and which includes corrections to the standard Kullback-Leibler bound. We showed both theoretically
and experimentally that our proposed PVI bound is tighter than the KL bound, and has lower-variance
reparameterization gradients compared to alpha-VI. In order to scale up our method to massive data
sets, future work will explore stochastic versions of PVI. Since the PVI bound contains interaction
terms between all data points, breaking it up into mini-batches is not straightforward. Furthermore,
the PVI and alpha-bounds can also be combined, such that PVI further approximates alpha-VI. This
could lead to promising results on large data sets where traditional alpha-VI is hard to optimize due to
its variance, and traditional PVI converges to KLVI. As a final remark, a tighter variational bound
is not guaranteed to always result in a better posterior approximation since the variational family
limits the quality of the solution. However, in the context of variational EM, where one performs
gradient-based hyperparameter optimization on the log marginal likelihood, a tighter bound gives
more reliable results.
8
References
Amari, S. (2012). Differential-geometrical methods in statistics, volume 28. Springer Science &
Business Media.
Bamler, R. and Mandt, S. (2017). Dynamic word embeddings. In ICML.
Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In COMPSTAT.
Springer.
Burda, Y., Grosse, R., and Salakhutdinov, R. (2016). Importance weighted autoencoders. In ICLR.
Dieng, A. B., Tran, D., Ranganath, R., Paisley, J., and Blei, D. M. (2017). Variational inference via ?
upper bound minimization. In ICML.
Hernandez-Lobato, J., Li, Y., Rowland, M., Bui, T., Hernández-Lobato, D., and Turner, R. (2016).
Black-box alpha divergence minimization. In ICML.
Hoffman, M. and Blei, D. (2014). Structured stochastic variational inference. CoRR abs/1404.4114.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. W. (2013). Stochastic variational inference.
JMLR, 14(1).
Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to variational
methods for graphical models. Machine learning, 37(2).
Kappen, H. J. and Wiegerinck, W. (2001). Second order approximations for probability models. In Advances in Neural Information Processing Systems. MIT Press.
Kingma, D. P. and Welling, M. (2014). Auto-encoding variational Bayes. In ICLR.
Kleinert, H. (2009). Path integrals in quantum mechanics, statistics, polymer physics, and financial
markets. World scientific.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11).
Li, Y. and Turner, R. E. (2016). Rényi divergence variational inference. In NIPS.
Minka, T. (2005). Divergence measures and message passing. Technical report, Technical report,
Microsoft Research.
Opper, M. (2015). Expectation propagation. In Krzakala, F., Ricci-Tersenghi, F., Zdeborova, L.,
Zecchina, R., Tramel, E. W., and Cugliandolo, L. F., editors, Statistical Physics, Optimization,
Inference, and Message-Passing Algorithms, chapter 9, pages 263-292. Oxford University Press.
Opper, M., Paquet, U., and Winther, O. (2013). Perturbative corrections for approximate inference in
gaussian latent variable models. JMLR, 14(1).
Paquet, U., Winther, O., and Opper, M. (2009). Perturbation corrections in approximate inference:
Mixture modelling applications. JMLR, 10(Jun).
Plefka, T. (1982). Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model. Journal of Physics A: Mathematical and General, 15(6):1971.
Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In AISTATS.
Ranganath, R., Tang, L., Charlin, L., and Blei, D. (2015). Deep exponential families. In AISTATS.
Ranganath, R., Tran, D., and Blei, D. (2016). Hierarchical variational models. In ICML.
Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate
inference in deep generative models. In ICML.
Ruiz, F., Titsias, M., and Blei, D. (2016). The generalized reparameterization gradient. In NIPS.
Salimans, T. and Knowles, D. A. (2013). Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4).
Tanaka, T. (1999). A theory of mean field approximation. In NIPS.
Tanaka, T. (2000). Information geometry of mean-field approximation. Neural Computation, 12(8).
Thouless, D., Anderson, P. W., and Palmer, R. G. (1977). Solution of 'Solvable model of a spin glass'. Philosophical Magazine, 35(3).
GibbsNet: Iterative Adversarial Inference for Deep
Graphical Models
Alex Lamb
R Devon Hjelm
Yaroslav Ganin
Aaron Courville
Joseph Paul Cohen
Yoshua Bengio
Abstract
Directed latent variable models that formulate the joint distribution as p(x, z) =
p(z)p(x | z) have the advantage of fast and exact sampling. However, these
models have the weakness of needing to specify p(z), often with a simple fixed
prior that limits the expressiveness of the model. Undirected latent variable models
discard the requirement that p(z) be specified with a prior, yet sampling from them
generally requires an iterative procedure such as blocked Gibbs-sampling that may
require many steps to draw samples from the joint distribution p(x, z). We propose
a novel approach to learning the joint distribution between the data and a latent
code which uses an adversarially learned iterative procedure to gradually refine the
joint distribution, p(x, z), to better match with the data distribution on each step.
GibbsNet offers the best of both worlds, in both theory and practice. Achieving the speed and simplicity of a directed latent variable model, it is guaranteed (assuming the adversarial game reaches the global minimum of its virtual training criterion) to produce samples from p(x, z) within only a few sampling iterations. Achieving the
expressiveness and flexibility of an undirected latent variable model, GibbsNet
does away with the need for an explicit p(z) and has the ability to do attribute
prediction, class-conditional generation, and joint image-attribute modeling in
a single model which is not trained for any of these specific tasks. We show
empirically that GibbsNet is able to learn a more complex p(z) and show that this
leads to improved inpainting and iterative refinement of p(x, z) for dozens of steps
and stable generation without collapse for thousands of steps, despite being trained
on only a few steps.
1 Introduction
Generative models are powerful tools for learning an underlying representation of complex data.
While early undirected models, such as Deep Boltzmann Machines or DBMs (Salakhutdinov and Hinton, 2009), showed great promise, practically they did not scale well to complicated high-dimensional
settings (beyond MNIST), possibly because of optimization and mixing difficulties (Bengio et al.,
2012). More recent work on Helmholtz machines (Bornschein et al., 2015) and on variational autoencoders (Kingma and Welling, 2013) borrow from deep learning tools and can achieve impressive
results, having now been adopted in a large array of domains (Larsen et al., 2015).
Many of the important generative models available to us rely on a formulation involving stochastic latent or hidden variables along with a generative relationship to the observed data. Arguably the simplest are directed graphical models (such as the VAE) with the factorized decomposition p(z, x) = p(z)p(x | z). In these, it is typical to assume that p(z) follows some factorized prior with simple statistics (such as a Gaussian). While sampling from directed models is simple, inference and
learning tends to be difficult and often requires advanced techniques such as approximate inference
using a proposal distribution for the true posterior.
[Figure 1 diagram: unclamped chain z_0 ∼ N(0, I), x_i ∼ p(x | z_i), z_{i+1} ∼ q(z | x_i), ..., z_N ∼ q(z | x_{N−1}), x_N ∼ p(x | z_N); clamped chain x_data ∼ q(x), ẑ ∼ q(z | x_data); both pairs feed the joint discriminator D(z, x).]

Figure 1: Diagram illustrating the training procedure for GibbsNet. The unclamped chain (dashed box) starts with a sample from an isotropic Gaussian distribution N(0, I) and runs for N steps. The last step (iteration N), shown as a solid pink box, is then compared with a single step from the clamped chain (solid blue box) using the joint discriminator D.
The other dominant family of graphical models are undirected graphical models, such that the
joint is represented by a product of clique potentials and a normalizing factor. It is common to
assume that the clique potentials are positive, so that the un-normalized density can be represented
by an energy function E, and the joint is represented by p(x, z) = e^{-E(z,x)}/Z, where Z is the
normalizing constant or partition function. These so-called energy-based models (of which the
Boltzmann Machine is an example) are potentially very flexible and powerful, but are difficult to
train in practice and do not seem to scale well. Note also how in such models, the marginal p(z) can
have a very rich form (as rich as that of p(x)).
The methods above rely on a fully parameterized joint distribution (and approximate posterior in
the case of directed models), to train with approximate maximum likelihood estimation (MLE,
Dempster et al., 1977). Recently, generative adversarial networks (GANs, Goodfellow et al., 2014)
have provided a likelihood-free solution to generative modeling that provides an implicit distribution
unconstrained by density assumptions on the data. In comparison to MLE-based latent variable
methods, generated samples can be of very high quality (Radford et al., 2015), and do not suffer
from well-known problems associated with parameterizing noise in the observation space (Goodfellow, 2016). Recently, there have been advances in incorporating latent variables in generative
adversarial networks in a way reminiscent of Helmholtz machines (Dayan et al., 1995), such as
adversarially learned inference (Dumoulin et al., 2017; Donahue et al., 2017) and implicit variational
inference (Huszár, 2017).
These models, being essentially complex directed graphical models, rely on approximate inference
to train. While potentially powerful, there is good evidence that using an approximate posterior
necessarily limits the generator in practice (Hjelm et al., 2016; Rezende and Mohamed, 2015). In
contrast, it would perhaps be more appropriate to start with inference (encoder) and generative
(decoder) processes and derive the prior directly from these processes. This approach, which we call
GibbsNet, uses these two processes to define a transition operator of a Markov chain similar to Gibbs
sampling, alternating between sampling observations and sampling latent variables. This is similar
to the previously proposed generative stochastic networks (GSNs, Bengio et al., 2013) but with a
GAN training framework rather than minimizing reconstruction error. By training a discriminator to
place a decision boundary between the data-driven distribution (with x clamped) and the free-running
model (which alternates between sampling x and z), we are able to train the model so that the two
joint distributions (x, z) match. This approach is similar to Gibbs sampling in undirected models,
yet, like traditional GANs, it lacks the strong parametric constraints, i.e., there is no explicit energy
function. While losing some of the theoretical simplicity of undirected models, we gain great flexibility
and ease of training. In summary, our method offers the following contributions:
- We introduce the theoretical foundation for a novel approach to learning and performing
inference in deep graphical models. The resulting model of our algorithm is similar to
undirected graphical models, but avoids the need for MLE-based training and also lacks an
explicitly defined energy, instead being trained with a GAN-like discriminator.
- We present a stable way of performing inference in the adversarial framework, meaning
that useful inference is performed under a wide range of architectures for the encoder and
decoder networks. This stability comes from the fact that the encoder q(z | x) appears
in both the clamped and the unclamped chain, so gets its training signal from both the
discriminator in the clamped chain and from the gradient in the unclamped chain.
- We show improvements in the quality of the latent space over models which use a simple
prior for p(z). This manifests itself in improved conditional generation. The expressiveness
of the latent space is also demonstrated in cleaner inpainting, smoother mixing when running
blocked Gibbs sampling, and better separation between classes in the inferred latent space.
- Our model has the flexibility of undirected graphical models, including the ability to do
label prediction, class-conditional generation, and joint image-label generation in a single
model which is not explicitly trained for any of these specific tasks. To our knowledge our
model is the first model which combines this flexibility with the ability to produce high
quality samples on natural images.
2 Proposed Approach: GibbsNet
The goal of GibbsNet is to train a graphical model with transition operators that are defined and learned directly, by matching the joint distribution of the model expectation with the joint distribution obtained when the observations are clamped to data. This is analogous to and inspired by undirected graphical models,
except that the transition operators, which correspond to blocked Gibbs sampling, are defined to
move along a defined energy manifold, so we will make this connection throughout our formulation.
We first explain GibbsNet in the simplest case where the graphical model consists of a single layer of
observed units and a single layer of latent variable with stochastic mappings from one to the other as
parameterized by arbitrary neural network. Like Professor Forcing (Lamb et al., 2016), GibbsNet
uses a GAN-like discriminator to make two distributions match, one corresponding to the model
iteratively sampling both observation, x, and latent variables, z (free-running), and one corresponding
to the same generative model but with the observations, x, clamped. The free-running generator is
analogous to Gibbs sampling in Restricted Boltzmann Machines (RBM, Hinton et al., 2006) or Deep
Boltzmann Machines (DBM, Salakhutdinov and Hinton, 2009). In the simplest case, the free-running
generator is defined by conditional distributions q(z|x) and p(x|z) which stochastically map back
and forth between data space x and latent space z.
To begin our free-running process, we start the chain with a latent variable sampled from a normal
distribution: z ∼ N(0, I), and follow this by N steps of alternating between sampling from p(x|z)
and q(z|x). For the clamped version, we do simple ancestral sampling from q(z|x), given xdata is
drawn from the data distribution q(x) (a training example). When the model has more layers (e.g., a
hierarchy of layers with stochastic latent variables, à la DBM), the data-driven model also needs to
iterate to correctly sample from the joint. While this situation highly resembles that of undirected
graphical models, GibbsNet is trained adversarially so that its free-running generative states become
indistinguishable from its data-driven states. In addition, while in principle undirected graphical
models need to either start their chains from data or sample a very large number of steps, we find
in practice GibbsNet only requires a very small number of steps (on the order of 3 to 5 with very
complex datasets) from noise.
An example of the free-running (unclamped) chain can be seen in Figure 2. An interesting aspect
of GibbsNet is that we found it sufficient, and in fact experimentally best, to back-propagate discriminator gradients through only a single step of the iterative procedure, yielding more stable training.
An intuition for why this helps is that each step of the procedure is supposed to generate increasingly
realistic samples. However, if we passed gradients through the iterative procedure, then this gradient
could encourage the earlier steps to store features which have downstream value instead of immediate
realistic x-values.
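To make the procedure concrete, the following is a minimal PyTorch sketch of one training step. The linear network shapes, the deterministic stand-in for p(x | z), and the plain non-saturating GAN loss are our simplifying assumptions; the paper's networks follow Dumoulin et al. (2017) and its objective is boundary-seeking (BGAN):

```python
import torch

x_dim, z_dim, n_steps = 784, 64, 3

encoder = torch.nn.Linear(x_dim, 2 * z_dim)    # parameters of q(z | x)
decoder = torch.nn.Sequential(torch.nn.Linear(z_dim, x_dim), torch.nn.Sigmoid())
discrim = torch.nn.Linear(x_dim + z_dim, 1)    # joint discriminator D(x, z)

def sample_q(x):
    # reparameterized sample from the stochastic encoder q(z | x)
    mu, log_var = encoder(x).chunk(2, dim=1)
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def unclamped_chain(batch_size):
    # z ~ N(0, I), then alternate p(x | z) and q(z | x); gradients flow
    # through the final step only, as described above
    z = torch.randn(batch_size, z_dim)
    with torch.no_grad():
        for _ in range(n_steps - 1):
            z = sample_q(decoder(z))
    x = decoder(z)                              # deterministic stand-in for p(x | z)
    return x, sample_q(x)

x_data = torch.rand(32, x_dim)                  # stand-in for a real minibatch
x_free, z_free = unclamped_chain(32)            # free-running joint sample
z_clamped = sample_q(x_data)                    # single clamped step

d_free = discrim(torch.cat([x_free, z_free], dim=1))
d_clamped = discrim(torch.cat([x_data, z_clamped], dim=1))
d_loss = (torch.nn.functional.softplus(-d_clamped).mean()
          + torch.nn.functional.softplus(d_free).mean())
print(d_loss.item())
```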
2.1 Theoretical Analysis
We consider a simple case of an undirected graph with single layers of visible and latent units trained
with alternating 2-step (p then q) unclamped chains and the asymptotic scenario where the GAN
objective is properly optimized. We then ask the following questions: in spite of training for a
Figure 2: Evolution of samples for 20 iterations from the unclamped chain, trained on the SVHN
dataset starting on the left and ending on the right.
bounded number of Markov chain steps, are we learning a transition operator? Are the encoder
and decoder estimating compatible conditionals associated with the stationary distribution of that
transition operator? We find positive answers to both questions.
A high level explanation of our argument is that if the discriminator is fooled, then the consecutive
(z, x) pairs from the chain match the data-driven (z, x) pair. Because the two marginals on x from
these two distributions match, we can show that the next z in the chain will form again the same joint
distribution. Similarly, we can show that the next x in the chain also forms the same joint with the
previous z. Because the state only depends on the previous value of the chain (as it?s Markov), then
all following steps of the chain will also match the clamped distribution. This explains the result,
validated experimentally, that even though we train for just a few steps, we can generate high quality
samples for thousands or more steps.
Proposition 1. If (a) the stochastic encoder q(z|x) and stochastic decoder p(x|z) inject noise such that the transition operator defined by their composition (p followed by q or vice-versa) allows for all possible x-to-x or z-to-z transitions (x → z → x or z → x → z), and if (b) those GAN objectives are properly trained in the sense that the discriminator is fooled in spite of having sufficient capacity and training time, then (1) the Markov chain which alternates the stochastic encoder followed by the stochastic decoder as its transition operator T (or vice-versa) has the data-driven distribution π_D as its stationary distribution π_T, and (2) the two conditionals q(z|x) and p(x|z) converge to compatible conditionals associated with the joint π_D = π_T.

Proof. When the stochastic decoder and encoder inject noise so that their composition forms a transition operator T with paths of non-zero probability from any state to any other state, then T is ergodic. So condition (a) implies that T has a stationary distribution π_T. The properly trained GAN discriminators for each of these two steps (condition (b)) force the matching of the distributions of the pairs (z_t, x_t) (from the generative trajectory) and (x, z) with x ∼ q(x), the data distribution, and z ∼ q(z | x), both pairs converging to the same data-driven distribution π_D. Because (z_t, x_t) has the same joint distribution as (z, x), it means that x_t has the same distribution as x. Since z ∼ q(z | x), when we apply q to x_t, we get z_{t+1}, which must form a joint (z_{t+1}, x_t) that has the same distribution as (z, x). Similarly, since we just showed that z_{t+1} has the same distribution as z and thus the same as z_t, if we apply p to z_{t+1}, we get x_{t+1}, and the joint (z_{t+1}, x_{t+1}) must have the same distribution as (z, x). Because the two pairs (z_t, x_t) and (z_{t+1}, x_{t+1}) have the same joint distribution π_D, the transition operator T, which maps samples (z_t, x_t) to samples (z_{t+1}, x_{t+1}), maps π_D to itself, i.e., π_D = π_T is both the data distribution and the stationary distribution of T, and result (1) is obtained. Now consider the "odd" pairs (z_{t+1}, x_t) and (z_{t+2}, x_{t+1}) in the generated sequences. Because of (1), x_t and x_{t+1} have the same marginal distribution π_D(x). Thus, when we apply the same q(z|x) to these x's, we obtain that (z_{t+1}, x_t) and (z_{t+2}, x_{t+1}) also have the same distribution. Following the same reasoning as for proving (1), we conclude that the associated transition operator T_odd also has π_D as its stationary distribution. So starting from z ∼ π_D(z) and applying p(x | z) gives an x such that the pair (z, x) has π_D as its joint distribution, i.e., π_D(z, x) = π_D(z) p(x | z). This means that p(x | z) = π_D(x, z)/π_D(z) is the x | z conditional of π_D. Since (z_t, x_t) also converges to the joint distribution π_D, we can apply the same argument when starting from an x ∼ π_D(x) followed by q, and obtain that π_D(z, x) = π_D(x) q(z | x), so q(z|x) = π_D(z, x)/π_D(x) is the z | x conditional of π_D. This proves result (2).
2.2 Architecture
GibbsNet always involves three networks: the inference network q(z|x), the generation network
p(x|z), and the joint discriminator. In general, our architecture for these networks closely follow
Dumoulin et al. (2017), except that we use the boundary-seeking GAN (BGAN, Hjelm et al., 2017)
as it explicitly optimizes on matching the opposing distributions (in this case, the model expectation
and the data-driven joint distributions), allows us to use discrete variables where we consider learning
graphs with labels or discrete attributes, and worked well across our experiments.
3 Related Work
Energy Models and Deep Boltzmann Machines The training and sampling procedure for generating from GibbsNet is very similar to that of a deep Boltzmann machine (DBM, Salakhutdinov and
Hinton, 2009): both involve blocked Gibbs sampling between observation- and latent-variable layers.
A major difference is that in a deep Boltzmann machine, the "decoder" p(x|z) and "encoder" p(z|x)
exactly correspond to conditionals of a joint distribution p(x, z), which is parameterized by an energy
function. This, in turn, puts strong constraints on the forms of the encoder and decoder.
In a restricted Boltzmann machine (RBM, Hinton, 2010), the visible units are conditionally independent given the hidden units on the adjacent layer, and likewise the hidden units are conditionally
independent given the visible units. This may force the layers close to the data to need to be nearly
deterministic, which could cause poor mixing and thus make learning difficult. These conditional
independence assumptions in RBMs and DBMs have been discussed before in the literature as a
potential weakness in these models (Bengio et al., 2012).
In our model, p(x|z) and q(z|x) are modeled by separate deep neural networks with no shared
parameters. The disadvantage is that the networks are over-parameterized, but this has the added
flexibility that these conditionals can be much deeper, can take advantage of all the recent advances
in deep architectures, and have fewer conditional independence assumptions than DBMs and RBMs.
Generative Stochastic Networks Like GibbsNet, generative stochastic networks (GSNs, Bengio
et al., 2013) also directly parameterizes a transition operator of a Markov chain using deep neural
networks. However, GSNs and GibbsNet have completely different training procedures. In GSNs,
the training procedure is based on an objective that is similar to de-noising autoencoders (Vincent
et al., 2008).
GSNs begin by drawing a sampling from the data, iteratively corrupting it, then learning a transition
operator which de-noises it (i.e., reverses that corruption), so that the reconstruction after k steps is
brought closer to the original un-corrupted input.
In GibbsNet, there is no corruption in the visible space, and the learning procedure never involves "walk-back" (de-noising) towards a real data point. Instead, the processes from and to data are modeled by different networks, and the constraint that the marginal p(x) matches the real distribution is imposed through the GAN loss on the joint distributions from the clamped and unclamped phases.
Non-Equilibrium Thermodynamics The Non-Equilibrium Thermodynamics method (Sohl-Dickstein et al., 2015) learns a reverse diffusion process against a forward diffusion process which starts from real data points and gradually injects noise until the data distribution matches an analytically tractable / simple distribution. This is similar to GibbsNet in that generation involves a stochastic
process which is initialized from noise, but differs in that Non-Equilibrium Thermodynamics is
trained using MLE and relies on noising + reversal for training, similar to GSNs above.
Generative Adversarial Learning of Markov Chains The Adversarial Markov Chain algorithm
(AMC, Song et al., 2017) learns a Markov chain over the data distribution in the visible space.
GibbsNet and AMC are related in that they both involve adversarial training and an iterative procedure
for generation. However there are major differences. GibbsNet learns deep graphical models with
latent variables, whereas the AMC method learns a transition operator directly in the visible space.
The AMC approach involves running chains which start from real data points and repeatedly apply
the transition operator, which is different from the clamped chain used in GibbsNet. The experiments
shown in Figure 3 demonstrate that giving the latent variables to the discriminator in our method has
a significant impact on inference.
Adversarially Learned Inference (ALI) Adversarially learned inference (ALI, Dumoulin et al.,
2017) learns to match the generative and inference joint distributions, p(x, z) and q(x, z) (which can be thought of as forward and backward models), with a discriminator, so that p(z)p(x | z) = q(x)q(z | x).
In the single latent layer case, GibbsNet also has forward and reverse models, p(x | z) and q(z | x).
The un-clamped chain is sampled as p(z), p(x | z), q(z | x), p(x | z), . . . and the clamped chain
is sampled as q(x), q(z | x). We then adversarially encourage the clamped chain to match the
equilibrium distribution of the unclamped chain. When the number of iterations is set to N = 1,
GibbsNet reduces to ALI. However, in the general setting of N > 1, GibbsNet should learn a richer
representation than ALI, as the prior, p(z), is no longer forced to be the simple one at the beginning
of the unclamped phase.
4 Experiments and Results
The goal of our experiments is to explore and give insight into the joint distribution p(x, z) learned
by GibbsNet and to understand how this joint distribution evolves over the course of the iterative
inference procedure. Since ALI is identical to GibbsNet when the number of iterative inference steps
is N = 1, results obtained with ALI serve as an informative baseline.
From our experiments, the clearest result (covered in detail below) is that the p(z) obtained with
GibbsNet can be more complex than in ALI (or other directed graphical models). This is demonstrated
directly in experiments with 2-D latent spaces and indirectly by improvements in classification when
directly using the variables q(z | x). We achieve strong improvements over ALI using GibbsNet even
when q(z | x) has exactly the same architecture in both models.
We also show that GibbsNet allows for gradual refinement of the joint, (x, z), in the sampling chain
q(z | x), p(x | z). This is a result of the sampling chain making small steps towards the equilibrium
distribution. This allows GibbsNet to gradually improve sampling quality when running for many
iterations. Additionally it allows for inpainting and conditional generation where the conditioning
information is not fixed during training, and indeed where the model is not trained specifically for
these tasks.
4.1 Expressiveness of GibbsNet's Learned Latent Variables
Latent structure of GibbsNet The latent variables from q(z | x) learned from GibbsNet are more
expressive than those learned with ALI. We show this in two ways. First, we train a model on the
MNIST digits 0, 1, and 9 with a 2-D latent space which allows us to easily visualize inference. As
seen in Figure 3, we show that GibbsNet is able to learn a latent space which is not Gaussian and has
a structure that makes the different classes well separated.
Semi-supervised learning Following from this, we show that the latent variables learned by
GibbsNet are better for classification. The goal here is not to show state of the art results on
classification, but instead to show that the requirement that p(z) be something simple (like a Gaussian,
as in ALI) is undesirable as it forces the latent space to be filled. This means that different classes
need to be packed closely together in that latent space, which makes it hard for such a latent space to
maintain the class during inference and reconstruction.
We evaluate this property on two datasets: Street View House Number (SVHN, Netzer et al., 2011)
and permutation invariant MNIST. In both cases we use the latent features q(z | x) directly from a
trained model, and train a 2-layer MLP on top of the latent variables, without passing gradient from
the classifier through to q(z | x). ALI and GibbsNet were trained for the same amount of time and
with exactly the same architecture for the discriminator, the generative network, p(x | z), and the
inference network, q(z | x).
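This evaluation protocol can be sketched as follows (the hidden size, optimizer settings, and the `sample_q` stub / 64-dimensional latent from the training sketch in Section 2 are our assumptions):

```python
import torch

clf = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(),   # 2-layer MLP on 64-d latents
    torch.nn.Linear(256, 10),
)
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)

def classifier_step(x, y):
    with torch.no_grad():          # no gradient flows back into q(z | x)
        z = sample_q(x)            # frozen encoder from the earlier sketch
    loss = torch.nn.functional.cross_entropy(clf(z), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```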
On permutation invariant MNIST, ALI achieves 91% test accuracy and GibbsNet achieves 97.7% test
accuracy. On SVHN, ALI achieves 66.7% test accuracy and GibbsNet achieves 79.6% test accuracy.
This does not demonstrate a competitive classifier in either case, but rather demonstrates that the
latent space inferred by GibbsNet keeps more information about its input image than the encoder
learned by ALI. This is consistent with the reported ALI reconstructions (Dumoulin et al., 2017) on
SVHN where the reconstructed image and the input image show the same digit roughly half of the
time.
We found that the ineffectiveness of ALI's inferred latent variables for classification is a fairly robust result that holds across a variety of architectures for the inference network. For example, with 1024 units, we varied the number of fully-connected layers in ALI's inference network between 2 and 8 and found that the classification accuracies on the MNIST validation set ranged from 89.4% to 91.0%. Using 6 layers with 2048 units on each layer and a 256-dimensional latent prior achieved 91.2% accuracy. This suggests that the weak performance of the latent variables for classification is due to ALI's prior, and is probably not due to a lack of capacity in the inference network.
Figure 3: Illustration of the distribution over inferred latent variables for real data points from the MNIST digits (0, 1, 9) learned with different models trained for roughly the same amount of time: GibbsNet with a deterministic decoder and the latent variables not given to the discriminator (a), GibbsNet with a stochastic decoder and the latent variables not given to the discriminator (b), ALI (c), GibbsNet with a deterministic decoder (f), GibbsNet with a stochastic decoder over two different runs (g and h), and GibbsNet with a stochastic decoder's inferred latent states at 1, 2, 3, and 15 steps into the unclamped chain (d, e, i, and j, respectively). Note that we continue to see refinement in the marginal distribution of z when running for far more steps (15) than we used during training (3).
4.2 Inception Scores
The GAN literature is limited in terms of quantitative evaluation, with none of the existing techniques
(such as inception scores) being satisfactory (Theis et al., 2015). Nonetheless, we computed inception
scores on CIFAR-10 using the standard method and code released by Salimans et al. (2016). In our
experiments, we compared the inception scores from samples from Gibbsnet and ALI on two tasks,
generation and inpainting.
Our conclusion from the inception scores (Table 1) is that GibbsNet slightly improves sample quality
but greatly improves the expressiveness of the latent space z, which leads to more detail being
preserved in the inpainting chain and a much larger improvement in inception scores in this setting.
The supplementary material includes examples of sampling and inpainting chains for both ALI and
GibbsNet which shows differences between sampling and inpainting quality that are consistent with
the inception scores.
Table 1: Inception Scores from different models. Inpainting results were achieved by fixing the left half of the image while running the chain for four steps. Sampling refers to unconditional sampling.

Source          Samples   Inpainting
Real Images     11.24     11.24
ALI (ours)      5.41      5.59
ALI (Dumoulin)  5.34      N/A
GibbsNet        5.69      6.15
Figure 4: CIFAR samples from methods which learn transition operators. Non-Equilibrium Thermodynamics (Sohl-Dickstein et al., 2015) after 1000 steps (left) and GibbsNet after 20 steps (right).
4.3 Generation, Inpainting, and Learning the Image-Attribute Joint Distribution
Generation Here, we compare generation on the CIFAR dataset against the Non-Equilibrium Thermodynamics method (Sohl-Dickstein et al., 2015), which also begins its sampling procedure from noise. We show in Figure 4 that, even with a relatively small number of steps (20) in its sampling procedure, GibbsNet outperforms the Non-Equilibrium Thermodynamics approach in sample quality, even when the latter runs for many more steps (1000).
Inpainting The inpainting that can be done with the transition operator in GibbsNet is stronger
than what can be done with an explicit conditional generative model, such as Conditional GANs,
which are only suited to inpainting when the conditioning information is known during training or there is a strong prior over what types of conditioning will be performed at test time. We show here
that GibbsNet performs more consistent and higher quality inpainting than ALI, even when the two
networks share exactly the same architecture for p(x | z) and q(z | x) (Figure 5), which is consistent
with our results on latent structure above.
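Inpainting with the learned transition operator amounts to running the unclamped chain while re-clamping the observed pixels at every step; a minimal sketch, reusing the hypothetical `sample_q` / `decoder` stubs from the training sketch in Section 2:

```python
import torch

def inpaint(x_obs, mask, n_steps=20):
    # mask == 1 marks the observed (clamped) pixels
    x = mask * x_obs + (1 - mask) * torch.rand_like(x_obs)
    with torch.no_grad():
        for _ in range(n_steps):
            z = sample_q(x)                      # q(z | x)
            x = decoder(z)                       # p(x | z)
            x = mask * x_obs + (1 - mask) * x    # re-clamp the observed region
    return x
```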
Joint generation Finally, we show that GibbsNet is able to learn the joint distribution between
face images and their attributes (CelebA, Liu et al., 2015) (Figure 6). In this case, q(z | x, y) (y
is the attribute) is a network that takes both the image and attribute, separately processing the two
modalities before joining them into one network. p(x, y | z) is one network that splits into two
networks to predict the modalities separately. Training was done with continuous boundary-seeking
GAN (BGAN, Hjelm et al., 2017) on the image side (same as our other experiments) and discrete
BGAN on the attribute side, which is an importance-sampling-based technique for training GANs
with discrete data.
5 Conclusion
We have introduced GibbsNet, a powerful new model for performing iterative inference and generation
in deep graphical models. Although models like the RBM and the GSN have become less investigated in recent years, their theoretical properties remain worth pursuing, and we follow the theoretical motivations here using a GAN-like objective.
undirected graphical models, GibbsNet is able to learn a joint distribution which converges in a very
small number of steps of its Markov chain, and with no requirement that the marginal p(z) match a
simple prior. We prove that at convergence of training, in spite of unrolling only a few steps of the
chain during training, we obtain a transition operator whose stationary distribution also matches the
data and makes the conditionals p(x | z) and q(z | x) consistent with that unique joint stationary
distribution. We show that this allows the prior, p(z), to be shaped into a complicated distribution
(not a simple one, e.g., a spherical Gaussian) where different classes have representations that are
easily separable in the latent space. This leads to improved classification when the inferred latent
variables q(z|x) are used directly. Finally, we show that GibbsNet?s flexible prior produces a flexible
model which can simultaneously perform inpainting, conditional image generation, and prediction
with a single model not explicitly trained for any of these specific tasks, outperforming a competitive
ALI baseline with the same setup.
(a) SVHN inpainting after 20 steps (ALI).
(b) SVHN inpainting after 20 steps (GibbsNet).
Figure 5: Inpainting results on SVHN, where the right side is given and the left side is inpainted. In
both cases the model's training procedure did not consider the inpainting or conditional generation task at all, and inpainting is done by repeatedly applying the transition operators while clamping the right side of the image to its observed value. GibbsNet's richer latent space allows the transition
operator to keep more of the structure of the input image, allowing for tighter inpainting.
Figure 6: Demonstration of learning the joint distribution between images and a list of 40 binary
attributes. Attributes (right) are generated from a multinomial distribution as part of the joint with the
image (left).
References
Bengio, Y., Mesnil, G., Dauphin, Y., and Rifai, S. (2012). Better mixing via deep representations.
CoRR, abs/1207.4404.
Bengio, Y., Thibodeau-Laufer, E., and Yosinski, J. (2013). Deep generative stochastic networks
trainable by backprop. CoRR, abs/1306.1091.
Bornschein, J., Shabanian, S., Fischer, A., and Bengio, Y. (2015). Training opposing directed models
using geometric mean matching. CoRR, abs/1506.03877.
Dayan, P., Hinton, G. E., Neal, R. M., and Zemel, R. S. (1995). The Helmholtz machine. Neural computation, 7(5):889-904.
Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977). Maximum likelihood from incomplete data
via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1-38.
Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In Proceedings of
the International Conference on Learning Representations (ICLR). abs/1605.09782.
Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., and Courville,
A. (2017). Adversarially learned inference. In Proceedings of the International Conference on
Learning Representations (ICLR). arXiv:1606.00704.
9
Goodfellow, I. (2016). Nips 2016 tutorial: Generative adversarial networks. arXiv preprint
arXiv:1701.00160.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and
Bengio, Y. (2014). Generative adversarial nets. In Advances in neural information processing
systems, pages 2672-2680.
Hinton, G. (2010). A practical guide to training restricted boltzmann machines. Momentum, 9(1):926.
Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets.
Neural computation, 18(7):1527-1554.
Hjelm, D., Salakhutdinov, R. R., Cho, K., Jojic, N., Calhoun, V., and Chung, J. (2016). Iterative
refinement of the approximate posterior for directed belief networks. In Advances in Neural
Information Processing Systems, pages 4691-4699.
Hjelm, R. D., Jacob, A. P., Che, T., Cho, K., and Bengio, Y. (2017). Boundary-seeking generative
adversarial networks. arXiv preprint arXiv:1702.08431.
Huszár, F. (2017). Variational Inference using Implicit Distributions. ArXiv e-prints.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint
arXiv:1312.6114.
Lamb, A., Goyal, A., Zhang, Y., Zhang, S., Courville, A., and Bengio, Y. (2016). Professor forcing:
A new algorithm for training recurrent networks. Neural Information Processing Systems (NIPS)
2016.
Larsen, A. B. L., Sønderby, S. K., and Winther, O. (2015). Autoencoding beyond pixels using a
learned similarity metric. CoRR, abs/1512.09300.
Liu, Z., Luo, P., Wang, X., and Tang, X. (2015). Deep learning face attributes in the wild. In
Proceedings of International Conference on Computer Vision (ICCV).
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural
images with unsupervised feature learning. In NIPS workshop on deep learning and unsupervised
feature learning, volume 2011, page 5.
Radford, A., Metz, L., and Chintala, S. (2015). Unsupervised representation learning with deep
convolutional generative adversarial networks. CoRR, abs/1511.06434.
Rezende, D. J. and Mohamed, S. (2015). Variational inference with normalizing flows. arXiv preprint
arXiv:1505.05770.
Salakhutdinov, R. and Hinton, G. (2009). Deep boltzmann machines. In Artificial Intelligence and
Statistics, pages 448–455.
Salimans, T., Goodfellow, I. J., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved
techniques for training gans. CoRR, abs/1606.03498.
Sohl-Dickstein, J., Weiss, E. A., Maheswaranathan, N., and Ganguli, S. (2015). Deep unsupervised
learning using nonequilibrium thermodynamics. CoRR, abs/1503.03585.
Song, J., Zhao, S., and Ermon, S. (2017). Generative adversarial learning of markov chains. ICLR
Workshop Track.
Theis, L., van den Oord, A., and Bethge, M. (2015). A note on the evaluation of generative models.
ArXiv e-prints.
Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P.-A. (2008). Extracting and composing robust
features with denoising autoencoders. In Proceedings of the 25th international conference on
Machine learning, pages 1096–1103. ACM.
6,738 | 7,095 | PointNet++: Deep Hierarchical Feature Learning on
Point Sets in a Metric Space
Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas
Stanford University
Abstract
Few prior works study deep learning on point sets. PointNet [20] is a pioneer in this
direction. However, by design PointNet does not capture local structures induced by
the metric space points live in, limiting its ability to recognize fine-grained patterns
and generalizability to complex scenes. In this work, we introduce a hierarchical
neural network that applies PointNet recursively on a nested partitioning of the
input point set. By exploiting metric space distances, our network is able to learn
local features with increasing contextual scales. With further observation that point
sets are usually sampled with varying densities, which results in greatly decreased
performance for networks trained on uniform densities, we propose novel set
learning layers to adaptively combine features from multiple scales. Experiments
show that our network called PointNet++ is able to learn deep point set features
efficiently and robustly. In particular, results significantly better than state-of-the-art
have been obtained on challenging benchmarks of 3D point clouds.
1 Introduction
We are interested in analyzing geometric point sets which are collections of points in a Euclidean
space. A particularly important type of geometric point set is point cloud captured by 3D scanners,
e.g., from appropriately equipped autonomous vehicles. As a set, such data has to be invariant to
permutations of its members. In addition, the distance metric defines local neighborhoods that may
exhibit different properties. For example, the density and other attributes of points may not be uniform
across different locations: in 3D scanning the density variability can come from perspective effects,
radial density variations, motion, etc.
Few prior works study deep learning on point sets. PointNet [20] is a pioneering effort that directly
processes point sets. The basic idea of PointNet is to learn a spatial encoding of each point and then
aggregate all individual point features to a global point cloud signature. By its design, PointNet does
not capture local structure induced by the metric. However, exploiting local structure has proven to
be important for the success of convolutional architectures. A CNN takes data defined on regular
grids as the input and is able to progressively capture features at increasingly larger scales along a
multi-resolution hierarchy. At lower levels neurons have smaller receptive fields whereas at higher
levels they have larger receptive fields. The ability to abstract local patterns along the hierarchy
allows better generalizability to unseen cases.
We introduce a hierarchical neural network, named as PointNet++, to process a set of points sampled
in a metric space in a hierarchical fashion. The general idea of PointNet++ is simple. We first
partition the set of points into overlapping local regions by the distance metric of the underlying
space. Similar to CNNs, we extract local features capturing fine geometric structures from small
neighborhoods; such local features are further grouped into larger units and processed to produce
higher level features. This process is repeated until we obtain the features of the whole point set.
The design of PointNet++ has to address two issues: how to generate the partitioning of the point set,
and how to abstract sets of points or local features through a local feature learner. The two issues
are correlated because the partitioning of the point set has to produce common structures across
partitions, so that weights of local feature learners can be shared, as in the convolutional setting. We
choose our local feature learner to be PointNet. As demonstrated in that work, PointNet is an effective
architecture to process an unordered set of points for semantic feature extraction. In addition, this
architecture is robust to input data corruption. As a basic building block, PointNet abstracts sets of
local points or features into higher level representations. In this view, PointNet++ applies PointNet
recursively on a nested partitioning of the input set.
One issue that still remains is how to generate
overlapping partitioning of a point set. Each
partition is defined as a neighborhood ball in
the underlying Euclidean space, whose parameters include centroid location and scale. To
evenly cover the whole set, the centroids are selected among input point set by a farthest point Figure 1: Visualization of a scan captured from a
sampling (FPS) algorithm. Compared with vol- Structure Sensor (left: RGB; right: point cloud).
umetric CNNs that scan the space with fixed
strides, our local receptive fields are dependent
on both the input data and the metric, and thus more efficient and effective.
Deciding the appropriate scale of local neighborhood balls, however, is a more challenging yet
intriguing problem, due to the entanglement of feature scale and non-uniformity of input point
set. We assume that the input point set may have variable density at different areas, which is quite
common in real data such as Structure Sensor scanning [18] (see Fig. 1). Our input point set is thus
very different from CNN inputs which can be viewed as data defined on regular grids with uniform
constant density. In CNNs, the counterpart to local partition scale is the size of kernels. [25] shows
that using smaller kernels helps to improve the ability of CNNs. Our experiments on point set data,
however, give counter evidence to this rule. Small neighborhood may consist of too few points due to
sampling deficiency, which might be insufficient to allow PointNets to capture patterns robustly.
A significant contribution of our paper is that PointNet++ leverages neighborhoods at multiple scales
to achieve both robustness and detail capture. Assisted with random input dropout during training,
the network learns to adaptively weight patterns detected at different scales and combine multi-scale
features according to the input data. Experiments show that our PointNet++ is able to process point
sets efficiently and robustly. In particular, results that are significantly better than state-of-the-art have
been obtained on challenging benchmarks of 3D point clouds.
2 Problem Statement
Suppose that X = (M, d) is a discrete metric space whose metric is inherited from a Euclidean space
R^n, where M ⊆ R^n is the set of points and d is the distance metric. In addition, the density of M
in the ambient Euclidean space may not be uniform everywhere. We are interested in learning set
functions f that take such X as the input (along with additional features for each point) and produce
information of semantic interest regarding X. In practice, such f can be a classification function that
assigns a label to X or a segmentation function that assigns a per-point label to each member of M.
3 Method
Our work can be viewed as an extension of PointNet [20] with added hierarchical structure. We
first review PointNet (Sec. 3.1) and then introduce a basic extension of PointNet with hierarchical
structure (Sec. 3.2). Finally, we propose our PointNet++ that is able to robustly learn features even in
non-uniformly sampled point sets (Sec. 3.3).
[Figure 2 diagram: hierarchical point set feature learning through repeated set abstraction levels (sampling & grouping followed by a PointNet), mapping an (N, d+C) input to progressively smaller point sets; a classification branch (pointnet and fully connected layers producing k class scores) and a segmentation branch (interpolation and unit pointnets with skip link concatenation producing per-point scores).]
Figure 2: Illustration of our hierarchical feature learning architecture and its application for set
segmentation and classification using points in 2D Euclidean space as an example. Single scale point
grouping is visualized here. For details on density adaptive grouping, see Fig. 3
3.1 Review of PointNet [20]: A Universal Continuous Set Function Approximator
Given an unordered point set {x_1, x_2, ..., x_n} with x_i ∈ R^d, one can define a set function f : X → R
that maps a set of points to a vector:

f(x_1, x_2, \ldots, x_n) = \gamma\Big(\mathrm{MAX}_{i=1,\ldots,n} \{h(x_i)\}\Big) \qquad (1)

where \gamma and h are usually multi-layer perceptron (MLP) networks.
The set function f in Eq. 1 is invariant to input point permutations and can arbitrarily approximate any
continuous set function [20]. Note that the response of h can be interpreted as the spatial encoding of
a point (see [20] for details).
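As a concrete illustration, the following is a minimal NumPy sketch of the set function in Eq. 1: a shared per-point network h, a symmetric MAX pooling over points, and an outer network applied to the pooled vector. The MLP parameterization used here is illustrative, not the architecture used in [20].

```python
import numpy as np

def relu_mlp(x, layers):
    """Row-wise MLP; `layers` is a list of (W, b) pairs, with ReLU between layers."""
    for W, b in layers[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = layers[-1]
    return x @ W + b

def set_function(points, h_layers, gamma_layers):
    """Eq. 1: f(x_1, ..., x_n) = gamma(MAX_i h(x_i)).

    points: (n, d) array. Max pooling over the point axis makes the output
    invariant to permutations of the input points."""
    per_point = relu_mlp(points, h_layers)    # h(x_i), shape (n, C)
    pooled = per_point.max(axis=0)            # symmetric MAX over points
    return relu_mlp(pooled[None, :], gamma_layers)[0]
```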
PointNet achieved impressive performance on a few benchmarks. However, it lacks the ability to
capture local context at different scales. We will introduce a hierarchical feature learning framework
in the next section to resolve the limitation.
3.2 Hierarchical Point Set Feature Learning
While PointNet uses a single max pooling operation to aggregate the whole point set, our new
architecture builds a hierarchical grouping of points and progressively abstracts larger and larger local
regions along the hierarchy.
Our hierarchical structure is composed of a number of set abstraction levels (Fig. 2). At each level, a
set of points is processed and abstracted to produce a new set with fewer elements. The set abstraction
level is made of three key layers: Sampling layer, Grouping layer and PointNet layer. The Sampling
layer selects a set of points from input points, which defines the centroids of local regions. Grouping
layer then constructs local region sets by finding ?neighboring? points around the centroids. PointNet
layer uses a mini-PointNet to encode local region patterns into feature vectors.
A set abstraction level takes an N × (d + C) matrix as input that is from N points with d-dim
coordinates and C-dim point features. It outputs an N′ × (d + C′) matrix of N′ subsampled points
with d-dim coordinates and new C′-dim feature vectors summarizing local context. We introduce the
layers of a set abstraction level in the following paragraphs.
Sampling layer. Given input points {x_1, x_2, ..., x_n}, we use iterative farthest point sampling (FPS)
to choose a subset of points {x_{i_1}, x_{i_2}, ..., x_{i_m}}, such that x_{i_j} is the most distant point (in the metric
distance) from the set {x_{i_1}, x_{i_2}, ..., x_{i_{j-1}}} with regard to the rest of the points. Compared with random
sampling, it has better coverage of the entire point set given the same number of centroids. In contrast
to CNNs that scan the vector space agnostic of data distribution, our sampling strategy generates
receptive fields in a data dependent manner.
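A minimal sketch of iterative farthest point sampling follows; it keeps, for every point, its distance to the closest already-selected centroid and greedily picks the most distant remaining point. The random seed point is one common implementation choice; the text does not specify it.

```python
import numpy as np

def farthest_point_sampling(points, m):
    """Select m centroid indices from an (n, d) array by iterative FPS."""
    n = points.shape[0]
    selected = [np.random.randint(n)]                        # arbitrary seed
    # dist[i] = distance from point i to the nearest selected centroid
    dist = np.linalg.norm(points - points[selected[0]], axis=1)
    while len(selected) < m:
        idx = int(dist.argmax())                             # farthest point
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(selected)
```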
Grouping layer. The input to this layer is a point set of size N × (d + C) and the coordinates of
a set of centroids of size N′ × d. The output is groups of point sets of size N′ × K × (d + C),
where each group corresponds to a local region and K is the number of points in the neighborhood of
the centroid points. Note that K varies across groups, but the succeeding PointNet layer is able to convert
a flexible number of points into a fixed-length local region feature vector.
In convolutional neural networks, a local region of a pixel consists of pixels with array indices within
certain Manhattan distance (kernel size) of the pixel. In a point set sampled from a metric space, the
neighborhood of a point is defined by metric distance.
Ball query finds all points that are within a radius to the query point (an upper limit of K is set in
implementation). An alternative range query is K nearest neighbor (kNN) search which finds a fixed
number of neighboring points. Compared with kNN, ball query's local neighborhood guarantees
a fixed region scale thus making local region feature more generalizable across space, which is
preferred for tasks requiring local pattern recognition (e.g. semantic point labeling).
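The ball query can be sketched as below. The padding of underfull groups, repeating the first found index so every group has exactly k_max entries, is a common implementation choice and an assumption here, not a detail stated in the text.

```python
import numpy as np

def ball_query(points, centroids, radius, k_max):
    """Return (n_centroids, k_max) indices of points within `radius` of each
    centroid, padded by repetition when fewer than k_max points qualify."""
    groups = []
    for c in centroids:
        d = np.linalg.norm(points - c, axis=1)
        idx = np.where(d <= radius)[0][:k_max]   # upper limit K as in the text
        if idx.size == 0:
            idx = np.array([int(d.argmin())])    # fall back to the nearest point
        pad = np.full(k_max - idx.size, idx[0])
        groups.append(np.concatenate([idx, pad]))
    return np.stack(groups)
```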
PointNet layer. In this layer, the input is N′ local regions of points with data size N′ × K × (d + C).
Each local region in the output is abstracted by its centroid and a local feature that encodes the centroid's
neighborhood. The output data size is N′ × (d + C′).
The coordinates of points in a local region are firstly translated into a local frame relative to the
centroid point: x_i^{(j)} = x_i^{(j)} - \bar{x}^{(j)} for i = 1, 2, ..., K and j = 1, 2, ..., d, where \bar{x} is the coordinate of
the centroid. We use PointNet [20] as described in Sec. 3.1 as the basic building block for local pattern
learning. By using relative coordinates together with point features we can capture point-to-point
relations in the local region.
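Putting the pieces together, one set abstraction step for a single region can be sketched as follows: translate the grouped coordinates into the centroid's local frame, append any point features, and summarize the region with a mini-PointNet. `local_pointnet` is a stand-in for any permutation-invariant set function, such as the one sketched in Sec. 3.1.

```python
import numpy as np

def abstract_region(points, features, centroid, group_idx, local_pointnet):
    """Summarize one grouped local region into a single feature vector.

    points: (n, d) coordinates; features: (n, C) array or None;
    group_idx: indices of the K points in this region (e.g., from ball_query).
    """
    local_xyz = points[group_idx] - centroid     # x_i^(j) - xbar^(j)
    if features is not None:
        region = np.concatenate([local_xyz, features[group_idx]], axis=1)
    else:
        region = local_xyz
    return local_pointnet(region)                # (C',) region feature
```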
3.3 Robust Feature Learning under Non-Uniform Sampling Density
As discussed earlier, it is common that a point set comes with non-uniform density in different areas.
Such non-uniformity introduces a significant challenge for point set feature learning. Features learned
in dense data may not generalize to sparsely sampled regions. Consequently, models trained for sparse
point clouds may not recognize fine-grained local structures.

Ideally, we want to inspect as closely as possible into a point set to capture the finest details in densely
sampled regions. However, such close inspection is prohibited in low-density areas because local
patterns may be corrupted by the sampling deficiency. In this case, we should look for larger scale
patterns in a greater vicinity. To achieve this goal we propose density adaptive PointNet layers (Fig. 3)
that learn to combine features from regions of different scales when the input sampling density changes.
We call our hierarchical network with density adaptive PointNet layers PointNet++.

Figure 3: (a) Multi-scale grouping (MSG); (b) Multi-resolution grouping (MRG). (The diagram panels show features from different scales/levels being concatenated.)
Previously in Sec. 3.2, each abstraction level contains grouping and feature extraction of a single scale.
In PointNet++, each abstraction level extracts multiple scales of local patterns and combines them
intelligently according to local point densities. In terms of grouping local regions and combining
features from different scales, we propose two types of density adaptive layers as listed below.
Multi-scale grouping (MSG). As shown in Fig. 3 (a), a simple but effective way to capture multi-scale patterns is to apply grouping layers with different scales followed by corresponding PointNets to
extract features of each scale. Features at different scales are concatenated to form a multi-scale
feature.
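A sketch of an MSG feature for one centroid, built on the ball query and region abstraction sketches above, is given below; the per-scale radii, group sizes, and PointNets are free parameters of the illustration.

```python
import numpy as np

def msg_feature(points, features, centroid, radii, k_maxes, pointnets):
    """Concatenate region features extracted at several neighborhood scales."""
    per_scale = []
    for radius, k_max, net in zip(radii, k_maxes, pointnets):
        idx = ball_query(points, centroid[None, :], radius, k_max)[0]
        per_scale.append(abstract_region(points, features, centroid, idx, net))
    return np.concatenate(per_scale)             # multi-scale feature vector
```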
We train the network to learn an optimized strategy to combine the multi-scale features. This is done
by randomly dropping out input points with a randomized probability for each instance, which we call
random input dropout. Specifically, for each training point set, we choose a dropout ratio θ uniformly
sampled from [0, p] where p ≤ 1. For each point, we randomly drop it with probability θ. In
practice we set p = 0.95 to avoid generating empty point sets. In doing so we present the network
with training sets of various sparsity (induced by θ) and varying uniformity (induced by randomness
in dropout). During test, we keep all available points.
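A minimal sketch of this augmentation is shown below; the final guard against an empty point set is an extra safety check of ours, beyond the text's prescription of p = 0.95.

```python
import numpy as np

def random_input_dropout(points, p=0.95):
    """Per point set: draw theta ~ U[0, p], then drop each point w.p. theta.
    Applied only at training time; all points are kept at test time."""
    theta = np.random.uniform(0.0, p)
    keep = np.random.uniform(size=points.shape[0]) >= theta
    if not keep.any():                     # safety: never return an empty set
        keep[np.random.randint(points.shape[0])] = True
    return points[keep]
```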
Multi-resolution grouping (MRG). The MSG approach above is computationally expensive since
it runs local PointNet at large scale neighborhoods for every centroid point. In particular, since the
number of centroid points is usually quite large at the lowest level, the time cost is significant.
Here we propose an alternative approach that avoids such expensive computation but still preserves
the ability to adaptively aggregate information according to the distributional properties of points. In
Fig. 3 (b), the feature of a region at some level L_i is a concatenation of two vectors. One vector (left in
the figure) is obtained by summarizing the features of each subregion from the lower level L_{i-1} using
the set abstraction level. The other vector (right) is the feature that is obtained by directly processing
all raw points in the local region using a single PointNet.
When the density of a local region is low, the first vector may be less reliable than the second vector,
since the subregion in computing the first vector contains even sparser points and suffers more from
sampling deficiency. In such a case, the second vector should be weighted higher. On the other hand,
when the density of a local region is high, the first vector provides information of finer details since it
possesses the ability to inspect at higher resolutions recursively in lower levels.
Compared with MSG, this method is computationally more efficient since it avoids the feature
extraction in large scale neighborhoods at the lowest levels.
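Schematically, the MRG feature of a region is just the concatenation of the two summaries; how the network weights the two halves is left to learned layers downstream. `summarize_subregions` and `raw_pointnet` are hypothetical stand-ins for the set abstraction over level L_{i-1} features and the direct PointNet on raw points.

```python
import numpy as np

def mrg_feature(subregion_features, raw_points, summarize_subregions, raw_pointnet):
    """Concatenate (i) a summary of lower-level subregion features with
    (ii) a PointNet applied directly to all raw points of the region."""
    left = summarize_subregions(subregion_features)   # from level L_{i-1}
    right = raw_pointnet(raw_points)                  # directly from raw points
    return np.concatenate([left, right])
```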
3.4 Point Feature Propagation for Set Segmentation
In the set abstraction layer, the original point set is subsampled. However, in set segmentation tasks such
as semantic point labeling, we want to obtain point features for all the original points. One solution is
to always sample all points as centroids in all set abstraction levels, which however results in high
computation cost. Another way is to propagate features from subsampled points to the original points.
We adopt a hierarchical propagation strategy with distance based interpolation and across level
skip links (as shown in Fig. 2). In a feature propagation level, we propagate point features from
N_l × (d + C) points to N_{l-1} points, where N_{l-1} and N_l (with N_l ≤ N_{l-1}) are the point set sizes of the input
and output of set abstraction level l. We achieve feature propagation by interpolating feature values
f of the N_l points at the coordinates of the N_{l-1} points. Among the many choices for interpolation, we
use an inverse distance weighted average based on k nearest neighbors (as in Eq. 2; by default we use
p = 2, k = 3). The interpolated features on the N_{l-1} points are then concatenated with skip-linked point
features from the set abstraction level. Then the concatenated features are passed through a "unit
pointnet", which is similar to a one-by-one convolution in CNNs. A few shared fully connected and
ReLU layers are applied to update each point's feature vector. The process is repeated until we have
propagated features to the original set of points.
f^{(j)}(x) = \frac{\sum_{i=1}^{k} w_i(x)\, f_i^{(j)}}{\sum_{i=1}^{k} w_i(x)}, \quad \text{where } w_i(x) = \frac{1}{d(x, x_i)^p}, \; j = 1, \ldots, C \qquad (2)
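A direct sketch of the interpolation in Eq. 2, with the default k = 3 and p = 2, is given below; the small epsilon guarding against zero distances is an implementation assumption of ours.

```python
import numpy as np

def propagate_features(coarse_xyz, coarse_feat, fine_xyz, k=3, p=2, eps=1e-8):
    """Inverse-distance weighted interpolation from N_l coarse points to
    N_{l-1} fine points (Eq. 2)."""
    out = np.empty((fine_xyz.shape[0], coarse_feat.shape[1]))
    for i, x in enumerate(fine_xyz):
        d = np.linalg.norm(coarse_xyz - x, axis=1)
        nn = np.argsort(d)[:k]                       # k nearest coarse points
        w = 1.0 / (d[nn] ** p + eps)                 # w_i(x) = 1 / d(x, x_i)^p
        out[i] = (w[:, None] * coarse_feat[nn]).sum(axis=0) / w.sum()
    return out
```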
4 Experiments

Datasets. We evaluate on four datasets ranging from 2D objects (MNIST [11]), 3D objects (ModelNet40 [31] rigid object, SHREC15 [12] non-rigid object) to real 3D scenes (ScanNet [5]). Object
classification is evaluated by accuracy. Semantic scene labeling is evaluated by average voxel
classification accuracy following [5]. We list below the experiment setting for each dataset:
• MNIST: Images of handwritten digits with 60k training and 10k testing samples.
• ModelNet40: CAD models of 40 categories (mostly man-made). We use the official split with 9,843 shapes for training and 2,468 for testing.
• SHREC15: 1,200 shapes from 50 categories. Each category contains 24 shapes which are mostly organic ones with various poses such as horses, cats, etc. We use five-fold cross validation to acquire classification accuracy on this dataset.
• ScanNet: 1,513 scanned and reconstructed indoor scenes. We follow the experiment setting in [5] and use 1,201 scenes for training and 312 scenes for testing.
Table 1: MNIST digit classification.

    Method                         Error rate (%)
    Multi-layer perceptron [24]    1.60
    LeNet5 [11]                    0.80
    Network in Network [13]        0.47
    PointNet (vanilla) [20]        1.30
    PointNet [20]                  0.78
    Ours                           0.51

Table 2: ModelNet40 shape classification.

    Method                     Input    Accuracy (%)
    Subvolume [21]             vox      89.2
    MVCNN [26]                 img      90.1
    PointNet (vanilla) [20]    pc       87.2
    PointNet [20]              pc       89.2
    Ours                       pc       90.7
    Ours (with normal)         pc       91.9

Figure 4: Left: Point cloud with random point dropout (shown at 1024, 512, 256, and 128 points). Right: Curve showing the advantage of our density adaptive strategy in dealing with non-uniform density. DP means random input dropout during training; otherwise training is on uniformly dense points. See Sec. 3.3 for details.
4.1 Point Set Classification in Euclidean Metric Space
We evaluate our network on classifying point clouds sampled from both 2D (MNIST) and 3D
(ModelNet40) Euclidean spaces. MNIST images are converted to 2D point clouds of digit pixel
locations. 3D point clouds are sampled from mesh surfaces from ModelNet40 shapes. In default we
use 512 points for MNIST and 1024 points for ModelNet40. In the last row (Ours with normal) of Table 2, we
use face normals as additional point features, where we also use more points (N = 5000) to further
boost performance. All point sets are normalized to be zero mean and within a unit ball. We use a
three-level hierarchical network with three fully connected layers.¹
Results. In Table 1 and Table 2, we compare our method with a representative set of previous
state of the art. Note that PointNet (vanilla) in Table 2 is the version in [20] that does not use
transformation networks, which is equivalent to our hierarchical net with only one level.
Firstly, our hierarchical learning architecture achieves significantly better performance than the
non-hierarchical PointNet [20]. In MNIST, we see a relative 60.8% and 34.6% error rate reduction
from PointNet (vanilla) and PointNet to our method. In ModelNet40 classification, we also see that
using same input data size (1024 points) and features (coordinates only), ours is remarkably stronger
than PointNet. Secondly, we observe that point set based methods can even achieve better or similar
performance as mature image CNNs. In MNIST, our method (based on 2D point set) is achieving
an accuracy close to the Network in Network CNN. In ModelNet40, ours with normal information
significantly outperforms previous state-of-the-art method MVCNN [26].
Robustness to Sampling Density Variation. Sensor data directly captured from the real world usually
suffers from severe irregular sampling issues (Fig. 1). Our approach selects point neighborhood of
multiple scales and learns to balance the descriptiveness and robustness by properly weighting them.
We randomly drop points (see Fig. 4 left) during test time to validate our network?s robustness to
non-uniform and sparse data. In Fig. 4 right, we see MSG+DP (multi-scale grouping with random
input dropout during training) and MRG+DP (multi-resolution grouping with random input dropout
during training) are very robust to sampling density variation. MSG+DP performance drops by less
than 1% from 1024 to 256 test points. Moreover, it achieves the best performance on almost all
sampling densities compared with alternatives. PointNet vanilla [20] is fairly robust under density
variation due to its focus on global abstraction rather than fine details. However loss of details
also makes it less powerful compared to our approach. SSG (ablated PointNet++ with single scale
grouping in each level) fails to generalize to sparse sampling density while SSG+DP amends the
problem by randomly dropping out points in training time.
¹ See supplementary for more details on network architecture and experiment preparation.
4.2 Point Set Segmentation for Semantic Scene Labeling

To validate that our approach is suitable for large scale point cloud analysis, we also evaluate on the
semantic scene labeling task. The goal is to predict semantic object labels for points in indoor scans.
[5] provides a baseline using a fully convolutional neural network on voxelized scans. They purely
rely on scanning geometry instead of RGB information and report the accuracy on a per-voxel basis.
To make a fair comparison, we remove RGB information in all our experiments and convert point
cloud label prediction into voxel labeling following [5]. We also compare with [20]. The accuracy is
reported on a per-voxel basis in Fig. 5 (blue bar).

Figure 5: ScanNet labeling accuracy. (Bar chart comparing 3DCNN, PointNet, Ours (SSG), Ours (MSG+DP), and Ours (MRG+DP) on ScanNet and on non-uniformly sampled ScanNet.)
Our approach outperforms all the baseline methods by a large margin. In comparison with [5], which
learns on voxelized scans, we directly learn on point clouds to avoid additional quantization error,
and conduct data dependent sampling to allow more effective learning. Compared with [20], our
approach introduces hierarchical feature learning and captures geometry features at different scales.
This is very important for understanding scenes at multiple levels and labeling objects with various
sizes. We visualize example scene labeling results in Fig. 6.
Robustness to Sampling Density Variation
To test how our trained model performs on scans with non-uniform sampling density, we synthesize
virtual scans of ScanNet scenes similar to that in Fig. 1 and evaluate our network on this data. We
refer readers to the supplementary material for how we generate the virtual scans. We evaluate our
framework in three settings (SSG, MSG+DP, MRG+DP) and compare with a baseline approach [20].

Performance comparison is shown in Fig. 5 (yellow bar). We see that SSG performance greatly falls
due to the sampling density shift from uniform point clouds to virtually scanned scenes. The MRG
network, on the other hand, is more robust to the sampling density shift since it is able to automatically
switch to features depicting coarser granularity when the sampling is sparse. Even though there is a
domain gap between training data (uniform points with random dropout) and scanned data with
non-uniform density, our MSG network is only slightly affected and achieves the best accuracy among
the methods in comparison. These results prove the effectiveness of our density adaptive layer design.

Figure 6: ScanNet labeling results (rows: PointNet, Ours, Ground Truth; classes: Wall, Floor, Chair, Desk, Bed, Door, Table). [20] captures the overall layout of the room correctly but fails to discover the furniture. Our approach, in contrast, is much better at segmenting objects besides the room layout.
4.3 Point Set Classification in Non-Euclidean Metric Space
In this section, we show generalizability of our approach to non-Euclidean space. In non-rigid shape
classification (Fig. 7), a good classifier should be able to classify (a) and (c) in Fig. 7 correctly as the
same category even given their difference in pose, which requires knowledge of intrinsic structure.
Shapes in SHREC15 are 2D surfaces embedded in 3D space. Geodesic distances along the surfaces
naturally induce a metric space. We show through experiments that adopting PointNet++ in this
metric space is an effective way to capture intrinsic structure of the underlying point set.
For each shape in [12], we firstly construct the metric space induced by pairwise geodesic distances.
We follow [23] to obtain an embedding metric that mimics geodesic distance. Next we extract
intrinsic point features in this metric space including WKS [1], HKS [27] and multi-scale Gaussian
curvature [16]. We use these features as input and then sample and group points according to the
underlying metric space. In this way, our network learns to capture multi-scale intrinsic structure
that is not influenced by the specific pose of a shape. Alternative design choices include using XYZ
coordinates as point features or using Euclidean space R^3 as the underlying metric space. We show
below that these are not optimal choices.
Results. We compare our methods with the previous state-of-the-art method [14] in Table 3. [14]
extracts geodesic moments as shape features and uses a stacked sparse autoencoder to digest these
features to predict the shape category. Our approach using a non-Euclidean metric space and intrinsic
features achieves the best performance in all settings and outperforms [14] by a large margin.
Comparing the first and second settings of our approach, we see intrinsic features are very important
for non-rigid shape classification. The XYZ feature fails to reveal intrinsic structures and is greatly
influenced by pose variation. Comparing the second and third settings of our approach, we see using
a geodesic neighborhood is beneficial compared with a Euclidean neighborhood. A Euclidean
neighborhood might include points far away on surfaces, and this neighborhood could change
dramatically when a shape affords non-rigid deformation. This introduces difficulty for effective
weight sharing since the local structure could become combinatorially complicated. A geodesic
neighborhood on surfaces, on the other hand, gets rid of this issue and improves the learning
effectiveness.

Figure 7: An example of non-rigid shape classification.
    Method         Metric space     Input feature        Accuracy (%)
    DeepGM [14]    -                Intrinsic features   93.03
    Ours           Euclidean        XYZ                  60.18
    Ours           Euclidean        Intrinsic features   94.49
    Ours           Non-Euclidean    Intrinsic features   96.09

Table 3: SHREC15 non-rigid shape classification.
4.4 Feature Visualization
In Fig. 8 we visualize what has been learned by the first
level kernels of our hierarchical network. We created
a voxel grid in space and aggregate local point sets that
activate certain neurons the most in grid cells (highest
100 examples are used). Grid cells with high votes
are kept and converted back to 3D point clouds, which
represents the pattern that the neuron recognizes. Since the model is trained on ModelNet40, which
mostly consists of furniture, we see structures of planes, double planes, lines, corners, etc. in the
visualization.
5 Related Work
The idea of hierarchical feature learning has been very
successful. Among all the learning models, convolutional neural network [10; 25; 8] is one of the most
prominent ones. However, convolution does not apply
to unordered point sets with distance metrics, which is
the focus of our work.
Figure 8: 3D point cloud patterns learned
from the first layer kernels. The model is
trained for ModelNet40 shape classification
(20 out of the 128 kernels are randomly
selected). Color indicates point depth (red
is near, blue is far).
A few very recent works [20; 28] have studied how to
apply deep learning to unordered sets. They ignore the underlying distance metric even if the point
set does possess one. As a result, they are unable to capture local context of points and are sensitive
to global set translation and normalization. In this work, we target at points sampled from a metric
space and tackle these issues by explicitly considering the underlying distance metric in our design.
Points sampled from a metric space are usually noisy and have non-uniform sampling density. This
affects effective point feature extraction and causes difficulty for learning. One of the key issues is
to select a proper scale for point feature design. Previously several approaches have been developed
regarding this [19; 17; 2; 6; 7; 30] either in geometry processing community or photogrammetry
and remote sensing community. In contrast to all these works, our approach learns to extract point
features and balance multiple feature scales in an end-to-end fashion.
In 3D metric space, other than point set, there are several popular representations for deep learning,
including volumetric grids [21; 22; 29], and geometric graphs [3; 15; 33]. However, in none of these
works, the problem of non-uniform sampling density has been explicitly considered.
6 Conclusion
In this work, we propose PointNet++, a powerful neural network architecture for processing point
sets sampled in a metric space. PointNet++ recursively functions on a nested partitioning of the
input point set, and is effective in learning hierarchical features with respect to the distance metric.
To handle the non uniform point sampling issue, we propose two novel set abstraction layers that
intelligently aggregate multi-scale information according to local point densities. These contributions
enable us to achieve state-of-the-art performance on challenging benchmarks of 3D point clouds.
In the future, it's worthwhile thinking about how to accelerate the inference speed of our proposed
network, especially for the MSG and MRG layers, by sharing more computation in each local region.
It's also interesting to find applications in higher dimensional metric spaces where CNN based
methods would be computationally infeasible while our method can scale well.
Acknowledgement. The authors would like to acknowledge the support of a Samsung GRO grant,
NSF grants IIS-1528025 and DMS-1546206, and ONR MURI grant N00014-13-1-0341.
References
[1] M. Aubry, U. Schlickewei, and D. Cremers. The wave kernel signature: A quantum mechanical approach
to shape analysis. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference
on, pages 1626–1633. IEEE, 2011.
[2] D. Belton and D. D. Lichti. Classification and segmentation of terrestrial laser scanner point clouds using
local variance information. IAPRS, XXXVI, 5:44–49, 2006.
[3] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on
graphs. arXiv preprint arXiv:1312.6203, 2013.
[4] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song,
H. Su, J. Xiao, L. Yi, and F. Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report
arXiv:1512.03012 [cs.GR], 2015.
[5] A. Dai, A. X. Chang, M. Savva, M. Halber, T. Funkhouser, and M. Nießner. Scannet: Richly-annotated 3d
reconstructions of indoor scenes. arXiv preprint arXiv:1702.04405, 2017.
[6] J. Demantké, C. Mallet, N. David, and B. Vallet. Dimensionality based scale selection in 3d lidar point
clouds. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information
Sciences, 38(Part 5):W12, 2011.
[7] A. Gressin, C. Mallet, J. Demantké, and N. David. Towards 3d lidar point cloud registration improvement
using optimal neighborhood knowledge. ISPRS Journal of Photogrammetry and Remote Sensing,
79:240–251, 2013.
[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[10] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278–2324, 1998.
[12] Z. Lian, J. Zhang, S. Choi, H. ElNaghy, J. El-Sana, T. Furuya, A. Giachetti, R. A. Guler, L. Lai, C. Li,
H. Li, F. A. Limberger, R. Martin, R. U. Nakanishi, A. P. Neto, L. G. Nonato, R. Ohbuchi, K. Pevzner,
D. Pickup, P. Rosin, A. Sharf, L. Sun, X. Sun, S. Tari, G. Unal, and R. C. Wilson. Non-rigid 3D Shape
Retrieval. In I. Pratikakis, M. Spagnuolo, T. Theoharis, L. V. Gool, and R. Veltkamp, editors, Eurographics
Workshop on 3D Object Retrieval. The Eurographics Association, 2015.
[13] M. Lin, Q. Chen, and S. Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[14] L. Luciano and A. B. Hamza. Deep learning with geodesic moments for 3d shape classification. Pattern
Recognition Letters, 2017.
[15] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks
on riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision
Workshops, pages 37–45, 2015.
[16] M. Meyer, M. Desbrun, P. Schröder, A. H. Barr, et al. Discrete differential-geometry operators for
triangulated 2-manifolds. Visualization and mathematics, 3(2):52–58, 2002.
[17] N. J. Mitra, A. Nguyen, and L. Guibas. Estimating surface normals in noisy point cloud data.
International Journal of Computational Geometry & Applications, 14(04n05):261–276, 2004.
[18] I. Occipital. Structure sensor-3d scanning, augmented reality, and more for mobile devices, 2016.
[19] M. Pauly, L. P. Kobbelt, and M. Gross. Point-based multiscale surface representation. ACM Transactions
on Graphics (TOG), 25(2):177–193, 2006.
[20] C. R. Qi, H. Su, K. Mo, and L. J. Guibas. Pointnet: Deep learning on point sets for 3d classification and
segmentation. arXiv preprint arXiv:1612.00593, 2016.
[21] C. R. Qi, H. Su, M. Nießner, A. Dai, M. Yan, and L. Guibas. Volumetric and multi-view cnns for object
classification on 3d data. In Proc. Computer Vision and Pattern Recognition (CVPR), IEEE, 2016.
[22] G. Riegler, A. O. Ulusoys, and A. Geiger. Octnet: Learning deep 3d representations at high resolutions.
arXiv preprint arXiv:1611.05009, 2016.
[23] R. M. Rustamov, Y. Lipman, and T. Funkhouser. Interior distance using barycentric coordinates. In
Computer Graphics Forum, volume 28, pages 1279–1288. Wiley Online Library, 2009.
[24] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to
visual document analysis. In ICDAR, volume 3, pages 958–962, 2003.
[25] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
arXiv preprint arXiv:1409.1556, 2014.
[26] H. Su, S. Maji, E. Kalogerakis, and E. G. Learned-Miller. Multi-view convolutional neural networks for 3d
shape recognition. In Proc. ICCV, to appear, 2015.
[27] J. Sun, M. Ovsjanikov, and L. Guibas. A concise and provably informative multi-scale signature based on
heat diffusion. In Computer graphics forum, volume 28, pages 1383–1392. Wiley Online Library, 2009.
[28] O. Vinyals, S. Bengio, and M. Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint
arXiv:1511.06391, 2015.
[29] P.-S. WANG, Y. LIU, Y.-X. GUO, C.-Y. SUN, and X. TONG. O-cnn: Octree-based convolutional neural
networks for 3d shape analysis. 2017.
[30] M. Weinmann, B. Jutzi, S. Hinz, and C. Mallet. Semantic point cloud interpretation based on optimal
neighborhoods, relevant features and efficient classifiers. ISPRS Journal of Photogrammetry and Remote
Sensing, 105:286?304, 2015.
[31] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3d shapenets: A deep representation for
volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 1912–1920, 2015.
[32] L. Yi, V. G. Kim, D. Ceylan, I.-C. Shen, M. Yan, H. Su, C. Lu, Q. Huang, A. Sheffer, and L. Guibas. A
scalable active framework for region annotation in 3d shape collections. SIGGRAPH Asia, 2016.
[33] L. Yi, H. Su, X. Guo, and L. Guibas. Syncspeccnn: Synchronized spectral cnn for 3d shape segmentation.
arXiv preprint arXiv:1612.00606, 2016.
6,739 | 7,096 | Regularizing Deep Neural Networks by Noise:
Its Interpretation and Optimization
Hyeonwoo Noh
Tackgeun You
Jonghwan Mun
Bohyung Han
Dept. of Computer Science and Engineering, POSTECH, Korea
{shgusdngogo,tackgeun.you,choco1916,bhhan}@postech.ac.kr
Abstract
Overfitting is one of the most critical challenges in deep neural networks, and there
are various types of regularization methods to improve generalization performance.
Injecting noise into hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear why such training techniques
work well in practice and how we can maximize their benefit in the presence of
two conflicting objectives: optimizing to the true data distribution and preventing
overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection
optimize the lower bound of the true objective and 2) proposing a technique to
achieve a tighter lower bound using multiple noise samples per training example
in a stochastic gradient descent iteration. We demonstrate the effectiveness of our
idea in several computer vision applications.
1 Introduction
Deep neural networks have been showing impressive performance in a variety of applications in
multiple domains [2, 12, 20, 23, 26, 27, 28, 31, 35, 38]. Its great success comes from various factors
including emergence of large-scale datasets, high-performance hardware support, new activation
functions, and better optimization methods. Proper regularization is another critical reason for better
generalization performance because deep neural networks are often over-parametrized and likely to
suffer from the overfitting problem. A common type of regularization is to inject noise during the training
procedure: adding or multiplying noise to hidden units of the neural networks, e.g., dropout. This
kind of technique is frequently adopted in many applications due to its simplicity, generality, and
effectiveness.
Noise injection for training incurs a tradeoff between data fitting and model regularization, even
though both objectives are important to improve performance of a model. Using more noise makes it
harder for a model to fit data distribution while reducing noise weakens regularization effect. Since
the level of noise directly affects the two terms in objective function, model fitting and regularization
terms, it would be desirable to maintain proper noise levels during training or develop an effective
training algorithm given a noise level.
Between these two potential directions, we are interested in the latter, more effective training. Within
the standard stochastic gradient descent framework, we propose to facilitate optimization of deep
neural networks with noise added for better regularization. Specifically, by regarding noise injected
outputs of hidden units as stochastic activations, we interpret that the conventional training strategy
optimizes the lower bound of the marginal likelihood over the hidden units whose values are sampled
with a reparametrization trick [18].
Our algorithm is motivated by the importance weighted autoencoders [7], which are variational
autoencoders trained for tighter variational lower bounds using more samples of stochastic variables
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
per training example in a stochastic gradient descent iteration. Our novel interpretation of noise
injected hidden units as stochastic activations enables the lower bound analysis of [7] to be naturally
applied to training deep neural networks with regularization by noise. It introduces the importance
weighted stochastic gradient descent, a variant of the standard stochastic gradient descent, which
employs multiple noise samples in an iteration for each training example. The proposed training
strategy allows trained models to achieve good balance between model fitting and regularization.
Although our method is general for various regularization techniques by noise, we mainly discuss its
special form, dropout, one of the most famous methods for regularization by noise.
The main contribution of our paper is three-fold:
? We present that the conventional training with regularization by noise is equivalent to
optimizing the lower bound of the marginal likelihood through a novel interpretation of
noise injected hidden units as stochastic activations.
? We derive the importance weighted stochastic gradient descent for regularization by noise
through the lower bound analysis.
? We demonstrate that the importance weighted stochastic gradient descent often improves
performance of deep neural networks with dropout, a special form of regularization by noise.
The rest of the paper is organized as follows. Section 2 discusses prior works related to our approach.
We describe our main idea and instantiation to dropout in Section 3 and 4, respectively. Section 5
analyzes experimental results on various applications and Section 6 makes our conclusion.
2 Related Work
Regularization by noise is a common technique to improve generalization performance of deep neural
networks, and various implementations are available depending on network architectures and target
applications. A well-known example is dropout [34], which randomly turns off a subset of hidden
units of neural networks by multiplying noise sampled from a Bernoulli distribution.
In addition to the standard form of dropout, there exist several variations of dropout designed to
further improve generalization performance. For example, Ba et al. [3] proposed adaptive dropout,
where dropout rate is determined by another neural network dynamically. Li et al. [22] employ
dropout with a multinormial distribution, instead of a Bernoulli distribution, which generates noise by
selecting a subset of hidden units out of multiple subsets. Bulo et al. [6] improve dropout by reducing
gap between training and inference procedure, where the output of dropout layers in inference stage
is given by learning expected average of multiple dropouts. There are several related concepts to
dropout, which can be categorized as regularization by noise. In [17, 37], noise is added to weights
of neural networks, not to hidden states. Learning with stochastic depth [15] and stochastic ensemble
learning [10] can also be regarded as noise injection techniques to weights or architecture. Our work
is differentiated with the prior study in the sense that we improve generalization performance using
better training objective while dropout and its variations rely on the original objective function.
Originally, dropout is proposed with interpretation as an extreme form of model ensemble [20, 34],
and this intuition makes sense to explain good generalization performance of dropout. On the other
hand, [36] views dropout as an adaptive regularizer for generalized linear models and [16] claims that
dropout is effective to escape local optima for training deep neural networks. In addition, [9] uses
dropout for estimating uncertainty based on Bayesian perspective. The proposed training algorithm
is based on a novel interpretation of training with regularization by noise as training with latent
variables. Such understanding is distinguishable from the existing views on dropout, and provides a
probabilistic formulation to analyze dropout. A similar interpretation to our work is proposed in [24],
but it focuses on reducing gap between training and inference steps of using dropout while our work
proposes to use a novel training objective for better regularization.
Our goal is to formulate a stochastic model for regularization by noise and propose an effective
training algorithm given a predefined noise level within a stochastic gradient descent framework.
A closely related work is importance weighted autoencoder [7], which employs multiple samples
weighted by importance to compute gradients and improve performance. This work shows that the
importance weighted stochastic gradient descent method achieves a tighter lower-bound of the ideal
marginal likelihood over latent variables than the variational lower bound. It also presents that the
bound becomes tighter as the number of samples for the latent variables increases. The importance
weighted objective has been applied to various applications such as generative modeling [5, 7],
training binary stochastic feed-forward networks [30] and training recurrent attention models [4].
This idea is extended to discrete latent variables in [25].
3 Proposed Method
This section describes the proposed importance weighted stochastic gradient descent using multiple
samples in deep neural networks for regularization by noise.
3.1 Main Idea
The premise of our paper is that injecting noise into deterministic hidden units constructs stochastic
hidden units. Noise injection during training obviously incurs stochastic behavior of the model and
the optimizer. By defining deterministic hidden units with noise as stochastic hidden units, we can
exploit well-defined probabilistic formulations to analyze the conventional training procedure and
propose approaches for better optimization.
Suppose that the set of activations over all hidden units across all layers, z, is given by

    z = g(h_\theta(x), \epsilon) \sim p_\theta(z|x),    (1)

where $h_\theta(x)$ denotes the deterministic activations of hidden units for input $x$ and model parameters $\theta$. The noise injection function $g(\cdot, \cdot)$ is given by addition or multiplication of activation and noise, where $\epsilon$ denotes noise sampled from a certain probability distribution such as a Gaussian distribution. If this premise is applied to dropout, the noise $\epsilon$ corresponds to a random selection of hidden units in a layer and the random variable z is the activation of the hidden layer given a specific dropout mask.
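To make Eq. (1) concrete, the sketch below (our own illustration; the function name and the two noise choices are assumptions, not code from the paper) draws a single noise sample $\epsilon$ and applies it to deterministic activations:

```python
import torch

def inject_noise(h, noise_type="dropout", p=0.5, sigma=0.1):
    """Turn deterministic activations h = h_theta(x) into a stochastic
    activation z = g(h, eps) by drawing one noise sample eps."""
    if noise_type == "dropout":
        # multiplicative Bernoulli noise; the 1/(1-p) rescaling is the
        # common "inverted dropout" convention
        eps = torch.bernoulli(torch.full_like(h, 1.0 - p)) / (1.0 - p)
        return h * eps
    if noise_type == "gaussian":
        # additive Gaussian noise with standard deviation sigma
        eps = sigma * torch.randn_like(h)
        return h + eps
    raise ValueError(f"unknown noise type: {noise_type}")
```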
Training a neural network with stochastic hidden units requires optimizing the marginal likelihood over the stochastic hidden units z, which is given by

    L_{marginal} = \log \mathbb{E}_{p_\theta(z|x)} \left[ p_\phi(y \mid z, x) \right],    (2)

where $p_\phi(y \mid z, x)$ is the output probability of ground-truth $y$ given input $x$ and hidden units z, and $\phi$ is the model parameter for the output prediction. Note that the expectation over training data $\mathbb{E}_{p(x,y)}$ outside the logarithm is omitted for notational simplicity.
For marginalization of stochastic hidden units constructed by noise, we employ the reparameterization trick proposed in [18]. Specifically, the random variable z is replaced by Eq. (1) and the marginalization is performed over noise, which is given by

    L_{marginal} = \log \mathbb{E}_{p(\epsilon)} \left[ p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right],    (3)

where $p(\epsilon)$ is the distribution of noise. Eq. (3) means that training a noise injected neural network requires optimizing the marginal likelihood over noise $\epsilon$.
3.2 Importance Weighted Stochastic Gradient Descent
We now describe how the marginal likelihood in Eq. (3) is optimized in a SGD (Stochastic Gradient
Descent) framework and propose the IWSGD (Importance Weighted Stochastic Gradient Descent)
method derived from the lower bound introduced by the SGD.
3.2.1 Objective
In practice, SGD estimates the marginal likelihood in Eq. (3) by taking an expectation over multiple sets of noise samples, where we compute a marginal log-likelihood for a finite number of noise samples in each set. Therefore, the real objective for SGD is as follows:

    L_{marginal} \approx L_{SGD}(S) = \mathbb{E}_{p(E)} \left[ \log \frac{1}{S} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right],    (4)

where S is the number of noise samples for each training example and $E = \{\epsilon_1, \epsilon_2, ..., \epsilon_S\}$ is a set of noises.
The main observation from Burda et al. [7] is that the SGD objective in Eq. (4) is a lower bound of the marginal likelihood in Eq. (3), which holds by Jensen's inequality:

    L_{SGD}(S) = \mathbb{E}_{p(E)} \left[ \log \frac{1}{S} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right]
               \leq \log \mathbb{E}_{p(E)} \left[ \frac{1}{S} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right]    (5)
               = \log \mathbb{E}_{p(\epsilon)} \left[ p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right]
               = L_{marginal},

where $\mathbb{E}_{p(\epsilon)}[f(\epsilon)] = \mathbb{E}_{p(E)}\left[\frac{1}{S} \sum_{\epsilon \in E} f(\epsilon)\right]$ for an arbitrary function $f(\cdot)$ over $\epsilon$ if the cardinality of E is equal to S. This characteristic makes the number of noise samples S directly related to the tightness of the lower bound:

    L_{marginal} \geq L_{SGD}(S + 1) \geq L_{SGD}(S).    (6)

Refer to [7] for the proof of Eq. (6).
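The ordering in Eq. (6) can be checked numerically; the following toy simulation (entirely our own construction, with an arbitrary log-normal stand-in for the likelihood) shows the bound tightening as S grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_L_sgd(S, num_sets=200000):
    # simulate p_phi(y | g(h_theta(x), eps), x) as a positive random
    # likelihood; L_SGD(S) is the expected log of the S-sample average
    lik = rng.lognormal(mean=-1.0, sigma=1.0, size=(num_sets, S))
    return np.log(lik.mean(axis=1)).mean()

for S in (1, 2, 4, 8, 16):
    print(S, estimate_L_sgd(S))
# the estimates increase with S toward log E[lik] = -0.5, as Eq. (6) predicts
```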
Based on this observation, we propose to use $L_{SGD}(S > 1)$ as the objective of IWSGD. Note that the conventional training procedure for regularization by noise, such as dropout [34], relies on the objective with S = 1 (Section 4). Thus, using more samples achieves a tighter lower bound, and optimization by IWSGD has great potential to improve accuracy through proper regularization.
3.2.2 Training
Training with IWSGD is achieved by computing a weighted average of gradients obtained from multiple noise samples $\epsilon$. This training strategy is based on the derivative of the IWSGD objective with respect to the model parameters $\theta$ and $\phi$, which is given by

    \nabla_{\theta,\phi} L_{SGD}(S) = \nabla_{\theta,\phi} \mathbb{E}_{p(E)} \left[ \log \frac{1}{S} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right]
        = \mathbb{E}_{p(E)} \left[ \nabla_{\theta,\phi} \log \frac{1}{S} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right]
        = \mathbb{E}_{p(E)} \left[ \frac{\nabla_{\theta,\phi} \sum_{\epsilon \in E} p_\phi(y \mid g(h_\theta(x), \epsilon), x)}{\sum_{\epsilon' \in E} p_\phi(y \mid g(h_\theta(x), \epsilon'), x)} \right]    (7)
        = \mathbb{E}_{p(E)} \left[ \sum_{\epsilon \in E} w_\epsilon \nabla_{\theta,\phi} \log p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right],

where $w_\epsilon$ denotes the importance weight with respect to noise sample $\epsilon$ and is given by

    w_\epsilon = \frac{p_\phi(y \mid g(h_\theta(x), \epsilon), x)}{\sum_{\epsilon' \in E} p_\phi(y \mid g(h_\theta(x), \epsilon'), x)}.    (8)
Note that the weight of each sample is equal to the normalized likelihood of the sample.
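In an implementation, the weights of Eq. (8) are most stably obtained as a softmax over log-likelihoods; the helper below is a sketch of ours, not code from the paper:

```python
import torch

def importance_weights(log_liks):
    """log_liks: shape (S,) tensor of log p_phi(y | g(h_theta(x), eps_s), x)
    for S noise samples. Returns w_s = p_s / sum_s' p_s' (Eq. 8), computed
    as a softmax over log-likelihoods for numerical stability."""
    return torch.softmax(log_liks, dim=0)
```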
For training, we first draw a set of noise samples E and perform forward and backward propagation for each noise sample $\epsilon \in E$ to compute likelihoods and the corresponding gradients. Then, importance weights are computed by Eq. (8) and employed to compute the weighted average of gradients. Finally, we optimize the model by SGD with the importance weighted gradients.
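Put together, one IWSGD update might look as follows in PyTorch; `model.log_likelihood`, which returns the log-likelihood under a fresh noise/dropout draw on every call, is a hypothetical interface assumed by this sketch (reusing the torch import above):

```python
def iwsgd_step(model, x, y, S, optimizer):
    # S stochastic forward passes; each call samples a new dropout mask
    log_liks = torch.stack([model.log_likelihood(x, y) for _ in range(S)])
    # importance weights (Eq. 8), detached so they enter Eq. (7) as constants
    w = torch.softmax(log_liks.detach(), dim=0)
    loss = -(w * log_liks).sum()  # gradient = -sum_eps w_eps * grad log p
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```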
3.2.3 Inference
Inference with IWSGD is the same as with standard dropout; input activations to each dropout layer
are scaled based on dropout probability, rather than taking a subset of activations stochastically.
Therefore, compared to the standard dropout, neither additional sampling nor computation is required
during inference.
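In framework terms nothing changes at test time; for example, in PyTorch the usual evaluation mode suffices, since PyTorch's inverted dropout folds the scaling into training:

```python
model.eval()              # dropout layers become deterministic
with torch.no_grad():
    prediction = model(x)
```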
Figure 1: Implementation detail of IWSGD for dropout optimization. We compute a weighted
average of the gradients from multiple dropout masks. For each training example the gradients for
multiple dropout masks are independently computed and are averaged with importance weights in
Eq. (8).
3.3 Discussion
One may argue that the use of multiple samples is equivalent to running multiple iterations either
theoretically or empirically. It is difficult to derive the aggregated lower bounds of the marginal
likelihood over multiple iterations since the model parameters are updated in every iteration. However,
we observed that performance with a single sample saturates easily and it is unlikely to achieve
better accuracy with additional iterations than our algorithm based on IWSGD, as presented in
Section 5.1.
4 Importance Weighted Stochastic Gradient Descent for Dropout
This section describes how the proposed idea is realized in the context of dropout, which is one of the
most popular techniques for regularization by noise.
4.1 Analysis of Conventional Dropout
For training with dropout, binary dropout masks $\epsilon$ are sampled from a Bernoulli distribution. The hidden activations below dropout layers, denoted by $h_\theta(x)$, are either kept or discarded by element-wise multiplication with a randomly sampled dropout mask $\epsilon$; activations after the dropout layers are denoted by $g(h_\theta(x), \epsilon)$. The objective of SGD optimization is obtained by averaging log-likelihoods, which is formally given by

    L_{dropout} = \mathbb{E}_{p(\epsilon)} \left[ \log p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right],    (9)

where the outermost expectation over training data $\mathbb{E}_{p(x,y)}$ is omitted for simplicity as mentioned earlier. Note that the objective in Eq. (9) is a special case of the objective of IWSGD with S = 1.
This implies that the conventional dropout training optimizes the lower-bound of the ideal marginal
likelihood, which is improved by increasing the number of dropout masks for each training example
in an iteration.
4.2 Training Dropout with Tighter Lower-bound
Figure 1 illustrates how IWSGD is employed to train with dropout layers for regularization. Following
the same training procedure described in Section 3.2.2, we sample multiple dropout masks as a
realization of the multiple noise sampling.
Figure 2: Impact of multi-sample training on CIFAR datasets with variable dropout rates ((a) test error on CIFAR-10, (b) test error on CIFAR-100; both with depth 28). These results are with the wide residual net (widening factor 10, depth 28). Each data point and error bar is computed from 3 trials with different seeds. The results show that using IWSGD with multiple samples consistently improves performance and that the results are not sensitive to dropout rates.
Table 1: Comparison with various models on CIFAR datasets. We achieve near state-of-the-art performance by applying the multi-sample objective to the wide residual network [40]. Note that ×4 iterations means a model trained with 4 times more iterations. The test errors of our implementations (including the reproduction of [40]) are obtained from results with 3 different seeds. The numbers within parentheses denote the standard deviations of test errors.

| Model | CIFAR-10 | CIFAR-100 |
| ResNet [12] | 6.43 | - |
| ResNet with Stochastic Depth [15] | 4.91 | 24.58 |
| FractalNet with Dropout [21] | 4.60 | 23.73 |
| ResNet (pre-activation) [13] | 4.62 | 22.71 |
| PyramidNet [11] | 3.77 | 18.29 |
| Wide ResNet (depth=40) [40] | 3.80 | 18.30 |
| DenseNet [14] | 3.46 | 17.18 |
| Wide ResNet (depth=28, dropout=0.3) [40] | 3.89 | 18.85 |
| Wide ResNet (depth=28, dropout=0.5) (×4 iterations) | 4.48 (0.15) | 20.70 (0.19) |
| Wide ResNet (depth=28, dropout=0.5) (reproduced) | 3.88 (0.15) | 19.12 (0.24) |
| Wide ResNet (depth=28, dropout=0.5) with IWSGD (S = 4) | 3.58 (0.05) | 18.01 (0.16) |
| Wide ResNet (depth=28, dropout=0.5) with IWSGD (S = 8) | 3.55 (0.11) | 17.63 (0.13) |
The use of IWSGD for optimization requires only minor modifications in implementation. This
is because the gradient computation part in the standard dropout is reusable. The gradient for the
standard dropout is given by
    \nabla_{\theta,\phi} L_{dropout} = \mathbb{E}_{p(\epsilon)} \left[ \nabla_{\theta,\phi} \log p_\phi(y \mid g(h_\theta(x), \epsilon), x) \right].    (10)

Note that this is actually the unweighted version of the final line in Eq. (7). Therefore, the only additional
component for IWSGD is about weighting gradients with importance weights. This property makes
it easy to incorporate IWSGD into many applications with dropout.
5 Experiments
We evaluate the proposed training algorithm in various architectures for real world tasks including object recognition [40], visual question answering [39], image captioning [35] and action recognition [8].
These models are chosen for our experiments since they use dropouts actively for regularization. To
isolate the effect of the proposed training method, we employ simple models without integrating
heuristics for performance improvement (e.g., model ensembles, multi-scaling, etc.) and make
hyper-parameters (e.g., type of optimizer, learning rate, batch size, etc.) fixed.
Table 2: Accuracy on the VQA test-dev dataset. Our re-implementation of SAN [39] is used as baseline. Increasing the number of samples S with IWSGD consistently improves performance.

| Method | Open-Ended: All | Y/N | Num | Others | Multiple-Choice: All | Y/N | Num | Others |
| SAN [39] | 58.68 | 79.28 | 36.56 | 46.09 | - | - | - | - |
| SAN with 2-layer LSTM (reproduced) | 60.19 | 79.69 | 36.74 | 48.84 | 64.77 | 79.72 | 39.03 | 57.82 |
| with IWSGD (S = 5) | 60.31 | 80.74 | 34.70 | 48.66 | 65.01 | 80.73 | 36.36 | 58.05 |
| with IWSGD (S = 8) | 60.41 | 80.86 | 35.56 | 48.56 | 65.21 | 80.77 | 37.56 | 58.18 |

5.1 Object Recognition
The proposed algorithm is integrated into wide residual network [40], which uses dropout in every
residual block, and evaluated on CIFAR datasets [19]. This network shows the accuracy close to the
state-of-the-art performance in both CIFAR 10 and CIFAR 100 datasets with data augmentation. We
use the publicly available implementation1 by the authors of [40] and follow all the implementation
details in the original paper.
Figure 2 presents the impact of IWSGD with multiple samples. We perform experiments using
the wide residual network with widening factor 10 and depth 28. Each experiment is performed 3
times with different seeds in CIFAR datasets and test errors with corresponding standard deviations
are reported. The baseline performance is from [40], and we also report the reproduced results by
our implementation, which is denoted by Wide ResNet (reproduced). The result by the proposed
algorithm is denoted by IWSGD together with the number of samples S.
Training with IWSGD with multiple samples clearly improves performance as illustrated in Figure 2.
It also presents that, as the number of samples increases, the test errors decrease even more both on
CIFAR-10 and CIFAR-100, regardless of the dropout rate. Another observation is that the results
from the proposed multi-sample training strategy are not sensitive to dropout rates.
Using IWSGD with multiple samples to train the wide residual network enables us to achieve the near
state-of-the-art performance on CIFAR datasets. As illustrated in Table 1, the accuracy of the model
with S = 8 samples is very close to the state-of-the-art performance for CIFAR datasets, which is
based on another architecture [14]. To illustrate the benefit of our algorithm compared to the strategy
to simply increase the number of iterations, we evaluate the performance of the model trained with 4
times more iterations, which is denoted by ?4 iterations. Note that the model with more iterations
does not improve the performance as discussed in Section 3.3. We believe that the simple increase of
the number of iterations is likely to overfit the trained model.
5.2 Visual Question Answering
Visual Question Answering (VQA) [2] is a task to answer a question about a given image. Input of
this task is a pair of an image and a question, and output is an answer to the question. This task is
typically formulated as a classification problem with multi-modal inputs [1, 29, 39].
To train models and run experiments, we use VQA dataset [2], which is commonly used for the
evaluation of VQA algorithms. There are two different kinds of tasks: open-ended and multiple-choice. The model predicts an answer for the open-ended task without knowing a predefined set of candidate answers, while it selects one of the candidate answers in the multiple-choice task. We evaluate
the proposed training method using a baseline model, which is similar to [39] but has a single stack
of attention layer. For question features, we employ a two-layer LSTM based on word embedding2 ,
while using activations from pool5 layer of VGG-16 [32] for image features.
Table 2 presents the results of our experiment for VQA. SAN with 2-layer LSTM denotes our
baseline with the standard dropout. This method already outperforms the comparable model with
spatial attention [39] possibly due to the use of a stronger question encoder, two-layer LSTM. When
we evaluate performance of IWSGD with 5 and 8 samples, we observe consistent performance
improvement of our algorithm with increase of the number of samples.
1 https://github.com/szagoruyko/wide-residual-networks
2 https://github.com/VT-vision-lab/VQA_LSTM_CNN
Table 3: Results on the MSCOCO test dataset for image captioning. For the BLEU metric, we use BLEU-4, which is computed based on 4-gram words, since the baseline method [35] reported BLEU-4 only.

| Method | BLEU | METEOR | CIDEr |
| Google-NIC [35] | 27.7 | 23.7 | 85.5 |
| Google-NIC (reproduced) | 26.8 | 22.6 | 82.2 |
| with IWSGD (S = 5) | 27.5 | 22.9 | 83.6 |
Table 4: Average classification accuracy of compared algorithms over three splits of the UCF-101 dataset. TwoStreamFusion (reproduced) denotes our reproduction based on the public source code.

| Method | UCF-101 |
| TwoStreamFusion [8] | 92.50 % |
| TwoStreamFusion (reproduced) | 92.49 % |
| with IWSGD (S = 5) | 92.73 % |
| with IWSGD (S = 10) | 92.69 % |
| with IWSGD (S = 15) | 92.72 % |

5.3 Image Captioning
Image captioning is a problem generating a natural language description given an image. This task is
typically handled by an encoder-decoder network, where a CNN encoder transforms an input image
into a feature vector and an LSTM decoder generates a caption from the feature by predicting words
one by one. A dropout layer is located on top of the hidden state in LSTM decoder. To evaluate the
proposed training method, we exploit a publicly available implementation3 whose model is identical
to the standard encoder-decoder model of [35], but uses VGG-16 [32] instead of GoogLeNet as a
CNN encoder. We fix the parameters of VGG-16 network to follow the implementation of [35].
We use MSCOCO dataset for experiment, and evaluate models with several metrics (BLEU, METEOR
and CIDEr) using the public MSCOCO caption evaluation tool. These metrics measure precision or
recall of n-gram words between the generated captions and the ground-truths.
Table 3 summarizes the results on image captioning. Google-NIC is the reported scores in the
original paper [35] while Google-NIC (reproduced) denotes the results of our reproduction. Our
reproduction has slightly lower accuracy due to use of a different CNN encoder. IWSGD with 5
samples consistently improves performance in terms of all three metrics, which indicates our training
method is also effective to learn LSTMs.
5.4 Action Recognition
Action recognition is a task recognizing a human action in videos. We employ a well-known
benchmark of action classification, UCF-101 [33], for evaluation, which has 13,320 trimmed videos
annotated with 101 action categories. The dataset has three splits for cross validation, and the final
performance is calculated by the average accuracy of the three splits.
We employ a variation of two-stream CNN proposed by [8], which shows competitive performance on
UCF-101. The network consists of three subnetworks: a spatial stream network for image, a temporal
stream network for optical flow and a fusion network for combining the two-stream networks. We
apply our IWSGD only to fine-tuning the fusion unit for training efficiency. Our implementation is
based on the public source code4 . Hyper-parameters such as dropout rate and learning rate scheduling
is the same as the baseline model [8].
Table 4 illustrates performance improvement by integrating IWSGD but the overall tendency with
increase of the number of samples is not consistent. We suspect that this is because the performance
of the model is already saturated and there is not much room for improvement through fine-tuning
only the fusion unit.
3 https://github.com/karpathy/neuraltalk2
4 http://www.robots.ox.ac.uk/~vgg/software/two_stream_action/
6 Conclusion
We proposed an optimization method for regularization by noise, especially for dropout, in deep
neural networks. This method is based on a novel interpretation of noise injected deterministic hidden
units as stochastic hidden ones. Using this interpretation, we proposed to use IWSGD (Importance
Weighted Stochastic Gradient Descent), which achieves tighter lower bounds as the number of
samples increases. We applied the proposed optimization method to dropout, a special case of the
regularization by noise, and evaluated on various visual recognition tasks: image classification,
visual question answering, image captioning and action classification. We observed the consistent
improvement of our algorithm over all tasks, and achieved near state-of-the-art performance on
CIFAR datasets through better optimization. We believe that the proposed method may improve
many other deep neural network models with dropout layers.
Acknowledgement This work was supported by the IITP grant funded by the Korea government
(MSIT) [2017-0-01778, Development of Explainable Human-level Deep Machine Learning Inference Framework; 2017-0-01780, The Technology Development for Event Recognition/Relational
Reasoning and Learning Knowledge based System for Video Understanding].
References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Neural module networks. In CVPR, 2016.
[2] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. Lawrence Zitnick, and D. Parikh. VQA:
visual question answering. In ICCV, 2015.
[3] J. Ba and B. Frey. Adaptive dropout for training deep neural networks. In NIPS, 2013.
[4] J. Ba, R. R. Salakhutdinov, R. B. Grosse, and B. J. Frey. Learning wake-sleep recurrent attention
models. In NIPS, 2015.
[5] J. Bornschein and Y. Bengio. Reweighted wake-sleep. In ICLR, 2015.
[6] S. R. Bulo, L. Porzi, and P. Kontschieder. Dropout distillation. In ICML, 2016.
[7] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016.
[8] C. Feichtenhofer, A. Pinz, and A. Zisserman. Convolutional two-stream network fusion for
video action recognition. In CVPR, 2016.
[9] Y. Gal and Z. Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.
[10] B. Han, J. Sim, and H. Adam. Branchout: Regularization for online ensemble tracking with
convolutional neural networks. In CVPR, 2017.
[11] D. Han, J. Kim, and J. Kim. Deep pyramidal residual networks. CVPR, 2017.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR,
2016.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In ECCV,
2016.
[14] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. Densely connected convolutional
networks. CVPR, 2017.
[15] G. Huang, Y. Sun, Z. Liu, D. Sedra, and K. Q. Weinberger. Deep networks with stochastic
depth. In ECCV, 2016.
[16] P. Jain, V. Kulkarni, A. Thakurta, and O. Williams. To drop or not to drop: Robustness,
consistency and differential privacy properties of dropout. arXiv preprint arXiv:1503.02031,
2015.
[17] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization
trick. In NIPS, 2015.
[18] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In ICLR, 2014.
[19] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report,
University of Toronto, 2009.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, 2012.
[21] G. Larsson, M. Maire, and G. Shakhnarovich. Fractalnet: Ultra-deep neural networks without
residuals. ICLR, 2017.
[22] Z. Li, B. Gong, and T. Yang. Improved dropout for shallow and deep learning. In NIPS, 2016.
[23] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation.
In CVPR, 2015.
[24] X. Ma, Y. Gao, Z. Hu, Y. Yu, Y. Deng, and E. Hovy. Dropout with expectation-linear regularization. In ICLR, 2016.
[25] A. Mnih and D. Rezende. Variational inference for monte carlo objectives. In ICML, 2016.
[26] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[27] H. Nam and B. Han. Learning multi-domain convolutional neural networks for visual tracking.
In CVPR, 2016.
[28] H. Noh, S. Hong, and B. Han. Learning deconvolution network for semantic segmentation. In
ICCV, 2015.
[29] H. Noh, S. Hong, and B. Han. Image question answering using convolutional neural network
with dynamic parameter prediction. In CVPR, 2016.
[30] T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic
feedforward neural networks. In ICLR, 2015.
[31] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with
region proposal networks. In NIPS, 2015.
[32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. In ICLR, 2015.
[33] K. Soomro, A. R. Zamir, and M. Shah. UCF101: a dataset of 101 human actions classes from
videos in the wild. arXiv preprint arXiv:1212.0402, 2012.
[34] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a
simple way to prevent neural networks from overfitting. JMLR, 15(1):1929-1958, 2014.
[35] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption
generator. In CVPR, 2015.
[36] S. Wager, S. Wang, and P. S. Liang. Dropout training as adaptive regularization. In NIPS, 2013.
[37] L. Wan, M. Zeiler, S. Zhang, Y. LeCun, and R. Fergus. Regularization of neural networks using
dropconnect. In ICML, 2013.
[38] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao,
K. Macherey, et al. Google?s neural machine translation system: Bridging the gap between
human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[39] Z. Yang, X. He, J. Gao, L. Deng, and A. Smola. Stacked attention networks for image question
answering. In CVPR, 2016.
[40] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016.
6,740 | 7,097 | Learning Graph Representations with Embedding
Propagation
Alberto García-Durán
NEC Labs Europe
Heidelberg, Germany
[email protected]
Mathias Niepert
NEC Labs Europe
Heidelberg, Germany
[email protected]
Abstract
We propose Embedding Propagation (E P), an unsupervised learning framework for
graph-structured data. E P learns vector representations of graphs by passing two
types of messages between neighboring nodes. Forward messages consist of label
representations such as representations of words and other attributes associated with
the nodes. Backward messages consist of gradients that result from aggregating the
label representations and applying a reconstruction loss. Node representations are
finally computed from the representation of their labels. With significantly fewer
parameters and hyperparameters an instance of E P is competitive with and often
outperforms state of the art unsupervised and semi-supervised learning methods on
a range of benchmark data sets.
1 Introduction
Graph-structured data occurs in numerous application domains such as social networks, bioinformatics, natural language processing, and relational knowledge bases. The computational problems
commonly addressed in these domains are network classification [40], statistical relational learning [12, 36], link prediction [22, 24], and anomaly detection [8, 1], to name but a few. In addition,
graph-based methods for unsupervised and semi-supervised learning are often applied to data sets
with few labeled examples. For instance, spectral decompositions [25] and locally linear embeddings
(LLE) [38] are always computed for a data set's affinity graph, that is, a graph that is first constructed
using domain knowledge or some measure of similarity between data points. Novel approaches to
unsupervised representation learning for graph-structured data, therefore, are important contributions
and are directly applicable to a wide range of problems.
E P learns vector representations (embeddings) of graphs by passing messages between neighboring
nodes. This is reminiscent of power iteration algorithms which are used for such problems as computing the PageRank for the web graph [33], running label propagation algorithms [47], performing
isomorphism testing [16], and spectral clustering [25]. Whenever a computational process can be
mapped to message exchanges between nodes, it is implementable in graph processing frameworks
such as Pregel [29], GraphLab [23], and GraphX [44].
Graph labels represent vertex attributes such as bag of words, movie genres, categorical features,
and continuous features. They are not to be confused with class labels of a supervised classification
problem. In the E P learning framework, each vertex v sends and receives two types of messages.
Label representations are sent from v's neighboring nodes to v and are combined so as to reconstruct
the representations of v's labels. The gradients resulting from the application of some reconstruction
loss are sent back as messages to the neighboring vertices so as to update their labels' representations
and the representations of v's labels. This process is repeated for a certain number of iterations or
until a convergence threshold is reached. Finally, the label representations of v are used to compute a
representation of v itself.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Despite its conceptual simplicity, we show that E P generalizes several existing machine learning
methods for graph-structured data. Since E P learns embeddings by incorporating different label types
(representing, for instance, text and images) it is a framework for learning with multi-modal data [31].
2 Previous Work
There are numerous methods for embedding learning such as multidimensional scaling (MDS) [20],
Laplacian Eigenmap [3], Siamese networks [7], IsoMap [43], and LLE [38]. Most of these approaches
construct an affinity graph on the data points first and then embed the graph into a low dimensional
space. The corresponding optimization problems often have to be solved in closed form (for instance,
due to constraints on the objective that remove degenerate solutions) which is intractable for large
graphs. We discuss the relation to LLE [38] in more detail when we analyze our framework.
Graph neural networks (GNN) [39] is a general class of recursive neural networks for graphs where
each node is associated with one label. Learning is performed with the Almeida-Pineda algorithm
[2, 35]. The computation of the node embeddings is performed by backpropagating gradients for a
supervised loss after running a recursive propagation model to convergence. In the E P framework
gradients are computed and backpropagated immediately for each node. Gated graph sequence
neural networks (GG-SNN) [21] modify GNN to use gated recurrent units and modern optimization
techniques. Recent work on graph convolutional networks (GCNs) uses a supervised loss to inject
class label information into the learned representations [18]. GCNs as well as GNNs and GG-SNNs,
can be seen as instances of the Message Passing Neural Network (M PNN) framework, recently
introduced in [13]. There are several significant differences between the E P and M PNN framework:
(i) all instances of M PNN use a supervised loss but E P is unsupervised and, therefore, classifier
agnostic; (ii) E P learns label embeddings for each of the different label types independently and
combines them into a joint node representation whereas all existing instances of M PNN do not provide
an explicit method for combining heterogeneous feature types. Moreover, E P's learning principle
based on reconstructing each node's representation from neighboring nodes' representations is highly
suitable for the inductive setting where nodes are missing during training.
Most closely related to our work is D EEP WALK [34] which applies a word embedding algorithm
to random walks. The idea is that random walks (node sequences) are treated as sentences (word
sequences). A S KIP G RAM [30] model is then used to learn node embeddings from the random
walks. N ODE 2 VEC [15] is identical to D EEP WALK with the exception that it explores new methods
to generate random walks (the input sentences to WORD 2 VEC), at the cost of introducing more
hyperparamenters. L INE [41] optimizes similarities between pairs of node embeddings so as to
preserve their first and second-order proximity. The main advantage of E P over these approaches is its
ability to incorporate graph attributes such as text and continuous features. P LANETOID [45] combines
a learning objective similar to that of D EEP WALK with supervised objectives. It also incorporates
bag of words associated with nodes into these supervised objectives. We show experimentally that
for graph without attributes, all of the above methods learn embeddings of similar quality and that E P
outperforms all other methods significantly on graphs with word labels. We can also show that E P
generalizes methods that learn embeddings for multi-relational graphs such as T RANS E [5].
3 Embedding Propagation
Figure 1: A fragment of a citation network. [The figure shows a vertex v linked to articles a214, a95, a23, a237, and a651, whose word labels are {bio, chemical, dna, rna}, {bio, health, gene}, {health, symptom}, {learning, rna}, and {margin, SVM, loss}, respectively.]

A graph $G = (V, E)$ consists of a set of vertices $V$ and a set of edges $E \subseteq \{(v, w) \mid v, w \in V\}$. The approach works with directed and undirected edges as well as with multiple edge types. $N(v)$ is the set of neighbors of $v$ if $G$ is undirected and the set of in-neighbors if $G$ is directed. The graph $G$ is associated with a set of $k$ label classes $L = \{L_1, ..., L_k\}$ where each $L_i$ is a set of labels corresponding to label type $i$. A label is an identifier of some object and not to be confused with a class label in classification
problems. Labels allow us to represent a wide range of objects associated with the vertices such
as words, movie genres, and continuous features. To illustrate the concept of label types, Figure 1
depicts a fragment of a citation network. There are two label types: one representing the unique article identifiers and the other representing the identifiers of natural language words occurring in the articles.

Figure 2: Illustration of the messages passed between a vertex v and its neighbors for the citation network of Figure 1. First, the label embeddings are sent from the neighboring vertices to the vertex v (black node). These embeddings are fed into differentiable functions $\tilde{g}_i$. Here, there is one function for the article identifier label type (yellow shades) and one for the natural language words label type (red shades). The gradients are derived from the distances $d_i$ between (i) the output of the functions $\tilde{g}_i$ applied to the embeddings sent from v's neighbors and (ii) the output of the functions $g_i$ applied to v's label embeddings. The better the output of the functions $\tilde{g}_i$ is able to reconstruct the output of the functions $g_i$, the smaller the value of the distance measure. The gradients are the messages that are propagated back to the neighboring nodes so as to update the corresponding embedding vectors. The figure is best seen in color.
The functions $l_i : V \to 2^{L_i}$ map every vertex in the graph to a subset of the labels $L_i$ of label type $i$. We write $l(v) = \bigcup_i l_i(v)$ for the set of all labels associated with vertex $v$. Moreover, we write $l_i(N(v)) = \{l_i(u) \mid u \in N(v)\}$ for the multiset of labels of type $i$ associated with the neighbors of vertex $v$.
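As a data-structure sketch (the names and the adjacency below are our own toy example in the spirit of Figure 1, not the paper's implementation), the typed label functions can be plain dictionaries:

```python
# type 0: article-identifier labels, type 1: word labels
labels = {
    "a95":  ({"a95"},  {"bio", "health", "gene"}),
    "a23":  ({"a23"},  {"health", "symptom"}),
    "a237": ({"a237"}, {"learning", "rna"}),
}
neighbors = {"a95": ["a23", "a237"], "a23": ["a95"], "a237": ["a95"]}

def l(v, i):
    """labels of type i associated with vertex v (the function l_i)"""
    return labels[v][i]

def l_of_neighbors(v, i):
    """multiset of type-i labels of v's neighbors, l_i(N(v))"""
    return [lab for u in neighbors[v] for lab in labels[u][i]]
```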
We begin by describing the general learning framework of E P which proceeds in two steps.
- First, E P learns a vector representation for every label by passing messages along the edges of the input graph. We write $\boldsymbol{\ell}$ for the current vector representation of a label $\ell$. For labels of label type $i$, we apply a learnable embedding function $\boldsymbol{\ell} = f_i(\ell)$ that maps every label $\ell$ of type $i$ to its embedding $\boldsymbol{\ell}$. The embedding functions $f_i$ have to be differentiable so as to facilitate parameter updates during learning. For each label type one can choose an appropriate embedding function such as a linear embedding function for text input or a more complex convolutional network for image data.
- Second, E P computes a vector representation for each vertex $v$ from the vector representations of $v$'s labels. We write $\mathbf{v}$ for the current vector representation of a vertex $v$.

Let $v \in V$, let $i \in \{1, ..., k\}$ be a label type, and let $d_i \in \mathbb{N}$ be the size of the embedding for label type $i$. Moreover, let $\mathbf{h}_i(v) = g_i(\{\boldsymbol{\ell} \mid \ell \in l_i(v)\})$ and let $\tilde{\mathbf{h}}_i(v) = \tilde{g}_i(\{\boldsymbol{\ell} \mid \ell \in l_i(N(v))\})$, where $g_i$ and $\tilde{g}_i$ are differentiable functions that map multisets of $d_i$-dimensional vectors to a single $d_i$-dimensional vector. We refer to the vector $\mathbf{h}_i(v)$ as the embedding of label type $i$ for vertex $v$ and to $\tilde{\mathbf{h}}_i(v)$ as the reconstruction of the embedding of label type $i$ for vertex $v$ since it is computed from the label embeddings of $v$'s neighbors. While the $g_i$ and $\tilde{g}_i$ can be parameterized (typically with a neural network), in many cases they are simple parameter-free functions that compute, for instance, the element-wise average or maximum of the input.
The first learning procedure is driven by the following objectives for each label type $i \in \{1, ..., k\}$:

    \min \mathcal{L}_i = \min \sum_{v \in V} d_i\left(\tilde{\mathbf{h}}_i(v), \mathbf{h}_i(v)\right),    (1)

where $d_i$ is some measure of distance between $\mathbf{h}_i(v)$, the current representation of label type $i$ for vertex $v$, and its reconstruction $\tilde{\mathbf{h}}_i(v)$. Hence, the objective of the approach is to learn the parameters of the functions $g_i$ and $\tilde{g}_i$ (if such parameters exist) and the vector representations of the labels such that the output of $\tilde{g}_i$ applied to the type $i$ label embeddings of $v$'s neighbors is close to the output of $g_i$ applied to the type $i$ label embeddings of $v$. For each vertex $v$ the messages passed to $v$ from its neighbors are the representations of their labels. The messages passed back to $v$'s neighbors are the gradients which are used to update the label embeddings. The gradients also update $v$'s label embeddings. Figure 2 illustrates the first part of the unsupervised learning framework for a part of a citation network. A representation is learned both for the article identifiers and the words occurring in the articles. The gradients are computed based on a loss between the reconstruction of the label type embeddings and their current values.

Figure 3: For each vertex $v$, the function $r$ computes a vector representation of the vertex based on the vector representations of $v$'s labels.
Due to the learning principle of E P, nodes that do not have any labels for label type i can be assigned
a new dummy label unique to the node and the label type. The representations learned for these
dummy labels can then be used as part of the representation of the node itself. Hence, E P is also
applicable in situations where data is missing and incomplete.
The embedding functions fi can be initialized randomly or with an existing model. For instance,
embedding functions for words can be initialized using word embedding algorithms [30] and those
for images with pretrained CNNs [19, 11]. Initialized parameters are then refined by the application
of E P. We can show empirically, however, that random initializations of the embedding functions fi
also lead to effective vertex embeddings.
The second step of the learning framework applies a function $r$ to compute the representation of the vertex $v$ from the representations of $v$'s labels: $\mathbf{v} = r(\{\boldsymbol{\ell} \mid \ell \in l(v)\})$. Here, the label embeddings and the parameters of the functions $g_i$ and $\tilde{g}_i$ (if such parameters exist) remain unchanged. Figure 3 illustrates the second step of E P.
We now introduce E P -B, an instance of the E P framework that we have found to be highly effective for several of the typical graph-based learning problems. The instance results from setting $g_i(\mathbf{H}) = \tilde{g}_i(\mathbf{H}) = \frac{1}{|\mathbf{H}|} \sum_{\mathbf{h} \in \mathbf{H}} \mathbf{h}$ for all label types $i$ and all sets of embedding vectors $\mathbf{H}$. In this case we have, for any vertex $v$ and any label type $i$,

    \mathbf{h}_i(v) = \frac{1}{|l_i(v)|} \sum_{\ell \in l_i(v)} \boldsymbol{\ell}, \qquad
    \tilde{\mathbf{h}}_i(v) = \frac{1}{|l_i(N(v))|} \sum_{u \in N(v)} \sum_{\ell \in l_i(u)} \boldsymbol{\ell}.    (2)
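A minimal numpy sketch of Eq. (2), assuming one embedding lookup table per label type and the toy dictionaries from above (our own construction):

```python
import numpy as np

def h(v, i, emb, labels):
    # h_i(v): average of the embeddings of v's own type-i labels
    return np.mean([emb[i][lab] for lab in labels[v][i]], axis=0)

def h_tilde(v, i, emb, labels, neighbors):
    # h~_i(v): average over the multiset of type-i labels of v's neighbors
    vecs = [emb[i][lab] for u in neighbors[v] for lab in labels[u][i]]
    return np.mean(vecs, axis=0)
```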
In conjunction with the above functions $g_i$ and $\tilde{g}_i$, we can use the margin-based ranking loss1

    \mathcal{L}_i = \sum_{v \in V} \sum_{u \in V \setminus \{v\}} \left[ \gamma + d_i\left(\tilde{\mathbf{h}}_i(v), \mathbf{h}_i(v)\right) - d_i\left(\tilde{\mathbf{h}}_i(v), \mathbf{h}_i(u)\right) \right]_+,    (3)

where $d_i$ is the Euclidean distance, $[x]_+$ is the positive part of $x$, and $\gamma > 0$ is a margin hyperparameter.
Hence, the objective is to make the distance between $\tilde{\mathbf{h}}_i(v)$, the reconstructed embedding of label type $i$ for vertex $v$, and $\mathbf{h}_i(v)$, the current embedding of label type $i$ for vertex $v$, smaller than the distance between $\tilde{\mathbf{h}}_i(v)$ and $\mathbf{h}_i(u)$, the embedding of label type $i$ of a vertex $u$ different from $v$. We
solve the minimization problem with gradient descent algorithms and use one node u for every v in
each learning iteration. Despite using only first-order proximity information in the reconstruction
of the label embeddings, this learning is effectively propagating embedding information across the
graph: an update of a label embedding affects neighboring label embeddings which, in other updates,
affects their neighboring label embeddings, and so on; hence the name of this learning framework.
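Reusing the helpers above, the margin-based loss of Eq. (3) for one positive vertex v and one sampled negative vertex u (as used per learning iteration) could be sketched as:

```python
def ep_b_margin_loss(v, u, i, emb, labels, neighbors, gamma=1.0):
    rec = h_tilde(v, i, emb, labels, neighbors)       # reconstruction for v
    d_pos = np.linalg.norm(rec - h(v, i, emb, labels))
    d_neg = np.linalg.norm(rec - h(u, i, emb, labels))
    return max(0.0, gamma + d_pos - d_neg)            # [gamma + d+ - d-]_+
```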
1 Directly minimizing Equation (1) could lead to degenerate solutions.
Table 1: Number of parameters and hyperparameters for a graph without node attributes.

| Method | #params | #hyperparams |
| D EEP WALK [34] | 2d|V| | 4 |
| NODE 2 VEC [15] | 2d|V| | 6 |
| L INE [41] | 2d|V| | 2 |
| P LANETOID [45] | 2d|V| | ≥6 |
| E P -B | d|V| | 2 |

Table 2: Dataset statistics. k is the number of label types.

| Dataset | |V| | |E| | #classes | k |
| BlogCatalog | 10,312 | 333,983 | 39 | 1 |
| PPI | 3,890 | 76,584 | 50 | 1 |
| POS | 4,777 | 184,812 | 40 | 1 |
| Cora | 2,708 | 5,429 | 7 | 2 |
| Citeseer | 3,327 | 4,732 | 6 | 2 |
| Pubmed | 19,717 | 44,338 | 3 | 2 |
Finally, a simple instance of the function $r$ is a function that concatenates all the embeddings $\mathbf{h}_i(v)$ for $i \in \{1, ..., k\}$ to form one single vector representation for each node $v$:

    \mathbf{v} = \mathrm{concat}\left[ g_1(\{\boldsymbol{\ell} \mid \ell \in l_1(v)\}), ..., g_k(\{\boldsymbol{\ell} \mid \ell \in l_k(v)\}) \right] = \mathrm{concat}\left[ \mathbf{h}_1(v), ..., \mathbf{h}_k(v) \right].    (4)
Figure 3 illustrates the working of this particular function r. We refer to the instance of the learning
framework based on the formulas (2),(3), and (4) as E P -B. The resulting vector representation of
the vertices can now be used for downstream learning problems such as vertex classification, link
prediction, and so on.
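In code, the function r of Eq. (4) is a one-liner on top of the helpers above:

```python
def node_embedding(v, emb, labels, k):
    # concatenate the per-type embeddings h_1(v), ..., h_k(v)
    return np.concatenate([h(v, i, emb, labels) for i in range(k)])
```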
4 Formal Analysis
We now analyze the computation and model complexities of the E P framework and its connection to
existing models.
4.1 Computational and Model Complexity
Let G = (V, E) be a graph (either directed or undirected) with k label types $L = \{L_1, ..., L_k\}$. Moreover, let $\mathrm{lab}_{\max} = \max_{v \in V, i \in \{1,...,k\}} |l_i(v)|$ be the maximum number of labels for any type and any vertex of the input graph, let $\deg_{\max} = \max_{v \in V} |N(v)|$ be the maximum degree of the input graph, and let $\rho(n)$ be the worst-case complexity of computing any of the functions $g_i$ and $\tilde{g}_i$ on n input vectors of size $d_i$. Now, the worst-case complexity of one learning iteration is $O(k|V|\,\rho(\mathrm{lab}_{\max} \deg_{\max}))$.
For an input graph without attributes, that is, where the only label type represents node identities, the worst-case complexity of one learning iteration is $O(|V|\,\rho(\deg_{\max}))$. If, in addition, the complexity of the single reconstruction function is linear in the number of input vectors, the complexity is $O(|V| \deg_{\max})$ and, hence, linear in both the number of nodes and the maximum degree of the input graph. This is the case for most aggregation functions and, in particular, for the functions $\tilde{g}_i$ and $g_i$ used in EP-B, the particular instance of the learning framework defined by the formulas (2), (3), and (4). Furthermore, the average complexity is linear in the average node degree of the input graph. The worst-case complexity of EP can be limited by not exchanging messages from all neighbors but only a sampled subset of size at most κ, as sketched below. We explore different sampling scenarios in the experimental section.
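A minimal sketch of this neighbor sampling; the function and parameter names are ours, not from the authors' code.

```python
import random

def sampled_neighbors(neighbors, v, kappa):
    """All neighbors of v if there are at most kappa, else a uniform sample of size kappa."""
    nbrs = list(neighbors[v])
    return nbrs if len(nbrs) <= kappa else random.sample(nbrs, kappa)
```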
In general, the number of parameters and hyperparameters of the learning framework depends on the parameters of the functions $g_i$ and $\tilde{g}_i$, the loss functions, and the number of distinct labels of the input graph. For graphs without attributes, the only parameters of EP-B are the embedding weights and the only hyperparameters are the size of the embedding d and the margin γ. Hence, the number of parameters is d|V| and the number of hyperparameters is 2. Table 1 lists the parameter counts for a set of state-of-the-art methods for learning embeddings for graphs without attributes.
4.2 Comparison to Existing Models
EP-B is related to locally linear embeddings (LLE) [38]. In LLE there is a single function $\tilde{g}$ which computes a linear combination of the vertex embeddings. $\tilde{g}$'s weights are learned for each vertex in a separate previous step. Hence, unlike EP-B, $\tilde{g}$ does not compute the unweighted average of the input embeddings. Moreover, LLE does not learn embeddings for the labels (attribute values) but directly for the vertices of the input graph. Finally, LLE is only feasible for graphs where each node has at most a small constant number of neighbors. LLE imposes additional constraints to avoid degenerate solutions to the objective and solves the resulting optimization problem in closed form. This is not feasible for large graphs.
In several applications, the nodes of the graphs are associated with a set of words. For instance, in citation networks, the nodes which represent individual articles can be associated with a bag of words. Every label corresponds to one of the words. Figure 1 illustrates a part of such a citation network. In this context, EP-B's learning of word embeddings is related to the CBOW model [30]. The difference is that for EP-B the context of a word is determined by the neighborhood of the vertices it is associated with, and it is the embedding of the word that is reconstructed, not its one-hot encoding.
For graphs with several different edge types, such as multi-relational graphs, the reconstruction functions $\tilde{g}_i$ can be made dependent on the type of the edge. For instance, one could have, for any vertex v and label type i,

$$\tilde{\mathbf{h}}_i(v) = \frac{1}{|l_i(N(v))|} \sum_{u \in N(v)} \sum_{\ell \in l_i(u)} \left( \ell + \mathbf{r}_{(u,v)} \right),$$

where $\mathbf{r}_{(u,v)}$ is the vector representation corresponding to the type of the edge (the relation) from vertex u to vertex v, and $\mathbf{h}_i(v)$ could be the average embedding of v's node id labels. In combination with the margin-based ranking loss (3), this is related to embedding models for multi-relational graphs [32] such as TransE [5].
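A sketch of this edge-type-aware reconstruction; all names are illustrative, and the relation embeddings rel_emb are assumed to be learned jointly with the label embeddings, in the spirit of TransE [5].

```python
import numpy as np

def h_tilde_relational(v, neighbors, labels, emb, rel_emb, rel_type):
    """Reconstruction with relation offsets: average of (label embedding + r_(u,v))."""
    msgs = [emb[l] + rel_emb[rel_type[(u, v)]]
            for u in neighbors[v] for l in labels[u]]
    return sum(msgs) / len(msgs)
```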
5 Experiments
The objectives of the experiments are threefold. First, we compare EP-B to the state of the art on node classification problems. Second, we visualize the learned representations. Third, we investigate the impact of an upper bound on the number of neighbors that send messages.

We evaluate EP with the following six commonly used benchmark data sets. BlogCatalog [46] is a graph representing the social relationships of the bloggers listed on the BlogCatalog website. The class labels represent user interests. PPI [6] is a subgraph of the protein-protein interactions for Homo sapiens. The class labels represent biological states. POS [28] is a co-occurrence network of words appearing in the first million bytes of the Wikipedia dump. The class labels represent the Part-of-Speech (POS) tags. Cora, Citeseer and Pubmed [40] are citation networks where nodes represent documents and their corresponding bags-of-words and links represent citations. The class labels represent the main topic of the document. Whereas BlogCatalog, PPI and POS are multi-label classification problems, Cora, Citeseer and Pubmed have exactly one class label per node. Some statistics of these data sets are summarized in Table 2.
5.1 Set-up
The input to the node classification problem is a graph (with or without node attributes) where a fraction of the nodes is assigned a class label. The output is an assignment of class labels to the test nodes. Using the node classification data sets, we compare the performance of EP-B to the state-of-the-art approaches DeepWalk [34], LINE [41], node2vec [15], Planetoid [45], GCN [18], and also to the baselines wvRN [27] and Majority. wvRN is a weighted relational classifier that estimates the class label of a node with a weighted mean of its neighbors' class labels. Since all the input graphs are unweighted, wvRN assigns to a node v the class label that appears most frequently in v's neighborhood. Majority always chooses the most frequent class labels in the training set.

For all data sets and all label types the functions f_i are always linear embeddings equivalent to an embedding lookup table. The dimension of the embeddings is always fixed to 128. We used this dimension for all methods, which is in line with previous work such as DeepWalk and node2vec for the data sets under consideration. For EP-B, we chose the margin γ in (3) from the set of values [1, 5, 10, 20] on validation data. For all approaches except LINE, we used the hyperparameter values reported in previous work, since these values were tuned to the data sets. As LINE has not been applied to the data sets before, we set its number of samples to 20 million and negative samples to 5. This means that LINE is trained on (at least) an order of magnitude more examples than all other methods.
Table 3: Multi-label classification results for BlogCatalog, POS and PPI in the transductive setting. The upper and lower parts list micro and macro F1 scores, respectively.

Micro-F1
            BlogCatalog (γ=1)                          POS (γ=10)                                 PPI (γ=5)
Tr [%]      10            50            90             10            50            90             10            50            90
EP-B        35.05 ± 0.41  39.44 ± 0.29  40.41 ± 1.59   46.97 ± 0.36  49.52 ± 0.48  50.05 ± 2.23   17.82 ± 0.77  23.30 ± 0.37  24.74 ± 1.30
DeepWalk    34.48 ± 0.40  38.11 ± 0.43  38.34 ± 1.82   45.02 ± 1.09  49.10 ± 0.52  49.33 ± 2.39   17.14 ± 0.89  23.52 ± 0.65  25.02 ± 1.38
node2vec    35.54 ± 0.49  39.31 ± 0.25  40.03 ± 1.22   44.66 ± 0.92  48.73 ± 0.59  49.73 ± 2.35   17.00 ± 0.81  23.31 ± 0.62  24.75 ± 2.02
LINE        34.83 ± 0.39  38.99 ± 0.25  38.77 ± 1.08   45.22 ± 0.86  51.64 ± 0.65  52.28 ± 1.87   16.55 ± 1.50  23.01 ± 0.84  25.28 ± 1.68
wvRN        20.50 ± 0.45  30.24 ± 0.96  33.47 ± 1.50   26.07 ± 4.35  29.21 ± 2.21  33.09 ± 2.27   10.99 ± 0.57  18.14 ± 0.60  21.49 ± 1.19
Majority    16.51 ± 0.53  16.88 ± 0.35  16.53 ± 0.74   40.40 ± 0.62  40.47 ± 0.51  40.10 ± 2.57    6.15 ± 0.40   5.94 ± 0.66   5.66 ± 0.92

Macro-F1
EP-B        19.08 ± 0.78  25.11 ± 0.43  25.97 ± 1.25    8.85 ± 0.33  10.45 ± 0.69  12.17 ± 1.19   13.80 ± 0.67  18.96 ± 0.43  20.36 ± 1.42
DeepWalk    18.16 ± 0.44  22.65 ± 0.49  22.86 ± 1.03    8.20 ± 0.27  10.84 ± 0.62  12.23 ± 1.38   13.01 ± 0.90  18.73 ± 0.59  20.01 ± 1.82
node2vec    19.08 ± 0.52  23.97 ± 0.58  24.82 ± 1.00    8.32 ± 0.36  11.07 ± 0.60  12.11 ± 1.93   13.32 ± 0.49  18.57 ± 0.49  19.66 ± 2.34
LINE        18.13 ± 0.33  22.56 ± 0.49  23.00 ± 0.92    8.49 ± 0.41  12.43 ± 0.81  12.40 ± 1.18   12.79 ± 0.48  18.06 ± 0.81  20.59 ± 1.59
wvRN        10.86 ± 0.87  17.46 ± 0.74  20.10 ± 0.98    4.14 ± 0.54   4.42 ± 0.35   4.41 ± 0.53    8.60 ± 0.57  14.65 ± 0.74  17.50 ± 1.42
Majority     2.51 ± 0.09   2.57 ± 0.08   2.53 ± 0.31    3.38 ± 0.13   3.36 ± 0.14   3.36 ± 0.44    1.58 ± 0.25   1.51 ± 0.27   1.44 ± 0.35
Table 4: Multi-label classification results for BlogCatalog, POS and PPI in the inductive setting for Tr = 0.1. The upper and lower parts of the table list micro and macro F1 scores, respectively.

Micro-F1
                   BlogCatalog                     POS                             PPI
Removed Nodes [%]  20 (γ=10)      40 (γ=5)        20 (γ=10)      40 (γ=10)       20 (γ=10)      40 (γ=10)
EP-B               29.22 ± 0.95   27.30 ± 1.33    43.23 ± 1.44   42.12 ± 0.78    16.63 ± 0.98   14.87 ± 1.04
DeepWalk-I         27.84 ± 1.37   27.14 ± 0.99    40.92 ± 1.11   41.02 ± 0.70    15.55 ± 1.06   13.99 ± 1.18
LINE-I             19.15 ± 1.30   19.96 ± 2.44    40.34 ± 1.72   40.08 ± 1.64    14.89 ± 1.16   13.55 ± 0.90
wvRN               19.36 ± 0.59   19.07 ± 1.53    23.35 ± 0.66   27.91 ± 0.53     8.83 ± 0.91    9.41 ± 0.94
Majority           16.84 ± 0.68   16.81 ± 0.55    40.43 ± 0.86   40.59 ± 0.55     6.09 ± 0.40    6.39 ± 0.61

Macro-F1
EP-B               12.12 ± 0.75   11.24 ± 0.89     5.47 ± 0.80    5.16 ± 0.49    11.55 ± 0.90   10.38 ± 0.90
DeepWalk-I         11.96 ± 0.88   10.91 ± 0.95     4.54 ± 0.32    4.46 ± 0.57    10.52 ± 0.56    9.69 ± 1.14
LINE-I              6.64 ± 0.49    6.54 ± 1.87     4.67 ± 0.46    4.24 ± 0.52     9.86 ± 1.07    9.15 ± 0.74
wvRN                9.45 ± 0.65    9.18 ± 0.62     3.74 ± 0.64    3.87 ± 0.44     6.90 ± 1.02    6.81 ± 0.89
Majority            2.50 ± 0.18    2.59 ± 0.19     3.35 ± 0.24    3.27 ± 0.15     1.54 ± 0.31    1.55 ± 0.26
We did not simply copy results from previous work but used the authors' code to run all experiments again. For DeepWalk we used the implementation provided by the authors of node2vec (setting p = 1.0 and q = 1.0). We also used the other hyperparameter values for DeepWalk reported in the node2vec paper to ensure a fair comparison. We did 10 runs for each method in each of the experimental set-ups described in this section, and computed the mean and standard deviation of the corresponding evaluation metrics. We used the same sets of training, validation and test data for each method. All methods were evaluated in the transductive and inductive setting. The transductive setting is the setting where all nodes of the input graph are present during training. In the inductive setting, a certain percentage of the nodes are not part of the graph during unsupervised learning. Instead, these removed nodes are added after the training has concluded. The results computed for the nodes not present during unsupervised training reflect the methods' ability to incorporate newly added nodes without retraining the model.
For the graphs without attributes (BlogCatalog, PPI and POS) we follow the exact same experimental procedure as in previous work [42, 34, 15]. First, the node embeddings were computed in an unsupervised fashion. Second, we sampled a fraction Tr of nodes uniformly at random and used their embeddings and class labels as training data for a logistic regression classifier. The embeddings and class labels of the remaining nodes were used as test data. EP-B's margin hyperparameter γ was chosen by 3-fold cross-validation for Tr = 0.1 once. The resulting margin γ was used for the same data set and for all other values of Tr. For each method, we use 3-fold cross-validation to determine the L2 regularization parameter for the logistic regression classifier from the values [0.01, 0.1, 0.5, 1, 5, 10]. We did this for each value of Tr and for the F1 macro and F1 micro scores separately. This proved to be important, since the L2 regularization had a considerable impact on the performance of the methods.
For the graphs with attributes (Cora, Citeseer, Pubmed) we follow the same experimental procedure as in previous work [45]. We sample 20 nodes uniformly at random for each class as training data, 1000 nodes as test data, and a different 1000 nodes as validation data. In the transductive setting, unsupervised training was performed on the entire graph. In the inductive setting, the 1000 test nodes were removed from the graph before training. The hyperparameter values of GCN for these same data sets in the transductive setting are reported in [18]; we used these values for both the transductive and inductive setting. For EP-B, LINE and DeepWalk, the learned node embeddings for the 20 nodes per class label were fed to a one-vs-rest logistic regression classifier with L2 regularization.
Table 5: Classification accuracy for Cora, Citeseer, and Pubmed. (Left) The upper and lower parts of the table list the results for the transductive and inductive setting, respectively. (Right) Results for the transductive setting where the directionality of the edges is taken into account.

Transductive
Method        Cora (γ=20)    Citeseer (γ=10)  Pubmed (γ=1)
EP-B          78.05 ± 1.49   71.01 ± 1.35     79.56 ± 2.10
DW+BOW        76.15 ± 2.06   61.87 ± 2.30     77.82 ± 2.19
Planetoid-T   71.90 ± 5.33   58.58 ± 6.35     74.49 ± 4.95
GCN           79.59 ± 2.02   69.21 ± 1.25     77.32 ± 2.66
DeepWalk      71.11 ± 2.70   47.60 ± 2.34     73.49 ± 3.00
BOWFeat       58.63 ± 0.68   58.07 ± 1.72     70.49 ± 2.89

Inductive
Method        Cora (γ=5)     Citeseer (γ=5)   Pubmed (γ=1)
EP-B          73.09 ± 1.75   68.61 ± 1.69     79.94 ± 2.30
DW-I+BOW      68.35 ± 1.70   59.47 ± 2.48     74.87 ± 1.23
Planetoid-I   64.80 ± 3.70   61.97 ± 3.82     75.73 ± 4.21
GCN-I         67.76 ± 2.11   63.40 ± 0.98     73.47 ± 2.48
BOWFeat       58.63 ± 0.68   58.07 ± 1.72     70.49 ± 2.89

Directed edges (transductive)
Method        Cora (γ=20)    Citeseer (γ=5)   Pubmed (γ=1)
EP-B          77.31 ± 1.43   70.21 ± 1.17     78.77 ± 2.06
DeepWalk      14.82 ± 2.15   15.79 ± 3.58     32.82 ± 2.12
We chose the best value for EP-B's margins and the L2 regularizer on the validation set from the values [0.01, 0.1, 0.5, 1, 5, 10]. The same was done for the baselines DW+BOW and BOWFeat. Since Planetoid jointly optimizes an unsupervised and a supervised loss, we applied the learned models directly to classify the nodes. The authors of Planetoid did not report the number of learning iterations, so we ensured the training had converged. This was the case after 5000, 5000, and 20000 training steps for Cora, Citeseer, and Pubmed, respectively. For EP-B we used Adam [17] to learn the parameters in a mini-batch setting with a learning rate of 0.001. A single learning epoch iterates through all nodes of the input graph and we fixed the number of epochs to 200 and the mini-batch size to 64. In all cases, the parameters were initialized following [14] and the learning always converged. EP was implemented with the Theano [4] wrapper Keras [9]. We used the logistic regression classifier from LibLinear [10]. All experiments were run on commodity hardware with 128GB RAM, a single 2.8 GHz CPU, and a TitanX GPU.
5.2 Results
The results for BlogCatalog, POS and PPI in the transductive setting are listed in Table 3. The best results are always indicated in bold. We observe that EP-B tends to have the best F1 scores, with the additional aforementioned advantage of fewer parameters and hyperparameters to tune. Even though we use the hyperparameter values reported in node2vec, we do not observe significant differences to DeepWalk. This is contrary to earlier findings [15]. We conjecture that validating the L2 regularization of the logistic regression classifier is crucial and might not have been performed in some earlier work. The F1 scores of EP-B, DeepWalk, LINE, and node2vec are significantly higher than those of the baselines wvRN and Majority. The results for the same data sets in the inductive setting are listed in Table 4 for different percentages of nodes removed before unsupervised training. EP reconstructs label embeddings from the embeddings of labels of neighboring nodes. Hence, with EP-B we can directly use the concatenation of the reconstructed embeddings $\tilde{\mathbf{h}}_i(v)$ as the node embedding for each of the nodes v that were not part of the graph during training; a sketch of this read-out is given below. For DeepWalk and LINE we computed the embeddings of those nodes that were removed during training by averaging the embeddings of neighboring nodes; we indicate this by the suffix -I. EP-B outperforms all these methods in the inductive setting.
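A sketch of the inductive read-out just described, under our own naming: a node added after training gets, per label type, the average embedding of its neighbors' labels, and the averages are concatenated as in Equation (4).

```python
import numpy as np

def inductive_embedding(v, neighbors, labels_by_type, emb):
    """Embed a node v unseen during training from its neighbors' label embeddings."""
    parts = []
    for labels in labels_by_type:           # one label dict per label type i
        ls = [l for u in neighbors[v] for l in labels[u]]
        parts.append(sum(emb[l] for l in ls) / len(ls))
    return np.concatenate(parts)
```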
The results for the data sets Cora, Citeseer and Pubmed are listed in Table 5. Since these data sets have bags-of-words associated with nodes, we include the baseline method DW+BOW. DW+BOW concatenates the embedding of a node learned by DeepWalk with a vector that encodes the bag of words of the node. Planetoid-T and Planetoid-I are the transductive and inductive formulations of Planetoid [45]. GCN-I is an inductive variant of GCN [18] where edges from training to test nodes are removed from the graph but those from test nodes to training nodes are not. Contrary to other methods, EP-B's F1 scores in the transductive and inductive settings are very similar, demonstrating its suitability for the inductive setting. DeepWalk cannot make use of the word labels but we included it in the evaluation to investigate to what extent the word labels improve the performance of the other methods. The baseline BOWFeat trains a logistic regression classifier on the binary vectors encoding the bag of words of each node. EP-B significantly outperforms all existing approaches in both the transductive and inductive setting on all three data sets with one exception: for the transductive setting on Cora, GCN achieves a higher accuracy.
Figure 4: (a) The plot visualizes embeddings for the Cora data set learned from node identity labels only (left), word labels only (center), and from the combination of the two (right). The Silhouette score is, from left to right, 0.008, 0.107 and 0.158. (b) Average batch loss vs. number of epochs for different values of the parameter κ (κ = 1, 5, 50, and deg_max = 3992) for the BlogCatalog data set.
Both Planetoid-T and DW+BOW do not take full advantage of the information given by the bag of words, since the encoding of the bag of words is only exposed to the respective models for nodes with class labels and, therefore, only for a small fraction of the nodes in the graph. This could also explain Planetoid-T's high standard deviation, since some nodes might be associated with words that occur in the test data but which might not have been encountered during training. This would lead to misclassifications of these nodes.
Figure 4 depicts a visualization of the learned embeddings for the Cora citation network, obtained by applying t-SNE [26] to the 128-dimensional embeddings generated by EP-B. Both qualitatively and quantitatively, as demonstrated by the Silhouette score [37] that measures clustering quality, it shows EP-B's ability to learn and combine embeddings of several label types.
Up until now, we did not take into account the direction of the edges, that is, we treated all graphs as undirected. Citation networks, however, are intrinsically directed. The right part of Table 5 shows the performance of EP-B and DeepWalk when the edge directions are considered. For EP this means label representations are only sent along the directed edges. For DeepWalk this means that the generated random walks are directed walks. While we observe a significant performance deterioration for DeepWalk, the accuracy of EP-B does not change significantly. This demonstrates that EP is also applicable when edge directions are taken into account.
For densely connected graphs with a high average node degree, it is beneficial to limit the number of neighbors that send label representations in each learning step. This can be accomplished by sampling a subset of size at most κ from the set of all neighbors and sending messages only from the sampled nodes. We evaluated the impact of this strategy by varying the parameter κ in Figure 4. The loss is significantly higher for smaller values of κ. For κ = 50, however, the average loss is almost identical to the case where all neighbors send messages, while reducing the training time per epoch by an order of magnitude (from 20s per epoch to less than 1s per epoch).
6 Conclusion and Future Work
Embedding Propagation (EP) is an unsupervised machine learning framework for graph-structured data. It learns label and node representations by exchanging messages between nodes. It supports arbitrary label types such as node identities, text, and movie genres, and generalizes several existing approaches to graph representation learning. We have shown that EP-B, a simple instance of EP, is competitive with and often outperforms state-of-the-art methods while having fewer parameters and/or hyperparameters. We believe that EP's crucial advantage over existing methods is its ability to learn label type representations and to combine these label type representations into a joint vertex embedding.

Directions of future research include the combination of EP with multitask learning, that is, learning the embeddings of labels and nodes guided by both an unsupervised loss and a supervised loss defined with respect to different tasks; a variant of EP that incorporates image and sequence data; and the integration of EP with an existing distributed graph processing framework. One might also want to investigate the application of the EP framework to multi-relational graphs.
References
[1] L. Akoglu, H. Tong, and D. Koutra. Graph based anomaly detection and description: a survey. Data Mining and Knowledge Discovery, 29(3):626–688, 2015.
[2] L. B. Almeida. Artificial Neural Networks, chapter A Learning Rule for Asynchronous Perceptrons with Feedback in a Combinatorial Environment, pages 102–111. 1990.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585–591, 2001.
[4] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: A CPU and GPU math compiler in Python. In Proc. 9th Python in Science Conf., pages 1–7, 2010.
[5] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pages 2787–2795, 2013.
[6] B.-J. Breitkreutz, C. Stark, T. Reguly, L. Boucher, A. Breitkreutz, M. Livstone, R. Oughtred, D. H. Lackner, J. Bähler, V. Wood, et al. The BioGRID interaction database: 2008 update. Nucleic Acids Research, 36(suppl 1):D637–D640, 2008.
[7] J. Bromley, I. Guyon, Y. LeCun, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. In Neural Information Processing Systems, 1994.
[8] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Comput. Surv., 41(3):15:1–15:58, 2009.
[9] F. Chollet. Keras. URL https://keras.io, 2016.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9(Aug):1871–1874, 2008.
[11] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013.
[12] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning (Adaptive Computation and Machine Learning). The MIT Press, 2007.
[13] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. arXiv preprint arXiv:1704.01212, 2017.
[14] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, volume 9, pages 249–256, 2010.
[15] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864, 2016.
[16] K. Kersting, M. Mladenov, R. Garnett, and M. Grohe. Power iterated color refinement. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[18] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[20] J. B. Kruskal and M. Wish. Multidimensional Scaling. Sage Publications, Beverly Hills, California, 1978.
[21] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
[22] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol., 58(7):1019–1031, 2007.
[23] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. GraphLab: A new parallel framework for machine learning. In Conference on Uncertainty in Artificial Intelligence (UAI), July 2010.
[24] L. Lü and T. Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150–1170, 2011.
[25] U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[26] L. v. d. Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.
[27] S. A. Macskassy and F. Provost. A simple relational classifier. Technical report, DTIC Document, 2003.
[28] M. Mahoney. Large text compression benchmark. URL: http://www.mattmahoney.net/text/text.html, 2009.
[29] G. Malewicz, M. H. Austern, A. J. Bik, J. C. Dehnert, I. Horn, N. Leiser, and G. Czajkowski. Pregel: A system for large-scale graph processing. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of Data, pages 135–146, 2010.
[30] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[31] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning. In Proceedings of the 28th International Conference on Machine Learning, pages 689–696, 2011.
[32] M. Nickel, K. Murphy, V. Tresp, and E. Gabrilovich. A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1):11–33, 2016.
[33] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: bringing order to the web. 1999.
[34] B. Perozzi, R. Al-Rfou, and S. Skiena. DeepWalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 701–710, 2014.
[35] F. J. Pineda. Generalization of back-propagation to recurrent neural networks. Phys. Rev. Lett., 59:2229–2232, 1987.
[36] L. D. Raedt, K. Kersting, S. Natarajan, and D. Poole. Statistical relational artificial intelligence: Logic, probability, and computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 10(2):1–189, 2016.
[37] P. J. Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987.
[38] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[39] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[40] P. Sen, G. M. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–106, 2008.
[41] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. LINE: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067–1077. ACM, 2015.
[42] L. Tang and H. Liu. Relational learning via latent social dimensions. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 817–826. ACM, 2009.
[43] J. B. Tenenbaum, V. De Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[44] R. S. Xin, J. E. Gonzalez, M. J. Franklin, and I. Stoica. GraphX: A resilient distributed graph system on Spark. In First International Workshop on Graph Data Management Experiences and Systems, pages 2:1–2:6, 2013.
[45] Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on Machine Learning, pages 40–48, 2016.
[46] R. Zafarani and H. Liu. Social computing data repository at ASU. School of Computing, Informatics and Decision Systems Engineering, Arizona State University, 2009.
[47] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, 2002.
Efficient Modeling of Latent Information in
Supervised Learning using Gaussian Processes
Zhenwen Dai*§
zhenwend@amazon.com

Mauricio A. Álvarez†
mauricio.alvarez@sheffield.ac.uk

Neil D. Lawrence†§
lawrennd@amazon.com
Abstract
Often in machine learning, data are collected as a combination of multiple conditions, e.g., the voice recordings of multiple persons, each labeled with an ID. How could we build a model that captures the latent information related to these conditions and generalizes to a new one with only a few data points? We present a new model called Latent Variable Multiple Output Gaussian Processes (LVMOGP) that allows us to jointly model multiple conditions for regression and to generalize to a new condition with a few data points at test time. LVMOGP infers the posteriors of Gaussian processes together with a latent space representing the information about different conditions. We derive an efficient variational inference method for LVMOGP whose computational complexity is as low as that of sparse Gaussian processes. We
show that LVMOGP significantly outperforms related Gaussian process methods
on various tasks with both synthetic and real data.
1 Introduction
Machine learning has been very successful in providing tools for learning a function mapping from an
input to an output, which is typically referred to as supervised learning. One of the most prominent examples currently is deep neural networks (DNNs), which empower a wide range of applications such as computer vision, speech recognition, natural language processing and machine translation [Krizhevsky et al., 2012, Sutskever et al., 2014]. The modeling in terms of a function mapping assumes a one/many-to-one mapping between input and output. In other words, ideally the input should contain sufficient information to uniquely determine the output, apart from some sensory noise.
Unfortunately, in most cases, this assumption does not hold. We often collect data as a combination
of multiple scenarios, e.g., the voice recording of multiple persons, the images taken from different
models of cameras. We only have some labels to identify these scenarios in our data, e.g., we can
have the names of the speakers and the specifications of the used cameras. These labels themselves
do not represent the full information about these scenarios. A question therefore is how to use
these labels in a supervised learning task. A common practice in this case would be to ignore the difference of scenarios, but this will result in low modeling accuracy, because all the variations related to the different scenarios are treated as observation noise, as different scenarios are no longer distinguishable in the inputs. Alternatively, we can either model each scenario separately, which often suffers from too little training data, or use a one-hot encoding to represent each scenario. In both of these cases, generalization/transfer to a new scenario is not possible.
* Inferentia Limited.
† Dept. of Computer Science, University of Sheffield, Sheffield, UK.
§ Amazon.com. The scientific idea and a preliminary version of code were developed prior to joining Amazon.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: five panels (a)-(e); panels (b)-(d) plot braking distance against initial speed with mean, data, and confidence intervals, and panel (e) plots the latent variable against 1/μ.]

Figure 1: A toy example about modeling the braking distance of a car. (a) illustrates a car with the initial speed v0 on a flat road that starts to brake due to the friction force Fr. (b) the results of a GP regression on all the data from 10 different road and tyre conditions. (c) the top plot visualizes the fitted model with respect to one of the conditions in the training data and the bottom plot shows the prediction of the trained model for a new condition with only one observation; the model treats every condition independently. (d) LVMOGP captures the correlation among different conditions and the plot shows the curve with respect to one of the conditions. By using the information from all the conditions, it is able to predict in a new condition with only one observation. (e) the learned latent variable with uncertainty corresponds to a linear transformation of the inverse of the true friction coefficient. The blue error bars denote the variational posterior of the latent variables q(H).
In this paper, we address this problem by proposing a probabilistic model that can jointly consider different scenarios and enables efficient generalization to new scenarios. Our model is based on Gaussian processes (GPs) augmented with additional latent variables. The model is able to represent the data variance related to different scenarios in the latent space, where each location corresponds to a different scenario. When encountering a new scenario, the model is able to efficiently infer the posterior distribution of the location of the new scenario in the latent space. This allows the model to efficiently and robustly generalize to a new scenario. An efficient Bayesian inference method for the proposed model is developed by deriving a closed-form variational lower bound for the model. Additionally, by assuming a Kronecker product structure in the variational posterior, the derived stochastic variational inference method achieves the same computational complexity as a typical sparse Gaussian process model with independent output dimensions.
2 Modeling Latent Information

2.1 A Toy Problem
Let us consider a toy example where we wish to model the braking distance of a car in a completely data-driven way. Assuming that we do not know the physics of the car, we could treat it as a non-parametric regression problem, where the input is the initial speed read from the speedometer and the output is the distance from the location where the car starts to brake to the point where the car is fully stopped. We know that the braking distance depends on the friction coefficient, which varies according to the condition of the tyres and road. As the friction coefficient is difficult to measure directly, we can conduct experiments with a set of different tyre and road conditions, each associated with a condition id, e.g., ten different conditions, each with five experiments at different initial speeds. How can we model the relation between the speed and distance in a data-driven way, so that we can extrapolate to a new condition with only one experiment?
the speed and the distance can be modeled as
$$y = f(x) + \epsilon, \qquad f \sim \mathcal{GP}, \qquad (1)$$

where $\epsilon$ represents measurement noise, and the function $f$ is modeled as a Gaussian process (GP).
Since we do not know the parametric form of the function, we model it non-parametrically. The drawback of this model is that the accuracy is very low, as all the variations caused by different conditions are modeled as measurement noise (see Figure 1b). Alternatively, we can model each condition separately, i.e., $f_d \sim \mathcal{GP}$, $d = 1, \ldots, D$, where $D$ denotes the number of considered conditions. In this case, the relation between speed and distance for each condition can be modeled cleanly if there are sufficient data in that condition. However, such modeling is not able to generalize to new conditions (see Figure 1c), because it does not consider the correlations among conditions.
Ideally, we wish to model the relation together with the latent information associated with different
conditions, i.e., the friction coefficient in this example. A probabilistic approach is to assume a latent
variable. With a latent variable hd that represents the latent information associated with the condition
d, the relation between speed and distance for the condition d is, then, modeled as
$$y = f(x, \mathbf{h}_d) + \epsilon, \qquad f \sim \mathcal{GP}, \qquad \mathbf{h}_d \sim \mathcal{N}(\mathbf{0}, \mathbf{I}). \qquad (2)$$
Note that the function $f$ is shared across all the conditions like in (1), while for each condition a different latent variable $\mathbf{h}_d$ is inferred. As all the conditions are jointly modeled, the correlations among different conditions are correctly captured, which enables generalization to new conditions (see Figure 1d for the results of the proposed model).
This model enables us to capture the relation between the speed and distance as well as the latent information. The latent information is learned into a latent space, where each condition is encoded as a point in the latent space. Figure 1e shows how the model "discovers" the concept of friction coefficient by learning the latent variable as a linear transformation of the inverse of the true friction coefficients. With this latent representation, we are able to infer the posterior distribution of a new condition given only one observation, and it gives a reasonable prediction for the speed-distance relation with uncertainty.
2.2 Latent Variable Multiple Output Gaussian Processes
In general, we denote the set of inputs as $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_N]^\top$, which corresponds to the speed in the toy example, and each input $\mathbf{x}_n$ can be considered in $D$ different conditions in the training data. For simplicity, we assume that, given an input $\mathbf{x}_n$, the outputs associated with all the $D$ conditions are observed, denoted as $\mathbf{y}_n = [y_{n1}, \ldots, y_{nD}]^\top$ and $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_N]^\top$. The latent variables representing different conditions are denoted as $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_D]^\top$, $\mathbf{h}_d \in \mathbb{R}^{Q_H}$. The dimensionality of the latent space $Q_H$ needs to be pre-specified, like in other latent variable models. The more general case where each condition has a different set of inputs and outputs will be discussed in Section 4.
Unfortunately, inference of the model in (2) is challenging, because the integral for computing the marginal likelihood, $p(\mathbf{Y}|\mathbf{X}) = \int p(\mathbf{Y}|\mathbf{X}, \mathbf{H})\, p(\mathbf{H})\, \mathrm{d}\mathbf{H}$, is analytically intractable. Apart from the analytical intractability, the computation of the likelihood $p(\mathbf{Y}|\mathbf{X}, \mathbf{H})$ is also very expensive, because of its cubic complexity $O((ND)^3)$. To enable efficient inference, we propose a new model which assumes the covariance matrix can be decomposed as a Kronecker product of the covariance matrix of the latent variables $\mathbf{K}^H$ and the covariance matrix of the inputs $\mathbf{K}^X$. We call the new model Latent Variable Multiple Output Gaussian Processes (LVMOGP) due to its connection with multiple output Gaussian processes. The probabilistic distributions of LVMOGP are defined as

$$p(\mathbf{Y}_: | \mathbf{F}_:) = \mathcal{N}\!\left(\mathbf{Y}_: \,\middle|\, \mathbf{F}_:, \sigma^2 \mathbf{I}\right), \qquad p(\mathbf{F}_: | \mathbf{X}, \mathbf{H}) = \mathcal{N}\!\left(\mathbf{F}_: \,\middle|\, \mathbf{0}, \mathbf{K}^H \otimes \mathbf{K}^X\right), \qquad (3)$$

where the latent variables $\mathbf{H}$ have unit Gaussian priors, $\mathbf{h}_d \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\mathbf{F} = [\mathbf{f}_1, \ldots, \mathbf{f}_N]^\top$, $\mathbf{f}_n \in \mathbb{R}^D$ denote the noise-free observations, the notation ":" represents the vectorization of a matrix, e.g., $\mathbf{Y}_: = \mathrm{vec}(\mathbf{Y})$, and $\otimes$ denotes the Kronecker product. $\mathbf{K}^X$ denotes the covariance matrix computed on the inputs $\mathbf{X}$ with the kernel function $k_X$ and $\mathbf{K}^H$ denotes the covariance matrix computed on the latent variables $\mathbf{H}$ with the kernel function $k_H$. Note that the definition of LVMOGP only introduces a Kronecker product structure in the kernel, which does not directly avoid the intractability of its marginal likelihood. In the next section, we will show how the Kronecker product structure can be used for deriving an efficient variational lower bound.
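As a sanity check of the prior in Equation (3), the following NumPy sketch samples from LVMOGP's Kronecker-structured GP prior; the RBF kernel, the sizes, and all variable names are our own illustrative choices, not the authors' code.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

rng = np.random.default_rng(1)
N, D, Q_H = 20, 5, 2
X = rng.uniform(0, 10, size=(N, 1))   # inputs, e.g. initial speeds
H = rng.normal(size=(D, Q_H))         # one latent vector per condition

K = np.kron(rbf(H, H), rbf(X, X))     # K^H (Kronecker) K^X, shape (N*D, N*D)
F = rng.multivariate_normal(np.zeros(N * D), K + 1e-8 * np.eye(N * D))
Y = F + 0.1 * rng.normal(size=N * D)  # noisy observations with sigma = 0.1
```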
3 Scalable Variational Inference
The exact inference of LVMOGP in (3) is analytically intractable due to the integral over the latent variables in the marginal likelihood. Titsias and Lawrence [2010] develop a variational inference method by deriving a closed-form variational lower bound for a Gaussian process model with latent variables, known as the Bayesian Gaussian process latent variable model. Their method is applicable to a broad family of models including the one in (2), but is not efficient for LVMOGP because it has cubic complexity with respect to $D$.⁴ In this section, we derive a variational lower bound that has the same complexity as a sparse Gaussian process assuming independent outputs, by exploiting the Kronecker product structure of the kernel of LVMOGP.
We augment the model with an auxiliary variable, known as the inducing variable $\mathbf{U}$, following the same Gaussian process prior $p(\mathbf{U}_:) = \mathcal{N}(\mathbf{U}_: | \mathbf{0}, \mathbf{K}_{uu})$. The covariance matrix $\mathbf{K}_{uu}$ is defined as $\mathbf{K}_{uu} = \mathbf{K}^H_{uu} \otimes \mathbf{K}^X_{uu}$, following the assumption of the Kronecker product decomposition in (3), where $\mathbf{K}^H_{uu}$ is computed on a set of inducing inputs $\mathbf{Z}^H = [\mathbf{z}^H_1, \ldots, \mathbf{z}^H_{M_H}]^\top$, $\mathbf{z}^H_m \in \mathbb{R}^{Q_H}$, with the kernel function $k_H$. Similarly, $\mathbf{K}^X_{uu}$ is computed on another set of inducing inputs $\mathbf{Z}^X = [\mathbf{z}^X_1, \ldots, \mathbf{z}^X_{M_X}]^\top$, $\mathbf{z}^X_m \in \mathbb{R}^{Q_X}$, with the kernel function $k_X$, where $\mathbf{z}^X_m$ has the same dimensionality as the inputs $\mathbf{x}_n$. We construct the conditional distribution of $\mathbf{F}$ as:

$$p(\mathbf{F} | \mathbf{U}, \mathbf{Z}^X, \mathbf{Z}^H, \mathbf{X}, \mathbf{H}) = \mathcal{N}\!\left(\mathbf{F}_: \,\middle|\, \mathbf{K}_{fu} \mathbf{K}_{uu}^{-1} \mathbf{U}_:, \; \mathbf{K}_{ff} - \mathbf{K}_{fu} \mathbf{K}_{uu}^{-1} \mathbf{K}_{fu}^\top \right), \qquad (4)$$
where $\mathbf{K}_{fu} = \mathbf{K}^H_{fu} \otimes \mathbf{K}^X_{fu}$ and $\mathbf{K}_{ff} = \mathbf{K}^H_{ff} \otimes \mathbf{K}^X_{ff}$. $\mathbf{K}^X_{fu}$ is the cross-covariance computed between $\mathbf{X}$ and $\mathbf{Z}^X$ with $k_X$ and $\mathbf{K}^H_{fu}$ is the cross-covariance computed between $\mathbf{H}$ and $\mathbf{Z}^H$ with $k_H$. $\mathbf{K}^X_{ff}$ is the covariance matrix computed on $\mathbf{X}$ with $k_X$ and $\mathbf{K}^H_{ff}$ is computed on $\mathbf{H}$ with $k_H$. Note that the prior distribution of $\mathbf{F}$ after marginalizing $\mathbf{U}$ is not changed by the augmentation, because $p(\mathbf{F}|\mathbf{X}, \mathbf{H}) = \int p(\mathbf{F} | \mathbf{U}, \mathbf{Z}^X, \mathbf{Z}^H, \mathbf{X}, \mathbf{H})\, p(\mathbf{U} | \mathbf{Z}^X, \mathbf{Z}^H)\, \mathrm{d}\mathbf{U}$. Assuming variational posteriors $q(\mathbf{F}|\mathbf{U}) = p(\mathbf{F}|\mathbf{U}, \mathbf{X}, \mathbf{H})$ and $q(\mathbf{H})$, the lower bound of the log marginal likelihood can be derived as

$$\log p(\mathbf{Y}|\mathbf{X}) \geq \mathcal{F} - \mathrm{KL}\!\left(q(\mathbf{U}) \,\|\, p(\mathbf{U})\right) - \mathrm{KL}\!\left(q(\mathbf{H}) \,\|\, p(\mathbf{H})\right), \qquad (5)$$
where $\mathcal{F} = \langle \log p(\mathbf{Y}_: | \mathbf{F}_:) \rangle_{p(\mathbf{F}|\mathbf{U},\mathbf{X},\mathbf{H})\, q(\mathbf{U})\, q(\mathbf{H})}$. It is known that the optimal posterior distribution of $q(\mathbf{U})$ is a Gaussian distribution [Titsias, 2009, Matthews et al., 2016]. With an explicit Gaussian definition of $q(\mathbf{U}) = \mathcal{N}(\mathbf{U} | \mathbf{M}, \mathbf{\Sigma}^U)$, the integral in $\mathcal{F}$ has a closed-form solution:

$$\mathcal{F} = -\frac{ND}{2} \log 2\pi\sigma^2 - \frac{1}{2\sigma^2} \mathbf{Y}_:^\top \mathbf{Y}_: - \frac{1}{2\sigma^2} \mathrm{Tr}\!\left( \mathbf{K}_{uu}^{-1} \mathbf{\Phi} \mathbf{K}_{uu}^{-1} \left( \mathbf{M}_: \mathbf{M}_:^\top + \mathbf{\Sigma}^U \right) \right) + \frac{1}{\sigma^2} \mathbf{Y}_:^\top \mathbf{\Psi} \mathbf{K}_{uu}^{-1} \mathbf{M}_: - \frac{1}{2\sigma^2} \left( \psi - \mathrm{tr}\!\left( \mathbf{K}_{uu}^{-1} \mathbf{\Phi} \right) \right), \qquad (6)$$

where $\psi = \langle \mathrm{tr}(\mathbf{K}_{ff}) \rangle_{q(\mathbf{H})}$, $\mathbf{\Psi} = \langle \mathbf{K}_{fu} \rangle_{q(\mathbf{H})}$ and $\mathbf{\Phi} = \langle \mathbf{K}_{fu}^\top \mathbf{K}_{fu} \rangle_{q(\mathbf{H})}$.⁵ Note that the optimal variational posterior of $q(\mathbf{U})$ with respect to the lower bound can be computed in closed form. However, the computational complexity of the closed-form solution is $O(N D M_X^2 M_H^2)$.
3.1 More Efficient Formulation

Note that the lower bound in (5)-(6) does not take advantage of the Kronecker product decomposition. The computational efficiency can be improved by avoiding the direct computation of the Kronecker product of the covariance matrices. Firstly, we reformulate the expectations of the covariance matrices $\psi$, $\mathbf{\Psi}$ and $\mathbf{\Phi}$, so that the expectation computation can be decomposed,

$$\psi = \psi^H \, \mathrm{tr}\!\left( \mathbf{K}^X_{ff} \right), \qquad \mathbf{\Psi} = \mathbf{\Psi}^H \otimes \mathbf{K}^X_{fu}, \qquad \mathbf{\Phi} = \mathbf{\Phi}^H \otimes \left( (\mathbf{K}^X_{fu})^\top \mathbf{K}^X_{fu} \right), \qquad (7)$$

where $\psi^H = \langle \mathrm{tr}(\mathbf{K}^H_{ff}) \rangle_{q(\mathbf{H})}$, $\mathbf{\Psi}^H = \langle \mathbf{K}^H_{fu} \rangle_{q(\mathbf{H})}$ and $\mathbf{\Phi}^H = \langle (\mathbf{K}^H_{fu})^\top \mathbf{K}^H_{fu} \rangle_{q(\mathbf{H})}$. Secondly, we assume a Kronecker product decomposition of the covariance matrix of $q(\mathbf{U})$, i.e., $\mathbf{\Sigma}^U = \mathbf{\Sigma}^H \otimes \mathbf{\Sigma}^X$. Although this decomposition restricts the covariance matrix representation, it dramatically reduces the number of variational parameters in the covariance matrix from $M_X^2 M_H^2$ to $M_X^2 + M_H^2$. Thanks to the above decomposition, the lower bound can be rearranged to speed up the computation,

⁴ Assume that the number of inducing points is proportional to D.
⁵ The expectation with respect to a matrix, $\langle \cdot \rangle_{q(\mathbf{H})}$, denotes the expectation with respect to every element of the matrix.
$$\begin{aligned}
\mathcal{F} = {}& -\frac{ND}{2} \log 2\pi\sigma^2 - \frac{1}{2\sigma^2} \mathbf{Y}_:^\top \mathbf{Y}_: \\
& - \frac{1}{2\sigma^2} \mathrm{tr}\!\left( \mathbf{M}^\top (\mathbf{K}^X_{uu})^{-1} \mathbf{\Phi}^X (\mathbf{K}^X_{uu})^{-1} \mathbf{M} \, (\mathbf{K}^H_{uu})^{-1} \mathbf{\Phi}^H (\mathbf{K}^H_{uu})^{-1} \right) \\
& - \frac{1}{2\sigma^2} \mathrm{tr}\!\left( (\mathbf{K}^H_{uu})^{-1} \mathbf{\Phi}^H (\mathbf{K}^H_{uu})^{-1} \mathbf{\Sigma}^H \right) \mathrm{tr}\!\left( (\mathbf{K}^X_{uu})^{-1} \mathbf{\Phi}^X (\mathbf{K}^X_{uu})^{-1} \mathbf{\Sigma}^X \right) \\
& + \frac{1}{\sigma^2} \mathbf{Y}_:^\top \left( \mathbf{\Psi}^X (\mathbf{K}^X_{uu})^{-1} \mathbf{M} (\mathbf{K}^H_{uu})^{-1} (\mathbf{\Psi}^H)^\top \right)_: - \frac{1}{2\sigma^2} \psi \\
& + \frac{1}{2\sigma^2} \mathrm{tr}\!\left( (\mathbf{K}^H_{uu})^{-1} \mathbf{\Phi}^H \right) \mathrm{tr}\!\left( (\mathbf{K}^X_{uu})^{-1} \mathbf{\Phi}^X \right),
\end{aligned} \qquad (8)$$

where $\mathbf{\Phi}^X = (\mathbf{K}^X_{fu})^\top \mathbf{K}^X_{fu}$ and $\mathbf{\Psi}^X = \mathbf{K}^X_{fu}$.
Similarly, the KL-divergence between $q(\mathbf{U})$ and $p(\mathbf{U})$ can also take advantage of the above decomposition:

$$\mathrm{KL}\!\left(q(\mathbf{U}) \,\|\, p(\mathbf{U})\right) = \frac{1}{2} \Bigg[ M_X \log \frac{|\mathbf{K}^H_{uu}|}{|\mathbf{\Sigma}^H|} + M_H \log \frac{|\mathbf{K}^X_{uu}|}{|\mathbf{\Sigma}^X|} + \mathrm{tr}\!\left( \mathbf{M}^\top (\mathbf{K}^X_{uu})^{-1} \mathbf{M} (\mathbf{K}^H_{uu})^{-1} \right) + \mathrm{tr}\!\left( (\mathbf{K}^H_{uu})^{-1} \mathbf{\Sigma}^H \right) \mathrm{tr}\!\left( (\mathbf{K}^X_{uu})^{-1} \mathbf{\Sigma}^X \right) - M_X M_H \Bigg]. \qquad (9)$$
As shown in the above equations, the direct computation of Kronecker products is completely avoided. Therefore, the computational complexity of the lower bound is reduced to $O(\max(N, M_H) \max(D, M_X) \max(M_X, M_H))$, which is comparable to the complexity of sparse GPs with independent observations, $O(N M \max(D, M))$. The new formulation is significantly more efficient than the formulation described in the previous section. This enables LVMOGP to be applicable to real-world scenarios. It is also straightforward to extend this lower bound to mini-batch learning as in Hensman et al. [2013], which allows further scaling up.
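A quick numerical check of the Kronecker identity behind this speed-up, $\mathrm{tr}((\mathbf{A} \otimes \mathbf{B})(\mathbf{C} \otimes \mathbf{D})) = \mathrm{tr}(\mathbf{A}\mathbf{C})\, \mathrm{tr}(\mathbf{B}\mathbf{D})$, which is what lets the bound avoid forming any Kronecker product explicitly; this is an illustration only, not part of the algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
A, C = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
B, D = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

lhs = np.trace(np.kron(A, B) @ np.kron(C, D))   # forms the Kronecker products
rhs = np.trace(A @ C) * np.trace(B @ D)         # factorized, never forms them
assert np.isclose(lhs, rhs)
```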
3.2 Prediction
After estimating the model parameters and variational posterior distributions, the trained model is typically used to make predictions. In our model, a prediction can be about a new input $\mathbf{x}^*$ as well as a new scenario, which corresponds to a new value $\mathbf{h}^*$ of the hidden variable. Given both a set of new inputs $\mathbf{X}^*$ and a set of new scenarios $\mathbf{H}^*$, the prediction of the noiseless observations $\mathbf{F}^*$ can be computed in closed form,

$$q(\mathbf{F}^*_: | \mathbf{X}^*, \mathbf{H}^*) = \int p(\mathbf{F}^*_: | \mathbf{U}_:, \mathbf{X}^*, \mathbf{H}^*)\, q(\mathbf{U}_:)\, \mathrm{d}\mathbf{U}_: = \mathcal{N}\!\left( \mathbf{F}^*_: \,\middle|\, \mathbf{K}_{f^*u} \mathbf{K}_{uu}^{-1} \mathbf{M}_:, \; \mathbf{K}_{f^*f^*} - \mathbf{K}_{f^*u} \mathbf{K}_{uu}^{-1} \mathbf{K}_{f^*u}^\top + \mathbf{K}_{f^*u} \mathbf{K}_{uu}^{-1} \mathbf{\Sigma}^U \mathbf{K}_{uu}^{-1} \mathbf{K}_{f^*u}^\top \right),$$

where $\mathbf{K}_{f^*f^*} = \mathbf{K}^H_{f^*f^*} \otimes \mathbf{K}^X_{f^*f^*}$ and $\mathbf{K}_{f^*u} = \mathbf{K}^H_{f^*u} \otimes \mathbf{K}^X_{f^*u}$. $\mathbf{K}^H_{f^*f^*}$ and $\mathbf{K}^H_{f^*u}$ are the covariance matrix computed on $\mathbf{H}^*$ and the cross-covariance matrix computed between $\mathbf{H}^*$ and $\mathbf{Z}^H$. Similarly, $\mathbf{K}^X_{f^*f^*}$ and $\mathbf{K}^X_{f^*u}$ are the covariance matrix computed on $\mathbf{X}^*$ and the cross-covariance matrix computed between $\mathbf{X}^*$ and $\mathbf{Z}^X$. For a regression problem, we are often more interested in predicting for an existing condition from the training data. As the posterior distributions of the existing conditions have already been estimated as $q(\mathbf{H})$, we can approximate the prediction by integrating the above prediction equation with $q(\mathbf{H})$: $q(\mathbf{F}^*_: | \mathbf{X}^*) = \int q(\mathbf{F}^*_: | \mathbf{X}^*, \mathbf{H})\, q(\mathbf{H})\, \mathrm{d}\mathbf{H}$. The above integration is intractable; however, as suggested by Titsias and Lawrence [2010], the first and second moments of $\mathbf{F}^*_:$ under $q(\mathbf{F}^*_: | \mathbf{X}^*)$ can be computed in closed form.
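A sketch of the predictive mean exploiting the Kronecker structure, mirroring the term $\mathbf{\Psi}^X (\mathbf{K}^X_{uu})^{-1} \mathbf{M} (\mathbf{K}^H_{uu})^{-1} (\mathbf{\Psi}^H)^\top$ in Equation (8); the vectorization convention must match the $\mathbf{K}^H \otimes \mathbf{K}^X$ ordering, and the function and argument names are ours, not the authors' implementation.

```python
import numpy as np

def predictive_mean(KX_fu, KX_uu, KH_fu, KH_uu, M):
    """Mean of F* as an (N*, D*) matrix.

    KX_fu: (N*, M_X) cross-covariance on inputs; KX_uu: (M_X, M_X);
    KH_fu: (D*, M_H) cross-covariance on latents; KH_uu: (M_H, M_H);
    M: (M_X, M_H) matrix of inducing means with M_: = vec(M).
    """
    A = np.linalg.solve(KX_uu, KX_fu.T).T   # KX_fu @ inv(KX_uu), via a solve
    B = np.linalg.solve(KH_uu, KH_fu.T).T   # KH_fu @ inv(KH_uu), via a solve
    return A @ M @ B.T                       # never forms the (N*D*) Kronecker matrix
```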
4 Missing Data
The model described in Section 2.2 assumes that for N different inputs, we observe them in all the
D different conditions. However, in real world problems, we often collect data at a different set of
inputs for each scenario, i.e., for each condition d, d = 1, . . . , D. Alternatively, we can view the
problem as having a large set of inputs and for each condition only the outputs associated with a
subset of the inputs being observed. We refer to this problem as missing data. For the condition d, we denote the inputs as X^{(d)} = [x_1^{(d)}, . . . , x_{N_d}^{(d)}]^⊤ and the outputs as Y_d = [y_{1d}, . . . , y_{N_d d}]^⊤, and optionally a different noise variance as σ_d². The proposed model can be extended to handle this case by reformulating F as
F = Σ_{d=1}^D [ − (N_d/2) log 2πσ_d² − (1/(2σ_d²)) Y_d^⊤ Y_d
  − (1/(2σ_d²)) Tr( K_uu^{−1} Φ_d K_uu^{−1} ( M_: M_:^⊤ + Σ^U ) )
  + (1/σ_d²) Y_d^⊤ Ψ_d K_uu^{−1} M_:
  − (1/(2σ_d²)) ( ψ_d − tr( K_uu^{−1} Φ_d ) ) ],   (10)
where Φ_d = Φ^H_d ⊗ ( (K^X_{f_d u})^⊤ K^X_{f_d u} ), Ψ_d = Ψ^H_d ⊗ K^X_{f_d u}, and ψ_d = ψ^H_d tr( K^X_{f_d f_d} ), in which Φ^H_d = ⟨ (K^H_{f_d u})^⊤ K^H_{f_d u} ⟩_{q(h_d)}, Ψ^H_d = ⟨ K^H_{f_d u} ⟩_{q(h_d)} and ψ^H_d = ⟨ tr K^H_{f_d f_d} ⟩_{q(h_d)}. The rest of the
lower bound remains unchanged because it does not depend on the inputs and outputs. Note that,
although it looks very similar to the bound in Section 3, the above lower bound is computationally
more expensive, because it involves the computation of a different set of ?d , ?d , ?d and the
corresponding part of the lower bound for each condition.
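A schematic sketch of how the per-condition terms of (10) could be accumulated is shown below. The statistics Φ_d, Ψ_d, ψ_d are assumed precomputed as defined above, all helper names and shapes are hypothetical, and the KL terms of the bound are omitted for brevity.

```python
import numpy as np

def lower_bound_missing(Y, Phi, Psi, psi, Kuu_inv, M, SigmaU, sigma2):
    """Data terms of bound (10), summed over conditions d.

    Y[d]: (N_d,) outputs; Phi[d]: (Mt, Mt); Psi[d]: (N_d, Mt);
    psi[d]: scalar; M: (Mt,) vector M_:; sigma2[d]: noise variance.
    """
    F = 0.0
    MMt = np.outer(M, M) + SigmaU              # M_: M_:^T + Sigma^U
    for d in range(len(Y)):
        Nd, s2 = Y[d].shape[0], sigma2[d]
        A = Kuu_inv @ Phi[d] @ Kuu_inv
        F += -0.5 * Nd * np.log(2 * np.pi * s2)
        F += -0.5 / s2 * (Y[d] @ Y[d])
        F += -0.5 / s2 * np.trace(A @ MMt)
        F += 1.0 / s2 * (Y[d] @ (Psi[d] @ (Kuu_inv @ M)))
        F += -0.5 / s2 * (psi[d] - np.trace(Kuu_inv @ Phi[d]))
    return F  # add KL(q(U)||p(U)) and KL(q(H)||p(H)) separately
```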
5
Related works
LVMOGP can be viewed as an extension of a multiple output Gaussian process. Multiple output
Gaussian processes have been thoroughly studied in Álvarez et al. [2012]. LVMOGP can be seen as
an intrinsic model of coregionalization [Goovaerts, 1997] or a multi-task Gaussian process [Bonilla
et al., 2008], if the coregionalization matrix B is replaced by the kernel K^H. By replacing the
coregionalization matrix with a kernel matrix, we endow the multiple output GP with the ability to
predict new outputs or tasks at test time, which is not possible if a finite matrix B is used at training
time. Also, by using a model for the coregionalization matrix in the form of a kernel function, we
reduce the number of hyperparameters necessary to fit the covariance between the different conditions,
reducing overfitting when fewer datapoints are available for training. Replacing the coregionalization
matrix by a kernel matrix has also been used in Qian et al. [2008] and more recently by Bussas et al.
[2017]. However, these works do not address the computational complexity problem and their models
cannot scale to large datasets. Furthermore, in our model, the different conditions h_d are treated as
latent variables, which are not observed, as opposed to these two models where we would need to
provide observed data to compute K^H.
Computational complexity in multi-output Gaussian processes has also been studied before for
convolved multiple output Gaussian processes [Álvarez and Lawrence, 2011] and for the intrinsic model of coregionalization [Stegle et al., 2011]. In Álvarez and Lawrence [2011], the idea of inducing inputs is also used and the computational complexity reduces to O(NDM²), where M refers to a generic number of inducing inputs. In Stegle et al. [2011], the covariances K^H and K^X are replaced by their respective eigenvalue decompositions and the computational complexity reduces to O(N³ + D³). Our method reduces the computational complexity to O(max(N, M_H) max(D, M_X) max(M_X, M_H)) when there are no missing data. Notice that if M_H = M_X = M, N > M and D > M, our method achieves a computational complexity of O(NDM), which is faster than the O(NDM²) of Álvarez and Lawrence [2011]. If N = D = M_H = M_X, our method achieves a computational complexity of O(N³), similar to Stegle et al. [2011]. Nonetheless, the usual case is that N ≫ M_X, improving the
computational complexity over Stegle et al. [2011]. An additional advantage of our method is that it
can easily be parallelized using mini-batches like in Hensman et al. [2013]. Note that we have also
provided expressions for dealing with missing data, a setup which is very common in our days, but
that has not been taken into account in previous formulations.
The idea of modeling latent information about different conditions jointly with the modeling of data
points is related to the style and content model by Tenenbaum and Freeman [2000], where they
explicitly model the style and content separation as a bilinear model for unsupervised learning.
6
Experiments
We evaluate the performance of the proposed model with both synthetic and real data.
Figure 2: The results on two synthetic datasets. (a) The performance of GP-ind, LMC and LVMOGP
evaluated on 20 randomly drawn datasets without missing data. (b) The performance evaluated on 20
randomly drawn datasets with missing data. (c) A comparison of the estimated functions by the three
methods on one of the synthetic datasets with missing data. The plots show the estimated functions
for one of the conditions with few training data. The red rectangles are the noisy training data and the
black crosses are the test data.
Synthetic Data. We compare the performance of the proposed method with GP with independent observations and the linear model of coregionalization (LMC) [Journel and Huijbregts, 1978, Goovaerts,
1997] on synthetic data, where the ground truth is known. We generated synthetic data by sampling
from a Gaussian process, as stated in (3), and assuming a two-dimensional space for the different
conditions. We first generated a dataset, where all the conditions of a set of inputs are observed. The
dataset contains 100 different uniformly sampled input locations (50 for training and 50 for testing),
where each corresponds to 40 different conditions. An observation noise with variance 0.3 is added
onto the training data. This dataset belongs to the case of no missing data, therefore, we can apply
LVMOGP with the inference method presented in Section 3. We assume a 2 dimensional latent
space and set MH = 30 and MX = 10. We compare LVMOGP with two other methods: GP with
independent output dimensions (GP-ind) and LMC (with a full rank coregionalization matrix). We
repeated the experiments on 20 randomly sampled datasets. The results are summarized in Figure
2a. The means and standard deviations of all the methods on 20 repeats are: GP-ind: 0.24 ± 0.02, LMC: 0.28 ± 0.11, LVMOGP: 0.20 ± 0.02. Note that, in this case, GP-ind performs quite well because the only gain from modeling different conditions jointly is the reduction of estimation variance from the
observation noise.
Then, we generated another dataset following the same setting, but where each condition had a
different set of inputs. Often, in real data problems, the number of available data in different
conditions is quite uneven. To generate a dataset with uneven numbers of training data in different
conditions, we group the conditions into 10 groups. Within each group, the numbers of training
data in four conditions are generated through a three-step stick breaking procedure with a uniform
prior distribution (200 data points in total). We apply LVMOGP with missing data (Section 4) and
compare with GP-ind and LMC. The results are summarized in Figure 2b. The means and standard
deviations of all the methods on 20 repeats are: GP-ind: 0.43 ± 0.06, LMC: 0.47 ± 0.09, LVMOGP: 0.30 ± 0.04. In both synthetic experiments, LMC does not perform well because of overfitting caused by estimating the full-rank coregionalization matrix. Figure 2c shows a comparison of the estimated functions by the three methods for a condition with few training data. Both LMC and
LVMOGP can leverage the information from other conditions to make better predictions, while LMC
often suffers from overfitting due to the high number of parameters in the coregionalization matrix.
Servo Data. We apply our method to a servo modeling problem, in which the task is to predict the
rise time of a servomechanism in terms of two (continuous) gain settings and two (discrete) choices of
mechanical linkages [Quinlan, 1992]. The two choices of mechanical linkages introduce 25 different
conditions in the experiments (five types of motors and five types of lead screws). The data in each
condition are scarce, which makes joint modeling necessary (see Figure 3a). We took 70% of the
dataset as training data and the rest as test data, and randomly generated 20 partitions. We applied
LVMOGP with a two-dimensional latent space with an ARD kernel and used five inducing points
for the latent space and 10 inducing points for the function. We compared LVMOGP with GP with
ignoring the different conditions (GP-WO), GP with taking each condition as an independent output
(GP-ind), GP with one-hot encoding of conditions (GP-OH) and LMC. The means and standard
deviations of the RMSE of all the methods on 20 partitions are: GP-WO: 1.03 ? 0.20, GP-ind:
Figure 3: The experimental results on servo data and sensor imputation. (a) The numbers of data
points are scarce in each condition. (b) The performance of a list of methods on 20 different train/test
partitions is shown in the box plot. (c) The function learned by LVMOGP for the condition with the
smallest amount of data. With only one training data, the method is able to extrapolate a non-linear
function due to the joint modeling of all the conditions. (d) The performance of three methods on
sensor imputation with 20 repeats.
1.30 ± 0.31, GP-OH: 0.73 ± 0.26, LMC: 0.69 ± 0.35, LVMOGP: 0.52 ± 0.16. Note that in some conditions the data are very scarce, e.g., there is only one training data point and one test data point (see Figure 3c). As all the conditions are jointly modeled in LVMOGP, the method is able to extrapolate a non-linear function after seeing only one data point.
Sensor Imputation. We apply our method to impute multivariate time series data with massive
missing data. We take an in-house multi-sensor recording that includes a list of sensor measurements such as temperature, carbon dioxide, humidity, etc. [Zamora-Martínez et al., 2014]. The measurements are recorded every minute for roughly a month and smoothed with 15-minute means. Different
measurements are normalized to zero-mean and unit-variance. We mimic the scenario of massive
missing data by randomly taking out 95% of the data entries and aim at imputing all the missing
values. The performance is measured as RMSE on the imputed values. We apply LVMOGP with
missing data with the settings: QH = 2, MH = 10 and MX = 100. We compare with LMC and
GP-ind. The experiments are repeated 20 times with different missing values. The results are shown
in a box plot in Figure 3d. The means and standard deviations of all the methods on 20 repeats are: GP-ind: 0.85 ± 0.09, LMC: 0.59 ± 0.21, LVMOGP: 0.45 ± 0.02. The high variance of the LMC results
are due to the large number of parameters in the coregionalization matrix.
7
Conclusion
In this work, we study the problem of how to model multiple conditions in supervised learning.
Common practices such as one-hot encoding cannot efficiently model the relation among different
conditions and are not able to generalize to a new condition at test time. We propose to solve this
problem in a principled way, where we learn the latent information of conditions into a latent space.
By exploiting the Kronecker product decomposition in the variational posterior, our inference method
is able to achieve the same computational complexity as sparse GPs with independent observations,
when there are no missing data. In experiments on synthetic and real data, LVMOGP outperforms
common approaches such as ignoring condition difference, using one-hot encoding and LMC. In
Figure 3b and 3d, LVMOGP delivers more reliable performance than LMC among different train/test
partitions due to the marginalization of latent variables.
Acknowledgements MAA has been financed by the Engineering and Physical Research Council (EPSRC)
Research Project EP/N014162/1.
References
Mauricio A. Álvarez and Neil D. Lawrence. Computationally efficient convolved multiple output Gaussian processes. J. Mach. Learn. Res., 12:1459–1500, July 2011.
Edwin V. Bonilla, Kian Ming Chai, and Christopher K. I. Williams. Multi-task Gaussian process prediction. In John C. Platt, Daphne Koller, Yoram Singer, and Sam Roweis, editors, NIPS, volume 20, 2008.
Matthias Bussas, Christoph Sawade, Nicolas Kühn, Tobias Scheffer, and Niels Landwehr. Varying-coefficient models for geospatial transfer learning. Machine Learning, pages 1–22, 2017.
Pierre Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, 1997.
James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. In UAI, 2013.
Andre G. Journel and Charles J. Huijbregts. Mining Geostatistics. Academic Press, 1978.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
Alexander G. D. G. Matthews, James Hensman, Richard E. Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In AISTATS, 2016.
Peter Z. G. Qian, Huaiqing Wu, and C. F. Jeff Wu. Gaussian process models for computer experiments with qualitative and quantitative factors. Technometrics, 50(3):383–396, 2008.
J. R. Quinlan. Learning with continuous classes. In Australian Joint Conference on Artificial Intelligence, pages 343–348, 1992.
Oliver Stegle, Christoph Lippert, Joris Mooij, Neil Lawrence, and Karsten Borgwardt. Efficient inference in matrix-variate Gaussian models with IID observation noise. In NIPS, pages 630–638, 2011.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, 2014.
J. B. Tenenbaum and W. T. Freeman. Separating style and content with bilinear models. Neural Computation, 12:1473–1483, 2000.
Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, 2009.
Michalis K. Titsias and Neil D. Lawrence. Bayesian Gaussian process latent variable model. In AISTATS, 2010.
F. Zamora-Martínez, P. Romeu, P. Botella-Rocamora, and J. Pardo. On-line learning of indoor temperature forecasting models towards energy efficiency. Energy and Buildings, 83:162–172, 2014.
Mauricio A. Álvarez, Lorenzo Rosasco, and Neil D. Lawrence. Kernels for vector-valued functions: A review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012. ISSN 1935-8237. doi: 10.1561/2200000036. URL http://dx.doi.org/10.1561/2200000036.
| 7098 |
6,742 | 7,099 | A-NICE-MC: Adversarial Training for MCMC
Jiaming Song
Stanford University
[email protected]
Shengjia Zhao
Stanford University
[email protected]
Stefano Ermon
Stanford University
[email protected]
Abstract
Existing Markov Chain Monte Carlo (MCMC) methods are either based on generalpurpose and domain-agnostic schemes, which can lead to slow convergence, or
problem-specific proposals hand-crafted by an expert. In this paper, we propose ANICE-MC, a novel method to automatically design efficient Markov chain kernels
tailored for a specific domain. First, we propose an efficient likelihood-free adversarial training method to train a Markov chain and mimic a given data distribution.
Then, we leverage flexible volume preserving flows to obtain parametric kernels
for MCMC. Using a bootstrap approach, we show how to train efficient Markov
chains to sample from a prescribed posterior distribution by iteratively improving
the quality of both the model and the samples. Empirical results demonstrate that
A-NICE-MC combines the strong guarantees of MCMC with the expressiveness of
deep neural networks, and is able to significantly outperform competing methods
such as Hamiltonian Monte Carlo.
1
Introduction
Variational inference (VI) and Monte Carlo (MC) methods are two key approaches to deal with
complex probability distributions in machine learning. The former approximates an intractable
distribution by solving a variational optimization problem to minimize a divergence measure with
respect to some tractable family. The latter approximates a complex distribution using a small number
of typical states, obtained by sampling ancestrally from a proposal distribution or iteratively using a
suitable Markov chain (Markov Chain Monte Carlo, or MCMC).
Recent progress in deep learning has vastly advanced the field of variational inference. Notable
examples include black-box variational inference and variational autoencoders [1?3], which enabled
variational methods to benefit from the expressive power of deep neural networks, and adversarial
training [4, 5], which allowed the training of new families of implicit generative models with efficient
ancestral sampling. MCMC methods, on the other hand, have not benefited as much from these recent
advancements. Unlike variational approaches, MCMC methods are iterative in nature and do not
naturally lend themselves to the use of expressive function approximators [6, 7]. Even evaluating
an existing MCMC technique is often challenging, and natural performance metrics are intractable
to compute [8?11]. Defining an objective to improve the performance of MCMC that can be easily
optimized in practice over a large parameter space is itself a difficult problem [12, 13].
To address these limitations, we introduce A-NICE-MC, a new method for training flexible MCMC
kernels, e.g., parameterized using (deep) neural networks. Given a kernel, we view the resulting
Markov Chain as an implicit generative model, i.e., one where sampling is efficient but evaluating the
(marginal) likelihood is intractable. We then propose adversarial training as an effective, likelihoodfree method for training a Markov chain to match a target distribution. First, we show it can be used in
a learning setting to directly approximate an (empirical) data distribution. We then use the approach
to train a Markov Chain to sample efficiently from a model prescribed by an analytic expression (e.g.,
a Bayesian posterior distribution), the classic use case for MCMC techniques. We leverage flexible
volume preserving flow models [14] and a ?bootstrap? technique to automatically design powerful
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
domain-specific proposals that combine the guarantees of MCMC and the expressiveness of neural
networks. Finally, we propose a method that decreases autocorrelation and increases the effective
sample size of the chain as training proceeds. We demonstrate that these trained operators are able to
significantly outperform traditional ones, such as Hamiltonian Monte Carlo, in various domains.
2
Notations and Problem Setup
A sequence of continuous random variables {x_t}_{t=0}^∞, x_t ∈ R^n, is drawn through the following Markov chain:

x_0 ∼ π^0,    x_{t+1} ∼ T_θ(x_{t+1} | x_t)

where T_θ(·|x) is a time-homogeneous stochastic transition kernel parametrized by θ ∈ Θ and π^0 is some initial distribution for x_0. In particular, we assume that T_θ is defined through an implicit generative model f_θ(·|x, v), where v ∼ p(v) is an auxiliary random variable, and f_θ is a deterministic transformation (e.g., a neural network). Let π_θ^t denote the distribution for x_t. If the Markov chain is both irreducible and positive recurrent, then it has a unique stationary distribution π_θ = lim_{t→∞} π_θ^t. We assume that this is the case for all the parameters θ ∈ Θ.

Let p_d(x) be a target distribution over x ∈ R^n, e.g., a data distribution or an (intractable) posterior distribution in a Bayesian inference setting. Our objective is to find a T_θ such that:

1. Low bias: The stationary distribution is close to the target distribution (minimize |π_θ − p_d|).
2. Efficiency: {π_θ^t}_{t=0}^∞ converges quickly (minimize t such that |π_θ^t − p_d| < δ).
3. Low variance: Samples from one chain {x_t}_{t=0}^∞ should be as uncorrelated as possible (minimize autocorrelation of {x_t}_{t=0}^∞).

We think of π_θ as a stochastic generative model, which can be used to efficiently produce samples with certain characteristics (specified by p_d), allowing for efficient Monte Carlo estimates. We consider two settings for specifying the target distribution. The first is a learning setting where we do not have an analytic expression for p_d(x) but we have access to typical samples {s_i}_{i=1}^m ∼ p_d; in the second case we have an analytic expression for p_d(x), possibly up to a normalization constant, but no access to samples. The two cases are discussed in Sections 3 and 4 respectively.
3
Adversarial Training for Markov Chains
Consider the setting where we have direct access to samples from p_d(x). Assume that the transition kernel T_θ(x_{t+1} | x_t) is the following implicit generative model:

v ∼ p(v),    x_{t+1} = f_θ(x_t, v)    (1)

Assuming a stationary distribution π_θ(x) exists, the value of π_θ(x) is typically intractable to compute. The marginal distribution π_θ^t(x) at time t is also intractable, since it involves integration over all the possible paths (of length t) to x. However, we can directly obtain samples from π_θ^t, which will be close to π_θ if t is large enough (assuming ergodicity). This aligns well with the idea of generative adversarial networks (GANs), a likelihood-free method which only requires samples from the model.

A Generative Adversarial Network (GAN) [4] is a framework for training deep generative models using a two-player minimax game. A generator network G generates samples by transforming a noise variable z ∼ p(z) into G(z). A discriminator network D(x) is trained to distinguish between "fake" samples from the generator and "real" samples from a given data distribution p_d. Formally, this defines the following objective (Wasserstein GAN, from [15]):

min_G max_D V(D, G) = min_G max_D E_{x∼p_d}[D(x)] − E_{z∼p(z)}[D(G(z))]    (2)

In our setting, we could assume p_d(x) is the empirical distribution from the samples, choose z ∼ π^0 and let G_θ(z) be the state of the Markov chain after t steps, which is a good approximation of π_θ if t is large enough. However, optimization is difficult because we do not know a reasonable t in advance, and the gradient updates are expensive due to backpropagation through the entire chain.
Figure 1: Visualizing samples of x_1 to x_50 (each row) from a model trained on the MNIST dataset. Consecutive samples can be related in label (red box), inclination (green box) or width (blue box).
Figure 2: T_θ(y_{t+1} | y_t).
Figure 3: Samples of x_1 to x_30 from models (top: without shortcut connections; bottom: with shortcut connections) trained on the CelebA dataset.
Therefore, we propose a more efficient approximation, called Markov GAN (MGAN):

min_θ max_D  E_{x∼p_d}[D(x)] − λ E_{x̄∼π_θ^b}[D(x̄)] − (1 − λ) E_{x_d∼p_d, x̄∼T_θ^m(x̄|x_d)}[D(x̄)]    (3)

where λ ∈ (0, 1), b ∈ N⁺, m ∈ N⁺ are hyperparameters, x̄ denotes "fake" samples from the generator and T_θ^m(x | x_d) denotes the distribution of x when the transition kernel is applied m times, starting from some "real" sample x_d.

We use two types of samples from the generator for training, optimizing θ such that the samples will fool the discriminator:

1. Samples obtained after b transitions x̄ ∼ π_θ^b, starting from x_0 ∼ π^0.
2. Samples obtained after m transitions, starting from a data sample x_d ∼ p_d.

Intuitively, the first condition encourages the Markov chain to converge towards p_d over relatively short runs (of length b). The second condition enforces that p_d is a fixed point of the transition operator.¹ Instead of simulating the chain until convergence, which will be especially time-consuming if the initial Markov chain takes many steps to mix, the generator runs only (b + m)/2 steps on average. Empirically, we observe better training times by uniformly sampling b from [1, B] and m from [1, M] respectively in each iteration, so we use B and M as the hyperparameters for our experiments.
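The following sketch (ours, not the released code) shows how the two kinds of "fake" batches entering objective (3) can be generated; f_theta stands for any transition x_{t+1} = f_θ(x_t, v), and the sampler names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(f_theta, x, steps, dim_v):
    """Run the implicit kernel (1) for a given number of steps."""
    for _ in range(steps):
        v = rng.standard_normal((x.shape[0], dim_v))
        x = f_theta(x, v)
    return x

def mgan_fake_batches(f_theta, x0_sampler, data_batch, B, M, dim_v):
    b = rng.integers(1, B + 1)   # b sampled uniformly from {1, ..., B}
    m = rng.integers(1, M + 1)   # m sampled uniformly from {1, ..., M}
    # batch 1: start from pi^0, take b steps (approximates pi_theta^b)
    fake_b = rollout(f_theta, x0_sampler(data_batch.shape[0]), b, dim_v)
    # batch 2: start from real data, take m steps (fixed-point condition)
    fake_m = rollout(f_theta, data_batch, m, dim_v)
    return fake_b, fake_m        # scored by D with weights lambda, 1 - lambda
```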
3.1
Example: Generative Model for Images
We experiment with a distribution p_d over images, such as digits (MNIST) and faces (CelebA). In the experiments, we parametrize f_θ to have an autoencoding structure, where the auxiliary variable v ∼ N(0, I) is directly added to the latent code of the network, serving as a source of randomness:

z = encoder_θ(x_t),    z′ = ReLU(z + βv),    x_{t+1} = decoder_θ(z′)    (4)

where β is a hyperparameter we set to 0.1. While sampling is inexpensive, evaluating probabilities according to T_θ(·|x_t) is generally intractable as it would require integration over v. The starting distribution π^0 is a factored Gaussian distribution with mean and standard deviation being the mean and standard deviation of the training set. We include all the details, which are based on the DCGAN [16] architecture, in Appendix E.1. All the models are trained with the gradient penalty objective for Wasserstein GANs [17, 15], where λ = 1/3, B = 4 and M = 3.
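A minimal PyTorch sketch of the transition in (4) is given below. The fully connected layer sizes are placeholders of our own choosing (the paper uses a DCGAN-style architecture, Appendix E.1), and beta is the 0.1 noise scale from the text.

```python
import torch
import torch.nn as nn

class TransitionKernel(nn.Module):
    """x_{t+1} = decoder(ReLU(encoder(x_t) + beta * v)), v ~ N(0, I)."""

    def __init__(self, x_dim=784, z_dim=64, beta=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                     nn.Linear(256, z_dim))
        self.decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim), nn.Sigmoid())
        self.beta = beta

    def forward(self, x):
        z = self.encoder(x)
        v = torch.randn_like(z)            # auxiliary noise injected into
        z = torch.relu(z + self.beta * v)  # the latent code, as in (4)
        return self.decoder(z)
```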
We visualize the samples generated from our trained Markov chain in Figures 1 and 3, where each
row shows consecutive samples from the same chain (we include more images in Appendix F) From
¹ We provide a more rigorous justification in Appendix B.
Figure 1 it is clear that x_{t+1} is related to x_t in terms of high-level properties such as digit identity (label). Our model learns to find and "move between the modes" of the dataset, instead of generating a single sample ancestrally. This is drastically different from other iterative generative models trained with maximum likelihood, such as Generative Stochastic Networks (GSN, [18]) and Infusion Training (IF, [19]), because when we train T_θ(x_{t+1} | x_t) we are not specifying a particular target for x_{t+1}. In fact, to maximize the discriminator score the model (generator) may choose to generate some x_{t+1} near a different mode.

To further investigate the frequency of various modes in the stationary distribution, we consider the class-to-class transition probabilities for MNIST. We run one step of the transition operator starting from real samples for which we have class labels y ∈ {0, . . . , 9}, and classify the generated samples with a CNN. We are thus able to quantify the transition matrix for labels in Figure 2. Results show that class probabilities are fairly uniform and range between 0.09 and 0.11.

Although it seems that the MGAN objective encourages rapid transitions between different modes, this is not always the case. In particular, as shown in Figure 3, adding residual connections [20] and highway connections [21] to an existing model can significantly increase the time needed to transition between modes. This suggests that the time needed to transition between modes can be affected by the architecture we choose for f_θ(x_t, v). If the architecture introduces an information bottleneck which forces the model to "forget" x_t, then x_{t+1} will have a higher chance of occurring in another mode; on the other hand, if the model has shortcut connections, it tends to generate x_{t+1} that are close to x_t. The increase in autocorrelation will hinder performance if samples are used for Monte Carlo estimates.
4
Adversarial Training for Markov Chain Monte Carlo
We now consider the setting where the target distribution p_d is specified by an analytic expression:

p_d(x) ∝ exp(−U(x))    (5)

where U(x) is a known "energy function" and the normalization constant in Equation (5) might be intractable to compute. This form is very common in Bayesian statistics [22], computational physics [23] and graphics [24]. Compared to the setting in Section 3, there are two additional challenges:

1. We want to train a Markov chain such that the stationary distribution π_θ is exactly p_d;
2. We do not have direct access to samples from p_d during training.
4.1
Exact Sampling Through MCMC
We use ideas from the Markov Chain Monte Carlo (MCMC) literature to satisfy the first condition and guarantee that {π_θ^t}_{t=0}^∞ will asymptotically converge to p_d. Specifically, we require the transition operator T_θ(·|x) to satisfy the detailed balance condition:

p_d(x) T_θ(x′ | x) = p_d(x′) T_θ(x | x′)    (6)

for all x and x′. This condition can be satisfied using Metropolis-Hastings (MH), where a sample x′ is first obtained from a proposal distribution g_θ(x′ | x) and accepted with the following probability:

A_θ(x′ | x) = min( 1, [p_d(x′) g_θ(x | x′)] / [p_d(x) g_θ(x′ | x)] ) = min( 1, exp(U(x) − U(x′)) g_θ(x | x′) / g_θ(x′ | x) )    (7)

Therefore, the resulting MH transition kernel can be expressed as T_θ(x′ | x) = g_θ(x′ | x) A_θ(x′ | x) (if x ≠ x′), and it can be shown that p_d is stationary for T_θ(·|x) [25].
The idea is then to optimize for a good proposal g_θ(x′ | x). We can set g_θ directly as in Equation (1) (if f_θ takes a form where the probability g_θ can be computed efficiently), and attempt to optimize the MGAN objective in Eq. (3) (assuming we have access to samples from p_d, a challenge we will address later). Unfortunately, Eq. (7) is not differentiable; the setting is similar to policy gradient optimization in reinforcement learning. In principle, score function gradient estimators (such as REINFORCE [26]) could be used in this case; in our experiments, however, this approach leads to extremely low acceptance rates. This is because during initialization, the ratio g_θ(x | x′)/g_θ(x′ | x) can be extremely low, which leads to low acceptance rates and trajectories that are not informative for training. While it might be possible to optimize directly using more sophisticated techniques from the RL literature, we introduce an alternative approach based on volume preserving dynamics.
4.2
Hamiltonian Monte Carlo and Volume Preserving Flow
To gain some intuition for our method, we introduce Hamiltonian Monte Carlo (HMC) and volume preserving flow models [27]. HMC is a widely applicable MCMC method that introduces an auxiliary "velocity" variable v to g_θ(x′ | x). The proposal first draws v from p(v) (typically a factored Gaussian distribution) and then obtains (x′, v′) by simulating the dynamics (and inverting v at the end of the simulation) corresponding to the Hamiltonian

H(x, v) = v^⊤v/2 + U(x)    (8)

where x and v are iteratively updated using the leapfrog integrator (see [27]). The transition from (x, v) to (x′, v′) is deterministic, invertible and volume preserving, which means that

g_θ(x′, v′ | x, v) = g_θ(x, v | x′, v′)    (9)

MH acceptance (7) is computed using the distribution p(x, v) = p_d(x) p(v), where the acceptance probability is p(x′, v′)/p(x, v) since g_θ(x′, v′ | x, v)/g_θ(x, v | x′, v′) = 1. We can safely discard v′ after the transition since x and v are independent.
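For reference, a textbook leapfrog-based HMC step (cf. [27]) can be sketched as follows; this is a generic illustration in NumPy, not the baseline implementation used in the experiments.

```python
import numpy as np

def leapfrog(x, v, grad_U, eps, L):
    """Simulate Hamiltonian dynamics for L leapfrog steps of size eps."""
    v = v - 0.5 * eps * grad_U(x)          # initial half step for momentum
    for i in range(L):
        x = x + eps * v                    # full step for position
        if i < L - 1:
            v = v - eps * grad_U(x)        # full steps for momentum
    v = v - 0.5 * eps * grad_U(x)          # final half step
    return x, -v                           # flip momentum: symmetric proposal

def hmc_step(x, U, grad_U, eps, L, rng):
    v = rng.standard_normal(x.shape)       # resample velocity v ~ N(0, I)
    x_new, v_new = leapfrog(x, v, grad_U, eps, L)
    H0 = U(x) + 0.5 * v @ v                # Hamiltonian (8) before
    H1 = U(x_new) + 0.5 * v_new @ v_new    # and after the simulation
    return x_new if rng.random() < np.exp(H0 - H1) else x
```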
Let us return to the case where the proposal is parametrized by a neural network; if we could satisfy Equation (9) then we could significantly improve the acceptance rate compared to the "REINFORCE" setting. Fortunately, we can design such a proposal by using a volume preserving flow model [14].

A flow model [14, 28–30] defines a generative model for x ∈ R^n through a bijection f : h → x, where h ∈ R^n has the same number of dimensions as x, with a fixed prior p_H(h) (typically a factored Gaussian). In this form, p_X(x) is tractable because

p_X(x) = p_H(f^{−1}(x)) |det ∂f^{−1}(x)/∂x|    (10)

and can be optimized by maximum likelihood.

In the case of a volume preserving flow model f, the determinant of the Jacobian ∂f(h)/∂h is one. Such models can be constructed using additive coupling layers, which first partition the input into two parts, y and z, and then define a mapping from (y, z) to (y′, z′) as:

y′ = y,    z′ = z + m(y)    (11)

where m(·) can be a complex function. By stacking multiple coupling layers the model becomes highly expressive. Moreover, once we have the forward transformation f, the backward transformation f^{−1} can be easily derived. This family of models is called Non-linear Independent Components Estimation (NICE) [14].
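A minimal additive coupling layer implementing (11), with its exact inverse, looks as follows; here m is a toy function standing in for a neural network, and the class name is our own.

```python
import numpy as np

class AdditiveCoupling:
    """One coupling layer of (11): volume preserving (unit Jacobian)."""

    def __init__(self, m):
        self.m = m                       # arbitrary function of y

    def forward(self, y, z):
        return y, z + self.m(y)          # y passes through unchanged

    def inverse(self, y, z_prime):
        return y, z_prime - self.m(y)    # exact inverse, no matrix solves

m = lambda y: 2.0 * np.tanh(y)           # toy stand-in for a neural net
layer = AdditiveCoupling(m)
y, z = np.ones(3), np.zeros(3)
y2, z2 = layer.forward(y, z)
assert np.allclose(layer.inverse(y2, z2)[1], z)  # round-trips exactly
```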
4.3
A NICE Proposal
HMC has two crucial components. One is the introduction of the auxiliary variable v, which prevents random walk behavior; the other is the symmetric proposal in Equation (9), which allows the MH step to only consider p(x, v). In particular, if we simulate the Hamiltonian dynamics (the deterministic part of the proposal) twice starting from any (x, v) (without MH or resampling v), we will always return to (x, v).

Auxiliary variables can be easily integrated into neural network proposals. However, it is hard to obtain symmetric behavior. If our proposal is deterministic, then f_θ(f_θ(x, v)) = (x, v) should hold for all (x, v), a condition which is difficult to achieve.² Therefore, we introduce a proposal which satisfies Equation (9) for any θ, while preventing random walk in practice by resampling v after every MH step.

Our proposal considers a NICE model f_θ(x, v) with its inverse f_θ^{−1}, where v ∼ p(v) is the auxiliary variable. We draw a sample x′ from the proposal g_θ(x′, v′ | x, v) using the following procedure:

1. Randomly sample v ∼ p(v) and u ∼ Uniform[0, 1];
2. If u > 0.5, then (x′, v′) = f_θ(x, v);

² The cycle consistency loss (as in CycleGAN [31]) introduces a regularization term for this condition; we added this to the REINFORCE objective but were not able to achieve satisfactory results.
Figure 4: Sampling process of A-NICE-MC. At each step, the proposal executes f_θ or f_θ^{−1}. Outside the high probability regions f_θ will guide x towards p_d(x), while MH will tend to reject f_θ^{−1}. Inside high probability regions both operations will have a reasonable probability of being accepted.
3. If u ≤ 0.5, then (x′, v′) = f_θ^{−1}(x, v).
We call this proposal a NICE proposal and introduce the following theorem.

Theorem 1. For any (x, v) and (x′, v′) in their domain, a NICE proposal g_θ satisfies

g_θ(x′, v′ | x, v) = g_θ(x, v | x′, v′)

Proof. In Appendix C.
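Putting the pieces together, one A-NICE-MC transition can be sketched as below; f and f_inv denote a trained NICE bijection and its inverse, U is the energy in (5), and the acceptance test instantiates (7) for this symmetric proposal. The function name and interface are our own.

```python
import numpy as np

def anicemc_step(x, U, f, f_inv, rng):
    """One transition: NICE proposal + MH test on p(x, v) = p_d(x) p(v)."""
    v = rng.standard_normal(x.shape)        # resample v after every MH step
    prop = f if rng.random() > 0.5 else f_inv
    xp, vp = prop(x, v)                     # symmetric by Theorem 1
    logp_old = -U(x) - 0.5 * (v @ v)        # log p(x, v) up to a constant
    logp_new = -U(xp) - 0.5 * (vp @ vp)
    if np.log(rng.random()) < logp_new - logp_old:
        return xp                           # accept the proposed state
    return x                                # reject; v' is discarded either way
```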
4.4
Training A NICE Proposal
Given any NICE proposal with f_θ, the MH acceptance step guarantees that p_d is a stationary distribution, yet the ratio p(x′, v′)/p(x, v) can still lead to low acceptance rates unless θ is carefully chosen. Intuitively, we would like to train our proposal g_θ to produce samples that are likely under p(x, v).

Although the proposal itself is non-differentiable w.r.t. x and v, we do not require score function gradient estimators to train it. In fact, if f_θ is a bijection between samples in high probability regions, then f_θ^{−1} is automatically also such a bijection. Therefore, we ignore f_θ^{−1} during training and only train f_θ(x, v) to reach the target distribution p(x, v) = p_d(x) p(v). For p_d(x), we use the MGAN objective in Equation (3); for p(v), we minimize the distance between the distribution of the generated v′ (tractable through Equation (10)) and the prior distribution p(v) (which is a factored Gaussian):

min_θ max_D  L(x; θ, D) + γ L_d(p(v), p_θ(v′))    (12)

where L is the MGAN objective, L_d is an objective that measures the divergence between two distributions and γ is a parameter to balance the two factors; in our experiments, we use KL divergence for L_d and γ = 1.³
Our transition operator consists of a trained NICE proposal followed by a Metropolis-Hastings step, and we call the resulting Markov chain Adversarial NICE Monte Carlo (A-NICE-MC). The sampling process is illustrated in Figure 4. Intuitively, if (x, v) lies in a high probability region, then both f_θ and f_θ^{−1} should propose a state in another high probability region. If (x, v) is in a low-probability region, then f_θ would move it closer to the target, while f_θ^{−1} does the opposite. However, the MH step will bias the process towards high probability regions, thereby suppressing random-walk behavior.
4.5
Bootstrap
The main remaining challenge is that we do not have direct access to samples from p_d with which to train f_θ according to the adversarial objective in Equation (12), whereas in the case of Section 3 we have a dataset to draw samples of the data distribution from.

In order to retrieve samples from p_d and train our model, we use a bootstrap process [33] in which the quality of the samples used for adversarial training increases over time. We obtain initial samples by running a (possibly) slow-mixing operator T_{θ_0} with stationary distribution p_d, starting from an arbitrary initial distribution π^0. We use these samples to train our model f_{θ_i}, and then use it to obtain new samples from our trained transition operator T_{θ_i}; by repeating the process we can obtain samples of better quality, which should in turn lead to a better model.

³ The results are not very sensitive to changes in γ; we also tried Maximum Mean Discrepancy (MMD, see [32] for details) and achieved similar results.
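The bootstrap can be summarized by the following schematic loop; train_proposal stands for the adversarial training of Section 4.4 and sample_chain for running the MH chain, both left abstract here since they are not spelled out in this section.

```python
def bootstrap(T0, train_proposal, sample_chain, rounds, n_samples):
    """Iteratively refine the kernel using its own (improving) samples."""
    samples = sample_chain(T0, n_samples)          # slow initial kernel T_theta0
    kernel = None
    for _ in range(rounds):
        kernel = train_proposal(samples)           # fit f_theta on current samples
        samples = sample_chain(kernel, n_samples)  # better samples for next round
    return kernel
```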
Figure 5: Left: Samples from a model with shortcut connections trained with ordinary discriminator.
Right: Samples from the same model trained with a pairwise discriminator.
Figure 6: Densities of ring, mog2, mog6 and ring5 (from left to right).
4.6
Reducing Autocorrelation by Pairwise Discriminator
An important metric for evaluating MCMC algorithms is the effective sample size (ESS), which measures the number of "effective samples" obtained from running the chain. As samples from MCMC methods are not i.i.d., to achieve a higher ESS we would like the samples to be as independent as possible (low autocorrelation). In the case of training a NICE proposal, the objective in Equation (3) may lead to high autocorrelation even though the acceptance rate is reasonably high. This is because the coupling layer contains residual connections from the input to the output; as shown in Section 3.1, such models tend to learn an identity mapping and empirically they have high autocorrelation.
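One common autocorrelation-based ESS estimator for a scalar chain is sketched below; the paper's exact formulation is given in its Appendix A and may differ in its truncation convention, so treat this as a generic illustration.

```python
import numpy as np

def ess(x):
    """ESS = n / (1 + 2 * sum of positive-lag autocorrelations)."""
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x @ x)  # acf[0] == 1
    rho_sum = 0.0
    for rho in acf[1:]:
        if rho < 0:          # truncate when autocorrelation turns negative
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)
```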
We propose to use a pairwise discriminator to reduce autocorrelation and improve ESS. Instead of scoring one sample at a time, the discriminator scores two samples (x1, x2) at a time. For "real data" we draw two independent samples from our bootstrapped samples; for "fake data" we draw x2 ∼ T_θ^m(·|x1) such that x1 is either drawn from the data distribution or from samples after running the chain for b steps, and x2 is the sample after running the chain for m steps, which is similar to the samples drawn in the original MGAN objective.

The optimal solution is to match both the distributions of x1 and x2 to the target distribution. Moreover, if x1 and x2 are correlated, then the discriminator should be able to distinguish the "real" and "fake" pairs, so the model is forced to generate samples with little autocorrelation. More details are included in Appendix D. The pairwise discriminator is conceptually similar to the minibatch discrimination layer [34]; the difference is that we provide correlated samples as "fake" data, while [34] provides independent samples that might be similar.

To demonstrate the effectiveness of the pairwise discriminator, we show an example for the image domain in Figure 5, where the same model with shortcut connections is trained with and without pairwise discrimination (details in Appendix E.1); it is clear from the variety in the samples that the pairwise discriminator significantly reduces autocorrelation.
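The pair construction can be sketched as follows; the helper names and the even mixing of the two starting points for x1 are our own illustrative choices, and the actual details are in Appendix D.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pairs(data, chain_m_steps, b_start_sampler):
    """Build one 'real' and one 'fake' pair for the pairwise discriminator."""
    # real pair: two independent draws from the bootstrapped samples
    i, j = rng.integers(len(data), size=2)
    real = (data[i], data[j])
    # fake pair: x1 from data or from b chain steps; x2 = m more chain steps
    if rng.random() < 0.5:
        x1 = data[rng.integers(len(data))]
    else:
        x1 = b_start_sampler()
    x2 = chain_m_steps(x1)
    return real, (x1, x2)
```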
5
Experiments
Code for reproducing the experiments is available at https://github.com/ermongroup/a-nice-mc.
To demonstrate the effectiveness of A-NICE-MC, we first compare its performance with HMC on several synthetic 2D energy functions: ring (a ring-shaped density), mog2 (a mixture of 2 Gaussians), mog6 (a mixture of 6 Gaussians), and ring5 (a mixture of 5 distinct rings). The densities are illustrated in Figure 6 (Appendix E.2 has the analytic expressions). ring has a single connected component of high-probability regions and HMC performs well; mog2, mog6 and ring5 are selected to demonstrate cases where HMC fails to move across modes using gradient information. A-NICE-MC performs well in all the cases.

We use the same hyperparameters for all the experiments (see Appendix E.4 for details). In particular, we consider f_θ(x, v) with three coupling layers, which update v, x and v respectively. This is to ensure that both x and v can affect the updates to x′ and v′.
How does A-NICE-MC perform? We evaluate and compare ESS and ESS per second (ESS/s) for
both methods in Table 1. For ring, mog2, mog6, we report the smallest ESS of all the dimensions
Table 1: Performance of MCMC samplers as measured by Effective Sample Size (ESS). Higher is
better (1000 maximum). Averaged over 5 runs under different initializations. See Appendix A for the
ESS formulation, and Appendix E.3 for how we benchmark the running time of both methods.
ESS          A-NICE-MC    HMC
ring         1000.00      1000.00
mog2         355.39       1.00
mog6         320.03       1.00
ring5        155.57       0.43

ESS/s        A-NICE-MC    HMC
ring         128205       121212
mog2         50409        78
mog6         40768        39
ring5        19325        29
Figure 7: (a–b) Mean absolute error for estimating E[√(x1² + x2²)] and Std[√(x1² + x2²)] in ring5 w.r.t. simulation length, averaged over 100 chains. (c–d) Density plots for HMC and A-NICE-MC. When the initial distribution is a Gaussian centered at the origin, HMC overestimates the densities of the rings towards the center.
(as in [35]); for ring5, we report the ESS of the distance between the sample and the origin, which indicates mixing across different rings. In the four scenarios, HMC performed well only on ring; in cases where modes are distant from each other, there is little gradient information for HMC to move between modes. On the other hand, A-NICE-MC is able to move freely between the modes since the NICE proposal is parametrized by a flexible neural network.

We use ring5 as an example to demonstrate the results. We assume π^0(x) = N(0, σ²I) as the initial distribution, and optimize σ through maximum likelihood. Then we run both methods, and use the resulting particles to estimate p_d. As shown in Figures 7a and 7b, HMC fails and there is a large gap between the true and estimated statistics. This also explains why the ESS is lower than 1 for HMC on ring5 in Table 1.

Another reasonable measurement to consider is Gelman's R hat diagnostic [36], which evaluates performance across multiple sampled chains. We evaluate this over the ring5 domain (where the statistic is the distance to the origin), using 32 chains with 5000 samples and 1000 burn-in steps for each sample. HMC gives an R hat value of 1.26, whereas A-NICE-MC gives an R hat value of 1.002.⁴ This suggests that even with 32 chains, HMC does not succeed at estimating the distribution reasonably well.
Does training increase ESS? We show in Figure 8 that in all cases ESS increases with more training iterations and bootstrap rounds, which also indicates that using the pairwise discriminator is effective at reducing autocorrelation.

Admittedly, training introduces an additional computational cost which HMC could utilize to obtain more samples initially (not taking parameter tuning into account), yet the initial cost can be amortized thanks to the improved ESS. For example, in the ring5 domain, we can reach an ESS of 121.54 in approximately 550 seconds (2500 iterations on a 1-thread CPU, bootstrap included). If we then sample from the trained A-NICE-MC, it will catch up with HMC in less than 2 seconds.

Next, we demonstrate the effectiveness of A-NICE-MC on Bayesian logistic regression, where the posterior has a single mode in a higher dimensional space, making HMC a strong candidate for the task. However, in order to achieve high ESS, HMC samplers typically use many leapfrog steps and require gradients at every step, which is inefficient when ∇_x U(x) is computationally expensive. A-NICE-MC only requires running f_θ or f_θ^{−1} once to obtain a proposal, which is much cheaper computationally. We consider three datasets, german (25 covariates, 1000 data points), heart (14 covariates, 532 data points) and australian (15 covariates, 690 data points), and evaluate the lowest ESS across all covariates (following the settings in [35]), where we obtain 5000 samples after 1000

⁴ For R hat values, the perfect value is 1; 1.1–1.2 would be regarded as too high.
Figure 8: ESS with respect to the number of training iterations.
Table 2: ESS and ESS per second for Bayesian logistic regression tasks.
ESS          A-NICE-MC    HMC
german       926.49       2178.00
heart        1251.16      5000.00
australian   1015.75      1345.82

ESS/s        A-NICE-MC    HMC
german       1289.03      216.17
heart        3204.00      1005.03
australian   1857.37      289.11
burn-in samples. For HMC we use 40 leapfrog steps and tune the step size for the best ESS possible. For A-NICE-MC we use the same hyperparameters for all experiments (details in Appendix E.5). Although HMC outperforms A-NICE-MC in terms of ESS, the NICE proposal is less expensive to compute than the HMC proposal by almost an order of magnitude, which leads to a higher ESS per second (see Table 2).
6
Discussion
To the best of our knowledge, this paper presents the first likelihood-free method to train a parametric MCMC operator with good mixing properties. The resulting Markov chains can be used to target both empirical and analytic distributions. We showed that using our novel training objective we can leverage flexible neural networks and volume preserving flow models to obtain domain-specific transition kernels. These kernels significantly outperform traditional ones, which are based on elegant yet very simple and general-purpose analytic formulas. Our hope is that these ideas will allow us to bridge the gap between MCMC and neural network function approximators, similarly to what "black-box techniques" did in the context of variational inference [1].

Combining the guarantees of MCMC and the expressiveness of neural networks unlocks the potential to perform fast and accurate inference in high-dimensional domains, such as Bayesian neural networks. This would likely require us to gather the initial samples through other methods, such as variational inference, since the chance for untrained proposals to "stumble upon" low energy regions is diminished by the curse of dimensionality. Therefore, it would be interesting to see whether we could bypass the bootstrap process and directly train on U(x) by leveraging the properties of flow models. Another promising future direction is to investigate proposals that can rapidly adapt to changes in the data. One use case is to infer the latent variable of a particular data point, as in variational autoencoders. We believe it should be possible to utilize meta-learning algorithms with data-dependent parametrized proposals.
Acknowledgements
This research was funded by Intel Corporation, TRI, FLI and NSF grants 1651565, 1522054, 1733686.
The authors would like to thank Daniel Lévy for discussions on the NICE proposal proof, Yingzhen Li
for suggestions on the training procedure and Aditya Grover for suggestions on the implementation.
References
[1] R. Ranganath, S. Gerrish, and D. Blei, "Black box variational inference," in Artificial Intelligence and Statistics, pp. 814–822, 2014.
[2] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[3] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," arXiv preprint arXiv:1401.4082, 2014.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, pp. 2672–2680, 2014.
[5] S. Mohamed and B. Lakshminarayanan, "Learning in implicit generative models," arXiv preprint arXiv:1610.03483, 2016.
[6] T. Salimans, D. P. Kingma, M. Welling, et al., "Markov chain Monte Carlo and variational inference: Bridging the gap," in ICML, vol. 37, pp. 1218–1226, 2015.
[7] N. De Freitas, P. Højen-Sørensen, M. I. Jordan, and S. Russell, "Variational MCMC," in Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pp. 120–127, Morgan Kaufmann Publishers Inc., 2001.
[8] J. Gorham and L. Mackey, "Measuring sample quality with Stein's method," in Advances in Neural Information Processing Systems, pp. 226–234, 2015.
[9] J. Gorham, A. B. Duncan, S. J. Vollmer, and L. Mackey, "Measuring sample quality with diffusions," arXiv preprint arXiv:1611.06972, 2016.
[10] J. Gorham and L. Mackey, "Measuring sample quality with kernels," arXiv preprint arXiv:1703.01717, 2017.
[11] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman, "Designing fast absorbing Markov chains," in AAAI, pp. 849–855, 2014.
[12] N. Mahendran, Z. Wang, F. Hamze, and N. De Freitas, "Adaptive MCMC with Bayesian optimization," in AISTATS, vol. 22, pp. 751–760, 2012.
[13] S. Boyd, P. Diaconis, and L. Xiao, "Fastest mixing Markov chain on a graph," SIAM Review, vol. 46, no. 4, pp. 667–689, 2004.
[14] L. Dinh, D. Krueger, and Y. Bengio, "NICE: Non-linear independent components estimation," arXiv preprint arXiv:1410.8516, 2014.
[15] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," arXiv preprint arXiv:1701.07875, 2017.
[16] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[17] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. Courville, "Improved training of Wasserstein GANs," arXiv preprint arXiv:1704.00028, 2017.
[18] Y. Bengio, E. Thibodeau-Laufer, G. Alain, and J. Yosinski, "Deep generative stochastic networks trainable by backprop," 2014.
[19] F. Bordes, S. Honari, and P. Vincent, "Learning to generate samples from noise through infusion training," ICLR, 2017.
[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[21] R. K. Srivastava, K. Greff, and J. Schmidhuber, "Highway networks," arXiv preprint arXiv:1505.00387, 2015.
[22] P. J. Green, "Reversible jump Markov chain Monte Carlo computation and Bayesian model determination," Biometrika, pp. 711–732, 1995.
[23] W. Jakob and S. Marschner, "Manifold exploration: a Markov chain Monte Carlo technique for rendering scenes with difficult specular transport," ACM Transactions on Graphics (TOG), vol. 31, no. 4, p. 58, 2012.
[24] D. P. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press, 2014.
[25] W. K. Hastings, "Monte Carlo sampling methods using Markov chains and their applications," Biometrika, vol. 57, no. 1, pp. 97–109, 1970.
[26] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Machine Learning, vol. 8, no. 3-4, pp. 229–256, 1992.
[27] R. M. Neal et al., "MCMC using Hamiltonian dynamics," Handbook of Markov Chain Monte Carlo, vol. 2, pp. 113–162, 2011.
[28] D. J. Rezende and S. Mohamed, "Variational inference with normalizing flows," arXiv preprint arXiv:1505.05770, 2015.
[29] D. P. Kingma, T. Salimans, and M. Welling, "Improving variational inference with inverse autoregressive flow," arXiv preprint arXiv:1606.04934, 2016.
[30] A. Grover, M. Dhar, and S. Ermon, "Flow-GAN: Bridging implicit and prescribed learning in generative models," arXiv preprint arXiv:1705.08868, 2017.
[31] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," arXiv preprint arXiv:1703.10593, 2017.
[32] Y. Li, K. Swersky, and R. Zemel, "Generative moment matching networks," in International Conference on Machine Learning, pp. 1718–1727, 2015.
[33] B. Efron and R. J. Tibshirani, An Introduction to the Bootstrap. CRC Press, 1994.
[34] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," in Advances in Neural Information Processing Systems, pp. 2226–2234, 2016.
[35] M. Girolami and B. Calderhead, "Riemann manifold Langevin and Hamiltonian Monte Carlo methods," Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 73, no. 2, pp. 123–214, 2011.
[36] S. P. Brooks and A. Gelman, "General methods for monitoring convergence of iterative simulations," Journal of Computational and Graphical Statistics, vol. 7, no. 4, pp. 434–455, 1998.
[37] M. D. Hoffman and A. Gelman, "The no-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1593–1623, 2014.
[38] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," arXiv preprint arXiv:1603.04467, 2016.
[39] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
Centric Models of the Orientation Map in Primary Visual Cortex
William Baxter
Department of Computer Science, S.U.N.Y. at Buffalo, NY 14620
Bruce Dow
Department of Physiology, S.U.N.Y. at Buffalo, NY 14620
Abstract
In the visual cortex of the monkey the horizontal organization of the preferred
orientations of orientation-selective cells follows two opposing rules: 1) neighbors tend
to have similar orientation preferences, and 2) many different orientations are observed
in a local region. Several orientation models which satisfy these constraints are found
to differ in the spacing and the topological index of their singularities. Using the rate
of orientation change as a measure, the models are compared to published experimental
results.
Introduction
It has been known for some years that there exist orientation-sensitive neurons in
the visual cortex of cats and monkeys1,2. These cells react to highly specific patterns of
light occurring in narrowly circumscribed regions of the visual field, i.e., the cell's
receptive field. The best patterns for such cells are typically not diffuse levels of
illumination, but elongated bars or edges oriented at specific angles. An individual cell
responds maximally to a bar at a particular orientation, called the preferred orientation. Its response declines as the bar or edge is rotated away from this preferred orientation.
Orientation-sensitive cells have a highly regular organization in primary cortex3.
Vertically, as an electrode proceeds into the depth of the cortex, the column of tissue
contains cells that tend to have the same preferred orientation, at least in the upper
layers. Horizontally, as an electrode progresses across the cortical surface, the preferred
orientations change in a smooth, regular manner, so that the recorded orientations
appear to rotate with distance. It is this horizontal structure we are concerned with,
hereafter referred to as the orientation map. An orientation map is defined as a twodimensional surface in which every point has associated with it a preferred orientation
ranging from 0° to 180°. In discrete versions, such as the array of cells in the cortex or
the discrete simulations in this paper, the orientation map will be considered to be a
sampled version of the underlying continuous surface. The investigations of this paper
are confined to the upper layers of macaque striate cortex.
Detailed knowledge of the two-dimensional layout of the orientation map has
implications for the architecture, development, and function of the visual cortex. The
organization of orientation-sensitive cells reflects, to some degree, the organization of
intracortical connections in striate cortex. Plausible orientation maps can be generated
by models with lateral connections that are uniformly exhibited by all cells in the
layer4,5, or by models which presume no specific intracortical connections, only
appropriate patterns of afferent input6. In this paper, we examine models in which
intracortical connections produce the orientation map but the orientation-controlling
circuitry is not displayed by all cells. Rather, it derives from localized "centers" which
are distributed across the cortical surface with uniform spacing7,8,9.
© American Institute of Physics 1988
The orientation map also represents a deformation in the retinotopy of primary
visual cortex. Since the early sixties it has been known that V1 reflects a topographic
map of the retina and hence the visual field10. There is some global distortion of this
mapping11,12,13, but generally spatial relations between points in the visual field are
maintained on the cortical surface. This well-known phenomenon is only accurate for
a medium-grain description of V1, however. At a finer cellular level there is considerable scattering of receptive fields at a given cortical location14. The notion of the hypercolumn3 proposes that such scattering permits each region of the visual field to be
analyzed by a population of cells consisting of all the necessary orientations and with
inputs from both eyes. A quantitative description of the orientation map will allow
prediction of the distances between iso-orientation zones of a particular orientation,
and suggest how much cortical machinery is being brought to bear on the analysis of a
given feature at a given location in the visual field.
Models of the Orientation Map
Hubel and Wiesel's Parallel Stripe Model
The classic model of the orientation map is the parallel stripe model first published by Hubel and Wiesel in 1972 (ref. 15). This model has been reproduced several times in
their publications3,16,17 and appears in many textbooks. The model consists of a series of
parallel slabs, one slab for each orientation, which are postulated to be orthogonal to
the ocular dominance stripes. The model predicts that a microelectrode advancing
tangentially (i.e., horizontally) through the tissue should encounter steadily changing
orientations. The rate of change, which is also called the orientation drift rate 18, is
determined by the angle of the electrode with respect to the array of orientation
stripes.
The parallel stripe model does not account for several phenomena reported in
long tangential penetrations through striate cortex in macaque monkeys17,19. First, as
pointed out by Swindale20, the model predicts that some penetrations will have flat or
very low orientation drift rates over lateral distances of hundreds of micrometers.
This is because an electrode advancing horizontally and perpendicular to the ocular
dominance stripes (and therefore parallel to the orientation stripes) would be expected
to remain within a single orientation column over a considerable distance with its
orientation drift rate equal to zero. Such results have never been observed. Second,
reversals in the direction of the orientation drift, from clockwise to counterclockwise
or vice versa, are commonly seen, yet this phenomenon is not addressed by the parallel
stripe model. Wavy stripes in the ocular dominace system 21 do not by themselves
introduce reversals. Third, there should be a negative correlation between the orientation drift rate and the ocularity "drift rate". That is, when orientation is changing
rapidly, the electrode should be confined to a single ocular dominance stripe (low ocularity drift rate), whereas when ocularity is changing rapidly the electrode should be
confined to a single orientation stripe (low orientation drift rate). This is clearly not
evident in the recent studies of Uvingstone and Hubel 17 (see especially their figs. 3b,
21 & 23), where both orientation and ocularity often have high drift rates in the same
electrode track, i.e., they show a positive correlation. Anatomical studies with 2-deoxyglucose also fail to show that the orientation and ocular dominance column systems are orthogonal22.
Centric Models and the Topological Index
Another model, proposed by Braitenberg and Braitenberg in 1979 (ref. 7), has the orientations arrayed radially around centers like spokes in a wheel. The centers are spaced at
distances of about 0.5mm. This model produces reversals and also the sinusoidal progressions frequently encountered in horizontal penetrations. However, this approach
suggests other possibilities, in fact an entire class of centric models. The organizing
centers form discontinuities in the otherwise smooth field of orientations. Different
topological types of discontinuity are possible, characterized by their topological
index 23 ? The topological index is a parameter computed by taking a path around a
discontinuity and recording the rotation of the field elements (figure 1). The value of
the index indicates the amount of rotation; the sign indicates the direction of rotation.
An index of 1 signifies that the orientations rotate through 3600; an index of 112
signifies a 1800 rotation. A positive index indicates that the orientations rotate in the
same sense as a path taken around the singularity; a negative index indicates the
reverse rotation.
Topological singularities are stable under orthogonal transformations, so that if
the field elements are each rotated 900 the index of the singUlarity remains unchanged.
Thus a +1 singularity may have orientations radiating out from it like spokes from a
wheel, or it may be at the center of a series of concentric circles. Only four types of
discontinuities are considered here, +1, -1, +1/2, -1/2, since these are the most stable, i.e.,
their neighborhoods are characterized by smooth change.
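The path-around-a-point definition of the index translates directly into a numerical check. The sketch below is ours, not from the paper (Python with NumPy; the loop radius and step count are arbitrary choices): it walks a closed loop around a candidate point, accumulates the orientation change with each step wrapped into [-90°, +90°) because orientations are defined modulo 180°, and divides the total rotation by 360°, so ±1 and ±1/2 centers yield estimated indices of ±1.0 and ±0.5.

```python
import numpy as np

def topological_index(theta, cx, cy, radius=3, n_steps=64):
    """Estimate the topological index of an orientation field at (cx, cy).

    theta: 2-D array of preferred orientations in degrees, values in [0, 180).
    The index is the net rotation of the orientations along a closed loop
    around (cx, cy), divided by 360 degrees.
    """
    angles = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(angles)).astype(int), 0, theta.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(angles)).astype(int), 0, theta.shape[0] - 1)
    samples = theta[ys, xs]
    total = 0.0
    for i in range(n_steps):
        step = samples[(i + 1) % n_steps] - samples[i]
        # Orientations are defined mod 180 deg; wrap each step into [-90, +90).
        step = (step + 90.0) % 180.0 - 90.0
        total += step
    return total / 360.0
```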
[Figure 1 diagram: orientation fields around a +1 and a -1 singularity.]
figure 1. Topological singularities. A positive index indicates that the orientations rotate
in the same direction as a path taken around the singularity; a negative index indicates
the reverse rotation. Orientations rotate through 360° around ±1 centers, 180° around
±1/2 centers.
Cytochrome Oxidase Puffs
At topological singularities the change in orientation is discontinuous, which
violates the structure of a smoothly changing orientation map; modellers try to
minimize discontinuities in their models in order to satisfy the smoothness constraint.
Interestingly, in the upper layers of striate cortex of monkeys, zones with little or no
orientation selectivity have been discovered. These zones are notable for their high
cytochrome oxidase reactivity 24 and have been referred to as cytochrome oxidase puffs,
dots, spots, patches or blobs17,25,26,27. We will refer to them as puffs. If the organizing
centers of centric models are located in the cytochrome oxidase puffs then the discontinuities in the orientation map are effectively eliminated (but see below). Braitenberg
has indicated 28 that the +1 centers of his model should correspond to the puffs. Dow
and Bauer proposed a model 8 with +1 and -1 centers in alternating puffs. Gotz proposed
a similar model9 with alternating +1/2 and -1/2 centers in the puffs. The last two
models manage to eliminate all discontinuities from the interpuff zones, but they
assume a perfect rectangular lattice of cytochrome oxidase puffs.
A Set of Centric Models
There are two parameters for the models considered here. (1) Whether the positive singularities are placed in every puff or in alternate puffs; and (2) whether the
singularities are ±1's or ±1/2's. This gives four centric models (figure 2):
E1: +1 centers in puffs, -1 centers in the interpuff zones.
A1: both +1 and -1 centers in the puffs, interdigitated in a checkerboard fashion.
E1/2: +1/2 centers in the puffs, -1/2 centers in the interpuff zones.
A1/2: both +1/2 and -1/2 centers in the puffs, as in A1.
The E1 model corresponds to the Braitenberg model transposed to a rectangular array
rather than an hexagonal one, in accordance with the observed organization of the
cytochrome oxidase regions27. In fact, the rectangular version of the Braitenberg model
is pictured in figure 49 of ref. 27. The A1 model was originally proposed by Dow and
Bauer8 and is also pictured in an article by Mitchison29. The A1/2 model was proposed
by Gotz9. It should be noted that the E1 and A1 models are the same model rotated
and scaled a bit; the E1/2 and A1/2 models have the same relationship.
figure 2. The four centric models (E1, A1, E1/2, A1/2). Dark ellipses represent cytochrome
oxidase puffs. Dots in interpuff zones of E1 & E1/2 indicate singularities at those points.
Simulations
Simulated horizontal electrode recordings were made in the four models to compare their orientation drift rates with those of published recordings. In the computer
simulations (figure 2) the interpuff distances were chosen to correspond to histological
measurements27. Puff centers are separated by 500 µm along their long axes, 350 µm along
the short axes. The density of the arrays was chosen to approximate the sampling frequency observed in Hubel and Wiesel's horizontal electrode recording experiments 19,
about 20 cells per millimeter. Therefore the cell density of the simulation arrays was
about six times that shown in the figure.
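The construction used to fill these arrays is not spelled out in the text. A standard way to realize a prescribed set of singularities, sketched below under that assumption (ours; the lattice offsets and the grid scale are illustrative, not the exact arrays used), is to superpose the polar angle about each center weighted by the desired index, which gives a field with exactly that index at each center:

```python
import numpy as np

def centric_map(width, height, centers):
    """Orientation map (degrees, mod 180) with prescribed point singularities.

    centers: iterable of (x, y, k) where k is the desired topological index
    (+1, -1, +0.5, -0.5, ...). The field theta = sum_i k_i * phi_i, with
    phi_i the polar angle about center i, has index k_i at center i.
    """
    ys, xs = np.mgrid[0:height, 0:width].astype(float)
    theta = np.zeros((height, width))
    for cx, cy, k in centers:
        theta += k * np.degrees(np.arctan2(ys - cy, xs - cx))
    return theta % 180.0

# An E1-like patch: +1 centers on a 50 x 35 lattice of grid points
# (500 x 350 microns at an assumed 10 microns per grid step), with
# -1 centers at the interpuff positions.
centers = []
for i in range(3):
    for j in range(4):
        centers.append((25 + 50 * i, 18 + 35 * j, +1.0))
        centers.append((50 + 50 * i, 35 + 35 * j, -1.0))
theta = centric_map(160, 140, centers)
```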
All of the models produce simulated electrode data that qualitatively resemble
the published recording results, e.g., they contain reversals, and runs of constantly
changing orientations. The orientation drift rate and number of reversals vary in the
different models.
The models of figure 2 are shown in perfectly rectangular arrays. Some important characteristics of the models, such as the absence of discontinuities in interpuff
zones, are dependent on this regularity. However, the real arrangement of cytochrome
oxidase puffs is somewhat irregular, as in Horton's figure 3 (ref. 27). A small set of puffs from
the parafoveal region of Horton's figure was enlarged and each of the centric models
was embedded in this irregular array. The E1 model and a typical simulated electrode
track are shown in figure 3. Several problems are encountered when models developed
in a regular lattice are implemented in the irregular lattice of the real system; the
models have appreciably different properties. The -1 singularities in E1's interpuff
zones have been reduced to -1/2's; the A1 and A1/2 models now have some interpuff
discontinuities where before they had none.
Quantitative Comparisons
Measurement of the Orientation Drift Rate
There are two sets of centric models in the computer simulations: a set in the perfectly rectangular array (figure 2) and a set in the irregular puff array (as in figure 3).
At this point we can generate as many tracks in the simulation arrays as we wish.
How can this information be compared to the published records? The orientation drift
rate, or slope, is one basis for distinguishing between models. In real electrode tracks
however, the data are rather noisy, perhaps from the measuring process or from
inherent unevenness of the orientation map. The typical approach is to fit a straight
line and use the slope of this line. Reversals in the tracks require that lines be fit piecewise, the approach used by Hubel and Wiese1 19? Because of the unevenness of the data
it is not always clear what constitutes a reversal. Livingstone and Hubel 17 report that
the track in their figure 5 has only two reversals in 5 millimeters. Yet there seem to be
numerous microreversals between the 1st and 3rd mj11jmeter of their track. At what
point is a change in slope considered a true reversal rather than just noise?
The approach used here was to use a local slope measure and ignore the problem
of reversals - this permitted the fast calculation of slope by computer. A single electrode track, usually several millimeters long, was assigned a single slope, the average
of the derivative taken at each point of the track. Since these are discrete samples, the
local derivative must be approximated by taking measurements over a small neighborhood. How large should this neighborhood be? If too small it will be susceptible to
noise in the orientation measures, if too large it will "flatten out" true reversals. Slope
figure 3. A centric model in a realistic puff array (from ref. 27). A simulated electrode track
and resulting data are shown. Only the E1 model is shown here, but other models
were similarly embedded in this array.
measures using neighborhoods of several sizes were applied to six published horizontal
electrode tracks from the foveal and parafoveal upper layers of macaque striate cortex:
figures 5, 6, 7 from ref. 17, figure 16 from ref. 3, figure 1 from ref. 30. A neighborhood of 0.1mm,
which attempts to fit a line between virtually every pair of points, gave abnormally
high slopes. Larger neighborhoods tended to give lower slopes, especially to those
tracks which contained reversals. The smallest window that gave consistent measures
for all six tracks was 0.2mm; therefore this window was chosen for comparisons
between published data and the centric models. This measure gave an average slope of
285 degrees per millimeter in the six published samples of track data, compared to
Hubel & Wiesel's measure of 281 deg/mm for the penetrations in their 1974 paper19.
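One plausible reading of this measure is sketched below (ours; the sampling interval follows the roughly 20 cells per millimeter quoted above, and absolute local slopes are averaged on the assumption that signed slopes would let reversals cancel):

```python
import numpy as np

def track_slope(orientations, spacing_mm=0.05, window_mm=0.2):
    """Average local orientation drift rate (degrees/mm) along a track.

    orientations: preferred orientations (degrees, mod 180) sampled at
    uniform spacing_mm intervals along a simulated electrode track.
    """
    step = max(1, int(round(window_mm / spacing_mm)))
    slopes = []
    for i in range(len(orientations) - step):
        d = orientations[i + step] - orientations[i]
        d = (d + 90.0) % 180.0 - 90.0   # wrap: orientations are mod 180 deg
        slopes.append(abs(d) / (step * spacing_mm))
    return float(np.mean(slopes))
```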
Slope measures of the centric models
The slope measure was applied to several thousand tracks at random locations
and angles in the simulation arrays, and a slope was computed for each simulated electrode track. Average slopes of the models are shown in Table 1. Generally, models
with ±1 centers have higher slopes than those with ±1/2 centers; models with centers
in every puff have higher slopes than the alternate puff models. Thus E1 showed the
highest orientation drift rate, A1/2 the lowest, with A1 and E1/2 having intermediate
rates. The E1 model, in both the rectangular and irregular arrays, produced the most
realistic slope values.
TABLE 1: Average slopes of the centric models

Model    Rectangular array    Irregular array
E1            312                  289
A1            216                  216
E1/2          198                  202
A1/2          117                  144

Numbers are in degrees/mm. Slope measure (window = 0.2mm) applied to 6 published records yielded an average slope of 285 degrees/mm.
Discussion
Constraints on the Orientation Map
Our original definition of the orientation map permits each cell to have an orientation preference whose angle is completely independent of its neighbors. But this is
much too general. Looking at the results of tangential electrode penetrations, there are
two striking constraints in the data. The first of these is reflected in the smoothness of
the graphs. Orientation changes in a regular manner as the electrode moves horizontally through the upper layers: neighboring cells have similar orientation preferences.
Discontinuities do occur but are rare. The other constraint is the fact that the orientation is always changing with distance, although the rate of change may vary.
Sequences of constant orientation are very rare and when they do occur they never
carryon for any appreciable distance. This is one of the major reasons why the parallel stripe model is untenable. The two major constraints on the orientation map may
be put informally as follows:
1. The smoothness constraint: neighboring points have similar orientation
preferences.
2. The heterogeneity constraint: all orientations should be represented
within a small region of the cortical surface.
This second constraint is a bit stronger than the data imply. The experimental results
only show that the orientations change regularly with distance, not that all orientations must be present within a region. But this constraint is important with respect to
visual processing and the notion of hypercolumns 3?
These are opposing constraints: the first tends to minimize the slope, or orientation drift rate, while the second tends to maximize this rate. Thus the organization of
the orientation map is analogous to physical systems that exhibit "frustration", that is,
the elements must satisfy conflicting constraints31. One of the properties of such systems is that there are many near-optimal solutions, no one of which is significantly
better than the others. As a result, there are many plausible orientation maps: any map
that satisfies these two constraints will generate qualitatively plausible simulated electrode tracks. This points out the need for quantitative comparisons between models
and experimental results.
Centric models and the two constraints
What are some possible mechanisms of the constraints that generate the orientation map? Smoothness is a local property and could be attributed to the workings of
individual cells. It seems to be a fundamental property of cortex that adjacent cells
respond to similar stimuli. The heterogeneity requirement operates at a slightly larger
scale, that of a hypercolumn rather than a minicolumn. While the first constraint may
be modeled as a property of individual cells, the second constraint is distributed over a
region of cells. How can such a collection of cells insure that its members cycle
through all the required orientations? The topological singularities discussed earlier, by
definition, include all orientations within a restricted region. By distributing these
centers across the surface of the cortex, the heterogeneity constraint may be satisfied. In
fact, the amount of orientation drift rate is a function of the density of this distribution (i.e., more centers per unit area give higher drift rates).
It has been noted that the E1 and the A1 organizations are the same topological
model, but on different scales; the low drift rates of the A1 model may be increased by
increasing the density of the +1 centers to that of the E1 model. The same relationship
holds for the E1/2 and A1/2 models. It is also possible to obtain realistic orientation drift
rates by increasing the density of +1/2 centers, or by mixing +1's and +1/2's. However,
these alternatives increase the number of interpuff singularities. And given the possible
combinations of centers, it may be more than coincidental that a set of +1 centers at
just the spacing of the cytochrome oxidase regions results in realistic orientation drift
rates.
Cortical Architecture and Types of Circuitry
Thus far, we have not addressed the issue of how the preferred orientations are
generated. The mechanism is presently unknown, but attempts to depict it have traditionally been of a geometric nature, alluding to the dendritic morphology1,8,28,32. More
recently, computer simulations have shown that orientation-sensitive units may be
obtained from asymmetries in the receptive fields of afferents6, or developed using
simple Hebbian rules for altering synaptic weights5. That is, given appropriate network parameters, orientation tuning arises as an inherent property of some neural networks. Centric models propose a quite different approach in which an originally
untuned cell is "programmed" by a center located at some distance to respond to a
specific orientation. So, for an individual cell, does orientation develop locally, or is it
"imposed from without"? Both of these mechanisms may be in effect, acting synergistically to produce the final orientation map. The map may spontaneously form on the
embryonic cortex, but with cells that are nonspecific and broadly tuned. The organization imposed by the centers could have two effects on this incipient map. First, the
additional influence from centers could "tighten up" the tuning curves, making the
cells more specific. Second, the spacing of the centers specifies a distinct and uniform
scale for the heterogeneity of the map. An unsupervised developing orientation map
could have broad expanses of iso-orientation zones mixed with regions of rapidly
changing orientations. The spacing of the puffs, hence the architecture of the cortex,
insures that there is an appropriate variety of feature sensitive cells at each location.
This has implications for cortical functioning: given the distances of lateral connectivity, for a cell of a given orientation, we can estimate how many other iso-orientation zones of that same orientation the cell may be communicating with. For a
given orientation, the E1 model has twice as many iso-orientation zones per unit area
as A1.
Ever since the discovery of orientation-specific cells in visual cortex there have
been attempts to relate the distribution of cell selectivities to architectural features of
the cortex. Hubel and Wiesel originally suggested that the orientation slabs followed
the organization of the ocular dominance slabs15? The Braitenbergs suggested in their
original mode1 7 that the centers might be identified with the giant cells of Meynert.
Later centric models have identified the centers with the cytochrome oxidase regiOns,
again relating the orientation map to the ocular dominance array, since the puffs themselves are closely related to this array.
While biologists have habitually related form to function, workers in machine
vision have traditionally relied on general-purpose architectures to implement a
variety of algorithms related to the processing of visual information33. More recently,
many computer scientists designing artificial vision systems have turned their attention towards connectionist systems and neural networks. There is great interest in
how the sensitivities to different features and how the selectivities to different values
of those features may be embedded in the system architecture34,35,36. Linsker has proposed (this volume) that the development of feature spaces is a natural concomitance
of layered networks, providing a generic organizing principle for networks. Our work
deals with more specific cortical architectonics, but we are convinced that the study of
the cortical layout of feature maps will provide important insights for the design of
artificial systems.
References
1. D. Hubel & T. Wiesel, J. Physiol. (Lond.) 160, 106 (1962).
2. D. Hubel & T. Wiesel, J. Physiol. (Lond.) 195, 225 (1968).
3. D. Hubel & T. Wiesel, Proc. Roy. Soc. Lond. B 198, 1 (1977).
4. N. Swindale, Proc. Roy. Soc. Lond. B 215, 211 (1982).
5. R. Linsker, Proc. Natl. Acad. Sci. USA 83, 8779 (1986).
6. R. Soodak, Proc. Natl. Acad. Sci. USA 84, 3936 (1987).
7. V. Braitenberg & C. Braitenberg, Biol. Cyber. 33, 179 (1979).
8. B. Dow & R. Bauer, Biol. Cyber. 49, 189 (1984).
9. K. Gotz, Biol. Cyber. 56, 107 (1987).
10. P. Daniel & D. Whitteridge, J. Physiol. (Lond.) 159, 302 (1961).
11. B. Dow, R. Vautin & R. Bauer, J. Neurosci. 5, 890 (1985).
12. R.B. Tootell, M.S. Silverman, E. Switkes & R. DeValois, Science 218, 902 (1982).
13. D.C. Van Essen, W.T. Newsome & J.H. Maunsell, Vision Research 24, 429 (1984).
14. D. Hubel & T. Wiesel, J. Comp. Neurol. 158, 295 (1974).
15. D. Hubel & T. Wiesel, J. Comp. Neurol. 146, 421 (1972).
16. D. Hubel, Nature 299, 515 (1982).
17. M. Livingstone & D. Hubel, J. Neurosci. 4, 309 (1984).
18. R. Bauer, B. Dow, A. Snyder & R. Vautin, Exp. Brain Res. 50, 133 (1983).
19. D. Hubel & T. Wiesel, J. Comp. Neurol. 158, 267 (1974).
20. N. Swindale, in Models of the Visual Cortex, D. Rose & V. Dobson, eds. (Wiley, 1985), p. 452.
21. S. LeVay, D. Hubel & T. Wiesel, J. Comp. Neurol. 159, 559 (1975).
22. D. Hubel, T. Wiesel & M. Stryker, J. Comp. Neurol. 177, 361 (1978).
23. T. Elsdale & F. Wasoff, Wilhelm Roux's Archives 180, 121 (1976).
24. M.T. Wong-Riley, Brain Res. 162, 201 (1979).
25. A. Humphrey & A. Hendrickson, J. Neurosci. 3, 345 (1983).
26. E. Carroll & M. Wong-Riley, J. Comp. Neurol. 222, 1 (1984).
27. J. Horton, Proc. Roy. Soc. Lond. B 304, 199 (1984).
28. V. Braitenberg, in Models of the Visual Cortex, p. 479.
29. G. Mitchison, in Models of the Visual Cortex, p. 443.
30. C. Michael, Vision Research 25, 415 (1985).
31. S. Kirkpatrick, C. Gelatt & M. Vecchi, Science 220, 671 (1983).
32. S. Tieman & H. Hirsch, in Models of the Visual Cortex, p. 432.
33. D. Ballard & C. Brown, Computer Vision (Prentice-Hall, NJ, 1982).
34. D. Ballard, G. Hinton & T. Sejnowski, Nature 306, 21 (1983).
35. D. Ballard, Behav. and Brain Sci. 9, 67 (1986).
36. D. Walters, Proc. First Int. Conf. on Neural Networks (June 1987).
Context-Dependent Multiple
Distribution Phonetic Modeling with
MLPs
Michael Cohen
SRI International
Menlo Park, CA 94025
Horacio Franco
Nelson Morgan
SRI International
Intl. Computer Science Inst.
Berkeley, CA 94704
Victor Abrash
SRI International
David Rumelhart
Stanford University
Stanford, CA 94305
Abstract
A number of hybrid multilayer perceptron (MLP)/hidden Markov
model (HMM) speech recognition systems have been developed in
recent years (Morgan and Bourlard, 1990). In this paper, we present
a new MLP architecture and training algorithm which allows the
modeling of context-dependent phonetic classes in a hybrid
MLP/HMM framework. The new training procedure smooths MLPs
trained at different degrees of context dependence in order to obtain
a robust estimate of the context-dependent probabilities. Tests with
the DARPA Resource Management database have shown substantial
advantages of the context-dependent MLPs over earlier context-independent MLPs, and have shown substantial advantages of this
hybrid approach over a pure HMM approach.
1 INTRODUCTION
Hidden Markov models are used in most current state-of-the-art continuous-speech
recognition systems. A hidden Markov model (HMM) is a stochastic finite state
machine with two sets of probability distributions. Associated with each state is a
probability distribution over transitions to next states and a probability distribution
over output symbols (often referred to as observation probabilities). When applied to
continuous speech. the observation probabilities are typically used to model local
speech features such as spectra, and the transition probabilities are used to model the
displacement of these features through time. HMMs of individual phonetic segments
(phones) can be concatenated to model words and word models can be concatenated,
according to a grammar, to model sentences, resulting in a finite state representation
of acoustic-phonetic, phonological, and syntactic structure.
The HMM approach is limited by the need for strong statistical assumptions that are
unlikely to be valid for speech. Previous work by Morgan and Bourlard (1990) has
shown both theoretically and practically that some of these limitations can be overcome by using multilayer perceptrons (MLPs) to estimate the HMM state-dependent
observation probabilities. In addition to relaxing the restrictive independence assumptions of traditional HMMs, this approach results in a reduction in the number of
parameters needed for detailed phonetic modeling as a result of increased sharing of
model parameters between phonetic classes.
Recently, this approach was applied to the SRI-DECIPHER™ system, a state-of-the-art
continuous speech recognition system (Cohen et al., 1990), using an MLP to provide
estimates of context-independent posterior probabilities of phone classes, which were
then converted to HMM context-independent state observation likelihoods using
Bayes' rule (Renals et aI., 1992). In this paper, we describe refinements of the system
to model phonetic classes with a sequence of context-dependent probabilities.
Context-dependent modeling: The realization of individual phones in continuous
speech is highly dependent upon phonetic context. For example, the sound of the
vowel /ae/ in the words "map" and "tap" is different, due to the influence of the
preceding phone. These context effects are referred to as "coarticulation". Experience
with HMM technology has shown that using context-dependent phonetic models
improves recognition accuracy significantly (Schwartz et al., 1985). This is so because
acoustic correlates of coarticulatory effects are explicitly modeled, producing sharper
and less overlapping probability density functions for the different phone classes.
Context-dependent HMMs use different probability distributions for every phone in
every different relevant context. This practice causes problems that are due to the
reduced amount of data available to train phones in highly specific contexts, resulting
in models that are not robust and generalize poorly. The solution to this problem used
by many HMM systems is to train models at many different levels of context-specificity, including biphone (conditioned only on the phone immediately to the left
or right), generalized biphone (conditioned on the broad class of the phone to the left
or right), triphone (conditioned on the phone to the left and the right), generalized triphone, and word specific phone. Models conditioned by more specific contexts are
linearly smoothed with more general models. The "deleted interpolation" algorithm
(Jelinek and Mercer, 1980) provides linear weighting coefficients for the observation
probabilities with different degrees of context dependence by maximizing the likelihood of the different models over new, unseen data. This approach cannot be directly
extended to MLP-based systems because averaging the weights of two MLPs does not
result in an MLP with the average performance. It would be possible to use this
approach to average the probabilities that are output from different MLPs; however,
since the MLP training algorithm is a discriminant procedure, it would be desirable to
use a discriminant or error-based procedure to smooth the MLP probabilities together.
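For contrast, the linear smoothing being discussed can be sketched for the two-model case as follows (ours; the full deleted interpolation algorithm estimates weights over several levels of context specificity and ties them across models with similar training counts, while this version fits a single weight by EM on held-out events):

```python
def deleted_interpolation(p_specific, p_general, held_out, n_iters=50):
    """Fit a linear smoothing weight lam by EM on held-out data.

    p_specific, p_general: functions returning the probability each model
    assigns to a held-out event. The smoothed model is
    lam * p_specific + (1 - lam) * p_general.
    """
    lam = 0.5
    for _ in range(n_iters):
        expected = 0.0
        for event in held_out:
            ps, pg = p_specific(event), p_general(event)
            mix = lam * ps + (1.0 - lam) * pg
            if mix > 0.0:
                # Posterior that the specific model generated this event.
                expected += lam * ps / mix
        lam = expected / len(held_out)
    return lam
```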
An earlier approach to context-dependent phonetic modeling with MLPs was proposed
by Bourlard et al. (1992). It is based on factoring the context-dependent likelihood
and uses a set of binary inputs to the network to specify context classes. The number
of parameters and the computational load using this approach are not much greater
than those for the original context-independent net.
The context-dependent modeling approach we present here uses a different factoring
of the desired context-dependent likelihoods, a network architecture that shares the
input-to-hidden layer among the context-dependent classes to reduce the number of
parameters, and a training procedure that smooths networks with different degrees of
context-dependence in order to achieve robustness in probability estimates.
Multidistribution modeling: Experience with HMM-based systems has shown the
importance of modeling phonetic units with a sequence of distributions rather than a
single distribution. This allows the model to capture some of the dynamics of
phonetic segments. The SRI-DECIPHER™ system models most phones with a
sequence of three HMM states. Our initial hybrid system used only a single MLP output unit for each HMM phonetic class. This output unit supplied the probability for
all the states of the associated phone model.
Our initial attempt to extend the hybrid system to the modeling of a sequence of distributions for each phone involved increasing the number of output units from 69
(corresponding to phone classes) to 200 (corresponding to the states of the HMM
phone models). This resulted in an increase in word-recognition error rate by almost
30%. Experiments at ICSI had a similar result (personal communication). The higher
error rate seemed to be due to the discriminative nature of the MLP training algorithm. The new MLP, with 200 output units, was attempting to discriminate sub-phonetic classes, corresponding to HMM states. As a result, the MLP was attempting
to discriminate into separate classes acoustic vectors that corresponded to the same
phone and, in many cases, were very similar but were aligned with different HMM
states. There were likely to have been many cases in which almost identical acoustic
training vectors were labeled as a positive example in one instance and a negative
example in another for the same output class. The appropriate level at which to train
discrimination is likely to be the level of the phone (or higher) rather than the sub-phonetic HMM-state level (to which these output units correspond). The new architecture presented here accomplishes this by training separate output layers for each of
the three HMM states, resulting in a network trained to discriminate at the phone
level, while allowing three distributions to model each phone. This approach is combined with the context-dependent modeling approach, described in Section 3.
2 HYBRID MLP/HMM
The SRI-DECIPHER™ system is a phone-based, speaker-independent, continuous-speech recognition system, based on semicontinuous (tied Gaussian mixture) HMMs
(Cohen et al., 1990). The system extracts four features from the input speech
waveform, including 12th-order mel cepstrum, log energy, and their smoothed derivatives. The front end produces the 26 coefficients for these four features for each 10-ms frame of speech.
Training of the phonetic models is based on maximum-likelihood estimation using the
forward-backward algorithm (Levinson et al., 1983). Recognition uses the Viterbi
algorithm (Levinson et al., 1983) to find the HMM state sequence (corresponding to a
sentence) with the highest probability of generating the observed acoustic sequence.
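For reference, a textbook log-domain sketch of the Viterbi recursion follows (ours, not the DECIPHER™ implementation; in the hybrid system the observation scores are the scaled MLP outputs described below):

```python
import numpy as np

def viterbi(log_init, log_trans, log_obs):
    """Most likely HMM state sequence, computed in the log domain.

    log_init[j]: log P(state j at t=0); log_trans[i, j]: log P(j | i);
    log_obs[t, j]: log observation score of state j at frame t.
    """
    T, S = log_obs.shape
    delta = log_init + log_obs[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # scores[i, j]
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_obs[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```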
The hybrid MLP/HMM DECIPHER™ system substitutes (scaled) probability estimates
computed with MLPs for the tied-mixture HMM state-dependent observation
probability densities. No changes are made in the topology of the HMM system.
The initial hybrid system used an MLP to compute context-independent phonetic probabilities for the 69 phone classes in the DECIPHER™ system. Separate probabilities
were not computed for the different states of phone models. During the Viterbi recognition search, the probability of acoustic vector y_t given the phone class q_j, P(y_t | q_j),
is required for each HMM state. Since MLPs can compute Bayesian posterior probabilities, we compute the required HMM probabilities using
    P(y_t | q_j) = P(q_j | y_t) P(y_t) / P(q_j)                (1)
The factor P(q_j | y_t) is the posterior probability of phone class q_j given the input vector y_t at time t. This is computed by a backpropagation-trained (Rumelhart et al., 1986) three-layer feed-forward MLP. P(q_j) is the prior probability of phone class q_j and is estimated by counting class occurrences in the examples used to train the MLP. P(y_t) is common to all states for any given time frame, and can therefore be discarded in the Viterbi computation, since it will not change the optimal state sequence used to get the recognized string.
The MLP has an input layer of 234 units, spanning 9 frames (with 26 coefficients for each) of cepstra, delta-cepstra, log-energy, and delta-log-energy that are normalized to have zero mean and unit variance. The hidden layer has 1000 units, and the output layer has 69 units, one for each context-independent phonetic class in the DECIPHER™ system. Both the hidden and output layers consist of sigmoidal units.
The MLP is trained to estimate P (q. IYt ). where qj is the class associated with the
middle frame of the input window. Stochastic ~adient descent is used. The training
signal is provided by the HMM DECIPHER™ system previously trained by the forward-backward algorithm. Forced Viterbi alignments (alignments to the known word string) for every training sentence provide phone labels, among 69 classes, for every frame of speech. The target distribution is defined as 1 for the index corresponding to the phone class label and 0 for the other classes. A minimum relative entropy between posterior target distribution and posterior output distribution is used as a training criterion. With this training criterion and target distribution, assuming enough parameters in the MLP, enough training data, and that the training does not get stuck in a local minimum, the MLP outputs will approximate the posterior class probabilities P(qj|Yt) (Morgan and Bourlard, 1990). Frame classification on an independent cross-validation set is used to control the learning rate and to decide when to stop training as in Renals et al. (1992). The initial learning rate is kept constant until cross-validation performance increases less than 0.5%, after which it is reduced as 1/2^n (halved after each epoch) until performance increases no further.
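A sketch of one way to implement this schedule is shown below (Python; the function and state names are hypothetical, and the exact test in Renals et al. (1992) may differ slightly):

```python
def update_learning_rate(rate, cv_accuracy, state):
    """Keep the rate constant until cross-validation frame accuracy
    improves by less than 0.5% (absolute); from then on halve it after
    every epoch, and signal a stop once accuracy no longer improves."""
    prev = state.get("prev_accuracy", float("-inf"))
    improvement = cv_accuracy - prev
    state["prev_accuracy"] = cv_accuracy
    if improvement < 0.5:
        state["reducing"] = True
    if state.get("reducing"):
        rate /= 2.0
        if improvement <= 0.0:
            state["stop"] = True
    return rate, state
```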
3 CONTEXT-DEPENDENCE
Our initial implementation of context-dependent MLPs models generalized biphone
phonetic categories. We chose a set of eight left and eight right generalized biphone
phonetic-context classes, based principally on place of articulation and acoustic
characteristics. The context-dependent architecture is shown in Figure 1. A separate
output layer (consisting of 69 output units corresponding to 69 context-dependent
phonetic classes) is trained for each context. The context-dependent MLP can be
viewed as a set of MLPs, one for each context, which have the same input-to-hidden weights. Separate sets of context-dependent output layers are used to model context effects in different states of HMM phone models, thereby combining the modeling of multiple phonetic distributions and context-dependence. During training and recognition, speech frames aligned with first states of HMM phones are associated with the appropriate left context output layer, those aligned with last states of HMM phones are associated with the appropriate right context output layer, and middle states of three-state models are associated with the context-independent output layer. As a result, since the training proceeds (as before) as if each output layer were part of an independent net, the system learns discrimination between the different phonetic classes within an output layer (which now corresponds to a specific context and HMM-state position), but does not learn discrimination between occurrences of the same phone in different contexts or between the different states of the same HMM phone.
[Figure 1 shows the context-dependent MLP: 234 inputs feeding 1,000 hidden units, with separate left-context, context-independent, and right-context output layers.]
Figure 1: Context-Dependent MLP
3.1 CONTEXT-DEPENDENT FACTORING
In a context-dependent HMM, every state is associated with a specific phone class and context. During the Viterbi recognition search, P(Yt|qj,ck) (the probability of acoustic vector Yt given the phone class qj in the context class ck) is required for each state. We compute the required HMM probabilities using

    P(Yt|qj,ck) = P(qj|Yt,ck) P(Yt|ck) / P(qj|ck)                      (2)

where P(Yt|ck) can be factored again as

    P(Yt|ck) = P(ck|Yt) P(Yt) / P(ck)                                  (3)

The factor P(qj|Yt,ck) is the posterior probability of phone class qj given the input vector Yt and the context class ck. To compute this factor, we consider the conditioning on ck in (2) as restricting the set of input vectors only to those produced in the context ck. If M is the number of context classes, this implementation uses a set of M MLPs (all sharing the same input-to-hidden layer) similar to those used in the context-independent case except that each MLP is trained using only input-output examples obtained from the corresponding context, ck.
Every context-specific net performs a simpler classification than in the context-independent case because within a context the acoustics corresponding to different phones have less overlap.
P(ck|Yt) is computed by a second MLP. A three-layer feed-forward MLP is used which has 1000 hidden units and an output unit corresponding to each context class. P(qj|ck) and P(ck) are estimated by counting over the training examples. Finally, P(Yt) is common to all states for any given time frame, and can therefore be discarded in the Viterbi computation, since it will not change the optimal state sequence used to get the recognized string.
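The sketch below (Python; hypothetical argument names) combines Eqs. (2) and (3) into the context-dependent scaled likelihood used during the Viterbi search; as in the context-independent case, P(Yt) is dropped:

```python
def cd_scaled_likelihood(p_q_given_y_c, p_c_given_y, p_q_given_c, p_c):
    """Eqs. (2) and (3) combined, up to the common factor P(Yt):
    P(Yt|qj,ck) / P(Yt) = P(qj|Yt,ck) * P(ck|Yt) / (P(qj|ck) * P(ck)).
    p_q_given_y_c: output of the context-specific MLP for context ck
    p_c_given_y:   output of the second (context-classifying) MLP
    p_q_given_c, p_c: priors estimated by counting over training examples."""
    return p_q_given_y_c * p_c_given_y / (p_q_given_c * p_c)
```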
3.2 CONTEXT-DEPENDENT TRAINING AND SMOOTHING
We use the following method to achieve robust training of context-specific nets:
An initial context-independent MLP is trained, as described in Section 2, to estimate
the context-independent posterior probabilities over the N phone classes. After the
context-independent training converges, the resulting weights are used to initialize the
weights going to the context-specific output layers. Context-dependent training
proceeds by backpropagating error only from the appropriate output layer for each
training example. Otherwise, the training procedure is similar to that for the context-independent net, using stochastic gradient descent and a relative-entropy training criterion. Overall classification performance evaluated on an independent cross-validation
set is used to determine the learning rate and stopping point. Only hidden-to-output
weights are adjusted during context-dependent training. We can view the separate
output layers as belonging to independent nets, each one trained on a non-overlapping
subset of the original training data.
Every context-specific net would asymptotically converge to the context-conditioned posteriors P(qj|Yt,ck) given enough training data and training iterations. As a result of the initialization, the net starts estimating P(qj|Yt), and from that point it follows a
trajectory in weight space, incrementally moving away from the context-independent
parameters as long as classification performance on the cross-validation set improves.
As a result, the net retains useful information from the context-independent initial conditions. In this way, we perform a type of nonlinear smoothing between the pure
context-independent parameters and the pure context-dependent parameters.
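A minimal sketch of one context-dependent update consistent with this description is given below (Python; hypothetical structure — the actual training used the full DECIPHER™ pipeline and RAP hardware): only the output layer selected by the frame's context is updated, while the shared input-to-hidden weights stay frozen.

```python
import numpy as np

def cd_training_step(hidden_layer, W_out, frame, label, context, lr):
    """One stochastic-gradient step of the context-dependent phase.
    W_out[context] holds the hidden-to-output weights for one context,
    initialized from the converged context-independent network."""
    h = hidden_layer(frame)              # frozen input-to-hidden mapping
    z = W_out[context] @ h
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoidal output units
    target = np.zeros_like(p)
    target[label] = 1.0                  # 1-of-69 target distribution
    W_out[context] -= lr * np.outer(p - target, h)
```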
4 EVALUATION
Training and recognition experiments were conducted using the speaker-independent,
continuous-speech, DARPA Resource Management database. The vocabulary size is
998 words. Tests were run both with a word-pair (perplexity 60) grammar and with no
grammar. The training set for the HMM system and for the MLP consisted of the
3990 sentences that make up the standard DARPA speaker-independent training set for
the Resource Management task. The 600 sentences making up the Resource Management February 89 and October 89 test sets were used for cross-validation during both
the context-independent and context-dependent MLP training, and for tuning HMM
system parameters (e.g., word transition weight).
Table 1: Percent Word Error and Parameter Count with Word-Pair Grammar

          Feb91   Sep92a   Sep92b   # Parms
  CIMLP    5.8     10.9      9.5      300K
  CDMLP    4.7      7.6      6.6     1400K
  HMM      3.8     10.1      7.0     5500K
  MIXED    3.2      7.7      5.7     6100K
Table 2: Percent Word Error with No Grammar

          Feb91   Sep92a   Sep92b
  CIMLP   24.7     31.5     30.9
  CDMLP   18.4     27.1     24.9
  HMM     19.3     29.2     26.6
  MIXED   15.9     25.4     21.5
Table 1 presents word recognition error and number of system parameters for four different versions of the system, for three different Resource Management test sets using the word-pair grammar. Table 2 presents word recognition error for the corresponding tests with no grammar (the number of system parameters is the same as that shown in Table 1).
Comparing the context-independent MLP (CIMLP) to the context-dependent MLP (CDMLP) shows improvements with CDMLP in all six tests, ranging from a 15% to 30% reduction in word error. The CDMLP system combines multiple-distribution modeling with the context-dependent modeling technique. The CDMLP system performs better than the context-dependent HMM (CDHMM) system in five out of the six tests.
The MIXED system uses a weighted mixture of the logs of state observation likelihoods provided by the CIMLP and the CDHMM (Renals et al., 1992). This system shows the best recognition performance so far achieved with the DECIPHER™ system on the Resource Management database. In all six tests, it performs significantly better than the pure CDHMM system.
5 DISCUSSION
The results shown in Tables 1 and 2 suggest that MLP estimation of HMM observation likelihoods can improve the performance of standard HMMs. These results also suggest that systems that use MLP-based probability estimation make more efficient use of their parameters than standard HMM systems. In standard HMMs, most of the parameters in the system are in the observation distributions associated with the individual states of phone models. MLPs use representations that are more distributed in nature, allowing more sharing of representational resources and better allocation of representational resources based on training. In addition, since MLPs are trained to discriminate between classes, they focus on modeling boundaries between classes rather than class internals.
One should keep in mind that the reduction in memory needs that may be attained by replacing HMM distributions with MLP-based estimates must be traded off against increased computational load during both training and recognition. The MLP computations during training and recognition are much larger than the corresponding Gaussian mixture computations for HMM systems.
The results also show that the context-dependent modeling approach presented here
substantially improves performance over the earlier context-independent MLP. In
addition, the context-dependent MLP performed better than the context-dependent
HMM in five out of the six tests although the CDMLP is a far simpler system than the
CDHMM, with approximately a factor of four fewer parameters and modeling of only
generalized biphone phonetic contexts. The CDHMM uses a range of context-dependent models including generalized and specific biphone, triphone, and word-specific phone. The fact that context-dependent MLPs can perform as well or better
than context-dependent HMMs while using less specific models suggests that they may
be more vocabulary-independent, which is useful when porting systems to new tasks.
In the near future we will test the CDMLP system on new vocabularies.
The MLP smoothing approach described here can be extended to the modeling of finer
context classes. A hierarchy of context classes can be defined in which each context
class at one level is included in a broader class at a higher level. The context-specific
MLP at a given level in the hierarchy is initialized with the weights of a previously
trained context-specific MLP at the next higher level, and then finer context training
can proceed as described in Section 3.2.
The distributed representation used by MLPs is exploited in the context-dependent
modeling approach by sharing the input-to-hidden layer weights between all context
classes. This sharing substantially reduces the number of parameters to train and the
amount of computation required during both training and recognition. In addition, we
do not adjust the input-to-hidden weights during the context-dependent phase of training, assuming that the features provided by the hidden layer activations are relatively
low level and are appropriate for context-dependent as well as context-independent
modeling. The large decrease in cross-validation error observed going from contextindependent to context-dependent MLPs (30.6% to 21.4%) suggests that the features
learned by the hidden layer during the context-independent training phase, combined
with the extra modeling power of the context-specific hidden-to-output layers, were
adequate to capture the more detailed context-specific phone classes.
The best performance shown in Tables 1 and 2 is that of the MIXED system, which
combines CIMLP and CDHMM probabilities. The CDMLP probabilities can also be
combined with CDHMM probabilities; however, we hope that the planned extension
of our CDMLP system to finer contexts will lead to a better system than the MIXED
system without the need for such mixing, therefore resulting in a simpler system.
The context-dependent MLP shown here has more than 1,400,000 weights. We were
able to robustly train such a large network by using a cross-validation set to determine
when to stop training, sharing many of the weights between context classes, and
smoothing context-dependent with context-independent MLPs using the approach
described in Section 3.2. In addition, the Ring Array Processor (RAP) special purpose
hardware, developed at ICSI (Morgan et aI., 1992), allowed rapid training of such
large networks on large data sets. In order to reduce the number of weights in the
MLP, we are currently exploring alternative architectures which apply the smoothing
techniques described here to binary context inputs.
6 CONCLUSIONS
MLP-based probability estimation can be useful for both improving recognition accuracy and reducing memory needs for HMM-based speech recognition systems. These
benefits, however, must be weighed against increased computational requirements.
Context-Dependent Multiple Distribution Phonetic Modeling with MLPs
We have presented a new MLP architecture and training procedure for modeling
context-dependent phonetic classes with a sequence of distributions. Tests using the
DARPA Resource Management database have shown improvements in recognition
performance using this new approach, modeling only generalized biphone context
categories. These results suggest that (1) sharing input-to-hidden weights between context categories (and not retraining them during the context-dependent training phase) results in a hidden layer representation which is adequate for context-dependent as well as context-independent modeling, (2) error-based smoothing of context-independent and context-dependent weights is effective for training a robust model, and (3) using separate output layers and hidden-to-output weights corresponding to different context classes of different states of HMM phone models is adequate to capture acoustic effects which change throughout the production of individual phonetic segments.
Acknowledgements
The work reported here was partially supported by DARPA Contract MDA904-90-C-5253. Discussions with Hervé Bourlard were very helpful.
References
H. Bourlard, N. Morgan, C. Wooters, and S. Renals (1992), "CDNN: A Context-Dependent Neural Network for Continuous Speech Recognition," ICASSP, pp. 349-352, San Francisco.
M. Cohen, H. Murveit, J. Bernstein, P. Price, and M. Weintraub (1990), "The DECIPHER Speech Recognition System," ICASSP, pp. 77-80, Albuquerque, New Mexico.
F. Jelinek and R. Mercer (1980), "Interpolated estimation of Markov source parameters from sparse data," in Pattern Recognition in Practice, E. Gelsema and L. Kanal, Eds. Amsterdam: North-Holland, pp. 381-397.
S. Levinson, L. Rabiner, and M. Sondhi (1983), "An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition," Bell Syst. Tech. Journal 62, pp. 1035-1074.
N. Morgan and H. Bourlard (1990), "Continuous Speech Recognition Using Multilayer Perceptrons with Hidden Markov Models," ICASSP, pp. 413-416, Albuquerque, New Mexico.
N. Morgan, J. Beck, P. Kohn, J. Bilmes, E. Allman, and J. Beer (1992), "The Ring Array Processor (RAP): A Multiprocessing Peripheral for Connectionist Applications," Journal of Parallel and Distributed Computing, pp. 248-259.
S. Renals, N. Morgan, M. Cohen, and H. Franco (1992), "Connectionist Probability Estimation in the DECIPHER Speech Recognition System," ICASSP, pp. 601-604, San Francisco.
D. Rumelhart, G. Hinton, and R. Williams (1986), "Learning Internal Representations by Error Propagation," in Parallel Distributed Processing: Explorations of the Microstructure of Cognition, vol 1: Foundations, D. Rumelhart & J. McClelland, Eds. Cambridge: MIT Press.
R. Schwartz, Y. Chow, O. Kimball, S. Roucos, M. Krasner, and J. Makhoul (1985), "Context-dependent modeling for acoustic-phonetic recognition of continuous speech," ICASSP, pp. 1205-1208.
Excess Risk Bounds for the Bayes Risk using
Variational Inference in Latent Gaussian Models
Rishit Sheth and Roni Khardon
Department of Computer Science, Tufts University
Medford, MA, 02155, USA
[email protected] | [email protected]
Abstract
Bayesian models are established as one of the main successful paradigms for
complex problems in machine learning. To handle intractable inference, research
in this area has developed new approximation methods that are fast and effective.
However, theoretical analysis of the performance of such approximations is not
well developed. The paper furthers such analysis by providing bounds on the excess
risk of variational inference algorithms and related regularized loss minimization
algorithms for a large class of latent variable models with Gaussian latent variables.
We strengthen previous results for variational algorithms by showing that they
are competitive with any point-estimate predictor. Unlike previous work, we
provide bounds on the risk of the Bayesian predictor and not just the risk of the
Gibbs predictor for the same approximate posterior. The bounds are applied in
complex models including sparse Gaussian processes and correlated topic models.
Theoretical results are complemented by identifying novel approximations to
the Bayesian objective that attempt to minimize the risk directly. An empirical
evaluation compares the variational and new algorithms shedding further light on
their performance.
1 Introduction
Bayesian models are established as one of the main successful paradigms for complex problems
in machine learning. Since inference in complex models is intractable, research in this area is
devoted to developing new approximation methods that are fast and effective (Laplace/Taylor
approximation, variational approximation, expectation propagation, MCMC, etc.), i.e., these can
be seen as algorithmic contributions. Much less is known about theoretical guarantees on the
loss incurred by such approximations, either when the Bayesian model is correct or under model
misspecification.
Several authors provide risk bounds for the Bayesian predictor (that aggregates predictions over its
posterior and then predicts), e.g., see [15, 6, 12]. However, the analysis is specialized to certain
classification or regression settings, and the results have not been shown to be applicable to complex
Bayesian models and algorithms like the ones studied in this paper.
In recent work, [7] and [1] identified strong connections between variational inference [10] and
PAC-Bayes bounds [14] and have provided oracle inequalities for variational inference. As we show
in Section 3, similar results that are stronger in some aspects can be obtained by viewing variational
inference as performing regularized loss minimization. These results are an exciting first step, but
they are limited in two aspects. First, they hold for the Gibbs predictor (that samples a hypothesis
and uses it to predict) and not the Bayesian predictor and, second, they are only meaningful against
?weak? competitors. For example, the bounds go to infinity if the competitor is a point estimate
with zero variance. In addition, these results do not explicitly address hierarchical Bayesian models
where further development is needed to distinguish among different variational approximations in
the literature. Another important result by [11] provides relative loss bounds for generalized linear
models (GLM). These bounds can be translated to risk bounds and they hold against point estimates.
However, they are limited to the prediction of the true Bayesian posterior which is hard to compute.
In this paper we strengthen these theoretical results and, motivated by these, make additional
algorithmic and empirical contributions. In particular, we focus on latent Gaussian models (LGM)
whose latent variables are normally distributed. We extend the technique of [11] to derive agnostic
bounds for the excess risk of an approximate Bayesian predictor against any point estimate competitor.
We then apply these results to several models with two levels of latent variables, including generalized
linear models (GLM), sparse Gaussian processes (sGP) [17, 26] and correlated topic models (CTM)
[3] providing high probability bounds for risk. For CTM our results apply precisely to the variational
algorithm and for GLM and sGP they apply for a variant with a smoothed loss function.
Our results improve over [7, 1] by strengthening the bounds, showing that they can be applied directly
to the variational algorithm, and showing that they apply to the Bayesian predictor. On the other hand
they improve over [11] in analyzing the approximate inference algorithms and in showing how to
apply the bounds to a larger class of models.
Finally, viewing approximate inference as regularized loss minimization, our exploration of the
hierarchical models shows that there is a mismatch between the objective being optimized by
algorithms such as variational inference and the loss that defines our performance criterion. We
identify three possible objectives corresponding respectively to a ?simple variational approximation?,
the ?collapsed variational approximation?, and to a new algorithm performing direct regularized loss
minimization instead of optimizing the variational objective. We explore these ideas empirically in
CTM. Experimental results confirm that each variant is the ?best" for optimizing its own implicit
objective, and therefore direct loss minimization, for which we do not yet have a theoretical analysis,
might be the algorithm of choice. However, they also show that the collapsed approximation comes
close to direct loss minimization. The concluding section of the paper further discusses the results.
2 Preliminaries
2.1 Learning Model, Hypotheses and Risk
We consider the standard PAC setting where n samples are drawn i.i.d. according to an unknown joint
distribution D over the sample space z. This captures the supervised case where z = (x, y) and the
goal is to predict y|x. In the unsupervised case, z = y and we are simply modeling the distribution.
To treat both cases together we always include x in the notation but fix it to a dummy value in the
unsupervised case.
A learning algorithm outputs a hypothesis h which induces a distribution ph (y|x). One would
normally use this predictive distribution and an application-specific loss to pick the prediction.
Following previous work, we primarily focus on log loss, i.e., the loss of h on example (x*, y*) is ℓ(h, (x*, y*)) = −log p_h(y*|x*). In cases where this loss is not bounded, a smoothed and bounded variant of the log loss can be defined as ℓ̃(h, (x*, y*)) = −log((1−α) p_h(y*|x*) + α), where 0 < α < 1. We state our results w.r.t. log loss, and demonstrate, by example, how the smoothed log loss can be used. Later, we briefly discuss how our results hold more generally for losses that are convex in p.
We start by considering one-level (1L) latent variable models given by p(w) p(y|w, x) where p(y|w, x) = ∏_i p(y_i|w, x_i). For example, in Bayesian logistic regression, w is the hidden weight vector, the prior p(w) is given by a Normal distribution N(w|μ, Σ) and the likelihood term is p(y|w, x) = σ(y wᵀx) where σ() is the sigmoid function. A hypothesis h represents a distribution q(w) over w, where point estimates for w are modeled as delta functions. Regardless of how h is computed, the Bayesian predictor calculates a predictive distribution p_h(y|x) = E_{q(w)}[p(y|w, x)] and accordingly its risk is defined as

    r_Bay(q(w)) = E_{(x,y)∼D}[−log p_h(y|x)] = E_{(x,y)∼D}[−log E_{q(w)}[p(y|w, x)]].
Following previous work we also analyze the average risk of the Gibbs predictor which draws a
random w from q(w) and predicts using p(y|w, x). Although the Gibbs predictor is not an optimal
strategy, its analysis has been found useful in previous work and it serves as an intermediate step
in our results. Assuming the draw of w is done independently for each x we get: r_Gib(q(w)) = E_{(x,y)∼D}[E_{q(w)}[−log p(y|w, x)]]. Previous work has defined the Gibbs risk with expectations in reversed order. That is, the algorithm draws a single w and uses it for prediction on all examples. We find the one given here more natural. Some of our results require the two definitions to be equivalent, i.e., the conditions for Fubini's theorem must hold. We make this explicit in

Assumption 1. E_{(x,y)∼D}[E_{q(w)}[−log p(y|w, x)]] = E_{q(w)}[E_{(x,y)∼D}[−log p(y|w, x)]].

This is a relatively mild assumption. It clearly holds when y takes discrete values, where p(y|x, w) ≤ 1 implies that the log loss is positive and Fubini's theorem applies. In the case of continuous y, upper bounded likelihood functions imply that a translation of the loss function satisfies the condition of Fubini's theorem. For example, if p(y|x, w) = N(y|f(w, x), σ²) where σ² is a hyperparameter, then log p(y|x, w) ≤ B = −log(√(2π)) − log(σ). Therefore, −log p(y|x, w) + B ≥ 0 so that if we redefine¹ the loss by adding the constant B, then the loss is positive and Fubini's theorem applies.
More generally, we might need to enforce constraints on D, q(w), and/or p(y|x, w).
2.2 Variational Learners for Latent Variable Models
Approximate inference generally limits q(w) to some fixed family of distributions Q (e.g. the family
of normal distributions, or the family of products of independent components in the mean-field
approximation). Given a dataset S = {(x_i, y_i)}_{i=1}^n, we define the following general problem,

    q* = argmin_{q∈Q} (1/η) KL(q(w)‖p(w)) + L(w, S),                   (1)

where KL denotes Kullback–Leibler divergence. Standard variational inference uses η = 1 and L(w, S) = −Σ_i E_{q(w)}[log p(y_i|w, x_i)], and it is well known that (1) is the optimization of a lower bound on p(y). If −log p(y_i|w, x_i) is replaced with a general loss function, then (1) may no longer correspond to a lower bound on p(y). In any case, the output of (1), denoted by q*_Gib, is achieved via regularized cumulative-loss minimization (RCLM) which optimizes a sum of training set error and a regularization function. In particular, q*_Gib uses a KL regularizer and optimizes the Gibbs risk r_Gib in contrast to the Bayes risk r_Bay. This motivates some of the analysis in the paper.
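To make the RCLM view of (1) concrete, the sketch below (Python; a hedged illustration, not the authors' code) evaluates the standard variational objective for a Gaussian q(w), using the closed-form KL between Gaussians and a Monte Carlo estimate of the expected cumulative loss; `nll` is an assumed callback returning Σ_i −log p(y_i|w, x_i):

```python
import numpy as np

def variational_objective(m, V, mu, Sigma, nll, eta=1.0, n_samples=64):
    """(1/eta) * KL(N(m,V) || N(mu,Sigma)) + E_q[ sum_i -log p(y_i|w,x_i) ]."""
    M = len(m)
    Si = np.linalg.inv(Sigma)
    kl = 0.5 * (np.trace(Si @ V) + (mu - m) @ Si @ (mu - m)
                + np.log(np.linalg.det(Sigma) / np.linalg.det(V)) - M)
    L = np.linalg.cholesky(V)
    ws = m + (L @ np.random.randn(M, n_samples)).T   # samples from q(w)
    return kl / eta + np.mean([nll(w) for w in ws])
```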
Many interesting Bayesian models have two levels (2L) of latent variables given by p(w) p(f|w, x) ∏_i p(y_i|f_i) where both w and f are latent. Of course one can treat (w, f) as one set of parameters and apply the one-level model, but this does not capture the hierarchical structure of the model. The standard approach in the literature infers a posterior on w via a variational distribution q(w)q(f|w), and assumes that q(w) is sufficient for predicting p(y*|x*). We refer to this structural assumption, i.e., p(f*, f|w, x, x*) = p(f*|w, x*) p(f|w, x), as Conditional Independence. It holds in models where an additional factorization p(f|w, x) = ∏_i p(f_i|w, x_i) holds, e.g., in GLM, CTM. In the case of sparse Gaussian processes (sGP), Conditional Independence does not hold, but it is required in order to reduce the cubic complexity of the algorithm, and it has been used in all prior work on sGP. Assuming Conditional Independence, the definition of risk extends naturally from the
one-level model by writing p(y|w, x) = E_{p(f|w,x)}[p(y|f)] to get:

    r_2Bay(q(w)) = E_{(x,y)∼D}[−log E_{q(w)}[E_{p(f|w,x)}[p(y|f)]]],        (2)
    r_2Gib(q(w)) = E_{(x,y)∼D}[E_{q(w)}[−log E_{p(f|w,x)}[p(y|f)]]].        (3)
Even though Conditional Independence is used in prediction, the learning algorithm must decide how to treat q(f|w) during the optimization of q(w). The mean field approximation uses q(w)q(f) in the optimization. We analyze two alternatives that have been used in previous work. The approximation q(f|w) = p(f|w), used in sparse GP [26, 8, 23], is described by (1) with L(w, S) = −Σ_i E_{q(w)}[E_{p(f_i|w,x_i)}[log p(y_i|f_i)]]. We denote this by q*_2A and observe it is the RCLM solution for the risk defined as

    r_2A(q(w)) = E_{(x,y)∼D}[E_{q(w)}[E_{p(f|w,x)}[−log p(y|f)]]].          (4)
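The three risks differ only in where the logarithm sits relative to the two expectations; the Monte Carlo sketch below (Python, hypothetical helper functions) makes the contrast explicit for a single example:

```python
import numpy as np

def risk_estimates(w_samples, f_sampler, lik, x, y, n_f=100):
    """p[i, j] = p(y | f) with f ~ p(f | w_i, x); then
    r_2Bay: -log of the overall mean, r_2Gib: mean of -log of inner means,
    r_2A: mean of -log over all samples."""
    p = np.array([[lik(y, f_sampler(w, x)) for _ in range(n_f)]
                  for w in w_samples])
    r_bay = -np.log(p.mean())
    r_gib = np.mean(-np.log(p.mean(axis=1)))
    r_a = np.mean(-np.log(p))
    return r_bay, r_gib, r_a
```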
¹For the smoothed log loss, the translation can be applied prior to the re-scaling, i.e., −log((1−α) p(y|w, x)/max_{w,x,y} p(y|w, x) + α).
As shown by [25, 9, 22], alternatively, for each w, we can pick the optimal q(f|w) = p(f|w, S). Following [25] we call this a collapsed approximation. This leads to (1) with L(w, S) = −E_{q(w)}[log E_{p(f|w,x)}[∏_i p(y_i|f_i)]] and is denoted by q*_2Bj (joint expectation). For models where p(f|w) = ∏_i p(f_i|w), this simplifies to L(w, S) = −Σ_i E_{q(w)}[log E_{p(f_i|w,x_i)}[p(y_i|f_i)]], and we denote the algorithm by q*_2Bi (independent expectation). Note that q*_2Bi performs RCLM for the risk given by r_2Gib even if the factorization does not hold.

Finally, viewing approximate inference as performing RCLM, we observe a discrepancy between our definition of risk in (2) and the loss function being optimized by existing algorithms, e.g., variational inference. This perspective suggests direct loss minimization described by the alternative L(w, S) = −Σ_i log E_{q(w)}[E_{p(f_i|w,x_i)}[p(y_i|f_i)]] in (1) and which we denote q*_2D. In this case, q*_2D is a "posterior" but one for which we do not have a Bayesian interpretation.

Given the discussion so far, we can hope to get some analysis for regularized loss minimization where each of the algorithms implicitly optimizes a different definition of risk. Our goal is to identify good algorithms for which we can bound the definition of risk we care about, r_2Bay, as defined in (2).
3 RCLM
Regularized loss minimization has been analyzed for general hypothesis spaces and losses. For hypothesis space H and hypothesis h ∈ H we have loss function ℓ(h, (x, y)), and associated risk r(h) = E_{(x,y)∼D}[ℓ(h, (x, y))]. Now, given a regularizer R: H → R₊, a non-negative scalar η, and sample S, regularized cumulative loss minimization is defined as

    RCLM(H, ℓ, R, η, S) = argmin_{h∈H} [ (1/η) R(h) + Σ_i ℓ(h, (x_i, y_i)) ].    (5)

Theorem 1 ([20]²). Assume that the regularizer R(h) is β-strongly convex in h and the loss ℓ(h, (x, y)) is ρ-Lipschitz and convex in h, and let h*(S) = RCLM(H, ℓ, R, η, S). Then, for all h ∈ H,

    E_{S∼Dⁿ}[r(h*(S))] ≤ r(h) + (1/(ηn)) R(h) + 4ηρ²/β.

The theorem bounds the expectation of the risk. Using Markov's inequality we can get a high probability bound: with probability ≥ 1 − δ, r(h*(S)) ≤ r(h) + (1/δ)((1/(ηn)) R(h) + 4ηρ²/β). Tighter dependence on δ can be achieved for bounded losses using standard techniques. To simplify the presentation we keep the expectation version throughout the paper.
For this paper we specialize RCLM for Bayesian algorithms, that is, H corresponds to the parameter space for a parameterized family of (possibly degenerate) distributions, denoted Q, where q ∈ Q is a distribution over a base parameter space w.

We have already noted above that q*_Gib(w), q*_2Bi(w) and q*_2D(w) are RCLM algorithms. We can therefore get immediate corollaries for the corresponding risks (see supplementary material). Such results are already useful, but the convexity and ρ-Lipschitz conditions are not always easy to analyze or guarantee. We next show how to use recent ideas from PAC-Bayes analysis to derive a similar result for Gibbs risk with less strong requirements. We first develop the result for the one-level model. Toward this, define the loss and risk for individual base parameters as ℓ_W(w, (x, y)), and r_W(w) = E_D[ℓ_W(w, (x, y))], and the empirical estimate r̂_W(w, S) = (1/n) Σ_i ℓ_W(w, (x_i, y_i)). Following [7], let ψ(λ, n) = log E_{S∼Dⁿ}[E_{p(w)}[e^{λ(r_W(w) − r̂_W(w,S))}]] where λ is an additional parameter. Combining arguments from [20] with the use of the compression lemma [2] as in [7] we can derive the following bound (proof in supplementary material):

Theorem 2. For all q ∈ Q, E_{S∼Dⁿ}[r_Gib(q*_Gib(w))] ≤ r_Gib(q) + (1/(ηn)) KL(q‖p) + (1/λ) max_{q∈Q} KL(q‖p) + (1/λ) ψ(λ, n).
?
Corollary 3. For all q ? Q, ES?Dn [r2Gib (q2Bi
(w))]
1
1
max
KL
qkp
+
?(?,
n).
q?Q
?
?
2
?
r2Gib (q) +
1
?n KL
qkp +
[20] analyzed regularized average loss but the same proof steps with minor modifications yield the statement
for cumulative loss given here.
4
A similar result has already been derived by [1] without making the explicit connection to RCLM. However, the implied algorithm uses a "regularization factor" η which may not coincide with η = 1, whereas standard variational inference can be analyzed with Theorem 2 (or Corollary 3).

The work of [4, 7] showed how the ψ term can be bounded. Briefly, if ℓ_W(w, (x, y)) is bounded in [a, b], then ψ(λ, n) ≤ λ²(b−a)²/(2n); if ℓ_W(w, (x, y)) is not bounded, but the random variable r_W(w) − ℓ_W(w, (x, y)) is sub-Gaussian or sub-gamma, then ψ(λ, n) can be bounded with additional assumptions on the underlying distribution D. More details are in the supplementary material.
4 Concrete Bounds on Excess Risk in LGM

The LGM family is a special case of the two-level model where the prior p(w) over the M-dimensional parameter w is given by a Normal distribution. Following previous work we let Q be a family of Normal distributions. For the analysis we further restrict Q by placing bounds on the mean and covariance as follows: Q = {N(w|m, V) s.t. ‖m‖₂ ≤ B_m, λ_min(V) ≥ ε, λ_max(V) ≤ B_V} for some ε > 0. The KL divergence from q(w) = N(w|m, V) to p(w) = N(w|μ, Σ) is given by

    KL(q‖p) = ½ [ tr(Σ⁻¹V) + (μ−m)ᵀΣ⁻¹(μ−m) + log(|Σ|/|V|) − M ].

4.1 General Bounds on Excess Risk in LGM Against Point Estimates
First, we note that KL(q‖p) is bounded under a lower bound on the minimum eigenvalue of V (proof in supplementary material follows from linear algebra identities):

Lemma 4. Let B'_R = ½ ( (M B_V + ‖μ‖₂² + B_m²)/λ_min(Σ) + M log λ_max(Σ) − M ). For q ∈ Q,

    KL(q‖p) ≤ B_R = ½ ( (M B_V + ‖μ‖₂² + B_m²)/λ_min(Σ) + M log(λ_max(Σ)/ε) − M ) = B'_R − ½ M log ε.    (6)
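A quick numeric check of Lemma 4 is sketched below (Python, under the stated constraints, with μ = 0 and Σ = I so that λ_min(Σ) = λ_max(Σ) = 1):

```python
import numpy as np

rng = np.random.default_rng(0)
M, eps, B_V, B_m = 4, 1e-2, 2.0, 1.5
mu, Sigma = np.zeros(M), np.eye(M)

def kl(m, V):
    Si = np.linalg.inv(Sigma)
    return 0.5 * (np.trace(Si @ V) + (mu - m) @ Si @ (mu - m)
                  + np.log(np.linalg.det(Sigma) / np.linalg.det(V)) - M)

B_R = 0.5 * (M * B_V + B_m**2 - M) - 0.5 * M * np.log(eps)  # Eq. (6)

for _ in range(1000):
    m = rng.normal(size=M)
    m *= B_m / max(np.linalg.norm(m), B_m)         # enforce ||m||_2 <= B_m
    V = np.diag(rng.uniform(eps, B_V, size=M))     # eigenvalues in [eps, B_V]
    assert kl(m, V) <= B_R + 1e-9
```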
The risk bounds of the previous section do not allow for point estimate competitors because the KL portion is not bounded. We next generalize a technique from [11] showing that adding a little variance to a point estimate does not hurt too much. This allows us to derive the promised bounds. In the following, ε > 0 is a constant whose value is determined in the proof. For any w̄, we consider the ε-inflated distribution q_ε(w) = N(w|w̄, εI) and calculate the distribution's Gibbs risk w.r.t. a generic loss. Specifically, we consider the (1L or 2L) Gibbs risk r(q) = E_{(x,y)∼D}[E_{q(w)}[ℓ(w, (x, y))]] with ℓ: R^M × (X × Y) → R.

Lemma 5. If (i) ℓ(w, (x, y)) is continuously differentiable in w up to order 2, and (ii) λ_max(∇²_w ℓ(w, (x, y))) ≤ B_H, then for w̄ ∈ R^M and q(w) = N(w|w̄, εI)

    r_Gib(q(w)) = E_{(x,y)∼D}[E_{q(w)}[ℓ(w, (x, y))]] ≤ r_Gib(δ(w−w̄)) + ½ ε M B_H.    (7)

Proof. By the multivariable Taylor's theorem, for w̄ ∈ R^M

    ℓ(w, (x, y)) = ℓ(w̄, (x, y)) + (∇_w ℓ(w, (x, y))|_{w=w̄})ᵀ (w−w̄) + ½ (w−w̄)ᵀ (∇²_w ℓ(w, (x, y))|_{w=w̃}) (w−w̄)

where ∇_w ℓ(w, (x, y)) and ∇²_w ℓ(w, (x, y)) denote the gradient and Hessian, and w̃ = (1−t) w̄ + t w for some t ∈ [0, 1] where t is a function of w. Taking the expectation results in

    E_{q(w)}[ℓ(w, (x, y))] = ℓ(w̄, (x, y)) + ½ E_{q(w)}[(w−w̄)ᵀ (∇²_w ℓ(w, (x, y))|_{w=w̃}) (w−w̄)].    (8)

If the maximum eigenvalue of ∇²_w ℓ(w, (x, y)) is bounded uniformly by some B_H < ∞, then the second term of (8) is bounded above by ½ B_H E[(w−w̄)ᵀ(w−w̄)] = ½ ε M B_H. Taking expectation w.r.t. D yields the statement of the lemma.
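Lemma 5 can be sanity-checked numerically; for the quadratic loss ℓ(w) = ‖w‖², the Hessian is 2I, so B_H = 2 and the bound holds with equality (a sketch, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
M, eps, B_H = 5, 0.1, 2.0
w_bar = rng.normal(size=M)
w = w_bar + np.sqrt(eps) * rng.normal(size=(200_000, M))  # q = N(w_bar, eps I)
lhs = np.mean(np.sum(w**2, axis=1))                       # E_q[ ||w||^2 ]
rhs = np.sum(w_bar**2) + 0.5 * eps * M * B_H              # Eq. (7) RHS
print(lhs, rhs)  # lhs ≈ rhs up to Monte Carlo error
```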
Since Q includes ε-inflated distributions centered on w̄ where ‖w̄‖₂ ≤ B_m, we have the following.

Theorem 6 (Bound on Gibbs Risk Against Point Estimate Competitors). If (i) −log E_{p(f|w)}[p(y|f)] is continuously differentiable in w up to order 2, and (ii) λ_max(∇²_w (−log E_{p(f|w)}[p(y|f)])) ≤ B_H, then, for all w̄ with ‖w̄‖₂ ≤ B_m,

    E_{S∼Dⁿ}[r_2Gib(q*_2Bi(w))] ≤ r_2Gib(δ(w−w̄)) + Φ(B_H) + (1/λ) ψ(λ, n),
    Φ(B_H) ≜ (M/2)(1/n + 1/λ) ( (2/M) B'_R + 1 + log(B_H nλ/(n+λ)) ).        (9)
Proof. Using the distribution q = N(w|w̄, εI) in the RHS of Corollary 3 yields

    E_{S∼Dⁿ}[r_2Gib(q*_2Bi(w))] ≤ r_2Gib(q) + (1/(ηn)) KL(q‖p) + (1/λ) max_{q∈Q} KL(q‖p) + (1/λ) ψ(λ, n)
                                ≤ r_2Gib(δ(w−w̄)) + ½ ε M B_H − ½ A M log ε + A B'_R + (1/λ) ψ(λ, n)    (10)

where A = 1/(ηn) + 1/λ and we have used Lemma 4 and Lemma 5. Eq (10) is optimized when ε = A/B_H. Re-substituting the optimal ε in (10) yields

    E_{S∼Dⁿ}[r_2Gib(q*_2Bi(w))] ≤ r_2Gib(δ(w−w̄)) + ½ M (1/(ηn) + 1/λ) ( (2/M) B'_R + 1 − log((1/(ηn) + 1/λ)(1/B_H)) ) + (1/λ) ψ(λ, n).    (11)

Setting η = 1 yields the result.
The theorem calls for running the variational algorithm with constraints on eigenvalues of V. The fixed-point characterization [21] of the optimal solution in linear LGM implies that such constraints hold for the optimal solution. Therefore, they need not be enforced explicitly in these models.

For any distribution q(w) and function f(w) we have min_w [f(w)] ≤ E_{q(w)}[f(w)]. Therefore, the minimizer of the Gibbs risk is a point estimate, which with Theorem 6 implies:

Corollary 7. Under the conditions of Theorem 6, for all q(w) = N(w|m, V) with ‖m‖₂ ≤ B_m, E_{S∼Dⁿ}[r_2Gib(q*_2Bi(w))] ≤ r_2Gib(q(w)) + Φ(B_H) + (1/λ) ψ(λ, n).

More importantly, as another immediate corollary, we have a bound for the Bayes risk:

Corollary 8 (Bound on Bayes Risk Against Point Estimate Competitors). Under the conditions of Theorem 6, for all w̄ with ‖w̄‖₂ ≤ B_m,
E_{S∼Dⁿ}[r_2Bay(q*_2Bi(w))] ≤ r_2Bay(δ(w−w̄)) + Φ(B_H) + (1/λ) ψ(λ, n).

Proof. Follows from (a) ∀q, r_2Bay(q) ≤ r_2Gib(q) (Jensen's inequality), and (b) ∀w̄ ∈ R^M, r_2Bay(δ(w−w̄)) = r_2Gib(δ(w−w̄)).
The extension for Bayes risk in step (b) of the proof is only possible thanks to the extension to point estimates. As stated in the previous section, for bounded losses, ψ(λ, n) is bounded as λ²(b−a)²/(2n). As in [7], we can choose λ = √n or λ = n to obtain decay rates (log n)/√n or (log n)/n respectively, where the latter has a fixed non-decaying gap term (b−a)²/2. However, unlike [7], in our proof both cases are achievable with η = 1, i.e., for the variational algorithm. For example, using η = 1, λ = √n, the bounded loss, and prior with μ = 0 and Σ = M(M B_V + B_m²)I,

    Φ(B_H) + (1/λ) ψ(λ, n) ≤ (M/√n) ( 1 + log B_H + log n + log B_V + (1/M) B_m² + (b−a)²/(2M) ).
The results above are developed for the log loss but we can apply them more generally. Toward
this we note that Corollary 3 holds for an arbitrary loss, and Lemma 5, and Theorem 6 hold for
a sufficiently smooth loss with bounded 2nd derivative w.r.t. w. The conversion to Bayes risk in
Corollary 8 holds for any loss convex in p. Therefore, the result of Corollary 8 holds more generally
for any sufficiently smooth loss that has bounded 2nd derivative in w and that is convex in p. We
provide an application of this more general result in the next section.
4.2 Applications in Concrete Models

This section develops bounds on ψ and B_H for members of the 2L family.
CTM: For a document, the generative model for CTM first draws w ∼ N(μ, Σ), w ∈ R^{K−1} where {μ, Σ} are model parameters, and then maps this vector to the K-simplex with the logistic transformation, θ = h(w). For each position i in the document, the latent topic variable, f_i, is drawn from Discrete(θ), and the word y_i is drawn from a Discrete(β_{f_i,·}) where β denotes the topics and is treated as a parameter of the model. In this case p(f|w) can be integrated out analytically and the loss is −log( Σ_{k=1}^K β_{k,y} h_k(w) ). We have (proof in supplementary material):

Corollary 9. For CTM models where the parameters β_{k,y} are uniformly bounded away from 0, i.e., β_{k,y} ≥ ε̄ > 0, for all w̄ with ‖w̄‖₂ ≤ B_m,

    E_{S∼Dⁿ}[r_2Bay(q*_2Bi(w))] ≤ r_2Bay(δ(w−w̄)) + Φ(B_H) + λ(log ε̄)²/(2n), with B_H = 5.
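For concreteness, a sketch of the per-word CTM loss is shown below (Python); mapping w ∈ R^{K−1} to the simplex by appending a zero logit is one common convention for the logistic transformation, and may differ in detail from the authors' exact parameterization:

```python
import numpy as np

def ctm_word_loss(w, beta, y):
    """-log( sum_k beta[k, y] * h_k(w) ), with h the logistic map."""
    logits = np.append(w, 0.0)            # w in R^{K-1} -> K logits
    theta = np.exp(logits - logits.max())
    theta /= theta.sum()                  # h(w), a point on the K-simplex
    return -np.log(beta[:, y] @ theta)
```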
The following lemma is expressed in terms of log loss but also holds for smoothed log loss (proof in supplementary material):

Lemma 10. When f is a deterministic function of w, if (i) −log p(y|f(w, x)) is continuously differentiable in f up to order 2, and f(w, x) is continuously differentiable in w up to order 2, (ii) |∂(−log p(y|f))/∂f| ≤ c₁, (iii) |∂²(−log p(y|f))/∂f²| ≤ c₂, (iv) ‖∇_w f(w, x)‖₂ ≤ c^f₁, and (v) σ_max(∇²_w f(w, x)) ≤ c^f₂ (σ_max is the max singular value), then B_H = c₂ (c^f₁)² + c₁ c^f₂.
GLM: The bound of [11] for GLM was developed for exact Bayesian inference. The following corollary extends this to approximate inference through RCLM. In GLM, f = wᵀx, ‖∇_w f‖₂ = ‖x‖₂, and ∇²_w f = 0, and a bound on B_H is immediate from Lemma 10. In addition the smoothed loss is bounded: 0 ≤ ℓ̃ ≤ −log α. This implies

Corollary 11. For GLM, if (i) ℓ̃(w, (x, y)) = −log((1−α) p(y|f(w, x)) + α) is continuously differentiable in f up to order 2, and (ii) |∂²ℓ̃/∂f²| ≤ c, then, for all w̄ with ‖w̄‖₂ ≤ B_m,

    E_{S∼Dⁿ}[r̃_2Bay(q̃*_2Bi(w))] ≤ r̃_2Bay(δ(w−w̄)) + Φ(B_H) + λ(log α)²/(2n), with B_H = c max_{x∈X} ‖x‖₂².
We develop the bound c for the logistic and Normal likelihoods (see supplementary material). Let α' = α/(1−α). For the logistic likelihood σ(yf), we have c = 1/(16(α')²) + √3/(18α'). For the Gaussian likelihood (1/(√(2π)σ_Y)) exp(−(y−f)²/(2σ_Y²)), we have c = 1/(2πσ_Y⁴ e (α')²) + 1/(√(2π) σ_Y³ α').
The work of [7] has claimed³ a bound on the Gibbs risk for linear regression which should be compared to our result for the Gaussian likelihood. Their result is developed under the assumption that the Bayesian model specification is correct and in addition that x is generated from x ∼ N(0, σ_x²I). In contrast our result, using the smoothed loss, holds for arbitrary distributions D without the assumption of correct model specification.

³Denoting Δr_i(w) = r_W(w) − r̂_W(w, (x_i, y_i)) and f_i(w, n, λ) = E_{p(Δr_i(w))}[exp((λ/n) Δr_i(w))], the proof of Corollary 5 in [7] erroneously replaces E_{p(w)}[∏_i f_i(w, n, λ)] with ∏_i E_{p(w)}[f_i(w, n, λ)]. We are not aware of a correction of this proof which yields a correct bound for ψ without using a smoothed loss. Any such bound would, of course, be applicable with our Corollary 8.
Sparse GP: In the sparse GP model, the conditional is p(f|w, x) = N(f|a(x)ᵀw + b(x), σ²(x)) where a(x)ᵀ = K_{xU}K_{UU}⁻¹, b(x) = μ_x − K_{xU}K_{UU}⁻¹μ_U and σ²(x) = K_{xx} − K_{xU}K_{UU}⁻¹K_{Ux}, with μ denoting the mean function and K_{Ux}, K_{UU} denoting the kernel matrix evaluated at inputs (U, x) and (U, U) respectively. In the conjugate case, the likelihood is given by p(y|f) = N(y|f, σ_Y²) and integrating f out gives N(y|a(x)ᵀw + b(x), σ²(x) + σ_Y²). Using the smoothed loss, we obtain:

Corollary 12. For conjugate sparse GP, for all w̄ with ‖w̄‖₂ ≤ B_m,

    E_{S∼Dⁿ}[r̃_2Bay(q̃*_2Bi(w))] ≤ r̃_2Bay(δ(w−w̄)) + Φ(B_H) + λ(log α)²/(2n), with B_H = c max_{x∈X} ‖a(x)‖₂²,

where c = 1/(2πσ_Y⁴ e (α')²) + 1/(√(2π) σ_Y³ α').
Proof. The Hessian is given by

    ∇²_w ℓ̃(w, (x, y)) = (1/(N+α')²) ∇_w N (∇_w N)ᵀ − (1/(N+α')) ∇²_w N

where N denotes N(y|f(w), σ²(x) + σ_Y²), with f(w) = a(x)ᵀw + b(x). The gradient ∇_w N equals (∂N/∂f(w)) a(x) and the Hessian ∇²_w N equals (∂²N/∂f(w)²) a(x)a(x)ᵀ. Therefore,

    ∇²_w ℓ̃ = [ (1/(N+α')²)(∂N/∂f(w))² − (1/(N+α'))(∂²N/∂f(w)²) ] a(x)a(x)ᵀ = (∂²(−log((1−α)N + α))/∂f(w)²) a(x)a(x)ᵀ.

The result of Corollary 11 for Gaussian likelihood can be used to bound the 2nd derivative of the smoothed loss:

    ∂²(−log((1−α)N + α))/∂f(w)² ≤ 1/(2π(σ²(x)+σ_Y²)² e (α')²) + 1/(√(2π)(σ²(x)+σ_Y²)^{3/2} α') ≤ 1/(2πσ_Y⁴ e (α')²) + 1/(√(2π) σ_Y³ α') = c.

Finally, the eigenvalue of the rank-1 matrix c a(x)a(x)ᵀ is bounded by c max_{x∈X} ‖a(x)‖₂².
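The per-input quantities entering the sparse GP conditional can be computed as in the sketch below (Python; hypothetical argument names):

```python
import numpy as np

def sgp_conditional_terms(K_uu, K_ux, k_xx_diag, mu_x, mu_u):
    """a(x), b(x), sigma^2(x) for p(f|w,x) = N(a(x)^T w + b(x), sigma^2(x)).
    K_ux has one column per input x; k_xx_diag holds the prior variances k(x, x)."""
    A = np.linalg.solve(K_uu, K_ux)             # columns a(x) = K_uu^{-1} K_ux
    b = mu_x - A.T @ mu_u                       # b(x)
    s2 = k_xx_diag - np.sum(K_ux * A, axis=0)   # sigma^2(x)
    return A, b, s2
```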
Remark 1. We noted above that, for sGP, q*_2Bi does not correspond to a variational algorithm. The standard variational approach uses q*_2A and the collapsed bound uses q*_2Bj (but requires cubic time). It can be shown that q*_2Bi corresponds exactly to the fully independent training conditional (FITC) approximation for sGP [24, 16] in that their optimal solutions are identical. Our result can be seen to justify the use of this algorithm which is known to perform well empirically.
Finally, we consider binary classification in GLM with the convex loss function ℓ⁰(w, (x, y)) = (1/8)(y − (2p(y|w, x) − 1))². The proof of the following corollary is in the supplementary material:

Corollary 13. For GLM with p(y|w, x) = σ(y wᵀx), for all w̄ with ‖w̄‖₂ ≤ B_m,

    E_{S∼Dⁿ}[r⁰_2Bay(q⁰*_2Bi(w))] ≤ r⁰_2Bay(δ(w−w̄)) + Φ(B_H) + λ/(8n), with B_H = (5/16) max_{x∈X} ‖x‖₂².

4.3 Direct Application of RCLM to Conjugate Linear LGM
In this section we derive a bound for an algorithm that optimizes a surrogate of the loss directly. In particular, we consider the Bayes loss for linear LGM with conjugate likelihood p(y|f) = N(y|f, σ_Y²) where −log E_{q(w)}[E_{p(f|w)}[p(y|f)]] = −log N(y|aᵀm + b, σ² + σ_Y² + aᵀVa) and where a, b, and σ² are functions of x. This includes, for example, linear regression and conjugate sGP.

The proposed algorithm q*_2Ds performs RCLM with competitor set Θ = {(m, V): ‖m‖₂ ≤ B_m, V ∈ S₊₊, ‖V‖_F ≤ B_V}, regularizer R(m, V) = ½‖m‖₂² + ½‖V‖_F², η = 1/√n and the surrogate loss

    ℓ_surr(m, V) = ½ log(2π) + ½ (σ² + σ_Y² + aᵀVa) + ½ (y − aᵀm − b)²/(σ² + σ_Y² + aᵀVa).

With these definitions we can apply Theorem 1 to get (proof in supplementary material):

Theorem 14. With probability at least 1 − δ, r_2Bay(q*_2Ds) ≤ min_{q∈Q} r^surr_2Bay(q(w)) + (1/(δ√n)) ( ½B_m² + ½B_V² + 8(ρ_m² + ρ_V²) ), where ρ_m = (2/σ_Y²) max_{x∈X} ‖a‖₂ max_{x∈X,y∈Y,m} |y − aᵀm − b| and ρ_V = (1/(2σ_Y²)) max_{x∈X,y∈Y,m} ‖a‖₂² (1 + (y − aᵀm − b)²/σ_Y²).
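A sketch of the surrogate loss is given below (Python); replacing the log of the predictive variance by the variance itself is what makes ℓ_surr convex in (m, V) — the last term is a quadratic-over-linear composition, which is jointly convex for positive variance:

```python
import numpy as np

def surrogate_loss(m, V, a, b, y, sigma2, sigma2_Y):
    """l_surr(m, V) for one example (x, y); a, b, sigma2 depend on x."""
    var = sigma2 + sigma2_Y + a @ V @ a
    return 0.5 * np.log(2 * np.pi) + 0.5 * var + \
           0.5 * (y - a @ m - b) ** 2 / var
```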
5 Direct Loss Minimization
The results in this paper expose the fact that different algorithms are apparently implicitly optimizing criteria for different loss functions. In particular, q*_2A optimizes for r_2A, q*_2Bi optimizes for r_2Gib,
[Figure 1: three panels plotting cumulative test-set losses — Σ_{yi∈test} ℓ_2A(yi), Σ_{yi∈test} ℓ_2Gib(yi), and Σ_{yi∈test} ℓ_2Bay(yi) — against iteration (0–2000).]
Figure 1: Artificial data. Cumulative test set losses of different variational algorithms. x-axis is iteration. Mean ± 1σ of 30 trials are shown per objective. q*_2A in blue, q*_2Bi in green, q*_2D in red.
and q*_2D optimizes for r_2Bay. Even though we were able to bound r_2Bay of the q*_2Bi algorithm, it is interesting to check the performance of these algorithms in practice.
We present an experimental study comparing these algorithms on the correlated topic model (CTM)
that was described in the previous section. To explore the relation between the algorithms and their
performance we run the three algorithms and report their empirical risk on a test set, where the risk
is also measured in three different ways. Figure 1 shows the corresponding learning curves on an
artificial document generated from the model. Full experimental details and additional results on a
real dataset are given in the supplementary material.
We observe that at convergence each algorithm is best at optimizing its own implicit criterion. However, considering r_2Bay, the differences between the outputs of the variational algorithm q*_2Bi and direct loss minimization q*_2D are relatively small. We also see that at least in this case q*_2Bi takes longer to reach the optimal point for r_2Bay. Clearly, except for its own implicit criterion, q*_2A should not be used. This agrees with prior empirical work on q*_2A and q*_2Bi [22]. The current experiment shows the potential of direct loss optimization for improved performance but justifies the use of q*_2Bi both under correct model specification (artificial data) and when the model is incorrect (real data in supplement).
Preliminary experiments in sparse GP show similar trends. The comparison in that case is more complex because q*_2Bi is not the same as the collapsed variational approximation, which in turn requires cubic time to compute, and we additionally have the surrogate optimizer q*_2Ds. We defer a
full empirical exploration in sparse GP to future work.
6 Discussion
The paper provides agnostic learning bounds for the risk of the Bayesian predictor, which uses the
posterior calculated by RCLM, against the best single predictor. The bounds apply for a wide class
of Bayesian models, including GLM, sGP and CTM. For CTM our bound applies precisely to the
variational algorithm with the collapsed variational bound. For sGP and GLM the bounds apply
to bounded variants of the log loss. The results add theoretical understanding of why approximate
inference algorithms are successful, even though they optimize the wrong objective, and therefore
justify the use of such algorithms. In addition, we expose a discrepancy between the loss used
in optimization and the loss typically used in evaluation and propose alternative algorithms using
regularized loss minimization. A preliminary empirical evaluation in CTM shows the potential of
direct loss minimization but that the collapsed variational approximation q*_2Bi has the advantage of
strong theoretical guarantees and excellent empirical performance, both when the Bayesian model is
correct and under model misspecification.
Our results can be seen as a first step toward full analysis of approximate Bayesian inference methods.
One limitation is that the competitor class in our results is restricted to point estimates. While point
estimate predictors are optimal for the Gibbs risk, they are not optimal for Bayes predictors. In
addition, the bounds show that the Bayesian procedures will do almost as well as the best point
estimator. However, they do not show an advantage over such estimators, whereas one would expect
such an advantage. It would also be interesting to incorporate direct loss minimization within the
Bayesian framework. These issues remain an important challenge for future work.
Acknowledgments
This work was partly supported by NSF under grant IIS-1714440.
References
[1] Pierre Alquier, James Ridgway, and Nicolas Chopin. On the properties of variational approximations of Gibbs posteriors. JMLR, 17:1–41, 2016.
[2] Arindam Banerjee. On Bayesian bounds. In ICML, pages 81–88, 2006.
[3] David M. Blei and John D. Lafferty. Correlated topic models. In NIPS, pages 147–154, 2006.
[4] Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[6] Arnak S. Dalalyan and Alexandre B. Tsybakov. Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Machine Learning, 72:39–61, 2008.
[7] Pascal Germain, Francis Bach, Alexandre Lacoste, and Simon Lacoste-Julien. PAC-Bayesian theory meets Bayesian inference. In NIPS, pages 1876–1884, 2016.
[8] James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In AISTATS, pages 351–360, 2015.
[9] Matthew D. Hoffman and David M. Blei. Structured stochastic variational inference. In AISTATS, pages 361–369, 2015.
[10] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
[11] Sham M. Kakade and Andrew Y. Ng. Online bounds for Bayesian algorithms. In NIPS, pages 641–648, 2004.
[12] Alexandre Lacasse, François Laviolette, Mario Marchand, Pascal Germain, and Nicolas Usunier. PAC-Bayes bounds for the risk of the majority vote and the variance of the Gibbs classifier. In NIPS, pages 769–776, 2006.
[13] Moshe Lichman. UCI machine learning repository, 2013. http://archive.ics.uci.edu/ml.
[14] David A. McAllester. Some PAC-Bayesian theorems. In COLT, pages 230–234, 1998.
[15] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. JMLR, 4:839–860, 2003.
[16] Joaquin Quiñonero-Candela, Carl E. Rasmussen, and Ralf Herbrich. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[17] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[18] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[19] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4:107–194, 2012.
[20] Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.
[21] Rishit Sheth and Roni Khardon. A fixed-point operator for inference in variational Bayesian latent Gaussian models. In AISTATS, pages 761–769, 2016.
[22] Rishit Sheth and Roni Khardon. Monte Carlo structured SVI for non-conjugate models. arXiv:1309.6835, 2016.
[23] Rishit Sheth, Yuyang Wang, and Roni Khardon. Sparse variational inference for generalized Gaussian process models. In ICML, pages 1302–1311, 2015.
[24] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, pages 1257–1264, 2006.
[25] Yee Whye Teh, David Newman, and Max Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In NIPS, pages 1353–1360, 2006.
[26] Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, pages 567–574, 2009.
[27] Sheng-De Wang, Te-Son Kuo, and Chen-Fa Hsu. Trace bounds on the solution of the algebraic matrix Riccati and Lyapunov equation. IEEE Transactions on Automatic Control, 31:654–656, 1986.
Real-Time Bidding with Side Information
Arthur Flajolet
MIT, ORC
[email protected]
Patrick Jaillet
MIT, EECS, LIDS, ORC
[email protected]
Abstract
We consider the problem of repeated bidding in online advertising auctions when
some side information (e.g. browser cookies) is available ahead of submitting a bid
in the form of a d-dimensional vector. The goal for the advertiser is to maximize
the total utility (e.g. the total number of clicks) derived from displaying ads given
that a limited budget B is allocated for a given time horizon T . Optimizing the bids
is modeled as a contextual Multi-Armed Bandit (MAB) problem with a knapsack
constraint and a continuum of arms. We develop UCB-type algorithms that combine
two streams of literature: the confidence-set approach to linear contextual MABs
and the probabilistic bisection search method for stochastic root-finding. Under
mild assumptions on the underlying unknown distribution, we establish distribution-independent regret bounds of order $\tilde{O}(d \cdot \sqrt{T})$ when either $B = \infty$ or when $B$ scales linearly with $T$.
1 Introduction
On the internet, advertisers and publishers now interact through real-time marketplaces called ad
exchanges. Through them, any publisher can sell the opportunity to display an ad when somebody is
visiting a webpage he or she owns. Conversely, any advertiser interested in such an opportunity can
pay to have his or her ad displayed. In order to match publishers with advertisers and to determine
prices, ad exchanges commonly use a variant of second-price auctions which typically runs as follows.
Each participant is initially provided with some information about the person that will be targeted
by the ad (e.g. browser cookies, IP address, and operating system) along with some information
about the webpage (e.g. theme) and the ad slot (e.g. width and visibility). Based on this limited
knowledge, advertisers must submit a bid in a timely fashion if they deem the opportunity worthwhile.
Subsequently, the highest bidder gets his or her ad displayed and is charged the second-highest bid.
Moreover, the winner can usually track the customer?s interaction with the ad (e.g. clicks). Because
the auction is sealed, very limited feedback is provided to the advertiser if the auction is lost. In
particular, the advertiser does not receive any customer feedback in this scenario. In addition, the
demand for ad slots, the supply of ad slots, and the websurfers? profiles cannot be predicted ahead of
time and are thus commonly modeled as random variables, see [19]. These two features contribute to
making the problem of bid optimization in ad auctions particularly challenging for advertisers.
1.1 Problem statement and contributions
We consider an advertiser interested in purchasing ad impressions through an ad exchange. As
standard practice in the online advertising industry, we suppose that the advertiser has allocated a
limited budget B for a limited period of time, which corresponds to the next T ad auctions. Rounds,
indexed by t ? N, correspond to ad auctions in which the advertiser participates. At the beginning of
round t ? N, some contextual information about the ad slot and the person that will be targeted is
revealed to the advertiser in the form of a multidimensional vector $x_t \in \mathcal{X}$, where $\mathcal{X}$ is a subset of $\mathbb{R}^d$. Without loss of generality, the coordinates of $x_t$ are assumed to be normalized in such a way that
$\|x\|_\infty \leq 1$ for all $x \in \mathcal{X}$. Given $x_t$, the advertiser must submit a bid $b_t$ in a timely fashion. If $b_t$ is
larger than the highest bid submitted by the competitors, denoted by pt and also referred to as the
market price, the advertiser wins the auction, is charged pt , and gets his or her ad displayed, from
which he or she derives a utility vt . Monetary amounts and utility values are assumed to be normalized
in such a way that $b_t, p_t, v_t \in [0, 1]$. In this modeling, one of the competitors is the publisher himself
who submits a reserve price so that pt > 0. No one wins the auction if no bid is larger than the reserve
price. For the purpose of modeling, we suppose that ties are broken in favor of the advertiser but this
choice is arbitrary and by no means a limitation of the approach. Hence, the advertiser collects a
reward $r_t = v_t \cdot 1_{b_t \geq p_t}$ and is charged $c_t = p_t \cdot 1_{b_t \geq p_t}$ at the end of round $t$. Since the monetary
value of getting an ad displayed is typically difficult to assess, vt and ct may be expressed in different
units and thus cannot be compared directly in general, which makes the problem two-dimensional.
This is the case, for example, when the goal of the advertiser is to maximize the number of clicks, in
which case vt = 1 if the ad was clicked on and vt = 0 otherwise. We consider a stochastic setting
where the environment and the competitors are not fully adversarial. Specifically, we assume that,
at any round $t \in \mathbb{N}$, the vector $(x_t, v_t, p_t)$ is jointly drawn from a fixed probability distribution $\nu$
independently from the past. While this assumption may seem unnatural at first as the other bidders
also act as learning agents, it is motivated by the following observation. In our setting, we consider
that there are many bidders, each participating in a small subset of a large number of auctions, that
value ad opportunities very differently depending on the intended audience, the nature and topic of
the ads, and other technical constraints. Since bidders have no idea who they will be competing
against for a particular ad (because the auctions are sealed), they are naturally led to be oblivious
to the competition and to bid with the only objective of maximizing their own objective functions.
Given the variety of objective functions and the large number of bidders and ad auctions, we argue
that, by the law of large numbers, the process (xt , pt , vt )t=1,...,T that we experience as a bidder is
i.i.d., at least for a short period of time. Moreover, while the assumption that the distribution of
(xt , vt , pt ) is stationary may only be valid for a short period of time, advertisers tend to participate in
a large number of ad auctions per second so that T and B are typically large values, which motivates
an asymptotic study. We generically denote by (X, V, P ) a vector of random variables distributed
according to ?. We make a structural assumption about ?, which we use throughout the paper.
Assumption 1. The random variables V and P are conditionally independent given X. Moreover,
there exists ?? ? Rd such that E[V | X] = X T ?? and k?? k? ? 1.
Note, in particular, that Assumption 1 is satisfied if V and P are deterministic functions of X. The
first part of Assumption 1 is very natural since: (i) X captures all and only the information about the
ad shared to all bidders before submitting a bid and (ii) websurfers are oblivious to the ad auctions
that take place behind the scenes to determine which ad they will be presented with. The second
part of Assumption 1 is standard in the literature on linear contextual MABs, see [1] and [16], and
is arguably the simplest model capturing a dependence between xt and vt . When the advertiser?s
objective is to maximize the number of clicks, this assumption translates into a linear Click-Through
Rate (CTR) model.
We denote by $(\mathcal{F}_t)_{t \in \mathbb{N}}$ (resp. $(\bar{\mathcal{F}}_t)_{t \in \mathbb{N}}$) the natural filtration generated by $((x_t, v_t, p_t))_{t \in \mathbb{N}}$ (resp. $((x_{t+1}, v_t, p_t))_{t \in \mathbb{N}}$). Since the advertiser can keep bidding only so long as he or she does not run out of money or time, he or she can no longer participate in ad auctions at round $\tau^\star$, mathematically defined by:
$$\tau^\star = \min\Big(T + 1,\ \min\big\{t \in \mathbb{N} \;\big|\; \textstyle\sum_{\tau=1}^{t} c_\tau > B\big\}\Big).$$
Note that $\tau^\star$ is a stopping time with respect to $(\mathcal{F}_t)_{t \in \mathbb{N}}$. The difficulty for the advertiser when it comes to determining how much to bid at each round lies in the fact that the underlying distribution $\nu$ is initially unknown. This task is further complicated by the fact that the feedback provided to the advertiser upon bidding $b_t$ is partially censored: $p_t$ and $v_t$ are only revealed if the advertiser wins the auction, i.e. if $b_t \geq p_t$. In particular, when $b_t < p_t$, the advertiser can never evaluate how much
reward would have been obtained and what price would have been charged if he or she had submitted
a higher bid. The goal for the advertiser is to design a non-anticipating algorithm that, at any round t,
selects $b_t$ based on the information acquired in the past so as to keep the pseudo-regret, defined as
$$R_{B,T} = \mathrm{EROPT}(B, T) - \mathbb{E}\Big[\sum_{t=1}^{\tau^\star - 1} r_t\Big],$$
as small as possible, where EROPT (B, T ) is the maximum expected sum of rewards that can be
obtained by a non-anticipating oracle algorithm that has knowledge of the underlying distribution.
Here, an algorithm is said to be non-anticipating if the bid selection process does not depend on the
future observations. We develop algorithms with bounds on the pseudo-regret that do not depend on
the underlying distribution $\nu$, which are referred to as distribution-independent regret bounds. This entails studying the asymptotic behavior of $R_{B,T}$ when $B$ and $T$ go to infinity. For mathematical convenience, we consider that the advertiser keeps bidding even if he or she has run out of time or money, so that all quantities are well defined for any $t \in \mathbb{N}$. Of course, the rewards obtained for $t \geq \tau^\star$ are not taken into account in the advertiser's total reward when establishing regret bounds.
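To make the interaction protocol concrete, the following sketch simulates the repeated auction described above. It is illustrative only: the log-normal market prices, the Bernoulli click model inside nu_sample, and the policy interface (a callable with an observe method) are our own hypothetical choices, not part of the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, B = 5, 10_000, 500.0
theta_star = rng.uniform(0.0, 1.0 / d, size=d)   # guarantees x^T theta_star in [0, 1]

def nu_sample():
    """Draw (x, v, p) i.i.d.; one hypothetical instance of the distribution nu."""
    x = rng.uniform(0.0, 1.0, size=d)                         # context with ||x||_inf <= 1
    v = float(rng.random() < x @ theta_star)                  # E[V | X] = X^T theta_star
    p = float(np.clip(rng.lognormal(-1.5, 0.5), 0.05, 1.0))   # market price, p >= r > 0
    return x, v, p

def run(policy):
    """One run of the repeated auction; policy(x) returns a bid in [0, 1]."""
    budget, total_reward = B, 0.0
    for _ in range(T):
        x, v, p = nu_sample()
        b = policy(x)
        if b >= p:                      # auction won: pay p, observe v and p
            budget -= p
            total_reward += v
            policy.observe(x, v, p, won=True)
            if budget < 0:              # budget exceeded: round tau_star reached
                break
        else:                           # auction lost: feedback is censored
            policy.observe(x, None, None, won=False)
    return total_reward
```

Note how the observe call mirrors the censoring: $v_t$ and $p_t$ are passed to the learner only when the auction is won.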
Contributions We develop UCB-type algorithms that combine the ellipsoidal confidence set approach to linear contextual MAB problems with a special-purpose stochastic binary search procedure.
When the budget is unlimited or when it scales linearly with time, we show that, under additional technical assumptions on the underlying distribution $\nu$, our algorithms incur a regret $R_{B,T} = \tilde{O}(d \cdot \sqrt{T})$, where the $\tilde{O}$ notation hides logarithmic factors in $d$ and $T$. A key insight is that overbidding is not only essential to incentivize exploration in order to estimate $\theta^\star$, but is also crucial to find the optimal bidding strategy given $\theta^\star$, because bidding higher always provides more feedback in real-time bidding.
1.2 Literature review
To handle the exploration-exploitation trade-off inherent to MAB problems, an approach that has
proved to be particularly successful is the optimism in the face of uncertainty paradigm. The idea is
to consider all plausible scenarios consistent with the information collected so far and to select the
decision that yields the largest reward among all identified scenarios. Auer et al. [7] use this idea
to solve the standard MAB problem where decisions are represented by $K \in \mathbb{N}$ arms and pulling arm $k \in \{1, \cdots, K\}$ at round $t \in \{1, \cdots, T\}$ yields a random reward drawn from an unknown
distribution specific to this arm independently from the past. Specifically, Auer et al. [7] develop the
Upper Confidence Bound algorithm (UCB1), which consists in selecting the arm with the current
largest upper confidence bound on its mean reward, and establish near-optimal regret bounds. This
approach has since been successfully extended to a number of more general settings. Of most notable
interest to us are: (i) linear contextual MAB problems, where, for each arm k and at each round t,
some context $x_t^k$ is provided to the decision maker ahead of pulling any arm and the expected reward of arm $k$ is $\theta^{\star T} x_t^k$ for some unknown $\theta^\star \in \mathbb{R}^d$, and (ii) the Bandits with Knapsacks (BwK) framework, an extension of the standard MAB problem that allows modeling resource consumption.
UCB-type algorithms for linear contextual MAB problems were first developed in [6] and
later extended and improved upon in [1] and [16]. In this line of work, the key idea is to build, at any
round $t$, an ellipsoidal confidence set $C_t$ on the unknown parameter $\theta^\star$ and to pull the arm $k$ that maximizes $\max_{\theta \in C_t} \theta^T x_t^k$. Using this idea, Chu et al. [16] derive $\tilde{O}(\sqrt{d \cdot T})$ upper bounds on regret that hold with high probability, where the $\tilde{O}$ notation hides logarithmic factors in $d$ and $T$. While this result is not directly applicable in our setting, partly because of the knapsack constraint, we rely on this technique to estimate $\theta^\star$.
The real-time bidding problem considered in this work can be formulated as a BwK problem with contextual information and a continuum of arms. This framework, first introduced in its
full generality in [10] and later extended to incorporate contextual information in [11], [3], and
[2], captures resource consumption by assuming that pulling any arm incurs the consumption of
possibly many different limited resource types by random amounts. BwK problems are notoriously
harder to solve than standard MAB problems. For example, sublinear regret cannot be achieved
in general for BwK problems when an opponent is adversarially picking the rewards and the
amounts of resource consumption at each round, see [10], while this is possible for standard MAB
problems, see [8]. The problem becomes even more complex when some contextual information
is available at the beginning of each round as approaches developed for standard contextual MAB
problems and for BwK problems fail when applied to contextual BwK problems, see the discussion
in [11], which calls for the development of new techniques. Agrawal and Devanur [2] consider a
particular case where the expected rewards and the expected amounts of resource consumption are linear in the context and derive, in particular, $\tilde{O}(d \cdot \sqrt{T})$ bounds on regret when the initial
endowments of resources scale linearly with the time horizon T . These results do not carry over
to our setting because the expected costs, and in fact also the expected rewards, are not linear in
the context. To the best of our knowledge, the only prior works that deal simultaneously with
knapsack constraints and a non-linear dependence of the rewards and the amounts of resource
consumption on the contextual information are Agrawal et al. [3] and Badanidiyuru et al. [11]. When there is a finite number of arms $K$, they derive regret bounds that scale as $\tilde{O}(\sqrt{K \cdot T \cdot \ln(\sigma)})$, where $\sigma$ is the size of the set of benchmark policies. To some extent, at least when $\theta^\star$ is known, it is possible to apply these results, but this requires discretizing the set of valid bids $[0, 1]$, and the regret bounds thus derived scale as $\sim T^{2/3}$; see the analysis in [10], which is suboptimal.
On the modeling side, the most closely related prior works studying repeated ad auctions
under the lens of online learning are [25], [23], [17], [12], and [5]. Weed et al. [25] develop
algorithms to solve the problem considered in this work when no contextual information is
available and when there is no budget constraint, in which case the rewards are defined as
$r_t = (v_t - p_t) \cdot 1_{b_t \geq p_t}$, but in a more general adversarial setting where few assumptions are made concerning the sequence $((v_t, p_t))_{t \in \mathbb{N}}$. They obtain $O(\sqrt{T})$ regret bounds with an improved rate
O(ln(T )) in some favorable settings of interest. Inspired by [4], Tran-Thanh et al. [23] study a
particular case of the problem considered in this work when no contextual information is available
and when the goal is to maximize the number of impressions. They use a dynamic programming approach and claim to derive $O(\sqrt{T})$ regret bounds. Balseiro and Gur [12] identify near-optimal
bidding strategies in a game-theoretic setting assuming that each bidder has a black-box function
that maps the contextual information available before bidding to the expected utility derived from
displaying an ad (which amounts to assuming that ?? is known a priori in our setting). They show that
bidding an amount equal to the expected utility derived from displaying an ad normalized by a bid
multiplier, to be estimated, is a near-optimal strategy. We extend this observation to the contextual
settings. Compared to their work, the difficulty in our setting lies in estimating simultaneously the
bid multiplier and ?? . Finally, the authors of [5] and [17] take the point of view of the publisher
whose goal is to price ad impressions, as opposed to purchasing them, in order to maximize revenues
with no knapsack constraint. Cohen et al. [17] derive O(ln(d2 ? ln(T /d))) bounds on regret with
high probability with a multidimensional binary search.
On the technical side, our work builds upon and contributes to the stream of literature on
probabilistic bisection search algorithms. This class of algorithms was originally developed for
solving stochastic root finding problems, see [22] for an overview, but has also recently appeared in
the MAB literature, see [20]. Our approach is largely inspired by the work of Lei et al. [20] who
develop a stochastic binary search algorithm to solve a dynamic pricing problem with limited supply
but no contextual information, which can be modeled as a BwK problem with a continuum of arms.
Dynamic pricing problems with limited supply are often modeled as BwK problems in the literature,
see [24], [9], and [20], but, to the best of our knowledge, the availability of contextual information
about potential customers is never captured. Inspired by the technical developments introduced in
these works, our approach is to characterize a near-optimal strategy in closed form and to refine
our estimates of the (usually few) initially unknown parameters involved in the characterization as
we make decisions online, implementing this strategy using the latest estimates for the parameters.
However, the technical challenge in these works differs from ours in one key aspect: the feedback
provided to the decision maker is completely censored in dynamic pricing problems, since the
customers? valuations are never revealed, while it is only partially censored in real-time bidding, since
the market price is revealed if the auction is won. Making the most of this additional feature enables
us to develop a stochastic binary search procedure that can be compounded with the ellipsoidal
confidence set approach to linear contextual bandits in order to incorporate contextual information.
Organization The remainder of the paper is organized as follows. In order to increase the level of
difficulty progressively, we start by studying the situation of an advertiser with unlimited budget, i.e. $B = \infty$, in Section 2. Given that second-price auctions induce truthful bidding when the bidder has no budget constraint, this setting is easier since the optimal bidding strategy is to bid $b_t = x_t^T \theta^\star$ at any round $t \in \mathbb{N}$. This drives us to focus on the problem of estimating $\theta^\star$, which we do by means of ellipsoidal confidence sets. Next, in Section 3, we study the setting where $B$ is finite and scales linearly with the time horizon $T$. We show that a near-optimal strategy is to bid $b_t = x_t^T \theta^\star / \lambda^\star$ at any round $t \in \mathbb{N}$, where $\lambda^\star \geq 0$ is a scalar factor whose purpose is to spread the budget as evenly as possible, i.e. $\mathbb{E}[P \cdot 1_{X^T \theta^\star \geq \lambda^\star P}] = B/T$. Given this characterization, we first assume that $\theta^\star$ is known a priori to focus instead on the problem of computing an approximate solution $\lambda \geq 0$ to $\mathbb{E}[P \cdot 1_{X^T \theta^\star \geq \lambda P}] = B/T$ in Section 3.1. We develop a stochastic binary search algorithm for this purpose, which is shown to incur $\tilde{O}(\sqrt{T})$ regret under mild assumptions on the underlying distribution $\nu$. In Section 3.2, we bring the stochastic binary search algorithm together with the estimation method based on ellipsoidal confidence sets to tackle the general problem and derive $\tilde{O}(d \cdot \sqrt{T})$ regret bounds. All the proofs are deferred to the Appendix.
Notations For a vector $x \in \mathbb{R}^d$, $\|x\|_\infty$ refers to the $L_\infty$-norm of $x$. For a positive definite matrix $M \in \mathbb{R}^{d \times d}$ and a vector $x \in \mathbb{R}^d$, we define the norm $\|x\|_M$ as $\|x\|_M = \sqrt{x^T M x}$. For $x, y \in \mathbb{R}^d$, it is well known that the following Cauchy-Schwarz inequality holds: $|x^T y| \leq \|x\|_M \cdot \|y\|_{M^{-1}}$. We denote by $I_d$ the identity matrix in dimension $d$. We use the standard asymptotic notation $O(\cdot)$ when $T$, $B$, and $d$ go to infinity. We also use the notation $\tilde{O}(\cdot)$ that hides logarithmic factors in $d$, $T$, and $B$. For $x \in \mathbb{R}$, $(x)^+$ refers to the positive part of $x$. For a finite set $S$ (resp. a compact interval $I \subset \mathbb{R}$), $|S|$ (resp. $|I|$) denotes the cardinality of $S$ (resp. the length of $I$). For a set $S$, $\mathcal{P}(S)$ denotes the set of all subsets of $S$. Finally, for a real-valued function $f(\cdot)$, $\mathrm{supp}\, f(\cdot)$ denotes the support of $f(\cdot)$.
2 Unlimited budget
In this section, we suppose that the budget is unlimited, i.e. $B = \infty$, which implies that the rewards have to be redefined in order to directly incorporate the costs. For this purpose, we assume in this section that $v_t$ is expressed in monetary value and we redefine the rewards as $r_t = (v_t - p_t) \cdot 1_{b_t \geq p_t}$. Since the budget constraint is irrelevant when $B = \infty$, we use the notations $R_T$ and $\mathrm{EROPT}(T)$
in place of RB,T and EROPT (B, T ). As standard in the literature on MAB problems, we start by
analyzing the optimal oracle strategy that has knowledge of the underlying distribution. This will not
only guide the design of algorithms when $\nu$ is unknown but this will also facilitate the regret analysis.
The algorithm developed in this section as well as the regret analysis are extensions of the work of
Weed et al. [25] to the contextual setting.
Benchmark analysis It is well known that second-price auctions induce truthful bidding in the
sense that any participant whose only objective is to maximize the immediate payoff should always
bid what he or she thinks the good being auctioned is worth. The following result should thus come at
no surprise in the context of real-time bidding given Assumption 1 and the fact that each participant
is provided with the contextual information xt before the t-th auction takes place.
Lemma 1. The optimal non-anticipating strategy is to bid $b_t = x_t^T \theta^\star$ at any time period $t \in \mathbb{N}$ and we have $\mathrm{EROPT}(T) = \sum_{t=1}^{T} \mathbb{E}[(x_t^T \theta^\star - p_t)^+]$.
Lemma 1 shows that the problem faced by the advertiser essentially boils down to estimating $\theta^\star$. Since the bidder only gets to observe $v_t$ if the auction is won, this gives advertisers a clear incentive to overbid early on so that they can progressively refine their estimates downward as they collect more data points.
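Given access to samples from the distribution, the benchmark of Lemma 1 can be approximated by Monte Carlo. A minimal sketch, reusing the hypothetical nu_sample and theta_star from the earlier snippet:

```python
def estimate_eropt_per_round(n_samples=100_000):
    """Monte Carlo estimate of E[(X^T theta_star - P)^+], the oracle per-round reward."""
    total = 0.0
    for _ in range(n_samples):
        x, _, p = nu_sample()
        total += max(x @ theta_star - p, 0.0)   # truthful bidding wins iff x^T theta* >= p
    return total / n_samples

# EROPT(T) of Lemma 1 is then approximately T * estimate_eropt_per_round().
```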
Specification of the algorithm Following the approach developed in [6] for linear contextual MAB problems, we define, at any round $t$, the regularized least squares estimate of $\theta^\star$ given all the feedback acquired in the past, $\hat{\theta}_t = M_t^{-1} \sum_{\tau=1}^{t-1} 1_{b_\tau \geq p_\tau} \cdot v_\tau \cdot x_\tau$, where $M_t = I_d + \sum_{\tau=1}^{t-1} 1_{b_\tau \geq p_\tau} \cdot x_\tau x_\tau^T$, as well as the corresponding ellipsoidal confidence set:
$$C_t = \{\theta \in \mathbb{R}^d \;|\; \|\theta - \hat{\theta}_t\|_{M_t} \leq \beta_T\},$$
with $\beta_T = 2\sqrt{d \cdot \ln((1 + d \cdot T) \cdot T)}$. For the reasons mentioned above, we take the optimism in the face of uncertainty approach and bid:
$$b_t = \max\Big(0, \min\Big(1, \max_{\theta \in C_t} \theta^T x_t\Big)\Big) = \max\Big(0, \min\Big(1, \hat{\theta}_t^T x_t + \beta_T \cdot \sqrt{x_t^T M_t^{-1} x_t}\Big)\Big) \qquad (1)$$
at any round $t$. Since $C_t$ was designed with the objective of guaranteeing that $\theta^\star \in C_t$ with high probability at any round $t$, irrespective of the number of auctions won in the past, $b_t$ is larger than the optimal bid $x_t^T \theta^\star$ in general, i.e. we tend to overbid.
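A minimal sketch of the estimator update and the optimistic bid rule (1), assuming NumPy; the class name and interface are ours, and for clarity $M_t$ is re-inverted at every round.

```python
import numpy as np

class OptimisticBidder:
    """Optimism-in-the-face-of-uncertainty bidder implementing rule (1)."""

    def __init__(self, d, T):
        self.M = np.eye(d)              # M_t = I_d + sum over won rounds of x x^T
        self.y = np.zeros(d)            # sum over won rounds of v * x
        self.beta = 2.0 * np.sqrt(d * np.log((1.0 + d * T) * T))   # beta_T

    def __call__(self, x):
        M_inv = np.linalg.inv(self.M)
        theta_hat = M_inv @ self.y      # regularized least squares estimate
        ucb = theta_hat @ x + self.beta * np.sqrt(x @ M_inv @ x)
        return float(np.clip(ucb, 0.0, 1.0))   # b_t as in (1)

    def observe(self, x, v, p, won):
        if won:                         # feedback arrives only when the auction is won
            self.M += np.outer(x, x)
            self.y += v * x
```

In practice one would maintain $M_t^{-1}$ incrementally with the Sherman–Morrison formula instead of re-inverting, since each update is rank one.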
Regret analysis Concentration inequalities are intrinsic to any kind of learning and are thus key to deriving regret bounds in online learning. We start with the following lemma, a consequence of the results derived in [1] for linear contextual MABs, which shows that $\theta^\star$ lies in all the ellipsoidal confidence sets with high probability. Assumption 1 is key to establishing this result.
Lemma 2. We have $\mathbb{P}\big[\theta^\star \notin \cap_{t=1}^{T} C_t\big] \leq \frac{1}{T}$.
Equipped with Lemma 2 along with some standard results for linear contextual bandits, we are now
ready to extend the analysis of Weed et al. [25] to the contextual setting.
Theorem 1. Bidding according to (1) incurs a regret $R_T = \tilde{O}(d \cdot \sqrt{T})$.
Alternative algorithm with lazy updates As first pointed out by Abbasi-Yadkori et al. [1] in the
context of linear bandits, updating the confidence set Ct at every round is not only inefficient but also
unnecessary from a performance standpoint. Instead, we can perform batch updates, only updating Ct
using all the feedback collected in the past at rounds t for which det(Mt ) has increased by a factor at
least (1 + A) compared to the last time there was an update, for some constant A > 0 of our choosing.
This leads to an interesting trade-off between computational efficiency and deterioration of the regret
bound captured in our next result. For mathematical convenience, we keep the same notations as
when we were updating the confidence sets at every round. The only difference lies in the fact that
the bid submitted at time t is now defined as:
$$b_t = \max\Big(0, \min\Big(1, \max_{\theta \in C_{\tau_t}} \theta^T x_t\Big)\Big), \qquad (2)$$
where $\tau_t$ is the last round before round $t$ at which a batch update happened.
Theorem 2. Bidding according to (2) at any round $t$ incurs a regret $R_T = \tilde{O}(d \cdot \sqrt{(1 + A) \cdot T})$.
The fact that we can afford lazy updates will turn out to be important to tackle the general case in
Section 3.2 since we will only be able to update the confidence sets at most O(ln(T )) times overall.
3 Limited budget
In this section, we consider the setting where B is finite and scales linearly with the time horizon T .
We will need the following assumptions for the remainder of the paper.
Assumption 2. (a) $B/T = \alpha$ is a constant independent of any other relevant quantities.
(b) There exists $r > 0$, known to the advertiser, such that $p_t \geq r$ for all $t \in \mathbb{N}$.
(c) We have $\mathbb{E}[1/X^T \theta^\star] < \infty$.
(d) The random variable $P$ has a continuous conditional probability density function given the occurrence of the value $x$ of $X$, denoted by $f_x(\cdot)$, that is upper bounded by $\bar{L} < \infty$.
Conditions (a) and (b) are very natural in real-time bidding where the budget scales linearly with
time and where r corresponds to the minimum reserve price across ad auctions. Observe that
Condition (c) is satisfied, for example, when the probability of a click given any context is at least no smaller than a (possibly unknown) positive threshold. Condition (d) is motivated by technical considerations that will appear clear in the analysis. Note that $\bar{L}$ is not assumed to be known to the advertiser.
In order to increase the level of difficulty progressively and to prepare for the integration of
the ellipsoidal confidence sets, we first look at an artificial setting in Section 3.1 where we assume that there exists a known set $C \subset \mathbb{R}^d$ such that $\mathbb{E}[V|X] = \min(1, \max_{\theta \in C} X^T \theta)$ (as opposed to $\mathbb{E}[V|X] = X^T \theta^\star$) and such that $\theta^\star \in C$. This is to sidestep the estimation problem in a first step in
order to focus on determining an optimal bidding strategy given ?? . Next, in Section 3.2, we bring
together the methods developed in Section 2 and Section 3.1 to tackle the general setting.
3.1 Preliminary work
In this section, we make the following modeling assumption in lieu of $\mathbb{E}[V|X] = X^T \theta^\star$.
Assumption 3. There exists $C \subset \mathbb{R}^d$ such that $\mathbb{E}[V|X] = \min(1, \max_{\theta \in C} X^T \theta)$ and $\theta^\star \in C$.
Furthermore, we assume that $C$ is known to the advertiser initially. Of course, we recover the original setting introduced in Section 1 when $C = \{\theta^\star\}$ (since $V \in [0, 1]$ implies $\mathbb{E}[V|X] \in [0, 1]$) and $\theta^\star$ is known, but the level of generality considered here will prove useful to tackle the general case in Section 3.2 when we define $C$ as an ellipsoidal confidence set on $\theta^\star$. As in Section 2, we start by identifying a near-optimal oracle bidding strategy that has knowledge of the underlying distribution. This will not only guide the design of algorithms when $\nu$ is unknown but will also facilitate the regret analysis. We use the shorthand $g(X) = \min(1, \max_{\theta \in C} X^T \theta)$ throughout this section.
Benchmark analysis To bound the performance of any non-anticipating strategy, we will be interested in the mappings $\rho : (\lambda, C) \mapsto \mathbb{E}[P \cdot 1_{g(X) \geq \lambda P}]$ and $R : (\lambda, C) \mapsto \mathbb{E}[g(X) \cdot 1_{g(X) \geq \lambda P}]$ for $(\lambda, C) \in [0, 2/r] \times \mathcal{P}(\mathbb{R}^d)$. Note that $\rho(\lambda, C)$ is non-increasing in $\lambda$ and that, without loss of generality, we can restrict $\lambda$ to be no larger than $2/r$ because $\rho(\lambda, C) = \rho(2/r, C) = 0$ for $\lambda \geq 2/r$ since $P \geq r$.
Exploiting the structure of the MAB problem at hand, we can bound the sum of rewards obtained by
any non-anticipating strategy by the value of a knapsack problem where the weights and the values of
the items are drawn in an i.i.d. fashion from a fixed distribution. Since characterizing the expected
optimal value of a knapsack problem is a well-studied problem, see [21], we can derive a simple
upper bound on EROPT (B, T ) through this reduction, as we next show.
Lemma 3. We have $\mathrm{EROPT}(B, T) \leq T \cdot R(\lambda^\star, C) + \sqrt{T}/r + 1$, where $\lambda^\star \geq 0$ satisfies $\rho(\lambda^\star, C) = \alpha$, or $\lambda^\star = 0$ if no such solution exists (i.e. if $\mathbb{E}[P] < \alpha$), in which case $\rho(\lambda^\star, C) \leq \alpha$.
Lemma 3 suggests that, given $C$, a good strategy is to bid $b_t = \min(1, \min(1, \max_{\theta \in C} x_t^T \theta)/\lambda^\star)$ at any round $t$. The following result shows that we can actually afford to settle for an approximate solution $\lambda \geq 0$ to $\rho(\lambda, C) = \alpha$.
Lemma 4. For any $\lambda_1, \lambda_2 \geq 0$, we have: $|R(\lambda_1, C) - R(\lambda_2, C)| \leq 1/r \cdot |\rho(\lambda_1, C) - \rho(\lambda_2, C)|$.
Lemma 3 combined with Lemma 4 suggests that the problem of computing a near-optimal bidding strategy essentially reduces to a stochastic root-finding problem for the function $|\rho(\lambda, C) - \alpha|$. As it turns out, the fact that the feedback is only partially censored makes a stochastic bisection search possible with minimal assumptions on $\rho(\cdot, C)$. Specifically, we only need that $\rho(\cdot, C)$ be Lipschitz, while the technique developed in [20] for a dynamic pricing problem requires $\rho(\cdot, C)$ to be bi-Lipschitz. This is a significant improvement because this last condition is not necessarily satisfied uniformly for all confidence sets $C$, which will be important when we use a varying ellipsoidal confidence set instead of $C = \{\theta^\star\}$ in Section 3.2. Note, however, that Assumption 2 guarantees that $\rho(\cdot, C)$ is always Lipschitz, as we next show.
Lemma 5. $\rho(\cdot, C)$ is $\bar{L} \cdot \mathbb{E}[1/X^T \theta^\star]$-Lipschitz.
We stress that Conditions (c) and (d) of Assumption 2 are crucial to establish Lemma 5 but are not
relied upon anywhere else in this paper.
Specification of the algorithm At any round $t \in \mathbb{N}$, we bid:
$$b_t = \min\Big(1, \min\Big(1, \max_{\theta \in C} x_t^T \theta\Big)\Big/\lambda_t\Big), \qquad (3)$$
where $\lambda_t \geq 0$ is the current proxy for $\lambda^\star$. We perform a binary search on $\lambda^\star$ by repeatedly using the same value of $\lambda_t$ for consecutive rounds forming phases, indexed by $k \in \mathbb{N}$, and by keeping track of an interval, denoted by $I_k = [\underline{\lambda}_k, \bar{\lambda}_k]$. We start with phase $k = 0$ and we initially set $\underline{\lambda}_0 = 0$ and $\bar{\lambda}_0 = 2/r$. The length of the interval is shrunk by half at the end of every phase so that $|I_k| = (2/r)/2^k$ for any $k$. Phase $k$ lasts for $N_k = 3 \cdot 4^k \cdot \ln^2(T)$ rounds during which we set the value of $\lambda_t$ to $\underline{\lambda}_k$. Since $\underline{\lambda}_k$ will be no larger than $\lambda^\star$ with high probability, this means that we tend to overbid. Note that there are at most $\bar{k}_T = \inf\{n \in \mathbb{N} \mid \sum_{k=0}^{n} N_k \geq T\}$ phases overall. The
key observation enabling a bisection search approach is that, since the feedback is only partially censored, we can build, at the end of any phase $k$, an empirical estimate of $\rho(\cdot, C)$, which we denote by $\hat{\rho}_k(\cdot, C)$, for any $\lambda \geq \underline{\lambda}_k$, using all of the $N_k$ samples obtained during phase $k$. The decision rule used to update $I_k$ at the end of phase $k$ is specified next.
Algorithm 1: Interval updating procedure at the end of phase k
Data: $\bar{\lambda}_k$, $\underline{\lambda}_k$, $\epsilon_k = 3\sqrt{2\ln(2T)/N_k}$, and $\hat{\rho}_k(\lambda, C)$ for any $\lambda \geq \underline{\lambda}_k$
Result: $\bar{\lambda}_{k+1}$ and $\underline{\lambda}_{k+1}$
$\bar{\mu}_k = \bar{\lambda}_k$, $\underline{\mu}_k = \underline{\lambda}_k$;
while $\hat{\rho}_k(\bar{\mu}_k, C) > \alpha + \epsilon_k$ do
  $\bar{\mu}_k = \bar{\mu}_k + |I_k|$, $\underline{\mu}_k = \underline{\mu}_k + |I_k|$;
end
if $\hat{\rho}_k(\tfrac{1}{2}\bar{\mu}_k + \tfrac{1}{2}\underline{\mu}_k, C) \leq \alpha + \epsilon_k$ then
  $\bar{\lambda}_{k+1} = \tfrac{1}{2}\bar{\mu}_k + \tfrac{1}{2}\underline{\mu}_k$, $\underline{\lambda}_{k+1} = \underline{\mu}_k$;
else
  $\bar{\lambda}_{k+1} = \bar{\mu}_k$, $\underline{\lambda}_{k+1} = \tfrac{1}{2}\bar{\mu}_k + \tfrac{1}{2}\underline{\mu}_k$;
end
The splitting decision is trivial when $|\hat{\rho}_k(\tfrac{1}{2}\bar{\mu}_k + \tfrac{1}{2}\underline{\mu}_k, C) - \alpha| > \epsilon_k$ because we get a clear signal that dominates the stochastic noise to either increase or decrease the current proxy for $\lambda^\star$. The tricky situation is when $|\hat{\rho}_k(\tfrac{1}{2}\bar{\mu}_k + \tfrac{1}{2}\underline{\mu}_k, C) - \alpha| \leq \epsilon_k$, in which case the level of noise is too high to draw any conclusion. In this situation, we always favor a smaller value for $\lambda_k$ even if that means shifting the interval upwards later on if we realize that we have made a mistake (which is the purpose of the while loop). This is because we can always recover from underestimating $\lambda^\star$ since the feedback is only partially censored. Finally, note that the while loop of Algorithm 1 always ends after a finite number of iterations since $\hat{\rho}_k(2/r, C) = 0 \leq \alpha + \epsilon_k$.
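A sketch of the empirical expenditure estimate and of the interval update of Algorithm 1. The variable names are ours; samples holds one (g_x, p) pair per round of the phase, with p set to None when the auction was lost, which is harmless here because, as argued above, p is observed on every event with g(x) >= lambda * p once lambda is at least the phase's lower endpoint.

```python
import math

def rho_hat(samples, lam):
    """Empirical expenditure rho_hat_k(lam): mean of p * 1{g(x) >= lam * p}.

    Lost auctions (p is None) satisfy g(x) < lam * p for any lam >= the
    phase's lower endpoint, so dropping them leaves the sum unchanged.
    """
    return sum(p for g_x, p in samples if p is not None and g_x >= lam * p) / len(samples)

def update_interval(lam_low, lam_high, samples, alpha, T):
    """Interval update of Algorithm 1 (a sketch; notation is ours)."""
    N_k = len(samples)
    eps_k = 3.0 * math.sqrt(2.0 * math.log(2.0 * T) / N_k)
    width = lam_high - lam_low                        # |I_k|
    lo, hi = lam_low, lam_high
    while rho_hat(samples, hi) > alpha + eps_k:       # shift the interval upwards
        lo, hi = lo + width, hi + width
    mid = 0.5 * (lo + hi)
    if rho_hat(samples, mid) <= alpha + eps_k:        # keep the lower half
        return lo, mid
    return mid, hi                                    # keep the upper half
```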
Regret analysis Just like in Section 2, using concentration inequalities is essential to establish
regret bounds but this time we need uniform concentration inequalities. We use the Rademacher
complexity approach to concentration inequalities (see, for example, [13] and [15]) to control the
deviations of $\hat{\rho}_k(\lambda, C)$ uniformly.
Lemma 6. We have $\mathbb{P}\big[\sup_{\lambda \in [\underline{\lambda}_k,\, 2/r]} |\hat{\rho}_k(\lambda, C) - \rho(\lambda, C)| \leq \epsilon_k\big] \geq 1 - 1/T$, for any $k$.
Next, we bound the number of phases as a function of the time horizon.
Lemma 7. For $T \geq 3$, we have $\bar{k}_T \leq \ln(T + 1)$ and $4^{\bar{k}_T} \leq \frac{T}{\ln^2(T)} + 1$.
Using Lemma 6, we next show that the stochastic bisection search procedure correctly identifies
$\lambda \geq 0$ such that $|\rho(\lambda, C) - \rho(\lambda^\star, C)|$ is small with high probability, which is all we really need to lower bound the rewards accumulated in all rounds given Lemma 4.
Lemma 8. For $\mathcal{C} = \bar{L} \cdot \mathbb{E}[1/X^T \theta^\star]$ and provided that $T \geq \exp(8 r^2 / \mathcal{C}^2)$, we have:
$$\mathbb{P}\Big[\bigcap_{k=0}^{\bar{k}_T} \big\{ |\hat{\rho}_k(\underline{\lambda}_k, C) - \rho(\lambda^\star, C)| \leq 4\,\mathcal{C} \cdot |I_k|,\ |\rho(\underline{\lambda}_k, C) - \rho(\lambda^\star, C)| \leq 3\,\mathcal{C} \cdot |I_k| \big\}\Big] \geq 1 - \frac{2 \ln^2(T)}{T}.$$
In a last step, we show, using the above result and at the cost of an additive logarithmic term in the
regret bound, that we may assume that the advertiser participates in exactly T auctions. This enables
us to combine Lemma 4, Lemma 7, and Lemma 8 to establish a distribution-free regret bound.
Theorem 3. Bidding according to (3) incurs a regret $R_{B,T} = \tilde{O}\Big(\frac{\bar{L} \cdot \mathbb{E}[1/X^T \theta^\star]}{r^2} \cdot \sqrt{T} \cdot \ln(T)\Big)$.
Observe that Theorem 3 applies in particular when $\theta^\star$ is known to the advertiser initially, and that the regret bound derived does not depend on $d$.
3.2 General case
In this section, we combine the methods developed in Sections 2 and 3.1 to tackle the general case.
Specification of the algorithm At any round $t \in \mathbb{N}$, we bid:
$$b_t = \min\Big(1, \min\Big(1, \max_{\theta \in C_{\tau_t}} x_t^T \theta\Big)\Big/\lambda_t\Big), \qquad (4)$$
where $\tau_t$ is defined in the last paragraph of Section 2 and $\lambda_t \geq 0$ is specified below. We use the bisection search method developed in Section 3.1 as a subroutine in a master algorithm that also runs in phases. Master phases are indexed by $q = 0, \cdots, Q$ and a new master phase starts whenever $\det(M_t)$ has increased by a factor at least $(1 + A)$ compared to the last time there was an update, for some $A > 0$ of our choosing. By construction, the ellipsoidal confidence set used during the $q$-th master phase is fixed so that we can denote it by $C_q$. During the $q$-th master phase, we run the bisection search method described in Section 3.1 from scratch for the choice $C = C_q$ in order to identify a solution $\lambda_{q,\star} \geq 0$ to $\rho(\lambda_{q,\star}, C_q) = \alpha$ (or $\lambda_{q,\star} = 0$ if no solution exists). Thus, $\lambda_t$ is a proxy for $\lambda_{q,\star}$ during the $q$-th master phase. This bisection search lasts for $\bar{k}_q$ phases and stops as soon as we move on to a new master phase. Hence, there are at most $\bar{k}_q \leq \bar{k}_T = \inf\{n \in \mathbb{N} \mid \sum_{k=0}^{n} N_k \geq T\}$ phases during the $q$-th master phase. We denote by $\underline{\lambda}_{q,k}$ the lower end of the interval used at the $k$-th phase of the bisection search run during the $q$-th master phase.
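Putting the pieces together, a skeleton of the master algorithm might look as follows. It reuses update_interval and math from the previous sketch; the methods optimistic_value (returning the maximum of $\theta^T x$ over the current confidence set) and confidence_set_changed (true right after a batch update of the ellipsoid) are hypothetical names for the lazy-update machinery sketched in Section 2, not functions defined in the paper.

```python
def master_bidding_loop(bidder, T, alpha, r, env_sample):
    """Skeleton of the master algorithm of Section 3.2 (illustrative only)."""
    lam_low, lam_high = 0.0, 2.0 / r    # bisection interval, reset each master phase
    samples, k = [], 0                  # feedback and phase index within the master phase
    for _ in range(T):
        x, v, p = env_sample()
        g_x = min(1.0, bidder.optimistic_value(x))   # min(1, max_{theta in C_q} x^T theta)
        b = min(1.0, g_x / lam_low) if lam_low > 0 else 1.0   # bid rule (4)
        won = b >= p
        bidder.observe(x, v if won else None, p if won else None, won)
        samples.append((g_x, p if won else None))
        if bidder.confidence_set_changed():          # new master phase: restart bisection
            lam_low, lam_high, samples, k = 0.0, 2.0 / r, [], 0
        elif len(samples) >= 3 * 4**k * math.log(T) ** 2:   # phase k over after N_k rounds
            lam_low, lam_high = update_interval(lam_low, lam_high, samples, alpha, T)
            samples, k = [], k + 1
```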
Regret analysis First we show that there can be at most $O(d \cdot \ln(T \cdot d))$ master phases overall.
Lemma 9. We have $Q \leq \bar{Q} = d \cdot \ln(T \cdot d)/\ln(1 + A)$ almost surely.
Lemma 9 is important because it implies that the bisection searches run long enough to be able to identify sufficiently good approximate values for $\lambda_{q,\star}$. Note that our approach is "doubly" optimistic since both $\underline{\lambda}_{q,k} \leq \lambda_{q,\star}$ and $\theta^\star \in C_q$ hold with high probability at any point in time. At a high level, the regret analysis goes as follows. First, just like in Section 3.1, we show, using Lemma 8 and at the cost of an additive logarithmic term in the final regret bound, that we may assume that the advertiser participates in exactly $T$ auctions. Second, we show, using the analysis of Theorem 2, that we may assume that the expected per-round reward obtained during phase $q$ is $\mathbb{E}[\min(1, \max_{\theta \in C_q} x_t^T \theta)]$ (as opposed to $x_t^T \theta^\star$) at any round $t$, up to an additive term of order $\tilde{O}(d \cdot \sqrt{T})$ in the final regret bound. Third, we note that Theorem 3 essentially shows that the expected per-round reward obtained during phase $q$ is $R(\lambda_{q,\star}, C_q)$, up to an additive term of order $\tilde{O}(\sqrt{T})$ in the final regret bound. Finally, what remains to be done is to compare $R(\lambda_{q,\star}, C_q)$ with $R(\lambda^\star, \{\theta^\star\})$, which is done using Lemmas 2 and 3.
Theorem 4. Bidding according to (4) incurs a regret $R_{B,T} = \tilde{O}\Big(d \cdot \frac{\bar{L} \cdot \mathbb{E}[1/X^T \theta^\star]}{r^2} \cdot f(A) \cdot \sqrt{T}\Big)$, where $f(A) = 1/\ln(1 + A) + \sqrt{1 + A}$.
4 Concluding remark
An interesting direction for future research is to characterize achievable regret bounds, in particular
through the derivation of lower bounds on regret. When there is no budget limit and no contextual
information, Weed et al. [25] provide a thorough characterization with rates ranging from $\Theta(\ln(T))$ to $\Theta(\sqrt{T})$, depending on whether a margin condition on the underlying distribution is satisfied.
These lower bounds carry over to our more general setting and, as a result, the dependence of our
regret bounds with respect to T cannot be improved in general. It is however unclear whether the
dependence with respect to d is optimal. Based on the lower bounds established by Dani et al. [18]
for linear stochastic bandits, a model which is arguably closer to our setting than that of Chu et al.
[16] because of the need to estimate the bid multiplier $\lambda^\star$, we conjecture that a linear dependence on
d is optimal but this calls for more work. Given that the contextual information available in practice
is often high-dimensional, developing algorithms that exploit the sparsity of the data in a similar
fashion as done in [14] for linear contextual MAB problems is also a promising research direction.
In this paper, observing that general BwK problems with contextual information are notoriously
hard to solve, we exploit the structure of real-time bidding problems to develop a special-purpose
algorithm (a stochastic binary search combined with an ellipsoidal confidence set) to get optimal
regret bounds. We believe that the ideas behind this special-purpose algorithm could be adapted for
other important applications such as contextual dynamic pricing with limited supply.
Acknowledgments
Research funded in part by the Office of Naval Research (ONR) grant N00014-15-1-2083.
References
[1] Abbasi-Yadkori, Y., Pál, D., and Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. In Adv. Neural Inform. Processing Systems, pages 2312–2320.
[2] Agrawal, S. and Devanur, N. (2016). Linear contextual bandits with knapsacks. In Adv. Neural Inform. Processing Systems, pages 3450–3458.
[3] Agrawal, S., Devanur, N. R., and Li, L. (2016). An efficient algorithm for contextual bandits with knapsacks, and an extension to concave objectives. In Proc. 29th Annual Conf. Learning Theory, pages 4–18.
[4] Amin, K., Kearns, M., Key, P., and Schwaighofer, A. (2012). Budget optimization for sponsored search: Censored learning in MDPs. In Proc. 28th Conf. Uncertainty in Artificial Intelligence, pages 54–63.
[5] Amin, K., Rostamizadeh, A., and Syed, U. (2014). Repeated contextual auctions with strategic buyers. In Adv. Neural Inform. Processing Systems, pages 622–630.
[6] Auer, P. (2002). Using confidence bounds for exploitation-exploration trade-offs. J. Machine Learning Res., 3(Nov):397–422.
[7] Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002a). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256.
[8] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002b). The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77.
[9] Babaioff, M., Dughmi, S., Kleinberg, R., and Slivkins, A. (2012). Dynamic pricing with limited supply. In Proc. 13th ACM Conf. Electronic Commerce, pages 74–91.
[10] Badanidiyuru, A., Kleinberg, R., and Slivkins, A. (2013). Bandits with knapsacks. In Proc. 54th IEEE Annual Symp. Foundations of Comput. Sci., pages 207–216.
[11] Badanidiyuru, A., Langford, J., and Slivkins, A. (2014). Resourceful contextual bandits. In Proc. 27th Annual Conf. Learning Theory, volume 35, pages 1109–1134.
[12] Balseiro, S. and Gur, Y. (2017). Learning in repeated auctions with budgets: Regret minimization and equilibrium. In Proc. 18th ACM Conf. Economics and Comput., pages 609–609.
[13] Bartlett, P. and Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. J. Machine Learning Res., 3(Nov):463–482.
[14] Bastani, H. and Bayati, M. (2015). Online decision-making with high-dimensional covariates. Working Paper.
[15] Boucheron, S., Bousquet, O., and Lugosi, G. (2005). Theory of classification: A survey of some recent advances. ESAIM: Probability and Statist., 9:323–375.
[16] Chu, W., Li, L., Reyzin, L., and Schapire, R. (2011). Contextual bandits with linear payoff functions. In J. Machine Learning Res. - Proc., volume 15, pages 208–214.
[17] Cohen, M., Lobel, I., and Leme, R. P. (2016). Feature-based dynamic pricing. In Proc. 17th ACM Conf. Economics and Comput., pages 817–817.
[18] Dani, V., Hayes, T., and Kakade, S. (2008). Stochastic linear optimization under bandit feedback. In Proc. 21st Annual Conf. Learning Theory, pages 355–366.
[19] Ghosh, A., Rubinstein, B. I. P., Vassilvitskii, S., and Zinkevich, M. (2009). Adaptive bidding for display advertising. In Proc. 18th Int. Conf. World Wide Web, pages 251–260.
[20] Lei, Y., Jasin, S., and Sinha, A. (2015). Near-optimal bisection search for nonparametric dynamic pricing with inventory constraint. Working Paper.
[21] Lueker, G. (1998). Average-case analysis of off-line and on-line knapsack problems. Journal of Algorithms, 29(2):277–305.
[22] Pasupathy, R. and Kim, S. (2011). The stochastic root-finding problem: Overview, solutions, and open questions. ACM Trans. Modeling and Comput. Simulation, 21(3):19.
[23] Tran-Thanh, L., Stavrogiannis, C., Naroditskiy, V., Robu, V., Jennings, N. R., and Key, P. (2014). Efficient regret bounds for online bid optimisation in budget-limited sponsored search auctions. In Proc. 30th Conf. Uncertainty in Artificial Intelligence, pages 809–818.
[24] Wang, Z., Deng, S., and Ye, Y. (2014). Close the gaps: A learning-while-doing algorithm for single-product revenue management problems. Operations Research, 62(2):318–331.
[25] Weed, J., Perchet, V., and Rigollet, P. (2016). Online learning in repeated auctions. In Proc. 29th Annual Conf. Learning Theory, volume 49, pages 1562–1583.
6,747 | 7,102 | Saliency-based Sequential Image Attention with
Multiset Prediction
Sean Welleck
New York University
[email protected]
Jialin Mao
New York University
[email protected]
Kyunghyun Cho
New York University
[email protected]
Zheng Zhang
New York University
[email protected]
Abstract
Humans process visual scenes selectively and sequentially using attention. Central
to models of human visual attention is the saliency map. We propose a hierarchical
visual architecture that operates on a saliency map and uses a novel attention
mechanism to sequentially focus on salient regions and take additional glimpses
within those regions. The architecture is motivated by human visual attention, and
is used for multi-label image classification on a novel multiset task, demonstrating
that it achieves high precision and recall while localizing objects with its attention.
Unlike conventional multi-label image classification models, the model supports
multiset prediction due to a reinforcement-learning based training process that
allows for arbitrary label permutation and multiple instances per label.
1 Introduction
Humans can rapidly process complex scenes containing multiple objects despite having limited
computational resources. The visual system uses various forms of attention to prioritize and selectively
process subsets of the vast amount of visual input [6]. Computational models and various forms of
psychophysical and neuro-biological evidence suggest that this process may be implemented using
various "maps" that topographically encode the relevance of locations in the visual field [17, 39, 13].
Under these models, visual input is compiled into a saliency-map that encodes the conspicuity
of locations based on bottom-up features, computed in a parallel, feed-forward process [20, 17].
Top-down, goal-specific relevance of locations is then incorporated to form a priority map, which is
then used to select the next target of attention [39]. Thus processing a scene with multiple attentional
shifts may be interpreted as a feed-forward process followed by sequential, recurrent stages [23].
Furthermore, the allocation of attention can be separated into covert attention, which is deployed to
regions without eye movement and precedes eye movements, and overt attention associated with
an eye movement [6]. Despite their evident importance to human visual attention, the notions of
incorporating saliency to decide attentional targets, integrating covert and overt attention mechanisms,
and using multiple, sequential shifts while processing a scene have not been fully addressed by
modern deep learning architectures.
Motivated by the model of Itti et al. [17], we propose a hierarchical visual architecture that operates
on a saliency map computed by a feed-forward process, followed by a recurrent process that uses
a combination of covert and overt attention mechanisms to sequentially focus on relevant regions
and take additional glimpses within those regions. We propose a novel attention mechanism for
implementing the covert attention. Here, the architecture is used for multi-label image classification. Unlike conventional multi-label image classification models, this model can perform multiset
classification due to the proposed reinforcement-learning based training.
2 Related Work
We first introduce relevant concepts from biological visual attention, then contextualize work in
deep learning related to visual attention, saliency, and hierarchical reinforcement learning (RL).
We observe that current deep learning models either exclusively focus on bottom-up, feed-forward
attention or overt sequential attention, and that saliency has traditionally been studied separately from
object recognition.
2.1 Biological Visual Attention
Visual attention can be classified into covert and overt components. Covert attention precedes
eye movements, and is intuitively used to monitor the environment and guide eye movements to
salient regions [6, 21]. Two particular functions of covert attention motivate the Gaussian attention
mechanism proposed below: noise exclusion, which modifies perceptual filters to enhance the signal
portion of the stimulus and mitigate the noise; and distractor suppression, which refers to suppressing
the representation strength outside an attention area [6]. Further inspiring the proposed attention
mechanism is evidence from cueing [1], multiple object tracking [8], and fMRI [30] studies, which
indicate that covert attention can be deployed to multiple, disjoint regions that vary in size and can be
conceptually viewed as multiple "spotlights".
Overt attention is associated with an eye movement, so that the attentional focus coincides with the
fovea's line of sight. The planning of eye movements is thought to be influenced by bottom-up (scene
dependent) saliency as well as top-down (goal relevant) factors [21]. In particular, one major view is
that two types of maps, the saliency map and the priority map, encode measures used to determine
the target of attention [39]. Under this view, visual input is processed into a feature-agnostic saliency
map that quantifies distinctiveness of a location relative to other locations in the scene based on
bottom-up properties. The saliency map is then integrated to include top-down information, resulting
in a priority map.
The saliency map was initially proposed by Koch & Ullman [20], then implemented in a computational
model by Itti [17]. In their model, saliency is determined by relative feature differences and compiled
into a "master saliency map". Attentional selection then consists of directing a fixed-sized attentional
region to the area of highest saliency, i.e. in a "winner-take-all" process. The attended location's
saliency is then suppressed, and the process repeats, so that multiple attentional shifts can occur
following a single feed-forward computation.
Subsequent research effort has been directed at finding neural correlates of the saliency map and
priority map. Some proposed areas for salience computation include the superficial layers of the
superior colliculus (sSC) and inferior sections of the pulvinar (PI), and for priority map computation
include the frontal eye field (FEF) and deeper layers of the superior colliculus (dSC)[39]. Here, we
need to only assume existence of the maps as conceptual mechanisms involved in influencing visual
attention and refer the reader to [39] for a recent review.
We explore two aspects of Itti's model within the context of modern deep learning-based vision: the
use of a bottom-up, featureless saliency map to guide attention, and the sequential shifting of attention
to multiple regions. Furthermore, our model incorporates top-down signals with the bottom-up
saliency map to create a priority map, and includes covert and overt attention mechanisms.
2.2 Visual Attention, Saliency, and Hierarchical RL in Deep Learning
Visual attention is a major area of interest in deep learning; existing work can be separated into
sequential attention and bottom-up feed-forward attention. Sequential attention models choose a
series of attention regions. Larochelle & Hinton [24] used a RBM to classify images with a sequence
of fovea-like glimpses, while the Recurrent Attention Model (RAM) of Mnih et al. [31] posed
single-object image classification as a reinforcement learning problem, where a policy chooses the
sequence of glimpses that maximizes classification accuracy. This "hard attention" mechanism
developed in [31] has since been widely used [27, 44, 35, 2]. Notably, an extension to multiple
objects was made in the DRAM model [3], but DRAM is limited to datasets with a natural label
ordering, such as SVHN [32]. Recently, Cheung et al. [9] developed a variable-sized glimpse inspired
by biological vision, incorporating it into a simple RNN for single object recognition. Due to the
fovea-like attention which shifts based on task-specific objectives, the above models can be seen as
having overt, top-down attention mechanisms.
An alternative approach is to alter the structure of a feed-forward network so that the convolutional
activations are modified as the image moves through the network, i.e. in a bottom-up fashion. Spatial
transformer networks [18] learn parameters of a transformation that can have the effect of stretching,
rotating, and cropping activations between layers. Progressive Attention Networks [36] learn attention
filters placed at each layer of a CNN to progressively focus on an arbitrary subset of the input, while
Residual Attention Networks [41] learn feature-specific filters. Here, we consider an attentional stage
that follows a feed-forward stage, i.e. a saliency map and image representation are produced in a
feed-forward stage, then an attention mechanism determines which parts of the image representation
are relevant using the saliency map.
Saliency is typically studied in the context of saliency modeling, in which a model outputs a saliency
map for an image that matches human fixation data, or salient object segmentation [25]. Separately,
several works have considered extracting a saliency map for understanding classification network
decisions [37, 47]. Zagoruyko et al. [46] formulate a loss function that causes a student network
to have similar "saliency" to a teacher network. They model saliency as a reduction operation
F : RC?H?W ? RH?W applied to a volume of convolutional activations, which we adopt due to
its simplicity. Here, we investigate using a saliency map for a downstream task. Recent work has
begun to explore saliency maps as inputs for prominent object detection [38] and image captioning
[11], pointing to further uses of saliency-based vision models.
While we focus on using reinforcement learning for multiset classification with only class labels as
annotation, RL has been applied to other computer vision tasks, including modeling eye movements
based on annotated human scan paths [29], optimizing prediction performance subject to a computational budget [19], describing classification decisions with natural language [16], and object detection
[28, 5, 4].
Finally, our architecture is inspired by works in hierarchical reinforcement learning. The model
distinguishes between the upper level task of choosing an image region to focus on and the lower
level task of classifying the object related to that region. The tasks are handled by separate networks
that operate at different time-scales, with the upper level network specifying the task of the lower
level network. This hierarchical modularity relates to the meta-controller / controller architecture
of Kulkarni et al. [22] and feudal reinforcement learning [12, 40]. Here, we apply a hierarchical
architecture to multi-label image classification, with the two levels linked by a differentiable operation.
Figure 1: A high-level view of the model components. See Supplementary Materials section 3 for
detailed views.
3 Architecture
The architecture is a hierarchical recurrent neural network consisting of two main components: the
meta-controller and controller. These components assume access to a saliency model, which produces
a saliency map from an image, and an activation model, which produces an activation volume from
an image. Figure 1 shows the high level components, and Supplementary Materials section 3 shows
detailed views of the overall architecture and individual components.
In short, given a saliency map the meta-controller places an attention mask on an object, then the
controller takes subsequent glimpses and classifies that object. The saliency map is updated to account
for the processed locations, and the process repeats. The meta-controller and controller operate at
different time-scales; for each step of the meta-controller, the controller takes k + 1 steps.
Notation Let I denote the space of images, I ⊂ R^{h_I×w_I}, and Y = {1, ..., n_c} denote the set of labels. Let S denote the space of saliency maps, S ⊂ R^{h_S×w_S}, let V denote the space of activation volumes, V ⊂ R^{C×h_V×w_V}, let M denote the space of covert attention masks, M ⊂ R^{h_M×w_M}, let P denote the space of priority maps, P ⊂ R^{h_M×w_M}, and let A denote an action space. The activation model is a function f_A : I → V mapping an input image to an activation volume. An example volume is the 512 × h_V × w_V activation tensor from the final conv layer of a ResNet.
Meta-Controller The meta-controller is a function f_MC : S → M mapping a saliency map to a covert attention mask. Here, f_MC is a recurrent neural network defined as follows:

x_t = [S_t, ŷ_{t−1}],   e_t = W_encode x_t,   h_t = GRU(e_t, h_{t−1}),   M_t = attn(h_t).

x_t is a concatenation of the flattened saliency map and a one-hot encoding of the previous step's class label prediction, and attn(·) is the novel spatial attention mechanism defined below. The mask is then transformed by the interface layer into a priority map that directs the controller's glimpses towards a salient region, and used to produce an initial glimpse vector for the controller.
Gaussian Attention Mechanism The spatial attention mechanism, inspired by covert visual attention, is a 2D discrete convolution of a mixture of Gaussians filter. Specifically, the attention mask M is an m × n matrix with M_ij = φ(i, j), where

φ(i, j) = Σ_{k=1}^K α^(k) exp( −β^(k) [ (μ₁^(k) − i)² + (μ₂^(k) − j)² ] ).

K denotes the number of Gaussian components and α^(k), β^(k), μ₁^(k), μ₂^(k) respectively denote the importance, width, and x, y center of component k.
To implement the mechanism, the parameters (α, β, μ₁, μ₂) are output by a network layer as a 4K-dimensional vector (α̂, β̂, μ̂₁, μ̂₂), and the elements are transformed to their proper ranges: μ₁ = σ(μ̂₁)m, μ₂ = σ(μ̂₂)n, α = softmax(α̂), β = exp(β̂). Then M is formed by applying φ to the coordinates {(i, j) | 1 ≤ i ≤ m, 1 ≤ j ≤ n}. Note that these operations are differentiable, allowing the attention mechanism to be used as a module in a network trained with back-propagation. Graves [15] proposed a 1D version; here we use a 2D version for spatial attention.
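To make this concrete, the following NumPy sketch (ours; the paper provides no code) builds the mask M from the raw 4K-dimensional layer output. The parameter ordering inside the vector is an illustrative assumption.

import numpy as np

def gaussian_attention_mask(params, m, n):
    """Build an m-by-n mixture-of-Gaussians attention mask.

    params: raw network output of shape (4K,), assumed ordered as
            (alpha_hat, beta_hat, mu1_hat, mu2_hat), each of length K.
    """
    alpha_hat, beta_hat, mu1_hat, mu2_hat = np.split(params, 4)
    # Transform raw outputs to their proper ranges, as described in the text.
    e = np.exp(alpha_hat - alpha_hat.max())
    alpha = e / e.sum()                       # softmax: component importances
    beta = np.exp(beta_hat)                   # positive widths
    mu1 = m / (1.0 + np.exp(-mu1_hat))        # sigmoid(mu1_hat) * m
    mu2 = n / (1.0 + np.exp(-mu2_hat))        # sigmoid(mu2_hat) * n
    # Evaluate phi(i, j) on the coordinate grid {1..m} x {1..n}.
    i = np.arange(1, m + 1)[:, None, None]    # shape (m, 1, 1)
    j = np.arange(1, n + 1)[None, :, None]    # shape (1, n, 1)
    dist2 = (mu1 - i) ** 2 + (mu2 - j) ** 2   # shape (m, n, K)
    return np.sum(alpha * np.exp(-beta * dist2), axis=-1)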
Interface The interface layer transforms the meta-controller's output into a priority map and glimpse vector that are used as input to the controller (diagram in Supp. Materials 3.4). The priority map combines the top-down covert attention mask with the bottom-up saliency map: P = M ⊙ S. Since P influences the region that is processed next, this can also be seen as a generalization of the "winner-take-all" step in the Itti model; here a learned function chooses a region of high saliency rather than greedily choosing the maximum location.
To provide an initial glimpse vector g₀ ∈ R^C for the controller, the mask is used to spatially weight the activation volume: g₀ = Σ_{i=1}^{h_V} Σ_{j=1}^{w_V} M_{i,j} V_{·,i,j}. This is interpreted as the meta-controller taking an initial, possibly broad and variable-sized glimpse using covert attention. The weighting produced by the attention map retains the activations around the centers of attention, while down-weighting outlying areas, effectively suppressing activations from noise outside of the attentional area. Since the activations are averaged into a single vector, there is a trade-off between attentional area and information retention.
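A minimal sketch of these two interface computations, under the assumption that the mask, the saliency map, and the spatial grid of the activation volume share one resolution (as in the 7x7 case used later); variable names are ours.

import numpy as np

def interface(M, S, V):
    """M: (h, w) covert attention mask; S: (h, w) saliency map;
    V: (C, h, w) activation volume. Returns the priority map and
    the initial glimpse vector."""
    P = M * S                          # priority map, elementwise product
    g0 = np.einsum('ij,cij->c', M, V)  # spatially weighted glimpse, shape (C,)
    return P, g0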
Controller The controller is a recurrent neural network f_C : (P, g₀) → A that runs for k + 1 steps and maps a priority map and initial glimpse vector from the interface layer to parameters of a distribution, and an action is sampled. The first k actions select spatial indices of the activation volume, and the final action chooses a class label, i.e. A_{1,...,k} = {1, 2, ..., h_V w_V} and A_{k+1} = Y. Specifically:

x_i = [P_t, ŷ_{t−1}, a_{i−1}, g_{i−1}],   e_i = W_encode x_i,   h_i = GRU(e_i, h_{i−1}),
s_i = W_location h_i for 1 ≤ i ≤ k,   s_i = W_class h_i for i = k + 1,
p_i = softmax(s_i),   a_i ~ p_i,

where t indexes the meta-controller time-step and i indexes the controller time-step, and a_i ∈ A is an action sampled from the categorical distribution with parameter vector p_i. The glimpse vectors g_i, 1 ≤ i ≤ k, are formed by extracting the column from the activation volume V at location a_i = (x, y)_i. Intuitively, the controller uses overt attention to choose glimpse locations using the information conveyed in the priority map and initial glimpse, compiling the information in its hidden state to make a classification decision. Recall that both covert attention and priority maps are known to influence eye saccades [21]. See Supplementary Materials 3.5 for a diagram.
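To make the controller's spatial actions concrete, here is a hedged sketch of a single glimpse step; the row-major convention mapping the sampled flat index to (x, y) is our assumption.

import numpy as np

def glimpse_step(p, V, rng):
    """p: (h*w,) categorical parameters over spatial indices;
    V: (C, h, w) activation volume. Samples a location and extracts
    the corresponding activation column as the next glimpse vector."""
    a = rng.choice(p.shape[0], p=p)            # sampled spatial action
    x, y = np.unravel_index(a, V.shape[1:])    # flat index -> (x, y)
    return a, (x, y), V[:, x, y]               # glimpse vector has shape (C,)

# Usage: a, loc, g = glimpse_step(p, V, np.random.default_rng(0))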
Update Mechanism During a step t, the meta-controller takes saliency map S_t as input, focuses on a region of S_t using an attention mask M_t, then the controller takes glimpses at locations (x, y)_1, (x, y)_2, ..., (x, y)_k. At step t + 1, the saliency map should reflect the fact that some regions have already been attended to in order to encourage attending to novel areas. While the meta-controller's hidden state can in principle prevent it from repeatedly focusing on the same regions, we explicitly update the saliency map with a function update : S → S that suppresses the saliency of glimpsed locations and locations with nonzero attention mask values, thereby increasing the relative saliency of the remaining unattended regions:

[S_{t+1}]_{ij} = 0 if (i, j) ∈ {(x, y)_1, (x, y)_2, ..., (x, y)_k}, and [S_{t+1}]_{ij} = max([S_t]_{ij} − [M_t]_{ij}, 0) otherwise.
This mechanism is motivated by the inhibition of return effect in the human visual system; after
attention has been removed from a region, there is an increased response time to stimuli in the region,
which may influence visual search and encourage attending to novel areas [13, 33].
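The update rule above translates directly into a few lines of NumPy; a sketch (ours):

import numpy as np

def update_saliency(S, M, glimpsed):
    """S: (h, w) saliency map; M: (h, w) attention mask at the same
    resolution; glimpsed: the controller's k glimpse locations."""
    S_next = np.maximum(S - M, 0.0)  # suppress wherever the mask was nonzero
    for (x, y) in glimpsed:
        S_next[x, y] = 0.0           # zero out explicitly glimpsed locations
    return S_next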
Saliency Model The saliency model is a function f_S : I → S mapping an input image to a saliency map. Here, we use a saliency model that computes a map by compressing an activation volume using a reduction operation F : R^{C×h_V×w_V} → R^{h_V×w_V} as in [46]. We choose F(V) = Σ_{c=1}^C |V_c|², and use the output of the activation model as V. Furthermore, the activation model is fine-tuned on a single-object dataset containing classes found in the multi-object dataset, so that the saliency model has high activations around classes of interest.
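The reduction is a one-line operation; a sketch:

import numpy as np

def saliency_from_activations(V):
    """Reduce a (C, h, w) activation volume to an (h, w) saliency map
    via F(V) = sum over channels of the squared activations."""
    return np.sum(V ** 2, axis=0)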
4 Learning
4.1 Sequential Multiset Classification
Multi-label classification tasks can be categorized based on whether the labels are lists, sets, or multisets. We claim that multiset classification most closely resembles a human's free viewing of a scene; the exact labeling order of objects may vary by individual, and multiple instances of the same object may appear in a scene and receive individual labels. Specifically, let D = {(X_i, Y_i)}_{i=1}^n be a dataset of images X_i with labels Y_i over Y, and consider the structure of Y_i.
In list-based classification, the labels Y_i = [y_1, ..., y_{|Y_i|}] have a consistent order, e.g. left to right. As a sequential prediction problem, there is exactly one true label for each prediction step, so a standard cross-entropy loss can be used at each prediction step, as in [3]. When the labels Y_i = {y_1, ..., y_{|Y_i|}} are a set, one approach for sequential prediction is to impose an ordering O(Y_i) → [y_{o_1}, ..., y_{o_{|Y_i|}}] as a preprocessing step, transforming the set-based problem to a list-based problem. For instance, O(·) may order the labels based on prevalence in the training data as in [42]. Finally, multiset classification generalizes set-based classification to allow duplicate labels within an example, i.e. Y_i = {y_1^{m_1}, ..., y_{|Y_i|}^{m_{|Y_i|}}}, where m_j denotes the multiplicity of label y_j.
Here, we propose a training process that allows duplicate labels and is permutation-invariant with
respect to the labels, removing the need for a hand-engineered ordering and supporting all three types
of classification. With a saliency-based model, permutation invariance for labels is especially crucial,
since the most salient (and hence first classified) object may not correspond to the first label.
4.2 Training
Our solution is to frame the problem in terms of maximizing a non-smooth reward function that
encourages the desired classification and attention behavior, and use reinforcement learning to
maximize the expected reward. Assuming access to a trained saliency model and activation model,
the meta-controller and controller can be jointly trained end-to-end.
Reward To support multiset classification, we propose a multiset-based reward for the controller's classification action. Specifically, consider an image X with m labels Y = {y_1, ..., y_m}. At meta-controller step t, 1 ≤ t ≤ m, let A_i be a multiset of available labels, and let f_i(X) be the corresponding class scores output by the controller. Then define:

R_i^{clf} = +1 if ŷ_i ∈ A_i, and R_i^{clf} = −1 otherwise;
A_{i+1} = A_i \ {ŷ_i} if ŷ_i ∈ A_i, and A_{i+1} = A_i otherwise,

where ŷ_i ~ softmax(f_i(X)) and A_1 ≡ Y. In short, a class label is sampled from the controller, and the controller receives a positive reward if and only if that label is in the multiset of available labels. If so, the label is removed from the available labels. Clearly, the reward for the sampled labels ŷ_1, ŷ_2, ..., ŷ_m equals the reward for the permuted sequence ŷ_{π(1)}, ŷ_{π(2)}, ..., ŷ_{π(m)} for any permutation π of the m elements. Note that list-based tasks can be supported by setting A_i ≡ {y_i}.
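A hedged sketch of the classification reward and the multiset update; representing A_i with a collections.Counter is our implementation choice, not the paper's.

from collections import Counter

def classification_reward(y_hat, available):
    """y_hat: the sampled class label; available: Counter over the
    remaining ground-truth labels (a multiset). Returns the reward
    and the updated multiset."""
    if available[y_hat] > 0:
        available = available.copy()
        available[y_hat] -= 1
        return +1.0, available
    return -1.0, available

# Usage: r, A2 = classification_reward(3, Counter([3, 3, 5]))  # r = +1.0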
The controller's location-choice actions simply receive a reward equal to the priority map value at the glimpse location, which encourages the controller to choose locations according to the priority map. That is, for locations (x, y)_1, ..., (x, y)_k sampled from the controller, define R_i^{loc} = P_{(x,y)_i}.
Objective Let n = 1...N index the example, t = 1...M index the meta-controller step, and i = 0...K index the controller step. The goal is choosing θ to maximize the total expected reward: J(θ) = E_{p(τ|f_θ)}[ Σ_{n,t,i} R_{n,t,i} ], where the rewards R_{n,t,i} are defined as above, and the expectation is over the distribution of trajectories produced using a model f parameterized by θ. An unbiased gradient estimator for θ can be obtained using the REINFORCE [43] estimator within the stochastic computation graph framework of Schulman et al. [34] as follows.
Viewed as a stochastic computation graph, an input saliency map S_{n,t} passes through a path of deterministic nodes, reaching the controller. Each of the controller's k + 1 steps produces a categorical parameter vector p_{n,t,i}, and a stochastic node is introduced by each sampling operation a_{t,i} ~ p_{n,t,i}. Then form a surrogate loss function L(θ) = Σ_{t,i} log p_{t,i} R_{t,i} with the stochastic computation graph. By Corollary 1 of [34], the gradient of L(θ) gives an unbiased gradient estimator of the objective, which can be approximated using Monte-Carlo sampling: ∂J(θ)/∂θ = E[∂L(θ)/∂θ] ≈ (1/B) Σ_{b=1}^B ∂L(θ)/∂θ. As is standard in reinforcement learning, a state-value function V(s_{t,i}) is used as a baseline to reduce the variance of the REINFORCE estimator, thus L(θ) = Σ_{t,i} log p_{t,i} (V(s_{t,i}) − R_{t,i}). In our implementation, the controller outputs the state-value estimate, so that s_{t,i} is the controller's hidden state.
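As a minimal illustration, the per-trajectory surrogate is the scalar below; in practice it would be assembled inside an autodiff framework so that differentiating it yields the REINFORCE estimator. Shapes and names are ours.

import numpy as np

def surrogate_loss(log_probs, values, rewards):
    """log_probs, values, rewards: arrays of shape (T,) over the sampled
    actions of one trajectory. Implements L = sum_t log p_t (V_t - R_t);
    the Monte-Carlo estimate averages the gradient of this scalar over
    B sampled trajectories."""
    return float(np.sum(log_probs * (values - rewards)))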
5 Experiments
We validate the classification performance, training process, and hierarchical attention with set-based
and multiset-based classification experiments. To test the effectiveness of the permutation-invariant
Table 1: Metrics on the test set for MNIST Set and Multiset tasks, and SVHN Multiset.

                  MNIST Set         MNIST Multiset    SVHN Multiset
                  F1      0-1       F1      0-1       F1      0-1
HSAL-RL           0.990   0.960     0.978   0.935     0.973   0.947
Cross-Entropy     0.735   0.478     0.726   0.477     0.589   0.307
RL training, we compare against a baseline model that uses a cross-entropy loss on the probabilities p_{t,i} and (randomly ordered) labels y_i instead of the RL training, similar to training proposed in [42].
Datasets Two synthetic datasets, MNIST Set and MNIST Multiset, as well as the real-world SVHN
dataset, are used. For MNIST Set and Multiset, each 100x100 image in the dataset has a variable
number (1-4) of digits, of varying sizes (20-50px) and positions, along with cluttering objects that
introduce noise. Each label in an image from MNIST Set is unique, while MNIST Multiset images
may contain duplicate labels. Each dataset is split into 60,000 training examples and 10,000 testing
examples, and metrics are reported for the testing set. SVHN Multiset consists of SVHN examples
with label order randomized when a batch is sampled. This removes the natural left-to-right order of
the SVHN labels, thus turning the classification into a multiset task.
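For concreteness, a sketch of generating one such example; the nearest-neighbor resize and the overwrite-by-maximum placement are our own simplifications, and the clutter objects mentioned above are omitted.

import numpy as np

def make_multiset_example(digits, labels, rng, canvas=100):
    """digits: list of (28, 28) grayscale digit images; labels: their
    classes. Places 1-4 randomly scaled digits (20-50 px) on a blank
    canvas and returns the image with its label multiset."""
    img = np.zeros((canvas, canvas))
    idx = rng.integers(0, len(digits), size=rng.integers(1, 5))  # duplicates allowed
    for k in idx:
        s = int(rng.integers(20, 51))            # target size in pixels
        rows = np.arange(s) * 28 // s            # nearest-neighbor indices
        patch = digits[k][rows][:, rows]         # resized digit, (s, s)
        x, y = rng.integers(0, canvas - s, size=2)
        img[x:x + s, y:y + s] = np.maximum(img[x:x + s, y:y + s], patch)
    return img, [labels[k] for k in idx]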
Evaluation Metrics To evaluate classification performance, macro-F1 and exact match (0-1) as defined in [26] are used. For evaluating the hierarchical attention mechanism we use visualization as well as a saliency metric for the controller's glimpses, defined as attn_saliency = (1/k) Σ_{i=1}^k S_t[(x, y)_i] for a controller trajectory (x, y)_1, ..., (x, y)_k, ŷ_t at meta-controller time step t, then averaged over all time steps and examples. A high score means that the controller tends to pick salient points as glimpse locations.
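Equivalently, per meta-controller step the metric is the mean saliency under the glimpses; a sketch, where the S[x, y] indexing convention is our assumption:

import numpy as np

def glimpse_saliency(S, locations):
    """S: (h, w) saliency map; locations: the k glimpse coordinates of
    one meta-controller step. Returns their mean saliency value."""
    return float(np.mean([S[x, y] for (x, y) in locations]))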
Implementation Details The activation and saliency model is a ResNet-34 network pre-trained
on ImageNet. For MNIST experiments, the ResNet is fine-tuned on a single object MNIST Set
dataset, and for SVHN is fine-tuned by randomly selecting one of an image's labels each time a batch is sampled. Images are resized to 224x224, and the final (4th) convolutional layer is used (V ∈ R^{512×7×7}). Since the label sets vary in size, the model is trained with an extra "stop"
class, and during inference greedy argmax sampling is used until the "stop" class is predicted. See
Supplementary Materials section 1 for further details.
5.1 Experimental Evaluation
In this section we analyze the model's classification performance, the contribution of the proposed
RL training, and the behavior of the hierarchical attention mechanism.
Classification Performance Table 1 shows the evaluation metrics on the set-based and multiset-based
classification tasks for the proposed hierarchical saliency-based model with RL training ("HSAL-RL")
and the cross-entropy baseline ("Cross-Entropy") introduced above. HSAL-RL performs well across
all metrics; on both the set and multiset tasks the model achieves very high precision, recall, and
macro-F1 scores, but as expected, the multiset task is more difficult. We conclude that the proposed
model and training process is effective for these set and multiset image classification tasks.
Contribution of RL training As seen in Table 1, performance is greatly reduced when the standard
cross-entropy training is used, which is not invariant to the label ordering. This shows the importance
of the RL training, which only assumes that predictions are some permutation of the labels.
Controller Attention Based on attn_saliency, the controller learns to glimpse in salient regions more often as training progresses, starting at 58.7 and ending at 126.5 (see graph in Supplementary Materials Section 2). The baseline, which does not have the reward signal for its glimpses, fails to improve over training (remaining near 25), demonstrating the importance and effectiveness of the controller's glimpse rewards.
Hierarchical Attention Visualization Figure 2 visualizes the hierarchical attention mechanism on
three example inference processes. See Supplementary Materials Section 4 for more examples, which
we discuss here. In general, the upper level attention highlights a region encompassing a digit, and
Figure 2: The inference process showing the hierarchical attention on three different examples. Each
column represents a single meta-controller step, two controller glimpses, and classification.
the lower level glimpses near the digit before classifying. Notice the saliency map update over time,
the priority map's structure due to the Gaussian attention mechanism, and the variable-sized focus of
the priority map followed by finer-grained glimpses. Note that the predicted labels need not be in the
same order as the ground truth labels (e.g. "689"), and that the model can predict multiple instances
of a label (e.g. "33", "449"), illustrating multiset prediction. In some cases, the upper level attention
is sufficient to classify the object without the controller taking related glimpses, as in "373", where
the glimpses are in a blank region for the 7. In "722", the covert attention is initially placed on both
the 7 and the 2, then the controller focuses only on the 7; this can be interpreted as using the multiple
spotlight capability of covert attention, then directing overt attention to a single target.
5.2 Limitations
Saliency Map Input Since the saliency map is the only top-level input, the quality of the saliency
model is a potential performance bottleneck. As Figure 4 shows, in general there is no guarantee
that all objects of interest will have high saliency relative to the locations around them. However, the
modular architecture allows for plugging in alternative, rigorously evaluated saliency models such as
a state-of-the-art saliency model trained with human fixation data [10].
Activation Resolution Currently, the activation model returns the highest-level convolutional activations, which have a 7x7 spatial dimension for a 224x224 image. Consider the case shown in Figure 3.
Even if the controller acted optimally, activations for multiple digits would be included in its glimpse
vector due to the low resolution. This suggests activations with higher spatial resolution are needed,
perhaps by incorporating dilated convolutions [45] or using lower-level activations at attended areas,
motivated by covert attention's known enhancement of spatial resolution [6, 7, 14].
6 Conclusion
We proposed a novel architecture, attention mechanism, and RL-based training process for sequential
image attention, supporting multiset classification. The proposal is a first step towards incorporating
notions of saliency, covert and overt attention, and sequential processing motivated by the biological
visual attention literature into deep learning architectures for downstream vision tasks.
Figure 4: The cat is a label in the ground truth
set but does not have high salience relative to
its surroundings.
Figure 3: The location of highest saliency
from a 7x7 saliency map (right) is projected
onto the 224x224 image (left).
Acknowledgments
This work was partly supported by the NYU Global Seed Funding <Model-Free Object Tracking
with Recurrent Neural Networks>, STCSM 17JC1404100/1, and Huawei HIPP Open 2017.
References
[1] Edward Awh and Harold Pashler. Evidence for split attentional foci. Journal of Experimental Psychology: Human Perception and Performance, 26(2):834–846, 2000.
[2] Jimmy Ba, Roger Grosse, Ruslan Salakhutdinov, and Brendan Frey. Learning wake-sleep
recurrent attention models. In Proceedings of the 28th International Conference on Neural
Information Processing Systems - Volume 2, NIPS'15, pages 2593–2601, Cambridge, MA, USA,
2015. MIT Press.
[3] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual
attention. arXiv preprint arXiv:1412.7755, 2014.
[4] Miriam Bellver, Xavier Giro i Nieto, Ferran Marques, and Jordi Torres. Hierarchical object
detection with deep reinforcement learning. arXiv preprint arXiv:1611.03718, 2016.
[5] Juan C. Caicedo and Svetlana Lazebnik. Active object localization with deep reinforcement
learning. In Proceedings of the 2015 IEEE International Conference on Computer Vision
(ICCV), ICCV '15, pages 2488–2496, Washington, DC, USA, 2015. IEEE Computer Society.
[6] Marisa Carrasco. Visual attention: The past 25 years. Vision Research, 51(13):1484–1525, 2011. Vision Research 50th Anniversary Issue: Part 2.
[7] Marisa Carrasco, Patrick E Williams, and Yaffa Yeshurun. Covert attention increases spatial resolution with or without masks: Support for signal enhancement. Journal of Vision, 2:467–479, 2002.
[8] Patrick Cavanagh and George A. Alvarez. Tracking multiple targets with multifocal attention. Trends in Cognitive Sciences, 9(7):349–354, 2005.
[9] Brian Cheung, Eric Weiss, and Bruno Olshausen. Emergence of foveal image sampling from
learning to attend in visual scenes. arXiv preprint arXiv:1611.09430, 2016.
[10] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. Predicting human eye
fixations via an lstm-based saliency attentive model. arXiv preprint arXiv:1611.09571, 2016.
[11] Marcella Cornia, Lorenzo Baraldi, Giuseppe Serra, and Rita Cucchiara. Paying more attention to saliency: Image captioning with saliency and context attention. arXiv preprint
arXiv:1706.08474, 2017.
[12] Peter Dayan and Geoffrey E. Hinton. Feudal reinforcement learning. In Advances in Neural
Information Processing Systems 5, [NIPS Conference], pages 271–278, San Francisco, CA, USA, 1993. Morgan Kaufmann Publishers Inc.
[13] Jillian H. Fecteau and Douglas P. Munoz. Salience, relevance, and firing: a priority map for target selection. Trends in Cognitive Sciences, 10(8):382–390, 2006.
[14] Jason Fischer and David Whitney. Attention narrows position tuning of population responses in v1. Current Biology, 19(16):1356–1361, 2009.
[15] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[16] Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, and
Trevor Darrell. Generating visual explanations. In European Conference on (ECCV), 2016.
[17] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene
analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259,
1998.
[18] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial transformer networks. arXiv preprint arXiv:1506.02025, 2015.
[19] Sergey Karayev, Tobias Baumgartner, Mario Fritz, and Trevor Darrell. Timely object recognition.
In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural
Information Processing Systems 25, pages 890–898. Curran Associates, Inc., 2012.
[20] C. Koch and S. Ullman. Shifts in selective visual attention: towards the underlying neural circuitry. Human Neurobiology, 4(4):219–227, 1985.
[21] Eileen Kowler. Eye movements: The past 25 years. Vision Research, 51(13):1457–1483, 2011.
[22] Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical
deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In
D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural
Information Processing Systems 29, pages 3675–3683. Curran Associates, Inc., 2016.
[23] Victor A.F. Lamme and Pieter R. Roelfsema. The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23(11):571–579, 2000.
[24] Hugo Larochelle and Geoffrey E Hinton. Learning to combine foveal glimpses with a third-order
boltzmann machine. In J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and
A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1243–1251.
Curran Associates, Inc., 2010.
[25] Y. Li, X. Hou, C. Koch, J. M. Rehg, and A. L. Yuille. The secrets of salient object segmentation.
In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 280–287, June
2014.
[26] Yuncheng Li, Yale Song, and Jiebo Luo. Improving pairwise ranking for multi-label image
classification. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR),
July 2017.
[27] Xiao Liu, Jiang Wang, Shilei Wen, Errui Ding, and Yuanqing Lin. Localizing by describing: Attribute-guided attention localization for fine-grained recognition. arXiv preprint
arXiv:1605.06217, 2016.
[28] S. Mathe, A. Pirinen, and C. Sminchisescu. Reinforcement learning for visual object detection.
In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2894–2902, June 2016.
[29] Stefan Mathe and Cristian Sminchisescu. Action from still image dataset and inverse optimal
control to learn task specific visual scanpaths. In C. J. C. Burges, L. Bottou, M. Welling,
Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing
Systems 26, pages 1923–1931. Curran Associates, Inc., 2013.
[30] Stephanie A McMains and David C Somers. Multiple Spotlights of Attentional Selection in
Human Visual Cortex. Neuron, 42(4):677–686, 2004.
[31] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent models of visual attention. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2204–2212. Curran Associates, Inc., 2014.
[32] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[33] Raymond M. Klein. Inhibition of return. Trends in Cognitive Sciences, 4(4):138–147, 2000.
[34] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient estimation
using stochastic computation graphs. In Proceedings of the 28th International Conference on
Neural Information Processing Systems - Volume 2, NIPS'15, pages 3528–3536, Cambridge,
MA, USA, 2015. MIT Press.
[35] Stanislau Semeniuta and Erhardt Barth. Image classification with recurrent attention models. In
2016 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1–7. IEEE, 2016.
[36] Paul Hongsuck Seo, Zhe Lin, Scott Cohen, Xiaohui Shen, and Bohyung Han. Progressive
attention networks for visual attribute prediction. arXiv preprint arXiv:1606.02393, 2016.
[37] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks:
Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034,
2013.
[38] Hamed R. Tavakoli and Jorma Laaksonen. Towards instance segmentation with object priority:
Prominent object detection and recognition. arXiv preprint arXiv:1704.07402, 2017.
[39] Richard Veale, Ziad M. Hafed, and Masatoshi Yoshida. How is visual salience computed in the
brain? Insights from behaviour, neurobiology and modelling. Philosophical Transactions of the
Royal Society of London B: Biological Sciences, 372(1714), 2017.
[40] Alexander Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg,
David Silver, and Koray Kavukcuoglu. Feudal networks for hierarchical reinforcement learning.
arXiv preprint arXiv:1703.01161, 2017.
[41] Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang
Wang, and Xiaoou Tang. Residual attention network for image classification. In The IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[42] Jiang Wang, Yi Yang, Junhua Mao, Zhiheng Huang, Chang Huang, and Wei Xu. Cnn-rnn: A
unified framework for multi-label image classification. In The IEEE Conference on Computer
Vision and Pattern Recognition (CVPR), June 2016.
[43] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
[44] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov,
Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation
with visual attention. arXiv preprint arXiv:1502.03044, 2015.
[45] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv
preprint arXiv:1511.07122, 2015.
[46] Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving
the performance of convolutional neural networks via attention transfer. arXiv preprint
arXiv:1612.03928, 2016.
[47] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning
deep features for discriminative localization. In The IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), June 2016.
6,748 | 7,103 | Variational Inference for Gaussian Process Models
with Linear Complexity
Ching-An Cheng
Institute for Robotics and Intelligent Machines
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Byron Boots
Institute for Robotics and Intelligent Machines
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Abstract
Large-scale Gaussian process inference has long faced practical challenges due
to time and space complexity that is superlinear in dataset size. While sparse
variational Gaussian process models are capable of learning from large-scale
data, standard strategies for sparsifying the model can prevent the approximation
of complex functions. In this work, we propose a novel variational Gaussian
process model that decouples the representation of mean and covariance functions
in reproducing kernel Hilbert space. We show that this new parametrization
generalizes previous models. Furthermore, it yields a variational inference problem
that can be solved by stochastic gradient ascent with time and space complexity that
is only linear in the number of mean function parameters, regardless of the choice of
kernels, likelihoods, and inducing points. This strategy makes the adoption of large-scale expressive Gaussian process models possible. We run several experiments
on regression tasks and show that this decoupled approach greatly outperforms
previous sparse variational Gaussian process inference procedures.
1 Introduction
Gaussian process (GP) inference is a popular nonparametric framework for reasoning about functions
under uncertainty. However, the expressiveness of GPs comes at a price: solving (approximate)
inference for a GP with N data instances has time and space complexities in Θ(N³) and Θ(N²), respectively. Therefore, GPs have traditionally been viewed as a tool for problems with small- or medium-sized datasets.
Recently, the concept of inducing points has been used to scale GPs to larger datasets. The idea is to
summarize a full GP model with statistics on a sparse set of M ≪ N fictitious observations [17, 23].
By representing a GP with these inducing points, the time and the space complexities are reduced to
O(NM² + M³) and O(NM + M²), respectively. To further process datasets that are too large to fit
into memory, stochastic approximations have been proposed for regression [9] and classification [10].
These methods have similar complexity bounds, but with N replaced by the size of a mini-batch N_m.
Despite the success of sparse models, the scalability issues of GP inference are far from resolved.
The major obstruction is that the cubic complexity in M in the aforementioned upper-bound is also
a lower-bound, which results from the inversion of an M -by-M covariance matrix defined on the
inducing points. As a consequence, these models can only afford to use a small set of M basis
functions, limiting the expressiveness of GPs for prediction.
In this work, we show that superlinear complexity is not completely necessary. Inspired by the
reproducing kernel Hilbert space (RKHS) representation of GPs [2], we propose a generalized
variational GP model, called DGPs (Decoupled Gaussian Processes), which decouples the bases
Table 1: Comparison between SVDGP and variational GPR algorithms: SVI [9], iVSGPR [2], VSGPR [23], and GPR [18], where N is the number of observations/the size of a mini-batch, M, M_α, M_β are the number of basis functions, and D is the input dimension. Here it is assumed M_β ≤ M_α.¹

          a, B    α, β    θ      α = β    N ≠ M    Time                          Space
SVDGP     SGA     SGA     SGA    FALSE    TRUE     O(DNM_α + NM_β² + M_β³)       O(NM_α + M_β²)
SVI       SNGA    SGA     SGA    TRUE     TRUE     O(DNM + NM² + M³)             O(NM + M²)
iVSGPR    SMA     SMA     SGA    TRUE     TRUE     O(DNM + NM² + M³)             O(NM + M²)
VSGPR     CG      CG      CG     TRUE     TRUE     O(DNM + NM² + M³)             O(NM + M²)
GPR       CG      CG      CG     TRUE     FALSE    O(DN² + N³)                   O(N²)
(a) M = 10   (b) M_α = 100, M_β = 10   (c) M = 100
Figure 1: Comparison between models with shared and decoupled basis. (a)(c) denote the models with shared basis of size M. (b) denotes the model of decoupled basis with size (M_α, M_β). In each figure, the red line denotes the ground truth; the blue circles denote the observations; the black line and the gray area denote the mean and variance in prediction, respectively.
for the mean and the covariance functions. Specifically, let M_α and M_β be the numbers of basis functions used to model the mean and the covariance functions, respectively. Assume M_α ≥ M_β. We show, when DGPs are used as a variational posterior [23], the associated variational inference problem can be solved by stochastic gradient ascent with space complexity O(N_m M_α + M_β²) and time complexity O(D N_m M_α + N_m M_β² + M_β³), where D is the input dimension. We name this algorithm SVDGP. As a result, we can choose M_α ≫ M_β, which allows us to keep the time and space complexity similar to previous methods (by choosing M_β = M) while greatly increasing accuracy. To the best of our knowledge, this is the first variational GP algorithm that admits linear complexity in M_α, without any assumption on the choice of kernel and likelihood.
While we design SVDGP for general likelihoods, in this paper we study its effectiveness in Gaussian
process regression (GPR) tasks. We consider this to be without loss of generality, as most of the
sparse variational GPR algorithms in the literature can be modified to handle general likelihoods
by introducing additional approximations (e.g. in Hensman et al. [10] and Sheth et al. [21]). Our
experimental results show that SVDGP significantly outperforms the existing techniques, achieving
higher variational lower bounds and lower prediction errors when evaluated on held-out test sets.
1.1 Related Work
Our framework is based on the variational inference problem proposed by Titsias [23], which treats
the inducing points as variational parameters to allow direct approximation of the true posterior.
This is in contrast to Seeger et al. [20], Snelson and Ghahramani [22], Quiñonero-Candela and Rasmussen [17], and Lázaro-Gredilla et al. [14], which all use inducing points as hyper-parameters
of a degenerate prior. While both approaches have the same time and space complexity, the latter
additionally introduces a large set of unregularized hyper-parameters and, therefore, is more likely to
suffer from over-fitting [1].
In Table 1, we compare SVDGP with recent GPR algorithms in terms of the assumptions made and the
time and space complexity. Each algorithm can be viewed as a special way to solve the maximization
of the variational lower bound (5), presented in Section 3.2. Our algorithm SVDGP generalizes the
previous approaches to allow the basis functions for the mean and the covariance to be decoupled, so
an approximate solution can be found by stochastic gradient ascent in linear complexity.
¹ The first three columns show the algorithms used to update the parameters: SGA/SNGA/SMA denote stochastic gradient/natural gradient/mirror ascent, and CG denotes batch nonlinear conjugate gradient ascent. The 4th and 5th columns indicate whether the bases for the mean and the covariance are strictly shared, and whether a variational posterior can be used. The last two columns list the time and space complexity.
To illustrate the idea, we consider a toy GPR example in Figure 1. The dataset contains 500 noisy
observations of a sinc function. Given the same training data, we conduct experiments with three
different GP models. Figure 1 (a)(c) show the results of the traditional coupled basis, which can be
solved by any of the variational algorithms listed in Table 1, and Figure 1 (b) shows the result using the
decoupled approach SVDGP. The sizes of basis and observations are selected to emulate a large dataset
scenario. We can observe SVDGP achieves a nice trade-off between prediction performance and
complexity: it achieves almost the same accuracy in prediction as the full-scale model in Figure 1(c)
and preserves the overall shape of the predictive variance.
In addition to the sparse algorithms above, some recent attempts aim to revive the non-parametric property of GPs by structured covariance functions. For example, Wilson and Nickisch [26] propose to space the inducing points on a multidimensional lattice, so the time and space complexities of using a product kernel become O(N + DM^{1+1/D}) and O(N + DM^{1+2/D}), respectively. However, because M = c^D, where c is the number of grid points per dimension, the overall complexity is exponential in D and infeasible for high-dimensional data. Another interesting approach by Hensman et al. [11] combines variational inference [23] and a sparse spectral approximation [14]. By equally spacing inducing points on the spectrum, they show the covariance matrix on the inducing points has diagonal plus low-rank structure. With MCMC, the algorithm can achieve complexity O(DNM). However, the proposed structure in [11] does not help to reduce the complexity when an approximate Gaussian posterior is favored or when the kernel hyper-parameters need to be updated.
Other kernel methods with linear complexity have been proposed using functional gradient descent
[13, 5]. However, because these methods use a model strictly the same size as the entire dataset, they
fail to estimate the predictive covariance, which requires Θ(N²) space complexity. Moreover, they cannot learn hyper-parameters online. The latter drawback also applies to greedy algorithms based on rank-one updates, e.g. the algorithm of Csató and Opper [4].
In contrast to these previous methods, our algorithm applies to all choices of inducing points,
likelihoods, and kernels, and we allow both variational parameters and hyper-parameters to adapt
online as more data are encountered.
2 Preliminaries
In this section, we briefly review the inference for GPs and the variational framework proposed
by Titsias [23]. For now, we will focus on GPR for simplicity of exposition. We will discuss the case
of general likelihoods in the next section when we introduce our framework, DGPs.
2.1 Inference for GPs
Let f : X → R be a latent function defined on a compact domain X ⊂ R^D. Here we assume a priori that f is distributed according to a Gaussian process GP(m, k). That is, for all x, x′ ∈ X, E[f(x)] = m(x) and Cov[f(x), f(x′)] = k(x, x′). In short, we write f ∼ GP(m, k).

A GP probabilistic model is composed of a likelihood p(y|f(x)) and a GP prior GP(m, k); in GPR, the likelihood is assumed to be Gaussian, i.e. p(y|f(x)) = N(y|f(x), σ²) with variance σ². Usually, the likelihood and the GP prior are parameterized by some hyper-parameters, which we summarize as θ. This includes, for example, the variance σ² and the parameters implicitly involved in defining k(x, x′). For notational convenience, and without loss of generality, we assume m(x) = 0 in the prior distribution and omit explicitly writing the dependence of distributions on θ.
Assume we are given a dataset D = {(x_n, y_n)}_{n=1}^N, in which x_n ∈ X and y_n ∼ p(y|f(x_n)).² Let X = {x_n}_{n=1}^N and y = (y_n)_{n=1}^N. Inference for GPs involves solving for the posterior p_{θ*}(f(x)|y) for any new input x ∈ X, where θ* = arg max_θ log p_θ(y). For example, in GPR, because the likelihood is Gaussian, the predictive posterior is also Gaussian with mean and covariance

m_{|y}(x) = k_{x,X} (K_X + σ²I)^{-1} y,    k_{|y}(x, x′) = k_{x,x′} − k_{x,X} (K_X + σ²I)^{-1} k_{X,x′},    (1)

and the hyper-parameter θ* can be found by nonlinear conjugate gradient ascent [18]:

max_θ log p_θ(y) = max_θ log N(y | 0, K_X + σ²I),    (2)

where k_{·,·} and K_{·,·} denote the covariances between the sets in the subscripts.³
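For concreteness, the following is a minimal NumPy sketch of the GPR predictive equations (1); the squared-exponential kernel and the hyper-parameter values are illustrative assumptions, not choices made in this paper.

import numpy as np

def se_kernel(A, B, lengthscale=1.0, signal_var=1.0):
    # k(x, x') = s^2 exp(-||x - x'||^2 / (2 l^2)), evaluated pairwise
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * sq / lengthscale ** 2)

def gpr_predict(X, y, X_star, noise_var=0.1):
    # eq. (1): posterior mean and covariance at the test inputs X_star
    K = se_kernel(X, X)
    K_star = se_kernel(X_star, X)
    L = np.linalg.cholesky(K + noise_var * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # (K_X + sigma^2 I)^{-1} y
    mean = K_star @ alpha
    V = np.linalg.solve(L, K_star.T)
    cov = se_kernel(X_star, X_star) - V.T @ V
    return mean, cov

The Cholesky-based solves make the costs noted below explicit: factorizing the N-by-N matrix dominates, with Θ(N³) time and Θ(N²) space.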
² In notation, we use boldface to distinguish finite-dimensional vectors (lower-case) and matrices (upper-case) that are used in computation from scalar and abstract mathematical objects.
One can show that these two functions, m_{|y}(x) and k_{|y}(x, x′), define a valid GP. Therefore, given observations y, we say f ∼ GP(m_{|y}, k_{|y}).

Although theoretically GPs are non-parametric and can model any function as N → ∞, in practice this is difficult. As the inference has time complexity Θ(N³) and space complexity Θ(N²), applying vanilla GPs to large datasets is infeasible.
2.2 Variational Inference with Sparse GPs
To scale GPs to large datasets, Titsias [23] introduced a scheme to compactly approximate the true posterior with a sparse GP, GP(m̂_{|y}, k̂_{|y}), defined by the statistics on M ≪ N function values {L_m f(x̃_m)}_{m=1}^M, where L_m is a bounded linear operator⁴ and x̃_m ∈ X. L_m f(·) is called an inducing function and x̃_m an inducing point. Common choices of L_m include the identity map (as used originally by Titsias [23]) and integrals to achieve better approximation or to consider multi-domain information [25, 7, 3]. Intuitively, we can think of {L_m f(x̃_m)}_{m=1}^M as a set of potentially indirect observations that capture salient information about the unknown function f.
Titsias [23] solves for GP(m̂_{|y}, k̂_{|y}) by variational inference. Let X̃ = {x̃_m}_{m=1}^M, and let f_X ∈ R^N and f_X̃ ∈ R^M be the (inducing) function values defined on X and X̃, respectively. Let p(f_X̃) be the prior given by GP(m, k) and define q(f_X̃) = N(f_X̃ | m̃, S̃) to be its variational posterior, where m̃ ∈ R^M and S̃ ∈ R^{M×M} are the mean and the covariance of the approximate posterior of f_X̃. Titsias [23] proposes to use q(f_X, f_X̃) = p(f_X | f_X̃) q(f_X̃) as the variational posterior to approximate p(f_X, f_X̃ | y) and to solve for q(f_X̃) together with the hyper-parameter θ through

max_{θ, X̃, m̃, S̃} L_θ(X̃, m̃, S̃) = max_{θ, X̃, m̃, S̃} ∫ q(f_X, f_X̃) log [ p(y|f_X) p(f_X|f_X̃) p(f_X̃) / q(f_X, f_X̃) ] df_X df_X̃,    (3)

where L_θ is a variational lower bound of log p_θ(y), p(f_X | f_X̃) = N(f_X | K_{X,X̃} K_X̃^{-1} f_X̃, K_X − K̂_X) is the conditional probability given in GP(m, k), and K̂_X = K_{X,X̃} K_X̃^{-1} K_{X̃,X}.
At first glance, the specific choice of variational posterior q(f_X, f_X̃) seems heuristic. However, although parameterized finitely, it resembles a full-fledged GP GP(m̂_{|y}, k̂_{|y}):

m̂_{|y}(x) = k_{x,X̃} K_X̃^{-1} m̃,    k̂_{|y}(x, x′) = k_{x,x′} + k_{x,X̃} K_X̃^{-1} (S̃ − K_X̃) K_X̃^{-1} k_{X̃,x′}.    (4)

This result is further studied in Matthews et al. [15] and Cheng and Boots [2], where it is shown that (3) is indeed minimizing a proper KL-divergence between Gaussian processes/measures.
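As an illustration, (4) can be evaluated with a few matrix solves. The sketch below reuses the se_kernel helper from the earlier GPR sketch; m_tilde and S_tilde are the variational mean and covariance on the inducing inputs, and the jitter term is a standard numerical safeguard added here for illustration.

def sparse_gp_predict(X_tilde, m_tilde, S_tilde, X_star, jitter=1e-8):
    # eq. (4): mean k_{x,X~} K_X~^{-1} m~ and covariance
    # k_{x,x'} + k_{x,X~} K_X~^{-1} (S~ - K_X~) K_X~^{-1} k_{X~,x'}
    Kmm = se_kernel(X_tilde, X_tilde) + jitter * np.eye(len(X_tilde))
    Ksm = se_kernel(X_star, X_tilde)
    A = np.linalg.solve(Kmm, Ksm.T)  # K_X~^{-1} k_{X~,x}
    mean = A.T @ m_tilde
    cov = se_kernel(X_star, X_star) + A.T @ (S_tilde - Kmm) @ A
    return mean, cov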
By comparing (2) and (3), one can show that the time and the space complexities now reduce to O(DNM + M²N + M³) and O(M² + MN), respectively, due to the low-rank structure of K̂_X [23]. To further reduce complexity, stochastic optimization, such as stochastic natural gradient ascent [9] or stochastic mirror descent [2], can be applied. In this case, N in the above asymptotic bounds would be replaced by the size of a mini-batch Nm. The above results can be modified to consider general likelihoods as in [21, 10].
3 Variational Inference with Decoupled Gaussian Processes
Despite the success of sparse GPs, the scalability issues of GPs persist. Although parameterizing a GP with inducing points/functions enables learning from large datasets, it also restricts the expressiveness of the model. As the time and the space complexities still scale in Θ(M³) and Θ(M²), we cannot learn or use a complex model with large M.

In this work, we show that these two complexity bounds, which have long accompanied GP models, are not strictly necessary, but are due to the tangled representation canonically used in the GP literature.
³ If the two sets are the same, only one is listed.
⁴ Here we use the notation L_m f loosely for compactness of writing. Rigorously, L_m is a bounded linear operator acting on m and k, not necessarily on all sample paths f.
To elucidate this, we adopt the dual representation of Cheng and Boots [2], which treats GPs as linear operators in an RKHS. But, unlike Cheng and Boots [2], we show how to decouple the basis representation of the mean and covariance functions of a GP and derive a new variational problem, which can be viewed as a generalization of (3). We show that this problem, with arbitrary likelihoods and kernels, can be solved by stochastic gradient ascent with linear complexity in Mα, the number of parameters used to specify the mean function for prediction.
In the following, we first review the results in [2]. We next introduce the decoupled representation,
DGPs, and its variational inference problem. Finally, we present SVDGP and discuss the case with
general likelihoods.
3.1 Gaussian Processes as Gaussian Measures
Let an RKHS H be a Hilbert space of functions with the reproducing property: for all x ∈ X there exists φ_x ∈ H such that for all f ∈ H, f(x) = φ_x^T f.⁵ A Gaussian process GP(m, k) is equivalent to a Gaussian measure μ on a Banach space B which possesses an RKHS H [2]:⁶ there is a mean functional μ ∈ H and a bounded positive semi-definite linear operator Σ : H → H, such that for any x, x′ ∈ X, φ_x, φ_{x′} ∈ H, we can write m(x) = φ_x^T μ and k(x, x′) = φ_x^T Σ φ_{x′}. The triple (B, μ, H) is known as an abstract Wiener space [8, 6], in which H is also called the Cameron-Martin space. Here the restriction that μ, Σ are RKHS objects is necessary, so the variational inference problem in the next section can be well-defined.

We call this the dual representation of a GP in the RKHS H (the mean function m and the covariance function k are realized as linear operators μ and Σ defined in H). With abuse of notation, we write N(f | μ, Σ) in short. This notation does not mean a GP has a Gaussian distribution in H, nor does it imply that the sample paths from GP(m, k) are necessarily in H. Precisely, B contains the sample paths of GP(m, k) and H is dense in B. In most applications of GP models, B is the Banach space of continuous functions C(X; Y) and H is the span of the covariance function. As a special case, if H is finite-dimensional, B and H coincide and μ becomes equivalent to a Gaussian distribution in a Euclidean space.

In relation to our previous notation in Section 2.1: suppose k(x, x′) = φ_x^T φ_{x′} and φ_x : X → H is a feature map to some Hilbert space H. Then we have assumed a priori that GP(m, k) = N(f | 0, I) is a normal Gaussian measure; that is, GP(m, k) samples functions f of the form f(x) = ∑_{l=1}^{dim H} φ_l(x) ε_l, where ε_l ∼ N(0, 1) are independent. Note if dim H = ∞, with probability one f is not in H, but fortunately H is large enough for us to approximate the sampled functions. In particular, it can be shown that the posterior GP(m_{|y}, k_{|y}) in GPR has a dual RKHS representation in the same RKHS as the prior GP [2].
3.2 Variational Inference in Gaussian Measures
Cheng and Boots [2] propose a dual formulation of (3) in terms of Gaussian measures⁷:

max_{q(f),θ} L_θ(q(f)) = max_{q(f),θ} ∫ q(f) log [ p_θ(y|f) p(f) / q(f) ] df = max_{q(f),θ} E_q[log p_θ(y|f)] − KL[q ∥ p],    (5)

where q(f) = N(f | μ̃, Σ̃) is a variational Gaussian measure and p(f) = N(f | 0, I) is a normal prior.
Its connection to the inducing points/functions in (3) can be summarized as follows [2, 3]: Define a linear operator Ψ_X̃ : R^M → H as a ↦ ∑_{m=1}^M a_m ψ_{x̃_m}, where ψ_{x̃_m} ∈ H is defined such that ψ_{x̃_m}^T μ̃ = E[L_m f(x̃_m)]. Then (3) and (5) are equivalent, if q(f) has a subspace parametrization,

μ̃ = Ψ_X̃ a,    Σ̃ = I + Ψ_X̃ A Ψ_X̃^T,    (6)

with a ∈ R^M and A ∈ R^{M×M} satisfying m̃ = K_X̃ a and S̃ = K_X̃ + K_X̃ A K_X̃. In other words, the variational inference algorithms in the literature all use a variational Gaussian measure in which μ̃ and Σ̃ are parametrized by the same basis {ψ_{x̃_m} | x̃_m ∈ X̃}_{m=1}^M.
⁵ To simplify the notation, we write φ_x^T f for ⟨f, φ_x⟩_H, and f^T L g for ⟨f, Lg⟩_H, where f, g ∈ H and L : H → H, even if H is infinite-dimensional.
⁶ Such H w.l.o.g. can be identified as the natural RKHS of the covariance function of a zero-mean prior GP.
⁷ We assume q(f) is absolutely continuous w.r.t. p(f), which is true as p(f) is non-degenerate. The integral denotes the expectation of log p_θ(y|f) + log (p(f)/q(f)) over q(f), and p(f)/q(f) denotes the Radon-Nikodym derivative.
Compared with (3), the formulation in (5) is neater: it follows the definition of the very basic
variational inference problem. This is not surprising, since GPs can be viewed as Bayesian linear
models in an infinite-dimensional space. Moreover, in (5) all hyper-parameters are isolated in the
likelihood p_θ(y|f), because the prior is fixed as a normal Gaussian measure.
3.3 Disentangling the GP Representation with DGPs
While Cheng and Boots [2] treat (5) as an equivalent form of (3), here we show that it is a generalization. By further inspecting (5), it is apparent that sharing the basis Ψ_X̃ between μ̃ and Σ̃ in (6) is not strictly necessary, since (5) seeks to optimize two linear operators, μ̃ and Σ̃. With this in mind, we propose a new parametrization that decouples the bases for μ̃ and Σ̃:

μ̃ = Ψ_α a,    Σ̃ = (I + Ψ_β B Ψ_β^T)^{-1},    (7)

where Ψ_α : R^{Mα} → H and Ψ_β : R^{Mβ} → H denote linear operators defined similarly to Ψ_X̃, and B ⪰ 0 ∈ R^{Mβ×Mβ}. Compared with (6), here we parametrize Σ̃ through its inversion with B so the condition that Σ̃ ≻ 0 can be easily realized as B ⪰ 0. This form agrees with the posterior covariance in GPR [2] and will give a posterior that is strictly less uncertain than the prior.
The decoupled subspace parametrization (7) corresponds to a DGP, GP(m̂_{|y}, k̂_{|y}), with mean and covariance functions⁸

m̂_{|y}(x) = k_{x,α} a,    k̂_{|y}(x, x′) = k_{x,x′} − k_{x,β} (B^{-1} + K_β)^{-1} k_{β,x′}.    (8)

While the structure of (8) looks similar to (4), directly replacing the basis X̃ in (4) with α and β is not trivial. Because the equations in (4) are derived from the traditional viewpoint of GPs as statistics on function values, the original optimization problem (3) is not defined if α ≠ β and, therefore, it is not clear how to learn a decoupled representation traditionally. Conversely, by using the dual RKHS representation, the objective function to learn (8) follows naturally from (5), as we will show next.
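Before turning to the algorithm, a minimal sketch of evaluating (8), in the same style as the earlier sketches (se_kernel reused; the explicit inverse of B is kept for clarity, and footnote 8 gives the numerically stable alternative):

def dgp_predict(alpha, a, beta, B, X_star):
    # eq. (8): the mean uses only the alpha basis, the covariance only the beta basis
    mean = se_kernel(X_star, alpha) @ a
    K_beta = se_kernel(beta, beta)
    K_sb = se_kernel(X_star, beta)
    middle = np.linalg.inv(B) + K_beta  # B^{-1} + K_beta
    cov = se_kernel(X_star, X_star) - K_sb @ np.linalg.solve(middle, K_sb.T)
    return mean, cov

Note that the O(Mβ³) work is confined to the covariance; the mean costs only Mα kernel evaluations per test point, which is what makes Mα ≫ Mβ affordable.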
3.4 SVDGP: Algorithm and Analysis
Substituting the decoupled subspace parametrization (7) into the variational inference problem (5) results in a numerical optimization problem, max_{q(f),θ} E_q[log p_θ(y|f)] − KL[q ∥ p], with

KL[q ∥ p] = (1/2) a^T K_α a + (1/2) log |I + K_β B| + (1/2) tr( K_β (B^{-1} + K_β)^{-1} ),    (9)

E_q[log p_θ(y|f)] = ∑_{n=1}^N E_{q(f(x_n))}[ log p_θ(y_n | f(x_n)) ],    (10)

where each expectation is over a scalar Gaussian q(f(x_n)) given by (8) as functions of (a, α) and (B, β). Our objective function contains [10] as a special case, which assumes α = β = X̃. In addition, we note that Hensman et al. [10] indirectly parametrize the posterior by m̃ and S̃ = LL^T, whereas we parametrize directly by (6) with a for scalability and B = LL^T for better stability (which always reduces the uncertainty in the posterior compared with the prior).
We notice that (a, α) and (B, β) are completely decoupled in (9) and potentially combined again in (10). In particular, if p_θ(y_n | f(x_n)) is Gaussian as in GPR, we have an additional decoupling, i.e. L_θ(a, B, α, β) = F_θ(a, α) + G_θ(B, β) for some F_θ(a, α) and G_θ(B, β). Intuitively, the optimization over (a, α) aims to minimize the fitting error, and the optimization over (B, β) aims to memorize the samples encountered so far; the mean and the covariance functions only interact indirectly through the optimization of the hyper-parameter θ.

One salient feature of SVDGP is that it tends to overestimate, rather than underestimate, the variance when we select Mα ≥ Mβ. This is inherited from the non-degeneracy property of the variational framework [23] and can be seen in the toy example in Figure 1.
⁸ In practice, we can parametrize B = LL^T with a Cholesky factor L ∈ R^{Mβ×Mβ} so the problem is unconstrained. The required terms in (8) and later in (9) can be stably computed as (B^{-1} + K_β)^{-1} = L H^{-1} L^T and log |I + K_β B| = log |H|, where H = I + L^T K_β L.
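A minimal sketch of the stable computation described in this footnote, assuming K_beta is the kernel matrix on the covariance basis and L is the Cholesky factor of B (names are illustrative):

def stable_terms(K_beta, L):
    # H = I + L^T K_beta L
    H = np.eye(L.shape[1]) + L.T @ K_beta @ L
    # (B^{-1} + K_beta)^{-1} = L H^{-1} L^T
    inv_term = L @ np.linalg.solve(H, L.T)
    # log |I + K_beta B| = log |H|
    _, logdet = np.linalg.slogdet(H)
    return inv_term, logdet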
Algorithm 1 Online Learning with DGPs
Parameters: Mα, Mβ, Nm, Ñ
Input: model M(a, B, α, β, θ), dataset D
1: θ₀ ← initializeHyperparameters( sampleMinibatch(D, Nm) )
2: for t = 1 . . . T do
3:   D_t ← sampleMinibatch(D, Nm)
4:   M.addBasis(D_t, Ñ, Mα, Mβ)
5:   M.updateModel(D_t, t)
6: end for
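A minimal Python sketch of this loop is given below; the model object and every helper named here are hypothetical placeholders standing in for the operations of Algorithm 1, not an actual API.

def online_dgp(model, D, T, Nm, N_new, M_alpha, M_beta):
    # heuristic initialization (e.g. median trick) from the first mini-batch;
    # initialize_hyperparameters and sample_minibatch are assumed helpers
    model.set_hyperparameters(initialize_hyperparameters(sample_minibatch(D, Nm)))
    for t in range(1, T + 1):
        Dt = sample_minibatch(D, Nm)
        # grow the variational bases with up to N_new new points, capped at
        # M_alpha mean-basis and M_beta covariance-basis functions
        model.add_basis(Dt, N_new, M_alpha, M_beta)
        # one stochastic update of variational and hyper-parameters
        model.update_model(Dt, t)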
In the extreme case when Mβ = 0, the covariance in (8) becomes the same as the prior; moreover, the objective function of SVDGP becomes similar to that of kernel methods (exactly the same as kernel ridge regression when the likelihood is Gaussian). The additional inclusion of expected log-likelihoods here allows SVDGP to learn the hyper-parameters in a unified framework.
SVDGP solves the above optimization problem by stochastic gradient ascent. Here we purposefully ignore specific details of p_θ(y|f) to emphasize that SVDGP can be applied to general likelihoods, as it only requires unbiased first-order information, which e.g. can be found in [21]. In addition to having a more adaptive representation, the main benefit of SVDGP is that the computation of an unbiased gradient requires only linear complexity in Mα, as shown below (see Appendix A for details).
KL-Divergence Assume |α| = O(DMα) and |β| = O(DMβ). By (9), one can show that ∇_a KL[q ∥ p] = K_α a and ∇_B KL[q ∥ p] = (1/2) (I + K_β B)^{-1} K_β B K_β (I + B K_β)^{-1}. Therefore, the time complexity to compute ∇_a KL[q ∥ p] can be reduced to O(Nm Mα) if we sample over the columns of K_α with a mini-batch of size Nm. By contrast, the time complexity to compute ∇_B KL[q ∥ p] will always be Θ(Mβ³) and cannot be further reduced, regardless of the parametrization of B.⁹ The gradients with respect to α and β can be derived similarly and have time complexity O(DNm Mα) and O(DMβ² + Mβ³), respectively.
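For concreteness, a sketch of the column-subsampled estimate of ∇_a KL[q ∥ p] = K_α a discussed above; rescaling by Mα/Nm makes the estimate unbiased (se_kernel is reused from the earlier sketches, and the sampling scheme is our illustrative reading of the argument):

def grad_a_kl_estimate(alpha, a, Nm, rng):
    # unbiased estimate of K_alpha a from Nm sampled columns of K_alpha
    M_alpha = len(a)
    idx = rng.choice(M_alpha, size=Nm, replace=False)
    cols = se_kernel(alpha, alpha[idx])  # M_alpha x Nm block of columns
    return (M_alpha / Nm) * cols @ a[idx]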
Expected Log-Likelihood Let m̄(a, α) ∈ R^N and s̄(B, β) ∈ R^N be the vectors of the mean and covariance of the scalar Gaussians q(f(x_n)) for n ∈ {1, . . . , N}. As (10) is a sum over N terms, by sampling with a mini-batch of size Nm, an unbiased gradient of (10) with respect to (θ, m̄, s̄) can be computed in O(Nm). To compute the full gradient with respect to (a, B, α, β), we compute the derivative of m̄ and s̄ with respect to (a, B, α, β) and then apply the chain rule. These steps take O(DNm Mα) and O(DNm Mβ + Nm Mβ² + Mβ³) for (a, α) and (B, β), respectively.
The above analysis shows that the curse of dimensionality in GPs originates in the covariance function. For space complexity, the decoupled parametrization (7) requires memory in O(Nm Mα + Mβ²); for time complexity, an unbiased gradient with respect to (a, α) can be computed in O(DNm Mα), but that with respect to (B, β) has time complexity Θ(DNm Mβ + Nm Mβ² + Mβ³). This motivates choosing Mβ = O(M) and Mα in O(Mβ²) or O(Mβ³), which maintains the same complexity as previous variational techniques but greatly improves the prediction performance.
4 Experimental Results

We compare our new algorithm, SVDGP, with the state-of-the-art incremental algorithms for sparse variational GPR, SVI [9] and iVSGPR [2], as well as the classical GPR and the batch algorithm VSGPR [23]. As discussed in Section 1.1, these methods can be viewed as different ways to optimize (5).
Therefore, in addition to the normalized mean square error (nMSE) [18] in prediction, we report the performance in the variational lower bound (VLB) (5), which also captures the quality of the predictive variance and hyper-parameter learning.¹⁰ These two metrics are evaluated on held-out test sets in all of our experimental domains.
⁹ Due to K_β, the complexity would remain as O(Mβ³) even if B is constrained to be diagonal.
¹⁰ The exact marginal likelihood is computationally infeasible to evaluate for our large model.
Table 2: Experimental results of KUKA1 and MUJOCO1 after 2,000 iterations (mean and std over all outputs).

Variational Lower Bound (10⁵):
                  SVDGP     SVI      iVSGPR    VSGPR    GPR
KUKA1    mean     1.262     0.391    0.649     0.472    -5.335
         std      0.195     0.076    0.201     0.265    7.777
MUJOCO1  mean     6.007     2.178    4.543     2.822    -10312.727
         std      0.673     0.692    0.898     0.871    22679.778

Prediction Error (nMSE):
                  SVDGP     SVI      iVSGPR    VSGPR    GPR
KUKA1    mean     0.037     0.169    0.128     0.139    0.231
         std      0.013     0.025    0.033     0.026    0.045
MUJOCO1  mean     0.072     0.163    0.099     0.118    0.213
         std      0.013     0.053    0.026     0.016    0.061
Algorithm 1 summarizes the online learning procedure used by all stochastic algorithms,¹¹ where each learner has to optimize all the parameters on-the-fly using i.i.d. data. The hyper-parameters are first initialized heuristically by the median trick using the first mini-batch. We incrementally build up the variational posterior by including Ñ ≤ Nm observations from each mini-batch as the initialization of new variational basis functions. Then all the hyper-parameters and the variational parameters are updated online. These steps are repeated for T iterations.

For all the algorithms, we assume the prior covariance is defined by the SE-ARD kernel [18] and we use the generalized SE-ARD kernel [2] as the inducing functions in the variational posterior (see Appendix B for details). We note that all algorithms in comparison use the same kernel and optimize both the variational parameters (including inducing points) and the hyper-parameters.
In particular, we implement SGA by ADAM [12] (with default parameters β₁ = 0.9 and β₂ = 0.999). The step-size for each stochastic algorithm is scheduled according to γ_t = γ₀ (1 + 0.1 t)^{-1}, where γ₀ ∈ {10^{-1}, 10^{-2}, 10^{-3}} is selected manually for each algorithm to maximize the improvement in the objective function after the first 100 iterations. We test each stochastic algorithm for T = 2000 iterations with mini-batches of size Nm = 1024 and increment size Ñ = 128. Finally, the model sizes used in the experiments are as follows: Mα = 128² and Mβ = 128 for SVDGP; M = 1024 for SVI; M = 256 for iVSGPR; M = 1024, N = 4096 for VSGPR; and N = 1024 for GPR. These settings share a similar order of time complexity in our current Matlab implementation.
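For reference, a minimal sketch of the step-size schedule as reconstructed from the text above (the constant 0.1 and the form of the decay are taken from that passage):

def step_size(t, gamma0):
    # gamma_t = gamma_0 * (1 + 0.1 t)^{-1}
    return gamma0 / (1.0 + 0.1 * t)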
4.1 Datasets
Inverse Dynamics of KUKA Robotic Arm This dataset records the inverse dynamics of a KUKA arm performing rhythmic motions at various speeds [16]. The original dataset consists of two parts, KUKA1 and KUKA2, each of which has 17,560 offline samples and 180,360 online samples with 28 attributes and 7 outputs. In the experiment, we mix the online and the offline data and then split 90% as training data (178,128 instances) and 10% as testing data (19,792 instances) to satisfy the i.i.d. assumption.
Walking MuJoCo MuJoCo (Multi-Joint dynamics with Contact) is a physics engine for research in robotics, graphics, and animation, created by [24]. In this experiment, we gather 1,000 walking trajectories by running TRPO [19]. In each time frame, the MuJoCo transition dynamics have a 23-dimensional input and a 17-dimensional output. We consider two regression problems to predict 9 of the 17 outputs from the input¹²: MUJOCO1, which maps the input of the current frame (23 dimensions) to the output, and MUJOCO2, which maps the inputs of the current and the previous frames (46 dimensions) to the output. In each problem, we randomly select 90% of the data as training data (842,745 instances) and 10% as test data (93,608 instances).
4.2 Results
We summarize part of the experimental results in Table 2 in terms of nMSE in prediction and VLB. While each output is treated independently during learning, Table 2 presents the mean and the standard deviation over all the outputs, as the selected metrics are normalized. For the complete experimental results, please refer to Appendix C.

We observe that SVDGP consistently outperforms the other approaches with much higher VLBs and much lower prediction errors; SVDGP also has smaller standard deviations. These results validate our initial hypothesis that adopting a large set of basis functions for the mean can help when modeling complicated functions.
¹¹ The algorithms differ only in whether the bases are shared and how the model is updated (see Table 1).
¹² Because of the structure of the MuJoCo dynamics, the remaining 8 outputs can be trivially known from the input.
Figure 2: An example of online learning results (the 9th output of the MUJOCO1 dataset); panels: (a) sample complexity, (b) time complexity. The blue, red, and yellow lines denote SVDGP, SVI, and iVSGPR, respectively.
iVSGPR has the next best result after SVDGP, despite using a basis size of
256, much smaller than that of 1,024 in SVI, VSGPR, and GPR. Similar to SVDGP, iVSGPR also
generalizes better than the batch algorithms VSGPR and GPR, which only have access to a smaller set
of training data and are more prone to over-fitting. By contrast, the performance of SVI is surprisingly
worse than VSGPR. We conjecture this might be due to the fact that the hyper-parameters and the
inducing points/functions are only crudely initialized in online learning. We additionally find that the
stability of SVI is more sensitive to the choice of step size than other methods. This might explain
why in [9, 2] batch data was used to initialize the hyper-parameters and the learning rate to update
the hyper-parameters was selected to be much smaller than that for stochastic natural gradient ascent.
To further investigate the properties of different stochastic approximations, we show the change of
VLB and the prediction error over iterations and time in Figure 2. Overall, whereas iVSGPR and SVI share similar convergence rates, the behavior of SVDGP is different. We see that iVSGPR converges the fastest, both in time and sample complexity. Afterwards, SVDGP starts to improve faster and surpasses the other two methods. From Figure 2, we can also observe that although SVI has similar convergence to iVSGPR, it slows down earlier and therefore achieves a worse result. These phenomena are observed in multiple experiments.
5 Conclusion

We propose a novel, fully-differentiable framework, Decoupled Gaussian Processes (DGPs), for large-scale GP problems. By decoupling the representation, we derive a variational inference problem that can be solved with stochastic gradients in linear time and space complexity. Compared with existing algorithms, SVDGP can adopt a much larger set of basis functions to predict more accurately. Empirically, SVDGP significantly outperforms state-of-the-art variational sparse GPR algorithms in multiple regression tasks. These encouraging experimental results motivate further application of SVDGP to end-to-end learning with neural networks in large-scale, complex real-world problems.
Acknowledgments
This work was supported in part by NSF NRI award 1637758.
References
[1] Matthias Bauer, Mark van der Wilk, and Carl Edward Rasmussen. Understanding probabilistic sparse Gaussian process approximations. In Advances in Neural Information Processing Systems, pages 1525–1533, 2016.
[2] Ching-An Cheng and Byron Boots. Incremental variational sparse Gaussian process regression. In Advances in Neural Information Processing Systems, pages 4403–4411, 2016.
[3] Ching-An Cheng and Han-Pang Huang. Learn the Lagrangian: A vector-valued RKHS approach to identifying Lagrangian systems. IEEE Transactions on Cybernetics, 46(12):3247–3258, 2016.
[4] Lehel Csató and Manfred Opper. Sparse representation for Gaussian process models. Advances in Neural Information Processing Systems, pages 444–450, 2001.
[5] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina F Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
[6] Nathaniel Eldredge. Analysis and probability on infinite-dimensional spaces. arXiv preprint arXiv:1607.03591, 2016.
[7] Anibal Figueiras-Vidal and Miguel Lázaro-Gredilla. Inter-domain Gaussian processes for sparse inference using inducing features. In Advances in Neural Information Processing Systems, pages 1087–1095, 2009.
[8] Leonard Gross. Abstract Wiener spaces. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Volume 2: Contributions to Probability Theory, Part 1, pages 31–42. University of California Press, 1967.
[9] James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. arXiv preprint arXiv:1309.6835, 2013.
[10] James Hensman, Alexander G. de G. Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In International Conference on Artificial Intelligence and Statistics, 2015.
[11] James Hensman, Nicolas Durrande, and Arno Solin. Variational Fourier features for Gaussian processes. arXiv preprint arXiv:1611.06740, 2016.
[12] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[13] Jyrki Kivinen, Alexander J Smola, and Robert C Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165–2176, 2004.
[14] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11(Jun):1865–1881, 2010.
[15] Alexander G. de G. Matthews, James Hensman, Richard E. Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics, 2016.
[16] Franziska Meier, Philipp Hennig, and Stefan Schaal. Incremental local Gaussian regression. In Advances in Neural Information Processing Systems, pages 972–980, 2014.
[17] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6:1939–1959, 2005.
[18] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. 2006.
[19] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, pages 1889–1897, 2015.
[20] Matthias Seeger, Christopher Williams, and Neil Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Artificial Intelligence and Statistics 9, number EPFL-CONF-161318, 2003.
[21] Rishit Sheth, Yuyang Wang, and Roni Khardon. Sparse variational inference for generalized GP models. In Proceedings of the 32nd International Conference on Machine Learning, pages 1302–1311, 2015.
[22] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, pages 1257–1264, 2005.
[23] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[24] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
[25] Christian Walder, Kwang In Kim, and Bernhard Schölkopf. Sparse multiscale Gaussian process regression. In Proceedings of the 25th International Conference on Machine Learning, pages 1112–1119. ACM, 2008.
[26] Andrew Wilson and Hannes Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In Proceedings of the 32nd International Conference on Machine Learning, pages 1775–1784, 2015.
K-Medoids for K-Means Seeding
James Newling
Idiap Research Institute and École polytechnique fédérale de Lausanne
[email protected]

François Fleuret
Idiap Research Institute and École polytechnique fédérale de Lausanne
[email protected]
Abstract

We show experimentally that the algorithm clarans of Ng and Han (1994) finds better K-medoids solutions than the Voronoi iteration algorithm of Hastie et al. (2001). This finding, along with the similarity between the Voronoi iteration algorithm and Lloyd's K-means algorithm, motivates us to use clarans as a K-means initializer. We show that clarans outperforms other algorithms on 23/23 datasets with a mean decrease over k-means++ (Arthur and Vassilvitskii, 2007) of 30% for initialization mean squared error (MSE) and 3% for final MSE. We introduce algorithmic improvements to clarans which improve its complexity and runtime, making it a viable initialization scheme for large datasets.
1 Introduction

1.1 K-means and K-medoids
The K-means problem is to find a partitioning of points, so as to minimize the sum of the squares
of the distances from points to their assigned partition's mean. In general this problem is NP-hard,
and in practice approximation algorithms are used. The most popular of these is Lloyd?s algorithm,
henceforth lloyd, which alternates between freezing centers and assignments, while updating the
other. Specifically, in the assignment step, for each point the nearest (frozen) center is determined.
Then during the update step, each center is set to the mean of points assigned to it. lloyd has
applications in data compression, data classification, density estimation and many other areas, and
was recognised in Wu et al. (2008) as one of the top-10 algorithms in data mining.
The closely related K-medoids problem differs in that the center of a cluster is its medoid, not its
mean, where the medoid is the cluster member which minimizes the sum of dissimilarities between
itself and other cluster members. In this paper, as our application is K-means initialization, we focus
on the case where dissimilarity is squared distance, although K-medoids generalizes to non-metric
spaces and arbitrary dissimilarity measures, as discussed in §SM-A.
By modifying the update step in lloyd to compute medoids instead of means, a viable K-medoids
algorithm is obtained. This algorithm has been proposed at least twice (Hastie et al., 2001; Park and
Jun, 2009) and is often referred to as the Voronoi iteration algorithm. We refer to it as medlloyd.
Another K-medoids algorithm is clarans of Ng and Han (1994, 2002), for which there is no direct
K-means equivalent. It works by randomly proposing swaps between medoids and non-medoids,
accepting only those which decrease MSE. We will discuss how clarans works, what advantages
it has over medlloyd, and our motivation for using it for K-means initialization, in §2 and §SM-A.
1.2 K-means initialization
lloyd is a local algorithm, in that far removed centers and points do not directly influence each
other. This property contributes to lloyd's tendency to terminate in poor minima if not well initialized.
Figure 1: N = 3 points, to be partitioned into K = 2 clusters with lloyd, with two possible initializations (top) and their solutions (bottom). Colors denote clusters, stars denote samples, rings denote means. Initialization with clarans enables jumping between the initializations on the left and right, ensuring that when lloyd eventually runs it avoids the local minimum on the left.
Good initialization is key to guaranteeing that the refinement performed by lloyd is done in
the vicinity of a good solution; an example showing this is given in Figure 1.
In the comparative study of K-means initialization methods of Celebi et al. (2013), 8 schemes
are tested across a wide range of datasets. Comparison is done in terms of speed (time to run
initialization+lloyd) and energy (final MSE). They find that 3/8 schemes should be avoided, due to
poor performance. One of these schemes is uniform initialization, henceforth uni, where K samples
are randomly selected to initialize centers. Of the remaining 5/8 schemes, there is no clear best, with
results varying across datasets, but the authors suggest that the algorithm of Bradley and Fayyad
(1998), henceforth bf, is a good choice.
The bf scheme of Bradley and Fayyad (1998) works as follows. Samples are separated into J
(= 10) partitions. lloyd with uni initialization is performed on each of the partitions, providing J
centroid sets of size K. A superset of JK elements is created by concatenating the J center sets.
lloyd is then run J times on the superset, initialized at each run with a distinct center set. The
center set which obtains the lowest MSE on the superset is taken as the final initializer for the final
run of lloyd on all N samples.
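A minimal sketch of bf as described above; kmeans (running lloyd from a given initialization) and mse are assumed helpers with illustrative signatures.

import numpy as np

def bf_init(X, K, J=10, rng=np.random):
    # run lloyd with uniform seeding on each of J partitions of the data
    parts = np.array_split(rng.permutation(len(X)), J)
    center_sets = [kmeans(X[p], K, init=X[p][rng.choice(len(p), K, replace=False)])
                   for p in parts]
    superset = np.concatenate(center_sets)  # J*K candidate centers
    # refine each candidate set on the superset, keep the lowest-MSE result
    best, best_mse = None, np.inf
    for C in center_sets:
        refined = kmeans(superset, K, init=C)
        m = mse(superset, refined)
        if m < best_mse:
            best, best_mse = refined, m
    return best  # used to seed the final run of lloyd on all N samples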
Probably the most widely implemented initialization scheme other than uni is k-means++ (Arthur
and Vassilvitskii, 2007), henceforth km++. Its popularity stems from its simplicity, low computational complexity, theoretical guarantees, and strong experimental support. The algorithm works by
sequentially selecting K seeding samples. At each iteration, a sample is selected with probability
proportional to the square of its distance to the nearest previously selected sample.
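A minimal sketch of km++ seeding; the sampling step is exactly the squared-distance-proportional rule described above.

import numpy as np

def kmeanspp_init(X, K, rng=np.random):
    centers = [X[rng.randint(len(X))]]   # first seed: uniform
    d2 = ((X - centers[0]) ** 2).sum(1)  # squared distance to nearest seed
    for _ in range(K - 1):
        idx = rng.choice(len(X), p=d2 / d2.sum())
        centers.append(X[idx])
        d2 = np.minimum(d2, ((X - X[idx]) ** 2).sum(1))
    return np.stack(centers)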
The work of Bachem et al. (2016) focused on developing sampling schemes to accelerate km++,
while maintaining its theoretical guarantees. Their algorithm afk-mc2 results in as good initializations as km++, while using only a small fraction of the KN distance calculations required by km++.
This reduction is important for massive datasets.
In none of the 4 schemes discussed is a center ever replaced once selected. Such refinement is only
performed during the running of lloyd. In this paper we show that performing refinement during
initialization with clarans, before the final lloyd refinement, significantly lowers K-means MSEs.
1.3 Our contribution and paper summary
We compare the K-medoids algorithms clarans and medlloyd, finding that clarans finds better
local minima, in §3 and §SM-A. We offer an explanation for this, which motivates the use of clarans for initializing lloyd (Figure 2). We discuss the complexity of clarans, and briefly show how it can be optimised in §4, with a full presentation of acceleration techniques in §SM-D. Most significantly, we compare clarans with methods uni, bf, km++ and afk-mc2 for K-means initialization, and show that it provides significant reductions in initialization and final MSEs in §5. We thus provide a conceptually simple initialization scheme which is demonstrably better than
km++, which has been the de facto initialization method for one decade now.
Our source code at https://github.com/idiap/zentas is available under an open source license. It consists of a C++ library with Python interface, with several examples for diverse data types
(sequence data, sparse and dense vectors), metrics (Levenshtein, l1 , etc.) and potentials (quadratic
as in K-means, logarithmic, etc.).
1.4 Other Related Works
Alternatives to lloyd have been considered which resemble the swapping approach of clarans.
One is by Hartigan (1975), where points are randomly selected and reassigned. Telgarsky and
Vattani (2010) show how this heuristic can result in better clustering when there are few points per
cluster.
The work most similar to clarans in the K-means setting is that of Kanungo et al. (2002), where
it is indirectly shown that clarans finds a solution within a factor 25 of the optimal K-medoids
clustering. The local search approximation algorithm they propose is a hybrid of clarans and
lloyd, alternating between the two, with sampling from a kd-tree during the clarans-like step.
Their source code includes an implementation of an algorithm they call "Swap", which is exactly the
clarans algorithm of Ng and Han (1994).
2 Two K-medoids algorithms
Like km++ and afk-mc2 , K-medoids generalizes beyond the standard K-means setting of Euclidean
metric with quadratic potential, but we consider only the standard setting in the main body of this
paper, referring the reader to SM-A for a more general presentation. In Algorithm 1, medlloyd is
presented. It is essentially lloyd with the update step modified for K-medoids.
Algorithm 1 two-step iterative medlloyd algorithm (in a vector space with quadratic potential).
1: Initialize center indices c(k), as distinct elements of {1, . . . , N}, where index k ∈ {1, . . . , K}.
2: do
3:   for i = 1 : N do
4:     a(i) ← arg min_{k ∈ {1,...,K}} ||x(i) − x(c(k))||²
5:   end for
6:   for k = 1 : K do
7:     c(k) ←
8:       arg min_{i : a(i)=k} Σ_{i′ : a(i′)=k} ||x(i) − x(i′)||²
9:   end for
10: while c(k) changed for at least one k

Algorithm 2 swap-based clarans algorithm (in a vector space and with quadratic potential).
1: n_r ← 0
2: Initialize center indices C ⊂ {1, . . . , N}
3: ψ⁻ ← Σ_{i=1}^N min_{i′ ∈ C} ||x(i) − x(i′)||²
4: while n_r ≤ N_r do
5:   sample i⁻ ∈ C and i⁺ ∈ {1, . . . , N} \ C
6:   ψ⁺ ←
7:     Σ_{i=1}^N min_{i′ ∈ C\{i⁻} ∪ {i⁺}} ||x(i) − x(i′)||²
8:   if ψ⁺ < ψ⁻ then
9:     C ← C \ {i⁻} ∪ {i⁺}
10:    n_r ← 0, ψ⁻ ← ψ⁺
11:  else
12:    n_r ← n_r + 1
13:  end if
14: end while
In Algorithm 2, clarans is presented. Following a random initialization of the K centers (line 2), it proceeds by repeatedly proposing a random swap (line 5) between a center (i⁻) and a non-center (i⁺). If a swap results in a reduction in energy (line 8), it is implemented (line 9). clarans terminates when N_r consecutive proposals have been rejected. Alternative stopping criteria could be the number of accepted swaps, the rate of energy decrease, or time. We use N_r = K² throughout, as this makes proposals between all pairs of clusters probable, assuming balanced cluster sizes.
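The following is a minimal Python sketch of Algorithm 2 for the vector-space, quadratic-potential setting, using the naive all-distances evaluation (level -2 in the terminology of §4):

import numpy as np

def clarans(X, K, N_r, rng=np.random):
    N = len(X)
    C = list(rng.choice(N, size=K, replace=False))  # line 2: medoid indices

    def energy(idxs):
        d2 = ((X[:, None, :] - X[idxs][None, :, :]) ** 2).sum(-1)
        return d2.min(axis=1).sum()

    psi = energy(C)                                 # line 3
    nr = 0
    while nr <= N_r:                                # line 4
        i_minus = C[rng.randint(K)]                 # line 5: propose a swap
        i_plus = rng.randint(N)
        if i_plus in C:
            continue
        proposal = [i for i in C if i != i_minus] + [i_plus]
        psi_plus = energy(proposal)                 # lines 6-7
        if psi_plus < psi:                          # line 8: energy decreased
            C, psi, nr = proposal, psi_plus, 0      # lines 9-10: accept
        else:
            nr += 1                                 # line 12: reject
    return X[C]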
clarans was not the first swap-based K-medoids algorithm, being preceded by pam and clara of
Kaufman and Rousseeuw (1990). It can however provide better complexity than other swap-based
algorithms if certain optimisations are used, as discussed in §4.

When updating centers in lloyd and medlloyd, assignments are frozen. In contrast, with swap-based algorithms such as clarans, assignments change along with the medoid index being changed (i⁻ to i⁺). As a consequence, swap-based algorithms look one step further ahead when computing MSEs, which helps them escape from the minima of medlloyd. This is described in Figure 2.
3 A Simple Simulation Study for Illustration
We generate simple 2-D data, and compare medlloyd, clarans, and baseline K-means initializers
km++ and uni, in terms of MSEs. The data is described in Figure 3, where sample initializations
are also presented. Results in Figure 4 show that clarans provides significantly lower MSEs than
medlloyd, an observation which generalizes across data types (sequence, sparse, etc.), metrics (Levenshtein, l∞, etc.), and potentials (exponential, logarithmic, etc.), as shown in Appendix SM-A.
Figure 2: Example with N = 7 samples x(1), . . . , x(7), of which K = 2 are medoids. Current medoid indices are 1 and 4. Using medlloyd, this is a local minimum, with final clusters {x(1)} and the rest. clarans may consider swap (i⁻, i⁺) = (4, 7) and so escape to a lower MSE. The key to swap-based algorithms is that cluster assignments are never frozen. Specifically, when considering the swap of x(4) and x(7), clarans assigns x(2), x(3) and x(4) to the cluster of x(1) before computing the new MSE.
Figure 3: (Column 1) Simulated data in R². For each cluster center g ∈ {0, . . . , 19}², 100 points are drawn from N(g, σ²I), illustrated here for σ ∈ {2⁻⁶, 2⁻⁴, 2⁻²}. (Columns 2, 3, 4, 5) Sample initializations with uni, medlloyd, km++ and clarans. We observe "holes" for methods uni, medlloyd and km++. clarans successfully fills holes by removing distant, underutilised centers. The spatial correlation of medlloyd's holes is due to its locality of updating.
4 Complexity and Accelerations
lloyd requires KN distance calculations to update K centers, assuming no acceleration technique
such as that of Elkan (2003) is used. The cost of several iterations of lloyd outweighs initialization
with any of uni, km++ and afk-mc2 . We ask if the same is true with clarans initialization, and
find that the answer depends on how clarans is implemented. clarans as presented in Ng and
Han (1994) is O(N 2 ) in computation and memory, making it unusable for large datasets. To make
clarans scalable, we have investigated ways of implementing it in O(N ) memory, and devised
optimisations which make its complexity equivalent to that of lloyd.
clarans consists of two main steps. The first is swap evaluation (line 6) and the second is swap
implementation (scope of if-statement at line 8). Proposing a good swap becomes less probable as
MSE decreases, thus as the number of swap implementations increases, the number of consecutive rejected proposals (n_r) is likely to grow large, as illustrated in Figure 5. This results in a larger fraction
of time being spent in the evaluation step.
Figure 4: Results on simulated data, comparing medlloyd, uni, km++ and clarans. For 400 values of σ ∈ [2⁻¹⁰, 2⁻¹], initialization (left) and final (right) MSEs relative to true cluster variances. For σ ∈ [2⁻⁵, 2⁻²] km++ never results in minimal MSE (MSE/σ² = 1), while clarans does for all σ. Initialization MSE with medlloyd is on average 4 times lower than with uni, but most of this improvement is regained when lloyd is subsequently run (final MSE/σ²).
Figure 5: The number of consecutive swap proposal rejections (evaluations) before one is accepted (implementations), for simulated data (§3) with σ = 2⁻⁴.
We will now discuss optimisations in order of increasing algorithmic complexity, presenting their computational complexities in terms of evaluation and implementation steps. The explanations here are high level, with algorithmic details and pseudocode deferred to §SM-D.
Level -2 To evaluate swaps (line 6), simply compute all KN distances.
Level -1 Keep track of nearest centers. Now to evaluate a swap, samples whose nearest center is x(i-) need distances to all K samples indexed by C \ {i-} ∪ {i+} computed in order to determine the new nearest. Samples whose nearest is not x(i-) only need the distance to x(i+) computed to determine their nearest, as either (1) their nearest is unchanged, or (2) it is x(i+).
Level 0 Also keep track of second nearest centers, as in the implementation of Ng and Han (1994), which recall is O(N²) in memory and computes all distances upfront. Doing so, nearest centers can be determined for all samples by computing distances to x(i+). If swap (i-, i+) is accepted, samples whose new nearest is x(i+) require K distance calculations to recompute second nearests. Thus from level -1 to 0, computation is transferred from evaluation to implementation, which is good, as implementation is less frequently performed, as illustrated in Figure 5.
Level 1 Also keep track, for each cluster center, of the distance to the furthest cluster member, as well as the maximum, over all cluster members, of the minimum distance to another center. Using the triangle inequality, one can then frequently eliminate computation for clusters which are unchanged by proposed swaps with just a single center-to-center distance calculation; one such test is sketched after the level descriptions below. Note that using the triangle inequality requires that the K-medoids dissimilarity is metric based, as is the case in the K-means initialization setting.
Level 2 Also keep track of center-to-center distances. This allows whole clusters to be tagged as unchanged by a swap, without computing any distances in the evaluation step.
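As an illustration of the level 1 idea, the sketch below shows one pruning test that the cached radius enables; it is a simplified reconstruction assuming a metric dissimilarity, not the exact battery of tests in the implementation.

```python
def cluster_unchanged_by_addition(center, radius, new_center, dist):
    """Level 1 style triangle-inequality test for a proposed new center x(i+).

    `radius` is the cached distance from `center` to its furthest member.
    For any member x: dist(x, new_center) >= dist(center, new_center) - radius,
    while x can only defect to the new center if dist(x, new_center) is below
    dist(x, center) <= radius. So a single center-to-center distance can rule
    the whole cluster out (assuming this cluster's own medoid is not removed).
    """
    return dist(center, new_center) >= 2.0 * radius
```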
We have also considered optimisations which, unlike levels -2 to 2, do not result in the exact same clustering as clarans, but provide additional acceleration. One such optimisation uses random subsampling to evaluate proposals, which helps significantly when N/K is large. Another optimisation which is effective during initial rounds is to not implement the first MSE-reducing swap found, but to rather continue searching for approximately as long as swap implementation takes, thus balancing time between searching (evaluation) and implementing swaps. Details can be found in §SM-D.3.
The computational complexities of these optimisations are in Table 1. Proofs of these complexities rely on there being O(N/K) samples changing their nearest or second nearest center during a swap. In other words, for any two clusters of sizes n1 and n2, we assume n1 = Θ(n2). Using level 2 complexities, we see that if a fraction p(C) of proposals reduce MSE, then the expected complexity is O(N(1 + 1/(p(C)K))). One cannot marginalise C out of the expectation, as C may have no MSE-reducing swaps, that is p(C) = 0. If p(C) is Ω(1/K), we obtain complexity O(N) per swap, which is equivalent to the O(KN) for K center updates of lloyd. In Table 2, we consider run times and distance calculation counts on simulated data at the various levels of optimisation.
5 Results
We first compare clarans with uni, km++, afk-mc² and bf on the first 23 publicly available datasets in Table 3 (datasets 1-23). As noted in Celebi et al. (2013), it is common practice to run initialization+lloyd several times and retain the solution with the lowest MSE. In Bachem et al. (2016) methods are run a fixed number of times, and mean MSEs are compared. However, when comparing minimum MSEs over several runs, one should take into account that methods vary in their time requirements.
level | 1 evaluation | 1 implementation | K² evaluations, K implementations | memory
 -2   | NK           | 1                | K³N                               | N
 -1   | N            | 1                | K²N                               | N
  0   | N            | N                | K²N                               | N
  1   | N/K + K      | N                | NK + K³                           | N
  2   | N/K          | N                | KN                                | N + K²

Table 1: The complexities at different levels of optimisation of evaluation and implementation, in terms of required distance calculations, and overall memory. We see at level 2 that to perform K² evaluations and K implementations is O(KN), equivalent to lloyd.
level       |  -2  |  -1  |   0  |   1  |   2
log₂(#dcs)  | 44.1 | 36.5 | 35.5 | 29.4 | 26.7
time [s]    |  -   |  -   |  407 | 19.2 | 15.6

Table 2: Total number of distance calculations (#dcs) and time required by clarans on simulation data of §3 with σ = 2^{-4} at different optimisation levels.
dataset   |  # |      N |  dim |    K | TL [s]
a1        |  1 |   3000 |    2 |   40 |   1.94
a2        |  2 |   5250 |    2 |   70 |   1.37
a3        |  3 |   7500 |    2 |  100 |   1.69
birch1    |  4 | 100000 |    2 |  200 |  21.13
birch2    |  5 | 100000 |    2 |  200 |  15.29
birch3    |  6 | 100000 |    2 |  200 |  16.38
ConfLong  |  7 | 164860 |    3 |   22 |  30.74
dim032    |  8 |   1024 |   32 |   32 |   1.13
dim064    |  9 |   1024 |   64 |   32 |   1.19
dim1024   | 10 |   1024 | 1024 |   32 |   7.68
europe    | 11 | 169308 |    2 | 1000 | 166.08
housec8   | 12 |  34112 |    3 |  400 |  18.71
KDD*      | 13 | 145751 |   74 |  200 | 998.83
mnist     | 14 |  10000 |  784 |  300 | 233.48
Mopsi     | 15 |  13467 |    2 |  100 |   2.14
rna*      | 16 |  20000 |    8 |  200 |   6.84
s1        | 17 |   5000 |    2 |   30 |   1.20
s2        | 18 |   5000 |    2 |   30 |   1.50
s3        | 19 |   5000 |    2 |   30 |   1.39
s4        | 20 |   5000 |    2 |   30 |   1.44
song*     | 21 |  20000 |   90 |  200 |  71.10
susy*     | 22 |  20000 |   18 |  200 |  24.50
yeast     | 23 |   1484 |    8 |   40 |   1.23

Table 3: The 23 datasets. Column 'TL' is the time allocated to run with each initialization scheme, so that no new runs start after TL elapsed seconds. The starred datasets are those used in Bachem et al. (2016); the remainder are available at https://cs.joensuu.fi/sipu/datasets.
Rather than run each method a fixed number of times, we therefore run each method as many times as possible in a given time limit, 'TL'. This dataset-dependent time limit, given by column TL in Table 3, is taken as 80× the time of a single run of km++ + lloyd. The numbers of runs completed in time TL by each method are in columns 1-5 of Table 4. Recall that our stopping criterion for clarans is K² consecutively rejected swap proposals. We have also experimented with stopping criteria based on run time and number of swaps implemented, but find that stopping based on the number of rejected swaps best guarantees convergence. We use K² rejections for simplicity, although we have found that fewer than K² are in general needed to obtain minimal MSEs.
We use the fast lloyd implementation accompanying Newling and Fleuret (2016) with the 'auto' flag set to select the best exact accelerated algorithm, and run until complete convergence. For initializations, we use our own C++/Cython implementation of level 2 optimised clarans, the implementation of afk-mc² of Bachem et al. (2016), and km++ and bf of Newling and Fleuret (2016). The objective of Bachem et al. (2016) was to prove and experimentally validate that afk-mc² produces initialization MSEs equivalent to those of km++, and as such lloyd was not run during experiments. We consider both initialization MSE, as in Bachem et al. (2016), and final MSE after lloyd has run. The latter is particularly important, as it is the objective we wish to minimize in the K-means problem.
In addition to considering initialization and final MSEs, we also distinguish between mean and
minimum MSEs. We believe the latter is important as it captures the varying time requirements,
and as mentioned it is common to run lloyd several times and retain the lowest MSE clustering. In
Table 4 we consider two MSEs, namely mean initialization MSE and minimum final MSE.
[Table 4 data grid: per-dataset counts of completed runs for uni, afk-mc², km++, bf and clarans, mean initialization MSEs and minimum final MSEs; see the caption below.]
Table 4: Summary of results on the 23 datasets (rows). Columns 1 to 5 contain the number of initialization+lloyd runs completed in time limit TL. Columns 6 to 14 contain MSEs relative to the mean initialization MSE of km++. Columns 6 to 9 are mean MSEs after initialization but before lloyd, and columns 10 to 14 are minimum MSEs after lloyd. The final row (gm) contains geometric means of all columns. clarans consistently obtains the lowest values across all MSE measurements, with a 30% lower initialization MSE than km++ and afk-mc², and a 3% lower final minimum MSE.
[Figure 6 graphics: paired bars of initialization (above) and final (below) MSEs for km++ and clarans on datasets 1-23.]
Figure 6: Initialization (above) and final (below) MSEs for km++ (left bars) and clarans (right bars), with minimum (1), mean (2) and mean + standard deviation (3) of MSE across all runs. For all initialization MSEs and most final MSEs, the lowest km++ MSE is several standard deviations higher than the mean clarans MSE.
5.1 Baseline performance
We briefly discuss findings related to algorithms uni, bf, afk-mc² and km++. Results in Table 4 corroborate the previously established finding that uni is vastly outperformed by km++, both in initialization and final MSEs. Table 4 results also agree with the finding of Bachem et al. (2016) that initialization MSEs with afk-mc² are indistinguishable from those of km++, and moreover that final MSEs are indistinguishable. We observe in our experiments that runs with km++ are faster than those with afk-mc² (columns 1 and 2 of Table 4). We attribute this to the fast blas-based km++ implementation of Newling and Fleuret (2016).
Our final baseline finding is that MSEs obtained with bf are in general no better than those with uni.
This is not in strict agreement with the findings of Celebi et al. (2013). We attribute this discrepancy
to the fact that experiments in Celebi et al. (2013) are in the low K regime (K < 50, N/K > 100).
Note that Table 4 does not contain initialization MSEs for bf, as bf does not initialize with data
points but with means of sub-samples, and it would thus not make sense to compare bf initialization
with the 4 seeding methods.
5.2 clarans performance
Having established that the best baselines are km++ and afk-mc², and that they provide clusterings of indistinguishable quality, we now focus on the central comparison of this paper, that between km++ and clarans. In Figure 6 we present bar plots summarising all runs on all 23 datasets. We observe a very low variance in the initialization MSEs of clarans. We speculatively hypothesize that clarans often finds a globally minimal initialization. Figure 6 shows that clarans provides significantly lower initialization MSEs than km++.
The final MSEs are also significantly better when initialization is done with clarans, although the gap in MSE between clarans and km++ is reduced when lloyd has run. Note, as seen in Table 4, that all 5 initializations for dataset 7 result in equally good clusterings.
As a supplementary experiment, we considered initialising with km++ and clarans in series, thus using the three-stage clustering km++ + clarans + lloyd. We find that this can be slightly faster than just clarans + lloyd, with identical MSEs. Results of this experiment are presented in §SM-I. We perform a final experiment measuring the dependence of the improvement on K in §SM-I, where we see that the improvement is most significant for large K.
6 Conclusion and Future Works
In this paper, we have demonstrated the effectiveness of the algorithm clarans at solving the k-medoids problem. We have described techniques for accelerating clarans, and most importantly shown that clarans works very effectively as an initializer for lloyd, outperforming other initialization schemes, such as km++, on 23 datasets.
An interesting direction for future work might be to develop further optimisations for clarans. One
idea could be to use importance sampling to rapidly obtain good estimates of post-swap energies.
Another might be to propose two swaps simultaneously, as considered in Kanungo et al. (2002),
which could potentially lead to even better solutions, although we have hypothesized that clarans
is already finding globally optimal initializations.
All source code is made available under a public license. It consists of generic C++ code which can be extended to various data types and metrics, compiling to a shared library with extensions in Cython for a Python interface. It can currently be found in the git repository https://github.com/idiap/zentas.
Acknowledgments
James Newling was funded by the Hasler Foundation under the grant 13018 MASH2.
References
Arthur, D. and Vassilvitskii, S. (2007). K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '07, pages 1027-1035, Philadelphia, PA, USA. Society for Industrial and Applied Mathematics.
Bachem, O., Lucic, M., Hassani, S. H., and Krause, A. (2016). Fast and provably good seedings for k-means. In Neural Information Processing Systems (NIPS).
Bradley, P. S. and Fayyad, U. M. (1998). Refining initial points for k-means clustering. In Proceedings of the Fifteenth International Conference on Machine Learning, ICML '98, pages 91-99, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Celebi, M. E., Kingravi, H. A., and Vela, P. A. (2013). A comparative study of efficient initialization methods for the k-means clustering algorithm. Expert Syst. Appl., 40(1):200-210.
Elkan, C. (2003). Using the triangle inequality to accelerate k-means. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA, pages 147-153.
Hartigan, J. A. (1975). Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 99th edition.
Hastie, T. J., Tibshirani, R. J., and Friedman, J. H. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, New York.
Kanungo, T., Mount, D. M., Netanyahu, N. S., Piatko, C. D., Silverman, R., and Wu, A. Y. (2002). A local search approximation algorithm for k-means clustering. In Proceedings of the Eighteenth Annual Symposium on Computational Geometry, SCG '02, pages 10-18, New York, NY, USA. ACM.
Kaufman, L. and Rousseeuw, P. J. (1990). Finding Groups in Data: An Introduction to Cluster Analysis. Wiley Series in Probability and Mathematical Statistics. Wiley, New York. A Wiley-Interscience publication.
Lewis, D. D., Yang, Y., Rose, T. G., and Li, F. (2004). RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397.
Newling, J. and Fleuret, F. (2016). Fast k-means with accurate bounds. In Proceedings of the International Conference on Machine Learning (ICML), pages 936-944.
Ng, R. T. and Han, J. (1994). Efficient and effective clustering methods for spatial data mining. In Proceedings of the 20th International Conference on Very Large Data Bases, VLDB '94, pages 144-155, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Ng, R. T. and Han, J. (2002). Clarans: A method for clustering objects for spatial data mining. IEEE Transactions on Knowledge and Data Engineering, pages 1003-1017.
Park, H.-S. and Jun, C.-H. (2009). A simple and fast algorithm for k-medoids clustering. Expert Syst. Appl., 36(2):3336-3341.
Telgarsky, M. and Vattani, A. (2010). Hartigan's method: k-means clustering without Voronoi. In AISTATS, volume 9 of JMLR Proceedings, pages 820-827. JMLR.org.
Wu, X., Kumar, V., Quinlan, J. R., Ghosh, J., Yang, Q., Motoda, H., McLachlan, G., Ng, A., Liu, B., Yu, P., Zhou, Z.-H., Steinbach, M., Hand, D., and Steinberg, D. (2008). Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1-37.
Yujian, L. and Bo, L. (2007). A normalized Levenshtein distance metric. IEEE Trans. Pattern Anal. Mach. Intell., 29(6):1091-1095.
Identifying Outlier Arms in Multi-Armed Bandit*
Honglei Zhuang¹*    Chi Wang²    Yifan Wang³
¹University of Illinois at Urbana-Champaign
²Microsoft Research, Redmond
³Tsinghua University
[email protected]    [email protected]    [email protected]
Abstract
We study a novel problem lying at the intersection of two areas: multi-armed bandit
and outlier detection. Multi-armed bandit is a useful tool to model the process
of incrementally collecting data for multiple objects in a decision space. Outlier
detection is a powerful method to narrow down the attention to a few objects after
the data for them are collected. However, no one has studied how to detect outlier
objects while incrementally collecting data for them, which is necessary when data
collection is expensive. We formalize this problem as identifying outlier arms in a
multi-armed bandit. We propose two sampling strategies with theoretical guarantee,
and analyze their sampling efficiency. Our experimental results on both synthetic
and real data show that our solution saves 70-99% of the data collection cost relative to the baseline, while having nearly perfect accuracy.
1 Introduction
A multi-armed bandit models a set of items (arms), each associated with an unknown probability
distribution of rewards. An observer can iteratively select an item and request a sample reward from
its distribution. This model has been predominant in modeling a broad range of applications, such
as cold-start recommendation [23], crowdsourcing [12], etc. In some applications, the objective is to
maximize the collected rewards while playing the bandit (exploration-exploitation setting [6, 4, 22]);
in others, the goal is to identify an optimal object among multiple candidates (pure exploration
setting [5]).
In the pure exploration setting, rich literature is devoted to the problem of identifying the top-K arms
with largest reward expectations [7, 14, 19]. We consider a different scenario, in which one is more
concerned about "outlier arms" with extremely high/low expectation of rewards that substantially
deviate from others. Such arms are valuable as they usually provide novel insight or imply potential
errors.
For example, suppose medical researchers are testing the effectiveness of a biomarker X (e.g.,
the existence of a certain gene sequence) in distinguishing several different diseases with similar
* The authors would like to thank anonymous reviewers for their helpful comments. Part of this work was done while the first author was an intern at Microsoft Research. The first author was sponsored in part by the U.S. Army Research Lab. under Cooperative Agreement No. W911NF-09-2-0053 (NSCTA), National Science Foundation grants IIS-16-18481, IIS-17-04532, and IIS-17-41317, and grant 1U54GM114838 awarded by NIGMS through funds provided by the trans-NIH Big Data to Knowledge (BD2K) initiative (www.bd2k.nih.gov). The views and conclusions contained in this document are those of the author(s) and should not be interpreted as representing the official policies of the U.S. Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
symptoms. They need to perform medical tests (e.g., gene sequencing) on patients with each disease of interest, and observe if X's degree of presence is significantly higher in a certain disease than other diseases. In this example, a disease can be modeled as an arm. The researchers can iteratively select a disease with which they sample a patient and perform the medical test to observe the presence of X. The reward is 1 if X is fully present, and 0 if fully absent. To make sure the biomarker is useful, researchers look for the disease with an extremely high expectation of reward compared to other diseases, instead of merely searching for the disease with the highest reward expectation. The identification of "outlier" diseases is required to be sufficiently accurate (e.g., correct with 99% probability). Meanwhile, it should be achieved with a minimal number of medical tests in order to save the cost. Hence, a good sampling strategy needs to be developed to both guarantee the correctness and save cost.
As a generalization of the above example, we study a novel problem of identifying outlier arms in multi-armed bandits. We define the criterion of outlierness by extending an established rule of thumb, the 3σ rule. The detection of such outliers requires calculating an outlier threshold that depends on the mean reward of all arms, and outputting outlier arms with an expected reward above the threshold. We specifically study pure exploration strategies in a fixed-confidence setting, which aims to output the correct results with probability no less than $1 - \delta$.
Existing methods for top-K arm identification cannot be directly applied, mainly because the number of outliers is unknown a priori. The problem also differs from the thresholding bandit problem [25], as the outlier threshold depends on the (unknown) reward configuration of all the arms, and hence also needs to be explored. Given the outlierness criterion, the key challenges in tackling this problem are: i) how to guarantee that the identified outlier arms truly satisfy the criterion; and ii) how to design an efficient sampling strategy which balances the trade-off between exploring individual arms and exploring the outlier threshold.
In this paper, we make the following major contributions:
• We propose a Round-Robin sampling algorithm, with a theoretical guarantee of its correctness as well as a theoretical upper bound of its total number of pulls.
• We further propose an improved algorithm Weighted Round-Robin, with the same correctness guarantee, and a better upper bound of its total number of pulls.
• We verify our algorithms on both synthetic and real datasets. Our Round-Robin algorithm has near 100% accuracy, while reducing the cost of a competitive baseline by up to 99%. Our Weighted Round-Robin algorithm further reduces the cost by around 60%, with even smaller error.
2 Related Work
We present studies related to our problem in different areas.
Multi-armed bandit. Multi-armed bandit is an extensively studied topic. A classic setting is to
regard the feedback of pulling an arm as a reward and aim to optimize the exploration-exploitation
trade-off [6, 4, 22]. In an alternative setting, the goal is to identify an optimal object using a small cost,
and the cost is related to the number of pulls rather than the feedback. This is the "pure exploration" setting [5]. Early work dates back to the 1950s under the subject of sequential design of experiments [26]. Recent applications in crowdsourcing, big data-driven experimentation, etc., have revitalized this field.
The problem we study also falls into the general category of pure exploration bandit.
Within this category, a number of studies focus on best arm identification [3, 5, 13, 14], as well
as finding top-K arms [7, 14, 19]. These studies focus on designing algorithms with probabilistic
guarantee of finding correct top-K arms, and improving the number of pulls required by the algorithm.
Typical cases of study include: (a) fixed confidence, in which the algorithm needs to return correct
top-K arms with probability above a threshold; (b) fixed budget, in which the algorithm needs to
maximize the probability of correctness within a certain number of pulls. While there are promising
advances in recent theoretical work, optimal algorithms in general cases remain an open problem.
Finding top-K arms is different from finding outlier arms, because top arms are not necessarily
outliers. Yet the analysis methods are useful and inspiring to our study.
There are also studies [25, 10] on thresholding bandit problem, where the aim is to find the set of
arms whose expected rewards are larger than a given threshold. However, since the outlier threshold
depends on the unknown expected rewards of all the arms, these algorithms cannot apply to our
problem.
Some studies [11, 15] propose a generalized objective to find the set of arms with the largest sum
of reward expectations with a given combinatorial constraint. The constraint is independent of the
rewards (e.g., the set must have K elements). Our problem is different as the outlier constraint
depends on the reward configuration of all the arms.
A few studies on clustering bandits [16, 21] aim to identify the internal cluster structure between arms.
Their objective is different from outlier detection. Moreover, they do not study a pure-exploration
scenario.
Carpentier and Valko [8] propose the notion of "extreme bandits" to detect a different kind of outlier:
They look for extreme values of individual rewards from each pull. Using the medical example
in Section 1, the goal can be interpreted as finding a patient with extremely high containment of a
biomarker. With that goal, the arm with the heaviest tail in its distribution is favored, because it is
more likely to generate extremely large rewards than other arms. In contrast, our objective is to find
arms with extremely large expectations of rewards.
Outlier detection. Outlier detection has been studied for decades [9, 17]. Most existing work
focuses on finding outlier data points among observed data points in a dataset. We do not aim at finding outlier data points among observed data points (rewards). Instead, we look for outlier arms
which generate these rewards. Also, these rewards are not provided at the beginning to the algorithm,
and the algorithm needs to proactively pull each arm to obtain more reward samples.
Sampling techniques were used in detecting outlier data points from observed data points with very
different purposes. In [1], outlier detection is reduced to a classification problem and an active
learning algorithm is proposed to selectively sample data points for training the outlier detector.
In [27, 28], a subset of data points is uniformly sampled to accelerate the outlier detector. Kollios et al.
[20] propose a biased sampling strategy. Zimek et al. [29], Liu et al. [24] use subsampling technique
to introduce diversity in order to apply ensemble methods for better outlier detection performance. In
outlier arm identification, the purpose of sampling is to estimate the reward expectation of each arm,
which is a hidden variable and can only be estimated from sampled rewards.
There are also studies on outlier detection when uncertainty of data points is considered [2, 18].
However, these algorithms do not attempt to actively request more information about data points to
reduce the uncertainty, which is a different setting from our work.
3 Problem Definition
In this section, we describe the problem of identifying outlier arms in a multi-armed bandit. We start by recalling the settings of the multi-armed bandit model.
Multi-armed bandit. A multi-armed bandit (MAB) consists of n arms, where each arm is associated with a reward distribution. The (unknown) expectation of each reward distribution is denoted as $y_i$. At each iteration, the algorithm is allowed to select an arm i to play (pull), and obtain a sample reward $x_i^{(j)} \in \mathbb{R}$ from the corresponding distribution, where j corresponds to the j-th sample obtained from the i-th arm. We further use $x_i$ to represent all the samples obtained from the i-th arm.
Problem definition. We study how to identify outlier arms with extremely high reward expectations compared to other arms in the bandit. To define "outlier arms", we adopt a general statistical rule named k-sigma: the arms with reward expectations higher than the mean plus k standard deviations of all arms are considered as outliers. Formally, we define the mean of all the n arms' reward expectations as well as their standard deviation as:
$$\mu_y = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad \sigma_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \mu_y)^2}$$
In a multi-armed bandit setting, the value of $y_i$ for each arm is unknown. Instead, the system needs to pull one arm at each iteration to obtain a sample, and estimate the value $y_i$ for each arm and the threshold $\theta$ from all the obtained samples $x_i, \forall i$. We introduce the following estimators:
$$\hat{y}_i = \frac{1}{m_i}\sum_{j} x_i^{(j)}, \qquad \hat{\mu}_y = \frac{1}{n}\sum_{i=1}^{n} \hat{y}_i, \qquad \hat{\sigma}_y = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (\hat{y}_i - \hat{\mu}_y)^2},$$
where $m_i$ is the number of times the arm i is pulled.
We define a threshold function based on the above estimators as:
$$\hat{\theta} = \hat{\mu}_y + k\hat{\sigma}_y$$
An arm i is defined as an outlier arm iff $\mathbb{E}\hat{y}_i > \mathbb{E}\hat{\theta}$ and is defined as a normal (non-outlier) arm iff $\mathbb{E}\hat{y}_i < \mathbb{E}\hat{\theta}$. We denote the set of outlier arms as $\Omega = \{i \in [n] \mid \mathbb{E}\hat{y}_i > \mathbb{E}\hat{\theta}\}$.
We focus on the fixed-confidence setting. The objective is to design an efficient pulling algorithm, such that the algorithm can return the true set of outlier arms $\Omega$ with probability at least $1 - \delta$ ($\delta$ is a small constant below 0.5). The fewer pulls the better, because each pull has an economic or time cost. Note that this is a pure exploration setting, i.e., the reward incurred during exploration is irrelevant.
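As a direct transcription of these estimators, a minimal Python sketch (helper name hypothetical), taking the per-arm empirical means $\hat{y}_i$ as input:

```python
import math

def outlier_threshold(means, k):
    """Plug-in k-sigma threshold: theta_hat = mu_hat_y + k * sigma_hat_y."""
    n = len(means)
    mu = sum(means) / n
    sigma = math.sqrt(sum((y - mu) ** 2 for y in means) / n)
    return mu + k * sigma
```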
4 Algorithms
In this section, we propose several algorithms, and present the theoretical guarantee of each algorithm.
4.1 Round-Robin Algorithm
The simplest algorithm is to pull arms in a round-robin way. That is, the algorithm starts from arm 1 and pulls arms 2, 3, . . . respectively, and goes back to arm 1 after it iterates over all the n arms. The process continues until a certain termination condition is met.
Intuitively, the algorithm should terminate when it is confident whether each arm is an outlier. We achieve this by using the confidence interval of each arm's reward expectation as well as the confidence interval of the outlier threshold. If the significance levels of these intervals are carefully set, and each reward expectation's confidence interval has no overlap with the threshold's confidence interval, we can safely terminate the algorithm while guaranteeing correctness with the desired high probability. In the following, we first discuss the formal definition of confidence intervals, as well as how to set the significance levels. Then we present the formal termination condition.
Confidence intervals. We provide a general definition of confidence intervals for $\mathbb{E}\hat{y}_i$ and $\mathbb{E}\hat{\theta}$. The confidence interval for $\mathbb{E}\hat{y}_i$ at significance level $\delta'$ is defined as $[\hat{y}_i - \beta_i(m_i, \delta'),\ \hat{y}_i + \beta_i(m_i, \delta')]$, such that:
$$P(\hat{y}_i - \mathbb{E}\hat{y}_i > \beta_i(m_i, \delta')) < \delta', \quad \text{and} \quad P(\hat{y}_i - \mathbb{E}\hat{y}_i < -\beta_i(m_i, \delta')) < \delta'$$
Similarly, the confidence interval for $\mathbb{E}\hat{\theta}$ at significance level $\delta'$ is defined as $[\hat{\theta} - \beta_\theta(m, \delta'),\ \hat{\theta} + \beta_\theta(m, \delta')]$, such that:
$$P(\hat{\theta} - \mathbb{E}\hat{\theta} > \beta_\theta(m, \delta')) < \delta', \quad \text{and} \quad P(\hat{\theta} - \mathbb{E}\hat{\theta} < -\beta_\theta(m, \delta')) < \delta'$$
The concrete form of the confidence interval may vary with the reward distribution associated with each arm. For the sake of generality, we defer the discussion of concrete forms to Section 4.3.
In our algorithm, we update the significance level $\delta'$ for the above confidence intervals at each iteration. After T pulls, $\delta'$ should be set as
$$\delta' = \frac{6\delta}{\pi^2 (n + 1) T^2} \tag{1}$$
In the following discussion, we omit the parameters in $\beta_i$ and $\beta_\theta$ when they are clear from the context.
Active arms. At any time, if $\hat{y}_i$'s confidence interval overlaps with $\hat{\theta}$'s confidence interval, then the algorithm cannot confidently tell if the arm i is an outlier or a normal arm. We call such arms active, and vice versa. Formally, an arm i is active, denoted as ACTIVE_i = TRUE, iff
$$\begin{cases} \hat{y}_i - \beta_i < \hat{\theta} + \beta_\theta, & \text{if } \hat{y}_i > \hat{\theta}; \\ \hat{y}_i + \beta_i > \hat{\theta} - \beta_\theta, & \text{otherwise.} \end{cases} \tag{2}$$
We denote the set of active arms as $A = \{i \in [n] \mid \text{ACTIVE}_i = \text{TRUE}\}$. With this definition, the termination condition is simply $A = \emptyset$. When this condition is met, we return the result set:
$$\hat{\Omega} = \{i \mid \hat{y}_i > \hat{\theta}\} \tag{3}$$
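The test of Eq. (2) translates directly into code; a minimal sketch (helper name hypothetical):

```python
def is_active(y_hat, beta, theta_hat, beta_theta):
    """Eq. (2): an arm stays active while its confidence interval can still
    cross the threshold's confidence interval."""
    if y_hat > theta_hat:
        return y_hat - beta < theta_hat + beta_theta
    return y_hat + beta > theta_hat - beta_theta
```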
The algorithm is outlined in Algorithm 1.
Algorithm 1: Round-Robin Algorithm (RR)
Input: n arms, outlier parameter k
Output: A set $\hat{\Omega}$ of outlier arms
1: Pull each arm i once, $\forall i \in [n]$    // Initialization
2: $T \leftarrow n$
3: Update $\hat{y}_i, m_i, \beta_i, \forall i \in [n]$ and $\hat{\theta}, \beta_\theta$
4: $i \leftarrow 1$
5: while $A \neq \emptyset$ do
6:   $i \leftarrow i \% n + 1$    // Round-robin
7:   Pull arm i
8:   $T \leftarrow T + 1$
9:   Update $\hat{y}_i, m_i, \beta_i$ and $\hat{\theta}, \beta_\theta$
10: return $\hat{\Omega}$ according to Eq. (3)
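To make the loop concrete, here is a runnable Python sketch of Algorithm 1 (an illustration, not the authors' implementation). It reuses the `outlier_threshold` and `is_active` helpers sketched above; `arms[i]()` is assumed to draw one reward from arm i, and the confidence radii are passed in as callables `beta_i(m_i, dp)` and `beta_theta(counts, dp)`, e.g. the bounded-reward forms given in Section 4.3:

```python
import math

def delta_prime(T, n, delta):
    # Significance level schedule of Eq. (1).
    return 6.0 * delta / (math.pi ** 2 * (n + 1) * T ** 2)

def round_robin(arms, k, delta, beta_i, beta_theta):
    n = len(arms)
    sums = [arm() for arm in arms]      # initialization: pull each arm once
    counts = [1] * n
    T = n
    i = -1
    while True:
        dp = delta_prime(T, n, delta)
        means = [s / m for s, m in zip(sums, counts)]
        theta = outlier_threshold(means, k)
        b_theta = beta_theta(counts, dp)
        active = [j for j in range(n)
                  if is_active(means[j], beta_i(counts[j], dp), theta, b_theta)]
        if not active:                  # termination: A is empty
            return {j for j in range(n) if means[j] > theta}  # Eq. (3)
        i = (i + 1) % n                 # round-robin over all arms
        sums[i] += arms[i]()
        counts[i] += 1
        T += 1
```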
Theoretical results. We first show that if the algorithm terminates with no active arms, the returned outlier set will be correct with high probability.
Theorem 1 (Correctness). With probability $1 - \delta$, if the algorithm terminates after a certain number of pulls T when there are no active arms, i.e., $A = \emptyset$, then the returned set of outliers will be correct, i.e., $\hat{\Omega} = \Omega$.
We can also provide an upper bound for the efficiency of the algorithm in a specific case when all the reward distributions are bounded within [a, b] where $b - a = R$. In this case, the confidence intervals can be instantiated as discussed in Section 4.3. And we can accordingly obtain the following results:
Theorem 2. With probability $1 - \delta$, the total number of pulls T needed for the algorithm to terminate is bounded by
$$T \leq 8R^2 \tilde{H}_{RR} \log\left(\frac{2R^2\pi^2(n+1)\tilde{H}_{RR}}{3\delta} + 1\right) + 4n \tag{4}$$
where
$$\tilde{H}_{RR} = H_1\left(1 + \sqrt{l(k)}\right)^2, \quad H_1 = \frac{n}{\min_{i \in [n]} (y_i - \mathbb{E}\hat{\theta})^2}, \quad l(k) = (1 + k\sqrt{n - 1})^2/n$$
4.2 Weighted Round-Robin Algorithm
The Round-Robin algorithm evenly distributes resources to all the arms. Intuitively, active arms deserve more pulls than inactive arms, since the algorithm is almost sure about whether an inactive arm is an outlier already.
Based on this idea, we propose an improved algorithm. We allow the algorithm to sample the active arms $\rho$ times as many as inactive arms, where $\rho \geq 1$ is a real constant. Since $\rho$ is not necessarily an integer, we use a method similar to stride scheduling to guarantee that the ratio between the numbers of pulls of active and inactive arms is approximately $\rho$ in the long run. The algorithm still pulls by iterating over all the arms. However, after each arm is pulled, the algorithm can decide either to stay at this arm for a few "extra pulls," or proceed to the next arm. If the arm pulled at the T-th iteration is the same as the arm pulled at the (T - 1)-th iteration, we call the T-th pull an "extra pull." Otherwise, we call it a "regular pull." We keep a counter $c_i$ for each arm i. When $T > n$, after the algorithm performs a regular pull on arm i, we add $\rho$ to the counter $c_i$. If this arm is still active, we keep pulling this arm until $m_i \geq c_i$ or it becomes inactive. Otherwise we proceed to the next arm to perform the next regular pull.
This algorithm is named Weighted Round-Robin, and is outlined in Algorithm 2.
Algorithm 2: Weighted Round-Robin Algorithm (WRR)
Input: n arms, outlier parameter k, $\rho$
Output: A set of outlier arms $\hat{\Omega}$
1: Pull each arm i once, $\forall i \in [n]$    // Initialization
2: $T \leftarrow n$
3: Update $\hat{y}_i, m_i, \beta_i, \forall i \in [n]$ and $\hat{\theta}, \beta_\theta$
4: $c_i \leftarrow 0, \forall i \in [n]$
5: $i \leftarrow 1$
6: while $A \neq \emptyset$ do
7:   $i \leftarrow i \% n + 1$    // Next regular pull
8:   $c_i \leftarrow c_i + \rho$
9:   repeat
10:    Pull arm i
11:    $T \leftarrow T + 1$
12:    Update $\hat{y}_i, m_i, \beta_i$ and $\hat{\theta}, \beta_\theta$
13:  until $i \notin A \vee m_i \geq c_i$
14: return $\hat{\Omega}$ according to Eq. (3)
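A sketch of Algorithm 2 in the same style, reusing `delta_prime`, `outlier_threshold` and `is_active` from the sketches above. The only change from the RR sketch is the per-arm credit counter $c_i$, which grants a still-active arm extra pulls until $m_i \geq c_i$; again this is an illustration rather than the authors' code:

```python
def weighted_round_robin(arms, k, delta, rho, beta_i, beta_theta):
    n = len(arms)
    sums = [arm() for arm in arms]      # initialization: pull each arm once
    counts = [1] * n
    credit = [0.0] * n                  # c_i
    T = n

    def snapshot():
        dp = delta_prime(T, n, delta)
        means = [s / m for s, m in zip(sums, counts)]
        theta = outlier_threshold(means, k)
        b_theta = beta_theta(counts, dp)
        active = {j for j in range(n)
                  if is_active(means[j], beta_i(counts[j], dp), theta, b_theta)}
        return active, {j for j in range(n) if means[j] > theta}

    i = -1
    active, outliers = snapshot()
    while active:
        i = (i + 1) % n                 # next regular pull
        credit[i] += rho
        while True:                     # repeat ... until i inactive or m_i >= c_i
            sums[i] += arms[i]()
            counts[i] += 1
            T += 1
            active, outliers = snapshot()
            if i not in active or counts[i] >= credit[i]:
                break
    return outliers                     # Eq. (3)
```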
Theoretical results. Since the Weighted Round-Robin algorithm has the same termination condition, according to Theorem 1, it has the same correctness guarantee.
We can also bound the total number of pulls needed for this algorithm when the reward distributions are bounded.
Theorem 3. With probability $1 - \delta$, the total number of pulls T needed for the Weighted Round-Robin algorithm to terminate is bounded by
$$T \leq 8R^2 \tilde{H}_{WRR} \log\left(\frac{2R^2\pi^2(n+1)\tilde{H}_{WRR}}{3\delta} + 1\right) + 2(\rho + 2)n \tag{5}$$
where
$$\tilde{H}_{WRR} = \left(\frac{H_1}{\rho} + \frac{(\rho - 1)H_2}{\rho}\right)\left(1 + \sqrt{l(k)\rho}\right)^2, \quad H_2 = \sum_i \frac{1}{(y_i - \mathbb{E}\hat{\theta})^2}$$
Determining $\rho$. One important parameter in this algorithm is $\rho$. For bounded reward distributions, since we have closed-form upper bounds, in principle we should pick a $\rho$ that minimizes the upper bound of the pulls. Omitting the last small term in Eq. (5), this is equivalent to minimizing $\tilde{H}_{WRR}$ with respect to $\rho$.
Generally, $H_1$ and $H_2$ are unknown to the algorithm, but we know that $H_2$ is dominated by $H_1$. More precisely, $\frac{H_1}{n} \leq H_2 \leq H_1$. When $H_2 = H_1$, which corresponds to the easy scenario when all the arms are equally distant from the threshold, $\tilde{H}_{WRR} = \tilde{H}_{RR}$, and the optimal $\rho = 1$, which degenerates to the Round-Robin algorithm.
On the other hand, when $H_1 = nH_2$, it implies that there is one arm $i^*$ whose reward expectation is fairly close to the threshold, while all the other arms' reward expectations are sufficiently distant. Letting $\partial\tilde{H}_{WRR}/\partial\rho = 0$, the optimal value of $\rho$ is
$$\rho^* = \frac{(n-1)^{\frac{2}{3}}}{l^{\frac{1}{3}}(k)} \tag{6}$$
which grows with n and decreases with k. $H_1 = nH_2$ represents the case in which it is equally difficult to determine the outlierness of each arm. If there is more information about the distribution of the y configuration, one may take advantage of such knowledge to pick the best $\rho$. Otherwise, we can simply use Eq. (6), as it corresponds to a difficult scenario.
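Eq. (6) is a one-line computation; the following sketch (hypothetical helper name) can be used to set $\rho$ when nothing more is known about the reward configuration:

```python
def rho_star(n, k):
    """Eq. (6): the rho minimizing the WRR bound under the assumption H1 = n * H2."""
    l_k = (1.0 + k * (n - 1) ** 0.5) ** 2 / n
    return (n - 1) ** (2.0 / 3.0) / l_k ** (1.0 / 3.0)
```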
Theoretical comparison with RR. We compare these two algorithms by comparing their upper bounds. Essentially, we study $\tilde{H}_{WRR}/\tilde{H}_{RR}$, since the two bounds only differ in this term after a small constant is ignored. We have
$$\frac{\tilde{H}_{WRR}}{\tilde{H}_{RR}} = \left(\frac{1}{\rho} + \frac{\rho - 1}{\rho}\cdot\frac{H_2}{H_1}\right)\left(\frac{1 + \sqrt{l(k)\rho}}{1 + \sqrt{l(k)}}\right)^2 \tag{7}$$
The ratio between $H_2$ and $H_1$ indicates how much cost WRR will save from RR. Notice that $\frac{1}{n} \leq \frac{H_2}{H_1} \leq 1$. In the degenerate case $H_2/H_1 = 1$, WRR does not save any cost from RR. This case occurs only when all arms have identical reward expectations, which is rare and not interesting. However, if $H_2/H_1 = 1/n$, by setting $\rho$ to the optimal value in Eq. (6), it is possible to save a substantial portion of pulls. In this scenario, the RR algorithm will iteratively pull all the arms until arm $i^*$ is confidently determined as outlier or normal. However, the WRR algorithm is able to invest more pulls on arm $i^*$ as it remains active, while pulling other arms fewer times, only to obtain a more precise estimate of the outlier threshold.
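Eq. (7) is also easy to evaluate numerically, e.g. to see how the predicted savings vary with $H_2/H_1$ and $\rho$; a small sketch with a hypothetical helper name:

```python
def wrr_to_rr_ratio(rho, h2_over_h1, l_k):
    """Eq. (7): ratio of the WRR bound to the RR bound (values below 1 mean savings)."""
    shrink = 1.0 / rho + (rho - 1.0) / rho * h2_over_h1
    blow_up = ((1.0 + (l_k * rho) ** 0.5) / (1.0 + l_k ** 0.5)) ** 2
    return shrink * blow_up
```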
4.3 Confidence Interval Instantiation
With different prior knowledge of reward distributions, confidence intervals can be instantiated differently. We introduce the confidence interval for a relatively general scenario, where reward distributions are bounded.
Bounded distribution. Suppose the reward distribution of each arm is bounded in [a, b], and $R = b - a$.
According to Hoeffding's inequality and McDiarmid's inequality, we can derive the confidence interval for $y_i$ as
$$\beta_i(m_i, \delta') = R\sqrt{\frac{1}{2m_i}\log\frac{1}{\delta'}}, \qquad \beta_\theta(m, \delta') = R\sqrt{\frac{l(k)}{2h(m)}\log\frac{1}{\delta'}}$$
where $m_i$ is the number of pulls of arm i so far, and $h(m)$ is the harmonic mean of all the $m_i$'s.
Confidence intervals for other well-known distributions such as Bernoulli or Gaussian distributions can also be derived.
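These two radii transcribe directly into code; a minimal sketch, assuming rewards bounded in an interval of width R (helper names hypothetical, with l(k) as defined under Theorem 2):

```python
import math

def beta_arm(m_i, dp, R=1.0):
    """Hoeffding radius for one arm's empirical mean after m_i pulls."""
    return R * math.sqrt(math.log(1.0 / dp) / (2.0 * m_i))

def beta_threshold(counts, dp, k, R=1.0):
    """McDiarmid radius for the threshold estimate; h is the harmonic mean of the m_i."""
    n = len(counts)
    h = n / sum(1.0 / m for m in counts)
    l_k = (1.0 + k * math.sqrt(n - 1)) ** 2 / n
    return R * math.sqrt(l_k * math.log(1.0 / dp) / (2.0 * h))
```

In the RR and WRR sketches of Sections 4.1-4.2, these would be passed as, e.g., `beta_i=beta_arm` and `beta_theta=lambda c, d: beta_threshold(c, d, k)`.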
5 Experimental Results
In this section, we present experiments to evaluate both the effectiveness and efficiency of the proposed algorithms.
5.1 Datasets
Synthetic. We construct several synthetic datasets with varying number of arms n = 20, 50, 100, 200, and varying k = 2, 2.5, 3. There are 12 configurations in total. For each configuration, we generate 10 random test cases. For each arm, we draw its reward from a Bernoulli distribution $\text{Bern}(y_i)$.
Twitter. We consider the following application of detecting outlier locations with respect to keywords from Twitter data. A user has a set of candidate regions $L = \{l_1, \ldots, l_n\}$, and is interested in finding outlier regions where tweets are extremely likely to contain a keyword w. In this application, each region corresponds to an arm. A region has an unknown probability of generating a tweet containing the keyword, which can be regarded as a Bernoulli distribution. We collect a Twitter dataset with 1,500,000 tweets from NYC, each associated with its latitude and longitude. We divide the entire space into regions of 200 × 200 in latitude and longitude respectively. We select 47 regions with more than 5,000 tweets as arms and select 20 keywords as test cases.
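As a toy illustration of how a region becomes a Bernoulli arm (names hypothetical; the exact preprocessing is not specified beyond the description above):

```python
import random

def region_arm(region_tweets, keyword):
    """Bernoulli arm for one region: pulling samples a tweet uniformly and
    returns 1 if it contains the keyword, 0 otherwise."""
    def pull():
        return 1.0 if keyword in random.choice(region_tweets).lower() else 0.0
    return pull
```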
[Figure 1 graphics: (a) % Exactly Correct for NRR, IB, RR and WRR against the 1 - δ target; (b) Avg. #Pulls vs. n for IB, RR, WRR and the cap; (c) distribution of WRR's cost reduction wrt RR.]
Figure 1: Effectiveness and efficiency studies on the Synthetic data set. Cap indicates the maximum number of pulls we allow an algorithm to run.
5.2 Setup
Methods for comparison. Since the problem is new, there is no directly comparable solution in existing work. We design two baselines for comparative study.
• Naive Round-Robin (NRR). We play arms in a round-robin fashion, and terminate as soon as we find that the estimated outlier set $\hat{\Omega}$ has not changed in the last consecutive $1/\delta$ pulls. $\hat{\Omega}$ is defined as in Eq. (3). This baseline reflects how well the problem can be solved by RR with a heuristic termination condition.
• Iterative Best Arm Identification (IB). We apply a state-of-the-art best arm identification algorithm [11] iteratively. We first apply it to all n arms until it terminates, and then remove the best arm and apply it to the rest of the arms. We repeat this process until the current best arm is not in $\hat{\Omega}$, where the threshold function is heuristically estimated based on the current data. We then return the current $\hat{\Omega}$. This is a strong baseline that leverages the existing solution in best-arm identification.
Then we compare them with our two proposed algorithms, Round-Robin (RR) and Weighted Round-Robin (WRR).
Parameter configurations. Since some algorithms take extremely long to terminate in certain cases, we place a cap on the total number of pulls. Once an algorithm runs for $10^7$ pulls, it is forced to terminate and output the current estimated outlier set $\hat{\Omega}$. We set $\delta = 0.1$.
For each test case, we run the experiments for 10 times, and take the average of both the correctness
metrics and number of pulls.
5.3 Results
Performance on Synthetic. Figure 1(a) shows the correctness of each algorithm when n varies. It can be observed that both of our proposed algorithms achieve perfect correctness on all the test sets. In comparison, the NRR baseline never achieves the desired level of correctness. Based on the performance on correctness, the naive baseline NRR does not qualify as an acceptable algorithm, so we only measure the efficiency of the remaining algorithms.
We plot the average number of pulls each algorithm takes before termination, varying with the number of arms n, in Figure 1(b). On all the different configurations of n, IB takes a much larger number of pulls than WRR and RR, which makes it 1-3 orders of magnitude more costly than WRR and RR. At the same time, RR is also substantially slower than WRR, with the gap gradually increasing as n increases. This shows our design of additional pulls helps. Figure 1(c) further shows that in 80% of the test cases, WRR can save more than 40% of the cost of RR; in about half of the test cases, WRR can save more than 60% of the cost.
Performance on Twitter. Figure 2(a) shows the correctness of different algorithms on the Twitter data set. As one can see, both of our proposed algorithms satisfy the correctness requirement, i.e., the probability of returning the exactly correct outlier set is higher than $1 - \delta$. The NRR baseline is far from reaching that bar. The IB baseline barely meets the bar, and the precision, recall and F1 measures show that its returned result is on average a good approximation to the correct result, with an average F1 metric close to 0.95. This once again confirms that IB is a strong baseline.
[Figure 2 graphics: (a) Correctness comparison (%Correct, Precision, Recall, F1) for NRR, IB, RR and WRR; (b) cost reduction of RR and WRR wrt IB. Figure 3 graphic: $T_\rho / T_{\rho^*}$ for preset $\rho$ between 1.5 and 5.5.]
Figure 2: Effectiveness and efficiency studies on the Twitter dataset.
Figure 3: Ratio between avg. #pulls with a given $\rho$ and with $\rho = \rho^*$.
We compare the efficiency of the IB, RR and WRR algorithms in Figure 2(b). In this figure, we plot the cost reduction percentage for both RR and WRR in comparison with IB. WRR is a clear winner. In almost 80% of the test cases, it saves more than 50% of IB's cost, and in about 40% of the test cases, it saves more than 75% of IB's cost. In contrast, RR's performance is comparable to IB. In approximately 30% of the test cases, RR is actually slower than IB and has negative cost reduction, though in another 40% of the test cases, RR saves more than 50% of IB's cost.
Tuning $\rho$. In order to experimentally justify our selection of the $\rho$ value, we test the performance of WRR on a specific setting of the synthetic data set (n = 15, k = 2.5) with varying preset $\rho$ values. Figure 3 shows the average number of pulls of 10 test cases for each $\rho$ in $\{1.5, 2, \ldots, 5\}$, compared to the performance with $\rho = \rho^*$ according to Eq. (6). It can be observed that none of the preset $\rho$ values achieve better performance than $\rho = \rho^*$. A further investigation reveals that $H_1/H_2$ for these test cases varies from 2 to 14. Although we choose $\rho^*$ based on the extreme assumption $H_1/H_2 = n$, its average performance is found to be close to the optimal even when the data do not satisfy the assumption.
6 Conclusion
In this paper, we study a novel problem of identifying the outlier arms with extremely high/low reward expectations compared to other arms in a multi-armed bandit. We propose a Round-Robin algorithm and a Weighted Round-Robin algorithm with correctness guarantees. We also upper bound the number of pulls of both algorithms when the reward distributions are bounded. We conduct experiments on both synthetic and real data to verify our algorithms. There could be further extensions of this work, including deriving a lower bound for this problem, or extending the problem to a PAC setting.
References
[1] N. Abe, B. Zadrozny, and J. Langford. Outlier detection by active learning. In KDD, pages 504-509. ACM, 2006.
[2] C. C. Aggarwal and P. S. Yu. Outlier detection with uncertain data. In SDM, pages 483-493. SIAM, 2008.
[3] J.-Y. Audibert and S. Bubeck. Best arm identification in multi-armed bandits. In COLT, pages 13-p, 2010.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
[5] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832-1852, 2011.
[6] S. Bubeck, N. Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[7] S. Bubeck, T. Wang, and N. Viswanathan. Multiple identifications in multi-armed bandits. In ICML, pages 258-265, 2013.
[8] A. Carpentier and M. Valko. Extreme bandits. In NIPS, pages 1089-1097, 2014.
[9] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys, 41(3):15:1-15:58, 2009.
[10] L. Chen and J. Li. On the optimal sample complexity for best arm identification. arXiv preprint arXiv:1511.03774, 2015.
[11] S. Chen, T. Lin, I. King, M. R. Lyu, and W. Chen. Combinatorial pure exploration of multi-armed bandits. In NIPS, pages 379-387, 2014.
[12] P. Donmez, J. G. Carbonell, and J. Schneider. Efficiently learning the accuracy of labeling sources for selective sampling. In KDD, pages 259-268. ACM, 2009.
[13] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. Journal of Machine Learning Research, 7(Jun):1079-1105, 2006.
[14] V. Gabillon, M. Ghavamzadeh, and A. Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. In NIPS, pages 3212-3220, 2012.
[15] V. Gabillon, A. Lazaric, M. Ghavamzadeh, R. Ortner, and P. Bartlett. Improved learning complexity in combinatorial pure exploration bandits. In AISTATS, pages 1004-1012, 2016.
[16] C. Gentile, S. Li, and G. Zappella. Online clustering of bandits. In ICML, pages 757-765, 2014.
[17] V. J. Hodge and J. Austin. A survey of outlier detection methodologies. Artificial Intelligence Review, 22(2):85-126, 2004.
[18] B. Jiang and J. Pei. Outlier detection on uncertain data: Objects, instances, and inferences. In ICDE, pages 422-433. IEEE, 2011.
[19] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In ICML, pages 655-662, 2012.
[20] G. Kollios, D. Gunopulos, N. Koudas, and S. Berchtold. Efficient biased sampling for approximate clustering and outlier detection in large data sets. IEEE Transactions on Knowledge and Data Engineering, 15(5):1170-1187, 2003.
[21] N. Korda, B. Szörényi, and L. Shuai. Distributed clustering of linear bandits in peer to peer networks. In Journal of Machine Learning Research Workshop and Conference Proceedings, volume 48, pages 1301-1309. International Machine Learning Society, 2016.
[22] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[23] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In WWW, pages 661-670. ACM, 2010.
[24] H. Liu, Y. Zhang, B. Deng, and Y. Fu. Outlier detection via sampling ensemble. In Big Data, pages 726-735. IEEE, 2016.
[25] A. Locatelli, M. Gutzeit, and A. Carpentier. An optimal algorithm for the thresholding bandit problem. In ICML, pages 1690-1698. JMLR.org, 2016.
[26] H. Robbins. Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527-535, 1952.
[27] M. Sugiyama and K. Borgwardt. Rapid distance-based outlier detection via sampling. In NIPS, pages 467-475, 2013.
[28] M. Wu and C. Jermaine. Outlier detection by sampling with accuracy guarantees. In KDD, pages 767-772. ACM, 2006.
[29] A. Zimek, M. Gaudet, R. J. Campello, and J. Sander. Subsampling for efficient and effective unsupervised outlier detection ensembles. In KDD, pages 428-436. ACM, 2013.
10
| 7105 |@word exploitation:2 averagely:1 nscta:1 open:1 termination:6 heuristically:1 confirms:1 kalyanakrishnan:1 pick:2 reduction:6 configuration:7 zimek:2 liu:2 document:1 existing:4 nrr:6 current:4 com:1 comparing:2 contextual:1 tackling:1 yet:1 must:1 chu:1 distant:2 kdd:4 remove:1 plot:2 sponsored:1 fund:1 update:5 v:1 half:1 fewer:2 intelligence:1 item:2 accordingly:1 beginning:1 detecting:2 iterates:1 mannor:1 location:1 mcdiarmid:1 org:1 zhang:1 mathematical:1 initiative:1 consists:1 introduce:3 expected:3 rapid:1 multi:20 chi:2 gov:1 armed:22 increasing:1 becomes:1 provided:2 notation:1 moreover:1 bounded:9 kind:1 interpreted:2 substantially:2 minimizes:1 developed:1 unified:1 finding:8 guarantee:11 safely:1 collecting:2 exactly:2 returning:1 medical:5 grant:1 omit:1 before:1 engineering:1 tsinghua:2 gunopulos:1 jiang:1 meet:1 approximately:2 plus:1 initialization:2 studied:3 collect:1 range:1 outlierness:3 testing:1 regret:1 differs:1 cold:1 area:2 significantly:1 confidence:24 regular:4 cannot:4 close:3 selection:2 scheduling:1 context:1 www:2 optimize:1 equivalent:1 reviewer:1 go:1 attention:1 survey:3 identifying:6 pure:10 insight:1 rule:4 estimator:2 regarded:1 deriving:1 pull:49 classic:1 searching:1 notion:1 target:1 suppose:2 play:2 user:1 anomaly:1 distinguishing:1 designing:1 agreement:1 element:1 trend:1 expensive:1 continues:1 cooperative:1 observed:5 preprint:1 wang:2 solved:1 region:6 news:1 keyword:2 trade:2 highest:1 counter:2 valuable:1 decrease:1 disease:10 substantial:1 complexity:2 reward:48 ghavamzadeh:2 efficiency:7 accelerate:1 differently:1 instantiated:2 forced:1 describe:1 effective:1 artificial:1 tell:1 labeling:1 peer:2 whose:2 heuristic:1 larger:2 wang3:1 otherwise:4 koudas:1 fischer:1 online:1 sequence:1 rr:26 advantage:1 sdm:1 propose:9 date:1 iff:3 degenerate:2 achieve:3 invest:1 cluster:1 requirement:1 extending:2 generating:1 perfect:2 guaranteeing:1 comparative:1 object:6 help:1 derive:1 finitely:1 keywords:2 eq:7 strong:2 longitude:2 implies:1 met:2 differ:1 correct:11 stochastic:2 exploration:13 elimination:1 government:3 f1:3 generalization:1 anonymous:1 mab:1 investigation:1 exploring:2 extension:1 lying:1 sufficiently:2 around:1 considered:2 normal:3 lyu:1 major:1 vary:2 early:1 adopt:1 consecutive:1 purpose:3 combinatorial:3 robbins:2 largest:2 correctness:16 vice:1 tool:1 weighted:9 reflects:1 gaussian:1 aim:4 rather:1 reaching:1 varying:4 derived:1 focus:4 sequencing:1 biomarker:3 mainly:1 indicates:2 bernoulli:3 contrast:2 baseline:10 detect:2 helpful:1 inference:1 twitter:6 stopping:1 entire:1 hidden:1 bandit:35 reproduce:1 selective:1 interested:1 among:1 classification:1 colt:1 denoted:2 priori:1 favored:1 art:1 fairly:1 field:1 once:4 construct:1 having:1 beach:1 sampling:13 never:1 identical:1 represents:1 broad:1 look:3 unsupervised:1 yu:1 nearly:1 icml:4 others:2 few:3 ortner:1 national:1 individual:2 microsoft:3 attempt:1 recalling:1 detection:19 interest:1 predominant:1 truly:1 extreme:4 copyright:1 devoted:1 accurate:1 fu:1 necessary:1 stoltz:1 conduct:1 divide:1 desired:2 theoretical:9 minimal:1 uncertain:2 korda:1 instance:1 modeling:1 w911nf:1 cost:21 wrr:26 deviation:2 subset:2 rare:1 varies:1 synthetic:8 confident:1 st:1 borgwardt:1 international:1 siam:1 stay:1 probabilistic:1 off:2 concrete:2 gabillon:2 again:1 heaviest:1 cesa:2 hn1:1 thesis:1 containing:1 choose:1 hoeffding:1 american:1 return:6 actively:1 li:3 potential:1 distribute:1 diversity:1 stride:1 chandola:1 satisfy:2 audibert:1 depends:4 proactively:1 view:1 
observer:1 lab:1 h1:10 analyze:1 closed:1 portion:1 start:3 competitive:1 defer:1 contribution:1 accuracy:4 efficiently:1 ensemble:3 identify:4 identification:11 thumb:1 researcher:3 detector:2 definition:5 associated:4 mi:16 sampled:2 dataset:3 recall:2 knowledge:4 cap:3 formalize:1 carefully:1 actually:1 back:2 auer:2 higher:3 methodology:1 improved:3 done:1 though:1 symptom:1 generality:1 nh2:2 roundrobin:1 shuai:1 until:6 langford:2 hand:1 banerjee:1 incrementally:2 pulling:4 grows:1 usa:1 omitting:1 verify:2 true:3 contain:1 hence:2 iteratively:4 laboratory:1 round:20 during:1 criterion:3 outputing:1 generalized:1 stone:1 performs:1 l1:1 hereon:1 harmonic:1 novel:4 nih:2 donmez:1 winner:1 volume:1 tail:1 discussed:1 multiarmed:1 versa:1 nyc:1 tuning:1 outlined:2 mathematics:1 similarly:1 illinois:2 sugiyama:1 etc:2 add:1 recent:2 irrelevant:1 awarded:1 driven:1 scenario:6 certain:6 inequality:2 qualify:2 yi:21 additional:1 gentile:1 schneider:1 deng:1 determine:1 maximize:2 ii:4 multiple:3 hrr:1 reduces:1 aggarwal:1 champaign:1 long:3 lin:1 lai:1 equally:2 patient:3 expectation:17 essentially:1 metric:2 arxiv:2 iteration:5 represent:1 achieved:2 interval:21 source:1 revitalized:1 biased:2 extra:2 rest:2 sure:2 comment:1 subject:1 effectiveness:3 call:3 integer:1 near:1 presence:2 leverage:1 easy:1 concerned:1 sander:1 nonstochastic:1 identified:1 reduce:1 economic:1 cn:1 idea:1 absent:1 inactive:5 whether:2 kollios:2 bartlett:1 returned:3 proceed:2 action:1 dar:1 ignored:1 useful:3 iterating:1 clear:2 generally:1 tewari:1 extensively:1 inspiring:1 category:2 reduced:1 generate:3 schapire:1 percentage:5 notice:1 estimated:4 lazaric:2 key:1 threshold:15 carpentier:3 asymptotically:1 merely:1 tweet:4 sum:1 icde:1 run:4 powerful:1 uncertainty:2 named:2 place:1 almost:2 decide:1 wu:1 draw:1 decision:1 acceptable:1 comparable:2 bound:10 constraint:3 precisely:1 locatelli:1 personalized:1 sake:1 dominated:1 u1:2 aspect:1 extremely:9 kumar:1 relatively:1 according:5 viswanathan:1 request:2 smaller:1 remain:1 terminates:3 outlier:68 intuitively:2 gradually:1 ln:1 resource:1 remains:1 discus:1 needed:3 know:1 wrt:2 bd2k:2 experimentation:1 apply:5 observe:2 save:11 alternative:1 wang2:1 slower:2 existence:1 top:7 clustering:4 include:1 subsampling:2 calculating:1 nyi:1 society:1 objective:5 already:1 occurs:1 strategy:5 costly:1 berchtold:1 distance:1 thank:1 evenly:1 topic:1 mail:1 carbonell:1 collected:2 barely:1 modeled:1 mini:1 ratio:3 balance:1 minimizing:1 difficult:2 setup:1 sigma:1 negative:1 design:6 policy:1 unknown:8 perform:3 bianchi:2 upper:7 pei:1 datasets:3 urbana:1 finite:1 zadrozny:1 precise:1 mansour:1 abe:1 required:2 narrow:1 established:1 nip:5 trans:1 deserve:1 able:1 redmond:1 bar:2 usually:1 below:1 latitude:2 challenge:1 confidently:2 including:1 overlap:2 zappella:1 valko:2 arm:126 representing:1 imply:1 reprint:1 jun:1 naive:2 deviate:1 prior:1 literature:1 review:1 determining:1 fully:2 interesting:1 allocation:1 foundation:2 h2:9 incurred:1 degree:1 article:1 thresholding:3 principle:1 playing:1 austin:1 changed:1 repeat:2 last:2 soon:1 bern:1 formal:2 allow:2 pulled:4 fall:1 bulletin:1 munos:1 distributed:1 regard:1 feedback:2 rich:1 author:4 collection:2 avg:2 reinforcement:1 adaptive:1 far:2 transaction:1 approximate:2 gene:2 keep:2 sz:1 active:13 instantiation:1 reveals:1 containment:1 xi:4 yifan:2 continuous:1 iterative:1 decade:1 robin:20 promising:1 terminate:7 ca:1 improving:1 necessarily:2 meanwhile:1 official:1 aistats:1 significance:5 big:3 allowed:1 
fashion:1 precision:2 jermaine:1 candidate:2 ib:15 jmlr:1 down:1 theorem:4 specific:2 pac:2 explored:1 r2:2 workshop:1 sequential:2 ci:7 magnitude:1 notwithstanding:1 budget:2 gap:1 authorized:1 chen:3 nigms:1 intersection:1 simply:2 army:2 likely:2 intern:1 bubeck:4 hodge:1 contained:1 recommendation:2 corresponds:4 acm:6 goal:4 king:1 experimentally:1 specifically:1 typical:1 reducing:1 uniformly:1 determined:1 preset:3 distributes:1 justify:1 total:7 experimental:2 select:5 selectively:1 formally:2 internal:1 evaluate:1 crowdsourcing:2 |
Online Learning with Transductive Regret
Mehryar Mohri
Courant Institute and Google Research
New York, NY
[email protected]
Scott Yang*
D. E. Shaw & Co.
New York, NY
[email protected]
Abstract
We study online learning with the general notion of transductive regret, that is
regret with modification rules applying to expert sequences (as opposed to single
experts) that are representable by weighted finite-state transducers. We show how
transductive regret generalizes existing notions of regret, including: (1) external
regret; (2) internal regret; (3) swap regret; and (4) conditional swap regret. We
present a general and efficient online learning algorithm for minimizing transductive
regret. We further extend that to design efficient algorithms for the time-selection
and sleeping expert settings. A by-product of our study is an algorithm for swap
regret, which, under mild assumptions, is more efficient than existing ones, and a
substantially more efficient algorithm for time selection swap regret.
1
Introduction
Online learning is a general framework for sequential prediction. Within that framework, a widely
adopted setting is that of prediction with expert advice [Littlestone and Warmuth, 1994, Cesa-Bianchi
and Lugosi, 2006], where the algorithm maintains a distribution over a set of experts. At each round,
the loss assigned to each expert is revealed. The algorithm then incurs the expected value of these
losses for its current distribution and next updates its distribution.
The standard benchmark for the algorithm in this scenario is the external regret, that is the difference
between its cumulative loss and that of the best (static) expert in hindsight. However, while this
benchmark is useful in a variety of contexts and has led to the design of numerous effective online
learning algorithms, it may not constitute a useful criterion in common cases where no single fixed
expert performs well over the full course of the algorithm's interaction with the environment. This
has led to several extensions of the notion of external regret, along two main directions.
The first is an extension of the notion of regret so that the learner?s algorithm is compared against a
competitor class consisting of dynamic sequences of experts. Research in this direction started with
the work of [Herbster and Warmuth, 1998] on tracking the best expert, which studied the scenario
of learning against the best sequence of experts with at most k switches. It has been subsequently
improved [Monteleoni and Jaakkola, 2003], generalized [Vovk, 1999, Cesa-Bianchi et al., 2012,
Koolen and de Rooij, 2013], and modified [Hazan and Seshadhri, 2009, Adamskiy et al., 2012,
Daniely et al., 2015]. More recently, an efficient algorithm with favorable regret guarantees has
been given for the general case of a competitor class consisting of sequences of experts represented
by a (weighted) finite automaton [Mohri and Yang, 2017]. This includes as special cases previous
competitor classes considered in the literature.
The second direction is to consider competitor classes based on modifications of the learner's sequence
of actions. This approach began with the notion of internal regret [Foster and Vohra, 1997, Hart and
Mas-Colell, 2000], which considers how much better an algorithm could have performed if it had
*Work done at the Courant Institute of Mathematical Sciences.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
switched all instances of playing one action with another, and was subsequently generalized to the
notion of swap regret [Blum and Mansour, 2007], which considers all possible in-time modifications
of a learner's action sequence. More recently, Mohri and Yang [2014] introduced the notion of
conditional swap regret, which considers all possible modifications of a learner's action sequence
that depend on some fixed bounded history. Odalric and Munos [2011] also studied regret against
history-dependent modifications and presented computationally tractable algorithms (with suboptimal
regret guarantees) when the comparator class can be organized into a small number of equivalence
classes.
In this paper, we consider the second direction and study regret with respect to modification rules. We
first present an efficient online algorithm for minimizing swap regret (Section 3). We then introduce
the notion of transductive regret in Section 4, that is the regret of the learner?s algorithm with respect
to modification rules representable by a family of weighted finite-state transducers (WFSTs). This
definition generalizes the existing notions of external, internal, swap, and conditional swap regret, and
includes modification rules that apply to expert sequences, as opposed to single experts. Moreover, we
present efficient algorithms for minimizing transductive regret. We further extend transductive regret
to the time-selection setting (Section 5) and present efficient algorithms minimizing time-selection
transductive regret. These algorithms significantly improve upon existing state-of-the-art algorithms
in the special case of time-selection swap regret. Finally, in Section 6, we extend transductive regret
to the sleeping experts setting and present new and efficient algorithms for minimizing sleeping
transductive regret.
2 Preliminaries and notation
We consider the setting of prediction with expert advice with a set Σ of N experts. At each round
t ∈ [T], an online algorithm A selects a distribution p_t over Σ, the adversary reveals a loss vector
l_t ∈ [0, 1]^N, where l_t(x) is the loss of expert x ∈ Σ, and the algorithm incurs the expected loss p_t · l_t.
Let Φ ⊆ Σ^Σ denote a set of modification functions mapping the expert set to itself. The objective
of the algorithm is to minimize its Φ-regret, Reg_T(A, Φ), defined as the difference between its
cumulative expected loss and that of the best modification of the sequence in hindsight:

$$\mathrm{Reg}_T(A, \Phi) = \max_{\varphi \in \Phi} \left\{ \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t}[l_t(\varphi(x_t))] \right\}.$$
This definition coincides with the standard notion of external regret [Cesa-Bianchi and Lugosi,
2006] when Φ is reduced to the family of constant functions, Φ_ext = {φ_a : Σ → Σ : a ∈ Σ, ∀x ∈ Σ, φ_a(x) = a};
with the notion of internal regret [Foster and Vohra, 1997] when Φ is the family
of functions that only switch two actions, Φ_int = {φ_{a,b} : Σ → Σ : a, b ∈ Σ, φ_{a,b}(x) = 1_{x=a} b + 1_{x=b} a + x 1_{x≠a,b}};
and with the notion of swap regret [Blum and Mansour, 2007] when Φ consists
of all possible functions mapping Σ to itself, Φ_swap. In Section 4, we will introduce a more general
notion of regret with modification rules applying to expert sequences, as opposed to single experts.
There are known algorithms achieving an external regret in O(√(T log N)) with a per-iteration
computational cost in O(N) [Cesa-Bianchi and Lugosi, 2006], an internal regret in O(√(T log N))
with a per-iteration computational cost in O(N³) [Stoltz and Lugosi, 2005], and a swap regret in
O(√(T N log N)) with a per-iteration computational cost in O(N³) [Blum and Mansour, 2007].
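To make the swap benchmark concrete, the following sketch (our own illustration; empirical_swap_regret is a hypothetical helper, not from the literature above) computes the realized swap regret of a played action sequence, using realized actions in place of the expectations over p_t, and the fact that the best φ ∈ Φ_swap decomposes per expert:

import numpy as np

def empirical_swap_regret(actions, losses):
    """Realized swap regret of a played sequence.

    actions: length-T integer array of played experts in {0, ..., N-1}.
    losses:  T x N array; losses[t, i] is the loss of expert i at round t.
    The best swap function in hindsight maps each expert a to the expert
    with the smallest cumulative loss over the rounds where a was played.
    """
    T, N = losses.shape
    played = losses[np.arange(T), actions].sum()
    swapped = 0.0
    for a in range(N):
        mask = (actions == a)
        if mask.any():
            swapped += losses[mask].sum(axis=0).min()
    return played - swapped

# Toy usage: two experts, the learner always plays expert 0.
losses = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(empirical_swap_regret(np.array([0, 0, 0]), losses))  # 1.0: best swap sends 0 to 1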
3 Efficient online algorithm for swap regret
In this section, we present an online algorithm, FastSwap, that achieves the same swap regret
guarantee as the algorithm of Blum and Mansour [2007], O(√(T N log N)), but admits the more
favorable per-iteration complexity of O(N² log T), under some mild assumptions.
Existing online algorithms for internal or swap regret minimization require, at each round, solving
for a fixed point of an N × N stochastic matrix [Foster and Vohra, 1997, Stoltz and Lugosi, 2005,
Blum and Mansour, 2007]. For example, the algorithm of Blum and Mansour [2007] is based on
a meta-algorithm A that makes use of N external regret minimization sub-algorithms {A_i}_{i∈[N]}
(see Figure 1). Sub-algorithm A_i is specialized in guaranteeing low regret against swapping expert
i with any other expert j. The meta-algorithm A maintains a distribution p_t over the experts and,
Figure 1: Illustration of the swap regret algorithm of Blum and Mansour [2007] or the FastSwap
algorithm, each of which uses a meta-algorithm A to control a set of N external regret minimizing
algorithms A_1, ..., A_N: A feeds each A_i the scaled loss p_{t,i} l_t and receives back its distribution q_{t,i}.
Algorithm 1: FastSwap; {A_i}_{i=1}^N are external regret minimization algorithms.

Algorithm: FastSwap((A_i)_{i=1}^N)
for t ← 1 to T do
    for i ← 1 to N do
        q_i ← Query(A_i)
    Q^t ← [q_1 ⋯ q_N]^⊤
    for j ← 1 to N do
        c_j ← min_{i=1}^N Q^t_{i,j}
    α_t ← ‖c‖_1;  τ_t ← ⌈log(1/√t) / log(1 − α_t)⌉
    if τ_t < N then
        p_t^0 ← c/α_t;  p_t ← p_t^0
        for τ ← 1 to τ_t do
            (p_t^τ)^⊤ ← (p_t^{τ−1})^⊤ (Q^t − 1⃗ c^⊤);  p_t ← p_t + p_t^τ
        p_t ← p_t / ‖p_t‖_1
    else
        p_t^⊤ ← Fixed-Point(Q^t)
    x_t ← Sample(p_t);  l_t ← ReceiveLoss()
    for i ← 1 to N do
        AttributeLoss(p_t[i] l_t, A_i)
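As a concrete sketch of the key step, the following numpy code (our own illustrative implementation; the variable names are not from the paper) approximates the stationary distribution of Q^t as in Algorithm 1, falling back to an exact eigenvector solve, standing in for the Fixed-Point routine, when the power method would not be cheaper:

import numpy as np

def approx_stationary(Q, t):
    """Approximate the stationary distribution p = p Q of a row-stochastic
    matrix Q, following the reduced-power-method step of FastSwap."""
    N = Q.shape[0]
    c = Q.min(axis=0)                      # c_j = min_i Q[i, j]
    alpha = c.sum()
    if alpha <= 0.0 or alpha >= 1.0:       # degenerate cases: solve exactly
        return exact_stationary(Q)
    tau = int(np.ceil(np.log(1.0 / np.sqrt(t)) / np.log(1.0 - alpha)))
    if tau >= N:                           # power method no cheaper here
        return exact_stationary(Q)
    p_tau = c / alpha                      # initial iterate p_t^0
    p = p_tau.copy()
    R = Q - np.outer(np.ones(N), c)        # reduced matrix Q - 1 c^T
    for _ in range(tau):
        p_tau = p_tau @ R                  # (p^tau)^T = (p^{tau-1})^T (Q - 1 c^T)
        p += p_tau
    return p / p.sum()                     # average of iterates, renormalized

def exact_stationary(Q):
    """Exact stationary distribution via the left eigenvector of Q for eigenvalue 1."""
    w, V = np.linalg.eig(Q.T)
    p = np.abs(np.real(V[:, np.argmax(np.real(w))]))
    return p / p.sum()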
at each round t, assigns to sub-algorithm A_i only a fraction of the loss, p_{t,i} l_t, and receives the
distribution q_i (over the experts) returned by A_i. At each round t, the distribution p_t is selected to be
the fixed point of the N × N stochastic matrix Q^t = [q_1 ⋯ q_N]^⊤. Thus, p_t = p_t Q^t is the stationary
distribution of the Markov process defined by Q^t. This choice of distribution is natural to ensure
that the learner's sequence of actions is competitive against a family of modifications, since it is
invariant under a mapping that relates to this family of modifications.
complexity of these algorithms is in O(N 3 ) using standard methods (or O(N 2.373 ), using the method
of Coppersmith and Winograd). To improve upon this complexity in the setting of internal regret,
Greenwald et al. [2008] estimate the fixed-point by applying, at each round, a single power iteration
to some stochastic matrix. Their algorithm
runs in O(N 2 ) time per iteration, but at the price of a
p
9
regret guarantee that is only in O( N T 10 ).
Here, we describe an efficient algorithm for swap regret, FastSwap. Algorithm 1 gives its pseudocode. As with the algorithm of Blum and Mansour [2007], FastSwap is based on a meta-algorithm
A making use of N external regret minimization sub-algorithms {A_i}_{i∈[N]}. However, unlike the
algorithm of Blum and Mansour [2007], which explicitly computes the stationary distribution of
Q^t at round t, or that of Greenwald et al. [2008], which applies a single power iteration at each
round, our algorithm applies multiple modified power iterations at round t (τ_t power iterations). Our
modified power iterations are based on the ReducedPowerMethod (RPM) algorithm introduced
by Nesterov and Nemirovski [2015]. Unlike the algorithm of Greenwald et al. [2008], FastSwap
uses a specific initial distribution at each round, applies the power method to a modification of the
original stochastic matrix, and uses, as an approximation, an average of all the iterates at that round.
Theorem 1. Let A_1, ..., A_N be external regret minimizing algorithms admitting data-dependent
regret bounds of the form O(√(L_T(A_i) log N)), where L_T(A_i) is the cumulative loss of A_i after T
Figure 2: (i) Example of a WFST T: I_T = 0, ilab[E_T[0]] = {a, b}, olab[E_T[1]] = {b}, E_T[0] =
{(0, a, b, 1, 1), (0, b, a, 1, 1)}. (ii) Family of swap WFSTs T_φ, with φ: {a, b, c} → {a, b, c}. (iii) A
more general example of a WFST, over actions such as Apple, IBM, sell, gold, and silver, in which
each action can only be swapped for a small set of related actions.
rounds. Assume that, at each round, the sum of the minimal probabilities given to an expert by these
algorithms is bounded below by some constant β > 0. Then, FastSwap achieves a swap regret in
O(√(T N log N)) with a per-iteration complexity in O(N² min{log T / log(1/(1−β)), N}).
The proof is given in Appendix D. It is based on a stability analysis bounding the additive regret
term due to using an approximation of the fixed-point distribution, and the property that τ_t iterations
of the reduced power method ensure a 1/√t-approximation, where t is the number of rounds. The
favorable complexity of our algorithm requires an assumption on the sum of the minimal probabilities
assigned to an expert by the algorithms at each round. This is a reasonable assumption, which one
would expect to hold in practice if all the external regret minimizing sub-algorithms are the same.
This is because the true losses assigned to each column of the stochastic matrix are the same, and the
rescaling based on the distribution p_t is uniform over each row. Furthermore, since the number of
rounds sufficient for a good approximation can be efficiently estimated, our algorithm can determine
when it is worthwhile to switch to standard fixed-point methods, that is, when the condition τ_t ≥ N
holds. Thus, the time complexity of our algorithm is never worse than that of Blum and Mansour
[2007].
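For concreteness, with illustrative numbers of our own: if t = 10⁴ and β = 0.1, then τ_t = ⌈log(√t) / log(1/(1−β))⌉ = ⌈log(100)/log(1/0.9)⌉ = ⌈4.61/0.105⌉ = 44, so the reduced power method is the cheaper option precisely when N > 44; for smaller N, the algorithm solves for the exact fixed point instead.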
4 Online algorithm for transductive regret
In this section, we consider a more general notion of regret than swap regret, where the family
of modification functions applies to sequences instead of just to single experts. We will consider
sequence-to-sequence mappings that can be represented by finite-state transducers. In fact, more
generally, we will allow weights to be used for these mappings and will consider weighted finite-state
transducers. This will lead us to define the notion of transductive regret where the cumulative loss of
an algorithm?s sequence of actions is compared to that of sequences images of its action sequence via
a transducer mapping. As we shall see, this is an extremely flexible definition that admits as special
cases standard notions of external, internal, and swap regret.
We will start with some preliminary definitions and concepts related to transducers.
4.1 Weighted finite-state transducer definitions
A weighted finite-state transducer (WFST) T is a finite automaton whose transitions are augmented
with an output label and a real-valued weight, in addition to the familiar input label. Figure 2(i) shows
a simple example. We will assume both input and output labels to be elements of the alphabet Σ,
which denotes the set of experts. Σ* denotes the set of all strings over the alphabet Σ.
We denote by E_T the set of transitions of T and, for any transition e ∈ E_T, we denote by ilab[e] its
input label, by olab[e] its output label, and by w[e] its weight. For any state u of T, we denote by
E_T[u] the set of transitions leaving u. We also extend the definition of ilab to sets and denote by
ilab[E_T[u]] the set of input labels of the transitions in E_T[u].
We assume that T admits a single initial state, which we denote by I_T. For any state u and string
x ∈ Σ*, we also denote by δ_T(u, x) the set of states reached from u by reading string x as input. In
particular, we will denote by δ_T(I_T, x) the set of states reached from the initial state by reading string
x as input.
The input (or output) label of a path is obtained by concatenating the input (output) transition labels
along that path. The weight of a path is obtained by multiplying its transition weights. A path from
the initial state to a final state is called an accepting path. A WFST maps the input label of each
accepting path to its output label, with that path's weight as probability.
The WFSTs we consider may be non-deterministic, that is they may admit states with multiple
outgoing transitions sharing the same input label. However, we will assume that, at any state,
outgoing transitions sharing the same input label admit the same destination state. We will further
require that, at any state, the set of output labels of the outgoing transitions be contained in the set
of input labels of the same transitions. This requirement is natural for our definition of regret: our
learner will use input label experts and will compete against sequences of output label experts. Thus,
the algorithm should have the option of selecting an expert sequence it must compete against.
Finally, we will assume that our WFSTs are stochastic, that is, for any state u and input label a ∈ Σ,
we have Σ_{e ∈ E_T[u,a]} w[e] = 1. The class of WFSTs thereby defined is broad and, as we shall see,
includes the families defining external, internal, and swap regret.
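To make these definitions concrete, here is one possible plain-Python encoding of a WFST (our own illustrative representation; none of these names come from the paper), together with a check of the stochasticity condition just stated:

from collections import defaultdict

# A WFST as a list of transitions (src, in_label, out_label, dst, weight),
# following the tuple convention of Figure 2(i), plus an initial state.
transitions = [
    (0, 'a', 'b', 1, 1.0),
    (0, 'b', 'a', 1, 1.0),
    (1, 'b', 'b', 2, 1.0),
]
initial_state = 0

def is_stochastic(transitions, tol=1e-9):
    """Check that for every state u and input label a, the weights of the
    outgoing transitions E_T[u, a] sum to one."""
    totals = defaultdict(float)
    for (u, a, _out, _dst, w) in transitions:
        totals[(u, a)] += w
    return all(abs(s - 1.0) < tol for s in totals.values())

print(is_stochastic(transitions))  # True for the example above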
4.2 Transductive regret
Given any WFST T, let 𝒯 be a family of WFSTs with the same alphabet Σ, the same set of states Q,
the same initial state I and final states F, but with different output labels and weights. Thus, we can
write I_T, F_T, Q_T, and δ_T without any ambiguity. We will also use the notation E_T when we refer
to the transitions of a transducer within the family 𝒯 in a way that does not depend on the output
labels or weights. We define the learner's transductive regret with respect to 𝒯 as follows:

$$\mathrm{Reg}_T(A, \mathcal{T}) = \max_{T \in \mathcal{T}} \left\{ \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t}\Bigg[ \sum_{e \in E_T[\delta_T(I_T, x_{1:t-1}), x_t]} w[e]\, l_t(\mathrm{olab}[e]) \Bigg] \right\}.$$
This measures the maximum difference between the expected loss of the sequence x_1^T played by A and
the expected loss of a competitor sequence, that is, a sequence image by T ∈ 𝒯 of x_1^T, where the
expectation for competing sequences is both over the p_t's and the transition weights w[e] of T. We also
assume that the family 𝒯 does not admit proper non-empty invariant subsets of labels out of any state,
i.e., for any state u, there exists no proper subset E ⊊ E_T[u] for which the inclusion olab[E] ⊆ ilab[E]
holds for all T ∈ 𝒯. This is not a strict requirement but will allow us to avoid cases of degenerate
competitor classes.
As an example, consider the family of WFSTs T_a, a ∈ Σ, with a single state Q = I = F = {0} and
with T_a defined by self-loop transitions with all input labels b ∈ Σ sharing the same output label a, and
with uniform weights. Thus, T_a maps all labels to a. Then, the notion of transductive regret with
𝒯 = {T_a : a ∈ Σ} coincides with that of external regret.
Similarly, consider the family of WFSTs T_φ, φ: Σ → Σ, with a single state Q = I = F = {0} and
with T_φ defined by self-loop transitions with input label a ∈ Σ and output φ(a), all weights uniform.
Thus, T_φ maps a symbol a to φ(a). Then, the notion of transductive regret with 𝒯 = {T_φ : φ ∈ Σ^Σ}
coincides with that of swap regret (see Figure 2(ii)). The more general notion of k-gram conditional
swap regret presented in Mohri and Yang [2014] can also be modeled as transductive regret with
respect to a family of WFSTs (k-gram WFSTs). We present additional figures illustrating all of these
examples in Appendix A.
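For instance, the single-state swap transducer T_φ of Figure 2(ii) can be generated mechanically from a mapping φ; a minimal sketch of ours, using the transition encoding above:

def swap_wfst(phi):
    """Single-state WFST T_phi: for every expert a, a self-loop at state 0
    with input label a, output label phi(a), and weight 1."""
    return [(0, a, b, 0, 1.0) for a, b in phi.items()]

# The swap transducer for phi(a) = b, phi(b) = a, phi(c) = c:
print(swap_wfst({'a': 'b', 'b': 'a', 'c': 'c'}))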
In general, it may be desirable to design WFSTs intended for a specific task, so that an algorithm is
robust against some sequence modifications more than others. In fact, such WFSTs may have been
learned from past data. The definition of transductive regret is flexible and can accommodate such
settings both because a transducer can conveniently help model mappings and because the transition
weights help distinguish alternatives. For instance, consider a scenario where each action naturally
admits a different swapping subset, which may be only a small subset of all actions. As an example,
an investor may only be expected to pick the best strategy from within a similar class of strategies.
For example, instead of buying IBM, the investor could have bought Apple or Microsoft, and instead
of buying gold, he could have bought silver or bronze. One can also imagine a setting where along
the sequences, some new alternatives are possible while others are excluded. Moreover, one may
wish to assign different weights to some sequence modifications or penalize the investor for choosing
strategies that are negatively correlated to recent choices. The algorithms in this work are flexible
enough to accommodate these environments, which can be straightforwardly modeled by a WFST.
We give a simple example in Figure 2(iii) and give another illustration in Figure 5 in Appendix A,
5
which can be easily generalized. Notice that, as we shall see later, in the case where the maximum
out-degree of any state in the WFST (size of the swapping subset) is bounded by a mild constant
independent of the number of actions, our transductive regret bounds can be very favorable.
4.3 Algorithm
We now present an algorithm, FastTransduce, seeking to minimize the transductive regret given a
family 𝒯 of WFSTs.
Our algorithm is an extension of FastSwap. As in that algorithm, a meta-algorithm is used that
assigns partial losses to external regret minimizing slave algorithms and combines the distributions
it receives from these algorithms via multiple reduced power method iterations. The meta-algorithm
tracks the state reached in the WFST and maintains a set of external regret minimizing algorithms
that help the learner perform well at every state. Thus, here, we need one external regret minimization
algorithm A_{u,i} for each state u reached at time t after reading sequence x_{1:t−1} and each i ∈ Σ
labeling an outgoing transition at u. The pseudocode of this algorithm is provided in Appendix B.
Let |E_T|_in denote the sum over states of the number of transitions with distinct input labels, that
is, |E_T|_in = Σ_{u ∈ Q_T} |ilab[E_T[u]]|. |E_T|_in is upper bounded by the total number of transitions |E_T|.
Then, the following regret guarantee and computational complexity hold for FastTransduce.
Theorem 2. Let (A_{u,i})_{u ∈ Q, i ∈ ilab[E_T[u]]} be external regret minimizing algorithms admitting data-dependent regret bounds of the form O(√(L_T(A_{u,i}) log N)), where L_T(A_{u,i}) is the cumulative loss
of A_{u,i} after T rounds. Assume that, at each round, the sum of the minimal probabilities given to
an expert by these algorithms is bounded below by some constant β > 0. Then, FastTransduce
achieves a transductive regret against 𝒯 that is in O(√(T |E_T|_in log N)) with a per-iteration complexity
in O(N² min{log T / log(1/(1−β)), N}).
The proof is given in Appendix E. The regret guarantee of FastTransduce matches that of the
swap regret algorithm of Blum and Mansour [2007] or FastSwap in the case where 𝒯 is chosen
to be the family of swap transducers, and it matches the conditional k-gram swap regret of Mohri
and Yang [2014] when 𝒯 is chosen to be that of the k-gram swap transducers. Additionally, its
computational complexity is typically more favorable than that of algorithms previously presented in
the literature when the assumption on β holds, and it is never worse.
Remarkably, the computational complexity of FastTransduce is comparable to the cost of
FastSwap, even though FastTransduce is a regret minimization algorithm against an arbitrary
family of finite-state transducers. This is because only the external regret minimizing algorithms that
correspond to the current state need to be updated at each round.
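The quantity |E_T|_in is easy to compute from the transition-list encoding used earlier (again a hypothetical helper of ours):

def size_in(transitions):
    """|E_T|_in: the number of distinct (state, input label) pairs with an
    outgoing transition, summed over all states."""
    return len({(u, a) for (u, a, _out, _dst, _w) in transitions})

# For the single-state swap transducer over N experts this equals N, so
# Theorem 2 recovers the O(sqrt(T N log N)) swap regret bound.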
5 Time-selection transductive regret
In this section, we extend the notion of time-selection functions with modification rules to the setting
of transductive regret and present an algorithm that achieves the same regret guarantee as [Khot and
Ponnuswami, 2008] in their specific setting, but with a substantially more favorable computational
complexity.
Time-selection functions were first introduced in [Lehrer, 2003] as Boolean functions that determine
which subset of times are relevant in the calculation of regret. This concept was relaxed to the
real-valued setting by Blum and Mansour [2007], who considered time-selection functions taking
values in [0, 1]. The authors introduced an algorithm which, for K modification rules and M time-selection functions, guarantees a regret in O(√(T N log(MK))) and admits a per-iteration complexity
in O(max{N K M, N³}). For swap regret with time-selection functions, this corresponds to a regret
bound of O(√(T N² log(MN))) and a per-iteration computational cost in O(N^{N+1} M). [Khot and
Ponnuswami, 2008] improved upon this result and presented an algorithm with a regret bound
in O(√(T log(MK))) and a per-iteration computational cost in O(max{MK, N³}), which is still
prohibitively expensive for swap regret, since it is in O(N^N M).
We now formally define the scenario of online learning with time-selection transductive regret. Let
ℐ ⊆ [0, 1]^ℕ be a family of time-selection functions. Each time-selection function I ∈ ℐ determines
Algorithm 2: FastTimeSelectTransduce; A_ℐ, (A_{I,u,i}) are external regret algorithms.

Algorithm: FastTimeSelectTransduce(ℐ, 𝒯, A_ℐ, (A_{I,u,i})_{I ∈ ℐ, u ∈ Q_T, i ∈ ilab[E_T[u]]})
u ← I_T
for t ← 1 to T do
    q̃ ← Query(A_ℐ);  Q^{t,u} ← 0;  Z^t ← 0
    for each I ∈ ℐ do
        for each i ∈ ilab[E_T[u]] do
            q_{I,i} ← Query(A_{I,u,i})
        M^{t,u,I} ← [q_{I,1} 1_{1 ∈ ilab[E_T[u]]}; ...; q_{I,N} 1_{N ∈ ilab[E_T[u]]}];  Q^{t,u} ← Q^{t,u} + I(t) q̃_I M^{t,u,I};
        Z^t ← Z^t + I(t) q̃_I
    Q^{t,u} ← Q^{t,u} / Z^t
    for j ← 1 to N do
        c_j ← min_{i ∈ ilab[E_T[u]]} Q^{t,u}_{i,j} 1_{j ∈ ilab[E_T[u]]}
    α_t ← ‖c‖_1;  τ_t ← ⌈log(1/√t) / log(1 − α_t)⌉
    if τ_t < N then
        p_t^0 ← c/α_t;  p_t ← p_t^0
        for τ ← 1 to τ_t do
            (p_t^τ)^⊤ ← (p_t^{τ−1})^⊤ (Q^{t,u} − 1⃗ c^⊤);  p_t ← p_t + p_t^τ
        p_t ← p_t / ‖p_t‖_1
    else
        p_t^⊤ ← Fixed-Point(Q^{t,u})
    x_t ← Sample(p_t);  l_t ← ReceiveLoss();  u ← δ_T[u, x_t]
    for each I ∈ ℐ do
        l̃_t^I ← I(t) p_t^⊤ l_t − p_t^⊤ M^{t,u,I} l_t
        for each i ∈ ilab[E_T[u]] do
            AttributeLoss(A_{I,u,i}, p_t[i] I(t) l_t)
    AttributeLoss(A_ℐ, l̃_t)
the importance of the instantaneous regret at each round. Then, the time-selection transductive regret
is defined as:

$$\mathrm{Reg}_T(A, \mathcal{I}, \mathcal{T}) = \max_{I \in \mathcal{I},\, T \in \mathcal{T}} \left\{ \sum_{t=1}^{T} I(t)\, \mathbb{E}_{x_t \sim p_t}[l_t(x_t)] - \sum_{t=1}^{T} I(t)\, \mathbb{E}_{x_t \sim p_t}\Bigg[ \sum_{e \in E_T[\delta_T(I_T, x_{1:t-1}), x_t]} w[e]\, l_t(\mathrm{olab}[e]) \Bigg] \right\}.$$
When the family of transducers admits a single state, this definition coincides with the notion of
time-selection regret studied in [Blum and Mansour, 2007] or [Khot and Ponnuswami, 2008].
Time-selection transductive regret is a more difficult benchmark than transductive regret because the
learner must account for only a subset of the rounds being relevant, in addition to playing a strategy
that is robust against a large set of possible transductions.
To handle this scenario, we propose the following strategy. We maintain an external regret minimizing
algorithm A_ℐ over the set of time-selection functions. This algorithm will be responsible for ensuring
that our strategy is competitive against the a posteriori optimal time-selection function. We also
maintain |ℐ||Q|N other external regret minimizing algorithms, {A_{I,u,i}}_{I ∈ ℐ, u ∈ Q_T, i ∈ ilab[E_T[u]]}, which
will ensure that our algorithm is robust against each of the modification rules and the potential
transductions. We will then use a meta-algorithm to assign appropriate surrogate losses to each of
these external regret minimizing algorithms and combine them to form a stochastic matrix. As in
FastTransduce, this meta-algorithm will also approximate the stationary distribution of the matrix
and use that as the learner's strategy. We call this algorithm FastTimeSelectTransduce. Its
pseudocode is given in Algorithm 2.
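The matrix built by the meta-algorithm is an I(t)-weighted mixture of the per-time-selection-function matrices, as the following sketch illustrates (our own toy code; the names mix_matrices, Ms, etc. are hypothetical):

import numpy as np

def mix_matrices(I_t, q_tilde, Ms):
    """Form Q^{t,u} as in Algorithm 2: a normalized mixture of the matrices
    M^{t,u,I}, weighted by I(t) and by the distribution q_tilde over the
    time-selection functions.

    I_t:     length-m array, the values I(t) of each function at round t.
    q_tilde: length-m array, the distribution returned by A_I.
    Ms:      list of m row-stochastic N x N matrices.
    """
    weights = I_t * q_tilde
    Z = weights.sum()
    return sum(w * M for w, M in zip(weights, Ms)) / Z

# Toy usage with two time-selection functions over three experts:
M1, M2 = np.full((3, 3), 1 / 3), np.eye(3)
Q = mix_matrices(np.array([1.0, 0.5]), np.array([0.6, 0.4]), [M1, M2])
print(Q.sum(axis=1))  # rows still sum to one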
Theorem 3. Suppose that the external regret minimizing algorithms (A_{I,u,i}) in the input of
FastTimeSelectTransduce achieve data-dependent regret bounds, so that if L_T(A_{I,u,i}) is
the cumulative loss of algorithm A_{I,u,i}, then the regret of A_{I,u,i} is at most O(√(L_T(A_{I,u,i}) log N)).
Assume also that A_ℐ is an external regret minimizing algorithm over ℐ that achieves a regret guarantee in O(√(T log |ℐ|)). Moreover, suppose that at each round, the sum of the minimal probabilities
given to an expert by these algorithms is bounded by some constant β > 0. Then, FastTimeSelectTransduce achieves a time-selection transductive regret with respect to the time-selection family ℐ
and WFST family 𝒯 that is in O(√(T (log |ℐ| + |E_T|_in log N))) with a per-iteration complexity in
O(N² min{log T / log((1−β)⁻¹), N} + |ℐ|).
In particular, Theorem 3 implies that FastTimeSelectTransduce achieves the same time-selection
swap regret guarantee as the algorithm of Khot and Ponnuswami [2008], but with a per-round computational cost that is only in O(N² min{log T / log((1−β)⁻¹), N} + |ℐ|), as opposed to
O(|ℐ| N^N), that is, an exponential improvement! Notice that this significant improvement does not
require any assumption (it holds even for β = 0).
6 Sleeping transductive regret
In some applications, at each round, only a subset of the experts may be available. This scenario has
been modeled using the sleeping experts setup [Freund et al., 1997], an extension of prediction with
expert advice where, at each round, a subset of the experts are "asleep" and unavailable to the learner.
The sleeping experts setting has been used to solve problems such as text categorization [Cohen
and Singer, 1999], calendar scheduling [Blum, 1997], and learning how to formulate search-engine
queries [Cohen and Singer, 1996].
The standard benchmark in this setting is sleeping regret, which is the difference between the
cumulative expected loss of the learner and the cumulative expected loss of the best static distribution
over the experts, normalized over the set of awake experts at each round. Thus, if we denote by
A_t ⊆ Σ the set of experts available to the learner at round t, then the sleeping regret can be written
as:

$$\max_{u \in \Delta_N} \left\{ \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim u^{A_t}}[l_t(x_t)] \right\},$$

where for any distribution p, p^{A_t} = p|_{A_t} / Σ_{i ∈ A_t} p_i, and where for any a ∈ Σ and A ⊆ Σ,
p|_A(a) = p(a) 1_{a ∈ A}. The regret guarantees presented in [Freund et al., 1997] are actually
of the following form: max_{u ∈ Δ_N} Σ_{t=1}^T u(A_t) E_{x_t ∼ p_t^{A_t}}[l_t(x_t)] − Σ_{t=1}^T E_{x_t ∼ u|_{A_t}}[l_t(x_t)], and by
generalizing the results in that work to arbitrary losses (i.e., beyond those that satisfy equation
(6) in the paper), it is possible to show that there exist algorithms that bound this quantity by
O(√(Σ_{t=1}^T u*(A_t) E_{x_t ∼ p_t}[l_t(x_t)] log N)), where u* is a maximizer of the quantity.
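The restriction and renormalization p ↦ p^A used throughout this section can be written as a small helper (ours, for illustration):

import numpy as np

def restrict(p, awake):
    """p^A: zero out the asleep experts and renormalize over the awake set A."""
    p = np.asarray(p, dtype=float)
    q = np.zeros_like(p)
    idx = list(awake)
    q[idx] = p[idx]
    return q / q.sum()

print(restrict([0.5, 0.3, 0.2], awake={0, 2}))  # [0.7142..., 0.0, 0.2857...]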
In this section, we extend the notion of sleeping regret to accommodate transduction, and we present
the notion of sleeping transductive regret. We define the sleeping transductive regret of an algorithm
to be the difference between the learner's cumulative expected loss and the cumulative expected loss
of any transduction of the learner's actions among a family of finite-state transducers, where the
weights of the transductions are normalized over the set of awake experts. The sleeping transductive
regret can be expressed as follows:

$$\mathrm{Reg}_T(A, \mathcal{T}, A_1^T) = \max_{T \in \mathcal{T},\, u \in \Delta_N} \left\{ \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}\Bigg[ \sum_{e \in E_T[\delta_T(I_T, x_{1:t-1}), x_t]} u^{A_t}_{\mathrm{olab}[e]}\, w[e]\, l_t(\mathrm{olab}[e]) \Bigg] \right\}.$$
When all experts are awake at every round, i.e., A_t = Σ, the sleeping transductive regret reduces to
the standard transductive regret. When the family of transducers corresponds to that of swap regret,
we uncover a natural definition for sleeping swap regret:

$$\max_{\varphi \in \Phi_{\mathrm{swap}},\, u \in \Delta_N} \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}\left[ u^{A_t}_{\varphi(x_t)}\, l_t(\varphi(x_t)) \right].$$

We now present an efficient algorithm for minimizing sleeping
Figure 3: Maximum values of τ and minimum values of α in FastSwap experiments. The vertical
bars represent the standard deviation across 16 instantiations of the same simulation.
transductive regret, FastSleepTransduce. Similar to FastTransduce, this algorithm uses
a meta-algorithm with multiple regret minimizing sub-algorithms and a fixed-point approximation
to compute the learner's strategy. However, since FastSleepTransduce minimizes sleeping
transductive regret, it uses sleeping regret minimizing sub-algorithms.² The meta-algorithm also
designs a different stochastic matrix. The pseudocode of this algorithm is given in Appendix C.
Theorem 4. Suppose that the sleeping regret minimizing algorithms in the input of
FastSleepTransduce achieve data-dependent regret bounds, so that if the algorithm plays
(p_t)_{t=1}^T against losses (l_t)_{t=1}^T and sees awake sets (A_t)_{t=1}^T, then its regret is at most
O(√(Σ_{t=1}^T u*(A_t) E_{x_t ∼ p_t}[l_t(x_t)] log N)). Moreover, suppose that at each round, the sum of
the minimal probabilities given to an expert by these algorithms is bounded below by some constant
β > 0. Then, FastSleepTransduce guarantees that the quantity

$$\max_{T \in \mathcal{T},\, u \in \Delta_N} \left\{ \sum_{t=1}^{T} u(A_t)\, \mathbb{E}_{x_t \sim p_t^{A_t}}[l_t(x_t)] - \sum_{t=1}^{T} \mathbb{E}_{x_t \sim p_t^{A_t}}\Bigg[ \sum_{e \in E_T[\delta_T(I_T, x_{1:t-1}), x_t]} u_{\mathrm{olab}[e]}|_{A_t}\, w[e]\, l_t(\mathrm{olab}[e]) \Bigg] \right\}$$

is upper bounded by O(√(Σ_{t=1}^T u(A_t) |E_T|_in log N)). Moreover, FastSleepTransduce has a
per-iteration complexity in O(N² min{log T / log(1/(1−β)), N}).
7 Experiments
In this section, we present some toy experiments illustrating the efficacy of the reduced power
method for approximating the stationary distribution in FastSwap.
We considered n base learners, where n ∈ {40, 80, 120, 160, 200}, each using the weighted-majority
algorithm [Littlestone and Warmuth, 1994]. We generated losses as i.i.d. normal random variables
with means in (0.1, 0.9) (chosen randomly) and standard deviation equal to 0.1. We capped the
losses above and below so that they remained in [0, 1] and stayed bounded. We ran FastSwap for
10,000 rounds in each simulation, i.e., for each set of base learners, and repeated each simulation 16
times. The plot of the maximum τ for each simulation is shown in Figure 3. Across all simulations,
the maximum τ attained was 4, so that at most 4 iterations of the RPM were needed on any given
round to obtain a sufficient approximation. Thus, the per-iteration cost in these simulations was
indeed in Õ(N²), an improvement over the O(N³) cost in prior work.
8 Conclusion
We introduced the notion of transductive regret, further extended it to the time-selection and sleeping
experts settings, and presented efficient online learning algorithms for all these settings with sublinear
transductive regret guarantees. We both generalized the existing theory and gave more efficient
algorithms in existing subcases. The algorithms and results in this paper can be further extended to
the case of fully non-deterministic weighted finite-state transducers.
²See [Freund et al., 1997] for a reference on sleeping regret minimizing algorithms.
Acknowledgments
We thank Avrim Blum for informing us of an existing lower bound for swap regret proven by Auer
[2017]. This work was partly funded by NSF CCF-1535987 and NSF IIS-1618662.
References
D. Adamskiy, W. M. Koolen, A. Chernov, and V. Vovk. A closer look at adaptive regret. In ALT, pages 290-304. Springer, 2012.
P. Auer. Personal communication, 2017.
A. Blum. Empirical support for Winnow and Weighted-Majority algorithms: Results on a calendar scheduling domain. Machine Learning, 26(1):5-23, 1997.
A. Blum and Y. Mansour. From external to internal regret. Journal of Machine Learning Research, 8:1307-1324, 2007.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
N. Cesa-Bianchi, P. Gaillard, G. Lugosi, and G. Stoltz. Mirror descent meets fixed share (and feels no regret). In NIPS, pages 980-988, 2012.
W. W. Cohen and Y. Singer. Learning to query the web. In AAAI Workshop on Internet-Based Information Systems. Citeseer, 1996.
W. W. Cohen and Y. Singer. Context-sensitive learning methods for text categorization. ACM Transactions on Information Systems, 17(2):141-173, 1999.
A. Daniely, A. Gonen, and S. Shalev-Shwartz. Strongly adaptive online learning. In Proceedings of ICML, pages 1405-1411, 2015.
D. P. Foster and R. V. Vohra. Calibrated learning and correlated equilibrium. Games and Economic Behavior, 21(1-2):40-55, 1997.
Y. Freund, R. E. Schapire, Y. Singer, and M. K. Warmuth. Using and combining predictors that specialize. In STOC, pages 334-343. ACM, 1997.
A. Greenwald, Z. Li, and W. Schudy. More efficient internal-regret-minimizing algorithms. In COLT, pages 239-250, 2008.
S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127-1150, 2000.
E. Hazan and S. Kale. Computational equivalence of fixed points and no regret algorithms, and convergence to equilibria. In NIPS, pages 625-632, 2008.
E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of ICML, pages 393-400. ACM, 2009.
M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32(2):151-178, 1998.
S. Khot and A. K. Ponnuswami. Minimizing wide range regret with time selection functions. In 21st Annual Conference on Learning Theory, COLT 2008, 2008.
W. M. Koolen and S. de Rooij. Universal codes from switching strategies. IEEE Transactions on Information Theory, 59(11):7168-7185, 2013.
E. Lehrer. A wide range no-regret theorem. Games and Economic Behavior, 42(1):101-115, 2003.
N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994.
M. Mohri and S. Yang. Conditional swap regret and conditional correlated equilibrium. In NIPS, pages 1314-1322, 2014.
M. Mohri and S. Yang. Online learning with expert automata. ArXiv 1705.00132, 2017. URL http://arxiv.org/abs/1705.00132.
C. Monteleoni and T. S. Jaakkola. Online learning of non-stationary sequences. In NIPS, 2003.
Y. Nesterov and A. Nemirovski. Finding the stationary states of Markov chains by iterative methods. Applied Mathematics and Computation, 255:58-65, 2015.
N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani. Algorithmic Game Theory, volume 1. Cambridge University Press, Cambridge, 2007.
M. Odalric and R. Munos. Adaptive bandits: Towards the best history-dependent strategy. In AISTATS, pages 570-578, 2011.
G. Stoltz and G. Lugosi. Internal regret in on-line portfolio selection. Machine Learning, 59(1):125-159, 2005.
V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35(3):247-282, 1999.
Riemannian approach to batch normalization
Minhyung Cho
Jaehyung Lee
Applied Research Korea, Gracenote Inc.
[email protected]
[email protected]
Abstract
Batch Normalization (BN) has proven to be an effective algorithm for deep neural
network training by normalizing the input to each neuron and reducing the internal
covariate shift. The space of weight vectors in the BN layer can be naturally
interpreted as a Riemannian manifold, which is invariant to linear scaling of
weights. Following the intrinsic geometry of this manifold provides a new learning
rule that is more efficient and easier to analyze. We also propose intuitive and
effective gradient clipping and regularization methods for the proposed algorithm
by utilizing the geometry of the manifold. The resulting algorithm consistently
outperforms the original BN on various types of network architectures and datasets.
1 Introduction
Batch Normalization (BN) [1] has become an essential component for breaking performance records
in image recognition tasks [2, 3]. It speeds up training deep neural networks by normalizing the
distribution of the input to each neuron in the network by the mean and standard deviation of the
input computed over a mini-batch of training data, potentially reducing internal covariate shift [1],
the change in the distributions of internal nodes of a deep network during the training.
The authors of BN demonstrated that applying BN to a layer makes its forward pass invariant to
linear scaling of its weight parameters [1]. They argued that this property prevents model explosion
with higher learning rates by making the gradient propagation invariant to linear scaling. Moreover,
the gradient becomes inversely proportional to the scale factor of each weight parameter. While this
property could stabilize the parameter growth by reducing the gradients for larger weights, it could
also have an adverse effect in terms of optimization since there can be an infinite number of networks,
with the same forward pass but different scaling, which may converge to different local optima owing
to different gradients. In practice, networks may become sensitive to the parameters of regularization
methods such as weight decay.
This ambiguity in the optimization process can be removed by interpreting the space of weight
vectors as a Riemannian manifold on which all the scaled versions of a weight vector correspond
to a single point on the manifold. A properly selected metric tensor makes it possible to perform
a gradient descent on this manifold [4, 5], following the gradient direction while staying on the
manifold. This approach fundamentally removes the aforementioned ambiguity while keeping the
invariance property intact, thus ensuring stable weight updates.
In this paper, we first focus on selecting a proper manifold along with the corresponding Riemannian
metric for the scale invariant weight vectors used in BN (and potentially in other normalization
techniques [6, 7, 8]). Mapping scale invariant weight vectors to two well-known matrix manifolds
yields the same metric tensor, leading to a natural choice of the manifold and metric. Then, we derive
the necessary operators to perform a gradient descent on this manifold, which can be understood
as a constrained optimization on the unit sphere. Next, we present two optimization algorithms corresponding to the Stochastic Gradient Descent (SGD) with momentum and Adam [9] algorithms.
An intuitive gradient clipping method is also proposed utilizing the geometry of this space. Finally,
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
we illustrate the application of these algorithms to networks with BN layers, together with an effective
regularization method based on variational inference on the manifold. Experiments show that the
resulting algorithm consistently outperforms the original BN algorithm on various types of network
architectures and datasets.
2 Background
2.1 Batch normalization
We briefly revisit the BN transform and its properties. While it can be applied to any single activation
in the network, in practice it is usually inserted right before the nonlinearity, taking the pre-activation
z = w^T x as its input. In this case, the BN transform is written as
$$\mathrm{BN}(z) = \frac{z - \mathrm{E}[z]}{\sqrt{\mathrm{Var}[z]}} = \frac{w^\top (x - \mathrm{E}[x])}{\sqrt{w^\top R_{xx} w}} = \frac{u^\top (x - \mathrm{E}[x])}{\sqrt{u^\top R_{xx} u}} \qquad (1)$$
where w is a weight vector, x is a vector of activations in the previous layer, u = w/|w|, and R_xx is the covariance matrix of x. Note that BN(w^T x) = BN(u^T x). It was shown in [1] that
$$\frac{\partial\,\mathrm{BN}(w^\top x)}{\partial x} = \frac{\partial\,\mathrm{BN}(u^\top x)}{\partial x} \qquad \text{and} \qquad \frac{\partial\,\mathrm{BN}(z)}{\partial w} = \frac{1}{|w|}\frac{\partial\,\mathrm{BN}(z)}{\partial u} \qquad (2)$$
illustrating the properties discussed in Sec. 1.
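To make these properties concrete, here is a minimal numpy sketch (our illustration, not code from the paper): it checks that the BN output of Eq. (1) is unchanged when w is rescaled by k > 0, and that the gradient with respect to w scales as 1/|w|, as stated in Eq. (2). The cubic loss is an arbitrary stand-in for any scalar loss downstream of BN.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))   # mini-batch of previous-layer activations x
w = rng.normal(size=10)          # weight vector of a single unit

def bn(w, X, eps=1e-8):
    """Batch-normalize the pre-activation z = Xw over the mini-batch, Eq. (1)."""
    z = X @ w
    return (z - z.mean()) / np.sqrt(z.var() + eps)

def grad_w(w, X, h=1e-6):
    """Numerical gradient w.r.t. w of the arbitrary scalar loss sum(BN(z)^3)."""
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = h
        g[i] = (np.sum(bn(w + e, X) ** 3) - np.sum(bn(w - e, X) ** 3)) / (2 * h)
    return g

k = 10.0                                             # arbitrary positive scale factor
print(np.allclose(bn(w, X), bn(k * w, X)))           # True: the forward pass is scale invariant
print(np.allclose(k * grad_w(k * w, X), grad_w(w, X), atol=1e-4))  # gradient shrinks by 1/k, Eq. (2)
```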
2.2 Optimization on Riemannian manifold
Recent studies have shown that various constrained optimization problems in Euclidean space can be expressed as unconstrained optimization problems on submanifolds embedded in Euclidean space [5].
For applications to neural networks, we are interested in Stiefel and Grassmann manifolds [4, 10].
We briefly review them here. The Stiefel manifold V(p, n) is the set of p ordered orthonormal vectors
in R^n (p ≤ n). A point on the manifold is represented by an n-by-p orthonormal matrix Y, where Y^T Y = I_p. The Grassmann manifold G(p, n) is the set of p-dimensional subspaces of R^n (p ≤ n). It follows that span(A), where A ∈ R^{n×p}, is understood to be a point on the Grassmann manifold G(p, n) (note that two matrices A and B are equivalent if and only if span(A) = span(B)). A point
on this manifold can be specified by an arbitrary n-by-p matrix, but for computational efficiency,
an orthonormal matrix is commonly chosen to represent a point. Note that the representation is not
unique [5].
To perform gradient descent on those manifolds, it is essential to equip them with a Riemannian metric
tensor and derive geometric concepts such as geodesics, exponential map, and parallel translation.
Given a tangent vector v ∈ T_x M on a Riemannian manifold M with tangent space T_x M at a point x, let us denote by γ_v(t) the unique geodesic on M with initial velocity v. The exponential map is defined as exp_x(v) = γ_v(1), which maps v to the point that is reached in unit time along the geodesic starting at x. The parallel translation of a tangent vector on a Riemannian manifold can be obtained by transporting the vector along the geodesic by an infinitesimally small amount, and removing the vertical component of the tangent space [11]. In this way, the transported vector stays in the tangent space of the manifold at the new point.
Using the concepts above, a gradient descent algorithm for an abstract Riemannian manifold is given
in Algorithm 1 for reference. This reduces to the familiar gradient descent algorithm when M = R^n, since exp_{y_{t−1}}(−δ · h) is then given by y_{t−1} − δ · ∇f(y_{t−1}).

Algorithm 1 Gradient descent of a function f on an abstract Riemannian manifold M
Require: stepsize δ
Initialize y_0 ∈ M
for t = 1, · · · , T
    h ← grad f(y_{t−1}) ∈ T_{y_{t−1}}M    ▷ grad f(y) is the gradient of f at y ∈ M
    y_t ← exp_{y_{t−1}}(−δ · h)
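As a minimal, concrete instance of Algorithm 1 (our illustration, not code from the paper), the sketch below instantiates the abstract recipe on the unit sphere V(1, n): the Euclidean gradient is projected onto the tangent space, and the update moves along a great circle, using the exponential-map formula that will appear as Eq. (7) in Sec. 3. The toy objective f(y) = y^T A y has the smallest eigenvector of A as its minimizer.

```python
import numpy as np

def riemannian_gd(euclidean_grad, y0, delta=0.01, steps=5000):
    """Algorithm 1 on the unit sphere: h = grad f(y), then y <- exp_y(-delta * h)."""
    y = y0 / np.linalg.norm(y0)
    for _ in range(steps):
        g = euclidean_grad(y)
        h = g - (y @ g) * y                  # Riemannian gradient: project g onto T_y
        v = -delta * h
        nv = np.linalg.norm(v)
        if nv < 1e-12:                       # converged: gradient vanished on the manifold
            break
        y = y * np.cos(nv) + (v / nv) * np.sin(nv)   # great-circle exponential map
    return y

# Minimize f(y) = y^T A y subject to |y| = 1: converges to the smallest eigenvector of A.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); A = A @ A.T
y = riemannian_gd(lambda y: 2 * A @ y, rng.normal(size=5))
print(y @ A @ y, np.linalg.eigvalsh(A)[0])   # the two printed values should nearly coincide
```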
3 Geometry of scale invariant vectors
As discussed in Sec. 2.1, inserting the BN transform makes the weight vectors w, used to calculate
the pre-activation w^T x, invariant to linear scaling. Assuming that there are no additional constraints
on the weight vectors, we can focus on the manifolds on which the scaled versions of a vector collapse
to a point. A natural choice for this would be the Grassmann manifold since the space of the scaled
versions of a vector is essentially a one-dimensional subspace of Rn . On the other hand, the Stiefel
manifold can also represent the same space if we set p = 1, in which case V(1, n) reduces to the unit
sphere. We can map each of the weight vectors w to its normalized version, i.e., w/|w|, on V(1, n).
We show that popular choices of metrics on those manifolds lead to the same geometry.
Tangent vectors to the Stiefel manifold V(p, n) at Z are all the n-by-p matrices Δ such that Z^T Δ + Δ^T Z = 0 [4]. The canonical metric on the Stiefel manifold is derived based on the geometry of quotient spaces of the orthogonal group [4] and is given by
$$g_s(\Delta_1, \Delta_2) = \mathrm{tr}\big(\Delta_1^\top (I - ZZ^\top/2)\,\Delta_2\big) \qquad (3)$$
where Δ_1, Δ_2 are tangent vectors to V(p, n) at Z. If p = 1, the condition Z^T Δ + Δ^T Z = 0 reduces to Z^T Δ = 0, leading to g_s(Δ_1, Δ_2) = tr(Δ_1^T Δ_2).
Now, let an n-by-p matrix Y be a representation of a point on the Grassmann manifold G(p, n). Tangent vectors to the manifold at span(Y) with the representation Y are all the n-by-p matrices Δ such that Y^T Δ = 0. Since Y is not a unique representation, the tangent vector Δ changes with the choice of Y. For example, given a representation Y_1 and its tangent vector Δ_1, if a different representation is selected by performing a right multiplication, i.e., Y_2 = Y_1 R, then the tangent vector must be moved in the same way, that is, Δ_2 = Δ_1 R. The canonical metric, which is invariant under the action of the orthogonal group and scaling [10], is given by
$$g_g(\Delta_1, \Delta_2) = \mathrm{tr}\big((Y^\top Y)^{-1} \Delta_1^\top \Delta_2\big) \qquad (4)$$
where Y^T Δ_1 = 0 and Y^T Δ_2 = 0. For G(1, n) with a representation y, the metric is given by g_g(δ_1, δ_2) = δ_1^T δ_2 / y^T y. The metric is invariant to the scaling of y, as shown below:
$$\delta_1^\top \delta_2 / y^\top y = (k\delta_1)^\top (k\delta_2) / (ky)^\top (ky). \qquad (5)$$
Without loss of generality, we can choose a representation with y^T y = 1 to obtain g_g(δ_1, δ_2) = tr(δ_1^T δ_2), which coincides with the canonical metric for V(1, n). Hereafter, we will focus on the geometry of G(1, n) with the metric and representation chosen above, derived from the general formula in [4, 10].
Gradient of a function. The gradient of a function f(y) defined on G(1, n) is given by
$$\mathrm{grad} f = g - (y^\top g)\, y \qquad (6)$$
where g_i = ∂f/∂y_i.
Exponential map. Let h be a tangent vector to G(1, n) at y. The exponential map on G(1, n) emanating from y with initial velocity h is given by
$$\exp_y(h) = y \cos|h| + \frac{h}{|h|} \sin|h|. \qquad (7)$$
It can be easily shown that exp_y(h) = exp_y((1 + 2π/|h|) h).
Parallel translation. Let δ and h be tangent vectors to G(1, n) at y. The parallel translation of δ along the geodesic with initial velocity h in unit time is given by
$$\mathrm{pt}_y(\delta; h) = \delta - \big(u(1 - \cos|h|) + y \sin|h|\big)\, u^\top \delta, \qquad (8)$$
where u = h/|h|. Note that |δ| = |pt_y(δ; h)|. If δ = h, it can be further simplified as
$$\mathrm{pt}_y(h) = h \cos|h| - y\,|h| \sin|h|. \qquad (9)$$
Note that BN(z) is not invariant to scaling with negative numbers; that is, BN(−z) = −BN(z). To be precise, there is a one-to-one mapping between the set of weights on which BN(z) is invariant and a point on V(1, n), but not on G(1, n). However, the proposed method interprets each weight vector as a point on the manifold only when the weight update is performed. As long as the weight vector stays in the domain where V(1, n) and G(1, n) have the same invariance property, the weight update remains equivalent. We prefer G(1, n) since the operators can easily be extended to G(p, n), opening up further applications.
Figure 1: An illustration of the operators on the Grassmann manifold G(1, 2), where a 2-by-1 matrix y is an orthonormal representation. (a) Gradient: a gradient calculated in Euclidean coordinates is projected onto the tangent space T_y G(1, 2). (b) Exponential map: y_1 = exp_y(h). (c) Parallel translation: h_1 = pt_y(h), with |h| = |h_1|.
4 Optimization algorithms on G(1, n)
In this section, we derive optimization algorithms on the Grassmann manifold G(1, n). The algorithms given below are iterative algorithms to solve the following unconstrained optimization:
$$\min_{y \in G(1,n)} f(y). \qquad (10)$$
4.1 Stochastic gradient descent with momentum
The application of Algorithm 1 to the Grassmann manifold G(1, n) is straightforward. We extend
this algorithm to the one with momentum to speed up the training [12]. Algorithm 2 presents the
pseudo-code of the SGD with momentum on G(1, n). This algorithm differs from conventional
SGD in three ways. First, it projects the gradient onto the tangent space at the point y, as shown in
Fig. 1 (a). Second, it moves the position by the exponential map in Fig. 1 (b). Third, it moves the
momentum by the parallel translation of the Grassmann manifold in Fig. 1 (c). Note that if the weight
is initialized with a unit vector, it remains a unit vector after the update.
Algorithm 2 has an advantage over conventional SGD in that the amount of movement is intuitive,
i.e., it can be measured by the angle between the original point and the new point. As it returns
to the original point after moving by 2π (radians), it is natural to restrict the maximum movement induced by a gradient to 2π. For first order methods like gradient descent, it would be beneficial to
restrict the maximum movement even more so that it stays in the range where linear approximation
is valid. Let h be the gradient calculated at t = 0. The amount of the first step by the gradient of h is δ_0 = δ · |h|, and the contributions to later steps are recursively calculated by δ_t = γ · δ_{t−1}. The overall contribution of h is $\sum_{t=0}^{\infty} \delta_t = \delta\,|h| / (1 - \gamma)$. In practice, we found it beneficial to restrict this amount to less than 0.2 (rad) ≈ 11.46° by clipping the norm of h at ν. For example, with initial learning rate δ = 0.2, setting γ = 0.9 and ν = 0.1 guarantees this condition.
Algorithm 2 Stochastic gradient descent with momentum on G(1, n)
Require: learning rate δ, momentum coefficient γ, norm_threshold ν
Initialize y_0 ∈ R^{n×1} with a random unit vector
Initialize τ_0 ∈ R^{n×1} with a zero vector
for t = 1, · · · , T
    g ← ∂f(y_{t−1})/∂y    ▷ Run a backward pass to obtain g
    h ← g − (y_{t−1}^T g) y_{t−1}    ▷ Project g onto the tangent space at y_{t−1}
    ĥ ← norm_clip(h, ν)*    ▷ Clip the norm of the gradient at ν
    d ← γ τ_{t−1} − δ ĥ    ▷ Update delta with momentum
    y_t ← exp_{y_{t−1}}(d)    ▷ Move to the new position by the exponential map in Eq. (7)
    τ_t ← pt_{y_{t−1}}(d)    ▷ Move the momentum by the parallel translation in Eq. (9)
Note that h, ĥ, d ⊥ y_{t−1} and τ_t ⊥ y_t, where h, ĥ, d, y_{t−1}, y_t ∈ R^{n×1}.
* norm_clip(h, ν) = ν · h/|h| if |h| > ν, else h
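One iteration of Algorithm 2 can be written compactly as below (our sketch; the tangent-space helpers repeat Eqs. (6)–(8) so the block is self-contained, and `euclidean_grad` is an assumed stand-in for the backward pass). Note that pt_y(d) of Eq. (9) is simply pt_y(d; d).

```python
import numpy as np

proj = lambda g, y: g - (y @ g) * y                       # Eq. (6)

def exp_map(y, h):                                        # Eq. (7)
    n = np.linalg.norm(h)
    return y if n == 0 else y * np.cos(n) + (h / n) * np.sin(n)

def pt(y, d, h):                                          # Eq. (8)
    n = np.linalg.norm(h)
    if n == 0:
        return d
    u = h / n
    return d - (u * (1 - np.cos(n)) + y * np.sin(n)) * (u @ d)

def norm_clip(h, nu):
    n = np.linalg.norm(h)
    return (nu / n) * h if n > nu else h

def sgd_g_step(y, tau, euclidean_grad, delta=0.2, gamma=0.9, nu=0.1):
    """One iteration of SGD with momentum on G(1, n) (Algorithm 2)."""
    h = norm_clip(proj(euclidean_grad(y), y), nu)   # project onto T_y and clip at nu
    d = gamma * tau - delta * h                     # update delta with momentum
    return exp_map(y, d), pt(y, d, d)               # y_t and tau_t = pt_y(d), Eq. (9)

# Toy usage: minimize f(y) = y^T A y on the sphere with momentum.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); A = A @ A.T
y = rng.normal(size=5); y /= np.linalg.norm(y)
tau = np.zeros(5)
for _ in range(3000):
    y, tau = sgd_g_step(y, tau, lambda y: 2 * A @ y, delta=0.01)
print(y @ A @ y, np.linalg.eigvalsh(A)[0])          # should nearly coincide
```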
4.2 Adam
Adam [9] is a recently developed first-order optimization algorithm based on adaptive estimates
of lower-order moments that has been successfully applied to training deep neural networks. In
this section, we derive Adam on the Grassmann manifold G(1, n). Adam computes the individual
adaptive learning rate for each parameter. In contrast, we assign one adaptive learning rate to each
weight vector that corresponds to a point on the manifold. In this way, the direction of the gradient is
not corrupted, and the size of the step is adaptively controlled. The pseudo-code of Adam on G(1, n)
is presented in Algorithm 3.
It was shown in [9] that the effective step size of Adam (|d| in Algorithm 3) has two upper bounds. The first occurs in the most severe case of sparsity, and the upper bound is given as $\delta \cdot (1 - \beta_1)/\sqrt{1 - \beta_2}$, since the previous momentum terms are negligible. The second case occurs if the gradient remains stationary across time steps, and the upper bound is given as δ. For the common selection of hyperparameters β_1 = 0.9, β_2 = 0.99, the two upper bounds coincide. In our experiments, δ was chosen to be 0.05 and the upper bound was |d| ≤ 0.05 (rad).
Algorithm 3 Adam on G(1, n)
Require: learning rate δ, momentum coefficients β_1, β_2, norm_threshold ν, scalar ε = 10^−8
Initialize y_0 ∈ R^{n×1} with a random unit vector
Initialize τ_0 ∈ R^{n×1} with a zero vector
Initialize a scalar v_0 = 0
for t = 1, · · · , T
    δ_t ← δ · √(1 − β_2^t) / (1 − β_1^t)    ▷ Calculate the bias correction factor
    g ← ∂f(y_{t−1})/∂y    ▷ Run a backward pass to obtain g
    h ← g − (y_{t−1}^T g) y_{t−1}    ▷ Project g onto the tangent space at y_{t−1}
    ĥ ← norm_clip(h, ν)    ▷ Clip the norm of the gradient at ν
    m_t ← β_1 · τ_{t−1} + (1 − β_1) · ĥ
    v_t ← β_2 · v_{t−1} + (1 − β_2) · ĥ^T ĥ    ▷ (v_t is a scalar)
    d ← −δ_t · m_t / (√v_t + ε)    ▷ Calculate delta
    y_t ← exp_{y_{t−1}}(d)    ▷ Move to the new point by the exponential map in Eq. (7)
    τ_t ← pt_{y_{t−1}}(m_t; d)    ▷ Move the momentum by parallel translation in Eq. (8)
Note that h, ĥ, m_t, d ⊥ y_{t−1} and τ_t ⊥ y_t, where h, ĥ, m_t, d, τ_t, y_{t−1}, y_t ∈ R^{n×1}.
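The corresponding one-step sketch for Algorithm 3 follows (ours; the tangent-space helpers of the previous sketch are repeated so the block stands alone). The departure from standard Adam is that the second moment v is a single scalar per weight vector, so the update direction on the manifold is not distorted coordinate-wise.

```python
import numpy as np

proj = lambda g, y: g - (y @ g) * y                       # Eq. (6)

def exp_map(y, h):                                        # Eq. (7)
    n = np.linalg.norm(h)
    return y if n == 0 else y * np.cos(n) + (h / n) * np.sin(n)

def pt(y, d, h):                                          # Eq. (8)
    n = np.linalg.norm(h)
    if n == 0:
        return d
    u = h / n
    return d - (u * (1 - np.cos(n)) + y * np.sin(n)) * (u @ d)

def norm_clip(h, nu):
    n = np.linalg.norm(h)
    return (nu / n) * h if n > nu else h

def adam_g_step(y, tau, v, t, euclidean_grad,
                delta=0.05, beta1=0.9, beta2=0.99, nu=0.1, eps=1e-8):
    """One iteration of Adam on G(1, n) (Algorithm 3); t counts from 1."""
    delta_t = delta * np.sqrt(1 - beta2 ** t) / (1 - beta1 ** t)   # bias correction
    h = norm_clip(proj(euclidean_grad(y), y), nu)
    m = beta1 * tau + (1 - beta1) * h
    v = beta2 * v + (1 - beta2) * (h @ h)       # scalar second moment: one rate per vector
    d = -delta_t * m / (np.sqrt(v) + eps)
    return exp_map(y, d), pt(y, m, d), v        # y_t, tau_t = pt_y(m_t; d), v_t
```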
5 Batch normalization on the product manifold of G(1, ·)
In Sec. 3, we have shown that a weight vector used to compute the pre-activation that serves as an
input to the BN transform can be naturally interpreted as a point on G(1, n). For deep networks
with multiple layers and multiple units per layer, there can be multiple weight vectors that the BN
transform is applied to. In this case, the training of neural networks is converted into an optimization
problem with respect to a set of points on Grassmann manifolds and the remaining set of parameters.
It is formalized as
$$\min_{X \in \mathcal{M}} L(X) \quad \text{where} \quad \mathcal{M} = G(1, n_1) \times \cdots \times G(1, n_m) \times \mathbb{R}^l \qquad (11)$$
where n_1, . . . , n_m are the dimensions of the weight vectors, m is the number of weight vectors on G(1, ·) which will be optimized using Algorithm 2 or 3, and l is the number of remaining parameters, which include biases, learnable scaling and offset parameters in BN layers, and other weight matrices.
Algorithm 4 presents the whole process of training deep neural networks. The forward pass and
backward pass remain unchanged. The only change made is updating the weights by Algorithm 2
or Algorithm 3. Note that we apply the proposed algorithm only when the input layer to BN is
under-complete, that is, the number of output units is smaller than the number of input units, because
the regularization algorithm we will derive in Sec. 5.1 is only valid in this case. There should be ways
to expand the regularization to over-complete layers. However, we do not elaborate on this topic
since 1) the ratio of over-complete layers is very low (under 0.07% for wide resnets and under 5.5%
for VGG networks) and 2) we believe that over-complete layers are suboptimal in neural networks,
which should be avoided by proper selection of network architectures.
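Before stating the full procedure in Algorithm 4 below, the bookkeeping step of collecting the columns of every under-complete BN-input weight matrix as points on G(1, n) can be sketched as follows (ours; how the weight matrices are enumerated from a real network is left abstract here, and the function name is our own).

```python
import numpy as np

def split_parameters(bn_weight_matrices):
    """Map each column of every under-complete (n > p) weight matrix W, whose
    product W^T x feeds a BN layer, to a unit-norm point on G(1, n); everything
    else stays a conventional Euclidean parameter."""
    manifold_points, euclidean_params = [], []
    for W in bn_weight_matrices:                 # each W is an n-by-p array
        n, p = W.shape
        if n > p:                                # under-complete: use Algorithm 2 or 3
            for i in range(p):
                w = W[:, i]
                manifold_points.append(w / np.linalg.norm(w))
        else:                                    # over-complete: keep conventional SGD
            euclidean_params.append(W)
    return manifold_points, euclidean_params
```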
Algorithm 4 Batch normalization on product manifolds of G(1, ·)
Define the neural network model with BN layers
m ← 0
for W ∈ {weight matrices in the network such that W^T x is an input to a BN layer}
    Let W be an n × p matrix
    if n > p
        for i = 1, · · · , p
            m ← m + 1
            Assign the column vector w_i of W to y_m ∈ G(1, n)
Assign the remaining parameters to v ∈ R^l
Minimize L(y_1, · · · , y_m, v)† w.r.t. y_i ∈ G(1, n_i) for i = 1, · · · , m and v ∈ R^l:
for t = 1, · · · , T
    Run a forward pass to calculate L
    Run a backward pass to obtain ∂L/∂y_i for i = 1, · · · , m and ∂L/∂v
    for i = 1, · · · , m
        Update the point y_i by Algorithm 2 or Algorithm 3
    Update v by conventional optimization algorithms (such as SGD)
† For orthogonality regularization as in Sec. 5.1, L is replaced with L + Σ_W L_O(λ, W).

5.1 Regularization using variational inference
In conventional neural networks, L2 regularization is normally adopted to regularize the networks.
However, it does not work on Grassmann manifolds because the gradient vector of the L2 regularization is perpendicular to the tangent space of the Grassmann manifold. In [13], the L2 regularization
was derived based on the Gaussian prior and delta posterior in the framework of variational inference.
We extend this theory to Grassmann manifolds in order to derive a proper regularization method in
this space.
Consider the complexity loss, which accounts for the cost of describing the network weights. It is
given by the Kullback–Leibler divergence between the posterior distribution Q(w|φ) and the prior distribution P(w|θ) [13]:
$$L_C(\phi, \theta) = D_{KL}\big(Q(w|\phi) \,\|\, P(w|\theta)\big). \qquad (12)$$
Factor analysis (FA) [14] establishes the link between the Grassmann manifold and the space of
probabilistic distributions [15]. The factor analyzer is given by
$$p(x) = \mathcal{N}(u, C), \qquad C = ZZ^\top + \sigma^2 I \qquad (13)$$
where Z is a full-rank n-by-p matrix (n > p) and $\mathcal{N}$ denotes a normal distribution. Under the condition that u = 0 and σ → 0, the samples from the analyzer lie in the linear subspace span(Z). In this way, a linear subspace can be considered as an FA distribution.
Suppose that p n-dimensional weight vectors y_1, · · · , y_p with n > p are in the same layer, and regard them as p points on G(1, n). Let y_i be a representation of a point such that y_i^T y_i = 1. With the choice of a delta posterior and φ = [y_1, · · · , y_p], the corresponding FA distribution can be given by q(x|Y) = N(0, YY^T + σ²I), where Y = [y_1, · · · , y_p], with the subspace condition σ → 0. The FA distribution for the prior is set to p(x|α) = N(0, αI), which depends on the hyperparameter α. Substituting the FA distributions of the prior and posterior into Eq. (12) gives the complexity loss
$$L_C(\alpha, Y) = D_{KL}\big(q(x|Y) \,\|\, p(x|\alpha)\big). \qquad (14)$$
Eq. (14) is minimized when the column vectors of Y are orthogonal to each other (refer to Appendix A
for details). That is, minimizing L_C(α, Y) will maximally scatter the points away from each other on G(1, n). However, it is difficult to estimate its gradient. Alternatively, we minimize
$$L_O(\lambda, Y) = \frac{\lambda}{2}\, \| Y^\top Y - I \|_F^2 \qquad (15)$$
where ‖·‖_F is the Frobenius norm. It has the same minimum as the original complexity loss, and the negative of its gradient is a descent direction of the original loss (refer to Appendix B).
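Eq. (15) and its Euclidean gradient, 2λY(YᵀY − I), are straightforward to implement; the gradient is subsequently projected onto the tangent spaces by the optimizers of Sec. 4. A short sketch (ours), with a finite-difference sanity check:

```python
import numpy as np

def ortho_reg(Y, lam=0.1):
    """Eq. (15): L_O = (lam/2) * ||Y^T Y - I||_F^2 and its Euclidean gradient."""
    M = Y.T @ Y - np.eye(Y.shape[1])
    return 0.5 * lam * np.sum(M * M), 2.0 * lam * Y @ M

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 3))
Y /= np.linalg.norm(Y, axis=0)           # unit-norm columns, i.e., points on G(1, 6)
loss, grad = ortho_reg(Y)
E = np.zeros_like(Y); E[2, 1] = 1e-6     # perturb one entry for a numerical derivative
num = (ortho_reg(Y + E)[0] - ortho_reg(Y - E)[0]) / 2e-6
print(np.isclose(grad[2, 1], num))       # True: the analytic gradient matches
```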
6 Experiments
We evaluated the proposed learning algorithm for image classification tasks using three benchmark
datasets: CIFAR-10 [16], CIFAR-100 [16], and SVHN (Street View House Number) [17]. We used
the VGG network [18] and wide residual network [2, 19, 20] for experiments. The VGG network
is a widely used baseline for image classification tasks, while the wide residual network [2] has
shown state-of-the-art performance on the benchmark datasets. We followed the experimental setups
described in [2] so that the performance of algorithms can be directly compared. Source code is
publicly available at https://github.com/MinhyungCho/riemannian-batch-normalization.
CIFAR-10 is a database of 60,000 color images in 10 classes, which consists of 50,000 training
images and 10,000 test images. CIFAR-100 is similar to CIFAR-10, except that it has 100 classes
and contains fewer images per class. For preprocessing, we normalized the data using the mean
and variance calculated from the training set. During training, the images were randomly flipped
horizontally, padded by four pixels on each side with reflection, and a 32×32 crop was randomly
sampled. SVHN [17] is a digit classification benchmark dataset that contains 73,257 images in the
training set, 26,032 images in the test set, and 531,131 images in the extra set. We merged the extra
set and the training set in our experiment, following the step in [2]. The only preprocessing done was
to divide the intensity by 255.
Detailed architectures for various VGG networks are described in [18]. We used 512 neurons in
fully connected layers rather than 4096 neurons, and the BN layer was placed before every ReLU
activation layer. The learnable scaling parameter in the BN layer was set to one because it does not
reduce the expressive power of the ReLU layer [21]. For SVHN experiments using VGG networks,
the dropout was applied after the pooling layer with dropout rate 0.4. For wide residual networks,
we adopted exactly the same model architectures in [2], including the BN and dropout layers. In all
cases, the biases were removed except the final layer.
For the baseline, the networks were trained by SGD with Nesterov momentum [22]. The weight
decay was set to 0.0005, momentum to 0.9, and minibatch size to 128. For CIFAR experiments, the
initial learning rate was set to 0.1 and multiplied by 0.2 at 60, 120, and 160 epochs. It was trained for
a total of 200 epochs. For SVHN, the initial learning rate was set to 0.01 and multiplied by 0.1 at 60
and 120 epochs. It was trained for a total of 160 epochs.
For the proposed method, we used different learning rates for the weights in Euclidean space and on Grassmann manifolds. Let us denote the learning rates for Euclidean space and Grassmann manifolds as δ_e and δ_g, respectively. The selected initial learning rates were δ_e = 0.01, δ_g = 0.2 for Algorithm 2 and δ_e = 0.01, δ_g = 0.05 for Algorithm 3. The same initial learning rates were used for all CIFAR experiments. For SVHN, they were scaled by 1/10, following the same ratio as the baseline [2]. The training algorithm for the Euclidean parameters was identical to the one used in the baseline, with one exception: we did not apply weight decay to the scaling and offset parameters of BN, whereas the baseline did as in [2]. To clarify, applying weight decay to the mean and variance parameters of BN was essential for reproducing the performance of the baseline. The learning rate schedule was also identical to the baseline, both for δ_e and δ_g. The threshold ν for clipping the gradient was set to 0.1. The regularization strength λ in Eq. (15) was set to 0.1, which gradually achieved near-zero L_O during the course of training, as shown in Fig. 2.
Figure 2: Changes in L_O in Eq. (15) during training for various λ values (y-axis on the left). The red dotted line denotes the learning rate (δ_g, y-axis on the right). VGG-11 was trained by SGD-G on CIFAR-10.
6.1 Results
Tables 1 and 2 compare the performance of the baseline SGD and two proposed algorithms described
in Sec. 4 and 5, on CIFAR-10, CIFAR-100, and SVHN datasets. All the numbers reported are the
median of five independent runs. In most cases, the networks trained using the proposed algorithms
Figure 3: Training curves of the baseline and proposed optimization methods. (a) WRN-28-10 on CIFAR-10. (b) WRN-28-10 on CIFAR-100. (c) WRN-22-8 on SVHN.
outperformed the baseline across various datasets and network configurations, especially for the ones
with more parameters. The best performance was 3.72% (SGD and SGD-G) and 17.85% (ADAM-G)
on CIFAR-10 and CIFAR-100, respectively, with WRN-40-10; and 1.55% (ADAM-G) on SVHN
with WRN-22-8.
Training curves of the baseline and proposed methods are presented in Figure 3. The training curves
for SGD suffer from instability or experience a plateau after each learning rate drop, compared to
the proposed methods. We believe that this comes from the inverse proportionality of the gradient to
the norm of BN weight parameters (as in Eq. (2)). During the training process, this norm is affected
by weight decay, and hence so is the magnitude of the gradient. It is effectively equivalent to disturbing the learning rate by weight decay. The authors of the wide residual network also observed that applying weight decay caused this phenomenon, but weight decay was indispensable for achieving the reported performance [2]. The proposed methods resolve this issue in a principled way.
Table 3 summarizes the performance of recently published algorithms on the same datasets. We
present the best performance of five independent runs in this table.
Table 1: Classification error rate of various networks on CIFAR-10 and CIFAR-100 (median of five runs). VGG-l denotes a VGG network with l layers. WRN-d-k denotes a wide residual network that has d convolutional layers and a widening factor k. SGD-G and Adam-G denote Algorithm 2 and Algorithm 3, respectively. The results in parentheses show those reported in [2].

             |            CIFAR-10            |            CIFAR-100
Model        | SGD           SGD-G   Adam-G  | SGD             SGD-G   Adam-G
VGG-11       | 7.43          7.14    7.59    | 29.25           28.02   28.05
VGG-13       | 5.88          5.87    6.05    | 26.17           25.29   24.89
VGG-16       | 6.32          5.88    5.98    | 26.84           25.64   25.29
VGG-19       | 6.49          5.92    6.02    | 27.62           25.79   25.59
WRN-52-1     | 6.23 (6.28)   6.56    6.58    | 27.44 (29.78)   28.13   28.16
WRN-16-4     | 4.96 (5.24)   5.35    5.28    | 23.41 (23.91)   24.51   24.24
WRN-28-10    | 3.89 (3.89)   3.85    3.78    | 18.66 (18.85)   18.19   18.30
WRN-40-10†   | 3.72 (3.8)    3.72    3.80    | 18.39 (18.3)    18.04   17.85

† This model was trained on two GPUs. The gradients were summed from two minibatches of size 64, and BN statistics were calculated from each minibatch.
7 Conclusion and discussion
We presented new optimization algorithms for scale-invariant vectors by representing them on G(1, n)
and following the intrinsic geometry. Specifically, we derived SGD with momentum and Adam
algorithms on G(1, n). An efficient regularization algorithm in this space has also been proposed.
Applying them in the context of BN showed consistent performance improvements over the baseline
BN algorithm with SGD on CIFAR-10, CIFAR-100, and SVHN datasets.
Table 2: Classification error rate of various networks on SVHN (median of five runs).

Model      | SGD           SGD-G   Adam-G
VGG-11     | 2.11          2.10    2.14
VGG-13     | 1.78          1.74    1.72
VGG-16     | 1.85          1.76    1.76
VGG-19     | 1.94          1.81    1.77
WRN-52-1   | 1.68 (1.70)   1.72    1.67
WRN-16-4   | 1.64 (1.64)   1.67    1.61
WRN-16-8   | 1.60 (1.54)   1.69    1.68
WRN-22-8   | 1.64          1.63    1.55
Table 3: Performance comparison with previously published results.

Method                                | CIFAR-10   CIFAR-100   SVHN
NormProp [7]                          | 7.47       29.24       1.88
ELU [23]                              | 6.55       24.28       –
Scalable Bayesian optimization [24]   | 6.37       27.4        –
Generalizing pooling [25]             | 6.05       –           1.69
Stochastic depth [26]                 | 4.91       24.98       1.75
ResNet-1001 [20]                      | 4.62       22.71       –
Wide residual network [2]             | 3.8        18.3        1.54
Proposed (best of five runs)          | 3.49¹      17.59²      1.49³

¹ WRN-40-10+SGD-G   ² WRN-40-10+Adam-G   ³ WRN-22-8+Adam-G
Our work interprets each scale invariant piece of the weight matrix as a separate manifold, whereas
natural gradient based algorithms [27, 28, 29] interpret the whole parameter space as a manifold and
constrain the shape of the cost function (i.e. to the KL divergence) to obtain a cost efficient metric.
There are similar approaches to ours, such as Path-SGD [30] and the one based on symmetry-invariant updates [31], but a direct comparison remains to be done.
The proposed algorithms are computationally as efficient as their non-manifold versions since they do not affect the forward and backward propagation steps, where the majority of the computation takes place. The weight update step is 2.5–3.5 times more expensive, but still O(n).
We did not explore the full range of possibilities offered by the proposed algorithm. For example,
techniques similar to BN, such as weight normalization [6] and normalization propagation [7], have
scale invariant weight vectors and can benefit from the proposed algorithm in the same way. Layer
normalization [8] is invariant to weight matrix rescaling, and simple vectorization of the weight
matrix enables the application of the proposed algorithm.
References
[1] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of The 32nd International Conference on Machine Learning, pages 448–456, 2015.
[2] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In BMVC, 2016.
[3] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[4] Alan Edelman, Tomás A. Arias, and Steven T. Smith. The geometry of algorithms with orthogonality constraints. SIAM Journal on Matrix Analysis and Applications, 20(2):303–353, 1998.
[5] P.-A. Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2009.
[6] Tim Salimans and Diederik P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems 29, pages 901–909, 2016.
[7] Devansh Arpit, Yingbo Zhou, Bhargava Kota, and Venu Govindaraju. Normalization propagation: A parametric technique for removing internal covariate shift in deep networks. In Proceedings of The 33rd International Conference on Machine Learning, pages 1168–1176, 2016.
[8] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[9] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] P.-A. Absil, Robert Mahony, and Rodolphe Sepulchre. Riemannian geometry of Grassmann manifolds with a view on algorithmic computation. Acta Applicandae Mathematicae, 80(2):199–220, 2004.
[11] M. P. do Carmo. Differential Geometry of Curves and Surfaces. Prentice-Hall, 1976.
[12] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, October 1986.
[13] Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24, pages 2348–2356, 2011.
[14] Zoubin Ghahramani, Geoffrey E. Hinton, et al. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1996.
[15] Jihun Hamm and Daniel D. Lee. Extended Grassmann kernels for subspace-based learning. In Advances in Neural Information Processing Systems 21, pages 601–608, 2009.
[16] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. Master's thesis, Department of Computer Science, University of Toronto, 2009.
[17] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011.
[18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
[21] Sergey Ioffe. Batch renormalization: Towards reducing minibatch dependence in batch-normalized models. arXiv preprint arXiv:1702.03275, 2017.
[22] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, pages 1139–1147, 2013.
[23] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[24] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Mostofa Patwary, Mr Prabhat, and Ryan Adams. Scalable Bayesian optimization using deep neural networks. In International Conference on Machine Learning, pages 2171–2180, 2015.
[25] Chen-Yu Lee, Patrick W. Gallagher, and Zhuowen Tu. Generalizing pooling functions in convolutional neural networks: Mixed, gated, and tree. In International Conference on Artificial Intelligence and Statistics, 2016.
[26] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q. Weinberger. Deep networks with stochastic depth. In European Conference on Computer Vision, pages 646–661. Springer, 2016.
[27] Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[28] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In International Conference on Learning Representations, 2014.
[29] James Martens and Roger B. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of The 32nd International Conference on Machine Learning, pages 2408–2417, 2015.
[30] Behnam Neyshabur, Ruslan R. Salakhutdinov, and Nati Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2422–2430, 2015.
[31] Vijay Badrinarayanan, Bamdev Mishra, and Roberto Cipolla. Understanding symmetries in deep networks. arXiv preprint arXiv:1511.01029, 2015.
Self-supervised Learning of Motion Capture
Hsiao-Yu Fish Tung 1 , Hsiao-Wei Tung 2 , Ersin Yumer 3 , Katerina Fragkiadaki 1
1
Carnegie Mellon University, Machine Learning Department
2
University of Pittsburgh, Department of Electrical and Computer Engineering
3
Adobe Research
{htung, katef}@cs.cmu.edu, [email protected], [email protected]
Abstract
Current state-of-the-art solutions for motion capture from a single camera are
optimization driven: they optimize the parameters of a 3D human model so that
its re-projection matches measurements in the video (e.g. person segmentation,
optical flow, keypoint detections etc.). Optimization models are susceptible to
local minima. This has been the bottleneck that forced using clean green-screen
like backgrounds at capture time, manual initialization, or switching to multiple
cameras as input resource. In this work, we propose a learning based motion capture
model for single camera input. Instead of optimizing mesh and skeleton parameters
directly, our model optimizes neural network weights that predict 3D shape and
skeleton configurations given a monocular RGB video. Our model is trained using
a combination of strong supervision from synthetic data, and self-supervision from
differentiable rendering of (a) skeletal keypoints, (b) dense 3D mesh motion, and
(c) human-background segmentation, in an end-to-end framework. Empirically
we show our model combines the best of both worlds of supervised learning
and test-time optimization: supervised learning initializes the model parameters
in the right regime, ensuring good pose and surface initialization at test time,
without manual effort. Self-supervision by back-propagating through differentiable
rendering allows (unsupervised) adaptation of the model to the test data, and offers
much tighter fit than a pretrained fixed model. We show that the proposed model
improves with experience and converges to low-error solutions where previous
optimization methods fail.
1
Introduction
Detailed understanding of the human body and its motion from "in the wild" monocular setups
would open the path to applications of automated gym and dancing teachers, rehabilitation guidance,
patient monitoring and safer human-robot interactions. It would also impact the movie industry
where character motion capture (MOCAP) and retargeting still requires tedious labor effort of artists
to achieve the desired accuracy, or the use of expensive multi-camera setups and green-screen
backgrounds.
Most current motion capture systems are optimization driven and cannot benefit from experience.
Monocular motion capture systems optimize the parameters of a 3D human model to match measurements in the video (e.g., person segmentation, optical flow). Background clutter and optimization
difficulties significantly impact tracking performance, leading prior work to use green screen-like
backdrops [5] and careful initializations. Additionally, these methods cannot leverage the data generated by laborious manual processes involved in motion capture, to improve over time. This means
Figure 1: Self-supervised learning of motion capture. Given a video sequence and a set of 2D body
joint heatmaps, our network predicts the body parameters for the SMPL 3D human mesh model [25].
Neural network weights are pretrained using synthetic data and finetuned using self-supervised losses
driven by differentiable keypoint, segmentation, and motion reprojection errors, against detected 2D
keypoints, 2D segmentation and 2D optical flow, respectively. By finetuning its parameters at test
time through self-supervised losses, the proposed model achieves a significantly higher level of 3D
reconstruction accuracy than pure supervised or pure optimization based models, which either do not
adapt at test time, or cannot benefit from training data, respectively.
that each time a video needs to be processed, the optimization and manual efforts need to be repeated
from scratch.
We propose a neural network model for motion capture in monocular videos, that learns to map an
image sequence to a sequence of corresponding 3D meshes. The success of deep learning models lies
in their supervision from large scale annotated datasets [14]. However, detailed 3D mesh annotations
are tedious and time consuming to obtain, thus, large scale annotation of 3D human shapes in realistic
video input is currently unavailable. Our work bypasses lack of 3D mesh annotations in real videos
by combining strong supervision from large scale synthetic data of human rendered models, and selfsupervision from 3D-to-2D differentiable rendering of 3D keypoints, motion and segmentation, and
matching with corresponding detected quantities in 2D, in real monocular videos. Our self-supervision
leverages recent advances in 2D body joint detection [37; 9], 2D figure-ground segmentation [22],
and 2D optical flow [21], each learnt using strong supervision from real or synthetic datasets, such as
MPII [3], COCO [24], and flying chairs [15], respectively. Indeed, annotating 2D body joints is easier
than annotating 3D joints or 3D meshes, while optical flow has proven to be easy to generalize from
synthetic to real data. We show how state-of-the-art models of 2D joints, optical flow and 2D human
segmentation can be used to infer dense 3D human structure in videos in the wild, that is hard to
otherwise manually annotate. In contrast to previous optimization based motion capture works [8; 7],
we use differentiable warping and differentiable camera projection for optical flow and segmentation
losses, which allows our model to be trained end-to-end with standard back-propagation.
We use SMPL [25] as our dense human 3D mesh model. It consists of a fixed number of vertices and
triangles with fixed topology, where the global pose is controlled by relative angles between body
parts θ, and the local shape is controlled by mesh surface parameters β. Given the pose and surface parameters, a dense mesh can be generated in an analytical (differentiable) form, which could then be globally rotated and translated to a desired location. The task of our model is to reverse-engineer the rendering process and predict the parameters of the SMPL model (θ and β), as well as the focal length,
3D rotations and 3D translations in each input frame, provided an image crop around a detected
person.
Given 3D mesh predictions in two consecutive frames, we differentiably project the 3D motion
vectors of the mesh vertices, and match them against estimated 2D optical flow vectors (Figure 1).
Differentiable motion rendering and matching requires vertex visibility estimation, which we perform
using ray casting integrated with our neural model for code acceleration. Similarly, in each frame,
3D keypoints are projected and their distances to corresponding detected 2D keypoints are penalized.
Last but not least, differentiable segmentation matching using Chamfer distances penalizes under- and over-fitting of the projected vertices against the 2D segmentation of the human foreground. Note that
these re-projection errors are only on the shape rather than the texture by design, since our predicted
3D meshes are textureless.
We provide quantitative and qualitative results on 3D dense human shape tracking in SURREAL
[35] and H3.6M [22] datasets. We compare against the corresponding optimization versions, where
mesh parameters are directly optimized by minimizing our self-supervised losses, as well as against
supervised models that do not use self-supervision at test time. Optimization baselines easily get
stuck in local minima, and are very sensitive to initialization. In contrast, our learning-based MOCAP
model relies on supervised pretraining (on synthetic data) to provide reasonable pose initialization
at test time. Further, self-supervised adaptation achieves lower 3D reconstruction error than the
pretrained, non-adapted model. Last, our ablation highlights the complementarity of the three
proposed self-supervised losses.
2 Related Work
3D Motion capture 3D motion capture using multiple cameras (four or more) is a well studied
problem where impressive results are achieved with existing methods [17]. However, motion capture
from a single monocular camera is still an open problem even for skeleton-only capture/tracking.
Since ambiguities and occlusions can be severe in monocular motion capture, most approaches rely on
prior models of pose and motion. Earlier works considered linear motion models [16; 13]. Non-linear
priors such as Gaussian process dynamical models [34], as well as twin Gaussian processes [6] have
also been proposed, and shown to outperform their linear counterparts. Recently, Bogo et al. [7]
presented a static image pose and 3D dense shape prediction model which works in two stages: first, a
3D human skeleton is predicted from the image, and then a parametric 3D shape is fit to the predicted
skeleton using an optimization procedure, during which the skeleton remains unchanged. Instead, our
work couples 3D skeleton and 3D mesh estimation in an end-to-end differentiable framework, via
test-time adaptation.
3D human pose estimation Earlier work on 3D pose estimation considered optimization methods
and hard-coded anthropomorphic constraints (e.g., limb symmetry) to fight ambiguity during 2D-to-3D lifting [28]. Many recent works learn to regress to 3D human pose directly given an RGB
image [27] using deep neural networks and large supervised training sets [22]. Many have explored
2D body pose as an intermediate representation [11; 38], or as an auxiliary task in a multi-task
setting [32; 38; 39], where the abundance of labelled 2D pose training examples helps feature
learning and complements limited 3D human pose supervision, which requires a Vicon system and
thus is restricted to lab instrumented environments. Rogez and Schmid [29] obtain large scale
RGB to 3D pose synthetic annotations by rendering synthetic 3D human models against realistic
backgrounds [29], a dataset also used in this work.
Deep geometry learning Our differentiable renderer follows recent works that integrate deep
learning and geometric inference [33]. Differentiable warping [23; 26] and backpropable camera
projection [39; 38] have been used to learn 3D camera motion [40] and joint 3D camera and 3D
object motion [30] in an end-to-end self-supervised fashion, minimizing a photometric loss. Garg et al. [18] learn a monocular depth predictor, supervised by photometric error, given a stereo image pair with known baseline as input. The work of [19] contributed a deep learning library with many geometric operations, including a backpropable camera projection layer similar to the cameras of Yan et al. [39] and Wu et al. [38], as well as Garg et al.'s depth CNN [18].
3 Learning Motion Capture
The architecture of our network is shown in Figure 1. We use SMPL as the parametrized model of 3D
human shape, introduced by Loper et al. [25]. SMPL is comprised of parameters that control the yaw,
pitch and roll of body joints, and parameters that control the deformation of the body skin surface. Let θ, β denote the joint angle and surface deformation parameters, respectively. Given these parameters, a fixed number (n = 6890) of 3D mesh vertex coordinates is obtained using the following analytical expression, where X_i ∈ R³ stands for the 3D coordinates of the ith vertex in the mesh:
$$X_i = \bar{X}_i + \sum_m \beta_m\, s_{m,i} + \sum_n \big(T_n(\theta) - T_n(\theta^*)\big)\, p_{n,i} \qquad (1)$$
Distance maps
t2
t1
Threshold
Chamfer
Segmentation distance maps
~
x~
2d
xxKPT
2d
KPT
match
~~
u, v
(SM)
(CM)
(SI)
(CI)
SM
x
CI +
SI x
CM ?
0
u, v
match by
differentiable interpolation
Figure 2: Differentiable rendering of body joints (left), segmentation (middle) and mesh vertex
motion (right).
where $\bar{X}_i \in \mathbb{R}^3$ is the nominal rest position of vertex i, β_m is the blend coefficient for the skin surface blendshapes, s_{m,i} ∈ R³ is the element corresponding to the ith vertex of the mth skin surface blendshape, p_{n,i} ∈ R³ is the element corresponding to the ith vertex of the nth skeletal pose blendshape, T_n(θ) is a function that maps the nth pose blendshape to a vector of concatenated part relative rotation matrices, and T_n(θ*) is the same for the rest pose θ*. Note that the expression in Eq. 1 is differentiable.
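To make the structure of Eq. 1 concrete (a rest template, plus β-weighted shape blendshapes, plus pose-feature-weighted pose blendshapes), here is a toy numpy sketch. It is our illustration only: the blendshapes are random placeholders rather than actual SMPL data, and the assumed pose-feature dimension 207 corresponds to SMPL's 23 joints × 9 rotation-matrix entries.

```python
import numpy as np

rng = np.random.default_rng(0)
n_verts, n_shape, n_pose = 6890, 10, 207       # SMPL sizes; blendshapes below are random stand-ins

X_bar = rng.normal(size=(n_verts, 3))          # rest vertex positions \bar{X}_i
S = rng.normal(size=(n_shape, n_verts, 3))     # skin surface blendshapes s_{m,i}
P = rng.normal(size=(n_pose, n_verts, 3))      # skeletal pose blendshapes p_{n,i}

def mesh_vertices(beta, T_theta, T_rest):
    """Eq. 1: X_i = X_bar_i + sum_m beta_m s_{m,i} + sum_n (T_n(theta) - T_n(theta*)) p_{n,i}."""
    return (X_bar
            + np.einsum('m,mvc->vc', beta, S)
            + np.einsum('n,nvc->vc', T_theta - T_rest, P))

beta = 0.1 * rng.normal(size=n_shape)
T_theta, T_rest = rng.normal(size=n_pose), rng.normal(size=n_pose)
X = mesh_vertices(beta, T_theta, T_rest)       # (6890, 3); differentiable in beta and T(theta)
print(X.shape)
```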
Our model, given an image crop centered around a person detection, predicts the parameters θ and β of the SMPL 3D human mesh. Since annotations of 3D meshes are very tedious and time consuming to obtain, our model uses supervision from a large dataset of synthetic monocular videos, and self-supervision with a number of losses that rely on differentiable rendering of 3D keypoints, segmentation and vertex motion, and matching with their 2D equivalents. We detail the supervision of our model below.
Paired supervision from synthetic data We use the synthetic Surreal dataset [35] that contains
monocular videos of human characters performing activities against 2D image backgrounds. The
synthetic human characters have been generated using the SMPL model, and animated using the Human
H3.6M dataset [22]. Texture is generated by directly coloring the mesh vertices, without actual
3D cloth simulation. Since values for θ and β are directly available in this dataset, we use them to pretrain the θ and β branches of our network using a standard supervised regression loss.
3.1 Self-supervision through differentiable rendering
Self-supervision in our model is based on 3D-to-2D rendering and consistency checks against 2D
estimates of keypoints, segmentation and optical flow. Self-supervision can be used at both train and
test time, for adapting our model's weights to the statistics of the test set.
Keypoint re-projection error Given a static image, predictions of 3D body joints of the depicted
person should match, when projected, corresponding 2D keypoint detections. Such keypoint reprojection error has been used already in numerous previous works [38; 39]. Our model predicts a
dense 3D mesh instead of a skeleton. We leverage the linear relationship that relates our 3D mesh
vertices to 3D body joints:
$$X_{kpt}^\top = A \cdot X^\top \qquad (2)$$
Let X ∈ R^{4×n} denote the 3D coordinates of the mesh vertices in homogeneous coordinates (with a small abuse of notation, since it is clear from the context), where n is the number of vertices. For estimating the 3D-to-2D projection, our model further predicts the focal length, rotation of the camera, and
translation of the 3D mesh off the center of the image, in case the root node of the 3D mesh is
not exactly placed at the center of the image crop. We do not predict translation in the z direction
(perpendicular to the image plane), as the predicted focal length accounts for scaling of the person
figure. For rotation, we predict Euler rotation angles α, β, γ so that the 3D rotation of the camera reads R = R_x(α) R_y(β) R_z(γ), where R_x(α) denotes rotation around the x-axis by angle α, here in homogeneous coordinates. The re-projection equation for the kth keypoint then reads:
$$x_{kpt}^k = P \cdot R \cdot X_{kpt}^k + T \qquad (3)$$
where $P = \mathrm{diag}([f\; f\; 1\; 0])$ is the predicted camera projection matrix and $T = [T_x\; T_y\; 0\; 0]^\top$ handles small perturbations in object centering. The keypoint reprojection error then reads:
$$\mathcal{L}_{kpt} = \| x_{kpt} - \hat{x}_{kpt} \|_2^2, \qquad (4)$$
where $\hat{x}_{kpt}$ are the ground-truth or detected 2D keypoints. Since the 3D mesh vertices are related to the θ, β predictions through Eq. 1, minimizing the re-projection error updates the neural parameters for θ, β estimation.
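A compact sketch of Eqs. 3 and 4 (ours): we read the projection as a scaled-orthographic camera, i.e., we keep the first two components of f · RX plus the 2D offset, which is consistent with the statement above that the focal length absorbs the missing z-translation; the exact homogeneous bookkeeping used in the paper may differ, and the predicted quantities f, angles, Tx, Ty would come from the network.

```python
import numpy as np

def euler_to_R(alpha, beta, gamma):
    """R = R_x(alpha) R_y(beta) R_z(gamma) as a 3x3 rotation matrix."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cc, sc = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project(X, f, angles, Tx, Ty):
    """Eq. 3, scaled-orthographic reading: x = f * (R X)_{x,y} + (Tx, Ty).
    X: (K, 3) array of 3D keypoints; returns (K, 2) pixel coordinates."""
    X_cam = X @ euler_to_R(*angles).T
    return f * X_cam[:, :2] + np.array([Tx, Ty])

def keypoint_loss(X_kpt, x_detected, f, angles, Tx, Ty):
    """Eq. 4: squared L2 distance between projected and detected 2D keypoints."""
    return np.sum((project(X_kpt, f, angles, Tx, Ty) - x_detected) ** 2)
```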
Motion re-projection error Given a pair of frames, 3D mesh vertex displacements from one frame
to the next should match, when projected, corresponding 2D optical flow vectors, computed from
the corresponding RGB frames. All Structure-from-Motion (SfM) methods exploit such motion
re-projection error in one way or another: the estimated 3D pointcloud in time when projected should
match 2D optical flow vectors in [2], or multiframe 2D point trajectories in [31]. Though previous
SfM models use motion re-projection error to optimize 3D coordinates and camera parameters directly
[2], here we use it to optimize the neural network parameters that predict such quantities instead.
Motion re-projection error estimation requires visibility of the mesh vertices in each frame. We
implement visibility inference through ray casting for each example and training iteration in TensorFlow, and integrate it with our neural network model, which speeds up execution by ten times compared to interfacing with ray casting in OpenGL. Vertex visibility inference does not need to be
differentiable: it is used only to mask motion re-projection loss for invisible vertices. Since we are
only interested in visibility rather than complex rendering functionality, ray casting boils down to
detecting the first mesh facet to intersect with the straight line from the image projected position of
the center of a facet to its 3D point. If the intercepted facet is the same as the one which the ray is
cast from, we denote that facet as visible since there is no occluder between that facet and the image
plane. We provide more details on the ray casting reasoning in the experiments section. Vertices that construct these visible facets are treated as visible. Let v_i ∈ {0, 1}, i = 1 · · · n, denote the visibilities of the mesh vertices.
Given two consecutive frames I_1, I_2, let θ_1, β_1, R_1, T_1 and θ_2, β_2, R_2, T_2 denote the corresponding predictions from our model. Using Eq. 1, we obtain the corresponding 3D point clouds X_1^i = [X_1^i, Y_1^i, Z_1^i]^T and X_2^i = [X_2^i, Y_2^i, Z_2^i]^T, i = 1 ... n. The 3D mesh vertices are mapped to corresponding pixel coordinates (x_1^i, y_1^i) and (x_2^i, y_2^i), i = 1 ... n, using the camera projection equation (Eq. 3). Thus the predicted 2D body flow resulting from the 3D motion of the corresponding meshes is (u^i, v^i) = (x_2^i − x_1^i, y_2^i − y_1^i), i = 1 ... n.
Let OF = (û, v̂) denote the 2D optical flow field estimated with an optical flow method, such as
the state-of-the-art deep neural flow of [21]. Let OF(x_1^i, y_1^i) denote the optical flow at the potentially
subpixel location (x_1^i, y_1^i), obtained from the pixel-centered optical flow field OF through differentiable
bilinear interpolation (differentiable warping) [23]. Then, the motion re-projection error reads:

L_motion = (1 / (1^T v)) Σ_{i=1}^n v^i ( ||u^i − û(x_1^i, y_1^i)||_1 + ||v^i − v̂(x_1^i, y_1^i)||_1 )
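A minimal NumPy sketch of this loss, with the bilinear sampling and visibility masking made explicit (all names are illustrative; the paper's implementation is in TensorFlow):

```python
import numpy as np

def bilinear_sample(field, x, y):
    """Sample a (H, W) field at subpixel locations x, y via bilinear interpolation."""
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    H, W = field.shape
    x0, x1 = np.clip(x0, 0, W - 1), np.clip(x1, 0, W - 1)
    y0, y1 = np.clip(y0, 0, H - 1), np.clip(y1, 0, H - 1)
    return ((1 - wx) * (1 - wy) * field[y0, x0] + wx * (1 - wy) * field[y0, x1]
            + (1 - wx) * wy * field[y1, x0] + wx * wy * field[y1, x1])

def motion_loss(x1, y1, x2, y2, flow_u, flow_v, vis):
    """x1, y1, x2, y2: (n,) projected vertex pixel coords in frames 1 and 2;
    flow_u, flow_v: (H, W) optical flow maps; vis: (n,) 0/1 visibility mask."""
    u_pred, v_pred = x2 - x1, y2 - y1              # predicted 2D body flow per vertex
    u_flow = bilinear_sample(flow_u, x1, y1)       # flow sampled at frame-1 vertex locations
    v_flow = bilinear_sample(flow_v, x1, y1)
    l1 = np.abs(u_pred - u_flow) + np.abs(v_pred - v_flow)
    return np.sum(vis * l1) / max(np.sum(vis), 1)  # visibility-masked L1 (L_motion)
```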
Segmentation re-projection error Given a static image, the predicted 3D mesh for the depicted
person should match, when projected, the corresponding 2D figure-ground segmentation mask.
Numerous 3D shape reconstruction methods have used such a segmentation consistency constraint
[36; 2; 4], but again in an optimization rather than a learning framework.
Let S^I ∈ {0, 1}^{w×h} denote the 2D figure-ground binary image segmentation, supplied by ground truth, by background subtraction, or predicted by a figure-ground neural network segmenter [20]. Our
segmentation re-projection loss measures how well the projected mesh mask fits the image segmentation S^I by penalizing non-overlapping pixels by their shortest distance to the projected model
segmentation S^M = {x_2d}. For this purpose, Chamfer distance maps C^I for the image segmentation
S^I and C^M for the model-projected segmentation S^M are calculated. The loss then reads:

L_seg = S^M ⊙ C^I + S^I ⊙ C^M,

where ⊙ denotes pointwise multiplication. Both terms are necessary to prevent under- or over-coverage of the model segmentation relative to the image segmentation. For the loss to be differentiable,
we cannot use the distance transform for efficient computation of the Chamfer maps. Instead, we compute them by brute force, calculating the shortest distance of each pixel to the model segmentation
and vice versa. Let x_2d^i, i ∈ 1 ... n denote the set of model-projected vertex pixel coordinates and
x_seg^p, p ∈ 1 ... m denote the set of pixel-centered coordinates that belong to the foreground of the 2D
segmentation map S^I:
L_seg-proj = Σ_{i=1}^n min_p ||x_2d^i − x_seg^p||_2^2  +  Σ_{p=1}^m min_i ||x_seg^p − x_2d^i||_2^2,    (5)

where the first sum prevents over-coverage and the second prevents under-coverage.
The first term ensures that the model-projected segmentation is covered by the image segmentation, while
the second term ensures that the model-projected segmentation covers the image segmentation well. To
lower the memory requirements, we use half of the input image resolution.
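A brute-force NumPy sketch of Eq. 5 (illustrative only; the paper's version is a differentiable TensorFlow op over the projected vertices):

```python
import numpy as np

def seg_proj_loss(x2d, seg_fg):
    """x2d: (n, 2) model-projected vertex pixel coords; seg_fg: (m, 2) foreground
    pixel coords of the image segmentation S^I. Brute-force version of Eq. 5."""
    # pairwise squared distances between projected vertices and foreground pixels
    d2 = np.sum((x2d[:, None, :] - seg_fg[None, :, :]) ** 2, axis=-1)  # (n, m)
    over = np.sum(d2.min(axis=1))   # each vertex to nearest fg pixel: prevents over-coverage
    under = np.sum(d2.min(axis=0))  # each fg pixel to nearest vertex: prevents under-coverage
    return over + under
```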
4 Experiments
We test our method on two datasets: Surreal [35] and H3.6M [22]. Surreal is currently the largest
synthetic dataset of people in motion. It contains short monocular video clips depicting human
characters performing daily activities, with ground-truth 3D human meshes readily available. We split
the dataset into train and test video sequences. Human3.6M (H3.6M) is the largest real video dataset
with annotated 3D human skeletons. It contains videos of actors performing activities and provides
annotations of body joint locations in 2D and 3D at every frame, recorded with a Vicon system. It
does not, however, provide dense 3D ground truth.
Our model is first trained using supervised skeleton and surface parameters on the training set of the
Surreal dataset. Then, it is self-supervised using differentiable rendering and re-projection error minimization on two test sets, one in the Surreal dataset and one in H3.6M. For self-supervision, we use
ground-truth 2D keypoints and segmentations in both datasets. The segmentation
masks in Surreal are very accurate, while those in H3.6M are obtained using background subtraction and can be
quite inaccurate, as shown in Figure 4; our model refines such initially inaccurate segmentation
masks. The 2D optical flows for dense motion matching are obtained using FlowNet2.0 [21] in both
datasets. We do not use any 3D ground-truth supervision in H3.6M, as our goal is to demonstrate
successful domain transfer of our model from Surreal to H3.6M. We measure the quality of
the predicted 3D skeletons in both datasets, and the quality of the predicted dense 3D
meshes in Surreal only, since dense ground truth is available only there.
Evaluation metrics Given predicted 3D body joint locations X_kpt^k, k = 1 ... K of K = 32 keypoints and corresponding ground-truth 3D joint locations X̂_kpt^k, k = 1 ... K, we define the per-joint error of each example as (1/K) Σ_{k=1}^K ||X_kpt^k − X̂_kpt^k||_2, similar to previous works [41]. We also define the reconstruction error of each example as the 3D per-joint error up to a 3D translation T (3D rotation should still be predicted correctly): min_T (1/K) Σ_{k=1}^K ||(X_kpt^k + T) − X̂_kpt^k||_2. We define the surface error of each example to be the per-joint error when considering all n vertices of the 3D mesh: (1/n) Σ_{i=1}^n ||X^i − X̂^i||_2.
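The three metrics can be sketched as follows (NumPy, illustrative only). Note that aligning centroids is the exact minimizer over T only for squared distances, so we use it here as a standard approximation to the translation-optimal reconstruction error:

```python
import numpy as np

def per_joint_error(X, X_gt):
    """Mean Euclidean distance over K joints; X, X_gt: (K, 3)."""
    return np.mean(np.linalg.norm(X - X_gt, axis=1))

def reconstruction_error(X, X_gt):
    """Per-joint error up to a 3D translation T. Centroid alignment is exact
    for squared distances and used here as a common approximation."""
    T = np.mean(X_gt - X, axis=0)
    return per_joint_error(X + T, X_gt)

def surface_error(V, V_gt):
    """Per-vertex error over all n mesh vertices; V, V_gt: (n, 3)."""
    return np.mean(np.linalg.norm(V - V_gt, axis=1))
```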
We compare our learning-based model against two baselines: (1) Pretrained, a model that uses
only supervised training from synthetic data, without self-supervised adaptation; this baseline is
similar to the recent work of [12]. (2) Direct optimization, a model that uses our differentiable
self-supervised losses but, instead of optimizing neural network weights, optimizes directly over the body
mesh parameters (θ, β), rotation (R), translation (T), and focal length f. We use standard gradient
descent as our optimization method. We experiment with varying amounts of supervision during
initialization of the optimization baseline: random initialization, using ground-truth 3D translation,
using ground-truth rotation, and using ground-truth theta angles (to estimate the surface parameters).
Tables 1 and 2 show the results of our model and the baselines for the different evaluation metrics. The
learning-based self-supervised model outperforms both the pretrained model, which does not exploit
adaptation through differentiable rendering and consistency checks, and the direct optimization
baselines, which are sensitive to initialization mistakes.
Ablation In Figure 3 we show the 3D keypoint reconstruction error after self-supervised finetuning
using different combinations of self-supervised losses. A model self-supervised by the keypoint
re-projection error (L_kpt) alone does worse than a model using both the keypoint and segmentation re-projection errors (L_kpt + L_seg). A model trained using all three proposed losses (keypoint, segmentation
and dense motion re-projection error, L_kpt + L_seg + L_motion) outperforms the above two. This shows
the complementarity and importance of all the proposed losses.
Table 1: 3D mesh prediction results on Surreal [35]. The proposed model (pretrained + self-supervised) outperforms both optimization-based alternatives and pretrained models using supervised regression that do not adapt to the test data. A superscript * denotes ground-truth information provided at initialization of our optimization-based baseline.

Method                  | surface error (mm) | per-joint error (mm) | recon. error (mm)
Optimization            | 346.5              | 532.8                | 1320.1
Optimization + R*       | 301.1              | 222.0                | 294.9
Optimization + R* + T*  | 272.8              | 206.6                | 205.5
Pretrained              | 119.4              | 101.6                | 351.3
Pretrained + Self-Sup   | 74.5               | 64.4                 | 203.9
Table 2: 3D skeleton prediction results on H3.6M [22]. The proposed model (pretrained + self-supervised) outperforms both an optimization-based baseline and a pretrained model. Self-supervised learning through differentiable rendering allows our model to adapt effectively across domains (Surreal to H3.6M), while the fixed pretrained baseline cannot. Dense 3D surface ground truth is not available in H3.6M, so surface error cannot be measured.

Method                  | per-joint error (mm) | recon. error (mm)
Optimization            | 562.4                | 883.1
Pretrained              | 125.6                | 303.5
Pretrained + Self-Sup   | 98.4                 | 145.8
Figure 3: 3D reconstruction error during purely unsupervised finetuning under different self-supervised losses (L_k = L_kpt: keypoint re-projection error; L_s = L_seg: segmentation re-projection error; L_M = L_motion: dense motion re-projection error). All losses contribute to 3D error reduction. [Plot: error curves for L_k, L_k + L_s, and L_k + L_s + L_M over training iterations.]
Discussion We have shown that a combination of supervised pretraining and unsupervised adaptation is beneficial for accurate 3D mesh prediction. Learning-based self-supervision combines the best
of both worlds of supervised learning and test-time optimization: supervised learning initializes the
parameters in the right regime, ensuring good pose initialization at test time without manual
effort, while self-supervision through differentiable rendering allows adaptation of the model to the test data,
and thus a much tighter fit than a pretrained model with "frozen" weights at test time. Note that
overfitting in this sense is desirable: we want our predicted 3D mesh to fit the test set as tightly as possible
and to improve tracking accuracy with minimal human intervention.
Implementation details Our model architecture consists of 5 convolution blocks. Each block
contains two convolutional layers with filter sizes 5 × 5 (stride 2) and 3 × 3 (stride 1), followed by
batch normalization and leaky ReLU activation. The first block contains 64 channels, and we double
the size after each block. On top of these blocks, we add 3 fully connected layers and shrink the size of
the final layer to match our desired outputs. The input image to our model is 128 × 128. The model is
trained with a gradient descent optimizer with learning rate 0.0001 and is implemented in Tensorflow
v1.1.0 [1].

Figure 4: Qualitative results of 3D mesh prediction (columns: input 1, input 2, predicted mesh, predicted segmentation, 2D projection ground truth, predicted mask, predicted flow). The top four rows show predictions on Surreal and the bottom four on H3.6M. Our model handles bad segmentation input masks in H3.6M thanks to supervision from multiple rendering-based losses. A byproduct of our 3D mesh model is improved 2D person segmentation (column 6).
Chamfer distance: We obtain the Chamfer distance map C^I for an input image frame I using the distance
transform, seeded with the image figure-ground segmentation mask S^I. This assigns to every pixel in
C^I the minimum distance to a pixel on the mask foreground. Next, we describe the differentiable
computation of C^M used in our method. Let P = {x_2d} denote the set of pixel coordinates of the
mesh's visible projected points. For each pixel location p, we compute the minimum distance between
that pixel location and any pixel coordinate in P and obtain a distance map D ∈ R^{w×h}. Next, we
threshold the distance map D to get the Chamfer distance map C^M and segmentation mask S^M,
where, for each pixel position p:

C^M(p) = max(0.5, D(p)),    (6)
S^M(p) = min(0.5, D(p)) + 1(D(p) < 0.5) · 0.5,    (7)

where 1(·) is an indicator function.
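A brute-force NumPy sketch of this computation (illustrative; the actual implementation is a TensorFlow graph op):

```python
import numpy as np

def model_chamfer_and_mask(x2d, H, W):
    """x2d: (n, 2) visible projected vertex pixel coords. Returns C^M and S^M
    as in Eqs. 6-7, via a brute-force distance map D."""
    ys, xs = np.mgrid[0:H, 0:W]                        # pixel-centered grid
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 2)  # (H*W, 2)
    d = np.linalg.norm(grid[:, None, :] - x2d[None, :, :], axis=-1)  # (H*W, n)
    D = d.min(axis=1).reshape(H, W)                    # distance to nearest projected point
    C_M = np.maximum(0.5, D)                           # Eq. 6
    S_M = np.minimum(0.5, D) + (D < 0.5) * 0.5         # Eq. 7
    return C_M, S_M
```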
Ray casting: We implemented a standard ray casting algorithm in TensorFlow to accelerate its
computation. Let r = (x, d) denote a cast ray, where x is the point from which the ray is cast and d
is a normalized vector for the shooting direction. In our case, all rays are cast from the center of the
camera; for ease of explanation, we set x at (0, 0, 0). A facet f = (v_0, v_1, v_2), where v_i denotes the ith vertex of the facet, is determined to be "hit" if
it satisfies the following three conditions: (1) the facet is not parallel to the cast ray, (2) the facet is
not behind the ray, and (3) the ray passes through the triangle region formed by the three edges of the
facet. The first condition is
satisfied if the magnitude of the inner product between the ray direction d and the surface normal
of the facet f is larger than some threshold ε; here we set ε = 1e−8. The second condition is
satisfied if the inner product between the ray direction d and the surface normal N, which is
defined as the normalized cross product between v_1 − v_0 and v_2 − v_0, has the same sign as the inner
product between v_0 and N. Finally, the last condition can be split into three sub-problems: given one
of the edges of the facet, decide whether the cast ray falls on the same side as the facet or not. First, we find the
intersection point p of the cast ray and the 2D plane spanned by the facet via:

p = x + d · (⟨N, v_0⟩ / ⟨N, d⟩),    (8)

where ⟨·, ·⟩ denotes the inner product. Given an edge formed by vertices v_i and v_j, the cast ray is
determined to fall on the same side of the facet if the cross product between the edge v_i − v_j and the vector
p − v_j has the same sign as the surface normal vector N. We examine this condition on all
three edges. If all the above conditions are satisfied, the facet is determined to be hit by the cast ray.
Among the hit facets, we choose the one with the minimum distance to the origin as the visible facet
seen from the direction of the cast ray.
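A minimal NumPy sketch of the per-facet hit test described above (illustrative; the paper's version runs batched in TensorFlow, and we use the standard counter-clockwise inside test for condition (3)):

```python
import numpy as np

def facet_hit(d, v0, v1, v2, eps=1e-8):
    """Test whether a ray from the origin with direction d hits facet (v0, v1, v2).
    Returns (hit, distance_along_ray)."""
    N = np.cross(v1 - v0, v2 - v0)
    N = N / np.linalg.norm(N)
    nd = np.dot(N, d)
    if abs(nd) < eps:                       # (1) facet parallel to the ray
        return False, np.inf
    if np.dot(N, v0) * nd <= 0:             # (2) facet behind the ray (t <= 0)
        return False, np.inf
    p = d * (np.dot(N, v0) / nd)            # intersection with the facet plane (Eq. 8, x = 0)
    for a, b in [(v0, v1), (v1, v2), (v2, v0)]:  # (3) same-side test on all three edges
        if np.dot(np.cross(b - a, p - a), N) < 0:
            return False, np.inf
    return True, np.linalg.norm(p)
```

Among all facets for which `facet_hit` returns True, the one with the smallest returned distance is the visible facet for that ray.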
5 Conclusion
We have presented a learning-based model for dense human 3D body tracking, supervised by synthetic
data and self-supervised by differentiable rendering of mesh motion, keypoints and segmentation,
matched to their 2D equivalent quantities. We show that our model improves by using unlabelled
video data, which is very valuable for motion capture, where dense 3D ground truth is hard to annotate.
A clear direction for future work is iterative additive feedback [10] on the mesh parameters, to
achieve higher 3D reconstruction accuracy and to allow learning a residual free-form deformation
on top of the parametric SMPL model, again in a self-supervised manner. Extending our model
beyond the human 3D shape would allow neural agents to learn 3D from experience as humans do,
supervised solely by video motion.
References
[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. Corrado, A. Davis, J. Dean, M. Devin,
S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur,
J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner,
I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: Large-scale machine learning on heterogeneous
distributed systems, 2015.
[2] T. Alldieck, M. Kassubeck, and M. A. Magnor. Optical flow-based 3d human motion estimation from
monocular video. CoRR, abs/1703.00177, 2017.
[3] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2d human pose estimation: New benchmark and
state of the art analysis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June
2014.
[4] A. Balan, L. Sigal, M. J. Black, J. Davis, and H. Haussecker. Detailed human shape and pose from images.
In IEEE Conf. on Computer Vision and Pattern Recognition, CVPR, pages 1–8, Minneapolis, June 2007.
[5] L. Ballan and G. M. Cortelazzo. Marker-less motion capture of skinned models in a four camera set-up
using optical flow and silhouettes. In 3DPVT, 2008.
[6] L. Bo and C. Sminchisescu. Twin gaussian processes for structured prediction. International Journal of
Computer Vision, 87(1):28–52, 2010.
[7] F. Bogo, A. Kanazawa, C. Lassner, P. V. Gehler, J. Romero, and M. J. Black. Keep it SMPL: automatic
estimation of 3d human pose and shape from a single image. ECCV, 2016, 2016.
[8] T. Brox, B. Rosenhahn, D. Cremers, and H.-P. Seidel. High accuracy optical flow serves 3-d pose tracking:
exploiting contour and flow based constraints. In ECCV, 2006.
[9] Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh. Realtime multi-person 2d pose estimation using part affinity
fields. In CVPR, 2017.
[10] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback.
In arXiv preprint arXiv:1507.06550, 2015.
[11] C. Chen and D. Ramanan. 3d human pose estimation = 2d pose estimation + matching. CoRR,
abs/1612.06524, 2016.
[12] W. Chen, H. Wang, Y. Li, H. Su, C. Tu, D. Lischinski, D. Cohen-Or, and B. Chen. Synthesizing training
images for boosting human 3d pose estimation. CoRR, abs/1604.02703, 2016.
[13] K. Choo and D. J. Fleet. People tracking using hybrid monte carlo filtering. In Computer Vision, 2001.
ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 321–328, 2001.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image
Database. In CVPR09, 2009.
[15] A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazirbas, V. Golkov, P. Smagt, D. Cremers, and T. Brox.
Flownet: Learning optical flow with convolutional networks. In ICCV, 2015.
[16] D. Fleet, A. Jepson, and T. El-Maraghi. Robust on-line appearance models for vision tracking. In Proc.
IEEE Conf. Computer Vision and Pattern Recognition, 2001.
[17] J. Gall, C. Stoll, E. De Aguiar, C. Theobalt, B. Rosenhahn, and H.-P. Seidel. Motion capture using joint
skeleton tracking and surface estimation. In Computer Vision and Pattern Recognition, 2009. CVPR 2009.
IEEE Conference on, pages 1746–1753. IEEE, 2009.
[18] R. Garg, B. V. Kumar, G. Carneiro, and I. Reid. Unsupervised cnn for single view depth estimation:
Geometry to the rescue. In European Conference on Computer Vision, pages 740–756. Springer, 2016.
[19] A. Handa, M. Bloesch, V. Pătrăucean, S. Stent, J. McCormac, and A. Davison. gvnn: Neural network
library for geometric computer vision. In Computer Vision – ECCV 2016 Workshops, pages 67–82. Springer
International Publishing, 2016.
[20] K. He, G. Gkioxari, P. Dollár, and R. B. Girshick. Mask R-CNN. CoRR, abs/1703.06870, 2017.
[21] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical
flow estimation with deep networks. CoRR, abs/1612.01925, 2016.
[22] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6m: Large scale datasets and predictive
methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 36(7):1325–1339, July 2014.
[23] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS,
2015.
[24] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollár,
and C. L. Zitnick. Microsoft COCO: common objects in context. CoRR, abs/1405.0312, 2014.
[25] M. Loper, N. Mahmood, J. Romero, G. Pons-Moll, and M. J. Black. SMPL: A skinned multi-person linear
model. ACM Trans. Graphics (Proc. SIGGRAPH Asia), 34(6):248:1–248:16, Oct. 2015.
[26] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory.
CoRR, abs/1511.06309, 2015.
[27] G. Pavlakos, X. Zhou, K. G. Derpanis, and K. Daniilidis. Coarse-to-fine volumetric prediction for
single-image 3d human pose. CoRR, abs/1611.07828, 2016.
[28] V. Ramakrishna, T. Kanade, and Y. Sheikh. Reconstructing 3d Human Pose from 2d Image Landmarks.
Computer Vision – ECCV 2012, pages 573–586, 2012.
[29] G. Rogez and C. Schmid. Mocap-guided data augmentation for 3d pose estimation in the wild. In NIPS,
2016.
[30] V. S., R. S., S. C., S. R., and F. K. Sfm-net: Learning of structure and motion from video. In arxiv, 2017.
10
[31] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization
method. Int. J. Comput. Vision, 9(2):137–154, Nov. 1992.
[32] D. Tomè, C. Russell, and L. Agapito. Lifting from the deep: Convolutional 3d pose estimation from a
single image. CoRR, abs/1701.00295, 2017.
[33] H. F. Tung, A. Harley, W. Seto, and K. Fragkiadaki. Adversarial inverse graphics networks: Learning
2d-to-3d lifting and image-to-image translation from unpaired supervision. ICCV, 2017.
[34] R. Urtasun, D. Fleet, and P. Fua. Gaussian process dynamical models for 3d people tracking. In Proc. of
the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
[35] G. Varol, J. Romero, X. Martin, N. Mahmood, M. J. Black, I. Laptev, and C. Schmid. Learning from
Synthetic Humans. In CVPR, 2017.
[36] S. Vicente, J. Carreira, L. Agapito, and J. Batista. Reconstructing PASCAL VOC. In 2014 IEEE Conference
on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pages
41–48, 2014.
[37] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[38] J. Wu, T. Xue, J. J. Lim, Y. Tian, J. B. Tenenbaum, A. Torralba, and W. T. Freeman. Single image 3D
interpreter network. In ECCV, 2016.
[39] X. Yan, J. Yang, E. Yumer, Y. Guo, and H. Lee. Perspective transformer nets: Learning single-view 3d
object reconstruction without 3d supervision. In Advances in Neural Information Processing Systems,
pages 1696–1704, 2016.
[40] T. Zhou, M. Brown, N. Snavely, and D. G. Lowe. Unsupervised learning of depth and ego-motion from
video. In arxiv, 2017.
[41] X. Zhou, M. Zhu, G. Pavlakos, S. Leonardos, K. G. Derpanis, and K. Daniilidis. Monocap: Monocular
human motion capture using a CNN coupled with a geometric prior. CoRR, abs/1701.02354, 2017.
Triangle Generative Adversarial Networks
Zhe Gan*, Liqun Chen*, Weiyao Wang, Yunchen Pu, Yizhe Zhang,
Hao Liu, Chunyuan Li, Lawrence Carin
Duke University
[email protected]
Abstract
A Triangle Generative Adversarial Network (Δ-GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain, and supervision of domain correspondence
is provided by only a few paired samples. Δ-GAN consists of four neural networks,
learn the two-way conditional distributions between the two domains, while the
discriminators implicitly define a ternary discriminative function, which is trained
to distinguish real data pairs and two kinds of fake data pairs. The generators
and discriminators are trained together using adversarial learning. Under mild
assumptions, in theory the joint distributions characterized by the two generators
concentrate to the data distribution. In experiments, three different kinds of domain pairs are considered, image-label, image-image and image-attribute pairs.
Experiments on semi-supervised image classification, image-to-image translation
and attribute-based image generation demonstrate the superiority of the proposed
approach.
1 Introduction
Generative adversarial networks (GANs) [1] have emerged as a powerful framework for learning
generative models of arbitrarily complex data distributions. When trained on datasets of natural
images, significant progress has been made on generating realistic and sharp-looking images [2, 3].
The original GAN formulation was designed to learn the data distribution in one domain. In practice,
one may also be interested in matching two joint distributions. This is an important task, since
mapping data samples from one domain to another has a wide range of applications. For instance,
matching the joint distribution of image-text pairs allows simultaneous image captioning and textconditional image generation [4], while image-to-image translation [5] is another challenging problem
that requires matching the joint distribution of image-image pairs.
In this work, we are interested in designing a GAN framework to match joint distributions. If paired
data are available, a simple approach to achieve this is to train a conditional GAN model [4, 6],
from which a joint distribution is readily manifested and can be matched to the empirical joint
distribution provided by the paired data. However, fully supervised data are often difficult to acquire.
Several methods have been proposed to achieve unsupervised joint distribution matching without
any paired data, including DiscoGAN [7], CycleGAN [8] and DualGAN [9]. Adversarially Learned
Inference (ALI) [10] and Bidirectional GAN (BiGAN) [11] can be readily adapted to this case as
well. Though empirically achieving great success, in principle, there exist infinitely many possible
mapping functions that satisfy the requirement to map a sample from one domain to another. In
order to alleviate this nonidentifiability issue, paired data are needed to provide proper supervision to
inform the model the kind of joint distributions that are desired.
This motivates the proposed Triangle Generative Adversarial Network (Δ-GAN), a GAN framework that allows semi-supervised joint distribution matching, where the supervision of domain
* Equal contribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Illustration of the Triangle Generative Adversarial Network (Δ-GAN).
correspondence is provided by a few paired samples. Δ-GAN consists of two generators and two
discriminators. The generators are designed to learn the bidirectional mappings between domains,
while the discriminators are trained to distinguish real data pairs and two kinds of fake data pairs.
Both the generators and discriminators are trained together via adversarial learning.
Δ-GAN bears close resemblance to Triple GAN [12], a recently proposed method that can also be
utilized for semi-supervised joint distribution mapping. However, there exist several key differences
that make our work unique. First, Δ-GAN uses two discriminators in total, which implicitly define
a ternary discriminative function, instead of the binary discriminator used in Triple GAN. Second,
Δ-GAN can be considered as a combination of conditional GAN and ALI, while Triple GAN
consists of two conditional GANs. Third, the distributions characterized by the two generators in
both Δ-GAN and Triple GAN concentrate to the data distribution in theory. However, when the
discriminator is optimal, the objective of Δ-GAN becomes the Jensen-Shannon divergence (JSD)
among three distributions, which is symmetric; the objective of Triple GAN consists of a JSD term
plus a Kullback-Leibler (KL) divergence term. The asymmetry of the KL term makes Triple GAN
more prone to generating fake-looking samples [13]. Lastly, the calculation of the additional KL
term in Triple GAN is equivalent to calculating a supervised loss, which requires the explicit density
form of the conditional distributions, which may not be desirable. On the other hand, Δ-GAN is
a fully adversarial approach that does not require that the conditional densities can be computed;
Δ-GAN only requires that the conditional densities can be sampled from in a way that allows gradient
backpropagation.
Δ-GAN is a general framework, and can be used to match any joint distributions. In experiments,
in order to demonstrate the versatility of the proposed model, we consider three domain pairs:
image-label, image-image and image-attribute pairs, and use them for semi-supervised classification,
image-to-image translation and attribute-based image editing, respectively. In order to demonstrate
the scalability of the model to large and complex datasets, we also present attribute-conditional image
generation on the COCO dataset [14].
2 Model
2.1 Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) [1] consist of a generator G and a discriminator D that
compete in a two-player minimax game, where the generator is learned to map samples from an
arbitray latent distribution to data, while the discriminator tries to distinguish between real and
generated samples. The goal of the generator is to ?fool? the discriminator by producing samples that
are as close to real data as possible. Specifically, D and G are learned as
min_G max_D V(D, G) = E_{x∼p(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))],    (1)
where p(x) is the true data distribution, and p_z(z) is usually defined to be a simple distribution, such
as the standard normal distribution. The generator G implicitly defines a probability distribution
p_g(x) as the distribution of the samples G(z) obtained when z ∼ p_z(z). For any fixed generator
G, the optimal discriminator is D(x) = p(x) / (p_g(x) + p(x)). When the discriminator is optimal, solving this
adversarial game is equivalent to minimizing the Jensen-Shannon divergence (JSD) between p(x)
and p_g(x) [1]. The global equilibrium is achieved if and only if p(x) = p_g(x).
and pg (x) [1]. The global equilibrium is achieved if and only if p(x) = pg (x).
2.2
Triangle Generative Adversarial Networks (?-GANs)
We now extend GAN to ?-GAN for joint distribution matching. We first consider ?-GAN in the
supervised setting, and then discuss semi-supervised learning in Section 2.4. Consider two related
domains, with x and y being the data samples for each domain. We have fully-paired data samples
that are characterized by the joint distribution p(x, y), which also implies that samples from both the
marginal p(x) and p(y) can be easily obtained.
Δ-GAN consists of two generators: (i) a generator G_x(y) that defines the conditional distribution
p_x(x|y), and (ii) a generator G_y(x) that characterizes the conditional distribution in the other
direction, p_y(y|x). G_x(y) and G_y(x) may also implicitly take a random latent variable z as input,
i.e., G_x(y, z) and G_y(x, z). In the Δ-GAN game, after a sample x is drawn from p(x), the generator
G_y produces a pseudo sample ŷ following the conditional distribution p_y(y|x). Hence, the fake data
pair (x, ŷ) is a sample from the joint distribution p_y(x, y) = p_y(y|x)p(x). Similarly, a fake data
pair (x̂, y) can be sampled from the generator G_x by first drawing y from p(y) and then drawing
x̂ from p_x(x|y); hence (x̂, y) is sampled from the joint distribution p_x(x, y) = p_x(x|y)p(y). As
such, the generative process between p_x(x, y) and p_y(x, y) is reversed.
The objective of Δ-GAN is to match the three joint distributions: p(x, y), p_x(x, y) and p_y(x, y). If
this is achieved, we are ensured that we have learned a bidirectional mapping p_x(x|y) and p_y(y|x)
that guarantees that the generated fake data pairs (x̂, y) and (x, ŷ) are indistinguishable from the true
data pairs (x, y). In order to match the joint distributions, an adversarial game is played. Joint pairs
are drawn from three distributions: p(x, y), p_x(x, y) or p_y(x, y), and two discriminator networks
are learned to discriminate among the three, while the two conditional generator networks are trained
to fool the discriminators.
The value function describing the game is given by

min_{G_x, G_y} max_{D_1, D_2} V(G_x, G_y, D_1, D_2) = E_{(x,y)∼p(x,y)}[log D_1(x, y)]
  + E_{y∼p(y), x̂∼p_x(x|y)}[ log( (1 − D_1(x̂, y)) · D_2(x̂, y) ) ]
  + E_{x∼p(x), ŷ∼p_y(y|x)}[ log( (1 − D_1(x, ŷ)) · (1 − D_2(x, ŷ)) ) ].    (2)
The discriminator D_1 is used to distinguish whether a sample pair is from p(x, y) or not; if the
sample pair is not from p(x, y), another discriminator D_2 is used to distinguish whether this sample
pair is from p_x(x, y) or p_y(x, y). D_1 and D_2 work cooperatively, and the use of both implicitly
defines a ternary discriminative function D that distinguishes sample pairs in three ways. See Figure 1
for an illustration of the adversarial game and Appendix B for an algorithmic description of the
training procedure.
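As a concrete illustration, a minimal NumPy sketch of the value in Eq. 2 is given below, assuming the discriminator outputs are probabilities. In practice one would implement this in a deep learning framework, typically with gradient-friendly non-saturating generator losses; the function names here are our own.

```python
import numpy as np

def safe_log(p, eps=1e-8):
    return np.log(np.clip(p, eps, 1.0))

def delta_gan_value(d1_real, d1_fx, d2_fx, d1_fy, d2_fy):
    """Value V of Eq. 2 on a batch of discriminator outputs (probabilities):
    d1_real = D1(x, y) on real pairs; d1_fx, d2_fx = D1, D2 on (x_hat, y) pairs
    drawn from p_x; d1_fy, d2_fy = D1, D2 on (x, y_hat) pairs drawn from p_y."""
    v = (np.mean(safe_log(d1_real))
         + np.mean(safe_log(1 - d1_fx) + safe_log(d2_fx))
         + np.mean(safe_log(1 - d1_fy) + safe_log(1 - d2_fy)))
    return v  # discriminators ascend on v; generators descend on it
```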
2.3 Theoretical analysis
Δ-GAN shares many of the theoretical properties of GANs [1]. We first consider the optimal
discriminators D_1 and D_2 for any given generators G_x and G_y. These optimal discriminators then
allow reformulation of objective (2), which reduces to the Jensen-Shannon divergence among the
joint distributions p(x, y), p_x(x, y) and p_y(x, y).
Proposition 1. For any fixed generators G_x and G_y, the optimal discriminators D_1 and D_2 of the
game defined by V(G_x, G_y, D_1, D_2) are

D_1^*(x, y) = p(x, y) / (p(x, y) + p_x(x, y) + p_y(x, y)),   D_2^*(x, y) = p_x(x, y) / (p_x(x, y) + p_y(x, y)).

Proof. The proof is a straightforward extension of the proof in [1]. See Appendix A for details.
Proposition 2. The equilibrium of V(G_x, G_y, D_1, D_2) is achieved if and only if p(x, y) =
p_x(x, y) = p_y(x, y), with D_1^*(x, y) = 1/3 and D_2^*(x, y) = 1/2, and the optimum value is −3 log 3.

Proof. Given the optimal D_1^*(x, y) and D_2^*(x, y), the minimax game can be reformulated as:

C(G_x, G_y) = max_{D_1, D_2} V(G_x, G_y, D_1, D_2)    (3)
            = −3 log 3 + 3 · JSD( p(x, y), p_x(x, y), p_y(x, y) ) ≥ −3 log 3,    (4)

where JSD denotes the Jensen-Shannon divergence among three distributions. See Appendix
A for details.
Since p(x, y) = p_x(x, y) = p_y(x, y) can be achieved in theory, it can be readily seen that the
learned conditional generators recover the true conditional distributions underlying the data, i.e.,
p_x(x|y) = p(x|y) and p_y(y|x) = p(y|x).
2.4 Semi-supervised learning
In order to further understand Δ-GAN, we write (2) as

V = E_{p(x,y)}[log D_1(x, y)] + E_{p_x(x̂,y)}[log(1 − D_1(x̂, y))] + E_{p_y(x,ŷ)}[log(1 − D_1(x, ŷ))]    (5)
    (the conditional GAN part)
  + E_{p_x(x̂,y)}[log D_2(x̂, y)] + E_{p_y(x,ŷ)}[log(1 − D_2(x, ŷ))].    (6)
    (the BiGAN/ALI part)
The objective of Δ-GAN is a combination of the objectives of conditional GAN and BiGAN. The
BiGAN part matches two joint distributions, p_x(x, y) and p_y(x, y), while the conditional GAN part
provides the supervision signal that tells the BiGAN part which joint distribution to match. Therefore,
Δ-GAN provides a natural way to perform semi-supervised learning, since the conditional GAN part
and the BiGAN part can account for paired and unpaired data, respectively.
However, when doing semi-supervised learning, there is also one potential problem that we need
to be cautious about. The theoretical analysis in Section 2.3 is based on the assumption that the
dataset is fully supervised, i.e., we have the ground-truth joint distribution p(x, y) and marginal
distributions p(x) and p(y). In the semi-supervised setting, p(x) and p(y) are still available but
p(x, y) is not. We can only obtain the joint distribution p_l(x, y) characterized by the few paired data
samples. Hence, in the semi-supervised setting, p_x(x, y) and p_y(x, y) will try to concentrate on the
empirical distribution p_l(x, y). We make the assumption that p_l(x, y) ≈ p(x, y), i.e., that the paired
data roughly characterize the whole dataset. For example, in the semi-supervised classification
problem, one usually strives to make sure that labels are equally distributed among the labeled dataset.
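A sketch of how the two parts divide the data in one semi-supervised update is shown below; the `model` object and its attributes `Gx`, `Gy`, `D1`, `D2` are hypothetical placeholders, and `delta_gan_value` refers to the sketch above.

```python
def semi_supervised_step(paired_batch, unlabeled_x, unlabeled_y, model):
    """One Δ-GAN update in the semi-supervised setting (sketch): the conditional-GAN
    term uses the few paired samples, the BiGAN/ALI terms use unpaired samples."""
    x_p, y_p = paired_batch                # (x, y) ~ p_l(x, y), the paired subset
    x_hat = model.Gx(unlabeled_y)          # (x_hat, y) ~ p_x(x, y) from unpaired y
    y_hat = model.Gy(unlabeled_x)          # (x, y_hat) ~ p_y(x, y) from unpaired x
    v = delta_gan_value(
        model.D1(x_p, y_p),
        model.D1(x_hat, unlabeled_y), model.D2(x_hat, unlabeled_y),
        model.D1(unlabeled_x, y_hat), model.D2(unlabeled_x, y_hat))
    return -v  # discriminator loss; the generators minimize v itself
```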
2.5 Relation to Triple GAN
Δ-GAN is closely related to Triple GAN [12]. Below we review Triple GAN and then discuss the
main differences. The value function of Triple GAN is defined as follows:
V = E_{p(x,y)}[log D(x, y)] + (1 − α) E_{p_x(x̂,y)}[log(1 − D(x̂, y))] + α E_{p_y(x,ŷ)}[log(1 − D(x, ŷ))] + E_{p(x,y)}[−log p_y(y|x)],    (7)

where α ∈ (0, 1) is a constant that controls the relative importance of the two generators. Let Triple
GAN-s denote a simplified Triple GAN model with only the first three terms. As can be seen, Triple
GAN-s can be considered as a combination of two conditional GANs, with the importance of each
conditional GAN weighted by α. It can be proven that Triple GAN-s achieves equilibrium if and
only if p(x, y) = (1 − α)p_x(x, y) + α p_y(x, y), which is not desirable. To address this problem, in
Triple GAN a standard supervised loss R_L = E_{p(x,y)}[−log p_y(y|x)] is added. As a result, when the
discriminator is optimal, the cost function in Triple GAN becomes:

2 · JSD( p(x, y) || (1 − α)p_x(x, y) + α p_y(x, y) ) + KL( p(x, y) || p_y(x, y) ) + const.    (8)
This cost function has the good property that it has a unique minimum at p(x, y) = p_x(x, y) =
p_y(x, y). However, the objective becomes asymmetrical: the second KL term pays low cost
for generating fake-looking samples [13]. By contrast, Δ-GAN directly optimizes the symmetric Jensen-Shannon divergence among three distributions. More importantly, the calculation of
E_{p(x,y)}[−log p_y(y|x)] in Triple GAN also implies that the explicit density form of p_y(y|x) must
be provided, which may not be desirable. On the other hand, Δ-GAN only requires that p_y(y|x) can
be sampled from. For example, if we assume p_y(y|x) = ∫ δ(y − G_y(x, z)) p(z) dz, where δ(·) is the
Dirac delta function, we can sample y by sampling z; however, the density function of p_y(y|x)
is not explicitly available.
2.6 Applications
Δ-GAN is a general framework that can be used for any joint distribution matching. Besides
the semi-supervised image classification task considered in [12], we also conduct experiments on
image-to-image translation and attribute-conditional image generation. When modeling image pairs,
both p_x(x|y) and p_y(y|x) are implemented without introducing additional latent variables, i.e.,
p_x(x|y) = δ(x − G_x(y)), p_y(y|x) = δ(y − G_y(x)).
A different strategy is adopted when modeling the image-label/attribute pairs. Specifically, let x
denote samples in the image domain and y denote samples in the label/attribute domain; y is a one-hot
vector or a binary vector when representing labels and attributes, respectively. When modeling
p_x(x|y), we assume that x is transformed by the latent style variables z given the label or attribute
vector y, i.e., p_x(x|y) = ∫ δ(x − G_x(y, z)) p(z) dz, where p(z) is chosen to be a simple distribution
(e.g., uniform or standard normal). When learning p_y(y|x), p_y(y|x) is assumed to be a standard
multi-class or multi-label classifier without latent variables z. In order to allow the training signal
to backpropagate from D_1 and D_2 to G_y, we adopt the REINFORCE algorithm as in [12], and use the
label with the maximum probability to approximate the expectation over y, or use the output of the
sigmoid function as the predicted attribute vector.
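The approximation used for the discrete label domain can be sketched as follows (illustrative NumPy only; this shows the argmax and sigmoid approximations described above, not the full REINFORCE update):

```python
import numpy as np

def label_for_discriminator(logits):
    """Approximate the expectation over discrete y with the max-probability label."""
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = probs / probs.sum(axis=-1, keepdims=True)
    return np.eye(logits.shape[-1])[probs.argmax(axis=-1)]  # one-hot argmax label

def attributes_for_discriminator(logits):
    """For multi-label attributes, feed the sigmoid outputs directly to D1 and D2."""
    return 1.0 / (1.0 + np.exp(-logits))
```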
3 Related work
The proposed framework focuses on designing GAN for joint-distribution matching. Conditional
GAN can be used for this task if supervised data is available. Various conditional GANs have been
proposed to condition the image generation on class labels [6], attributes [15], texts [4, 16] and
images [5, 17]. Unsupervised learning methods have also been developed for this task. BiGAN [11]
and ALI [10] proposed a method to jointly learn a generation network and an inference network
via adversarial learning. Though originally designed for learning the two-way transition between
the stochastic latent variables and real data samples, BiGAN and ALI can be directly adapted to
learn the joint distribution of two real domains. Another method is called DiscoGAN [7], in which
two generators are used to model the bidirectional mapping between domains, and another two
discriminators are used to decide whether a generated sample is fake or not in each individual
domain. Further, additional reconstruction losses are introduced to make the two generators strongly
coupled and also to alleviate the problem of mode collapsing. Similar work includes CycleGAN [8],
DualGAN [9] and DTN [18]. Additional weight-sharing constraints are introduced in CoGAN [19]
and UNIT [20].
Our work differs from the above work in that we aim at semi-supervised joint distribution matching.
The only work that we are aware of that also achieves this goal is Triple GAN. However, our model is
distinct from Triple GAN in important ways (see Section 2.5). Further, Triple GAN only focuses on
image classification, while Δ-GAN has been shown to be applicable to a wide range of applications.
Various methods and model architectures have been proposed to improve and stabilize the training
of GAN, such as feature matching [21, 22, 23], Wasserstein GAN [24], energy-based GAN [25],
and unrolled GAN [26] among many other related works. Our work is orthogonal to these methods,
which could also be used to improve the training of Δ-GAN. Instead of using adversarial loss, there
also exists work that uses supervised learning [27] for joint-distribution matching, and variational
autoencoders for semi-supervised learning [28, 29]. Lastly, our work is also closely related to the
recent work of [30, 31, 32], which treats one of the domains as latent variables.
4 Experiments
We present results on three tasks: (i) semi-supervised classification on CIFAR10 [33]; (ii) image-to-image translation on MNIST [34] and the edges2shoes dataset [5]; and (iii) attribute-to-image
generation on CelebA [35] and COCO [14]. We also conduct a toy data experiment to further
demonstrate the differences between Δ-GAN and Triple GAN. We implement Δ-GAN without
introducing additional regularization unless explicitly stated. All the network architectures are
provided in the Appendix.
Figure 2: Toy data experiment on Δ-GAN and Triple GAN. (a) the joint distribution p(x, y) of real data. For (b) Δ-GAN and (c) Triple GAN, the left and right figures are the learned joint distributions p_x(x, y) and p_y(x, y), respectively.
Table 1: Error rates (%) on the partially labeled CIFAR10 dataset.

Algorithm          | n = 4000
CatGAN [36]        | 19.58 ± 0.58
Improved GAN [21]  | 18.63 ± 2.32
ALI [10]           | 17.99 ± 1.62
Triple GAN [12]    | 16.99 ± 0.36
Δ-GAN (ours)       | 16.80 ± 0.42
Table 2: Classification accuracy (%) on the MNIST-to-MNIST-transpose dataset.

Algorithm   | n = 100      | n = 1000     | All
DiscoGAN    | –            | –            | 15.00 ± 0.20
Triple GAN  | 63.79 ± 0.85 | 84.93 ± 1.63 | 86.70 ± 1.52
Δ-GAN       | 83.20 ± 1.88 | 88.98 ± 1.50 | 93.34 ± 1.46
4.1 Toy data experiment
We first compare our method with Triple GAN on a toy dataset. We synthesize data by drawing
(x, y) ∼ (1/4) N(μ_1, Σ_1) + (1/4) N(μ_2, Σ_2) + (1/4) N(μ_3, Σ_3) + (1/4) N(μ_4, Σ_4), where μ_1 = [0, 1.5]^T, μ_2 = [−1.5, 0]^T, μ_3 = [1.5, 0]^T, μ_4 = [0, −1.5]^T, Σ_1 = Σ_4 = diag(3, 0.025) and Σ_2 = Σ_3 = diag(0.025, 3). We
generate 5000 (x, y) pairs for each mixture component.
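A minimal NumPy sketch for generating this toy dataset (the function name and seed are our own):

```python
import numpy as np

def sample_toy_data(n_per_component=5000, seed=0):
    """Draw (x, y) pairs from the four-component Gaussian mixture described above."""
    rng = np.random.default_rng(seed)
    mus = np.array([[0, 1.5], [-1.5, 0], [1.5, 0], [0, -1.5]])
    covs = [np.diag([3, 0.025]), np.diag([0.025, 3]),
            np.diag([0.025, 3]), np.diag([3, 0.025])]
    samples = [rng.multivariate_normal(mu, cov, size=n_per_component)
               for mu, cov in zip(mus, covs)]
    return np.concatenate(samples, axis=0)  # (20000, 2) array of (x, y) pairs
```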
In order to implement Δ-GAN and Triple GAN-s, we model p_x(x|y) and p_y(y|x) as p_x(x|y) = ∫ δ(x − G_x(y, z)) p(z) dz and p_y(y|x) = ∫ δ(y − G_y(x, z)) p(z) dz, where both G_x and G_y are modeled as 4-hidden-layer multilayer perceptrons
(MLPs) with 500 hidden units in each layer. p(z) is a bivariate standard Gaussian distribution. Triple
GAN can be implemented by specifying both p_x(x|y) and p_y(y|x) to be distributions with explicit
density form, e.g., Gaussian distributions. However, the performance can be bad, since such distributions fail to
capture the multi-modality of p_x(x|y) and p_y(y|x). Hence, only Triple GAN-s is implemented.
Results are shown in Figure 2. The joint distributions p_x(x, y) and p_y(x, y) learned by Δ-GAN
successfully match the true joint distribution p(x, y). Triple GAN-s cannot achieve this, and can only
guarantee that (1/2)(p_x(x, y) + p_y(x, y)) matches p(x, y). Although this experiment is limited due to its
simplicity, the results clearly support the advantage of our proposed model over Triple GAN.
4.2 Semi-supervised classification
We evaluate semi-supervised classification on the CIFAR10 dataset with 4000 labels. The labeled
data is distributed equally across classes, and the results are averaged over 10 runs with different
random splits of the training data. For fair comparison, we follow the publicly available code of
Triple GAN and use the same regularization terms and hyperparameter settings as theirs. Results
are summarized in Table 1. Our Δ-GAN achieves the best performance among all the competing
methods. We also show the ability of Δ-GAN to disentangle classes and styles in Figure 3. Δ-GAN
can generate realistic data in a specific class, and the injected noise vector encodes meaningful style
patterns like background and color.
4.3 Image-to-image translation
We first evaluate image-to-image translation on the edges2shoes dataset. Results are shown in
Figure 4 (bottom). Though DiscoGAN is an unsupervised learning method, it achieves impressive
results. However, with supervision provided by 10% paired data, Δ-GAN generally generates more
accurate edge details of the shoes. In order to provide a quantitative evaluation of translating shoes to
edges, we use the mean squared error (MSE) as our metric. The MSE of DiscoGAN is 140.1; with
10%, 20% and 100% paired data, the MSE of Δ-GAN is 125.3, 113.0 and 66.4, respectively.
To further demonstrate the importance of providing supervision of domain correspondence, we
created a new dataset based on MNIST [34], where the two image domains are the MNIST images
and their corresponding transposed ones. As can be seen in Figure 4 (top), Δ-GAN matches images
Figure 3: Generated CIFAR10 samples, where each row shares the same label and each column uses the same noise.
Figure 4: Image-to-image translation experiments on the MNIST-to-MNIST-transpose and edges2shoes datasets (rows show inputs and outputs for DiscoGAN and Δ-GAN, plus ground-truth outputs for edges2shoes).
Figure 5: Results on the face-to-attribute-to-face experiment. The 1st row shows the input images; the 2nd row shows the predicted attributes given the input images (e.g., "Big Nose, Black Hair, Bushy Eyebrows, Male, Young, Sideburns"); the 3rd row shows the generated images given the predicted attributes.
Table 3: Results of P@10 / nDCG@10 for attribute prediction on CelebA and COCO.

Method      | CelebA 1%   | CelebA 10%  | CelebA 100% | COCO 10%    | COCO 50%    | COCO 100%
Triple GAN  | 40.97/50.74 | 62.13/73.56 | 70.12/79.37 | 32.64/35.91 | 34.00/37.76 | 35.35/39.60
Δ-GAN       | 53.21/58.39 | 63.68/75.22 | 70.37/81.47 | 34.38/37.91 | 36.72/40.39 | 39.05/42.86
between domains well, while DiscoGAN fails at this task. For supporting quantitative evaluation,
we trained a classifier on the MNIST dataset; the classification accuracy of this classifier
on the test set approaches 99.4% and is therefore trustworthy as an evaluation metric. Given an
input MNIST image x, we first generate a transposed image y using the learned generator, then
manually transpose it back to normal digits y^T, and finally send this new image y^T to the classifier.
Results are summarized in Table 2, averaged over 5 runs with different random splits of the
training data. Δ-GAN achieves significantly better performance than Triple GAN and DiscoGAN.
4.4 Attribute-conditional image generation
We apply our method to face images from the CelebA dataset. This dataset consists of 202,599
images annotated with 40 binary attributes. We scale and crop the images to 64 ? 64 pixels. In
order to qualitatively evaluate the learned attribute-conditional image generator and the multi-label
classifier, given an input face image, we first use the classifier to predict attributes, and then use
the image generator to produce images based on the predicted attributes. Figure 5 shows example
results. Both the learned attribute predictor and the image generator provides good results. We further
show another set of image editing experiment in Figure 6. For each subfigure, we use a same set of
attributes with different noise vectors to generate images. For example, for the top-right subfigure,
Figure 6: Results on the image editing experiment. Panels: 1st row + pale skin = 2nd row; 1st row + eyeglasses = 2nd row; 1st row + mouth slightly open = 2nd row; 1st row + wearing hat = 2nd row.
Figure 7: Results on the image-to-attribute-to-image experiment.
For example, for the top-right subfigure, all the images in the 1st row were generated based on the
following attributes: black hair, female, attractive; we then added the attribute "sunglasses" when
generating the images in the 2nd row. It is interesting to see that Δ-GAN has great flexibility to
adjust the generated images by changing certain input attributes. For instance, by switching on the
wearing hat attribute, one can edit the face image to have a hat on the head.
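To make the editing protocol concrete, a sketch of how such a figure row pair can be produced; the 40-dimensional attribute layout, the edited attribute index, and the generator interface are illustrative placeholders, not the paper's actual implementation:

import numpy as np

def edit_rows(generator, base_attrs, edit_index, noise_dim=100, n_samples=8, seed=0):
    # Row 1: samples conditioned on the base attributes with varying noise.
    # Row 2: the same noise vectors, with one extra attribute switched on.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n_samples, noise_dim))
    edited = base_attrs.copy()
    edited[edit_index] = 1.0  # e.g. switch on "wearing hat"
    row1 = [generator(base_attrs, z) for z in noise]
    row2 = [generator(edited, z) for z in noise]
    return row1, row2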
In order to demonstrate the scalability of our model to large and complex datasets, we also present
results on the COCO dataset. Following [37], we first select a set of 1000 attributes from the caption
text in the training set, which includes the most frequent nouns, verbs, or adjectives. The images in
COCO are scaled and cropped to 64 × 64 pixels. Unlike the case of CelebA face images, the
networks need to learn how to handle multiple objects and diverse backgrounds. Results are provided
in Figure 7. We can generate reasonably good images based on the predicted attributes. The input
and generated images also clearly share the same set of attributes. We also observe diversity in the
samples by simply drawing multiple noise vectors and using the same predicted attributes.
Precision (P) and normalized Discounted Cumulative Gain (nDCG) are two popular evaluation
metrics for multi-label classification problems. Table 3 provides the quantitative results of P@10 and
nDCG@10 on CelebA and COCO, where @k means at rank k (see the Appendix for definitions). For
a fair comparison, we use the same network architectures for both Triple GAN and Δ-GAN. Δ-GAN
consistently provides better results than Triple GAN. On the COCO dataset, our semi-supervised
learning approach with 50% labeled data achieves better performance than Triple GAN
using the full dataset, demonstrating the effectiveness of our approach for semi-supervised joint
distribution matching. More results for the above experiments are provided in the Appendix.
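Since the formal definitions are deferred to the Appendix, a sketch of the standard variants of these metrics for a single example, assuming binary relevance and log2 position discounts:

import numpy as np

def precision_at_k(scores, relevant, k=10):
    # Fraction of the top-k predicted attributes that are truly relevant.
    topk = np.argsort(scores)[::-1][:k]
    return float(relevant[topk].sum()) / k

def ndcg_at_k(scores, relevant, k=10):
    # DCG of the predicted ranking divided by the DCG of the ideal ranking.
    topk = np.argsort(scores)[::-1][:k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = float((relevant[topk] * discounts).sum())
    idcg = float((np.sort(relevant)[::-1][:k] * discounts).sum())
    return dcg / idcg if idcg > 0 else 0.0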
5 Conclusion
We have presented the Triangle Generative Adversarial Network (Δ-GAN), a new GAN framework
that can be used for semi-supervised joint distribution matching. Our approach learns the bidirectional
mappings between two domains with a few paired samples. We have demonstrated that Δ-GAN may
be employed for a wide range of applications. One possible future direction is to combine Δ-GAN
with sequence GAN [38] or textGAN [23] to model the joint distribution of image-caption pairs.
Acknowledgements This research was supported in part by ARO, DARPA, DOE, NGA and ONR.
References
[1] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[2] Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015.
[3] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[4] Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. Generative adversarial text to image synthesis. In ICML, 2016.
[5] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
[6] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[7] Taeksoo Kim, Moonsu Cha, Hyunsoo Kim, Jungkwon Lee, and Jiwon Kim. Learning to discover cross-domain relations with generative adversarial networks. In ICML, 2017.
[8] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017.
[9] Zili Yi, Hao Zhang, Ping Tan, and Minglun Gong. Dualgan: Unsupervised dual learning for image-to-image translation. In ICCV, 2017.
[10] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. In ICLR, 2017.
[11] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. In ICLR, 2017.
[12] Chongxuan Li, Kun Xu, Jun Zhu, and Bo Zhang. Triple generative adversarial nets. In NIPS, 2017.
[13] Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial networks. In ICLR, 2017.
[14] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[15] Guim Perarnau, Joost van de Weijer, Bogdan Raducanu, and Jose M. Álvarez. Invertible conditional gans for image editing. arXiv:1611.06355, 2016.
[16] Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dimitris Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In ICCV, 2017.
[17] Christian Ledig, Lucas Theis, Ferenc Huszár, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.
[18] Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross-domain image generation. In ICLR, 2017.
[19] Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
[20] Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In NIPS, 2017.
[21] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training gans. In NIPS, 2016.
[22] Yizhe Zhang, Zhe Gan, and Lawrence Carin. Generating text via adversarial training. In NIPS Workshop on Adversarial Training, 2016.
[23] Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In ICML, 2017.
[24] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein gan. arXiv:1701.07875, 2017.
[25] Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017.
[26] Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial networks. In ICLR, 2017.
[27] Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. Dual supervised learning. In ICML, 2017.
[28] Yunchen Pu, Zhe Gan, Ricardo Henao, Xin Yuan, Chunyuan Li, Andrew Stevens, and Lawrence Carin. Variational autoencoder for deep learning of images, labels and captions. In NIPS, 2016.
[29] Yunchen Pu, Zhe Gan, Ricardo Henao, Chunyuan Li, Shaobo Han, and Lawrence Carin. Vae learning via stein variational gradient descent. In NIPS, 2017.
[30] Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, and Lawrence Carin. Alice: Towards understanding adversarial learning for joint distribution matching. In NIPS, 2017.
[31] Yunchen Pu, Weiyao Wang, Ricardo Henao, Liqun Chen, Zhe Gan, Chunyuan Li, and Lawrence Carin. Adversarial symmetric variational autoencoder. In NIPS, 2017.
[32] Yunchen Pu, Liqun Chen, Shuyang Dai, Weiyao Wang, Chunyuan Li, and Lawrence Carin. Symmetric variational autoencoder and connections to adversarial learning. In NIPS, 2017.
[33] Alex Krizhevsky. Learning multiple layers of features from tiny images. Citeseer, 2009.
[34] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998.
[35] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015.
[36] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv:1511.06390, 2015.
[37] Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. Semantic compositional networks for visual captioning. In CVPR, 2017.
[38] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: sequence generative adversarial nets with policy gradient. In AAAI, 2017.
| 7109 |@word mild:1 changyou:1 nd:6 open:3 cha:1 d2:18 pg:4 citeseer:1 lantao:1 liu:6 ours:1 hyunsoo:1 document:1 trustworthy:1 luo:1 readily:3 realistic:4 christian:1 designed:4 jenson:1 grass:3 generative:26 alec:2 provides:5 philipp:1 gx:16 zhang:8 junbo:1 yuan:1 consists:7 combine:1 wild:1 aitken:1 roughly:1 multi:5 discounted:1 ming:2 zhi:1 soumith:3 becomes:3 provided:8 discover:1 matched:1 underlying:1 laptop:1 surfing:1 superresolution:1 what:1 kind:4 developed:2 guarantee:2 pseudo:1 quantitative:2 zaremba:1 tie:1 ensured:1 classifier:5 scaled:1 sherjil:1 control:1 unit:2 szlam:1 ramanan:1 superiority:1 producing:1 treat:1 switching:1 jiang:1 ndcg:3 condtional:1 plus:1 black:4 specifying:1 challenging:1 luke:2 alice:1 limited:1 logeswaran:1 range:3 catgan:1 averaged:1 unique:2 lecun:2 ternary:3 practice:1 implement:2 differs:1 backpropagation:1 digit:1 procedure:1 maire:1 jan:1 tuzel:1 empirical:2 yan:4 significantly:1 matching:16 cannot:1 close:2 zehan:1 context:1 py:41 equivalent:2 map:2 demonstrated:1 dz:4 send:1 straightforward:1 emily:1 swinging:1 shen:1 simplicity:1 pouget:1 jascha:1 importantly:1 handle:1 xinchen:1 arbitray:1 tan:1 caption:3 duke:2 olivier:1 us:3 designing:2 goodfellow:2 synthesize:1 recognition:1 utilized:1 labeled:4 ep:5 bottom:1 wang:7 capture:1 cycle:1 principled:1 schiele:1 warde:1 tobias:1 trained:7 solving:1 deva:1 ferenc:1 ali:6 baseball:1 triangle:7 sink:1 discogan:9 joint:38 easily:1 darpa:1 joost:1 various:2 xiaoou:1 train:1 stacked:1 distinct:1 vicki:1 jianfeng:1 crowd:1 jean:1 emerged:1 bernt:1 kai:1 cvpr:3 drawing:4 ability:1 cooked:1 jointly:1 advantage:1 sequence:2 net:4 breuel:1 aro:1 tran:1 pale:1 frequent:1 qin:1 bath:1 flexibility:1 achieve:3 description:1 dirac:1 scalability:1 cautious:1 epx:3 yaniv:1 darrell:1 requirement:1 asymmetry:1 optimum:1 captioning:2 generating:5 produce:2 wavy:1 ben:2 object:2 bogdan:1 adam:1 andrew:3 tim:1 gong:1 progress:1 implemented:3 predicted:9 jiwon:1 implies:2 concentrate:3 direction:2 snow:1 closely:2 annotated:1 attribute:36 stevens:1 stochastic:1 translating:1 require:2 alleviate:2 proposition:2 weiyao:3 extension:1 cooperatively:1 pl:3 considered:4 ground:1 normal:3 jungkwon:1 great:2 lawrence:10 mapping:7 equilibrium:3 algorithmic:1 predict:1 caballero:1 efros:2 achieves:6 adopt:1 applicable:1 label:15 edit:1 goatee:1 successfully:1 weighted:1 nonidentifiability:1 clearly:2 gaussian:2 aim:1 husz:1 zhou:1 vae:1 jsd:7 focus:2 consistently:1 rank:1 contrast:1 adversarial:41 kim:3 inference:3 publically:1 cunningham:1 hidden:2 relation:2 perona:1 transformed:1 interested:2 tao:2 pixel:2 issue:1 classification:11 among:8 henao:5 dual:2 lucas:1 noun:1 weijer:1 marginal:2 equal:1 aware:1 field:2 beach:1 sampling:1 manually:1 piotr:1 adversarially:2 park:1 yu:5 denton:1 unsupervised:8 carin:9 icml:4 celeba:6 future:1 betwen:1 mirza:2 yoshua:2 few:4 divergence:6 individual:1 versatility:1 microsoft:1 hongsheng:1 mlp:1 alexei:2 evaluation:4 adjust:1 male:3 mixture:1 farley:1 contant:1 accurate:1 edge:2 cifar10:4 arthur:1 notify:1 orthogonal:1 unless:1 conduct:2 desired:1 theoretical:3 subfigure:2 instance:2 column:1 modeling:3 ishmael:1 cost:3 introducing:2 pole:1 uniform:1 predictor:1 krizhevsky:1 osindero:1 front:1 characterize:1 st:7 density:6 person:2 standing:3 lee:2 bigan:8 invertible:1 michael:2 together:2 synthesis:2 gans:10 squared:1 aaai:1 huang:1 woman:1 tile:1 collapsing:1 zhao:1 style:3 wojciech:1 ricardo:5 li:9 toy:4 account:1 potential:1 diversity:1 de:1 gy:17 summarized:2 stabilize:1 
includes:2 satisfy:1 explicitly:2 tsung:1 eyeglass:2 try:2 dumoulin:1 doing:1 characterizes:1 red:1 wave:1 weinan:1 metz:2 kautz:1 simon:1 contribution:1 accuracy:2 convolutional:1 serge:1 tejani:1 metaxas:1 vincent:1 rx:1 straight:3 simultaneous:1 ping:2 inform:1 sharing:1 trevor:1 definition:1 energy:2 james:1 chintala:3 proof:4 lior:1 transposed:1 sampled:4 stop:1 dataset:18 gain:1 popular:1 ledig:1 nenghai:1 color:1 akata:1 back:1 bidirectional:5 originally:1 supervised:29 follow:1 day:1 totz:1 improved:2 wei:1 editing:4 formulation:1 bian:1 cyclegan:2 though:3 strongly:1 lastly:2 alykhan:1 autoencoders:1 hand:2 mehdi:2 rack:1 defines:4 mode:1 reveal:1 resemblance:1 semisupervised:1 riding:2 xiaodong:1 usa:1 smiling:3 contain:1 true:4 asymmetrical:1 brown:1 normalized:1 hence:4 regularization:2 symmetric:4 dualgan:3 leibler:1 semantic:1 white:1 attractive:6 indistinguishable:1 game:10 plate:1 hill:1 demonstrate:6 image:95 variational:5 recently:1 sigmoid:1 common:1 shower:1 empirically:1 rl:1 extend:1 he:1 theirs:1 significant:1 honglak:1 rd:1 similarly:1 tennis:1 supervision:6 impressive:1 han:2 gt:1 pu:7 alejandro:1 patrick:1 disentangle:1 recent:1 female:1 optimizes:1 skiing:1 coco:9 certain:1 hay:1 manifested:1 binary:3 arbitrarily:1 success:1 onr:1 yi:2 seen:3 minimum:1 additional:5 wasserstein:2 bathroom:1 isola:2 ey:1 employed:1 tinghui:1 arjovsky:3 dai:1 deng:1 signal:2 semi:21 ii:2 full:1 desirable:3 living:1 reduces:1 multiple:2 taesung:1 match:9 characterized:4 calculation:2 cross:2 long:1 lin:1 equally:2 paired:13 laplacian:1 jost:1 crop:1 hair:9 multilayer:1 expectation:1 metric:3 arxiv:4 epy:3 pyramid:1 achieved:4 background:2 cropped:1 yunchen:7 modality:1 lajanugen:1 unlike:1 sure:1 effectiveness:1 iii:1 split:2 bengio:2 mastropietro:1 architecture:2 competing:1 taeksoo:1 stall:1 zili:1 polyak:1 tub:1 court:1 haffner:1 grill:1 whether:3 reformulated:1 henb:1 compositional:1 deep:4 generally:1 fake:8 fool:2 covered:1 johannes:1 stein:1 backpropagated:1 desk:1 unpaired:2 generate:5 exist:2 andr:1 sign:1 delta:1 blue:1 diverse:1 write:1 hyperparameter:1 dickstein:1 group:2 key:1 four:1 reformulation:1 demonstrating:1 stackgan:1 achieving:1 drawn:2 changing:1 kenneth:1 bushy:1 pietro:1 nga:1 compete:1 run:2 jose:2 powerful:1 injected:1 taigman:1 springenberg:1 decide:1 lamb:1 yann:2 appendix:6 layer:3 pay:1 stove:1 distinguish:6 played:1 correspondence:3 courville:2 fan:1 adapted:2 xiaogang:2 constraint:1 alex:2 encodes:1 yong:1 generates:1 min:2 px:34 martin:3 combination:3 ball:2 liqun:4 across:1 strives:1 slightly:3 pan:1 oncel:1 acosta:1 rob:1 hl:1 iccv:4 bing:1 discus:2 describing:1 needed:1 nose:3 photo:2 adopted:1 available:5 doll:1 apply:1 observe:1 salimans:1 ocean:1 hat:4 original:1 thomas:1 denotes:1 top:3 chuang:1 gan:121 const:1 calculating:1 objective:7 skin:1 added:2 strategy:1 gradient:4 iclr:7 reversed:1 reinforce:1 phillip:2 street:1 evaluate:3 zeynep:1 water:1 ozair:1 besides:1 code:1 modeled:1 reed:1 illustration:2 providing:1 minimizing:1 acquire:1 unrolled:2 difficult:1 kun:1 holding:2 hao:3 lipstick:2 stated:1 pizza:1 ziwei:1 proper:1 motivates:1 policy:1 perform:1 cogan:1 datasets:4 descent:1 similiar:1 supporting:1 looking:3 head:1 chunyuan:6 sharp:1 verb:1 makeup:1 yingce:1 introduced:2 david:2 pair:23 lvarez:1 kl:5 connection:1 discriminator:23 pfau:1 learned:12 nip:13 address:1 poole:2 usually:2 below:1 pattern:1 indoor:2 scott:1 dimitris:1 eyebrow:2 adjective:1 including:1 max:3 mouth:3 hot:1 natural:2 predicting:1 zhu:3 minimax:2 representing:1 
improve:2 sunglass:1 mathieu:1 created:1 categorical:1 jun:4 coupled:2 autoencoder:3 text:7 review:1 understanding:1 acknowledgement:1 theis:1 seqgan:1 relative:1 fully:4 loss:4 bear:1 crossdomain:1 generation:10 interesting:1 proven:1 generator:29 triple:39 chongxuan:1 shaobo:1 consistent:1 principle:1 tiny:1 playing:1 share:3 heavy:1 translation:12 row:14 prone:1 eccv:1 supported:1 transpose:3 allow:2 understand:1 perceptron:1 wide:3 face:9 distributed:2 van:1 xia:1 transition:1 dtn:1 cumulative:1 made:1 qualitatively:1 simplified:1 shuyang:1 approximate:1 implicitly:5 kullback:1 belghazi:1 global:1 assumed:1 belongie:1 discriminative:3 fergus:1 zhe:8 xi:1 latent:7 table:8 moonsu:1 learn:6 reasonably:1 ca:1 mse:3 bottou:3 complex:3 domain:24 zitnick:1 main:1 whole:1 noise:4 big:2 fair:2 xu:3 beard:4 board:1 precision:1 fails:2 explicit:3 outdoor:5 third:1 learns:1 young:3 ian:2 donahue:1 tang:1 bad:1 specific:1 jensen:4 pz:3 abadie:1 bivariate:1 consist:1 exists:1 mnist:8 workshop:1 classfier:1 sohl:1 importance:3 kr:1 chen:8 racket:1 simply:1 infinitely:1 gao:1 ez:1 shoe:2 visual:1 partially:1 sport:3 bo:1 radford:2 wolf:1 truth:1 yizhe:3 conditional:27 goal:2 cheung:1 towards:2 room:2 jeff:1 man:4 specifically:2 total:1 called:1 discriminate:1 oval:2 blond:1 tiled:1 player:3 shannon:5 meaningful:1 xin:1 pointy:1 select:1 aaron:2 support:1 people:1 wearing:5 d1:20 dinghan:1 ex:2 |
PRUNE: Preserving Proximity and Global Ranking
for Network Embedding
Yi-An Lai †⋆
National Taiwan University
[email protected]

Chin-Chi Hsu ‡⋆
Academia Sinica
[email protected]

Mi-Yen Yeh ‡
Academia Sinica
[email protected]

Wen-Hao Chen †
National Taiwan University
[email protected]

Shou-De Lin †
National Taiwan University
[email protected]
Abstract
We investigate an unsupervised generative approach for network embedding. A
multi-task Siamese neural network structure is formulated to connect embedding
vectors and our objective to preserve the global node ranking and local proximity
of nodes. We provide deeper analysis to connect the proposed proximity objective
to link prediction and community detection in the network. We show our model can
satisfy the following design properties: scalability, asymmetry, unity and simplicity.
Experiment results not only verify the above design properties but also demonstrate
the superior performance in learning-to-rank, classification, regression, and link
prediction tasks.
1 Introduction
Network embedding aims at constructing a low-dimensional latent feature matrix from a sparse
high-dimensional adjacency matrix in an unsupervised manner [1-3, 6, 15, 18-21, 23, 24, 26, 31].
Most previous works [1-3, 6, 15, 18-20, 23, 31] try to preserve k-order proximity while performing
embedding. That is, given a pair of nodes (i, j), the similarity between their embedding vectors
shall to a certain extent reflect their k-hop distances (e.g., the number of distinct k-hop paths from
node i to j, or the probability that node j is visited via a random walk from i). Proximity reflects
local network topology, and can even preserve global network topology such as communities. Some
other works directly formulate node embedding to fit the community distributions by
maximizing the modularity [21, 24].
Although some of the proximity-based embedding methods have experimentally visualized the
community separation in a two-dimensional vector space [2, 3, 6, 18, 20, 23], and some demonstrate an
effective usage scenario in link prediction [6, 15, 19, 23], so far we have not seen a theoretical
analysis connecting these three concepts. The first goal of this paper is to propose a proximity
model that connects node embedding with link prediction and community detection. There has been
some research in a similar direction. [24] proposes an embedding model preserving
both proximity and community; however, the objective functions for proximity and community are
designed separately, without showing the connection between them. [26] models an embedding approach
considering link prediction, but does not connect it to the preservation of network proximity.
† Department of Computer Science and Information Engineering
‡ Institute of Information Science
⋆ These authors contributed equally to this paper.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Besides connecting link prediction and proximity, here we also argue that it is beneficial for an
embedding model to preserve a network property not specifically addressed in existing research:
global node importance ranking. For decades, unsupervised node ranking algorithms such as PageRank
[16] and HITS [10] have shown their effectiveness in estimating global node ranks. Besides ranking
websites for better search outcomes, node rankings can be useful in other applications. For example,
the Webspam Challenge competition⁴ requires spam web pages to be ranked lower than non-spam
ones, and the WSDM 2016 Challenge⁵ asks for ranking paper information without supervision
data in a billion-sized citation network. Our experiments demonstrate that being able to preserve the
global ranking in node embedding can not only boost the performance of a learning-to-rank task,
but also of classification and regression tasks trained on node embeddings as features.
In this paper, we propose Proximity and Ranking-preserving Unsupervised Network Embedding
(PRUNE), an unsupervised Siamese neural network structure that learns node embeddings from not only
community-aware proximity but also global node ranking (see Figure 1). To achieve the above goals,
we rely on a generative solution. That is, taking the embedding vectors of the adjacent nodes of a
link as the training input, the shared hidden layers of our model non-linearly map node embeddings
to optimize a carefully designed objective function. During training, the objective function, for
global node ranking and community-aware proximity, propagates gradients back to update the embedding
vectors. Besides deriving an upper-bound-based objective function from PageRank to represent the
global node ranking, we also provide a theoretical connection of the proposed proximity objective
function to a general community detection solution. In sum, our model satisfies the following four
design characteristics. (I) Scalability [1, 6, 15, 18-21, 23, 26, 31]. We show that for each
training epoch, our model enjoys time and space complexity linear in the number of nodes or links.
Furthermore, different from some previous works relying on sampling non-existing links as negative
examples for training, our model lifts the need to sample negative examples, which not only saves
training time but also relieves concerns about sampling bias. (II) Asymmetry [2, 3, 15, 19, 20, 31]. Our
model considers link directions to learn the embeddings of either directed or undirected networks.
(III) Unity [1, 2, 6, 15, 18, 19, 21, 23, 24, 26, 31]. We perform joint learning to satisfy two different
objective goals in a single model. The experiments show that the proposed multi-task neural network
structure outperforms a two-stage model. (IV) Simplicity. Empirical verification reflects that our
model can achieve superior performance with only one hidden layer and a unified
hyperparameter setting, freeing us from fine-tuning the hyperparameters. This property is especially
important for an unsupervised learning task due to the lack of validation data for fine-tuning. The source
code of the proposed model can be downloaded online.⁶
2 Related work
Recently, a growing number of works have proposed embedding models specifically for network
property preservation. Most of the prior methods extract latent embedding features by singular value
decomposition or matrix factorization [1, 3, 8, 15, 19, 21, 22, 24, 28, 30]. Such methods typically define
an N-by-N matrix A (N is the number of nodes) that reflects certain network properties, and then
factorize A ≈ U^⊤ V or A ≈ U^⊤ U into two low-dimensional embedding matrices U and V.
There are also random-walk-based methods [6, 17, 18, 31] proposing an implicit reduction toward word
embedding [14] by gathering random-walk sequences of sampled nodes throughout a network. These
methods work well in practice but struggle to explain what network properties should be kept in their
objective functions [20]. Unsupervised deep autoencoders have also been used to learn latent embedding
features of A [2, 23], in particular achieving non-linear mapping strength through activation functions.
Finally, some research defines different objective functions, like the Kullback-Leibler divergence [20] or
the Huber loss [26], for network embedding. Please see Table 1 for detailed model comparisons.
⁴ http://webspam.lip6.fr/wiki/pmwiki.php
⁵ https://wsdmcupchallenge.azurewebsites.net/
⁶ https://github.com/ntumslab/PRUNE
Table 1: Model comparisons. (I) Scalability; (II) Asymmetry; (III) Unity. Simplicity is omitted due to
difficult comparisons between models with few sensitive and many insensitive hyperparameters.

Model                       (I)  (II)  (III)
Proximity Embedding [19]     X    X     X
SocDim [21]                  X          X
Graph Factorization [1]      X          X
DeepWalk [18]                X          X
TADW [28]
LINE [20]                    X    X
GraRep [3]                        X
DNGR [2]                          X     X
TriDNR [17]
MMDW [22]
SDNE [23]                    X          X
HOPE [15]                    X    X     X
node2vec [6]                 X          X
HSCA [30]
LANE [8]
APP [31]                     X    X     X
M-NMF [24]                              X
NRCL [26]                    X          X
Our PRUNE                    X    X     X
Figure 1: PRUNE overview. Each solid arrow represents a non-linear mapping function h between
two neural layers. For a training link (i, j), the embeddings u_i and u_j of nodes i and j are fed into
shared hidden layers; a proximity layer outputs representations z_i and z_j coupled through a shared
matrix W, and a node ranking layer outputs ranking scores π_i and π_j. The depicted objective is
arg min_{π, z, W} Σ_{(i,j)} (z_i^⊤ W z_j − max{0, log[M/(λ n_i m_j)]})² + α Σ_{(i,j)} m_j (π_i/n_i − π_j/m_j)².
3 Model

3.1 Problem definition and notations
We are given a directed homogeneous graph or network G = (V, E) as input, where V is the set
of vertices or nodes and E is the set of directed edges or links. Let N = |V| and M = |E| be the
numbers of nodes and links in the network. For each node i, we denote by P_i and S_i the sets of
direct predecessors and successors of node i, respectively. Therefore, m_i = |P_i| and n_i = |S_i| are the
in-degree and out-degree of node i. The matrix A denotes the corresponding adjacency matrix, where
each entry a_ij ∈ [0, ∞) is the weight of link (i, j). For simplicity, here we discuss only binary link
weights, a_ij ∈ {0, 1} and E = {(i, j) : a_ij = 1}, but solutions for non-negative link weights can be
derived in the same manner. Our goal is to build an unsupervised model that learns a K-dimensional
embedding vector u_i ∈ R^K for each node i, such that u_i preserves global node ranking and local
proximity information.
3.2 Model overview
The Siamese neural network structure of our model is illustrated in Figure 1. Siamese architectures
have been widely applied to multi-task learning, as in [27]. As Figure 1 illustrates, we define a pair of
nodes (i, j) as a training instance. Since both i and j refer to the same type of object (i.e., nodes),
it is natural to allow them to share the same hidden layer, which is what the Siamese architecture
suggests. We start from the bottom part of Figure 1 to introduce our proximity function. Here the
model is trained using each link (i, j) as a training instance. Given (i, j), our model first feeds the
current embedding vectors u_i and u_j into the input layer. The values in u_i and u_j are updated
by gradients propagated back from the output layers. To learn the mapping from the embedding
vectors to the objective functions, we place one hidden layer as a bridge. We found empirically
that a single hidden layer already yields competitive results, implying that a simple neural network
is sufficient to encode graph properties into a vector space, which alleviates the burden of tuning
hyperparameters in a neural network. Second, nodes i and j share the same hidden layers in our
neural networks, realized by the Siamese architecture. Each solid arrow in Figure 1 implies the
following mapping function:

  h(u) = σ(Φu + b),   (1)

where Φ and b are the weight matrix and the bias vector, and σ is an activation function leading to non-linear
mappings. In Figure 1, our goal is to encode the proximity information in the embedding space. Thus
we define a D-dimensional vector z ∈ [0, ∞)^D that represents latent features of a node. In the
next sections, we show that the proximity property can be modeled by the interaction between
the representations z_i and z_j. The mapping from embedding u to z is

  z = σ₂(Φ₂ σ₁(Φ₁ u + b₁) + b₂).   (2)
In Figure 1, we use the same network construction to encode an additional global node ranking score π ≥ 0,
used to compare the relative ranks between one node and another. Formally, π is mapped
from the embedding u using the following formula:

  π = σ₄(Φ₄ σ₃(Φ₃ u + b₃) + b₄).   (3)

We impose the non-negativity constraints on z and π, for better theoretical properties, by exploiting
non-negative activation functions (ReLU or softplus, for example) for the outputs σ₂ and σ₄. Other
activation outputs and all the Φ, b are not limited to be non-negative. To add global node
ranking information to proximity preservation, we construct a multi-task neural network structure
as illustrated in Figure 1. Letting the hidden layers for the different network properties share the same
embedding space, u is updated by information from multiple objective goals simultaneously.
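A minimal sketch of one Siamese branch, following Eqs. (2)-(3); the ELU/ReLU/softplus choices mirror the model setup of Section 4.1, and the parameter shapes (in particular Φ₄ mapping to a single output) are illustrative:

import numpy as np

def elu(x):
    return np.where(x > 0, x, np.expm1(x))

def softplus(x):
    # Numerically stable softplus, log(1 + exp(x)).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def forward_branch(u, p):
    # Proximity branch, Eq. (2): z = sigma2(Phi2 sigma1(Phi1 u + b1) + b2) >= 0.
    h = elu(p["Phi1"] @ u + p["b1"])
    z = np.maximum(p["Phi2"] @ h + p["b2"], 0.0)  # ReLU keeps z non-negative
    # Ranking branch, Eq. (3): pi = sigma4(Phi4 sigma3(Phi3 u + b3) + b4) >= 0.
    g = elu(p["Phi3"] @ u + p["b3"])
    pi = softplus(p["Phi4"] @ g + p["b4"])        # softplus keeps pi positive
    return z, pi.item()  # pi is a scalar ranking score, z a non-negative vector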
Different from a supervised learning task, where the model can be trained with labeled data, here we
instead need to introduce an objective function for weight tuning:

  arg min_{π≥0, z≥0, W≥0} Σ_{(i,j)∈E} ( z_i^⊤ W z_j − max{0, log[ M / (λ n_i m_j) ]} )² + α Σ_{(i,j)∈E} m_j ( π_i/n_i − π_j/m_j )².   (4)

The first term aims at preserving the proximity and can be applied independently, as illustrated in
Figure 1. The second term corresponds to the global node ranking task, which regularizes the relative
scale among ranking scores. Here we introduce a shared matrix W = σ₅(Φ₅) to learn the global linking
correlations in the whole network; we also choose a non-negative-ranged activation function σ₅ to satisfy
the non-negativity of W. α controls the relative importance of the two terms. We provide analysis of
(4) in the next sections. Since the objective function (4) is differentiable, we can apply
mini-batch stochastic gradient descent (SGD) to optimize every Φ, b, and even u by propagating the
gradients top-down from the output layers.
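A sketch of objective (4) evaluated on a batch of links, with the recommended λ = 5 and α = 0.01; in practice the gradients with respect to z, π, W (and, through them, u) would come from automatic differentiation:

import numpy as np

def prune_loss(z, pi, W, edges, n_out, m_in, M, lam=5.0, alpha=0.01):
    # z: (N, D) non-negative proximity representations; pi: (N,) non-negative
    # ranking scores; W: non-negative (D, D) shared matrix sigma5(Phi5);
    # n_out[i] / m_in[j]: out-degree of i and in-degree of j; M: number of links.
    loss = 0.0
    for i, j in edges:
        target = max(0.0, np.log(M / (lam * n_out[i] * m_in[j])))
        proximity = (z[i] @ W @ z[j] - target) ** 2
        ranking = m_in[j] * (pi[i] / n_out[i] - pi[j] / m_in[j]) ** 2
        loss += proximity + alpha * ranking
    return loss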
The deterministic mapping in (2) could be misunderstood as implying that u and z capture the same
embedding information; in fact, z specifically captures the proximity property of a network through
performing link prediction, while u influences both proximity and global ranking. The reason to
use z instead of u for link prediction is that we believe node ranking and link prediction are two
naturally different tasks (though their information can be shared, since highly ranked nodes can have
better connectivity to others); using one single embedding representation u to achieve both goals can
lead to a compromised solution. Instead, z can be treated as "distilled" information extracted
from u specifically for link prediction, which prevents our model from settling to a mediocre u
that fails to satisfy both goals directly.
3.3 Proximity preservation as PMI matrix tri-factorization

The first term in (4) aims at preserving the proximity property of input networks. We focus on the
first-order and second-order proximity, which are explicitly addressed in several proximity-based
methods [3, 20, 23, 24]. The first-order proximity refers to whether a node pair (i, j) is connected in
an unweighted graph. In an input network, links (i, j) ∈ E are observed as positive training examples
with a_ij = 1; thus, their latent inner product z_i^⊤ W z_j should be increased to reflect such a close linking
relationship. Nonetheless, usually another set of randomly chosen node pairs (i, k) ∈ F is required
to train the embedding model as negative examples. Since the set F does not exist in the input
network, one can sample λ target nodes k (with probability proportional to the in-degree m_k) to form
negative examples (i, k). That is, given a source node i, we emphasize the existence of link (i, j) by
distinguishing whether the corresponding target node is observed ((i, j) ∈ E) or not ((i, k) ∈ F).
We can construct a binary logistic regression model to distinguish E and F:

  arg max_{z,W}  E_{(i,j)∈E} log σ(z_i^⊤ W z_j) + λ E_{(i,k)∈F} log( 1 − σ(z_i^⊤ W z_k) ),   (5)
where E denotes an expected value and σ(x) = 1/(1 + exp(−x)) is the sigmoid function. Inspired by the
derivations in [12], we have the following conclusion:

Lemma 3.1. Let y_ij = z_i^⊤ W z_j. Setting the first-order derivative of (5) over y_ij to zero gives the
closed-form solution

  y_ij = log[ p_{s,t}(i, j) / ( p_s(i) p_t(j) ) ] − log λ = log[ M / (λ n_i m_j) ],   (6)

where p_{s,t}(i, j) = 1/|E| = 1/M is the joint probability of the link (positive example) (i, j) in the set E,
p_s(i) = n_i/M follows a distribution proportional to the out-degree n_i of the source node i, and
p_t(j) = m_j/M follows another distribution proportional to the in-degree m_j of the target node j.

Proof. Please refer to our Supplementary Material, Section 2.
Clearly, (6) is the pointwise mutual information (PMI) shifted by log λ, which can be viewed as a link
weight in terms of the out-degree n_i and in-degree m_j. If we directly minimize the difference between
the two sides of (6), rather than maximize (5), then we are free from sampling negative examples (i, k) to
train the model. Following the suggestions in [12], we filter out negative (less informative) PMI values, as shown
in (4), which further improves performance.
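A sketch of how the non-negative shifted-PMI targets of Eq. (6) can be built from the observed links alone; unobserved pairs simply stay missing, so no negative sampling is required:

import numpy as np
from collections import Counter

def shifted_pmi_targets(edges, lam=5.0):
    # edges: list of directed links (i, j). Returns the sparse targets
    # {(i, j): max(0, log(M / (lam * n_i * m_j)))} used in objective (4).
    M = len(edges)
    n_out = Counter(i for i, _ in edges)  # out-degrees n_i
    m_in = Counter(j for _, j in edges)   # in-degrees m_j
    return {(i, j): max(0.0, np.log(M / (lam * n_out[i] * m_in[j])))
            for i, j in edges}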
The second-order proximity refers to the fact that the similarity of z_i and z_j is higher if nodes i and j
have similar sets of direct predecessors and successors (that is, the similarity reflects 2-hop distance
relationships). We now present how to preserve the second-order proximity using tri-factorization-based
link prediction [13, 32]. Let

  A^PMI_ij = max{0, log[ M / (λ n_i m_j) ]} if (i, j) ∈ E, and missing otherwise,

be the corresponding PMI matrix. Link prediction aims to predict the missing PMI values in
A^PMI. Factorization methods assume A^PMI is of low rank D and then learn a matrix tri-factorization
Z^⊤ W Z ≈ A^PMI using the non-missing entries. The matrix Z = [z_1 z_2 ... z_N] aligns latent representations
with link distributions. Compared with the classical factorization Z^⊤ V, such a tri-factorization supports
the asymmetric transitivity property of directed links. Specifically, the existence of two directed
links (i, j) (z_i^⊤ W z_j) and (j, k) (z_j^⊤ W z_k) increases the likelihood of (i, k) (z_i^⊤ W z_k) via the representation
propagation z_i → z_j → z_k, but not that of (k, i), due to the asymmetric W. Then we have the following
lemma:

Lemma 3.2. The matrix tri-factorization Z^⊤ W Z ≈ A^PMI preserves the second-order proximity.

Proof. Please refer to our Supplementary Material, Section 3.
Next, we discuss the connection between matrix tri-factorization and community. Different from the
heuristic statements in [13, 32], we argue that the representation vector z_i captures a D-community
distribution for node i (each dimension is proportional to the probability that node i belongs to a certain
community), and the shared matrix W implies the interactions among these D communities.

Lemma 3.3. The matrix tri-factorization z_i^⊤ W z_j can be regarded as the expectation of community
interactions under the distributions of link (i, j):

  z_i^⊤ W z_j ∝ E_{(i,j)}[W] = Σ_{c=1}^{D} Σ_{d=1}^{D} Pr(i ∈ C_c) Pr(j ∈ C_d) w_{cd},   (7)

where each entry w_{cd} is the expected number of interactions from community c to d, and C_c denotes
the set of nodes in community c.

Proof. Please refer to the Supplementary Material, Section 4.
Based on the binary classification model (5), when a true link (i, j) is observed in the training data,
the corresponding inner product z_i^⊤ W z_j is increased, which is equivalent to raising the expectation
E_{(i,j)}[W].

To summarize, the derivation from the logistic classification (5) to the PMI matrix tri-factorization (6)
shows that the tri-factorization model preserves the first-order proximity. Lemma 3.2 then proves the
preservation of the second-order proximity. Besides, when a non-negative constraint is imposed, Lemma 3.3
shows that the tri-factorization model can be interpreted as capturing community interactions. That
is, our proximity-preserving loss achieves first-order proximity, second-order proximity, and
community preservation.
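A toy numeric check of Lemma 3.3: rescaling z_i and z_j to community distributions shows that the tri-factorization value equals the expected community interaction up to the factor ‖z_i‖₁‖z_j‖₁:

import numpy as np

rng = np.random.default_rng(0)
D = 4
z_i, z_j = rng.random(D), rng.random(D)      # non-negative representations
W = rng.random((D, D))                       # non-negative community interactions

p_i, p_j = z_i / z_i.sum(), z_j / z_j.sum()  # Pr(i in C_c), Pr(j in C_d)
expectation = sum(p_i[c] * p_j[d] * W[c, d]
                  for c in range(D) for d in range(D))
assert np.isclose(z_i @ W @ z_j, expectation * z_i.sum() * z_j.sum())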
Given the non-negative log[M/(n_i m_j)] in our setting of (4), we make another observation on community
detection. Equation (6) can be rewritten as

  1 − exp( −z_i^⊤ W z_j ) = 1 − n_i m_j / M,   (8)

where the left-hand side equals P(X^(i,j) > 0) = 1 − P(X^(i,j) = 0) and the right-hand side is the
modularity with a_ij = 1 and λ = 1. Following Lemma 3.3, we can then derive:

Lemma 3.4. The left-hand side of (8) is the probability P(X^(i,j) > 0), where 0 ≤ X^(i,j) ≤ D²
is the total number of interactions between all the community pairs (c, d), 1 ≤ c ≤ D, 1 ≤ d ≤ D,
that affect the existence of the link (i, j), following a Poisson distribution P(X^(i,j)) with mean
z_i^⊤ W z_j.

Proof. Please refer to the Supplementary Material, Section 5.

In fact, either side of Equation (8) evaluates the likelihood of the occurrence of a link. For
the left-hand side, as shown in [29] and our Supplementary Material Section 5, an existing link
implies at least one community interaction (X > 0), whose probability is assumed to follow a Poisson
distribution with mean equal to the tri-factorization value. The right-hand side is commonly regarded as the
"modularity" [11], which measures the difference between links in the observed data and links
under random generation. Modularity is commonly used as an evaluation metric for the quality of a
community detection algorithm (see [21, 24]). A deeper investigation of Equation (8) is left for our
future work.
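A quick Monte-Carlo sanity check of Lemma 3.4's Poisson reading of the left-hand side of (8): if X ~ Poisson(μ) with μ playing the role of z_i^⊤ W z_j, then P(X > 0) = 1 − exp(−μ):

import numpy as np

rng = np.random.default_rng(1)
mu = 0.7                                   # stands in for z_i^T W z_j
samples = rng.poisson(mu, size=200_000)
empirical = (samples > 0).mean()
closed_form = 1.0 - np.exp(-mu)            # left-hand side of Eq. (8)
print(empirical, closed_form)              # both close to 0.503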
3.4 Global node ranking preservation as PageRank upper bound
Here we want to connect the second objective to PageRank. To be more precise, the second term
in (4) (without the parameter α) comes from an upper bound under the PageRank assumption. PageRank [16]
is arguably the most common unsupervised method to evaluate the rank of a node. It states that the
ranking score π_j of a node j is the probability of visiting j through random walks; π_j for every j ∈ V
can be obtained by accumulating the ranking scores of the direct predecessors i, weighted by the
reciprocal out-degrees 1/n_i. One can express PageRank as the minimization of the squared loss

  L = Σ_{j∈V} ( Σ_{i∈P_j} π_i/n_i − π_j )².

Here the probability constraint Σ_{i∈V} π_i = 1 is not considered, since
we care only about the relative rankings; the damping factor in PageRank is also omitted
for model simplicity. Unfortunately, it is infeasible to apply SGD to update L, since the summation
over i ∈ P_j sits inside the square, violating the standard SGD assumption L = Σ_{(i,j)∈E} L_ij, where each
sub-objective function L_ij is relevant to a single training link (i, j). Instead, we choose to minimize
an upper bound.
Lemma 3.5. By the Cauchy-Schwarz inequality, we have the following upper bound:

  Σ_{j∈V} ( Σ_{i∈P_j} π_i/n_i − π_j )² ≤ Σ_{(i,j)∈E} m_j ( π_i/n_i − π_j/m_j )².   (9)

Proof. Please refer to our Supplementary Material, Section 6.
The proof of the approximation ratio of this upper bound (9) is left as future work. Nevertheless,
as will be shown later, the experiments demonstrate the effectiveness of this upper bound.
Intuitively, (9) minimizes the difference between π_i/n_i and π_j/m_j, weighted by the in-degree m_j. This can be
explained by the following lemma:

Lemma 3.6. The objective π_i/n_i = π_j/m_j on the right-hand side of (9) is a sufficient condition for the
objective Σ_{i∈P_j} π_i/n_i = π_j on the left-hand side of (9).

Proof. Please refer to our Supplementary Material, Section 7.
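A small numeric illustration of the bound (9) on a toy directed graph in which every node counted on the left-hand side has at least one predecessor; the π values are arbitrary:

edges = [(0, 1), (0, 2), (1, 2)]
pi = {0: 0.5, 1: 0.3, 2: 0.2}
n_out = {0: 2, 1: 1}                 # out-degrees n_i
m_in = {1: 1, 2: 2}                  # in-degrees m_j

# Exact PageRank squared loss, over nodes j with at least one predecessor.
exact = sum((sum(pi[i] / n_out[i] for i, jj in edges if jj == j) - pi[j]) ** 2
            for j in m_in)
# Upper bound (9), decomposable per link and hence SGD-friendly.
bound = sum(m_in[j] * (pi[i] / n_out[i] - pi[j] / m_in[j]) ** 2
            for i, j in edges)
assert exact <= bound                # 0.125 <= 0.1275 here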
3.5 Discussion
We have mentioned four major advantages of our model in the introduction; here we provide in-depth
discussions of them. (I) Scalability. Since only the positive links are used for training, during SGD
our model spends O(MΨ²) time per epoch, where Ψ is the maximum number of neurons in a layer of
our model, usually in the hundreds. Also, our model costs only O(N + M) space to store the input
network, and the sparse PMI matrix consumes O(M) non-zero entries. In practice Ψ² ≪ M, so our
model is scalable. (II) Asymmetry. By the observation in (4), replacing (i, j) with (j, i) leads to
different results, since W and the PageRank upper bound are asymmetric. (III) Unity. All the objectives
in our model are jointly optimized under a multi-task Siamese neural network. (IV) Simplicity. As the
experiments show, our model performs well with single hidden layers and the same hyperparameter
setting across all the datasets, which alleviates the difficult hyperparameter determination for
unsupervised network embedding.
4 Experiments

4.1 Settings
Datasets. We benchmark our model on three real-world networks from different application domains:
(I) Hep-Ph.⁷ A paper citation network from 1993 to 2003, including 34,546 papers and
421,578 citation relationships. Following the same setup as [25], we use citations before 1999 for
embedding generation, and then evaluate paper ranks using the number of citations after 2000.
(II) Webspam.⁸ A web page network used in the Webspam Challenges. There are 114,529 web
pages and 1,836,441 hyperlinks. Participants are challenged to build a model that ranks the 1,933
labeled non-spam web pages higher than the 122 labeled spam ones.
(III) FB Wall Post.⁹ A previous task [7] aims at ranking active users using a 63,731-user, 831,401-link
wall post network from the social media website Facebook, New Orleans, 2009. The nodes denote users, and
a link implies that a user posts at least one article on someone's wall. 14,862 users are marked active,
that is, they continue to post articles in the next three weeks after a certain date. The goal is to rank
active users over inactive ones.
Competitors. We compare the performance of our model with DeepWalk [18], LINE [20], node2vec
[6], SDNE [23], and NRCL [26]. DeepWalk, LINE, and node2vec are popular models used in various
applications. SDNE proposes another neural network structure to embed networks. NRCL is one
of the state-of-the-art network embedding models, specially designed for link prediction. Note that
NRCL encodes external node attributes into network embedding, but we discard this part since such
information is not assumed available in our setup.
Model Setup. For all experiments, our model fixes the node embeddings and hidden layers to be 128-dimensional
and the proximity representation to be 64-dimensional. Exponential Linear Unit (ELU) [4]
activations are adopted in the hidden layers for faster learning, while the output layers use softplus activation
for the node ranking score and Rectified Linear Unit (ReLU) [5] activation for the proximity representation,
to avoid negative-or-zero scores as well as negative representation values. We recommend and fix
λ = 5, α = 0.01. All training uses a batch size of 1024 and the Adam [9] optimizer with learning rate
0.0001.

⁷ http://snap.stanford.edu/data/cit-HepPh.html
⁸ http://chato.cl/webspam/datasets/uk2007/
⁹ http://socialnetworks.mpi-sws.org/data-wosn2009.html
Evaluation. Similar to previous works, we evaluate our embedding using supervised
learning tasks. That is, we evaluate whether the proposed embedding yields better results for
(1) learning-to-rank, (2) classification and regression, and (3) link prediction tasks.

4.2 Results
In the following paragraphs, we call our proposed model PRUNE. PRUNE without the global ranking
part is named TriFac below.
Learning-to-rank. In this setting, we use a pairwise approach that formulates learning-to-rank as a
binary classification problem and takes embeddings as node attributes. A linear Support Vector Machine
with regularization C = 1.0 is used as our learning-to-rank classifier. We train on 80% and evaluate
on 20% of each dataset. Since Webspam and FB Wall Post possess binary labels, we choose the Area Under
the ROC Curve (AUC) as the evaluation metric. Following the setting in [25], the Hep-Ph paper citation count is a
real value, and thus suits Spearman's rank correlation coefficient better.
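A sketch of the pairwise reduction, assuming the feature for an ordered pair (a, b) is the embedding difference and the label says whether a should rank above b; the exact pair construction used in the evaluation is not spelled out here:

import numpy as np
from sklearn.svm import LinearSVC

def fit_pairwise_ranker(emb, pairs, labels):
    # emb: (N, K) node embeddings; pairs: list of (a, b) node index pairs;
    # labels[i] = 1 if node a should rank above node b, else 0.
    X = np.stack([emb[a] - emb[b] for a, b in pairs])
    y = np.asarray(labels)
    return LinearSVC(C=1.0).fit(X, y)  # C = 1.0 as in the setup above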
The results in Table 2 show that PRUNE significantly outperforms the competitors. Note that PRUNE,
which incorporates global node ranking via multi-task learning, has superior performance compared
with TriFac, which considers only the proximity. This shows that the unsupervised global ranking
we model is positively correlated with the rankings in these learning-to-rank tasks. The
multi-task learning also enriches the underlying interactions between the two tasks and is the key to the better
performance of PRUNE.
Table 2: Learning-to-rank performance (*: outperforms the 2nd-best with p-value < 0.01).

Dataset       Evaluation  DeepWalk  LINE   node2vec  SDNE   NRCL   TriFac  PRUNE
Hep-Ph        Rank Corr.  0.485     0.430  0.494     0.353  0.327  0.554   0.621*
Webspam       AUC         0.821     0.818  0.843     0.800  0.839  0.821   0.853*
FB Wall Post  AUC         0.702     0.712  0.730     0.749  0.573  0.747   0.765*
Classification and Regression. In this experiment, the embedding outputs are directly used for binary
node classification on Webspam and FB Wall Post and node regression on Hep-Ph. We observe only
80% of the nodes during training and predict the labels of the remaining 20% of nodes. Random Forest and Support
Vector Regression are used for classification and regression, respectively. Classification is evaluated
by AUC and regression by the Root Mean Square Error (RMSE). Table 3 shows that
PRUNE reaches the lowest RMSE on the regression task and the highest AUC on the two classification
tasks among the embedding algorithms, while TriFac is competitive with the others. The results show that the
global ranking we model contains useful information for capturing certain properties of nodes.
Table 3: Classification and regression performance (*: outperforms the 2nd-best with p-value < 0.01).

Dataset       Evaluation  DeepWalk  LINE    node2vec  SDNE    NRCL    TriFac  PRUNE
Hep-Ph        RMSE        12.079    12.307  11.909    12.451  12.429  11.967  11.720*
Webspam       AUC         0.620     0.597   0.622     0.605   0.578   0.576   0.637*
FB Wall Post  AUC         0.733     0.707   0.744     0.752   0.759   0.763   0.775*
Link Prediction. We randomly split network edges into 80%-20% train-test subsets as positive
examples and sample an equal number of node pairs with no edge connection as negative examples.
Embeddings are learned on the training set and performance is evaluated on the test set. Logistic
regression is adopted as the link prediction algorithm, and models are evaluated by AUC. The results
in Table 4 show that PRUNE outperforms all counterparts significantly, while TriFac is competitive with
the others. These results, together with the previous two experiments, demonstrate the effectiveness of PRUNE
for diverse network applications.
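A sketch of this protocol; the element-wise product is one common edge feature, assumed here since the feature choice is not pinned down above:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def link_prediction_auc(emb, train_pos, train_neg, test_pos, test_neg):
    # Edge feature: element-wise product of the endpoint embeddings.
    feats = lambda es: np.stack([emb[i] * emb[j] for i, j in es])
    X_tr = np.vstack([feats(train_pos), feats(train_neg)])
    y_tr = np.r_[np.ones(len(train_pos)), np.zeros(len(train_neg))]
    X_te = np.vstack([feats(test_pos), feats(test_neg)])
    y_te = np.r_[np.ones(len(test_pos)), np.zeros(len(test_neg))]
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])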
Robustness to Noisy Data. In real-world settings, usually only a partial network is observable, as
links can be missing. A perturbation analysis is therefore conducted to verify the robustness of the models
by measuring the learning-to-rank performance when different fractions of edges are missing.
Table 4: Link prediction performance (*: outperforms the 2nd-best with p-value < 0.01).

Dataset       DeepWalk  LINE   node2vec  SDNE   NRCL   TriFac  PRUNE
Hep-Ph        0.803     0.796  0.805     0.751  0.688  0.814   0.861*
Webspam       0.885     0.954  0.894     0.953  0.910  0.946   0.973*
FB Wall Post  0.828     0.781  0.853     0.855  0.731  0.858   0.878*

Figure 2: Perturbation analysis for learning-to-rank on Hep-Ph (rank correlation) and FB Wall Post
(AUC) as the edge drop rate varies from 0.0 to 0.9, comparing PRUNE, DeepWalk, LINE, node2vec,
SDNE, and NRCL.
Figure 2 shows that PRUNE persistently outperforms the competitors across different fractions of
missing edges. The results demonstrate its robustness to missing edges, which is crucial for evolving or
costly-constructed networks.
Discussions. The superiority can be summarized based on the features of the models:
(I) We have an explicit objective to optimize. Random-walk-based models (i.e., DeepWalk, node2vec)
lack such objectives and, moreover, introduce noise during the random walk procedure.
(II) We are the only model that considers global node ranking information.
(III) We preserve first- and second-order proximity and consider asymmetry (i.e., the direction of
links). NRCL only preserves the first-order proximity and does not consider asymmetry. SDNE does
not consider asymmetry either. LINE does not handle first-order and second-order proximity jointly
but treats them independently.
5 Conclusion
We propose a multi-task Siamese deep neural network that generates network embeddings preserving
global node ranking and community-aware proximity. We design a novel objective function for
embedding training and provide the corresponding theoretical interpretation. The experiments show
that preserving the properties we propose can indeed improve the performance of supervised
learning tasks that use the embedding as features.
Acknowledgments
This study was supported in part by the Ministry of Science and Technology (MOST) of Taiwan,
R.O.C., under Contracts 105-2628-E-001-002-MY2, 106-2628-E-006-005-MY3, 104-2628-E-002-015-MY3,
and 106-2218-E-002-014-MY4, by the Air Force Office of Scientific Research, Asian Office
of Aerospace Research and Development (AOARD) under award number FA2386-17-1-4038,
and by Microsoft under Contract FY16-RES-THEME-021. All opinions, findings, conclusions, and
recommendations in this paper are those of the authors and do not necessarily reflect the views of the
funding agencies.
References
[1] Amr Ahmed, Nino Shervashidze, Shravan Narayanamurthy, Vanja Josifovski, and Alexander J. Smola. Distributed large-scale natural graph factorization. WWW '13.
[2] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. AAAI '16.
[3] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. CIKM '15.
[4] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). CoRR, 2015.
[5] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. AISTATS '11.
[6] Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. KDD '16.
[7] Julia Heidemann, Mathias Klier, and Florian Probst. Identifying key users in online social networks: A pagerank based approach. ICIS '10.
[8] Xiao Huang, Jundong Li, and Xia Hu. Label informed attributed network embedding. WSDM '17.
[9] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, 2014.
[10] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. J. ACM, 1999.
[11] Elizabeth A. Leicht and Mark E. J. Newman. Community structure in directed networks. Physical Review Letters, 2008.
[12] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. NIPS '14.
[13] Aditya Krishna Menon and Charles Elkan. Link prediction via matrix factorization. ECML PKDD '11.
[14] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. NIPS '13.
[15] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. KDD '16.
[16] Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab, 1999.
[17] Shirui Pan, Jia Wu, Xingquan Zhu, Chengqi Zhang, and Yang Wang. Tri-party deep network representation. IJCAI '16.
[18] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. KDD '14.
[19] Han Hee Song, Tae Won Cho, Vacha Dave, Yin Zhang, and Lili Qiu. Scalable proximity estimation and link prediction in online social networks. IMC '09.
[20] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. WWW '15.
[21] Lei Tang and Huan Liu. Relational learning via latent social dimensions. KDD '09.
[22] Cunchao Tu, Weicheng Zhang, Zhiyuan Liu, and Maosong Sun. Max-margin deepwalk: Discriminative learning of network representation. IJCAI '16.
[23] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. KDD '16.
[24] Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, and Shiqiang Yang. Community preserving network embedding. AAAI '17.
[25] Yujing Wang, Yunhai Tong, and Ming Zeng. Ranking scientific articles by exploiting citations, authors, journals, and time information. AAAI '13.
[26] Xiaokai Wei, Linchuan Xu, Bokai Cao, and Philip S. Yu. Cross view link prediction by learning noise-resilient representation consensus. WWW '17.
[27] Zhizheng Wu, Cassia Valentini-Botinhao, Oliver Watts, and Simon King. Deep neural networks employing multi-task learning and stacked bottleneck features for speech synthesis. ICASSP '15.
[28] Cheng Yang, Zhiyuan Liu, Deli Zhao, Maosong Sun, and Edward Y. Chang. Network representation learning with rich text information. IJCAI '15.
[29] Jaewon Yang and Jure Leskovec. Overlapping community detection at scale: A nonnegative matrix factorization approach. WSDM '13.
[30] D. Zhang, J. Yin, X. Zhu, and C. Zhang. Homophily, structure, and content augmented network representation learning. ICDM '16.
[31] Chang Zhou, Yuqiong Liu, Xiaofei Liu, Zhongyi Liu, and Jun Gao. Scalable graph embedding for asymmetric proximity. AAAI '17.
[32] Shenghuo Zhu, Kai Yu, Yun Chi, and Yihong Gong. Combining content and link for classification using matrix factorization. SIGIR '07.
Bayesian Optimization with Gradients
Jian Wu¹, Matthias Poloczek², Andrew Gordon Wilson¹, Peter I. Frazier¹
¹Cornell University, ²University of Arizona
Abstract
Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization
methods, Bayesian optimization typically does not use derivative information. In
this paper we show how Bayesian optimization can exploit derivative information to
find good solutions with fewer objective function evaluations. In particular, we develop a novel Bayesian optimization algorithm, the derivative-enabled knowledge-gradient (d-KG), which is one-step Bayes-optimal, asymptotically consistent, and
provides greater one-step value of information than in the derivative-free setting.
d-KG accommodates noisy and incomplete derivative information, comes in both
sequential and batch forms, and can optionally reduce the computational cost of
inference through automatically selected retention of a single directional derivative.
We also compute the d-KG acquisition function and its gradient using a novel fast
discretization-free technique. We show d-KG provides state-of-the-art performance
compared to a wide range of optimization procedures with and without gradients,
on benchmarks including logistic regression, deep learning, kernel learning, and
k-nearest neighbors.
1 Introduction
Bayesian optimization [3, 17] is able to find global optima with a remarkably small number of
potentially noisy objective function evaluations. Bayesian optimization has thus been particularly
successful for automatic hyperparameter tuning of machine learning algorithms [10, 11, 35, 38],
where objectives can be extremely expensive to evaluate, noisy, and multimodal.
Bayesian optimization supposes that the objective function (e.g., the predictive performance with
respect to some hyperparameters) is drawn from a prior distribution over functions, typically a
Gaussian process (GP), maintaining a posterior as we observe the objective in new places. Acquisition
functions, such as expected improvement [15, 17, 28], upper confidence bound [37], predictive entropy
search [14] or the knowledge gradient [32], determine a balance between exploration and exploitation,
to decide where to query the objective next. By choosing points with the largest acquisition function
values, one seeks to identify a global optimum using as few objective function evaluations as possible.
Bayesian optimization procedures do not generally leverage derivative information, beyond a few
exceptions described in Sect. 2. By contrast, other types of continuous optimization methods [36] use
gradient information extensively. The broader use of gradients for optimization suggests that gradients
should also be quite useful in Bayesian optimization: (1) Gradients inform us about the objective?s
relative value as a function of location, which is well-aligned with optimization. (2) In d-dimensional
problems, gradients provide d distinct pieces of information about the objective?s relative value in each
direction, constituting d + 1 values per query together with the objective value itself. This advantage
is particularly significant for high-dimensional problems. (3) Derivative information is available
in many applications at little additional cost. Recent work [e.g., 23] makes gradient information
available for hyperparameter tuning. Moreover, in the optimization of engineering systems modeled
by partial differential equations, which pre-dates most hyperparameter tuning applications [8], adjoint
methods provide gradients cheaply [16, 29]. And even when derivative information is not readily
available, we can compute approximate derivatives in parallel through finite differences.
In this paper, we explore the "what, when, and why" of Bayesian optimization with derivative
information. We also develop a Bayesian optimization algorithm that effectively leverages gradients in hyperparameter tuning to outperform the state of the art. This algorithm accommodates
incomplete and noisy gradient observations, can be used in both the sequential and batch settings,
and can optionally reduce the computational overhead of inference by selecting the single most
valuable directional derivative to retain. For this purpose, we develop a new acquisition function,
called the derivative-enabled knowledge-gradient (d-KG). d-KG generalizes the previously proposed
batch knowledge gradient method of Wu and Frazier [44] to the derivative setting, and replaces its
approximate discretization-based method for calculating the knowledge-gradient acquisition function
by a novel faster exact discretization-free method. We note that this discretization-free method is also
of interest beyond the derivative setting, as it can be used to improve knowledge-gradient methods for
other problem settings. We also provide a theoretical analysis of the d-KG algorithm, showing (1) it
is one-step Bayes-optimal by construction when derivatives are available; (2) that it provides one-step
value greater than in the derivative-free setting, under mild condition; and (3) that its estimator of the
global optimum is asymptotically consistent.
In numerical experiments we compare with state-of-the-art batch Bayesian optimization algorithms
with and without derivative information, and the gradient-based optimizer BFGS with full gradients.
We assume familiarity with GPs and Bayesian optimization, for which we recommend Rasmussen and
Williams [31] and Shahriari et al. [34] as a review. In Section 2 we begin by describing related work.
In Sect. 3 we describe our Bayesian optimization algorithm exploiting derivative information. In
Sect. 4 we compare the performance of our algorithm with several competing methods on a collection
of synthetic and real problems.
The code for this paper is available at https://github.com/wujian16/Cornell-MOE.
2 Related Work
Osborne et al. [26] proposes fully Bayesian optimization procedures that use derivative observations
to improve the conditioning of the GP covariance matrix. Samples taken near previously observed
points use only the derivative information to update the covariance matrix. Unlike our current work,
derivative information is not fully utilized for optimization in this previous work in the sense that
derivation information does not affect the acquisition function. We directly compare with Osborne
et al. [26] within the KNN benchmark in Sect. 4.2.
Lizotte [22, Sect. 4.2.1 and Sect. 5.2.4] incorporates derivatives into Bayesian optimization, modeling
the derivatives of a GP as in Rasmussen and Williams [31, Sect. 9.4]. Lizotte [22] shows that
Bayesian optimization with the expected improvement (EI) acquisition function and complete gradient
information at each sample can outperform BFGS. Our approach has six key differences: (i) we
allow for noisy and incomplete derivative information; (ii) we develop a novel acquisition function
that outperforms EI with derivatives; (iii) we enable batch evaluations; (iv) we implement and
compare batch Bayesian optimization with derivatives across several acquisition functions, on
benchmarks and new applications such as kernel learning, logistic regression, deep learning and
k-nearest neighbors, further revealing empirically where gradient information will be most valuable;
(v) we provide a theoretical analysis of Bayesian optimization with derivatives; (vi) we develop a
scalable implementation.
Very recently, Koistinen et al. [19] uses GPs with derivative observations for minimum energy path
calculations of atomic rearrangements and Ahmed et al. [1] studies expected improvement with
gradient observations. In Ahmed et al. [1], a randomly selected directional derivative is retained
in each iteration for computational reasons, which is similar to our approach of retaining a single
directional derivative, though differs in its random selection in contrast with our value-of-informationbased selection. Our approach is complementary to these works.
For batch Bayesian optimization, several recent algorithms have been proposed that choose a set of
points to evaluate in each iteration [5, 6, 12, 18, 24, 33, 35, 39]. Within this area, our approach to
handling batch observations is most closely related to the batch knowledge gradient (KG) of Wu and
Frazier [44]. We generalize this approach to the derivative setting, and provide a novel exact method
for computing the knowledge-gradient acquisition function that avoids the discretization used in Wu
and Frazier [44]. This generalization improves speed and accuracy, and is also applicable to other
knowledge gradient methods in continuous search spaces.
Recent advances improving both access to derivatives and computational tractability of GPs make
Bayesian optimization with gradients increasingly practical and timely for discussion.
3 Knowledge Gradient with Derivatives
Sect. 3.1 reviews a general approach to incorporating derivative information into GPs for Bayesian
optimization. Sect. 3.2 introduces a novel acquisition function d-KG, based on the knowledge gradient
approach, which utilizes derivative information. Sect. 3.3 computes this acquisition function and its
gradient efficiently using a novel fast discretization-free approach. Sect. 3.4 shows that this algorithm
provides greater value of information than in the derivative-free setting, is one-step Bayes-optimal,
and is asymptotically consistent when used over a discretized feasible space.
3.1 Derivative Information
Given an expensive-to-evaluate function $f$, we wish to find $\arg\min_{x \in \mathbb{A}} f(x)$, where $\mathbb{A} \subset \mathbb{R}^d$ is the domain of optimization. We place a GP prior over $f : \mathbb{A} \to \mathbb{R}$, which is specified by its mean function $\mu : \mathbb{A} \to \mathbb{R}$ and kernel function $K : \mathbb{A} \times \mathbb{A} \to \mathbb{R}$. We first suppose that for each sample of $x$ we observe the function value and all $d$ partial derivatives, possibly with independent normally distributed noise, and then later discuss relaxation to observing only a single directional derivative.

Since the gradient is a linear operator, the gradient of a GP is also a GP (see also Sect. 9.4 in Rasmussen and Williams [31]), and the function and its gradient follow a multi-output GP with mean function $\tilde{\mu}$ and kernel function $\tilde{K}$ defined below:
$$\tilde{\mu}(x) = \left(\mu(x), \nabla\mu(x)\right)^T, \qquad \tilde{K}(x, x') = \begin{pmatrix} K(x, x') & J(x, x') \\ J(x', x)^T & H(x, x') \end{pmatrix}, \tag{3.1}$$
where $J(x, x') = \left(\frac{\partial K(x, x')}{\partial x'_1}, \cdots, \frac{\partial K(x, x')}{\partial x'_d}\right)$ and $H(x, x')$ is the $d \times d$ Hessian of $K(x, x')$.
When evaluating at a point $x$, we observe the noise-obscured function value $y(x)$ and gradient $\nabla y(x)$. Jointly, these observations form a $(d+1)$-dimensional vector with conditional distribution
$$\left(y(x), \nabla y(x)\right)^T \mid f(x), \nabla f(x) \;\sim\; \mathcal{N}\!\left(\left(f(x), \nabla f(x)\right)^T, \operatorname{diag}(\sigma^2(x))\right), \tag{3.2}$$
where $\sigma^2 : \mathbb{A} \to \mathbb{R}^{d+1}_{\geq 0}$ gives the variance of the observational noise. If $\sigma^2$ is not known, we may
estimate it from data. The posterior distribution is again a GP. We refer to the mean function of this posterior GP after $n$ samples as $\tilde{\mu}^{(n)}(\cdot)$ and its kernel function as $\tilde{K}^{(n)}(\cdot, \cdot)$. Suppose that we have sampled at $n$ points $X := \{x^{(1)}, x^{(2)}, \cdots, x^{(n)}\}$ and observed $(y, \nabla y)^{(1:n)}$, where each observation consists of the function value and the gradient at $x^{(i)}$. Then $\tilde{\mu}^{(n)}(\cdot)$ and $\tilde{K}^{(n)}(\cdot, \cdot)$ are given by
$$\tilde{\mu}^{(n)}(x) = \tilde{\mu}(x) + \tilde{K}(x, X)\left(\tilde{K}(X, X) + \operatorname{diag}\{\sigma^2(x^{(1)}), \cdots, \sigma^2(x^{(n)})\}\right)^{-1}\left((y, \nabla y)^{(1:n)} - \tilde{\mu}(X)\right),$$
$$\tilde{K}^{(n)}(x, x') = \tilde{K}(x, x') - \tilde{K}(x, X)\left(\tilde{K}(X, X) + \operatorname{diag}\{\sigma^2(x^{(1)}), \cdots, \sigma^2(x^{(n)})\}\right)^{-1}\tilde{K}(X, x'). \tag{3.3}$$
If our observations are incomplete, then we remove the rows and columns in $(y, \nabla y)^{(1:n)}$, $\tilde{\mu}(X)$, $\tilde{K}(\cdot, X)$, $\tilde{K}(X, X)$ and $\tilde{K}(X, \cdot)$ of Eq. (3.3) corresponding to partial derivatives (or function values) that were not observed. If we can observe directional derivatives, then we add rows and columns corresponding to these observations, where entries in $\tilde{\mu}(X)$ and $\tilde{K}(\cdot, \cdot)$ are obtained by noting that a directional derivative is a linear transformation of the gradient.
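To make Eqs. (3.1)-(3.3) concrete, here is a minimal sketch of the derivative-augmented posterior mean for a one-dimensional squared-exponential kernel. It is not the authors' Cornell-MOE implementation; the function names, the zero prior mean, and the fixed noise level are illustrative assumptions.

```python
# A minimal sketch of the derivative-augmented GP posterior mean of Eq. (3.3), for d = 1
# and a squared-exponential kernel with lengthscale ell. Prior mean is taken to be zero.
import numpy as np

def rbf(x, xp, ell=1.0):
    return np.exp(-0.5 * (x - xp) ** 2 / ell ** 2)

def joint_kernel(x, xp, ell=1.0):
    """2x2 block of K-tilde: covariances among (f(x), f'(x)) and (f(xp), f'(xp))."""
    k = rbf(x, xp, ell)
    dk_dxp = k * (x - xp) / ell ** 2                       # cov(f(x), f'(xp))
    dk_dx = -dk_dxp                                        # cov(f'(x), f(xp))
    d2k = k * (1.0 / ell ** 2 - (x - xp) ** 2 / ell ** 4)  # cov(f'(x), f'(xp))
    return np.array([[k, dk_dxp], [dk_dx, d2k]])

def posterior_mean(x_star, X, Y, noise=1e-4, ell=1.0):
    """Posterior mean of (f, f') at x_star given Y[i] = (y(X[i]), y'(X[i]))."""
    n = len(X)
    K = np.block([[joint_kernel(X[i], X[j], ell) for j in range(n)] for i in range(n)])
    K += noise * np.eye(2 * n)                             # diagonal noise of Eq. (3.2)
    k_star = np.hstack([joint_kernel(x_star, X[j], ell) for j in range(n)])
    return k_star @ np.linalg.solve(K, Y.reshape(-1))

X = np.array([0.0, 1.0])
Y = np.array([[np.sin(0.0), np.cos(0.0)], [np.sin(1.0), np.cos(1.0)]])
print(posterior_mean(0.5, X, Y))   # rough estimate of (sin(0.5), cos(0.5))
```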
3.2 The d-KG Acquisition Function
We propose a novel Bayesian optimization algorithm to exploit available derivative information, based
on the knowledge gradient approach [9]. We call this algorithm the derivative-enabled knowledge
gradient (d-KG).
The algorithm proceeds iteratively, selecting in each iteration a batch of $q$ points in $\mathbb{A}$ that has a maximum value of information (VOI). Suppose we have observed $n$ points, and recall from Section 3.1 that $\tilde{\mu}^{(n)}(x)$ is the $(d+1)$-dimensional vector giving the posterior mean for $f(x)$ and its $d$ partial derivatives at $x$. Sect. 3.1 discusses how to remove the assumption that all $d+1$ values are provided.

The expected value of $f(x)$ under the posterior distribution is $\tilde{\mu}^{(n)}_1(x)$. If after $n$ samples we were to make an irrevocable (risk-neutral) decision now about the solution to our overarching optimization problem and receive a loss equal to the value of $f$ at the chosen point, we would choose $\arg\min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x)$ and suffer conditional expected loss $\min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x)$. Similarly, if we made this decision after $n+q$ samples our conditional expected loss would be $\min_{x \in \mathbb{A}} \tilde{\mu}^{(n+q)}_1(x)$. Therefore, we define the d-KG factor for a given set of $q$ candidate points $z^{(1:q)}$ as
$$\text{d-KG}(z^{(1:q)}) = \min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x) - E_n\left[\min_{x \in \mathbb{A}} \tilde{\mu}^{(n+q)}_1(x) \,\Big|\, x^{((n+1):(n+q))} = z^{(1:q)}\right], \tag{3.4}$$
where $E_n[\cdot]$ is the expectation taken with respect to the posterior distribution after $n$ evaluations, and the distribution of $\tilde{\mu}^{(n+q)}_1(\cdot)$ under this posterior marginalizes over the observations $\left(y(z^{(1:q)}), \nabla y(z^{(1:q)})\right) = \left(y(z^{(i)}), \nabla y(z^{(i)}) : i = 1, \ldots, q\right)$ upon which it depends. We subsequently refer to Eq. (3.4) as the inner optimization problem.
The d-KG algorithm then seeks to evaluate the batch of points next that maximizes the d-KG factor,
$$\max_{z^{(1:q)} \subset \mathbb{A}} \text{d-KG}(z^{(1:q)}). \tag{3.5}$$
We refer to Eq. (3.5) as the outer optimization problem. d-KG solves the outer optimization problem using the method described in Section 3.3.
The d-KG acquisition function differs from the batch knowledge gradient acquisition function in Wu and Frazier [44] because here the posterior mean $\tilde{\mu}^{(n+q)}_1(x)$ at time $n+q$ depends on $\nabla y(z^{(1:q)})$. This in turn requires calculating the distribution of these gradient observations under the time-$n$ posterior and marginalizing over them. Thus, the d-KG algorithm differs from KG not just in that gradient observations change the posterior, but also in that the prospect of future gradient observations changes the acquisition function. An additional major distinction from Wu and Frazier [44] is that d-KG employs a novel discretization-free method for computing the acquisition function (see Section 3.3).
Fig. 1 illustrates the behavior of d-KG and d-EI on a 1-d example. d-EI generalizes expected
improvement (EI) to batch acquisition with derivative information [22]. d-KG clearly chooses a better
point to evaluate than d-EI.
Including all $d$ partial derivatives can be computationally prohibitive since GP inference scales as $O(n^3(d+1)^3)$. To overcome this challenge while retaining the value of derivative observations, we can include only one directional derivative from each iteration in our inference. d-KG can naturally decide which derivative to include, and can adjust our choice of where to best sample given that we observe more limited information. We define the d-KG acquisition function for observing only the function value and the derivative with direction $\theta$ at $z^{(1:q)}$ as
$$\text{d-KG}(z^{(1:q)}, \theta) = \min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x) - E_n\left[\min_{x \in \mathbb{A}} \tilde{\mu}^{(n+q)}_1(x) \,\Big|\, x^{((n+1):(n+q))} = z^{(1:q)}; \theta\right], \tag{3.6}$$
where conditioning on $\theta$ is here understood to mean that $\tilde{\mu}^{(n+q)}_1(x)$ is the conditional mean of $f(x)$ given $y(z^{(1:q)})$ and $\theta^T \nabla y(z^{(1:q)}) = \left(\theta^T \nabla y(z^{(i)}) : i = 1, \ldots, q\right)$. The full algorithm is as follows.
Algorithm 1 d-KG with Relevant Directional Derivative Detection
1: for t = 1 to N do
2:    $(z^{(1:q)*}, \theta^*) = \arg\max_{z^{(1:q)}, \theta}\, \text{d-KG}(z^{(1:q)}, \theta)$
3:    Augment data with $y(z^{(1:q)*})$ and $\theta^{*T} \nabla y(z^{(1:q)*})$. Update our posterior on $(f(x), \nabla f(x))$.
4: end for
Return $x^* = \arg\min_{x \in \mathbb{A}} \tilde{\mu}^{(Nq)}_1(x)$
Figure 1: KG [44] and EI [39] refer to acquisition functions without gradients. d-KG and d-EI refer to the
counterparts with gradients. The topmost plots show (1) the posterior surfaces of a function sampled from a one
dimensional GP without and with incorporating observations of the gradients. The posterior variance is smaller
if the gradients are incorporated; (2) the utility of sampling each point under the value of information criteria of
KG (d-KG) and EI (d-EI) in both settings. If no derivatives are observed, both KG and EI will query a point with
high potential gain (i.e. a small expected function value). On the other hand, when gradients are observed, d-KG
makes a considerably better sampling decision, whereas d-EI samples essentially the same location as EI. The
plots in the bottom row depict the posterior surface after the respective sample. Interestingly, KG benefits more
from observing the gradients than EI (the last two plots): d-KG's observation yields accurate knowledge of the
optimum's location, while d-EI's observation leaves substantial uncertainty.
3.3 Efficient Exact Computation of d-KG
Calculating and maximizing d-KG is difficult when $\mathbb{A}$ is continuous because the term $\min_{x \in \mathbb{A}} \tilde{\mu}^{(n+q)}_1(x)$ in Eq. (3.6) requires optimizing over a continuous domain, and then we must integrate this optimal value through its dependence on $y(z^{(1:q)})$ and $\theta^T \nabla y(z^{(1:q)})$. Previous work on the knowledge gradient in continuous domains [30, 32, 44] approaches this computation by taking minima within expectations not over the full domain $\mathbb{A}$ but over a discretized finite approximation. This approach supports analytic integration in Scott et al. [32] and Poloczek et al. [30], and a sampling-based scheme in Wu and Frazier [44]. However, the discretization in this approach introduces error and scales poorly with the dimension of $\mathbb{A}$.
Here we propose a novel method for calculating an unbiased estimator of the gradient of d-KG which
we then use within stochastic gradient ascent to maximize d-KG. This method avoids discretization,
and thus is exact. It also improves speed significantly over a discretization-based scheme.
In Section A of the supplement we show that the d-KG factor can be expressed as
$$\text{d-KG}(z^{(1:q)}, \theta) = E_n\left[\min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x) - \min_{x \in \mathbb{A}}\left(\tilde{\mu}^{(n)}_1(x) + \tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})W\right)\right], \tag{3.7}$$
where $\tilde{\mu}^{(n)}$ is the mean function of $(f(x), \theta^T \nabla f(x))$ after $n$ evaluations, $W$ is a $2q$-dimensional standard normal random column vector and $\tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})$ is the first row of a $2 \times 2q$ dimensional matrix, which is related to the kernel function of $(f(x), \theta^T \nabla f(x))$ after $n$ evaluations with an exact form specified in (A.2) of the supplement.
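For intuition, the expectation in Eq. (3.7) can be estimated by simple Monte Carlo. The sketch below does this over a coarse discretization of $\mathbb{A}$ purely for illustration (the paper's actual method is discretization-free), and `mu1` and `sigma1` are toy stand-ins for the posterior quantities $\tilde{\mu}^{(n)}_1$ and $\tilde{\sigma}^{(n)}_1$.

```python
# Hypothetical Monte Carlo estimate of Eq. (3.7); mu1 and sigma1 below are toy
# stand-ins, not the posterior quantities computed by the actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
X_grid = np.linspace(0.0, 1.0, 201)   # coarse discretization of A (illustration only)
q = 2                                  # batch size

def mu1(x):
    return np.sin(3.0 * x)             # toy posterior mean of f

def sigma1(x, z):
    # Toy (len(x), 2q) matrix: decaying influence of each candidate z_i on x,
    # duplicated (and damped) for the q directional-derivative entries.
    s = np.exp(-10.0 * (x[:, None] - z[None, :]) ** 2)
    return np.hstack([s, 0.1 * s])

def dkg_mc(z, n_samples=2000):
    base = mu1(X_grid).min()
    W = rng.standard_normal((n_samples, 2 * q))
    # For each sample of W, minimize mu1(x) + sigma1(x)W over the grid.
    inner = (mu1(X_grid)[None, :] + (sigma1(X_grid, z) @ W.T).T).min(axis=1)
    return base - inner.mean()

print(dkg_mc(np.array([0.2, 0.8])))
```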
Under sufficient regularity conditions [21], one can interchange the gradient and expectation operators,
$$\nabla\,\text{d-KG}(z^{(1:q)}, \theta) = -E_n\left[\nabla \min_{x \in \mathbb{A}}\left(\tilde{\mu}^{(n)}_1(x) + \tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})W\right)\right],$$
where here the gradient is with respect to $z^{(1:q)}$ and $\theta$. If $(x, z^{(1:q)}, \theta) \mapsto \tilde{\mu}^{(n)}_1(x) + \tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})W$ is continuously differentiable and $\mathbb{A}$ is compact, the envelope theorem [25] implies
$$\nabla\,\text{d-KG}(z^{(1:q)}, \theta) = -E_n\left[\nabla\left(\tilde{\mu}^{(n)}_1(x^*(W)) + \tilde{\sigma}^{(n)}_1(x^*(W), \theta, z^{(1:q)})W\right)\right], \tag{3.8}$$
where $x^*(W) \in \arg\min_{x \in \mathbb{A}} \tilde{\mu}^{(n)}_1(x) + \tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})W$. To find $x^*(W)$, one can utilize a multi-start gradient descent method since the gradient is analytically available for the objective $\tilde{\mu}^{(n)}_1(x) + \tilde{\sigma}^{(n)}_1(x, \theta, z^{(1:q)})W$. Practically, we find that the learning rate of $l^{\text{inner}}_t = 0.03/t^{0.7}$ is robust for finding $x^*(W)$.
The expression (3.8) implies that $-\nabla\left(\tilde{\mu}^{(n)}_1(x^*(W)) + \tilde{\sigma}^{(n)}_1(x^*(W), \theta, z^{(1:q)})W\right)$ is an unbiased estimator of $\nabla\,\text{d-KG}(z^{(1:q)}, \theta)$, when the regularity conditions it assumes hold. We can use this unbiased gradient estimator within stochastic gradient ascent [13], optionally with multiple starts, to solve the outer optimization problem $\arg\max_{z^{(1:q)}, \theta}\, \text{d-KG}(z^{(1:q)}, \theta)$ and can use a similar approach when observing full gradients to solve (3.5). For the outer optimization problem, we find that the learning rate of $l^{\text{outer}}_t = 10\, l^{\text{inner}}_t$ performs well over all the benchmarks we tested.
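A sketch of that outer loop under stated assumptions: multi-start stochastic gradient ascent with the step sizes quoted above, where `grad_estimate` is a toy noisy-gradient oracle standing in for the estimator derived from Eq. (3.8), and the domain is taken to be the unit box.

```python
# A minimal multi-start stochastic gradient ascent loop using the paper's step sizes;
# objective and grad_estimate are toy stand-ins for d-KG and its estimator (Eq. 3.8).
import numpy as np

rng = np.random.default_rng(1)

def objective(z):
    return -np.sum((z - 0.3) ** 2)     # toy surrogate for d-KG(z)

def grad_estimate(z):
    return -2.0 * (z - 0.3) + 0.1 * rng.standard_normal(z.shape)  # unbiased, noisy

def sga(z0, steps=200):
    z = z0.copy()
    for t in range(1, steps + 1):
        lr = 10 * 0.03 / t ** 0.7      # l_t^outer = 10 * l_t^inner = 10 * 0.03 / t^0.7
        z = np.clip(z + lr * grad_estimate(z), 0.0, 1.0)  # project back onto A = [0,1]^d
    return z

starts = [rng.uniform(0.0, 1.0, size=2) for _ in range(5)]  # multiple starts
candidates = [sga(z0) for z0 in starts]
print(max(candidates, key=objective))
```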
Bayesian Treatment of Hyperparameters. We adopt a fully Bayesian treatment of hyperparameters similar to Snoek et al. [35]. We draw $M$ samples of hyperparameters $\phi^{(i)}$ for $1 \leq i \leq M$ via the emcee package [7] and average our acquisition function across them to obtain
$$\text{d-KG}^{\text{Integrated}}(z^{(1:q)}, \theta) = \frac{1}{M}\sum_{i=1}^{M} \text{d-KG}(z^{(1:q)}, \theta; \phi^{(i)}), \tag{3.9}$$
where the additional argument $\phi^{(i)}$ in d-KG indicates that the computation is performed conditioning on hyperparameters $\phi^{(i)}$. In our experiments, we found this method to be computationally efficient and robust, although a more principled treatment of unknown hyperparameters within the knowledge gradient framework would instead marginalize over them when computing $\tilde{\mu}^{(n+q)}(x)$ and $\tilde{\sigma}^{(n)}$.
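Eq. (3.9) amounts to a plain average; a hedged sketch (with an ad hoc acquisition and hyperparameter draws in place of emcee posterior samples) looks like this:

```python
# Sketch of Eq. (3.9): averaging an acquisition function over M hyperparameter
# samples. The acquisition and the sampling of phi here are illustrative only.
import numpy as np

def dkg(z, theta, phi):
    # Placeholder acquisition conditioned on hyperparameters phi (not the real d-KG).
    return np.exp(-phi * np.sum((z - 0.5) ** 2)) * (theta @ theta)

def dkg_integrated(z, theta, phi_samples):
    return np.mean([dkg(z, theta, phi) for phi in phi_samples])

phi_samples = np.random.default_rng(2).uniform(0.5, 2.0, size=10)  # M = 10 draws
z = np.array([0.4, 0.6])
theta = np.array([1.0, 0.0])   # a unit direction
print(dkg_integrated(z, theta, phi_samples))
```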
3.4 Theoretical Analysis
Here we present three theoretical results giving insight into the properties of d-KG, with proofs in the
supplementary material. For the sake of simplicity, we suppose all partial derivatives are provided
to d-KG. Similar results hold for d-KG with relevant directional derivative detection. We begin by
stating that the value of information (VOI) obtained by d-KG exceeds the VOI that can be achieved
in the derivative-free setting.
Proposition 1. Given identical posteriors $\tilde{\mu}^{(n)}$,
$$\text{d-KG}(z^{(1:q)}) \geq \text{KG}(z^{(1:q)}),$$
where KG is the batch knowledge gradient acquisition function without gradients proposed by Wu
and Frazier [44]. This inequality is strict under mild conditions (see Sect. B in the supplement).
Next, we show that d-KG is one-step Bayes-optimal by construction.
Proposition 2. If only one iteration is left and we can observe both function values and partial
derivatives, then d-KG is Bayes-optimal among all feasible policies.
As a complement to the one-step optimality, we show that d-KG is asymptotically consistent if the
feasible set A is finite. Asymptotic consistency means that d-KG will choose the correct solution
when the number of samples goes to infinity.
Theorem 1. If the function f (x) is sampled from a GP with known hyperparameters, the d-KG
algorithm is asymptotically consistent, i.e.
$$\lim_{N \to \infty} f(x^*(\text{d-KG}, N)) = \min_{x \in \mathbb{A}} f(x)$$
almost surely, where $x^*(\text{d-KG}, N)$ is the point recommended by d-KG after $N$ iterations.
4 Experiments
We evaluate the performance of the proposed algorithm d-KG with relevant directional derivative
detection (Algorithm 1) on six standard synthetic benchmarks (see Fig. 2). Moreover, we examine its
ability to tune the hyperparameters for the weighted k-nearest neighbor metric, logistic regression,
deep learning, and for a spectral mixture kernel (see Fig. 3).
We provide an easy-to-use Python package with the core written in C++, available at https://github.com/wujian16/Cornell-MOE.
We compare d-KG to several state-of-the-art methods: (1) The batch expected improvement method
(EI) of Wang et al. [39] that does not utilize derivative information and an extension of EI that incorporates derivative information denoted d-EI. d-EI is similar to Lizotte [22] but handles incomplete
gradients and supports batches. (2) The batch GP-UCB-PE method of Contal et al. [5] that does
not utilize derivative information, and an extension that does. (3) The batch knowledge gradient
algorithm without derivative information (KG) of Wu and Frazier [44]. Moreover, we generalize the
method of Osborne et al. [26] to batches and evaluate it on the KNN benchmark. All of the above
algorithms allow incomplete gradient observations. In benchmarks that provide the full gradient, we
additionally compare to the gradient-based method L-BFGS-B provided in scipy. We suppose that
the objective function $f$ is drawn from a Gaussian process $GP(\mu, \Sigma)$, where $\mu$ is a constant mean function and $\Sigma$ is the squared exponential kernel. We sample $M = 10$ sets of hyperparameters by the
emcee package [7].
Recall that the immediate regret is defined as the loss with respect to a global optimum. The plots for
synthetic benchmark functions, shown in Fig. 2, report the log10 of immediate regret of the solution
that each algorithm would pick as a function of the number of function evaluations. Plots for other
experiments report the objective value of the solution instead of the immediate regret. Error bars give
the mean value plus and minus one standard deviation. The number of replications is stated in each
benchmark's description.
4.1 Synthetic Test Functions
We evaluate all methods on six test functions chosen from Bingham [2]. To demonstrate the ability
to benefit from noisy derivative information, we sample additive normally distributed noise with
zero mean and standard deviation $\sigma = 0.5$ for both the objective function and its partial derivatives. $\sigma$ is unknown to the algorithms and must be estimated from observations. We also investigate
how incomplete gradient observations affect algorithm performance. We also experiment with two
different batch sizes: we use a batch size q = 4 for the Branin, Rosenbrock, and Ackley functions;
otherwise, we use a batch size q = 8. Fig. 2 summarizes the experimental results.
Functions with Full Gradient Information. For 2d Branin on domain $[-5, 15] \times [0, 15]$, 5d Ackley on $[-2, 2]^5$, and 6d Hartmann function on $[0, 1]^6$, we assume that the full gradient is available.
Looking at the results for the Branin function in Fig. 2, d-KG outperforms its competitors after 40
function evaluations and obtains the best solution overall (within the limit of function evaluations).
BFGS makes faster progress than the Bayesian optimization methods during the first 20 evaluations,
but subsequently stalls and fails to obtain a competitive solution. On the Ackley function d-EI makes
fast progress during the first 50 evaluations but also fails to make subsequent progress. Conversely,
d-KG requires about 50 evaluations to improve on the performance of d-EI, after which d-KG
achieves the best overall performance again. For the Hartmann function d-KG clearly dominates its
competitors over all function evaluations.
Functions with Incomplete Derivative Information. For the 3d Rosenbrock function on $[-2, 2]^3$ we only provide a noisy observation of the third partial derivative. Both EI and d-EI get stuck early. d-KG on the other hand finds a near-optimal solution after ~50 function evaluations; KG, without derivatives, catches up after ~75 evaluations and performs comparably afterwards. The 4d Levy benchmark on $[-10, 10]^4$, where the fourth partial derivative is observable with noise, shows a
different ordering of the algorithms: EI has the best performance, beating even its formulation that
uses derivative information. One explanation could be that the smoothness and regular shape of the
function surface benefits this acquisition criterion. For the 8d Cosine mixture function on $[-1, 1]^8$ we
provide two noisy partial derivatives. d-KG and UCB with derivatives perform better than EI-type
criterion, and achieve the best performances, with d-KG beating UCB with derivatives slightly.
In general, we see that d-KG successfully exploits noisy derivative information and has the best
overall performance.
4.2 Real-World Test Functions
Weighted k-Nearest Neighbor. Suppose a cab company wishes to predict the duration of trips.
Clearly, the duration not only depends on the endpoints of the trip, but also on the day and time.
Figure 2: The average performance of 100 replications (the log10 of the immediate regret vs. the number of
function evaluations). d-KG performs significantly better than its competitors for all benchmarks except Levy
function. In Branin and Hartmann, we also plot black lines, which show the performance of BFGS.
In this benchmark we tune a weighted k-nearest neighbor (KNN) metric to optimize predictions of these durations, based on historical data. A trip is described by the pick-up time $t$, the pick-up location $(p_1, p_2)$, and the drop-off point $(d_1, d_2)$. Then the estimate of the duration is obtained as a weighted average over all trips $D_{m,t}$ in our database that happened in the time interval $t \pm m$ minutes, where $m$ is a tunable hyperparameter:
$$\text{Prediction}(t, p_1, p_2, d_1, d_2) = \Big(\sum_{i \in D_{m,t}} \text{duration}_i \cdot \text{weight}(i)\Big) \Big/ \Big(\sum_{i \in D_{m,t}} \text{weight}(i)\Big).$$
The weight of trip $i \in D_{m,t}$ in this prediction is given by
$$\text{weight}(i) = \left((t - t_i)^2/l_1^2 + (p_1 - p_1^i)^2/l_2^2 + (p_2 - p_2^i)^2/l_3^2 + (d_1 - d_1^i)^2/l_4^2 + (d_2 - d_2^i)^2/l_5^2\right)^{-1},$$
where $(t_i, p_1^i, p_2^i, d_1^i, d_2^i)$ are the respective parameter values for trip $i$, and $(l_1, l_2, l_3, l_4, l_5)$ are tunable hyperparameters. Thus, we have 6 hyperparameters to tune: $(m, l_1, l_2, l_3, l_4, l_5)$. We choose $m$ in $[30, 200]$, $l_1^2$ in $[10^1, 10^8]$, and $l_2^2, l_3^2, l_4^2, l_5^2$ each in $[10^{-8}, 10^{-1}]$.
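A direct sketch of this predictor on a toy in-memory trip table (the trip values below are invented for illustration; the real benchmark uses the NYC data set):

```python
# A sketch of the weighted-KNN predictor defined above, on a toy in-memory trip table.
import numpy as np

# Each row: (t, p1, p2, d1, d2, duration); values are made up for illustration.
trips = np.array([
    [600.0, 40.75, -73.99, 40.73, -74.00, 14.0],
    [615.0, 40.76, -73.98, 40.72, -74.01, 18.0],
    [900.0, 40.70, -73.95, 40.78, -73.96, 25.0],
])

def predict(t, p1, p2, d1, d2, m, l):
    """Weighted average of durations over trips within t +/- m minutes."""
    window = trips[np.abs(trips[:, 0] - t) <= m]
    if window.size == 0:
        return float("nan")
    x = np.array([t, p1, p2, d1, d2])
    sq = (window[:, :5] - x) ** 2 / np.asarray(l) ** 2  # (t - t_i)^2 / l_1^2 + ...
    w = 1.0 / (sq.sum(axis=1) + 1e-12)                  # weight(i), guarded against 0
    return float(w @ window[:, 5] / w.sum())

print(predict(610.0, 40.75, -73.99, 40.73, -74.00, m=60, l=[30.0, 0.01, 0.01, 0.01, 0.01]))
```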
We use the yellow cab NYC public data set from June 2016, sampling 10000 records from June 1-25 as training data and 1000 trip records from June 26-30 as validation data. Our test criterion is the root mean squared error (RMSE), for which we compute the partial derivatives on the validation dataset with respect to the hyperparameters $(l_1, l_2, l_3, l_4, l_5)$, while the hyperparameter $m$ is not
differentiable. In Fig. 3 we see that d-KG overtakes the alternatives, and that UCB and KG acquisition
functions also benefit from exploiting derivative information.
Kernel Learning. Spectral mixture kernels [40] can be used for flexible kernel learning to enable
long-range extrapolation. These kernels are obtained by modeling a spectral density by a mixture of
Gaussians. While any stationary kernel can be described by a spectral mixture kernel with a particular
setting of its hyperparameters, initializing and learning these parameters can be difficult. Although
we have access to an analytic closed form of the (marginal likelihood) objective, this function is (i)
expensive to evaluate and (ii) highly multimodal. Moreover, (iii) derivative information is available.
Thus, learning flexible kernel functions is a perfect candidate for our approach.
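For reference, a one-dimensional spectral mixture kernel with $Q$ components has the closed form $k(\tau) = \sum_{q=1}^{Q} w_q \exp(-2\pi^2\tau^2 v_q)\cos(2\pi\tau\mu_q)$ [40]. A minimal sketch with placeholder (not learned) parameters:

```python
# A minimal sketch of a 1-d spectral mixture kernel (Wilson & Adams [40]):
# k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q).
import numpy as np

def sm_kernel(tau, weights, means, variances):
    tau = np.asarray(tau, dtype=float)
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2.0 * np.pi ** 2 * tau ** 2 * v) * np.cos(2.0 * np.pi * tau * mu)
    return k

# A 2-component kernel as in the airline experiment; the numbers below are
# arbitrary placeholders, not values learned from the data.
weights, means, variances = [1.0, 0.5], [0.1, 0.9], [0.05, 0.2]
print(sm_kernel([0.0, 0.5, 1.0], weights, means, variances))
```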
The task is to train a 2-component spectral mixture kernel on an airline data set [40]. We must determine the mixture weights, means, and variances, for each of the two Gaussians. Fig. 3 summarizes
performance for batch size q = 8. BFGS is sensitive to its initialization and human intervention and
is often trapped in local optima. d-KG, on other hand, more consistently finds a good solution, and
obtains the best solution of all algorithms (within the step limit). Overall, we observe that gradient
information is highly valuable in performing this kernel learning task.
Logistic Regression and Deep Learning. We tune logistic regression and a feedforward neural
network with 2 hidden layers on the MNIST dataset [20], a standard classification task for handwritten
digits. The training set contains 60000 images, the test set 10000. We tune 4 hyperparameters for
logistic regression: the $\ell_2$ regularization parameter from 0 to 1, learning rate from 0 to 1, mini-batch
size from 20 to 2000 and training epochs from 5 to 50. The first derivatives of the first two parameters
can be obtained via the technique of Maclaurin et al. [23]. For the neural network, we additionally
tune the number of hidden units in [50, 500].
Fig. 3 reports the mean and standard deviation of the mean cross-entropy loss (or its log scale)
on the test set for 20 replications. d-KG outperforms the other approaches, which suggests that
derivative information is helpful. Our algorithm proves its value in tuning a deep neural network,
which harmonizes with research computing the gradients of hyperparameters [23, 27].
Figure 3: Results for the weighted KNN benchmark, the spectral mixture kernel benchmark, logistic regression
and deep neural network (from left to right), all with batch size 8 and averaged over 20 replications.
5 Discussion
Bayesian optimization is successfully applied to low dimensional problems where we wish to find a
good solution with a very small number of objective function evaluations. We considered several such
benchmarks, as well as logistic regression, deep learning, kernel learning, and k-nearest neighbor
applications. We have shown that in this context derivative information can be extremely useful: we
can greatly decrease the number of objective function evaluations, especially when building upon
the knowledge gradient acquisition function, even when derivative information is noisy and only
available for some variables.
Bayesian optimization is increasingly being used to automate parameter tuning in machine learning,
where objective functions can be extremely expensive to evaluate. For example, the parameters to
learn through Bayesian optimization could even be the hyperparameters of a deep neural network. We
expect derivative information with Bayesian optimization to help enable such promising applications,
moving us towards fully automatic and principled approaches to statistical machine learning.
In the future, one could combine derivative information with flexible deep projections [43], and
recent advances in scalable Gaussian processes for O(n) training and O(1) test time predictions
[41, 42]. These steps would help make Bayesian optimization applicable to a much wider range
of problems, wherever standard gradient based optimizers are used (even when we have analytic objective functions that are not expensive to evaluate), while retaining faster convergence and
robustness to multimodality.
Acknowledgments
Wilson was partially supported by NSF IIS-1563887. Frazier, Poloczek, and Wu were partially
supported by NSF CAREER CMMI-1254298, NSF CMMI-1536895, NSF IIS-1247696, AFOSR
FA9550-12-1-0200, AFOSR FA9550-15-1-0038, and AFOSR FA9550-16-1-0046.
References
[1] M. O. Ahmed, B. Shahriari, and M. Schmidt. Do we need "harmless" Bayesian optimization and "first-order" Bayesian optimization? In NIPS BayesOpt, 2016.
[2] D. Bingham. Optimization test problems. http://www.sfu.ca/~ssurjano/optimization.html, 2015.
[3] E. Brochu, V. M. Cora, and N. De Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[4] NYC Taxi & Limousine Commission. NYC Trip Record Data. http://www.nyc.gov/html/tlc/, June 2016. Last accessed on 2016-10-10.
[5] E. Contal, D. Buffoni, A. Robicquet, and N. Vayatis. Parallel Gaussian process optimization with upper confidence bound and pure exploration. In Machine Learning and Knowledge Discovery in Databases, pages 225-240. Springer, 2013.
[6] T. Desautels, A. Krause, and J. W. Burdick. Parallelizing exploration-exploitation tradeoffs in Gaussian process bandit optimization. The Journal of Machine Learning Research, 15(1):3873-3923, 2014.
[7] D. Foreman-Mackey, D. W. Hogg, D. Lang, and J. Goodman. emcee: the MCMC hammer. Publications of the Astronomical Society of the Pacific, 125(925):306, 2013.
[8] A. Forrester, A. Sobester, and A. Keane. Engineering design via surrogate modelling: a practical guide. John Wiley & Sons, 2008.
[9] P. Frazier, W. Powell, and S. Dayanik. The knowledge-gradient policy for correlated normal beliefs. INFORMS Journal on Computing, 21(4):599-613, 2009.
[10] J. R. Gardner, M. J. Kusner, Z. E. Xu, K. Q. Weinberger, and J. Cunningham. Bayesian optimization with inequality constraints. In International Conference on Machine Learning, pages 937-945, 2014.
[11] M. Gelbart, J. Snoek, and R. Adams. Bayesian optimization with unknown constraints. In International Conference on Machine Learning, pages 250-259, Corvallis, Oregon, 2014.
[12] J. Gonzalez, Z. Dai, P. Hennig, and N. Lawrence. Batch Bayesian optimization via local penalization. In AISTATS, pages 648-657, 2016.
[13] J. Harold, G. Kushner, and G. Yin. Stochastic approximation and recursive algorithm and applications. Springer, 2003.
[14] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In Advances in Neural Information Processing Systems, pages 918-926, 2014.
[15] D. Huang, T. T. Allen, W. I. Notz, and N. Zeng. Global Optimization of Stochastic Black-Box Systems via Sequential Kriging Meta-Models. Journal of Global Optimization, 34(3):441-466, 2006.
[16] A. Jameson. Re-engineering the design process through computation. Journal of Aircraft, 36(1):36-50, 1999.
[17] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455-492, 1998.
[18] T. Kathuria, A. Deshpande, and P. Kohli. Batched Gaussian process bandit optimization via determinantal point processes. In Advances in Neural Information Processing Systems, pages 4206-4214, 2016.
[19] O.-P. Koistinen, E. Maras, A. Vehtari, and H. Jónsson. Minimum energy path calculations with Gaussian process regression. Nanosystems: Physics, Chemistry, Mathematics, 7(6), 2016.
[20] Y. LeCun, C. Cortes, and C. J. Burges. The MNIST database of handwritten digits, 1998.
[21] P. L'Ecuyer. Note: On the interchange of derivative and expectation for likelihood ratio derivative estimators. Management Science, 41(4):738-747, 1995.
[22] D. J. Lizotte. Practical Bayesian optimization. PhD thesis, University of Alberta, 2008.
[23] D. Maclaurin, D. Duvenaud, and R. P. Adams. Gradient-based hyperparameter optimization through reversible learning. In International Conference on Machine Learning, pages 2113-2122, 2015.
[24] S. Marmin, C. Chevalier, and D. Ginsbourger. Efficient batch-sequential Bayesian optimization with moments of truncated Gaussian vectors. arXiv preprint arXiv:1609.02700, 2016.
[25] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583-601, 2002.
[26] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pages 1-15. Citeseer, 2009.
[27] F. Pedregosa. Hyperparameter optimization with approximate gradient. In International Conference on Machine Learning, pages 737-746, 2016.
[28] V. Picheny, D. Ginsbourger, Y. Richet, and G. Caplin. Quantile-based optimization of noisy computer experiments with tunable precision. Technometrics, 55(1):2-13, 2013.
[29] R.-É. Plessix. A review of the adjoint-state method for computing the gradient of a functional with geophysical applications. Geophysical Journal International, 167(2):495-503, 2006.
[30] M. Poloczek, J. Wang, and P. I. Frazier. Multi-information source optimization. In Advances in Neural Information Processing Systems, 2017. Accepted for publication. arXiv preprint 1603.00389.
[31] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006. ISBN 0-262-18253-X.
[32] W. Scott, P. Frazier, and W. Powell. The correlated knowledge gradient for simulation optimization of continuous parameters using Gaussian process regression. SIAM Journal on Optimization, 21(3):996-1026, 2011.
[33] A. Shah and Z. Ghahramani. Parallel predictive entropy search for batch global optimization of expensive objective functions. In Advances in Neural Information Processing Systems, pages 3312-3320, 2015.
[34] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148-175, 2016.
[35] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951-2959, 2012.
[36] J. Snyman. Practical mathematical optimization: an introduction to basic optimization theory and classical and new gradient-based algorithms, volume 97. Springer Science & Business Media, 2005.
[37] N. Srinivas, A. Krause, M. Seeger, and S. M. Kakade. Gaussian process optimization in the bandit setting: No regret and experimental design. In International Conference on Machine Learning, pages 1015-1022, 2010.
[38] K. Swersky, J. Snoek, and R. P. Adams. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems, pages 2004-2012, 2013.
[39] J. Wang, S. C. Clark, E. Liu, and P. I. Frazier. Parallel Bayesian global optimization of expensive functions. arXiv preprint arXiv:1602.05149, 2016.
[40] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning, pages 1067-1075, 2013.
[41] A. G. Wilson and H. Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In International Conference on Machine Learning, pages 1775-1784, 2015.
[42] A. G. Wilson, C. Dann, and H. Nickisch. Thoughts on massively scalable Gaussian processes. arXiv preprint arXiv:1511.01870, 2015.
[43] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Deep kernel learning. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, pages 370-378, 2016.
[44] J. Wu and P. Frazier. The parallel knowledge gradient method for batch Bayesian optimization. In Advances in Neural Information Processing Systems, pages 3126-3134, 2016.
Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation
Yuhuai Wu* (University of Toronto, Vector Institute) ywu@cs.toronto.edu
Elman Mansimov* (New York University) mansimov@cs.nyu.edu
Roger Grosse (University of Toronto, Vector Institute) rgrosse@cs.toronto.edu
Shun Liao (University of Toronto, Vector Institute) sliao3@cs.toronto.edu
Jimmy Ba (University of Toronto, Vector Institute) jimmy@psi.toronto.edu
* Equal contribution.
Abstract
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to
the curvature. We extend the framework of natural policy gradient and propose
to optimize both the actor and the critic using Kronecker-factored approximate
curvature (K-FAC) with trust region; hence we call our method Actor Critic using
Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this
is the first scalable trust region natural gradient method for actor-critic methods.
It is also the method that learns non-trivial tasks in continuous control as well as
discrete control policies directly from raw pixel inputs. We tested our approach
across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher
rewards and a 2- to 3-fold improvement in sample efficiency on average, compared
to previous state-of-the-art on-policy actor-critic methods. Code is available at
https://github.com/openai/baselines.
1 Introduction
Agents using deep reinforcement learning (deep RL) methods have shown tremendous success in
learning complex behaviour skills and solving challenging control tasks in high-dimensional raw
sensory state-space [24, 17, 12]. Deep RL methods make use of deep neural networks to represent
control policies. Despite the impressive results, these neural networks are still trained using simple
variants of stochastic gradient descent (SGD). SGD and related first-order methods explore weight
space inefficiently. It often takes days for the current deep RL methods to master various continuous
and discrete control tasks. Previously, a distributed approach was proposed [17] to reduce training
time by executing multiple agents to interact with the environment simultaneously, but this leads to
rapidly diminishing returns of sample efficiency as the degree of parallelism increases.
Sample efficiency is a dominant concern in RL; robotic interaction with the real world is typically
scarcer than computation time, and even in simulated environments the cost of simulation often
dominates that of the algorithm itself. One way to effectively reduce the sample size is to use
more advanced optimization techniques for gradient updates. Natural policy gradient [10] uses the
technique of natural gradient descent [1] to perform gradient updates. Natural gradient methods
∗ Equal contribution.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1: six panels (BeamRider, Breakout, Pong, Qbert, Seaquest, SpaceInvaders) plotting episode rewards against number of timesteps for ACKTR, A2C, and TRPO.]
Figure 1: Performance comparisons on six standard Atari games trained for 10 million timesteps (1 timestep
equals 4 frames). The shaded region denotes the standard deviation over 2 random seeds.
follow the steepest descent direction that uses the Fisher metric as the underlying metric, a metric
that is based not on the choice of coordinates but rather on the manifold (i.e., the surface).
However, the exact computation of the natural gradient is intractable because it requires inverting the
Fisher information matrix. Trust-region policy optimization (TRPO) [21] avoids explicitly storing
and inverting the Fisher matrix by using Fisher-vector products [20]. However, it typically requires
many steps of conjugate gradient to obtain a single parameter update, and accurately estimating the
curvature requires a large number of samples in each batch; hence TRPO is impractical for large
models and suffers from sample inefficiency.
Kronecker-factored approximate curvature (K-FAC) [15, 6] is a scalable approximation to natural
gradient. It has been shown to speed up training of various state-of-the-art large-scale neural
networks [2] in supervised learning by using larger mini-batches. Unlike TRPO, each update is
comparable in cost to an SGD update, and it keeps a running average of curvature information,
allowing it to use small batches. This suggests that applying K-FAC to policy optimization could
improve the sample efficiency of the current deep RL methods.
In this paper, we introduce the actor-critic using Kronecker-factored trust region (ACKTR; pronounced "actor") method, a scalable trust-region optimization algorithm for actor-critic methods. The
proposed algorithm uses a Kronecker-factored approximation to natural policy gradient that allows
the covariance matrix of the gradient to be inverted efficiently. To best of our knowledge, we are also
the first to extend the natural policy gradient algorithm to optimize value functions via Gauss-Newton
approximation. In practice, the per-update computation cost of ACKTR is only 10% to 25% higher
than SGD-based methods. Empirically, we show that ACKTR substantially improves both sample
efficiency and the final performance of the agent in the Atari environments [4] and the MuJoCo [26]
tasks compared to the state-of-the-art on-policy actor-critic method A2C [17] and the famous trust
region optimizer TRPO [21].
We make our source code available online at https://github.com/openai/baselines.
2 Background
2.1 Reinforcement learning and actor-critic methods
We consider an agent interacting with an infinite-horizon, discounted Markov Decision Process
$(\mathcal{X}, \mathcal{A}, \gamma, P, r)$. At time $t$, the agent chooses an action $a_t \in \mathcal{A}$ according to its policy $\pi_\theta(a|s_t)$ given
its current state $s_t \in \mathcal{X}$. The environment in turn produces a reward $r(s_t, a_t)$ and transitions to the
next state $s_{t+1}$ according to the transition probability $P(s_{t+1}|s_t, a_t)$. The goal of the agent is to
maximize the expected $\gamma$-discounted cumulative return $J(\theta) = \mathbb{E}_\pi[R_t] = \mathbb{E}_\pi[\sum_{i \ge 0} \gamma^i r(s_{t+i}, a_{t+i})]$
with respect to the policy parameters $\theta$. Policy gradient methods [28, 25] directly parameterize a
policy $\pi_\theta(a|s_t)$ and update parameter $\theta$ so as to maximize the objective $J(\theta)$. In its general form,
the policy gradient is defined as [22],
$$\nabla_\theta J(\theta) = \mathbb{E}_\pi\Big[\sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t|s_t)\Big],$$
where $\Psi_t$ is often chosen to be the advantage function $A^\pi(s_t, a_t)$, which provides a relative measure
of value of each action $a_t$ at a given state $s_t$. There is an active line of research [22] on designing an
advantage function that provides both low-variance and low-bias gradient estimates. As this is not
the focus of our work, we simply follow the asynchronous advantage actor critic (A3C) method [17]
and define the advantage function as the k-step returns with function approximation,
$$A^\pi(s_t, a_t) = \sum_{i=0}^{k-1} \gamma^i r(s_{t+i}, a_{t+i}) + \gamma^k V_\phi^\pi(s_{t+k}) - V_\phi^\pi(s_t),$$
where $V_\phi^\pi(s_t)$ is the value network, which provides an estimate of the expected sum of rewards from
the given state following policy $\pi$, $V_\phi^\pi(s_t) = \mathbb{E}_\pi[R_t]$. To train the parameters of the value network,
we again follow [17] by performing temporal difference updates, so as to minimize the squared
difference between the bootstrapped k-step returns $\hat{R}_t$ and the prediction value, $\frac{1}{2}\|\hat{R}_t - V_\phi^\pi(s_t)\|^2$.
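As a concrete illustration, the following minimal NumPy sketch (ours, not the authors' code; the single-segment layout and variable names are assumptions) computes the bootstrapped returns and advantages for one rollout segment, where each state uses the remaining steps of the segment, as in common A3C-style implementations:

```python
import numpy as np

def k_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Compute bootstrapped k-step returns R_t and advantages A(s_t, a_t).

    rewards: array of r(s_t, a_t) for t = 0..k-1 from one rollout segment.
    values: array of V(s_t) predicted by the critic for t = 0..k-1.
    bootstrap_value: critic estimate V(s_k) for the state after the segment.
    """
    k = len(rewards)
    returns = np.empty(k)
    running = bootstrap_value
    # Work backwards: R_t = r_t + gamma * R_{t+1}, seeded with V(s_k).
    for t in reversed(range(k)):
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values  # A(s_t, a_t) = R_t - V(s_t)
    return returns, advantages
```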
2.2 Natural gradient using Kronecker-factored approximation
To minimize a nonconvex function $J(\theta)$, the method of steepest descent calculates the update
$\Delta\theta$ that minimizes $J(\theta + \Delta\theta)$, subject to the constraint that $\|\Delta\theta\|_B < 1$, where $\|\cdot\|_B$ is the
norm defined by $\|x\|_B = (x^T B x)^{1/2}$, and $B$ is a positive semidefinite matrix. The solution to the
constrained optimization problem has the form $\Delta\theta \propto -B^{-1}\nabla_\theta J$, where $\nabla_\theta J$ is the standard gradient.
When the norm is Euclidean, i.e., $B = I$, this becomes the commonly used method of gradient
descent. However, the Euclidean norm of the change depends on the parameterization $\theta$. This is
not favorable because the parameterization of the model is an arbitrary choice, and it should not
affect the optimization trajectory. The method of natural gradient constructs the norm using the
Fisher information matrix $F$, a local quadratic approximation to the KL divergence. This norm is
independent of the model parameterization $\theta$ on the class of probability distributions, providing a
more stable and effective update. However, since modern neural networks may contain millions of
parameters, computing and storing the exact Fisher matrix and its inverse is impractical, so we have
to resort to approximations.
A recently proposed technique called Kronecker-factored approximate curvature (K-FAC) [15] uses
a Kronecker-factored approximation to the Fisher matrix to perform efficient approximate natural
gradient updates. We let $p(y|x)$ denote the output distribution of a neural network, and $L = \log p(y|x)$
denote the log-likelihood. Let $W \in \mathbb{R}^{C_{out} \times C_{in}}$ be the weight matrix in the $\ell$-th layer, where $C_{out}$ and
$C_{in}$ are the number of output/input neurons of the layer. Denote the input activation vector to the
layer as $a \in \mathbb{R}^{C_{in}}$, and the pre-activation vector for the next layer as $s = Wa$. Note that the weight
gradient is given by $\nabla_W L = (\nabla_s L)a^\top$. K-FAC utilizes this fact and further approximates the block
$F_\ell$ corresponding to layer $\ell$ as $\hat{F}_\ell$,
$$F_\ell = \mathbb{E}[\mathrm{vec}\{\nabla_W L\}\mathrm{vec}\{\nabla_W L\}^\top] = \mathbb{E}[aa^\top \otimes \nabla_s L(\nabla_s L)^\top] \approx \mathbb{E}[aa^\top] \otimes \mathbb{E}[\nabla_s L(\nabla_s L)^\top] := A \otimes S := \hat{F}_\ell,$$
where $A$ denotes $\mathbb{E}[aa^\top]$ and $S$ denotes $\mathbb{E}[\nabla_s L(\nabla_s L)^\top]$. This approximation can be interpreted as
making the assumption that the second-order statistics of the activations and the backpropagated
derivatives are uncorrelated. With this approximation, the natural gradient update can be efficiently
computed by exploiting the basic identities $(P \otimes Q)^{-1} = P^{-1} \otimes Q^{-1}$ and $(P \otimes Q)\,\mathrm{vec}(T) = P T Q^\top$:
$$\mathrm{vec}(\Delta W) = \hat{F}_\ell^{-1}\,\mathrm{vec}\{\nabla_W J\} = \mathrm{vec}\big(A^{-1}\,\nabla_W J\,S^{-1}\big).$$
From the above equation we see that the K-FAC approximate natural gradient update only requires
computations on matrices comparable in size to $W$. Grosse and Martens [6] have recently extended
the K-FAC algorithm to handle convolutional networks. Ba et al. [2] later developed a distributed
version of the method where most of the overhead is mitigated through asynchronous computation. Distributed K-FAC achieved 2- to 3-times speed-ups in training large modern classification
convolutional networks.
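The identities above reduce the natural gradient to two small matrix inverses per layer. The following minimal NumPy sketch (our own illustration; variable names and the damping constant are assumptions, and depending on the vec convention the factor inverses land on opposite sides of the gradient matrix; here $S^{-1}$ is applied on the output dimension and $A^{-1}$ on the input dimension so the shapes match for $W$ of shape $(C_{out}, C_{in})$) applies the update to one fully connected layer:

```python
import numpy as np

def kfac_layer_update(grad_W, acts, preact_grads, damping=1e-3):
    """Approximate per-layer natural gradient via the K-FAC factorization.

    grad_W: gradient of the objective w.r.t. W, shape (C_out, C_in).
    acts: batch of input activations a, shape (batch, C_in).
    preact_grads: batch of pre-activation gradients dL/ds, shape (batch, C_out).
    Returns the direction F_hat^{-1} grad (up to the vec convention).
    """
    batch = acts.shape[0]
    A = acts.T @ acts / batch                  # A ~ E[a a^T], (C_in, C_in)
    S = preact_grads.T @ preact_grads / batch  # S ~ E[(dL/ds)(dL/ds)^T], (C_out, C_out)
    # Damp each factor so the inverses exist and the update stays bounded.
    A_inv = np.linalg.inv(A + damping * np.eye(A.shape[0]))
    S_inv = np.linalg.inv(S + damping * np.eye(S.shape[0]))
    # (A kron S)^{-1} vec(grad) corresponds to applying the factor
    # inverses on the two sides of the gradient matrix.
    return S_inv @ grad_W @ A_inv
```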
[Figure 2: three panels for the Atari game Atlantis plotting episode rewards for ACKTR and A2C against number of timesteps, wall-clock hours, and number of episodes.]
Figure 2: In the Atari game of Atlantis, our agent (ACKTR) quickly learns to obtain rewards of 2 million in
1.3 hours, 600 episodes of games, 2.5 million timesteps. The same result is achieved by advantage actor critic
(A2C) in 10 hours, 6000 episodes, 25 million timesteps. ACKTR is 10 times more sample efficient than A2C on
this game.
3 Methods
3.1 Natural gradient in actor-critic
Natural gradient was proposed to apply to the policy gradient method more than a decade ago
by Kakade [10]. But there still doesn't exist a scalable, sample-efficient, and general-purpose
instantiation of the natural policy gradient. In this section, we introduce the first scalable and sample-efficient natural gradient algorithm for actor-critic methods: the actor-critic using Kronecker-factored
trust region (ACKTR) method. We use Kronecker-factored approximation to compute the natural
gradient update, and apply the natural gradient update to both the actor and the critic.
To define the Fisher metric for reinforcement learning objectives, one natural choice is to use the
policy function which defines a distribution over the action given the current state, and take the
expectation over the trajectory distribution:
$$F = \mathbb{E}_{p(\tau)}\big[\nabla_\theta \log \pi(a_t|s_t)\,(\nabla_\theta \log \pi(a_t|s_t))^\top\big],$$
where $p(\tau)$ is the distribution of trajectories, given by $p(s_0)\prod_{t=0}^{T} \pi(a_t|s_t)\,p(s_{t+1}|s_t, a_t)$. In practice,
one approximates the intractable expectation over trajectories collected during training.
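For intuition (a minimal sketch of our own, not code from the paper), the expectation can be estimated by averaging outer products of sampled score vectors; materializing this dense matrix is feasible only for tiny policies, which is precisely why ACKTR replaces it with the Kronecker factorization:

```python
import numpy as np

def empirical_fisher_natgrad(scores, grad, damping=1e-3):
    """Dense empirical Fisher and natural gradient for a tiny policy.

    scores: (num_samples, num_params) array; row i is the score vector
    grad log pi(a_i | s_i) at one sampled state-action pair.
    grad: (num_params,) policy gradient estimate.
    """
    n, d = scores.shape
    F = scores.T @ scores / n  # F ~ E[score score^T]
    # Damped solve in place of an explicit inverse.
    return np.linalg.solve(F + damping * np.eye(d), grad)
```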
We now describe one way to apply natural gradient to optimize the critic. Learning the critic can be
thought of as a least-squares function approximation problem, albeit one with a moving target. In the
setting of least-squares function approximation, the second-order algorithm of choice is commonly
Gauss-Newton, which approximates the curvature as the Gauss-Newton matrix $G := \mathbb{E}[J^\top J]$, where
$J$ is the Jacobian of the mapping from parameters to outputs [18]. The Gauss-Newton matrix is
equivalent to the Fisher matrix for a Gaussian observation model [14]; this equivalence allows us to
apply K-FAC to the critic as well. Specifically, we assume the output of the critic $v$ is defined to be
a Gaussian distribution $p(v|s_t) \sim \mathcal{N}(v; V(s_t), \sigma^2)$. The Fisher matrix for the critic is defined with
respect to this Gaussian output distribution. In practice, we can simply set $\sigma$ to 1, which is equivalent
to the vanilla Gauss-Newton method.
If the actor and critic are disjoint, one can separately apply K-FAC updates to each using the metrics
defined above. But to avoid instability in training, it is often beneficial to use an architecture where the
two networks share lower-layer representations but have distinct output layers [17, 27]. In this case,
we can define the joint distribution of the policy and the value distribution by assuming independence
of the two output distributions, i.e., $p(a, v|s) = \pi(a|s)p(v|s)$, and construct the Fisher metric with
respect to $p(a, v|s)$, which is no different from the standard K-FAC except that we need to sample
the networks' outputs independently. We can then apply K-FAC to approximate the Fisher matrix
$\mathbb{E}_{p(\tau)}[\nabla \log p(a, v|s)\,\nabla \log p(a, v|s)^\top]$ to perform updates simultaneously.
In addition, we use regular damping for regularization. We also follow [2] and perform the asynchronous computation of second-order statistics and inverses required by the Kronecker approximation
to reduce computation time.
3.2 Step-size Selection and trust-region optimization
Traditionally, natural gradient is performed with SGD-like updates, $\theta \leftarrow \theta - \eta F^{-1}\nabla_\theta L$. But in the
context of deep RL, Schulman et al. [21] observed that such an update rule can result in large updates
to the policy, causing the algorithm to prematurely converge to a near-deterministic policy. They
advocate instead using a trust region approach, whereby the update is scaled down to modify the
policy distribution (in terms of KL divergence) by at most a specified amount. Therefore, we adopt
the trust region formulation of K-FAC introduced by [2], choosing the effective step size $\eta$ to be
$\min\big(\eta_{\max}, \sqrt{2\delta / (\Delta\theta^\top \hat{F} \Delta\theta)}\big)$, where the learning rate $\eta_{\max}$ and trust region radius $\delta$ are hyperparameters. If
the actor and the critic are disjoint, then we need to tune a different set of $\eta_{\max}$ and $\delta$ separately for
both. The variance parameter for the critic output distribution can be absorbed into the learning rate
parameter for vanilla Gauss-Newton. On the other hand, if they share representations, we need to
tune one set of $\eta_{\max}$, $\delta$, and also the weighting parameter of the training loss of the critic, with respect
to that of the actor.
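A hedged sketch of this step-size rule (ours, not the authors' implementation; we treat the quadratic form $\Delta\theta^\top \hat{F} \Delta\theta$ as given, since with K-FAC it is accumulated per layer from the Kronecker factors):

```python
import numpy as np

def trust_region_step(update, quad_form, eta_max, delta):
    """Scale a natural-gradient update so the KL change stays within delta.

    update: proposed parameter change (the approximate natural gradient).
    quad_form: scalar value of update^T F_hat update for this step.
    eta_max: base learning rate; delta: trust region radius.
    """
    # eta = min(eta_max, sqrt(2 * delta / (update^T F update)))
    eta = min(eta_max, np.sqrt(2.0 * delta / max(quad_form, 1e-12)))
    return eta * update
```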
4 Related work
Natural gradient [1] was first applied to policy gradient methods by Kakade [10]. Bagnell and
Schneider [3] further proved that the metric defined in [10] is a covariant metric induced by the
path-distribution manifold. Peters and Schaal [19] then applied natural gradient to the actor-critic
algorithm. They proposed performing natural policy gradient for the actor's update and using a
least-squares temporal difference (LSTD) method for the critic's update. However, there are great
computational challenges when applying natural gradient methods, mainly associated with efficiently
storing the Fisher matrix as well as computing its inverse. For tractability, previous work restricted
the method to using the compatible function approximator (a linear function approximator). To
avoid the computational burden, Trust Region Policy Optimization (TRPO) [21] approximately
solves the linear system using conjugate gradient with fast Fisher matrix-vector products, similar
to the work of Martens [13]. This approach has two main shortcomings. First, it requires repeated
computation of Fisher vector products, preventing it from scaling to the larger architectures typically
used in experiments on learning from image observations in Atari and MuJoCo. Second, it requires
a large batch of rollouts in order to accurately estimate curvature. K-FAC avoids both issues by
using tractable Fisher matrix approximations and by keeping a running average of curvature statistics
during training. Although TRPO shows better per-iteration progress than policy gradient methods
trained with first-order optimizers such as Adam [11], it is generally less sample efficient.
Several methods were proposed to improve the computational efficiency of TRPO. To avoid repeated
computation of Fisher-vector products, Wang et al. [27] solve the constrained optimization problem
with a linear approximation of KL divergence between a running average of the policy network and
the current policy network. Instead of the hard constraint imposed by the trust region optimizer,
Heess et al. [8] and Schulman et al. [23] added a KL cost to the objective function as a soft constraint.
Both papers show some improvement over vanilla policy gradient on continuous and discrete control
tasks in terms of sample efficiency.
There are other recently introduced actor-critic models that improve sample efficiency by introducing
experience replay [27], [7] or auxiliary objectives [9]. These approaches are orthogonal to our work,
and could potentially be combined with ACKTR to further enhance sample efficiency.
5 Experiments
We conducted a series of experiments to investigate the following questions: (1) How does ACKTR
compare with the state-of-the-art on-policy method and common second-order optimizer baseline
in terms of sample efficiency and computational efficiency? (2) What makes a better norm for
optimization of the critic? (3) How does the performance of ACKTR scale with batch size compared
to the first-order method?
We evaluated our proposed method, ACKTR, on two standard benchmark platforms. We first
evaluated it on the discrete control tasks defined in OpenAI Gym [5], simulated by Arcade Learning
Environment [4], a simulator for Atari 2600 games which is commonly used as a deep reinforcement
learning benchmark for discrete control. We then evaluated it on a variety of continuous control
Domain          Human level | ACKTR Rewards  Episode | A2C Rewards  Episode | TRPO (10M) Rewards  Episode
Beamrider       5775.0      | 13581.4        3279    | 8148.1       8930    | 670.0               N/A
Breakout        31.8        | 735.7          4094    | 581.6        14464   | 14.7                N/A
Pong            9.3         | 20.9           904     | 19.9         4768    | -1.2                N/A
Q-bert          13455.0     | 21500.3        6422    | 15967.4      19168   | 971.8               N/A
Seaquest        20182.0     | 1776.0         N/A     | 1754.0       N/A     | 810.4               N/A
Space Invaders  1652.0      | 19723.0        14696   | 1757.2       N/A     | 465.1               N/A

Table 1: ACKTR and A2C results showing the last 100 average episode rewards attained after 50 million
timesteps, and TRPO results after 10 million timesteps. The table also shows the episode N, where N denotes
the first episode for which the mean episode reward over the Nth game to the (N + 100)th game crosses the
human performance level [16], averaged over 2 random seeds.
benchmark tasks defined in OpenAI Gym [5], simulated by the MuJoCo [26] physics engine. Our
baselines are (a) a synchronous and batched version of the asynchronous advantage actor critic model
(A3C) [17], henceforth called A2C (advantage actor critic), and (b) TRPO [21]. ACKTR and the
baselines use the same model architecture except for the TRPO baseline on Atari games, with which
we are limited to using a smaller architecture because of the computing burden of running a conjugate
gradient inner-loop. See the appendix for other experiment details.
5.1 Discrete control
We first present results on the standard six Atari 2600 games to measure the performance improvement
obtained by ACKTR. The results on the six Atari games trained for 10 million timesteps are shown
in Figure 1, with comparison to A2C and TRPO.² ACKTR significantly outperformed A2C in terms
of sample efficiency (i.e., speed of convergence per number of timesteps) by a significant margin
in all games. We found that TRPO could only learn two games, Seaquest and Pong, in 10 million
timesteps, and performed worse than A2C in terms of sample efficiency.
In Table 1 we present the mean of rewards of the last 100 episodes in training for 50 million timesteps,
as well as the number of episodes required to achieve human performance [16]. Notably, on the
games Beamrider, Breakout, Pong, and Q-bert, A2C required respectively 2.7, 3.5, 5.3, and 3.0 times
more episodes than ACKTR to achieve human performance. In addition, one of the runs by A2C in
Space Invaders failed to match human performance, whereas ACKTR achieved 19723 on average,
12 times better than human performance (1652). On the games Breakout, Q-bert and Beamrider,
ACKTR achieved 26%, 35%, and 67% larger episode rewards than A2C.
We also evaluated ACKTR on the rest of the Atari games; see Appendix for full results. We compared
ACKTR with Q-learning methods, and we found that in 36 out of 44 benchmarks, ACKTR is on par
with Q-learning methods in terms of sample efficiency, and consumed a lot less computation time.
Remarkably, in the game of Atlantis, ACKTR quickly learned to obtain rewards of 2 million in 1.3
hours (600 episodes), as shown in Figure 2. It took A2C 10 hours (6000 episodes) to reach the same
performance level.
5.2 Continuous control
We ran experiments on the standard benchmark of continuous control tasks defined in OpenAI Gym
[5] simulated in MuJoCo [26], both from low-dimensional state-space representation and directly
from pixels. In contrast to Atari, the continuous control tasks are sometimes more challenging due to
high-dimensional action spaces and exploration. The results of eight MuJoCo environments trained
for 1 million timesteps are shown in Figure 3. Our model significantly outperformed baselines on six
out of eight MuJoCo tasks and performed competitively with A2C on the other two tasks (Walker2d
and Swimmer).
We further evaluated ACKTR for 30 million timesteps on eight MuJoCo tasks and in Table 2 we
present mean rewards of the top 10 consecutive episodes in training, as well as the number of
² The A2C and TRPO Atari baseline results are provided to us by the OpenAI team, https://github.com/openai/baselines.
[Figure 3: eight panels (Ant, HalfCheetah, Hopper, InvertedPendulum, InvertedDoublePendulum, Reacher, Swimmer, Walker2d) plotting episode reward against number of timesteps for ACKTR, A2C, and TRPO.]
Figure 3: Performance comparisons on eight MuJoCo environments trained for 1 million timesteps (1 timestep
equals 4 frames). The shaded region denotes the standard deviation over 3 random seeds.
[Figure 4: three panels (Reacher (pixels), Walker2d (pixels), HalfCheetah (pixels)) plotting episode reward against number of timesteps for ACKTR and A2C.]
Figure 4: Performance comparisons on 3 MuJoCo environments from image observations trained for 40 million
timesteps (1 timestep equals 4 frames).
episodes to reach a certain threshold defined in [7]. As shown in Table 2, ACKTR reaches the
specified threshold faster on all tasks, except for Swimmer where TRPO achieves 4.1 times better
sample efficiency. A particularly notable case is Ant, where ACKTR is 16.4 times more sample
efficient than TRPO. As for the mean reward score, all three models achieve results comparable with
each other with the exception of TRPO, which in the Walker2d environment achieves a 10% better
reward score.
We also attempted to learn continuous control policies directly from pixels, without providing low-dimensional state space as an input. Learning continuous control policies from pixels is much
more challenging than learning from the state space, partially due to the slower rendering time
compared to Atari (0.5 seconds in MuJoCo vs 0.002 seconds in Atari). The state-of-the-art actor-critic method A3C [17] only reported results from pixels on relatively simple tasks, such as Pendulum,
Pointmass2D, and Gripper. As shown in Figure 4 we can see that our model significantly outperforms
A2C in terms of final episode reward after training for 40 million timesteps. More specifically,
on Reacher, HalfCheetah, and Walker2d our model achieved a 1.6, 2.8, and 1.7 times greater
final reward compared to A2C. The videos of trained policies from pixels can be found at https:
//www.youtube.com/watch?v=gtM87w1xGoM. Pretrained model weights are available at https:
//github.com/emansim/acktr.
5.3 A better norm for critic optimization?
The previous natural policy gradient method applied a natural gradient update only to the actor. In
our work, we propose also applying a natural gradient update to the critic. The difference lies in the
norm with which we choose to perform steepest descent on the critic; that is, the norm $\|\cdot\|_B$ defined
in section 2.2. In this section, we applied ACKTR to the actor, and compared using a first-order
method (i.e., Euclidean norm) with using ACKTR (i.e., the norm defined by Gauss-Newton) for critic
optimization. Figures 5 (a) and (b) show the results on the continuous control task HalfCheetah and
the Atari game Breakout. We observe that regardless of which norm we use to optimize the critic,
there are improvements brought by applying ACKTR to the actor compared to the baseline A2C.
However, the improvements brought by using the Gauss-Newton norm for optimizing the critic are
more substantial in terms of sample efficiency and episode rewards at the end of training. In addition,
Domain       Threshold    | ACKTR Rewards  Episodes | A2C Rewards  Episodes | TRPO Rewards  Episodes
Ant          3500 (6000)  | 4621.6         3660     | 4870.5       106186   | 5095.0        60156
HalfCheetah  4700 (4800)  | 5586.3         12980    | 5343.7       21152    | 5704.7        21033
Hopper       2000 (3800)  | 3915.9         17033    | 3915.3       33481    | 3755.0        39426
IP           950 (950)    | 1000.0         6831     | 1000.0       10982    | 1000.0        29267
IDP          9100 (9100)  | 9356.0         41996    | 9356.1       82694    | 9320.0        78519
Reacher      -7 (-3.75)   | -1.5           3325     | -1.7         20591    | -2.0          14940
Swimmer      90 (360)     | 138.0          6475     | 140.7        11516    | 136.4         1571
Walker2d     3000 (N/A)   | 6198.8         15043    | 5874.9       26828    | 6874.1        27720

Table 2: ACKTR, A2C, and TRPO results, showing the top 10 average episode rewards attained within 30
million timesteps, averaged over the 3 best performing random seeds out of 8 random seeds. "Episode" denotes
the smallest N for which the mean episode reward over the Nth to the (N + 10)th game crosses a certain
threshold. The thresholds for all environments except for InvertedPendulum (IP) and InvertedDoublePendulum
(IDP) were chosen according to Gu et al. [7], and in brackets we show the reward threshold needed to solve the
environment according to the OpenAI Gym website [5].
the Gauss-Newton norm also helps stabilize the training, as we observe larger variance in the results
over random seeds with the Euclidean norm.
Recall that the Fisher matrix for the critic is constructed using the output distribution of the critic,
a Gaussian distribution with variance $\sigma$. In vanilla Gauss-Newton, $\sigma$ is set to 1. We experimented
with estimating $\sigma$ using the variance of the Bellman error, which resembles estimating the variance
of the noise in regression analysis. We call this method adaptive Gauss-Newton. However, we find
adaptive Gauss-Newton doesn't provide any significant improvement over vanilla Gauss-Newton.
(See detailed comparisons of the choices of $\sigma$ in the Appendix.)
5.4 How does ACKTR compare with A2C in wall-clock time?
We compared ACKTR to the baselines A2C and TRPO in terms of wall-clock time. Table 3 shows the
average timesteps per second over six Atari games and eight MuJoCo (from state space) environments.
The result is obtained with the same experiment setup as previous experiments. Note that in MuJoCo
tasks episodes are processed sequentially, whereas in the Atari environment episodes are processed in
parallel; hence more frames are processed in Atari environments. From the table we see that ACKTR
only increases computing time by at most 25% per timestep, demonstrating its practicality with large
optimization benefits.
(Timesteps/Second)        Atari                |  MuJoCo
batch size        80     160    640            |  1000   2500   25000
ACKTR             712    753    852            |  519    551    582
A2C               1010   1038   1162           |  624    650    651
TRPO              160    161    177            |  593    619    637

Table 3: Comparison of computational cost: the average timesteps per second over six Atari games and eight
MuJoCo tasks during training for each algorithm. ACKTR increases computing time by at most 25% over
A2C.
5.5 How do ACKTR and A2C perform with different batch sizes?
In a large-scale distributed learning setting, large batch size is used in optimization. Therefore, in
such a setting, it is preferable to use a method that can scale well with batch size. In this section,
we compare how ACKTR and the baseline A2C perform with respect to different batch sizes. We
experimented with batch sizes of 160 and 640. Figure 5 (c) shows the rewards in number of timesteps.
We found that ACKTR with a larger batch size performed as well as that with a smaller batch size.
However, with a larger batch size, A2C experienced significant degradation in terms of sample
efficiency. This corresponds to the observation in Figure 5 (d), where we plotted the training curve
in terms of number of updates. We see that the benefit increases substantially when using a larger
batch size with ACKTR compared to with A2C. This suggests there is potential for large speed-ups
with ACKTR in a distributed setting, where one needs to use large mini-batches; this matches the
observation in [2].
[Figure 5: four panels. (a) HalfCheetah and (b) Breakout compare ACKTR applied to both actor and critic against ACKTR applied to the actor only and A2C; (c) plots Breakout rewards against number of timesteps and (d) against number of updates for ACKTR and A2C with batch sizes 160 and 640.]
Figure 5: (a) and (b) compare optimizing the critic (value network) with a Gauss-Newton norm (ACKTR)
against a Euclidean norm (first order). (c) and (d) compare ACKTR and A2C with different batch sizes.
6 Conclusion
In this work we proposed a sample-efficient and computationally inexpensive trust-region optimization method for deep reinforcement learning. We used a recently proposed technique
called K-FAC to approximate the natural gradient update for actor-critic methods, with trust region
optimization for stability. To the best of our knowledge, we are the first to propose optimizing both
the actor and the critic using natural gradient updates. We tested our method on Atari games as well
as the MuJoCo environments, and we observed 2- to 3-fold improvements in sample efficiency on
average compared with a first-order gradient method (A2C) and an iterative second-order method
(TRPO). Because of the scalability of our algorithm, we are also the first to train several non-trivial
tasks in continuous control directly from raw pixel observation space. This suggests that extending
Kronecker-factored natural gradient approximations to other algorithms in reinforcement learning is
a promising research direction.
Acknowledgements
We would like to thank the OpenAI team for their generous support in providing baseline results and
Atari environment preprocessing codes. We also want to thank John Schulman for helpful discussions.
References
[1] S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276,
1998.
[2] J. Ba, R. Grosse, and J. Martens. Distributed second-order optimization using Kronecker-factored approximations. In ICLR, 2017.
[3] J. A. Bagnell and J. G. Schneider. Covariant policy search. In IJCAI, 2003.
[4] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An
evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279,
2013.
[5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba.
OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[6] R. Grosse and J. Martens. A Kronecker-factored approximate Fisher matrix for convolutional
layers. In ICML, 2016.
[7] S. Gu, T. Lillicrap, Z. Ghahramani, R. E. Turner, and S. Levine. Q-prop: Sample-efficient policy
gradient with an off-policy critic. In ICLR, 2017.
[8] N. Heess, D. TB, S. Sriram, J. Lemmon, J. Merel, G. Wayne, Y. Tassa, T. Erez, Z. Wang,
S. M. A. Eslami, M. Riedmiller, and D. Silver. Emergence of locomotion behaviours in rich
environments. arXiv preprint arXiv:1707.02286, 2017.
[9] M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu.
Reinforcement learning with unsupervised auxiliary tasks. In ICLR, 2017.
[10] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems,
2002.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[12] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra.
Continuous control with deep reinforcement learning. In ICLR, 2016.
[13] J. Martens. Deep learning via Hessian-free optimization. In ICML-10, 2010.
[14] J. Martens. New insights and perspectives on the natural gradient method. arXiv preprint
arXiv:1412.1193, 2014.
[15] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate
curvature. In ICML, 2015.
[16] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou,
H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through
deep reinforcement learning. Nature, 518(7540):529?533, 2015.
[17] V. Mnih, A. Puigdomenech Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and
K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In ICML, 2016.
[18] J. Nocedal and S. Wright. Numerical Optimization. Springer, 2006.
[19] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71(7–9):1180–1190, 2008.
[20] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent.
Neural Computation, 2002.
[21] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization.
In Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015.
[22] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous
control using generalized advantage estimation. In Proceedings of the International Conference
on Learning Representations (ICLR), 2016.
[23] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization
algorithms. arXiv preprint arXiv:1707.06347, 2017.
[24] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser,
I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis.
Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[25] R. S. Sutton, D. A. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing
Systems 12, 2000.
[26] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control.
IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012.
[27] Z. Wang, V. Bapst, N. Heess, V. Mnih, R. Munos, K. Kavukcuoglu, and N. de Freitas. Sample
efficient actor-critic with experience replay. In ICLR, 2016.
[28] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement
learning. Machine Learning, 8(3):229–256, 1992.
6,758 | 7,113 | Rényi Differential Privacy Mechanisms for Posterior
Sampling
Joseph Geumlek
University of California, San Diego
[email protected]
Shuang Song
University of California, San Diego
[email protected]
Kamalika Chaudhuri
University of California, San Diego
[email protected]
Abstract
With the newly proposed privacy definition of Rényi Differential Privacy (RDP)
in [14], we re-examine the inherent privacy of releasing a single sample from a
posterior distribution. We exploit the impact of the prior distribution in mitigating
the influence of individual data points. In particular, we focus on sampling from
an exponential family and specific generalized linear models, such as logistic
regression. We propose novel RDP mechanisms as well as offering a new RDP
analysis for an existing method in order to add value to the RDP framework. Each
method is capable of achieving arbitrary RDP privacy guarantees, and we offer
experimental results of their efficacy.
1 Introduction
As data analysis continues to expand and permeate ever more facets of life, the concerns over the
privacy of one's data grow too. Many results have arrived in recent years to tackle the inherent
conflict of extracting usable knowledge from a data set without over-extracting or leaking the private
data of individuals. Before one can strike a balance between these competing goals, one needs a
framework by which to quantify what it means to preserve an individual?s privacy.
Since 2006, Differential Privacy (DP) has reigned as the privacy framework of choice [6]. It quantifies
privacy by measuring the indistinguishability of the mechanism output across whether or not any
one individual is in or out of the data set. This gave not just privacy semantics, but also robust
mathematical guarantees. However, the requirements have been cumbersome for utility, leading to
many proposed relaxations. One common relaxation is approximate DP, which allows arbitrarily bad
events to occur with probability at most $\delta$. A more recent relaxation is Rényi Differential Privacy
(RDP) proposed in [14], which uses the measure of Rényi divergences to smoothly vary between
bounding the average and maximum privacy loss. However, RDP has very few mechanisms compared
to the more established approximate DP. We expand the RDP repertoire with novel mechanisms
inspired by Rényi divergences, as well as re-analyzing an existing method in this new light.
Inherent to DP and RDP is that there must be some uncertainty in the mechanism; they cannot be
deterministic. Many privacy methods have been motivated by exploiting pre-existing sources of
randomness in machine learning algorithms. One promising area has been Bayesian data analysis,
which focuses on maintaining and tracking the uncertainty within probabilistic models. Posterior
sampling is prevalent in many Bayesian methods, serving to introduce randomness that matches the
currently held uncertainty.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We analyze the privacy arising from posterior sampling as applied to two domains: sampling from
exponential families and Bayesian logistic regression. Along with these analyses, we offer tunable
mechanisms that can achieve stronger privacy guarantees than directly sampling from the posterior.
These mechanisms work via controlling the relative strength of the prior in determining the posterior,
building off the common intuition that concentrated prior distributions can prevent overfitting in
Bayesian data analysis. We experimentally validate our new methods on synthetic and real data.
2 Background
Privacy Model. We say two data sets $X$ and $X'$ are neighboring if they differ in the private record
of a single individual or person. We use $n$ to refer to the number of records in the data set.
Definition 1. Differential Privacy (DP) [6]. A randomized mechanism $\mathcal{A}(X)$ is said to be $(\epsilon, \delta)$-differentially private if for any subset $U$ of the output range of $\mathcal{A}$ and any neighboring data sets $X$
and $X'$, we have $p(\mathcal{A}(X) \in U) \le \exp(\epsilon)\, p(\mathcal{A}(X') \in U) + \delta$.
DP is concerned with the difference that the participation of an individual might have on the output
distribution of the mechanism. When $\delta > 0$, it is known as approximate DP while the $\delta = 0$ case
is known as pure DP. The requirements for DP can be phrased in terms of a privacy loss variable, a
random variable that captures the effective privacy loss of the mechanism output.
Definition 2. Privacy Loss Variable [2]. We can define a random variable $Z$ that measures the
privacy loss of a given output of a mechanism across two neighboring data sets $X$ and $X'$:
$$Z = \log \frac{p(\mathcal{A}(X) = o)}{p(\mathcal{A}(X') = o)} \bigg|_{o \sim \mathcal{A}(X)} \quad (1)$$
$(\epsilon, \delta)$-DP is the requirement that for any two neighboring data sets, $Z \le \epsilon$ with probability at least
$1 - \delta$. The exact nature of the trade-off and semantics between $\epsilon$ and $\delta$ is subtle, and choosing them
appropriately is difficult. For example, setting $\delta = 1/n$ permits $(\epsilon, \delta)$-DP mechanisms that always
violate the privacy of a random individual [12]. However, there are other ways to specify that a
random variable is mostly small. One such way is to bound the Rényi divergence of $\mathcal{A}(X)$ and
$\mathcal{A}(X')$.
Definition 3. Rényi Divergence [2]. The Rényi divergence of order $\lambda$ between the two distributions
$P$ and $Q$ is defined as
$$D_\lambda(P\|Q) = \frac{1}{\lambda - 1} \log \int P(o)^\lambda\, Q(o)^{1-\lambda}\, do. \quad (2)$$
As $\lambda \to \infty$, Rényi divergence becomes the max divergence; moreover, setting $P = \mathcal{A}(X)$ and
$Q = \mathcal{A}(X')$ ensures that $D_\lambda(P\|Q) = \frac{1}{\lambda-1}\log \mathbb{E}_Z[e^{(\lambda-1)Z}]$, where $Z$ is the privacy loss variable.
Thus, a bound on the Rényi divergence over all orders $\lambda \in (0, \infty)$ is equivalent to $(\epsilon, 0)$-DP, and as
$\lambda \to 1$, this approaches the expected value of $Z$ equal to $\mathrm{KL}(\mathcal{A}(X)\|\mathcal{A}(X'))$. This leads us to Rényi
Differential Privacy, a flexible privacy notion that covers this intermediate behavior.
Definition 4. Rényi Differential Privacy (RDP) [14]. A randomized mechanism $\mathcal{A}(X)$ is said
to be $(\lambda, \epsilon)$-Rényi differentially private if for any neighboring data sets $X$ and $X'$ we have
$D_\lambda(\mathcal{A}(X)\|\mathcal{A}(X')) \le \epsilon$.
The choice of $\lambda$ in RDP is used to tune how much concern is placed on unlikely large values of $Z$
versus the average value of $Z$. One can consider a mechanism's privacy as being quantified by the
entire curve of $\epsilon$ values associated with each order $\lambda$, but the results of [14] show that almost identical
results can be achieved when this curve is known at only a finite collection of possible $\lambda$ values.
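As a hedged illustration (a minimal sketch of ours, not code from [14]), equation (2) can be evaluated directly for discrete distributions, which is convenient for checking a mechanism's $(\lambda, \epsilon)$ curve at a finite set of orders:

```python
import numpy as np

def renyi_divergence(p, q, lam):
    """D_lambda(P || Q) for discrete distributions (lambda > 0, lambda != 1).

    Assumes the support of p is contained in the support of q.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(o) = 0 contribute nothing for lambda > 0
    return np.log(np.sum(p[mask] ** lam * q[mask] ** (1.0 - lam))) / (lam - 1.0)
```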
Posterior Sampling. In Bayesian inference, we have a model class $\Theta$, and are given observations
$x_1, \ldots, x_n$ assumed to be drawn from a $\theta \in \Theta$. Our goal is to maintain our beliefs about $\theta$ given the
observational data in the form of the posterior distribution $p(\theta|x_1, \ldots, x_n)$. This is often done in the
form of drawing samples from the posterior.
Our goal in this paper is to develop privacy preserving mechanisms for two popular and simple
posterior sampling methods. The first is sampling from the exponential family posterior, which we
address in Section 3; the second is sampling from posteriors induced by a subset of Generalized
Linear Models, which we address in Section 4.
Related Work. Differential privacy has emerged as the gold standard for privacy in a number of
data analysis applications; see [8, 15] for surveys. Since enforcing pure DP sometimes requires the
addition of high noise, a number of relaxations have been proposed in the literature. The most popular
relaxation is approximate DP [6], and a number of uniquely approximate DP mechanisms have been
designed by [7, 16, 3, 1] among others. However, while this relaxation has some nice properties,
recent work [14, 12] has argued that it can also lead to privacy pitfalls in some cases. Approximate
differential privacy is also related to, but is weaker than, the closely related $\delta$-probabilistic privacy [11]
and $(1, \epsilon, \delta)$-indistinguishability [4].
Our privacy definition of choice is Rényi differential privacy [14], which is motivated by two
recent relaxations: concentrated DP [9] and z-CDP [2]. Concentrated DP has two parameters,
$\mu$ and $\tau$, controlling the mean and concentration of the privacy loss variable. Given a privacy
parameter $\rho$, z-CDP essentially requires $(\lambda, \rho\lambda)$-RDP for all $\lambda$. While [2, 9, 14] establish tighter
bounds on the privacy of existing differentially private and approximate DP mechanisms, we provide
mechanisms based on posterior sampling from exponential families that are uniquely RDP. RDP
is also a generalization of the notion of KL-privacy [19], which has been shown to be related to
generalization in machine learning.
There has also been some recent work on privacy properties of Bayesian posterior sampling; however
most of the work has focused on establishing pure or approximate DP. [5] establishes conditions under
which some popular Bayesian posterior sampling procedures directly satisfy pure or approximate
DP. [18] provides a pure DP way to sample from a posterior that satisfies certain mild conditions by
raising the temperature. [10, 20] provide a simple statistically efficient algorithm for sampling from
exponential family posteriors. [13] shows that directly sampling from the posterior of certain GLMs,
such as logistic regression, with the right parameters provides approximate differential privacy. While
our work draws inspiration from all [5, 18, 13], the main difference between their and our work is
that we provide RDP guarantees.
3 RDP Mechanisms based on Exponential Family Posterior Sampling
In this section, we analyze the Rényi divergences between distributions from the same exponential
family, which will lead to our RDP mechanisms for sampling from exponential family posteriors.
An exponential family is a family of probability distributions over $x \in \mathcal{X}$ indexed by the parameter
$\theta \in \Theta \subseteq \mathbb{R}^d$ that can be written in this canonical form for some choice of functions $h : \mathcal{X} \to \mathbb{R}$,
$S : \mathcal{X} \to \mathbb{R}^d$, and $A : \Theta \to \mathbb{R}$:
$$p(x_1, \ldots, x_n | \theta) = \left(\prod_{i=1}^{n} h(x_i)\right) \exp\left(\left(\sum_{i=1}^{n} S(x_i)\right) \cdot \theta - n \cdot A(\theta)\right). \quad (3)$$
Of particular importance is S, the sufficient statistics function, and A, the log-partition function of
this family. Our analysis will be restricted to the families that satisfy the following three properties.
Definition 5. The natural parameterization of an exponential family is the one that indexes the
distributions of the family by the vector $\theta$ that appears in the inner product of equation (3).
Definition 6. An exponential family is minimal if the coordinates of the function $S$ are not linearly
dependent for all $x \in \mathcal{X}$.
Definition 7. For any $\Delta \in \mathbb{R}$, an exponential family is $\Delta$-bounded if
$\Delta \ge \sup_{x,y \in \mathcal{X}} \|S(x) - S(y)\|$.
This constraint can be relaxed with some caveats explored in the appendix.
A minimal exponential family will always have a minimal conjugate prior family. This conjugate
prior family is also an exponential family, and it satisfies the property that the posterior distribution
formed after observing data is also within the same family. It has the following form:
$$p(\theta|\eta) = \exp\left(T(\theta) \cdot \eta - C(\eta)\right). \quad (4)$$
The sufficient statistics of $\theta$ can be written as $T(\theta) = (\theta, -A(\theta))$ and $p(\theta|\eta_0, x_1, \ldots, x_n) = p(\theta|\eta')$,
where $\eta' = \eta_0 + \sum_{i=1}^{n} (S(x_i), 1)$.
Beta-Bernoulli System. A specific example of an exponential family that we will be interested in is
the Beta-Bernoulli system, where an individual's data is a single i.i.d. bit modeled as a Bernoulli
variable with parameter $\rho$, along with a Beta conjugate prior: $p(x|\rho) = \rho^x (1 - \rho)^{1-x}$.
The Bernoulli distribution can be written in the form of equation (3) by letting $h(x) = 1$, $S(x) = x$,
$\theta = \log(\frac{\rho}{1-\rho})$, and $A(\theta) = \log(1 + \exp(\theta)) = -\log(1 - \rho)$. The Beta distribution with the usual
parameters $\alpha_0, \beta_0$ will be parameterized by $\eta_0 = (\eta_0^{(1)}, \eta_0^{(2)}) = (\alpha_0, \alpha_0 + \beta_0)$ in accordance with equation
(4). This system satisfies the properties we require, as this natural parameterization is minimal and
$\Delta$-bounded for $\Delta = 1$. In this system, $C(\eta) = \log\Gamma(\eta^{(1)}) + \log\Gamma(\eta^{(2)} - \eta^{(1)}) - \log\Gamma(\eta^{(2)})$.
Closed Form Rényi Divergence. The Rényi divergences of two distributions within the same family
can be written in terms of the log-partition function:
$$D_\lambda(P\|Q) = \frac{1}{\lambda - 1} \log\left(\int_\Theta P(\theta)^\lambda Q(\theta)^{1-\lambda}\, d\theta\right) = \frac{C(\lambda\eta_P + (1-\lambda)\eta_Q) - \lambda C(\eta_P)}{\lambda - 1} + C(\eta_Q). \quad (5)$$
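A hedged sketch instantiating equation (5) for the Beta-Bernoulli system (our own code, not the authors'); it uses the log-partition $C(\eta)$ given above and reports an infinite divergence when the mixed parameter $\lambda\eta_P + (1-\lambda)\eta_Q$ leaves the normalizable set:

```python
import numpy as np
from scipy.special import gammaln

def C_beta(eta):
    """Log-partition of the Beta conjugate prior, eta = (alpha, alpha + beta)."""
    eta1, eta2 = eta
    return gammaln(eta1) + gammaln(eta2 - eta1) - gammaln(eta2)

def renyi_beta(eta_p, eta_q, lam):
    """D_lambda between two Beta posteriors via equation (5)."""
    eta_p, eta_q = np.asarray(eta_p, float), np.asarray(eta_q, float)
    mix = lam * eta_p + (1.0 - lam) * eta_q
    if mix[0] <= 0 or mix[1] - mix[0] <= 0:
        return np.inf  # mixed parameter not normalizable: divergence is infinite
    return (C_beta(mix) - lam * C_beta(eta_p)) / (lam - 1.0) + C_beta(eta_q)
```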
To help analyze the implication of equation (5) for Rényi Differential Privacy, we define some sets of
prior/posterior parameters $\eta$ that arise in our analysis.
Definition 8. Normalizable Set E. We say a posterior parameter $\eta$ is normalizable if $C(\eta) = \log \int_\Theta \exp(T(\theta) \cdot \eta)\, d\theta$ is finite. Let $E$ contain all normalizable $\eta$ for the conjugate prior family.
Definition 9. Let $\mathrm{pset}(\eta_0, n)$ be the convex hull of all parameters $\eta$ of the form $\eta_0 + n(S(x), 1)$ for
$x \in \mathcal{X}$. When $n$ is an integer this represents the hull of possible posterior parameters after observing
$n$ data points starting with the prior $\eta_0$.
Definition 10. Let $\mathrm{Diff}$ be the difference set for the family, where $\mathrm{Diff}$ is the convex hull of all
vectors of the form $(S(x) - S(y), 0)$ for $x, y \in \mathcal{X}$.
Definition 11. Two posterior parameters $\eta_1$ and $\eta_2$ are neighboring iff $\eta_1 - \eta_2 \in \mathrm{Diff}$.
They are r-neighboring iff $\eta_1 - \eta_2 \in r \cdot \mathrm{Diff}$.
3.1 Mechanisms and Privacy Guarantees
We begin with our simplest mechanism, Direct Sampling, which samples according to the true posterior. This mechanism is presented as Algorithm 1.

Algorithm 1 Direct Posterior
Require: η_0, {x_1, . . . , x_n}
1: Sample θ ∼ p(θ | η′) where η′ = η_0 + ∑_{i=1}^n (S(x_i), 1)
Even though Algorithm 1 is generally not differentially private [5], Theorem 12 suggests that it offers RDP for Δ-bounded exponential families and certain orders λ.
Theorem 12. For a Δ-bounded minimal exponential family of distributions p(x | θ) with continuous log-partition function A(θ), there exists λ* ∈ (1, ∞] such that Algorithm 1 achieves (λ, ε(η_0, n, λ))-RDP for λ < λ*. Here λ* is the supremum over all λ such that all η in the set η_0 + (λ − 1) · Diff are normalizable.
Corollary 1. For the Beta-Bernoulli system with a prior Beta(α_0, β_0), Algorithm 1 achieves (λ, ε)-RDP with finite ε iff λ > 1 and λ < 1 + min(α_0, β_0).
Notice the implication of Corollary 1: for any η_0 and n > 0, there exists a finite λ such that direct posterior sampling does not guarantee (λ, ε)-RDP for any finite ε. This also rules out (ε, 0)-DP as an achievable goal. Algorithm 1 is inflexible; it offers us no way to change the privacy guarantee.
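As an illustration, here is a minimal sketch (ours) of Algorithm 1 for the Beta-Bernoulli system, together with the order threshold from Corollary 1:

```python
# Sketch (ours): direct posterior sampling for Beta-Bernoulli, plus the
# threshold lambda* = 1 + min(alpha0, beta0) below which finite-eps RDP holds.
import numpy as np

def direct_posterior_sample(alpha0, beta0, data, rng=None):
    rng = rng or np.random.default_rng()
    # eta' = eta0 + sum_i (S(x_i), 1) is the usual Beta update for bits.
    k = int(np.sum(data))
    return rng.beta(alpha0 + k, beta0 + len(data) - k)

def max_rdp_order(alpha0, beta0):
    return 1.0 + min(alpha0, beta0)

data = np.random.default_rng(0).integers(0, 2, size=100)
print(direct_posterior_sample(6.0, 18.0, data), max_rdp_order(6.0, 18.0))
```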
This motivates us to propose two different modifications to Algorithm 1 that are capable of achieving arbitrary privacy parameters. Algorithm 2 modifies the contribution of the data X to the posterior by introducing a coefficient r, while Algorithm 3 modifies the contribution of the prior η_0 by introducing a coefficient m. These simple ideas have shown up before in variations: [18] introduces a temperature scaling that acts similarly to r, while [13, 5] analyze concentration constraints for prior distributions much like our coefficient m.

Algorithm 2 Diffused Posterior
Require: η_0, {x_1, . . . , x_n}, ε, λ
1: Find r ∈ (0, 1] such that for all r-neighboring η_P, η_Q ∈ pset(η_0, rn), D_λ(p(θ | η_P) || p(θ | η_Q)) ≤ ε
2: Sample θ ∼ p(θ | η′) where η′ = η_0 + r ∑_{i=1}^n (S(x_i), 1)
Theorem 13. For any Δ-bounded minimal exponential family with prior η_0 in the interior of E, any λ > 1, and any ε > 0, there exists r* ∈ (0, 1] such that using r ∈ (0, r*] in Algorithm 2 will achieve (λ, ε)-RDP.

Algorithm 3 Concentrated Posterior
Require: η_0, {x_1, . . . , x_n}, ε, λ
1: Find m ∈ (0, 1] such that for all neighboring η_P, η_Q ∈ pset(η_0 / m, n), D_λ(p(θ | η_P) || p(θ | η_Q)) ≤ ε
2: Sample θ ∼ p(θ | η′) where η′ = η_0 / m + ∑_{i=1}^n (S(x_i), 1)

Theorem 14. For any Δ-bounded minimal exponential family with prior η_0 in the interior of E, any λ > 1, and any ε > 0, there exists m* ∈ (0, 1] such that using m ∈ (0, m*] in Algorithm 3 will achieve (λ, ε)-RDP.
Theorems 13 and 14 can be interpreted as demonstrating that any RDP privacy level can be achieved
by setting r or m arbitrarily close to zero. A small r implies a weak contribution from the data, while
a small m implies a strong prior that outweighs the contribution from the data. Setting r = 1 and
m = 1 reduces to Algorithm 1, in which a sample is released from the true posterior without any
modifications for privacy.
We have not yet specified how to find the appropriate values of r or m, and the condition requires
checking the supremum of divergences across the possible pset range of parameters arising as
posteriors. However, with an additional assumption this supremum of divergences can be efficiently
computed.
Theorem 15. Let e(η_P, η_Q, λ) = D_λ(p(θ | η_P) || p(θ | η_Q)). For a fixed λ and fixed η_P, the function e is a convex function over η_Q. If for any direction v ∈ Diff, the function g_v(η) = v^T ∇²C(η) v is convex over η, then for a fixed λ, the function f_λ(η_P) = sup_{η_Q r-neighboring η_P} e(η_P, η_Q, λ) is convex over η_P in the directions spanned by Diff.
Corollary 2. The Beta-Bernoulli system satisfies the conditions of Theorem 15, since the functions g_v(η) have the form (v^{(1)})² (ψ_1(η^{(1)}) + ψ_1(η^{(2)} − η^{(1)})), where ψ_1 is the trigamma function. Both pset and Diff are defined as convex sets. The expression sup over r-neighboring η_P, η_Q ∈ pset(η_0, n) of D_λ(p(θ | η_P) || p(θ | η_Q)) is therefore equivalent to the maximum of D_λ(p(θ | η_P) || p(θ | η_Q)) where η_P ∈ η_0 + {(0, n), (n, n)} and η_Q = η_P ± (r, 0).
The higher dimensional Dirichlet-Categorical system also satisfies the conditions of Theorem 15. This result is located in the appendix.
We can do a binary search over (0, 1] to find an appropriate value of r or m. At each candidate value, we only need to consider the boundary situations to evaluate whether this value achieves the desired RDP privacy level. These boundary situations depend on the choice of model, and not the data size n. For example, in the Beta-Bernoulli system, evaluating the supremum involves calculating the Rényi divergence across at most 4 pairs of distributions, as in Corollary 2. In the d dimensional Dirichlet-Categorical setting, there are O(d³) distribution pairs to evaluate.
Eventually, the search process is guaranteed to find a non-zero choice of r or m that achieves the desired privacy level, although the utility optimality of this choice is not guaranteed. If stopped early and none of the tested candidate values satisfy the privacy constraint, the analyst can either continue to iterate or decide not to release anything.
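A sketch (ours) of this binary search for Algorithm 2 in the Beta-Bernoulli system, reusing the renyi_divergence helper sketched earlier; by Corollary 2 only a handful of boundary parameter pairs need to be checked at each candidate r:

```python
# Sketch (ours): binary search for the data-scaling coefficient r.
import numpy as np

def normalizable(eta):
    return eta[0] > 0 and eta[1] - eta[0] > 0

def worst_case_divergence(eta0, n, r, lam):
    a0, s0 = eta0
    worst = 0.0
    for sx in (0.0, n):                     # extreme datasets: all 0s / all 1s
        eta_p = np.array([a0 + r * sx, s0 + r * n])
        for shift in (-r, r):               # r-neighboring: eta_q = eta_p +/- (r, 0)
            eta_q = eta_p + np.array([shift, 0.0])
            mix = lam * eta_p + (1.0 - lam) * eta_q
            if not (normalizable(eta_q) and normalizable(mix)):
                return np.inf               # leaves the normalizable set E
            worst = max(worst, renyi_divergence(eta_p, eta_q, lam))
    return worst

def find_r(eta0, n, lam, eps, iters=500):
    lo, hi, best = 0.0, 1.0, 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if worst_case_divergence(eta0, n, mid, lam) <= eps:
            best, lo = mid, mid             # feasible: try weighting the data more
        else:
            hi = mid
    return best
```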
Extensions. These methods have convenient privacy implications for settings where some data is public, such as after a data breach, and for releasing a statistical query. They can also be applied to non-Δ-bounded exponential families with some caveats. These additional results are located in the appendix.
4 RDP for Generalized Linear Models with Gaussian Prior
In this section, we reinterpret some existing algorithms from [13] in the light of RDP, and use ideas from [13] to provide new RDP algorithms for posterior sampling for a subset of generalized linear models with Gaussian priors.
4.1 Background: Generalized Linear Models (GLMs)
The goal of generalized linear models (GLMs) is to predict an outcome y given an input vector x; y is assumed to be generated from a distribution in the exponential family whose mean depends on x through E[y | x] = g^{−1}(w^T x), where w represents the weights of the linear combination of x, and g is called the link function. For example, in logistic regression, the link function g is the logit and g^{−1} is the sigmoid function; in linear regression, the link function is the identity function. Learning in GLMs means learning the actual linear combination w.
Specifically, the likelihood of y given x can be written as p(y | w, x) = h(y) exp( y w^T x − A(w^T x) ), where x ∈ X, y ∈ Y, A is the log-partition function, and h(y) the scaling constant. Given a dataset D = {(x_1, y_1), . . . , (x_n, y_n)} of n examples with x_i ∈ X and y_i ∈ Y, our goal is to learn the parameter w. Let p(D | w) denote p({y_1, . . . , y_n} | w, {x_1, . . . , x_n}) = ∏_{i=1}^n p(y_i | w, x_i). We set the prior p(w) as a multivariate Gaussian distribution with covariance Σ = (nρ)^{−1} I, i.e., p(w) ∼ N(0, (nρ)^{−1} I). The posterior distribution of w given D can be written as

p(w | D) = p(D | w) p(w) / ∫_{R^d} p(D | w′) p(w′) dw′ ∝ exp( −nρ ||w||² / 2 ) ∏_{i=1}^n p(y_i | w, x_i).   (6)
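For concreteness, a sketch (ours) of the unnormalized log-posterior of (6) for the logistic likelihood, in the form one would hand to a generic MCMC routine such as the slice sampler used in Section 5.2:

```python
# Sketch (ours): unnormalized log-posterior of equation (6), logistic case.
# The prior term uses precision n*rho, matching covariance (n*rho)^{-1} I.
import numpy as np

def log_posterior(w, X, y, rho):
    n = len(y)
    z = X @ w
    loglik = np.sum(y * z - np.logaddexp(0.0, z))  # y_i w^T x_i - A(w^T x_i)
    logprior = -0.5 * n * rho * np.dot(w, w)
    return loglik + logprior
```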
4.2 Mechanisms and Privacy Guarantees
First, we introduce some assumptions that characterize the subset of GLMs and the corresponding training data on which RDP can be guaranteed.
Assumption 1. (1) X is a bounded domain such that ||x||₂ ≤ c for all x ∈ X, and x_i ∈ X for all (x_i, y_i) ∈ D. (2) Y is a bounded domain such that Y ⊆ [y_min, y_max], and y_i ∈ Y for all (x_i, y_i) ∈ D. (3) g^{−1} has bounded range such that g^{−1} ∈ [μ_min, μ_max]. Then, let B = max{ |y_min − μ_max|, |y_max − μ_min| }.
Example: Binary Regression with Bounded X. Binary regression is used in the case where y takes values in Y = {0, 1}. There are three common types of binary regression: logistic regression with g^{−1}(w^T x) = 1 / (1 + exp(−w^T x)), probit regression with g^{−1}(w^T x) = Φ(w^T x) where Φ is the Gaussian cdf, and complementary log-log regression with g^{−1}(w^T x) = 1 − exp(−exp(w^T x)). In these three cases, Y = {0, 1}, g^{−1} has range (0, 1), and thus B = 1. Moreover, it is often assumed for binary regression that any example lies in a bounded domain, i.e., ||x||₂ ≤ c for x ∈ X.
Now we establish the privacy guarantee for sampling directly from the posterior in (6) in Theorem 16. We also show that this privacy bound is tight for logistic regression; a detailed analysis is in the appendix.
Theorem 16. Suppose we are given a GLM and a dataset D of size n that satisfies Assumption 1, and a Gaussian prior with covariance Σ = (nρ)^{−1} I. Then sampling from the posterior in (6) satisfies (λ, 2c²B²λ / (nρ))-RDP for all λ ≥ 1.
Notice that direct posterior sampling cannot achieve (λ, ε)-RDP for arbitrary λ and ε. We next present Algorithms 4 and 5, analogous to Algorithms 3 and 2 for the exponential family respectively, that guarantee any given RDP requirement. Algorithm 4 achieves a given RDP level by setting a stronger prior, while Algorithm 5 does so by raising the temperature of the likelihood.
Algorithm 4 Concentrated Posterior
Require: Dataset D of size n; Gaussian prior with covariance (nρ_0)^{−1} I; (λ, ε).
1: Set ρ = max{ 2c²B²λ / (nε), ρ_0 } in (6).
2: Sample w ∼ p(w | D) in (6).

Algorithm 5 Diffuse Posterior
Require: Dataset D of size n; Gaussian prior with covariance (nρ)^{−1} I; (λ, ε).
1: Replace p(y_i | w, x_i) with p(y_i | w, x_i)^γ in (6), where γ = min{ 1, nρε / (2c²B²λ) }.
2: Sample w ∼ p(w | D) in (6).
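The two parameter settings can be transcribed directly; a sketch with our own function names, following the formulas in the algorithm boxes above:

```python
# Sketch (ours): parameter settings for Algorithms 4 and 5 (c bounds ||x||_2,
# B the residual range from Assumption 1, rho the prior precision scale).
def concentrated_prior_scale(lam, eps, n, c, B, rho0):
    # Algorithm 4: strengthen the prior until direct sampling is (lam, eps)-RDP.
    return max(2.0 * c**2 * B**2 * lam / (n * eps), rho0)

def likelihood_temperature(lam, eps, n, c, B, rho):
    # Algorithm 5: temper the likelihood, p(y_i|w,x_i) -> p(y_i|w,x_i)^gamma.
    return min(1.0, n * eps * rho / (2.0 * c**2 * B**2 * lam))
```

In the log-posterior sketch above, tempering simply multiplies the loglik term by γ.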
It follows directly from Theorem 16 that under Assumption 1, Algorithm 4 satisfies (λ, ε)-RDP.
Theorem 17. Suppose we are given a GLM and a dataset D of size n that satisfies Assumption 1, and a Gaussian prior with covariance Σ = (nρ)^{−1} I. Then Algorithm 5 guarantees (λ, ε)-RDP. In fact, it guarantees (λ′, (λ′ / λ) ε)-RDP for any λ′ ≥ 1.
5 Experiments
In this section, we present the experimental results for our proposed algorithms for both exponential family and GLMs. Our experimental design focuses on two goals: first, analyzing the relationship between λ and ε in our privacy guarantees, and second, exploring the privacy-utility trade-off of our proposed methods in relation to existing methods.
5.1 Synthetic Data: Beta-Bernoulli Sampling Experiments
In this section, we consider posterior sampling in the Beta-Bernoulli system. We compare three algorithms. As a baseline, we select a modified version of the algorithm in [10], which privatizes the sufficient statistic of the data to create a privatized posterior. Instead of the Laplace noise used by [10], we use Gaussian noise for the privatization; [14] shows that if Gaussian noise with variance σ² is added, then this offers an RDP guarantee of (λ, λΔ² / (2σ²)) for Δ-bounded exponential families. We also consider the two algorithms presented in Section 3.1, Algorithm 2 and 3; observe that Algorithm 1 is a special case of both. 500 iterations of binary search were used to select r and m when needed.
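A sketch (ours) of this Gaussian-mechanism baseline for the Beta-Bernoulli system; the clipping step corresponds to the projection discussed below:

```python
# Sketch (ours): privatize the summed sufficient statistic with Gaussian
# noise, project back onto the feasible range [0, n], and form the induced
# posterior parameter.
import numpy as np

def gaussian_mechanism_posterior(eta0, data, sigma, rng=None):
    rng = rng or np.random.default_rng()
    k = float(np.sum(data)) + sigma * rng.standard_normal()
    k = min(max(k, 0.0), float(len(data)))  # projection step
    return (eta0[0] + k, eta0[1] + len(data))
```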
Achievable Privacy Levels. We plot the (λ, ε)-RDP parameters achieved by Algorithms 2 and 3 for a few values of r and m. These parameters are plotted for a prior η_0 = (6, 18) and the data size n = 100, which are selected arbitrarily for illustrative purposes. We plot over six values {0.1, 0.3, 0.5, 0.7, 0.9, 1} of the scaling constants r and m. The results are presented in Figure 1.
Our primary observation is the presence of vertical asymptotes for our proposed methods. Recall that any privacy level is achievable with our algorithms given small enough r or m; these plots demonstrate the interaction of λ and ε. As r and m decrease, the guarantees improve at each λ and even become finite at larger orders λ, but a vertical asymptote still exists. The results for the baseline are not plotted: it achieves RDP along any line of positive slope passing through the origin.
Privacy-Utility Tradeoff. We next evaluate the privacy-utility tradeoff of the algorithms by plotting KL(P || A) as a function of ε with λ fixed, where P is the true posterior and A is the output distribution of a mechanism. For Algorithms 2 and 3, the KL divergence can be evaluated in closed form. For the Gaussian mechanism, numerical integration was used to evaluate the KL divergence integral. We have arbitrarily chosen η_0 = (6, 18) and a data set X with 100 total trials and 38 successful trials. We have plotted the resulting divergences over a range of ε for λ = 2 in (a) and for λ = 15 in (b) of Figure 2. When λ = 2 < λ*, both Algorithms 2 and 3 reach zero KL divergence once direct sampling is possible. The Gaussian mechanism must always add nonzero noise. As ε → 0, Algorithm 3 approaches a point mass distribution heavily penalized by the KL divergence. Due to its projection step, the Gaussian mechanism follows a bimodal distribution as ε → 0. Algorithm 2 degrades to the prior, with modest KL divergence. When λ = 15 > λ*, the divergences for Algorithms 2 and 3 are bounded away from 0, while the Gaussian mechanism still approaches the truth as ε → ∞. In a non-private setting, the KL divergence would be zero.
Finally, we plot log p(X_H | ρ) as a function of ε, where ρ comes from one of the mechanisms applied to X. Both X and X_H consist of 100 Bernoulli trials with proportion parameter ρ = 0.5.
[Figure 1: Illustration of potential (λ, ε)-RDP curves for exponential family sampling. (a) Algorithm 2 with r ∈ {0.1, 0.3, 0.5, 0.7, 0.9} and the direct posterior; (b) Algorithm 3 with m ∈ {0.1, 0.3, 0.5, 0.7, 0.9} and the direct posterior.]
[Figure 2: Exponential family synthetic data experiments. (a) KL divergence vs. ε for λ = 2 < λ*; (b) KL divergence vs. ε for λ = 15 > λ*; (c) −log p(X_H) vs. ε for λ = 2; (d) −log p(X_H) vs. ε for λ = 15. Curves: Algorithm 2, Algorithm 3, the Gaussian mechanism, and (in (c) and (d)) the true posterior.]
This experiment was run 10000 times, and we report the mean and standard deviation. As in the previous section, we use a fixed prior of η_0 = (6, 18). The results are shown for λ = 2 in (c) and for λ = 15 in (d) of Figure 2. These results agree with the limiting behaviors in the KL test. This experiment is more favorable for Algorithm 3, as it degrades only to the log-likelihood under the mode of the prior. In this plot, we have included sampling from the true posterior as a non-private baseline.
5.2 Real Data: Bayesian Logistic Regression Experiments
We now experiment with Bayesian logistic regression with a Gaussian prior on three real datasets. We consider three algorithms: Algorithms 4 and 5, as well as the OPS algorithm proposed in [18] as a sanity check. OPS achieves pure differential privacy when the posterior has bounded support; for this algorithm, we thus truncate the Gaussian prior to make its support the L2 ball of radius c/ρ, which is the smallest data-independent ball guaranteed to contain the MAP classifier.
Achievable Privacy Levels. We consider the achievable RDP guarantees for our algorithms and OPS under the same set of parameters λ, c, ρ and B = 1. [18] shows that with the truncated prior, OPS guarantees 4c²/ρ-differential privacy, which implies (λ, 4c²/ρ)-RDP for all λ ∈ [1, ∞]; whereas our algorithm guarantees (λ, 2c²B²λ / (nρ))-RDP for all λ ≥ 1. Therefore our algorithm achieves better RDP guarantees at λ ≤ 2n / B², which is quite high in practice as n is the dataset size.
Privacy-Utility: Test Log-Likelihood and Error. We conduct Bayesian logistic regression on three real datasets: Abalone, Adult and MNIST. We perform binary classification tasks: abalones with fewer than 10 rings vs. the rest for Abalone, digit 3 vs. digit 8 for MNIST, and income ≤ 50K vs. > 50K for Adult. We encode all categorical features with one-hot encoding, resulting in 9 dimensions for Abalone, 100 dimensions for Adult and 784 dimensions for MNIST. We then scale each feature to the range [−0.5, 0.5], and normalize each example to norm 1. One third of each dataset is used for testing, and the rest for training. Abalone has 2784 training and 1393 test samples, Adult has 32561 and 16281, and MNIST has 7988 and 3994 respectively.
For all algorithms, we use an original Gaussian prior with ρ = 10^{−3}. The posterior sampling is done using slice sampling with 1000 burn-in samples. Notice that slice sampling does not give samples from the exact posterior. However, a number of MCMC methods are known to converge in total variation distance in time polynomial in the data dimension for log-concave posteriors (which is the case here) [17]. Thus, provided that the burn-in period is long enough, we expect the induced distribution to be quite close, and we leave an exact RDP analysis of the MCMC sampling as future work.
[Figure 3: Test error vs. privacy parameter ε for (a) Abalone, (b) Adult, and (c) MNIST 3vs8, with λ = 1, 10, 100 from top to bottom. Curves: Concentrated (Algorithm 4), Diffuse (Algorithm 5), OPS, and the true posterior.]
For privacy parameters, we set λ = 1, 10, 100 and ε ∈ {e^{−5}, e^{−4}, . . . , e³}. Figure 3 shows the test error averaged over 50 repeated runs. More experiments for test log-likelihood are presented in the appendix.
We see that both Algorithm 4 and Algorithm 5 achieve lower test error than OPS at all privacy levels and across all datasets. This is to be expected, since OPS guarantees pure differential privacy, which is stronger than RDP. Comparing Algorithms 4 and 5, we can see that the latter always achieves better utility.
6 Conclusion
The inherent randomness of posterior sampling and the mitigating influence of a prior can be made
to offer a wide range of privacy guarantees. Our proposed methods outperform existing methods in
specific situations. The privacy analyses of the mechanisms fit nicely into the recently introduced RDP
framework, which continues to present itself as a relaxation of DP worthy of further investigation.
Acknowledgements
This work was partially supported by NSF under IIS 1253942, ONR under N00014-16-1-2616, and a
Google Faculty Research Award.
References
[1] M. Bun, K. Nissim, U. Stemmer, and S. Vadhan. Differentially private release and learning of threshold functions. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 634–649. IEEE, 2015.
[2] M. Bun and T. Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. In Theory of Cryptography Conference, pages 635–658. Springer, 2016.
[3] K. Chaudhuri, D. Hsu, and S. Song. The large margin mechanism for differentially private maximization. In Neural Inf. Processing Systems, 2014.
[4] K. Chaudhuri and N. Mishra. When random sampling preserves privacy. In Annual International Cryptology Conference, pages 198–213. Springer, 2006.
[5] C. Dimitrakakis, B. Nelson, A. Mitrokotsa, and B. I. Rubinstein. Robust and private Bayesian inference. In International Conference on Algorithmic Learning Theory, pages 291–305. Springer, 2014.
[6] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 486–503. Springer, 2006.
[7] C. Dwork and J. Lei. Differential privacy and robust statistics. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 371–380. ACM, 2009.
[8] C. Dwork, A. Roth, et al. The algorithmic foundations of differential privacy, volume 9. Now Publishers, Inc., 2014.
[9] C. Dwork and G. N. Rothblum. Concentrated differential privacy. arXiv preprint arXiv:1603.01887, 2016.
[10] J. Foulds, J. Geumlek, M. Welling, and K. Chaudhuri. On the theory and practice of privacy-preserving Bayesian data analysis. In Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI), 2016.
[11] A. Machanavajjhala, D. Kifer, J. Abowd, J. Gehrke, and L. Vilhuber. Privacy: Theory meets practice on the map. In Data Engineering, 2008. ICDE 2008. IEEE 24th International Conference on, pages 277–286. IEEE, 2008.
[12] F. McSherry. How many secrets do you have? https://github.com/frankmcsherry/blog/blob/master/posts/2017-02-08.md, 2017.
[13] K. Minami, H. Arai, I. Sato, and H. Nakagawa. Differential privacy without sensitivity. In Advances in Neural Information Processing Systems, pages 956–964, 2016.
[14] I. Mironov. Rényi differential privacy. In Proceedings of IEEE 30th Computer Security Foundations Symposium CSF 2017, pages 263–275. IEEE, 2017.
[15] A. D. Sarwate and K. Chaudhuri. Signal processing and machine learning with differential privacy: Algorithms and challenges for continuous data. IEEE Signal Processing Magazine, 30(5):86–94, 2013.
[16] A. G. Thakurta and A. Smith. Differentially private feature selection via stability arguments, and the robustness of the lasso. In Conference on Learning Theory, pages 819–850, 2013.
[17] S. Vempala. Geometric random walks: a survey. Combinatorial and Computational Geometry, 52:573–612, 2005.
[18] Y.-X. Wang, S. E. Fienberg, and A. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. Proceedings of The 32nd International Conference on Machine Learning (ICML), pages 2493–2502, 2015.
[19] Y.-X. Wang, J. Lei, and S. E. Fienberg. On-average KL-privacy and its equivalence to generalization for max-entropy mechanisms. In International Conference on Privacy in Statistical Databases, pages 121–134. Springer, 2016.
[20] Z. Zhang, B. Rubinstein, and C. Dimitrakakis. On the differential privacy of Bayesian inference. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence (AAAI), 2016.
Online Learning with a Hint
Ofer Dekel
Microsoft Research
[email protected]
Nika Haghtalab
Computer Science Department
Carnegie Mellon University
[email protected]
Arthur Flajolet
Operations Research Center
Massachusetts Institute of Technology
[email protected]
Patrick Jaillet
EECS, LIDS, ORC
Massachusetts Institute of Technology
[email protected]
Abstract
We study a variant of online linear optimization where the player receives a hint about the loss function at the beginning of each round. The hint is given in the form of a vector that is weakly correlated with the loss vector on that round. We show that the player can benefit from such a hint if the set of feasible actions is sufficiently round. Specifically, if the set is strongly convex, the hint can be used to guarantee a regret of O(log(T)), and if the set is q-uniformly convex for q ∈ (2, 3), the hint can be used to guarantee a regret of o(√T). In contrast, we establish Ω(√T) lower bounds on regret when the set of feasible actions is a polyhedron.
1 Introduction
Online linear optimization is a canonical problem in online learning. In this setting, a player attempts to minimize an online adversarial sequence of loss functions while incurring a small regret, compared to the best offline solution. Many online algorithms exist that are designed to have a regret of O(√T) in the worst case, and it has been known that one cannot avoid a regret of Ω(√T) in the worst case.
While this worst-case perspective on online linear optimization has led to elegant algorithms and deep connections to other fields, such as boosting [9, 10] and game theory [4, 2], it can be overly pessimistic. In particular, it does not account for the fact that the player may have side-information that allows him to anticipate the upcoming loss functions and evade the Ω(√T) regret. In this work, we go beyond this worst-case analysis and consider online linear optimization when additional information in the form of a function that is correlated with the loss is presented to the player.
More formally, online convex optimization [24, 11] is a T-round repeated game between a player and an adversary. On each round, the player chooses an action x_t from a convex set of feasible actions K ⊆ R^d and the adversary chooses a convex bounded loss function f_t. Both choices are revealed and the player incurs a loss of f_t(x_t). The player then uses its knowledge of f_t to adjust its strategy for the subsequent rounds. The player's goal is to accumulate a small loss compared to the best fixed action in hindsight. This value is called regret and is a measure of success of the player's algorithm.
When the adversary is restricted to Lipschitz loss functions, several algorithms are known to guarantee O(√T) regret [24, 16, 11]. If we further restrict the adversary to strongly convex loss functions, the regret bound improves to O(log(T)) [14]. However, when the loss functions are linear, no online algorithm can have a regret of o(√T) [5]. In this sense, linear loss functions are the most difficult convex loss functions to handle [24].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we focus on the case where the adversary is restricted to linear Lipschitz loss functions. More specifically, we assume that the loss function f_t(x) takes the form c_t^T x, where c_t is a bounded loss vector in C ⊆ R^d. We further assume that the player receives a hint before choosing the action on each round. The hint in our setting is a vector that is guaranteed to be weakly correlated with the loss vector. Namely, at the beginning of round t, the player observes a unit-length vector v_t ∈ R^d such that v_t^T c_t ≥ α ||c_t||₂, where α is a small positive constant. So long as this requirement is met, the hint could be chosen maliciously, possibly by an adversary who knows how the player's algorithm uses the hint. Our goal is to develop a player strategy that takes these hints into account, and to understand when hints of this type make the problem provably easier and lead to smaller regret.
We show that the player's ability to benefit from the hints depends on the geometry of the player's action set K. Specifically, we characterize the roundness of the set K using the notion of (C, q)-uniform convexity for convex sets. In Section 3, we show that if K is a (C, 2)-uniformly convex set (or in other words, if K is a C-strongly convex set), then we can use the hint to design a player strategy that improves the regret guarantee to O((Cα)^{−1} log(T)), where our O(·) notation hides a polynomial dependence on the dimension d and other constants. Furthermore, as we show in Section 4, if K is a (C, q)-uniformly convex set for q ∈ (2, 3), we can use the hint to improve the regret to O((Cα)^{1/(1−q)} T^{(2−q)/(1−q)}), when the hint belongs to a small set of possible hints at every step.
In Section 5, we prove lower bounds on the regret of any online algorithm in this model. We first show that when K is a polyhedron, such as an L1 ball, even a stronger form of hint cannot guarantee a regret of o(√T). Next, we prove a lower bound of Ω(log(T)) regret when K is strongly convex.
1.1 Comparison with Other Notions of Hints
The notion of hint that we introduce in this work generalizes some of the notions of predictability in online learning. Hazan and Megiddo [13] considered as an example a setting where the player knows the first coordinate of the loss vector at all rounds, and showed that when |c_{t,1}| ≥ α and the set of feasible actions is the Euclidean ball, one can achieve a regret of O(1/α · log(T)). Our work directly improves over this result, as in our setting a hint v_t = ±e_1 also achieves O(1/α · log(T)) regret, but we can deal with hints in different directions at different rounds and we allow for general uniformly convex action sets. Rakhlin and Sridharan [20] considered online learning with predictable sequences, with a notion of predictability that is concerned with the gradient of the convex loss functions. They show that if the player receives a hint M_t at round t, then the regret of the algorithm is at most O(√(∑_{t=1}^T ||∇f_t(x_t) − M_t||_*²)). In the case of linear loss functions, this implies that having an estimate vector c′_t within distance ε of the true loss vector c_t results in an improved regret bound of O(ε√T).

[Figure 1: Comparison between notions of hint.]

In contrast, we consider a notion of hint that pertains to the direction of the loss vector rather than its location. Our work shows that merely knowing whether the loss vector positively or negatively correlates with another vector is sufficient to achieve an improved regret bound, when the set is uniformly convex. That is, rather than having access to an approximate value of c_t, we only need to have access to a halfspace that classifies c_t correctly with a margin. This notion of hint is weaker than the notion of hint in the work of Rakhlin and Sridharan [20] when the approximation error satisfies ||c_t − c′_t||₂ ≤ ε · ||c_t||₂ for ε ∈ [0, 1). In this case one can use c′_t / ||c′_t||₂ as the direction of the hint in our setting and achieve a regret of O(1/(1−ε) · log T) when
the set is strongly convex. This shows that when the set of feasible actions is strongly convex, a
directional hint can improve the regret bound beyond what has been known to be achievable by an
approximation hint. However, we note that our results require the hints to be always valid, whereas
the algorithm of Rakhlin and Sridharan [19] can adapt to the quality of the hints.
We discuss these works and other related works, such as [15], in more detail in Appendix A.
2 Preliminaries
We begin with a more formal definition of online linear optimization (without hints). Let A denote the player's algorithm for choosing its actions. On round t the player uses A and all of the information it has observed so far to choose an action x_t in a convex compact set K ⊆ R^d. Subsequently, the adversary chooses a loss vector c_t in a compact set C ⊆ R^d. The player and the adversary reveal their
actions and the player incurs the loss c_t^T x_t. The player's regret is defined as

R(A, c_{1:T}) = ∑_{t=1}^T c_t^T x_t − min_{x∈K} ∑_{t=1}^T c_t^T x.

In online linear optimization with hints, the player observes v_t ∈ R^d with ||v_t||₂ = 1 before choosing x_t, with the guarantee that v_t^T c_t ≥ α ||c_t||₂, for some α > 0.
We use uniform convexity to characterize the degree of convexity of the player's action set K. Informally, uniform convexity requires that the convex combination of any two points x and y on the boundary of K be sufficiently far from the boundary. A formal definition is given below.
Definition 2.1 (Pisier [18]). Let K be a convex set that is symmetric around the origin. K and the Banach space defined by K are said to be uniformly convex if for any 0 < ε < 2 there exists a δ > 0 such that for any pair of points x, y ∈ K with ||x||_K ≤ 1, ||y||_K ≤ 1, ||x − y||_K ≥ ε, we have ||(x + y)/2||_K ≤ 1 − δ. The modulus of uniform convexity δ_K(ε) is the best possible δ for that ε, i.e.,

δ_K(ε) = inf{ 1 − ||(x + y)/2||_K : ||x||_K ≤ 1, ||y||_K ≤ 1, ||x − y||_K ≥ ε }.

For brevity, we say that K is (C, q)-uniformly convex when δ_K(ε) = Cε^q, and we omit C when it is clear from the context.
Examples of uniformly convex sets include Lp balls for any 1 < p < ∞, with modulus of convexity δ_{Lp}(ε) = C_p ε^p for p ≥ 2 and a constant C_p, and δ_{Lp}(ε) = (p − 1)ε² for 1 < p ≤ 2. On the other hand, L1 and L∞ unit balls are not uniformly convex. When δ_K(ε) ∈ Θ(ε²), we say that K is strongly convex.
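As a quick sanity check on these definitions (ours, not from the paper), the Euclidean ball admits a closed-form modulus, δ(ε) = 1 − √(1 − ε²/4) ≈ ε²/8, which exhibits the quadratic behavior that makes the L2 ball strongly convex:

```python
# Sketch (ours): the modulus of uniform convexity of the Euclidean unit ball.
import numpy as np

def delta_l2(eps):
    return 1.0 - np.sqrt(1.0 - eps**2 / 4.0)

eps = np.linspace(0.01, 1.0, 5)
print(np.allclose(delta_l2(eps), eps**2 / 8.0, atol=1e-2))  # quadratic regime
```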
Another notion of convexity we use in this work is called exp-concavity. A function f : K → R is exp-concave with parameter β > 0 if exp(−β f(x)) is a concave function of x ∈ K. This is a weaker requirement than strong convexity when the gradient of f is uniformly bounded [14]. The next proposition shows that we can obtain regret bounds of order Θ(log(T)) in online convex optimization when the loss functions are exp-concave with parameter β.
Proposition 2.2 ([14]). Consider online convex optimization on a sequence of loss functions f_1, . . . , f_T over a feasible set K ⊆ R^d, such that for all t, f_t : K → R is exp-concave with parameter β > 0. There is an algorithm with runtime polynomial in d, which we call A_EXP, with a regret that is at most (d/β)(1 + log(T + 1)).
Throughout this work, we draw intuition from basic orthogonal geometry. Given any vector x and a hint v, we define x^∥_v = (x^T v) v and x^⊥_v = x − (x^T v) v as the parallel and the orthogonal components of x with respect to v. When the hint v is clear from the context we simply use x^∥ and x^⊥ to denote these vectors.
Naturally, our regret bounds involve a number of geometric parameters. Since the set of actions of the adversary C is compact, we can find G ≥ 0 such that ||c||₂ ≤ G for all c ∈ C. When K is uniformly convex, we take K = {w ∈ R^d | ||w||_K ≤ 1}, i.e., the unit ball of the norm ||·||_K. In this case, since all norms are equivalent in finite dimension, there exist R > 0 and r > 0 such that B_r ⊆ K ⊆ B_R, where B_r (resp. B_R) denotes the L2 ball centered at 0 with radius r (resp. R). This implies that (1/R)||·||₂ ≤ ||·||_K ≤ (1/r)||·||₂.
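A two-line sketch (ours) of this decomposition, which is used repeatedly in the algorithms below:

```python
# Sketch (ours): x = x_par + x_perp with respect to a unit-norm hint v.
import numpy as np

def split(x, v):
    x_par = np.dot(x, v) * v      # component along the hint direction
    return x_par, x - x_par       # (parallel, orthogonal)

x, v = np.array([1.0, 0.5]), np.array([0.0, 1.0])
x_par, x_perp = split(x, v)
assert np.allclose(x_par + x_perp, x) and abs(np.dot(x_perp, v)) < 1e-12
```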
3 Improved Regret Bounds for Strongly Convex K
At first sight, it is not immediately clear how one should use the hint. Since v_t is guaranteed to satisfy c_t^T v_t ≥ α ||c_t||₂, moving the action x in the direction −v_t always decreases the loss. One could hope to get the most benefit out of the hint by choosing x_t to be the extremal point in K in the direction −v_t. However, this naïve strategy could lead to a linear regret in the worst case. For example, say that c_t = (1, 1/2) and v_t = (0, 1) for all t and let K be the Euclidean unit ball. Choosing x_t = −v_t would incur a loss of −T/2, while the best fixed action in hindsight, the point (−2/√5, −1/√5), would incur a loss of −(√5/2) T. The player's regret would therefore be ((√5 − 1)/2) T.
Intuitively, the flaw of this naïve strategy is that the hint does not give the player any information about the (d − 1)-dimensional subspace orthogonal to v_t. Our solution is to use standard online learning machinery to learn how to act in this orthogonal subspace. Specifically, on round t, we use v_t to define the following virtual loss function:

c̃_t(x) = min_{w∈K : w^⊥ = x^⊥} c_t^T w.

In words, we consider the 1-dimensional subspace spanned by v_t and its (d − 1)-dimensional orthogonal subspace separately. For any action x ∈ K, we find another point, w ∈ K, that equals x in the (d − 1)-dimensional orthogonal subspace, but otherwise incurs the optimal loss. The value of the virtual loss c̃_t(x) is defined to be the value of the original loss function c_t at w. The virtual loss simulates the process of moving x as far as possible in the direction −v_t without changing its value in any other direction (see Figure 2). This can be more formally seen by the following equation:

argmin_{w∈K : w^⊥ = x^⊥} c_t^T w = argmin_{w∈K : w^⊥ = x^⊥} (c_t^⊥)^T x^⊥ + (c_t^∥)^T w = argmin_{w∈K : w^⊥ = x^⊥} v_t^T w,   (1)

where the last transition holds by the fact that c_t^∥ = ||c_t^∥||₂ · v_t since the hint is valid.

[Figure 2: The virtual loss function c̃(·).]

This provides an intuitive understanding of a measure of convexity of our virtual loss functions. When K is uniformly convex, the function c̃_t(·) demonstrates convexity in the subspace orthogonal to v_t. To see that, note that for any x and y that lie in the space orthogonal to v_t, their midpoint (x + y)/2 transforms to a point that is farther away in the direction of −v_t than the midpoint of the transformations of x and y. As shown in Figure 3, the modulus of uniform convexity of K affects the degree of convexity of c̃_t(·). We note, however, that c̃_t(·) is not strongly convex in all directions. In fact, c̃_t(·) is constant in the direction of v_t. Nevertheless, the properties shown here allude to the fact that c̃_t(·) demonstrates some notion of convexity. As we show in the next lemma, this notion is indeed exp-concavity:

[Figure 3: Uniform convexity of the feasible set affects the convexity of the virtual loss function.]

Lemma 3.1. If K is (C, 2)-uniformly convex, then c̃_t(·) is (8αCr / (GR²))-exp-concave.

Proof. Let η = 8αCr / (GR²). Without loss of generality, we assume that c_t ≠ 0, otherwise c̃_t(·) = 0 is a constant function and the proof follows immediately. Based on the above discussion, it is not hard to see that c̃_t(·) is continuous (we prove this in more detail in Appendix D.1). So, to prove that c̃_t(·) is exp-concave, it is sufficient to show that

exp(−η · c̃_t((x + y)/2)) ≥ (1/2) exp(−η · c̃_t(x)) + (1/2) exp(−η · c̃_t(y))   ∀(x, y) ∈ K.

Consider (x, y) ∈ K and choose corresponding (x̄, ȳ) ∈ K such that c̃_t(x) = c_t^T x̄ and c̃_t(y) = c_t^T ȳ. Without loss of generality, we have ||x̄||_K = ||ȳ||_K = 1, as we can always choose corresponding x̄, ȳ that are extreme points of K. Since exp(−η c̃_t(·)) is decreasing in c̃_t(·), we have

exp(−η · c̃_t((x + y)/2)) = max_{w∈K : w^⊥ = ((x+y)/2)^⊥} exp(−η · c_t^T w).   (2)

Note that w = (x̄ + ȳ)/2 − δ_K(||x̄ − ȳ||_K) · v_t / ||v_t||_K satisfies ||w||_K ≤ 1, since ||(x̄ + ȳ)/2||_K + δ_K(||x̄ − ȳ||_K) ≤ 1 (see also Figure 3). Moreover, w^⊥ = ((x̄ + ȳ)/2)^⊥ = ((x + y)/2)^⊥. So, by using this w in Equation (2), we have

exp(−η · c̃_t((x + y)/2)) ≥ exp( −(η/2)(c_t^T x̄ + c_t^T ȳ) + η · (c_t^T v_t / ||v_t||_K) · δ_K(||x̄ − ȳ||_K) ).   (3)

On the other hand, since ||v_t||_K ≤ (1/r)||v_t||₂ = 1/r and ||x̄ − ȳ||_K ≥ (1/R)||x̄ − ȳ||₂, we have

exp( η · (c_t^T v_t / ||v_t||_K) · δ_K(||x̄ − ȳ||_K) ) ≥ exp( η · r · α · ||c_t||₂ · C · (1/R²) · ||x̄ − ȳ||₂² )
≥ exp( (ηαCr / R²) · ||c_t||₂ · ( c_t^T x̄ / ||c_t||₂ − c_t^T ȳ / ||c_t||₂ )² )
≥ exp( ( (η/2)(c_t^T x̄ − c_t^T ȳ) )² / 2 )
≥ (1/2) exp( (η/2)(c_t^T x̄ − c_t^T ȳ) ) + (1/2) exp( (η/2)(c_t^T ȳ − c_t^T x̄) ),

where the penultimate inequality follows by the definition of η and the last inequality is a consequence of the inequality exp(z²/2) ≥ (1/2) exp(z) + (1/2) exp(−z), ∀z ∈ R. Plugging the last inequality into (3) yields

exp(−η · c̃_t((x + y)/2)) ≥ exp( −(η/2)(c_t^T x̄ + c_t^T ȳ) ) · [ (1/2) exp( (η/2)(c_t^T x̄ − c_t^T ȳ) ) + (1/2) exp( (η/2)(c_t^T ȳ − c_t^T x̄) ) ]
= (1/2) exp(−η · c_t^T ȳ) + (1/2) exp(−η · c_t^T x̄)
= (1/2) exp(−η · c̃_t(y)) + (1/2) exp(−η · c̃_t(x)),

which concludes the proof.
Now, we use the sequence of virtual loss functions to reduce our problem to a standard online convex optimization problem (without hints). Namely, the player applies A_EXP (from Proposition 2.2), which is an online convex optimization algorithm known to have O(log(T)) regret with respect to exp-concave functions, to the sequence of virtual loss functions. Then our algorithm takes the action x̃_t ∈ K that is prescribed by A_EXP and moves it as far as possible in the direction of −v_t. This process is formalized in Algorithm 1.
Algorithm 1 A_hint FOR STRONGLY CONVEX K
For t = 1, . . . , T:
1. Use Algorithm A_EXP with the history c̃_τ(·) for τ < t, and let x̃_t be the chosen action.
2. Let x_t = argmin_{w∈K} v_t^T w s.t. w^⊥ = x̃_t^⊥. Play x_t and receive c_t as feedback.
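When K is the Euclidean unit ball, step 2 has a closed form: keep the orthogonal component of the A_EXP iterate and push the component along v_t as negative as the ball allows. A sketch (ours) under that assumption:

```python
# Sketch (ours): step 2 of Algorithm 1 for K = Euclidean unit ball.
import numpy as np

def push_along_hint(x_tilde, v):
    x_perp = x_tilde - np.dot(x_tilde, v) * v
    slack = max(0.0, 1.0 - float(np.dot(x_perp, x_perp)))
    return x_perp - np.sqrt(slack) * v   # argmin of v^T w with w_perp fixed

print(push_along_hint(np.array([0.6, 0.0]), np.array([0.0, 1.0])))  # [0.6, -0.8]
```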
Next, we show that the regret of algorithm AEXP on the sequence of virtual loss functions is an upper
bound on the regret of Algorithm 1.
Lemma 3.2. For any sequence of loss functions c_1, . . . , c_T, let R(A_hint, c_{1:T}) be the regret of algorithm A_hint on the sequence c_1, . . . , c_T, and R(A_EXP, c̃_{1:T}) be the regret of algorithm A_EXP on the sequence of virtual loss functions c̃_1, . . . , c̃_T. Then, R(A_hint, c_{1:T}) ≤ R(A_EXP, c̃_{1:T}).
Proof. Equation (1) provides the equivalent definition x_t = argmin_{w∈K} c_t^T w s.t. w^⊥ = x̃_t^⊥. Using this, we show that the loss of algorithm A_hint on the sequence c_{1:T} is the same as the loss of algorithm A_EXP on the sequence c̃_{1:T}:

∑_{t=1}^T c̃_t(x̃_t) = ∑_{t=1}^T min_{w∈K : w^⊥ = x̃_t^⊥} c_t^T w = ∑_{t=1}^T c_t^T ( argmin_{w∈K : w^⊥ = x̃_t^⊥} c_t^T w ) = ∑_{t=1}^T c_t^T x_t.

Next, we show that the offline optimum on the sequence c̃_{1:T} is more competitive than the offline optimum on the sequence c_{1:T}. First note that for any x and t, c̃_t(x) = min_{w∈K : w^⊥ = x^⊥} c_t^T w ≤ c_t^T x. Therefore, min_{x∈K} ∑_{t=1}^T c̃_t(x) ≤ min_{x∈K} ∑_{t=1}^T c_t^T x. The proof concludes by

R(A_hint, c_{1:T}) = ∑_{t=1}^T c_t^T x_t − min_{x∈K} ∑_{t=1}^T c_t^T x ≤ ∑_{t=1}^T c̃_t(x̃_t) − min_{x∈K} ∑_{t=1}^T c̃_t(x) = R(A_EXP, c̃_{1:T}).
Our main result follows from the application of Lemmas 3.1 and 3.2.
Theorem 3.3. Suppose that K ⊆ R^d is a (C, 2)-uniformly convex set that is symmetric around the origin, and B_r ⊆ K ⊆ B_R for some r and R. Consider online linear optimization with hints where the cost function at round t satisfies ||c_t||₂ ≤ G and the hint v_t is such that c_t^T v_t ≥ α ||c_t||₂, while ||v_t||₂ = 1. Algorithm 1 in combination with A_EXP has a worst-case regret of

R(A_hint, c_{1:T}) ≤ ( d · G · R² / (8α · C · r) ) · (1 + log(T + 1)).

Since A_EXP requires the coefficient of exp-concavity to be given as an input, α needs to be known a priori to be able to use Algorithm 1. However, we can use a standard doubling trick to relax this requirement and derive the same asymptotic regret bound. We defer the presentation of this argument to Appendix B.
4 Improved Regret Bounds for (C, q)-Uniformly Convex K
In this section, we consider any feasible set K that is (C, q)-uniformly convex for q ≥ 2. Our results differ from the previous section in two aspects. First, our algorithm can be used with (C, q)-uniformly convex feasible sets for any q ≥ 2, compared to the results of the previous section that only hold for strongly convex sets (q = 2). On the other hand, the approach in this section requires the hints to be restricted to a finite set of vectors V. We show that when K is (C, q)-uniformly convex for q > 2, our regret is O(T^{(q−2)/(q−1)}). If q ∈ (2, 3), this is an improvement over the worst-case regret of O(√T) guaranteed in the absence of hints.
We first consider the scenario where the hint is always pointing in the same direction, i.e. v_t = v for some v and all t ∈ [T]. In this case, we show how one can use a simple algorithm that picks the best performing action so far (a.k.a. the Follow-The-Leader algorithm) to obtain improved regret bounds. We then consider the case where the hint belongs to a finite set V. In this case, we instantiate one copy of the Follow-The-Leader algorithm for each v ∈ V and combine their outcomes in order to obtain improved regret bounds that depend on the cardinality of V, which we denote by |V|.
Lemma 4.1. Suppose that v_t = v for all t = 1, · · · , T and that K is (C, q)-uniformly convex, symmetric around the origin, with B_r ⊆ K ⊆ B_R for some r and R. Consider the algorithm, called Follow-The-Leader (FTL), that at every round t plays x_t ∈ argmin_{x∈K} ∑_{τ<t} c_τ^T x. If ∑_{τ=1}^t c_τ^T v ≥ 0 for all t = 1, · · · , T, then the regret is bounded as follows:

R(A_FTL, c_{1:T}) ≤ ( ||v||_K · R^q / (2C) )^{1/(q−1)} · ∑_{t=1}^T ( ||c_t||₂^q / ∑_{τ=1}^t c_τ^T v )^{1/(q−1)}.

Furthermore, when v is a valid hint with margin α, i.e., c_t^T v ≥ α · ||c_t||₂ for all t = 1, · · · , T, the right-hand side can be further simplified to obtain the regret bound:

R(A_FTL, c_{1:T}) ≤ (1 / (2κ)) · G · (ln(T) + 1)   if q = 2,

and

R(A_FTL, c_{1:T}) ≤ (1 / (2κ))^{1/(q−1)} · G · ((q − 1)/(q − 2)) · T^{(q−2)/(q−1)}   if q > 2,

where κ = C · α / (||v||_K · R^q).
Proof. We use a well-known inequality, known as the FT(R)L Lemma (see e.g., [12, 17]), on the regret incurred by the FTL algorithm:

R(A_FTL, c_{1:T}) ≤ ∑_{t=1}^T c_t^T (x_t − x_{t+1}).

Without loss of generality, we can assume that ||x_t||_K = ||x_{t+1}||_K = 1, since the maximum of a linear function is attained at a boundary point. Since K is (C, q)-uniformly convex, we have

||(x_t + x_{t+1})/2||_K ≤ 1 − δ_K(||x_t − x_{t+1}||_K).

This implies that

|| (x_t + x_{t+1})/2 + δ_K(||x_t − x_{t+1}||_K) · v / ||v||_K ||_K ≤ 1.

Moreover, x_{t+1} ∈ argmin_{x∈K} x^T ∑_{τ=1}^t c_τ. So, we have

( (x_t + x_{t+1})/2 + δ_K(||x_t − x_{t+1}||_K) · v / ||v||_K )^T ∑_{τ=1}^t c_τ ≥ inf_{x∈K} x^T ∑_{τ=1}^t c_τ = x_{t+1}^T ∑_{τ=1}^t c_τ.

Rearranging this last inequality and using the fact that ∑_{τ=1}^t v^T c_τ ≥ 0, we obtain:

( (x_t − x_{t+1})/2 )^T ∑_{τ=1}^t c_τ ≥ δ_K(||x_t − x_{t+1}||_K) · ( ∑_{τ=1}^t v^T c_τ ) / ||v||_K ≥ ( C · ||x_t − x_{t+1}||₂^q / (||v||_K · R^q) ) · ∑_{τ=1}^t v^T c_τ.

By definition of FTL, we have x_t ∈ argmin_{x∈K} x^T ∑_{τ=1}^{t−1} c_τ, which implies:

( (x_{t+1} − x_t)/2 )^T ∑_{τ=1}^{t−1} c_τ ≥ 0.

Summing up the last two inequalities and setting ρ = C / (||v||_K · R^q), we derive:

c_t^T ( (x_t − x_{t+1})/2 ) ≥ ρ · ||x_t − x_{t+1}||₂^q · ∑_{τ=1}^t v^T c_τ ≥ ρ · ( (c_t^T (x_t − x_{t+1}))^q / ||c_t||₂^q ) · ∑_{τ=1}^t v^T c_τ.

Rearranging this last inequality and using the fact that ∑_{τ=1}^t v^T c_τ ≥ 0, we obtain:

|c_t^T (x_t − x_{t+1})| ≤ (1 / (2ρ))^{1/(q−1)} · ( ||c_t||₂^q / ∑_{τ=1}^t v^T c_τ )^{1/(q−1)}.   (4)

Summing (4) over all t completes the proof of the first claim. The regret bounds for when v^T c_t ≥ α · ||c_t||₂ for all t = 1, · · · , T follow from the first regret bound. We defer this part of the proof to Appendix D.2.
Note that the regret bounds become O(T) when q → ∞. This is expected because Lq balls are q-uniformly convex for q ≥ 2 and converge to L∞ balls as q → ∞, and it is well known that Follow-The-Leader yields Θ(T) regret in online linear optimization when K is an L∞ ball.
Using the above lemma, we introduce an algorithm for online linear optimization with hints that belong to a set V. In this algorithm, we instantiate one copy of the FTL algorithm for each possible direction of the hint. On round t, we invoke the copy of the algorithm that corresponds to the direction of the hint v_t, using the history of the game for rounds with hints in that direction. We show that the overall regret of this algorithm is no larger than the sum of the regrets of the individual copies.

Algorithm 2 A_set: SET-OF-HINTS
For all v ∈ V, let T_v = ∅.
For t = 1, . . . , T:
1. Play x_t ∈ argmin_{x∈K} ∑_{τ∈T_{v_t}} c_τ^T x and receive c_t as feedback.
2. Update T_{v_t} ← T_{v_t} ∪ {t}.
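A sketch (ours) of Algorithm 2 for the Euclidean unit ball, where each FTL step has the closed form argmin_{||x||₂≤1} g^T x = −g/||g||₂; keying the FTL copies by the exact hint vector is our implementation choice:

```python
# Sketch (ours): one FTL copy per observed hint direction, K = L2 unit ball.
import numpy as np

class SetOfHints:
    def __init__(self, dim):
        self.dim = dim
        self.sums = {}                      # hint -> accumulated loss vector

    def play(self, v):
        g = self.sums.get(tuple(v), np.zeros(self.dim))
        norm = np.linalg.norm(g)
        return -g / norm if norm > 0 else np.zeros(self.dim)

    def update(self, v, c):
        key = tuple(v)
        self.sums[key] = self.sums.get(key, np.zeros(self.dim)) + np.asarray(c)
```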
Theorem 4.2. Suppose that K ⊆ R^d is a (C, q)-uniformly convex set that is symmetric around the origin, and B_r ⊆ K ⊆ B_R for some r and R. Consider online linear optimization with hints where the cost function at round t satisfies ||c_t||₂ ≤ G and the hint v_t comes from a finite set V and is such that c_t^T v_t ≥ α ||c_t||₂, while ||v_t||₂ = 1. Algorithm 2 has a worst-case regret of

R(A_set, c_{1:T}) ≤ |V| · ( R² / (2C · α · r) ) · G · (ln(T) + 1)   if q = 2,

and

R(A_set, c_{1:T}) ≤ |V| · ( R^q / (2C · α · r) )^{1/(q−1)} · G · ((q − 1)/(q − 2)) · T^{(q−2)/(q−1)}   if q > 2.
Proof. We decompose the regret as follows:

R(A_set, c_{1:T}) = ∑_{t=1}^T c_t^T x_t − inf_{x∈K} ∑_{t=1}^T c_t^T x ≤ ∑_{v∈V} ( ∑_{t∈T_v} c_t^T x_t − inf_{x∈K} ∑_{t∈T_v} c_t^T x ) ≤ |V| · max_{v∈V} R(A_FTL, c_{T_v}).

The proof follows by applying Lemma 4.1 and by using ||v_t||_K ≤ (1/r) · ||v_t||₂ = 1/r.
Note that A_set does not require α or V to be known a priori, as it can compile the set of hint directions as it sees new ones. Moreover, if the hints are not limited to a finite set V a priori, then the algorithm can first discretize the L2 unit ball with an α/2-net and approximate any given hint with one of the hints in the discretized set. Using this discretization technique, Theorem 4.2 can be extended to the setting where the hints are not constrained to a finite set, while having a regret that is linear in the size of the α/2-net (exponential in the dimension d). Extensions of Theorem 4.2 are discussed in more detail in Appendix C.
5 Lower Bounds
The regret bounds derived in Sections 3 and 4 suggest that the curvature of K can make up for the lack of curvature of the loss function to get rates faster than O(√T) in online convex optimization, provided we receive additional information about the next move of the adversary in the form of a hint. In this section, we show that the curvature of the player's decision set K is necessary to get rates better than O(√T), even in the presence of a hint.
As an example, consider the unit cube, i.e. K = {x | ||x||_∞ ≤ 1}. Note that this set is not uniformly convex. Since the ith coordinate of points in such a set, namely x_i, has no effect on the range of acceptable values for the other coordinates, revealing one coordinate does not give us any information about the other coordinates x_j for j ≠ i. For example, suppose that c_t has each of its first two coordinates set to +1 or −1 with equal probability and all other coordinates set to 1. In this case, even after observing the last d − 2 coordinates of the loss vector, the problem is reduced to a standard online linear optimization problem in the 2-dimensional unit cube. This choice of c_t is known to incur a regret of Ω(√T) [1]. Therefore, online linear optimization with the set K = {x | ||x||_∞ ≤ 1}, even in the presence of hints, has a worst-case regret of Ω(√T). As it turns out, this result holds for any polyhedral set of actions. We prove this by means of a reduction to the lower bounds established in [8] that apply to the online convex optimization framework (without hint). We defer the proof to Appendix D.4.
Theorem 5.1. If the set of feasible actions is a polyhedron then, depending on the set C, either there exists a trivial algorithm that achieves zero regret or every online algorithm has worst-case regret Ω(√T). This is true even if the adversary is restricted to pick a fixed hint v_t = v for all t = 1, · · · , T.
At first sight, this result may come as a surprise. After all, since any L_p ball with 1 < p ≤ 2 is
strongly convex, one can hope to use a L_{1+ε} unit ball K₀ to approximate K when K is a L1 ball
(which is a polyhedron) and apply the results of Section 3 to achieve better regret bounds. The
problem with this approach is that the constant in the modulus of convexity of K₀ deteriorates when
p → 1, since δ_{L_p}(ε) = (p − 1) · ε², see [3]. As a result, the regret bound established in Theorem 3.3
becomes O((1/(p−1)) · log T). Since the best approximation of a L1 unit ball using a L_p ball is of the
form {x ∈ ℝ^d | d^{1−1/p} ‖x‖_p ≤ 1}, the distance between the offline benchmark in the definition
of regret when using K₀ instead of K can be as large as (1 − d^{1/p−1}) · T, which translates into an
additive term of order (1 − d^{1/p−1}) · T in the regret bound when using K₀ as a proxy for K. Due to
the inverse dependence of the regret bound obtained in Theorem 3.3 on p − 1, the optimal choice of
p = 1 + Õ(1/√T) leads to a regret of order Õ(√T).
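This trade-off is easy to probe numerically. The sketch below, with all universal constants dropped (our simplification), compares the two competing terms for a few choices of p.

```python
import math

def lp_proxy_terms(d, T, p):
    """Rough sizes of the two competing terms when the L_p ball is used as a
    proxy for the L_1 ball; constants are omitted, for illustration only."""
    curvature_term = math.log(T) / (p - 1)        # from the Theorem 3.3 rate
    mismatch_term = (1 - d ** (1 / p - 1)) * T    # offline benchmark mismatch
    return curvature_term, mismatch_term

d, T = 10, 100_000
# p - 1 of order ~ 1/sqrt(T) roughly balances the two terms.
for p in (1.001, 1 + 1 / math.sqrt(T), 1.5):
    print(p, lp_proxy_terms(d, T, p))
```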
Finally, we conclude with a result that suggests that O(log(T)) is, in fact, the optimal achievable
regret when K is strongly convex in online linear optimization with a hint. We defer the proof to
Appendix D.4.
Theorem 5.2. If K is a L2 ball then, depending on the set C, either there exists a trivial algorithm
that achieves zero regret or every online algorithm has worst-case regret Ω(log(T)). This is true
even if the adversary is restricted to pick a fixed hint v_t = v for all t = 1, . . . , T.
6 Directions for Future Research
We conjecture that the dependence of our regret bounds with respect to T is suboptimal when K is
(C, q)-uniformly convex for q > 2. We expect the optimal rate to converge to √T when q → ∞, as
L_q balls converge to L_∞ balls and it is well known that the minimax regret scales as √T in online
linear optimization without hints when the decision set is a L_∞ ball. However, this calls for the
development of an algorithm that is not based on a reduction to the Follow-The-Leader algorithm, as
discussed after Lemma 4.1.
We also conjecture that it is possible to relax the assumption that there are finitely many hints when
K is (C, q)-uniformly convex with q > 2 without incurring an exponential dependence of the regret
bounds (and the runtime) on the dimension d, see Appendix C. Again, this calls for the development
of an algorithm that is not based on a reduction to the Follow-The-Leader algorithm.
A solution that would alleviate the two aforementioned shortcomings would likely be derived through
a reduction to online convex optimization with convex functions that are (C, q)-uniformly convex,
for q ≥ 2, in all but one direction and constant in the other, in a similar fashion as done in Section
3 when q = 2. There has been progress in this direction in the literature, but, to the best of our
knowledge, no conclusive result yet. For instance, Vovk [23] studies a related problem but restricts
the study to the squared loss function. It is not clear if the setting studied in this paper can be reduced
to the setting of square loss function. Another example is given by [21], where the authors consider
online convex optimization with general (C, q)-uniformly convex functions in Banach spaces (with
no hint) achieving a regret of order O(T^{(q−2)/(q−1)}). Note that this rate matches the one derived in
Theorem 4.2. However, as noted above, our setting cannot be reduced to theirs because our virtual
loss functions are not uniformly convex in every direction.
Acknowledgments
Haghtalab was partially funded by an IBM Ph.D. fellowship and a Microsoft Ph.D. fellowship. Jaillet
acknowledges the research support of the Office of Naval Research (ONR) grant N00014-15-1-2083.
This work was partially done when Haghtalab was an intern at Microsoft Research, Redmond WA.
References
[1] Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Conference on Learning Theory (COLT), pages 415–424, 2008.
[2] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(1):121–164, 2012.
[3] Keith Ball, Eric A. Carlen, and Elliott H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Inventiones Mathematicae, 115(1):463–482, 1994.
[4] Avrim Blum and Yishay Mansour. Learning, regret minimization, and equilibria. In Algorithmic Game Theory, pages 79–102. 2007.
[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[6] Chao-Kai Chiang and Chi-Jen Lu. Online learning with queries. In Proceedings of the 21st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 616–629, 2010.
[7] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In Proceedings of the 25th Conference on Learning Theory (COLT), pages 6.1–6.24, 2012.
[8] Arthur Flajolet and Patrick Jaillet. No-regret learnability for piecewise linear losses. arXiv preprint arXiv:1411.5649, 2014.
[9] Yoav Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[11] Elad Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, pages 287–303, 2012.
[12] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proceedings of the 21st Conference on Learning Theory (COLT), 2008.
[13] Elad Hazan and Nimrod Megiddo. Online learning with prior knowledge. In Proceedings of the 20th Conference on Learning Theory (COLT), pages 499–513, 2007.
[14] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[15] Ruitong Huang, Tor Lattimore, András György, and Csaba Szepesvári. Following the leader and fast rates in linear prediction: Curved constraint sets and other regularities. In Proceedings of the 30th Annual Conference on Neural Information Processing Systems (NIPS), pages 4970–4978, 2016.
[16] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[17] H. Brendan McMahan. A survey of algorithms and analysis for adaptive online learning. Journal of Machine Learning Research, 18:1–50, 2017.
[18] Gilles Pisier. Martingales in Banach spaces (in connection with type and cotype). Manuscript, Course IHP, Feb. 2011, pages 2–8.
[19] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Proceedings of the 25th Conference on Learning Theory (COLT), pages 993–1019, 2013.
[20] Alexander Rakhlin and Karthik Sridharan. Optimization, learning, and games with predictable sequences. In Proceedings of the 27th Annual Conference on Neural Information Processing Systems (NIPS), pages 3066–3074, 2013.
[21] Karthik Sridharan and Ambuj Tewari. Convex games in Banach spaces. In Proceedings of the 23rd Conference on Learning Theory (COLT), pages 1–13, 2010.
[22] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications, pages 210–268. Cambridge University Press, 2012.
[23] Vladimir Vovk. Competing with wild prediction rules. Machine Learning, 69(2-3):193–212, 2007.
[24] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML), pages 928–936, 2003.
Identification of Gaussian Process State Space Models
Stefanos Eleftheriadis†, Thomas F.W. Nicholson†, Marc P. Deisenroth†‡, James Hensman†
†PROWLER.io, ‡Imperial College London
{stefanos, tom, marc, james}@prowler.io
Abstract
The Gaussian process state space model (GPSSM) is a non-linear dynamical system, where unknown transition and/or measurement mappings are described by
GPs. Most research in GPSSMs has focussed on the state estimation problem,
i.e., computing a posterior of the latent state given the model. However, the key
challenge in GPSSMs has not been satisfactorily addressed yet: system identification, i.e., learning the model. To address this challenge, we impose a structured
Gaussian variational posterior distribution over the latent states, which is parameterised by a recognition model in the form of a bi-directional recurrent neural
network. Inference with this structure allows us to recover a posterior smoothed
over sequences of data. We provide a practical algorithm for efficiently computing
a lower bound on the marginal likelihood using the reparameterisation trick. This
further allows for the use of arbitrary kernels within the GPSSM. We demonstrate
that the learnt GPSSM can efficiently generate plausible future trajectories of the
identified system after only observing a small number of episodes from the true
system.
1 Introduction
State space models can effectively address the problem of learning patterns and predicting behaviour
in sequential data. Due to their modelling power they have a vast applicability in various domains of
science and engineering, such as robotics, finance, neuroscience, etc. (Brown et al., 1998).
Most research and applications have focussed on linear state space models for which solutions for
inference (state estimation) and learning (system identification) are well established (Kalman, 1960;
Ljung, 1999). In this work, we are interested in non-linear state space models. In particular, we
consider the case where a Gaussian process (GP) (Rasmussen and Williams, 2006) is responsible for
modelling the underlying dynamics. This is widely known as the Gaussian process state space model
(GPSSM). We choose to build upon GPs for a number of reasons. First, they are non-parametric,
which makes them effective in learning from small datasets. This can be advantageous over well-known parametric models (e.g., recurrent neural networks, RNNs), especially in situations where
data are not abundant. Second, we want to take advantage of the probabilistic properties of GPs.
By using a GP for the latent transitions, we can get away with an approximate model and learn a
distribution over functions. This allows us to account for model errors whilst quantifying uncertainty,
as discussed and empirically shown by Schneider (1997) and Deisenroth et al. (2015). Consequently,
the system will not become overconfident in regions of the space where data are scarce.
System identification with the GPSSM is a challenging task. This is due to un-identifiability issues:
both states and transition functions are unknown. Most work so far has focused only on state
estimation of the GPSSM. In this paper, we focus on addressing the challenge of system identification
and based on recent work by Frigola et al. (2014) we propose a novel inference method for learning
the GPSSM. We approximate the entire process of the state transition function by employing the
framework of variational inference. We assume a Markov-structured Gaussian posterior distribution
over the latent states. The variational posterior can be naturally combined with a recognition model
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
based on bi-directional recurrent neural networks, which facilitate smoothing of the state posterior
over the data sequences. We present an efficient algorithm based on the reparameterisation trick for
computing the lower bound on the marginal likelihood. This significantly accelerates learning of the
model and allows for arbitrary kernel functions.
2 Gaussian process state space models
We consider the dynamical system
x_t = f(x_{t−1}, a_{t−1}) + ε_f,   y_t = g(x_t) + ε_g,   (1)
where t indexes time, x ∈ ℝ^D is a latent state, a ∈ ℝ^P are control signals (actions) and y ∈ ℝ^O
are measurements/observations. We assume i.i.d. Gaussian system/measurement noise ε_(·) ∼
N(0, σ²_(·) I). The state-space model in eq. (1) can be fully described by the measurement and
transition functions, g and f.
The key idea of a GPSSM is to model the transition function f and/or the measurement function g
in eq. (1) using GPs, which are distributions over functions. A GP is fully specified by a mean μ(·)
and a covariance/kernel function k(·, ·), see e.g., (Rasmussen and Williams, 2006). The covariance
function allows us to encode basic structural assumptions of the class of functions we want to model,
e.g., smoothness, periodicity or stationarity. A common choice for a covariance function is the radial
basis function (RBF).
Let f(·) denote a GP random function, and X = [x_i]_{i=1}^N be a series of points in the domain of
that function. Then, any finite subset of function evaluations, f = [f(x_i)]_{i=1}^N, are jointly Gaussian
distributed,
p(f | X) = N(f | μ, K_xx),   (2)
where the matrix K_xx contains evaluations of the kernel function at all pairs of datapoints in X, and
μ = [μ(x_i)]_{i=1}^N is the prior mean function. This property leads to the widely used GP regression
model: if Gaussian noise is assumed, the marginal likelihood can be computed in closed form,
enabling learning of the kernel parameters. By definition, the conditional distribution of a GP is
another GP. If we are to observe the values f at the input locations X, then we predict the values
elsewhere on the GP using the conditional
f(·) | f ∼ GP( μ(·) + k(·, X) K_xx⁻¹ (f − μ),  k(·, ·) − k(·, X) K_xx⁻¹ k(X, ·) ).   (3)
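For reference, a minimal NumPy sketch of the noise-free conditional in eq. (3); the jitter term and the example RBF kernel are our additions, for numerical stability and illustration only.

```python
import numpy as np

def gp_conditional(Xs, X, f, kernel, mu):
    """Posterior mean and covariance of eq. (3), conditioning on f = f(X).

    X: (N, D) observed inputs; Xs: (S, D) test inputs; f: (N,) observed values.
    """
    Kxx = kernel(X, X) + 1e-8 * np.eye(len(X))   # jitter (our addition)
    A = np.linalg.solve(Kxx, kernel(X, Xs)).T    # rows are k(*, X) Kxx^{-1}
    mean = mu(Xs) + A @ (f - mu(X))
    cov = kernel(Xs, Xs) - A @ kernel(X, Xs)
    return mean, cov

# Illustrative RBF kernel (unit length-scale) and a zero mean function.
rbf = lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))
zero = lambda A: np.zeros(len(A))
```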
Unlike the supervised setting, in the GPSSM we are presented with neither values of the function on
which to condition, nor with inputs to the function, since the hidden states x_t are latent. The challenge
of inference in the GPSSM lies in dually inferring the latent variables x and in fitting the Gaussian
process dynamics f(·).
In the GPSSM, we place independent GP priors on the transition function f in eq. (1) for each output
dimension of x_{t+1}, and collect realisations of those functions in the random variables f, such that
f_d(·) ∼ GP(μ_d(·), k_d(·, ·)),   f_t = [f_d(x̂_{t−1})]_{d=1}^D   and   p(x_t | f_t) = N(x_t | f_t, σ_f² I),   (4)
where we used the short-hand notation x̂_t = [x_t, a_t] to collect the state-action pair at time t. In this
work, we use a mean function that keeps the state constant, so μ_d(x̂_t) = x_t^{(d)}.
To reduce some of the un-identifiability problems of GPSSMs, we assume a linear measurement
mapping g, so that the data conditional is
p(y_t | x_t) = N(y_t | W_g x_t + b_g, σ_g² I).   (5)
The linear observation model g(x) = W_g x + b_g + ε_g is not limiting, since a non-linear g could be
replaced by additional dimensions in the state space (Frigola, 2015).
2.1 Related work
State estimation in GPSSMs has been proposed by Ko and Fox (2009a) and Deisenroth et al. (2009)
for filtering and by Deisenroth et al. (2012) and Deisenroth and Mohamed (2012) for smoothing
using both deterministic (e.g., linearisation) and stochastic (e.g., particles) approximations. These
approaches focused only on inference in learnt GPSSMs and not on system identification, since
learning of the state transition function f without observing the system's true state x is challenging.
Towards this approach, Wang et al. (2008), Ko and Fox (2009b) and Turner et al. (2010) proposed
methods for learning GPSSMs based on maximum likelihood estimation. Frigola et al. (2013)
followed a Bayesian treatment to the problem and proposed an inference mechanism based on
particle Markov chain Monte Carlo. Specifically, they first obtain sample trajectories from the
smoothing distribution that could be used to define a predictive density via Monte Carlo integration.
Then, conditioned on this trajectory they sample the model?s hyper-parameters. This approach
scales proportionally to the length of the time series and the number of the particles. To tackle
this inefficiency, Frigola et al. (2014) suggested a hybrid inference approach combining variational
inference and sequential Monte Carlo. Using the sparse variational framework from (Titsias, 2009) to
approximate the GP led to a tractable distribution over the state transition function that is independent
of the length of the time series.
An alternative to learning a state-space model is to follow an autoregressive strategy (as in MurraySmith and Girard, 2001; Likar and Kocijan, 2007; Turner, 2011; Roberts et al., 2013; Kocijan, 2016),
to directly model the mapping from previous to current observations. This can be problematic since
noise is propagated through the system during inference. To alleviate this, Mattos et al. (2015)
proposed the recurrent GP, a non-linear dynamical model that resembles a deep GP mapping from
observed inputs to observed outputs, with an autoregressive structure on the intermediate latent states.
They further followed the idea by Dai et al. (2015) and introduced an RNN-based recognition model
to approximate the true posterior of the latent state. A downside is the requirement to feed future
actions forward into the RNN during inference, in order to propagate uncertainty towards the outputs.
Another issue stems from the model's inefficiency in analytically computing expectations of the kernel
functions under the approximate posterior when dealing with high-dimensional latent states. Recently,
Al-Shedivat et al. (2016) introduced a recurrent structure to the manifold GP (Calandra et al., 2016).
They proposed to use an LSTM in order to map the observed inputs onto a non-linear manifold,
where the GP actually operates. For efficiency, they followed an approximate inference scheme
based on Kronecker products over Toeplitz-structured kernels.
3 Inference
Our inference scheme uses variational Bayes (see e.g., Beal, 2003; Blei et al., 2017). We first define
the form of the approximation to the posterior, q(·). Then we derive the evidence lower bound
(ELBO) with respect to which the posterior approximation is optimised in order to minimise the
Kullback-Leibler divergence between the approximate and true posterior. We detail how the ELBO is
estimated in a stochastic fashion and optimised using gradient-based methods, and describe how the
form of the approximate posterior is given by a recurrent neural network. The graphical models of
the GPSSM and our proposed approximation are shown in Figure 1.
3.1
Posterior approximation
Following the work by Frigola et al. (2014), we adopt a variational approximation to the posterior,
assuming factorisation between the latent functions f(·) and the state trajectories X. However,
unlike Frigola et al.'s work, we do not run particle MCMC to approximate the state trajectories, but
instead assume that the posterior over states is given by a Markov-structured Gaussian distribution
parameterised by a recognition model (see section 3.3). In concordance with Frigola et al. (2014), we
adopt a sparse variational framework to approximate the GP. The sparse approximation allows us to
deal with both (a) the unobserved nature of the GP inputs and (b) any potential computational scaling
issues with the GP by controlling the number of inducing points in the approximation.
The variational approximation to the GP posterior is formed as follows: Let Z = [z_1, . . . , z_M] be
some points in the same domain as x̂. For each Gaussian process f_d(·), we define the inducing
variables u_d = [f_d(z_m)]_{m=1}^M, so that the density of u_d under the GP prior is N(μ_d, K_zz), with
μ_d = [μ_d(z_m)]_{m=1}^M. We make a mean-field variational approximation to the posterior for U, taking
the form q(U) = ∏_{d=1}^D N(u_d | m_d, Σ_d). The variational posterior of the rest of the points on the
GP is assumed to be given by the same conditional distribution as the prior:
f_d(·) | u_d ∼ GP( μ_d(·) + k(·, Z) K_zz⁻¹ (u_d − μ_d),  k(·, ·) − k(·, Z) K_zz⁻¹ k(Z, ·) ).   (6)
Figure 1: The GPSSM with the GP state transition functions (left), and the proposed approximation with the
recognition model in the form of a bi-RNN (right). Black arrows show conditional dependencies of the model,
red arrows show the data-flow in the recognition.
Integrating this expression with respect to the prior distribution p(u_d) = N(μ_d, K_zz) gives the GP
prior in eq. (4). Integrating with respect to the variational distribution q(U) gives our approximation
to the posterior process, f_d(·) ∼ GP(μ̃_d(·), ṽ_d(·, ·)), with
μ̃_d(·) = μ_d(·) + k(·, Z) K_zz⁻¹ (m_d − μ_d),   (7)
ṽ_d(·, ·) = k(·, ·) − k(·, Z) K_zz⁻¹ [K_zz − Σ_d] K_zz⁻¹ k(Z, ·).   (8)
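For concreteness, a small NumPy sketch of the moment computations in eqs. (7)-(8) for one output dimension; the function signature and the jitter term are our illustrative choices.

```python
import numpy as np

def sparse_posterior_moments(Xs, Z, m_d, S_d, kernel, mu):
    """Variational posterior mean, eq. (7), and marginal variances, eq. (8),
    at test inputs Xs, given q(u_d) = N(m_d, S_d) at inducing inputs Z."""
    Kzz = kernel(Z, Z) + 1e-8 * np.eye(len(Z))    # jitter (our addition)
    A = np.linalg.solve(Kzz, kernel(Z, Xs)).T     # rows are k(., Z) Kzz^{-1}
    mean = mu(Xs) + A @ (m_d - mu(Z))
    cov = kernel(Xs, Xs) - A @ (Kzz - S_d) @ A.T  # eq. (8), full covariance
    return mean, np.diag(cov)                     # marginal variances
```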
The approximation to the posterior of the state trajectory is assumed to have a Gauss-Markov structure:
q(x_0) = N(x_0 | m_0, L_0 L_0ᵀ),   q(x_t | x_{t−1}) = N(x_t | A_t x_{t−1}, L_t L_tᵀ).   (9)
This distribution is specified through a single mean vector m_0, a series of square matrices A_t, and
a series of lower-triangular matrices L_t. It serves as a locally linear approximation to an overall
non-linear posterior over the states. This is a good approximation provided that the ∆t between the
transitions is sufficiently small.
With the approximating distributions for the variational posterior defined in eq. (7)–(9), we are ready
to derive the evidence lower bound (ELBO) on the model's true likelihood. Following (Frigola, 2015,
eq. (5.10)), the ELBO is given by

ELBO = E_{q(x_0)}[log p(x_0)] + H[q(X)] − KL[q(U) ‖ p(U)]
     + E_{q(X)}[ Σ_{t=1}^{T} Σ_{d=1}^{D} ( −(1/(2σ_f²)) ṽ_d(x̂_{t−1}, x̂_{t−1}) + log N(x_t^{(d)} | μ̃_d(x̂_{t−1}), σ_f²) ) ]
     + E_{q(X)}[ Σ_{t=1}^{T} log N(y_t | g(x_t), σ_g² I_O) ],   (10)

where KL[·‖·] is the Kullback-Leibler divergence, and H[·] denotes the entropy. Note that with
the above formulation we can naturally deal with multiple episodic data, since the ELBO can be
factorised across independent episodes. We can now learn the GPSSM by optimising the ELBO
w.r.t. the parameters of the model and the variational parameters. A full derivation is provided in the
supplementary material.
The form of the ELBO justifies the Markov-structure that we have assumed for the variational
distribution q(X): we see that the latent states only interact over pairwise time steps x_t and x_{t−1};
adding further structure to q(X) is unnecessary.
3.2 Efficient computation of the ELBO
To compute the ELBO in eq. (10), we need to compute expectations w.r.t. q(X). Frigola et al.
(2014) showed that for the RBF kernel the relevant expectations can be computed in closed form in
a similar way to Titsias and Lawrence (2010). To allow for general kernels we propose to use the
reparameterisation trick (Kingma and Welling, 2014; Rezende et al., 2014) instead: by sampling
a single trajectory from q(X) and evaluating the integrands in eq. (10), we obtain an unbiased
estimate of the ELBO. To draw a sample from the Gauss-Markov structure in eq. (9), we first sample
ε_t ∼ N(0, I), t = 0, . . . , T, and then apply recursively the affine transformation
x_0 = m_0 + L_0 ε_0,   x_t = A_t x_{t−1} + L_t ε_t.   (11)
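A minimal sketch of this sampling scheme (the helper name and array shapes are our assumptions):

```python
import numpy as np

def sample_q_X(m0, L0, A_list, L_list, rng):
    """One reparameterised draw from the Gauss-Markov posterior q(X), eq. (11).

    m0: (D,) initial mean; L0 and every A_t, L_t in the lists: (D, D).
    """
    x = m0 + L0 @ rng.standard_normal(m0.shape)        # x_0 = m_0 + L_0 eps_0
    xs = [x]
    for A_t, L_t in zip(A_list, L_list):
        x = A_t @ x + L_t @ rng.standard_normal(m0.shape)  # eq. (11)
        xs.append(x)
    return np.stack(xs)   # (T+1, D); evaluate the integrands of eq. (10) on this
```

Because the sample is a differentiable function of (m_0, L_0, A_t, L_t), gradients of the ELBO estimate flow back to the variational parameters, which is what makes stochastic optimisation possible here.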
This simple estimator of the ELBO can then be used in optimisation using stochastic gradient methods;
we used the Adam optimizer (Kingma and Ba, 2015). It may seem initially counter-intuitive to use a
stochastic estimate of the ELBO where one is available in closed form, but this approach offers two
distinct advantages. First, computation is dramatically reduced: our scheme requires O(T D) storage
in order to evaluate the integrand in eq. (10) at a single sample from q(X). A scheme that computes
the integral in closed form requires O(TM²) (where M is the number of inducing variables in the
sparse GP) storage for the sufficient statistics of the kernel evaluations. The second advantage is that
we are no longer restricted to the RBF kernel, but can use any valid kernel for inference and learning
in GPSSMs. The reparameterisation trick also allows us to perform batched updates of the model
parameters, amounting to doubly stochastic variational inference (Titsias and Lázaro-Gredilla, 2014),
which we experimentally found to improve run-time and sample-efficiency.
Some of the elements of the ELBO in eq. (10) are still available in closed form. To reduce the
variance of the estimate of the ELBO we exploit this where possible: the entropy of the Gauss-Markov
structure is H[q(X)] = (TD/2) log(2πe) + Σ_{t=0}^{T} log(det(L_t)); the expected likelihood (last
term in eq. (10)) can be computed easily given the marginals of q(X), which are given by
q(x_t) = N(m_t, Σ_t),   m_t = A_t m_{t−1},   Σ_t = A_t Σ_{t−1} A_tᵀ + L_t L_tᵀ,   (12)
and the necessary Kullback-Leibler divergences can be computed analytically: we use the implementations from GPflow (Matthews et al., 2017).
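These recursions are straightforward to implement. The sketch below, our own helper that assumes each L_t is a Cholesky factor with a positive diagonal, returns the marginals of eq. (12) together with the entropy term quoted above.

```python
import numpy as np

def marginals_and_entropy(m0, L0, A_list, L_list):
    """Marginal moments q(x_t) = N(m_t, S_t), eq. (12), plus H[q(X)]."""
    m, S = m0, L0 @ L0.T
    means, covs = [m], [S]
    logdet = np.sum(np.log(np.diag(L0)))       # log det(L_0), Cholesky diag
    for A_t, L_t in zip(A_list, L_list):
        m = A_t @ m                            # m_t = A_t m_{t-1}
        S = A_t @ S @ A_t.T + L_t @ L_t.T      # S_t = A_t S_{t-1} A_t^T + L_t L_t^T
        means.append(m)
        covs.append(S)
        logdet += np.sum(np.log(np.diag(L_t)))
    T, D = len(A_list), m0.size
    entropy = 0.5 * T * D * np.log(2 * np.pi * np.e) + logdet  # as quoted above
    return means, covs, entropy
```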
3.3 A recurrent recognition model
The variational distribution of the latent trajectories in eq. (9) has a large number of parameters
(At , Lt ) that grows with the length of the dataset. Further, if we wish to train a model on multiple
episodes (independent data sequences sharing the same dynamics), then the number of parameters
grows further. To alleviate this, we propose to use a recognition model in the form of a bi-directional
recurrent neural network (bi-RNN), which is responsible for recovering the variational parameters
A_t, L_t.
A bi-RNN is a combination of two independent RNNs operating on opposite directions of the
sequence. Each network is specified by two weight matrices W acting on a hidden state h:
h_t^(f) = φ(W_h^(f) h_{t−1}^(f) + W_ŷ^(f) ŷ_t + b_h^(f)),   forward passing   (13)
h_t^(b) = φ(W_h^(b) h_{t+1}^(b) + W_ŷ^(b) ŷ_t + b_h^(b)),   backward passing   (14)
where ŷ_t = [y_t, a_t] denotes the concatenation of the observed data and control actions, and the
superscripts denote the direction (forward/backward) of the RNN. The activation function φ (we use
the tanh function) acts on each element of its argument separately. In our experiments we found that
using gated recurrent units (Cho et al., 2014) improved performance of our model. We now make the
parameters of the Gauss-Markov structure dependent on the sequences h^(f), h^(b), so that
A_t = reshape(W_A [h_t^(f); h_t^(b)] + b_A),   L_t = reshape(W_L [h_t^(f); h_t^(b)] + b_L).   (15)
The parameters of the Gauss-Markov structure q(X) are now almost completely encapsulated in the
recurrent recognition model as W_h^(f,b), W_ŷ^(f,b), W_A, W_L, b_h^(f,b), b_A, b_L. We only need to infer the
parameters of the initial state, m_0, L_0, for each episode; this is where we utilise the functionality of the
bi-RNN structure. Instead of directly learning the initial state q(x_0), we can now obtain it indirectly
via the output state of the backward RNN. Another nice property of the proposed recognition model
is that now q(X) is recognised from both future and past observations, since the proposed bi-RNN
recognition model can be regarded as a forward and backward sequential smoother of our variational
posterior. Finally, it is worth noting the interplay between the variational distribution q(X) and the
recognition model. Recall that the variational distribution is a Bayesian linear approximation to the
non-linear posterior and is fully defined by the time-varying parameters A_t, L_t; the recognition
model has the role to recover these parameters via the non-linear and time-invariant RNN.
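The following NumPy sketch mirrors eqs. (13)-(15) with plain tanh cells; the paper reports that GRU cells work better, and all weight names in the parameter dictionary are our own labels.

```python
import numpy as np

def recognise(Y_hat, p, D):
    """Sketch of the bi-RNN recognition model, eqs. (13)-(15).

    Y_hat: (T, P) rows hold y_hat_t = [y_t, a_t]; p: dict of weights (our labels);
    D: latent state dimension. Returns the per-step A_t and L_t.
    """
    T, H = len(Y_hat), p['bf'].size
    hf, hb = np.zeros((T, H)), np.zeros((T, H))
    h = np.zeros(H)
    for t in range(T):                                  # forward pass, eq. (13)
        h = np.tanh(p['Wf'] @ h + p['Uf'] @ Y_hat[t] + p['bf'])
        hf[t] = h
    h = np.zeros(H)
    for t in reversed(range(T)):                        # backward pass, eq. (14)
        h = np.tanh(p['Wb'] @ h + p['Ub'] @ Y_hat[t] + p['bb'])
        hb[t] = h
    A_list, L_list = [], []
    for t in range(T):                                  # readout, eq. (15)
        ht = np.concatenate([hf[t], hb[t]])
        A_list.append((p['WA'] @ ht + p['bA']).reshape(D, D))
        # np.tril is one way (our choice) to enforce lower-triangular L_t.
        L_list.append(np.tril((p['WL'] @ ht + p['bL']).reshape(D, D)))
    return A_list, L_list
```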
4 Experiments
We benchmark the proposed GPSSM approach on data from one illustrative example and three
challenging non-linear data sets of simulated and real data. Our aim is to demonstrate that we can: (i)
5
benefit from the use of non-smooth kernels with our approximate inference and accurately model
non-smooth transition functions; (ii) successfully learn non-linear dynamical systems even from
noisy and partially observed inputs; (iii) sample plausible future trajectories from the system even
when trained with either a small number of episodes or long time sequences.

Figure 2: The learnt state transition function with different kernels (one panel per kernel: MGP, arc-cosine, RBF + Matern and RBF, each showing the GP posterior, the inducing points and the ground truth). The true function is given by eq. (16).
4.1 Non-linear system identification
We first apply our approach to a synthetic dataset generated broadly according to (Frigola et al.,
2014). The data is created using a non-linear, non-smooth transition function with additive state and
observation noise, according to: p(x_{t+1} | x_t) = N(f(x_t), σ_f²) and p(y_t | x_t) = N(x_t, σ_g²), where
f(x_t) = x_t + 1   if x_t < 4,   and   f(x_t) = 13 − 2x_t   otherwise.   (16)
In our experiments, we set the system and measurement noise variances to σ_f² = 0.01 and σ_g² = 0.1,
respectively, and generate 200 episodes of length 10 that were used as the observed data for training
the GPSSM. We used 20 inducing points (initialised uniformly across the range of the input data)
for approximating the GP and 20 hidden units for the recurrent recognition model. We evaluate the
following kernels: RBF, additive composition of the RBF (initial ℓ = 10) and Matern (ν = 1/2, initial
ℓ = 0.1), 0-order arc-cosine (Cho and Saul, 2009), and the MGP kernel (Calandra et al., 2016) (depth
5, hidden dimensions [3, 2, 3, 2, 3], tanh activation, Matern (ν = 1/2) compound kernel).
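For reproducibility, a minimal generator for this synthetic system; the initial-state distribution is our assumption, as the text does not specify it.

```python
import numpy as np

def f_true(x):
    """Non-smooth transition of eq. (16): a kink at x = 4."""
    return np.where(x < 4.0, x + 1.0, 13.0 - 2.0 * x)

def make_episodes(n=200, T=10, sf2=0.01, sg2=0.1, seed=0):
    """Generate n episodes of length T from the Section 4.1 system."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 6.0, size=n)   # initial states: our assumption
    Y = np.empty((n, T))
    for t in range(T):
        Y[:, t] = x + rng.normal(0.0, np.sqrt(sg2), size=n)    # y_t ~ N(x_t, sg2)
        x = f_true(x) + rng.normal(0.0, np.sqrt(sf2), size=n)  # x_{t+1} ~ N(f(x_t), sf2)
    return Y
```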
The learnt GP state transition functions are shown in Figure 2. With the non-smooth kernels we are
able to learn accurate transitions and model the instantaneous dynamical change, as opposed to the
smooth transition learnt with the RBF. Note that all non-smooth kernels place inducing points directly
on the peak (at x_t = 4) to model the kink, whereas the RBF kernel explains this behaviour as a longer-scale wiggliness of the posterior process. When using a kernel without the RBF component the GP
posterior quickly reverts to the mean function (μ(x) = x) as we move away from the data: the short
length-scales that enable them to model the instantaneous change prevent them from extrapolating
downwards in the transition function. The composition of the RBF and Matern kernel benefits from
long and short length scales and can better extrapolate. The posteriors can be viewed across a longer
range of the function space in the supplementary material.
4.2 Modelling cart-pole dynamics
We demonstrate the efficacy of the proposed GPSSM on learning the non-linear dynamics of the
cart-pole system from (Deisenroth and Rasmussen, 2011). The system is composed of a cart running
on a track, with a freely swinging pendulum attached to it. The state of the system consists of the
cart's position and velocity, and the pendulum's angle and angular velocity, while a horizontal force
(action) a ∈ [−10, 10] N can be applied to the cart. We used the PILCO algorithm from (Deisenroth
and Rasmussen, 2011) to learn a feedback controller that swings the pendulum and balances it in
the inverted position in the middle of the track. We collected trajectory data from 16 trials during
learning; each trajectory/episode was 4 s (40 time steps) long.
When training the GPSSM for the cart-pole system we used data up to the first 15 episodes. We
used 100 inducing points to approximate the GP function with a Matern ν = 1/2 and 50 hidden units
for the recurrent recognition model. The learning rate for the Adam optimiser was set to 10⁻³. We
qualitatively assess the performance of our model by feeding the control sequence of the last episode
to the GPSSM in order to generate future responses.
Figure 3: Predicting the cart's position and pendulum's angle behaviour from the cart-pole dataset by applying
the control signal of the testing episode to sampled future trajectories from the proposed GPSSM (panels: 2, 8
and 15 training episodes, i.e., 80, 320 and 600 time steps in total). Learning of the dynamics is demonstrated
with observed (upper row) and hidden (lower row) velocities and with increasing number of training episodes.
Ground truth is denoted with the marked lines.
In Figure 3, we demonstrate the ability of the proposed GPSSM to learn the underlying dynamics
of the system from a different number of episodes with fully and partially observed data. In the top
row, the GPSSM observes the full 4D state, while in the bottom row, we train the GPSSM with only
the cart's position and the pendulum's angle observed (i.e., the true state is not fully observed since
the velocities are hidden). In both cases, sampling long-term trajectories based on only 2 episodes
for training does not result in plausible future trajectories. However, we could model part of the
dynamics after training with only 8 episodes (320 time steps interaction with the system), while
training with 15 episodes (600 time steps in total) allowed the GPSSM to produce trajectories similar
to the ground truth. It is worth emphasising the fact that the GPSSM could recover the unobserved
velocities in the latent states, which resulted in smooth transitions of the cart and swinging of the
pendulum. However, it seems that the recovered cart's velocity is overestimated. This is evidenced
by the increased variance in the prediction of the cart's position around 0 (the centre of the track).
Detailed fittings for each episode and learnt latent states with observed and hidden velocities are
provided in the supplementary material.
Table 1: Average Euclidean distance between the true and the predicted trajectories, measured at the
pendulum's tip. The error is in pendulum's length units.

          2 episodes   8 episodes   15 episodes
Kalman    1.65         1.52         1.48
ARGP      1.22         1.03         0.80
GPSSM     1.21         0.67         0.59

Figure 4: Predictions with lagged actions.
In Table 1, we provide the average Euclidean distance between the predicted and the true trajectories,
measured at the pendulum's tip, with fully observed states. We compare to two baselines: (i) the
auto-regressive GP (ARGP) that maps the tuple [y_{t−1}, a_{t−1}] to the next observation y_t (as in PILCO
(Deisenroth et al., 2015)), and (ii) a linear system for identification that uses the Kalman filtering
technique (Kalman, 1960). We see that the GPSSM significantly outperforms the baselines on this
highly non-linear benchmark. The linear system cannot learn the dynamics at all, while the ARGP
only manages to produce sensible error (less than a pendulum's length) after seeing 15 episodes. Note
that the GPSSM trained on 8 episodes produces trajectories with less error than the ARGP trained on
15 episodes.
We also ran experiments using lagged actions, where the partially observed state at time t is affected
by the action at t − 2. Figure 4 shows that we are able to sample future trajectories with an
accuracy similar to time-aligned actions. This indicates that our model is able to learn a compressed
representation of the full state and previous inputs, essentially "remembering" the lagged actions.
4.3 Modelling double pendulum dynamics
We demonstrate the learning and modelling of the dynamics of the double pendulum system
from (Deisenroth et al., 2015). The double pendulum is a two-link robot arm with two actuators. The state of the system consists of the angles and the corresponding angular velocities of the
inner and outer link, respectively, while different torques a_1, a_2 ∈ [−2, 2] Nm can be applied to the
two actuators. The task of swinging the double pendulum and balancing it in the upwards position
is extremely challenging. First, it requires the interplay of two correlated control signals (i.e., the
torques). Second, the behaviour of the system, when operating at free will, is chaotic.
We learn the underlying dynamics from episodic data (15 episodes, 30 time steps long each). Training
of the GPSSM was performed with data up to 14 episodes, while always demonstrating the learnt
underlying dynamics on the last episode, which serves as the test set. We used 200 inducing points to
approximate the GP function with a Matern ν = 1/2 and 80 hidden units for the recurrent recognition
model. The learning rate for the Adam optimiser was set to 10⁻³. The difficulty of the task is evident
in Figure 5, where we can see that even after observing 14 episodes we cannot accurately predict
the system's future behaviour for more than 15 time steps (i.e., 1.5 s). It is worth noting that we can
generate reliable simulation even though we observe only the pendulums' angles.
Figure 5: Predicting the inner and outer pendulum's angle from the double pendulum dataset by
applying the control signals of the testing episode to sampled future trajectories from the proposed
GPSSM (panels: 2, 8 and 14 training episodes). Learning of the dynamics is demonstrated with
observed (upper row) and hidden (lower row) angular velocities and with increasing number of
training episodes. Ground truth is denoted with the marked lines.
4.4 Modelling actuator dynamics
Here we evaluate the proposed GPSSM on real data from a hydraulic actuator that controls a robot
arm (Sjöberg et al., 1995). The input is the size of the actuator's valve opening and the output is
its oil pressure. We train the GPSSM on half the sequence (512 steps) and evaluate the model on
the remaining half. We use 15 inducing points to approximate the GP function with a combination
of an RBF and a Matern ν = 1/2 and 15 hidden units for the recurrent recognition model.

Figure 6: Demonstration of the identified model that controls the non-linear dynamics of the actuator dataset.
The model's fitting on the train data and sampled future predictions, after applying the control signal to the
system. Ground truth is denoted with the marked lines.

Figure 6
shows the fitting on the train data along with sampled future predictions from the learnt system when
operating on a free simulation mode. It is worth noting the correct capturing of the uncertainty from
the model at the points where the predictions are not accurate.
5 Discussion and conclusion
We have proposed a novel inference mechanism for the GPSSM, in order to address the challenging
task of non-linear system identification. Since our inference is based on the variational framework,
successful learning of the model relies on defining good approximations to the posterior of the latent
functions and states. Approximating the posterior over the dynamics with a sparse GP seems to be a
reasonable choice given our assumptions over the transition function. However, the difficulty remains
in the selection of the approximate posterior of the latent states. This is the key component that
enables successful learning of the GPSSM.
In this work, we construct the variational posterior so that it follows the same Markov properties as
the true states. Furthermore, it is enforced to have a simple-to-learn, linear, time-varying structure. To
ensure, though, that this approximation has rich representational capacity, we proposed to recover the
variational parameters of the posterior via a non-linear recurrent recognition model. Consequently,
the joint approximate posterior resembles the behaviour of the true system, which facilitates the
effective learning of the GPSSM.
In the experimental section we have provided evidence that the proposed approach is able to identify
latent dynamics in true and simulated data, even from partial and lagged observations, while requiring
only small data sets for this challenging task.
Acknowledgement
Marc P. Deisenroth has been supported by a Google faculty research award.
References
Maruan Al-Shedivat, Andrew G. Wilson, Yunus Saatchi, Zhiting Hu, and Eric P. Xing. Learning scalable deep kernels with recurrent structure. arXiv preprint arXiv:1610.08936, 2016.
Matthew J. Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, University of London, London, UK, 2003.
David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859–877, 2017.
Emery N. Brown, Loren M. Frank, Dengda Tang, Michael C. Quirk, and Matthew A. Wilson. A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18(18):7411–7425, 1998.
Roberto Calandra, Jan Peters, Carl E. Rasmussen, and Marc P. Deisenroth. Manifold Gaussian processes for regression. In IEEE International Joint Conference on Neural Networks, 2016.
KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342–350. 2009.
Zhenwen Dai, Andreas Damianou, Javier González, and Neil Lawrence. Variational auto-encoded deep Gaussian processes. In International Conference on Learning Representations, 2015.
Marc P. Deisenroth and Shakir Mohamed. Expectation propagation in Gaussian process dynamical systems. In Advances in Neural Information Processing Systems, pages 2618–2626, 2012.
Marc P. Deisenroth and Carl E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In International Conference on Machine Learning, pages 465–472, 2011.
Marc P. Deisenroth, Marco F. Huber, and Uwe D. Hanebeck. Analytic moment-based Gaussian process filtering. In International Conference on Machine Learning, pages 225–232, 2009.
Marc P. Deisenroth, Ryan D. Turner, Marco Huber, Uwe D. Hanebeck, and Carl E. Rasmussen. Robust filtering and smoothing with Gaussian processes. IEEE Transactions on Automatic Control, 57(7):1865–1871, 2012.
Marc P. Deisenroth, Dieter Fox, and Carl E. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(2):408–423, 2015.
Roger Frigola. Bayesian time series learning with Gaussian processes. PhD thesis, University of Cambridge, Cambridge, UK, 2015.
Roger Frigola, Fredrik Lindsten, Thomas B. Schön, and Carl E. Rasmussen. Bayesian inference and learning in Gaussian process state-space models with particle MCMC. In Advances in Neural Information Processing Systems, pages 3156–3164, 2013.
Roger Frigola, Yutian Chen, and Carl E. Rasmussen. Variational Gaussian process state-space models. In Advances in Neural Information Processing Systems, pages 3680–3688, 2014.
Rudolf E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the American Society of Mathematical Engineering, Journal of Basic Engineering, 82(D):35–45, 1960.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. In International Conference on Learning Representations, 2014.
Jonathan Ko and Dieter Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75–90, 2009a.
Jonathan Ko and Dieter Fox. Learning GP-BayesFilters via Gaussian process latent variable models. In Robotics: Science and Systems, 2009b.
Juš Kocijan. Modelling and control of dynamic systems using Gaussian process models. Springer, 2016.
Bojan Likar and Juš Kocijan. Predictive control of a gas-liquid separation plant based on a Gaussian process model. Computers & Chemical Engineering, 31(3):142–152, 2007.
Lennart Ljung. System identification: Theory for the user. Prentice Hall, 1999.
Alexander G. de G. Matthews. Scalable Gaussian process inference using variational methods. PhD thesis, University of Cambridge, Cambridge, UK, 2017.
Alexander G. de G. Matthews, James Hensman, Richard E. Turner, and Zoubin Ghahramani. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. In International Conference on Artificial Intelligence and Statistics, volume 51 of JMLR W&CP, pages 231–239, 2016.
Alexander G. de G. Matthews, Mark van der Wilk, Tom Nickson, Keisuke Fujii, Alexis Boukouvalas, Pablo León-Villagrá, Zoubin Ghahramani, and James Hensman. GPflow: A Gaussian process library using TensorFlow. Journal of Machine Learning Research, 18(40):1–6, 2017.
César Lincoln C. Mattos, Zhenwen Dai, Andreas Damianou, Jeremy Forth, Guilherme A. Barreto, and Neil D. Lawrence. Recurrent Gaussian processes. In International Conference on Learning Representations, 2015.
Roderick Murray-Smith and Agathe Girard. Gaussian process priors with ARMA noise models. In Irish Signals and Systems Conference, pages 147–152, 2001.
Carl E. Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning. The MIT Press, Cambridge, MA, USA, 2006.
Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning, pages 1278–1286, 2014.
Stephen Roberts, Michael Osborne, Mark Ebden, Steven Reece, Neale Gibson, and Suzanne Aigrain. Gaussian processes for time-series modelling. Philosophical Transactions of the Royal Society A, 371(1984):20110550, 2013.
Jeff G. Schneider. Exploiting model uncertainty estimates for safe dynamic control learning. In Advances in Neural Information Processing Systems. 1997.
Jonas Sjöberg, Qinghua Zhang, Lennart Ljung, Albert Benveniste, Bernard Delyon, Pierre-Yves Glorennec, Håkan Hjalmarsson, and Anatoli Juditsky. Nonlinear black-box modeling in system identification: A unified overview. Automatica, 31(12):1691–1724, 1995.
Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, volume 5 of JMLR W&CP, pages 567–574, 2009.
Michalis K. Titsias and Neil D. Lawrence. Bayesian Gaussian process latent variable model. In International Conference on Artificial Intelligence and Statistics, volume 9 of JMLR W&CP, pages 844–851, 2010.
Michalis K. Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning, pages 1971–1979, 2014.
Ryan D. Turner. Gaussian processes for state space models and change point detection. PhD thesis, University of Cambridge, Cambridge, UK, 2011.
Ryan D. Turner, Marc P. Deisenroth, and Carl E. Rasmussen. State-space inference and learning with Gaussian processes. In International Conference on Artificial Intelligence and Statistics, volume 9 of JMLR W&CP, pages 868–875, 2010.
Jack M. Wang, David J. Fleet, and Aaron Hertzmann. Gaussian process dynamical models for human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):283–298, 2008.
6,761 | 7,116 | Robust Imitation of Diverse Behaviors
Ziyu Wang*, Josh Merel*, Scott Reed, Greg Wayne, Nando de Freitas, Nicolas Heess
DeepMind
{ziyu,jsmerel,reedscot,gregwayne,nandodefreitas,heess}@google.com
Abstract
Deep generative models have recently shown great promise in imitation learning
for motor control. Given enough data, even supervised approaches can do one-shot
imitation learning; however, they are vulnerable to cascading failures when the
agent trajectory diverges from the demonstrations. Compared to purely supervised
methods, Generative Adversarial Imitation Learning (GAIL) can learn more robust
controllers from fewer demonstrations, but is inherently mode-seeking and more
difficult to train. In this paper, we show how to combine the favourable aspects
of these two approaches. The base of our model is a new type of variational
autoencoder on demonstration trajectories that learns semantic policy embeddings.
We show that these embeddings can be learned on a 9 DoF Jaco robot arm in
reaching tasks, and that interpolating between embeddings yields a correspondingly smooth interpolation
of reaching behavior. Leveraging these policy representations, we develop a new
version of GAIL that (1) is much more robust than the purely-supervised controller,
especially with few demonstrations, and (2) avoids mode collapse, capturing many
diverse behaviors when GAIL on its own does not. We demonstrate our approach
on learning diverse gaits from demonstration on a 2D biped and a 62 DoF 3D
humanoid in the MuJoCo physics environment.
1 Introduction
Building versatile embodied agents, both in the form of real robots and animated avatars, capable
of a wide and diverse set of behaviors is one of the long-standing challenges of AI. State-of-the-art
robots cannot compete with the effortless variety and adaptive flexibility of motor behaviors produced
by toddlers. Towards addressing this challenge, in this work we combine several deep generative
approaches to imitation learning in a way that accentuates their individual strengths and addresses
their limitations. The end product of this is a robust neural network policy that can imitate a large and
diverse set of behaviors using few training demonstrations.
We first introduce a variational autoencoder (VAE) [15, 26] for supervised imitation, consisting of a
bi-directional LSTM [13, 32, 9] encoder mapping demonstration sequences to embedding vectors,
and two decoders. The first decoder is a multi-layer perceptron (MLP) policy mapping a trajectory
embedding and the current state to a continuous action vector. The second is a dynamics model
mapping the embedding and previous state to the present state, while modelling correlations among
states with a WaveNet [39]. Experiments with a 9 DoF Jaco robot arm and a 9 DoF 2D biped walker,
implemented in the MuJoCo physics engine [38], show that the VAE learns a structured semantic
embedding space, which allows for smooth policy interpolation.
While supervised policies that condition on demonstrations (such as our VAE or the recent approach
of Duan et al. [6]) are powerful models for one-shot imitation, they require large training datasets in
order to work for non-trivial tasks. They also tend to be brittle and fail when the agent diverges too
much from the demonstration trajectories. These limitations of supervised learning for imitation, also
known as behavioral cloning (BC) [24], are well known [28, 29].
* Joint first authors.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Recently, Ho and Ermon [12] showed a way to overcome the brittleness of supervised imitation using
another type of deep generative model called Generative Adversarial Networks (GANs) [8]. Their
technique, called Generative Adversarial Imitation Learning (GAIL) uses reinforcement learning,
allowing the agent to interact with the environment during training. GAIL allows one to learn more
robust policies with fewer demonstrations, but adversarial training introduces another difficulty called
mode collapse [7]. This refers to the tendency of adversarial generative models to cover only a subset
of modes of a probability distribution, resulting in a failure to produce adequately diverse samples.
This will cause the learned policy to capture only a subset of control behaviors (which can be viewed
as modes of a distribution), rather than allocating capacity to cover all modes.
Roughly speaking, VAEs can model diverse behaviors without dropping modes, but do not learn
robust policies, while GANs give us robust policies but insufficiently diverse behaviors. In section
3, we show how to engineer an objective function that takes advantage of both GANs and VAEs to
obtain robust policies capturing diverse behaviors. In section 4, we show that our combined approach
enables us to learn diverse behaviors for a 9 DoF 2D biped and a 62 DoF humanoid, where the VAE
policy alone is brittle and GAIL alone does not capture all of the diverse behaviors.
2 Background and Related Work
We begin our brief review with generative models. One canonical way of training generative models is to maximize the likelihood of the data: $\max_\theta \sum_i \log p_\theta(x_i)$. This is equivalent to minimizing the Kullback-Leibler divergence between the distribution of the data and the model: $D_{KL}(p_{\text{data}}(\cdot) \,\|\, p_\theta(\cdot))$. For highly-expressive generative models, however, optimizing the log-likelihood is often intractable.

One class of highly-expressive yet tractable models are the auto-regressive models, which decompose the log-likelihood as $\log p(x) = \sum_i \log p_\theta(x_i | x_{<i})$. Auto-regressive models have been highly effective in both image and audio generation [40, 39].

Instead of optimizing the log-likelihood directly, one can introduce a parametric inference model over the latent variables, $q_\phi(z|x)$, and optimize a lower bound of the log-likelihood:

$\mathbb{E}_{q_\phi(z|x_i)}[\log p_\theta(x_i|z)] - D_{KL}(q_\phi(z|x_i) \,\|\, p(z)) \le \log p(x_i). \quad (1)$

For continuous latent variables, this bound can be optimized efficiently via the re-parameterization trick [15, 26]. This class of models is often referred to as VAEs.
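To make the bound concrete, the following is a minimal sketch of how (1) is typically optimized with the re-parameterization trick, assuming a Gaussian posterior and prior; the encoder and decoder modules are hypothetical placeholders rather than the architecture used later in this paper.

import torch

def negative_elbo(x, encoder, decoder):
    # q(z|x) = N(mu, std^2); encoder(x) -> (mu, log_std) is an assumed interface.
    mu, log_std = encoder(x)
    std = log_std.exp()
    # Re-parameterization trick: z = mu + std * eps with eps ~ N(0, I).
    z = mu + std * torch.randn_like(std)
    # decoder(z) is assumed to return a torch.distributions object for p(x|z).
    recon = decoder(z).log_prob(x).sum(dim=-1)
    # Closed-form KL(N(mu, std^2) || N(0, I)).
    kl = 0.5 * (mu.pow(2) + std.pow(2) - 2 * log_std - 1).sum(dim=-1)
    return (kl - recon).mean()  # minimizing this maximizes the bound in (1)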
GANs, introduced by Goodfellow et al. [8], have become very popular. GANs use two networks: a generator G and a discriminator D. The generator attempts to generate samples that are indistinguishable from real data. The job of the discriminator is then to tell apart the data and the samples, predicting 1 with high probability if the sample is real and 0 otherwise. More precisely, GANs optimize the following objective function:

$\min_G \max_D \; \mathbb{E}_{p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{p(z)}[\log(1 - D(G(z)))]. \quad (2)$
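As a rough sketch (not code from any of the papers cited here), the two sides of objective (2) translate into the following alternating losses; disc and gen are hypothetical modules, and D is assumed to output probabilities in (0, 1).

import torch

def gan_losses(disc, gen, x_real, z):
    x_fake = gen(z)
    # Discriminator ascends (2): push D(x_real) -> 1 and D(G(z)) -> 0.
    d_loss = -(torch.log(disc(x_real))
               + torch.log1p(-disc(x_fake.detach()))).mean()
    # Generator descends (2): minimize log(1 - D(G(z))).
    g_loss = torch.log1p(-disc(x_fake)).mean()
    return d_loss, g_loss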
Auto-regressive models, VAEs and GANs are all highly effective generative models, but have different
trade-offs. GANs were noted for their ability to produce sharp image samples, unlike the blurrier
samples from contemporary VAE models [8]. However, unlike VAEs and autoregressive models
trained via maximum likelihood, they suffer from the mode collapse problem [7]. Recent work has
focused on alleviating mode collapse in image modeling [2, 4, 19, 25, 42, 11, 27], but so far these
have not been demonstrated in the control domain. Like GANs, autoregressive models produce sharp
and at times realistic image samples [40], but they tend to be slow to sample from and unlike VAEs
do not immediately provide a latent vector representation of the data. This is why we used VAEs to
learn representations of demonstration trajectories.
We turn our attention to imitation. Imitation is the problem of learning a control policy that mimics a
behavior provided via a demonstration. It is natural to view imitation learning from the perspective
of generative modeling. However, unlike in image and audio modeling, in imitation the generation
process is constrained by the environment and the agent's actions, with observations becoming
accessible through interaction. Imitation learning brings its own unique challenges.
In this paper, we assume that we have been provided with demonstrations $\{\tau_i\}_i$, where the i-th trajectory of state-action pairs is $\tau_i = \{x^i_1, a^i_1, \ldots, x^i_{T_i}, a^i_{T_i}\}$. These trajectories may have been produced by either an artificial or natural agent.
As in generative modeling, we can easily apply maximum likelihood to imitation learning. For instance, if the dynamics are tractable, we can maximize the likelihood of the states directly: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log p(x^i_{t+1} | x^i_t, \pi_\theta(x^i_t))$. If a model of the dynamics is unavailable, we can instead maximize the likelihood of the actions: $\max_\theta \sum_i \sum_{t=1}^{T_i} \log \pi_\theta(a^i_t | x^i_t)$. The latter approach is what we referred to as behavioral cloning (BC) in the introduction.
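For reference, the action-likelihood variant of BC reduces to a one-line loss; this is a generic sketch, with policy(states) assumed to return a torch.distributions object over actions.

def behavioral_cloning_loss(policy, states, actions):
    # Negative log-likelihood of the demonstrated actions under pi_theta.
    return -policy(states).log_prob(actions).sum(dim=-1).mean()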
When demonstrations are plentiful, BC is effective [24, 30, 6]. Without abundant data, BC is known
to be inadequate [28, 29, 12]. The inefficiencies of BC stem from the sequential nature of the problem.
When using BC, even the slightest errors in mimicking the demonstration behavior can quickly
accumulate as the policy is unrolled. A good policy should correct for mistakes made previously, but
for BC to achieve this, the corrective behaviors have to appear frequently in the training data.
GAIL [12] avoids some of the pitfalls of BC by allowing the agent to interact with the environment and
learn from these interactions. It constructs a reward function using GANs to measure the similarity
between the policy-generated trajectories and the expert trajectories. As in GANs, GAIL adopts the
following objective function:

$\min_\theta \max_\psi \; \mathbb{E}_{\pi_E}[\log D_\psi(x, a)] + \mathbb{E}_{\pi_\theta}[\log(1 - D_\psi(x, a))], \quad (3)$

where $\pi_E$ denotes the expert policy that generated the demonstration trajectories.
To avoid differentiating through the system dynamics, policy gradient algorithms are used to train the policy by maximizing the discounted sum of rewards $r_\psi(x_t, a_t) = -\log(1 - D_\psi(x_t, a_t))$. Maximizing this reward, which may differ from the expert reward, drives $\pi_\theta$ to expert-like regions of the state-action space. In practice, trust region policy optimization (TRPO) is used to stabilize
the learning process [31]. GAIL has become a popular choice for imitation learning [16] and there
already exist model-based [3] and third-person [36] extensions. Two recent GAIL-based approaches
[17, 10] introduce additional reward signals that encourage the policy to make use of latent variables
which would correspond to different types of demonstrations after training. These approaches are
complementary to ours. Neither paper, however, demonstrates the ability to do one-shot imitation.
The literature on imitation including BC, apprenticeship learning and inverse reinforcement learning
is vast. We cannot cover this literature at the level of detail it deserves, and instead refer readers to
recent authoritative surveys on the topic [5, 1, 14]. Inspired by recent works, including [12, 36, 6],
we focus on taking advantage of the dramatic recent advances in deep generative modelling to learn
high-dimensional policies capable of learning a diverse set of behaviors from few demonstrations.
In graphics, a significant effort has been devoted to the design physics controllers that take advantage
of motion capture data, or key-frames and other inputs provided by animators [33, 35, 43, 22]. Yet,
as pointed out in a recent hierarchical control paper [23], the design of such controllers often requires
significant human insight. Our focus is on flexible, general imitation methods.
3 A Generative Modeling Approach to Imitating Diverse Behaviors

3.1 Behavioral cloning with variational autoencoders suited for control
In this section, we follow a similar approach to Duan et al. [6], but opt for stochastic VAEs, whose posterior distribution $q_\phi(z|x_{1:T})$ better regularizes the latent space.

In our VAE, an encoder maps a demonstration sequence to an embedding vector z. Given z, we decode both the state and action trajectories as shown in Figure 1. To train the model, we minimize the following loss:
"T
#
i
X
i i
i
i
L(?, w, ; ?i ) = Eq (z|xi1:T )
log ?? (at |xt , z)+log pw (xt+1 |xt , z) +DKL q (z|xi1:Ti )||p(z)
i
t=1
Our encoder $q_\phi$ uses a bi-directional LSTM. To produce the final embedding, it calculates the average of all the outputs of the second layer of this LSTM before applying a final linear transformation to generate the mean and standard deviation of a Gaussian. We take one sample from this Gaussian as our demonstration encoding.
The action decoder is an MLP that maps the concatenation of the state and the embedding to the
parameters of a Gaussian policy. The state decoder is similar to a conditional WaveNet model [39].
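A minimal sketch of this encoder might look as follows; the pooling of the second-layer outputs follows the description above, but all layer sizes are illustrative assumptions.

import torch
import torch.nn as nn

class DemonstrationEncoder(nn.Module):
    """Bi-directional LSTM encoder for q_phi(z | x_{1:T}); sizes are guesses."""
    def __init__(self, state_dim, hidden_dim=128, z_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.to_mu = nn.Linear(2 * hidden_dim, z_dim)
        self.to_log_std = nn.Linear(2 * hidden_dim, z_dim)

    def forward(self, states):            # states: (batch, T, state_dim)
        outputs, _ = self.lstm(states)    # top-layer outputs at every step
        pooled = outputs.mean(dim=1)      # average over time, as in the text
        mu = self.to_mu(pooled)
        std = self.to_log_std(pooled).exp()
        z = mu + std * torch.randn_like(std)   # one sample from the Gaussian
        return z, mu, std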
Figure 1: Schematic of the encoder-decoder architecture. LEFT: Bidirectional LSTM on demonstration states, followed by action and state decoders at each time step. RIGHT: State decoder model within a single time step, which is autoregressive over the state dimensions.
In particular, it conditions on the embedding z and the previous state $x_{t-1}$ to generate the vector $x_t$ autoregressively. That is, the autoregression is over the components of the vector $x_t$. WaveNet lessens the load of the encoder, which no longer has to carry information that can be captured by modeling auto-correlations between components of the state vector. Finally, instead of a Softmax, we use a mixture of Gaussians as the output of the WaveNet.
3.2 Diverse generative adversarial imitation learning
As pointed out earlier, it is hard for BC policies to mimic experts under environmental perturbations.
Our solution to obtain more robust policies from few demonstrations, which are also capable of
diverse behaviors, is to build on GAIL. Specifically, to enable GAIL to produce diverse solutions,
we condition the discriminator on the embeddings generated by the VAE encoder and integrate out
the GAIL objective with respect to the variational posterior $q_\phi(z|x_{1:T})$. Specifically, we train the discriminator by optimizing the following objective:

$\max_\psi \; \mathbb{E}_{\tau_i \sim \pi_E}\!\left\{ \mathbb{E}_{q(z|x^i_{1:T_i})}\!\left[ \frac{1}{T_i}\sum_{t=1}^{T_i} \log D_\psi(x^i_t, a^i_t|z) + \mathbb{E}_{\pi_\theta}\!\left[\log(1 - D_\psi(x, a|z))\right] \right] \right\}. \quad (4)$
A related work [20] introduces a conditional GAIL objective to learn controllers for multiple behaviors
from state trajectories, but the discriminator conditions on an annotated class label, as in conditional
GANs [21].
We condition on unlabeled trajectories, which have been passed through a powerful encoder, and
hence our approach is capable of one-shot imitation learning. Moreover, the VAE encoder enables us
to obtain a continuous latent embedding space where interpolation is possible, as shown in Figure 3.
Since our discriminator is conditional, the reward function is also conditional: $r_t(x_t, a_t|z) = -\log(1 - D_\psi(x_t, a_t|z))$. We also clip the reward so that it is upper-bounded. Conditioning on z allows us to generate an infinite number of reward functions, each of them tailored to imitating a different trajectory. Policy gradients, though mode-seeking, will not cause collapse into one particular mode due to the diversity of reward functions.
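In code, the clipped conditional reward could be sketched as below; the clip value is an illustrative assumption, and disc is assumed to output a probability.

import torch

def conditional_reward(disc, x, a, z, max_reward=10.0):
    # r(x, a | z) = -log(1 - D(x, a | z)), clipped to keep it upper-bounded.
    d = disc(x, a, z).clamp(1e-6, 1.0 - 1e-6)
    return torch.clamp(-torch.log1p(-d), max=max_reward)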
To better motivate our objective, let us temporarily leave the context of imitation learning and consider the following alternative value function for training GANs:

$\min_G \max_D V(G, D) = \int_y p(y) \int_z q(z|y) \left[ \log D(y|z) + \int_{\hat{y}} G(\hat{y}|z) \log(1 - D(\hat{y}|z))\, d\hat{y} \right] dz\, dy.$
This function is a simplification of our objective function. Furthermore, it satisfies the following
property.
Lemma 1. Assuming that q computes the true posterior distribution, that is, $q(z|y) = \frac{p(y|z)p(z)}{p(y)}$, then

$V(G, D) = \int_z p(z) \left[ \int_y p(y|z) \log D(y|z)\, dy + \int_{\hat{y}} G(\hat{y}|z) \log(1 - D(\hat{y}|z))\, d\hat{y} \right] dz.$
Algorithm 1 Diverse generative adversarial imitation learning.
INPUT: Demonstration trajectories $\{\tau_i\}_i$ and VAE encoder q.
repeat
  for j in {1, ..., n} do
    Sample trajectory $\tau_j$ from the demonstration set and sample $z_j \sim q(\cdot|x^j_{1:T_j})$.
    Run policy $\pi_\theta(\cdot|z_j)$ to obtain the trajectory $\hat{\tau}_j$.
  end for
  Update policy parameters via TRPO with rewards $r^j_t(x^j_t, a^j_t|z_j) = -\log(1 - D_\psi(x^j_t, a^j_t|z_j))$.
  Update discriminator parameters from $\psi_i$ to $\psi_{i+1}$ with gradient:

  $\nabla_\psi \left\{ \frac{1}{n} \sum_{j=1}^{n} \left[ \frac{1}{T_j} \sum_{t=1}^{T_j} \log D_\psi(x^j_t, a^j_t|z_j) + \frac{1}{\hat{T}_j} \sum_{t=1}^{\hat{T}_j} \log(1 - D_\psi(\hat{x}^j_t, \hat{a}^j_t|z_j)) \right] \right\}$

until Max iteration or time reached.
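The outer loop of Algorithm 1 can be sketched in Python as follows; run_policy, trpo_update, and the trajectory containers are hypothetical stand-ins, the encoder has the interface sketched earlier, and conditional_reward refers to the clipped reward sketched above.

import random
import torch

def diverse_gail(demos, encoder, policy, disc, disc_opt,
                 run_policy, trpo_update, n=8, iters=1000):
    for _ in range(iters):
        batch = [random.choice(demos) for _ in range(n)]
        zs = [encoder(d.states)[0] for d in batch]       # z_j ~ q(.|x_{1:T})
        rollouts = [run_policy(policy, z) for z in zs]   # hat{tau}_j from pi(.|z_j)
        # Policy step: TRPO on the clipped conditional rewards.
        rewards = [conditional_reward(disc, r.states, r.actions, z)
                   for r, z in zip(rollouts, zs)]
        trpo_update(policy, rollouts, rewards)
        # Discriminator step: ascend log D on demos, log(1 - D) on rollouts.
        loss = 0.0
        for d, r, z in zip(batch, rollouts, zs):
            loss = loss - torch.log(disc(d.states, d.actions, z)).mean()
            loss = loss - torch.log1p(-disc(r.states, r.actions, z)).mean()
        disc_opt.zero_grad()
        (loss / n).backward()
        disc_opt.step()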
If we further assume an optimal discriminator [8], the cost optimized by the generator then becomes

$C(G) = 2 \int_z p(z)\, \mathrm{JSD}\!\left[ p(\,\cdot\,|z) \,\|\, G(\,\cdot\,|z) \right] dz - \log 4, \quad (5)$
where JSD stands for the Jensen-Shannon divergence. We know that GANs approximately optimize
this divergence, and it is well documented that optimizing it leads to mode seeking behavior [37].
The objective defined in (5) alleviates this problem. Consider an example where p(x) is a mixture
of Gaussians and p(z) describes the distribution over the mixture components. In this case, the
conditional distribution p(x|z) is not multi-modal, and therefore minimizing the Jensen-Shannon
divergence is no longer problematic. In general, if the latent variable z removes most of the ambiguity,
we can expect the conditional distributions to be close to uni-modal and therefore our generators to
be non-degenerate. In light of this analysis, we would like q to be as close to the posterior as possible
and hence our choice of training q with VAEs.
We now turn our attention to some algorithmic considerations. We can use the VAE policy $\pi_\alpha(a_t|x_t, z)$ to accelerate the training of $\pi_\theta(a_t|x_t, z)$. One possible route is to initialize the weights $\theta$ to $\alpha$. However, before the policy behaves reasonably, the noise injected into the policy for exploration (when using stochastic policy gradients) can cause poor initial performance. Instead, we fix $\alpha$ and structure the conditional policy as follows:

$\pi_\theta(\,\cdot\,|x, z) = \mathcal{N}\!\left(\,\cdot\, \middle|\, \mu_\alpha(x, z) + \mu_\theta(x, z),\; \sigma_\theta(x, z)\right),$

where $\mu_\alpha$ is the mean of the VAE policy. Finally, the policy parameterized by $\theta$ is optimized with TRPO [31] while holding parameters $\alpha$ fixed, as shown in Algorithm 1.
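A sketch of this residual parameterization is given below; the frozen vae_policy_mean plays the role of mu_alpha, and for simplicity the standard deviation here is state-independent, a simplifying assumption rather than the paper's exact choice.

import torch
import torch.nn as nn

class ResidualPolicy(nn.Module):
    """pi_theta(.|x, z) = N(mu_alpha(x, z) + mu_theta(x, z), sigma_theta)."""
    def __init__(self, vae_policy_mean, obs_dim, z_dim, act_dim, hidden=128):
        super().__init__()
        self.mu_alpha = vae_policy_mean
        for p in self.mu_alpha.parameters():
            p.requires_grad = False               # alpha stays fixed
        self.mu_theta = nn.Sequential(
            nn.Linear(obs_dim + z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim))
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, x, z):
        mu = self.mu_alpha(x, z) + self.mu_theta(torch.cat([x, z], dim=-1))
        return torch.distributions.Normal(mu, self.log_std.exp())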
4 Experiments
The primary focus of our experimental evaluation is to demonstrate that the architecture allows
learning of robust controllers capable of producing the full spectrum of demonstration behaviors for
a diverse range of challenging control problems. We consider three bodies: a 9 DoF robotic arm,
a 9 DoF planar walker, and a 62 DoF complex humanoid (56-actuated joint angles, and a freely
translating and rotating 3d root joint). While for the reaching task BC is sufficient to obtain a working
controller, for the other two problems our full learning procedure is critical.
We analyze the resulting embedding spaces and demonstrate that they exhibit rich and sensible
structure that an be exploited for control. Finally, we show that the encoder can be used to capture
the gist of novel demonstration trajectories which can then be reproduced by the controller.
All experiments are conducted with the MuJoCo physics engine [38]. For details of the simulation
and the experimental setup, please see the appendix.
4.1 Robotic arm reaching
We first demonstrate the effectiveness of our VAE architecture and investigate the nature of the
learned embedding space on a reaching task with a simulated Jaco arm. The physical Jaco is a
robotic arm developed by Kinova Robotics.
Figure 3: Interpolation in the latent space for the Jaco arm. Each column shows three frames of a
target-reach trajectory (time increases across rows). The left and right most columns correspond to the
demonstration trajectories in between which we interpolate. Intermediate columns show trajectories
generated by our VAE policy conditioned on embeddings which are convex combinations of the
embeddings of the demonstration trajectories. Interpolating in the latent space indeed correspond to
interpolation in the physical dimensions.
To obtain demonstrations, we trained 60 independent policies to reach to random target locations² in
the workspace starting from the same initial configuration. We generated 30 trajectories from each of
the first 50 policies. These serve as training data for the VAE model (1500 training trajectories in
total). The remaining 10 policies were used to generate test data.
The reaching task is relatively simple, so with this amount
of data the VAE policy is fairly robust. After training,
the VAE encodes and reproduces the demonstrations as
shown in Figure 2. Representative examples can be found
in the video in the supplemental material.
To further investigate the nature of the embedding space
we encode two trajectories. Next, we construct the embeddings of interpolating policies by taking convex combinations of the embedding vectors of the two trajectories.
We condition the VAE policy on these interpolating embeddings and execute it. The results of this experiment
are illustrated with a representative pair in Figure 3. We
observe that interpolating in the latent space indeed corresponds to interpolation in task (trajectory endpoint)
space, highlighting the semantic meaningfulness of the
discovered latent space.
Figure 2: Trajectories for the Jaco arm's end-effector on test set demonstrations. The trajectories produced by the VAE policy and the corresponding demonstrations are plotted with the same color, illustrating that the policy can imitate well.
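The interpolation experiment itself needs only a few lines; rollout is a hypothetical helper that unrolls the conditioned policy in the environment, and the encoder has the interface sketched in Section 3.1.

import torch

def interpolate_policies(encoder, policy, demo_a, demo_b, rollout, steps=5):
    z_a, _, _ = encoder(demo_a)
    z_b, _, _ = encoder(demo_b)
    trajectories = []
    for w in torch.linspace(0.0, 1.0, steps):
        z = (1 - w) * z_a + w * z_b        # convex combination of embeddings
        trajectories.append(rollout(policy, z))
    return trajectories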
4.2 2D Walker
We found reaching behavior to be relatively easy to imitate, presumably because it does not involve
much physical contact. As a more challenging test we consider bipedal locomotion. We train 60
neural network policies for a 2d walker to serve as demonstrations.³ These policies are each trained
to move at different speeds both forward and backward depending on a label provided as additional
input to the policy. Target speeds for training were chosen from a set of four different speeds (m/s):
-1, 0, 1, 3. For the distribution of speeds that the trained policies actually achieve see Figure 4, top
right). Besides the target speed the reward function imposes few constraints on the behavior. The
resulting policies thus form a diverse set with several rather idiosyncratic movement styles. While for
most purposes this diversity is undesirable, for the present experiment we consider it a feature.
² See appendix for details.
³ See section A.2 in the appendix for details.
Figure 4: LEFT: t-SNE plot of the embedding vectors of the training trajectories; marker color indicates average speed. The plot reveals a clear clustering according to speed. Insets show pairs of frames from selected example trajectories. Trajectories nearby in the plot tend to correspond to similar movement styles even when differing in speed (e.g. see pair of trajectories on the right hand side of plot). RIGHT, TOP: Distribution of walker speeds for the demonstration trajectories. RIGHT, BOTTOM: Difference in speed between the demonstration and imitation trajectories. Measured against the demonstration trajectories, we observe that the fine-tuned controllers tend to have less difference in speed compared to controllers without fine-tuning.
We trained our model with 20 episodes per policy (1200 demonstration trajectories in total, each
with a length of 400 steps or 10s of simulated time). In this experiment our full approach is required:
training the VAE with BC alone can imitate some of the trajectories, but it performs poorly in general,
presumably because our relatively small training set does not cover the space of trajectories sufficiently
densely. On this generated dataset, we also train policies with GAIL using the same architecture and
hyper-parameters. Due to the lack of conditioning, GAIL does not reproduce coherently trajectories.
Instead, it simply meshes different behaviors together. In addition, the policies trained with GAIL
also exhibit dramatically less diversity; see video.
A general problem of adversarial training is that there is no easy way to quantitatively assess the
quality of learned models. Here, since we aim to imitate particular demonstration trajectories that
were trained to achieve particular target speed(s) we can use the difference between the speed of the
demonstration trajectory the trajectory produced by the decoder as a surrogate measure of the quality
of the imitation (cf. also [12]).
The general quality of the learned model and the improvement achieved by the adversarial stage of
our training procedure are quantified in Fig. 4. We draw 660 trajectories (11 trajectories each for all
60 policies) from the training set, compute the corresponding embedding vectors using the encoder,
and use both the VAE policy as well as the improved policy from the adversarial stage to imitate
each of the trajectories. We determine the absolute values of the difference between the average
speed of the demonstration and the imitation trajectories (measured in m/s). As shown in Fig. 4 the
adversarial training greatly improves reliability of the controller as well as the ability of the model to
accurately match the speed of the demonstration. We also include addition quantitative analysis of
our approach using this speed metric in Appendix B. Video of our agent imitating a diverse set of
behaviors can be found in the supplemental material.
To assess generalization to novel trajectories we encode and subsequently imitate trajectories not
contained in the training set. The supplemental video contains several representative examples,
demonstrating that the style of movement is successfully imitated for previously unseen trajectories.
Finally, we analyze the structure of the embedding space. We embed training trajectories and perform
dimensionality reduction with t-SNE [41]. The result is shown in Fig. 4. It reveals a clear clustering
according to movement speeds thus recovering the nature of the task context for the demonstration
trajectories. We further find that trajectories that are nearby in embedding space tend to correspond
to similar movement styles even when differing in speed.
Figure 5: Left: examples of the demonstration trajectories in the CMU humanoid domain. The
top row shows demonstrations from both the training and test set. The bottom row shows the
corresponding imitation. Right: Percentage of falling down before the end of the episode with and
without fine tuning.
4.3 Complex humanoid
We consider a humanoid body of high dimensionality that poses a hard control problem. The
construction of this body and associated control policies is described in [20], and is briefly summarized
in the appendix (section A.3) for completeness. We generate training trajectories with the existing
controllers, which can produce instances of one of six different movement styles (see section A.3).
Examples of such trajectories are shown in Fig. 5 and in the supplemental video.
The training set consists of 250 random trajectories from 6 different neural network controllers that
were trained to match 6 different movement styles from the CMU motion capture database.⁴ Each
trajectory is 334 steps or 10s long. We use a second set of 5 controllers from which we generate
trajectories for evaluation (3 of these policies were trained on the same movement styles as the
policies used for generating training data).
Surprisingly, despite the complexity of the body, supervised learning is quite effective at producing
sensible controllers: The VAE policy is reasonably good at imitating the demonstration trajectories,
although it lacks the robustness to be practically useful. Adversarial training dramatically improves
the stability of the controller. We analyze the improvement quantitatively by computing the percentage
of the humanoid falling down before the end of an episode while imitating either training or test
policies. The results are summarized in Figure 5 right. The figure further shows sequences of frames
of representative demonstration and associated imitation trajectories. Videos of demonstration and
imitation behaviors can be found in the supplemental video.
For practical purposes it is desirable to allow the controller to transition from one behavior to another.
We test this possibility in an experiment similar to the one for the Jaco arm: We determine the
embedding vectors of pairs of demonstration trajectories, start the trajectory by conditioning on
the first embedding vector, and then transition from one behavior to the other half-way through the
episode by linearly interpolating the embeddings of the two demonstration trajectories over a window
of 20 control steps. Although not always successful the learned controller often transitions robustly,
despite not having been trained to do so. Representative examples of these transitions can be found in
the supplemental video.
5 Conclusions
We have proposed an approach for imitation learning that combines the favorable properties of
techniques for density modeling with latent variables (VAEs) with those of GAIL. The result is a
model that learns, from a moderate number of demonstration trajectories (1) a semantically well
structured embedding of behaviors, (2) a corresponding multi-task controller that allows to robustly
execute diverse behaviors from this embedding space, as well as (3) an encoder that can map new
trajectories into the embedding space and hence allows for one-shot imitation.
Our experimental results demonstrate that our approach can work on a variety of control problems,
and that it scales even to very challenging ones such as the control of a simulated humanoid with a
large number of degrees of freedom.
⁴ See appendix for details.
References
[1] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration.
Robotics and Autonomous Systems, 57(5):469–483, 2009.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. Preprint arXiv:1701.07875, 2017.
[3] N. Baram, O. Anschel, and S. Mannor. Model-based adversarial imitation learning. Preprint arXiv:1612.02179, 2016.
[4] D. Berthelot, T. Schumm, and L. Metz. BEGAN: Boundary equilibrium generative adversarial networks.
Preprint arXiv:1703.10717, 2017.
[5] A. Billard, S. Calinon, R. Dillmann, and S. Schaal. Robot programming by demonstration. In Springer
handbook of robotics, pages 1371?1394. 2008.
[6] Y. Duan, M. Andrychowicz, B. Stadie, J. Ho, J. Schneider, I. Sutskever, P. Abbeel, and W. Zaremba.
One-shot imitation learning. Preprint arXiv:1703.07326, 2017.
[7] I. Goodfellow. NIPS 2016 tutorial: Generative adversarial networks. Preprint arXiv:1701.00160, 2016.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
[9] A. Graves, S. Fernández, and J. Schmidhuber. Bidirectional LSTM networks for improved phoneme classification and recognition. Artificial Neural Networks: Formal Models and Their Applications–ICANN 2005, pages 753–753, 2005.
[10] K. Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. Lim. Multi-modal imitation learning from
unstructured demonstrations using generative adversarial nets. arXiv preprint arXiv:1705.10479, 2017.
[11] R. D. Hjelm, A. P. Jacob, T. Che, K. Cho, and Y. Bengio. Boundary-seeking generative adversarial
networks. Preprint arXiv:1702.08431, 2017.
[12] J. Ho and S. Ermon. Generative adversarial imitation learning. In NIPS, pages 4565?4573, 2016.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
[14] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne. Imitation learning: A survey of learning methods. ACM
Computing Surveys, 50(2):21, 2017.
[15] D. Kingma and M. Welling. Auto-encoding variational bayes. Preprint arXiv:1312.6114, 2013.
[16] A. Kuefler, J. Morton, T. Wheeler, and M. Kochenderfer. Imitating driver behavior with generative
adversarial networks. Preprint arXiv:1701.06699, 2017.
[17] Y. Li, J. Song, and S. Ermon. Inferring the latent structure of human decision-making from raw visual
inputs. arXiv preprint arXiv:1703.08840, 2017.
[18] T. Lillicrap, J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control
with deep reinforcement learning. arXiv:1509.02971, 2015.
[19] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang, and S. P. Smolley. Least squares generative adversarial
networks. Preprint ArXiv:1611.04076, 2016.
[20] J. Merel, Y. Tassa, TB. Dhruva, S. Srinivasan, J. Lemmon, Z. Wang, G. Wayne, and N. Heess. Learning
human behaviors from motion capture by adversarial imitation. Preprint arXiv:1707.02201, 2017.
[21] M. Mirza and S. Osindero. Conditional generative adversarial nets. Preprint arXiv:1411.1784, 2014.
[22] U. Muico, Y. Lee, J. Popović, and Z. Popović. Contact-aware nonlinear control of dynamic characters. In
SIGGRAPH, 2009.
[23] X. B. Peng, G. Berseth, K. Yin, and M. van de Panne. DeepLoco: Dynamic locomotion skills using
hierarchical deep reinforcement learning. In SIGGRAPH, 2017.
[24] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural
Computation, 3(1):88–97, 1991.
[25] G. J. Qi. Loss-sensitive generative adversarial networks on Lipschitz densities. Preprint arXiv:1701.06264,
2017.
[26] D. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep
generative models. In ICML, 2014.
[27] M. Rosca, B. Lakshminarayanan, D. Warde-Farley, and S. Mohamed. Variational approaches for autoencoding generative adversarial networks. arXiv preprint arXiv:1706.04987, 2017.
[28] S. Ross and A. Bagnell. Efficient reductions for imitation learning. In AIStats, 2010.
[29] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to
no-regret online learning. In AIStats, 2011.
[30] A. Rusu, S. Colmenarejo, C. Gulcehre, G. Desjardins, J. Kirkpatrick, R. Pascanu, V. Mnih, K. Kavukcuoglu,
and R. Hadsell. Policy distillation. Preprint arXiv:1511.06295, 2015.
[31] J. Schulman, S. Levine, P. Abbeel, M. I. Jordan, and P. Moritz. Trust region policy optimization. In ICML,
2015.
[32] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal
Processing, 45(11):2673–2681, 1997.
[33] D. Sharon and M. van de Panne. Synthesis of controllers for stylized planar bipedal walking. In ICRA,
pages 2387–2392, 2005.
[34] D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient
algorithms. In ICML, 2014.
[35] K. W. Sok, M. Kim, and J. Lee. Simulating biped behaviors from human motion data. 2007.
[36] B. C. Stadie, P. Abbeel, and I. Sutskever. Third-person imitation learning. Preprint arXiv:1703.01703,
2017.
[37] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. Preprint
arXiv:1511.01844, 2015.
[38] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In IROS, pages
5026–5033, 2012.
[39] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior,
and K. Kavukcuoglu. WaveNet: A generative model for raw audio. Preprint arXiv:1609.03499, 2016.
[40] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, and A. Graves. Conditional image generation
with pixelCNN decoders. In NIPS, 2016.
[41] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research,
9:2579–2605, 2008.
[42] R. Wang, A. Cully, H. Jin Chang, and Y. Demiris. MAGAN: Margin adaptation for generative adversarial
networks. Preprint arXiv:1704.03817, 2017.
[43] K. Yin, K. Loken, and M. van de Panne. SIMBICON: Simple biped locomotion control. In SIGGRAPH,
2007.
6,762 | 7,117 | Can Decentralized Algorithms Outperform
Centralized Algorithms? A Case Study for
Decentralized Parallel Stochastic Gradient Descent
Xiangru Lian†, Ce Zhang‡, Huan Zhang+, Cho-Jui Hsieh+, Wei Zhang#, and Ji Liu†♮
† University of Rochester, ‡ ETH Zurich
+ University of California, Davis, # IBM T. J. Watson Research Center, ♮ Tencent AI lab
[email protected], [email protected], [email protected],
[email protected], [email protected], [email protected]
Abstract
Most distributed machine learning systems nowadays, including TensorFlow and
CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost on the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts?
Although decentralized PSGD (D-PSGD) algorithms have been studied by the
control community, existing analysis and theory do not show any advantage over
centralized PSGD (C-PSGD) algorithms, simply assuming the application scenario
where only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime
in which decentralized algorithms might outperform centralized algorithms for
distributed stochastic gradient descent. This is because D-PSGD has comparable
total computational complexities to C-PSGD but requires much less communication
cost on the busiest node. We further conduct an empirical study to validate our
theoretical analysis across multiple frameworks (CNTK and Torch), different
network configurations, and computation platforms up to 112 GPUs. On network
configurations with low bandwidth or high latency, D-PSGD can be up to one order
of magnitude faster than its well-optimized centralized counterparts.
1 Introduction
In the context of distributed machine learning, decentralized algorithms have long been treated as a
compromise: when the underlying network topology does not allow centralized communication, one has to resort to decentralized communication, while, understandably, paying for the "cost of being decentralized". In fact, most distributed machine learning systems nowadays, including TensorFlow
and CNTK, are built in a centralized fashion. But can decentralized algorithms be faster than their
centralized counterparts? In this paper, we provide the first theoretical analysis, verified by empirical
experiments, for a positive answer to this question.
We consider solving the following stochastic optimization problem:

$\min_{x \in \mathbb{R}^N} f(x) := \mathbb{E}_{\xi \sim \mathcal{D}} F(x; \xi), \quad (1)$

where $\mathcal{D}$ is a predefined distribution and $\xi$ is a random variable usually referring to a data sample in
machine learning. This formulation summarizes many popular machine learning models including
deep learning [LeCun et al., 2015], linear regression, and logistic regression.
Parallel stochastic gradient descent (PSGD) methods are leading algorithms in solving large-scale
machine learning problems such as deep learning [Dean et al., 2012, Li et al., 2014], matrix completion
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: An illustration of different network topologies: (a) the centralized topology with a parameter server, and (b) the decentralized topology.
Algorithm | communication complexity on the busiest node | computational complexity
C-PSGD (mini-batch SGD) | $O(n)$ | $O\left(\frac{n}{\epsilon} + \frac{1}{\epsilon^2}\right)$
D-PSGD | $O(\mathrm{Deg(network)})$ | $O\left(\frac{n}{\epsilon} + \frac{1}{\epsilon^2}\right)$

Table 1: Comparison of C-PSGD and D-PSGD. The unit of the communication cost is the number of stochastic gradients or optimization variables. n is the number of nodes. The computational complexity is the number of stochastic gradient evaluations needed to get an $\epsilon$-approximation solution, which is defined in (3).
Algorithm
[Recht et al., 2011, Zhuang et al., 2013] and SVM. Existing PSGD algorithms are mostly designed for
centralized network topology, for example, the parameter server topology [Li et al., 2014], where there
is a central node connected with multiple nodes as shown in Figure 1(a). The central node aggregates
the stochastic gradients computed from all other nodes and updates the model parameter, for example,
the weights of a neural network. The potential bottleneck of the centralized network topology lies on
the communication traffic jam on the central node, because all nodes need to communicate with it
concurrently iteratively. The performance will be significantly degraded when the network bandwidth
is low.1 These motivate us to study algorithms for decentralized topologies, where all nodes can only
communicate with its neighbors and there is no such a central node, shown in Figure 1(b).
Although decentralized algorithms have been studied as consensus optimization in the control community and used for preserving data privacy [Ram et al., 2009a, Yan et al., 2013, Yuan et al., 2016], for
the application scenario where only the decentralized network is available, it is still an open question
if decentralized methods could have advantages over centralized algorithms in some scenarios
in case both types of communication patterns are feasible ? for example, on a supercomputer with
thousands of nodes, should we use decentralized or centralized communication? Existing theory and
analysis either do not make such comparison [Bianchi et al., 2013, Ram et al., 2009a, Srivastava and
Nedic, 2011, Sundhar Ram et al., 2010] or implicitly indicate that decentralized algorithms were much
worse than centralized algorithms in terms of computational complexity and total communication
complexity [Aybat et al., 2015, Lan et al., 2017, Ram et al., 2010, Zhang and Kwok, 2014]. This paper
gives a positive result for decentralized algorithms by studying a decentralized PSGD (D-PSGD)
algorithm on the connected decentralized network. Our theory indicates that D-PSGD admits similar
total computational complexity but requires much less communication for the busiest node. Table 1
shows a quick comparison between C-PSGD and D-PSGD with respect to the computation and
communication complexity. Our contributions are:
• We theoretically justify the potential advantage of decentralized algorithms over centralized
algorithms. Instead of treating decentralized algorithms as a compromise one has to make, we are the
first to conduct a theoretical analysis that identifies cases in which decentralized algorithms can be
faster than its centralized counterpart.
• We theoretically analyze the scalability behavior of decentralized SGD when more nodes are used. Surprisingly, we show that, when more nodes are available, decentralized algorithms can bring an asymptotically linear speedup with respect to computational complexity. To our best knowledge, this is the first speedup result related to decentralized algorithms.
• We conduct an extensive empirical study to validate our theoretical analysis of D-PSGD and different C-PSGD variants (e.g., plain SGD, EASGD [Zhang et al., 2015]). We observe similar computational
¹ There has been research on how to accommodate this problem by having multiple parameter servers communicating with efficient MPI AllReduce primitives. As we will see in the experiments, these methods, on the other hand, might suffer when the network latency is high.
complexity as our theory indicates; on networks with low bandwidth or high latency, D-PSGD can be up to 10× faster than C-PSGD. Our results hold across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. This indicates a promising future direction in pushing the research horizon of machine learning systems from purely centralized topologies to more decentralized ones.
Definitions and notations Throughout this paper, we use the following notation and definitions:
• ‖ · ‖ denotes the vector ℓ2 norm or the matrix spectral norm depending on the argument.
• ‖ · ‖_F denotes the matrix Frobenius norm.
• ∇f(·) denotes the gradient of a function f.
• 1_n denotes the column vector in R^n with 1 for all elements.
• f* denotes the optimal solution of (1).
• λ_i(·) denotes the i-th largest eigenvalue of a matrix.
2 Related work
In the following, we use K and n to refer to the number of iterations and the number of nodes.
Stochastic Gradient Descent (SGD) SGD is a powerful approach for solving large-scale machine learning problems. The well-known convergence rate of stochastic gradient descent is O(1/√K) for convex problems and O(1/K) for strongly convex problems [Moulines and Bach, 2011, Nemirovski et al., 2009]. SGD is closely related to online learning algorithms, for example, Crammer et al. [2006], Shalev-Shwartz [2011], Yang et al. [2014]. For SGD on nonconvex optimization, an ergodic convergence rate of O(1/√K) is proved in Ghadimi and Lan [2013].
Centralized parallel SGD For CENTRALIZED PARALLEL SGD (C-PSGD) algorithms, the most popular implementation is based on the parameter server, which is essentially mini-batch SGD admitting a convergence rate of O(1/√(Kn)) [Agarwal and Duchi, 2011, Dekel et al., 2012, Lian et al., 2015], where in each iteration n stochastic gradients are evaluated. In this implementation there is a parameter server communicating with all nodes. The linear speedup is implied by the convergence rate automatically. More implementation details for C-PSGD can be found in Chen et al. [2016], Dean et al. [2012], Li et al. [2014], Zinkevich et al. [2010]. The asynchronous version of centralized parallel SGD is proved to guarantee the linear speedup on all kinds of objectives (including convex, strongly convex, and nonconvex objectives) if the staleness of the stochastic gradient is bounded [Agarwal and Duchi, 2011, Feyzmahdavian et al., 2015, Lian et al., 2015, 2016, Recht et al., 2011, Zhang et al., 2016b,c].
Decentralized parallel stochastic algorithms Decentralized algorithms do not specify any central node, unlike centralized algorithms; each node maintains its own local model and can only communicate with its neighbors. Decentralized algorithms can usually be applied to any connected computational network. Lan et al. [2017] proposed a decentralized stochastic algorithm with computational complexity O(n/ε²) for general convex objectives and O(n/ε) for strongly convex objectives. Sirb and Ye [2016] proposed an asynchronous decentralized stochastic algorithm ensuring complexity O(n/ε²) for convex objectives. Algorithms similar to our D-PSGD, in both synchronous and asynchronous fashion, were studied in Ram et al. [2009a, 2010], Srivastava and Nedic [2011], Sundhar Ram et al. [2010]. The difference is that in their algorithms every node can only perform either communication or computation, but not both simultaneously. Sundhar Ram et al. [2010] proposed a stochastic decentralized optimization algorithm for constrained convex optimization; the algorithm can also be used for non-differentiable objectives by using subgradients. Please also refer to Srivastava and Nedic [2011] for the subgradient variant. The analysis in Ram et al. [2009a, 2010], Srivastava and Nedic [2011], Sundhar Ram et al. [2010] requires the gradients of each term of the objective to be bounded by a constant. Bianchi et al. [2013] proposed a similar decentralized stochastic algorithm and provided a convergence rate for the consensus of the local models when the local models are bounded. The convergence to a solution was also established using a central limit theorem, but the rate is unclear. HogWild++ [Zhang et al., 2016a] uses decentralized model parameters for parallel asynchronous SGD on multi-socket systems and shows that this algorithm empirically outperforms some centralized algorithms, yet the convergence or the convergence rate is unclear. The common issue in the works above is that the speedup is unclear; that is, we do not know if decentralized algorithms (involving multiple nodes) can improve the efficiency over using a single node.
Other decentralized algorithms In other areas, including control, privacy, and wireless sensor networks, decentralized algorithms are usually studied for solving the consensus problem [Aysal et al., 2009, Boyd et al., 2005, Carli et al., 2010, Fagnani and Zampieri, 2008, Olfati-Saber et al., 2007, Schenato and Gamba, 2007]. Lu et al. [2010] prove that a gossip algorithm converges to the optimal solution for convex optimization. Mokhtari and Ribeiro [2016] analyzed decentralized SAG and SAGA algorithms for minimizing finite-sum strongly convex objectives, but they are not shown to admit any speedup. The decentralized gradient descent method for convex and strongly convex problems was analyzed in Yuan et al. [2016]. Nedic and Ozdaglar [2009], Ram et al. [2009b] studied its subgradient variants. However, this type of algorithm can only converge to a ball around the optimal solution, whose diameter depends on the steplength. This issue was fixed by Shi et al. [2015] using a modified algorithm, namely EXTRA, that is guaranteed to converge to the optimal solution. Wu et al. [2016] analyzed an asynchronous version of decentralized gradient descent with a modification similar to Shi et al. [2015] and showed that the algorithm converges to a solution when K → ∞. Aybat et al. [2015], Shi et al., Zhang and Kwok [2014] analyzed decentralized ADMM algorithms, which are not shown to have speedup. From all of these reviewed papers, it is still unclear if decentralized algorithms can have any advantage over their centralized counterparts.
3 Decentralized parallel stochastic gradient descent (D-PSGD)
Algorithm 1 Decentralized Parallel Stochastic Gradient Descent (D-PSGD) on the i-th node

Require: initial point x_{0,i} = x_0, step length γ, weight matrix W, and number of iterations K
1: for k = 0, 1, 2, . . . , K − 1 do
2:   Randomly sample ξ_{k,i} from the local data of the i-th node
3:   Compute the local stochastic gradient ∇F_i(x_{k,i}; ξ_{k,i}), ∀i, on all nodes^a
4:   Compute the neighborhood weighted average by fetching optimization variables from neighbors:^b  x_{k+1/2,i} = Σ_{j=1}^n W_{ij} x_{k,j}
5:   Update the local optimization variable:^c  x_{k+1,i} ← x_{k+1/2,i} − γ∇F_i(x_{k,i}; ξ_{k,i})
6: end for
7: Output: (1/n) Σ_{i=1}^n x_{K,i}

^a Note that the stochastic gradient computed in Line 3 can be replaced with a mini-batch of stochastic gradients, which will not hurt our theoretical results.
^b Note that Line 3 and Line 4 can be run in parallel.
^c Note that Line 4 and Line 5 can be exchanged. That is, we first update the local stochastic gradient into the local optimization variable, and then average the local optimization variable with neighbors. This does not hurt our theoretical analysis. When Line 4 is logically before Line 5, Line 3 and Line 4 can be run in parallel. That is to say, if the communication time used by Line 4 is smaller than the computation time used by Line 3, the communication time can be completely hidden (it is overlapped by the computation time).
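To make the protocol concrete, here is a minimal NumPy sketch of Algorithm 1 (our illustration, not code from the paper). Each node holds a local quadratic objective F_i(x; ξ) = (1/2)‖x − ξ‖² with noisy local data, the topology is a ring, and W is the 1/3-circulant matrix that also appears in Theorem 3 below; the function names are ours:

```python
import numpy as np

def ring_weight_matrix(n):
    # Symmetric doubly stochastic W: 1/3 on the diagonal and on the
    # two ring neighbors (wrapping around), as in Theorem 3 below.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def d_psgd(n=8, N=5, K=2000, gamma=0.05, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=(n, N))      # local data means (distributions D_i differ)
    X = np.zeros((n, N))              # row i holds x_{k,i}; X_0 = 0
    W = ring_weight_matrix(n)
    for k in range(K):
        xi = mu + sigma * rng.normal(size=(n, N))  # Line 2: sample xi_{k,i}
        grad = X - xi                 # Line 3: grad of F_i(x; xi) = 0.5*||x - xi||^2
        X = W @ X - gamma * grad      # Lines 4-5: average with neighbors, then step
    return X.mean(axis=0)             # Line 7: output the average of local variables

print(np.round(d_psgd(), 3))          # close to the minimizer: the mean of all mu_i
```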
This section introduces the D-PSGD algorithm. We represent the decentralized communication topology as an undirected weighted graph (V, W). V denotes the set of n computational nodes: V := {1, 2, · · · , n}. W ∈ R^{n×n} is a symmetric doubly stochastic matrix, which means (i) W_{ij} ∈ [0, 1], ∀i, j, (ii) W_{ij} = W_{ji} for all i, j, and (iii) Σ_j W_{ij} = 1 for all i. We use W_{ij} to encode how much node j can affect node i, while W_{ij} = 0 means nodes i and j are disconnected.
To design distributed algorithms on a decentralized network, we first distribute the data onto all nodes such that the original objective defined in (1) can be rewritten as

min_{x∈R^N} f(x) = (1/n) Σ_{i=1}^n E_{ξ∼D_i} F_i(x; ξ),    (2)

where f_i(x) := E_{ξ∼D_i} F_i(x; ξ). There are two simple ways to achieve (2), both of which can be captured by our theoretical analysis, and both imply F_i(·; ·) = F(·; ·), ∀i.
Strategy-1 All distributions D_i's are the same as D; that is, all nodes can access a shared database.
Strategy-2 The n nodes partition all data in the database and appropriately define a distribution for sampling local data; for example, if D is the uniform distribution over all data, D_i can be defined to be the uniform distribution over the local data.
The D-PSGD algorithm is a synchronous parallel algorithm. All nodes are usually synchronized by a clock. Each node maintains its own local variable and runs the protocol in Algorithm 1 concurrently, which includes three key steps at iterate k:
• Each node computes the stochastic gradient ∇F_i(x_{k,i}; ξ_{k,i}) using the current local variable x_{k,i}, where k is the iterate number and i is the node index;
• When the synchronization barrier is met, each node exchanges local variables with its neighbors and averages the local variables it receives with its own local variable;
• Each node updates its local variable using the average and the local stochastic gradient.
To view the D-PSGD algorithm from a global viewpoint, at iterate k we define the concatenation of all local variables, random samples, and stochastic gradients by the matrix X_k ∈ R^{N×n}, the vector ξ_k ∈ R^n, and ∂F(X_k, ξ_k), respectively:

X_k := [x_{k,1} · · · x_{k,n}] ∈ R^{N×n},    ξ_k := [ξ_{k,1} · · · ξ_{k,n}]^⊤ ∈ R^n,
∂F(X_k, ξ_k) := [∇F_1(x_{k,1}; ξ_{k,1}) ∇F_2(x_{k,2}; ξ_{k,2}) · · · ∇F_n(x_{k,n}; ξ_{k,n})] ∈ R^{N×n}.

Then the k-th iterate of Algorithm 1 can be viewed as the following update:

X_{k+1} ← X_k W − γ ∂F(X_k; ξ_k).
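As a sanity check (ours, not from the paper), the matrix-form update can be verified against the per-node form of Algorithm 1 on arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, gamma = 5, 3, 0.1
W = np.full((n, n), 1.0 / n)              # a trivially doubly stochastic choice
X = rng.normal(size=(N, n))               # column i holds x_{k,i}
G = rng.normal(size=(N, n))               # column i holds grad F_i(x_{k,i}; xi_{k,i})

# Per-node form: x_{k+1,i} = sum_j W_ij x_{k,j} - gamma * grad_i
X_next_loop = np.stack(
    [sum(W[i, j] * X[:, j] for j in range(n)) - gamma * G[:, i] for i in range(n)],
    axis=1,
)
# Matrix form: X_{k+1} = X_k W - gamma * dF  (W is symmetric, so X W equals the loop)
X_next_mat = X @ W - gamma * G
assert np.allclose(X_next_loop, X_next_mat)
```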
We say the algorithm gives an ε-approximation solution if

(1/K) Σ_{k=0}^{K−1} E ‖∇f(X_k 1_n / n)‖² ≤ ε.    (3)
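Criterion (3) translates directly into code. The helper below (our naming) assumes access to the full gradient ∇f and a recorded history of iterate matrices, and approximates the expectation by a single-run average:

```python
import numpy as np

def eps_criterion(X_hist, grad_f):
    # Empirical version of (3): mean squared gradient norm at the node-averaged
    # iterates X_k 1_n / n (columns of each X_k index the nodes). The expectation
    # E is approximated here by the average over one recorded run.
    return float(np.mean([np.linalg.norm(grad_f(X.mean(axis=1))) ** 2
                          for X in X_hist]))
```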
4 Convergence rate analysis
This section provides the analysis for the convergence rate of the D-PSGD algorithm. Our analysis will show that the convergence rate of D-PSGD w.r.t. iterations is similar to that of C-PSGD (or mini-batch SGD) [Agarwal and Duchi, 2011, Dekel et al., 2012, Lian et al., 2015], but D-PSGD avoids the communication traffic jam on the parameter server.
To show the convergence results, we first define

∂f(X_k) := [∇f_1(x_{k,1}) ∇f_2(x_{k,2}) · · · ∇f_n(x_{k,n})] ∈ R^{N×n},

where the functions f_i(·) are defined in (2).
Assumption 1. Throughout this paper, we make the following commonly used assumptions:
1. Lipschitzian gradient: All functions f_i(·) have L-Lipschitzian gradients.
2. Spectral gap: Given the symmetric doubly stochastic matrix W, we define ρ := (max{|λ_2(W)|, |λ_n(W)|})². We assume ρ < 1.
3. Bounded variance: Assume the variance of the stochastic gradient, E_{i∼U([n])} E_{ξ∼D_i} ‖∇F_i(x; ξ) − ∇f(x)‖², is bounded for any x, with i uniformly sampled from {1, . . . , n} and ξ sampled from the distribution D_i. This implies there exist constants σ, ζ such that

E_{ξ∼D_i} ‖∇F_i(x; ξ) − ∇f_i(x)‖² ≤ σ², ∀i, ∀x,
E_{i∼U([n])} ‖∇f_i(x) − ∇f(x)‖² ≤ ζ², ∀x.

Note that if all nodes can access the shared database, then ζ = 0.
4. Start from 0: We assume X_0 = 0. This assumption simplifies the proof w.l.o.g.
Let

D_1 := 1/2 − (9γ²L²n) / ((1 − √ρ)² D_2),    D_2 := 1 − (18γ²√n L²) / (1 − √ρ)².

Under Assumption 1, we have the following convergence result for Algorithm 1.
Theorem 1 (Convergence of Algorithm 1). Under Assumption 1, we have the following convergence rate for Algorithm 1:

(1/K) Σ_{k=0}^{K−1} ( ((1 − γL)/2) E ‖∂f(X_k) 1_n / n‖² + D_1 E ‖∇f(X_k 1_n / n)‖² )
    ≤ (f(0) − f*) / (γK) + (γL σ²) / (2n) + (γ²L²n σ²) / ((1 − ρ) D_2) + (9γ²L²n ζ²) / ((1 − √ρ)² D_2).

² It can be easily extended to mini-batch stochastic gradient descent.
Noting that X_k 1_n / n = (1/n) Σ_{i=1}^n x_{k,i}, this theorem characterizes the convergence of the average of all local optimization variables x_{k,i}. To take a closer look at this result, we appropriately choose the step length in Theorem 1 to obtain the following result:
Corollary 2. Under the same assumptions as in Theorem 1, if we set γ = 1 / (2L + σ√(K/n)),³ then for Algorithm 1 we have the following convergence rate:

(1/K) Σ_{k=0}^{K−1} E ‖∇f(X_k 1_n / n)‖² ≤ (8(f(0) − f*)L) / K + ((8f(0) − 8f* + 4L) σ) / √(Kn),    (4)

if the total number of iterations K is sufficiently large; in particular,

K ≥ (4L⁴n⁵) / (σ⁶ (f(0) − f* + L)²) · ( σ²/(1 − √ρ) + 9ζ²/(1 − √ρ)² )²,    (5)

and

K ≥ (72L²n²) / (σ² (1 − √ρ)²).    (6)
This result basically suggests that the convergence rate for D-PSGD is O(1/K + 1/√(nK)), if K is large enough. We highlight two key observations from this result:
Linear speedup When K is large enough, the 1/K term will be dominated by the 1/√(Kn) term, which leads to a 1/√(nK) convergence rate. It indicates that the total computational complexity⁴ to achieve an ε-approximation solution (3) is bounded by O(1/ε²). Since the total number of nodes does not affect the total complexity, a single node only shares a computational complexity of O(1/(nε²)). Thus linear speedup can be achieved by D-PSGD asymptotically w.r.t. computational complexity.
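As a back-of-envelope illustration of this accounting (our numbers, not the paper's), take ε = 10⁻³:

```python
eps = 1e-3
total = 1.0 / eps**2          # O(1/eps^2) stochastic gradient evaluations overall
for n in (1, 8, 64):
    per_node = total / n      # O(1/(n * eps^2)) per node: linear speedup
    print(f"n={n:3d}: total ~ {total:.0e}, per node ~ {per_node:.0e}")
```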
D-PSGD can be better than C-PSGD Note that this rate is the same as that of C-PSGD (mini-batch SGD with mini-batch size n) [Agarwal and Duchi, 2011, Dekel et al., 2012, Lian et al., 2015]. The advantage of D-PSGD over C-PSGD is that it avoids the communication traffic jam. At each iteration, the maximal communication cost for every single node is O(the degree of the network) for D-PSGD, in contrast with O(n) for C-PSGD. The degree of the network could be much smaller than O(n); e.g., it could be O(1) in the special case of a ring.
The key difference from most existing analyses of decentralized algorithms lies in the fact that we do not use boundedness assumptions for the domain, the gradients, or the stochastic gradients. Such boundedness assumptions can significantly simplify the proofs but lose some subtle structure of the problem.
The linear speedup indicated by Corollary 2 requires the total number of iterations K to be sufficiently large. The following special example gives a concrete bound on K for the ring network topology.
Theorem 3 (Ring network). Choose the steplength γ in the same way as in Corollary 2 and consider the ring network topology with the corresponding W of the form

W = [ 1/3  1/3   0   ···   0   1/3
      1/3  1/3  1/3   0   ···   0
       0   1/3  1/3  1/3  ···   0
       ⋮         ⋱    ⋱    ⋱    ⋮
      1/3   0   ···   0   1/3  1/3 ] ∈ R^{n×n},

i.e., the circulant matrix with 1/3 on the diagonal and on the two ring-neighbor off-diagonals (wrapping around). Under Assumption 1, Algorithm 1 achieves the same convergence rate as in (4), which indicates that a linear speedup can be achieved, if the number of involved nodes is bounded by
• n = O(K^{1/9}), if we apply strategy-1 for distributing the data (ζ = 0);
• n = O(K^{1/13}), if we apply strategy-2 for distributing the data (ζ > 0),
where the capital "O" swallows σ, ζ, L, and f(0) − f*.
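The spectral quantity ρ from Assumption 1 can be computed numerically for this W. The sketch below (ours) shows how 1 − √ρ shrinks as the ring grows, which is the reason n must stay small relative to K in the bounds above:

```python
import numpy as np

def ring_W(n):
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

for n in (4, 16, 64):
    lam = np.sort(np.linalg.eigvalsh(ring_W(n)))   # real spectrum (W is symmetric)
    rho = max(abs(lam[-2]), abs(lam[0])) ** 2      # (max{|lambda_2|, |lambda_n|})^2
    print(f"n={n:3d}: rho={rho:.4f}, 1-sqrt(rho)={1 - np.sqrt(rho):.4f}")
```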
³ In Theorem 1 and Corollary 2, we choose a constant steplength for simplicity. Using the diminishing steplength O(√(n/k)) can achieve a similar convergence rate by following the proof procedure in this paper. For convex objectives, D-PSGD could be proven to admit the convergence rate O(1/√(nK)), which is consistent with the non-convex case. For strongly convex objectives, the convergence rate for D-PSGD could be improved to O(1/(nK)), which is consistent with the rate for C-PSGD.
⁴ The complexity to compute a single stochastic gradient counts as 1.
Figure 2: Comparison between D-PSGD and two centralized implementations (7 and 10 GPUs). Panels: (a) training loss vs. time, ResNet-20, 7 GPUs, 10Mbps; (b) training loss vs. time, ResNet-20, 7 GPUs, 5ms latency; (c) impact of network bandwidth (seconds/epoch vs. 1/bandwidth); (d) impact of network latency (seconds/epoch vs. latency in ms).

Figure 3: (a) Convergence rate (ResNet20, 112 GPUs); (b) convergence (ResNet-56, 7 GPUs); (c) D-PSGD speedup (ResNet20, 7 GPUs); (d) D-PSGD communication pattern.
This result considers a special decentralized network topology: the ring network, where each node can only exchange information with its two neighbors. The linear speedup can be achieved up to n = O(K^{1/9}) and n = O(K^{1/13}) for the two scenarios, respectively. These two upper bounds can potentially be improved. To the best of our knowledge, this is the first work to show a speedup for decentralized algorithms.
In this section, we mainly investigate the convergence rate for the average of all local variables {x_{k,i}}_{i=1}^n. Actually, one can also obtain a similar rate for each individual x_{k,i}, since all nodes achieve consensus quickly; in particular, the running average of E ‖(1/n) Σ_{i'=1}^n x_{k,i'} − x_{k,i}‖² converges to 0 with a O(1/K) rate, where the "O" swallows n, ρ, σ, ζ, L, and f(0) − f*. See Theorem 6 in the Supplemental Material for more details.
5 Experiments
We validate our theory with experiments that compare D-PSGD with other centralized implementations. We run experiments on clusters with up to 112 GPUs and show that, on some network configurations, D-PSGD can outperform well-optimized centralized implementations by an order of magnitude.
5.1 Experiment setting
Datasets and models We evaluate D-PSGD on two machine learning tasks, namely (1) image classification and (2) natural language processing (NLP). For image classification we train ResNet [He et al., 2015] with different numbers of layers on CIFAR-10 [Krizhevsky, 2009]; for natural language processing, we train on both a proprietary and a public dataset using a proprietary CNN model that we obtained from our industry partner [Feng et al., 2016, Lin et al., 2017, Zhang et al., 2017].
Implementations and setups We implement D-PSGD on two different frameworks, namely Microsoft CNTK and Torch. We evaluate four SGD implementations:
1. CNTK. We compare with the standard CNTK implementation of synchronous SGD. The implementation is based on MPI's AllReduce primitive.
2. Centralized. We implemented the standard parameter-server-based synchronous SGD using MPI. One node serves as the parameter server in our implementation.
3. Decentralized. We implemented our D-PSGD algorithm using MPI within CNTK.
4. EASGD. We compare with the standard EASGD implementation of Torch.
The first three implementations are compiled with gcc 7.1, cuDNN 5.0, and OpenMPI 2.1.1. We fork from CNTK after commit 57d7b9d and enable distributed minibatch reading for all of our experiments. During training, we keep the local batch size of each node the same as in the reference configurations provided by CNTK. We tune the learning rate for each SGD variant and report the best configuration.
Machines/Clusters We conduct experiments on four different machines/clusters:
1. 7GPUs. A single local machine with 8 GPUs, each of which is an Nvidia TITAN Xp.
2. 10GPUs. 10 p2.xlarge EC2 instances, each of which has one Nvidia K80 GPU.
3. 16GPUs. 16 local machines, each of which has two Xeon E5-2680 8-core processors and an NVIDIA K20 GPU. Machines are connected by Gigabit Ethernet in this case.
4. 112GPUs. 4 p2.16xlarge and 6 p2.8xlarge EC2 instances. Each p2.16xlarge (resp. p2.8xlarge) instance has 16 (resp. 8) Nvidia K80 GPUs.
In all of our experiments, we use each GPU as a node.
5.2 Results on CNTK
End-to-end performance We first validate that, under certain network configurations, D-PSGD converges faster, in wall-clock time, to a solution of the same quality as centralized SGD. Figure 2(a, b) and Figure 3(a) show the results of training ResNet20 on 7GPUs. We see that D-PSGD converges faster than both centralized SGD competitors. This is because when the network is slow, both centralized SGD competitors take more time per epoch due to communication overheads. Figure 3(a, b) illustrates the convergence with respect to the number of epochs, and D-PSGD shows a similar convergence rate to centralized SGD even with 112 nodes.
Speedup The end-to-end speedup of D-PSGD over centralized SGD highly depends on the underlying network. We use the tc command to manually vary the network bandwidth and latency, and compare the wall-clock time that all three SGD implementations need to finish one epoch.
Figure 2(c, d) shows the result. We see that, when the network has high bandwidth and low latency, not surprisingly, all three SGD implementations have similar speed. This is because in this case the communication is never the system bottleneck. However, when the bandwidth becomes smaller (Figure 2(c)) or the latency becomes higher (Figure 2(d)), both centralized SGD implementations slow down significantly. In some cases, D-PSGD can even be one order of magnitude faster than its centralized competitors. Compared with Centralized (implemented with a parameter server), D-PSGD has more balanced communication patterns between nodes and thus outperforms Centralized on low-bandwidth networks; compared with CNTK (implemented with AllReduce), D-PSGD needs fewer communications between nodes and thus outperforms CNTK on high-latency networks. Figure 3(c) illustrates the communication between nodes for one run of D-PSGD.
We also vary the number of GPUs that D-PSGD uses and report the speedup over a single GPU to reach the same loss. Figure 3(b) shows the result on a machine with 7 GPUs. We see that, with up to 4 GPUs, D-PSGD shows near-linear speedup. When all seven GPUs are used, D-PSGD achieves up to 5× speedup. This sublinear speedup for 7 GPUs is due to the synchronization cost, but also to the fact that our machine only has 4 PCIe channels, so more than two GPUs must share PCIe bandwidth.
5.3 Results on Torch
Due to the space limitation, the results on Torch can be found in the Supplemental Material.
6 Conclusion
This paper studies the D-PSGD algorithm on decentralized computational networks. We prove that D-PSGD achieves the same convergence rate (or, equivalently, computational complexity) as the C-PSGD algorithm, but outperforms C-PSGD by avoiding the communication traffic jam. To the best of our knowledge, this is the first work to show that decentralized algorithms admit a linear speedup and can outperform centralized algorithms.
Limitation and Future Work The potential limitation of D-PSGD lies in the cost of synchronization. Breaking the synchronization barrier could make decentralized algorithms even more efficient, but requires a more complicated analysis. We leave this direction for future work.
On the system side, one future direction is to deploy D-PSGD to larger clusters beyond 112 GPUs, and one such environment is state-of-the-art supercomputers. In such an environment, we envision D-PSGD to be a necessary building block for multiple "centralized groups" to communicate. It is also interesting to deploy D-PSGD to mobile environments.
Acknowledgements Xiangru Lian and Ji Liu are supported in part by NSF CCF1718513. Ce Zhang gratefully acknowledges the support from the Swiss National Science Foundation NRP 75 407540_167266, IBM Zurich, Mercedes-Benz Research & Development North America, Oracle Labs, Swisscom, Chinese Scholarship Council, the Department of Computer Science at ETH Zurich, the GPU donation from NVIDIA Corporation, and the cloud computation resources from the Microsoft Azure for Research award program. Huan Zhang and Cho-Jui Hsieh acknowledge the support of NSF IIS-1719097 and the TACC computation resources.
References
A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. NIPS, 2011.
N. S. Aybat, Z. Wang, T. Lin, and S. Ma. Distributed linearized alternating direction method of multipliers for composite convex consensus optimization. arXiv preprint arXiv:1512.08122, 2015.
T. C. Aysal, M. E. Yildiz, A. D. Sarwate, and A. Scaglione. Broadcast gossip algorithms for consensus. IEEE Transactions on Signal Processing, 57(7):2748-2761, 2009.
P. Bianchi, G. Fort, and W. Hachem. Performance of a distributed stochastic approximation algorithm. IEEE Transactions on Information Theory, 59(11):7405-7418, 2013.
S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah. Gossip algorithms: Design, analysis and applications. In INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE, volume 3, pages 1653-1664. IEEE, 2005.
R. Carli, F. Fagnani, P. Frasca, and S. Zampieri. Gossip consensus algorithms via quantized communication. Automatica, 46(1):70-80, 2010.
J. Chen, R. Monga, S. Bengio, and R. Jozefowicz. Revisiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981, 2016.
K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, 2006.
J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In Advances in Neural Information Processing Systems, pages 1223-1231, 2012.
O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13(Jan):165-202, 2012.
F. Fagnani and S. Zampieri. Randomized consensus algorithms over large scale networks. IEEE Journal on Selected Areas in Communications, 26(4), 2008.
M. Feng, B. Xiang, and B. Zhou. Distributed deep learning for question answering. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 2413-2416. ACM, 2016.
H. R. Feyzmahdavian, A. Aytekin, and M. Johansson. An asynchronous mini-batch algorithm for regularized stochastic optimization. arXiv, 2015.
S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341-2368, 2013.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. ArXiv e-prints, Dec. 2015.
K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical Report, 2009.
G. Lan, S. Lee, and Y. Zhou. Communication-efficient algorithms for decentralized and stochastic optimization. arXiv preprint arXiv:1701.03961, 2017.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In OSDI, volume 14, pages 583-598, 2014.
X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In Advances in Neural Information Processing Systems, pages 2737-2745, 2015.
X. Lian, H. Zhang, C.-J. Hsieh, Y. Huang, and J. Liu. A comprehensive linear speedup analysis for asynchronous stochastic parallel optimization from zeroth-order to first-order. In Advances in Neural Information Processing Systems, pages 3054-3062, 2016.
Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio. A structured self-attentive sentence embedding. 5th International Conference on Learning Representations, 2017.
J. Lu, C. Y. Tang, P. R. Regier, and T. D. Bow. A gossip algorithm for convex consensus optimization over networks. In American Control Conference (ACC), 2010, pages 301-308. IEEE, 2010.
A. Mokhtari and A. Ribeiro. DSA: Decentralized double stochastic averaging gradient algorithm. Journal of Machine Learning Research, 17(61):1-35, 2016.
E. Moulines and F. R. Bach. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. NIPS, 2011.
A. Nedic and A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48-61, 2009.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
Nvidia. NCCL: Optimized primitives for collective multi-GPU communication. https://github.com/NVIDIA/nccl.
R. Olfati-Saber, J. A. Fax, and R. M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215-233, 2007.
S. S. Ram, A. Nedić, and V. V. Veeravalli. Asynchronous gossip algorithms for stochastic optimization. In Decision and Control, 2009 held jointly with the 2009 28th Chinese Control Conference. CDC/CCC 2009. Proceedings of the 48th IEEE Conference on, pages 3581-3586. IEEE, 2009a.
S. S. Ram, A. Nedic, and V. V. Veeravalli. Distributed subgradient projection algorithm for convex optimization. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pages 3653-3656. IEEE, 2009b.
S. S. Ram, A. Nedić, and V. V. Veeravalli. Asynchronous gossip algorithm for stochastic optimization: Constant stepsize analysis. In Recent Advances in Optimization and its Applications in Engineering, pages 51-60. Springer, 2010.
B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693-701, 2011.
L. Schenato and G. Gamba. A distributed consensus protocol for clock synchronization in wireless sensor network. In Decision and Control, 2007 46th IEEE Conference on, pages 2289-2294. IEEE, 2007.
S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.
W. Shi, Q. Ling, K. Yuan, G. Wu, and W. Yin. On the linear convergence of the ADMM in decentralized consensus optimization.
W. Shi, Q. Ling, G. Wu, and W. Yin. EXTRA: An exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 25(2):944-966, 2015.
B. Sirb and X. Ye. Consensus optimization with delayed and stochastic gradients on decentralized networks. In Big Data (Big Data), 2016 IEEE International Conference on, pages 76-85. IEEE, 2016.
K. Srivastava and A. Nedic. Distributed asynchronous constrained stochastic optimization. IEEE Journal of Selected Topics in Signal Processing, 5(4):772-790, 2011.
S. Sundhar Ram, A. Nedić, and V. Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516-545, 2010.
T. Wu, K. Yuan, Q. Ling, W. Yin, and A. H. Sayed. Decentralized consensus optimization with asynchrony and delays. arXiv preprint arXiv:1612.00150, 2016.
F. Yan, S. Sundaram, S. Vishwanathan, and Y. Qi. Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties. IEEE Transactions on Knowledge and Data Engineering, 25(11):2483-2493, 2013.
T. Yang, M. Mahdavi, R. Jin, and S. Zhu. Regret bounded by gradual variation for online convex optimization. Machine Learning, 95(2):183-223, 2014.
K. Yuan, Q. Ling, and W. Yin. On the convergence of decentralized gradient descent. SIAM Journal on Optimization, 26(3):1835-1854, 2016.
H. Zhang, C.-J. Hsieh, and V. Akella. HogWild++: A new mechanism for decentralized asynchronous stochastic gradient descent. ICDM, 2016a.
R. Zhang and J. Kwok. Asynchronous distributed ADMM for consensus optimization. In International Conference on Machine Learning, pages 1701-1709, 2014.
S. Zhang, A. E. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. In Advances in Neural Information Processing Systems, pages 685-693, 2015.
W. Zhang, S. Gupta, X. Lian, and J. Liu. Staleness-aware async-SGD for distributed deep learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, 2016b.
W. Zhang, S. Gupta, and F. Wang. Model accuracy and runtime tradeoff in distributed deep learning: A systematic study. In IEEE International Conference on Data Mining, 2016c.
W. Zhang, M. Feng, Y. Zheng, Y. Ren, Y. Wang, J. Liu, P. Liu, B. Xiang, L. Zhang, B. Zhou, and F. Wang. Gadei: On scale-up training as a service for deep learning. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management; The IEEE International Conference on Data Mining series (ICDM 2017), 2017.
Y. Zhuang, W.-S. Chin, Y.-C. Juan, and C.-J. Lin. A fast parallel SGD for matrix factorization in shared memory systems. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 249-256. ACM, 2013.
M. Zinkevich, M. Weimer, L. Li, and A. J. Smola. Parallelized stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 2595-2603, 2010.
Local Aggregative Games
Vikas K. Garg
CSAIL, MIT
[email protected]
Tommi Jaakkola
CSAIL, MIT
[email protected]
Aggregative games provide a rich abstraction to model strategic multi-agent interactions. We introduce local aggregative games, where the payoff of each player is a function of its own action and the aggregate behavior of its neighbors in a connected digraph. We show the existence of a pure strategy ε-Nash equilibrium in such games when the payoff functions are convex or submodular. We prove an information theoretic lower bound, in a value oracle model, on approximating the structure of the digraph with non-negative monotone submodular cost functions on the edge set cardinality. We also define a new notion of structural stability, and introduce γ-aggregative games that generalize local aggregative games and admit ε-Nash equilibria that are stable with respect to small changes in some specified graph property. Moreover, we provide algorithms for our models that can meaningfully estimate the game structure and the parameters of the aggregator function from real voting data.
1 Introduction
Structured prediction methods have been remarkably successful in learning mappings between input
observations and output configurations [1; 2; 3]. The central guiding formulation involves learning a
scoring function that recovers the configuration as the highest scoring assignment. In contrast, in
a game theoretic setting, myopic strategic interactions among players lead to a Nash equilibrium
or locally optimal configuration rather than highest scoring global configuration. Learning games
therefore involves, at best, enforcement of local consistency constraints as recently advocated [4].
[4] introduced the notion of contextual potential games, and proposed a dual decomposition algorithm
for learning these games from a set of pure strategy Nash equilibria. However, since their setting was
restricted to learning undirected tree structured potential games, it cannot handle (a) asymmetries in
the strategic interactions, and (b) higher order interactions. Moreover, a wide class of strategic games
(e.g. anonymous games [5]) do not admit a potential function and thus locally optimal configurations
do not coincide with pure strategy Nash equilibria. In such games, the existence of only (approximate)
mixed strategy equilibria is guaranteed [6].
In this work, we focus on learning local aggregative games to address some of these issues. In an aggregative game [7; 8; 9], every player gets a payoff that depends only on its own strategy and the aggregate of all the other players' strategies. Aggregative games and their generalizations form a very rich class of strategic games that subsumes Cournot oligopoly, public goods, anonymous, mean field, and cost and surplus sharing games [10; 11; 12; 13]. In a local aggregative game, a player's payoff is a function of its own strategy and the aggregate strategy of its neighbors (i.e. only a subset of other players). We do not assume that the interactions are symmetric or confined to a tree structure, and therefore the game structure could, in general, be a spanning digraph, possibly with cycles.
We consider local aggregative games where each player's payoff is a convex or submodular Lipschitz function of the aggregate of its neighbors. We prove sufficient conditions under which such games admit some pure strategy ε-Nash equilibrium. We then prove an information theoretic lower bound that, for a specified ε, approximating a game structure that minimizes a non-negative monotone submodular cost objective on the cardinality of the edge set may require exponentially many queries under a zero-order or value oracle model. Our result generalizes the approximability of the submodular minimum spanning tree problem to degree constrained spanning digraphs [14]. We argue that this lower bound might be averted with a dataset of multiple ε-Nash equilibrium configurations sampled
from the local aggregative game. We also introduce γ-aggregative games that generalize local aggregative games to accommodate the (relatively weaker) effect of players that are not neighbors. These games are shown to have a desirable stability property that makes their ε-Nash equilibria robust to small fluctuations in the aggregator input. We formulate learning these games as optimization problems that can be efficiently solved via branch and bound, outer approximation decomposition, or extended cutting plane methods [17; 18]. The information theoretic hardness results do not apply to our algorithms, since they have access to the (sub)gradients as well, unlike the value oracle model where only the function values may be queried. Our experiments strongly corroborate the efficacy of the local aggregative and γ-aggregative games in estimating the game structure on two real voting datasets, namely, the US Supreme Court Rulings and the Congressional Votes.
2 Setting
We consider an n-player game where each player i ∈ [n] ≜ {1, 2, . . . , n} plays a strategy (or action) from a finite set A_i. For any strategy profile a, a_i denotes the strategy of the i-th player, and a_{−i} the strategies of the other players. We are interested in local aggregative games that have the property that the payoff of each player i depends only on its own action and the aggregate action of its neighbors N_G(i) = {j ∈ V(G) : (j, i) ∈ E(G)} in a connected digraph G = (V, E), where |V| = n. Since the graph is directed, the neighbors need not be symmetric, i.e., (j, i) ∈ E does not imply (i, j) ∈ E. For any strategy profile a, we will denote the strategy vector of the neighbors of player i by a_{N_G(i)}. We assume that player i has a payoff function of the form u_i(a_i, f_G(a, i)), where f_G(a, i) ≜ f(a_{N_G(i)}) is a local aggregator function, and u_i is convex and Lipschitz in the aggregate f_G(a, i) for all a_i ∈ A_i. Since f_G(a, i) may take only finitely many values, we will assume interpolation between these values such that they form a convex set. We can define the Lipschitz constant of G as
such that they form a convex set. We can define the Lipschitz constant of G as
?(G) ,
max
i,ai ,a0?i ,a00
?i
{ui (ai , fG (a0 , i)) ? ui (ai , fG (a00 , i))},
(1)
where the vectors a0?i and a00?i differ in exactly one coordinate. Clearly, the payoff of any player in
the network does not change by more than ?(G) when one of the neighbors changes its strategy. We
can now talk about a class of aggregative games characterized by the Lipschitz constant:
L(?, n) = {G : V (G) = n, ?(G) ? ?}.
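For intuition, λ(G) in (1) can be computed by brute force on small games. The sketch below (our construction, not from the paper) uses the averaging aggregator f(a_{N_G(i)}) = fraction of neighbors playing 1 and the payoff u_i(a_i, z) = a_i · z over binary actions; on a k-regular digraph this yields λ(G) = 1/k:

```python
import itertools
import numpy as np

def lipschitz_constant(neighbors, u, n):
    # neighbors[i]: list of in-neighbors of i; u(i, a_i, agg): payoff of player i.
    lam = 0.0
    for a in itertools.product([0, 1], repeat=n):   # enumerate full profiles
        for j in range(n):                          # flip one coordinate of a_{-i}
            b = list(a); b[j] = 1 - b[j]
            for i in range(n):
                if i == j or not neighbors[i]:
                    continue
                for ai in (0, 1):
                    agg_a = np.mean([a[v] for v in neighbors[i]])
                    agg_b = np.mean([b[v] for v in neighbors[i]])
                    lam = max(lam, abs(u(i, ai, agg_a) - u(i, ai, agg_b)))
    return lam

n = 5
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]        # 2-regular ring digraph
print(lipschitz_constant(nbrs, lambda i, ai, z: ai * z, n))  # 0.5 == 1/k for k = 2
```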
A strategy profile a = (a_i, a_{−i}) is said to be a pure strategy ε-Nash equilibrium (ε-PSNE) if no player can improve its payoff by more than ε by unilaterally switching its strategy. In other words, no player i can gain more than ε by playing an alternative strategy a'_i if the other players continue to play a_{−i}. More generally, instead of playing deterministic actions in response to the actions of others, each player can randomize its actions. Then, the distributions over players' actions constitute a mixed strategy ε-Nash equilibrium if no unilateral deviation can improve the expected payoff by more than ε. We will prove the existence of ε-PSNE in our setting. We will assume a training set S = {a¹, a², . . . , a^M}, where each a^m is an ε-PSNE sampled from our game. Our objective is to recover the game digraph G and the payoff functions u_i, i ∈ [n], from the set S.
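Checking the ε-PSNE condition is a direct translation of the definition. Here is a small helper (ours, not from the paper), using the same payoff interface as the previous sketch:

```python
import numpy as np

def is_eps_psne(a, neighbors, u, eps, actions=(0, 1)):
    # a: tuple of actions; returns True if no unilateral deviation gains > eps.
    for i, ai in enumerate(a):
        agg = np.mean([a[v] for v in neighbors[i]]) if neighbors[i] else 0.0
        base = u(i, ai, agg)            # i's own action does not enter its aggregate
        if any(u(i, alt, agg) - base > eps for alt in actions if alt != ai):
            return False
    return True

n = 5
nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
print(is_eps_psne((1,) * n, nbrs, lambda i, ai, z: ai * z, eps=0.0))  # all-ones: True
```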
The rest of the paper is organized as follows. We first establish some important theoretical paraphernalia on the local aggregative games in Section 3. In Section 4, we introduce γ-aggregative games and show that γ-aggregators are structurally stable. We formulate the learning problem in Section 5, and describe our experimental setup and results in Section 6. We state the theoretical results in the main text, and provide the detailed proofs in the Supplementary (Section 7) for improved readability.
3 Theoretical foundations
Any finite game is guaranteed to admit a mixed strategy ε-equilibrium due to a seminal result by Nash [6]. However, general games may not have any ε-PSNE (for small ε). We first prove a sufficient condition for the existence of ε-PSNE in local aggregative games with a small Lipschitz constant. A similar result holds when the payoff functions u_i(·) are non-negative monotone submodular and Lipschitz (see the supplementary material for details).
Theorem 1. Any local aggregative game on a connected digraph G, where G ∈ L(λ, n) and max_i |A_i| ≤ m, admits a 10λ√(ln(8mn))-PSNE.
Proof. (Sketch.) The main idea behind the proof is to sample a random strategy profile from a mixed strategy Nash equilibrium of the game, and show that with high probability the sampled profile corresponds to an ε-PSNE when the Lipschitz constant is small. The proof is based on a novel application of Talagrand's concentration inequality.
Theorem 1 implies the minimum degree d (which depends on the number of players n, the local aggregator function A, the Lipschitz constant λ, and ε) of the game structure that ensures the existence of at least one ε-PSNE. One example is the following local generalization of binary summarization games [8]. Each player i plays a_i ∈ {0, 1} and has access to an averaging aggregator that computes the fraction of its neighbors playing action 1. Then, the Lipschitz constant of G is 1/k, where k is the minimum degree of the underlying game digraph, and an ε-PSNE is guaranteed for k = Ω(√(ln n)/ε). In other words, k needs to grow slowly (i.e., only sub-logarithmically) in the number of players n.
An important follow-up question is to determine the complexity of recovering the underlying game structure in a local aggregative game with an ε-PSNE. We will answer this question in a combinatorial setting with non-negative monotone submodular cost functions on the edge set cardinality. Specifically, we consider the following problem. Given a connected digraph G(V, E), a degree parameter d, and a submodular cost function h : 2^E → R_+ that is normalized (i.e. h(∅) = 0) and monotone (i.e. h(S) ≤ h(T) for all S ⊆ T ⊆ E), we would like to find a spanning directed subgraph¹ G_s of G such that h(G_s) is minimized, the in-degree of each player is at least d, and G_s admits some ε-Nash equilibrium when players play to maximize their individual payoffs. We first establish a technical lemma that provides tight lower and upper bounds on the probability that a directed random graph is disconnected, and thus extends a similar result for Erdős-Rényi random graphs [25] to the directed setting. The lemma will be invoked while proving a bound for the recovery problem, and might be of independent interest beyond this work.
Lemma 2. Consider a directed random graph DG(n, p), where p ∈ (0, 1) is the probability of choosing any directed edge independently of the others. Define q = 1 − p. Let P_n be the probability that DG is connected. Then, the probability that DG is disconnected is 1 − P_n = nq^{2(n−1)} + O(n²q^{3n}).
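The leading term can be checked empirically. In the sketch below (ours, not from the paper), we read "connected" as weak connectivity, i.e., the underlying undirected graph is connected, consistent with the notion of spanning used in footnote 1 below; the agreement with the simulation is only approximate since the O(n²q^{3n}) correction is not negligible at these parameters:

```python
import numpy as np

def weakly_connected(A):
    # BFS on the undirected version of the 0/1 adjacency matrix A.
    n = len(A)
    U = (A + A.T) > 0
    seen, stack = {0}, [0]
    while stack:
        v = stack.pop()
        for w in np.flatnonzero(U[v]):
            w = int(w)
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

rng = np.random.default_rng(0)
n, p, trials = 8, 0.35, 20000
q = 1.0 - p
disconnected = 0
for _ in range(trials):
    A = (rng.random((n, n)) < p).astype(int)
    np.fill_diagonal(A, 0)
    disconnected += not weakly_connected(A)
print(disconnected / trials, n * q ** (2 * (n - 1)))  # empirical vs leading term
```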
We will now prove an information theoretic lower bound for the recovery problem under the value oracle model [14]. A problem with an information theoretic lower bound of α has the property that any randomized algorithm that approximates the optimum to within a factor α with high probability needs to make a superpolynomial number of queries under the specified oracle model. In the value oracle model, each query Q corresponds to obtaining the cost/value of a candidate set by issuing Q to the value oracle (which acts as a black-box). We invoke Yao's minimax principle [28], which states the relation between distributional complexity and randomized complexity. Using Yao's principle, the performance of randomized algorithms can be lower bounded by proving that no deterministic algorithm can perform well on an appropriately defined distribution of hard inputs.
Theorem 3. Let ε > 0, and λ, δ ∈ (0, 1). Let n be the number of players in a local aggregative game, where each player i ∈ [n] is provided with some convex λ-Lipschitz function u_i and an aggregator A. Let D_n ≜ D_n(λ, ε, A, (u_i)_{i∈[n]}) be the sufficient in-degree (number of incoming edges) of each player such that the game admits some ε-PSNE when the players play to maximize their individual payoffs u_i according to the local information provided by the aggregator A. Assume any non-negative monotone submodular cost function on the edge set cardinality. Then, for any d ≥ max{D_n, n^δ ln n}/(1 − δ), any randomized algorithm that approximates the game structure to a factor n^{1−δ}/((1 + δ)d) requires exponentially many queries under the value oracle model.
Proof. (Sketch.) The main idea is to construct a digraph that has exponentially many spanning directed subgraphs, and to define two carefully designed submodular cost functions over the edges of the digraph, one of which is deterministic in the query size while the other depends on a distribution. We make it hard for the deterministic algorithm to tell one cost function from the other. This can be accomplished by ensuring two conditions: (a) these cost functions map to the same value on almost all the queries, and (b) the discrepancy in the optimum value of the functions (on the optimum query) is massive. The proof invokes Lemma 2, exploits the degree constraint for ε-PSNE, argues about the optimal query size, and appeals to Yao's minimax principle.
¹ A spanning directed graph spans all the vertices, and has the property that the (multi)graph obtained by replacing the directed edges with undirected edges is connected.
Theorem 3 might sound pessimistic from a practical perspective; however, a closer look reveals why the query complexity turned out to be prohibitive. The proof hinged on the fact that all spanning subgraphs with the same edge cardinality that satisfied the sufficiency condition for the existence of an ε-PSNE were equally good with respect to our deterministic submodular function, and we created an instance with exponentially many such spanning subgraphs. However, we might be able to circumvent Theorem 3 by breaking the symmetry, e.g., by using data that specifies multiple distinct ε-Nash equilibria. Then, since the digraph instance would be required to satisfy these equilibria, fooling the deterministic algorithm would be more difficult. Thus data could, in principle, help us avoid the complexity result of Theorem 3. We will formulate optimization problems that enforce margin separability on the equilibrium profiles, which will further limit the number of potential digraphs and thus facilitate learning the aggregative game. Moreover, the hardness result does not apply to our estimation algorithms, which will have access to the (sub)gradients in addition to the function values.
4 γ-Aggregative Games
We now describe a generalization of the local aggregative games, which we call the ?-aggregative
games. The main idea behind these games is that a player i ? [n] may, often, be influenced not only
by the aggregate behavior of its neighbors, but also to a lesser extent on the aggregate behavior of
the other players, whose influence on the payoff of i decreases with increase in their distance to i.
Let d_G(i, j) be the number of intermediate nodes on a shortest path from j to i in the underlying
digraph G = (V, E). That is, d_G(i, j) = 0 if (j, i) ∈ E, and 1 + min_{k∈V\{i,j}} (d_G(i, k) + d_G(k, j))
otherwise. Let W_G ≜ max_{i,j∈V} d_G(i, j) be the width of G. For any strategy profile a ∈ {0, 1}^n
and t ∈ {0, 1, . . . , W_G}, let I_G^t(i) = {j : d_G(i, j) = t} be the set of nodes that have exactly t
intermediaries on a shortest path to i, and let a_{I_G^t(i)} be a strategy profile of the nodes in this set. We
define aggregator functions f_G^t(a, i) ≜ f(a_{I_G^t(i)}) that return the aggregate at level t with respect to
player i. Let γ ∈ (0, 1) be a discount rate. Define the γ-aggregator function

    g_G(a, γ, ℓ, i) ≜ (Σ_{t=0}^{ℓ} γ^t f_G^t(a, i)) / (Σ_{t=0}^{ℓ} γ^t),

which discounts the aggregates based on the distance ℓ ∈ {0, 1, . . . , W_G} to i. We assume that player
i ∈ [n] has a payoff function of the form u_i(a_i, ·), which is convex and λ-Lipschitz in its second
argument for each fixed a_i. Finally, we define the Lipschitz constant of the γ-aggregative game as

    λ_γ(G) ≜ max_{i, a_i, a′_{−i}, a″_{−i}} {u_i(a_i, g_G(a′, γ, W_G, i)) − u_i(a_i, g_G(a″, γ, W_G, i))},

where the vectors a′_{−i} and a″_{−i} differ in exactly one coordinate.
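To make the γ-aggregator concrete, the following is a minimal Python sketch that evaluates g_G(a, γ, ℓ, i) for a digraph given as predecessor lists; the helper names and the choice of the sum as the base aggregator f are our illustrative assumptions, not the paper's implementation.

    from collections import deque

    def shortest_path_levels(pred, i):
        # BFS over reversed edges: level t collects nodes j with d_G(i, j) = t,
        # i.e., t intermediaries on a shortest path from j to i.
        dist = {i: -1}
        frontier = deque([(i, -1)])
        levels = {}
        while frontier:
            u, d = frontier.popleft()
            for j in pred[u]:  # (j, u) is an edge of G
                if j not in dist:
                    dist[j] = d + 1
                    levels.setdefault(d + 1, []).append(j)
                    frontier.append((j, d + 1))
        return levels  # levels[t] = I_G^t(i)

    def gamma_aggregate(a, pred, i, gamma, ell, f=sum):
        # g_G(a, gamma, ell, i): discounted, normalized multi-level aggregate.
        levels = shortest_path_levels(pred, i)
        num = sum(gamma**t * f(a[j] for j in levels.get(t, [])) for t in range(ell + 1))
        den = sum(gamma**t for t in range(ell + 1))
        return num / den

Setting ell = 0 recovers the purely local aggregator over direct neighbors.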
The main criticism of the concept of ε-Nash equilibrium concerns lack of stability: if any player
deviates (due to ε-incentive), then in general, some other player may have a high incentive to deviate
as well, resulting in a non-equilibrium profile. Worse, it may take exponentially many steps to reach
an ε-equilibrium again. Thus, stability of ε-equilibrium is an important consideration. We will now
introduce an appropriate notion of stability, and prove that γ-aggregative games admit stable pure
strategy ε-equilibria in that any deviation by a player does not affect the equilibrium much.
Structurally Stable Aggregator (SSA): Let G = (V, E) be a connected digraph and P_G(w) be a
property of G, where w denotes the parameters of P_G. Let A be an aggregator function that depends
on P_G. Suppose M = (a_1, a_2, . . . , a_n) is an ε-PSNE when A aggregates information according
to P_G(w), where a_i is the strategy of player i ∈ V = [n]. Suppose now A aggregates information
according to P_G(w′). Then, A is a (δ, Δ)_{P,w,w′}-structurally stable aggregator (SSA) with respect to
G, where δ and Δ are functions of the gap between w and w′, if it satisfies these conditions: (a) M is an
(ε + δ)-equilibrium under P_G(w′), and (b) the payoff of each player at the equilibrium profile M
under P_G(w′) is at most Δ = O(δ) worse than that under P_G(w).
An SSA with small values of δ and Δ with respect to a small change in w is desirable since
that would discourage the players from deviating from their ε-equilibrium strategy; however, such an
aggregator might not exist in general. The following result shows the γ-aggregator is an SSA.
Theorem 4. Let γ ∈ (0, 1), and g_G(·, γ, ℓ, ·) be the γ-aggregator defined above. Let P_G(ℓ) be the
property "the number of maximum permissible intermediaries in a shortest path of length ℓ in G".
Then, g_G is a (2λΔ_G, λΔ_G)_{P,W_G,L}-SSA, where L < W_G and Δ_G depends on γ and W_G − L.
5 Learning formulation
We now formulate an optimization problem to recover the underlying graph structure, the parameters
of the aggregator function, and the payoff functions. Let S = {a^1, a^2, . . . , a^M} be our training
set, where each strategy profile a^m ∈ {0, 1}^n is an ε-PSNE, and a^m_i is the action of player i in
example m ∈ [M]. Let f be a local aggregator function, and let a^m_{N_i} be the actions of neighbors N_i
of player i ∈ [n] on training example m. We will also represent N as a 0-1 adjacency matrix, with
the interpretation that N_ij = 1 implies that j ∈ N_i, and N_ij = 0 otherwise. We will use the notation
N_{i·} ≜ {N_ij : j ≠ i}. Note that since the underlying game structure is represented as a digraph, N_ij
and N_ji need not be equal. Let h be a concave function such that h(0) = 0. Then F_i(h) ≜ h(|N_i|)
is submodular since the concave transformation of the cardinality function results in a submodular
function. Moreover F(h) = Σ_{i∈[n]} F_i(h) is submodular since it is a sum of submodular functions.
We will use F(h) as a sparsity-inducing prior. Several choices of h have been advocated in the
literature, including suitably normalized geometric, log, smooth log and square root functions [15].
We denote the parameters of the aggregator function f by θ_f. The payoff functions will depend
on the choice of this parameterization. For a fixed aggregator f (such as the sum aggregator), linear
parameterization is one possibility, where the payoff function for player i ∈ [n] takes the form

    u^f_i(a^m, N_{i·}) = a^m_i w_{i1} (w_f f(a^m_{N_i}) + b_f) + (1 − a^m_i) w_{i0} (w_f f(a^m_{N_i}) + b_f),

where w_{i·} = (w_{i0}, w_{i1})^⊤ and N_{i·} denote the independent parameters for player i and θ_f = (w_f, b_f)^⊤
are the shared parameters. Our setting is flexible, and we can easily accommodate more complex
aggregators instead of the standard aggregators (e.g. sum). Exchangeable functions over sets [16]
provide one such example. An interesting instantiation is a neural network comprising one hidden
layer, an output sum layer, with tied weights. Specifically, let W ∈ R^{n×(n−1)} where all entries of W
are equal to w_NN. Let σ be an element-wise non-linearity (e.g. we used the ReLU function, σ(x) =
max{x, 0} for our experiments). Then, using the element-wise multiplication operator ⊙ and a vector
1 with all ones, u_i may be expressed as

    u^{f_NN}_i(a^m, N_{i·}) = a^m_i w_{i1} f_NN(a^m_{N_i}) + (1 − a^m_i) w_{i0} f_NN(a^m_{N_i}),

where the permutation invariant neural aggregator, parameterized by θ_{f_NN} = (w_NN, b_NN)^⊤, is

    f_NN(a^m_{N_i}) = 1^⊤ σ(W (a^m_{−i} ⊙ N_{i·}) + b_NN).

We could have more complex functions such as deeper neural nets, with parameter sharing, at
the expense of increased computation. We believe this versatility makes local aggregative games
particularly attractive, and provides a promising avenue for modeling structured strategic settings.
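As a concrete illustration, here is a small NumPy sketch of the tied-weight neural aggregator and the payoff it induces; the function names and the exact placement of the neighborhood mask follow our reading of the formula above and should be treated as assumptions.

    import numpy as np

    def f_nn(a_minus_i, n_row, w_nn, b_nn):
        # All entries of W equal w_nn, so W @ x replicates w_nn * sum(x) in
        # each of the n hidden units; 1^T then sums the identical ReLU units.
        masked = a_minus_i * n_row                      # keep only (soft) neighbors of i
        hidden = max(w_nn * masked.sum() + b_nn, 0.0)   # ReLU
        n = len(a_minus_i) + 1
        return n * hidden

    def payoff_nn(a, i, n_row, w_i0, w_i1, w_nn, b_nn):
        a_minus_i = np.delete(a, i)
        agg = f_nn(a_minus_i, n_row, w_nn, b_nn)
        return a[i] * w_i1 * agg + (1 - a[i]) * w_i0 * agg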
Each a^m is an ε-PSNE, so it ensures a locally (near) optimal reward for each player. We will impose
a margin constraint on the difference in the payoffs when player i unilaterally deviates from a^m_i.
Note that N_i = {j ∈ N_{i·} : N_ij = 1}. Then, introducing slack variables ξ_i^m, and hyperparameters
C, C′, C_f > 0, we obtain the following optimization problem in O(n²) variables:

    min_{θ_f, w_{1·},...,w_{n·}, N_{1·},...,N_{n·}}  (1/2) Σ_{i=1}^n ||w_{i·}||² + (C_f/(2M)) ||θ_f||² + (C′/n) Σ_{i=1}^n F_i(h) + (C/M) Σ_{i=1}^n Σ_{m=1}^M ξ_i^m
    s.t.  ∀i ∈ [n], m ∈ [M] :  u^f_i(a^m, N_{i·}) − u^f_i(1 − a^m, N_{i·}) ≥ e(a^m, a′) − ξ_i^m
          ∀i ∈ [n], m ∈ [M] :  ξ_i^m ≥ 0
          ∀i ∈ [n] :  N_{i·} ∈ {0, 1}^{n−1},
where a^m and a′ differ in exactly one coordinate, and e is a margin specific loss term, such as the
Hamming loss e_H(a, a′) = 1{a ≠ a′} or the scaled 0-1 loss e_s(a, a′) = 1{a ≠ a′}/n. From a game
theoretic perspective, the scaled loss has a natural asymptotic interpretation: as the number of players
n → ∞, e_s(a^m, a′) → 0, and we get ∀i ∈ [n], m ∈ [M] : u^f_i(a^m, N_{i·}) − u^f_i(1 − a^m, N_{i·}) ≥ −ξ_i^m,
i.e., each training example a^m is an ε-PSNE, where ε = max_{i∈[n], m∈[M]} ξ_i^m.
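To see what the margin constraints demand of the data, this sketch computes the smallest feasible slack ξ_i^m for a fixed model by flipping each player's action in every training profile; payoff is assumed to be a closure payoff(a, i) already bound to the learned parameters, and the scaled 0-1 loss is hard-coded, both as illustrative assumptions.

    def min_slacks(profiles, payoff, n):
        # slacks[m][i] = max(0, e_s - (u_i(a^m) - u_i(a^m with i flipped))),
        # the violation of the margin constraint at example m for player i.
        e_s = 1.0 / n  # scaled 0-1 loss of a unilateral deviation
        slacks = []
        for a in profiles:
            row = []
            for i in range(n):
                a_dev = list(a)
                a_dev[i] = 1 - a_dev[i]
                margin = payoff(a, i) - payoff(a_dev, i)
                row.append(max(0.0, e_s - margin))
            slacks.append(row)
        return slacks

The largest entry over all (i, m) is exactly the ε for which the data forms an ε-PSNE under the scaled loss.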
Once θ_f is fixed, the problem clearly becomes separable, i.e., each player i can solve an independent
sub-problem in O(n) variables. Each sub-problem includes both continuous and binary variables,
and may be solved via branch and bound, outer approximation decomposition, or extended cutting
plane methods (see [17; 18] for an overview of these techniques). The individual solutions can be
forced to agree on θ_f via a standard dual decomposition procedure, and methods like the alternating
direction method of multipliers (ADMM) [19] could be leveraged to facilitate rapid agreement of the
continuous parameters w_f and b_f. The extension to learning the γ-aggregative games is immediate.
We now describe some other optimization variants for the local aggregative games. Instead of
constraining each player to a hard neighborhood, one might relax the constraints N_ij ∈ {0, 1} to
N_ij ∈ [0, 1], where N_ij might be interpreted as the strength of the edge (j, i). The Lovász convex
relaxation of F [20] is a natural prior for inducing sparsity in this case. Specifically, for an ordering
of values |N_i(0)| ≥ |N_i(1)| ≥ . . . ≥ |N_i(n−1)|, i ∈ [n], this prior is given by

    Ω_h(N) = Σ_{i=1}^n Ω_h(N, i),  where  Ω_h(N, i) = Σ_{k=0}^{n−1} [h(k + 1) − h(k)] |N_i(k)|.

Since the transformation h encodes the preference for each degree, Ω_h(N) will act as a prior that
encourages structured sparsity. One might also enforce other constraints on the structure of the
local aggregative game. For instance, an undirected graph could be obtained by adding constraints
N_ij = N_ji, for i ∈ [n], j ≠ i. Likewise, a minimum in-degree constraint may be enforced on player
i by requiring Σ_j N_ij ≥ d. Both these constraints are linear in N_{i·}, and thus do not add to the
complexity of the problem. Finally, based on cues such as domain knowledge, one may wish to add a
degree of freedom by not enforcing sharing of the parameters of the aggregator among the players.
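A minimal sketch of this prior; the default h is a square-root-type concave function with h(0) = 0, in the spirit of the smoothed square-root choice used in the experiments below, but its exact form here is an assumption.

    import math

    def lovasz_prior_row(n_row, h):
        # Omega_h(N, i): sort |N_ij| in decreasing order and weight the k-th
        # largest magnitude by h(k+1) - h(k), which decreases since h is concave.
        mags = sorted((abs(v) for v in n_row), reverse=True)
        return sum((h(k + 1) - h(k)) * mags[k] for k in range(len(mags)))

    def lovasz_prior(N, h=lambda k: math.sqrt(k + 1) - 1):
        return sum(lovasz_prior_row(row, h) for row in N)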
6 Experiments
We now present strong empirical evidence to demonstrate the efficacy of local aggregative games in
unraveling the aggregative game structure of two real voting datasets, namely, the US Supreme Court
Rulings dataset and the Congressional Votes dataset. Our experiments span the different variants
for recovering the structure of the aggregative games including settings where (a) parameters of
the aggregator are learned along with the payoffs, (b) in-degree of each node is lower bounded, (c)
γ-discounting is used, or (d) parameters of the aggregator are fixed. We will also demonstrate that
our method compares favorably with the potential games method for tree structured games [4], even
when we relax the digraph setting to let weights N_ij ∈ [0, 1] instead of {0, 1} or force the game
structure to be undirected by adding the constraints N_ij = N_ji. For our purposes, we used the
smoothed square-root concave function h(i) = √(i + 1) − 1 + ηi parameterized by η, the sum and
neural aggregators, and the scaled 0-1 loss function e_s(a, a′) = 1{a ≠ a′}/n. We found our model to
perform well across a very wide range of hyperparameters. All the experiments described below used
the following setting of values: η = 1, C = 100, and C_f = 1. C′ was also set to 0.01 in all settings
except when the parameters of the aggregator were fixed, when we set C′ = 0.01√n.
[Figure 1: recovered digraph over the nine Justices (T, A, S, R, So, Ka, G, K, B), with the conservative and liberal blocs on opposite sides and pairwise agreement percentages (80%-94%) on the edges.]
Figure 1: Supreme Court Rulings (full bench): The digraph recovered by the local aggregative
and γ-aggregative games (ℓ ≤ 2, all γ) with the sum aggregator as well as the neural aggregator is
consistent with the known behavior of the Justices: conservative and liberal sides of the bench are
well segregated from each other, while the moderate Justice Kennedy is positioned near the center.
Numbers on the arrows are taken from an independent study [21] on Justices' mutual voting patterns.
6.1 Dataset 1: Supreme Court Rulings
We experimented with a dataset containing all non-unanimous rulings by the US Supreme Court
bench during the year 2013. We denote the Justices of the bench by their last name initials, and
add a second character to some names to avoid the conflicts in the initials: Alito (A), Breyer (B),
Ginsburg (G), Kennedy (K), Kagan (Ka), Roberts (R), Scalia (S), Sotomayor (So), and Thomas (T).
We obtained a binary dataset following the procedure described in [4].
[Figure 2: four recovered structures over Justices T, A, G, S, K, R, B: (a) Local Aggregative; (b) Potential Exhaustive Enumeration; (c) Local Aggregative (Undirected & Relaxed); (d) Potential Hamming.]
Figure 2: Comparison with the potential games method [4]: (a) The digraph produced by our
method with the sum as well as the neural aggregator is consistent with the expected voting behavior
of the Justices on the data used by [4] in their experiments. (c) Relaxing all Nij ? [0, 1] and enforcing
Nij = Nji still resulted in a meaningful undirected structure. (b) & (d) The tree structures obtained
by the brute force and the Hamming distance restricted methods [4] fail to capture higher order
interactions, e.g., the strongly connected component between Justices A, T, S and R.
[Figure 3: two recovered structures over Justices T, A, G, S, K, R, B: (a) Local Aggregative (d ≥ 2); (b) γ-aggregative (ℓ = 2, γ = 0.9).]
Figure 3: Degree constrained and γ-aggregative games: (a) Enforcing the degree of each node to
be at least 2 reinforces the intra-republican and the intra-democrat affinity, reaffirming their respective
jurisprudences, and (b) γ-aggregative games also support this observation: the same digraph as Fig.
2(a) is obtained unless ℓ and γ are set to high values (plot generated with ℓ = 2, γ = 0.9), when the
strong effect of one-hop and two-hop neighbors overpowers the direct connection between B and G.
Fig. 1 shows the structure recovered by the local aggregative method. The method was able to
distinguish the conservative side of the court (Justices A, R, S, and T) from the left side (B, G, Ka, and
So). Also, the structure places Justice Kennedy in between the two extremes, which is consistent with
his moderate jurisprudence. To put our method in perspective, we also compare the result of applying
our method on the same subset of the full bench data that was considered by [4] in their experiments.
Fig. 2 demonstrates how the local aggregative approach estimated meaningful structures consistent
with the full bench structure, and compared favorably with both the methods of [4]. Finally, Fig. 3(a)
and 3(b) demonstrate the effect of enforcing minimum in-degree constraints in the local aggregative
games, and increasing ℓ and γ in the γ-aggregative games respectively. As expected, the estimated
γ-aggregative structure is stable unless γ and ℓ are set to high values when non-local effects kick in.
We provide some additional results on the degree-constrained local aggregative games (Fig. 4) and
the γ-aggregative games (Fig. 5). In particular, we see that the γ-aggregative games are indeed robust
to small changes in the aggregator input, as expected in the light of the stability result of Theorem 4.
[Figure 4: recovered digraph over the nine Justices under the minimum in-degree constraint, with the conservative and liberal blocs again separated.]
Figure 4: Degree constrained local aggregative games (full bench): The digraph recovered by the
local aggregative method when the degree of each node was constrained to be at least 2. Clearly,
the cohesion among the Justices on the conservative side got strengthened by the degree constraint
(likewise for the liberal side of the bench). On the other hand, no additional edges were added
between the two sides.
[Figure 5: digraph estimated by the γ-aggregative method over the nine Justices, identical to the structure in Fig. 1.]
Figure 5: γ-Aggregative Games (full bench): The digraph estimated by the γ-aggregative method
for ℓ = 2, γ = 0.9, and lower values of γ and/or ℓ. Note that an identical structure was obtained by
the local aggregative method (Fig. 1). This indicates that despite heavily weighting the effect of the
nodes on a shortest path with one or two intermediary hops, the structure in Fig. 1 is very stable.
Also, this substantiates our theoretical result about the stability of the γ-aggregative games.
6.2 Dataset 2: Congressional Votes
We also experimented with the Congressional Votes data [22], that contains the votes by the US
Senators on all the bills of the 110 US Congress, Session 2. Each of the 100 Senators voted in
favor of (treated as 1) or against each bill (treated as 0). Fig. 6 shows that the local aggregative
method provides meaningful insights into the voting patterns of the Senators as well. In particular,
few connections exist between the nodes in red and those in blue, making the bipartisan structure
quite apparent. In some cases, the intra-party connections might be bolstered due to same state
affiliations, e.g. Senators Corker (28) and Alexander (2) represent Tennessee. The cross connections
may also capture some interesting collaborations or influences, e.g., Senators Allard (3) and Clinton
(22) introduced the Autism Act. Likewise, Collins (26) and Carper (19) reintroduced the Fire Grants
Reauthorization Act. The potential methods [4] failed to estimate some of these strategic interactions.
Likewise, Fig. 7 provides some interesting insights regarding the ideologies of some Senators that
follow a more centrist ideology than their respective political affiliations would suggest.
[Figure 6: recovered digraph over 30 Senators, nodes labeled 1-30.]
Figure 6: Comparison with [4] on the Congressional Votes data: The digraph recovered by the local
aggregative method, on the data used by [4], when the parameters of the sum aggregator were fixed
(w_f = 1, b_f = 0). The segregation between the Republicans (shown in red) and the Democrats
(shown in blue) strongly suggests that they are aligned according to their party policies.
Figure 7: Complete Congressional Votes data: The digraph recovered on fixing parameters, relaxing Nij to [0, 1], and thresholding at 0.05. The estimated structure not only separates majority of
the reds from the blues, but also associates closely the then independent Senators Sanders (82) and
Lieberman (62) with the Democrats. Moreover, the few reds among the blues generally identify with
a more centrist ideology - Collins (26) and Snowe (87) are two prominent examples.
Conclusion
An overwhelming majority of literature on machine learning is restricted to modeling non-strategic
settings. Strategic interactions in several real world systems such as decision/voting often exhibit local
structure in terms of how players are guided by or respond to each other. In other words, different
agents make rational moves in response to their neighboring agents leading to locally stable configurations such as Nash equilibria. Another challenge with modeling the strategic settings is that they
are invariably unsupervised. Consequently, standard learning techniques such as structured prediction
that enforce global consistency constraints fall short in such settings (cf. [4]). As substantiated by our
experiments, local aggregative games nicely encapsulate various strategic applications, and could
be leveraged as a tool to glean important insights from voting data. Furthermore, the stability of
approximate equilibria is a primary consideration from a conceptual viewpoint, and the γ-aggregative
games introduced in this work add a fresh perspective by achieving structural stability.
References
[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data, ICML, 2001.
[2] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks, NIPS, 2003.
[3] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables, JMLR, 6(2), pp. 1453-1484, 2005.
[4] V. K. Garg and T. Jaakkola. Learning Tree Structured Potential Games, NIPS, 2016.
[5] C. Daskalakis and C. H. Papadimitriou. Approximate Nash equilibria in anonymous games, Journal of
Economic Theory, 156, pp. 207-245, 2015.
[6] J. Nash. Non-Cooperative Games, Annals of Mathematics, 54(2), pp. 286-295, 1951.
[7] R. Selten. Preispolitik der Mehrproduktenunternehmung in der Statischen Theorie, Springer-Verlag, 1970.
[8] M. Kearns and Y. Mansour. Efficient Nash computation in large population games with bounded influence,
UAI, 2002.
[9] R. Cummings, M. Kearns, A. Roth, and Z. S. Wu. Privacy and truthful equilibrium selection for aggregative
games, WINE, 2015.
[10] R. Cornes and R. Harley. Fully Aggregative Games, Economic Letters, 116, pp. 631-633, 2012.
[11] W. Novshek. On the Existence of Cournot Equilibrium, Review of Economic Studies, 52, pp. 86-98, 1985.
[12] M. K. Jensen. Aggregative Games and Best-Reply Potentials, Economic Theory, 43, pp. 45-66, 2010.
[13] J. M. Lasry and P. L. Lions. Mean field games, Japanese Journal of Mathematics, 2(1), pp. 229-260, 2007.
[14] G. Goel, C. Karande, P. Tripathi, and L. Wang. Approximability of Combinatorial Problems with Multiagent Submodular Cost Functions, FOCS, 2009.
[15] A. J. Defazio and T. S. Caetano. A convex formulation for learning scale-free networks via submodular
relaxation, NIPS, 2012.
[16] M. Zaheer, S. Kottur, S. Ravanbakhsh, B. Poczos, R. Salakhutdinov, and A. Smola. Deep Sets,
arXiv:1703.06114, 2017.
[17] P. Bonami et al. An algorithmic framework for convex mixed integer nonlinear programs, Discrete
Optimization, 5(2), pp. 186-204, 2008.
[18] P. Bonami, M. Kılınç, and J. Linderoth. Algorithms and Software for Convex Mixed Integer Nonlinear
Programs, Mixed Integer Nonlinear Programming, The IMA Volumes in Mathematics and its Applications,
154, Springer, 2012.
[19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning
via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3, 2011.
[20] F. Bach. Structured sparsity-inducing norms through submodular functions, NIPS, 2010.
[21] J. Bowers, A. Liptak and D. Willis. Which Supreme Court Justices Vote Together Most and Least Often,
The New York Times, 2014.
[22] J. Honorio and L. Ortiz. Learning the Structure and Parameters of Large-Population Graphical Games from
Behavioral Data, JMLR, 16, pp. 1157-1210, 2015.
[23] Y. Azrieli and E. Shmaya. Lipschitz Games, Mathematics of Operations Research, 38(2), pp. 350-357,
2013.
[24] E. Kalai. Large robust games. Econometrica, 72(6), pp. 1631-1665, 2004.
[25] E. N. Gilbert. Random Graphs, The Annals of Mathematical Statistics, 30(4), pp. 1141-1144, 1959.
[26] W. Feller. An Introduction to Probability Theory and its Applications, Vol. 1, Second edition, Wiley, 1957.
[27] M.-F. Balcan and N. J. A. Harvey. Learning Submodular Functions, STOC, 2011.
[28] A. Yao. Probabilistic computations: Toward a unified measure of complexity, FOCS, 1977.
[29] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions, FOCS, 2007.
[30] M. X. Goemans, N. J. A. Harvey, S. Iwata, and V. S. Mirrokni. Approximating submodular functions
everywhere, SODA, 2009.
[31] Z. Svitkina and L. Fleischer. Submodular approximation: Sampling based algorithms and lower bounds,
FOCS, 2008.
A Sample Complexity Measure with Applications to
Learning Optimal Auctions
Vasilis Syrgkanis
Microsoft Research
[email protected]
Abstract
We introduce a new sample complexity measure, which we refer to as split-sample
growth rate. For any hypothesis H and for any sample S of size m, the split-sample
growth rate τ̂_H(m) counts how many different hypotheses can empirical
risk minimization output on any sub-sample of S of size m/2. We show that
the expected generalization error is upper bounded by O(√(log(τ̂_H(2m))/m)). Our
result is enabled by a strengthening of the Rademacher complexity analysis of
the expected generalization error. We show that this sample complexity measure
greatly simplifies the analysis of the sample complexity of optimal auction design,
for many auction classes studied in the literature. Their sample complexity can
be derived solely by noticing that in these auction classes, ERM on any sample or
sub-sample will pick parameters that are equal to one of the points in the sample.
1 Introduction
The seminal work of [11] gave a recipe for designing the revenue maximizing auction in auction
settings where the private information of players is a single number and when the distribution over
this number is completely known to the auctioneer. The latter raises the question of how has the
auction designer formed this prior distribution over the private information. Recent work, starting
from [4], addresses the question of how to design optimal auctions when having access only to
samples of values from the bidders. We refer the reader to [5] for an overview of the existing results
in the literature. [4, 9, 10, 2] give bounds on the sample complexity of optimal auctions without
computational efficiency, while recent work has also focused on getting computationally efficient
learning bounds [5, 13, 6].
This work solely focuses on sample complexity and not computational efficiency and thus is more
related to [4, 9, 10, 2]. The latter work uses tools from supervised learning, such as pseudo-dimension [12] (a variant of VC dimension for real-valued functions), compression bounds [8] and
Rademacher complexity [12, 14] to bound the sample complexity of simple auction classes. Our
work introduces a new measure of sample complexity, which is a strengthening the Rademacher
complexity analysis and hence could also be of independent interest outside the scope of the sample
complexity of optimal auctions. Moreover, for the case of auctions, this measure greatly simplifies
the analysis of their sample complexity in many cases.
In particular, we show that in general PAC learning settings, the expected generalization error is upper
bounded by the Rademacher complexity not of the whole class of hypotheses, but rather only over
the class of hypotheses that could be the outcome of running Empirical Risk Minimization (ERM)
on a subset of the samples of half the size. If the number of these hypotheses is small, then the
latter immediately yields a small generalization error. We refer to the growth rate of the latter set of
hypotheses as the split-sample growth rate. This measure of complexity is not restricted to auction
design and could be relevant to general statistical learning theory.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We then show that for many auction classes such as single-item auctions with player-specific reserves,
single item t-level auctions and multiple-item item pricing auctions with additive buyers, the splitsample growth rate can be very easily bounded. The argument boils down to just saying that the
Empirical Risk Minimization over this classes will set the parameters of the auctions to be equal to
some value of some player in the sample. Then a simple counting argument gives bounds of the same
order as in prior work in the literature that used the pseudo-dimension [9, 10]. In multi-item settings
we also get improvements on the sample complexity bound.
Split-sample growth rate is similar in spirit to the notion of local Rademacher complexity [3], which
looks at the Rademacher complexity on a subset of hypotheses with small empirical error. In
particular, our proof is based on a refinement of the classic analysis Rademacher complexity analysis
of generalization error (see e.g. [14]). However, our bound is more structural, restricting the set
to outcomes of the chosen ERM process on a sub-sample of half the size. Moreover, we note that
counting the number of possible outputs of ERM also has connections to a counting argument made
in [1] in the context of pricing mechanisms. However, in essence the argument there is restricted to
transductive settings where the sample "features" are known in advance and fixed and thereby the
argument is much more straightforward and more similar to standard notions of "effective hypothesis
space" used in VC-dimension arguments.
Our new measure of sample complexity is applicable in the general statistical learning theory
framework and hence could have applications beyond auctions. To convey a high level intuition of
settings where split-sample growth could simplify the sample complexity analysis, suppose that the
output hypothesis of ERM is uniquely defined by a constant number of sample points (e.g. consider
linear separators and assume that the loss is such that the output of ERM is uniquely characterized
by choosing O(d) points from the sample). Then this means that the number of possible hypotheses
on any subset of size m/2 is at most O((m choose d)) = O(m^d). Then the split-sample growth rate analysis
immediately yields that the expected generalization error is O(√(d · log(m)/m)), or equivalently the
sample complexity of learning over this hypothesis class to within an error ε is O(d · log(1/ε)/ε²).
2 Preliminaries
We look at the sample complexity of optimal auctions. We consider the case of m items, and n
bidders. Each bidder has a value function vi drawn independently from a distribution Di and we
denote with D the joint distribution.
We assume we are given a sample set S = {v_1, . . . , v_m} of m valuation vectors, where each v_t ∼ D.
Let H denote the class of all dominant strategy truthful single item auctions (i.e. auctions where no
player has incentive to report anything else other than his true value to the auction, independent of
what other players do). Moreover, let
    r(h, v) = Σ_{i=1}^n p^h_i(v)    (1)

where p^h_i(·) is the payment function of mechanism h, and r(h, v) is the revenue of mechanism h on
valuation vector v. Finally, let

    R_D(h) = E_{v∼D}[r(h, v)]    (2)

be the expected revenue of mechanism h under the true distribution of values D.
Given a sample S of size m, we want to compute a dominant strategy truthful mechanism h_S, such
that:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − ε(m)    (3)

where ε(m) → 0 as m → ∞. We refer to ε(m) as the expected generalization error. Moreover, we
define the sample complexity of an auction class as:

Definition 1 (Sample Complexity of Auction Class). The (additive error) sample complexity of an
auction class H and a class of distributions D, for an accuracy target ε, is defined as the smallest
number of samples m(ε), such that for any m ≥ m(ε):

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − ε    (4)
We might also be interested in a multiplicative error sample complexity, i.e.
    E_S[R_D(h_S)] ≥ (1 − ε) sup_{h∈H} R_D(h)    (5)

The latter is exactly the notion that is used in [4, 5]. If one assumes that the optimal revenue on the
distribution is lower bounded by some constant quantity, then an additive error implies a multiplicative
error. For instance, if one assumes that player values are bounded away from zero with significant
probability, then that implies a lower bound on revenue. Such assumptions, for instance, are made in
the work of [9]. We will focus on additive error in this work.
We will also be interested in proving high probability guarantees, i.e. with probability 1 − δ:

    R_D(h_S) ≥ sup_{h∈H} R_D(h) − ε(m, δ)    (6)

where for any δ, ε(m, δ) → 0 as m → ∞.
3 Generalization Error via the Split-Sample Growth Rate
We turn to the general PAC learning framework, and we give generalization guarantees in terms of a
new notion of complexity of a hypothesis space H, which we denote as split-sample growth rate.
Consider an arbitrary hypothesis space H and an arbitrary data space Z, and suppose we are given
a set S of m samples {z_1, . . . , z_m}, where each z_t is drawn i.i.d. from some distribution D on Z.
We are interested in maximizing some reward function r : H × Z → [0, 1], in expectation over
distribution D. In particular, denote with R_D(h) = E_{z∼D}[r(h, z)].

We will look at the Expected Reward Maximization algorithm on S, with some fixed tie-breaking
rule. Specifically, if we let

    R_S(h) = (1/m) Σ_{t=1}^m r(h, z_t)    (7)

then ERM is defined as:

    h_S = arg sup_{h∈H} R_S(h)    (8)
where ties are broken in some pre-defined manner.
We define the notion of a split-sample hypothesis space:

Definition 2 (Split-Sample Hypothesis Space). For any sample S, let Ĥ_S denote the set of all
hypotheses h_T output by the ERM algorithm (with the pre-defined tie-breaking rule), on any subset
T ⊆ S, of size ⌈|S|/2⌉, i.e.:

    Ĥ_S = {h_T : T ⊆ S, |T| = ⌈|S|/2⌉}    (9)

Based on the split-sample hypothesis space, we also define the split-sample growth rate of a hypothesis
space H at value m, as the largest possible size of Ĥ_S for any set S of size m.

Definition 3 (Split-Sample Growth Rate). The split-sample growth rate of a hypothesis space H and an
ERM process for H, is defined as:

    τ̂_H(m) = sup_{S : |S|=m} |Ĥ_S|    (10)
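As a sanity check of the definition, the following sketch computes |Ĥ_S| exactly for a given sample by brute-force enumeration of half-size subsets; erm stands for any deterministic map from a tuple of samples to a hashable hypothesis, a hypothetical stand-in for the fixed ERM-plus-tie-breaking rule.

    from itertools import combinations
    from math import ceil

    def split_sample_hypotheses(sample, erm):
        # Enumerate H_S (Definition 2): the distinct ERM outputs over all
        # subsets T of S with |T| = ceil(|S|/2); |H_S| lower-bounds tau_H(|S|).
        k = ceil(len(sample) / 2)
        return {erm(subset) for subset in combinations(sample, k)}

    # Toy ERM whose output is always one of the sample points, as in Section 4:
    print(len(split_sample_hypotheses((0.3, 0.5, 0.2, 0.9), erm=max)))  # 3 <= 4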
We first show that the generalization error is upper bounded by the Rademacher complexity evaluated
on the split-sample hypothesis space of the union of two samples of size m. The Rademacher
complexity R(S, H) of a sample S of size m and a hypothesis space H is defined as:

    R(S, H) = E_σ[ sup_{h∈H} (2/m) Σ_{z_t∈S} σ_t · r(h, z_t) ]    (11)

where σ = (σ_1, . . . , σ_m) and each σ_t is an independent binary random variable taking values {−1, 1},
each with equal probability.
Lemma 1. For any hypothesis space H, and any fixed ERM process, we have:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − E_{S,S′}[R(S, Ĥ_{S∪S′})],    (12)

where S and S′ are two independent samples of some size m.
Proof. Let h* be the optimal hypothesis for distribution D. First we re-write the left hand side, by
adding and subtracting the expected empirical reward:

    E_S[R_D(h_S)] = E_S[R_S(h_S)] − E_S[R_S(h_S) − R_D(h_S)]
                  ≥ E_S[R_S(h*)] − E_S[R_S(h_S) − R_D(h_S)]    (h_S maximizes empirical reward)
                  = R_D(h*) − E_S[R_S(h_S) − R_D(h_S)]    (h* is independent of S)

Thus it suffices to upper bound the second quantity in the above equation.

Since R_D(h) = E_{S′}[R_{S′}(h)] for a fresh sample S′ of size m, we have:

    E_S[R_S(h_S) − R_D(h_S)] = E_S[R_S(h_S) − E_{S′}[R_{S′}(h_S)]] = E_{S,S′}[R_S(h_S) − R_{S′}(h_S)]

Now, consider the set Ĥ_{S∪S′}. Since S is a subset of S ∪ S′ of size |S ∪ S′|/2, we have by the
definition of the split-sample hypothesis space that h_S ∈ Ĥ_{S∪S′}. Thus we can upper bound the latter
quantity by taking a supremum over h ∈ Ĥ_{S∪S′}:

    E_S[R_S(h_S) − R_D(h_S)] ≤ E_{S,S′}[ sup_{h∈Ĥ_{S∪S′}} (R_S(h) − R_{S′}(h)) ]
                             = E_{S,S′}[ sup_{h∈Ĥ_{S∪S′}} (1/m) Σ_{t=1}^m (r(h, z_t) − r(h, z′_t)) ]

Now observe that we can rename any sample z_t ∈ S to z′_t and sample z′_t ∈ S′ to z_t. By doing so
we do not change the distribution. Moreover, we do not change the quantity Ĥ_{S∪S′}, since S ∪ S′ is
invariant to such swaps. Finally, we only change the sign of the quantity (r(h, z_t) − r(h, z′_t)). Thus
if we denote with σ_t ∈ {−1, 1} a Rademacher variable, we get that the above quantity is equal to:

    E_{S,S′}[ sup_{h∈Ĥ_{S∪S′}} (1/m) Σ_{t=1}^m (r(h, z_t) − r(h, z′_t)) ]
        = E_{S,S′}[ sup_{h∈Ĥ_{S∪S′}} (1/m) Σ_{t=1}^m σ_t (r(h, z_t) − r(h, z′_t)) ]    (13)

for any vector σ = (σ_1, . . . , σ_m) ∈ {−1, 1}^m. The latter also holds in expectation over σ, where σ_t
is randomly drawn between {−1, 1} with equal probability. Hence:

    E_S[R_S(h_S) − R_D(h_S)] ≤ E_{S,S′,σ}[ sup_{h∈Ĥ_{S∪S′}} (1/m) Σ_{t=1}^m σ_t (r(h, z_t) − r(h, z′_t)) ]

By splitting the suprema into a positive and a negative part and observing that the two expected
quantities are identical, we get:

    E_S[R_S(h_S) − R_D(h_S)] ≤ 2 E_{S,S′,σ}[ sup_{h∈Ĥ_{S∪S′}} (1/m) Σ_{t=1}^m σ_t r(h, z_t) ] = E_{S,S′}[R(S, Ĥ_{S∪S′})]

where R(S, H) denotes the Rademacher complexity of a sample S and hypothesis space H.
Observe that the latter lemma is a strengthening of the fact that the Rademacher complexity upper
bounds the generalization error, simply because:

    E_{S,S′}[R(S, Ĥ_{S∪S′})] ≤ E_{S,S′}[R(S, H)] = E_S[R(S, H)]    (14)

Thus if we can bound the Rademacher complexity of H, then the latter lemma gives a bound on the
generalization error. However, the reverse might not be true. Finally, we show our main theorem,
which shows that if the split-sample hypothesis space has small size, then we immediately get a
generalization bound, without the need to further analyze the Rademacher complexity of H.
Theorem 2 (Main Theorem). For any hypothesis space H, and any fixed ERM process, we have:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − √(2 log(τ̂_H(2m))/m)    (15)

Moreover, with probability 1 − δ:

    R_D(h_S) ≥ sup_{h∈H} R_D(h) − (1/δ) √(2 log(τ̂_H(2m))/m)    (16)
Proof. By applying Massart's lemma (see e.g. [14]) we have that:

    R(S, Ĥ_{S∪S′}) ≤ √(2 log(|Ĥ_{S∪S′}|)/m) ≤ √(2 log(τ̂_H(2m))/m)    (17)

Combining the above with Lemma 1 yields the first part of the theorem.

Finally, the high probability statement follows from observing that the random variable
sup_{h∈H} R_D(h) − R_D(h_S) is non-negative and by applying Markov's inequality: with probability 1 − δ,

    sup_{h∈H} R_D(h) − R_D(h_S) ≤ (1/δ) E_S[ sup_{h∈H} R_D(h) − R_D(h_S) ] ≤ (1/δ) √(2 log(τ̂_H(2m))/m)    (18)
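For a feel of the scale of the bound, here is a two-line numeric check of Theorem 2 for the single-bidder reserve-price class analyzed in Section 4, where τ̂_H(2m) ≤ 2m:

    from math import log, sqrt

    def generalization_bound(m, tau_2m):
        # Theorem 2: expected error at most sqrt(2 log(tau_H(2m)) / m).
        return sqrt(2 * log(tau_2m) / m)

    print(generalization_bound(10_000, tau_2m=20_000))  # ~0.0445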
The latter theorem can be trivially extended to the case when r : H × Z → [α, β], leading to a bound
of the form:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − (β − α) √(2 log(τ̂_H(2m))/m)    (19)
We note that unlike the standard Rademacher complexity, which is defined as R(S, H), our bound,
which is based on bounding R(S, Ĥ_{S∪S′}) for any two datasets S, S′ of equal size, does not imply a
high probability bound via McDiarmid's inequality (see e.g. Chapter 26 of [14] for how this is done
for Rademacher complexity analysis), but only via Markov's inequality. The latter yields a worse
dependence on the confidence δ in the high probability bound, of 1/δ rather than log(1/δ). The
reason for the latter is that the quantity R(S, Ĥ_{S∪S′}) depends on the sample S not only in terms
of on which points to evaluate the hypothesis, but also on determining the hypothesis space Ĥ_{S∪S′}.
Hence, the function:

    f(z_1, . . . , z_m) = E_{S′,σ}[ sup_{h∈Ĥ_{{z_1,...,z_m}∪S′}} (1/m) Σ_{t=1}^m σ_t (r(h, z_t) − r(h, z′_t)) ]    (20)

does not satisfy the stability property that |f(z) − f(z″_i, z_{−i})| ≤ 1/m. The reason being that the
supremum is taken over a different hypothesis space in the two inputs. This is unlike the case of the
function:

    f(z_1, . . . , z_m) = E_{S′}[ sup_{h∈H} (1/m) Σ_{t=1}^m σ_t (r(h, z_t) − r(h, z′_t)) ]    (21)

which is used in the standard Rademacher complexity bound analysis, and which satisfies the latter
stability property. Resolving whether this worse dependence on δ is necessary is an interesting open
question.
4 Sample Complexity of Auctions via Split-Sample Growth
We now present the application of the latter measure of complexity to the analysis of the sample
complexity of revenue optimal auctions. Throughout this section we assume that the revenue of
any auction lies in the range [0, 1]. The results can be easily adapted to any other range [α, β], by
re-scaling the equations, which will lead to blow-ups in the sample complexity of the order of an
extra (β − α) multiplicative factor. This limits the results here to bounded distributions of values.
However, as was shown in [5], one can always cap the distribution of values up to some upper bound,
for the case of regular distributions, by losing only an ε fraction of the revenue. So one can apply the
results below on this capped distribution.
Single bidder and single item. Consider the case of a single bidder and single item auction. In this
setting, it is known by results in auction theory [11] that an optimal auction belongs to the hypothesis
class H = {post a reserve price r, for r ∈ [0, 1]}. We consider the ERM rule which, for any set S,
in the case of ties, favors reserve prices that are equal to some valuation v_t ∈ S. Wlog assume that
samples v_1, . . . , v_m are ordered in increasing order. Observe that, for any set S, this ERM rule on any
subset T of S will post a reserve price that is equal to some value v_t ∈ T. Any other reserve price
in between two values [v_t, v_{t+1}] is weakly dominated by posting r = v_{t+1}, as it does not change
which samples are allocated and we can only increase revenue. Thus the space Ĥ_S is a subset of
{post a reserve price r ∈ {v_1, . . . , v_m}}. The latter is of size m. Thus the split-sample growth of H
is τ̂_H(m) ≤ m. This yields:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − √(2 log(2m)/m)    (22)

Equivalently, the sample complexity is m_H(ε) = O(log(1/ε)/ε²).
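The counting argument can be verified mechanically: the sketch below evaluates the empirical revenue of every candidate reserve in a sample and confirms that a sample point is always among the maximizers (the variable names are ours).

    def empirical_revenue(r, values):
        # Revenue of posting reserve r against a sample of bidder values.
        return r * sum(v >= r for v in values) / len(values)

    def erm_reserve(values):
        # Ties are broken in favor of reserves equal to a sampled value,
        # so ERM never needs to look outside the sample.
        return max(sorted(values), key=lambda r: empirical_revenue(r, values))

    values = [0.2, 0.55, 0.6, 0.95]
    assert erm_reserve(values) in values  # the ERM output is one of the m points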
Multiple i.i.d. regular bidders and single item. In this case, it is known by results in auction
theory [11] that the optimal auction belongs to the space of hypotheses H consisting of second price
auctions with some reserve r ∈ [0, 1]. Again if we consider ERM which in case of ties favors a
reserve that equals a value in the sample (assuming one is part of the tied set, or outputs any other
value otherwise), then observe that for any subset T of a sample S, ERM on that subset will pick a
reserve price that is equal to one of the values in the sample S. Thus τ̂_H(m) ≤ n · m. This yields:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − √(2 log(2 · n · m)/m)    (23)

Equivalently, the sample complexity is m_H(ε) = O(log(n/ε)/ε²).
Non-i.i.d. regular bidders, single item, second price with player specific reserves. In this case,
it is known by results in auction theory [11] that the optimal auction belongs to the space of hypotheses
H_SP consisting of second price auctions with some reserve r_i ∈ [0, 1] for each player i. Again if we
consider ERM which in case of ties favors a reserve that equals a value in the sample (assuming
one is part of the tied set, or outputs any other value otherwise), then observe that for any subset T
of a sample S, ERM on that subset will pick a reserve price r_i that is equal to one of the values v_{ti}
of player i in the sample S. There are m such possible choices for each player, thus m^n possible
choices of reserves in total. Thus τ̂_H(m) ≤ m^n. This yields:

    E_S[R_D(h_S)] ≥ sup_{h∈H_SP} R_D(h) − √(2n log(2m)/m)    (24)

If H is the space of all dominant strategy truthful mechanisms, then by prophet inequalities (see [7]),
we know that sup_{h∈H_SP} R_D(h) ≥ (1/2) sup_{h∈H} R_D(h). Thus:

    E_S[R_D(h_S)] ≥ (1/2) sup_{h∈H} R_D(h) − √(2n log(2m)/m)    (25)
Non-i.i.d. irregular bidders, single item. In this case it is known by results in auction theory
[11] that the optimal auction belongs to the space of hypotheses H consisting of all virtual welfare
maximizing auctions: for each player i, pick a monotone function φ̃_i(v_i) ∈ [−1, 1] and allocate to
the player with the highest non-negative virtual value, charging him the lowest value he could have
bid and still win the item. In this case, we will first coarsen the space of all possible auctions.

In particular, we will consider the class of t-level auctions of [9]. In this class, we constrain the value
functions φ̃_i(v_i) to only take values in the discrete ε-grid in [0, 1]. We will call this class H_ε. An
equivalent representation of these auctions is by saying that for each player i, we define a vector of
thresholds 0 = θ^i_0 ≤ θ^i_1 ≤ . . . ≤ θ^i_s ≤ θ^i_{s+1} = 1, with s = 1/ε. The index of a player is the largest
j for which v_i ≥ θ^i_j. Then we allocate the item to the player with the highest index (breaking ties
lexicographically) and charge the minimum value he has to bid to continue to win.
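A compact sketch of the t-level allocation rule just described; the payment computation is omitted, bisect is an implementation convenience, and whether a level-0 player may still win depends on the reserve convention, so leaving the item unallocated in that case is our assumption.

    from bisect import bisect_right

    def t_level_winner(values, thresholds):
        # thresholds[i] is the sorted list (theta_1, ..., theta_s) for player i;
        # a player's index is the largest j with v_i >= theta_j.
        levels = [bisect_right(thresholds[i], v) for i, v in enumerate(values)]
        best = max(levels)
        if best == 0:
            return None  # nobody clears theta_1: leave the item unallocated
        return levels.index(best)  # ties broken lexicographically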
Observe that on any sample S of valuation vectors, it is always weakly better to place the thresholds
θ^i_j on one of the values in the set S. Any other threshold is weakly dominated, as it does not change
the allocation. Thus for any subset T of a set S of size m, we have that the thresholds of each player
i will take one of the values of player i that appears in set S. We have 1/ε thresholds for each player,
hence m^{1/ε} combinations of thresholds for each player and m^{n/ε} combinations of thresholds for all
players. Thus τ̂_{H_ε}(m) ≤ m^{n/ε}. This yields:

    E_S[R_D(h_S)] ≥ sup_{h∈H_ε} R_D(h) − √(2n log(2m)/(ε m))    (26)

Moreover, by [9] we also have that:

    sup_{h∈H_ε} R_D(h) ≥ sup_{h∈H} R_D(h) − ε    (27)

Picking ε = (2n log(2m)/m)^{1/3}, we get:

    E_S[R_D(h_S)] ≥ sup_{h∈H} R_D(h) − 2 (2n log(2m)/m)^{1/3}    (28)

Equivalently, the sample complexity is m_H(ε) = O(n log(1/ε)/ε³).
k items, n bidders, additive valuations, grand bundle pricing. If the reserve price was anonymous,
then the reserve price output by ERM on any subset of a sample S of size m will take the
value of one of the m total values for the items of the buyers in S. So τ̂_H(m) = m · n. If the reserve
price was not anonymous, then for each buyer ERM will pick one of the m total item values, so
τ̂_H(m) ≤ m^n. Thus the sample complexity is m_H(ε) = O(n log(1/ε)/ε²).
k items, n bidders, additive valuations, item prices. If reserve prices are anonymous, then each
reserve price on item j computed by ERM on any subset of a sample S of size m will take the value
of one of the players' values for item j, i.e. one of n · m values. So τ̂_H(m) = (n · m)^k. If reserve prices are not
anonymous, then the reserve price on item j for player i will take the value of one of the player's
values for the item. So τ̂_H(m) ≤ m^{n·k}. Thus the sample complexity is m_H(ε) = O(nk log(1/ε)/ε²).
k items, n bidders, additive valuations, best of grand bundle pricing and item pricing. ERM
on the combination will take values, on any subset of a sample S of size m, that are at most the
product of the values of each of the classes (bundle or item pricing). Thus, for anonymous pricing:
τ̂_H(m) = (m · n)^{k+1}, and for non-anonymous pricing: τ̂_H(m) ≤ m^{n(k+1)}. Thus the sample
complexity is m_H(ε) = O(n(k + 1) log(1/ε)/ε²).
In the case of a single bidder, we know that the best of bundle pricing or item pricing is a 1/6
approximation to the overall best truthful mechanism for the true distribution of values, assuming
values for each item are drawn independently. Thus in the latter case we have:

    E_S[R_D(h_S)] ≥ (1/6) sup_{h∈H} R_D(h) − √(2(k + 1) log(2m)/m)    (29)

where H is the class of all truthful mechanisms.
Comparison with [10]. The latter three applications were analyzed by [10], via the notion of the
pseudo-dimension, but their results lead to sample complexity bounds of O(nk log(nk) log(1/ε)/ε²). Thus
the above simpler analysis removes the extra log factor from the dependence.
References
[1] M. F. Balcan, A. Blum, J. D. Hartline, and Y. Mansour. Mechanism design via machine learning.
In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS '05), pages
605-614, Oct 2005.
[2] Maria-Florina F. Balcan, Tuomas Sandholm, and Ellen Vitercik. Sample complexity of automated mechanism design. In Advances in Neural Information Processing Systems, pages
2083-2091, 2016.
[3] Peter L. Bartlett, Olivier Bousquet, and Shahar Mendelson. Local Rademacher complexities.
Ann. Statist., 33(4):1497-1537, 2005.
[4] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In
Proceedings of the 46th Annual ACM Symposium on Theory of Computing (STOC), pages
243-252. ACM, 2014.
[5] Nikhil R. Devanur, Zhiyi Huang, and Christos-Alexandros Psomas. The sample complexity of
auctions with side information. In Proceedings of the Forty-eighth Annual ACM Symposium on
Theory of Computing, STOC '16, pages 426-439, New York, NY, USA, 2016. ACM.
[6] Yannai A. Gonczarowski and Noam Nisan. Efficient empirical revenue maximization in single-parameter auction environments. CoRR, abs/1610.09976, 2016.
[7] Jason D. Hartline and Tim Roughgarden. Simple versus optimal mechanisms. In Proceedings
of the 10th ACM Conference on Electronic Commerce, EC '09, pages 225-234, New York, NY,
USA, 2009. ACM.
[8] Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold
algorithm. Machine Learning, 2(4):285-318, 1988.
[9] Jamie Morgenstern and Tim Roughgarden. The pseudo-dimension of near-optimal auctions. In
Proceedings of the 28th International Conference on Neural Information Processing Systems,
NIPS '15, pages 136-144, Cambridge, MA, USA, 2015. MIT Press.
[10] Jamie Morgenstern and Tim Roughgarden. Learning simple auctions. In COLT 2016, 2016.
[11] Roger B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58-73,
1981.
[12] D. Pollard. Convergence of Stochastic Processes. Springer Series in Statistics. 2011.
[13] Tim Roughgarden and Okke Schrijvers. Ironing in the dark. In Proceedings of the 2016 ACM
Conference on Economics and Computation, EC '16, pages 1-18, New York, NY, USA, 2016.
ACM.
[14] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to
Algorithms. Cambridge University Press, 2014.
6,765 | 712 | Reinforcement Learning Applied to
Linear Quadratic Regulation
Steven J. Bradtke
Computer Science Department
University of Massachusetts
Amherst, MA 01003
[email protected]
Abstract
Recent research on reinforcement learning has focused on algorithms based on the principles of Dynamic Programming (DP).
One of the most promising areas of application for these algorithms is the control of dynamical systems, and some impressive
results have been achieved. However, there are significant gaps
between practice and theory. In particular, there are no convergence proofs for problems with continuous state and action spaces,
or for systems involving non-linear function approximators (such
as multilayer perceptrons). This paper presents research applying
DP-based reinforcement learning theory to Linear Quadratic Regulation (LQR), an important class of control problems involving
continuous state and action spaces and requiring a simple type of
non-linear function approximator. We describe an algorithm based
on Q-learning that is proven to converge to the optimal controller
for a large class of LQR problems. We also describe a slightly
different algorithm that is only locally convergent to the optimal
Q-function, demonstrating one of the possible pitfalls of using a
non-linear function approximator with DP-based learning.
1
INTRODUCTION
Recent research on reinforcement learning has focused on algorithms based on the
principles of Dynamic Programming. Some of the DP-based reinforcement learning
algorithms that have been described are Sutton's Temporal Differences methods
(Sutton, 1988), Watkins' Q-learning (Watkins, 1989), and Werbos' Heuristic Dynamic Programming (Werbos, 1987). However, there are few convergence results
for DP-based reinforcement learning algorithms, and these are limited to discrete
time, finite-state systems, with either lookup-tables or linear function approximators. Watkins and Dayan (1992) show that the Q-learning algorithm converges,
under appropriate conditions, to the optimal Q-function for finite-state Markovian
decision tasks, where the Q-function is represented by a lookup-table. Sutton (1988)
and Dayan (1992) show that the linear TD(λ) learning rule, when applied to Markovian decision tasks where the states are represented by a linearly independent set
of feature vectors, converges in the mean to $V_U$, the value function for a given control policy U. Dayan (1992) also shows that linear TD(λ) with linearly dependent
state representations converges, but not to $V_U$, the function that the algorithm is
supposed to learn.
Despite the paucity of theoretical results, applications have shown promise. For
example, Tesauro (1992) describes a system using TD(A) that learns to play championship level backgammon entirely through self-playl. It uses a multilayer perceptron (MLP) trained using back propagation as a function approximator. Sofge
and White (1990) describe a system that learns to improve process control with
continuous state and action spaces. Neither of these applications, nor many similar
applications that have been described, meet the convergence requirements of the
existing theory. Yet they produce good results experimentally. We need to extend
the theory of DP-based reinforcement learning to domains with continuous state
and action spaces, and to algorithms that use non-linear function approximators.
Linear Quadratic Regulation (e.g., Bertsekas, 1987) is a good candidate as a first
attempt in extending the theory of DP-based reinforcement learning in this manner. LQR is an important class of control problems and has a well-developed theory.
LQR problems involve continuous state and action spaces, and value functions can
be exactly represented by quadratic functions. The following sections review the
basics of LQR theory that will be needed in this paper, describe Q-functions for
LQR, describe the Q-learning algorithm used in this paper, and describe an algorithm based on Q-learning that is proven to converge to the optimal controller for a
large class of LQR problems. We also describe a slightly different algorithm that is
only locally convergent to the optimal Q-function, demonstrating one of the possible
pitfalls of using a non-linear function approximator with DP-based learning.
2
LINEAR QUADRATIC REGULATION
Consider the deterministic, linear, time-invariant, discrete time dynamical system
given by
$$x_{t+1} = f(x_t, u_t) = A x_t + B u_t, \qquad u_t = U x_t,$$
where A, B, and U are matrices of dimensions $n \times n$, $n \times m$, and $m \times n$ respectively.
$x_t$ is the state of the system at time t, and $u_t$ is the control input to the system at
time t. U is a linear feedback controller. The cost at every time step is a quadratic
function of the state and the control signal:
$$r_t = r(x_t, u_t) = x_t' E x_t + u_t' F u_t,$$
where E and F are symmetric, positive definite matrices of dimensions $n \times n$ and
$m \times m$ respectively, and $x'$ denotes x transpose.
The value $V_U(x_t)$ of a state $x_t$ under a given control policy U is defined as the
discounted sum of all costs that will be incurred by using U for all times from
t onward, i.e., $V_U(x_t) = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$, where $0 \le \gamma \le 1$ is the discount factor.
Linear-quadratic control theory (e.g., Bertsekas, 1987) tells us that $V_U$ is a quadratic
function of the states and can be expressed as $V_U(x_t) = x_t' K_U x_t$, where $K_U$ is the
$n \times n$ cost matrix for policy U. The optimal control policy, $U^*$, is that policy for
which the value of every state is minimized. We denote the cost matrix for the
optimal policy by $K^*$.
1
Backgammon can be viewed as a Markovian decision task.
3
Q-FUNCTIONS FOR LQR
Watkins (1989) defined the Q-function for a given control policy U as $Q_U(x, u) = r(x, u) + \gamma V_U(f(x, u))$. This can be expressed for an LQR problem as
$$Q_U(x, u) = r(x, u) + \gamma V_U(f(x, u)) = x' E x + u' F u + \gamma (Ax + Bu)' K_U (Ax + Bu) = [x, u]' \begin{bmatrix} E + \gamma A' K_U A & \gamma A' K_U B \\ \gamma B' K_U A & F + \gamma B' K_U B \end{bmatrix} [x, u], \qquad (1)$$
where [x, u] is the column vector concatenation of the column vectors x and u.
Define the parameter matrix $H_U$ as
$$H_U = \begin{bmatrix} E + \gamma A' K_U A & \gamma A' K_U B \\ \gamma B' K_U A & F + \gamma B' K_U B \end{bmatrix}. \qquad (2)$$
$H_U$ is a symmetric positive definite matrix of dimensions $(n + m) \times (n + m)$.
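To make equations (1) and (2) concrete, here is a minimal numpy sketch (our addition, not from the paper) that assembles $H_U$ from the problem matrices; the cost matrix K is assumed to be given, e.g. obtained by solving the Lyapunov equation for the policy U.

```python
import numpy as np

def build_H(A, B, E, F, K, gamma):
    """Assemble the (n+m) x (n+m) parameter matrix H_U of equation (2)."""
    H11 = E + gamma * A.T @ K @ A
    H12 = gamma * A.T @ K @ B
    H21 = gamma * B.T @ K @ A
    H22 = F + gamma * B.T @ K @ B
    return np.block([[H11, H12], [H21, H22]])

def q_value(H, x, u):
    """Evaluate Q_U(x, u) = [x, u]' H_U [x, u] as in equation (1)."""
    z = np.concatenate([x, u])
    return z @ H @ z
```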
4
Q-LEARNING FOR LQR
The convergence results for Q-learning (Watkins & Dayan, 1992) assume a discrete time, finite-state system, and require the use of lookup-tables to represent
the Q-function. This is not suitable for the LQR domain, where the states and
actions are vectors of real numbers. Following the work of others, we will use a
parameterized representation of the Q-function and adjust the parameters through
a learning process. For example, Jordan and Jacobs (1990) and Lin (1992) use
MLPs trained using backpropagation to approximate the Q-function. Notice that
the function Qu is a quadratic function of its arguments, the state and control action, but it is a linear function of the quadratic combinations from the vector [z,u].
For example, if $x = [x_1, x_2]$ and $u = [u_1]$, then $Q_U(x, u)$ is a linear function of
the vector $[x_1^2, x_2^2, u_1^2, x_1 x_2, x_1 u_1, x_2 u_1]$. This fact allows us to use linear Recursive
Least Squares (RLS) to implement Q-learning in the LQR domain.
There are two forms of Q-learning. The first is the rule Watkins described in his
thesis (Watkins, 1989). Watkins called this rule Q-learning, but we will refer to it
as optimizing Q-learning because it attempts to learn the Q-function of the optimal
policy directly. The optimizing Q-learning rule may be written as
$$Q_{t+1}(x_t, u_t) = Q_t(x_t, u_t) + \alpha \left[ r(x_t, u_t) + \gamma \min_a Q_t(x_{t+1}, a) - Q_t(x_t, u_t) \right], \qquad (3)$$
where $Q_t$ is the t-th approximation to $Q^*$. The second form of Q-learning attempts
to learn $Q_U$, the Q-function for some designated policy, U. U may or may not be
the policy that is actually followed during training. This policy-based Q-learning
rule may be written as
$$Q_{t+1}(x_t, u_t) = Q_t(x_t, u_t) + \alpha \left[ r(x_t, u_t) + \gamma Q_t(x_{t+1}, U x_{t+1}) - Q_t(x_t, u_t) \right], \qquad (4)$$
where $Q_t$ is the t-th approximation to $Q_U$. Bradtke, Ydstie, and Barto (paper in
preparation) show that a linear RLS implementation of the policy-based Q-learning
rule will converge to $Q_U$ for LQR problems.
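As a hedged illustration of that RLS implementation (a sketch under our own parametrisation, not the authors' code): at its fixed point, rule (4) implies the regression $r_t = Q(x_t, u_t) - \gamma Q(x_{t+1}, U x_{t+1})$, which is linear in a weight vector theta when Q is written over the quadratic monomials of [x, u].

```python
import numpy as np

def phi(z):
    """Quadratic feature vector: all monomials z_i * z_j with i <= j."""
    n = len(z)
    return np.array([z[i] * z[j] for i in range(n) for j in range(i, n)])

def rls_update(theta, P, x, u, r, x_next, U, gamma):
    """One recursive least squares step for the policy-based rule (4)."""
    f = phi(np.concatenate([x, u])) - gamma * phi(np.concatenate([x_next, U @ x_next]))
    Pf = P @ f
    k = Pf / (1.0 + f @ Pf)               # RLS gain
    theta = theta + k * (r - f @ theta)   # residual of the regression r = f' theta
    P = P - np.outer(k, Pf)               # covariance update
    return theta, P
```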
5
POLICY IMPROVEMENT FOR LQR
Given a policy $U_k$, how can we find an improved policy, $U_{k+1}$? Following Howard
(1960), define $U_{k+1}$ as
$$U_{k+1} x = \arg\min_u \left[ r(x, u) + \gamma V_{U_k}(f(x, u)) \right].$$
But equation (1) tells us that this can be rewritten as
$$U_{k+1} x = \arg\min_u Q_{U_k}(x, u).$$
We can find the minimizing u by taking the partial derivative of $Q_{U_k}(x, u)$ with
respect to u, setting that to zero, and solving for u. This yields
$$u = \underbrace{-\gamma (F + \gamma B' K_{U_k} B)^{-1} B' K_{U_k} A}_{U_{k+1}} \, x.$$
Using (2), $U_{k+1}$ can be written as
$$U_{k+1} = -H_{22}^{-1} H_{21}.$$
Therefore we can use the definition of the Q-function to compute an improved
policy.
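In code, the improvement step is a single linear solve; a minimal sketch (our addition), assuming H is the $(n+m) \times (n+m)$ matrix of equation (2), estimated or exact:

```python
import numpy as np

def improve_policy(H, n, m):
    """Read off the improved gain U_{k+1} = -H22^{-1} H21 from the blocks of H."""
    H21 = H[n:, :n]
    H22 = H[n:, n:]
    return -np.linalg.solve(H22, H21)  # a solve is more stable than forming H22^{-1}
```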
6
POLICY ITERATION FOR LQR
The RLS implementation of policy-based Q-learning (Section 4) and the policy
improvement process based on Q-functions (Section 5) are the key elements of the
policy iteration algorithm described in Figure 1. Theorem 1, proven in (Bradtke,
Ydstie, & Barto, in preparation), shows that the sequence of policies generated by
this algorithm converges to the optimal policy. Standard policy iteration algorithms,
such as those described by Howard (1960) for discrete time, finite state Markovian
decision tasks, or by Bertsekas (1987) and Kleinman (1968) for LQR problems,
require exact knowledge of the system model. Our algorithm requires no system
model. It only requires a suitably accurate estimate of $H_{U_k}$.
Theorem 1: If (1) {A, B} is controllable, (2) $U_0$ is stabilizing, and (3) the control
signal, which at time step t and policy iteration step k is $U_k x_t$ plus some "exploration
factor", is strongly persistently exciting, then there exists a number N such that
the sequence of policies generated by the policy iteration algorithm described in
Figure 1 will converge to $U^*$ when policy updates are performed at most every N
time steps.
Initialize the Q-function parameters, $\hat{H}_{U_0}$.
t = 0, k = 0.
do forever {
    Initialize the Recursive Least Squares estimator.
    for i = 1 to N {
        • $u_t = U_k x_t + e_t$, where $e_t$ is the "exploration" component of the control signal.
        • Apply $u_t$ to the system, resulting in state $x_{t+1}$.
        • Define $a_{t+1} = U_k x_{t+1}$.
        • Update the Q-function parameters, $\hat{H}_k$, using the Recursive Least Squares
          implementation of the policy-based Q-learning rule, equation (4).
        • t = t + 1.
    }
    Policy improvement based on $\hat{H}_k$.
    Initialize parameters $\hat{H}_{k+1} = \hat{H}_k$.
    k = k + 1
}
Figure 1: The Q-function based policy iteration algorithm. It starts with the system
in some initial state $x_0$ and with some stabilizing controller $U_0$. k keeps track of the
number of policy iteration steps. t keeps track of the total number of time steps. i
counts the number of time steps since the last change of policy. When i = N, one
policy improvement step is executed.
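The pieces above can be wired together into Figure 1's loop. The following is our own sketch, not the authors' code: the environment step function, the initial stabilizing gain, the noise scale, and the rls_update and improve_policy helpers from the earlier sketches are all assumed, and theta_to_H (unpacking the feature weights into the symmetric matrix H) is a hypothetical helper.

```python
import numpy as np

def theta_to_H(theta, d):
    """Unpack weights over monomials z_i z_j (i <= j) into a symmetric H."""
    H = np.zeros((d, d))
    k = 0
    for i in range(d):
        for j in range(i, d):
            H[i, j] = H[j, i] = theta[k] if i == j else theta[k] / 2.0
            k += 1
    return H

def policy_iteration(step, x0, U0, n, m, gamma, N, n_updates, rng):
    n_feat = (n + m) * (n + m + 1) // 2
    U, x, theta = U0, x0, np.zeros(n_feat)
    for _ in range(n_updates):
        P = 1e3 * np.eye(n_feat)             # re-initialize the RLS estimator
        for _ in range(N):
            u = U @ x + rng.normal(size=m)   # exploration component e_t
            x_next, r = step(x, u)           # system transition and quadratic cost
            theta, P = rls_update(theta, P, x, u, r, x_next, U, gamma)
            x = x_next
        U = improve_policy(theta_to_H(theta, n + m), n, m)
    return U
```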
Figure 2 demonstrates the performance of the Q-function based policy iteration
algorithm. We do not know how to characterize a persistently exciting exploratory
signal for this algorithm. Experimentally, however, a random exploration signal
generated from a normal distribution has worked very well, even though it does not
meet condition (3) of the theorem. The system is a 20-dimensional discrete time
approximation of a flexible beam supported at both ends. There is one control point.
The control signal is a scalar representing acceleration to be applied at that point.
$U_0$ is an arbitrarily selected stabilizing controller for the system. $x_0$ is a random
point in a neighborhood around $0 \in \mathbb{R}^{20}$. We used a normal random variable with
mean 0 and variance 1 as the exploratory signal. There are 231 parameters to be
estimated for this system, so we set N = 500, approximately twice that. Panel A
of Figure 2 shows the norm of the difference between the current controller and the
optimal controller. Panel B of Figure 2 shows the norm of the difference between
the estimate of the Q-function for the current controller and the Q-function for
the optimal controller. After only eight policy iteration steps the Q-function based
policy iteration algorithm has converged close enough to $U^*$ and $Q^*$ that further
improvements are limited by the machine precision.
[Two semi-log plots, panels A and B, plotted against k, the number of policy iteration steps.]
Figure 2: Performance of the Q-function based policy iteration algorithm on a
discretized beam system.
7
THE OPTIMIZING Q-LEARNING RULE FOR LQR
Policy iteration would seem to be a slow method. It has to evaluate each policy
before it can specify a new one. Why not do as Watkins' optimizing Q-learning
rule does (equation 3), and try to learn $Q^*$ directly? Figure 3 defines this algorithm
precisely. This algorithm does not update the policy actually used during training.
It only updates the estimate of $Q^*$. The system is started in some initial state $x_0$
and some stabilizing controller $U_0$ is specified as the controller to be used during
training.
To what will this algorithm converge, if it does converge? A fixed point of this
algorithm must satisfy
$$[x, u]' \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} [x, u] = x' E x + u' F u + \gamma \, [Ax + Bu, \, a]' \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{bmatrix} [Ax + Bu, \, a], \qquad (5)$$
where $a = -H_{22}^{-1} H_{21} (Ax + Bu)$. Equation (5) actually specifies $(n+m)(n+m+1)/2$
polynomial equations in $(n+m)(n+m+1)/2$ unknowns (remember that $H_U$
symmetric). We know that there is at least one solution, that corresponding to the
optimal policy, but there may be other solutions as well.
As an example of the possibility of multiple solutions, consider the I-dimensional
system with $A = B = E = F = [1]$ and $\gamma = 0.9$. Substituting these values into
Initialize the Q-function parameters, $\hat{H}_U$.
Initialize the Recursive Least Squares estimator.
t = 0.
do forever {
    • $u_t = U_0 x_t + e_t$, where $e_t$ is the "exploration" component of the control signal.
    • Apply $u_t$ to the system, resulting in state $x_{t+1}$.
    • Define $a_{t+1} = -\hat{H}_{22}^{-1} \hat{H}_{21} x_{t+1}$.
    • Update the Q-function parameters, $\hat{H}_t$, using the Recursive Least Squares
      implementation of the optimizing Q-learning rule, equation (3).
    • t = t + 1.
}
Figure 3: The optimizing Q-learning rule in the LQR domain. $U_0$ is the policy
followed during training. t keeps track of the total number of time steps.
equation (5) and solving for the unknown parameters yields two solutions. They
are
$$\begin{bmatrix} 2.4296 & 1.4296 \\ 1.4296 & 2.4296 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0.3704 & -0.6296 \\ -0.6296 & 0.3704 \end{bmatrix}.$$
The first solution is $Q^*$. The second solution, if used to define an "improved" policy
as described in Section 5, results in a destabilizing controller. This is certainly not a
desirable result. Experiments show that the algorithm in Figure 3 will converge to
either of these solutions if the initial parameter estimates are close enough to that
solution. Therefore, this method of using Watkins' Q-learning rule directly on an
LQR problem will not necessarily converge to the optimal Q-function.
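The two fixed points can be checked numerically; a quick sketch (our addition, not from the paper) that evaluates the residual of equation (5) for both matrices over a small grid of state-action pairs:

```python
import numpy as np

A = B = E = F = 1.0
gamma = 0.9

def residual(H):
    """Max violation of equation (5) over a grid of (x, u) pairs."""
    def q(x, u):
        z = np.array([x, u])
        return z @ H @ z
    worst = 0.0
    for x in np.linspace(-1.0, 1.0, 5):
        for u in np.linspace(-1.0, 1.0, 5):
            y = A * x + B * u
            a = -H[1, 0] / H[1, 1] * y   # greedy action under H
            worst = max(worst, abs(q(x, u) - (E * x * x + F * u * u + gamma * q(y, a))))
    return worst

H_star = np.array([[2.4296, 1.4296], [1.4296, 2.4296]])
H_bad = np.array([[0.3704, -0.6296], [-0.6296, 0.3704]])
print(residual(H_star), residual(H_bad))  # both are ~0, up to the printed rounding
```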
8
CONCLUSIONS
In this paper we take a first step toward extending the theory of DP-based reinforcement learning to domains with continuous state and action spaces, and to
algorithms that use non-linear function approximators. We concentrate on the
problem of Linear Quadratic Regulation. We describe a policy iteration algorithm
for LQR problems that is proven to converge to the optimal policy. In contrast to
standard methods of policy iteration, it does not require a system model. It only
requires a suitably accurate estimate of $H_{U_k}$. This is the first result of which we
are aware showing convergence of a DP-based reinforcement learning algorithm in
a domain with continuous states and actions. We also describe a straightforward
implementation of the optimizing Q-learning rule in the LQR domain. This algorithm is only locally convergent to $Q^*$, demonstrating that we cannot
expect the theory developed for finite-state systems using lookup-tables to extend
to continuous state systems using parameterized function representations.
The convergence proof for the policy iteration algorithm described in this paper
requires exact matching between the form of the Q-function for LQR problems and
the form of the function approximator used to learn that function. Future work will
explore convergence of DP-based reinforcement learning algorithms when applied
to non-linear systems for which the form of the Q-functions is unknown.
Acknowledgements
The author thanks Andrew Barto, B. Erik Ydstie, and the ANW group for their
contributions to these ideas. This work was supported by the Air Force Office of
Scientific Research, Bolling AFB, under Grant AFOSR-89-0526 and by the National
Science Foundation under Grant ECS-8912623.
References
[1] D. P. Bertsekas. Dynamic Programming: Deterministic and Stochastic Models. Prentice Hall, Englewood Cliffs, NJ, 1987.
[2] S. J. Bradtke, B. E. Ydstie, and A. G. Barto. Convergence to optimal cost of adaptive policy iteration. In preparation.
[3] P. Dayan. The convergence of TD(λ) for general λ. Machine Learning, 1992.
[4] R. A. Howard. Dynamic Programming and Markov Processes. John Wiley & Sons, Inc., New York, 1960.
[5] M. I. Jordan and R. A. Jacobs. Learning to control an unstable system with forward modeling. In Advances in Neural Information Processing Systems 2. Morgan Kaufmann Publishers, San Mateo, CA, 1990.
[6] D. L. Kleinman. On an iterative technique for Riccati equation computations. IEEE Transactions on Automatic Control, pages 114-115, February 1968.
[7] L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. Machine Learning, 1992.
[8] D. A. Sofge and D. A. White. Neural network based process optimization and control. In Proceedings of the 29th IEEE Conference on Decision and Control, Honolulu, Hawaii, December 1990.
[9] R. S. Sutton. Learning to predict by the method of temporal differences. Machine Learning, 3:9-44, 1988.
[10] G. J. Tesauro. Practical issues in temporal difference learning. Machine Learning, 8(3/4):257-277, May 1992.
[11] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, Cambridge University, Cambridge, England, 1989.
[12] C. J. C. H. Watkins and P. Dayan. Q-learning. Machine Learning, 1992.
[13] P. J. Werbos. Building and understanding adaptive systems: A statistical/numerical approach to factory automation and brain research. IEEE Transactions on Systems, Man, and Cybernetics, 17(1):7-20, 1987.
6,766 | 7,120 | Thinking Fast and Slow
with Deep Learning and Tree Search
Thomas Anthony1, Zheng Tian1, and David Barber1,2
1
University College London
2
Alan Turing Institute
[email protected]
Abstract
Sequential decision making problems, such as structured prediction, robotic control,
and game playing, require a combination of planning policies and generalisation of
those plans. In this paper, we present Expert Iteration (E X I T), a novel reinforcement
learning algorithm which decomposes the problem into separate planning and
generalisation tasks. Planning new policies is performed by tree search, while a
deep neural network generalises those plans. Subsequently, tree search is improved
by using the neural network policy to guide search, increasing the strength of new
plans. In contrast, standard deep Reinforcement Learning algorithms rely on a
neural network not only to generalise plans, but to discover them too. We show that
E X I T outperforms REINFORCE for training a neural network to play the board
game Hex, and our final tree search agent, trained tabula rasa, defeats MoHex, the
previous state-of-the-art Hex player.
1
Introduction
According to dual-process theory [1, 2], human reasoning consists of two different kinds of thinking.
System 1 is a fast, unconscious and automatic mode of thought, also known as intuition or heuristic
process. System 2, an evolutionarily recent process unique to humans, is a slow, conscious, explicit
and rule-based mode of reasoning.
When learning to complete a challenging planning task, such as playing a board game, humans exploit
both processes: strong intuitions allow for more effective analytic reasoning by rapidly selecting
interesting lines of play for consideration. Repeated deep study gradually improves intuitions.
Stronger intuitions feedback to stronger analysis, creating a closed learning loop. In other words,
humans learn by thinking fast and slow.
In deep Reinforcement Learning (RL) algorithms such as REINFORCE [3] and DQN [4], neural
networks make action selections with no lookahead; this is analogous to System 1. Unlike human
intuition, their training does not benefit from a ?System 2? to suggest strong policies. In this paper,
we present Expert Iteration (E X I T), which uses a Tree Search as an analogue of System 2; this assists
the training of the neural network. In turn, the neural network is used to improve the performance of
the tree search by providing fast ?intuitions? to guide search.
At a low level, E X I T can be viewed as an extension of Imitation Learning (IL) methods to domains
where the best known experts are unable to achieve satisfactory performance. In IL an apprentice
is trained to imitate the behaviour of an expert policy. Within E X I T, we iteratively re-solve the IL
problem. Between each iteration, we perform an expert improvement step, where we bootstrap the
(fast) apprentice policy to increase the performance of the (comparatively slow) expert.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Typically, the apprentice is implemented as a deep neural network, and the expert by a tree search
algorithm. Expert improvement can be achieved either by using the apprentice as an initial bias in the
search direction, or to assist in quickly estimating the value of states encountered in the search tree,
or both.
We proceed as follows: in section 2, we cover some preliminaries. Section 3 describes the general
form of the Expert Iteration algorithm, and discusses the roles performed by expert and apprentice.
Sections 4 and 5 dive into the implementation details of the Imitation Learning and expert improvement steps of E X I T for the board game Hex. The performance of the resultant E X I T algorithm is
reported in section 6. Sections 7 and 8 discuss our findings and relate the algorithm to previous
works.
2
Preliminaries
2.1
Markov Decision Processes
We consider sequential decision making in a Markov Decision Process (MDP). At each timestep
t, an agent observes a state $s_t$ and chooses an action $a_t$ to take. In a terminal state $s_T$, an episodic
reward R is observed, which we intend to maximise.1 We can easily extend to two-player, perfect
information, zero-sum games by learning policies for both players simultaneously, which aim to
maximise the reward for the respective player.
We call a distribution over the actions a available in state s a policy, and denote it $\pi(a|s)$. The value
function $V^\pi(s)$ is the mean reward from following $\pi$ starting in state s. By $Q^\pi(s, a)$ we mean the
expected reward from taking action a in state s, and following policy $\pi$ thereafter.
2.2
Imitation Learning
In Imitation Learning (IL), we attempt to solve the MDP by mimicking an expert policy $\pi^*$ that
has been provided. Experts can arise from observing humans completing a task, or, in the context
of structured prediction, calculated from labelled training data. The policy we learn through this
mimicry is referred to as the apprentice policy.
We create a dataset of states of expert play, along with some target data drawn from the expert, which
we attempt to predict. Several choices of target data have been used. The simplest approach is to ask
the expert to name an optimal move $\pi^*(a|s)$ [5]. Once we can predict expert moves, we can take
the action we think the expert would have most probably taken. Another approach is to estimate the
action-value function $Q^{\pi^*}(s, a)$. We can then predict that function, and act greedily with respect
to it. In contrast to direct action prediction, this target is cost-sensitive, meaning the apprentice can
trade-off prediction errors against how costly they are [6].
3
Expert iteration
Compared to IL techniques, Expert Iteration (E X I T) is enriched by an expert improvement step.
Improving the expert player and then resolving the Imitation Learning problem allows us to exploit
the fast convergence properties of Imitation Learning even in contexts where no strong player was
originally known, including when learning tabula rasa. Previously, to solve such problems, researchers
have fallen back on RL algorithms that often suffer from slow convergence, and high variance, and
can struggle with local minima.
At each iteration i, the algorithm proceeds as follows: we create a set $S_i$ of game states by self
play of the apprentice $\hat{\pi}_{i-1}$. In each of these states, we use our expert to calculate an Imitation
Learning target at s (e.g. the expert's action $\pi^*_{i-1}(a|s)$); the state-target pairs (e.g. $(s, \pi^*_{i-1}(a|s))$)
form our dataset $D_i$. We train a new apprentice $\hat{\pi}_i$ on $D_i$ (Imitation Learning). Then, we use our
new apprentice to update our expert $\pi^*_i = \pi^*(a|s; \hat{\pi}_i)$ (expert improvement). See Algorithm 1 for
pseudo-code.
1
This reward may be decomposed as a sum of intermediate rewards (i.e. $R = \sum_{t=0}^{T} r_t$).
The expert policy is calculated using a tree search algorithm. By using the apprentice policy to
direct search effort towards promising moves, or by evaluating states encountered during search more
quickly and accurately, we can help the expert find stronger policies. In other words, we bootstrap
the knowledge acquired by Imitation Learning back into the planning algorithm.
The Imitation Learning step is analogous to a human improving their intuition for the task by studying
example problems, while the expert improvement step is analogous to a human using their improved
intuition to guide future analysis.
Algorithm 1 Expert Iteration
1: $\hat{\pi}_0$ = initial_policy()
2: $\pi^*_0$ = build_expert($\hat{\pi}_0$)
3: for i = 1; i ≤ max_iterations; i++ do
4:     $S_i$ = sample_self_play($\hat{\pi}_{i-1}$)
5:     $D_i$ = {(s, imitation_learning_target($\pi^*_{i-1}$(s))) | s ∈ $S_i$}
6:     $\hat{\pi}_i$ = train_policy($D_i$)
7:     $\pi^*_i$ = build_expert($\hat{\pi}_i$)
8: end for
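A schematic Python rendering of Algorithm 1 may help; this is our sketch, in which the injected callables (initial_policy, build_expert, sample_self_play, imitation_learning_target, train_policy) stand for the components described in the text, not a concrete implementation.

```python
def expert_iteration(initial_policy, build_expert, sample_self_play,
                     imitation_learning_target, train_policy, max_iterations):
    apprentice = initial_policy()
    expert = build_expert(apprentice)
    for _ in range(max_iterations):
        states = sample_self_play(apprentice)                               # S_i
        data = [(s, imitation_learning_target(expert, s)) for s in states]  # D_i
        apprentice = train_policy(data)                                     # imitation learning
        expert = build_expert(apprentice)                                   # expert improvement
    return apprentice, expert
```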
3.1
Choice of expert and apprentice
The learning rate of E X I T is controlled by two factors: the size of the performance gap between the
apprentice policy and the improved expert, and how close the performance of the new apprentice
is to the performance of the expert it learns from. The former induces an upper bound on the new
apprentice's performance at each iteration, while the latter describes how closely we approach that
upper bound. The choice of both expert and apprentice can have a significant impact on both these
factors, so must be considered together.
The role of the expert is to perform exploration, and thereby to accurately determine strong move
sequences, from a single position. The role of the apprentice is to generalise the policies that the
expert discovers across the whole state space, and to provide rapid access to that strong policy for
bootstrapping in future searches.
The canonical choice of expert is a tree search algorithm. Search considers the exact dynamics of the
game tree local to the state under consideration. This is analogous to the lookahead human games
players engage in when planning their moves. The apprentice policy can be used to bias search
towards promising moves, aid node evaluation, or both. By employing search, we can find strong
move sequences potentially far away from the apprentice policy, accelerating learning in complex
scenarios. Possible tree search algorithms include Monte Carlo Tree Search [7], ?-? Search, and
greedy search [6].
The canonical apprentice is a deep neural network parametrisation of the policy. Such deep networks
are known to be able to efficiently generalise across large state spaces, and they can be evaluated
rapidly on a GPU. The precise parametrisation of the apprentice should also be informed by what
data would be useful for the expert. For example, if state value approximations are required, the
policy might be expressed implicitly through a Q function, as this can accelerate lookup.
3.2
Distributed Expert Iteration
Because our tree search is orders of magnitude slower than the evaluations made during training of
the neural network, E X I T spends the majority of run time creating datasets of expert moves. Creating
these datasets is an embarrassingly parallel task, and the plans made can be summarised by a vector
measuring well under 1KB. This means that E X I T can be trivially parallelised across distributed
architectures, even with very low bandwidth.
3.3
Online expert iteration
In each step of E X I T, Imitation Learning is restarted from scratch. This throws away our entire
dataset. Since creating datasets is computationally intensive this can add substantially to algorithm
run time.
The online version of E X I T mitigates this by aggregating all datasets generated so far at each iteration.
In other words, instead of training $\hat{\pi}_i$ on $D_i$, we train it on $D = \bigcup_{j \le i} D_j$. Such dataset aggregation is
similar to the DAGGER algorithm [5]. Indeed, removing the expert improvement step from online
E X I T reduces it to DAGGER.
Dataset aggregation in online E X I T allows us to request fewer move choices from the expert at each
iteration, while still maintaining a large dataset. By increasing the frequency at which improvements
can be made, the apprentice in online E X I T can generalise the expert moves sooner, and hence the
expert improves sooner also, which results in higher quality play appearing in the dataset.
4
Imitation Learning in the game Hex
We now describe the implementation of E X I T for the board game Hex. In this section, we develop
the techniques for our Imitation Learning step, and test them for Imitation Learning of Monte Carlo
Tree Search (MCTS). We use this test because our intended expert is a version of Neural-MCTS,
which will be described in section 5.
4.1
Preliminaries
Hex
Hex is a two-player connection-based game played on an $n \times n$ hexagonal grid. The players, denoted
by colours black and white, alternate placing stones of their colour in empty cells. The black player
wins if there is a sequence of adjacent black stones connecting the North edge of the board to the
South edge. White wins if they achieve a sequence of adjacent white stones running from the West
edge to the East edge. (See figure 1).
Figure 1: A $5 \times 5$ Hex game, won by white. Figure from Huang et al. [8].
Hex requires complex strategy, making it challenging for deep Reinforcement Learning algorithms; its
large action set and connection-based rules means it shares similar challenges for AI to Go. However,
games can be simulated efficiently because the win condition is mutually exclusive (e.g. if black has
a winning path, white cannot have one), its rules are simple, and permutations of move order are
irrelevant to the outcome of a game. These properties make it an ideal test-bed for Reinforcement
Learning. All our experiments are on a $9 \times 9$ board size.
Monte Carlo Tree Search
Monte Carlo Tree Search (MCTS) is an any-time best-first tree-search algorithm. It uses repeated
game simulations to estimate the value of states, and expands the tree further in more promising
lines. When all simulations are complete, the most explored move is taken. It is used by the leading
algorithms in the AAAI general game-playing competition [9]. As such, it is the best known algorithm
for general game-playing without a long RL training procedure.
Each simulation consists of two parts. First, a tree phase, where the tree is traversed by taking actions
according to a tree policy. Second, a rollout phase, where some default policy is followed until the
simulation reaches a terminal game state. The result returned by this simulation can then be used to
update estimates of the value of each node traversed in the tree during the first phase.
Each node of the search tree corresponds to a possible state s in the game. The root node corresponds
to the current state, its children correspond to the states resulting from a single move from the current
state, etc. The edge from state s1 to s2 represents the action a taken in s1 to reach s2 , and is identified
by the pair (s1 , a).
At each node we store n(s), the number of iterations in which the node has been visited so far. Each
edge stores both n(s, a), the number of times it has been traversed, and r(s, a) the sum of all rewards
obtained in simulations that passed through the edge. The tree policy depends on these statistics. The
most commonly used tree policy is to act greedily with respect to the upper confidence bounds for
trees formula [7]:
$$\mathrm{UCT}(s, a) = \frac{r(s, a)}{n(s, a)} + c_b \sqrt{\frac{\log n(s)}{n(s, a)}} \qquad (1)$$
When an action a in a state $s_L$ is chosen that takes us to a position $s'$ not yet in the search tree, the
rollout phase begins. In the absence of domain-specific information, the default policy used is simply
to choose actions uniformly from those available.
To build up the search tree, when the simulation moves from the tree phase to the rollout phase, we
perform an expansion, adding $s'$ to the tree as a child of $s_L$.2 Once a rollout is complete, the reward
signal is propagated through the tree (a backup), with each node and edge updating statistics for visit
counts n(s), n(s, a) and total returns r(s, a).
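For concreteness, a minimal tree-policy step implementing formula (1) might look as follows. This is our own sketch: the node and edge attributes (n, r, edges, action) are assumed, and refinements such as RAVE are omitted.

```python
import math

def uct_select(node, c_b):
    """Pick the edge maximising UCT(s, a) from equation (1)."""
    def score(edge):
        if edge.n == 0:
            return float("inf")  # always try unvisited actions first
        return edge.r / edge.n + c_b * math.sqrt(math.log(node.n) / edge.n)
    return max(node.edges, key=score)
```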
In this work, all MCTS agents use 10,000 simulations per move, unless stated otherwise. All use a
uniform default policy. We also use RAVE [10]. Full details are in the appendix.
4.2
Imitation Learning from Monte Carlo Tree Search
In this section, we train a standard convolutional neural network3 to imitate an MCTS expert. Guo et
al. [12] used a similar set up on Atari games. However, their results showed that the performance of
the learned neural network fell well short of the MCTS expert, even with a large dataset of 800,000
MCTS moves. Our methodology described here improves on this performance.
Learning Targets
In Guo et al. [12], the learning target used was simply the move chosen by MCTS. We refer to this
as chosen-action targets (CAT), and optimise the Kullback?Leibler divergence between the output
distribution of the network and this target. So the loss at position s is given by the formula:
$$L_{\mathrm{CAT}} = -\log[\pi(a^*|s)]$$
where $a^* = \arg\max_a(n(s, a))$ is the move selected by MCTS.
We propose an alternative target, which we call tree-policy targets (TPT). The tree policy target is
the average tree policy of the MCTS at the root. In other words, we try to match the network output
to the distribution over actions given by n(s, a)/n(s) where s is the position we are scoring (so
n(s) = 10, 000 in our experiments). This gives the loss:
$$L_{\mathrm{TPT}} = -\sum_a \frac{n(s, a)}{n(s)} \log[\pi(a|s)]$$
Unlike CAT, TPT is cost-sensitive: when MCTS is less certain between two moves (because they
are of similar strength), TPT penalises misclassifications less severely. Cost-sensitivity is a desirable
property for an imitation learning target, as it induces the IL agent to trade off accuracy on less
important decisions for greater accuracy on critical decisions.
In E X I T, there is additional motivation for such cost-sensitive targets, as our networks will be used
to bias future searches. Accurate evaluations of the relative strength of actions never made by the
current expert are still important, since future experts will use the evaluations of all available moves
to guide their search.
Sometimes multiple nodes are added to the tree per iteration, adding children to s0 also. Conversely,
sometimes an expansion threshold is used, so sL is only expanded after multiple visits.
3
Our network architecture is described in the appendix. We use Adam [11] as our optimiser.
2
5
Sampling the position set
Correlations between the states in our dataset may reduce the effective dataset size, harming learning.
Therefore, we construct all our datasets to consist of uncorrelated positions sampled using an
exploration policy. To do this, we play multiple games with an exploration policy, and select a single
state from each game, as in Silver et al. [13]. For the initial dataset, the exploration policy is MCTS,
with the number of iterations reduced to 1,000 to reduce computation time and encourage a wider
distribution of positions.
We then follow the DAGGER procedure, expanding our dataset by using the most recent apprentice
policy to sample 100,000 more positions, again sampling one position per game to ensure that there
were no correlations in the dataset. This has two advantages over sampling more positions in the
same way: firstly, selecting positions with the apprentice is faster, and secondly, doing so results in
positions closer to the distribution that the apprentice network visits at test time.
4.3
Results of Imitation Learning
Based on our initial dataset of 100,000 MCTS moves, CAT and TPT have similar performance in the
task of predicting the move selected by MCTS, with average top-1 prediction errors of 47.0% and
47.7%, and top-3 prediction errors of 65.4% and 65.7%, respectively.
However, despite the very similar prediction errors, the TPT network is 50 ± 13 Elo stronger than the
CAT network, suggesting that the cost-awareness of TPT indeed gives a performance improvement. 4
We continued training the TPT network with the DAGGER algorithm, iteratively creating 3 more
batches of 100,000 moves. This additional data resulted in an improvement of 120 Elo over the first
TPT network. Our final DAGGER TPT network achieved similar performance to the MCTS it was
trained to emulate, winning just over half of games played between them (87/162).
5
Expert Improvement in Hex
We now have an Imitation Learning procedure that can train a strong apprentice network from MCTS.
In this section, we describe our Neural-MCTS (N-MCTS) algorithms, which use such apprentice
networks to improve search quality.
5.1
Using the Policy Network
Because the apprentice network has effectively generalised our policy, it gives us fast evaluations
of action plausibility at the start of search. As search progresses, we discover improvements on this
apprentice policy, just as human players can correct inaccurate intuitions through lookahead.
We use our neural network policy to bias the MCTS tree policy towards moves we believe to be
stronger. When a node is expanded, we evaluate the apprentice policy $\hat{\pi}$ at that state, and store it. We
modify the UCT formula by adding a bonus proportional to $\hat{\pi}(a|s)$:
$$\mathrm{UCT}_{P\text{-}NN}(s, a) = \mathrm{UCT}(s, a) + w_a \frac{\hat{\pi}(a|s)}{n(s, a) + 1}$$
where $w_a$ weights the neural network against the simulations. This formula is adapted from one
found in Gelly & Silver [10]. Tuning of hyperparameters found that wa = 100 was a good choice for
this parameter, which is close to the average number of simulations per action at the root when using
10,000 iterations in the MCTS. Since this policy was trained using 10,000 iterations too, we would
expect that the optimal weight should be close to this average.
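As a sketch of the resulting tree-policy score (our addition; the convention that unvisited edges are ranked by the prior bonus alone is our assumption, not stated in the paper):

```python
import math

def uct_pnn_score(edge, node, pi_hat, c_b=1.0, w_a=100.0):
    """UCT(s, a) plus the apprentice bonus w_a * pi_hat(a|s) / (n(s, a) + 1)."""
    prior_bonus = w_a * pi_hat[edge.action] / (edge.n + 1)
    if edge.n == 0:
        return prior_bonus  # unvisited edges are ordered by the prior alone
    uct = edge.r / edge.n + c_b * math.sqrt(math.log(node.n) / edge.n)
    return uct + prior_bonus
```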
The TPT network's final layer uses a softmax output. Because there is no reason to suppose that
the optimal bonus in the UCT formula should be linear in the TPT policy probability, we view the
temperature of the TPT network's output layer as a hyperparameter for the N-MCTS and tune it to
maximise the performance of the N-MCTS.
4
When testing network performance, we greedily select the most likely move, because CAT and TPT may
otherwise induce different temperatures in the trained networks' policies.
When using the strongest TPT network from section 4, N-MCTS using a policy network significantly
outperforms our baseline MCTS, winning 97% of games. The neural network evaluations cause a
two times slowdown in search. For comparison, a doubling of the number of iterations of the vanilla
MCTS results in a win rate of 56%.
5.2
Using a Value Network
Strong value networks have been shown to be able to substantially improve the performance of MCTS
[13]. Whereas a policy network allows us to narrow the search, value networks act to reduce the
required search depth compared to using inaccurate rollout-based value estimation.
However, our imitation learning procedure only learns a policy, not a value function. Monte Carlo
estimates of $V^{\pi^*}(s)$ could be used to train a value function, but to train a value function without
severe overfitting requires more than $10^5$ independent samples. Playing this many expert games is
well beyond our computation resources, so instead we approximate $V^{\pi^*}(s)$ with the value function
of the apprentice, $V^{\hat{\pi}}(s)$, for which Monte Carlo estimates are cheap to produce.
To train the value network, we use a KL loss between V (s) and the sampled (binary) result z:
$$L_V = -z \log[V(s)] - (1 - z) \log[1 - V(s)]$$
To accelerate the tree search and regularise value prediction, we used a multitask network with
separate output heads for the apprentice policy and value prediction, and sum the losses LV and
$L_{\mathrm{TPT}}$.
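A sketch of that multitask objective (our addition; v is the value head's scalar output in (0, 1) and z the binary game result):

```python
import numpy as np

def value_loss(v, z):
    """L_V: binary cross-entropy between the predicted value and the result z."""
    return -(z * np.log(v) + (1.0 - z) * np.log(1.0 - v))

def multitask_loss(pi, counts, v, z):
    tree_policy = counts / counts.sum()  # L_TPT as defined in section 4.2
    return -(tree_policy * np.log(pi)).sum() + value_loss(v, z)
```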
To use such a value network in the expert, whenever a leaf $s_L$ is expanded, we estimate V(s). This is
backed up through the tree to the root in the same way as rollout results are: each edge stores the
average of all evaluations made in simulations passing through it. In the tree policy, the value is
estimated as a weighted average of the network estimate and the rollout estimate.5
6
Experiments
6.1
Comparison of Batch and Online E X I T to REINFORCE
We compare E X I T to the policy gradient algorithm found in Silver et al. [13], which achieved
state-of-the-art performance for a neural network player in the related board game Go. In Silver et al.
[13], the algorithm was initialised by a network trained to predict human expert moves from a corpus
of 30 million positions, and then REINFORCE [3] was used. We initialise with the best network
from section 4. Such a scheme, Imitation Learning initialisation followed by Reinforcement Learning
improvement, is a common approach when known experts are not sufficiently strong.
In our batch E X I T, we perform 3 training iterations, each time creating a dataset of 243,000 moves.
In online E X I T, as the dataset grows, the supervised learning step takes longer, and in a naïve
implementation would come to dominate run-time. We test two forms of online E X I T that avoid this.
In the first, we create 24,300 moves each iteration, and train on a buffer of the most recent 243,000
expert moves. In the second, we use all our data in training, and expand the size of the dataset by
10% each iteration.
For this experiment we did not use any value networks, so that network architectures between the
policy gradient and E X I T are identical. All policy networks are warm-started to the best network
from section 4.
As can be seen in figure 2, compared to REINFORCE, E X I T learns stronger policies faster. E X I T also
shows no sign of instability: the policy improves consistently each iteration and there is little variation
in the performance between each training run. Separating the tree search from the generalisation
has ensured that plans don?t overfit to a current opponent, because the tree search considers multiple
possible responses to the moves it recommends.
Online expert iteration substantially outperforms the batch mode, as expected. Compared to the
?buffer? version, the ?exponential dataset? version appears to be marginally stronger, suggesting that
retaining a larger dataset is useful.
5
This is the same as the method used in Silver et al. [13]
Figure 2: Elo ratings of policy gradient network and E X I T networks through training. Values are the
average of 5 training runs, shaded areas represent 90% confidence intervals. Time is measured by
number of neural network evaluations made. Elo calculated with BayesElo [14].
6.2
Comparison of Value and Policy E X I T
With sufficiently large datasets, a value network can be learnt to improve the expert further, as
discussed in section 5.2. We ran asynchronous distributed online E X I T using only a policy network
until our datasets contained ≈ 550,000 positions. We then used our most recent apprentice to add a
Monte Carlo value estimate from each of the positions in our dataset, and trained a combined policy
and value apprentice, giving a substantial improvement in the quality of expert play.
We then ran E X I T with a combined value-and-policy network, creating another ≈ 7,400,000 move
choices. For comparison, we continued the training run without using value estimation for equal time.
Our results are shown in figure 3, which shows that value-and-policy-E X I T significantly outperforms
policy-only-E X I T. In particular, the improved plans from the better expert quickly manifest in a
stronger apprentice.
We can also clearly see the importance of expert improvement, with later apprentices comfortably
outperforming experts from earlier in training.
Figure 3: Apprentices and experts in distributed online E X I T, with and without neural network value
estimation. MoHex's rating (10,000 iterations per move) is shown by the black dashed line.
6.3
Performance Against State of the Art
MoHex [8] is the state-of-the-art Hex player; versions of it have won every Computer Games
Olympiad Hex tournament since 2009. MoHex is a highly optimised algorithm, containing many
Hex specific improvements, including a pattern based rollout learnt from datasets of human play; a
complex, hand-made theorem-proving algorithm which calculates provably suboptimal moves, to be
pruned from search; and a proof-search algorithm to allow perfect endgame play. In contrast, our
algorithm learns tabula rasa, without game-specific knowledge beside the rules of the game.
To fairly compare MoHex to our experts with equal wall-clock times is difficult, as the relative
speeds of the algorithms are hardware dependent: MoHex's theorem prover makes heavy use of the
CPU, whereas for our experts, the GPU is the bottleneck. On our machine MoHex is approximately
50% faster.6
E X I T (with 10,000 iterations) won 75.3% of games against 10,000-iteration MoHex and 59.3%
against 100,000-iteration MoHex, which is over six times slower than our searcher. We include
some sample games from the match between 100,000-iteration MoHex and E X I T in the appendix.
This result is particularly remarkable because the training curves in figure 3 do not suggest that the
algorithm has reached convergence.
7
Related work
E X I T has several connections to existing RL algorithms, resulting from different choices of expert
class. For example, we can recover a version of Policy Iteration [15] by using Monte Carlo Search as
our expert; in this case it is easy to see that Monte Carlo Tree Search gives stronger plans than Monte
Carlo Search.
Previous works have also attempted to achieve Imitation Learning that outperforms the original
expert. Silver et al. [13] use Imitation Learning followed by Reinforcement Learning. Kai-Wei,
?
et
Pal. [16] use? Monte Carlo estimates to calculate Q (s, a), and train an apprentice ? to maximise
a ?(a|s)Q (s, a). At each iteration after the first, the rollout policy is changed to a mixture of the
most recent apprentice and the original expert. This too can be seen as blending an RL algorithm
with Imitation Learning: it combines Policy Iteration and Imitation Learning.
Neither of these approaches is able to improve the original expert policy. They are useful when strong
experts exist, but only at the beginning of training. In contrast, because E X I T creates stronger experts
for itself, it is able to use experts throughout the training process.
AlphaGo Zero (AG0) [17] presents an independently developed version of ExIt,7 and showed that
it achieves state-of-the-art performance in Go. We include a detailed comparison of these closely
related works in the appendix.
Unlike standard Imitation Learning methods, E X I T can be applied to the Reinforcement Learning
problem: it makes no assumptions about the existence of a satisfactory expert. E X I T can be applied
with no domain specific heuristics available, as we demonstrate in our experiment, where we used a
general purpose search algorithm as our expert class.
8
Conclusion
We have introduced a new Reinforcement Learning algorithm, Expert Iteration, motivated by the
dual process theory of human thought. E X I T decomposes the Reinforcement Learning problem by
separating the problems of generalisation and planning. Planning is performed on a case-by-case basis,
and only once MCTS has found a significantly stronger plan is the resultant policy generalised. This
allows for long-term planning, and results in faster learning and state-of-the-art final performance,
particularly for challenging problems.
We show that this algorithm significantly outperforms a variant of the REINFORCE algorithm in
learning to play the board game Hex, and that the resultant tree search algorithm comfortably defeats
the state-of-the-art in Hex play, despite being trained tabula rasa.
6
This machine has an Intel Xeon E5-1620 and nVidia Titan X (Maxwell); our tree search takes 0.3 seconds
for 10,000 iterations, while MoHex takes 0.2 seconds for 10,000 iterations.
7
Our original version, with only policy networks, was published before AG0 was published, but after its
submission. Our value networks were developed before AG0 was published, and published after Silver et al. [17].
Acknowledgements
This work was supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1 and by
AWS Cloud Credits for Research. We thank Andrew Clarke for help with efficiently parallelising
the generation of datasets, Alex Botev for assistance implementing the CNN, and Ryan Hayward for
providing a tool to draw Hex positions.
References
[1] J. St B. T. Evans. Heuristic and Analytic Processes in Reasoning. British Journal of Psychology, 75(4):451-468, 1984.
[2] Daniel Kahneman. Maps of Bounded Rationality: Psychology for Behavioral Economics. The American Economic Review, 93(5):1449-1475, 2003.
[3] R. J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning. Machine Learning, 8(3-4):229-256, 1992.
[4] V. Mnih et al. Human-Level Control through Deep Reinforcement Learning. Nature, 518(7540):529-533, 2015.
[5] S. Ross, G. J. Gordon, and J. A. Bagnell. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. AISTATS, 2011.
[6] H. Daumé III, J. Langford, and D. Marcu. Search-based Structured Prediction. Machine Learning, 2009.
[7] L. Kocsis and C. Szepesvári. Bandit Based Monte-Carlo Planning. In European Conference on Machine Learning, pages 282-293. Springer, 2006.
[8] S.-C. Huang, B. Arneson, R. Hayward, M. Müller, and J. Pawlewicz. MoHex 2.0: A Pattern-Based MCTS Hex Player. In International Conference on Computers and Games, pages 60-71. Springer, 2013.
[9] M. Genesereth, N. Love, and B. Pell. General Game Playing: Overview of the AAAI Competition. AI Magazine, 26(2):62, 2005.
[10] S. Gelly and D. Silver. Combining Online and Offline Knowledge in UCT. In Proceedings of the 24th International Conference on Machine Learning, pages 273-280. ACM, 2007.
[11] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning. In Advances in Neural Information Processing Systems, pages 3338-3346, 2014.
[13] D. Silver et al. Mastering the Game of Go with Deep Neural Networks and Tree Search. Nature, 529(7587):484-489, 2016.
[14] R. Coulom. BayesElo. http://remi.coulom.free.fr/Bayesian-Elo/, 2005.
[15] S. Ross and J. A. Bagnell. Reinforcement and Imitation Learning via Interactive No-Regret Learning. ArXiv e-prints, 2014.
[16] K. Chang, A. Krishnamurthy, A. Agarwal, H. Daumé III, and J. Langford. Learning to Search Better Than Your Teacher. CoRR, abs/1502.02206, 2015.
[17] D. Silver et al. Mastering the Game of Go without Human Knowledge. Nature, 550(7676):354-359, 2017.
[18] K. Young, R. Hayward, and G. Vasan. NeuroHex: A Deep Q-learning Hex Agent. arXiv preprint arXiv:1604.07097, 2016.
[19] Y. Goldberg and J. Nivre. Training Deterministic Parsers with Non-Deterministic Oracles. Transactions of the Association for Computational Linguistics, 1:403-414, 2013.
[20] D. Arpit, Y. Zhou, B. U. Kota, and V. Govindaraju. Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks. arXiv preprint arXiv:1603.01431, 2016.
[21] D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs). CoRR, abs/1511.07289, 2015.
11
EEG-GRAPH: A Factor-Graph-Based Model for
Capturing Spatial, Temporal, and Observational
Relationships in Electroencephalograms
Yogatheesan Varatharajah*
Benjamin Brinkmann†
Min Jin Chong*
Krishnakant Saboo*
Gregory Worrell†
Brent Berry†
Ravishankar Iyer*
Abstract
This paper presents a probabilistic-graphical model that can be used to infer characteristics of instantaneous brain activity by jointly analyzing spatial and temporal dependencies observed in electroencephalograms (EEG). Specifically, we describe a factor-graph-based model with customized factor-functions defined based
on domain knowledge, to infer pathologic brain activity with the goal of identifying seizure-generating brain regions in epilepsy patients. We utilize an inference
technique based on the graph-cut algorithm to exactly solve graph inference in
polynomial time. We validate the model by using clinically collected intracranial EEG data from 29 epilepsy patients to show that the model correctly identifies seizure-generating brain regions. Our results indicate that our model outperforms two conventional approaches used for seizure-onset localization (5–7%
better AUC: 0.72, 0.67, 0.65) and that the proposed inference technique provides
3–10% gain in AUC (0.72, 0.62, 0.69) compared to sampling-based alternatives.
1 Introduction
Studying the neurophysiological processes within the brain is an important step toward understanding the human brain. Techniques such as electroencephalography are exceptional tools for studying
the neurophysiological processes, because of their high temporal and spatial resolution. An electroencephalogram (EEG) typically contains several types of rhythms and discrete neurophysiological events that describe instantaneous brain activity. On the other hand, the neural activity taking
place in a brain region is very likely dependent on activities that took place in the same region at previous time instances. Furthermore, some EEG channels show inter-channel correlation due to their
spatial arrangement [1]. Those three characteristics are related, respectively, to the observational,
temporal, and spatial dependencies observed in time-series EEG signals.
The majority of the literature focuses on identifying and developing detectors for features relating to
the different rhythms and discrete neurophysiological events in the EEG signal [2]. Some effort has
been made to understand the inter-channel correlations [3] and temporal dependencies [4] observed
in EEG. Despite these separate efforts, very little effort has been made to combine those dependencies into a single model. Since those dependencies possess complementary information, using
only one of them generally results in poor understanding of the underlying neurophysiological phenomena. Hence, a unified framework that jointly captures all three dependencies in EEG, addresses
an important research problem in electrophysiology. In this paper, we describe a graphical-modelbased approach to capture all three dependencies, and we analyze its efficacy by applying it to a
critical problem in clinical neurology.
* Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana,
Illinois 61801. Email: {varatha2, mchong6, ksaboo2, rkiyer}@illinois.edu
† Department of Neurology, Mayo Clinic, Rochester, Minnesota 55904. Email: {Berry.Brent,
Brinkmann.Benjamin, Worrell.Gregory}@mayo.edu
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Graphical models in general are useful for representing dependencies between random variables.
Factor graphs are a specific type of graphical models that have random variables and factor functions
as the vertices in the graph [5]. A factor function is used to describe the relationship between two or
more random variables in the graph. Factor graphs are particularly useful when custom definitions
of the dependencies, such as in our case, need to be encoded in the graph. Hence, we have chosen to
adopt a factor graph model to represent the three kinds of dependencies described previously. These
dependencies are represented via three different factor functions, namely observational, spatial,
and temporal factor functions. We assess the applicability of this model in localization of seizure
onset zones (SOZ), which is a critical step in treating patients with epilepsy [6]. In particular, our
model is utilized to isolate those neural events in EEG that are associated with the SOZ, and are
eventually used to deduce the location of the SOZ. However, in a general setting, with appropriate
definitions of factor functions, one can utilize our model to describe other neural events of interest
(e.g., events related to behavioral states or memory processing). Major contributions of our work
are the following.
1. A framework based on factor graphs that jointly represents instantaneous observation-based, temporal, and spatial dependencies in EEG. This is the first attempt to combine
these three aspects into a single model in the context of EEG analysis.
2. A lightweight and exact graph inference technique based on customized definitions of factor functions. Exact graph inference is typically intractable in most graphical model representations because of exponentially growing state spaces.
3. A markedly improved technique for localizing SOZ based on the factor-graph-based model
developed in this paper. Existing approaches utilize only the observations made in the EEG
to determine the SOZ and do not utilize spatial and temporal dependencies.
Our study establishes the feasibility of the factor-graph-based model and demonstrates its application
in SOZ localization on a real EEG dataset collected from epilepsy patients who underwent epilepsy
surgery. Our results indicate that utilizing the spatial and temporal dependencies in addition to
observations made in the EEG provides a 5–7% improvement in the AUC (0.72, 0.67, 0.65) and
outperforms alternative approaches utilized for SOZ localization. Furthermore, our experiments
demonstrate that the lightweight graph inference technique provides a considerable improvement
(3–10%) in SOZ localization compared to sampling-based alternatives (AUC: 0.72, 0.62, 0.69).
2 Related work
Identifying features (or biomarkers) that describe underlying neurophysiological phenomena has
been a major focus of research in the EEG literature [2]. Spectral features [7], interictal spikes [8],
high-frequency oscillations [2], and phase-amplitude coupling [4] are some of the widely used features. Although feature identification is an important step in any electrophysiologic study, features
alone often cannot completely describe the underlying physiological phenomena. Researchers have
also looked at spatial connectivity between EEG channels as means of describing neurophysiological
activities [3]. In recent times, because of the availability of long-term EEG recordings, understanding of the temporal dependencies within various brain activities has also advanced significantly [4].
A recent attempt at combining spatial and temporal constraints has shown promise despite lacking
comprehensive validation [9]. Regardless, a throughly validated and general model that captures all
the factors, and is applicable to a variety of problems has not, to our knowledge, been proposed in
the EEG literature. Since the three factors are complementary to each other, a model that jointly
represents them addresses an important research gap in the field of electrophysiology.
Graphical models have been widely used in medical informatics [10], intrusion detection [11], social network modeling [12], and many other areas. Although factor graphs are applicable in all these
settings, their applications in practice are still very much dependent on problem-specific custom
definitions of factor functions. Nevertheless, with some level of customization, our work provides a
general framework to describe the different dependencies observed in EEG signals. A similar framework for emotion prediction is described in Moodcast [12], for which the authors used a factor graph
model to describe the influences of historical information, other users, and dynamic status to predict
a user's emotions in a social network setting. Although our factor functions are derived in a similar fashion, we show that graph inference can be performed exactly using the proposed lightweight
algorithm, and that it outperforms the sampling-based inference method utilized in Moodcast. Our
algorithm for inference was inspired by [13], in which the authors used an energy-minimization-based approach for performing exact graph inference in a Markov random field-based model.
3 Model description
Here we provide a mathematical description of the model and the inference procedure. In a nutshell,
we are interested in inferring the presence of a neurophysiological phenomenon of interest by observing rhythms and discrete events (referred to as observations) present in the EEG, and by utilizing
their spatial and temporal patterns as represented by a probabilistic graph. Since the generality of
our model relies on the ability to customize the definitions of specific dependencies described by the
model, we have adopted a factor-graph-based setting to represent our model.
Definitions: Suppose that EEG data of a subject are recorded through M channels. Initially, the
data is discretized by dividing the recording duration into N epochs. We represent the interactions
between the channels at an epoch n as a dynamic graph Gn = (V, En ), where V is the set of
|V | = M channels and En ? V ? V is the set of undirected links between channels. The state of a
channel k in the nth epoch is denoted by Yn (k), which might represent a phenomenon of interest. For
example, in the case of SOZ localization, the state might be a binary value representing whether the
k th channel in the nth epoch exhibits a SOZ-likely phenomenon. We also use Yn to denote the states
of all the channels at epoch n, and use Y to denote the set of all possible values that Yn (k) can take.
We refer to the EEG rhythm or discrete event present in the EEG as observations and use Xn (k)
to denote the observation present in the nth epoch of the k th channel. Depending on the number of
rhythms and/or events, Xn (k) could be a scalar or vector random variable. The observations made
in all the channels at epoch n are denoted by Xn .
Inference: Given a dynamic network Gn , and the observations Xn , our goal is to infer the states
of the channels at epoch n, i.e., Yn . In our approach, we derive the inference model using a factor
graph with factor functions defined as shown in Table 1. The factor functions are defined using
exponential relationships so that they attain their maximum values when the exponents are zero, and
exponentially decay otherwise. All factor functions range in [0, 1].
Table 1: Factor functions used in our EEG model and their descriptions, definitions, and notations.

• Observational: $f(Y_n(k), \tau(X_n(k))) = e^{-(Y_n(k)-\tau(X_n(k)))^2}$. Measures the direct contribution of the observations made in a channel to the phenomenon of interest. Here $\tau : X \to Y$ is a mapping from the observations to the phenomenon of interest; in general, it is not an accurate map, because it is based on observations alone.

• Spatial: $g(Y_n(k), Y_n(l)) = e^{-\frac{1}{d_{kl}^2}(Y_n(k)-Y_n(l))^2}$. Measures the correlation between the states of two channels at the same epoch. Here $d_{kl}$ denotes the physical distance between electrodes (or channels) k and l.

• Temporal: $h(Y_n(k), \Lambda_{n-1}(k)) = e^{-(Y_n(k)-\Lambda_{n-1}(k))^2}$. Measures the correlation between a channel's current state and its previous states. Here $\Lambda_{n-1}(k)$ is a function of all previous states of channel k, e.g., $\Lambda_{n-1}(k) = \frac{\sum_{i=1}^{n-1} Y_i(k)}{n-1}$.
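To make the three factors concrete, the following is a minimal sketch of the functions in Table 1 as plain Python code. The function and variable names (f_obs, g_spatial, h_temporal, tau_x, lam_prev, d_kl) are ours for illustration; the paper does not provide an implementation.

```python
import numpy as np

def f_obs(y, tau_x):
    # Observational factor: largest when the state y matches the
    # observation map tau(X_n(k)), decaying exponentially otherwise.
    return np.exp(-(y - tau_x) ** 2)

def g_spatial(y_k, y_l, d_kl):
    # Spatial factor: channels that are physically close (small d_kl)
    # are penalized more strongly for disagreeing states.
    return np.exp(-((y_k - y_l) ** 2) / d_kl ** 2)

def h_temporal(y_k, lam_prev):
    # Temporal factor: compares the current state with a summary of the
    # channel's previous states (e.g., their running average).
    return np.exp(-(y_k - lam_prev) ** 2)
```

All three functions attain their maximum of 1 when the exponent is zero and stay within [0, 1], matching the definitions above.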
With these definitions, the state of a channel is spatially related to the states of every other channel,
temporally related to a function of all its previous states, and, at the same time, explained by the
current observation of the channel. These dependencies and the factor functions that represent them
are illustrated in Fig. 1a and 1b respectively. (Note that Fig. 1b illustrates only the factor functions
related to Channel 1 and that similar factor functions exist for other channels although they are
not shown in the figure.) Provided with that information, for a particular state vector Y, we can
write P(Y|G_n) as in Eq. 1, where Z is a normalizing factor. In general, it is infeasible to find the
normalizing constant Z, because it would require exploration of the space $|\mathcal{Y}|^M$.
$$P(Y \mid G_n) = \frac{1}{Z} \prod_{k=1}^{M} \Big[ \prod_{i \neq k} g(Y(k), Y(i)) \cdot f(Y(k), \tau(X_n(k))) \cdot h(Y(k), \Lambda_{n-1}(k)) \Big] \tag{1}$$
[Figure 1: diagram labels — previous states of the same region, current observation (events, rhythms), current state of a brain region, states of nearby regions.]
(a) Factors that explain the state of a brain region. (b) Dependencies as factor functions.
Figure 1: The dependencies observed in brain activity and a representative factor graph model.
Therefore, we define the following predictive function (Eq. 2) for inferring Yn with the highest
likelihood per Eq. 1.
$$Y_n = \arg\max_{Y \in \mathcal{Y}^M} \prod_{k=1}^{M} \Big[ \prod_{i \neq k} g(Y(k), Y(i)) \cdot f(Y(k), \tau(X_n(k))) \cdot h(Y(k), \Lambda_{n-1}(k)) \Big] \tag{2}$$
Still, finding a Y that maximizes this objective function involves a discrete optimization over the
space $|\mathcal{Y}|^M$. A brute-force approach to finding an exact solution is infeasible when M is large.
Several methods, such as junction trees [14], belief propagation [15], and sampling-based methods
such as Markov Chain Monte Carlo (MCMC) [16, 17], have been proposed to find approximate
solutions. However, we show that this can be calculated exactly when the aforementioned definitions
of the factor functions are utilized. We can rewrite Eq. 2 using the definitions in Table 1 as follows.
$$Y_n = \arg\max_{Y \in \mathcal{Y}^M} \prod_{k=1}^{M} \Big[ \prod_{l \neq k} e^{-\frac{1}{d_{kl}^2}(Y(k)-Y(l))^2} \cdot e^{-(Y(k)-\tau(X_n(k)))^2} \cdot e^{-(Y(k)-\Lambda_{n-1}(k))^2} \Big] \tag{3}$$
Now, representing the product terms as summations inside the exponent and using the facts that the
exponential function is monotonically increasing and that maximizing a function is equivalent to
minimizing the negative of that function, we can rewrite Eq. 3 as:
$$Y_n = \arg\min_{Y \in \mathcal{Y}^M} \sum_{k=1}^{M} \Big[ \sum_{l \neq k} \frac{1}{d_{kl}^2}(Y(k)-Y(l))^2 + (Y(k)-\tau(X_n(k)))^2 + (Y(k)-\Lambda_{n-1}(k))^2 \Big] \tag{4}$$
Although the individual components in this objective function are solvable optimization problems,
the combination of them makes it difficult to solve. However, the objective function resembles
that of a standard graph energy minimization problem and hence can be solved using graph-cut
algorithms [18]. In this paper, we describe a solution for minimizing this objective function when
|Y| = 2, i.e., the brain states are binary. Although that is a limitation, the majority of the brain state
classification problems can be reduced to binary state cases when the time window of classification
is appropriately chosen. Regardless, potential solutions for |Y| > 2 are discussed in Section 6.
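Before turning to the min-cut solution, it may help to see Eq. 4 written out directly. Below is a short sketch (ours, not from the paper) that evaluates the energy of a candidate binary assignment; exact inference amounts to finding the assignment that minimizes this quantity.

```python
import numpy as np

def assignment_energy(y, tau, lam_prev, dist):
    """Energy of a state vector y under Eq. 4 (lower is better).

    y, tau, lam_prev: length-M arrays of states, observation-map values
    tau(X_n(k)), and temporal terms Lambda_{n-1}(k).
    dist: M x M matrix of inter-electrode distances, dist[k][l] = d_kl.
    """
    M = len(y)
    energy = 0.0
    for k in range(M):
        for l in range(M):
            if l != k:  # pairwise term; each unordered pair is counted twice
                energy += (y[k] - y[l]) ** 2 / dist[k][l] ** 2
        energy += (y[k] - tau[k]) ** 2 + (y[k] - lam_prev[k]) ** 2
    return energy
```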
Graph inference using min-cut for the binary state case: We constructed the graph shown in
Fig. 2a with two special nodes in addition to the EEG channels as vertices. The additional nodes
function as source (marked by 1) and sink (marked by 0) nodes in the conventional min-cut/max-flow
problem. Weights in this graph are assigned as follows:
• Every channel is connected with every other channel, and the link between channels k and l is assigned a weight of $\frac{1}{d_{kl}^2}(Y(k)-Y(l))^2$ based on the distance between them.
• Every channel is connected with the source node, and the link between channel k and the source is assigned a weight of $(1-\Lambda_{n-1}(k))^2 + (1-\tau(X_n(k)))^2$.
• Every channel is also connected with the sink node, and the link between channel k and the sink is assigned a weight of $\Lambda_{n-1}(k)^2 + (\tau(X_n(k)))^2$.
Proposition 1. An optimal min-cut partitioning of the graph shown in Fig. 2a minimizes the objective function given in Eq. 4.
(a) New graphical structure
(b) Min-cut partitioning
Figure 2: Graph inference using the min-cut algorithm.
Proof: Suppose that we perform an arbitrary cut on the graph shown in Fig. 2a, resulting in two sets
of vertices S and T . The energy of the graph after the cut is performed is:
$$E_{cut} = \sum_{k=1}^{M} \Big[ (Y(k)-\Lambda_{n-1}(k))^2 + (Y(k)-\tau(X_n(k)))^2 \Big] + \sum_{k \in T} \sum_{l \in S} \frac{1}{d_{kl}^2} (Y(k)-Y(l))^2$$
It can be seen that, for the same partition of vertices, the objective function given in Eq. 4 attains the
same quantity as Ecut . Therefore, since the optimal min-cut partition minimizes the energy Ecut , it
minimizes the objective function given in Eq. 4.
Now suppose that we are given two sets of nodes {S*, T*} as the optimal partitioning of the graph.
Without loss of generality, let us assume that S* contains the source and T* contains the sink. Then,
the other vertices in S* and T* are assigned 1 and 0 as their respective states to obtain the optimal
Y that minimizes the objective function given in Eq. 4.
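As a concrete illustration of Proposition 1, the sketch below performs the binary-state inference for one epoch with an off-the-shelf max-flow/min-cut solver (networkx here; the paper itself uses the Boykov-Kolmogorov algorithm [25]). We use the standard s-t construction in which the cut pays the unary cost of the label each node receives, so the terminal capacities below are arranged to match Eq. 4 rather than copied verbatim from the bullet list above; all names are ours.

```python
import networkx as nx

def infer_states_mincut(tau, lam_prev, dist):
    """Exact minimizer of Eq. 4 for binary states via s-t min-cut."""
    M = len(tau)
    G = nx.DiGraph()
    for k in range(M):
        # Terminal arcs: cutting s->k assigns label 0 and pays its unary cost;
        # cutting k->t assigns label 1 and pays its unary cost.
        G.add_edge("s", k, capacity=tau[k] ** 2 + lam_prev[k] ** 2)
        G.add_edge(k, "t", capacity=(1 - tau[k]) ** 2 + (1 - lam_prev[k]) ** 2)
        for l in range(M):
            if l != k:
                # Pairwise arc; Eq. 4 counts each unordered pair twice,
                # hence the factor of 2 on 1/d_kl^2.
                G.add_edge(k, l, capacity=2.0 / dist[k][l] ** 2)
    _, (source_side, _) = nx.minimum_cut(G, "s", "t")
    return [1 if k in source_side else 0 for k in range(M)]
```

With this arrangement, the value of any s-t cut equals the Eq. 4 energy of the corresponding assignment, so the minimum cut yields the exact minimizer.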
4 Application of the model in seizure onset localization
Background: Epilepsy is a neurological disorder characterized by spontaneously occurring
seizures. It affects roughly 1% of the world's population, and many do not respond to drug treatment
[19]. Epilepsy surgery, which involves resection of a portion of the patient's brain, can reduce and
often eliminate seizures [20]. The success of resective surgery depends on accurate localization of
the seizure-onset zone [21]. The conventional practice is to identify the EEG channels that show the
earliest seizure discharge via visual inspection of the EEG recorded during seizures, and to remove
some tissue around these channels during the resective surgery. This method, despite being the current clinical standard, is very costly, time-consuming, and burdensome to the patients, as it requires
a lengthy ICU stay so that an adequate number of seizures can be captured. One approach, which
has recently become a widely researched topic, utilizes between-seizure (interictal) intracranial EEG
(iEEG) recording to localize the seizure onset zones [22, 6]. This type of localization is preferable
to the conventional method, as it does not require a lengthy ICU stay.
Interictal SOZ identification methodology: Like that of the conventional approach, the goal here
is to identify a few channels that are likely to be in the SOZ. Channels situated directly on or close to
a SOZ exhibit different forms of transient electrophysiologic events (or abnormal events) between
seizures [23]. The frequency of such abnormal neural events plays a major role in determining
the SOZ. However, capturing these abnormal neural events that occur in distinct locations of the
brain alone is often not sufficient to establish an area in the brain as the SOZ. The reason is that
insignificant artifacts present in the EEG may show characteristics of those abnormal events that are
associated with SOZ (referred to as SOZ-likely events). In order to set apart the SOZ-likely events,
their spatial and temporal patterns could be utilized. It is known that SOZ-likely events occur in a
repetitive and spatially correlated fashion (i.e., neighboring channels exhibit such events at the same
time) [6]. Hence, the factor-graph-based model described in Section 3 can be applied to capture and
utilize the spatial and temporal correlations in isolating the SOZ-likely events.
Identifying abnormal neural events: Spectral characteristics of iEEG measured in the form of
power-in-bands (PIB) features have been widely utilized to identify abnormal neural events [24,
6, 7]. In this paper, PIB features are extracted as spectral power in the frequency bands Delta
(0–3 Hz), Low-Theta (3–6 Hz), High-Theta (6–9 Hz), Alpha (9–14 Hz), Beta (14–25 Hz), Low-Gamma (30–55 Hz), High-Gamma (65–115 Hz), and Ripple (125–150 Hz) and utilized to make
observations from channels. As described in Section 3, a τ function is used to relate the observations
to abnormal events. In Section 6, we evaluate different techniques for obtaining a mapping from
extracted PIB features to the presence of an abnormal neural event. However, a mapping obtained
using observations alone is not sufficient to deduce SOZ because in addition to SOZ-likely events,
signal artifacts will also be captured by this mapping. This phenomenon is illustrated in Fig.3, in
which PIB features show similar characteristics for the events related to both SOZ and non-SOZ.
Therefore, we utilize the factor graph model presented in this paper to further filter the detected
abnormal events based on their spatial and temporal patterns and isolate the SOZ-likely events.
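As a rough illustration, power-in-bands features for a single 3-second window might be computed as below. The paper does not specify its spectral estimator, so Welch's method from SciPy is assumed here; function and variable names are ours.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands (Hz) listed in Section 4.
BANDS = [(0, 3), (3, 6), (6, 9), (9, 14), (14, 25), (30, 55), (65, 115), (125, 150)]

def pib_features(x, fs):
    """PIB feature vector for one 3-second channel segment x sampled at fs Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), int(fs)))
    # Sum the power spectral density within each band.
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in BANDS])
```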
[Figure 3: iEEG traces from SOZ and non-SOZ channels over a 5-second window, with normalized PIB features (bands 1–8) plotted against time (sec) for an SOZ signal and a non-SOZ signal.]
Figure 3: EEG events related to both SOZ and non-SOZ are captured by PIB features because they
possess similar spectral characteristics.
Spatial and temporal dependencies in SOZ localization: Although artifacts show spectral characteristics similar to those of SOZ-likely events, unlike the latter, the former do not occur in a spatially
correlated manner. This spatial correlation is measured with respect to the physical distances between the electrodes placed in the brain. Therefore, the same definition of the spatial factor function
described in Section 3 is applicable. If a channel's observation is classified as an abnormal neural
event and the spatial factor function attains a large value with an adjacent channel, it would mean
that both channels likely show similar patterns of abnormalities which therefore must be SOZ-likely
events. In addition, the SOZ-likely events show a repetitive pattern, which artifacts usually do not.
In Section 3, we described the temporal correlation as a function of all previous states. As such, the
temporal correlation here is established with the intuition that a channel that previously exhibited a
large number of SOZ-likely events is likely to exhibit more because of the repetitive pattern. Hence,
temporal correlation is measured as the correlation between the state of a channel and the
observed
P
n?1
Y (k)
i
frequency of SOZ-likely events in that channel until the previous epoch, i.e., ?n?1 (k) = i=1
.
n?1
Therefore, when ?n?1 (k) is close to 1 and the observation made from channel k is classified as an
abnormal neural event, the event is more likely to be a SOZ-likely event than an artifact.
5 Experiments
Data: The data used in this work are from a study approved by the Mayo Clinic Institutional Review
Board. The dataset consists of iEEG recordings collected from 29 epilepsy patients. The iEEG
sensors were surgically implanted in potentially epileptogenic regions in the brain. Patients were
implanted with different numbers of sensors, and they all had different SOZs. Ground truth (the
true SOZ channels) was established from clinical reports and verified independently through visual
inspection of the seizure iEEGs. During data collection, basic preprocessing was performed to
remove line-noise and other forms of signal contamination from the data.
[Figure 4: a 2-hour data segment is divided into 3-second windows per channel k; each window flows through PIB feature extraction, feature classification, and factor graph inference.]
Figure 4: A flow diagram illustrating the SOZ determination process.
Analytic scheme: Two-hour between-seizure segments were chosen for each patient to represent
a monitoring duration that could be achieved during surgery. The two-hour iEEG recordings were
divided into non-overlapping three-second epochs. This epoch length was chosen because it would
likely accommodate at least one abnormal neural event that could be associated with the SOZ [6].
Spectral domain features (PIB) were extracted in the 3-second epochs to capture abnormal neural
events [6]. Based on the features extracted in a 3-second recording of a channel, a binary value
$\tau(X_n(k)) \in \{0, 1\}$ was assigned to that channel, indicating whether or not an abnormal event was
present. Section 6 provides a comparison of supervised and unsupervised techniques used to create
this mapping. In the case of supervised techniques, a classification model was trained using the PIB
features extracted from an existing corpus of manually annotated abnormal neural events. In the
case of unsupervised techniques, channels were clustered into two groups based on the PIB features
extracted during an epoch, and the cluster with the larger cluster center (measured as the Euclidean
distance from the origin) was labeled as the abnormal cluster. Consequently, the respective epochs
of those channels in the abnormal cluster were classified as abnormal neural events. The factor graph
model was then used to filter the SOZ-likely events out of all the detected abnormal neural events. A
factor graph is generated using the observational, spatial, and temporal factor functions described
above specifically for this application. The best combination of states that minimizes the objective
function given in Eq. 4, Yn , is found by using the min-cut algorithm. In our approach, we used the
Boykov-Kolmogorov algorithm [25] to obtain the optimal partition of the graph. The states Yn here
are binary values and represent the presence or absence of SOZ-likely events in the channels. This
process is repeated for all the 3-second epochs and the SOZ is deduced at the end using a maximum
likelihood (ML) approach (described in the following). This whole process is illustrated in Fig. 4.
Maximum likelihood SOZ deduction: We model the occurrences of SOZ-likely events in channel
k as independent Bernoulli random variables with probability θ(k). Here, θ(k) denotes the true bias
of the channel's being in SOZ. We estimate θ(k) using a maximum likelihood (ML) approach and
use $\hat{\theta}(k)$ to denote the estimate. Each $Y_n(k)$ that results from the factor graph inference is treated
as an outcome of a Bernoulli trial and the log-likelihood function after N such trials is defined as:
"N
#
Y
Yn (k)
1?Yn (k)
log (L(?(k))) = log
?(k)
(1 ? ?(k))
(5)
n=1
An estimate for θ(k) that maximizes the above likelihood function (known as the MLE, i.e., maximum likelihood estimate) after N epochs is derived as $\hat{\theta}(k) = \frac{\sum_{n=1}^{N} Y_n(k)}{N}$.
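In code, the MLE reduces to a per-channel mean of the inferred states; a one-line sketch (ours):

```python
import numpy as np

def soz_likelihoods(states):
    """MLE theta_hat(k) = sum_n Y_n(k) / N from the N x M inferred states."""
    return np.asarray(states).mean(axis=0)
```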
Evaluation: The ML approach generates a likelihood probability for each channel k for being in
the SOZ. We compared these probabilities against the ground truth (binary values with 1 meaning
that the channel is in the SOZ and 0 otherwise) to generate the area under the ROC curve (AUC),
sensitivity, specificity, precision, recall, and F1-score metrics. First, we evaluated a number of
techniques for generating a mapping from the extracted PIB features to the presence of abnormal
events. We evaluated three unsupervised approaches, namely k-means, spectral, and hierarchical
clustering methods and two supervised approaches, namely support vector machine (SVM) and
generalized linear model (GLM), for this task. Second, we evaluated the benefits of utilizing the
min-cut algorithm for inferring instantaneous states. Here we compared our results using the min-cut algorithm against those of two sampling-based techniques [12]: MCMC with random sampling,
and MCMC with sampling per prior distribution. Belief-propagation-based methods are not suitable
here because our factor graph contains cycles [26]. Third, we compared our results against two
recent solutions for interictal SOZ localization, including a summation approach [6] and a clustering
approach [22]. In the summation approach, summation of the features of a channel normalized by the
maximum feature summation was used as the likelihood of that channel's being in the SOZ. In the
clustering approach, the features of all the channels during the whole 2-hour period were clustered
into two classes by a k-means algorithm, and the cluster with the larger cluster mean was chosen
as the abnormal cluster. For each channel, the fraction of all its features that were in the abnormal
cluster was used as the likelihood of that channel being in the SOZ. Both of these approaches utilize
only the observations and lack the additional information of the spatial and temporal correlations.
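A minimal evaluation sketch against the clinical ground truth might look as follows; the 0.5 threshold for the binary metrics is our assumption, as the paper reports these metrics without specifying the operating point here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

def evaluate_localization(theta_hat, soz_truth, threshold=0.5):
    """AUC plus thresholded precision/recall/F1 for per-channel SOZ likelihoods
    (theta_hat) against binary ground truth (1 = channel in the SOZ)."""
    auc = roc_auc_score(soz_truth, theta_hat)
    pred = (np.asarray(theta_hat) >= threshold).astype(int)
    prec, rec, f1, _ = precision_recall_fscore_support(
        soz_truth, pred, average="binary", zero_division=0)
    return {"AUC": auc, "precision": prec, "recall": rec, "F1": f1}
```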
6 Results & discussion
Table 2 lists the results obtained for the experiments explained in Section 5, performed using a
dataset containing non-seizure (interictal) iEEG data from 29 epilepsy patients. First, a comparison
of supervised and unsupervised techniques for the mapping from PIB features to the presence of
abnormal events was performed. The results indicate that using a k-means clustering approach for
mapping PIB features to abnormal events is better than any other supervised or unsupervised approach, while other approaches also prove useful. Second, a comparison between sampling-based
methods and the min-cut approach was performed for the task of graph inference. Our results indicate that utilizing the min-cut approach to infer instantaneous states is considerably better than a
random-sampling-based MCMC approach (with a 10% higher AUC and 14% higher F1-score) and
marginally better than an MCMC approach with sampling per a prior distribution (with a 3% higher
AUC and a similar F1-score), when used with k-means algorithm for abnormal event classification.
However, unlike this approach, our method does not require a prior distribution to sample from.
Third, we show that our factor-graph-based model for interictal SOZ localization performs significantly better than either of the traditional approaches (with 5% and 7% higher AUCs) when used
with k-means algorithm for abnormal event classification and min-cut algorithm for graph inference.
Table 2: Goodness-of-fit metrics obtained for unsupervised and supervised methods for PIB-to-abnormal-event mapping (τ); sampling-based approaches for instantaneous state estimation; and
conventional approaches utilized for interictal SOZ localization. ("FG/kmeans/min-cut" means that
we utilized a factor-graph-based method, with a k-means clustering algorithm for mapping PIB
features to abnormal neural events and the min-cut algorithm for performing graph inference.)

Method | AUC | Sensitivity | Specificity | Precision | Recall | F1-score

Evaluation: techniques for PIB to abnormal event mapping (τ)
FG/kmeans/min-cut | 0.72±0.03 | 0.74±0.03 | 0.61±0.02 | 0.39±0.05 | 0.74±0.03 | 0.46±0.04
FG/spectral/min-cut | 0.68±0.03 | 0.60±0.07 | 0.48±0.05 | 0.31±0.05 | 0.60±0.07 | 0.36±0.05
FG/hierarch/min-cut | 0.69±0.03 | 0.52±0.06 | 0.51±0.05 | 0.29±0.05 | 0.52±0.06 | 0.34±0.05
FG/svm/min-cut | 0.71±0.03 | 0.68±0.06 | 0.54±0.05 | 0.36±0.05 | 0.68±0.06 | 0.43±0.05
FG/glm/min-cut | 0.69±0.03 | 0.62±0.07 | 0.47±0.05 | 0.31±0.05 | 0.62±0.08 | 0.37±0.05

Evaluation: sampling vs. min-cut
FG/kmeans/Random | 0.62±0.03 | 0.51±0.08 | 0.40±0.07 | 0.35±0.06 | 0.51±0.08 | 0.32±0.05
FG/kmeans/Prior | 0.69±0.03 | 0.65±0.04 | 0.66±0.04 | 0.40±0.04 | 0.65±0.04 | 0.46±0.04

Evaluation: comparison against conventional approaches
Summation | 0.67±0.04 | 0.59±0.05 | 0.67±0.03 | 0.38±0.05 | 0.59±0.05 | 0.43±0.05
Clustering | 0.65±0.04 | 0.49±0.06 | 0.72±0.04 | 0.42±0.06 | 0.49±0.06 | 0.44±0.05
Significance: Overall, the factor-graph-based model with k-means clustering for abnormal event
classification and the min-cut algorithm for instantaneous state inference outperforms all other methods for the application of interictal SOZ localization. Utilization of spatial and temporal factor
functions improves the localization AUC by 5–7%, relative to pure observation-based approaches
(summation and clustering). On the other hand, the runtime complexity of instantaneous state inference is greatly reduced by the min-cut approach. The complexity of a brute-force approach grows
exponentially with the number of nodes in the graph, while the min-cut approach has a reasonable
runtime complexity of $O(|V||E|^2)$, where |V| is the number of nodes and |E| is the number of
edges in the graph. Although sampling-based methods are able to provide approximate solutions
with moderate complexity, the min-cut method provided superior performance in our experiments.
Future work: Significant domain knowledge is required to come up with manual definitions of
graphical models, and in many situations, almost no domain knowledge is available. Hence, the
manually defined factor-graphical model and associated factor functions are a potential limitation
of our work, as a framework that automatically learns the graphical representation might result in
a more generalizable model. Dynamic Bayesian networks [27] may provide a platform that can be
used to learn dependencies from the data while allowing the types of dependencies we described.
Another potential limitation of our work is the binary-brain-state assumption made while solving the
graph energy minimization task. We surmise that extensions of the min-cut algorithm such as the
one proposed in [28] are applicable for non-binary cases. In addition, we also believe that optimal
weighting of the different factor functions could further improve localization accuracy and provide
insights on the contributions of spatial, temporal, and observational relationships to a specific application that involves EEG signal analysis. We plan to investigate those in our future work.
7 Conclusion
We described a factor-graph-based model to encode observational, temporal, and spatial dependencies observed in EEG-based brain activity analysis. This model utilizes manually defined factor
functions to represent the dependencies, which allowed us to derive a lightweight graph inference
technique. This is a significant advancement in the field of electrophysiology because a general and
comprehensively validated model that encodes different forms of dependencies in EEG does not exist at present. We validated our model for the application of interictal seizure onset zone (SOZ) and
demonstrated the feasibility in a clinical setting. Our results indicate that our approach outperforms
two widely used conventional approaches for the application of SOZ localization. In addition, the
factor functions and the technology for exactly inferring the states described in this paper can be
extended to other applications of factor graphs in fields such as medical diagnoses, social network
analysis, and preemptive attack detection. Therefore, we assert that further investigation is necessary
to understand the different use cases of this model.
Acknowledgements: This work was partly supported by National Science Foundation grants CNS-1337732 and CNS-1624790, National Institutes of Health grants NINDS-U01-NS073557, NINDS-R01-NS92882, NHLBI-HL105355, and NINDS-UH2-NS095495-01, Mayo Clinic and Illinois Alliance Fellowships for Technology-based Healthcare Research, and an IBM faculty award. We thank
Subho Banerjee, Phuong Cao, Jenny Applequist, and the reviewers for their valuable feedback.
References
[1] C. P. Warren, S. Hu, M. Stead, B. H. Brinkmann, M. R. Bower, and G. A. Worrell, "Synchrony in normal and focal epileptic brain: The seizure onset zone is functionally disconnected," Journal of Neurophysiology, vol. 104, no. 6, pp. 3530–3539, 2010.
[2] G. A. Worrell, A. B. Gardner, S. M. Stead, S. Hu, S. Goerss, G. J. Cascino, F. B. Meyer, R. Marsh, and B. Litt, "High-frequency oscillations in human temporal lobe: Simultaneous microwire and clinical macroelectrode recordings," Brain, vol. 131, no. 4, pp. 928–937, 2008.
[3] M. Rubinov and O. Sporns, "Complex network measures of brain connectivity: Uses and interpretations," Neuroimage, vol. 52, no. 3, pp. 1059–1069, 2010.
[4] C. Alvarado-Rojas, M. Valderrama, A. Fouad-Ahmed, H. Feldwisch-Drentrup, M. Ihle, C. Teixeira, F. Sales, A. Schulze-Bonhage, C. Adam, A. Dourado, S. Charpier, V. Navarro, and M. Le Van Quyen, "Slow modulations of high-frequency activity (40–140 Hz) discriminate preictal changes in human focal epilepsy," Scientific Reports, vol. 4, 2014.
[5] B. J. Frey, F. R. Kschischang, H.-A. Loeliger, and N. Wiberg, "Factor graphs and algorithms," in Proceedings of the 35th Annual Allerton Conference on Communication, Control and Computing. University of Illinois, 1997, pp. 666–680.
[6] Y. Varatharajah, B. M. Berry, Z. T. Kalbarczyk, B. H. Brinkmann, G. A. Worrell, and R. K. Iyer, "Interictal seizure onset zone localization using unsupervised clustering and Bayesian filtering," in 8th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2017, pp. 533–539.
[7] Y. Varatharajah, R. K. Iyer, B. M. Berry, G. A. Worrell, and B. H. Brinkmann, "Seizure forecasting and the preictal state in canine epilepsy," International Journal of Neural Systems, vol. 27, p. 1650046, 2017.
[8] R. Katznelson, "EEG recording, electrode placement, and aspects of generator localization," Electric Fields of the Brain, pp. 176–213, 1981.
[9] J. D. Martinez-Vargas, G. Strobbe, K. Vonck, P. van Mierlo, and G. Castellanos-Dominguez, "Improved localization of seizure onset zones using spatiotemporal constraints and time-varying source connectivity," Frontiers in Neuroscience, vol. 11, p. 156, 2017.
[10] L. R. Andersen, J. H. Krebs, and J. D. Andersen, "Steno: An expert system for medical diagnosis based on graphical models and model search," Journal of Applied Statistics, vol. 18, no. 1, pp. 139–153, 1991.
[11] P. Cao, E. Badger, Z. Kalbarczyk, R. Iyer, and A. Slagell, "Preemptive intrusion detection: Theoretical framework and real-world measurements," in Proceedings of the 2015 Symposium and Bootcamp on the Science of Security. ACM, 2015, pp. 21:1–21:2.
[12] Y. Zhang, J. Tang, J. Sun, Y. Chen, and J. Rao, "Moodcast: Emotion prediction via dynamic continuous factor graph model," in 10th International Conference on Data Mining (ICDM), 2010, pp. 1193–1198.
[13] J. Liu, C. Zhang, C. McCarty, P. Peissig, E. Burnside, and D. Page, "High-dimensional structured feature screening using binary Markov random fields," in Artificial Intelligence and Statistics, 2012, pp. 712–721.
[14] W. Wiegerinck, "Variational approximations between mean field theory and the junction tree algorithm," in Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, 2000, pp. 626–633.
[15] J. S. Yedidia, W. T. Freeman, Y. Weiss et al., "Generalized belief propagation," in Advances in Neural Information Processing Systems, vol. 13, 2000, pp. 689–695.
[16] W. R. Gilks, S. Richardson, and D. Spiegelhalter, Markov Chain Monte Carlo in Practice. CRC Press, 1995.
[17] S. Chib and E. Greenberg, "Understanding the Metropolis-Hastings algorithm," The American Statistician, vol. 49, no. 4, pp. 327–335, 1995.
[18] V. Kolmogorov and R. Zabin, "What energy functions can be minimized via graph cuts?" IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, pp. 147–159, 2004.
[19] R. G. Andrzejak, D. Chicharro, C. E. Elger, and F. Mormann, "Seizure prediction: Any better than chance?" Clinical Neurophysiology, vol. 120, no. 8, pp. 1465–1478, 2009.
[20] R. W. Lee, G. A. Worrell, W. R. Marsh, G. D. Cascino, N. M. Wetjen, F. B. Meyer, E. C. Wirrell, and E. L. So, "Diagnostic outcome of surgical revision of intracranial electrode placements for seizure localization," Journal of Clinical Neurophysiology, vol. 31, no. 3, pp. 199–202, 2014.
[21] N. M. Wetjen, W. R. Marsh, F. B. Meyer, G. D. Cascino, E. So, J. W. Britton, S. M. Stead, and G. A. Worrell, "Intracranial electroencephalography seizure onset patterns and surgical outcomes in nonlesional extratemporal epilepsy," Journal of Neurosurgery, vol. 110, no. 6, pp. 1147–1152, 2009.
[22] S. Liu, Z. Sha, A. Sencer, A. Aydoseli, N. Bebek, A. Abosch, T. Henry, C. Gurses, and N. F. Ince, "Exploring the time–frequency content of high frequency oscillations for automated identification of seizure onset zone in epilepsy," Journal of Neural Engineering, vol. 13, no. 2, p. 026026, 2016.
[23] M. Stead, M. Bower, B. H. Brinkmann, K. Lee, W. R. Marsh, F. B. Meyer, B. Litt, J. Van Gompel, and G. A. Worrell, "Microseizures and the spatiotemporal scales of human partial epilepsy," Brain, pp. 2789–2797, 2010.
[24] G. P. Kalamangalam, L. Cara, N. Tandon, and J. D. Slater, "An interictal EEG spectral metric for temporal lobe epilepsy lateralization," Epilepsy Research, vol. 108, no. 10, pp. 1748–1757, 2014.
[25] Y. Boykov and V. Kolmogorov, "An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 9, pp. 1124–1137, 2004.
[26] Y. Weiss and W. T. Freeman, "On the optimality of solutions of the max-product belief-propagation algorithm in arbitrary graphs," IEEE Transactions on Information Theory, vol. 47, pp. 736–744, 2001.
[27] P. Dagum, A. Galper, and E. Horvitz, "Dynamic network models for forecasting," in Proceedings of the 8th Conference on Uncertainty in Artificial Intelligence, 1992, pp. 41–48.
[28] A. Delong and Y. Boykov, "Globally optimal segmentation of multi-region objects," in 2009 12th IEEE International Conference on Computer Vision. IEEE, 2009, pp. 285–292.
6,768 | 7,122 | Improving the Expected Improvement Algorithm
Chao Qin
Columbia Business School
New York, NY 10027
[email protected]
Diego Klabjan
Northwestern University
Evanston, IL 60208
[email protected]
Daniel Russo
Columbia Business School
New York, NY 10027
[email protected]
Abstract
The expected improvement (EI) algorithm is a popular strategy for information
collection in optimization under uncertainty. The algorithm is widely known to
be too greedy, but nevertheless enjoys wide use due to its simplicity and ability
to handle uncertainty and noise in a coherent decision theoretic framework. To
provide rigorous insight into EI, we study its properties in a simple setting of
Bayesian optimization where the domain consists of a finite grid of points. This
is the so-called best-arm identification problem, where the goal is to allocate
measurement effort wisely to confidently identify the best arm using a small
number of measurements. In this framework, one can show formally that EI is far
from optimal. To overcome this shortcoming, we introduce a simple modification
of the expected improvement algorithm. Surprisingly, this simple change results in
an algorithm that is asymptotically optimal for Gaussian best-arm identification
problems, and provably outperforms standard EI by an order of magnitude.
1 Introduction
Recently Bayesian optimization has received much attention in the machine learning community
[21]. This literature studies the problem of maximizing an unknown black-box objective function by
collecting noisy measurements of the function at carefully chosen sample points. At first a prior belief
over the objective function is prescribed, and then the statistical model is refined sequentially as data
are observed. Expected improvement (EI) [13] is one of the most widely-used Bayesian optimization
algorithms. It is a greedy improvement-based heuristic that samples the point offering greatest
expected improvement over the current best sampled point. EI is simple and readily implementable,
and it offers reasonable performance in practice.
Although EI is reasonably effective, it is too greedy, focusing nearly all sampling effort near the
estimated optimum and gathering too little information about other regions in the domain. This
phenomenon is most transparent in the simplest setting of Bayesian optimization where the function's
domain is a finite grid of points. This is the problem of best-arm identification (BAI) [1] in a multi-armed bandit. The player sequentially selects arms to measure and observes noisy reward samples
with the hope that a small number of measurements enable a confident identification of the best
arm. Recently Ryzhov [20] studied the performance of EI in this setting. His work focuses on a link
between EI and another algorithm known as the optimal computing budget allocation [3], but his
analysis reveals EI allocates a vanishing proportion of samples to suboptimal arms as the total number
of samples grows. Any method with this property will be far from optimal in BAI problems [1].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
In this paper, we improve the EI algorithm dramatically through a simple modification. The resulting
algorithm, which we call top-two expected improvement (TTEI), combines the top-two sampling
idea of Russo [19] with a careful change to the improvement-measure used by EI. We show that
this simple variant of EI achieves strong asymptotic optimality properties in the BAI problem, and
benchmark the algorithm in simulation experiments.
Our main theoretical contribution is a complete characterization of the asymptotic proportion of
samples TTEI allocates to each arm as a function of the true (unknown) arm means. These particular
sampling proportions have been shown to be optimal from several perspectives [4, 12, 9, 19, 8], and
this enables us to establish two different optimality results for TTEI. The first concerns the rate at
which the algorithm gains confidence about the identity of the optimal arm as the total number of
samples collected grows. Next we study the so-called fixed confidence setting, where the algorithm is
able to stop at any point and return an estimate of the optimal arm. We show that when applied with
the stopping rule of Garivier and Kaufmann [8], TTEI essentially minimizes the expected number of
samples required among all rules obeying a constraint on the probability of incorrect selection.
One undesirable feature of our algorithm is its dependence on a tuning parameter. Our theoretical
results precisely show the impact of this parameter, and reveal a surprising degree of robustness to its
value. It is also easy to design methods that adapt this parameter over time to the optimal value, and
we explore one such method in simulation. Still, removing this tuning parameter is an interesting
direction for future research.
Further related literature. Despite the popularity of EI, its theoretical properties are not well
studied. A notable exception is the work of Bull [2], who studies a global optimization problem and
provides a convergence rate for EI?s expected loss. However, it is assumed that the observations
are noiseless. Our work also relates to a large number of recent machine learning papers that try to
characterize the sample complexity of the best-arm identification problem [5, 18, 1, 7, 14, 10, 11, 15-17]. Despite substantial progress, matching asymptotic upper and lower bounds remained elusive in
this line of work. Building on older work in statistics [4, 12] and simulation optimization [9], recent
work of Garivier and Kaufmann [8] and Russo [19] characterized the optimal sampling proportions.
Two notions of asymptotic optimality are established: sample complexity in the fixed confidence
setting and rate of posterior convergence. Garivier and Kaufmann [8] developed two sampling
rules designed to closely track the asymptotic optimal proportions and showed that, when combined
with a stopping rule motivated by Chernoff [4], this sampling rule minimizes the expected number
of samples required to guarantee a vanishing threshold on the probability of incorrect selection is
satisfied. Russo [19] independently proposed three simple Bayesian algorithms, and proved that
each algorithm attains the optimal rate of posterior convergence. TTEI proposed in this paper is
conceptually most similar to the top-two value sampling of Russo [19], but it is more computationally
efficient.
1.1 Main Contributions
As discussed below, our work makes both theoretical and algorithmic contributions.
Theoretical: Our main theoretical contribution is Theorem 1, which establishes that TTEI, a simple
modification to a popular Bayesian heuristic, converges to the known optimal asymptotic
sampling proportions. It is worth emphasizing that, unlike recent results for other top-two
sampling algorithms [19], this theorem establishes that the expected time to converge to the
optimal proportions is finite, which we need to establish optimality in the fixed confidence
setting. Proving this result required substantial technical innovations. Theorems 2 and 3
are additional theoretical contributions. These mirror results in [19] and [8], but we extract
minimal conditions on sampling rules that are sufficient to guarantee the two notions of
optimality studied in these papers.
Algorithmic: On the algorithmic side, we substantially improve a widely used algorithm. TTEI can
be easily implemented by modifying existing EI code, but, as shown in our experiments, can
offer an order of magnitude improvement. A more subtle point involves the advantages of
TTEI over algorithms that are designed to directly target convergence on the asymptotically
optimal proportions. In the experiments, we show that TTEI substantially outperforms an
oracle sampling rule whose sampling proportions directly track the asymptotically optimal
proportions. This phenomenon should be explored further in future work, but suggests that
by carefully reasoning about the value of information, TTEI accounts for important factors
that are washed out in asymptotic analysis. Finally, as discussed in the conclusion, although
we focus on uncorrelated priors, we believe our method can be easily extended to more
complicated problems like that of best-arm identification in linear bandits [22].
2 Problem Formulation

Let $A = \{1, \dots, k\}$ be the set of arms. At each time $n \in \mathbb{N} = \{0, 1, 2, \dots\}$, an arm $I_n \in A$ is measured, and an independent noisy reward $Y_{n,I_n}$ is observed. The reward $Y_{n,i} \in \mathbb{R}$ of arm $i$ at time $n$ follows a normal distribution $N(\mu_i, \sigma^2)$ with common known variance $\sigma^2$, but unknown mean $\mu_i$. The objective is to allocate measurement effort wisely in order to confidently identify the arm with the highest mean using a small number of measurements. We assume that $\mu_1 > \mu_2 > \dots > \mu_k$. Our analysis takes place in a frequentist setting, in which the true means $(\mu_1, \dots, \mu_k)$ are fixed but unknown. The algorithms we study, however, are Bayesian in the sense that they begin with a prior over the arm means and update the belief to form a posterior distribution as evidence is gathered.
Prior and Posterior Distributions. The sampling rules studied in this paper begin with a normally distributed prior over the true mean of each arm $i \in A$, denoted by $N(\mu_{0,i}, \sigma_{0,i}^2)$, and update this to form a posterior distribution as observations are gathered. By conjugacy, the posterior distribution after observing the sequence $(I_0, Y_{0,I_0}, \dots, I_{n-1}, Y_{n-1,I_{n-1}})$ is also a normal distribution, denoted by $N(\mu_{n,i}, \sigma_{n,i}^2)$. The posterior mean and variance can be calculated using the following recursive equations:

$$\mu_{n+1,i} = \begin{cases} \left(\sigma_{n,i}^{-2}\mu_{n,i} + \sigma^{-2} Y_{n,i}\right) / \left(\sigma_{n,i}^{-2} + \sigma^{-2}\right) & \text{if } I_n = i, \\ \mu_{n,i} & \text{if } I_n \neq i, \end{cases}$$

and

$$\sigma_{n+1,i}^2 = \begin{cases} 1 / \left(\sigma_{n,i}^{-2} + \sigma^{-2}\right) & \text{if } I_n = i, \\ \sigma_{n,i}^2 & \text{if } I_n \neq i. \end{cases}$$

We denote the posterior distribution over the vector of arm means by

$$\Pi_n = N(\mu_{n,1}, \sigma_{n,1}^2) \otimes N(\mu_{n,2}, \sigma_{n,2}^2) \otimes \cdots \otimes N(\mu_{n,k}, \sigma_{n,k}^2)$$

and let $\theta = (\theta_1, \dots, \theta_k)$. For example, with this notation

$$E_{\theta \sim \Pi_n}\left[\sum_{i \in A} \theta_i\right] = \sum_{i \in A} \mu_{n,i}.$$

The posterior probability assigned to the event that arm $i$ is optimal is

$$\alpha_{n,i} \triangleq P_{\theta \sim \Pi_n}\left(\theta_i > \max_{j \neq i} \theta_j\right). \qquad (1)$$

To avoid confusion, we always use $\theta = (\theta_1, \dots, \theta_k)$ to denote a random vector of arm means drawn from the algorithm's posterior $\Pi_n$, and $\mu = (\mu_1, \dots, \mu_k)$ to denote the vector of true arm means.
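As a concrete illustration of the conjugate updates above, the following sketch maintains the posterior means and variances for all arms; the function name and array layout are our own illustrative choices, not taken from the paper.

import numpy as np

def update_posterior(mu, var, arm, y, noise_var=1.0):
    # One conjugate Gaussian update after observing reward y ~ N(mu_arm, noise_var).
    # mu[i], var[i] hold the current posterior parameters (mu_{n,i}, sigma_{n,i}^2);
    # only the measured arm changes, exactly as in the recursive equations above.
    precision = 1.0 / var[arm] + 1.0 / noise_var
    mu[arm] = (mu[arm] / var[arm] + y / noise_var) / precision
    var[arm] = 1.0 / precision
    return mu, var

# Example: two arms with a standard-normal prior on each mean.
mu, var = np.zeros(2), np.ones(2)
mu, var = update_posterior(mu, var, arm=0, y=1.3)
print(mu, var)  # the posterior for arm 0 tightens toward the observation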
Two notions of asymptotic optimality. Our first notion of optimality relates to the rate of posterior convergence. As the number of observations grows, one hopes that the posterior distribution definitively identifies the true best arm, in the sense that the posterior probability $1 - \alpha_{n,1}$ assigned to the event that a different arm is optimal tends to zero. By sampling the arms intelligently, we hope this probability can be driven to zero as rapidly as possible. Following Russo [19], we aim to maximize the exponent governing the rate of decay,

$$\liminf_{n \to \infty} -\frac{1}{n} \log(1 - \alpha_{n,1}),$$

among all sampling rules.

The second setting we consider is often called the "fixed confidence" setting. Here, the agent is allowed at any point to stop gathering samples and return an estimate of the identity of the optimal arm. In addition to a sampling rule, we require a stopping rule that selects a time $\tau$ at which to stop, and a decision rule that returns an estimate $\hat{I}_\tau$ of the optimal arm based on the first $\tau$ observations. We consider minimizing the average number of observations $E[\tau_\delta]$ required by an algorithm (that consists of a sampling rule, a stopping rule and a decision rule) guaranteeing a vanishing probability $\delta$ of incorrect identification, i.e., $P(\hat{I}_{\tau_\delta} \neq 1) \leq \delta$. Following Garivier and Kaufmann [8], the number of samples required scales with $\log(1/\delta)$, and so we aim to minimize

$$\limsup_{\delta \to 0} \frac{E[\tau_\delta]}{\log(1/\delta)}$$

among all algorithms with probability of error no more than $\delta$. In this setting, we study the performance of sampling rules when combined with the stopping rule studied by Chernoff [4] and Garivier and Kaufmann [8].
3 Sampling Rules

In this section, we first introduce the expected improvement algorithm, and point out its weakness. Then a simple variant of the expected improvement algorithm is proposed. Both algorithms make calculations using the function $f(x) = x\Phi(x) + \phi(x)$, where $\Phi(\cdot)$ and $\phi(\cdot)$ are the CDF and PDF of the standard normal distribution. One can show that as $x \to \infty$, $\log f(-x) \sim -x^2/2$, and so $f(-x) \approx e^{-x^2/2}$ for very large $x$. One can also show that $f$ is an increasing function.

Expected Improvement. Expected improvement [13] is a simple improvement-based sampling rule. The EI algorithm favors the arm that offers the largest amount of improvement upon a target. The EI algorithm measures the arm $I_n = \arg\max_{i \in A} v_{n,i}$, where $v_{n,i}$ is the EI value of arm $i$ at time $n$. Let $I_n^* = \arg\max_{i \in A} \mu_{n,i}$ denote the arm with the largest posterior mean at time $n$. The EI value of arm $i$ at time $n$ is defined as

$$v_{n,i} \triangleq E_{\theta \sim \Pi_n}\left[\left(\theta_i - \mu_{n,I_n^*}\right)^+\right],$$

where $x^+ = \max\{x, 0\}$. The above expectation can be computed analytically as follows:

$$v_{n,i} = \left(\mu_{n,i} - \mu_{n,I_n^*}\right)\Phi\!\left(\frac{\mu_{n,i} - \mu_{n,I_n^*}}{\sigma_{n,i}}\right) + \sigma_{n,i}\,\phi\!\left(\frac{\mu_{n,i} - \mu_{n,I_n^*}}{\sigma_{n,i}}\right) = \sigma_{n,i}\, f\!\left(\frac{\mu_{n,i} - \mu_{n,I_n^*}}{\sigma_{n,i}}\right).$$

The EI value $v_{n,i}$ measures the potential of arm $i$ to improve upon the largest posterior mean $\mu_{n,I_n^*}$ at time $n$. Because $f$ is an increasing function, $v_{n,i}$ is increasing in both the posterior mean $\mu_{n,i}$ and the posterior standard deviation $\sigma_{n,i}$.
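For concreteness, one way to evaluate $f$ and the EI values in code is sketched below; scipy.stats.norm supplies $\Phi$ and $\phi$, and the helper names are our own.

import numpy as np
from scipy.stats import norm

def f(x):
    # f(x) = x * Phi(x) + phi(x), the function used by both EI and TTEI
    return x * norm.cdf(x) + norm.pdf(x)

def ei_values(mu, sigma):
    # v_{n,i} = sigma_{n,i} * f((mu_{n,i} - mu_{n,I*}) / sigma_{n,i}),
    # where I* is the arm with the largest posterior mean.
    best = np.max(mu)
    return sigma * f((mu - best) / sigma)

mu = np.array([1.0, 0.8, 0.2])
sigma = np.array([0.3, 0.5, 0.9])
print(ei_values(mu, sigma))  # EI measures the arm attaining the maximum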
Top-Two Expected Improvement. The EI algorithm can have very poor performance for selecting the best arm. Once the posterior indicates a particular arm is the best with reasonably high probability, EI allocates nearly all future samples to this arm at the expense of measuring other arms. Recently Ryzhov [20] showed that EI only allocates $O(\log n)$ samples to suboptimal arms asymptotically. This is a severe shortcoming, as it means $n$ must be extremely large before the algorithm has enough samples from suboptimal arms to reach a confident conclusion.

To improve the EI algorithm, we build on the top-two sampling idea in Russo [19]. The idea is to identify in each period the two "most promising" arms based on current observations, and randomize to choose which to sample. A tuning parameter $\beta \in (0,1)$ controls the probability assigned to the "top" arm. A naive top-two variant of EI would identify the two arms with largest EI value, and flip a $\beta$-weighted coin to decide which to measure. However, one can prove that this algorithm is not optimal for any choice of $\beta$. Instead, what we call the top-two expected improvement algorithm uses a novel modified EI criterion which more carefully accounts for the decision-maker's uncertainty when deciding which arm to sample.

For $i, j \in A$, define $v_{n,i,j} \triangleq E_{\theta \sim \Pi_n}[(\theta_i - \theta_j)^+]$. This measures the expected magnitude of improvement arm $i$ offers over arm $j$, but unlike the typical EI criterion, this expectation integrates over the uncertain quality of both arms. This measure can be computed analytically as

$$v_{n,i,j} = \sqrt{\sigma_{n,i}^2 + \sigma_{n,j}^2}\; f\!\left(\frac{\mu_{n,i} - \mu_{n,j}}{\sqrt{\sigma_{n,i}^2 + \sigma_{n,j}^2}}\right).$$
TTEI depends on a tuning parameter $\beta > 0$, set to $1/2$ by default. With probability $\beta$, TTEI measures the arm $I_n^{(1)}$ by optimizing the EI criterion, and otherwise it measures an alternative arm $I_n^{(2)}$ that offers the largest expected improvement on the arm $I_n^{(1)}$. Formally, TTEI measures the arm

$$I_n = \begin{cases} I_n^{(1)} = \arg\max_{i \in A} v_{n,i} & \text{with probability } \beta, \\ I_n^{(2)} = \arg\max_{i \in A} v_{n,i,I_n^{(1)}} & \text{with probability } 1 - \beta. \end{cases}$$

Note that $v_{n,i,i} = 0$, which implies $I_n^{(2)} \neq I_n^{(1)}$.

We notice that TTEI with $\beta = 1$ is the standard EI algorithm. Compared to the EI algorithm, TTEI with $\beta \in (0,1)$ allocates much more measurement effort to suboptimal arms. We will see that TTEI allocates a $\beta$ proportion of samples to the best arm asymptotically, and it uses the remaining $1-\beta$ fraction of samples for gathering evidence against each suboptimal arm.
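The decision rule translates directly into code. The sketch below makes one TTEI selection; implementation details such as the random-number generator and the use of -inf to exclude the top arm are our own choices.

import numpy as np
from scipy.stats import norm

def f(x):
    return x * norm.cdf(x) + norm.pdf(x)

def ttei_select(mu, sigma, beta=0.5, rng=np.random.default_rng()):
    # One TTEI decision: with probability beta play the EI arm I^(1),
    # otherwise play the arm maximizing v_{n,i,I^(1)}.
    v = sigma * f((mu - mu.max()) / sigma)      # EI values v_{n,i}
    top = int(np.argmax(v))                     # I_n^(1)
    if rng.random() < beta:
        return top
    s = np.sqrt(sigma ** 2 + sigma[top] ** 2)   # sqrt(sigma_i^2 + sigma_top^2)
    v2 = s * f((mu - mu[top]) / s)              # v_{n,i,I^(1)}
    v2[top] = -np.inf                           # v_{n,i,i} = 0, so I^(2) != I^(1)
    return int(np.argmax(v2))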
4 Convergence to Asymptotically Optimal Proportions

For all $i \in A$ and $n \in \mathbb{N}$, we define $T_{n,i} \triangleq \sum_{\ell=0}^{n-1} \mathbb{1}\{I_\ell = i\}$ to be the number of samples of arm $i$ before time $n$. We will show that under TTEI with parameter $\beta$, $\lim_{n \to \infty} T_{n,1}/n = \beta$. That is, the algorithm asymptotically allocates a $\beta$ proportion of the samples to the true best arm. Dropping for the moment questions regarding the impact of this tuning parameter, let us consider the optimal asymptotic proportion of effort to allocate to each of the $k-1$ remaining arms. It is known that the optimal proportions are given by the unique vector $(w_2^*, \dots, w_k^*)$ satisfying $\sum_{i=2}^{k} w_i^* = 1 - \beta$ and

$$\frac{(\mu_1 - \mu_2)^2}{1/\beta + 1/w_2^*} = \dots = \frac{(\mu_1 - \mu_k)^2}{1/\beta + 1/w_k^*}. \qquad (2)$$

We set $w_1^* = \beta$, so $w^* = (w_1^*, \dots, w_k^*)$ encodes the sampling proportions of each arm.

To understand the source of equation (2), imagine that over the first $n$ periods each arm $i$ is sampled exactly $w_i^* n$ times, and let $\bar{\mu}_{n,i} \sim N\left(\mu_i, \frac{\sigma^2}{w_i^* n}\right)$ denote the empirical mean of arm $i$. Then

$$\bar{\mu}_{n,1} - \bar{\mu}_{n,i} \sim N\left(\mu_1 - \mu_i,\; \bar{\sigma}_i^2\right) \quad \text{where} \quad \bar{\sigma}_i^2 = \frac{\sigma^2}{n}\left(\frac{1}{\beta} + \frac{1}{w_i^*}\right).$$

The probability that $\bar{\mu}_{n,1} - \bar{\mu}_{n,i} \leq 0$, leading to an incorrect estimate of which arm has the highest mean, is $\Phi\left((\mu_i - \mu_1)/\bar{\sigma}_i\right)$, where $\Phi$ is the CDF of the standard normal distribution. Equation (2) is equivalent to requiring that $(\mu_1 - \mu_i)/\bar{\sigma}_i$ is equal for all arms $i$, so the probability of falsely declaring $\mu_i \geq \mu_1$ is equal for all $i \neq 1$. In a sense, these sampling frequencies equalize the evidence against each suboptimal arm. These proportions appeared first in the machine learning literature in [19, 8], but appeared much earlier in the statistics literature in [12], and separately in the simulation optimization literature in [9]. As we will see in the next section, convergence to this allocation is a necessary condition for both notions of optimality considered in this paper.
sampling proportions converge to the proportions w? derived above. Therefore, while the sampling
proportion of the optimal arm is controlled by the tuning parameter ?, the remaining 1 ? ? fraction of
measurement is optimally distributed among the remaining k ? 1 arms. Such a result was established
for other top-two sampling algorithms in [19]. The second notion of optimality requires not just
convergence to w? with probability 1, but also a sense in which the expected time until convergence
is finite. The following theorem presents such a stronger result for TTEI. To make this precise,
we introduce a time after which for each arm, the empirical proportion allocated to it is accurate.
Specifically, given ? ? (0, 1) and > 0, we define
M? , inf N ? N : max |Tn,i /n ? wi? | ? ?n ? N .
(3)
i?A
It is clear that P(M? < ?) = 1 for all > 0 if and only if Tn,i /n ? wi? with probability 1 for each
arm i ? A. To establish optimality in the ?fixed confidence setting?, we need to prove in addition that
E[M? ] < ? for all > 0, which requires substantial new technical innovations.
5
Theorem 1. Under TTEI with parameter $\beta \in (0,1)$, $E[M_\epsilon] < \infty$ for any $\epsilon > 0$.

This result implies that under TTEI, $P(M_\epsilon < \infty) = 1$ for all $\epsilon > 0$, or equivalently

$$\lim_{n \to \infty} \frac{T_{n,i}}{n} = w_i^* \quad \forall i \in A.$$

4.1 Problem Complexity Measure
Given $\beta \in (0,1)$, define the problem complexity measure

$$\Gamma^*_\beta \triangleq \frac{(\mu_1 - \mu_2)^2}{2\sigma^2\left(1/\beta + 1/w_2^*\right)} = \dots = \frac{(\mu_1 - \mu_k)^2}{2\sigma^2\left(1/\beta + 1/w_k^*\right)},$$

which is a function of the true arm means and variances. This will be the exponent governing the rate of posterior convergence, and it also characterizes the average number of samples in the fixed confidence setting. The optimal exponent comes from maximizing over $\beta$. Let us define $\Gamma^* = \max_{\beta \in (0,1)} \Gamma^*_\beta$ and $\beta^* = \arg\max_{\beta \in (0,1)} \Gamma^*_\beta$, and set

$$w^* = w^{\beta^*} = \left(\beta^*, w_2^*, \dots, w_k^*\right).$$

Russo [19] has proved that for $\beta \in (0,1)$, $\Gamma^*_\beta \geq \Gamma^* / \max\left\{\beta/\beta^*,\; (1-\beta)/(1-\beta^*)\right\}$, and therefore $\Gamma^*_{1/2} \geq \Gamma^*/2$. This demonstrates a surprising degree of robustness to $\beta$. In particular, $\Gamma^*_\beta$ is close to $\Gamma^*$ if $\beta$ is adjusted to be close to $\beta^*$, and the choice of $\beta = 1/2$ always yields a 2-approximation to $\Gamma^*$.
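Under these definitions, $\Gamma^*_\beta$ and $\beta^*$ can be computed numerically by combining a proportion solver (as in the earlier sketch) with a one-dimensional search; the sketch and its helper names are ours.

import numpy as np
from scipy.optimize import minimize_scalar

def proportions(mu, beta, iters=100):
    # bisection on the common ratio value in equation (2); see the earlier sketch
    gaps2 = (mu[0] - np.asarray(mu[1:], dtype=float)) ** 2
    lo, hi = 0.0, beta * gaps2.min()
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        w = 1.0 / (gaps2 / c - 1.0 / beta)
        lo, hi = (lo, c) if w.sum() > 1.0 - beta else (c, hi)
    return w  # (w_2*, ..., w_k*)

def gamma_beta(mu, beta, sigma2=1.0):
    w2 = proportions(mu, beta)[0]
    return (mu[0] - mu[1]) ** 2 / (2.0 * sigma2 * (1.0 / beta + 1.0 / w2))

res = minimize_scalar(lambda b: -gamma_beta([5, 4, 1, 1, 1], b),
                      bounds=(1e-3, 1 - 1e-3), method='bounded')
print(res.x, -res.fun)  # beta* and Gamma*; Section 6 reports beta* = 0.48 here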
5 Implied Optimality Results

This section establishes formal optimality guarantees for TTEI. Both results, in fact, hold for any algorithm satisfying the conclusions of Theorem 1, and are therefore of broader interest.

5.1 Optimal Rate of Posterior Convergence
We first provide upper and lower bounds on the exponent governing the rate of posterior convergence. The same result has been proved in Russo [19] for bounded correlated priors. We use different proof techniques to prove the following result for uncorrelated Gaussian priors.

This theorem shows that no algorithm can attain a rate of posterior convergence faster than $e^{-\Gamma^* n}$ and that this is attained by any algorithm that, like TTEI with optimal tuning parameter $\beta^*$, has asymptotic sampling ratios $(w_1^*, \dots, w_k^*)$. The second part implies TTEI with parameter $\beta$ attains convergence rate $e^{-n\Gamma^*_\beta}$ and that it is optimal among sampling rules that allocate a $\beta$-fraction of samples to the optimal arm. Recall that, without loss of generality, we have assumed arm 1 is the arm with true highest mean, $\mu_1 = \max_{i \in A} \mu_i$. We will study the posterior mass $1 - \alpha_{n,1}$ assigned to the event that some other arm has the highest mean.

Theorem 2 (Posterior Convergence - Sufficient Condition for Optimality). The following properties hold with probability 1:

1. Under any sampling rule that satisfies $T_{n,i}/n \to w_i^*$ for each $i \in A$,

$$\lim_{n \to \infty} -\frac{1}{n}\log(1 - \alpha_{n,1}) = \Gamma^*.$$

Under any sampling rule,

$$\limsup_{n \to \infty} -\frac{1}{n}\log(1 - \alpha_{n,1}) \leq \Gamma^*.$$

2. Let $\beta \in (0,1)$. Under any sampling rule that satisfies $T_{n,i}/n \to w_i^\beta$ for each $i \in A$ (where $w^\beta$ denotes the proportions defined by (2) for parameter $\beta$),

$$\lim_{n \to \infty} -\frac{1}{n}\log(1 - \alpha_{n,1}) = \Gamma^*_\beta.$$

Under any sampling rule that satisfies $T_{n,1}/n \to \beta$,

$$\limsup_{n \to \infty} -\frac{1}{n}\log(1 - \alpha_{n,1}) \leq \Gamma^*_\beta.$$

This result reveals that when the tuning parameter $\beta$ is set optimally to $\beta^*$, TTEI attains the optimal rate of posterior convergence. Since $\Gamma^*_{1/2} \geq \Gamma^*/2$, when $\beta$ is set to the default value $1/2$, the exponent governing the convergence rate of TTEI is at least half of the optimal one.
5.2 Optimal Average Sample Size

Chernoff's Stopping Rule. In the fixed confidence setting, besides an efficient sampling rule, a player also needs to design an intelligent stopping rule. This section introduces a stopping rule proposed by Chernoff [4] and studied recently by Garivier and Kaufmann [8]. This stopping rule makes use of the Generalized Likelihood Ratio statistic, which depends on the current maximum likelihood estimates of all unknown means. For each arm $i \in A$, the maximum likelihood estimate of its unknown mean $\mu_i$ at time $n$ is its empirical mean

$$\hat{\mu}_{n,i} = T_{n,i}^{-1}\sum_{\ell=0}^{n-1} \mathbb{1}\{I_\ell = i\}\, Y_{\ell,I_\ell}, \quad \text{where} \quad T_{n,i} = \sum_{\ell=0}^{n-1}\mathbb{1}\{I_\ell = i\}.$$

Next we define a weighted average of the empirical means of arms $i, j \in A$:

$$\hat{\mu}_{n,i,j} \triangleq \frac{T_{n,i}}{T_{n,i}+T_{n,j}}\,\hat{\mu}_{n,i} + \frac{T_{n,j}}{T_{n,i}+T_{n,j}}\,\hat{\mu}_{n,j}.$$

Then if $\hat{\mu}_{n,i} \geq \hat{\mu}_{n,j}$, the Generalized Likelihood Ratio statistic $Z_{n,i,j}$ has the following explicit expression:

$$Z_{n,i,j} \triangleq T_{n,i}\, d(\hat{\mu}_{n,i}, \hat{\mu}_{n,i,j}) + T_{n,j}\, d(\hat{\mu}_{n,j}, \hat{\mu}_{n,i,j}),$$

where $d(x,y) = (x-y)^2/(2\sigma^2)$ is the Kullback-Leibler (KL) divergence between Gaussian distributions $N(x, \sigma^2)$ and $N(y, \sigma^2)$. Similarly, if $\hat{\mu}_{n,i} < \hat{\mu}_{n,j}$, then $Z_{n,i,j} = -Z_{n,j,i} \leq 0$, where $Z_{n,j,i}$ is well defined as above. If either arm has never been sampled before, these quantities are not well defined and we take the convention that $Z_{n,i,j} = Z_{n,j,i} = 0$. Given a target confidence $\delta \in (0,1)$, to ensure that one arm is better than the others with probability at least $1-\delta$, we use the stopping time

$$\tau_\delta \triangleq \inf\left\{n \in \mathbb{N} : Z_n \triangleq \max_{i \in A}\min_{j \in A\setminus\{i\}} Z_{n,i,j} > \gamma_{n,\delta}\right\},$$

where $\gamma_{n,\delta} > 0$ is an appropriate threshold. By definition, $\min_{j \in A\setminus\{i\}} Z_{n,i,j}$ is nonnegative if and only if $\hat{\mu}_{n,i} \geq \hat{\mu}_{n,j}$ for all $j \in A \setminus \{i\}$. Hence, whenever $\hat{I}_n \triangleq \arg\max_{i \in A}\hat{\mu}_{n,i}$ is unique, $Z_n = \min_{j \in A\setminus\{\hat{I}_n\}} Z_{n,\hat{I}_n,j}$.

Next we introduce the exploration rate for normal bandit models that ensures identification of the best arm with probability at least $1-\delta$. We use the following result given in Garivier and Kaufmann [8].

Proposition 1 (Garivier and Kaufmann [8], Proposition 12). Let $\delta \in (0,1)$ and $\alpha > 1$. There exists a constant $C = C(\alpha, k)$ such that under any sampling rule, using Chernoff's stopping rule with the threshold $\gamma_{n,\delta} = \log(Cn^\alpha/\delta)$ guarantees

$$P\left(\tau_\delta < \infty,\; \arg\max_{i \in A}\hat{\mu}_{\tau_\delta,i} \neq 1\right) \leq \delta.$$
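The statistic and the stopping test are straightforward to compute once every arm has been sampled at least once. The sketch below exploits the identity $Z_n = \min_{j \neq \hat{I}_n} Z_{n,\hat{I}_n,j}$ from the text and is our own rendering.

import numpy as np

def chernoff_stop(mu_hat, counts, sigma2, threshold):
    # Z_n = max_i min_{j != i} Z_{n,i,j}; when the empirical best arm is unique
    # this equals the minimum over j of Z_{n, I_hat, j}, computed here directly.
    # mu_hat: empirical means; counts: T_{n,i} (assumed all positive).
    d = lambda x, y: (x - y) ** 2 / (2.0 * sigma2)  # Gaussian KL divergence
    best = int(np.argmax(mu_hat))
    z_n = np.inf
    for j in range(len(mu_hat)):
        if j == best:
            continue
        ti, tj = counts[best], counts[j]
        pooled = (ti * mu_hat[best] + tj * mu_hat[j]) / (ti + tj)  # weighted mean
        z_n = min(z_n, ti * d(mu_hat[best], pooled) + tj * d(mu_hat[j], pooled))
    return z_n > threshold, z_n  # stop when Z_n exceeds gamma_{n,delta}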
Sample Complexity. Garivier and Kaufmann [8] recently provided a general lower bound on the number of samples required in the fixed confidence setting. In particular, they show that for any normal bandit model, under any sampling rule and stopping time $\tau_\delta$ that guarantees a probability of error no more than $\delta$,

$$\liminf_{\delta \to 0} \frac{E[\tau_\delta]}{\log(1/\delta)} \geq \frac{1}{\Gamma^*}.$$

Recall that $M_\epsilon$, defined in (3), is the first time after which the empirical proportions are within $\epsilon$ of their asymptotic limits. The next result provides a condition in terms of $M_\epsilon$ that is sufficient to guarantee optimality in the fixed confidence setting.

Theorem 3 (Fixed Confidence - Sufficient Condition for Optimality). Let $\beta, \delta \in (0,1)$ and $\alpha > 1$. Under any sampling rule which, if applied with no stopping rule, satisfies $E[M_\epsilon] < \infty$ for all $\epsilon > 0$, using Chernoff's stopping rule with the threshold $\gamma_{n,\delta} = \log(Cn^\alpha/\delta)$ (where $C = C(\alpha, k)$) guarantees

$$\limsup_{\delta \to 0} \frac{E[\tau_\delta]}{\log(1/\delta)} \leq \frac{1}{\Gamma^*_\beta}.$$

When $\beta = \beta^*$, the general lower bound on sample complexity of $1/\Gamma^*$ is essentially matched. In addition, when $\beta$ is set to the default value $1/2$, the sample complexity of TTEI combined with Chernoff's stopping rule is at most twice the optimal sample complexity, since $1/\Gamma^*_{1/2} \leq 2/\Gamma^*$.
6 Numerical Experiments
To test the empirical performance of TTEI, we conduct several numerical experiments. The first experiment compares the performance of TTEI with $\beta = 1/2$ and EI. The second experiment compares the performance of different versions of TTEI, top-two Thompson sampling (TTTS) [19], knowledge gradient (KG) [6] and oracle algorithms that know the optimal proportions a priori. Each algorithm plays arm $i = 1, \dots, k$ exactly once at the beginning, and then prescribes a prior $N(Y_{i,i}, \sigma^2)$ for the unknown arm-mean $\mu_i$, where $Y_{i,i}$ is the observation from $N(\mu_i, \sigma^2)$. In both experiments, we fix the common known variance $\sigma^2 = 1$ and the number of arms $k = 5$. We consider three instances $[\mu_1, \dots, \mu_5] = [5, 4, 1, 1, 1]$, $[5, 4, 3, 2, 1]$ and $[2, 0.8, 0.6, 0.4, 0.2]$. The optimal parameter $\beta^*$ equals 0.48, 0.45 and 0.35, respectively.

Recall that $\alpha_{n,i}$, defined in (1), denotes the posterior probability that arm $i$ is optimal. Tables 1 and 2 show the average number of measurements required for the largest posterior probability assigned to some arm being the best to reach a given confidence level $c$, i.e., $\max_i \alpha_{n,i} \geq c$. In a Bayesian setting, the probability of correct selection under this rule is exactly $c$. The results in Table 1 are averaged over 100 trials. We see that TTEI with $\beta = 1/2$ outperforms standard EI by an order of magnitude.

Table 1: Average number of measurements required to reach the confidence level c = 0.95

                         TTEI-1/2        EI
    [5, 4, 1, 1, 1]        14.60      238.50
    [5, 4, 3, 2, 1]        16.72      384.73
    [2, .8, .6, .4, .2]    24.39     1525.42
The second experiment compares the performance of different versions of TTEI, TTTS, KG, a random sampling oracle (RSO) and a tracking oracle (TO). The random sampling oracle draws a random arm in each round from the distribution $w^*$ encoding the asymptotically optimal proportions. The tracking oracle tracks the optimal proportions at each round. Specifically, the tracking oracle samples the arm with the largest ratio between its optimal and empirical proportions. Two tracking algorithms proposed by Garivier and Kaufmann [8] are similar to this tracking oracle. TTEI with adaptive $\beta$ (aTTEI) works as follows: it starts with $\beta = 1/2$ and updates $\beta = \hat{\beta}^*$ every 10 rounds, where $\hat{\beta}^*$ is the maximizer of equation (2) based on plug-in estimators for the unknown arm-means. Table 2 shows the average number of measurements required for the largest posterior probability being the best to reach the confidence level $c = 0.9999$. The results in Table 2 are averaged over 200 trials. We see that the performances of TTEI with adaptive $\beta$ and TTEI with $\beta^*$ are better than the performances of all other algorithms. We note that TTEI with adaptive $\beta$ substantially outperforms the tracking oracle.

Table 2: Average number of measurements required to reach the confidence level c = 0.9999

                         TTEI-1/2   aTTEI   TTEI-b*   TTTS-b*      RSO      TO      KG
    [5, 4, 1, 1, 1]        61.97    61.98     61.59     62.86    97.04   77.76   75.55
    [5, 4, 3, 2, 1]        66.56    65.54     65.55     66.53   103.43   88.02   81.49
    [2, .8, .6, .4, .2]    76.21    72.94     71.62     73.02   101.97   96.90   86.98
In addition to the Bayesian stopping rule tested above, we have run some experiments with the
Chernoff stopping rule discussed in Section 5.2. Asymptotic analysis shows these two rules are
similar when the confidence level c is very high. However, the Chernoff stopping rule appears to be
too conservative in practice; it typically yields a probability of correct selection much larger than
the specified confidence level c at the expense of using more samples. Since our current focus is on
allocation rules, we focus on this Bayesian stopping rule, which appears to offer a more fundamental
comparison than one based on ad hoc choice of tuning parameters. Developing improved stopping
rules is an important area for future research.
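To make the experimental protocol concrete, the following end-to-end sketch runs TTEI with the Bayesian stopping rule until $\max_i \alpha_{n,i} \geq c$. Since $\alpha_{n,i}$ has no simple closed form for $k > 2$, we estimate it here by Monte Carlo draws from the posterior; that estimator, and all helper names, are our own choices rather than something the paper prescribes.

import numpy as np
from scipy.stats import norm

def run_until_confident(mu_true, c=0.95, beta=0.5, sigma=1.0, n_mc=2000,
                        rng=np.random.default_rng(0)):
    # Bayesian-stopping experiment in the style of Section 6: sample with TTEI
    # until the posterior probability that some arm is best exceeds c.
    k = len(mu_true)
    mu = rng.normal(mu_true, sigma)      # play each arm once; observation = prior mean
    var = np.ones(k) * sigma ** 2
    n = k
    f = lambda x: x * norm.cdf(x) + norm.pdf(x)
    while True:
        draws = rng.normal(mu, np.sqrt(var), size=(n_mc, k))
        alpha = np.bincount(draws.argmax(axis=1), minlength=k) / n_mc
        if alpha.max() >= c:
            return n, int(alpha.argmax())
        sd = np.sqrt(var)
        v = sd * f((mu - mu.max()) / sd)          # EI values
        top = int(np.argmax(v))                   # I_n^(1)
        if rng.random() < beta:
            arm = top
        else:
            s = np.sqrt(var + var[top])
            v2 = s * f((mu - mu[top]) / s)        # v_{n,i,I^(1)}
            v2[top] = -np.inf
            arm = int(np.argmax(v2))
        y = rng.normal(mu_true[arm], sigma)       # observe a reward
        prec = 1.0 / var[arm] + 1.0 / sigma ** 2  # conjugate posterior update
        mu[arm] = (mu[arm] / var[arm] + y / sigma ** 2) / prec
        var[arm] = 1.0 / prec
        n += 1

print(run_until_confident(np.array([5., 4., 1., 1., 1.])))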
7 Conclusion and Extensions to Correlated Arms

We conclude by noting that while this paper thoroughly studies TTEI in the case of uncorrelated priors, we believe the algorithm is also ideally suited to problems with complex correlated priors and large sets of arms. In fact, the modified information measure $v_{n,i,j}$ was designed with an eye toward dealing with correlation in a sophisticated way. In the case of a correlated normal distribution $N(\mu, \Sigma)$, one has

$$v_{n,i,j} = E_{\theta \sim N(\mu,\Sigma)}\left[(\theta_i - \theta_j)^+\right] = \sqrt{\Sigma_{ii} + \Sigma_{jj} - 2\Sigma_{ij}}\; f\!\left(\frac{\mu_{n,i} - \mu_{n,j}}{\sqrt{\Sigma_{ii} + \Sigma_{jj} - 2\Sigma_{ij}}}\right).$$

This closed form accommodates efficient computation. Here the term $\Sigma_{i,j}$ accounts for the correlation or similarity between arms $i$ and $j$. Therefore $v_{n,i,I_n^{(1)}}$ is large for arms $i$ that offer large potential improvement over $I_n^{(1)}$, i.e. those that (1) have large posterior mean, (2) have large posterior variance, and (3) are not highly correlated with arm $I_n^{(1)}$. As $I_n^{(1)}$ concentrates near the estimated optimum, we expect the third factor will force the algorithm to experiment in promising regions of the domain that are "far" away from the current estimated optimum, and are under-explored under standard EI.
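A direct computation of this correlated measure is sketched below; the example values and variable names are our own, chosen to show that strong correlation shrinks the measure.

import numpy as np
from scipy.stats import norm

def f(x):
    return x * norm.cdf(x) + norm.pdf(x)

def v_correlated(mu, Sigma, i, j):
    # E[(theta_i - theta_j)^+] for theta ~ N(mu, Sigma): the difference is
    # Gaussian with mean mu_i - mu_j and variance Sigma_ii + Sigma_jj - 2 Sigma_ij.
    s = np.sqrt(Sigma[i, i] + Sigma[j, j] - 2.0 * Sigma[i, j])
    return s * f((mu[i] - mu[j]) / s)

mu = np.array([1.0, 0.9])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])        # strongly correlated arms
print(v_correlated(mu, Sigma, 0, 1))  # small value: little potential improvement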
References

[1] Jean-Yves Audibert, Sébastien Bubeck, and Rémi Munos. Best arm identification in multi-armed bandits. In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010, pages 41-53, 2010.

[2] Adam D. Bull. Convergence rates of efficient global optimization algorithms. Journal of Machine Learning Research, 12:2879-2904, 2011. URL http://dblp.uni-trier.de/db/journals/jmlr/jmlr12.html#Bull11.

[3] Chun-Hung Chen, Jianwu Lin, Enver Yücesan, and Stephen E. Chick. Simulation budget allocation for further enhancing the efficiency of ordinal optimization. Discrete Event Dynamic Systems, 10(3):251-270, 2000.

[4] Herman Chernoff. Sequential design of experiments. Ann. Math. Statist., 30(3):755-770, 09 1959. doi: 10.1214/aoms/1177706205. URL http://dx.doi.org/10.1214/aoms/1177706205.

[5] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Fifteenth Annual Conference on Computational Learning Theory (COLT), pages 255-270, 2002.

[6] Peter I. Frazier, Warren B. Powell, and Savas Dayanik. A knowledge-gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 47(5):2410-2439, 2008.

[7] Victor Gabillon, Mohammad Ghavamzadeh, and Alessandro Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 3212-3220. Curran Associates, Inc., 2012.

[8] Aurélien Garivier and Emilie Kaufmann. Optimal best arm identification with fixed confidence. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, pages 998-1027, 2016.

[9] P. Glynn and S. Juneja. A large deviations perspective on ordinal optimization. In Simulation Conference, 2004. Proceedings of the 2004 Winter, volume 1. IEEE, 2004.

[10] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil' UCB: An optimal exploration algorithm for multi-armed bandits. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of Proceedings of Machine Learning Research, pages 423-439, Barcelona, Spain, 13-15 Jun 2014. PMLR. URL http://proceedings.mlr.press/v35/jamieson14.html.

[11] Kevin G. Jamieson and Robert D. Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting. In 48th Annual Conference on Information Sciences and Systems, CISS 2014, Princeton, NJ, USA, March 19-21, 2014, pages 1-6, 2014.

[12] C. Jennison, I. M. Johnstone, and B. W. Turnbull. Asymptotically optimal procedures for sequential adaptive selection of the best of several normal means. Statistical Decision Theory and Related Topics III, 2:55-86, 1982.

[13] Donald R. Jones, Matthias Schonlau, and William J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455-492, 1998. ISSN 1573-2916. doi: 10.1023/A:1008306431147. URL http://dx.doi.org/10.1023/A:1008306431147.

[14] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In Sanjoy Dasgupta and David McAllester, editors, Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, pages 1238-1246, Atlanta, Georgia, USA, 17-19 Jun 2013. PMLR. URL http://proceedings.mlr.press/v28/karnin13.html.

[15] Emilie Kaufmann and Shivaram Kalyanakrishnan. Information complexity in bandit subset selection. In Shai Shalev-Shwartz and Ingo Steinwart, editors, Proceedings of the 26th Annual Conference on Learning Theory, volume 30 of Proceedings of Machine Learning Research, pages 228-251, Princeton, NJ, USA, 12-14 Jun 2013. PMLR. URL http://proceedings.mlr.press/v30/Kaufmann13.html.

[16] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of A/B testing. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvári, editors, Proceedings of The 27th Conference on Learning Theory, volume 35 of Proceedings of Machine Learning Research, pages 461-481, Barcelona, Spain, 13-15 Jun 2014. PMLR. URL http://proceedings.mlr.press/v35/kaufmann14.html.

[17] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best-arm identification in multi-armed bandit models. Journal of Machine Learning Research, 17(1):1-42, 2016. URL http://jmlr.org/papers/v17/kaufman16a.html.

[18] Shie Mannor, John N. Tsitsiklis, Kristin Bennett, and Nicolò Cesa-Bianchi. The sample complexity of exploration in the multi-armed bandit problem. Journal of Machine Learning Research, 5:623-648, 2004.

[19] Daniel Russo. Simple Bayesian algorithms for best arm identification. In 29th Annual Conference on Learning Theory, pages 1417-1418, 2016.

[20] Ilya O. Ryzhov. On the convergence rates of expected improvement methods. Operations Research, 64(6):1515-1528, 2016. doi: 10.1287/opre.2016.1494. URL http://dx.doi.org/10.1287/opre.2016.1494.

[21] Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148-175, 2016. doi: 10.1109/JPROC.2015.2494218. URL http://dx.doi.org/10.1109/JPROC.2015.2494218.

[22] Marta Soare, Alessandro Lazaric, and Rémi Munos. Best-arm identification in linear bandits. In Advances in Neural Information Processing Systems, pages 828-836, 2014.
6,769 | 7,123 | Hybrid Reward Architecture for
Reinforcement Learning
Harm van Seijen1
[email protected]
Mehdi Fatemi1
[email protected]
Joshua Romoff12
[email protected]
Romain Laroche1
[email protected]
Tavian Barnes1
[email protected]
Jeffrey Tsang1
[email protected]
1 Microsoft Maluuba, Montreal, Canada
2 McGill University, Montreal, Canada
Abstract
One of the main challenges in reinforcement learning (RL) is generalisation. In
typical deep RL methods this is achieved by approximating the optimal value
function with a low-dimensional representation using a deep network. While
this approach works well in many domains, in domains where the optimal value
function cannot easily be reduced to a low-dimensional representation, learning can
be very slow and unstable. This paper contributes towards tackling such challenging
domains, by proposing a new method, called Hybrid Reward Architecture (HRA).
HRA takes as input a decomposed reward function and learns a separate value
function for each component reward function. Because each component typically
only depends on a subset of all features, the corresponding value function can be
approximated more easily by a low-dimensional representation, enabling more
effective learning. We demonstrate HRA on a toy-problem and the Atari game Ms.
Pac-Man, where HRA achieves above-human performance.
1 Introduction

In reinforcement learning (RL) (Sutton & Barto, 1998; Szepesvári, 2009), the goal is to find a behaviour policy that maximises the return, the discounted sum of rewards received over time, in a data-driven way. One of the main challenges of RL is to scale methods such that they can be applied to large, real-world problems. Because the state-space of such problems is typically massive, strong generalisation is required to learn a good policy efficiently.
Mnih et al. (2015) achieved a big breakthrough in this area: by combining standard RL techniques
with deep neural networks, they achieved above-human performance on a large number of Atari 2600
games, by learning a policy from pixels. The generalisation properties of their Deep Q-Networks
(DQN) method is achieved by approximating the optimal value function. A value function plays an
important role in RL, because it predicts the expected return, conditioned on a state or state-action
pair. Once the optimal value function is known, an optimal policy can be derived by acting greedily
with respect to it. By modelling the current estimate of the optimal value function with a deep neural
network, DQN carries out a strong generalisation on the value function, and hence on the policy.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The generalisation behaviour of DQN is achieved by regularisation on the model for the optimal
value function. However, if the optimal value function is very complex, then learning an accurate
low-dimensional representation can be challenging or even impossible. Therefore, when the optimal
value function cannot easily be reduced to a low-dimensional representation, we argue to apply a
complementary form of regularisation on the target side. Specifically, we propose to replace the
optimal value function as target for training with an alternative value function that is easier to learn,
but still yields a reasonable?but generally not optimal?policy, when acting greedily with respect to
it.
The key observation behind regularisation on the target function is that two very different value
functions can result in the same policy when an agent acts greedily with respect to them. At the
same time, some value functions are much easier to learn than others. Intrinsic motivation (Stout
et al., 2005; Schmidhuber, 2010) uses this observation to improve learning in sparse-reward domains,
by adding a domain-specific intrinsic reward signal to the reward coming from the environment.
When the intrinsic reward function is potential-based, optimality of the resulting policy is maintained
(Ng et al., 1999). In our case, we aim for simpler value functions that are easier to represent with a
low-dimensional representation.
Our main strategy for constructing an easy-to-learn value function is to decompose the reward
function of the environment into n different reward functions. Each of them is assigned a separate
reinforcement-learning agent. Similar to the Horde architecture (Sutton et al., 2011), all these agents
can learn in parallel on the same sample sequence by using off-policy learning. Each agent gives its
action-values of the current state to an aggregator, which combines them into a single value for each
action. The current action is selected based on these aggregated values.
We test our approach on two domains: a toy-problem, where an agent has to eat 5 randomly located
fruits, and Ms. Pac-Man, one of the hard games from the ALE benchmark set (Bellemare et al.,
2013).
2 Related Work
Our HRA method builds upon the Horde architecture (Sutton et al., 2011). The Horde architecture
consists of a large number of "demons" that learn in parallel via off-policy learning. Each demon
trains a separate general value function (GVF) based on its own policy and pseudo-reward function.
A pseudo-reward can be any feature-based signal that encodes useful information. The Horde
architecture is focused on building up general knowledge about the world, encoded via a large number
of GVFs. HRA focusses on training separate components of the environment-reward function, in
order to more efficiently learn a control policy. UVFA (Schaul et al., 2015) builds on Horde as well,
but extends it along a different direction. UVFA enables generalization across different tasks/goals. It
does not address how to solve a single, complex task, which is the focus of HRA.
Learning with respect to multiple reward functions is also a topic of multi-objective learning (Roijers
et al., 2013). So alternatively, HRA can be viewed as applying multi-objective learning in order to
more efficiently learn a policy for a single reward function.
Reward function decomposition has been studied among others by Russell & Zimdar (2003) and
Sprague & Ballard (2003). This earlier work focusses on strategies that achieve optimal behavior.
Our work is aimed at improving learning-efficiency by using simpler value functions and relaxing
optimality requirements.
There are also similarities between HRA and UNREAL (Jaderberg et al., 2017). Notably, both solve
multiple smaller problems in order to tackle one hard problem. However, the two architectures are
different in their workings, as well as the type of challenge they address. UNREAL is a technique that
boosts representation learning in difficult scenarios. It does so by using auxiliary tasks to help train
the lower-level layers of a deep neural network. An example of such a challenging representationlearning scenario is learning to navigate in the 3D Labyrinth domain. On Atari games, the reported
performance gain of UNREAL is minimal, suggesting that the standard deep RL architecture is
sufficiently powerful to extract the relevant representation. By contrast, the HRA architecture breaks
down a task into smaller pieces. HRA?s multiple smaller tasks are not unsupervised; they are tasks
that are directly relevant to the main task. Furthermore, whereas UNREAL is inherently a deep RL
technique, HRA is agnostic to the type of function approximation used. It can be combined with deep
neural networks, but it also works with exact, tabular representations. HRA is useful for domains
where having a high-quality representation is not sufficient to solve the task efficiently.
Diuk's object-oriented approach (Diuk et al., 2008) was one of the first methods to show efficient
learning in video games. This approach exploits domain knowledge related to the transition dynamic
to efficiently learn a compact transition model, which can then be used to find a solution using
dynamic-programming techniques. This inherently model-based approach has the drawback that
while it efficiently learns a very compact model of the transition dynamics, it does not reduce the
state-space of the problem. Hence, it does not address the main challenge of Ms. Pac-Man: its huge
state-space, which is even for DP methods intractable (Diuk applied his method to an Atari game
with only 6 objects, whereas Ms. Pac-Man has over 150 objects).
Finally, HRA relates to options (Sutton et al., 1999; Bacon et al., 2017), and more generally hierarchical learning (Barto & Mahadevan, 2003; Kulkarni et al., 2016). Options are temporally-extended
actions that, like HRA?s heads, can be trained in parallel based on their own (intrinsic) reward
functions. However, once an option has been trained, the role of its intrinsic reward function is
over. A higher-level agent that uses an option sees it as just another action and evaluates it using its
own reward function. This can yield great speed-ups in learning and help substantially with better
exploration, but they do not directly make the value function of the higher-level agent less complex.
The heads of HRA represent values, trained with components of the environment reward. Even after
training, these values stay relevant, because the aggregator uses them to select its action.
3 Model

Consider a Markov Decision Process $\langle S, A, P, R_{env}, \gamma \rangle$, which models an agent interacting with an environment at discrete time steps $t$. It has a state set $S$, action set $A$, environment reward function $R_{env}: S \times A \times S \to \mathbb{R}$, and transition probability function $P: S \times A \times S \to [0,1]$. At time step $t$, the agent observes state $s_t \in S$ and takes action $a_t \in A$. The agent observes the next state $s_{t+1}$, drawn from the transition probability distribution $P(s_t, a_t, \cdot)$, and a reward $r_t = R_{env}(s_t, a_t, s_{t+1})$. The behaviour is defined by a policy $\pi: S \times A \to [0,1]$, which represents the selection probabilities over actions. The goal of an agent is to find a policy that maximises the expectation of the return, which is the discounted sum of rewards: $G_t := \sum_{i=0}^{\infty} \gamma^i r_{t+i}$, where the discount factor $\gamma \in [0,1]$ controls the importance of immediate rewards versus future rewards. Each policy $\pi$ has a corresponding action-value function that gives the expected return conditioned on the state and action, when acting according to that policy:

$$Q^\pi(s, a) = E\left[G_t \,|\, s_t = s, a_t = a, \pi\right]. \qquad (1)$$

The optimal policy $\pi^*$ can be found by iteratively improving an estimate of the optimal action-value function $Q^*(s, a) := \max_\pi Q^\pi(s, a)$, using sample-based updates. Once $Q^*$ is sufficiently accurately approximated, acting greedily with respect to it yields the optimal policy.
3.1 Hybrid Reward Architecture

The Q-value function is commonly estimated using a function approximator with weight vector $\theta$: $Q(s, a; \theta)$. DQN uses a deep neural network as function approximator and iteratively improves an estimate of $Q^*$ by minimising the sequence of loss functions:

$$L_i(\theta_i) = E_{s,a,r,s'}\left[\left(y_i^{DQN} - Q(s, a; \theta_i)\right)^2\right], \qquad (2)$$

with

$$y_i^{DQN} = r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}). \qquad (3)$$

The weight vector from the previous iteration, $\theta_{i-1}$, is encoded using a separate target network.
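To make equations (2)-(3) concrete, the sketch below computes the DQN regression target for a batch; the optional terminal-state masking is our own addition, since the equations above leave episode termination implicit.

import numpy as np

def dqn_targets(r, q_next_target, gamma=0.99, done=None):
    # y_i = r + gamma * max_a' Q(s', a'; theta_{i-1}), equation (3).
    # q_next_target: action-values of s' from the separate target network,
    # shape (batch, num_actions).
    y = r + gamma * q_next_target.max(axis=1)
    if done is not None:
        y = np.where(done, r, y)  # no bootstrap from terminal states (our assumption)
    return y

q_next = np.array([[1.0, 2.0], [0.5, 0.1]])
print(dqn_targets(np.array([0.0, 1.0]), q_next))  # [1.98, 1.495]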
We refer to the Q-value function that minimises the loss function(s) as the training target. We call a training target consistent if acting greedily with respect to it results in a policy that is optimal under the reward function of the environment; we call a training target semi-consistent if acting greedily with respect to it results in a good policy (but not an optimal one) under the reward function of the environment. For (2), the training target is $Q^*_{env}$, the optimal action-value function under $R_{env}$, which is the default consistent training target.
That a training target is consistent says nothing about how easy it is to learn that target. For example,
if $R_{env}$ is sparse, the default learning objective can be very hard to learn. In this case, adding a
potential-based additional reward signal to $R_{env}$ can yield an alternative consistent learning objective
that is easier to learn. But a sparse environment reward is not the only reason a training target can be
hard to learn. We aim to find an alternative training target for domains where the default training
target $Q^*_{env}$ is hard to learn, due to the function being high-dimensional and hard to generalise.
Our approach is based on a decomposition of the reward function.
We propose to decompose the reward function $R_{env}$ into $n$ reward functions:

$$R_{env}(s, a, s') = \sum_{k=1}^{n} R_k(s, a, s'), \quad \text{for all } s, a, s', \tag{4}$$
and to train a separate reinforcement-learning agent on each of these reward functions. There are
infinitely many different decompositions of a reward function possible, but to achieve value functions
that are easy to learn, the decomposition should be such that each reward function is mainly affected
by only a small number of state variables.
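As an illustration of such a decomposition, the sketch below splits a fruit-collection reward into one component per fruit location, each depending on only a small part of the state; the state accessors (`agent_pos`, `fruit_present`) are hypothetical names, not the paper's code.

```python
def decompose_reward(s, s_next, fruit_locations):
    """Split R_env into n components, one per possible fruit location (Eq. 4).

    Component k pays +1 exactly when fruit k is eaten on this transition,
    so the components sum back to the environment reward.
    """
    rewards = []
    for k, loc in enumerate(fruit_locations):
        ate_fruit_k = s.fruit_present[k] and s_next.agent_pos == loc
        rewards.append(1.0 if ate_fruit_k else 0.0)
    return rewards  # sum(rewards) == R_env(s, a, s')
```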
Because each agent $k$ has its own reward function, it also has its own Q-value function, $Q_k$. In
general, different agents can share multiple lower-level layers of a deep Q-network. Hence, we will
use a single vector $\theta$ to describe the combined weights of the agents. We refer to the combined
network that represents all Q-value functions as the Hybrid Reward Architecture (HRA) (see Figure 1).
Action selection for HRA is based on the sum of the agents' Q-value functions, which we call $Q_{HRA}$:

$$Q_{HRA}(s, a; \theta) := \sum_{k=1}^{n} Q_k(s, a; \theta), \quad \text{for all } s, a. \tag{5}$$
The collection of agents can be viewed alternatively as a single agent with multiple heads, with each
head producing the action-values of the current state under a different reward function.
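A minimal sketch of this aggregation step, assuming `q_heads` holds the per-head outputs of the shared network for the current state:

```python
import numpy as np

def select_action(q_heads):
    """HRA action selection based on Eq. (5).

    q_heads: array of shape (n_heads, n_actions) with the per-head
    action-values Q_k(s, a; theta) for the current state s.
    """
    q_hra = q_heads.sum(axis=0)   # Q_HRA(s, a) = sum_k Q_k(s, a)
    return int(np.argmax(q_hra))  # act greedily on the aggregate
```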
The sequence of loss functions associated with HRA is:

$$L_i(\theta_i) = \mathbb{E}_{s,a,r,s'}\left[\sum_{k=1}^{n} \left(y_{k,i} - Q_k(s, a; \theta_i)\right)^2\right], \tag{6}$$

with

$$y_{k,i} = R_k(s, a, s') + \gamma \max_{a'} Q_k(s', a'; \theta_{i-1}). \tag{7}$$
By minimising these loss functions, the different heads of HRA approximate the optimal action-value
functions under the different reward functions: $Q^*_1, \dots, Q^*_n$. Furthermore, $Q_{HRA}$ approximates $Q^*_{HRA}$,
defined as:

$$Q^*_{HRA}(s, a) := \sum_{k=1}^{n} Q^*_k(s, a) \quad \text{for all } s, a.$$
Note that $Q^*_{HRA}$ is different from $Q^*_{env}$ and generally not consistent.
Figure 1: Illustration of the Hybrid Reward Architecture (single-head network vs. HRA).

An alternative training target is one that results from evaluating the uniformly random policy $\pi$
under each component reward function: $Q^\pi_{HRA}(s, a) := \sum_{k=1}^{n} Q^\pi_k(s, a)$. $Q^\pi_{HRA}$ is equal to $Q^\pi_{env}$, the
Q-values of the random policy under $R_{env}$, as shown below:

$$Q^\pi_{env}(s, a) = \mathbb{E}\left[\sum_{i=0}^{\infty} \gamma^i R_{env}(s_{t+i}, a_{t+i}, s_{t+1+i}) \,\middle|\, s_t = s, a_t = a, \pi\right]$$
$$= \mathbb{E}\left[\sum_{i=0}^{\infty} \gamma^i \sum_{k=1}^{n} R_k(s_{t+i}, a_{t+i}, s_{t+1+i}) \,\middle|\, s_t = s, a_t = a, \pi\right]$$
$$= \sum_{k=1}^{n} \mathbb{E}\left[\sum_{i=0}^{\infty} \gamma^i R_k(s_{t+i}, a_{t+i}, s_{t+1+i}) \,\middle|\, s_t = s, a_t = a, \pi\right]$$
$$= \sum_{k=1}^{n} Q^\pi_k(s, a) := Q^\pi_{HRA}(s, a).$$
This training target can be learned using the expected Sarsa update rule (van Seijen et al., 2009), by
replacing (7) with

$$y_{k,i} = R_k(s, a, s') + \gamma \sum_{a' \in A} \frac{1}{|A|} Q_k(s', a'; \theta_{i-1}). \tag{8}$$
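In code the two per-head targets differ by a single reduction. The sketch below computes, for one head $k$, the target of Equation 7 and the expected-Sarsa target of Equation 8; `q_next` stands for the target network's output $Q_k(s', \cdot; \theta_{i-1})$ and is an assumed input.

```python
import numpy as np

def td_targets(r_k, q_next, gamma=0.99):
    """Per-head training targets for HRA.

    r_k:    component reward R_k(s, a, s') for head k.
    q_next: array of shape (n_actions,), Q_k(s', a'; theta_{i-1}) from
            the separate target network.
    """
    y_opt = r_k + gamma * np.max(q_next)   # Eq. (7): greedy/optimal target
    y_rnd = r_k + gamma * np.mean(q_next)  # Eq. (8): uniform random policy,
                                           # since (1/|A|) * sum_a' Q = mean
    return y_opt, y_rnd
```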
Acting greedily with respect to the Q-values of a random policy might appear to yield a policy that is
just slightly better than random but, surprisingly, we found that for many navigation-based domains
$Q^\pi_{HRA}$ acts as a semi-consistent training target.
3.2 Improving performance further by using high-level domain knowledge
In its basic setting, the only domain knowledge applied to HRA is in the form of the decomposed
reward function. However, one of the strengths of HRA is that it can easily exploit more domain
knowledge, if available. Domain knowledge can be exploited in one of the following ways:
1. Removing irrelevant features. Features that do not affect the received reward in any way
(directly or indirectly) only add noise to the learning process and can be removed.
2. Identifying terminal states. Terminal states are states from which no further reward can
be received; they have by definition a value of 0. Using this knowledge, HRA can refrain
from approximating this value by the value network, such that the weights can be fully used
to represent the non-terminal states.
3. Using pseudo-reward functions. Instead of updating a head of HRA using a component
of the environment reward, it can be updated using a pseudo-reward. In this scenario, a set
of GVFs is trained in parallel using pseudo-rewards.
While these approaches are not specific to HRA, HRA can exploit domain knowledge to a much greater
extent, because it can apply these approaches to each head individually. We show this empirically in
Section 4.1.
4 Experiments
4.1 Fruit Collection task
In our first domain, we consider an agent that has to collect fruits as quickly as possible in a 10 × 10
grid. There are 10 possible fruit locations, spread out across the grid. For each episode, a fruit is
randomly placed on 5 of those 10 locations. The agent starts at a random position. The reward is +1
if a fruit gets eaten and 0 otherwise. An episode ends after all 5 fruits have been eaten or after 300
steps, whichever comes first.
We compare the performance of DQN with HRA using the same network. For HRA, we decompose
the reward function into 10 different reward functions, one per possible fruit location. The network
consists of a binary input layer of length 110, encoding the agent's position and whether there is
a fruit on each location. This is followed by a fully connected hidden layer of length 250. This
layer is connected to 10 heads consisting of 4 linear nodes each, representing the action-values of
the 4 actions under the different reward functions. Finally, the mean of all nodes across heads is
computed using a final linear layer of length 4 that connects the output of corresponding nodes in
each head. This layer has fixed weights with value 1 (i.e., it implements Equation 5). The difference
between HRA and DQN is that DQN updates the network from the fourth layer using loss function
(2), whereas HRA updates the network from the third layer using loss function (6).
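Put together, a forward pass through the architecture described above might look like the following sketch; the random weights and input encoding are illustrative, and the fixed aggregation layer plays the role of Equation 5 (summing or averaging corresponding nodes across heads leaves the greedy action unchanged).

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes from the text: 110-dim binary input, 250 hidden units,
# 10 heads with 4 action-values each.
W1 = rng.normal(scale=0.05, size=(110, 250))
W_heads = rng.normal(scale=0.05, size=(10, 250, 4))

def forward(x):
    """x: binary vector of length 110 (agent position + fruit flags)."""
    h = np.maximum(0.0, x @ W1)                              # shared ReLU layer
    q_heads = np.stack([h @ W_heads[k] for k in range(10)])  # (10 heads, 4 actions)
    q_hra = q_heads.sum(axis=0)                              # fixed aggregation layer
    return q_heads, q_hra
```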
Figure 2: The different network architectures used (DQN, HRA, and HRA with pseudo-rewards).
Besides the full network, we test using different levels of domain knowledge, as outlined in Section
3.2: 1) removing the irrelevant features for each head (providing only the position of the agent + the
corresponding fruit feature); 2) the above plus identifying terminal states; 3) the above plus using
pseudo rewards for learning GVFs to go to each of the 10 locations (instead of learning a value
function associated to the fruit at each location). The advantage is that these GVFs can be trained
even if there is no fruit at a location. The head for a particular location copies the Q-values of the
corresponding GVF if the location currently contains a fruit, or outputs 0s otherwise. We refer to
these as HRA+1, HRA+2 and HRA+3, respectively. For DQN, we also tested a version that was
applied to the same network as HRA+1; we refer to this version as DQN+1.
Training samples are generated by a random policy; the training process is tracked by evaluating the
greedy policy with respect to the learned value function after every episode. For HRA, we performed
experiments with $Q^*_{HRA}$ as training target (using Equation 7), as well as $Q^\pi_{HRA}$ (using Equation 8).
Similarly, for DQN we used the default training target, $Q^*_{env}$, as well as $Q^\pi_{env}$. We optimised the
step-size and the discount factor for each method separately.
The results are shown in Figure 3 for the best settings of each method. For DQN, using $Q^*_{env}$
as training target resulted in the best performance, while for HRA, using $Q^\pi_{HRA}$ resulted in the
best performance. Overall, HRA shows a clear performance boost over DQN, even though the
network is identical. Furthermore, adding different forms of domain knowledge causes further
large improvements. Whereas using a network structure enhanced by domain knowledge improves
performance of HRA, using that same network for DQN results in a decrease in performance. The big
boost in performance that occurs when the terminal states are identified is due to the representation
becoming a one-hot vector. Hence, we removed the hidden layer and directly fed this one-hot vector
6,770 | 7,124 | Approximate Supermodularity Bounds for Experimental Design
Luiz F. O. Chamon and Alejandro Ribeiro
Electrical and Systems Engineering
University of Pennsylvania
{luizf,aribeiro}@seas.upenn.edu
Abstract
This work provides performance guarantees for the greedy solution of experimental design problems. In particular, it focuses on A- and E-optimal designs, for
which typical guarantees do not apply since the mean-square error and the maximum eigenvalue of the estimation error covariance matrix are not supermodular.
To do so, it leverages the concept of approximate supermodularity to derive nonasymptotic worst-case suboptimality bounds for these greedy solutions. These
bounds reveal that as the SNR of the experiments decreases, these cost functions
behave increasingly as supermodular functions. As such, greedy A- and E-optimal
designs approach $(1 - e^{-1})$-optimality. These results reconcile the empirical success of greedy experimental design with the non-supermodularity of the A- and
E-optimality criteria.
1 Introduction
Experimental design consists of selecting which experiments to run or measurements to observe
in order to estimate some variable of interest. Finding good designs is a ubiquitous problem with
applications in regression, semi-supervised learning, multivariate analysis, and sensor placement [1-10].
Nevertheless, selecting a set of k experiments that optimizes a generic figure of merit is NP-hard [11, 12]. In some situations, however, an approximate solution with optimality guarantees can
be obtained in polynomial time. For example, this is possible when the cost function possesses
a diminishing returns property known as supermodularity, in which case greedy search is near-optimal. Greedy solutions are particularly attractive for large-scale problems due to their iterative
nature and because they have lower computational complexity than typical convex relaxations [11,
12].
Supermodularity, however, is a stringent condition not met by important performance metrics. For
instance, it is well-known that neither the mean-square error (MSE) nor the maximum eigenvalue of
the estimation error covariance matrix are supermodular [1, 13, 14]. Nevertheless, greedy algorithms
have been successfully used to minimize these functions despite the lack of theoretical guarantees.
The goal of this paper is to reconcile these observations by showing that these figures of merit, used
in A- and E-optimal experimental designs, are approximately supermodular. To do so, it introduces
different measures of approximate supermodularity and derives near-optimality results for this class
of functions. It then bounds how much the MSE and the maximum eigenvalue of the error covariance
matrix violate supermodularity, leading to performance guarantees for greedy A- and E-optimal
designs. More to the point, the main results of this work are:
1. The greedy solution of the A-optimal design problem is within $(1 - e^{-\bar\alpha})$ of the optimal,
with $\bar\alpha \geq [1 + O(\rho)]^{-1}$, where $\rho$ upper bounds the signal-to-noise ratio (SNR) of the
experiments (Theorem 3).
2. The value of the greedy solution of an E-optimal design problem is at most
$(1 - e^{-1})(f(\mathcal{D}^\star) + k\bar\epsilon)$, where $\bar\epsilon \leq O(\rho)$ (Theorem 4).
3. As the SNR of the experiments decreases, the performance guarantees for greedy A- and
E-optimal designs approach the classical $1 - 1/e$.
This last observation is particularly interesting since careful selection of experiments is more important in low SNR scenarios. In fact, unless the experiments are highly correlated, designs have similar
performances in high SNR. Moreover, note that the guarantees in this paper are not asymptotic and
hold in the worst case, i.e., they hold for problems of any dimension and for designs of any size.
Notation. Lowercase boldface letters represent vectors ($x$), uppercase boldface letters are matrices ($X$), and calligraphic letters denote sets/multisets ($\mathcal{A}$). We write $\#\mathcal{A}$ for the cardinality of $\mathcal{A}$
and $\mathcal{P}(\mathcal{A})$ to denote the set of all finite multisets of $\mathcal{A}$. To say $X$ is a positive semi-definite (PSD)
matrix we write $X \succeq 0$, so that for $X, Y \in \mathbb{R}^{n \times n}$, $X \succeq Y \Leftrightarrow b^T X b \geq b^T Y b$, for all $b \in \mathbb{R}^n$.
Similarly, we write $X \succ 0$ when $X$ is positive definite.
2 Optimal experimental design
Let $\mathcal{E}$ be a pool of possible experiments. The outcome of experiment $e \in \mathcal{E}$ is a multivariate
measurement $y_e \in \mathbb{R}^{n_e}$ defined as

$$y_e = A_e \theta + v_e, \tag{1}$$

where $\theta \in \mathbb{R}^p$ is a parameter vector with a prior distribution such that $\mathbb{E}[\theta] = \bar\theta$ and
$\mathbb{E}[(\theta - \bar\theta)(\theta - \bar\theta)^T] = R_\theta \succ 0$; $A_e$ is an $n_e \times p$ observation matrix; and $v_e \in \mathbb{R}^{n_e}$ is a zero-mean random variable
with arbitrary covariance matrix $R_e = \mathbb{E}[v_e v_e^T] \succ 0$ that represents the experiment uncertainty.
The $\{v_e\}$ are assumed to be uncorrelated across experiments, i.e., $\mathbb{E}[v_e v_f^T] = 0$ for all $e \neq f$, and
independent of $\theta$. These experiments aim to estimate

$$z = H\theta, \tag{2}$$
where $H$ is an $m \times p$ matrix. Appropriately choosing $H$ is important given that the best experiments
to estimate $\theta$ are not necessarily the best experiments to estimate $z$. For instance, if $\theta$ is to be used
for classification, then $H$ can be chosen so as to optimize the design with respect to the output of
the classifier. Alternatively, transductive experimental design can be performed by taking $H$ to be
a collection of data points from a test set [6]. Finally, $H = I$, the identity matrix, recovers the
classical $\theta$-estimation case.
The experiments to be used in the estimation of $z$ are collected in a multiset $\mathcal{D}$ called a design.
Note that $\mathcal{D}$ contains elements of $\mathcal{E}$ with repetitions. Given a design $\mathcal{D}$, it is straightforward to compute an
optimal Bayesian estimate $\hat z_\mathcal{D}$. The estimation error of $\hat z_\mathcal{D}$ is measured by the error covariance
matrix $K(\mathcal{D})$. An expression for the estimator and its error matrix in terms of the problem constants
is given in the following proposition.
Proposition 1 (Bayesian estimator). Let the experiments be defined as in (1). For $M_e = A_e^T R_e^{-1} A_e$ and a design $\mathcal{D}$, the unbiased affine estimator of $z$ with the smallest error covariance
matrix in the PSD cone is given by

$$\hat z_\mathcal{D} = H \left[ R_\theta^{-1} + \sum_{e \in \mathcal{D}} M_e \right]^{-1} \left[ \sum_{e \in \mathcal{D}} A_e^T R_e^{-1} y_e + R_\theta^{-1} \bar\theta \right]. \tag{3}$$

The corresponding error covariance matrix $K(\mathcal{D}) = \mathbb{E}\left[(z - \hat z_\mathcal{D})(z - \hat z_\mathcal{D})^T \mid \bar\theta, \{M_e\}_{e \in \mathcal{D}}\right]$ is
given by the expression

$$K(\mathcal{D}) = H \left[ R_\theta^{-1} + \sum_{e \in \mathcal{D}} M_e \right]^{-1} H^T. \tag{4}$$
Proof. See extended version [15].
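As a concrete reading of (4) and of the criteria introduced below, the following sketch evaluates $K(\mathcal{D})$ and the three scalarizations for a given design; `M` is assumed to be a list of precomputed matrices $M_e$, and `design` is a list of experiment indices (repetitions allowed, since $\mathcal{D}$ is a multiset).

```python
import numpy as np

def error_covariance(design, M, H, R_theta):
    """K(D) from Eq. (4), with M[e] = A_e^T R_e^{-1} A_e."""
    info = np.linalg.inv(R_theta) + sum(M[e] for e in design)
    return H @ np.linalg.inv(info) @ H.T

def criteria(design, M, H, R_theta):
    """The A-, E-, and D-optimality objectives for a design."""
    K = error_covariance(design, M, H, R_theta)
    base = H @ R_theta @ H.T
    return {
        "A": np.trace(K) - np.trace(base),
        "E": np.linalg.eigvalsh(K)[-1] - np.linalg.eigvalsh(base)[-1],
        "D": np.linalg.slogdet(K)[1] - np.linalg.slogdet(base)[1],
    }
```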
The experimental design problem consists of selecting a design $\mathcal{D}$ of cardinality at most $k$ that
minimizes the overall estimation error. This can be explicitly stated as the problem of choosing $\mathcal{D}$
with $\#\mathcal{D} \leq k$ that minimizes the error covariance $K(\mathcal{D})$, whose expression is given in (4). Note
that (4) can account for unregularized (non-Bayesian) experimental design by removing $R_\theta$ and
using a pseudo-inverse [16]. However, the error covariance matrix is no longer monotone in this
case (see Lemma 1). Providing guarantees in this scenario is the subject of future work.
The minimization of the PSD matrix K(D) in experimental design is typically attempted using
scalarization procedures generically known as alphabetical design criteria, the most common of
which are A-, D-, and E-optimal design [17]. These are tantamount to selecting different figures of
merit to compare the matrices K(D). Our focus in this paper is mostly on A- and E-optimal designs,
but we also consider D-optimal designs for comparison. A design D with k experiments is said to be
A-optimal if it minimizes the estimation MSE, which is given by the trace of the covariance matrix:

$$\underset{|\mathcal{D}| \leq k}{\text{minimize}} \quad \operatorname{Tr}\left[K(\mathcal{D})\right] - \operatorname{Tr}\left[H R_\theta H^T\right] \tag{P-A}$$

Notice that it is customary to say a design is A-optimal when $H = I$ in (P-A), whereas the notation
V-optimal is reserved for the case when $H$ is arbitrary [17]. We do not make this distinction here
for conciseness.
A design is E-optimal if, instead of minimizing the MSE as in (P-A), it minimizes the largest eigenvalue of the covariance matrix $K(\mathcal{D})$, i.e.,

$$\underset{|\mathcal{D}| \leq k}{\text{minimize}} \quad \lambda_{\max}\left[K(\mathcal{D})\right] - \lambda_{\max}\left[H R_\theta H^T\right]. \tag{P-E}$$

Since the trace of a matrix is the sum of its eigenvalues, we can think of (P-E) as a robust version of (P-A). While the design in (P-A) seeks to reduce the estimation error in all directions,
the design in (P-E) seeks to reduce the estimation error in the worst direction. Equivalently, given
that $\lambda_{\max}(X) = \max_{\|u\|_2 = 1} u^T X u$, we can interpret (P-E) with $H = I$ as minimizing the MSE
for an adversarial choice of $z$.
A D-optimal design is one in which the objective is to minimize the log-determinant of the estimator's covariance matrix:

$$\underset{|\mathcal{D}| \leq k}{\text{minimize}} \quad \log\det K(\mathcal{D}) - \log\det\left[H R_\theta H^T\right]. \tag{P-D}$$

The motivation for using the objective in (P-D) is that the log-determinant of $K(\mathcal{D})$ is proportional
to the volume of the confidence ellipsoid when the data are Gaussian. Note that the trace, maximum
eigenvalue, and determinant of $H R_\theta H^T$ in (P-A), (P-E), and (P-D) are constants that do not affect
the respective objectives. They are subtracted so that the objectives vanish when $\mathcal{D} = \emptyset$. This
simplifies the exposition in Section 4.
Although the problem formulations in (P-A), (P-E), and (P-D) are integer programs known to be
NP-hard, the use of greedy methods for their solution is widespread and performs very well in
practice. In the case of D-optimal design, this is justified theoretically because the objective of (P-D)
is supermodular, which implies greedy methods are $(1 - e^{-1})$-optimal [2, 11, 12]. The objectives
in (P-A) and (P-E), on the other hand, are not supermodular in general [1, 13, 14], and it is not
known why their greedy optimization yields good results in practice (conditions for the MSE to be
supermodular exist but are restrictive [1]). The goal of this paper is to derive performance guarantees
for greedy solutions of A- and E-optimal design problems. We do so by developing different notions
of approximate supermodularity to show that A- and E-optimal design problems are not far from
supermodular.
Remark 1. Besides its intrinsic value as a minimizer of the volume of the confidence ellipsoid,
(P-D) is often used as a surrogate for (P-A), when A-optimality is considered the appropriate metric.
It is important to point out that this is only justified when the problem has some inherent structure
that suggests the minimum volume ellipsoid is somewhat symmetric. Otherwise, since the volume of
an ellipsoid can be reduced by decreasing the length of a single principal axis, using (P-D) can lead
to designs that perform well (in the MSE sense) along a few directions of the parameter space and
poorly along all others. Formally, this can be seen by comparing the variation of the log-determinant
and trace functions with respect to the eigenvalues of the PSD matrix $K$:

$$\frac{\partial \log\det(K)}{\partial \lambda_j(K)} = \frac{1}{\lambda_j(K)} \quad \text{and} \quad \frac{\partial \operatorname{Tr}(K)}{\partial \lambda_j(K)} = 1.$$
The gradient of the log-determinant is largest in the direction of the smallest eigenvalue of the error
covariance matrix. In contrast, the MSE gives equal weight to all directions of the space. The latter
yields balanced designs that are similar to the former only if those are forced to be balanced by other
problem constraints.
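A tiny numerical illustration of this point (the eigenvalues are invented for the example): shrinking a single eigenvalue of $K$ drastically reduces the log-determinant while barely moving the trace, which is why (P-D) can favor very unbalanced designs.

```python
import numpy as np

lam = np.array([1.0, 1.0, 1.0])          # eigenvalues of some error covariance K
lam_skewed = np.array([1e-4, 1.0, 1.0])  # one principal axis shrunk

print(np.sum(np.log(lam)), np.sum(lam))                # log det = 0.0,  trace = 3.0
print(np.sum(np.log(lam_skewed)), np.sum(lam_skewed))  # log det ~ -9.2, trace ~ 2.0
```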
3 Approximate supermodularity
Consider a multiset function $f : \mathcal{P}(\mathcal{E}) \to \mathbb{R}$ for which the value corresponding to an arbitrary
multiset $\mathcal{D} \in \mathcal{P}(\mathcal{E})$ is denoted by $f(\mathcal{D})$. We say the function $f$ is normalized if $f(\emptyset) = 0$, and we
say $f$ is monotone decreasing if for all multisets $\mathcal{A} \subseteq \mathcal{B}$ it holds that $f(\mathcal{A}) \geq f(\mathcal{B})$. Observe that if a
function is normalized and monotone decreasing, it must be that $f(\mathcal{D}) \leq 0$ for all $\mathcal{D}$. The objectives
of (P-A), (P-E), and (P-D) are normalized and monotone decreasing multiset functions, since adding
experiments to a design decreases the covariance matrix uniformly in the PSD cone (see Lemma 1).
We say that a multiset function $f$ is supermodular if for all pairs of multisets $\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E})$, $\mathcal{A} \subseteq \mathcal{B}$,
and elements $u \in \mathcal{E}$ it holds that

$$f(\mathcal{A}) - f(\mathcal{A} \cup \{u\}) \geq f(\mathcal{B}) - f(\mathcal{B} \cup \{u\}).$$
Supermodular functions encode a notion of diminishing returns as sets grow. Their relevance in this
paper is due to the celebrated bound on the suboptimality of their greedy minimization [18]. Specifically, construct a greedy solution by starting with $\mathcal{G}_0 = \emptyset$ and incorporating elements (experiments)
$e \in \mathcal{E}$ greedily, so that at the $h$-th iteration we incorporate the element whose addition to $\mathcal{G}_{h-1}$ results
in the maximum reduction of $f$:

$$\mathcal{G}_h = \mathcal{G}_{h-1} \cup \{e\}, \quad \text{with} \quad e = \operatorname*{argmin}_{u \in \mathcal{E}} f(\mathcal{G}_{h-1} \cup \{u\}). \tag{5}$$

The recursion in (5) is repeated for $k$ steps to obtain a greedy solution with $k$ elements. Then, if $f$ is
monotone decreasing and supermodular,

$$f(\mathcal{G}_k) \leq (1 - e^{-1}) f(\mathcal{D}^\star), \tag{6}$$

where $\mathcal{D}^\star = \operatorname*{argmin}_{|\mathcal{D}| \leq k} f(\mathcal{D})$ is the optimal design selection of cardinality not larger than $k$ [18].
We emphasize that, in contrast to the classical greedy algorithm, (5) allows the same element to be
selected multiple times.
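The recursion (5) is straightforward to implement; the cost function is passed in as a callable, and the candidate pool is scanned with repetitions allowed, which is what distinguishes this multiset version from the classical greedy algorithm.

```python
def greedy_design(f, experiments, k):
    """Greedy minimization (Eq. 5) of a multiset function f.

    f: callable mapping a list of experiment indices (a multiset) to a value.
    Returns the greedy design G_k.
    """
    G = []
    for _ in range(k):
        # the same experiment may be picked again in later iterations
        best = min(experiments, key=lambda u: f(G + [u]))
        G.append(best)
    return G
```

Combined with the `criteria` sketch above, `greedy_design(lambda D: criteria(D, M, H, R_theta)["A"], range(len(M)), k)` produces a greedy A-optimal design.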
The optimality guarantee in (6) applies to (P-D) because its objective is supermodular. This is
not true of the cost functions of (P-A) and (P-E). We address this issue by postulating that if a
function does not violate supermodularity too much, then its greedy minimization should have close-to-supermodular performance. To formalize this idea, we introduce two measures of approximate
supermodularity and derive near-optimal bounds based on these properties. It is worth noting that, as
intuitive as this may be, such results are not straightforward. In fact, [19] showed that even functions "close" to supermodular cannot be optimized in polynomial time.
We start with the following multiplicative relaxation of the supermodular property.
Definition 1 (α-supermodularity). A multiset function $f : \mathcal{P}(\mathcal{E}) \to \mathbb{R}$ is α-supermodular, for $\alpha : \mathbb{N} \times \mathbb{N} \to \mathbb{R}$, if for all multisets $\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E})$, $\mathcal{A} \subseteq \mathcal{B}$, and elements $u \in \mathcal{E}$ it holds that

$$f(\mathcal{A}) - f(\mathcal{A} \cup \{u\}) \geq \alpha(\#\mathcal{A}, \#\mathcal{B}) \left[ f(\mathcal{B}) - f(\mathcal{B} \cup \{u\}) \right]. \tag{7}$$

Notice that for $\alpha \geq 1$, (7) reduces to the original definition of supermodularity, in which case we refer
to the function simply as supermodular [11, 12]. On the other hand, when $\alpha < 1$, $f$ is said to be
approximately supermodular. Notice that if $f$ is decreasing, then (7) always holds for $\alpha \leq 0$. We
are therefore interested in the largest $\alpha$ for which (7) holds, i.e.,

$$\alpha(a, b) = \min_{\substack{\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E}) \\ \mathcal{A} \subseteq \mathcal{B},\ u \in \mathcal{E} \\ \#\mathcal{A} = a,\ \#\mathcal{B} = b}} \frac{f(\mathcal{A}) - f(\mathcal{A} \cup \{u\})}{f(\mathcal{B}) - f(\mathcal{B} \cup \{u\})} \tag{8}$$

Interestingly, α not only measures how much $f$ violates supermodularity, but it also quantifies the
loss in performance guarantee incurred from these violations.
Theorem 1. Let $f$ be a normalized, monotone decreasing, and α-supermodular multiset function.
Then, for $\bar\alpha = \min_{a < \ell,\, b < \ell + k} \alpha(a, b)$, the greedy solution from (5) obeys

$$f(\mathcal{G}_\ell) \leq \left[1 - \prod_{h=0}^{\ell-1}\left(1 - \left(\sum_{s=0}^{k-1} \alpha(h, h+s)^{-1}\right)^{-1}\right)\right] f(\mathcal{D}^\star) \leq \left(1 - e^{-\bar\alpha \ell / k}\right) f(\mathcal{D}^\star). \tag{9}$$
Proof. See extended version [15].
Theorem 1 bounds the suboptimality of the greedy solution from (5) when its objective is α-supermodular. At the same time, it quantifies the effect of relaxing the supermodularity hypothesis
typically used to provide performance guarantees in these settings. In fact, if $f$ is supermodular ($\alpha \geq 1$) and for $\ell = k$, we recover the $1 - e^{-1} \approx 0.63$ guarantee from [18]. On the other hand,
for an approximately supermodular function ($\bar\alpha < 1$), the result in (9) shows that the 63% guarantee
can be recovered by selecting a set of size $\ell = \bar\alpha^{-1} k$. Thus, α not only measures how much $f$
violates supermodularity, but also gives a factor by which a solution set must increase to obtain a
supermodular near-optimal certificate. Similar to the original bound in [18], it is worth noting that (9)
is not tight and that better results are common in practice (see Section 5).
Although α-supermodularity gives us a multiplicative approximation factor, finding meaningful
bounds on α can be challenging for certain multiset functions, such as the E-optimality criterion
in (P-E). It is therefore useful to look at approximate supermodularity from a different perspective,
as in the following definition.
Definition 2 (ε-supermodularity). A multiset function $f : \mathcal{P}(\mathcal{E}) \to \mathbb{R}$ is ε-supermodular, for $\epsilon : \mathbb{N} \times \mathbb{N} \to \mathbb{R}$, if for all multisets $\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E})$, $\mathcal{A} \subseteq \mathcal{B}$, and all $u \in \mathcal{E}$ it holds that

$$f(\mathcal{A}) - f(\mathcal{A} \cup \{u\}) \geq f(\mathcal{B}) - f(\mathcal{B} \cup \{u\}) - \epsilon(\#\mathcal{A}, \#\mathcal{B}). \tag{10}$$

Again, we say $f$ is supermodular if $\epsilon(a, b) \leq 0$ for all $a, b$, and approximately supermodular otherwise. As with α, we want the best ε that satisfies (10), which is given by

$$\epsilon(a, b) = \max_{\substack{\mathcal{A}, \mathcal{B} \in \mathcal{P}(\mathcal{E}) \\ \mathcal{A} \subseteq \mathcal{B},\ u \in \mathcal{E} \\ \#\mathcal{A} = a,\ \#\mathcal{B} = b}} f(\mathcal{B}) - f(\mathcal{B} \cup \{u\}) - f(\mathcal{A}) + f(\mathcal{A} \cup \{u\}). \tag{11}$$

In contrast to α-supermodularity, we obtain an additive approximation guarantee for the greedy
minimization of ε-supermodular functions.
Theorem 2. Let $f$ be a normalized, monotone decreasing, and ε-supermodular multiset function.
Then, for $\bar\epsilon = \max_{a < \ell,\, b < \ell + k} \epsilon(a, b)$, the greedy solution from (5) obeys

$$f(\mathcal{G}_\ell) \leq \left[1 - \left(1 - \frac{1}{k}\right)^{\ell}\right] f(\mathcal{D}^\star) + \frac{1}{k} \sum_{h=0}^{\ell-1} \sum_{s=0}^{k-1} \epsilon(h, h+s) \left(1 - \frac{1}{k}\right)^{\ell-1-h}$$
$$\leq \left(1 - e^{-\ell/k}\right)\left(f(\mathcal{D}^\star) + k\bar\epsilon\right) \tag{12}$$
Proof. See extended version [15].
As before, ε quantifies the loss in performance guarantee due to relaxing supermodularity. Indeed,
(12) reveals that ε-supermodular functions have the same guarantees as a supermodular function
up to an additive factor of $O(k\bar\epsilon)$. In fact, if $\bar\epsilon \leq (ek)^{-1} |f(\mathcal{D}^\star)|$ (recall that $f(\mathcal{D}^\star) \leq 0$ due to
normalization), then taking $\ell = 3k$ recovers the supermodular 63% approximation factor. This
same factor is obtained for $\bar\alpha \geq 1/3$-supermodular functions.
With the certificates of Theorems 1 and 2 in hand, we now proceed with the study of the A- and E-optimality criteria. In the next section, we derive explicit bounds on their α- and ε-supermodularity,
respectively, thus providing near-optimal performance guarantees for greedy A- and E-optimal designs.
4 Near-optimal experimental design
Theorems 1 and 2 apply to functions that are (i) normalized, (ii) monotone decreasing, and (iii) approximately supermodular. By construction, the objectives of (P-A) and (P-E) are normalized, satisfying (i). The
following lemma establishes that they are also monotone decreasing, satisfying (ii), by showing that $K$ is a
decreasing set function in the PSD cone. The definition of the Loewner order and the monotonicity of
the trace operator readily give the desired results [16].
Lemma 1. The matrix-valued set function $K(\mathcal{D})$ in (4) is monotonically decreasing with respect to
the PSD cone, i.e., $\mathcal{A} \subseteq \mathcal{B} \Rightarrow K(\mathcal{A}) \succeq K(\mathcal{B})$.
Proof. See extended version [15].
The main results of this section provide the final ingredient (iii) for Theorems 1 and 2 by bounding
the approximate supermodularity of the A- and E-optimality criteria. We start by showing that the
objective of (P-A) is α-supermodular.
Theorem 3. The objective of (P-A) is α-supermodular with

$$\alpha(a, b) \geq \frac{1}{\kappa(H)^2} \, \frac{\lambda_{\min}\left(R_\theta^{-1}\right)}{\lambda_{\max}\left(R_\theta^{-1}\right) + a\,\ell_{\max}}, \quad \text{for all } b, \tag{13}$$

where $\ell_{\max} = \max_{e \in \mathcal{E}} \lambda_{\max}(M_e)$ and $\kappa(H) = \sigma_{\max} / \sigma_{\min}$ is the $\ell_2$-norm condition number of $H$,
with $\sigma_{\max}$ and $\sigma_{\min}$ denoting the largest and smallest singular values of $H$, respectively.
Proof. See extended version [15].
Theorem 3 bounds the α-supermodularity of the objective of (P-A) in terms of the condition number
of $H$, the prior covariance matrix, and the measurement SNR. To facilitate the interpretation of
this result, let the SNR of the $e$-th experiment be $\rho_e = \operatorname{Tr}[M_e]$ and suppose $R_\theta = \sigma_\theta^2 I$, $H = I$,
and $\rho_e \leq \rho$ for all $e \in \mathcal{E}$. Then, for $\ell = k$ greedy iterations, (13) implies

$$\bar\alpha \geq \frac{1}{1 + 2k\sigma_\theta^2 \rho},$$
for $\bar\alpha$ as in Theorem 1. This deceptively simple bound reveals that the MSE behaves as a supermodular function at low SNRs. Formally, $\bar\alpha \to 1$ as $\rho \to 0$. In contrast, the performance guarantee
from Theorem 3 degrades in high-SNR scenarios. In this case, however, greedy methods are expected to give good results, since designs yield similar estimation errors (as shown in Section 5).
The greedy solution of (P-A) also approaches the $(1 - e^{-1})$ guarantee when the prior on $\theta$ is concentrated ($\sigma_\theta^2 \ll 1$), i.e., when considerable confidence is placed on prior knowledge or the problem is
heavily regularized.
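Plugging illustrative numbers into this bound shows how quickly it tightens as the SNR drops:

```python
k, sigma_theta2 = 40, 1.0            # illustrative design size and prior variance

for rho in [1.0, 1e-1, 1e-2, 1e-3]:  # decreasing experiment SNR
    alpha_bar = 1.0 / (1.0 + 2 * k * sigma_theta2 * rho)
    print(rho, round(alpha_bar, 3))  # 0.012, 0.111, 0.556, 0.926 -> approaches 1
```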
These observations also hold for a generic $H$ as long as it is well-conditioned. Even if $\kappa(H) \gg 1$,
we can replace $H$ by $\tilde H = DH$ for some diagonal matrix $D \succ 0$ without affecting the design,
since $z$ is arbitrarily scaled. Then, the scaling $D$ can be designed to minimize the condition number
of $\tilde H$ by leveraging preconditioning and balancing methods [20, 21].
Proceeding, we derive guarantees for E-optimal designs using ε-supermodularity.
Theorem 4. The cost function of (P-E) is ε-supermodular with

$$\epsilon(a, b) \leq (b - a)\,\sigma_{\max}(H)^2\,\lambda_{\max}(R_\theta)^2\,\ell_{\max}, \tag{14}$$

where $\ell_{\max} = \max_{e \in \mathcal{E}} \lambda_{\max}(M_e)$ and $\sigma_{\max}(H)$ is the largest singular value of $H$.
Proof. See extended version [15].
Under the same assumptions as above, Theorem 4 gives

$$\bar\epsilon \leq 2k\sigma_\theta^4 \rho,$$

for $\bar\epsilon$ as in Theorem 2. Thus, $\bar\epsilon \to 0$ as $\rho \to 0$. In other words, the behavior of the objective of (P-E)
approaches that of a supermodular function as the SNR decreases. The same holds for concentrated
Figure 1: A-optimal design: (a) Thm. 3 (equivalent $\bar\alpha$ vs. SNR in dB); (b) A-optimality (low SNR); (c) A-optimality (high SNR). Panels (b) and (c) compare Greedy, [10], Truncation, and Random designs over the design size. The plots show the unnormalized A-optimality value for clarity.

Figure 2: E-optimal design: (a) Thm. 4 (equivalent $\bar\epsilon$ vs. SNR in dB); (b) E-optimality (low SNR); (c) E-optimality (high SNR). The plots show the unnormalized E-optimality value for clarity.
priors, i.e., $\lim_{\sigma_\theta^2 \to 0} \bar\epsilon = 0$. Once again, it is worth noting that when the SNRs of the experiments
are large, almost every design has the same E-optimal performance as long as the experiments are
not too correlated. Thus, greedy design is also expected to give good results under these conditions.
Finally, the proofs of Theorems 3 and 4 suggest that better bounds can be found when the designs are
constructed without replacement, i.e., when only one of each experiment is allowed in the design.
5 Numerical examples
In this section, we illustrate the previous results with some numerical examples. To do so, we draw the
elements of $A_e$ from an i.i.d. zero-mean Gaussian random variable with variance $1/p$ and $p = 20$.
The noise terms $\{v_e\}$ are also Gaussian random variables with $R_e = \sigma_v^2 I$. We take $\sigma_v^2 = 10^{-1}$ in high-SNR
and $\sigma_v^2 = 10$ in low-SNR simulations. The experiment pool contains $\#\mathcal{E} = 200$ experiments.
Starting with A-optimal design, we display the bound from Theorem 3 in Figure 1a for multivariate
measurements of size $n_e = 5$ and designs of size $k = 40$. Here, the "equivalent $\bar\alpha$" is the single $\bar\alpha$
that gives the same near-optimal certificate (9) as using (13). As expected, $\bar\alpha$ approaches 1 as the
SNR decreases. In fact, at $-10$ dB it is already close to 0.75, which means that by selecting
a design of size $\ell = 55$ we would be within $(1 - e^{-1})$ of the optimal design of size $k = 40$.
Figures 1b and 1c compare greedy A-optimal designs with the convex relaxation of (P-A) in low-
and high-SNR scenarios. The designs are obtained from the continuous solutions using the hard
constraint, with-replacement method of [10] and a simple design truncation as in [22]. Therefore,
these simulations consider univariate measurements ($n_e = 1$). For comparison, a design sampled
uniformly at random with replacement from $\mathcal{E}$ is also presented. Note that, as mentioned before,
the performance difference across designs is small for high SNR (notice the scale in Figures 1c
and 2c), so that even random designs perform well.
For the E-optimality criterion, the bound from Theorem 4 is shown in Figure 2a, again for multivariate measurements of size $n_e = 5$ and designs of size $k = 40$. Once again, the "equivalent $\bar\epsilon$" is the single
value that yields the same guarantee as using (14). In this case, the bound degradation in
high SNR is more pronounced. This reflects the difficulty of bounding the approximate supermodularity of the E-optimality cost function. Still, Figures 2b and 2c show that greedy E-optimal designs
have good performance when compared to convex relaxations or random designs. Note that, though
it is not intended for E-optimal designs, we again display the results of the sampling post-processing
from [10]. In Figure 2b, the random design is omitted due to its poor performance.
5.1 Cold-start survey design for recommender systems
Recommender systems use semi-supervised learning methods to predict user ratings based on a few
rated examples. These methods are useful, for instance, to streaming service providers who are interested in using predicted ratings of movies to provide recommendations. For new users, these systems
suffer from a "cold-start problem", which refers to the fact that it is hard to provide accurate recommendations without knowing a user's preference on at least a few items. For this reason, services
explicitly ask users for ratings in initial surveys before emitting any recommendation. Selecting
which movies should be rated to better predict a user's preferences can be seen as an experimental design problem. In the following example, we use a subset of the EachMovie dataset [23] to
illustrate how greedy experimental design can be applied to address this problem.

We randomly selected a training and test set containing 9000 and 3000 users, respectively. Following
the notation from Section 2, each experiment in $\mathcal{E}$ represents a movie ($\#\mathcal{E} = 1622$) and the observation vector $A_e$ collects the ratings of movie $e$ for each user in the training set. The parameter $\theta$
is used to express the rating of a new user in terms of those in the training set. Our hope is that we
can extrapolate the observed ratings, i.e., $\{y_e\}_{e \in \mathcal{D}}$, by $A_f \theta$ for $f \notin \mathcal{D}$. Since the mean absolute
error (MAE) is commonly used in this setting, we choose to work with the A-optimality criterion.
We also let $H = I$ and take a non-informative prior $\bar\theta = 0$ and $R_\theta = \sigma_\theta^2 I$ with $\sigma_\theta^2 = 100$.
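Under these choices the survey reduces to a greedy A-optimal design with $H = I$, where each movie contributes $M_e$ built from its training-set rating vector. A compressed sketch (function and variable names, and the unit noise variance, are illustrative):

```python
import numpy as np

def survey_design(ratings, k, sigma_theta2=100.0, sigma_v2=1.0):
    """ratings: (n_movies, n_train_users) matrix; rows play the role of A_e."""
    p = ratings.shape[1]
    M = [np.outer(a, a) / sigma_v2 for a in ratings]  # M_e = A_e^T R_e^{-1} A_e
    design, info = [], np.eye(p) / sigma_theta2       # info = R_theta^{-1} so far
    for _ in range(k):
        # pick the movie that most reduces the MSE Tr[(R_theta^{-1} + sum M_e)^{-1}]
        best = min(range(len(M)),
                   key=lambda e: np.trace(np.linalg.inv(info + M[e])))
        design.append(best)
        info = info + M[best]
    return design
```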
As expected, greedy A-optimal design is able to find small sets of movies that lead to good predictions. For $k = 10$, for example, MAE = 2.3, steadily decreasing until MAE < 1.8 for $k \geq 35$. These
are considerably better results than a random movie selection, for which the MAE varies between 2.8
and 3.3 for $k$ between 10 and 50. Instead of focusing on the raw ratings, we may be interested in
predicting the user's favorite genre. This is a challenging task due to the heavily skewed dataset.
For instance, 32% of the movies are dramas whereas only 0.02% are animations. Still, we use the
simplest possible classifier by selecting the category with the highest average estimated rating. By using greedy design, we can obtain a misclassification rate of approximately 25% after observing 100
ratings, compared to an error rate of over 45% for a random design.
6 Related work
Optimal experimental design. Classical experimental design typically relies on convex relaxations to solve optimal design problems [17, 22]. However, because these are semidefinite programs (SDPs) or sequential second-order cone programs (SOCPs), their computational complexity
can hinder their use in large-scale problems [5, 7, 22, 24]. Another issue with these relaxations
is that some sort of post-processing is required to extract a valid design from their continuous solutions [5, 22]. For D-optimal designs, this can be done with $(1 - e^{-1})$-optimality [25, 26]. For
A-optimal designs, [10] provides near-optimal randomized schemes for large enough $k$.
Greedy optimization guarantees. The $(1 - e^{-1})$-suboptimality of greedy search for supermodular minimization under cardinality constraints was established in [18]. To deal with the fact that
the MSE is not supermodular, α-supermodularity with constant α was introduced in [27] along
with explicit lower bounds. This concept is related to the submodularity ratio introduced by [3]
to obtain guarantees similar to Theorem 1 for dictionary selection and forward regression. However, the bounds on the submodularity ratio from [3, 28] depend on the sparse eigenvalues of $K$ or
restricted strong convexity constants of the A-optimal objective, which are NP-hard to compute. Explicit bounds for the submodularity ratio of A-optimal experimental design were recently obtained
in [29]. Nevertheless, neither [27] nor [29] consider multisets. Hence, to apply their results we must
operate on an extended ground set containing $k$ unique copies of each experiment, which makes the
bounds uninformative. For instance, in the setting of Section 5, Theorem 3 guarantees 0.1-optimality
at 0 dB SNR whereas [29] guarantees $2.5 \times 10^{-6}$-optimality. The concept of ε-supermodularity was
first explored in [30] for a constant ε. There, guarantees for dictionary selection were derived by
bounding ε using an incoherence assumption on the $A_e$. Finally, a more stringent definition of approximately submodular functions was put forward in [19], requiring the function to be upper and
lower bounded by a submodular function. They show strong impossibility results unless the function is $O(1/k)$-close to submodular. Approximate submodularity is sometimes referred to as weak
submodularity (e.g., [28]), though it is not related to the weak submodularity concept from [31].
Conclusions
Greedy search is known to be an empirically effective method to find A- and E-optimal experimental
designs despite the fact that these objectives are not supermodular. We reconciled these observations
by showing that the A- and E-optimality criteria are approximately supermodular and deriving nearoptimal guarantees for this class of functions. By quantifying their supermodularity violations,
we showed that the behavior of the MSE and the maximum eigenvalue of the error covariance
matrix becomes increasingly supermodular as the SNR decreases. An important open question is
whether these results can be improved using additional knowledge. Can we exploit some structure
of the observation matrices (e.g., Fourier, random)? What if the parameter vector is sparse but
with unknown support (e.g., compressive sensing)? Are there practical experiment properties other
than the SNR that lead to small supermodular violations? Finally, we hope that this approximate
supermodularity framework can be extended to other problems.
References
[1] A. Das and D. Kempe, ?Algorithms for subset selection in linear regression,? in ACM Symp.
on Theory of Comput., 2008, pp. 45?54.
[2] A. Krause, A. Singh, and C. Guestrin, ?Near-optimal sensor placements in Gaussian processes:
Theory, efficient algorithms and empirical studies,? J. Mach. Learning Research, vol. 9, pp.
235?284, 2008.
[3] A. Das and D. Kempe, ?Submodular meets spectral: Greedy algorithms for subset selection,
sparse approximation and dictionary selection,? in Int. Conf. on Mach. Learning, 2011.
[4] Y. Washizawa, ?Subset kernel principal component analysis,? in Int. Workshop on Mach.
Learning for Signal Process., 2009.
[5] S. Joshi and S. Boyd, ?Sensor selection via convex optimization,? IEEE Trans. Signal Process.,
vol. 57[2], pp. 451?462, 2009.
[6] K. Yu, J. Bi, and V. Tresp, ?Active learning via transductive experimental design,? in International Conference on Machine Learning, 2006, pp. 1081?1088.
[7] P. Flaherty, A. Arkin, and M.I. Jordan, ?Robust design of biological experiments,? in Advances
in Neural Information Processing Systems, 2006, pp. 363?370.
[8] X. Zhu, ?Semi-supervised learning literature survey,? 2008, http://pages.cs.wisc.edu/
~jerryzhu/research/ssl/semireview.html.
[9] S. Liu, S.P. Chepuri, M. Fardad, E. Ma?sazade, G. Leus, and P.K. Varshney, ?Sensor selection
for estimation with correlated measurement noise,? IEEE Trans. Signal Process., vol. 64[13],
pp. 3509?3522, 2016.
[10] Y. Wang, A.W. Yu, and A. Singh, ?On computationally tractable selection of experiments in
regression models,? 2017, arXiv:1601.02068v5.
[11] F. Bach, ?Learning with submodular functions: A convex optimization perspective,? Foundations and Trends in Machine Learning, vol. 6[2-3], pp. 145?373, 2013.
[12] A. Krause and D. Golovin, ?Submodular function maximization,? in Tractability: Practical
Approaches to Hard Problems. Cambridge University Press, 2014.
[13] G. Sagnol, ?Approximation of a maximum-submodular-coverage problem involving spectral
functions, with application to experimental designs,? Discrete Appl. Math., vol. 161[1-2], pp.
258?276, 2013.
[14] T.H. Summers, F.L. Cortesi, and J. Lygeros, ?On submodularity and controllability in complex
dynamical networks,? IEEE Trans. Contr. Netw. Syst., vol. 3[1], pp. 91?101, 2016.
[15] L. F. O. Chamon and A. Ribeiro, ?Approximate supermodularity bounds for experimental
design,? 2017, arXiv:1711.01501.
9
[16] R.A. Horn and C.R. Johnson, Matrix analysis, Cambridge University Press, 2013.
[17] F. Pukelsheim, Optimal Design of Experiments, SIAM, 2006.
[18] G.L. Nemhauser, L.A. Wolsey, and M.L. Fisher, ?An analysis of approximations for maximizing submodular set functions?I,? Mathematical Programming, vol. 14[1], pp. 265?294,
1978.
[19] T. Horel and Y. Singer, ?Maximization of approximately submodular functions,? in Advances
in Neural Information Processing Systems, 2016, pp. 3045?3053.
[20] M. Benzi, ?Preconditioning techniques for large linear systems: A survey,? Journal of Computational Physics, vol. 182[2], pp. 418?477, 2002.
[21] R.D. Braatz and M. Morari, ?Minimizing the Euclidian condition number,? SIAM Journal of
Control and Optimization, vol. 32[6], pp. 1763?1768, 1994.
[22] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge University Press, 2004.
[23] Digital Equipment Corporation, ?EachMovie dataset,? http://www.gatsby.ucl.ac.uk/
~chuwei/data/EachMovie/.
[24] G. Sagnol, ?Computing optimal designs of multiresponse experiments reduces to second-order
cone programming,? Journal of Statistical Planning and Inference, vol. 141[5], pp. 1684?1708,
2011.
[25] T. Horel, S. Ioannidis, and M. Muthukrishnan, ?Budget feasible mechanisms for experimental
design,? in Latin American Theoretical Informatics Symposium, 2014.
[26] A.A. Ageev and M.I. Sviridenko, ?Pipage rounding: A new method of constructing algorithms
with proven performance guarantee,? Journal of Combinatorial Optimization, vol. 8[3], pp.
307?328, 2004.
[27] L.F.O. Chamon and A. Ribeiro, ?Near-optimality of greedy set selection in the sampling of
graph signals,? in Global Conf. on Signal and Inform. Process., 2016.
[28] E.R. Elenberg, R. Khanna, A.G. Dimakis, and S. Negahban, ?Restricted strong convexity
implies weak submodularity,? 2016, arXiv:1612.00804.
[29] A. Bian, J.M. Buhmann, A. Krause, and S. Tschiatschek, ?Guarantees for greedy maximization
of non-submodular functions with applications,? in ICML, 2017.
[30] A. Krause and V. Cevher, ?Submodular dictionary selection for sparse representation,? in Int.
Conf. on Mach. Learning, 2010.
[31] A. Borodin, D. T. M. Le, and Y. Ye,
?Weakly submodular functions,? 2014,
arXiv:1401.6697v5.
10
6,771 | 7,125 | Maximizing Subset Accuracy with Recurrent Neural Networks in Multi-label Classification
Jinseok Nam¹, Eneldo Loza Mencía¹, Hyunwoo J. Kim², and Johannes Fürnkranz¹
¹Knowledge Engineering Group, TU Darmstadt
²Department of Computer Sciences, University of Wisconsin-Madison
Abstract
Multi-label classification is the task of predicting a set of labels for a given input
instance. Classifier chains are a state-of-the-art method for tackling such problems,
which essentially converts this problem into a sequential prediction problem, where
the labels are first ordered in an arbitrary fashion, and the task is to predict a
sequence of binary values for these labels. In this paper, we replace classifier
chains with recurrent neural networks, a sequence-to-sequence prediction algorithm
which has recently been successfully applied to sequential prediction tasks in many
domains. The key advantage of this approach is that it allows to focus on the
prediction of the positive labels only, a much smaller set than the full set of possible
labels. Moreover, parameter sharing across all classifiers allows to better exploit
information of previous decisions. As both, classifier chains and recurrent neural
networks depend on a fixed ordering of the labels, which is typically not part of a
multi-label problem specification, we also compare different ways of ordering the
label set, and give some recommendations on suitable ordering strategies.
1 Introduction
There is a growing need for developing scalable multi-label classification (MLC) systems, which, e.g.,
allow to assign multiple topic terms to a document or to identify objects in an image. While the simple
binary relevance (BR) method approaches this problem by treating multiple targets independently,
current research in MLC has focused on designing algorithms that exploit the underlying label
structures. More formally, MLC is the task of learning a function f that maps inputs to subsets of
a label set L = {1, 2, ..., L}. Consider a set of N samples D = {(x_n, y_n)}_{n=1}^{N}, each of which
consists of an input x ∈ X and its target y ∈ Y, and the (x_n, y_n) are assumed to be i.i.d. following
an unknown distribution P(X, Y) over a sample space X × Y. We let T_n = |y_n| denote the size
of the label set associated to x_n and C = (1/N) Σ_{n=1}^{N} T_n the cardinality of D, which is usually much
smaller than L. Often, it is convenient to view y not as a subset of L but as a binary vector of size L,
i.e., y ∈ {0, 1}^L. Given a function f parameterized by θ that returns predicted outputs ŷ for inputs x,
i.e., ŷ ≈ f(x; θ), and a loss function ℓ : (y, ŷ) → R which measures the discrepancy between y and
ŷ, the goal is to find an optimal parametrization f^* that minimizes the expected loss on an unknown
sample drawn from P(X, Y), such that

    f^* = \arg\min_f \, \mathbb{E}_X\big[\, \mathbb{E}_{Y|X}[\, \ell(Y, f(X; \theta)) \,] \,\big].

While the expected risk minimization over P(X, Y) is intractable, for a given observation x it can be simplified
to f^*(x) = \arg\min_f \mathbb{E}_{Y|X}[\ell(Y, f(x; \theta))]. A natural choice for the loss function is the subset 0/1
loss defined as ℓ_{0/1}(y, f(x; θ)) = I[y ≠ ŷ], which is a generalization of the 0/1 loss in binary
classification to multi-label problems. It can be interpreted as an objective to find the mode of the
joint probability of label sets y given instances x: E_{Y|X}[ℓ_{0/1}(Y, ŷ)] = 1 − P(Y = y | X = x).
Conversely, 1 − ℓ_{0/1}(y, f(x; θ)) is often referred to as subset accuracy in the literature.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2
Subset Accuracy Maximization in Multi-label Classification
For maximizing subset accuracy, there are two principled ways for reducing a MLC problem to
multiple subproblems. The simplest method, label powerset (LP), defines a set of all possible label
combinations S_L = {{1}, {2}, ..., {1, 2, ..., L}}, from which a new class label is assigned to each
label subset consisting of positive labels in D. LP, then, addresses MLC as a multi-class classification
problem with min(N, 2^L) possible labels such that

    P(y_1, y_2, \dots, y_L \mid x) \;\xrightarrow{\text{LP}}\; P(y_{LP} = k \mid x)    (1)

where k = 1, 2, ..., min(N, 2^L). While LP is appealing because most methods well studied in
multi-class classification can be used, training LP models becomes intractable for large-scale problems with
an increasing number of labels in S_L. Even if the number of labels L is small enough, the problem is
still prone to suffer from data scarcity because each label subset in LP will in general only have a
few training instances. An effective solution to these problems is to build an ensemble of LP models
learning from randomly constructed small label subset spaces [29].
An alternative approach is to learn the joint probability of labels, which is prohibitively expensive
due to the 2^L label configurations. To address such a problem, Dembczyński et al. [3] have proposed
the probabilistic classifier chain (PCC), which decomposes the joint probability into L conditionals:

    P(y_1, y_2, \dots, y_L \mid x) = \prod_{i=1}^{L} P(y_i \mid y_{<i}, x)    (2)

where y_{<i} = {y_1, ..., y_{i-1}} denotes the set of labels that precede a label y_i in computing conditional
probabilities, and y_{<i} = ∅ if i = 1. For training PCCs, L functions need to be learned independently
to construct a probability tree with 2^L leaf nodes. In other words, PCCs construct a perfect binary tree
of height L in which every node except the root corresponds to a binary classifier. Therefore,
obtaining the exact solution from such a probabilistic tree requires finding an optimal path from the
root to a leaf node. A naïve approach for doing so requires 2^L path evaluations in the inference step,
and is therefore also intractable. However, several approaches have been proposed to reduce the
computational complexity [4, 13, 24, 19].
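To make the chain decomposition concrete, here is a minimal sketch of greedy inference in a classifier chain; it is our illustration rather than code from the papers cited above, and predict_proba stands in for any fitted per-label binary classifier with a scikit-learn-style interface.

    import numpy as np

    def pcc_greedy_inference(chain_classifiers, x):
        """Greedy decoding for a probabilistic classifier chain.

        chain_classifiers: list of L binary models; the i-th one was trained
        on the features x augmented with the labels y_1, ..., y_{i-1}.
        Greedy decoding keeps only the locally most probable branch; exact
        inference would have to score all 2^L root-to-leaf paths, and beam
        search is the usual compromise between the two.
        """
        y_prev = []
        for clf in chain_classifiers:
            features = np.concatenate([x, np.asarray(y_prev, dtype=float)])
            p_positive = clf.predict_proba(features.reshape(1, -1))[0, 1]
            y_prev.append(1 if p_positive >= 0.5 else 0)
        return np.asarray(y_prev)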
Apart from the computational issue, PCC also has a few fundamental problems. One of them is a
cascading of errors as the length of a chain gets longer [25]. During training, the classifiers f_i in the
chain are trained to reduce the errors E(y_i, ŷ_i) by enriching the input vectors x with the corresponding
previous true targets y_{<i} as additional features. In contrast, at test time, f_i generates samples ŷ_i or
estimates P̂(ŷ_i | x, ŷ_{<i}), where ŷ_{<i} are obtained from the preceding classifiers f_1, ..., f_{i-1}.
Another key limitation of PCCs is that the classifiers fi are trained independently according to a fixed
label order, so that each classifier is only able to make predictions with respect to a single label in a
chain of labels. Regardless of the order of labels, the product of conditional probabilities in Eq. (2)
represents the joint probability of labels by the chain rule, but in practice the label order in a chain
has an impact on estimating the conditional probabilities. This issue was addressed in the past by
ensemble averaging [23, 3], ensemble pruning [17] or by a previous analysis of the label dependencies,
e.g., by Bayes nets [27], and selecting the ordering accordingly. Similar methods learning a global
order over the labels have been proposed by [13], who use kernel target alignment to order the chain
according to the difficulty of the single-label problems, and by [18], who formulate the problem of
finding the globally optimal label order as a dynamic programming problem. Aside from PCC, there
has been another family of probabilistic approaches to maximizing subset accuracy [9, 16].
3
Learning to Predict Subsets as Sequence Prediction
In the previous section, we have discussed LP and PCC as a means of subset accuracy maximization.
Note that y_{LP} in Eq. (1) denotes a set of positive labels. Instead of solving Eq. (1) using a multi-class
classifier, one can consider predicting all labels individually in y_{LP}, and interpret this approach as a
way of maximizing the joint probability of a label subset given the number of labels T in the subset.
Similar to PCC, the joint probability can be computed as a product of conditional probabilities, but
unlike PCC, only T ≪ L terms are needed. Therefore, maximizing the joint probability of positive
labels can be viewed as subset accuracy maximization such as LP in a sequential manner as the
way PCC works. To be more precise, y can be represented as a set of 1-of-L vectors such that
y = {y_{p_i}}_{i=1}^{T} and y_{p_i} ∈ R^L, where T is the number of positive labels associated with an instance x.
The joint probability of positive labels can be written as

    P(y_{p_1}, y_{p_2}, \dots, y_{p_T} \mid x) = \prod_{i=1}^{T} P(y_{p_i} \mid y_{<p_i}, x).    (3)

Note that Eq. (3) has the same form as Eq. (2) except for the number of output variables. While
Eq. (2) is meant to maximize the joint probability over the entire 2^L configurations, Eq. (3) represents
the probability of sets of positive labels and ignores negative labels. The subscript p is omitted unless it
is needed for clarity. A key advantage of Eq. (3) over the traditional multi-label formulation is that the
number of conditional probabilities to be estimated is dramatically reduced from L to T, improving
scalability. Also note that each estimate itself again depends on the previous estimates. Reducing the
length of the chain might be helpful in reducing the cascading errors, which is particularly relevant
for labels at the end of the chain. Having said that, computations over the L^T search space of Eq. (3)
remain infeasible even though our search space is much smaller than the search space of PCC in
Eq. (2), 2^L, since the label cardinality C is usually very small, i.e., C ≪ L.
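As a small illustration of Eq. (3) (our sketch, not the authors' code), the joint log-probability of a positive-label sequence can be accumulated from any model that returns a distribution over the next positive label given the labels emitted so far:

    import numpy as np

    def positive_sequence_log_prob(next_label_dist, x, positive_labels):
        """Sum of log P(y_pi | y_<pi, x) over the T positive labels only.

        next_label_dist(x, prefix) is assumed to return a length-L probability
        vector for the next positive label; only T terms are summed, instead
        of the L terms a full classifier chain would require.
        """
        log_prob, prefix = 0.0, []
        for label in positive_labels:
            probs = next_label_dist(x, prefix)
            log_prob += np.log(probs[label])
            prefix.append(label)
        return log_prob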
As each instance has a different value for T , we need MLC methods capable of dealing with a
different number of output targets across instances. In fact, the idea of predicting positive labels only
has been explored for MLC. Recurrent neural networks (RNNs) have been successful in solving
complex output space problems. In particular, Wang et al. [31] have demonstrated that RNNs
provide a competitive solution on MLC image datasets. Doppa et al. [6] propose multi-label search
where a heuristic function and cost function are learned to iteratively search for elements to be
chosen as positive labels on a binary vector of size L. In this work, we make use of RNNs to
compute ∏_{i=1}^{T} P(y_{p_i} | y_{<p_i}, x), for which the order of labels in a label subset y_{p_1}, y_{p_2}, ..., y_{p_T} needs
permutations, and then present three RNN architectures for MLC.
3.1
Determining Label Permutations
We hypothesize that some label permutations make it easier to estimate Eqs. (2) and (3) than others.
However, as no ground truth such as relevance scores of each positive label to a training instance is
given, we need a way of preparing fixed label permutations for training.
The most straightforward approach is to order positive labels by frequency simply either in a
descending (from frequent to rare labels) or an ascending (from rare to frequent ones) order. Although
this type of label permutation may break down label correlations in a chain, Wang et al. [31] have
shown that the descending label ordering allows to achieve a decent performance on multi-label
image datasets. As an alternative, if additional information such as label hierarchies is available
about the labels, we can also take advantage of such information to determine label permutations.
For example, assuming that labels are organized in a directed acyclic graph (DAG) where labels are
partially ordered, we can obtain a total order of labels by topological sorting with depth-first search
(DFS), and given that order, target labels in the training set can be sorted in a way that labels that have
same ancestors in the graph are placed next to each other. In fact, this approach also preserves partial
label orders in terms of the co-occurrence frequency of a child and its parent label in the graph.
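The two ordering strategies can be sketched as follows (our illustration; the graph encoding and tie-breaking are assumptions, not part of the paper):

    from collections import Counter

    def frequency_order(train_label_sets, rare_first=False):
        """Sort labels from frequent to rare (f2r) or rare to frequent (r2f)."""
        counts = Counter(l for labels in train_label_sets for l in labels)
        return sorted(counts, key=lambda l: counts[l], reverse=not rare_first)

    def topological_order(children, roots):
        """Topological sort of a label DAG via DFS (reverse post-order),
        so that every label appears after all of its ancestors."""
        order, visited = [], set()
        def dfs(node):
            if node in visited:
                return
            visited.add(node)
            for child in children.get(node, ()):
                dfs(child)
            order.append(node)          # post-order position
        for root in roots:
            dfs(root)
        return order[::-1]              # reversed: parents before children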
3.2
Label Sequence Prediction from Given Label Permutations
A recurrent neural network (RNN) is a neural network (NN) that is able to capture temporal
information. RNNs have shown their superior performance on a wide range of applications where
target outputs form a sequence. In our context, we can expect that MLC will also benefit from the
reformulation of PCCs because the estimation of the joint probability of only positive labels as in
Eq. (3) significantly reduces the length of the chains, thereby reducing the effect of error propagation.
A RNN architecture that learns a sequence of L binary targets can be seen as a NN counterpart of
PCC because its objective is to maximize Eq. (2), just like in PCC. We will refer to this architecture
as RNNb (Fig. 1b). One can also come up with a RNN architecture maximizing Eq. (3) to take
advantage of the smaller label subset size T than L, which shall be referred to as RNNm (Fig. 1c). For
learning RNNs, we use gated recurrent units (GRUs), which allow to effectively avoid the vanishing
gradient problem [2]. Let x̃ be the fixed input representation computed from an instance x.
[Figure 1 appears here: schematic diagrams of the four models, with panels (a) PCC, (b) RNNb, (c) RNNm and (d) EncDec.]
Figure 1: Illustration of PCC and RNN architectures for MLC. For the purpose of illustration, we
assume T = 3 and x consists of 4 elements.
We shall explain how to determine x̃ in Sec. 4.2. Given an initial state h_0 = f_init(x̃), at each step i, both
RNNb and RNNm compute a hidden state h_i by taking x̃ and a target (or predicted) label from the
previous step as inputs: h_i = GRU(h_{i-1}, V y_{i-1}, x̃) for RNNb and h_i = GRU(h_{i-1}, V y_{p_{i-1}}, x̃)
for RNNm, where V is the matrix of d-dimensional label embeddings. In turn, RNNb computes the
conditional probabilities P̂(y_i | y_{<i}, x) in Eq. (2) by f(h_i, V y_{i-1}, x̃), consisting of a linear projection
followed by the softmax function. Likewise, we consider f(h_i, V y_{p_{i-1}}, x̃) for RNNm. Note that the
key difference between RNNb and RNNm is whether target labels are binary targets y_i or 1-of-L
targets y_{p_i}. Under the assumption that the hidden states h_i preserve the information on all previous
labels y_{<i}, learning RNNb and RNNm can be interpreted as learning classifiers in a chain. Whereas
in PCCs an independent classifier is responsible for predicting each label, both proposed types of
RNNs maintain a single set of parameters to predict all labels.
The input representations x̃ to both RNNb and RNNm are kept fixed after the preprocessing of
inputs x is completed. Recently, an encoder-decoder (EncDec) framework, also known as sequence-to-sequence (Seq2Seq) learning [2, 28], has drawn attention to modeling both input and output
sequences, and has been applied successfully to various applications in natural language processing
and computer vision [5, 14]. EncDec is composed of two RNNs: an encoder network captures the
information in the entire input sequence, which is then passed to a decoder network which decodes
this information into a sequence of labels (Fig. 1d). In contrast to RNNb and RNNm, which only
use fixed input representations x̃, EncDec makes use of context-sensitive input vectors from x. We
describe how EncDec computes Eq. (3) in the following.
Encoder. An encoder takes x and produces a sequence of D-dimensional vectors x =
{x_1, x_2, ..., x_E}, where E is the number of encoded vectors for a single instance. In this work, we
consider documents as input data. For encoding documents, we use words as atomic units. Consider a
document as a sequence of E words such that x = {w_1, w_2, ..., w_E} and a vocabulary of V words.
Each word w_j ∈ V has its own K-dimensional vector representation u_j. The set of these vectors constitutes a matrix of word embeddings defined as U ∈ R^{K×|V|}. Given this word embedding matrix U,
words in a document are converted to a sequence of K-dimensional vectors u = {u_1, u_2, ..., u_E},
which is then fed into the RNN to learn the sequential structures in a document:

    x_j = \mathrm{GRU}(x_{j-1}, u_j)    (4)

where x_0 is the zero vector.
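For concreteness, a minimal numpy sketch of the recursion in Eq. (4); the gate parametrization follows the standard GRU of Cho et al. [2], and the parameter layout is our own assumption:

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gru_cell(h_prev, inp, p):
        """One GRU step; p holds weight matrices W*, U* and bias vectors b*."""
        z = sigmoid(p["Wz"] @ inp + p["Uz"] @ h_prev + p["bz"])   # update gate
        r = sigmoid(p["Wr"] @ inp + p["Ur"] @ h_prev + p["br"])   # reset gate
        h_cand = np.tanh(p["Wh"] @ inp + p["Uh"] @ (r * h_prev) + p["bh"])
        return (1.0 - z) * h_prev + z * h_cand

    def encode(word_vectors, p, hidden_size):
        """Run x_j = GRU(x_{j-1}, u_j) over a document, from the zero vector."""
        x_j = np.zeros(hidden_size)
        states = []
        for u_j in word_vectors:        # u_j: K-dimensional word embedding
            x_j = gru_cell(x_j, u_j, p)
            states.append(x_j)
        return states                   # encoded vectors x_1, ..., x_E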
Decoder. After the encoder computes x_j for all elements in x, we set the initial hidden state of
the decoder to h_0 = f_init(x_E), and then compute hidden states h_i = GRU(h_{i-1}, V y_{i-1}, c_i), where
c_i = Σ_j α_{ij} x_j is the context vector, i.e. the sum of the encoded input vectors weighted by the
attention scores α_{ij} = f_att(h_{i-1}, x_j), α_{ij} ∈ R. Then, as shown in [1], the conditional probability
P̂(y_i | y_{<i}, x) for predicting a label y_i can be estimated by a function of the hidden state h_i, the
previous label y_{i-1} and the context vector c_i:

    \hat{P}(y_i \mid y_{<i}, x) = f(h_i, V y_{i-1}, c_i).    (5)
Indeed, EncDec is potentially more powerful than RNNb and RNNm because each prediction is
determined based on the dynamic context of the input x, unlike the fixed input representation x̃ used
in PCC, RNNb and RNNm (cf. Figs. 1a to 1d). The differences in computing hidden states and
conditional probabilities among the three RNNs are summarized in Table 1.

Table 1: Comparison of the three RNN architectures for MLC.

                          RNNb                            RNNm                               EncDec
hidden states             GRU(h_{i-1}, V y_{i-1}, x̃)     GRU(h_{i-1}, V y_{p_{i-1}}, x̃)    GRU(h_{i-1}, V y_{i-1}, c_i)
prob. of output labels    f(h_i, V y_{i-1}, x̃)           f(h_i, V y_{p_{i-1}}, x̃)          f(h_i, V y_{i-1}, c_i)
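The rows of Table 1 translate into per-step updates like the following schematic (our paraphrase of the table; gru and output stand for any GRU cell and any projection-plus-softmax layer):

    def step(variant, gru, output, h_prev, V, y_prev, x_tilde=None, c_i=None):
        """One decoding step per Table 1; V[y_prev] looks up a label embedding.

        For RNNb, y_prev indexes the previous binary target; for RNNm it is
        the previous positive label (1-of-L); for EncDec the fixed document
        vector x_tilde is replaced by the attention context c_i.
        """
        context = c_i if variant == "encdec" else x_tilde
        h = gru(h_prev, V[y_prev], context)
        return h, output(h, V[y_prev], context)   # P(next label | history)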
Unlike in the training phase, where we know the size of the positive label set T, this information is not
available during prediction. Whereas this is typically solved using a meta learner that predicts a
threshold in the ranking of labels, EncDec follows a similar approach as [7] and directly predicts a
virtual label that indicates the end of the sequence.
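A greedy decoding loop with such a virtual stop label might look as follows (a sketch under our assumptions, not the exact inference used in the experiments):

    def decode_label_set(next_label_dist, x, stop_label, max_labels=50):
        """Emit positive labels until the virtual end-of-sequence label.

        next_label_dist(x, prefix) is assumed to return a numpy probability
        vector over L + 1 symbols: the L real labels plus the stop label.
        """
        prefix = []
        for _ in range(max_labels):
            probs = next_label_dist(x, prefix).copy()
            if prefix:
                probs[prefix] = 0.0      # never emit the same label twice
            label = int(probs.argmax())
            if label == stop_label:
                break
            prefix.append(label)
        return set(prefix)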
4
Experimental Setup
In order to see whether solving MLC problems using RNNs can be a good alternative to classifier
chain (CC)-based approaches, we will compare traditional multi-label learning algorithms such as
BR and PCCs with the RNN architectures (Fig. 1) on multi-label text classification datasets. For a
fair comparison, we will use the same fixed label permutation strategies in all compared approaches
if necessary. As it has already been demonstrated in the literature that label permutations may affect
the performance of classifier chain approaches [23, 13], we will evaluate a few different strategies.
4.1
Baselines and Training Details
We use feed-forward NNs as a base learner of BR, LP and PCC. For PCC, beam search with a beam
size of 5 is used at inference time [13]. As another NN baseline, we also consider a feed-forward NN
with binary cross entropy per label [21]. We compare RNNs to FastXML [22], one of the state-of-the-art
methods in extreme MLC.^1 All NN based approaches are trained by using Adam [12] and dropout [26]. The
dimensionality of the hidden states of all the NN baselines as well as the RNNs is set to 1024. The size
of the label embedding vectors is set to 256. We used the NVIDIA Titan X to train NN models including
RNNs and base learners. For FastXML, a machine with 64 cores and 1024GB memory was used.
4.2
Datasets and Preprocessing
We use three multi-label text classification datasets for which we had access to the full text as it is
required for our approach EncDec, namely Reuters-21578,2 RCV1-v2 [15] and BioASQ,3 each of
which has different properties. Summary statistics of the datasets are given in Table 2. For preparing
the train and the test set of Reuters-21578 and RCV1-v2, we follow [21]. We split instances in
BioASQ by year 2014, so that all documents published in 2014 and 2015 belong to the test set. For
tuning hyperparameters, we set aside 10% of the training instances as the validation set for both
Reuters-21578 and RCV1-v2, but randomly chose 50 000 documents for BioASQ.
The RCV1-v2 and BioASQ datasets provide label relationships as a graph. Specifically, labels in
RCV1-v2 are structured in a tree. The label structure in BioASQ is a directed graph and contains
cycles. We removed all edges pointing to nodes which have been already visited while traversing the
graph using DFS, which results in a DAG of labels.
Document Representations. For all datasets, we replaced numbers with a special token and then
build a word vocabulary for each data set. The sizes of the vocabularies for Reuters-21578, RCV1-v2
and BioASQ are 22 747, 50 000 and 30 000, respectively. Out-of-vocabulary (OOV) words were also
replaced with a special token and we truncated the documents after 300 words.4
^1 Note that FastXML optimizes top-k ranking of labels, unlike our approaches, and assigns a confidence
score for each label. We set a threshold of 0.5 to convert rankings of labels into bipartition predictions.
^2 http://www.daviddlewis.com/resources/testcollections/reuters21578/
^3 http://bioasq.org
^4 By the truncation, one may worry about the possibility of missing information related to some specific
labels. As the average length of documents in the datasets is below 300, the effect would be negligible.
Table 2: Summary of datasets. # training documents (Ntr), # test documents (Nts), # labels (L), label
cardinality (C), # label combinations (LC), type of label structure (HS).

DATASET          Ntr          Nts       L       C      LC          HS
Reuters-21578    7770         3019      90      1.24   468         -
RCV1-v2          781 261      23 149    103     3.21   14 921      Tree
BioASQ           11 431 049   274 675   26 970  12.60  11 673 800  DAG
We trained word2vec [20] on an English Wikipedia dump to get 512-dimensional word embeddings
u. Given the word embeddings, we created the fixed input representations x̃ to be used for all of the
baselines in the following way: each word in the document except for numbers and OOV words
is converted into its corresponding embedding vector, and these word vectors are then averaged,
resulting in a document vector x̃. For EncDec, which learns hidden states of word sequences using
an encoder RNN, all words are converted to vectors using the pre-trained word embeddings and we
feed these vectors as inputs to the encoder. In this case, unlike during the preparation of x̃, we do not
ignore OOV words and numbers. Instead, we initialize the vectors for those tokens randomly. For a
fair comparison, we do not update the word embeddings of the encoder in EncDec.
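The fixed representation x̃ for the baselines amounts to a simple embedding average; a minimal sketch (our code, with the embedding table passed in as a dict):

    import numpy as np

    def document_vector(tokens, embeddings, dim=512):
        """Average pre-trained word vectors, skipping numbers and OOV tokens,
        to obtain the fixed input representation used by the baselines."""
        vecs = [embeddings[t] for t in tokens if t in embeddings]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)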
4.3
Evaluation Measures
MLC algorithms can be evaluated with multiple measures which capture different aspects of the
problem. We evaluate all methods in terms of both example-based and label-based measures.
Example-based measures are defined by comparing the target vector y = {y_1, y_2, ..., y_L} to the
prediction vector ŷ = {ŷ_1, ŷ_2, ..., ŷ_L}. Subset accuracy (ACC) is very strict regarding incorrect predictions
in that it does not allow any deviation in the predicted label sets: ACC(y, ŷ) = I[y = ŷ]. Hamming
accuracy (HA) computes how many labels are correctly predicted in ŷ: HA(y, ŷ) = (1/L) Σ_{j=1}^{L} I[y_j = ŷ_j].
ACC and HA are used for datasets with moderate L. If C as well as L is higher, entirely correct
predictions become increasingly unlikely, and therefore ACC often approaches 0. In this case, the
example-based F1-measure (ebF1) defined by Eq. (6) can be considered a good compromise.
Label-based measures are based on treating each label y_j as a separate two-class prediction problem,
and computing the number of true positives (tp_j), false positives (fp_j) and false negatives (fn_j)
for this label. We consider two label-based measures, namely the micro-averaged F1-measure (miF1)
and the macro-averaged F1-measure (maF1), which are defined by Eq. (7) and Eq. (8), respectively.

    \mathrm{ebF}_1(y, \hat{y}) = \frac{2 \sum_{j=1}^{L} y_j \hat{y}_j}{\sum_{j=1}^{L} y_j + \sum_{j=1}^{L} \hat{y}_j}    (6)

    \mathrm{miF}_1 = \frac{\sum_{j=1}^{L} 2\, tp_j}{\sum_{j=1}^{L} (2\, tp_j + fp_j + fn_j)}    (7)

    \mathrm{maF}_1 = \frac{1}{L} \sum_{j=1}^{L} \frac{2\, tp_j}{2\, tp_j + fp_j + fn_j}    (8)

miF1 favors a system yielding good predictions on frequent labels, whereas higher maF1 scores are
usually attributed to superior performance on rare labels.
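All five measures are straightforward to compute from binary target and prediction matrices; a compact numpy sketch follows (edge-case conventions, such as scoring an empty target/prediction pair as 1 for ebF1, are our own choices):

    import numpy as np

    def multilabel_metrics(Y, Y_hat):
        """Y, Y_hat: (N, L) binary arrays of targets and predictions."""
        acc = np.mean(np.all(Y == Y_hat, axis=1))       # subset accuracy
        ha = np.mean(Y == Y_hat)                        # Hamming accuracy
        num = 2.0 * (Y * Y_hat).sum(axis=1)             # Eq. (6), per example
        den = Y.sum(axis=1) + Y_hat.sum(axis=1)
        ebf1 = np.mean(np.where(den > 0, num / np.maximum(den, 1), 1.0))
        tp = ((Y == 1) & (Y_hat == 1)).sum(axis=0).astype(float)  # per label
        fp = ((Y == 0) & (Y_hat == 1)).sum(axis=0).astype(float)
        fn = ((Y == 1) & (Y_hat == 0)).sum(axis=0).astype(float)
        mif1 = 2.0 * tp.sum() / max((2.0 * tp + fp + fn).sum(), 1e-12)    # Eq. (7)
        maf1 = np.mean(2.0 * tp / np.maximum(2.0 * tp + fp + fn, 1e-12))  # Eq. (8)
        return {"ACC": acc, "HA": ha, "ebF1": ebf1, "miF1": mif1, "maF1": maf1}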
5
Experimental Results
In the following, we show results of various versions of RNNs for MLC on three text datasets which
span a wide variety of input and label set sizes. We also evaluate different label orderings, such as
frequent-to-rare (f2r), and rare-to-frequent (r2f ), as well as a topological sorting (when applicable).
5.1
Experiments on Reuters-21578
Figure 2 shows the negative log-likelihood (NLL) of Eq. (3) on the validation set during the course
of training. Note that because RNNb attempts to predict binary targets, while RNNm and EncDec make
predictions on multinomial targets, the results of RNNb are plotted separately, with a different scale
on the y-axis (top half of the graph). Compared to RNNm and EncDec, RNNb converges very slowly.
This can be attributed to the length of the label chain and sparse targets in the chain since RNNb is
trained to make correct predictions over all 90 labels, most of them being zero. In other words, the
length of target sequences of RNNb is 90 and fixed regardless of the content of training documents.
Table 3: Performance comparison on Reuters-21578.

                             ACC      HA       ebF1     miF1     maF1
No label permutations
BR(NN)                       0.7685   0.9957   0.8515   0.8348   0.4022
LP(NN)                       0.7837   0.9941   0.8206   0.7730   0.3505
NN                           0.7502   0.9952   0.8396   0.8183   0.3083
Frequent labels first (f2r)
PCC(NN)                      0.7844   0.9955   0.8585   0.8305   0.3989
RNNb                         0.6757   0.9931   0.7180   0.7144   0.0897
RNNm                         0.7744   0.9942   0.8396   0.7884   0.2722
EncDec                       0.8281   0.9961   0.8917   0.8545   0.4567
Rare labels first (r2f)
PCC(NN)                      0.7864   0.9956   0.8598   0.8338   0.3937
RNNb                         0.0931   0.9835   0.1083   0.1389   0.0102
RNNm                         0.7744   0.9943   0.8409   0.7864   0.2699
EncDec                       0.8261   0.9962   0.8944   0.8575   0.4365

[Figure 2 appears here: validation-set NLL curves for RNNb, RNNm and EncDec under the f2r and r2f orderings.]
Figure 2: Negative log-likelihood of RNNs on the validation set of Reuters-21578.

[Figure 3 appears here: five panels (subset accuracy, Hamming accuracy, example-based F1, micro-averaged F1, macro-averaged F1) over training epochs for RNNb, RNNm and EncDec with f2r and r2f orderings.]
Figure 3: Performance of RNN models on the validation set of Reuters-21578 during training. Note
that the x-axis denotes # epochs and we use different scales on the y-axis for each measure.
In particular, RNNb has trouble with the r2f label ordering, where training is unstable. The reason
is presumably that the predictions for later labels depend on sequences that are mostly zero when
rare labels occur at the beginning. Hence, the model sees only a few examples of non-zero targets in a
single epoch. On the other hand, both RNNm and EncDec converge relatively faster than RNNb and
clearly do not suffer from the r2f ordering. Moreover, there is not much difference between both
strategies since the length of the sequences is often 1 for Reuters-21578 and hence often the same.
Figure 3 shows the performance of RNNs in terms of all evaluation measures on the validation set.
EncDec performs best for all the measures, followed by RNNm . There is no clear difference between
the same type of models trained on different label permutations, except for RNNb in terms of NLL
(cf. Fig. 2). Note that although it takes more time to update the parameters of EncDec than those
of RNNm , EncDec ends up with better results. RNNb performs poorly especially in terms of maF1
regardless of the label permutations, suggesting that RNNb would need more parameter updates for
predicting rare labels. Notably, the advantage of EncDec is most pronounced for this specific task.
Detailed results of all methods on the test set are shown in Table 3. Clearly, EncDec performs best
across all measures. LP works better than BR and NN in terms of ACC as intended, but performs
behind them in terms of other measures. The reason is that LP, by construction, is able to more
accurately hit the exact label set, but, on the other hand, produces more false positives and false
negatives in our experiments in comparison to BR and NN when missing the correct label combination.
As shown in the table, RNNm performs better than its counterpart, i.e., RNNb , in terms of ACC, but
has clear weaknesses in predicting rare labels (cf. especially maF1 ). For PCC, our two permutations
of the labels do not affect much ACC due to the low label cardinality.
5.2
Experiments on RCV1-v2
In comparison to Reuters-21578, RCV1-v2 consists of a considerably larger number of documents.
Though the number of unique labels (L) is similar (103 vs. 90) in both datasets, RCV1-v2 has a
higher C and LC is greatly increased from 468 to 14 921. Moreover, this dataset has the interesting
property that all labels from the root to a relevant leaf label in the label tree are also associated to the
document. In this case, we can also test a topological ordering of labels, as described in Section 3.1.
Table 4: Performance comparison on RCV1-v2.

                             ACC      HA       ebF1     miF1     maF1
No label permutations
BR(NN)                       0.5554   0.9904   0.8376   0.8349   0.6376
LP(NN)                       0.5149   0.9767   0.6696   0.6162   0.4154
NN                           0.5837   0.9907   0.8441   0.8402   0.6573
FastXML                      0.5953   0.9910   0.8409   0.8470   0.5918
Frequent labels first (f2r)
PCC(NN)                      0.6211   0.9904   0.8461   0.8324   0.6404
RNNm                         0.6218   0.9903   0.8578   0.8487   0.6798
EncDec                       0.6798   0.9925   0.8895   0.8838   0.7381
Rare labels first (r2f)
PCC(NN)                      0.6300   0.9906   0.8493   0.8395   0.6376
RNNm                         0.6216   0.9903   0.8556   0.8525   0.6583
EncDec                       0.6767   0.9925   0.8884   0.8817   0.7413
Topological sorting
PCC(NN)                      0.6257   0.9904   0.8463   0.8364   0.6486
RNNm                         0.6072   0.9898   0.8525   0.8437   0.6578
EncDec                       0.6761   0.9924   0.8888   0.8808   0.7220
Reverse topological sorting
PCC(NN)                      0.6267   0.9902   0.8444   0.8346   0.6497
RNNm                         0.6232   0.9904   0.8561   0.8496   0.6535
EncDec                       0.6781   0.9925   0.8899   0.8797   0.7258

Table 5: Performance comparison on BioASQ.

                             ACC      HA       ebF1     miF1     maF1
No label permutations
FastXML                      0.0001   0.9996   0.3585   0.3890   0.0570
Frequent labels first (f2r)
RNNm                         0.0001   0.9993   0.3917   0.4088   0.1435
EncDec                       0.0004   0.9995   0.5294   0.5634   0.3211
Rare labels first (r2f)
RNNm                         0.0001   0.9995   0.4188   0.4534   0.1801
EncDec                       0.0006   0.9996   0.5531   0.5943   0.3363
Topological sorting
RNNm                         0.0001   0.9994   0.4087   0.4402   0.1555
EncDec                       0.0006   0.9953   0.5311   0.5919   0.3459
Reverse topological sorting
RNNm                         0.0001   0.9994   0.4210   0.4508   0.1646
EncDec                       0.0007   0.9996   0.5585   0.5961   0.3427
As RNNb takes long to train and did not show good results on the small dataset, we no longer
considered it in these experiments. We instead include FastXML as a baseline.
Table 4 shows the performance of the methods with different label permutations. These results
demonstrate again the superiority of PCC and RNNm as well as EncDec against BR and NN in
maximizing ACC. Another interesting observation is that LP performs much worse than other
methods even in terms of ACC due to the data scarcity problem caused by higher LC. RNNm and
EncDec, which also predict label subsets but in a sequential manner, do not suffer from the larger
number of distinct label combinations. Similar to the previous experiment, we found no meaningful
differences between the RNNm and EncDec models trained on different label permutations on RCV1v2. FastXML also performs well except for maF1 which tells us that it focuses more on frequent
labels than rare labels. As noted, this is because FastXML is designed to maximize top-k ranking
measures such as prec@k for which the performance on frequent labels is important.
5.3
Experiments on BioASQ
Compared to Reuters-21578 and RCV1-v2, BioASQ has an extremely large number of instances and
labels, where LC is close to Ntr + Nts. In other words, nearly all distinct label combinations
appear only once in the dataset and some label subsets can only be found in the test set. Table 5
shows the performance of FastXML, RNNm and EncDec on the test set of BioASQ. EncDec
clearly outperforms RNNm by a large margin. Making predictions over several thousand labels
is a particularly difficult task because MLC methods must not only learn label dependencies, but also
understand the context information in documents, allowing them to capture word-label dependencies and to
improve the generalization performance.
We can observe a consistent benefit from using the reverse label ordering on both approaches. Note
that EncDec does show reliable performance on two relatively small benchmarks regardless of the
choice of the label permutations. Also, EncDec with reverse topological sorting of labels achieves
the best performance, except for maF1 . Note that we observed similar effects with RNNm in
our preliminary experiments on RCV1-v2, but the impact of label permutations disappeared once
we tuned RNNm with dropout. This indicates that label ordering does not much affect the final
performance of models if they are trained well enough with proper regularization techniques.
To understand the effectiveness of each model with respect to the size of the positive label set, we
split the test set into five almost equally-sized partitions based on the number of target labels in the
documents and evaluated the models separately for each of the partition, as shown in Fig. 4. The first
partition (P1) contains test documents associated with 1 to 9 labels. Similarly, other partitions, P2,
P3, P4 and P5, have documents with cardinalities of 10 ? 12, 13 ? 15, 16 ? 18 and more than 19,
respectively. As expected, the performance of all models in terms of ACC and HA decreases as the
Figure 4: Comparison of RNNm and EncDec wrt. the number of positive labels T of test documents.
The test set is divided into 5 partitions according to T . The x-axis denotes partition indices. tps and
tps_rev stand for the label permutation ordered by topological sorting and its reverse.
number of positive labels increases. The other measures increase since the classifiers have potentially
more possibilities to match positive labels. We can further confirm the observations from Table 5
w.r.t. different label-set sizes.
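The partitioning itself is simple; note that our sketch splits into equally sized groups, whereas the paper uses the explicit cardinality ranges listed above:

    import numpy as np

    def partition_by_cardinality(label_sets, n_parts=5):
        """Split test indices into n_parts groups ordered by |y| (cf. Fig. 4)."""
        sizes = np.array([len(s) for s in label_sets])
        order = np.argsort(sizes, kind="stable")
        return np.array_split(order, n_parts)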
The margin of FastXML to RNNm and EncDec is further increased. Moreover, its poor performance
on rare labels confirms again the focus of FastXML on frequent labels. Regarding computational
complexity, we could observe an opposed relation between the used resources: whereas we ran
EncDec on a single GPU with 12G of memory for 5 days, FastXML only took 4 hours to complete
(on 64 CPU cores), but, on the other hand, required a machine with 1024G of memory.
6
Conclusion
We have presented an alternative formulation of learning the joint probability of labels given an
instance, which exploits the generally low label cardinality in multi-label classification problems.
Instead of having to iterate over each of the labels as in the traditional classifier chains approach, the
new formulation allows us to directly focus only on the positive labels. We provided an extension
of the formal framework of probabilistic classifier chains, contributing to the understanding of the
theoretical background of multi-label classification. Our approach based on recurrent neural networks,
especially encoder-decoders, proved to be effective, highly scalable, and robust towards different
label orderings on both small and large scale multi-label text classification benchmarks. However,
some aspects of the presented work deserve further consideration.
When considering MLC problems with extremely large numbers of labels, a problem often referred
to as extreme MLC (XMLC), F1 -measure maximization is often preferred to subset accuracy maximization because it is less susceptible to the very large number of label combinations and imbalanced
label distributions. One can exploit General F-Measure Maximizer (GFM) [30] to maximize the
example-based F1 -measure by drawing samples from P (y|x) at inference time. Although it is easy
to draw samples from P (y|x) approximated by RNNs, and the calculation of the necessary quantities
for GFM is straightforward, the use of GFM would be limited to MLC problems with a moderate
number of labels because of its quadratic computational complexity O(L^2).
We used a fixed threshold 0.5 for all labels when making predictions by BR, NN and FastXML.
In fact, such a fixed thresholding technique performs poorly on large label spaces. Jasinska et al.
[10] exhibit an efficient macro-averaged F1 -measure (maF1 ) maximization approach by tuning the
threshold for each label relying on the sparseness of y. We believe that FastXML can be further
improved by the maF1 maximization approach on BioASQ. However, we would like to remark that
the RNNs, especially EncDec, perform well without any F1 -measure maximization at inference time.
Nevertheless, maF1 maximization for RNNs might be interesting for future work.
In light of the experimental results in Table 5, learning from raw inputs instead of using fixed input
representations plays a crucial role for achieving good performance in our XMLC experiments. As
the training costs of the encoder-decoder architecture used in this work depend heavily on the input
sequence lengths and the number of unique labels, it is inevitable to consider more efficient neural
architectures [8, 11], which we also plan to do in future work.
Acknowledgments
The authors would like to thank anonymous reviewers for their thorough feedback. Computations
for this research were conducted on the Lichtenberg high performance computer of the Technische
Universität Darmstadt. The Titan X used for this research was donated by the NVIDIA Corporation.
This work has been supported by the German Institute for Educational Research (DIPF) under the
Knowledge Discovery in Scientific Literature (KDSL) program, and the German Research Foundation
as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous
Sources (AIPHES) under grant No. GRK 1994/1.
References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate.
In Proceedings of the International Conference on Learning Representations, 2015.
[2] K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning
phrase representations using RNN Encoder–Decoder for statistical machine translation. In Proceedings of
the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1724–1734, 2014.
[3] K. Dembczyński, W. Cheng, and E. Hüllermeier. Bayes optimal multilabel classification via probabilistic
classifier chains. In Proceedings of the 27th International Conference on Machine Learning, pages
279–286, 2010.
[4] K. Dembczyński, W. Waegeman, and E. Hüllermeier. An analysis of chaining in multi-label classification.
In Frontiers in Artificial Intelligence and Applications, volume 242, pages 294–299. IOS Press, 2012.
[5] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell.
Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, pages 2625–2634, 2015.
[6] J. R. Doppa, J. Yu, C. Ma, A. Fern, and P. Tadepalli. HC-Search for multi-label prediction: An empirical
study. In Proceedings of the AAAI Conference on Artificial Intelligence, 2014.
[7] J. Fürnkranz, E. Hüllermeier, E. Loza Mencía, and K. Brinker. Multilabel classification via calibrated label
ranking. Machine Learning, 73(2):133–153, 2008.
[8] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin. Convolutional sequence to sequence
learning. In Proceedings of the International Conference on Machine Learning, pages 1243–1252, 2017.
[9] N. Ghamrawi and A. McCallum. Collective multi-label classification. In Proceedings of the 14th ACM
International Conference on Information and Knowledge Management, pages 195–200, 2005.
[10] K. Jasinska, K. Dembczyński, R. Busa-Fekete, K. Pfannschmidt, T. Klerx, and E. Hüllermeier. Extreme
F-measure maximization using sparse probability estimates. In Proceedings of the International Conference
on Machine Learning, pages 1435–1444, 2016.
[11] A. Joulin, M. Cissé, D. Grangier, H. Jégou, et al. Efficient softmax approximation for GPUs. In Proceedings
of the International Conference on Machine Learning, pages 1302–1310, 2017.
[12] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In Proceedings of the International
Conference on Learning Representations, 2015.
[13] A. Kumar, S. Vembu, A. K. Menon, and C. Elkan. Beam search algorithms for multilabel learning. Machine
Learning, 92(1):65–89, 2013.
[14] A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher.
Ask me anything: Dynamic memory networks for natural language processing. In Proceedings of The
33rd International Conference on Machine Learning, pages 1378–1387, 2016.
[15] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. RCV1: A new benchmark collection for text categorization
research. Journal of Machine Learning Research, 5(Apr):361–397, 2004.
[16] C. Li, B. Wang, V. Pavlu, and J. Aslam. Conditional Bernoulli mixtures for multi-label classification.
In Proceedings of the 33rd International Conference on Machine Learning, pages 2482–2491, 2016.
[17] N. Li and Z.-H. Zhou. Selective ensemble of classifier chains. In Z.-H. Zhou, F. Roli, and J. Kittler, editors,
Multiple Classifier Systems, volume 7872, pages 146–156. Springer Berlin Heidelberg, 2013.
[18] W. Liu and I. Tsang. On the optimality of classifier chain for multi-label classification. In Advances in
Neural Information Processing Systems 28, pages 712–720. 2015.
[19] D. Mena, E. Montañés, J. R. Quevedo, and J. J. Del Coz. Using A* for inference in probabilistic
classifier chains. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages
3707–3713, 2015.
[20] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and
phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages
3111–3119. 2013.
[21] J. Nam, J. Kim, E. Loza Mencía, I. Gurevych, and J. Fürnkranz. Large-scale multi-label text classification –
revisiting neural networks. In Proceedings of the European Conference on Machine Learning and Principles
and Practice of Knowledge Discovery in Databases, pages 437–452, 2014.
[22] Y. Prabhu and M. Varma. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label
learning. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery
and Data Mining, pages 263–272, 2014.
[23] J. Read, B. Pfahringer, G. Holmes, and E. Frank. Classifier chains for multi-label classification. Machine
Learning, 85(3):333–359, 2011.
[24] J. Read, L. Martino, and D. Luengo. Efficient Monte Carlo methods for multi-dimensional learning with
classifier chains. Pattern Recognition, 47(3):1535–1546, 2014.
[25] R. Senge, J. J. Del Coz, and E. Hüllermeier. On the problem of error propagation in classifier chains for
multi-label classification. In M. Spiliopoulou, L. Schmidt-Thieme, and R. Janning, editors, Data Analysis,
Machine Learning and Knowledge Discovery, pages 163–170. Springer International Publishing, 2014.
[26] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a simple way to
prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[27] L. E. Sucar, C. Bielza, E. F. Morales, P. Hernandez-Leal, J. H. Zaragoza, and P. Larrañaga. Multi-label
classification with Bayesian network-based chain classifiers. Pattern Recognition Letters, 41:14–22, 2014.
ISSN 0167-8655.
[28] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances
in Neural Information Processing Systems 27, pages 3104–3112. 2014.
[29] G. Tsoumakas, I. Katakis, and I. Vlahavas. Random k-labelsets for multilabel classification. IEEE
Transactions on Knowledge and Data Engineering, 23(7):1079–1089, July 2011.
[30] W. Waegeman, K. Dembczyński, A. Jachnik, W. Cheng, and E. Hüllermeier. On the Bayes-optimality of
F-measure maximizers. Journal of Machine Learning Research, 15(1):3333–3388, 2014.
[31] J. Wang, Y. Yang, J. Mao, Z. Huang, C. Huang, and W. Xu. CNN-RNN: A unified framework for multi-label
image classification. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern
Recognition, pages 2285–2294, 2016.
6,772 | 7,126 | AdaGAN: Boosting Generative Models
Ilya Tolstikhin
MPI for Intelligent Systems
Tübingen, Germany
[email protected]
Olivier Bousquet
Google Brain
Zürich, Switzerland
[email protected]
Sylvain Gelly
Google Brain
Zürich, Switzerland
[email protected]
Carl-Johann Simon-Gabriel
MPI for Intelligent Systems
Tübingen, Germany
[email protected]
Bernhard Schölkopf
MPI for Intelligent Systems
Tübingen, Germany
[email protected]
Abstract
Generative Adversarial Networks (GAN) are an effective method for training
generative models of complex data such as natural images. However, they are
notoriously hard to train and can suffer from the problem of missing modes where
the model is not able to produce examples in certain regions of the space. We
propose an iterative procedure, called AdaGAN, where at every step we add a new
component into a mixture model by running a GAN algorithm on a re-weighted
sample. This is inspired by boosting algorithms, where many potentially weak
individual predictors are greedily aggregated to form a strong composite predictor.
We prove analytically that such an incremental procedure leads to convergence
to the true distribution in a finite number of steps if each step is optimal, and
convergence at an exponential rate otherwise. We also illustrate experimentally
that this procedure addresses the problem of missing modes.
1
Introduction
Imagine we have a large corpus, containing unlabeled pictures of animals, and our task is to build a
generative probabilistic model of the data. We run a recently proposed algorithm and end up with a
model which produces impressive pictures of cats and dogs, but not a single giraffe. A natural way to
fix this would be to manually remove all cats and dogs from the training set and run the algorithm on
the updated corpus. The algorithm would then have no choice but to produce new animals and, by
iterating this process until there are only giraffes left in the training set, we would arrive at a model
generating giraffes (assuming sufficient sample size). At the end, we aggregate the models obtained
by building a mixture model. Unfortunately, the described meta-algorithm requires manual work for
removing certain pictures from the unlabeled training set at every iteration.
Let us turn this into an automatic approach, and rather than including or excluding a picture, put
continuous weights on them. To this end, we train a binary classifier to separate "true" pictures of
the original corpus from the set of "synthetic" pictures generated by the mixture of all the models
trained so far. We would expect the classifier to make confident predictions for the true pictures of
animals missed by the model (giraffes), because there are no synthetic pictures nearby to be confused
with them. By a similar argument, the classifier should make less confident predictions for the true
pictures containing animals already generated by one of the trained models (cats and dogs). For each
picture in the corpus, we can thus use the classifier?s confidence to compute a weight which we use
for that picture in the next iteration, to be performed on the re-weighted dataset.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The present work provides a principled way to perform this re-weighting, with theoretical guarantees
showing that the resulting mixture models indeed approach the true data distribution.1
Before discussing how to build the mixture, let us consider the question of building a single generative model. A recent trend in modelling high dimensional data such as natural images is to use neural networks [1, 2]. One popular approach are Generative Adversarial Networks (GAN) [2], where the generator is trained adversarially against a classifier, which tries to differentiate the true from the generated data. While the original GAN algorithm often produces realistically looking data, several issues were reported in the literature, among which the missing modes problem, where the generator converges to only one or a few modes of the data distribution, thus not providing enough variability in the generated data. This seems to match the situation described earlier, which is why we will most often illustrate our algorithm with a GAN as the underlying base generator. We call it AdaGAN, for Adaptive GAN, but we could actually use any other generator: a Gaussian mixture model, a VAE [1], a WGAN [3], or even an unrolled [4] or mode-regularized GAN [5], which were both already specifically developed to tackle the missing mode problem. Thus, we do not aim at improving the original GAN or any other generative algorithm. We rather propose and analyse a meta-algorithm that can be used on top of any of them. This meta-algorithm is similar in spirit to AdaBoost in the sense that each iteration corresponds to learning a "weak" generative model (e.g., GAN) with respect to a re-weighted data distribution. The weights change over time to focus on the "hard" examples, i.e. those that the mixture has not been able to properly generate so far. The full procedure is given in Algorithm 1.

Algorithm 1 AdaGAN, a meta-algorithm to construct a "strong" mixture of T individual generative models (f.ex. GANs), trained sequentially.
Input: Training sample SN := {X1, . . . , XN}.
Output: Mixture generative model G = GT.
Train vanilla GAN G1 = GAN(SN, W1) with a uniform weight W1 = (1/N, . . . , 1/N) over the training points
for t = 2, . . . , T do
    # Choose the overall weight of the next mixture component
    βt = ChooseMixtureWeight(t)
    # Update the weight of each training example
    Wt = UpdateTrainingWeights(Gt−1, SN, βt)
    # Train the t-th "weak" component generator Gct
    Gct = GAN(SN, Wt)
    # Update the overall generative model: form a mixture of Gt−1 and Gct
    Gt = (1 − βt)Gt−1 + βt Gct
end for
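For concreteness, the meta-loop can be sketched in a few lines of Python (our illustration, not the released implementation); train_gan and update_training_weights are assumed stand-ins for the GAN training step and the re-weighting of Section 3:

```python
import numpy as np

def adagan(data, T, train_gan, update_training_weights):
    """Minimal sketch of Algorithm 1. Component weights end up equal to 1/T
    under the beta_t = 1/t heuristic of Section 3."""
    N = len(data)
    weights = np.full(N, 1.0 / N)            # W_1: uniform over training points
    generators, alphas = [], []
    for t in range(1, T + 1):
        beta_t = 1.0 / t                     # ChooseMixtureWeight(t)
        if t > 1:
            weights = update_training_weights(generators, alphas, data, beta_t)
        g_t = train_gan(data, weights)       # t-th "weak" component G_t^c
        alphas = [a * (1.0 - beta_t) for a in alphas] + [beta_t]
        generators.append(g_t)               # G_t = (1-beta_t) G_{t-1} + beta_t G_t^c
    return generators, alphas

def sample_mixture(generators, alphas, size):
    """Two-step sampling: pick a component by weight, then sample from it."""
    idx = np.random.choice(len(generators), size=size, p=alphas)
    return [generators[i].sample() for i in idx]
```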
Related Work Several authors [6, 7, 8] have proposed to use boosting techniques in the context of
density estimation by incrementally adding components in the log domain. This idea was applied
to GANs in [8]. A major downside of these approaches is that the resulting mixture is a product of
components and sampling from such a model is nontrivial (at least when applied to GANs where the
model density is not expressed analytically) and requires techniques such as Annealed Importance
Sampling [9] for the normalization.
When the log likelihood can be computed, [10] proposed to use an additive mixture model. They
derived the update rule via computing the steepest descent direction when adding a component with
infinitesimal weight. However, their results do not apply once the weight β becomes non-infinitesimal. In contrast, for any fixed weight of the new component our approach gives the overall optimal update (rather than just the best direction) for a specified f-divergence. In both theories, improvements of the mixture are guaranteed only if the new "weak" learner is still good enough (see Conditions (10) and (11)).
Similarly, [11] studied the construction of mixtures minimizing the Kullback divergence and proposed
a greedy procedure for doing so. They also proved that under certain conditions, finite mixtures can
approximate arbitrary mixtures at a rate 1/k where k is the number of components in the mixture
when the weight of each newly added component is 1/k. These results are specific to the Kullback
divergence but are consistent with our more general results.
An additive procedure similar to ours was proposed in [12] but with a different re-weighting scheme,
which is not motivated by a theoretical analysis of optimality conditions. On every new iteration the
authors run GAN on the k training examples with maximal values of the discriminator from the last
iteration.
1 Note that the term "mixture" should not be interpreted to imply that each component models only one mode: the models to be combined into a mixture can themselves cover multiple modes.
Finally, many papers investigate completely different approaches for addressing the same issue by
directly modifying the training objective of an individual GAN. For instance, [5] add an autoencoding
cost to the training objective of GAN, while [4] allow the generator to "look few steps ahead" when
making a gradient step.
The paper is organized as follows. In Section 2 we present our main theoretical results regarding
iterative optimization of mixture models under general f -divergences. In Section 2.4 we show that if
optimization at each step is perfect, the process converges to the true data distribution at exponential
rate (or even in a finite number of steps, for which we provide a necessary and sufficient condition).
Then we show in Section 2.5 that imperfect solutions still lead to the exponential rate of convergence
under certain "weak learnability" conditions. These results naturally lead to a new boosting-style iterative procedure for constructing generative models. When used with GANs, it results in our AdaGAN algorithm, detailed in Section 3. Finally, we report initial empirical results in Section 4, where we compare AdaGAN with several benchmarks, including the original GAN and a uniform mixture of multiple independently trained GANs. Some of the new theoretical results are reported without proofs, which can be found in the appendices.
2 Minimizing f-divergence with Mixtures
2.1 Preliminaries and notations
Generative Density Estimation In density estimation, one tries to approximate a real data distribution Pd, defined over the data space X, by a model distribution Pmodel. In the generative approach one builds a function G : Z → X that transforms a fixed probability distribution PZ (often called the noise distribution) over a latent space Z into a distribution over X. Hence Pmodel is the pushforward of PZ, i.e. Pmodel(A) = PZ(G−1(A)). With this approach it is in general impossible to compute the density dPmodel(x) and the log-likelihood of the training data under the model, but one can easily sample from Pmodel by sampling from PZ and applying G. Thus, to construct G, instead of comparing Pmodel directly with Pd, one compares their samples. To do so, one uses a similarity measure D(Pmodel ‖ Pd) which can be estimated from samples of those distributions, and thus approximately minimized over a class G of functions.
f-Divergences In order to measure the agreement between the model distribution and the true distribution we will use an f-divergence defined in the following way:

$$D_f(Q \| P) := \int f\!\left(\frac{dQ}{dP}(x)\right) dP(x) \tag{1}$$

for any pair of distributions P, Q with densities dP, dQ with respect to some dominating reference measure μ (we refer to Appendix D for more details about such divergences and their domain of definition). Here we assume that f is convex, defined on (0, ∞), and satisfies f(1) = 0. We will denote by F the set of such functions.2
As demonstrated in [16, 17], several commonly used symmetric f-divergences are Hilbertian metrics, which in particular means that their square root satisfies the triangle inequality. This is true for the Jensen-Shannon divergence3, the Hellinger distance and the Total Variation among others. We will denote by FH the set of functions f such that Df is a Hilbertian metric.
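As a concrete illustration (ours, not part of the paper), the definition (1) can be evaluated directly for discrete densities on a shared grid; the two choices of f below are the Kullback-Leibler and Jensen-Shannon generators given in footnote 2:

```python
import numpy as np

def f_divergence(q, p, f):
    """D_f(Q || P) = sum_x f(dQ/dP (x)) dP(x) for discrete densities q, p
    given on the same grid. Assumes q = 0 wherever p = 0."""
    mask = p > 0
    ratio = q[mask] / p[mask]
    return float(np.sum(f(ratio) * p[mask]))

f_kl = lambda x: x * np.log(x)                                   # Kullback-Leibler
f_js = lambda x: -(x + 1) * np.log((x + 1) / 2) + x * np.log(x)  # Jensen-Shannon

p = np.array([0.25, 0.25, 0.25, 0.25])
q = np.array([0.40, 0.30, 0.20, 0.10])
print(f_divergence(q, p, f_kl), f_divergence(q, p, f_js))
```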
GAN and f-divergences The original GAN algorithm [2] optimizes the following criterion:

$$\min_G \max_D \; \mathbb{E}_{P_d}[\log D(X)] + \mathbb{E}_{P_Z}[\log(1 - D(G(Z)))], \tag{2}$$

where D and G are two functions represented by neural networks. This optimization is performed on a pair of samples (a training sample from Pd and a "fake" sample from PZ), which corresponds to approximating the above criterion by using the empirical distributions. In the non-parametric limit for D, this is equivalent to minimizing the Jensen-Shannon divergence [2]. This point of view can be generalized to any other f-divergence [13]. Because of this strong connection between adversarial training of generative models and minimization of f-divergences, we cast the results of this section into the context of general f-divergences.
2 Examples of f-divergences include the Kullback-Leibler divergence (obtained for f(x) = x log x) and the Jensen-Shannon divergence (f(x) = −(x + 1) log((x + 1)/2) + x log x). Other examples can be found in [13]. For further details we refer to Section 1.3 of [14] and [15].
3 Which means such a property can be used in the context of the original GAN algorithm.
Generative Mixture Models In order to model complex data distributions, it can be convenient to use a mixture model of the following form:

$$P_{model}^{T} := \sum_{i=1}^{T} \alpha_i P_i,$$

where αi ≥ 0, Σi αi = 1, and each of the T components is a generative density model. This is natural in the generative context, since sampling from a mixture corresponds to a two-step sampling, where one first picks the mixture component (according to the multinomial distribution with parameters αi) and then samples from it. Also, this allows to construct complex models from simpler ones.
2.2 Incremental Mixture Building
We restrict ourselves to the case of f-divergences and assume that, given an i.i.d. sample from any unknown distribution P, we can construct a simple model Q ∈ G which approximately minimizes4

$$\min_{Q \in G} D_f(Q \| P). \tag{3}$$
Instead of modelling the data with a single distribution, we now want to model it with a mixture of distributions Pi, where each Pi is obtained by a training procedure of the form (3) with (possibly) different target distributions P for each i. A natural way to build a mixture is to do it incrementally: we train the first model P1 to minimize Df(P1 ‖ Pd) and set the corresponding weight to α1 = 1, leading to P¹model = P1. Then after having trained t components P1, . . . , Pt ∈ G we can form the (t + 1)-st mixture model by adding a new component Q with weight β as follows:

$$P_{model}^{t+1} := \sum_{i=1}^{t} (1 - \beta)\,\alpha_i P_i + \beta Q, \tag{4}$$
where β ∈ [0, 1] and Q ∈ G is computed by minimizing:

$$\min_{Q} D_f\big((1 - \beta)P_g + \beta Q \,\|\, P_d\big), \tag{5}$$

where we denoted Pg := P^t_model the current generative mixture model before adding the new component. We do not expect to find the optimal Q that minimizes (5) at each step, but we aim at constructing some Q that slightly improves our current approximation of Pd, i.e. such that for c < 1

$$D_f\big((1 - \beta)P_g + \beta Q \,\|\, P_d\big) \le c \cdot D_f(P_g \| P_d). \tag{6}$$
This greedy approach has a significant drawback in practice. As we build up the mixture, we need to make β decrease (as P^t_model approximates Pd better and better, one should make the correction at each step smaller and smaller). Since we are approximating (5) using samples from both distributions, this means that the sample from the mixture will only contain a fraction β of examples from Q. So, as t increases, getting meaningful information from a sample so as to tune Q becomes harder and harder (the information is "diluted"). To address this issue, we propose to optimize an upper bound
on (5) which involves a term of the form Df (Q k R) for some distribution R, which can be computed
as a re-weighting of the original data distribution Pd . This procedure is reminiscent of the AdaBoost
algorithm [18], which combines multiple weak predictors into one strong composition. On each step
AdaBoost adds new predictor to the current composition, which is trained to minimize the binary loss
on the re-weighted training set. The weights are constantly updated to bias the next weak learner
towards "hard" examples, which were incorrectly classified during previous stages.
In the following we will analyze the properties of (5) and derive upper bounds that provide practical
optimization criteria for building the mixture. We will also show that under certain assumptions, the
minimization of the upper bound leads to the optimum of the original criterion.
2.3 Upper Bounds
We provide two upper bounds on the divergence of the mixture in terms of the divergence of the
additive component Q with respect to some reference distribution R.
4 One example of such a setting is running GANs.
Lemma 1 Given two distributions Pd, Pg and some β ∈ [0, 1], then, for any Q and R, and f ∈ FH:

$$\sqrt{D_f\big((1-\beta)P_g + \beta Q \,\|\, P_d\big)} \;\le\; \sqrt{\beta\, D_f(Q \| R)} + \sqrt{D_f\big((1-\beta)P_g + \beta R \,\|\, P_d\big)}. \tag{7}$$

If, more generally, f ∈ F, but βdR ≤ dPd, then:

$$D_f\big((1-\beta)P_g + \beta Q \,\|\, P_d\big) \;\le\; \beta\, D_f(Q \| R) + (1-\beta)\, D_f\!\left(P_g \,\Big\|\, \frac{P_d - \beta R}{1-\beta}\right). \tag{8}$$
We can thus exploit those bounds by introducing some well-chosen distribution R and then minimizing
them with respect to Q. A natural choice for R is a distribution that minimizes the last term of the
upper bound (which does not depend on Q). Our main result indicates the shape of the distributions
minimizing the right-most terms in those bounds.
Theorem 1 For any f-divergence Df, with f ∈ F and f differentiable, any fixed distributions Pd, Pg, and any β ∈ (0, 1], the minimizer of (5) over all probability distributions P has density

$$dQ_\beta^*(x) = \frac{1}{\beta}\big(\lambda^*\, dP_d(x) - (1-\beta)\, dP_g(x)\big)_+ = \frac{dP_d}{\beta}\left(\lambda^* - (1-\beta)\frac{dP_g}{dP_d}\right)_{\!+} \tag{9}$$

for the unique λ* ∈ [β, 1] satisfying ∫ dQ*β = 1. Also, λ* = 1 if and only if Pd((1 − β)dPg > dPd) = 0, which is equivalent to β dQ*β = dPd − (1 − β)dPg.
Theorem 2 Given two distributions Pd, Pg and some β ∈ (0, 1], assume Pd(dPg = 0) < β. Let f ∈ F. The problem

$$\min_{Q \,:\, \beta dQ \le dP_d} D_f\!\left(P_g \,\Big\|\, \frac{P_d - \beta Q}{1-\beta}\right)$$

has a solution with the density dQ†β(x) = (1/β)(dPd(x) − λ†(1 − β)dPg(x))+ for the unique λ† ≥ 1 that satisfies ∫ dQ†β = 1.
Surprisingly, in both Theorems 1 and 2, the solutions do not depend on the choice of the function f, which means that the solution is the same for any f-divergence5. Note that λ* is implicitly defined by a fixed-point equation. In Section 3 we will show how it can be computed efficiently in the case of empirical distributions.
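For intuition, a small sketch (ours; discrete densities on a grid, not the paper's code) computes λ* by bisection on the normalization constraint of (9) and returns dQ*β:

```python
import numpy as np

def optimal_component(dpd, dpg, beta, tol=1e-10):
    """dQ*_beta = (1/beta) * (lam * dPd - (1 - beta) * dPg)_+ with lam chosen
    so that dQ*_beta sums to one; by Theorem 1, lam lies in [beta, 1] and the
    total mass is monotone in lam, so bisection applies."""
    def mass(lam):
        return np.maximum(lam * dpd - (1 - beta) * dpg, 0.0).sum() / beta
    lo, hi = beta, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) < 1.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return np.maximum(lam * dpd - (1 - beta) * dpg, 0.0) / beta, lam
```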
2.4 Convergence Analysis for Optimal Updates
In the previous section we derived analytical expressions for the distributions R minimizing the last terms in the upper bounds (8) and (7). Assuming Q can perfectly match R, i.e. Df(Q ‖ R) = 0, we are now interested in the convergence of the mixture (4) to the true data distribution Pd when Q = Q*β or Q = Q†β. We start with simple results showing that adding Q*β or Q†β to the current mixture would yield a strict improvement of the divergence.
Lemma 2 (property (6): exponential improvements) Under the conditions of Theorem 1, we have

$$D_f\big((1-\beta)P_g + \beta Q_\beta^* \,\big\|\, P_d\big) \le D_f\big((1-\beta)P_g + \beta P_d \,\big\|\, P_d\big) \le (1-\beta)\, D_f(P_g \| P_d).$$

Under the conditions of Theorem 2, we have

$$D_f\!\left(P_g \,\Big\|\, \frac{P_d - \beta Q_\beta^\dagger}{1-\beta}\right) \le D_f(P_g \| P_d) \quad \text{and} \quad D_f\big((1-\beta)P_g + \beta Q_\beta^\dagger \,\big\|\, P_d\big) \le (1-\beta)\, D_f(P_g \| P_d).$$
Imagine repeatedly adding T new components to the current mixture Pg, where on every step we use the same weight β and choose the components described in Theorem 1. In this case Lemma 2 guarantees that the original objective value Df(Pg ‖ Pd) would be reduced at least to (1 − β)^T Df(Pg ‖ Pd).
5 In particular, by replacing f with f◦(x) := xf(1/x), we get the same solution for the criterion written in the other direction. Hence the order in which we write the divergence does not matter and the optimal solution is optimal for both orders.
This exponential rate of convergence, which at first may look surprisingly good, is simply explained by the fact that Q*β depends on the true distribution Pd, which is of course unknown.
Lemma 2 also suggests setting β as large as possible, since we assume we can compute the optimal mixture component (which for β = 1 is Pd). However, in practice we may prefer to keep β relatively small, preserving what we learned so far through Pg: for instance, when Pg already covered part of the modes of Pd and we want Q to cover the remaining ones. We provide further discussions on choosing β in Section 3.
2.5 Weak to Strong Learnability
In practice the component Q that we add to the mixture is not exactly Q*β or Q†β, but rather an approximation to them. In this section we show that if this approximation is good enough, then we retain the property (6) (exponential improvements).
Looking again at Lemma 1 we notice that the first upper bound is less tight than the second one. Indeed, take the optimal distributions provided by Theorems 1 and 2 and plug them back as R into the upper bounds of Lemma 1. Also assume that Q can match R exactly, i.e. Df(Q ‖ R) = 0. In this case both sides of (7) are equal to Df((1 − β)Pg + βQ*β ‖ Pd), which is the optimal value for the original objective (5). On the other hand, (8) does not become an equality and the r.h.s. is not the optimal one for (5). However, earlier we agreed that our aim is to reach the modest goal (6), and next we show that this is indeed possible. Corollaries 1 and 2 provide sufficient conditions for strict improvements when we use the upper bounds (8) and (7) respectively.
Corollary 1 Given Pd, Pg, and some β ∈ (0, 1], assume Pd(dPg/dPd = 0) < β. Let Q†β be as defined in Theorem 2. If Q is such that

$$D_f(Q \| Q_\beta^\dagger) \le \gamma\, D_f(P_g \| P_d) \tag{10}$$

for γ ∈ [0, 1], then Df((1 − β)Pg + βQ ‖ Pd) ≤ (1 − β(1 − γ)) Df(Pg ‖ Pd).

Corollary 2 Let f ∈ FH. Take any β ∈ (0, 1], Pd, Pg, and let Q*β be as defined in Theorem 1. If Q is such that

$$D_f(Q \| Q_\beta^*) \le \gamma\, D_f(P_g \| P_d) \tag{11}$$

for some γ ∈ [0, 1], then Df((1 − β)Pg + βQ ‖ Pd) ≤ C(β, γ) · Df(Pg ‖ Pd), where C(β, γ) = (√(βγ) + √(1 − β))² is strictly smaller than 1 as soon as γ < β/4 (and β > 0).
Conditions (10) and (11) may be compared to the "weak learnability" condition of AdaBoost. As long as our weak learner is able to solve the surrogate problem (3) of matching respectively Q†β or Q*β accurately enough, the original objective (5) is guaranteed to decrease as well. It should be however noted that Condition (11) with γ < β/4 is perhaps too strong to call it "weak learnability". Indeed, as already mentioned before, the weight β is expected to decrease to zero as the number of components in the mixture distribution Pg increases. This leads to γ → 0, making it harder to meet Condition (11). This obstacle may be partially resolved by the fact that we will use a GAN to fit Q, which corresponds to a relatively rich6 class of models G in (3). In other words, our weak learner is not so weak. On the other hand, Condition (10) of Corollary 1 is milder. No matter what γ ∈ [0, 1] and β ∈ (0, 1] are, the new component Q is guaranteed to strictly improve the objective functional. This comes at the price of the additional condition Pd(dPg/dPd = 0) < β, which asserts that β should be larger than the mass of true data Pd missed by the current model Pg. We argue that this is a rather reasonable condition: if Pg misses many modes of Pd we would prefer assigning a relatively large weight β to the new component Q. However, in practice, both Conditions (10) and (11) are difficult to check. A rigorous analysis of situations when they are guaranteed is a direction for future research.
6 The hardness of meeting Condition (11) of course largely depends on the class of models G used to fit Q in (3). For now we ignore this question and leave it for future research.
3 AdaGAN
We now describe the functions ChooseMixtureWeight and UpdateTrainingWeights of Algorithm 1. The complete AdaGAN meta-algorithm, with the details of UpdateTrainingWeights and ChooseMixtureWeight, is summarized in Algorithm 3 of Appendix A.
UpdateTrainingWeights At each iteration we add a new component Q to the current mixture Pg with weight β. The component Q should approach the "optimal target" Q*β provided by (9) in Theorem 1. This distribution depends on the density ratio dPg/dPd, which is not directly accessible, but it can be estimated using adversarial training. Indeed, we can train a separate mixture discriminator DM to distinguish between samples from Pd and samples from the current mixture Pg. It is known [13] that for an arbitrary f-divergence, there exists a corresponding function h such that the values of the optimal discriminator DM are related to the density ratio by

$$\frac{dP_g}{dP_d}(x) = h\big(D_M(x)\big). \tag{12}$$

We can replace dPg(x)/dPd(x) in (9) with h(DM(x)). For the Jensen-Shannon divergence, used by the original GAN algorithm, h(z) = (1 − z)/z. In practice, when we compute dQ*β on the training sample SN = (X1, . . . , XN), each example Xi receives weight

$$w_i = \frac{1}{\beta N}\big(\lambda^* - (1-\beta)\, h(d_i)\big)_+, \quad \text{where } d_i = D_M(X_i). \tag{13}$$
The only remaining task is to determine λ*. As the weights wi in (13) must sum to 1, we get:

$$\lambda^* = \frac{\beta}{\sum_{i \in I(\lambda^*)} p_i}\left(1 + \frac{1-\beta}{\beta}\sum_{i \in I(\lambda^*)} p_i\, h(d_i)\right), \tag{14}$$

where I(λ) := {i : λ > (1 − β)h(di)} and pi is the weight of example Xi under the empirical data distribution (pi = 1/N for a uniform sample). To find I(λ*), we sort h(di) in increasing order: h(d1) ≤ . . . ≤ h(dN). Then I(λ*) is a set consisting of the first k indices. We then successively test all k-s until the λ given by (14) verifies (1 − β)h(dk) < λ ≤ (1 − β)h(dk+1). This procedure is guaranteed to converge by Theorem 1. It is summarized in Algorithm 2 of Appendix A.
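A compact NumPy sketch of this procedure (our rendering of the steps above with uniform pi = 1/N; not the released implementation):

```python
import numpy as np

def update_training_weights(d, beta):
    """Weights (13) from mixture-discriminator outputs d_i in (0, 1], using
    h(z) = (1 - z) / z (the Jensen-Shannon case) and solving (14) for lambda*."""
    N = len(d)
    h = (1.0 - d) / d                    # estimated density ratios h(d_i)
    hs = np.sort(h)                      # h(d_1) <= ... <= h(d_N)
    p = 1.0 / N                          # uniform empirical weights p_i
    for k in range(1, N + 1):
        lam = (beta / (p * k)) * (1.0 + ((1.0 - beta) / beta) * p * hs[:k].sum())
        upper = (1.0 - beta) * hs[k] if k < N else np.inf
        if (1.0 - beta) * hs[k - 1] < lam <= upper:
            break                        # I(lambda*) = first k sorted indices
    w = np.maximum(lam - (1.0 - beta) * h, 0.0) / (beta * N)
    return w / w.sum()                   # renormalize against numerical drift
```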
ChooseMixtureWeight For every β there is an optimal re-weighting scheme with weights given by (13). If the GAN could perfectly approximate its target Q*β, then choosing β = 1 would be optimal, because Q*1 = Pd. But in practice, GANs cannot do that. So we propose to choose β heuristically, by imposing that each generator of the final mixture model has the same weight. This yields βt = 1/t, where t is the iteration index. Other heuristics are proposed in Appendix B, but did not lead to any significant difference.
The optimal discriminator In practice it is of course hard to find the optimal discriminator DM achieving the global maximum of the variational representation for the f-divergence and verifying (12). For the JS-divergence this would mean that DM is the classifier achieving minimal expected cross-entropy loss in the binary classification between Pg and Pd. In practice, we observed that the reweighting (13) leads to the desired property of emphasizing at least some of the missing modes as long as DM distinguishes reasonably between data points already covered by the current model Pg and those which are still missing. We found an early stopping (while training DM) sufficient to achieve this. In the worst case, when DM overfits and returns 1 for all true data points, the reweighting simply leads to the uniform distribution over the training set.
4 Experiments
We ran AdaGAN7 on toy datasets, for which we can interpret the missing modes in a clear and reproducible way, and on MNIST, which is a high-dimensional dataset. The goal of these experiments was not to evaluate the visual quality of individual sample points, but to demonstrate that the re-weighting scheme of AdaGAN promotes diversity and effectively covers the missing modes.
7 Code available online at https://github.com/tolstikhin/adagan
Toy Datasets Our target distribution is a mixture of isotropic Gaussians over R². The distances
between the means are large enough to roughly avoid overlaps between different Gaussian components.
We vary the number of modes to test how well each algorithm performs when there are fewer or more
expected modes. We compare the baseline GAN algorithm with AdaGAN variations, and with other
meta-algorithms that all use the same underlying GAN procedure. For details on these algorithms
and on the architectures of the underlying generator and discriminator, see Appendix B.
To evaluate how well the generated distribution matches the target distribution, we use a coverage
metric C. We compute the probability mass of the true data "covered" by the model Pmodel. More precisely, we compute C := Pd(dPmodel > t) with t such that Pmodel(dPmodel > t) = 0.95.
This metric is more interpretable than the likelihood, making it easier to assess the difference in
performance of the algorithms. To approximate the density of Pmodel we use a kernel density
estimation, where the bandwidth is chosen by cross validation. We repeat the run 35 times with the
same parameters (but different random seeds). For each run, the learning rate is optimized using
a grid search on a validation set. We report the median over those multiple runs, and the interval
corresponding to the 5% and 95% percentiles.
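A rough sketch of how such a coverage estimate can be computed (ours, using a Gaussian KDE; not the paper's evaluation code):

```python
import numpy as np
from scipy.stats import gaussian_kde

def coverage(true_data, model_samples, level=0.95):
    """C = P_d(dP_model > t), where t is chosen so that
    P_model(dP_model > t) = level. Arrays have shape (n_points, dim)."""
    kde = gaussian_kde(model_samples.T)          # density estimate of P_model
    dens_model = kde(model_samples.T)
    t = np.quantile(dens_model, 1.0 - level)     # P_model(dens > t) = level
    return float(np.mean(kde(true_data.T) > t))
```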
Figure 1 summarizes the performance of algorithms as a function of the number of iterations T. Both the ensemble and the boosting approaches significantly outperform the vanilla GAN and the "best of T" algorithm. Interestingly, the improvements are significant even after just one or two additional iterations (T = 2 or 3). Our boosting approach converges much faster. In addition, its variance is much lower, improving the likelihood that a given run gives good results. On this setup, the vanilla GAN approach has a significant number of catastrophic failures (visible in the lower bounds of the intervals). Further empirical results are available in Appendix B, where we compared AdaGAN variations to several other baseline meta-algorithms in more details (Table 1) and combined AdaGAN with the unrolled GANs (UGAN) [4] (Figure 3). Interestingly, Figure 3 shows that AdaGAN run with UGAN outperforms the vanilla UGAN on the toy datasets, demonstrating the advantage of using AdaGAN as a way to further improve the mode coverage of any existing GAN implementations.
Figure 1: Coverage C of the true data by the model distribution P^T_model, as a function of iterations T. Experiments correspond to the data distribution with 5 modes. Each blue point is the median over 35 runs. Green intervals are defined by the 5% and 95% percentiles (see Section 4). Iteration 0 is equivalent to one vanilla GAN. The left plot corresponds to taking the best generator out of T runs. The middle plot is an "ensemble" GAN, simply taking a uniform mixture of T independently trained GAN generators. The right plot corresponds to our boosting approach (AdaGAN), with βt = 1/t.
MNIST and MNIST3 We ran experiments both on the original MNIST and on the 3-digit MNIST (MNIST3) [5, 4] dataset, obtained by concatenating 3 randomly chosen MNIST images to form a 3-digit number between 0 and 999. According to [5, 4], MNIST contains 10 modes, while MNIST3 contains 1000 modes, and these modes can be detected using the pre-trained MNIST classifier. We combined AdaGAN both with simple MLP GANs and DCGANs [19]. We used T ∈ {5, 10}, tried models of various sizes and performed a reasonable amount of hyperparameter search.
Similarly to [4, Sec 3.3.1] we failed to reproduce the missing modes problem for MNIST3 reported in [5] and found that simple GAN architectures are capable of generating all 1000 numbers. The authors of [4] proposed to artificially introduce the missing modes again by limiting the generators' flexibility. In our experiments, GANs trained with the architectures reported in [4] were often generating poorly looking digits. As a result, the pre-trained MNIST classifier was outputting random labels, which again led to full coverage of the 1000 numbers. We tried to threshold the confidence of the pre-trained classifier, but decided that this metric was too ad-hoc.
For MNIST we noticed that the re-weighted distribution was often concentrating its mass on digits having very specific strokes: on different rounds it could highlight thick, thin, vertical, or diagonal digits, indicating that these traits were underrepresented in the generated samples (see Figure 2). This suggests that AdaGAN does a reasonable job at picking up different modes of the dataset, but also that there are more than 10 modes in MNIST (and more than 1000 in MNIST3). It is not clear how to evaluate the quality of generative models in this context.
We also tried to use the "inversion" metric discussed in Section 3.4.1 of [4]. For MNIST3 we noticed that a single GAN was capable of reconstructing most of the training points very accurately both visually and in the ℓ2-reconstruction sense. The "inversion" metric tests whether the trained model can generate certain examples or not, but unfortunately it does not take into account the probabilities of doing so.
Figure 2: Digits from the MNIST dataset corresponding to the smallest (left) and largest (right) weights, obtained by the AdaGAN procedure (see Section 3) in one of the runs. Bold digits (left) are already covered and the next GAN will concentrate on thin (right) digits.
5 Conclusion
We studied the problem of minimizing general f -divergences with additive mixtures of distributions.
The main contribution of this work is a detailed theoretical analysis, which naturally leads to an
iterative greedy procedure. On every iteration the mixture is updated with a new component, which
minimizes f -divergence with a re-weighted target distribution. We provided conditions under which
this procedure is guaranteed to converge to the target distribution at an exponential rate. While
our results can be combined with any generative modelling techniques, we focused on GANs
and provided a boosting-style algorithm AdaGAN. Preliminary experiments show that AdaGAN
successfully produces a mixture which iteratively covers the missing modes.
References
[1] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[2] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil
Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural
Information Processing Systems, pages 2672?2680, 2014.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv:1701.07875, 2017.
[4] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled generative adversarial networks.
arXiv:1611.02163, 2017.
[5] Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized
generative adversarial networks. arXiv:1612.02136, 2016.
[6] Max Welling, Richard S. Zemel, and Geoffrey E. Hinton. Self supervised boosting. In Advances
in neural information processing systems, pages 665?672, 2002.
[7] Zhuowen Tu. Learning generative models via discriminative approaches. In 2007 IEEE
Conference on Computer Vision and Pattern Recognition, pages 1?8. IEEE, 2007.
[8] Aditya Grover and Stefano Ermon. Boosted generative models. ICLR 2017 conference
submission, 2016.
[9] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125?139, 2001.
[10] Saharon Rosset and Eran Segal. Boosting density estimation. In Advances in Neural Information
Processing Systems, pages 641?648, 2002.
[11] A Barron and J Li. Mixture density estimation. Biometrics, 53:603?618, 1997.
[12] Yaxing Wang, Lichao Zhang, and Joost van de Weijer. Ensembles of generative adversarial
networks. arXiv:1612.00991, 2016.
[13] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information
Processing Systems, 2016.
[14] F. Liese and K.-J. Miescke. Statistical Decision Theory. Springer, 2008.
[15] M. D. Reid and R. C. Williamson. Information, divergence and risk for binary experiments.
Journal of Machine Learning Research, 12:731?817, 2011.
[16] Bent Fuglede and Flemming Topsoe. Jensen-shannon divergence and hilbert space embedding.
In IEEE International Symposium on Information Theory, pages 31?31, 2004.
[17] Matthias Hein and Olivier Bousquet. Hilbertian metrics and positive definite kernels on
probability measures. In AISTATS, pages 136?143, 2005.
[18] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Sciences, 55(1):119?139, 1997.
[19] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
6,773 | 7,127 | Straggler Mitigation in Distributed Optimization
Through Data Encoding
Can Karakus
UCLA
Los Angeles, CA
[email protected]
Yifan Sun
Technicolor Research
Los Altos, CA
[email protected]
Suhas Diggavi
UCLA
Los Angeles, CA
[email protected]
Wotao Yin
UCLA
Los Angeles, CA
[email protected]
Abstract
Slow running or straggler tasks can significantly reduce computation speed in
distributed computation. Recently, coding-theory-inspired approaches have been
applied to mitigate the effect of straggling, through embedding redundancy in
certain linear computational steps of the optimization algorithm, thus completing
the computation without waiting for the stragglers. In this paper, we propose an
alternate approach where we embed the redundancy directly in the data itself, and
allow the computation to proceed completely oblivious to encoding. We propose
several encoding schemes, and demonstrate that popular batch algorithms, such as
gradient descent and L-BFGS, applied in a coding-oblivious manner, deterministically achieve sample path linear convergence to an approximate solution of the
original problem, using an arbitrarily varying subset of the nodes at each iteration.
Moreover, this approximation can be controlled by the amount of redundancy
and the number of nodes used in each iteration. We provide experimental results
demonstrating the advantage of the approach over uncoded and data replication
strategies.
1 Introduction
Solving large-scale optimization problems has become feasible through distributed implementations.
However, the efficiency can be significantly hampered by slow processing nodes, network delays or
node failures. In this paper we develop an optimization framework based on encoding the dataset,
which mitigates the effect of straggler nodes in the distributed computing system. Our approach
can be readily adapted to the existing distributed computing infrastructure and software frameworks,
since the node computations are oblivious to the data encoding.
In this paper, we focus on problems of the form

$$\min_{w \in \mathbb{R}^p} f(w) := \min_{w \in \mathbb{R}^p} \frac{1}{2n}\, \|Xw - y\|^2, \tag{1}$$

where X ∈ R^{n×p}, y ∈ R^{n×1} represent the data matrix and vector respectively. The function f(w) is
mapped onto a distributed computing setup depicted in Figure 1, consisting of one central server and
m worker nodes, which collectively store the row-partitioned matrix X and vector y. We focus on
batch, synchronous optimization methods, where the delayed or failed nodes can significantly slow
down the overall computation. Note that asynchronous methods are inherently robust to delays caused
by stragglers, although their convergence rates can be worse than their synchronous counterparts. Our approach consists of adding redundancy by encoding the data X and y into X̃ = SX and ỹ = Sy, respectively, where S ∈ R^{(βn)×n} is an encoding matrix with redundancy factor β ≥ 1, and solving the effective problem

$$\min_{w \in \mathbb{R}^p} \tilde{f}(w) := \min_{w \in \mathbb{R}^p} \frac{1}{2\beta n}\, \|S(Xw - y)\|^2 = \min_{w \in \mathbb{R}^p} \frac{1}{2\beta n}\, \|\tilde{X}w - \tilde{y}\|^2 \tag{2}$$
instead. In doing so, we proceed with the computation in each iteration without waiting for the stragglers, with the idea that the inserted redundancy will compensate for the lost data. The goal is to design the matrix S such that, when the nodes obliviously solve the problem (2) without waiting for the slowest (m − k) nodes (where k is a design parameter), the achieved solution approximates the original solution w* = arg min_w f(w) sufficiently closely. Since in large-scale machine learning and data analysis tasks one is typically not interested in the exact optimum, but rather a "sufficiently" good solution that achieves a good generalization error, such an approximation could be acceptable in many scenarios. Note also that the use of such a technique does not preclude the use of other, non-coding straggler-mitigation strategies (see [24] and references therein), which can still be implemented on top of the redundancy embedded in the system, to potentially further improve performance.
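Concretely, the encoding is a one-time preprocessing step before the data is distributed; a minimal sketch (ours) with an i.i.d. Gaussian S, scaled so that S^⊤S ≈ I:

```python
import numpy as np

def encode_and_partition(X, y, beta=2, m=10, seed=0):
    """Compute (SX, Sy) for an i.i.d. Gaussian S of shape (beta*n, n), with
    entries scaled so that E[S^T S] = I, then split the coded rows among the
    m worker nodes. A sketch; any S satisfying (4) could be used instead."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    S = rng.normal(scale=1.0 / np.sqrt(beta * n), size=(int(beta * n), n))
    Xt, yt = S @ X, S @ y
    rows = np.array_split(np.arange(int(beta * n)), m)
    return [Xt[r] for r in rows], [yt[r] for r in rows]
```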
Focusing on gradient descent and L-BFGS algorithms, we show that under a spectral condition on
S, one can achieve an approximation of the solution of (1), by solving (2), without waiting for the
stragglers. We show that with sufficient redundancy embedded, and with updates from a sufficiently
large, yet strict subset of the nodes in each iteration, it is possible to deterministically achieve linear
convergence to a neighborhood of the solution, as opposed to convergence in expectation (see Fig.
4). Further, one can adjust the approximation guarantee by increasing the redundancy and number
of node updates waited for in each iteration. Another potential advantage of this strategy is privacy,
since the nodes do not have access to raw data itself, but can still perform the optimization task over
the jumbled data to achieve an approximate solution.
Although in this paper we focus on quadratic objectives and two specific algorithms, in principle our approach can be generalized to more general, potentially non-smooth objectives and constrained optimization problems, as we discuss in Section 4 (adding a regularization term is also a simple generalization).
Our main contributions are as follows. (i) We demonstrate that gradient descent (with constant
step size) and L-BFGS (with line search) applied in a coding-oblivious manner on the encoded
problem, achieves (universal) sample path linear convergence to an approximate solution of the
original problem, using only a fraction of the nodes at each iteration. (ii) We present three classes of
coding matrices; namely, equiangular tight frames (ETF), fast transforms, and random matrices, and
discuss their properties. (iii) We provide experimental results demonstrating the advantage of the
approach over uncoded (S = I) and data replication strategies, for ridge regression using synthetic
data on an AWS cluster, as well as matrix factorization for the Movielens 1-M recommendation task.
Related work. Use of data replication to aid with the straggler problem has been proposed and
studied in [22, 1], and references therein. Additionally, use of coding in distributed computing
has been explored in [13, 7]. However, these works exclusively focused on using coding at the
computation level, i.e., certain linear computational steps are performed in a coded manner, and
explicit encoding/decoding operations are performed at each step. Specifically, [13] used MDS-coded
distributed matrix multiplication and [7] focused on breaking up large dot products into shorter
dot products, and perform redundant copies of the short dot products to provide resilience against
stragglers. [21] considers a gradient descent method on an architecture where each data sample is
replicated across nodes, and designs a code such that the exact gradient can be recovered as long as
fewer than a certain number of nodes fail. However, in order to recover the exact gradient under any
potential set of stragglers, the required redundancy factor is on the order of the number of straggling
nodes, which could mean a large amount of overhead for a large-scale system. In contrast, we show
that one can converge to an approximate solution with a redundancy factor independent of network
size or problem dimensions (e.g., 2 as in Section 5).
Our technique is also closely related to randomized linear algebra and sketching techniques [14, 6, 17],
used for dimensionality reduction of large convex optimization problems. The main difference
between this literature and the proposed coding technique is that the former focuses on reducing the
problem dimensions to lighten the computational load, whereas coding increases the dimensionality
Figure 1: Left: Uncoded distributed optimization with partitioning, where X and y are partitioned as X = [X₁^⊤ X₂^⊤ . . . Xm^⊤]^⊤ and y = [y₁^⊤ y₂^⊤ . . . ym^⊤]^⊤, and node i holds the term ‖Xi w − yi‖². Right: Encoded distributed optimization, where node i stores (Si X, Si y) instead of (Xi, yi) and holds the term ‖Si(Xw − y)‖². The uncoded case corresponds to S = I.
of the problem to provide robustness. As a result of the increased dimensions, coding can provide a
much closer approximation to the original solution compared to sketching techniques.
A longer version of this paper is available at [12].
2 Encoded Optimization Framework
Figure 1 shows a typical data-distributed computational model in large-scale optimization (left), as well as our proposed encoded model (right). Our computing network consists of m machines, where machine i stores (X̃i, ỹi) = (Si X, Si y) and S = [S₁^⊤ S₂^⊤ . . . Sm^⊤]^⊤. The optimization process is oblivious to the encoding, i.e., once the data is stored at the nodes, the optimization algorithm proceeds exactly as if the nodes contained uncoded, raw data (X, y). In each iteration t, the central server broadcasts the current estimate wt, and each worker machine computes and sends to the server the gradient term corresponding to its own partition, gi(wt) := X̃i^⊤(X̃i wt − ỹi).
Note that this framework of distributed optimization is typically communication-bound, where
communication over a few slow links constitute a significant portion of the overall computation time.
We consider a strategy where at each iteration t, the server only uses the gradient updates from the
first k nodes to respond in that iteration, thereby preventing such slow links and straggler nodes from
stalling the overall computation:
$$\tilde{g}_t = \frac{1}{\eta \beta n} \sum_{i \in A_t} g_i(w_t) = \frac{1}{\eta \beta n}\, \tilde{X}_A^\top \big(\tilde{X}_A w_t - \tilde{y}_A\big),$$

where At ⊆ [m], |At| = k are the indices of the first k nodes to respond at iteration t, η := k/m, and X̃A = [Si X]_{i∈At}. (Similarly, SA = [Si]_{i∈At}.)
server then computes a descent direction dt through the history of gradients and parameter estimates.
For the remaining nodes i 6? At , the server can either send an interrupt signal, or simply drop their
updates upon arrival, depending on the implementation.
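A server-side iteration can thus be sketched as follows (ours; straggling is simulated by a random choice of the k responders):

```python
import numpy as np

def encoded_gradient_step(w, X_parts, y_parts, k, step, eta, beta, n, rng):
    """One iteration using only the first k responders A_t; the remaining
    updates are dropped. n is the number of rows of the uncoded X."""
    m = len(X_parts)
    A_t = rng.choice(m, size=k, replace=False)   # stand-in for the fastest k
    g = sum(X_parts[i].T @ (X_parts[i] @ w - y_parts[i]) for i in A_t)
    return w - step * g / (eta * beta * n)
```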
Next, the central server chooses a step size αt, which can be chosen as constant, decaying, or through exact line search1 by having the workers compute X̃dt, which is needed to compute the step size. We again assume the central server only hears from the fastest k nodes, denoted by Dt ⊆ [m], where Dt ≠ At in general, to compute

$$\alpha_t = -\nu\, \frac{d_t^\top \tilde{g}_t}{d_t^\top \tilde{X}_D^\top \tilde{X}_D\, d_t}, \tag{3}$$

where X̃D = [Si X]_{i∈Dt}, and 0 < ν < 1 is a back-off factor of choice.
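In code, (3) costs one extra matrix-vector product per responding node (a sketch, ours):

```python
import numpy as np

def exact_line_search(d, g_tilde, X_parts, D_t, nu):
    """Step size (3): alpha_t = -nu * (d^T g) / (d^T X_D^T X_D d), computed
    from the k nodes in D_t that answered the line-search round."""
    Xd = np.concatenate([X_parts[i] @ d for i in D_t])
    return -nu * float(d @ g_tilde) / float(Xd @ Xd)
```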
Our goal is to especially focus on the case k < m, and design an encoding matrix S such that, for any sequence of sets {At}, {Dt}, f(wt) universally converges to a neighborhood of f(w*). Note that in general, this scheme with k < m is not guaranteed to converge for traditional batch methods like L-BFGS. Additionally, although the algorithm only works with the encoded function f̃, our goal is to provide a convergence guarantee in terms of the original function f.
1 Note that exact line search is not more expensive than backtracking line search for a quadratic loss, since it only requires a single matrix-vector multiplication.
3 Algorithms and Convergence Analysis
Let the smallest and largest eigenvalues of X^⊤X be denoted by μ > 0 and M > 0, respectively. Let η with β⁻¹ < η ≤ 1 be given. In order to prove convergence, we will consider a family of matrices S^{(β)}, where β is the aspect ratio (redundancy factor), such that for any ε > 0, and any A ⊆ [m] with |A| = ηm,

$$(1 - \epsilon)\, I \preceq S_A^\top S_A \preceq (1 + \epsilon)\, I \tag{4}$$

for sufficiently large β ≥ 1, where SA = [Si]_{i∈A} is the submatrix associated with subset A (we drop dependence on β for brevity). Note that this is similar to the restricted isometry property (RIP) used in compressed sensing [4], except that (4) is only required for submatrices of the form SA. Although this condition is needed to prove worst-case convergence results, in practice the proposed encoding scheme can work well even when it is not exactly satisfied, as long as the bulk of the eigenvalues of SA^⊤SA lie within a small interval [1 − ε, 1 + ε]. We will discuss several specific constructions and their relation to property (4) in Section 4.
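For small m, property (4) can be checked by brute force over subsets; a sketch (ours):

```python
import numpy as np
from itertools import combinations

def worst_case_epsilon(S_blocks, k):
    """Smallest epsilon for which (4) holds over all |A| = k: scans every
    subset, so this is only practical for small m."""
    lo, hi = np.inf, -np.inf
    for A in combinations(range(len(S_blocks)), k):
        SA = np.vstack([S_blocks[i] for i in A])
        eig = np.linalg.eigvalsh(SA.T @ SA)      # ascending eigenvalues
        lo, hi = min(lo, eig[0]), max(hi, eig[-1])
    return max(1.0 - lo, hi - 1.0)
```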
Gradient descent. We consider gradient descent with constant step size, i.e.,

$$w_{t+1} = w_t + \alpha d_t = w_t - \alpha \tilde{g}_t.$$

The following theorem characterizes the convergence of the encoded problem under this algorithm.

Theorem 1. Let ft = f(wt), where wt is computed using gradient descent with updates from a set of (fastest) workers At, with constant step size αt ≡ α = 2ν/(M(1 + ε)) for some 0 < ν ≤ 1, for all t. If S satisfies (4) with ε > 0, then for all sequences of {At} with cardinality |At| = k,

$$f_t \le (\kappa \rho_1)^t f_0 + \frac{\kappa^2 (\kappa - \rho_1)}{1 - \kappa \rho_1}\, f(w^*), \quad t = 1, 2, \dots,$$

where κ = (1 + ε)/(1 − ε), ρ₁ = 1 − 4μν(1 − ν)/(M(1 + ε)), and f₀ = f(w₀) is the initial objective value.
The proof is provided in Appendix B of [12], which relies on the fact that the solution to the effective "instantaneous" problem corresponding to the subset At lies in the set {w : f(w) ≤ κ²f(w*)}, and therefore each gradient descent step attracts the estimate towards a point in this set, which must eventually converge to this set. Note that in order to guarantee linear convergence, we need κρ₁ < 1, which can be ensured by property (4).
Theorem 1 shows that gradient descent over the encoded problem, based on updates from only k < m nodes, results in deterministically linear convergence to a neighborhood of the true solution w*, for sufficiently large k, as opposed to convergence in expectation. Note that by property (4), by controlling the redundancy factor β and the number of nodes k waited for in each iteration, one can control the approximation guarantee. For k = m and S designed properly (see Section 4), then κ = 1 and the optimum value of the original function f(w*) is reached.
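The behavior in Theorem 1 is easy to reproduce in a toy simulation (ours; the step size is rescaled to match the 1/(βn) normalization of g̃t used above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, m, k, beta = 200, 10, 10, 8, 2
eta = k / m
X, y = rng.normal(size=(n, p)), rng.normal(size=n)
S = rng.normal(scale=1.0 / np.sqrt(beta * n), size=(beta * n, n))
Xt, yt = S @ X, S @ y
parts = np.array_split(np.arange(beta * n), m)
M = np.linalg.eigvalsh(X.T @ X)[-1]
w = np.zeros(p)
for t in range(500):
    A = rng.choice(m, size=k, replace=False)        # arbitrary straggler pattern
    rows = np.concatenate([parts[i] for i in A])
    g = Xt[rows].T @ (Xt[rows] @ w - yt[rows]) / (eta * beta * n)
    w -= (0.5 * beta * n / M) * g                   # constant step size
w_star = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(w - w_star))                   # converges to a neighborhood
```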
Limited-memory-BFGS. Although L-BFGS is originally a batch method, requiring updates from all nodes, its stochastic variants have also been proposed recently [15, 3]. The key modification to ensure convergence is that the Hessian estimate must be computed via gradient components that are common in two consecutive iterations, i.e., from the nodes in At ∩ At−1. We adapt this technique to our scenario. For t > 0, define ut := wt − wt−1, and

$$r_t := \frac{m}{2\beta n\, |A_t \cap A_{t-1}|} \sum_{i \in A_t \cap A_{t-1}} \big(g_i(w_t) - g_i(w_{t-1})\big).$$

Then once the gradient terms {gi(wt)}_{i∈At} are collected, the descent direction is computed by dt = −Bt g̃t, where Bt is the inverse Hessian estimate for iteration t, which is computed by

$$B_t^{(\ell+1)} = V_{j_\ell}^\top B_t^{(\ell)} V_{j_\ell} + \rho_{j_\ell} u_{j_\ell} u_{j_\ell}^\top, \qquad \rho_k = \frac{1}{r_k^\top u_k}, \qquad V_k = I - \rho_k r_k u_k^\top,$$

with jℓ = t − τ̃ + ℓ, B_t^{(0)} = (r_t^⊤ u_t / r_t^⊤ r_t) I, and Bt := B_t^{(τ̃)} with τ̃ := min{t, τ}, where τ is the L-BFGS memory length. Once the descent direction dt is computed, the step size is determined through exact line search, using (3), with back-off factor ν = (1 − ε)/(1 + ε), where ε is as in (4).
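The inverse-Hessian recursion above can be written directly (a sketch, ours; production implementations would use the two-loop recursion instead of forming Bt explicitly):

```python
import numpy as np

def lbfgs_direction(g_tilde, u_hist, r_hist):
    """Build B_t from the most recent (u_j, r_j) pairs (already truncated to
    the memory length) and return d_t = -B_t g_tilde."""
    p = len(g_tilde)
    u_t, r_t = u_hist[-1], r_hist[-1]
    B = (r_t @ u_t) / (r_t @ r_t) * np.eye(p)       # B_t^(0)
    for u, r in zip(u_hist, r_hist):                # l = 0, ..., tau~ - 1
        rho = 1.0 / (r @ u)
        V = np.eye(p) - rho * np.outer(r, u)
        B = V.T @ B @ V + rho * np.outer(u, u)
    return -B @ g_tilde
```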
For our convergence result for L-BFGS, we need another assumption on the matrix S, in addition to (4). Defining S̄t = [Si]_{i∈At∩At−1} for t > 0, we assume that for some δ > 0,

$$\delta I \preceq \bar{S}_t^\top \bar{S}_t \tag{5}$$

for all t > 0. Note that this requires that one should wait for sufficiently many nodes to finish so that the overlap set At ∩ At−1 has more than a fraction β⁻¹ of all nodes, and thus the matrix S̄t can be full rank. This is satisfied if η ≥ 1/2 + 1/(2β) in the worst case, and under the assumption that node delays are i.i.d., it is satisfied in expectation if η ≥ 1/√β. However, this condition is only required for a worst-case analysis, and the algorithm may perform well in practice even when this condition is not satisfied.
Lemma 1. If (5) is satisfied, then there exist constants c₁, c₂ > 0 such that for all t, the inverse Hessian estimate Bt satisfies c₁I ⪯ Bt ⪯ c₂I.
The proof, provided in Appendix A of [12], is based on the well-known trace-determinant method.
Using Lemma 1, we can show the following result.
Theorem 2. Let ft = f(wt), where wt is computed using L-BFGS as described above, with gradient updates from machines At, and line search updates from machines Dt. If S satisfies (4) and (5), for all sequences of {At}, {Dt} with |At| = |Dt| = k,

$$f_t \le (\kappa \rho_2)^t f_0 + \frac{\kappa^2 (\kappa - \rho_2)}{1 - \kappa \rho_2}\, f(w^*),$$

where κ = (1 + ε)/(1 − ε), ρ₂ = 1 − 4μc₁c₂/(M(c₁ + c₂)²), and f₀ = f(w₀) is the initial objective value.
The proof is provided in Appendix B of [12]. Similar to Theorem 1, the proof is based on the observation that the solution of the effective problem at time t lies in a bounded set around the true solution w*. As in gradient descent, coding enables linear convergence deterministically, unlike the stochastic and multi-batch variants of L-BFGS [15, 3].
Generalizations. Although we focus on quadratic cost functions and two specific algorithms, our approach can potentially be generalized for objectives of the form ‖Xw − y‖² + h(w) for a simple convex function h, e.g., LASSO; or constrained optimization min_{w∈C} ‖Xw − y‖² (see [11]); as well as other first-order algorithms used for such problems, e.g., FISTA [2]. In the next section we demonstrate that the codes we consider have desirable properties that readily extend to such scenarios.
4 Code Design
We consider three classes of coding matrices: tight frames, fast transforms, and random matrices.
Tight frames. A unit-norm frame for $\mathbb{R}^n$ is a set of vectors $F = \{\phi_i\}_{i=1}^{\beta n}$ with $\|\phi_i\| = 1$, where
$\beta \geq 1$, such that there exist constants $\xi_2 \geq \xi_1 > 0$ such that, for any $u \in \mathbb{R}^n$,
$$\xi_1 \|u\|^2 \leq \sum_{i=1}^{\beta n} |\langle u, \phi_i \rangle|^2 \leq \xi_2 \|u\|^2.$$
The frame is tight if the above is satisfied with $\xi_1 = \xi_2$. In this case, it can be shown that the constants
are equal to the redundancy factor of the frame, i.e., $\xi_1 = \xi_2 = \beta$. If we form $S \in \mathbb{R}^{(\beta n) \times n}$ with rows
that are a tight frame, then we have $S^\top S = \beta I$, which ensures $\|Xw - y\|^2 = \frac{1}{\beta}\|SXw - Sy\|^2$.
Then for any solution $\tilde w^*$ to the encoded problem (with $k = m$),
$$\nabla \tilde f(\tilde w^*) = X^\top S^\top S (X \tilde w^* - y) = \beta X^\top (X \tilde w^* - y) = \beta \nabla f(\tilde w^*).$$
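As a small numerical check of these two properties, the sketch below builds a toy unit-norm tight frame with redundancy $\beta = 2$ by stacking two orthonormal bases (this is an assumed toy construction, not one of the ETFs evaluated later), and verifies $S^\top S = \beta I$ and the objective-preservation identity:

    import numpy as np
    from scipy.linalg import hadamard

    n = 8
    H = hadamard(n) / np.sqrt(n)          # orthonormal basis with unit-norm rows
    S = np.vstack([np.eye(n), H])          # union of two orthonormal bases: beta = 2
    assert np.allclose(S.T @ S, 2 * np.eye(n))   # S^T S = beta * I

    rng = np.random.default_rng(0)
    X, y, w = rng.normal(size=(n, 3)), rng.normal(size=n), rng.normal(size=3)
    lhs = np.linalg.norm(X @ w - y) ** 2
    rhs = np.linalg.norm(S @ X @ w - S @ y) ** 2 / 2   # divide by beta
    assert np.allclose(lhs, rhs)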
Figure 2: Sample spectrum of $S_A^\top S_A$ for various constructions with high redundancy and relatively small $k$ (normalized).
Figure 3: Sample spectrum of $S_A^\top S_A$ for various constructions with low redundancy and large $k$ (normalized).
Therefore, the solution to the encoded problem satisfies the optimality condition for the original
problem as well:
$$\nabla \tilde f(\tilde w^*) = 0 \;\Leftrightarrow\; \nabla f(\tilde w^*) = 0,$$
and if $f$ is also strongly convex, then $\tilde w^* = w^*$ is the unique solution. Note that since the computation
is coding-oblivious, this is not true in general for an arbitrary full-rank matrix, and this is, in addition
to property (4), a desired property of the encoding matrix. In fact, this equivalence extends beyond
smooth unconstrained optimization, in that
$$\left\langle \nabla \tilde f(\tilde w^*),\, w - \tilde w^* \right\rangle \geq 0 \;\;\forall w \in \mathcal{C} \quad\Leftrightarrow\quad \left\langle \nabla f(\tilde w^*),\, w - \tilde w^* \right\rangle \geq 0 \;\;\forall w \in \mathcal{C}$$
for any convex constraint set $\mathcal{C}$, as well as
$$-\nabla \tilde f(\tilde w^*) \in \partial h(\tilde w^*) \quad\Leftrightarrow\quad -\nabla f(\tilde w^*) \in \partial h(\tilde w^*)$$
for any non-smooth convex objective term $h(x)$, where $\partial h$ is the subdifferential of $h$. This means
that tight frames can be promising encoding matrix candidates for non-smooth and constrained
optimization too. In [11], it was shown that when $\{A_t\}$ is static, equiangular tight frames allow for a
close approximation of the solution for constrained problems.
A tight frame is equiangular if $|\langle \phi_i, \phi_j \rangle|$ is constant across all pairs $(i, j)$ with $i \neq j$.
Proposition 1 (Welch bound [23]). Let $F = \{\phi_i\}_{i=1}^{n\beta}$ be a tight frame. Then $\mu(F) \geq \sqrt{\frac{\beta - 1}{2n\beta - 1}}$.
Moreover, equality is satisfied if and only if $F$ is an equiangular tight frame.
Therefore, an ETF minimizes the correlation between its individual elements, making each submatrix
$S_A^\top S_A$ as close to orthogonal as possible, which is promising in light of property (4). We specifically
evaluate Paley [16, 10] and Hadamard ETFs [20] (not to be confused with the Hadamard matrix, which is
discussed next) in our experiments. We also discuss Steiner ETFs [8] in Appendix D of [12], which
enable efficient implementation.
Fast transforms. Another computationally efficient method for encoding is to use fast transforms:
the Fast Fourier Transform (FFT), if $S$ is chosen as a subsampled DFT matrix, and the Fast Walsh-Hadamard Transform (FWHT), if $S$ is chosen as a subsampled real Hadamard matrix. In particular,
one can insert rows of zeroes at random locations into the data pair $(X, y)$, and then take the FFT
or FWHT of each column of the augmented matrix. This is equivalent to a randomized Fourier or
Hadamard ensemble, which is known to satisfy the RIP with high probability [5].
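The row-padding construction can be sketched as follows; for clarity this uses a dense Hadamard matmul where a real FWHT would run in $O(n \log n)$ per column, and it assumes $\beta n$ is a power of two:

    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_encode(X, y, beta, rng):
        """Zero-pad rows of (X, y) at random positions to length beta*n,
        then apply a scaled Hadamard transform column-wise. The effective
        encoder S then satisfies S^T S = beta * I on the data rows."""
        n = X.shape[0]
        N = beta * n                          # must be a power of two
        pos = rng.choice(N, size=n, replace=False)
        Xa = np.zeros((N, X.shape[1])); Xa[pos] = X
        ya = np.zeros(N); ya[pos] = y
        H = hadamard(N) / np.sqrt(n)          # scaling gives S^T S = beta * I
        return H @ Xa, H @ ya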
Random matrices. A natural choice of encoding is using i.i.d. random matrices. Although such
random matrices do not have the computational advantages of fast transforms or the optimality-preservation property of tight frames, their eigenvalue behavior can be characterized analytically. In
particular, using the existing results on the eigenvalue scaling of large i.i.d. Gaussian matrices [9, 19]
and the union bound, it can be shown that
$$\Pr\left( \max_{A:|A|=k} \lambda_{\max}\left( \tfrac{1}{\beta\eta n} S_A^\top S_A \right) > \left(1 + \tfrac{1}{\sqrt{\beta\eta}}\right)^2 \right) \to 0, \tag{6}$$
$$\Pr\left( \min_{A:|A|=k} \lambda_{\min}\left( \tfrac{1}{\beta\eta n} S_A^\top S_A \right) < \left(1 - \tfrac{1}{\sqrt{\beta\eta}}\right)^2 \right) \to 0, \tag{7}$$
Figure 4: Left: Sample evolution of uncoded, replication, and Hadamard (FWHT)-coded cases, for
$k = 12$, $m = 32$. Right: Runtimes of the schemes for different values of $\eta$, for the same number of
iterations for each scheme. Note that this essentially captures the delay profile of the network, and
does not reflect the relative convergence rates of the different methods.
as $n \to \infty$, where $\lambda_i$ denotes the $i$th eigenvalue. Hence, for sufficiently large redundancy and
problem dimension, i.i.d. random matrices are good candidates for encoding as well. However, for
finite $\beta$, even if $k = m$, in general for this encoding scheme the optimum of the original problem is
not recovered exactly.
Property (4) and redundancy requirements. Using the analytical bounds (6)-(7) on i.i.d. Gaussian matrices, one can see that such matrices satisfy (4) with $\epsilon = O\!\left(\frac{1}{\sqrt{\beta\eta}}\right)$, independent of the problem
dimensions or the number of nodes $m$. Although we do not have tight eigenvalue bounds for subsampled
ETFs, numerical evidence (Figure 2) suggests that they may satisfy (4) with smaller $\epsilon$ than random
matrices, and thus we believe that the required redundancy in practice is even smaller for ETFs.
Note that our theoretical results focus on the extreme eigenvalues due to a worst-case analysis; in
practice, most of the energy of the gradient will be on the eigenspace associated with the bulk of the
eigenvalues, which the following proposition suggests can be mostly 1 (also see Figure 3). This
means that even if (4) is not satisfied, the gradient (and the solution) can be closely approximated for a
modest redundancy, such as $\beta = 2$. The following result is a consequence of the Cauchy interlacing
theorem and the definition of tight frames.
Proposition 2. If the rows of $S$ are chosen to form an ETF with redundancy $\beta$, then for $\eta \geq 1 - \frac{1}{\beta}$,
the matrix $\frac{1}{\beta} S_A^\top S_A$ has at least $n\left(1 - \beta(1-\eta)\right)$ eigenvalues equal to 1.
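The concentration in (6)-(7) is easy to observe empirically. The sketch below draws an i.i.d. Gaussian encoder, simulates one realized set $A_t$ (assuming, for illustration, that rows are assigned to nodes in contiguous blocks), and compares the extreme eigenvalues of the normalized Gram matrix against the asymptotic edges:

    import numpy as np

    rng = np.random.default_rng(0)
    n, beta, eta, m = 200, 2, 0.75, 8
    rows_per_node = beta * n // m
    k = int(eta * m)                              # nodes waited for

    S = rng.normal(size=(beta * n, n))            # i.i.d. N(0, 1) entries
    nodes = rng.choice(m, size=k, replace=False)  # a realized A_t
    rows = np.concatenate([np.arange(i * rows_per_node, (i + 1) * rows_per_node)
                           for i in nodes])
    eig = np.linalg.eigvalsh(S[rows].T @ S[rows] / (beta * eta * n))
    lo, hi = (1 - 1/np.sqrt(beta*eta))**2, (1 + 1/np.sqrt(beta*eta))**2
    print(eig.min(), eig.max(), "vs asymptotic edges", (lo, hi))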
5 Numerical Results
Ridge regression with synthetic data on an AWS EC2 cluster. We generate the elements of the matrix
$X$ i.i.d. $\sim N(0, 1)$ and the elements of $y$ i.i.d. $\sim N(0, p)$, for dimensions $(n, p) = (4096, 6000)$, and
solve the problem
$$\min_w \; \frac{1}{2\beta n}\left\| \tilde X w - \tilde y \right\|^2 + \frac{\lambda}{2}\|w\|^2,$$
for regularization parameter $\lambda = 0.05$. We
evaluate a column-subsampled Hadamard matrix with redundancy $\beta = 2$ (encoded using FWHT
for fast encoding), data replication with $\beta = 2$, and uncoded schemes. We implement distributed
L-BFGS as described in Section 3 on an Amazon EC2 cluster using the mpi4py Python package,
over $m = 32$ m1.small worker node instances and a single c3.8xlarge central server instance.
We assume the central server encodes and sends the data variables to the worker nodes (see Appendix
D of [12] for a discussion of how to implement this more efficiently).
Figure 4 shows the result of our experiments, which are aggregated over 20 trials. As baselines,
we consider the uncoded scheme, as well as a replication scheme, where each uncoded partition is
replicated $\beta = 2$ times across nodes, and the server uses the faster copy in each iteration. It can be
seen from the right figure that one can speed up computation by reducing $\eta$ from 1 to, for instance,
0.375, resulting in more than a 40% reduction in the runtime. Note that in this case, uncoded L-BFGS
fails to converge, whereas the Hadamard-coded case stably converges. We also observe that the data
replication scheme converges on average, but in the worst case the convergence is much less smooth,
since the performance may deteriorate if both copies of a partition are delayed.
Figure 5: Test RMSE for $m = 8$ (left) and $m = 24$ (right) nodes, where the server waits for $k = m/8$ (top) and $k = m/2$ (bottom) responses. "Perfect" refers to the case where $k = m$.
Figure 6: Total runtime with $m = 8$ and $m = 24$ nodes for different values of $k$, under a fixed 100 iterations for each scheme.
Matrix factorization on the MovieLens 1M dataset. We next apply matrix factorization on the
MovieLens-1M dataset [18] for the movie recommendation task. We are given $R$, a sparse matrix of movie ratings 1-5, of dimension #users $\times$ #movies, where $R_{ij}$ is specified if user $i$ has
rated movie $j$. We randomly withhold 20% of these ratings to form an 80/20 train/test split. The
goal is to recover user vectors $x_i \in \mathbb{R}^p$ and movie vectors $y_j \in \mathbb{R}^p$ (where $p$ is the embedding
dimension) such that $R_{ij} \approx x_i^\top y_j + u_i + v_j + b$, where $u_i$, $v_j$, and $b$ are user, movie, and global
biases, respectively. The optimization problem is given by
$$\min_{x_i, y_j, u_i, v_j} \sum_{i,j:\,\text{observed}} \left(R_{ij} - u_i - v_j - x_i^\top y_j - b\right)^2 + \lambda \left( \sum_i \|x_i\|_2^2 + \|u\|_2^2 + \sum_j \|y_j\|_2^2 + \|v\|_2^2 \right). \tag{8}$$
We choose $\beta = 3$, $p = 15$, and $\lambda = 10$, which achieves a test RMSE of 0.861, close to the current best
test RMSE on this dataset using matrix factorization².
Problem (8) is often solved using alternating minimization, minimizing first over all $(x_i, u_i)$ and then over
all $(y_j, v_j)$, in repetition. Each such step further decomposes by row and column, made smaller by the
sparsity of $R$. To solve for $(x_i, u_i)$, we first extract $I_i = \{j \mid R_{ij} \text{ is observed}\}$, and solve the resulting
sequence of regularized least squares problems in the variables $w_i = [x_i^\top, u_i]^\top$ distributedly using
coded L-BFGS; we then repeat for $w_j = [y_j^\top, v_j]^\top$, for all $j$. As in the first experiment, distributed coded
L-BFGS is solved by having the master node encode the data locally and distribute the encoded
data to the worker nodes (Appendix D of [12] discusses how to implement this step more efficiently).
The overhead associated with this initial step is included in the overall runtime in Figure 6.
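To illustrate how (8) decomposes, the following hypothetical helper assembles the per-user ridge subproblem; the names and data layout are assumptions for illustration, and in the paper each such subproblem is handed to coded distributed L-BFGS rather than solved in closed form:

    import numpy as np

    def user_subproblem(R_obs, Y, v, b, i, lam):
        """Ridge subproblem for user i: rows [y_j^T, 1] for each observed
        movie j, targets the de-biased ratings. R_obs maps (i, j) -> rating;
        Y has shape (#movies, p). Returns (A, t) for
        min_w ||A w - t||^2 + lam ||w||^2 with w = [x_i^T, u_i]^T."""
        js = [j for (ii, j) in R_obs if ii == i]
        A = np.array([np.append(Y[j], 1.0) for j in js])
        t = np.array([R_obs[(i, j)] - v[j] - b for j in js])
        return A, t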
The MovieLens experiment is run on a single 32-core machine with 256 GB RAM. In order to simulate
network latency, an artificial delay drawn from an exponential distribution with mean 10 ms is imposed each time a worker completes a
task. Small problem instances ($n < 500$) are solved locally at the central server, using the built-in
function numpy.linalg.solve. Additionally, parallelization is only done for the ridge regression
instances, in order to isolate speedup gains in the L-BFGS distribution. To reduce overhead, we create
a bank of encoding matrices $\{S_n\}$ for the Paley ETF and Hadamard ETF, for $n = 100, 200, \ldots, 3500$,
and then, given a problem instance, subsample the columns of the appropriate matrix $S_n$ to match
the dimensions. Overall, we observe that the encoding overhead is amortized by the speed-up of the
distributed optimization.
Figure 5 gives the final performance of our distributed L-BFGS for various encoding schemes, for
each of the 5 epochs, which shows that coded schemes are most robust for small $k$. A full table of
results is given in Appendix C of [12].
² http://www.mymedialite.net/examples/datasets.html
Acknowledgments
This work was supported in part by NSF grants 1314937 and 1423271.
References
[1] G. Ananthanarayanan, A. Ghodsi, S. Shenker, and I. Stoica. Effective straggler mitigation: Attack of the clones. In NSDI, volume 13, pages 185-198, 2013.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] A. S. Berahas, J. Nocedal, and M. Takáč. A multi-batch L-BFGS method for machine learning. In Advances in Neural Information Processing Systems, pages 1055-1063, 2016.
[4] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005.
[5] E. J. Candes and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory, 52(12):5406-5425, 2006.
[6] P. Drineas, M. W. Mahoney, S. Muthukrishnan, and T. Sarlós. Faster least squares approximation. Numerische Mathematik, 117(2):219-249, 2011.
[7] S. Dutta, V. Cadambe, and P. Grover. Short-dot: Computing large linear transforms distributedly using coded short dot products. In Advances in Neural Information Processing Systems, pages 2092-2100, 2016.
[8] M. Fickus, D. G. Mixon, and J. C. Tremain. Steiner equiangular tight frames. Linear Algebra and its Applications, 436(5):1014-1027, 2012.
[9] S. Geman. A limit theorem for the norm of random matrices. The Annals of Probability, pages 252-261, 1980.
[10] J. Goethals and J. J. Seidel. Orthogonal matrices with zero diagonal. Canad. J. Math., 1967.
[11] C. Karakus, Y. Sun, and S. Diggavi. Encoded distributed optimization. In 2017 IEEE International Symposium on Information Theory (ISIT), pages 2890-2894. IEEE, 2017.
[12] C. Karakus, Y. Sun, S. Diggavi, and W. Yin. Straggler mitigation in distributed optimization through data encoding. Arxiv.org, 2017.
[13] K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran. Speeding up distributed machine learning using codes. In Information Theory (ISIT), 2016 IEEE International Symposium on, pages 1143-1147. IEEE, 2016.
[14] M. W. Mahoney et al. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123-224, 2011.
[15] A. Mokhtari and A. Ribeiro. Global convergence of online limited memory BFGS. Journal of Machine Learning Research, 16:3151-3181, 2015.
[16] R. E. Paley. On orthogonal matrices. Studies in Applied Mathematics, 12(1-4):311-320, 1933.
[17] M. Pilanci and M. J. Wainwright. Randomized sketches of convex programs with sharp guarantees. IEEE Transactions on Information Theory, 61(9):5096-5115, 2015.
[18] J. Riedl and J. Konstan. MovieLens dataset, 1998.
[19] J. W. Silverstein. The smallest eigenvalue of a large dimensional Wishart matrix. The Annals of Probability, pages 1364-1368, 1985.
[20] F. Szöllősi. Complex Hadamard matrices and equiangular tight frames. Linear Algebra and its Applications, 438(4):1962-1967, 2013.
[21] R. Tandon, Q. Lei, A. G. Dimakis, and N. Karampatziakis. Gradient coding. ML Systems Workshop (MLSys), NIPS, 2016.
[22] D. Wang, G. Joshi, and G. Wornell. Using straggler replication to reduce latency in large-scale parallel computing. ACM SIGMETRICS Performance Evaluation Review, 43(3):7-11, 2015.
[23] L. Welch. Lower bounds on the maximum cross correlation of signals (corresp.). IEEE Transactions on Information Theory, 20(3):397-399, 1974.
[24] N. J. Yadwadkar, B. Hariharan, J. Gonzalez, and R. H. Katz. Multi-task learning for straggler avoiding predictive job scheduling. Journal of Machine Learning Research, 17(4):1-37, 2016.
Multi-View Decision Processes: The Helper-AI Problem

Christos Dimitrakakis, Chalmers University of Technology & University of Lille
David C. Parkes, Harvard University
Goran Radanovic, Harvard University
Paul Tylkin, Harvard University
Abstract
We consider a two-player sequential game in which agents have the same reward
function but may disagree on the transition probabilities of an underlying Markovian model of the world. By committing to play a specific policy, the agent with
the correct model can steer the behavior of the other agent, and seek to improve
utility. We model this setting as a multi-view decision process, which we use to formally analyze the positive effect of steering policies. Furthermore, we develop an
algorithm for computing the agents' achievable joint policy, and we experimentally
show that it can lead to a large utility increase when the agents' models diverge.
1 Introduction
In the past decade, we have been witnessing the fulfillment of Licklider's profound vision on AI [Licklider, 1960]:
Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers.
Needless to say, such a collaboration between humans and AIs is natural in many real-world AI
problems. As a motivating example, consider the case of autonomous vehicles, where a human driver
can override the AI driver if needed. With advances in AI, the human will benefit most if she allows
the AI agent to assume control and drive optimally. However, this might not be achievable: due to
human behavioral biases, such as over-weighting the importance of rare events, the human might
incorrectly override the AI. In this way, the misaligned models of the two drivers can lead to a decrease
in utility. In general, this problem may occur whenever two agents disagree on their view of reality,
even if they cooperate to achieve a common goal.
Formalizing this setting leads to a class of sequential multi-agent decision problems that extend
stochastic games. While in a stochastic game there is an underlying transition kernel to which all
agents (players) agree, the same is not necessarily true in the described scenario. Each agent may
have a different transition model. We focus on a leader-follower setting in which the leader commits
to a policy that the follower then best responds to, according to the follower's model. Mapped to our
motivating example, this would mean that the AI driver is aware of human behavioral biases and
takes them into account when deciding how to drive.
To incorporate both sequential and stochastic aspects, we model this as a multi-view decision process.
Our multi-view decision process is based on an MDP model, with two, possibly different, transition
kernels. One of the agents, hereafter denoted P1, is assumed to have the correct transition kernel
and is chosen to be the leader of the Stackelberg game: it commits to a policy that the second agent
(P2) best responds to according to its own model. The agents have the same reward function, and
are in this sense cooperative. In an application setting, while the human (P2) may not be a planner,
we motivate our set-up as modeling the endpoint of an adaptive process that leads P2 to adopt a
best response to the policy of P1.
Using the multi-view decision process, we analyze the effect of P2's imperfect model on the achieved
utility. We place an upper bound on the utility loss due to this, and also provide a lower bound on
how much P1 gains by knowing P2's model. One of our main analysis tools is the amount of
influence an agent has, i.e., how much its actions affect the transition probabilities, both according
to its own model and according to the model of the other agent. We also develop an algorithm,
extending backwards induction for simultaneous-move sequential games [c.f. Bošanský et al., 2016],
to compute a pair of policies that constitute a subgame perfect equilibrium.
In our experiments, we introduce intervention games as a way to construct example scenarios. In
an intervention game, an AI and a human share control of a process, and the human can intervene
to override the AI's actions but suffers some cost in doing so. This allows us to derive a multi-view
process from any single-agent MDP. We consider two domains: first, the intervention game variant of
the shelter-food game introduced by Guo et al. [2013], as well as an autonomous driving problem
that we introduce here. Our results show that the proposed approach provides a large increase in
utility in each domain, thus overcoming the deficiencies of P2's model, when the latter model is
known to the AI.
1.1 Related work
Environment design [Zhang et al., 2009, Zhang and Parkes, 2008] is a related problem, where a
first agent seeks to modulate the behavior of a second agent. However, the interaction between
agents occurs through finding a good modification of the second agent's reward function: the AI
observes a human performing a task, and uses inverse reinforcement learning [Ng et al., 2000] to
estimate the human's reward function. Then it can assign extrinsic reward to different states in order
to improve the human's policy. A similar problem in single-agent reinforcement learning is how
to use internal rewards to improve the performance of a computationally bounded, reinforcement
learning agent [Sorg et al., 2010]. For example, even a myopic agent can maximize expected utility
over a long time horizon if augmented with appropriately designed internal rewards. Our model
differs from these prior works, in that the interaction between a "helper agent" and a second agent is
through taking actions in the same environment as the second agent.
In cooperative inverse reinforcement learning [Hadfield-Menell et al., 2016], an AI wants to cooperate
with a human but does not initially understand the task. While their framework allows for simultaneous
moves of the AI and the human, they only apply it to two-stage games, where the human demonstrates
a policy in the first stage and the AI imitates in the second stage. They show that the human should
take into account the AI's best response when providing demonstrations, and develop an algorithm
for computing an appropriate demonstration policy. Our focus is on joint actions in a multi-period,
uncertain environment, rather than teaching. The model of Amir et al. [2016] is also different, in that
it considers the problem of how a teacher can optimally give advice to a sub-optimal learner, and
is thus focused on communication and adaptation rather than interaction through actions. Finally,
Elmalech et al. [2015] consider an advice-giving AI in single-shot games, where the human has an
incorrect model. They experimentally find that when the AI heuristically models human expectations
when giving advice, its performance is improved. We find that this also holds in our more general
setting.
We cannot use standard methods for computing optimal strategies in stochastic games [Bošanský
et al., 2015, Zinkevich et al., 2005], as the two agents have different models of the transitions between
states. On the other extreme, a very general formalism to represent agent beliefs, such as that of
Gal and Pfeffer [2008], is not well suited, because we have a Stackelberg setting and the problem of
the follower is standard. Our approach is to extend backwards induction [c.f. Bošanský et al., 2016,
Sec. 4] to the case of misaligned models in order to obtain a subgame perfect policy for the AI.
Paper organization. Section 2 formalizes the setting and its basic properties, and provides a lower
bound on the improvement P1 obtains when P2's model is known. Section 3 introduces a backwards
induction algorithm, while Section 4 discusses the experimental results. We conclude with Section 5.
Finally, Appendix A collects all the proofs, additional technical material, and experimental details.
2 The Setting and Basic Properties
We consider a two-agent sequential stochastic game, with two agents P1, P2, who disagree on the
underlying model of the world, with the $i$-th agent's model being $\mu_i$, but share the same reward
function. More formally,
Definition 1 (Multi-view decision process (MVDP)). A multi-view decision process $G = \langle \mathcal{S}, \mathcal{A}, \mu_1, \mu_2, \sigma, \rho, \gamma \rangle$ is a game between two agents, P1, P2, who share the same reward
function. The game has a state space $\mathcal{S}$, with $S \triangleq |\mathcal{S}|$, action space $\mathcal{A} = \prod_i \mathcal{A}_i$, with $A \triangleq |\mathcal{A}|$,
starting state distribution $\sigma$, transition kernels $\mu_1, \mu_2$, reward function¹ $\rho : \mathcal{S} \to [0, 1]$, and discount factor
$\gamma \in [0, 1]$.
At time $t$, the agents observe the state $s_t$, take a joint action $a_t = (a_{t,1}, a_{t,2})$ and receive reward
$r_t = \rho(s_t)$. However, the two agents may have a different view of the game, with agent $i$ modelling
the transition probabilities of the process as $\mu_i(s_{t+1} \mid s_t, a_t)$ for the probability of the next state
$s_{t+1}$ given the current state $s_t$ and joint action $a_t$. Each agent's actions are drawn from a policy $\pi_i$,
which may be an arbitrary behavioral policy, fixed at the start of the game. For a given policy pair
$\pi = (\pi_1, \pi_2)$, with $\pi_i \in \Pi_i$ and $\pi \in \Pi \triangleq \prod_i \Pi_i$, the respective payoff from the point of view of the $i$-th
agent, $u_i : \Pi \to \mathbb{R}$, is defined to be:
$$u_i(\pi) = \mathbb{E}^{\pi}_{\mu_i}\left[U \mid s_1 \sim \sigma\right], \qquad U \triangleq \sum_{t=1}^{T} \gamma^{t-1} \rho(s_t). \tag{2.1}$$
For simplicity of presentation, we define the reward $r_t = \rho(s_t)$ at time $t$ as a function of the state only,
although an extension to state-action reward functions is trivial. The reward, as well as the
utility $U$ (the discounted sum of rewards over time), are the same for both agents for a given sequence
of states. However, the payoff for agent $i$ is their expected utility under the model $\mu_i$, and can be
different for each agent.
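A minimal tabular sketch of this setup is given below; it assumes both models are stored as dense arrays, and, for brevity, it evaluates the infinite-horizon discounted value rather than the finite-horizon sum in (2.1):

    import numpy as np

    class MVDP:
        """mu1, mu2: transition models of shape (S, A1, A2, S);
        rho: per-state reward (S,); gamma: discount; sigma: start dist."""
        def __init__(self, mu1, mu2, rho, gamma, sigma):
            self.mu1, self.mu2 = mu1, mu2
            self.rho, self.gamma, self.sigma = rho, gamma, sigma

    def policy_value(mu, pi1, pi2, rho, gamma):
        """u_i(pi) under model mu for Markov policies pi1 (S, A1), pi2 (S, A2):
        solve (I - gamma P_pi) V = rho for the induced state chain P_pi."""
        S = rho.shape[0]
        P = np.einsum('ij,ik,ijkl->il', pi1, pi2, mu)
        return np.linalg.solve(np.eye(S) - gamma * P, rho)

The payoff in Definition 1 is then sigma @ policy_value(mu_i, pi1, pi2, rho, gamma) for agent i's model mu_i.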
Any two-player stochastic game can be cast into an MVDP:
Lemma 1. Any two-player general-sum stochastic game (SG) can be reduced to a two-player MVDP
in polynomial time and space.
The proof of Lemma 1 is in Appendix A.
2.1 Stackelberg setting
We consider optimal policies from the point of view of P1, who is trying to assist a misguided
P2. For simplicity, we restrict our attention to the Stackelberg setting, i.e., where P1 commits to
a specific policy $\pi_1$ at the start of the game. This simplifies the problem for P2, who can play the
optimal response according to the agent's model of the world. We begin by defining the (potentially
unachievable) optimal joint policy, where both policies are chosen to maximize the same utility
function:
Definition 2 (Optimal joint policy). A joint policy $\hat\pi$ is optimal under $\sigma$ and $\mu_1$ iff $u_1(\hat\pi) \geq u_1(\pi)$,
$\forall \pi \in \Pi$. We furthermore use $\hat u_1 \triangleq u_1(\hat\pi)$ to refer to the value of the jointly optimal policy.
This value may not be achievable, even though the two agents share a reward function, as the second
agent's model does not agree with the first agent's, and so their expected utilities are different. To
model this, we define the Stackelberg utility of policy $\pi_1$ for the first agent as:
$$u^{St}_1(\pi_1) \triangleq u_1\!\left(\pi_1, \pi_2^B(\pi_1)\right), \qquad \pi_2^B(\pi_1) = \arg\max_{\pi_2 \in \Pi_2} u_2(\pi_1, \pi_2), \tag{2.2}$$
i.e., the value of the policy when the second agent best responds to agent one's policy under the
second agent's model.² The following defines the highest utility that P1 can achieve.
Definition 3 (Optimal policy). The optimal policy for P1, denoted by $\pi_1^*$, is the one maximizing the
Stackelberg utility, i.e., $u^{St}_1(\pi_1^*) \geq u^{St}_1(\pi_1)$, $\forall \pi_1 \in \Pi_1$, and we use $u_1^* \triangleq u^{St}_1(\pi_1^*)$ to refer to the value of
this optimal policy.
¹ For simplicity we consider state-dependent rewards bounded in [0, 1]. Our results are easily generalizable to
$\rho : \mathcal{S} \times \mathcal{A} \to [0, 1]$, through scaling by a factor of $B$ for any reward function in $[b, b + B]$.
² If there is no unique best response, we define the utility in terms of the worst-case best response.
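For a fixed Markov leader policy, P2's best response under its own model can be computed by finite-horizon backwards induction; a minimal sketch, assuming a stationary pi1 (time-indexed commitments are handled by passing a per-step pi1):

    import numpy as np

    def best_response(mu2, pi1, rho, gamma, T):
        """pi2^B(pi1) under P2's model mu2 (shape (S, A1, A2, S)),
        for a fixed Markov leader policy pi1 (shape (S, A1))."""
        S, A1, A2, _ = mu2.shape
        V = np.zeros(S)
        pi2 = np.zeros((T, S), dtype=int)
        for t in reversed(range(T)):
            # expected continuation for each (s, a2), averaging over pi1
            Q2 = np.einsum('ij,ijkl,l->ik', pi1, mu2, V)   # shape (S, A2)
            pi2[t] = np.argmax(Q2, axis=1)
            V = rho + gamma * Q2[np.arange(S), pi2[t]]
        return pi2, V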
In the remainder of the technical discussion, we will characterize P1's policies in terms of how much
worse they are than the jointly optimal policy, as well as how much better they can be than the policy
that blithely assumes that P2 shares the same model.
We start with some observations about the nature of the game when one agent fixes its policy, and we
argue how the difference between the models of the two agents affects the utility functions. We then
combine this with a definition of influence to obtain bounds on the loss due to the difference in the
models.
When agent $i$ fixes a Markov policy $\pi_i$, the game is an MDP for agent $j$. However, if agent $i$'s policy
is not Markovian, the resulting game is not an MDP on the original state space. We show that if P1
acts as if P2 has the correct transition kernel, then the resulting joint policy has value bounded by
the $L_1$ norm between the true kernel and agent 2's actual kernel. We begin by establishing a simple
inequality to show that knowledge of the model $\mu_2$ is beneficial for P1.
Lemma 2. For any MVDP, the utility of the jointly optimal policy is greater than that of the
(achievable) optimal policy, which is in turn greater than that of the policy that assumes that $\mu_2 = \mu_1$:
$$u_1(\hat\pi) \geq u^{St}_1(\pi_1^*) \geq u^{St}_1(\hat\pi_1). \tag{2.3}$$
Proof. The first inequality follows from the definition of the jointly optimal policy and $u^{St}_1$. For the
second inequality, note that the middle term is a maximizer for the right-hand side.
Consequently, P1 must be able to do (weakly) better if it knows $\mu_2$, compared to if it just assumes
that $\mu_2 = \mu_1$. However, this does not tell us how much (if any) improvement we can obtain. Our idea
is to see what policy $\pi_1$ we would need to play in order to make P2 play $\hat\pi_2$, and measure the distance of
this policy from $\hat\pi_1$. To obtain a useful bound, we need to have a measure of how much P1 must
deviate from $\hat\pi_1$ in order for P2 to play $\hat\pi_2$. For this, we define the notion of influence. This will
capture the amount by which an agent $i$ can affect the game in the eyes of agent $j$. In particular, it is
the maximal amount by which agent $i$ can affect the transition distribution of agent $j$ by changing
$i$'s action at each state $s$:
Definition 4 (Influence). The influence of agent $i$ on the transition distribution of model $\mu_j$ is defined
as the vector:
$$\mathcal{I}_{i,j}(s) \triangleq \max_{a_{t,-i}} \max_{a_{t,i},\, a'_{t,i}} \left\| \mu_j(s_{t+1} \mid s_t = s, a_{t,i}, a_{t,-i}) - \mu_j(s_{t+1} \mid s_t = s, a'_{t,i}, a_{t,-i}) \right\|_1. \tag{2.4}$$
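For tabular models, (2.4) can be evaluated directly; a minimal sketch, assuming the same (S, A1, A2, S) layout as before:

    import numpy as np

    def influence(mu, i):
        """I_{i,j}(s): largest L1 change agent i can cause in the next-state
        distribution of model mu, maximized over the other agent's action."""
        if i == 2:                        # put agent i's actions on axis 1
            mu = mu.transpose(0, 2, 1, 3)
        S, Ai, Aother, _ = mu.shape
        I = np.zeros(S)
        for s in range(S):
            for b in range(Aother):       # a_{t,-i}
                P = mu[s, :, b, :]        # rows: next-state dists per a_{t,i}
                diffs = np.abs(P[:, None, :] - P[None, :, :]).sum(-1)
                I[s] = max(I[s], diffs.max())
        return I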
Thus, $\mathcal{I}_{1,1}$ describes the actual influence of P1 on the transition probabilities, while $\mathcal{I}_{1,2}$ describes
the perceived influence of P1 by P2. We will use influence to define a $\mu$-dependent distance
between policies, capturing the effect of an altered policy on the model:
Definition 5 (Policy distance). The distance between policies $\pi_i, \pi'_i$ under model $\mu_j$ is:
$$\|\pi_i - \pi'_i\|_{\mu_j} \triangleq \max_{s \in \mathcal{S}} \|\pi_i(\cdot \mid s) - \pi'_i(\cdot \mid s)\|_1 \, \mathcal{I}_{i,j}(s). \tag{2.5}$$
These two definitions result in the following Lipschitz condition on the utility function, whose proof
can be found in Appendix A.
Lemma 3. For any fixed $\pi_2$, and any $\pi_1, \pi'_1$:
$$u_i(\pi_1, \pi_2) \leq u_i(\pi'_1, \pi_2) + \|\pi_1 - \pi'_1\|_{\mu_i} \frac{\gamma}{(1-\gamma)^2},$$
with a symmetric result holding for any fixed policy $\pi_1$ and any pair $\pi_2, \pi'_2$.
Lemma 3 bounds the change in utility due to a change in policy by P1 with respect to $i$'s payoff. As
shall be seen in the next section, it allows us to analyze how close the utility we can achieve comes
to that of the jointly optimal policy, and how much can be gained by not naively assuming that the
model of P2 is the same.
2.2 Optimality
In this section, we illuminate the relationship between different types of policies. First, we show that
if P1 simply assumes $\mu_2 = \mu_1$, it only suffers a bounded loss relative to the jointly optimal policy.
Subsequently, we prove that knowing $\mu_2$ allows P1 to find an improved policy.
Lemma 4. Consider the optimal policy $\hat\pi_1$ for the modified game $\hat G = \langle \mathcal{S}, \mathcal{A}, \mu_1, \mu_1, \sigma, \rho, \gamma \rangle$
in which P2's model is correct. Then $\hat\pi_1$ is Markov and achieves utility $\hat u$ in $\hat G$, while its utility in $G$ is:
$$u^{St}_1(\hat\pi_1) \geq \hat u - \frac{2\gamma\,\|\mu_1 - \mu_2\|_1}{(1-\gamma)^2}, \qquad \|\mu_1 - \mu_2\|_1 \triangleq \max_{s_t, a_t} \left\| \mu_1(s_{t+1} \mid s_t, a_t) - \mu_2(s_{t+1} \mid s_t, a_t) \right\|_1.$$
As this bound depends on the maximum over all state-action pairs, we refine it in terms of the
influence of each agent's actions. This also allows us to measure the loss in terms of the difference in
P2's actual and desired response, rather than the difference between the two models, which can be
much larger.
Corollary 1. If P2's best response to $\hat\pi_1$ is $\pi_2^B(\hat\pi_1) \neq \hat\pi_2$, then our loss relative to the jointly optimal
policy is bounded by
$$u_1(\hat\pi_1, \hat\pi_2) - u_1\!\left(\hat\pi_1, \pi_2^B(\hat\pi_1)\right) \leq \left\|\pi_2^B(\hat\pi_1) - \hat\pi_2\right\|_{\mu_1} \frac{\gamma}{(1-\gamma)^2}.$$
Proof. This follows from Lemma 3 by fixing $\hat\pi_1$ for the policy pair $\pi_2^B(\hat\pi_1), \hat\pi_2$ under $\mu_1$.
While the previous corollary gave us an upper bound on the loss we incur if we ignore the beliefs of
P2, we can bound the loss of the optimal Stackelberg policy in the same way:
Corollary 2. The difference between the optimal utility $u_1(\hat\pi_1, \hat\pi_2)$ and the optimal Stackelberg utility
$u^{St}_1(\pi_1^*)$ is bounded by
$$u_1(\hat\pi_1, \hat\pi_2) - u^{St}_1(\pi_1^*) \leq \left\|\pi_2^B(\hat\pi_1) - \hat\pi_2\right\|_{\mu_1} \frac{\gamma}{(1-\gamma)^2}.$$
Proof. The result follows directly from Corollary 1 and Lemma 2.
This bound is not very informative by itself, as it does not suggest an advantage for the optimal
Stackelberg policy. Instead, we can use Lemma 3 to lower bound the increase in utility obtained
relative to just playing the optimistic policy $\hat\pi_1$. We start by observing that when P2 responds with
some $\tilde\pi_2$ to $\hat\pi_1$, P1 could improve upon this by playing $\tilde\pi_1 = \pi_1^B(\tilde\pi_2)$, the best response to $\tilde\pi_2$, if
P1 could somehow force P2 to stick to $\tilde\pi_2$. We can define
$$\Delta \triangleq u_1(\tilde\pi_1, \tilde\pi_2) - u_1(\hat\pi_1, \tilde\pi_2) \tag{2.6}$$
to be the potential advantage from switching to $\tilde\pi_1$. Theorem 1 characterizes how close to this
advantage P1 can get by playing a stochastic policy $\pi'_1(a \mid s) \triangleq \alpha \tilde\pi_1(a \mid s) + (1 - \alpha)\hat\pi_1(a \mid s)$,
while ensuring that P2 sticks to $\tilde\pi_2$.
Theorem 1 (A sufficient condition for an advantage over the naive policy). Let $\tilde\pi_2 = \pi_2^B(\hat\pi_1)$ be the
response of P2 to the optimistic policy $\hat\pi_1$ and assume $\Delta > 0$. Then we can obtain an advantage of
at least:
$$\Delta - \frac{\gamma\,\|\tilde\pi_1 - \hat\pi_1\|_{\mu_1}}{(1-\gamma)^2} + \frac{\delta}{2}\,\frac{\|\tilde\pi_1 - \hat\pi_1\|_{\mu_1}}{\|\tilde\pi_1 - \hat\pi_1\|_{\mu_2}} \tag{2.7}$$
where $\delta \triangleq u_2(\hat\pi_1, \tilde\pi_2) - \max_{\pi_2 \neq \tilde\pi_2} u_2(\hat\pi_1, \pi_2)$ is the gap between $\tilde\pi_2$ and all other deterministic
policies of P2 when P1 plays $\hat\pi_1$.
We have shown that knowledge of $\mu_2$ allows P1 to obtain improved policies compared to simply
assuming $\mu_2 = \mu_1$, and that this improvement depends on both the real and perceived effects of a
change in P1's policy. In the next section we develop an efficient dynamic programming algorithm
for finding a good policy for P1.
3 Algorithms for the Stackelberg Setting
In the Stackelberg setting, we assume that P1 commits to a policy $\pi_1$, and this policy is observed
by P2. Because of this, it is sufficient for P2 to use a Markov policy, and this can be calculated in
polynomial time in the number of states and actions.
However, there is a polynomial reduction from stochastic games to MVDPs (Lemma 1), and since
Letchford et al. [2012] show that computing optimal commitment strategies is NP-hard, the
planning problem for MVDPs is also NP-hard. Another difficulty that occurs is that dominating
policies in the MDP sense may not exist in MVDPs.
Definition 6 (Dominating policies). A dominating policy $\pi$ satisfies $V^\pi(s) \geq V^{\pi'}(s)$, $\forall s \in \mathcal{S}$,
where $V^\pi(s) = \mathbb{E}^\pi(U \mid s_0 = s)$.
Dominating policies have the nice property that they are also optimal for any starting distribution $\sigma$.
However, dominating, stationary Markov policies need not exist in our setting.
Theorem 2. A dominating, stationary Markov policy may not exist in a given MVDP.
The proof of this theorem is given by a counterexample in Appendix A, where the optimal policy
depends on the history of previously visited states.
In the trivial case when $\mu_1 = \mu_2$, the problem can be reduced to a Markov decision process, which
can be solved in $O(S^2 A)$ [Mansour and Singh, 1999, Littman et al., 1995]. Generally, however, the
commitment by P1 creates new dependencies that render the problem inherently non-Markovian
with respect to the state $s_t$ and thus harder to solve. In particular, even though the dynamics of the
environment are Markovian with respect to the state $s_t$, the MVDP only becomes Markov in the
Stackelberg setting with respect to the hyper-state $\omega_t = (s_t, \pi_{1,t:T})$, where $\pi_{1,t:T}$ is the commitment
by P1 for steps $t, \ldots, T$. To see that the game is non-Markovian, we only need to consider a single
transition from $s_t$ to $s_{t+1}$. P2's action depends not only on the action $a_{t,1}$ of P1, but also on the
expected utility the agent will obtain in the future, which in turn depends on $\pi_{1,t:T}$. Consequently,
state $s_t$ is not a sufficient statistic for the Stackelberg game.
3.1 Backwards Induction
These difficulties aside, we now describe a backwards induction algorithm for approximately solving
MVDPs. The algorithm can be seen as a generalization of the backwards induction algorithm for
simultaneous-move stochastic games [c.f. Bošanský et al., 2016] to the case of disagreement on the
transition distribution.
In our setting, at stage $t$ of the interaction, P2 has observed the current state $s_t$ and also knows the
commitment of P1 for all future periods. P2 now chooses the action
$$a^*_{t,2}(\pi_1) \in \arg\max_{a_{t,2}} \left[ \rho(s_t) + \gamma \sum_{a_{t,1}} \pi_1(a_{t,1} \mid s_t) \sum_{s_{t+1}} \mu_2(s_{t+1} \mid s_t, a_{t,1}, a_{t,2})\, V_{2,t+1}(s_{t+1}) \right]. \tag{3.1}$$
Thus, for every state, there is a well-defined continuation for P2. Now, P1 needs to choose an action.
This can be done easily, since we know P2's continuation, and so we can define a value for each
state-action-action triplet for either agent:
$$Q_{i,t}(s_t, a_{t,1}, a_{t,2}) = \rho(s_t) + \gamma \sum_{s_{t+1}} \mu_i(s_{t+1} \mid s_t, a_{t,1}, a_{t,2})\, V_{i,t+1}(s_{t+1}).$$
As the agents act simultaneously, the policy of P1 needs to be stochastic. The local optimization
problem can be formed as a set of linear programs (LPs), one for each action $a_2 \in A_2$:
$$\max_{\pi_1} \; \sum_{a_1} \pi_1(a_1 \mid s)\, Q_{1,t}(s, a_1, a_2)$$
$$\text{s.t.} \quad \forall \tilde a_2: \;\; \sum_{a_1} \pi_1(a_1 \mid s)\, Q_{2,t}(s, a_1, a_2) \geq \sum_{a_1} \pi_1(a_1 \mid s)\, Q_{2,t}(s, a_1, \tilde a_2),$$
$$\forall a_1: \;\; 0 \leq \pi_1(a_1 \mid s) \leq 1, \qquad \sum_{a_1} \pi_1(a_1 \mid s) = 1.$$
Each LP yields the best possible policy at time $t$ such that we force P2 to play $a_2$. From these,
we select the best one. At the end, the algorithm, given the transitions $(\mu_1, \mu_2)$ and the time horizon
$T$, returns an approximately optimal joint policy $(\pi_1^*, \pi_2^*)$ for the MVDP. The complete pseudocode
is given in Appendix C, Algorithm 1.
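One of these per-state LPs can be sketched directly with scipy; this is an illustrative implementation under the assumption that Q1 and Q2 are the stage values defined above, not the paper's Algorithm 1 itself:

    import numpy as np
    from scipy.optimize import linprog

    def leader_mixed_action(Q1, Q2, a2):
        """Find pi_1(.|s) maximizing P1's value at column a2, subject to a2
        being a best response for P2. Q1, Q2: arrays of shape (A1, A2).
        Returns (pi1, value) or None if a2 cannot be induced."""
        A1, A2 = Q1.shape
        c = -Q1[:, a2]                      # linprog minimizes, so negate
        rows = [Q2[:, b] - Q2[:, a2] for b in range(A2) if b != a2]
        A_ub = np.stack(rows) if rows else None
        b_ub = np.zeros(len(rows)) if rows else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=np.ones((1, A1)), b_eq=[1.0], bounds=[(0, 1)] * A1)
        return (res.x, -res.fun) if res.success else None

The outer loop would call this for every a2 and keep the feasible solution with the highest value, exactly as described above.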
As this solves a finite horizon problem, the policy is inherently non-stationary. In addition, because
there is no guarantee that there is a dominating policy, we may never obtain a stationary policy (see
below). However, we can extract a stationary policy from the policies played at individual time steps
$t$, and select the one with the highest expected utility. We can also obtain a version of the algorithm
that attains a deterministic policy, by replacing the linear program with a maximization over P1's
actions.
Optimality. The policies obtained using this algorithm are subgame perfect, up to the time horizon
adopted for backward induction; i.e., the continuation policies are optimal (considering the possibly
incorrect transition kernel of P2) off the equilibrium path. As a dominating Markov policy may not
exist, the algorithm may not converge to a stationary policy in the infinite horizon discounted setting,
similarly to the cyclic equilibria examined by Zinkevich et al. [2005]. This is because the commitment
of P1 affects the current action of P2, and so the effective transition matrix for P1. More precisely,
the transition actually depends on the future joint policy $\pi^{n+1:T}$, because this determines the value
$Q_{2,t}$ and so the policy of P2. Thus, the Bellman optimality condition does not hold, as the optimal
continuation may depend on previous decisions.
4 Experiments
We focus on a natural subclass of multi-view decision processes, which we call intervention games.
Therein, a human and an AI have joint control of a system, and the human can override the AI's
actions at a cost. As an example, consider semi-autonomous driving, where the human always has an
option to override the AI's decisions. The cost represents the additional effort of human intervention;
if there were no cost, the human might always prefer to assume manual control and ignore the AI.
Definition 7 ($c$-intervention game). An MVDP is a $c$-intervention game if all of P2's actions override
those of P1, apart from the null action $a^0 \in A_2$, which has no effect:
$$\mu_1(s_{t+1} \mid s_t, a_{t,1}, a_{t,2}) = \mu_1(s_{t+1} \mid s_t, a'_{t,1}, a_{t,2}) \qquad \forall a_{t,1}, a'_{t,1} \in A_1, \;\; a_{t,2} \neq a^0. \tag{4.1}$$
In addition, the agents subtract a cost $c(s) > 0$ from the reward $r_t = \rho(s_t)$ whenever P2 takes an
action other than $a^0$.
Any MDP with action space $A'$ and reward function $\rho' : \mathcal{S} \to [0, 1]$ can be converted into a $c$-intervention game and modeled as an MVDP, with action space $A = A_1 \times A_2$, where $A_1 = A'$,
$A_2 = A_1 \cup \{a^0\}$, $a_1 \in A_1$, $a_2 \in A_2$, $a = (a_1, a_2) \in A$,
$$r_{\min} = \min_{s' \in \mathcal{S},\, a'_2 \in A_2} \left[ \rho'(s') - c(s')\, \mathbb{I}\{a'_2 \neq a^0\} \right], \tag{4.2}$$
$$r_{\max} = \max_{s' \in \mathcal{S},\, a'_2 \in A_2} \left[ \rho'(s') - c(s')\, \mathbb{I}\{a'_2 \neq a^0\} \right], \tag{4.3}$$
and reward function³ $\rho : \mathcal{S} \times A \to [0, 1]$, with
$$\rho(s, a) = \frac{\rho'(s) - c(s)\, \mathbb{I}\{a_2 \neq a^0\} - r_{\min}}{r_{\max} - r_{\min}}. \tag{4.4}$$
The reward function in the MVDP is defined so that it also has the range [0, 1].
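A minimal sketch of the normalization (4.2)-(4.4), assuming a tabular base reward and per-state intervention cost, where the second reward column stands in for all overriding actions:

    import numpy as np

    def intervention_reward(rho_mdp, cost):
        """Normalized c-intervention reward. rho_mdp, cost: arrays (S,).
        Returns rho of shape (S, 2): column 0 for a2 = a0 (no intervention),
        column 1 for any overriding action, rescaled to [0, 1] as in (4.4)."""
        raw = np.stack([rho_mdp, rho_mdp - cost], axis=1)
        r_min, r_max = raw.min(), raw.max()
        return (raw - r_min) / (r_max - r_min)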
Algorithms and scenarios. We consider the main scenario, as well as three variant scenarios, with
different assumptions about the AI's model. For the main scenario, the human has an incorrect model
of the world, which the AI knows. For this, we consider three types of AI policies:
PURE: The AI only uses deterministic Markov policies.
MIXED: The AI may use stochastic Markov policies.
STAT: As above, but use the best instantaneous deterministic policy of the first 25 time-steps found
in PURE as a stationary Markov policy (running for the same time horizon as PURE).
We also have three variant scenarios of AI and human behaviour.
OPT: Both the AI and the human have the correct model of the world.
NAIVE: The AI assumes that the human's model is correct.
HUMAN: Both agents use the incorrect human model to take actions. It is equivalent to the human
having full control without any intervention cost.
³ Note that although our original definition used a state-only reward function, we are using a state-action
reward function here.
[Figure 1 panels: (a) Multilane Highway; (b) Highway: Error; (c) Highway: Cost; (d) Food and Shelter; (e) Food and Shelter: Error; (f) Food and Shelter: Cost. Each plot compares the utility of opt, pure, mixed, naive, human, and stat; the x-axes are human error and cost (safety + intervention).]
Figure 1: Illustrations and experimental results for the "multilane highway" and "food and shelter"
domains. Plots (b, e) show the effect of varying the error in the human's transition kernel with fixed
intervention cost. Plots (c, f) show the effect of varying the intervention cost for a fixed error in the
human's transition kernel.
In all of these, the AI uses a MIXED policy. We consider two simulated problem domains in which
to evaluate our methods. The first is a multilane highway scenario, where the human and AI have
shared control of a car, and the second is a food and shelter domain where they must collect food
and maintain a shelter. In all cases, we use a finite time horizon of 100 steps and a discount factor of
$\gamma = 0.95$.
Multilane Highway. In this domain, a car is under joint control of an AI agent and a human, with
the human able to override the AI's actions at any time. There are multiple lanes in a highway,
with varying levels of risk and speed (faster lanes are more risky). Within each lane, there is some
probability of having an accident. However, the human overestimates this probability, and so wants
to travel in a slower lane than is optimal. We denote a starting state by $A$, a destination state by $B$,
and, for lane $i$, intermediate states $C_{i1}, \ldots, C_{iJ}$, where $J$ is the number of intermediate states in a
lane, and an accident state $D$. See Figure 1(a) for an illustration of the domain, and for the simulation
results. In the plots, the error parameter represents a factor by which the human is wrong in assessing
the accident probability (assumed to be small), while the cost parameter determines both the cost of
safety (slow driving) of different lanes as well as the cost of the human intervening on these lanes. The
latter is because our experimental model couples the cost of intervention with the safety cost. The
rewards range from -10 to 10. More details are provided in the Appendix (Section B).
Food and Shelter Domain. The food and shelter domain [Guo et al., 2013] involves an agent
simultaneously trying to find randomly placed food (in one of the top five locations) while maintaining
a shelter. With positive probability at each time step, the shelter can collapse if it is not maintained.
There is a negative reward for the shelter collapsing and a positive reward for finding food (food
reappears whenever it is found). In order to exercise the abilities of our modeling, we make the
original setting more complex by increasing the size of the grid to 5 x 5 and allowing diagonal moves.
For our MVDP setting, we give the AI the correct model but assume the human overestimates the
probabilities. Furthermore, the human believes that diagonal movements are more prone to error.
See Figure 1(d) for an illustration of the domain, and for the simulation results. In the plots, the
error parameter determines how skewed the human's belief about the error is towards the uniform
distribution, while the cost parameter determines the cost of intervention. The rewards range from
-1 to 1. More details are provided in the Appendix (Section B).
Results. In the simulations, when we change the error parameter, we keep the cost parameter
constant (0.15 for the multilane highway domain and 0.1 for the food and shelter domain), and vice
versa, when we change the cost, we keep the error constant (25 for the multilane highway domain
and 0.25 for the food and shelter domain). Overall, the results show that PURE, MIXED and STAT
perform considerably better than NAIVE and HUMAN. Furthermore, for low costs, HUMAN is better
than NAIVE. The reason is that in NAIVE the human agent overrides the AI, which is more costly than
having the AI perform the same policy (as it happens to be for HUMAN). Therefore, simply assuming
that the human has the correct model does not only lead to a larger error than knowing the human's
model, but it can also be worse than simply adopting the human's erroneous model when making
decisions.
As the cost of intervention increases, the utilities become closer to the jointly optimal one (the OPT
scenario), with the exception of the utility for scenario HUMAN. This is not surprising since the
intervention cost has an important tempering effect: the human is less likely to take over control
if interventions are costly. When the human error is small, the utility approaches that of the jointly
optimal policy. Clearly, increasing error leads to larger deviations from the optimal utility.
Out of the three algorithms (PURE, MIXED and STAT), MIXED obtains slightly better performance,
which shows the additional benefit of allowing stochastic policies. PURE and STAT have quite
similar performance, which indicates that in most cases the backwards induction algorithm
converges to a stationary policy.
Conclusion
We have introduced the framework of multi-view decision processes to model value-alignment
problems in human-AI collaboration. In this problem, an AI and a human act in the same environment
as a human, and share the same reward function, but the human may have an incorrect world model.
We analyze the effect of knowledge of the human?s world model on the policy selected by the AI.
More precisely, we develop a dynamic programming algorithm, and give simulation results to
demonstrate that an AI with this algorithm can adopt a useful policy in simple environments and
even when the human adopts an incorrect model. This is important for modern applications involving
the close cooperation between humans and AI such as home robots or automated vehicles, where
the human can choose to intervene but may do so erroneously. Although backwards induction is
ef?cient for discrete state and action spaces, it cannot usefully be applied to the continuous case. We
would like to develop stochastic gradient algorithms for this case. More generally, we see a number
of immediate extensions to MVDP: estimating the human?s world model, studying a setting in which
human is learning to respond to the actions of the AI, and moving away from Stackelberg to the case
of no commitment.
Acknowledgements. The research has received funding from: the People Programme (Marie Curie
Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA
grant agreement 608743, the Swedish national science foundation (VR), the Future of Life Institute,
the SEAS TomKat fund, and an SNSF Early Postdoc Mobility fellowship.
References
Ofra Amir, Ece Kamar, Andrey Kolobov, and Barbara Grosz. Interactive teaching strategies for agent training. In IJCAI 2016, 2016.
Branislav Bošanský, Simina Brânzei, Kristoffer Arnsfelt Hansen, Peter Bro Miltersen, and Troels Bjerre Sørensen. Computation of Stackelberg equilibria of finite sequential games. 2015.
Branislav Bošanský, Viliam Lisý, Marc Lanctot, Jiří Čermák, and Mark H. M. Winands. Algorithms for computing strategies in two-player simultaneous move games. Artificial Intelligence, 237:1-40, 2016.
Avshalom Elmalech, David Sarne, Avi Rosenfeld, and Eden Shalom Erez. When suboptimal rules. In AAAI, pages 1313-1319, 2015.
Eyal Even-Dar and Yishai Mansour. Approximate equivalence of Markov decision processes. In Learning Theory and Kernel Machines. COLT/Kernel 2003, Lecture Notes in Computer Science, pages 581-594, Washington, DC, USA, 2003. Springer.
Ya'akov Gal and Avi Pfeffer. Networks of influence diagrams: A formalism for representing agents' beliefs and decision-making processes. Journal of Artificial Intelligence Research, 33(1):109-147, 2008.
Xiaoxiao Guo, Satinder Singh, and Richard L. Lewis. Reward mapping for transfer in long-lived agents. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2130-2138. 2013.
Dylan Hadfield-Menell, Anca Dragan, Pieter Abbeel, and Stuart Russell. Cooperative inverse reinforcement learning, 2016.
Joshua Letchford, Liam MacDermed, Vincent Conitzer, Ronald Parr, and Charles L. Isbell. Computing optimal strategies to commit to in stochastic games. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, AAAI'12, 2012.
J. C. R. Licklider. Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, 1:4-11, 1960.
Michael L. Littman, Thomas L. Dean, and Leslie Pack Kaelbling. On the complexity of solving Markov decision problems. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 394-402. Morgan Kaufmann Publishers Inc., 1995.
Yishay Mansour and Satinder Singh. On the complexity of policy iteration. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 401-408. Morgan Kaufmann Publishers Inc., 1999.
Andrew Y. Ng, Stuart J. Russell, et al. Algorithms for inverse reinforcement learning. In ICML, pages 663-670, 2000.
Jonathan Sorg, Satinder P. Singh, and Richard L. Lewis. Internal rewards mitigate agent boundedness. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1007-1014, 2010.
Haoqi Zhang and David C. Parkes. Value-based policy teaching with active indirect elicitation. In Proc. 23rd AAAI Conference on Artificial Intelligence (AAAI'08), pages 208-214, Chicago, IL, July 2008.
Haoqi Zhang, David C. Parkes, and Yiling Chen. Policy teaching through reward function learning. In 10th ACM Electronic Commerce Conference (EC'09), pages 295-304, 2009.
Martin Zinkevich, Amy Greenwald, and Michael Littman. Cyclic equilibria in Markov games. In Advances in Neural Information Processing Systems, 2005.
10
| 7128 |@word version:1 middle:1 polynomial:3 achievable:4 norm:1 nd:4 heuristically:1 pieter:1 seek:2 simulation:4 q1:1 arti:6 eld:2 harder:1 boundedness:1 shot:1 reduction:1 electronics:1 cyclic:2 hereafter:1 past:1 current:3 surprising:1 follower:4 ust:7 must:3 ronald:1 sorg:2 chicago:1 informative:1 designed:1 plot:4 fund:1 aside:1 stationary:8 intelligence:6 selected:1 amir:2 reappears:1 parkes:4 menell:2 provides:2 location:1 zhang:4 become:1 profound:1 driver:4 incorrect:6 prove:1 combine:1 eleventh:1 behavioral:3 introduce:2 expected:6 behavior:2 p1:38 planning:1 multi:12 bellman:1 discounted:2 food:14 actual:3 considering:1 increasing:2 becomes:1 begin:2 provided:2 underlying:3 estimating:1 bounded:6 formalizing:1 null:1 what:1 xed:5 rmax:2 skewness:1 q2:1 generalizable:1 ansk:6 gal:2 guarantee:1 cial:7 mitigate:1 every:1 act:3 subclass:1 ful:1 usefully:1 interactive:1 demonstrates:1 wrong:1 stick:2 control:8 grant:1 intervention:18 conitzer:1 overestimate:2 positive:3 maximise:1 safety:3 local:1 switching:1 establishing:1 path:1 approximately:2 might:2 therein:1 examined:1 equivalence:1 collect:2 misaligned:2 dif:2 collapse:1 liam:1 range:3 unique:1 commerce:1 union:1 differs:1 subgame:3 nite:3 suggest:1 get:1 cannot:2 needle:1 close:3 risk:1 zinkevich:3 deterministic:4 equivalent:1 branislav:2 maximizing:1 dean:1 attention:1 starting:3 focused:1 simplicity:3 pure:10 miltersen:1 rule:1 amy:1 notion:1 autonomous:3 yishay:1 play:7 programming:2 us:3 agreement:1 harvard:3 nitions:1 cooperative:4 pfeffer:2 observed:2 solved:1 capture:1 worst:1 decrease:1 highest:2 movement:1 observes:1 russell:2 environment:6 ui:4 complexity:2 reward:32 littman:3 dynamic:3 motivate:1 weakly:1 singh:4 solving:2 depend:1 incur:1 upon:1 creates:1 learner:1 easily:2 joint:12 indirect:1 tomkat:1 committing:1 describe:1 effective:1 tell:1 hyper:1 avi:2 ixed:4 whose:1 quite:1 larger:3 dominating:8 solve:1 say:1 kamar:1 ability:1 statistic:1 bro:1 commit:1 rosenfeld:1 jointly:9 itself:1 sequence:1 advantage:5 yiling:1 interaction:5 maximal:1 adaptation:1 remainder:1 commitment:6 culty:1 iff:1 achieve:3 intervening:1 rst:8 ijcai:1 extending:1 assessing:1 sea:1 perfect:3 converges:1 derive:1 develop:6 andrew:1 stat:4 kolobov:1 qt:2 received:1 solves:1 p2:39 involves:1 come:1 ning:1 stackelberg:14 correct:8 stochastic:15 subsequently:1 human:64 material:1 behaviour:1 assign:1 abbeel:1 generalization:1 ci1:1 opt:6 extension:2 hold:2 deciding:1 equilibrium:5 mapping:1 parr:1 driving:3 achieves:1 adopt:2 a2:16 early:1 perceived:2 proc:1 travel:1 visited:1 hansen:1 highway:9 vice:1 tool:1 kristoffer:1 clearly:1 snsf:1 always:2 rather:3 varying:3 corollary:4 focus:3 she:1 improvement:3 modelling:1 indicates:1 attains:1 sense:2 dependent:2 a0:7 initially:1 troels:1 i1:2 arg:2 overall:1 colt:1 denoted:2 development:1 aware:1 construct:1 never:1 beach:1 having:3 ng:2 washington:1 represents:2 lille:1 stuart:2 icml:2 future:4 np:2 richard:2 modern:1 randomly:1 modi:2 simultaneously:2 ve:1 national:1 individual:1 maintain:1 organization:1 satis:1 function3:1 alignment:1 introduces:1 extreme:1 winands:1 myopic:1 yishai:1 closer:1 helper:2 respective:1 mobility:1 re:2 desired:1 uence:10 uncertain:1 formalism:2 modeling:2 steer:1 markovian:5 maximization:1 leslie:1 cost:24 kaelbling:1 deviation:1 rare:1 uniform:1 seventh:1 motivating:2 optimally:2 characterize:1 dependency:1 teacher:1 considerably:1 chooses:1 andrey:1 st:48 international:1 destination:1 off:1 diverge:1 michael:2 aaai:5 choose:2 possibly:2 collapsing:1 
worse:2 return:1 li:1 account:2 potential:1 converted:1 de:25 sec:1 inc:2 depends:6 vi:1 vehicle:2 view:14 optimistic:2 analyze:4 doing:1 observing:1 start:4 xing:1 characterizes:1 option:1 eyal:1 curie:1 formed:1 il:1 kaufmann:2 who:4 vincent:1 drive:2 cation:1 history:1 simultaneous:4 suffers:2 whenever:3 ed:1 manual:1 xiaoxiao:1 sixth:1 proof:7 couple:1 gain:1 knowledge:3 car:2 actually:1 response:9 improved:3 swedish:1 done:1 though:2 furthermore:4 just:2 stage:4 hand:1 replacing:1 maximizer:1 somehow:1 mdp:6 usa:2 effect:9 true:2 symmetric:1 shelter:14 game:37 skewed:1 maintained:1 formalises:1 trying:2 override:8 complete:1 demonstrate:1 l1:1 cooperate:2 instantaneous:1 ef:2 funding:1 charles:1 common:1 pseudocode:1 ji:1 endpoint:1 function1:1 extend:2 refer:2 versa:1 counterexample:1 ai:46 rd:1 grid:1 similarly:1 teaching:4 erez:1 had:2 moving:1 robot:1 intervene:2 anca:1 own:2 shalom:1 apart:1 barbara:1 scenario:10 inequality:3 life:1 joshua:1 nition:10 morgan:2 seen:2 additional:3 simpli:1 steering:1 greater:2 speci:2 accident:3 converge:1 maximize:1 period:2 july:1 ii:2 semi:1 full:1 multiple:1 technical:2 faster:1 long:3 a1:19 ensuring:1 qi:1 variant:3 basic:2 involving:1 vision:1 expectation:1 fifteenth:1 iteration:1 kernel:12 represent:1 adopting:1 achieved:1 receive:1 addition:2 want:2 rea:1 fellowship:1 diagram:1 publisher:2 appropriately:1 call:1 backwards:8 intermediate:2 automated:1 affect:5 gave:1 restrict:1 suboptimal:1 imperfect:1 idea:1 knowing:3 br:1 utility:35 assist:1 simina:1 effort:1 render:1 peter:1 constitute:1 action:35 dar:1 useful:2 generally:2 amount:3 discount:2 reduced:2 continuation:4 exist:4 rensen:1 extrinsic:1 discrete:1 shall:1 eden:1 drawn:1 tempering:1 changing:1 marie:1 backward:1 sum:2 dimitrakakis:1 inverse:4 uncertainty:2 respond:1 place:1 planner:1 electronic:2 home:1 decision:17 appendix:7 scaling:1 prefer:1 lanctot:1 capturing:1 bound:11 played:1 occur:1 precisely:2 rmin:3 isbell:1 unachievable:1 lane:8 erroneously:1 aspect:1 misguided:1 chalmers:1 u1:14 optimality:3 min:1 performing:1 speed:1 martin:1 ned:4 according:5 describes:2 slightly:1 lp:2 making:2 s1:1 happens:1 computationally:1 agree:2 previously:1 discus:1 turn:2 needed:1 know:4 letchford:2 fp7:1 end:1 adopted:1 studying:1 apply:1 observe:1 v2:1 appropriate:1 disagreement:1 away:1 weinberger:1 slower:1 original:3 thomas:1 assumes:5 running:1 top:1 maintaining:1 commits:4 giving:2 ghahramani:1 move:5 occurs:2 hum:4 strategy:5 costly:2 rt:3 responds:4 mvdp:12 illuminate:1 diagonal:2 gradient:1 distance:4 mapped:1 simulated:1 argue:1 considers:1 trivial:2 reason:1 induction:9 assuming:3 modeled:1 relationship:1 illustration:3 providing:1 demonstration:2 cij:1 potentially:1 holding:1 negative:1 design:1 lived:1 policy:101 twenty:1 perform:2 allowing:2 disagree:3 upper:2 observation:1 markov:14 finite:1 incorrectly:1 immediate:1 payoff:3 communication:1 dc:1 mansour:3 arbitrary:1 police:2 overcoming:1 david:4 introduced:2 pair:5 cast:1 bene:3 nip:1 able:2 elicitation:1 below:1 program:2 max:9 belief:5 event:1 natural:2 force:2 representing:1 improve:4 altered:1 technology:1 eye:1 ne:9 risky:1 nding:3 hm:1 naive:9 extract:1 imitates:1 deviate:1 prior:1 sg:1 nice:1 acknowledgement:1 dragan:1 relative:3 loss:7 lecture:1 mixed:5 men:1 suf:3 foundation:1 agent:72 s0:1 editor:1 playing:3 share:6 licklider:3 collaboration:2 prone:1 cooperation:1 placed:1 bias:2 side:1 understand:1 burges:1 institute:1 taking:1 calculated:1 transition:21 world:9 adopts:1 adaptive:1 reinforcement:6 programme:2 
ec:1 welling:1 transaction:1 viliam:1 approximate:1 obtains:2 ignore:2 keep:2 satinder:3 active:1 assumed:2 conclude:1 leader:3 continuous:1 decade:1 triplet:1 reality:1 multilane:6 nature:1 transfer:1 pack:1 ca:1 inherently:2 culties:1 bottou:1 necessarily:1 complex:1 european:1 domain:14 postdoc:1 marc:1 main:3 paul:1 augmented:1 advice:3 cient:5 slow:1 vr:1 cerm:1 christos:1 sub:1 exercise:1 dylan:1 weighting:1 theorem:4 erroneous:1 x:2 naively:1 sequential:6 importance:1 gained:1 horizon:7 gap:1 chen:1 subtract:1 suited:1 simply:4 likely:1 bo:6 u2:3 springer:1 determines:4 lewis:2 acm:1 modulate:1 goal:1 presentation:1 greenwald:1 consequently:2 towards:1 lipschitz:1 man:2 shared:1 experimentally:2 change:5 hard:2 lemma:10 ece:1 experimental:4 e:2 player:6 ya:1 exception:1 formally:2 select:2 internal:3 people:1 guo:3 latter:2 mark:1 witnessing:1 jonathan:1 incorporate:1 evaluate:1 |
6,775 | 7,129 | A Greedy Approach for Budgeted Maximum Inner Product Search
Hsiang-Fu Yu*
Amazon Inc.
[email protected]
Cho-Jui Hsieh
University of California, Davis
[email protected]
Qi Lei
The University of Texas at Austin
[email protected]
Inderjit S. Dhillon
The University of Texas at Austin
[email protected]
Abstract
Maximum Inner Product Search (MIPS) is an important task in many machine
learning applications such as the prediction phase of low-rank matrix factorization
models and deep learning models. Recently, there has been substantial research
on how to perform MIPS in sub-linear time, but most of the existing work does
not have the flexibility to control the trade-off between search efficiency and
search quality. In this paper, we study the important problem of MIPS with a
computational budget. By carefully studying the problem structure of MIPS, we
develop a novel Greedy-MIPS algorithm, which can handle budgeted MIPS by
design. While simple and intuitive, Greedy-MIPS yields surprisingly superior
performance compared to state-of-the-art approaches. As a specific example, on a
candidate set containing half a million vectors of dimension 200, Greedy-MIPS
runs 200x faster than the naive approach while yielding search results with the
top-5 precision greater than 75%.
1 Introduction
In this paper, we study the computational issue in the prediction phase for many embedding based
models such as matrix factorization and deep learning models in recommender systems, which can be
mathematically formulated as a Maximum Inner Product Search (MIPS) problem. Specifically, given
a large collection of n candidate vectors: H = {h_j ∈ R^k : j = 1, . . . , n} and a query vector w ∈ R^k,
MIPS aims to identify a subset of candidates that have the largest inner product values with w. We
also denote H = [h_1, . . . , h_j, . . . , h_n]^T as the candidate matrix. A naive linear search procedure
to solve MIPS for a given query w requires O(nk) operations to compute n inner products and
O(n log n) operations to obtain the sorted ordering of the n candidates.
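For reference, a minimal sketch of this naive linear search (our own illustration; the O(nk) inner products and the O(n log n) sort are explicit):

```python
import numpy as np

def naive_mips(H, w, top_k=5):
    """Exact MIPS by brute force: O(nk) inner products + O(n log n) sort.

    H: (n, k) candidate matrix, w: (k,) query vector.
    Returns indices of the top_k candidates by inner product.
    """
    scores = H @ w                    # n inner products, O(nk)
    order = np.argsort(-scores)       # full descending sort, O(n log n)
    return order[:top_k]
```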
Recently, MIPS has drawn a lot of attention in the machine learning community due to its wide
applicability, such as the prediction phase of embedding based recommender systems [6, 7, 10].
In such an embedding based recommender system, each user i is associated with a vector wi of
dimension k, while each item j is associated with a vector hj of dimension k. The interaction (such
as preference) between a user and an item is modeled by w_i^T h_j. It is clear that identifying top-ranked
items in such a system for a user is exactly a MIPS problem. Because both the number of users
(the number of queries) and the number of items (size of vector pool in MIPS) can easily grow to
millions, a naive linear search is extremely expensive; for example, to compute the preference for all
m users over n items with latent embeddings of dimension k in a recommender system requires at
least O(mnk) operations. When both m and n are large, the prediction procedure is extremely time
consuming; it is even slower than the training procedure used to obtain the m + n embeddings, which
*Work done while at the University of Texas at Austin.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
costs only O(|Ω|k) operations per iteration, where |Ω| is the number of observations and is much smaller
than mn. Taking the yahoo-music dataset as an example, m = 1M, n = 0.6M, |Ω| = 250M,
and mn = 600B ≫ 250M = |Ω|. As a result, the development of efficient algorithms for MIPS is
needed in large-scale recommender systems. In addition, MIPS can be found in many other machine
learning applications, such as the prediction for a multi-class or multi-label classifier [16, 17], an
object detector, a structured SVM predictor, or as a black-box routine to improve the efficiency of
learning and inference algorithms [11]. Also, the prediction phase of a neural network could benefit
from a faster MIPS algorithm: the last layer of NN is often a dense fully-connected layer, so finding
the label with maximum score becomes a MIPS problem with dense vectors [6].
There is a recent line of research on accelerating MIPS for large n, such as [2, 3, 9, 12-14]. However,
most of them do not have the flexibility to control the trade-off between search efficiency and
search quality in the prediction phase. In this paper, we consider the budgeted MIPS problem,
which is a generalized version of the standard MIPS with a computation budget: how to generate
a set of top-ranked candidates under a given budget on the number of inner products one can
perform. By carefully studying the problem structure of MIPS, we develop a novel Greedy-MIPS
algorithm, which handles budgeted MIPS by design. While simple and intuitive, Greedy-MIPS yields
surprisingly superior performance compared to existing approaches.
Our Contributions:
• We develop Greedy-MIPS, which is a novel algorithm without any nearest neighbor search
reduction, which is essential in many state-of-the-art approaches [2, 12, 14].
• We establish a sublinear time theoretical guarantee for Greedy-MIPS under certain assumptions.
• Greedy-MIPS is orders of magnitude faster than many state-of-the-art MIPS approaches at
obtaining a desired search performance. As a specific example, on the yahoo-music data set with
n = 624,961 and k = 200, Greedy-MIPS runs 200x faster than the naive approach and yields
search results with top-5 precision of more than 75%, while the search performance of other
state-of-the-art approaches under a similar speedup drops to less than 3% precision.
• Greedy-MIPS supports MIPS with a budget, which brings the ability to control the trade-off
between computation efficiency and search quality in the prediction phase.
2 Existing Approaches for Fast MIPS
Because of its wide applicability, several algorithms have been proposed for efficient MIPS. Most of
existing approaches consider to reduce the MIPS problem to the nearest neighbor search problem
(NNS), where the goal is to identify the nearest candidates of the given query, and apply an existing
efficient NNS algorithm to solve the reduced problem. [2] is the first MIPS work which adopts
such a MIPS-to-NNS reduction. Variants of the MIPS-to-NNS reduction are also proposed in [14, 15].
Experimental results in [2] show the superiority of the NNS reduction over the traditional branch-and-bound search approaches for MIPS [9, 13]. After the reduction, there are many choices to solve
the transformed NNS problem, such as locality sensitive hashing scheme (LSH-MIPS) considered in
[12, 14, 15], PCA-tree based approaches (PCA-MIPS) in [2], or K-Means approaches in [1].
Fast MIPS approaches with sampling schemes have become popular recently. Various sampling
schemes have been proposed to handle MIPS problem with different constraints. The idea of the
sampling-based MIPS approach is first proposed in [5] as an approach to perform approximate
matrix-matrix multiplications. Its applicability on MIPS problems is studied very recently [3].
The idea behind a sampling-based approach called Sample-MIPS, is about to design an efficient
sampling procedure such that the j-th candidate is selected with probability p(j): p(j) ? h>
j w. In
particular, Sample-MIPS is an efficient scheme to sample (j, t) 2 [n] ? [k] with the probability
p(j, t): p(j, t) ? hjt wt . Each time a pair (j, t) is sampled, we increase the count for the j-th item by
one. By the end of the sampling process, the spectrum of the counts forms an estimation of n inner
product values. Due to the nature of the sampling approach, it can only handle the situation where all
the candidate vectors and query vectors are nonnegative.
Diamond-MSIPS, a diamond sampling scheme proposed in [3], is an extension of Sample-MIPS
to handle the maximum squared inner product search problem (MSIPS), where the goal is to identify
candidate vectors with the largest values of (h_j^T w)^2. However, the solutions to MSIPS can be very
different from the solutions to MIPS in general. For example, if all the inner product values are
negative, the ordering for MSIPS is exactly the reverse of the ordering induced by MIPS. Here we can see
that the applicability of both Sample-MIPS and Diamond-MSIPS to MIPS is very limited.
3 Budgeted MIPS
The core idea behind the fast approximate MIPS approaches is to trade the search quality for the
shorter query latency: the shorter the search latency, the lower the search quality. In most existing fast
MIPS approaches, the trade-off depends on the approach-specific parameters such as the depth of the
PCA tree in PCA-MIPS or the number of hash functions in LSH-MIPS. Such specific parameters
are usually required to construct approach-specific data structures before any query is given, which
means that the trade-off is somewhat fixed for all the queries. Thus, the computation cost for a
given query is fixed. However, in many real-world scenarios, each query might have a different
computational budget, which raises the question: Can we design a MIPS approach supporting the
dynamic adjustment of the trade-off in the query phase?
3.1 Essential Components for Fast MIPS
Before any query request:
• Query-Independent Data Structure Construction: A pre-processing procedure is performed on the
entire candidate set to construct an approach-specific data structure D to store information about
H: the LSH hash tables, space partition trees (e.g., KD-tree or PCA-tree), or cluster centroids.
For each query request:
• Query-dependent Pre-processing: In some approaches, a query-dependent pre-processing is needed.
For example, a vector augmentation is required in all MIPS-to-NNS approaches. In addition, [2]
also requires another normalization. T_P is used to denote the time complexity of this stage.
• Candidate Screening: In this stage, based on the pre-constructed data structure D, an efficient
procedure is performed to filter candidates such that only a subset of candidates C(w) ⊆ H is
selected. In a naive linear approach, no screening procedure is performed, so C(w) simply contains
all the n candidates. For a tree-based structure, C(w) contains all the candidates stored in the leaf
node of the query vector. In a sampling-based MIPS approach, an efficient sampling scheme is
designed to generate highly possible candidates to form C(w). T_S denotes the computational cost
of the screening stage.
• Candidate Ranking: An exact ranking is performed on the selected candidates in C(w) obtained
from the screening stage. This involves the computation of |C(w)| inner products and the sorting
procedure among these |C(w)| values. The overall time complexity is T_R = O(|C(w)|k + |C(w)| log|C(w)|).
The per-query computational cost: T_Q = T_P + T_S + T_R.    (1)
It is clear that the candidate screening stage is the key component for a fast MIPS approach. In
terms of the search quality, the performance highly depends on whether the screening procedure can
identify highly possible candidates. Regarding the query latency, the efficiency highly depends on the
size of C(w) and how fast to generate C(w). The major difference among various MIPS approaches
is the choice of the data structure D and the screening procedure.
3.2 Budgeted MIPS: Problem Definition
Budgeted MIPS is an extension of the standard approximate MIPS problem with a computational
budget: how to generate top-ranked candidates under a given budget on the number of inner products
one can perform. Note that the cost for the candidate ranking (T_R) is inevitable in the per-query
cost (1). A viable approach for budgeted MIPS must include a screening procedure which satisfies
the following requirements:
• the flexibility to control the size of C(w) in the candidate screening stage such that |C(w)| ≤ B,
where B is a given budget, and
• an efficient screening procedure to obtain C(w) in O(Bk) time, such that T_Q = O(Bk + B log B).
As mentioned earlier, most recently proposed MIPS-to-NNS approaches apply various
search space partition data structures or techniques (e.g., LSH, KD-tree, or PCA-tree) designed for
NNS to index the candidates H in the query-independent pre-processing stage. As the construction
of D is query independent, both the search performance and the computation cost are somewhat
fixed when the construction is done. For example, the performance of a PCA-MIPS depends on
the depth of the PCA tree. Given a query vector w, there is no control over the size of C(w) in the
candidate generating phase. LSH-based approaches have a similar issue. While there might be some
ad-hoc treatments to adjust C(w), it is not clear how to generalize PCA-MIPS and LSH-MIPS in a
principled way to handle the situation with a computational budget: how to reduce the size of C(w)
under a limited budget and how to improve the performance when a larger budget is given.
Unlike other NNS-based algorithms, the design of Sample-MIPS naturally enables it to support
budgeted MIPS for a nonnegative candidate matrix H and a nonnegative query w. The more the
number of samples, the lower the variance of the estimated frequency spectrum. Clearly, Sample-MIPS
has the flexibility to control the size of C(w), and thus is a viable approach for the budgeted
MIPS problem. However, Sample-MIPS works only in the situation where all
the candidate vectors and query vectors are nonnegative. Diamond-MSIPS has a similar issue.
4 Greedy-MIPS
We carefully study the structure of MIPS and develop a simple but novel algorithm called Greedy-MIPS, which handles budgeted MIPS by design. Unlike the recent MIPS-to-NNS approaches,
Greedy-MIPS is an approach without any reduction to an NNS problem. Moreover, Greedy-MIPS is
a viable approach for the budgeted MIPS problem without the non-negativity limitation inherent in
the sampling approaches.
The key component for a fast MIPS approach is the algorithm used in the candidate screening phase.
In budgeted MIPS, for any given budget B and query w, an ideal procedure for the candidate
screening phase costs O(Bk) time to generate C(w) which contains the B items with the largest B
inner product values over the n candidates in H. The requirement on the time complexity O(Bk)
implies that the procedure is independent from n = |H|, the number of candidates in H. One might
wonder whether such an ideal procedure exists or not. In fact, designing such an ideal procedure with
the requirement to generate the largest B items in O(Bk) time is even more challenging than the
original budgeted MIPS problem.
Definition 1. The rank of an item x among a set of items X = {x_1, . . . , x_{|X|}} is defined as
$\mathrm{rank}(x \mid X) := \sum_{j=1}^{|X|} \mathbb{I}[x_j \geq x],$    (2)
where I[·] is the indicator function. A ranking induced by X is a function π(·) : X → {1, . . . , |X|}
such that π(x_j) = rank(x_j | X) ∀ x_j ∈ X.
One way to store a ranking π(·) induced by X is by a sorted index array s[r] of size |X| such that
$\pi(x_{s[1]}) \leq \pi(x_{s[2]}) \leq \cdots \leq \pi(x_{s[|X|]}).$
We can see that s[r] stores the index to the item x with π(x) = r.
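A small sketch of Definition 1 (our own illustration, assuming ties are broken arbitrarily):

```python
import numpy as np

def rank(x, X):
    """rank(x | X) = number of items in X that are >= x (the largest item has rank 1)."""
    return int(sum(xj >= x for xj in X))

X = [0.3, 0.9, 0.1, 0.7]
ranking = [rank(xj, X) for xj in X]       # pi(x_j) for each item: [3, 1, 4, 2]
s = np.argsort(-np.asarray(X))            # s[r] = index of the item with rank r+1
# X[s[0]] is the rank-1 (largest) item, X[s[1]] the rank-2 item, and so on.
```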
To design an efficient candidate screening procedure, we study the operations required for MIPS: In
the simple linear MIPS approach, nk multiplication operations are required to obtain the n inner product
values h_1^T w, . . . , h_n^T w. We define an implicit matrix Z ∈ R^{n×k} as Z = H diag(w), where
diag(w) ∈ R^{k×k} is a matrix with w as its diagonal. The (j, t) entry of Z denotes the multiplication
operation z_{jt} = h_{jt} w_t, and z_j = diag(w) h_j denotes the j-th row of Z. In Figure 1, we use Z^T to
demonstrate the implicit matrix. Note that Z is query dependent, i.e., the values of Z depend on the
query vector w, and the n inner product values can be obtained by taking the column-wise summation of
Z^T. In particular, for each j we have $h_j^\top w = \sum_{t=1}^{k} z_{jt}$, j = 1, . . . , n. Thus, the ranking induced
by the n inner product values can be characterized by the marginal ranking π(j|w) defined on the
implicit matrix Z as follows:
$\pi(j|w) := \mathrm{rank}\Big(\sum_{t=1}^{k} z_{jt} \;\Big|\; \Big\{\sum_{t=1}^{k} z_{1t}, \cdots, \sum_{t=1}^{k} z_{nt}\Big\}\Big) = \mathrm{rank}\big(h_j^\top w \mid \{h_1^\top w, \dots, h_n^\top w\}\big).$    (3)
As mentioned earlier, it is hard to design an ideal candidate screening procedure generating C(w)
based on the marginal ranking. Because the main goal of the candidate screening phase is to
quickly identify candidates which are highly likely to be top-ranked items, it suffices to have
an efficient procedure generating C(w) by an approximation ranking. Here we propose a greedy
heuristic ranking:
$\bar{\pi}(j|w) := \mathrm{rank}\Big(\max_{t=1}^{k} z_{jt} \;\Big|\; \Big\{\max_{t=1}^{k} z_{1t}, \cdots, \max_{t=1}^{k} z_{nt}\Big\}\Big),$    (4)
which is obtained by replacing the summation terms in (3) by max operators. The intuition behind
this heuristic is that the largest element of z_j multiplied by k is an upper bound of h_j^T w:
$h_j^\top w = \sum_{t=1}^{k} z_{jt} \leq k \max\{z_{jt} : t = 1, \dots, k\}.$    (5)
Thus, π̄(j|w), which is induced by such an upper bound of h_j^T w, could be a reasonable approximation
ranking for the marginal ranking π(j|w).
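The following sketch (ours) contrasts the marginal ranking (3) with the greedy heuristic ranking (4) on the implicit matrix Z = H diag(w):

```python
import numpy as np

def marginal_and_greedy_orderings(H, w):
    """Return the candidate orderings induced by (3) and (4)."""
    Z = H * w                                # (n, k): z_jt = h_jt * w_t
    marginal = np.argsort(-Z.sum(axis=1))    # ordering by h_j^T w, eq. (3)
    greedy = np.argsort(-Z.max(axis=1))      # ordering by max_t z_jt, eq. (4)
    return marginal, greedy

# Since h_j^T w <= k * max_t z_jt (eq. 5), items ranked high by the greedy
# ordering are plausible candidates for a high marginal rank as well.
```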
[Figure 1: nk multiplications in a naive linear MIPS approach. The figure depicts Z^T = diag(w) H^T with entries z_{jt} = h_{jt} w_t for all j, t, whose column sums are h_1^T w, . . . , h_n^T w. π(j, t|w): joint ranking. πt(j|w): conditional ranking. π(j|w): marginal ranking.]
Next we design an efficient procedure which generates C(w) according to the ranking π̄(j|w)
defined in (4). First, based on the relative orderings of {z_{jt}}, we consider the joint ranking and
the conditional ranking defined as follows:
• Joint ranking: π(j, t|w) is the exact ranking over the nk entries of Z:
$\pi(j, t|w) := \mathrm{rank}(z_{jt} \mid \{z_{11}, \dots, z_{nk}\}).$
• Conditional ranking: πt(j|w) is the exact ranking over the n entries of the t-th row of Z^T:
$\pi_t(j|w) := \mathrm{rank}(z_{jt} \mid \{z_{1t}, \dots, z_{nt}\}).$
See Figure 1 for an illustration of both rankings. Similar to the marginal ranking, both joint and
conditional rankings are query dependent.
Observe that, in (4), for each j, only a single maximum entry of Z, max_{t=1}^{k} z_{jt}, is considered to
obtain the ranking π̄(j|w). To generate C(w) based on π̄(j|w), we can iterate (j, t) entries of Z in
a greedy sequence such that (j1, t1) is visited before (j2, t2) if z_{j1 t1} > z_{j2 t2}, which is exactly the
sequence corresponding to the joint ranking π(j, t|w). Each time an entry (j, t) is visited, we can
include the index j into C(w) if j ∉ C(w). In Theorem 1, we show that the sequence to include a
newly observed j into C(w) is exactly the sequence induced by the ranking π̄(j|w) defined in (4).
Theorem 1. For all j1 and j2 such that π̄(j1|w) < π̄(j2|w), j1 will be included into C(w) before
j2 if we iterate (j, t) pairs following the sequence induced by the joint ranking π(j, t|w). A proof
can be found in Section D.1.
At first glance, generating (j, t) in the sequence according to the joint ranking π(j, t|w) might require
access to all the nk entries of Z and cost O(nk) time. In fact, based on Property 1 of conditional
rankings, we can design an efficient variant of the k-way merge algorithm [8] to generate (j, t) pairs
in the desired sequence iteratively.
Property 1. Given a fixed candidate matrix H, for any possible w with w_t ≠ 0, the conditional
ranking πt(j|w) is either πt+(j) or πt−(j), where πt+(j) = rank(h_{jt} | {h_{1t}, . . . , h_{nt}}) and
πt−(j) = rank(−h_{jt} | {−h_{1t}, . . . , −h_{nt}}). In particular,
$\pi_t(j|w) = \begin{cases} \pi_t^{+}(j) & \text{if } w_t > 0, \\ \pi_t^{-}(j) & \text{if } w_t < 0. \end{cases}$
Property 1 enables us to characterize a query-dependent conditional ranking πt(j|w) by two query-independent rankings πt+(j) and πt−(j). Thus, for each t, we can construct and store a sorted index
array s_t[r], r = 1, . . . , n such that
$\pi_t^{+}(s_t[1]) \leq \pi_t^{+}(s_t[2]) \leq \cdots \leq \pi_t^{+}(s_t[n]),$    (6)
$\pi_t^{-}(s_t[1]) \geq \pi_t^{-}(s_t[2]) \geq \cdots \geq \pi_t^{-}(s_t[n]).$    (7)
Thus, in the phase of query-independent data structure construction of Greedy-MIPS, we compute
and store the k query-independent rankings πt+(·) by k sorted index arrays of length n: s_t[r], r =
1, . . . , n, t = 1, . . . , k. The entire construction costs O(kn log n) time and O(kn) space.
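A minimal sketch of this query-independent construction (the helper name is our own): one argsort per dimension yields arrays realizing (6), and reading each array backwards realizes (7):

```python
import numpy as np

def build_sorted_index_arrays(H):
    """Precompute k sorted index arrays: O(kn log n) time, O(kn) space.

    s[t] lists candidate indices with h_jt in descending order, so it
    realizes pi_t^+ directly; traversing it in reverse realizes pi_t^-.
    """
    n, k = H.shape
    return [np.argsort(-H[:, t]) for t in range(k)]  # s[t][r] = index with pi_t^+ rank r+1
```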
Next we describe the details of the proposed Greedy-MIPS algorithm for a given query w and a
budget B. Greedy-MIPS utilizes the idea of the k-way merge algorithm to visit (j, t) entries of Z
according to the joint ranking π(j, t|w). Designed to merge k sorted sublists into a single sorted
list, the k-way merge algorithm uses 1) k pointers, one for each sorted sublist, and 2) a binary tree
structure (either a heap or a selection tree) containing the elements pointed by these k pointers to
obtain the next element to be appended into the sorted list [8].
4.1 Query-dependent Pre-processing
We divide nk entries of (j, t) into k groups. The t-th group contains n entries: {(j, t) : j = 1, . . . , n}.
Here we need an iterator playing a similar role as the pointer, which can iterate over indices j ∈ {1, . . . , n}
in the sorted sequence induced by the conditional ranking πt(·|w). Utilizing Property 1, the t-th
pre-computed sorted array s_t[r], r = 1, . . . , n can be used to construct such an iterator, called
CondIter, which supports current() to access the currently pointed index j and getNext() to
advance the iterator. In Algorithm 1, we describe pseudocode for CondIter, which utilizes the
facts (6) and (7) such that both the construction and the index access cost O(1) space and O(1) time.
For each t, we use iters[t] to denote the CondIter for the t-th conditional ranking πt(j|w).

Algorithm 1 CondIter: an iterator over j ∈ {1, . . . , n} based on the conditional ranking πt(j|w).
This code assumes that the k sorted index arrays s_t[r], r = 1, . . . , n, t = 1, . . . , k are available.
class CondIter:
  def constructor(dim_idx, query_val):
    t, w, ptr ← dim_idx, query_val, 1
  def current():
    return s_t[ptr] if w > 0, and s_t[n − ptr + 1] otherwise.
  def hasNext(): return (ptr < n)
  def getNext():
    ptr ← ptr + 1 and return current()

Regarding the binary tree structure used in Greedy-MIPS, we consider a max-heap Q of (z, t)
pairs. z ∈ R is the compared key used to maintain the heap property of Q, and t ∈ {1, . . . , k}
is an integer to denote the index to an entry group. Each (z, t) ∈ Q denotes the (j, t) entry of Z
where j = iters[t].current() and z = z_{jt} = h_{jt} w_t. Note that there are at most k elements in the
max-heap at any time. Thus, we can implement Q by a binary heap such that 1) Q.top() returns the
maximum pair (z, t) in O(1) time; 2) Q.pop() deletes the maximum pair of Q in O(log k) time; and
3) Q.push((z, t)) inserts a new pair in O(log k) time. Note that the entire Greedy-MIPS can also be
implemented using a selection tree among the k entries pointed by the k iterators. See Section B in
the supplementary material for more details.
In the query-dependent pre-processing phase, we need to construct iters[t], t = 1, . . . , k,
one for each conditional ranking πt(j|w), and a max-heap Q which is initialized to contain
{(z, t) | z = max_{j=1}^{n} z_{jt}, t ≤ k}. A detailed procedure is described in Algorithm 2, which costs
O(k log k) time and O(k) space.

Algorithm 2 Query-dependent pre-processing procedure in Greedy-MIPS.
• Input: query w ∈ R^k
• For t = 1, . . . , k
  - iters[t] ← CondIter(t, w_t)
  - z ← h_{jt} w_t, where j = iters[t].current()
  - Q.push((z, t))
• Output:
  - iters[t], t ≤ k: iterators for πt(·|w).
  - Q: a max-heap of {(z, t) | z = max_{j=1}^{n} z_{jt}, ∀ t ≤ k}.
4.2 Candidate Screening
The core idea of Greedy-MIPS is to iteratively traverse (j, t) entries of Z in a greedy sequence and
collect newly observed indices j into C(w) until |C(w)| = B. In particular, if r = π(j, t|w), then the
(j, t) entry is visited at the r-th iterate. Similar to the k-way merge algorithm, we describe a detailed
procedure in Algorithm 3, which utilizes the CondIter in Algorithm 1 to perform the screening.
Recall both requirements of a viable candidate screening procedure for budgeted MIPS: 1) the
flexibility to control the size |C(w)| ≤ B; and 2) an efficient procedure that runs in O(Bk). First, it is
clear that Algorithm 3 has the flexibility to control the size of C(w) by the exit condition of the
outer while-loop. Next, to analyze the overall time complexity of Algorithm 3, we need to know the
number of z_{jt} entries the algorithm iterates before |C(w)| = B. Theorem 2 gives an upper bound
on this number of iterations.
Theorem 2. There are at least B distinct indices j in the first Bk entries (j, t) in terms of the joint
ranking π(j, t|w) for any w; that is,
$|\{j \mid \forall (j, t) \text{ such that } \pi(j, t|w) \leq Bk\}| \geq B.$    (8)
A detailed proof can be found in Section D of the supplementary material. Note that there are
some O(log k) time operations within both the outer and inner while loops, such as Q.push((z, t))
and Q.pop(). As the goal of the screening procedure is to identify the indices j only, we can skip the
Q.push((z_{jt}, t)) for an entry (j, t) with j having been included in C(w). As a result, we can
guarantee that Q.pop() is executed at most B + k − 1 times when |C(w)| = B. The extra k − 1 times
occur in the situation that
iters[1].current() = · · · = iters[k].current()
at the beginning of the entire screening procedure.
To check whether an index j is in the current C(w) in O(1) time, we use an auxiliary zero-initialized array of length n,
visited[j], j = 1, . . . , n, to denote
whether an index j has been included in
C(w) or not. As C(w) contains at most B
indices, only B elements of this auxiliary
array will be modified during the screening
procedure. Furthermore, the auxiliary array can be reset to zero using O(B) time
at the end of Algorithm 3, so this auxiliary array can be utilized again for a different query vector w. Notice that Algorithm 3 still iterates Bk entries of Z, but
at most B + k − 1 entries will be pushed
into or popped from the max-heap Q. Thus, the
overall time complexity of Algorithm 3 is
O(Bk + (B + k) log k) = O(Bk), which
makes Greedy-MIPS a viable budgeted
MIPS approach.
Algorithm 3 An improved candidate screening procedure in Greedy-MIPS. The time complexity is O(Bk).
• Input:
  - H, w, and the computational budget B
  - Q and iters[t]: output of Algorithm 2
  - C(w): an empty list
  - visited[j] = 0, ∀ j ≤ n: a zero-initialized array.
• While |C(w)| < B:
  - (z, t) ← Q.pop()    · · · O(log k)
  - j ← iters[t].current()
  - If visited[j] = 0:
    * append j into C(w) and visited[j] ← 1
  - While iters[t].hasNext():
    * j ← iters[t].getNext()
    * if visited[j] = 0:
      · z ← h_{jt} w_t and Q.push((z, t))    · · · O(log k)
      · break
• visited[j] ← 0, ∀ j ∈ C(w)    · · · O(B)
• Output: C(w) = {j | π̄(j|w) ≤ B}
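A matching sketch of the screening loop (Algorithm 3), continuing the helpers above; for brevity it uses a set for visited, whereas the paper uses a reusable zero-initialized array for strict O(1) checks:

```python
import heapq

def greedy_screening(H, w, iters, Q, B):
    """Algorithm 3 sketch: collect B candidate indices in O(Bk + (B + k) log k)."""
    C, visited = [], set()
    while len(C) < B and Q:
        neg_z, t = heapq.heappop(Q)            # O(log k)
        j = iters[t].current()
        if j not in visited:
            C.append(j)
            visited.add(j)
        while iters[t].has_next():
            j = iters[t].get_next()
            if j not in visited:               # skip already-collected indices
                z = H[j, t] * w[t]
                heapq.heappush(Q, (-z, t))     # O(log k)
                break
    return C                                   # C = {j : greedy rank <= B}
```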
4.3 Connection to Sampling Approaches
Sample-MIPS, as mentioned earlier, is essentially a sampling algorithm with a replacement scheme to
draw entries of Z such that (j, t) is sampled with probability proportional to z_{jt}. Thus, Sample-MIPS can be thought of as a traversal of (j, t) entries in a stratified random sequence determined
by the distribution of the values {z_{jt}}, while the core idea of Greedy-MIPS is to iterate (j, t) entries
of Z in a greedy sequence induced by the ordering of {z_{jt}}.
Next, we discuss the differences of Greedy-MIPS from Sample-MIPS and Diamond-MSIPS.
Sample-MIPS can be applied only to the situation where both H and w are nonnegative because of the
nature of the sampling scheme. In contrast, Greedy-MIPS can work on any MIPS problem, as only the
ordering of {z_{jt}} matters in Greedy-MIPS. Instead of h_j^T w, Diamond-MSIPS is designed for the
MSIPS problem, which is to identify candidates with the largest (h_j^T w)^2 or |h_j^T w| values. In fact, for
nonnegative MIPS problems, the diamond sampling is equivalent to Sample-MIPS. Moreover, for
MSIPS problems with negative entries, when the number of samples is set to be the budget B,² 
Diamond-MSIPS is equivalent to applying Sample-MIPS to sample (j, t) entries with probability
p(j, t) ∝ |z_{jt}|. Thus, the applicability of the existing sampling-based approaches remains limited for
general MIPS problems.
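For contrast, a sketch (ours) of the Sample-MIPS counting scheme for the nonnegative case; for clarity it materializes Z, which a real implementation would avoid by sampling j and t hierarchically:

```python
import numpy as np

def sample_mips_counts(H, w, num_samples, rng=None):
    """Sample-MIPS sketch: requires H >= 0 and w >= 0 elementwise."""
    rng = rng or np.random.default_rng()
    Z = H * w                                   # z_jt = h_jt * w_t >= 0
    p = Z.ravel() / Z.sum()                     # p(j, t) proportional to z_jt
    draws = rng.choice(Z.size, size=num_samples, p=p)
    counts = np.bincount(draws // Z.shape[1], minlength=Z.shape[0])
    return counts                               # estimates the inner-product spectrum
```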
4.4 Theoretical Guarantee
Greedy-MIPS is an algorithm based on a greedy heuristic ranking (4). Similar to the analysis of
Quicksort, we study the average complexity of Greedy-MIPS by assuming a distribution of the input
dataset. For simplicity, our analysis is performed on a stochastic implicit matrix Z instead of w. Each
entry in Z is assumed to follow a uniform distribution uniform(a, b). We establish Theorem 3 to
prove that the number of entries (j, t) iterated by Greedy-MIPS to include the index to the largest
candidate is sublinear to n = |H| with a high probability when n is large enough.
Theorem 3. Assume that all the entries z_{jt} are drawn from a uniform distribution uniform(a, b).
Let j* be the index to the largest candidate (i.e., π(j*|Z) = 1). With high probability, we have
$\bar{\pi}(j^*|Z) \leq O\big(k \log(n)\, n^{1-\frac{1}{k}}\big)$. A detailed proof can be found in the supplementary material.
Notice that theoretical guarantees for approximate MIPS are challenging even for randomized algorithms. For example, the analysis for Diamond-MSIPS in [3] requires nonnegativity assumptions and
only works on MSIPS (max-squared-inner-product search) problems instead of MIPS problems.
5 Experimental Results
In this section, we perform extensive empirical comparisons of Greedy-MIPS with other
state-of-the-art fast MIPS approaches on both real-world and synthetic datasets. We use netflix and
yahoo-music as our real-world recommender system datasets. There are 17,770 and 624,961 items
in netflix and yahoo-music, respectively. In particular, we obtain the user embeddings {w_i} ∈ R^k
² This setting is used in the experiments in [3].
Figure 2: MIPS comparison on netflix and yahoo-music.
Figure 3: MIPS comparison on synthetic datasets with n ∈ 2^{17, 18, 19, 20} and k = 128. The datasets
used to generate these results are created with each entry drawn from a normal distribution.
Figure 4: MIPS comparison on synthetic datasets with n = 2^18 and k ∈ 2^{2, 5, 7, 10}. The datasets
used to generate these results are created with each entry drawn from a normal distribution.
and item embeddings h_j ∈ R^k by the standard low-rank matrix factorization [4] with k ∈ {50, 200}.
We also generate synthetic datasets with various n ∈ 2^{17, 18, 19, 20} and k ∈ 2^{2, 5, 7, 10}. For each
synthetic dataset, both the candidate vectors h_j and the query vector w are drawn from the normal distribution.
5.1 Experimental Settings
To have fair comparisons, all the compared approaches are implemented in C++.
• Greedy-MIPS: our proposed approach in Section 4.
• PCA-MIPS: the approach proposed in [2]. We vary the depth of the PCA tree to control the trade-off.
• LSH-MIPS: the approach proposed in [12, 14]. We use the nearest neighbor transform function
proposed in [2, 12] and use the random projection scheme as the LSH function, as suggested in
[12]. We also implement the standard amplification procedure with an OR-construction of b hyper
LSH hash functions. Each hyper LSH function is a result of an AND-construction of a random
projections. We vary the values (a, b) to control the trade-off.
• Diamond-MSIPS: the sampling scheme proposed in [3] for the maximum squared inner product
search. As it shows better performance than LSH-MIPS in [3] in terms of MIPS problems, we
also include Diamond-MSIPS in our comparison.
• Naive-MIPS: the baseline approach which applies a linear search to identify the exact top-K
candidates.
Evaluation Criteria. For each dataset, the actual top-20 items for each query are regarded as the
ground truth. We report the average performance on 2,000 randomly selected query vectors. To
evaluate the search quality, we use the precision on the top-P prediction (prec@P), obtained by
selecting top-P items from C(w) returned by the candidate screening procedure. Results with P = 5
are shown in the paper, while more results with various P are in the supplementary material. To
evaluate the search efficiency, we report the relative speedups over the Naive-MIPS approach:
speedup = (prediction time required by Naive-MIPS) / (prediction time by a compared approach).
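A small sketch of the two evaluation metrics (helper names are our own):

```python
def prec_at_p(pred_top_p, true_top_k):
    """Fraction of the top-P predictions that appear in the true top-K set."""
    return len(set(pred_top_p) & set(true_top_k)) / len(pred_top_p)

def speedup(naive_seconds, method_seconds):
    """Relative speedup over the Naive-MIPS baseline."""
    return naive_seconds / method_seconds
```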
Remarks on Budgeted MIPS versus Non-Budgeted MIPS. As mentioned in Section 3, PCA-MIPS and LSH-MIPS cannot handle MIPS with a budget. Both the search computation cost and
the search quality are fixed when the corresponding data structure is constructed. As a result, to
understand the trade-off between search efficiency and search quality for these two approaches, we
can only try various values for their parameters (such as the depth for the PCA tree and the amplification
parameters (a, b) for LSH). For each combination of parameters, we need to re-run the entire
query-independent pre-processing procedure to construct a new data structure.
Remarks on data structure construction. Note that the time complexity of the construction
for Greedy-MIPS is O(kn log n), which is on par with O(kn) for Diamond-MSIPS, and faster
than O(knab) for LSH-MIPS and O(k²n) for PCA-MIPS. As an example, the construction for
Greedy-MIPS only takes around 10 seconds on yahoo-music with n = 624,961 and k = 200.
5.2 Experimental Results
Results on Real-World Data Sets. Comparison results for netflix and yahoo-music are shown
in Figure 2. The first, second, and third columns present the results with k = 50 and k = 200,
respectively. It is clearly observed that given a fixed speedup, Greedy-MIPS yields predictions with
much higher search quality. In particular, on the yahoo-music data set with k = 200, Greedy-MIPS
runs 200x faster than Naive-MIPS and yields search results with p@5 = 70%, while none of PCA-MIPS, LSH-MIPS, and Diamond-MSIPS can achieve p@5 > 10% while maintaining a similar
200x speedup.
Results on Synthetic Data Sets. We also perform comparisons on synthetic datasets. The comparison with various n ∈ 2^{17, 18, 19, 20} is shown in Figure 3, while the comparison with various
k ∈ 2^{2, 5, 7, 10} is shown in Figure 4. We observe that the performance gap between Greedy-MIPS
over other approaches remains when n increases, while the gap becomes smaller when k increases.
However, Greedy-MIPS still outperforms other approaches significantly.
6 Conclusions and Future Work
In this paper, we develop a novel Greedy-MIPS algorithm, which has the flexibility to handle
budgeted MIPS, and yields surprisingly superior performance compared to state-of-the-art approaches.
The current implementation focuses on MIPS with dense vectors, while in the future we plan to
implement our algorithm also for high dimensional sparse vectors. We also establish a theoretical
guarantee for Greedy-MIPS based on the assumption that data are generated from a random
distribution. How to relax the assumption or how to design a nondeterministic pre-processing step for
Greedy-MIPS to satisfy the assumption are interesting future directions of this work.
Acknowledgements
This research was supported by NSF grants CCF-1320746, IIS-1546452 and CCF-1564000. CJH
was supported by NSF grant RI-1719097.
References
[1] Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. Clustering is efficient for approximate maximum inner product search, 2016. arXiv preprint arXiv:1507.05910.
[2] Yoram Bachrach, Yehuda Finkelstein, Ran Gilad-Bachrach, Liran Katzir, Noam Koenigstein, Nir Nice, and Ulrich Paquet. Speeding up the xbox recommender system using a euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pages 257-264, 2014.
[3] Grey Ballard, Seshadhri Comandur, Tamara Kolda, and Ali Pinar. Diamond sampling for approximate maximum all-pairs dot-product (MAD) search. In Proceedings of the IEEE International Conference on Data Mining, 2015.
[4] Wei-Sheng Chin, Yong Zhuang, Yu-Chin Juan, and Chih-Jen Lin. A learning-rate schedule for stochastic gradient methods to matrix factorization. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2015.
[5] Edith Cohen and David D. Lewis. Approximating matrix multiplication for pattern recognition tasks. In Proceedings of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 682-691, 1997.
[6] Paul Covington, Jay Adams, and Emre Sargin. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, 2016.
[7] Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. The Yahoo! music dataset and KDD-Cup'11. In JMLR Workshop and Conference Proceedings: Proceedings of KDD Cup 2011 Competition, volume 18, pages 3-18, 2012.
[8] Donald E. Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley, 2nd edition, 1998.
[9] Noam Koenigstein, Parikshit Ram, and Yuval Shavitt. Efficient retrieval of recommendations in a matrix factorization framework. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, CIKM '12, pages 535-544, 2012.
[10] Yehuda Koren, Robert M. Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. IEEE Computer, 42:30-37, 2009.
[11] Stephen Mussmann and Stefano Ermon. Learning and inference via maximum inner product search. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[12] Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In Proceedings of the International Conference on Machine Learning, pages 1926-1934, 2015.
[13] Parikshit Ram and Alexander G. Gray. Maximum inner-product search using cone trees. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 931-939, 2012.
[14] Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2321-2329, 2014.
[15] Anshumali Shrivastava and Ping Li. Improved asymmetric locality sensitive hashing (ALSH) for maximum inner product search (MIPS). In Proceedings of the Thirty-First Conference on Uncertainty in Artificial Intelligence (UAI), pages 812-821, 2015.
[16] Jason Weston, Samy Bengio, and Nicolas Usunier. Large scale image annotation: learning to rank with joint word-image embeddings. Mach. Learn., 81(1):21-35, October 2010.
[17] Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, and Inderjit S. Dhillon. Large-scale multi-label learning with missing labels. In Proceedings of the International Conference on Machine Learning, pages 593-601, 2014.
6,776 | 713 | Adaptive Stimulus Representations:
A Computational Theory of
Hippocampal-Region Function
Mark A. Gluck
Catherine E. Myers
Center for Molecular and Behavioral Neuroscience
Rutgers University, Newark, NJ 07102
gluck@pavlov.rutgers.edu
myers@pavlov.rutgers.edu
Abstract
We present a theory of cortico-hippocampal interaction in discrimination learning. The
hippocampal region is presumed to form new stimulus representations which facilitate
learning by enhancing the discriminability of predictive stimuli and compressing
stimulus-stimulus redundancies. The cortical and cerebellar regions, which are the sites
of long-term memory, may acquire these new representations but are not assumed to be
capable of forming new representations themselves. Instantiated as a connectionist
model, this theory accounts for a wide range of trial-level classical conditioning
phenomena in normal (intact) and hippocampal-lesioned animals. It also makes several
novel predictions which remain to be investigated empirically. The theory implies that
the hippocampal region is involved in even the simplest learning tasks; although
hippocampal-lesioned animals may be able to use other strategies to learn these tasks, the
theory predicts that they will show consistently different patterns of transfer and
generalization when the task demands change.
1 INTRODUCTION
It has long been known that the hippocampal region (including the entorhinal cortex,
subicular complex, hippocampus and dentate gyrus) plays a role in learning and memory.
For example, the hippocampus has been implicated in human declarative memory
(Scoville & Milner, 1957; Squire, 1987) while hippocampal damage in animals impairs
such seemingly disparate abilities as spatial mapping (O'Keefe & Nadel, 1978),
contextual sensitivity (Hirsh, 1974; Winocur, Rawlins & Gray, 1987; Nadel & Willner,
1980), temporal processing (Buzsaki, 1989; Akase, Alkon & Disterhoft, 1989), configural
association (Sutherland & Rudy, 1989) and the flexible use of representations in novel
situations (Eichenbaum & Buckingham, 1991). Several theories have characterized
hippocampal function in terms of one or more of these abilities. However, a theory
which can predict the full range of deficits after hippocampal lesion has been elusive.
This paper attempts to provide a functional interpretation of a hippocampal-region role in
associative learning. We propose that one function of the hippocampal region is to
construct new representations which facilitate discrimination learning. We argue that this
representational function is sufficient to derive and unify a wide range of trial-level
conditioned effects observable in the intact and lesioned animal.
2 A THEORY OF CORTICO-HIPPOCAMPAL INTERACTION
Psychological theories have often found it useful to characterize stimuli as occupying
points in an internal representation space (cf. Shepard, 1958; Nosofsky, 1984).
Connectionist theories can be interpreted in a similar geometric framework. For example,
in a connectionist network (see Figure 1A) a stimulus input such as a tone is recoded in
the network's internal layer as a pattern of activations. A light input will activate a
different pattern of activations in the internal layer nodes (Figure 1B). These internal
layer activations can be viewed as a representation of the stimulus inputs, and can be
plotted in a multi-dimensional internal representation space (Figure 1C). Learning to
classify stimulus inputs corresponds to finding an appropriate partition of representation
space. In the connectionist model, the lower layer of network weights determines the
representation while the upper layer of network weights determines the classification.
Our basic premise is that the hippocampal region has the ability to modify stimulus
representations to facilitate classification, and that its representations are biased by two
constraints. The first constraint, predictive differentiation, is a bias to differentiate the
representations of stimuli which are to be classified differently. Predictive differentiation
increases the representational resources (i.e., hidden units) devoted to representing
stimulus features which are especially predictive of how a stimulus is to be classified.
[Figure 1 appears here: panels (A), (B), (C) as described in the caption below.]
Figure 1: Stimulus representations. The activations of the internal layer nodes in a
connectionist network constitute a representation of the network's stimulus inputs. (A)
Internal representation for an example tone stimulus. (B) Internal representation for an
example light stimulus. (C) Translation of these representations into points in an internal
representation space, with one dimension encoding the activation level of each internal
node. Classifying stimuli corresponds to partitioning representation space so that
representations of stimuli which ought to be classified together lie in the same partition.
Classification is easier if the representations of stimuli to be classified together are
clustered while representations of stimuli to be classified differently are widely separated
in this space.
For example, if red stimuli alone should evoke a response, then many representational
resources should be devoted to encoding color. The second constraint, redundancy
compression, reduces the resources allocated to represent features which are redundant or
irrelevant in predicting the desired response. These two constraints are by nature
complementary, given a finite amount of representational resources. Compressing
redundant features frees resources to encode more predictive features. Conversely,
increasing the resources allocated to predictive features forces compression of the
remaining (less predictive) features.
This proposed hippocampal-region function may be modelled by a predictive autoencoder
(on the right in Figure 2). An autoencoder (Hinton, 1989) learns to map from stimulus
inputs, through an internal layer, to an output which is a reproduction of those inputs.
This is also known as stimulus-stimulus learning. To do this, the network must have
access to some multi-layer learning algorithm such as error backpropagation (Rumelhart,
Hinton & Williams, 1986). When the internal layer is narrower than the input and output
layers, the system develops a recoding in the internal layer which takes advantage of
redundancies in the inputs. A predictive autoencoder has the further constraint that it
must also output a classification response to the inputs. This is also known as
stimulus-response learning. The internal layer recoding must therefore also emphasize stimulus
features which are especially predictive of this classification. Therefore, a predictive
autoencoder learns to form internal representations constrained by both predictive
differentiation and redundancy compression, and is thus an example of a mechanism for
implementing the two representational biases described above.
The cerebral and cerebellar cortices form the sites of long-term memory in this theory, but
are not themselves directly able to form new representations. They can, however, acquire
new representations formed in the hippocampal region. A simplified model of one such
cerebellar region is shown on the left in Figure 2. This network does not have access to
multi-layer learning which would allow it to independently form new internal
representations by itself. Instead, the two layers of weights in this network evolve
independently. The bottom layer of weights is trained so that the current input pattern
generates an internal representation equivalent to that developed in the hippocampal
model. Independently and simultaneously, weights in the cortical network top layer are
trained to map from this evolving representation to the classification response. Because
the cortical networks are not creating new representations, but only learning two
independent single-layer mappings, they can use a much simpler learning rule than the
hippocampal model. One such algorithm is the LMS learning rule (Widrow & Hoff,
1960), which can instantiate the Rescorla-Wagner (1972) model of classical conditioning.
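To make the two-network arrangement concrete, the following minimal NumPy sketch (our illustration, not the authors' code; the network sizes, learning rate, and the toy tone-plus-context trial sequence are all assumptions) trains the hippocampal predictive autoencoder by backpropagation while the cortical network acquires the same recoding with two independent LMS (delta-rule) updates:

import numpy as np

rng = np.random.default_rng(0)
D_in, D_hid = 3, 2    # inputs: tone, light, context; two internal-layer nodes (assumed sizes)
lr = 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hippocampal model: predictive autoencoder (reconstruct the inputs and predict the US).
W1 = rng.normal(0.0, 0.1, (D_hid, D_in))      # stimulus -> internal layer
W2 = rng.normal(0.0, 0.1, (D_in + 1, D_hid))  # internal -> [reconstruction; US prediction]

# Cortical network: two independently trained single-layer (linear) mappings.
V1 = rng.normal(0.0, 0.1, (D_hid, D_in))      # trained to copy the hippocampal recoding
v2 = rng.normal(0.0, 0.1, D_hid)              # trained to map that recoding to the response

for trial in range(200):
    tone = float(trial % 2 == 0)
    x = np.array([tone, 0.0, 1.0])            # [tone, light, context]; context always present
    us = tone                                 # the airpuff follows the tone

    # Hippocampal model: one backpropagation step toward the target [x; us].
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)
    delta = (np.append(x, us) - y) * y * (1.0 - y)
    W1 += lr * np.outer((W2.T @ delta) * h * (1.0 - h), x)
    W2 += lr * np.outer(delta, h)

    # Cortical network: two independent LMS updates, no multi-layer credit assignment.
    hc = V1 @ x                               # bottom layer tracks the hippocampal recoding h
    V1 += lr * np.outer(h - hc, x)
    response = v2 @ hc                        # top layer predicts the US
    v2 += lr * (us - response) * hc

Simulating a hippocampal lesion then amounts to freezing V1 (the recoding) and letting only v2 adapt, which is the manipulation used in the simulations below.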
[Figure 2 appears here: the hippocampal-system model (multi-layer learning from the sensory input, which also serves as the training signal) alongside the cortical (cerebellar) network (single-layer learning).]
Figure 2. The cortico-hippocampal model: new representations developed in the
hippocampal model can be acquired by cortical networks which are incapable of
developing such representations by themselves.
3 MODELLING HIPPOCAMPAL INVOLVEMENT IN CLASSICAL CONDITIONING
A popular experimental paradigm for the study of associative learning in animals is
classical conditioning of the rabbit eyeblink response (see Gormezano, Kehoe &
Marshall, 1983, for review). A puff of air delivered to the eye elicits a blink response in
the rabbit. If a previously neutral stimulus, such as a tone or light (called the conditioned
stimulus), is repeatedly presented just before the airpuff, the animal will develop a blink
response to this stimulus -- and time the response so that the lid is maximally closed just
when the airpuff is scheduled to arrive. Ignoring the many temporal factors -- such as the
interval between stimuli or the precise timing of the response -- this reduces to a
classification problem: learning which stimuli accurately predict the airpuff and should
therefore evoke a response.
During a training trial, both the hippocampal and cortical networks receive the same
input pattern. This pattern represents the presence or absence of all stimulus cues -- both
conditioned stimuli and background contextual cues. Contextual cues are always present,
but may change slowly over time. The hippocampus is trained incrementally to predict
the current values of all cues -- including the US. The evolving hippocampal internal
layer representation is provided to the cortical network, which concurrently learns to
reproduce this representation and to associate this evolving internal representation with a
prediction of the US. This cortical network prediction is interpreted as the system's
response.
The complete (intact) cortico-hippocampal model of Figure 2 can be shown to produce
conditioned behavior comparable to that of normal (intact) animals. Hippocampal lesions
can be simulated by disabling the hippocampal model. This eliminates the training signal
which the cortical model would otherwise use to construct internal layer representations.
As a result, the lower layer of cortical network weights remains fixed. The lesioned
model's cortical network can still modify its upper layer of weights to learn new
discriminations for which its current (now fixed) internal representation is sufficient.
4 BEHAVIORAL RESULTS
A stimulus discrimination task involves learning that one stimulus A predicts the airpuff
but a second stimulus B does not. The notation <A+, B-> is used to indicate a series of
training trials intermixing A+ (A precedes the airpuff), B- (B does not precede the
airpuff) and context-alone presentations. Figure 3A shows the appropriate development
of responses to A but not to B during this task. Both the intact and lesioned systems can
acquire this discrimination. In fact, the lesioned system learns somewhat faster: it is only
learning a classification, since its representation is fixed and (for this simple task)
generally sufficient. In the intact system, by contrast, the hippocampal model is
developing a new representation and transferring it to the cortical network. The cortical
network must then learn classifications based on this changing representation. This will
be slower than learning based on a fixed representation. This paradox of discrimination
facilitation after hippocampal lesion has often been reported in the animal literature
(Schmaltz & Theios, 1972; Eichenbaum, Fagan, Mathews & Cohen, 1988); one previous
interpretation has been to suggest that the hippocampal region is somehow "unnecessary
for" or even "inhibitory to" simple discrimination learning. Our model suggests a
different interpretation: the intact system learns more slowly because it is actually
learning more than the lesioned system. The intact system is learning not only how to
map from stimuli to responses, it is also developing new stimulus representations which
enhance the differentiation among representations of predictive stimulus features while
compressing the representations of redundant and irrelevant stimulus features.
The benefit of this re-representation can most readily be seen when the task demands
suddenly change. For example, suppose the task valences shift from <A+, B-> to <A-,
B+>. The representation developed during the first training phase, which maximally
differentiated features distinguishing stimulus A from B, will still be useful in the second
training phase. Only the classification needs to be relearned. Figure 3B shows that the
intact system can learn the reversed task slightly more quickly than it learned the original
task. Successive reversals are expected to be even more facilitated, as the representations
of A and B grow ever more distinct (see Sutherland & Mackintosh, 1971, for a review of
the relevant empirical data). In contrast, the lesioned system is severely impaired in the
reversal task (Figure 3B). In the lesioned system, with a fixed representation, all the
information is contained in the upper classificatory layer of weights. This information
must be unlearned before the reversal task can be learned. Consistent with the model's
behavior, empirical studies of hippocampal-lesioned animals show strong impairment at
reversal learning (Berger & Orr, 1983).
The simplest evidence for redundancy compression likewise occurs during a transfer task.
During unreinforced pre-exposure to a stimulus cue A, the presence or absence of A is
irrelevant in terms of predicting US arrival (since a US never comes). Our theory expects
that the representation of A will therefore become compressed with the representations of
the background contextual cues. In a subsequent training phase in which A does
predict the US, the system must learn to respond to a feature it previously learned to
ignore. The representation of A must now be re-differentiated from the context. Our
theory therefore expects that learning to respond to A will be slowed, relative to learning
[Figure 3 appears here: panels (A)-(D); response and trials-to-learn curves over training trials, as described in the caption below.]
Figure 3. Behavioral results. Solid line = intact system; dashed line = lesioned system.
(A) Discrimination learning <A+, B-> in intact and lesioned models: the lesioned model
learns slightly faster. (B) Discrimination reversal <A+, B-> then <A-, B+>. The intact
system shows facilitation on successive reversals; the lesioned system is severely impaired.
(C) Latent inhibition (A- impairs A+) in the intact model. (D) No latent inhibition in the
lesioned model. All results shown are consistent with empirical data (see text for
references).
without pre-exposure to A (Figure 3C). This effect occurs in animals and is known as
latent inhibition (Lubow, 1973).
In this theory, latent inhibition arises from hippocampal-dependent recodings. In the
lesioned system, there is no stimulus-stimulus learning during the pre-exposure phase,
and no redundancy compression in the (fixed) internal representation. Therefore,
unreinforced pre-exposure does not slow the learning of a response to A (Figure 3D).
Empirical studies have shown that hippocampal lesions also eliminate latent inhibition in
animals (Solomon & Moore, 1975).
Incidentally, a standard feedforward backpropagation network, with the same architecture
as the cortical network, but with access to a multi-layer learning algorithm, fails to show
latent inhibition. Such a network can form representations in its internal layer, but unlike
the hippocampal model it does not perform stimulus-stimulus learning. Therefore, there
is no effect of unreinforced pre-exposure of a stimulus, and no latent inhibition effect
(simulations not shown).
This cortico-hippocampal theory can account for many other effects of hippocampal
lesions (see Gluck & Myers, 1992, 1993 / in press), including increased stimulus
generalization and elimination of sensory preconditioning. It also provides an
interpretation of the observation that hippocampal disruption can damage learning more
than complete hippocampal removal (Solomon, Solomon, van der Schaaf & Perry, 1983):
if the training signals from the hippocampus are "noisy", the cortical network will acquire
a distorted and continuously changing internal representation. In general, this will make
classification learning harder than in the lesioned system where the internal
representation is simply fixed.
The theory also makes several novel and testable predictions. For example, in the intact
animal, training to discriminate two highly similar stimuli is facilitated by pre-training on
an easier version of the same task -- even if the hard task is a reversal of the easy task
(Mackintosh & Little, 1970). The theory predicts that this effect arises from predictive
differentiation during the pre-training phase, and therefore should be eliminated after
hippocampal lesion. Another effect observed in intact animals is compound
preconditioning: discrimination of two stimuli A and B is impaired by pre-exposure to
the compound AB (Lubow, Rifkin & Alek, 1976). The theory attributes this effect to
redundancy compression in the pre-exposure phase, and therefore again predicts that the
effect should disappear in the hippocampal-lesioned animal.
5 CONCLUSIONS
There are many hippocampal-dependent phenomena which the model, in its present form,
does not address. For example, the model does not consider real-time temporal effects or
operant choice behavior. Because it is a trial-level model, it does not address the issue of
a consolidation period during which memories gradually become independent of the
hippocampus. We have also not considered here the physiological mechanisms or
structures within the hippocampal region which might implement the proposed
hippocampal function. Finally, the model would require extensions before it could apply
to such high-level behaviors as spatial navigation, human declarative memory, and
working memory -- all of which are known to be disrupted by hippocampal lesions.
Despite the theory's restricted scope, it provides a simple and unified account of a wide
range of trial-level conditioning data. It also makes several novel predictions which
remain to be investigated in lesioned animals. The theory suggests that the effects of
hippocampal damage may be especially informative in studies of two-phase transfer
tasks. In these paradigms, both intact and hippocampal-lesioned animals are expected to
behave similarly on a simple initial learning task, but exhibit different behaviors on a
subsequent transfer or generalization task.
REFERENCES
Akase, E., Alkon, D., & Disterhoft, J. (1989). Hippocampal lesions impair memory of short-delay conditioned eye blink in rabbits. Behavioral Neuroscience, 103(5), 935-943.
Berger, T. W., & Orr, W. B. (1983). Hippocampectomy selectively disrupts discrimination reversal learning of the rabbit nictitating membrane response. Behavioral Brain Research, 8, 49-68.
Buzsaki, G. (1989). Two-stage model of memory trace formation: A role for "noisy" brain states. Neuroscience, 31(3), 551-570.
Eichenbaum, H., & Buckingham, J. (1991). Studies on hippocampal processing: Experiment, theory, and model. In M. Gabriel & J. Moore (Eds.), Neurocomputation and Learning: Foundations of Adaptive Networks. Cambridge, MA: M.I.T. Press.
Eichenbaum, H., Fagan, A., Mathews, P., & Cohen, N. (1988). Hippocampal system dysfunction and odor discrimination learning in rats: Impairment or facilitation depending on representational demands. Behavioral Neuroscience, 102(3), 331-339.
Gluck, M., & Myers, C. (1992). Hippocampal-system function in stimulus representation and generalization: A computational theory. Proceedings of the 14th Annual Conference of the Cognitive Science Society, Bloomington, IN, 390-395.
Gluck, M., & Myers, C. (1993 / in press). Hippocampal mediation of stimulus representation: A computational theory. Hippocampus.
Gormezano, I., Kehoe, E. J., & Marshall, B. S. (1983). Twenty years of classical conditioning research with the rabbit. Progress in Psychobiology and Physiological Psychology, 10, 197-275.
Hinton, G. E. (1989). Connectionist learning procedures. Artificial Intelligence, 40, 185-234.
Hirsh, R. (1974). The hippocampus and contextual retrieval of information from memory: A theory. Behavioral Biology, 12, 421-444.
Lubow, R. E. (1973). Latent inhibition. Psychological Bulletin, 79, 398-407.
Lubow, R., Rifkin, B., & Alek, M. (1976). The context effect: The relationship between stimulus pre-exposure and environmental pre-exposure determines subsequent learning. Journal of Experimental Psychology: Animal Behavior Processes, 2(1), 38-47.
Mackintosh, N., & Little, L. (1970). An analysis of transfer along a continuum. Canadian Journal of Psychology, 24(5), 362-369.
Nadel, L., & Willner, J. (1980). Context and conditioning: A place for space. Physiological Psychology, 8, 218-228.
Nosofsky, R. M. (1984). Choice, similarity, and the context theory of classification. Journal of Experimental Psychology: Learning, Memory, and Cognition, 10, 104-114.
O'Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford, UK: Clarendon Press.
Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F. Prokasy (Eds.), Classical Conditioning II: Current Research and Theory. New York: Appleton-Century-Crofts.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. Rumelhart & J. McClelland (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1: Foundations) (pp. 318-362). Cambridge, MA: MIT Press.
Schmaltz, L. W., & Theios, J. (1972). Acquisition and extinction of a classically conditioned response in hippocampectomized rabbits (Oryctolagus cuniculus). Journal of Comparative and Physiological Psychology, 79, 328-333.
Scoville, W. B., & Milner, B. (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of Neurology, Neurosurgery, & Psychiatry, 20, 11-21.
Shepard, R. N. (1958). Stimulus and response generalization: Deduction of the generalization gradient from a trace model. Psychological Review, 65, 242-256.
Solomon, P. R., & Moore, J. W. (1975). Latent inhibition and stimulus generalization of the classically conditioned nictitating membrane response in rabbits (Oryctolagus cuniculus) following dorsal hippocampal ablation. Journal of Comparative and Physiological Psychology, 89, 1192-1203.
Solomon, P., Solomon, S., van der Schaaf, E., & Perry, H. (1983). Altered activity in the hippocampus is more detrimental to classical conditioning than removing the structure. Science, 220, 329-331.
Squire, L. R. (1987). Memory and Brain. New York: Oxford University Press.
Sutherland, N., & Mackintosh, N. (1971). Mechanisms of Animal Discrimination Learning. New York: Academic Press.
Sutherland, R. J., & Rudy, J. W. (1989). Configural association theory: The role of the hippocampal formation in learning, memory, and amnesia. Psychobiology, 17(2), 129-144.
Widrow, B., & Hoff, M. (1960). Adaptive switching circuits. Institute of Radio Engineers, Western Electronic Show and Convention, Convention Record, 4, 96-104.
Winocur, G., Rawlins, J., & Gray, J. R. (1987). The hippocampus and conditioning to contextual cues. Behavioral Neuroscience, 101, 617-625.
6,777 | 7,130 | SVD-Softmax: Fast Softmax Approximation on Large
Vocabulary Neural Networks
Kyuhong Shim, Minjae Lee, Iksoo Choi, Yoonho Boo, Wonyong Sung
Department of Electrical and Computer Engineering
Seoul National University, Seoul, Korea
[email protected], {mjlee, ischoi, yhboo}@dsp.snu.ac.kr, [email protected]
Abstract
We propose a fast approximation method of a softmax function with a very large
vocabulary using singular value decomposition (SVD). SVD-softmax targets fast
and accurate probability estimation of the topmost probable words during inference of neural network language models. The proposed method transforms the
weight matrix used in the calculation of the output vector by using SVD. The approximate probability of each word can be estimated with only a small part of
the weight matrix by using a few large singular values and the corresponding elements for most of the words. We applied the technique to language modeling and
neural machine translation and present a guideline for good approximation. The
algorithm requires only approximately 20% of arithmetic operations for an 800K
vocabulary case and shows more than a three-fold speedup on a GPU.
1 Introduction
Neural networks have shown impressive results for language modeling [1-3]. Neural network-based
language models (LMs) estimate the likelihood of a word sequence by predicting the next word wt+1
by previous words w1:t . Word probabilities for every step are acquired by matrix multiplication
and a softmax function. Likelihood evaluation by an LM is necessary for various tasks, such as
speech recognition [4, 5], machine translation, or natural language parsing and tagging. However,
executing an LM with a large vocabulary size is computationally challenging because of the softmax
normalization. Softmax computation needs to access every word to compute the normalization factor Z, where softmax(z_k) = exp(z_k) / Σ_{i=1}^{V} exp(z_i) = exp(z_k) / Z, and V indicates the vocabulary size of the dataset. We refer to the conventional softmax algorithm as the "full-softmax."
The computational requirement of the softmax function frequently dominates the complexity of
neural network LMs. For example, a Long Short-Term Memory (LSTM) [6] RNN with four layers
of 2K hidden units requires roughly 128M multiply-add operations for one inference. If the LM
supports an 800K vocabulary, the evaluation of the output probability computation with softmax
normalization alone demands approximately 1,600M multiply-add operations, far exceeding that of
the RNN core itself.
Although we should compute the output vector of all words to evaluate the denominator of the softmax function, few applications require the probability of every word. For example, if an LM is used
for rescoring purposes as in [7], only the probabilities of one or a few given words are needed. Further, for applications employing beam search, the most probable top-5 or top-10 values are usually
required. In speech recognition, since many states need to be pruned for efficient implementations,
it is not necessary to consider the probabilities of all the words. Thus, we formulate our goal: to obtain accurate top-K word probabilities with considerably less computation for LM evaluation, where
the K considered is from 1 to 500.
In this paper, we present a fast softmax approximation for LMs, which does not involve alternative
neural network architectures or additional loss during training. Our method can be directly applied
to full-softmax, regardless of how it is trained. This method is different from those proposed in other
papers, in that it aims to reduce the evaluation complexity, not to minimize the training time or
to improve the performance.
The proposed technique is based on singular value decomposition (SVD) [8] of the softmax weight
matrix. Experimental results show that the proposed algorithm provides both fast and accurate
evaluation of the most probable top-K word probabilities.
The contributions of this paper are as follows.
• We propose a fast and accurate softmax approximation, SVD-softmax, applied for calculating the top-K word probabilities.
• We provide a quantitative analysis of SVD-softmax with three different datasets and two different tasks.
• We show through experimental results that the normalization term of softmax can be approximated fairly accurately by computing only a fraction of the full weight matrix.
This paper is organized as follows. In Section 2, we review related studies and compare them to our
study. We introduce SVD-softmax in Section 3. In Section 4, we provide experimental results. In
Section 5, we discuss more details about the proposed algorithm. Section 6 concludes the paper.
2 Related work
Many methods have been developed to reduce the computational burden of the softmax function.
The most successful approaches include sampling-based softmax approximation, hierarchical softmax architecture, and self-normalization techniques. Some of these support very efficient training.
However, the methods listed below must search the entire vocabulary to find the top-K words.
Sampling-based approximations choose a small subset of possible outputs and train only with
those. Importance sampling (IS) [9], noise contrastive estimation (NCE) [10], negative sampling
(NEG) [11], and Blackout [12] are included in this category. These approximations train the network to increase the probabilities of positive samples, which are usually labels, and to decrease the
probabilities of negative samples, which are randomly sampled. These strategies are beneficial for
increasing the training speed. However, their evaluation does not show any improvement in speed.
Hierarchical softmax (HS) unifies the softmax function and output vector computation by constructing a tree structure of words. Binary HS [13, 14] uses the binary tree structure, which is
log(V) in depth. However, the binary representation is heavily dependent on each word's position,
and therefore, a two-layer [2] or three-layer [15] hierarchy is also introduced. In particular, in the
study in [15] several clustered words were arranged in a "short-list," where the outputs of the second
level hierarchy were the words themselves, not the classes of the third hierarchy. Adaptive softmax
[16] extends the idea and allocates the short-list to the first layer, with a two-layer hierarchy. Adaptive softmax achieves both a training time speedup and a performance gain. HS approaches have
advantages for quickly gathering the probability of a certain word or of predetermined words. However, HS must also visit every word to find the topmost likely words, where the merit of the tree structure is not useful.
Self-normalization approaches [17, 18] employ an additional training loss term, which leads to a
normalization factor Z close to 1. The evaluation of selected words can be achieved significantly
faster than by using full-softmax if the denominator is trained well. However, the method cannot
ensure that the denominator always appears correctly, and should also consider every word for top-K
estimation.
Differentiated Softmax (D-softmax) [19] restricts the effective parameters, using a fraction of the full output matrix. The matrix allocates a higher-dimensional representation to frequent words
and only a lower dimensional vector to rare words. From this point of view, there is a commonality
between our method and D-softmax in that the length of the vector used in the output vector computation varies among words. However, the determination of the length of each portion is somewhat
heuristic and requires specified training procedures in D-softmax. The word representation learned by D-softmax is restricted from the start, and may therefore be lacking in terms of expressiveness. In contrast, our algorithm first trains words with a full-length vector and dynamically limits the dimension during evaluation. In SVD-softmax, the importance of each word is also dynamically determined during the inference.
[Figure 1 appears here: (a) Base, (b) After SVD, (c) Preview window, (d) Additional full-view vectors.]
Figure 1: Illustration of the proposed SVD-softmax algorithm. The softmax weight matrix is decomposed by singular value decomposition (b). Only a part of the columns is used to compute the preview outputs (c). Selected rows, which are chosen by sorting the preview outputs, are recomputed with full width (d). For simplicity, the bias vector is omitted.
3 SVD-softmax
The softmax function transforms a D-dimensional real-valued vector h to a V -dimensional probability distribution. The probability calculation consists of two stages. First, we acquire the output
vector of size V , denoted as z, from h by matrix multiplication as
    z = Ah + b        (1)
where A ∈ R^{V×D} is a weight matrix, h ∈ R^D is an input vector, b ∈ R^V is a bias vector, and z ∈ R^V is the computed output vector. Second, we normalize the output vector to compute the
probability y_k of each word as
    y_k = softmax(z_k) = exp(A_k h + b_k) / Σ_{i=1}^{V} exp(A_i h + b_i) = exp(z_k) / Σ_{i=1}^{V} exp(z_i) = exp(z_k) / Z        (2)
The computational complexity of calculating the probability distribution over all classes and only
one class is the same, because the normalization factor Z requires every output vector element to
be computed.
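For reference, the full-softmax of Eqs. (1) and (2) is a few lines of NumPy (our sketch; the max-subtraction is a standard numerical-stability detail, not part of the paper):

import numpy as np

def full_softmax(A, h, b):
    z = A @ h + b              # Eq. (1): O(V*D) multiplications
    e = np.exp(z - z.max())    # Eq. (2): the denominator touches all V entries
    return e / e.sum()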
3.1 Singular value decomposition
SVD is a factorization method that decomposes a matrix into two unitary matrices U, V with singular vectors in columns and one diagonal matrix Σ with non-negative real singular values in descending order. SVD is applied to the weight matrix A as
    A = UΣV^T        (3)
where U ∈ R^{V×D}, Σ ∈ R^{D×D}, and V ∈ R^{D×D}. We multiply Σ and U to factorize the original matrix into two parts: UΣ and V^T. Note that the U · Σ multiplication is negligible in evaluation time, because we can keep the result as a single matrix.
Larger singular values in Σ are multiplied into the leftmost columns of U. As a result, the elements of the B (= UΣ) matrix are statistically arranged in descending order of magnitude, from the first column to the last. The leftmost columns of B are more influential than the rightmost columns.
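A small NumPy check of this factorization and ordering (our sketch; the shapes are illustrative):

import numpy as np

V, D = 10000, 512
A = np.random.randn(V, D)
U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ diag(s) @ Vt
B = U * s                                         # B = U Sigma, kept as a single matrix

# Columns of U have unit norm, so the column norms of B equal the singular
# values and are non-increasing from the leftmost column to the rightmost.
col_norm = np.linalg.norm(B, axis=0)
assert np.all(np.diff(col_norm) <= 1e-8)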
Algorithm 1 The proposed SVD-softmax.
1: input: trained weight matrix A, input vector h, bias vector b
2: hyperparameters: width of preview window W, number of full-view vectors N
3: initialize: decompose A = UΣV^T, set B = UΣ
4: h̃ = V^T h
5: z̃ = B[:, :W] h̃[:W] + b    ▷ compute preview outputs with only W dimensions
6: sort z̃ in descending order
7: C_N = top-N word indices of z̃    ▷ select the N words with the largest preview outputs
8: for all id in C_N do
9:    z̃[id] = B[id, :] h̃ + b[id]    ▷ update selected words by full-view vector multiplication
10: end for
11: Z̃ = Σ_{i=1}^{V} exp(z̃_i)
12: ỹ = exp(z̃) / Z̃    ▷ compute the probability distribution using softmax
13: return ỹ

3.2 Softmax approximation
Algorithm 1 shows the softmax approximation procedure, which is also illustrated in Figure 1.
Previous methods needed to compare every output vector element to find the top-K words. Instead
of using the full-length vector, we consult every word with a window of restricted length W . We
call this the "preview window" and the results the "preview outputs." Note that adding the bias b in the preview output computation is crucial for the performance. Since larger singular values are
multiplied to several leftmost columns, it is reasonable to assume that the most important portion of
the output vector is already computed with the preview window.
However, we find that the preview outputs do not suffice to obtain accurate results. To increase the
accuracy, the N largest candidates C_N are selected by sorting the V preview outputs. The selected candidates are recomputed with the full-length window. We call the candidates the "full-view" vectors. As
a result, N outputs are computed exactly while (V − N) outputs are only an approximation based
on the preview outputs. In other words, only the selected indices use the full window for output
vector computation. Finally, the softmax function is applied to the output vector to normalize the
probability distribution. The modified output vector z̃_k is formulated as
    z̃_k = B_k h̃ + b_k,            if k ∈ C_N
    z̃_k = B_k[:W] h̃[:W] + b_k,    otherwise        (4)
where B ∈ R^{V×D} and h̃ = V^T h ∈ R^D. Note that if k ∈ C_N, z̃_k is equal to z_k. The computational complexity is reduced from O(V·D) to O(V·W + N·D).
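Putting Algorithm 1 and Eq. (4) together, a minimal NumPy implementation might look as follows (our sketch; variable names are ours, and np.argpartition replaces the full sort of Algorithm 1, since only membership in the top-N set matters):

import numpy as np

def svd_softmax(B, Vt, h, b, W, N):
    """Approximate softmax probabilities (Algorithm 1).

    B  : (V, D) matrix U @ Sigma from the SVD of the softmax weights
    Vt : (D, D) transposed right singular vectors
    W  : preview window width; N : number of full-view vectors
    """
    h_tilde = Vt @ h                      # transformed hidden vector
    z = B[:, :W] @ h_tilde[:W] + b        # preview outputs, O(V*W)
    top = np.argpartition(z, -N)[-N:]     # indices of the N largest preview outputs
    z[top] = B[top] @ h_tilde + b[top]    # exact recomputation, O(N*D)
    e = np.exp(z - z.max())
    return e / e.sum()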
3.3 Metrics
To observe the accuracy of every word probability, we use Kullback-Leibler divergence (KLD) as a
metric. KLD shows the closeness of the approximated distribution to the actual one. Perplexity, or
negative log-likelihood (NLL), is a useful measurement for likelihood estimation. The gap between the full-softmax and SVD-softmax NLL should be small. For the evaluation of a given word, the accuracy of the probability depends only on the normalization factor Z, and therefore we also monitor the denominator of the softmax function.
We define "top-K coverage," which represents how many top-K words of full-softmax are included
in the top-K words of SVD-softmax. For the beam-search purpose, it is important to correctly select
the top-K words, as beam paths might change if the order is mingled.
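Both metrics can be computed directly from the two output distributions; a sketch (ours; p is the full-softmax distribution and q the SVD-softmax approximation):

import numpy as np

def kld(p, q, eps=1e-12):
    # Kullback-Leibler divergence D(p || q).
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def top_k_coverage(p_full, p_svd, k):
    # Number of the top-k words under full-softmax that also appear
    # among the top-k words under SVD-softmax.
    top_full = set(np.argsort(p_full)[-k:])
    top_svd = set(np.argsort(p_svd)[-k:])
    return len(top_full & top_svd)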
4 Experimental results
The experiments were performed on three datasets and two different applications: language modeling and machine translation. The WikiText-2 [20] and One Billion Word benchmark (OBW) [21]
Table 1: Effect of the number of hidden units on the WikiText-2 language model. The number of
full-view vectors is fixed to 3,300 for the table, which is about 10% of the size of the vocabulary.
Top-K denotes top-K coverage defined in 3.3. The values are averaged.
D     W    Z̃/Z     KLD      NLL (full/SVD)  Top-10  Top-100  Top-1000
256   16   0.9813   0.03843  4.408 / 4.518   9.97    99.47    952.71
256   32   0.9914   0.01134  4.408 / 4.441   10.00   99.97    986.94
512   32   0.9906   0.01453  3.831 / 3.907   10.00   99.89    974.87
512   64   0.9951   0.00638  3.831 / 3.852   10.00   99.99    993.35
1024  64   0.9951   0.00656  3.743 / 3.789   10.00   99.99    992.62
1024  128  0.9971   0.00353  3.743 / 3.761   10.00   100.00   998.28
datasets were used for language modeling. The neural machine translation (NMT) from German to
English was trained with a dataset provided by the OpenNMT toolkit [22].
We first analyzed the extent to which the preview window size W and the number of full-view
vectors N affect the overall performance and searched for the best working combination.
4.1 Effect of the number of hidden units on preview window size
To find the relationship between the preview window's width and the approximation quality, three
LMs trained with WikiText-2 were tested. WikiText is a text dataset, which was recently introduced [20]. The WikiText-2 dataset contains 33,278-word vocabulary and approximately 2M training tokens. An RNN with a single LSTM layer [6] was used for language modeling. Traditional
full-softmax was used for the output layer. The number of LSTM units was the same as the input
embedding dimension. Three models were trained on WikiText-2 with the number of hidden units
D being 256, 512, and 1,024.
The models were trained with stochastic gradient descent (SGD) with an initial learning rate of 1.0
and momentum of 0.95. The batch size was set to 20, and the network was unrolled for 35 timesteps.
Dropout [23] was applied to the LSTM output with a drop ratio of 0.5. Gradient clipping [24] of
maximum norm value 5 was applied.
The preview window widths W selected were 16, 32, 64, and 128 and the number of full-view
candidates N were 5% and 10% of the full vocabulary size for all three models. One thousand
sequential frames were used for the evaluation. Table 1 shows the results of selected experiments,
which indicates that the sufficient preview window size is proportional to the hidden layer dimension
D. In most cases, 1/8 of D is an adequate window width, which costs 12.5% of multiplications.
Over 99% of the denominator is covered. KLD and NLL show that the approximation produces almost the same results as the original. The top-K words are also computed precisely. We also checked that the order of the top-K words was preserved. The results showed that using too short a window width degrades the performance.
4.2 Effect of the vocabulary size on the number of full-view vectors
The OBW dataset was used to analyze the effect of vocabulary size on SVD-softmax. This benchmark is a huge dataset with a 793,472-word vocabulary. The model used 256-dimension word embedding, an LSTM layer of 2,048 units, and a full-softmax output layer. The RNN LM was trained
with SGD with an initial learning rate of 1.0.
We explored multiple models by employing a vocabulary size of 8,004, 80,004, 401,951, and
793,472, abbreviated as 8K, 80K, 400K, and 800K below. The 800K model follows the preprocessing consensus, keeping words that appear more than three times. The 400K vocabulary follows
the same process as the 800K but without case sensitivity. The 8K and 80K data models were created by choosing the most frequent 8K and 80K words, respectively. Because of the limitation
of GPU memory, the 800K model was trained with half-precision parameters. We used the full data
for training.
Table 2: Effect of the number of full-view vectors N on the One Billion Word benchmark language model. The preview window width is fixed to 256 in this table. We omit the ratio of the approximated Z̃ to the real Z, because the ratio is over 0.997 for all cases in the table. The multiplication ratio is relative to full-softmax, including the overhead of V^T h.

V     N      NLL (full/SVD)  Top-10  Top-50  Top-100  Top-500  Mult. ratio
8K    1024   2.685 / 2.698   9.98    49.81   99.36    469.48   0.493
8K    2048   2.685 / 2.687   9.99    49.99   99.89    496.05   0.605
80K   4096   3.589 / 3.605   10.00   49.94   99.85    497.73   0.195
80K   8192   3.589 / 3.591   10.00   49.99   99.97    499.56   0.240
400K  16384  3.493 / 3.495   10.00   50.00   100.00   499.90   0.171
400K  32768  3.493 / 3.495   10.00   50.00   100.00   499.98   0.201
800K  32768  4.688 / 4.718   10.00   49.99   99.96    499.99   0.168
800K  65536  4.688 / 4.690   10.00   49.99   99.96    499.89   0.200
Table 3: SVD-softmax on machine translation task. The baseline perplexity and BLEU score are
10.57 and 21.98, respectively.
W    N     Perplexity  BLEU
200  5000  10.57       21.99
200  2500  10.57       21.99
200  1000  10.58       22.00
100  5000  10.58       22.00
100  2500  10.59       22.00
100  1000  10.65       22.01
50   5000  10.60       22.00
50   2500  10.68       21.99
50   1000  11.04       22.00
The preview window width and the number of full-view vectors were selected in the powers of 2.
The results were computed on randomly selected 2,000 consecutive frames.
Table 2 shows the experimental results. With a fixed hidden dimension of 2,048, the required preview window width does not change significantly, which is consistent with the observations in Section 4.1. However, the number of full-view vectors N should increase as the vocabulary size grows.
In our experiments, using 5% to 10% of the total vocabulary size as candidates sufficed to achieve a
successful approximation. The results prove that the proposed method is scalable and more efficient
when applied to large vocabulary softmax.
4.3 Result on machine translation
NMT is based on neural networks and contains an internal softmax function. We applied SVDsoftmax to a German to English NMT task to evaluate the actual performance of the proposed
algorithm.
The baseline network, which employs the encoder-decoder model with an attention mechanism
[25, 26], was trained using the OpenNMT toolkit. The network was trained with concatenated
data which contained a WMT 2015 translation task [27], Europarl v7 [28], common crawl [29],
and news commentary v10 [30], and evaluated with newstest 2013. The training and evaluation
data were tokenized and preprocessed by following the procedures in previous studies [31, 32] to
conduct case-sensitive translation with 50,004 frequent words. The baseline network employed 500dimension word embedding, encoder- and decoder-networks with two unidirectional LSTM layers
with 500 units each, and a full-softmax output layer. The network was trained with SGD with
an initial learning rate of 1.0 while applying dropout [23] with ratio 0.3 between adjacent LSTM
layers. The rest of the training settings followed the OpenNMT training recipe, which is based on
[Figure 2 appears here: singular value plots; curves labeled S(256), S(512), and S(1024).]
Figure 2: Singular value plot of three WikiText-2 language models that differ in hidden vector dimension D ∈ {256, 512, 1024}. The left-hand side figure represents the singular value for each element, while the right-hand side figure illustrates the value proportional to D. The dashed line marks the 0.125 = 1/8 point. Both are from the same data.
previous studies [31, 33]. The performance of the network was evaluated according to perplexity
and the case-sensitive BLEU score [34], which was computed with the Moses toolkit [35]. During
translation, a beam search was conducted with beam width 5.
To evaluate our algorithm, the preview window widths W selected were 25, 50, 100, and 200, and
the numbers of full-view candidates N chosen were 1,000, 2,500, and 5,000.
Table 3 shows the experimental results for perplexity and the BLEU score with respect to the preview
window dimension W and the number of full-view vectors N . The full-softmax layer in the baseline
model employed a hidden dimension D of 500 and computed the probability for V = 50,004 words.
The experimental results show that a speed up can be achieved with preview width W = 100, which
is 1/5 of D, and the number of full-view vectors N = 2,500 or 5,000, which is 1/5 or 1/10 of
V . The parameters chosen did not affect the translation performance in terms of perplexity. For a
wider W , it is possible to use a smaller N . The experimental results show that SVD-softmax is also
effective when applied to NMT tasks.
5 Discussion
In this section, we provide empirical evidence of the reasons why SVD-softmax operates efficiently.
We also present the results of an implementation on a GPU.
5.1 Analysis of W, N, and D
We first explain why the required preview window width W is proportional to the hidden
vector size D. Figure 2 shows the singular value distribution of WikiText-2 LM softmax weights.
We observed that the distributions are similar for all three cases when the singular value indices are
scaled with D. Thus, it is important to preserve the ratio between W and D. The ratio of singular
values in a D/8 window over the total sum of singular values for 256, 512, and 1,024 hidden vector
dimensions is 0.42, 0.38, and 0.34, respectively.
Furthermore, we explore the manner in which W and N affect the normalization term, i.e., the
denominator. Figure 3 shows how the denominator is approximated while changing W or N . Note
that the leftmost column of Figure 3 represents that no full-view vectors were used.
5.2 Computational efficiency
The modeled number of multiplications in Table 2 shows that the required computation can be decreased to 20%. After factorization, the overhead of the matrix multiplication V^T h, which is O(D²), is a fixed cost. In most cases, especially with a very large vocabulary, V is significantly larger than D, and the additional computation cost is negligible. However, as V decreases, the portion of the overhead increases.
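As a sanity check of this cost model with the setting measured below (V = 2^18, D = 2^11, W = 256, N = 2^14; our arithmetic):

V, D, W, N = 2**18, 2**11, 256, 2**14

full_mults = V * D                    # full-softmax: 536,870,912 multiplications
svd_mults = D * D + V * W + N * D     # V^T h overhead + preview + full-view
print(svd_mults / full_mults)         # 0.1953125, i.e. roughly 20% of full-softmax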
Figure 3: Heatmap of the approximated normalization factor ratio Z̃/Z. The x and y axes represent N and W, respectively. The WikiText-2 language model with D = 1,024 was used. Note that the maximum values of N and W are 33,278 and 1,024, respectively. The gray line separates the area by 0.99 as a threshold. Best viewed in color.
Table 4: Measured time (ms) of full-softmax and SVD-softmax on a GPU and CPU. The experiment
was conducted on a NVIDIA GTX Titan-X (Pascal) GPU and Intel i7-6850 CPU. The second column indicates the full-softmax, while the other columns represent each step of SVD-softmax. The
cost of the sorting, exponential, and sum is omitted, as their time consumption is negligible.
                Full-softmax     SVD-softmax
Device          A × h            V^T × h        Preview window    Full-view vectors    Sum (speedup)
                (262k,2k)×2k     (2k,2k)×2k     (262k,256)×256    (16k,2k)×2k
GPU             14.12            0.33           2.98              1.12                 4.43 (×3.19)
CPU             1541.43          25.32          189.27            88.98                303.57 (×5.08)
We provide an example of time consumption on a CPU and GPU. Assume the weight A is a 262K (V = 2^18) by 2K (D = 2^11) matrix and SVD-softmax is applied with a preview window width of 256 and the number of full-view vectors set to 16K (N = 2^14). This corresponds to W/D = 1/8 and
N/V = 1/16. The setting well simulates the real LM environment and the use of the recommended
SVD-softmax hyperparameters discussed above. We used our highly optimized custom CUDA kernel for the GPU evaluation. The matrix B was stored in row-major order for convenient full-view
vector evaluation.
As observed in Table 4, the time consumption is reduced by approximately 70% on the GPU and
approximately 80% on the CPU. Note that the GPU kernel is fully parallelized while the CPU code
employs a sequential logic. We also tested various vocabulary sizes and hidden dimensions on the
custom kernel, where a speedup is mostly observed, although it is less effective for small vocabulary
cases.
5.3 Compatibility with other methods
The proposed method is compatible with a neural network trained with sampling-based softmax
approximations. SVD-softmax is also applicable to hierarchical softmax and adaptive softmax,
especially when the vocabulary is large. Hierarchical methods need large weight matrix multiplication to gather every word probability, and SVD-softmax can reduce the computation. We tested
SVD-softmax with various softmax approximations and observed that a significant amount of multiplication is removed while the performance is not significantly affected as it is by full softmax.
6 Conclusion
We present SVD-softmax, an efficient softmax approximation algorithm, which is effective for computing top-K word probabilities. The proposed method factorizes the matrix by SVD, and only part
of the SVD transformed matrix is previewed to determine which words are worth preserving. The
guideline for hyperparameter selection was given empirically. Language modeling and NMT experiments were conducted. Our method reduces the number of multiplication operations to only 20% of
that of the full-softmax with little performance degradation. The proposed SVD-softmax is a simple
yet powerful computation reduction technique.
Acknowledgments
This work was supported in part by the Brain Korea 21 Plus Project and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP)
(No.2015R1A2A1A10056051).
References
[1] Tomas Mikolov, Martin Karafi?t, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur, ?Recurrent neural network based language model,? in Interspeech, 2010, vol. 2, p. 3.
?
[2] Tom?? Mikolov, Stefan Kombrink, Luk?? Burget, Jan Cernock`
y, and Sanjeev Khudanpur, ?Extensions of recurrent neural network language model,? in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5528?5531.
[3] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier, "Language modeling with gated convolutional networks," arXiv preprint arXiv:1612.08083, 2016.
[4] William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 4960–4964.
[5] Kyuyeon Hwang and Wonyong Sung, "Character-level incremental speech recognition with recurrent neural networks," in Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016, pp. 5335–5339.
[6] Sepp Hochreiter and Jürgen Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[7] Xunying Liu, Yongqiang Wang, Xie Chen, Mark JF Gales, and Philip C Woodland, "Efficient lattice rescoring using recurrent neural network language models," in Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on. IEEE, 2014, pp. 4908–4912.
[8] Gene H Golub and Christian Reinsch, "Singular value decomposition and least squares solutions," Numerische Mathematik, vol. 14, no. 5, pp. 403–420, 1970.
[9] Yoshua Bengio, Jean-Sébastien Senécal, et al., "Quick training of probabilistic neural nets by importance sampling," in AISTATS, 2003.
[10] Andriy Mnih and Yee Whye Teh, "A fast and simple algorithm for training neural probabilistic language models," arXiv preprint arXiv:1206.6426, 2012.
[11] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.
[12] Shihao Ji, SVN Vishwanathan, Nadathur Satish, Michael J Anderson, and Pradeep Dubey, "Blackout: Speeding up recurrent neural network language models with very large vocabularies," arXiv preprint arXiv:1511.06909, 2015.
[13] Frederic Morin and Yoshua Bengio, "Hierarchical probabilistic neural network language model," in AISTATS. Citeseer, 2005, vol. 5, pp. 246–252.
[14] Andriy Mnih and Geoffrey E Hinton, "A scalable hierarchical distributed language model," in Advances in Neural Information Processing Systems, 2009, pp. 1081–1088.
[15] Hai-Son Le, Ilya Oparin, Alexandre Allauzen, Jean-Luc Gauvain, and François Yvon, "Structured output layer neural network language model," in Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on. IEEE, 2011, pp. 5524–5527.
[16] Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou, "Efficient softmax approximation for GPUs," arXiv preprint arXiv:1609.04309, 2016.
[17] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M Schwartz, and John Makhoul, "Fast and robust neural network joint models for statistical machine translation," in ACL (1). Citeseer, 2014, pp. 1370–1380.
[18] Jacob Andreas, Maxim Rabinovich, Michael I Jordan, and Dan Klein, "On the accuracy of self-normalized log-linear models," in Advances in Neural Information Processing Systems, 2015, pp. 1783–1791.
[19] Welin Chen, David Grangier, and Michael Auli, "Strategies for training large vocabulary neural language models," arXiv preprint arXiv:1512.04906, 2015.
[20] Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher, "Pointer sentinel mixture models," arXiv preprint arXiv:1609.07843, 2016.
[21] Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson, "One billion word benchmark for measuring progress in statistical language modeling," arXiv preprint arXiv:1312.3005, 2013.
[22] Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M Rush, "OpenNMT: Open-source toolkit for neural machine translation," arXiv preprint arXiv:1701.02810, 2017.
[23] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
[24] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio, "On the difficulty of training recurrent neural networks," ICML (3), vol. 28, pp. 1310–1318, 2013.
[25] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
[26] Ilya Sutskever, Oriol Vinyals, and Quoc V Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, 2014, pp. 3104–3112.
[27] Ondrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi, "Findings of the 2015 workshop on statistical machine translation," in Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015, pp. 1–46.
[28] Philipp Koehn, "Europarl: A parallel corpus for statistical machine translation," in MT Summit, 2005, vol. 5, pp. 79–86.
[29] Common Crawl Foundation, "Common crawl," http://commoncrawl.org, 2016, Accessed: 2017-04-11.
[30] Jorg Tiedemann, "Parallel data, tools and interfaces in OPUS," in LREC, 2012, vol. 2012, pp. 2214–2218.
[31] Minh-Thang Luong, Hieu Pham, and Christopher D Manning, "Effective approaches to attention-based neural machine translation," arXiv preprint arXiv:1508.04025, 2015.
[32] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio, "On using very large target vocabulary for neural machine translation," arXiv preprint arXiv:1412.2007, 2014.
[33] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[34] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics, 2002, pp. 311–318.
[35] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al., "Moses: Open source toolkit for statistical machine translation," in Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics on Interactive Poster and Demonstration Sessions. Association for Computational Linguistics, 2007, pp. 177–180.
Plan, Attend, Generate:
Planning for Sequence-to-Sequence Models
Francis Dutil*
University of Montreal (MILA)
[email protected]
Caglar Gulcehre*
University of Montreal (MILA)
[email protected]
Adam Trischler
Microsoft Research Maluuba
[email protected]
Yoshua Bengio
University of Montreal (MILA)
[email protected]
Abstract
We investigate the integration of a planning mechanism into sequence-to-sequence
models using attention. We develop a model which can plan ahead in the future when
it computes its alignments between input and output sequences, constructing a matrix
of proposed future alignments and a commitment vector that governs whether to follow
or recompute the plan. This mechanism is inspired by the recently proposed strategic
attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed
model is end-to-end trainable using primarily differentiable operations. We show that
it outperforms a strong baseline on character-level translation tasks from WMT'15,
the algorithmic task of finding Eulerian circuits of graphs, and question generation
from text. Our analysis demonstrates that the model computes qualitatively intuitive
alignments, converges faster than the baselines, and achieves superior performance
with fewer parameters.
1 Introduction
Several important tasks in the machine learning literature can be cast as sequence-to-sequence
problems (Cho et al., 2014b; Sutskever et al., 2014). Machine translation is a prime example of this: a
system takes as input a sequence of words or characters in some source language, then generates an output
sequence of words or characters in the target language ? the translation.
Neural encoder-decoder models (Cho et al., 2014b; Sutskever et al., 2014) have become a standard
approach for sequence-to-sequence tasks such as machine translation and speech recognition. Such models
generally encode the input sequence as a set of vector representations using a recurrent neural network
(RNN). A second RNN then decodes the output sequence step-by-step, conditioned on the encodings.
An important augmentation to this architecture, first described by Bahdanau et al. (2015), is for models
to compute a soft alignment between the encoder representations and the decoder state at each time-step,
through an attention mechanism. The computed alignment conditions the decoder more directly on a
relevant subset of the input sequence. Computationally, the attention mechanism is typically a simple
learned function of the decoder?s internal state, e.g., an MLP.
In this work, we propose to augment the encoder-decoder model with attention by integrating a planning
mechanism. Specifically, we develop a model that uses planning to improve the alignment between input
and output sequences. It creates an explicit plan of input-output alignments to use at future time-steps, based
* denotes that both authors (CG and FD) contributed equally and the order is determined randomly.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
on its current observation and a summary of its past actions, which it may follow or modify. This enables the
model to plan ahead rather than attending to what is relevant primarily at the current generation step. Concretely, we augment the decoder's internal state with (i) an alignment plan matrix and (ii) a commitment plan
vector. The alignment plan matrix is a template of alignments that the model intends to follow at future timesteps, i.e., a sequence of probability distributions over input tokens. The commitment plan vector governs
whether to follow the alignment plan at the current step or to recompute it, and thus models discrete decisions.
This is reminiscent of macro-actions and options from the hierarchical reinforcement learning literature (Dietterich, 2000). Our planning mechanism is inspired by the strategic attentive reader and writer (STRAW)
of Vezhnevets et al. (2016), which was originally proposed as a hierarchical reinforcement learning algorithm. In reinforcement-learning parlance, existing sequence-to-sequence models with attention can be said
to learn reactive policies; however, a model with a planning mechanism could learn more proactive policies.
Our work is motivated by the intuition that, although many natural sequences are output step-by-step because
of constraints on the output process, they are not necessarily conceived and ordered according to only local,
step-by-step interactions. Natural language in the form of speech and writing is again a prime example ?
sentences are not conceived one word at a time. Planning, that is, choosing some goal along with candidate
macro-actions to arrive at it, is one way to induce coherence in sequential outputs like language. Learning
to generate long coherent sequences, or how to form alignments over long input contexts, is difficult for
existing models. In the case of neural machine translation (NMT), the performance of encoder-decoder
models with attention deteriorates as sequence length increases (Cho et al., 2014a; Sutskever et al., 2014).
A planning mechanism could make the decoder's search for alignments more tractable and more scalable.
In this work, we perform planning over the input sequence by searching for alignments; our model does not
form an explicit plan of the output tokens to generate. Nevertheless, we find this alignment-based planning
to improve performance significantly in several tasks, including character-level NMT. Planning can also
be applied explicitly to generation in sequence-to-sequence tasks. For example, recent work by Bahdanau
et al. (2016) on actor-critic methods for sequence prediction can be seen as this kind of generative planning.
We evaluate our model and report results on character-level translation tasks from WMT'15 for English
to German, English to Finnish, and English to Czech language pairs. On almost all pairs we observe
improvements over a baseline that represents the state-of-the-art in neural character-level translation. In
our NMT experiments, our model outperforms the baseline despite using significantly fewer parameters
and converges faster in training. We also show that our model performs better than strong baselines on
the algorithmic task of finding Eulerian circuits in random graphs and the task of natural-language question
generation from a document and target answer.
2 Related Works
Existing sequence-to-sequence models with attention have focused on generating the target sequence by
aligning each generated output token to another token in the input sequence. This approach has proven
successful in neural machine translation (Bahdanau et al., 2016) and has recently been adapted to several
other applications, including speech recognition (Chan et al., 2015) and image caption generation (Xu
et al., 2015). In general these models construct alignments using a simple MLP that conditions on the
decoder's internal state. In our work we integrate a planning mechanism into the alignment function.
There have been several earlier proposals for different alignment mechanisms: for instance, Yang et al.
(2016) developed a hierarchical attention mechanism to perform document-level classification, while Luo
et al. (2016) proposed an algorithm for learning discrete alignments between two sequences using policy
gradients (Williams, 1992).
Silver et al. (2016) used a planning mechanism based on Monte Carlo tree search with neural networks
to train reinforcement learning (RL) agents on the game of Go. Most similar to our work, Vezhnevets
et al. (2016) developed a neural planning mechanism, called the strategic attentive reader and writer
(STRAW), that can learn high-level temporally abstracted macro-actions. STRAW uses an action plan
matrix, which represents the sequences of actions the model plans to take, and a commitment plan
vector, which determines whether to commit an action or recompute the plan. STRAW's action plan and
commitment plan are stochastic and the model is trained with RL. Our model computes an alignment plan
rather than an action plan, and both its alignment matrix and commitment vector are deterministic and
end-to-end trainable with backpropagation.
Our experiments focus on character-level neural machine translation because learning alignments for long
sequences is difficult for existing models. This effect can be more pronounced in character-level NMT,
since sequences of characters are longer than corresponding sequences of words. Furthermore, to learn
a proper alignment between sequences a model often must learn to segment them correctly, a process
suited to planning. Previously, Chung et al. (2016) and Lee et al. (2016) addressed the character-level
machine translation problem with architectural modifications to the encoder and the decoder. Our model
is the first we are aware of to tackle the problem through planning.
3 Planning for Sequence-to-Sequence Learning
We now describe how to integrate a planning mechanism into a sequence-to-sequence architecture with
attention (Bahdanau et al., 2015). Our model first creates a plan, then computes a soft alignment based on the
plan, and generates at each time-step in the decoder. We refer to our model as PAG (Plan-Attend-Generate).
3.1 Notation and Encoder
As input our model receives a sequence of tokens, X = (x0, ···, x|X|), where |X| denotes the length of X. It
processes these with the encoder, a bidirectional RNN. At each input position i we obtain annotation vector
hi by concatenating the forward and backward encoder states, hi = [h→i; h←i], where h→i denotes the hidden
state of the encoder's forward RNN and h←i denotes the hidden state of the encoder's backward RNN.
Through the decoder the model predicts a sequence of output tokens, Y = (y1, ···, y|Y|). We denote by
st the hidden state of the decoder RNN generating the target output token at time-step t.
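For concreteness, a minimal PyTorch sketch of such a bidirectional encoder is shown below; the embedding and hidden sizes are illustrative choices, not the values used in our experiments.

```python
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    """Produces annotation vectors h_i = [forward state; backward state]."""
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, x):          # x: (batch, |X|) token ids
        h, _ = self.rnn(self.embed(x))
        return h                   # (batch, |X|, 2 * hid_dim) annotations
```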
3.2 Alignment and Decoder
Our goal is a mechanism that plans which parts of the input sequence to focus on for the next k
time-steps of decoding. For this purpose, our model computes an alignment plan matrix At ∈ R^{k×|X|}
and commitment plan vector ct ∈ R^k at each time-step. Matrix At stores the alignments for the current
and the next k−1 timesteps; it is conditioned on the current input, i.e. the token predicted at the previous
time-step, yt, and the current context ψt, which is computed from the input annotations hi. Each row
of At gives the logits for a probability vector over the input annotation vectors. The first row gives the
logits for the current time-step, t, the second row for the next time-step, t+1, and so on. The recurrent
decoder function, fdec-rnn(·), receives st−1, yt, ψt as inputs and computes the hidden state vector

    st = fdec-rnn(st−1, yt, ψt).    (1)
Context ψt is obtained by a weighted sum of the encoder annotations,

    ψt = Σ_{i=1}^{|X|} αti hi,    (2)

where the soft-alignment vector αt = softmax(At[0]) ∈ R^{|X|} is a function of the first row of the alignment
matrix. At each time-step, we compute a candidate alignment-plan matrix Āt whose entry at the ith row is

    Āt[i] = falign(st−1, hj, βti, yt),    (3)

where falign(·) is an MLP and βti denotes a summary of the alignment matrix's ith row at time t−1. The
summary is computed using an MLP, fr(·), operating row-wise on At−1: βti = fr(At−1[i]).
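A short PyTorch sketch of Eq. (2), assuming the alignment-plan logits and annotations are already available (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def attend(A_t, H):
    """Soft alignment and context from the plan's first row (Eq. 2).
    A_t: (batch, k, |X|) alignment-plan logits; H: (batch, |X|, d) annotations."""
    alpha = F.softmax(A_t[:, 0], dim=-1)               # alpha_t over source tokens
    psi = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)  # context psi_t: (batch, d)
    return alpha, psi
```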
The commitment plan vector ct governs whether to follow the existing alignment plan, by shifting it forward
from t−1, or to recompute it. Thus, ct represents a discrete decision. For the model to operate discretely,
we use the recently proposed Gumbel-Softmax trick (Jang et al., 2016; Maddison et al., 2016) in conjunction
with the straight-through estimator (Bengio et al., 2013) to backpropagate through ct.1 The model further
learns the temperature for the Gumbel-Softmax as proposed in (Gulcehre et al., 2017). Both the commitment
vector and the action plan matrix are initialized with ones; this initialization is not modified through training.
1 We also experimented with training ct using REINFORCE (Williams, 1992) but found that Gumbel-Softmax led to better performance.
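As a small illustration of this trick, the sketch below recomputes a commitment vector with PyTorch's built-in Gumbel-Softmax; for simplicity it uses a fixed temperature rather than the learned one, and f_c stands in for the single-layer MLP described in Section 3.2.

```python
import torch.nn.functional as F

def recompute_commitment(f_c, s_prev, tau=1.0):
    """Sample a discrete commitment plan with a straight-through gradient.
    hard=True returns a one-hot vector in the forward pass while the backward
    pass flows through the underlying soft Gumbel-Softmax sample."""
    logits = f_c(s_prev)                         # (batch, k) commitment logits
    return F.gumbel_softmax(logits, tau=tau, hard=True)
```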
[Figure 1 diagram: the k×|X| alignment-plan matrix At, the softmax over its first row At[0], the commitment plan ct, the context ψt, and the two-layer decoder states s′t and st producing yt.]
Figure 1: Our planning mechanism in a sequence-to-sequence model that learns to plan and execute
alignments. Distinct from a standard sequence-to-sequence model with attention, rather than using a
simple MLP to predict alignments our model makes a plan of future alignments using its alignment-plan
matrix and decides when to follow the plan by learning a separate commitment vector. We illustrate the
model for a decoder with two layers: s′t for the first layer and st for the second layer of the decoder.
The planning mechanism is conditioned on the first layer of the decoder (s′t).
Alignment-plan update   Our decoder updates its alignment plan as governed by the commitment plan.
We denote by gt the first element of the discretized commitment plan c̃t. In more detail, gt = c̃t[0], where
the discretized commitment plan is obtained by setting ct's largest element to 1 and all other elements
to 0. Thus, gt is a binary indicator variable; we refer to it as the commitment switch. When gt = 0, the
decoder simply advances the time index by shifting the action plan matrix At−1 forward via the shift
function ρ(·). When gt = 1, the controller reads the action-plan matrix to produce the summary of the
plan, βti. We then compute the updated alignment plan by interpolating the previous alignment plan matrix
At−1 with the candidate alignment plan matrix Āt. The mixing ratio is determined by a learned update
gate ut ∈ R^{k×|X|}, whose elements uti correspond to tokens in the input sequence and are computed by
an MLP with sigmoid activation, fup(·):

    uti = fup(hi, st−1),
    At[:,i] = (1 − uti) At−1[:,i] + uti Āt[:,i].
To reiterate, the model only updates its alignment plan when the current commitment switch gt is active.
Otherwise it uses the alignments planned and committed at previous time-steps.
Commitment-plan update   The commitment plan also updates when gt becomes 1. If gt is 0, the
shift function ρ(·) shifts the commitment vector forward and appends a 0-element. If gt is 1, the model
recomputes ct using a single-layer MLP, fc(·), followed by a Gumbel-Softmax, and c̃t is recomputed
by discretizing ct as a one-hot vector:

    ct = gumbel_softmax(fc(st−1)),    (4)
    c̃t = one_hot(ct).    (5)
We provide pseudocode for the algorithm to compute the commitment plan vector and the action plan
matrix in Algorithm 1. An overview of the model is depicted in Figure 1.
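To make the control flow concrete, the following is a minimal PyTorch rendering of the update in Algorithm 1 (reproduced below); the batch dimension is omitted and the f_* arguments stand in for the learned MLPs of Section 3.2, so the shapes and signatures here are illustrative only.

```python
import torch
import torch.nn.functional as F

def shift(M):
    """rho(.): advance a plan one step, appending zeros at the end."""
    return torch.cat([M[1:], torch.zeros_like(M[:1])], dim=0)

def plan_step(A_prev, c_prev, s_prev, y_prev, H, f_align, f_r, f_up, f_c, tau=1.0):
    """One decoding step. A_prev: (k, |X|) alignment plan; c_prev: (k,) one-hot
    commitment plan; H: (|X|, d) encoder annotations."""
    if c_prev[0] > 0.5:                              # commitment switch g_t = 1
        c = F.gumbel_softmax(f_c(s_prev), tau=tau, hard=True)
        beta = f_r(A_prev)                           # summaries of the old plan
        A_cand = f_align(s_prev, H, beta, y_prev)    # candidate plan, (k, |X|)
        u = f_up(H, s_prev)                          # update gate in (0,1), (|X|,)
        A = (1 - u) * A_prev + u * A_cand            # interpolate old and new plans
    else:                                            # g_t = 0: follow the plan
        A, c = shift(A_prev), shift(c_prev)
    alpha = F.softmax(A[0], dim=-1)                  # alignment used at step t
    return A, c, alpha
```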
3.2.1 Alignment Repeat
In order to reduce the model's computational cost, we also propose an alternative to computing the
candidate alignment-plan matrix at every step. Specifically, we propose a model variant that reuses the
Algorithm 1: Pseudocode for updating the alignment plan and commitment vector.
for j ∈ {1,···,|X|} do
  for t ∈ {1,···,|Y|} do
    if gt = 1 then
      ct = softmax(fc(st−1))
      βtj = fr(At−1[j])  {Read alignment plan}
      Āt[i] = falign(st−1, hj, βtj, yt)  {Compute candidate alignment plan}
      utj = fup(hj, st−1, ψt−1)  {Compute update gate}
      At = (1 − utj) At−1 + utj Āt  {Update alignment plan}
    else
      At = ρ(At−1)  {Shift alignment plan}
      ct = ρ(ct−1)  {Shift commitment plan}
    end if
    Compute the alignment as αt = softmax(At[0])
  end for
end for
alignment vector from the previous time-step until the commitment switch activates, at which time the
model computes a new alignment vector. We call this variant repeat, plan, attend, and generate (rPAG).
rPAG can be viewed as learning an explicit segmentation with an implicit planning mechanism in an
unsupervised fashion. Repetition can reduce the computational complexity of the alignment mechanism
drastically; it also eliminates the need for an explicit alignment-plan matrix, which also reduces the
model's memory consumption. We provide pseudocode for rPAG in Algorithm 2.
Algorithm 2: Pseudocode for updating the repeat alignment and commitment vector.
for j ∈ {1,···,|X|} do
  for t ∈ {1,···,|Y|} do
    if gt = 1 then
      ct = softmax(fc(st−1, ψt−1))
      αt = softmax(falign(st−1, hj, yt))
    else
      ct = ρ(ct−1)  {Shift the commitment vector ct−1}
      αt = αt−1  {Reuse the old alignment}
    end if
  end for
end for
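Under the same illustrative conventions as the earlier plan_step sketch (and reusing its shift helper), the repeat variant reduces to the following; the exact argument layout of f_c and f_align is an assumption here.

```python
import torch.nn.functional as F

def rpag_step(alpha_prev, c_prev, s_prev, psi_prev, y_prev, H, f_align, f_c, tau=1.0):
    """One step of Algorithm 2 (sketch): reuse the previous alignment until the
    commitment switch fires, so no (k x |X|) plan matrix is materialized."""
    if c_prev[0] > 0.5:                              # g_t = 1: recompute
        c = F.gumbel_softmax(f_c(s_prev, psi_prev), tau=tau, hard=True)
        alpha = F.softmax(f_align(s_prev, H, y_prev), dim=-1)
    else:                                            # g_t = 0: reuse
        c, alpha = shift(c_prev), alpha_prev
    return c, alpha
```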
3.3 Training
We use a deep output layer (Pascanu et al., 2013a) to compute the conditional distribution over output tokens,
    p(yt | y<t, x) ∝ yt⊤ exp(Wo fo(st, yt−1, ψt)),    (6)
where Wo is a matrix of learned parameters and we have omitted the bias for brevity. Function fo is
an MLP with tanh activation.
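A possible PyTorch rendering of this output layer (dimensions are illustrative; the bias is omitted as in Eq. (6)):

```python
import torch
import torch.nn as nn

class DeepOutput(nn.Module):
    """Deep output layer of Eq. (6): an MLP f_o with tanh, then a linear
    readout W_o over the vocabulary."""
    def __init__(self, hid_dim, emb_dim, ctx_dim, vocab_size):
        super().__init__()
        self.f_o = nn.Sequential(
            nn.Linear(hid_dim + emb_dim + ctx_dim, hid_dim), nn.Tanh())
        self.W_o = nn.Linear(hid_dim, vocab_size, bias=False)

    def forward(self, s_t, y_prev_emb, psi_t):
        z = torch.cat([s_t, y_prev_emb, psi_t], dim=-1)
        return self.W_o(self.f_o(z))   # logits; softmax yields p(y_t | y_<t, x)
```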
The full model, including both the encoder and decoder, is jointly trained to minimize the (conditional)
negative log-likelihood
    L = −(1/N) Σ_{n=1}^{N} log pθ(y^(n) | x^(n)),

where the training corpus is a set of (x^(n), y^(n)) pairs and θ denotes the set of all tunable parameters.
As noted by Vezhnevets et al. (2016), the proposed model can learn to recompute very often, which
decreases the utility of planning. To prevent this behavior, we introduce a loss that penalizes the model
for committing too often,
    Lcom = λcom Σ_{t=1}^{|X|} Σ_{i=0}^{k} ‖ 1/k − c̃ti ‖₂²,    (7)

where λcom is the commitment hyperparameter and k is the timescale over which plans operate.
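In code this penalty is a one-liner; the sketch below stacks the discretized commitment vectors over time, and the value used for lambda_com is an illustrative placeholder.

```python
def commitment_loss(c_hats, k, lam=1e-3):
    """Eq. (7): penalize committing too often. c_hats: (T, k) tensor of the
    discretized commitment vectors; lam plays the role of lambda_com."""
    return lam * ((1.0 / k - c_hats) ** 2).sum()
```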
[Alignment heatmaps for panels (a), (b), and (c), plotted over the English source "Indeed, Republican lawyers identified only 300 cases of electoral fraud in the United States in a decade." and its character-level German target "Tatsächlich identifizierten republikanische Rechtsanwälte in einem Jahrzehnt nur 300 Fälle von Wahlbetrug in den USA."]
Figure 2: We visualize the alignments learned by PAG in (a), rPAG in (b), and our baseline model with
a 2-layer GRU decoder using h2 for the attention in (c). As depicted, the alignments learned by PAG
and rPAG are smoother than those of the baseline. The baseline tends to put too much attention on the
last token of the sequence, defaulting to this empty location in alternation with more relevant locations.
Our model, however, places higher weight on the last token usually when no other good alignments exist.
We observe that rPAG tends to generate less monotonic alignments in general.
4 Experiments
Our baseline is the encoder-decoder architecture with attention described in Chung et al. (2016),
wherein the MLP that constructs alignments conditions on the second-layer hidden states, h2, in the
two-layer decoder. The integration of our planning mechanism is analogous across the family of attentive
encoder-decoder models, thus our approach can be applied more generally. In all experiments below,
we use the same architecture for our baseline and the (r)PAG models. The only factor of variation is
the planning mechanism. For training all models we use the Adam optimizer with initial learning rate
set to 0.0002. We clip gradients with a threshold of 5 (Pascanu et al., 2013b) and set the number of
planning steps (k) to 10 throughout. In order to backpropagate through the alignment-plan matrices and
the commitment vectors, the model must maintain these in memory, increasing the computational overhead
of the PAG model. However, rPAG does not suffer from these computational issues.
4.1 Algorithmic Task
We first compared our models on the algorithmic task from Li et al. (2015) of finding "Eulerian
circuits" in a random graph. The original work used random graphs with 4 nodes only, but we found
that both our baseline and the PAG model solve this task very easily. We therefore increased the number
of nodes to 7. We tested the baseline described above with hidden-state dimension of 360, and the same
model augmented with our planning mechanism. The PAG model solves the Eulerian Circuits problem
with 100% absolute accuracy on the test set, indicating that for all test-set graphs, all nodes of the circuit
were predicted correctly. The baseline encoder-decoder architecture with attention performs well but
significantly worse, achieving 90.4% accuracy on the test set.
4.2 Question Generation
SQUAD (Rajpurkar et al., 2016) is a question answering (QA) corpus wherein each sample is a (document,
question, answer) triple. The document and the question are given in words and the answer is a
span of word positions in the document. We evaluate our planning models on the recently proposed
question-generation task (Yuan et al., 2017), where the goal is to generate a question conditioned on a
document and an answer. We add the planning mechanism to the encoder-decoder architecture proposed
by Yuan et al. (2017). Both the document and the answer are encoded via recurrent neural networks, and
6
the model learns to align the question output with the document during decoding. The pointer-softmax
mechanism (Gulcehre et al., 2016) is used to generate question words from either a shortlist vocabulary
or by copying from the document. Pointer-softmax uses the alignments to predict the location of the word
to copy; thus, the planning mechanism has a direct influence on the decoder's predictions.
We used 2000 examples from SQUAD's training set for validation and used the official development set
as a test set to evaluate our models. We trained a model with 800 units for all GRU hidden states and 600
units for word embeddings. On the test set the baseline achieved 66.25 NLL while PAG got 65.45 NLL.
We show the validation-set learning curves of both models in Figure 3.
[Plot: validation NLL versus number of updates (×1200) for the baseline and PAG.]
Figure 3: Learning curves for question-generation models on our development set. Both models have
the same capacity and are trained with the same hyperparameters. PAG converges faster than the baseline
with better stability.
4.3 Character-level Neural Machine Translation
Character-level neural machine translation (NMT) is an attractive research problem (Lee et al., 2016;
Chung et al., 2016; Luong and Manning, 2016) because it addresses important issues encountered in
word-level NMT. Word-level NMT systems can suffer from problems with rare words (Gulcehre et al.,
2016) or data sparsity, and the existence of compound words without explicit segmentation in some
language pairs can make learning alignments between different languages and translations more difficult.
Character-level neural machine translation mitigates these issues.
In our NMT experiments we use byte pair encoding (BPE) (Sennrich et al., 2015) for the source sequence
and characters at the target, the same setup described in Chung et al. (2016). We also use the same
preprocessing as in that work.2 We present our experimental results in Table 1. Models were tested on
the WMT'15 tasks for English to German (En→De), English to Czech (En→Cs), and English to Finnish
(En→Fi) language pairs.
over our baseline (which reproduces the results reported in (Chung et al., 2016) to within a small margin).
It does this with fewer updates and fewer parameters. We trained (r)PAG for 350K updates on the training
set, while the baseline was trained for 680K updates. We used 600 units in (r)PAG's encoder and decoder,
while the baseline used 512 in the encoder and 1024 units in the decoder. In total our model has about
4M fewer parameters than the baseline. We tested all models with a beam size of 15.
As can be seen from Table 1, layer normalization (Ba et al., 2016) improves the performance of PAG
significantly. However, according to our results on En?De, layer norm affects the performance of rPAG
only marginally. Thus, we decided not to train rPAG with layer norm on other language pairs.
In Figure 2, we show qualitatively that our model constructs smoother alignments. At each word that the
baseline decoder generates, it aligns the first few characters to a word in the source sequence, but for the remaining characters places the largest alignment weight on the last, empty token of the source sequence. This
is because the baseline becomes confident of which word to generate after the first few characters, and it generates the remainder of the word mainly by relying on language-model predictions. We observe that (r)PAG
converges faster with the help of the improved alignments, as illustrated by the learning curves in Figure 4.
2 Our implementation is based on the code available at https://github.com/nyu-dl/dl4mt-cdec
           Model       Layer Norm    Dev     Test 2014   Test 2015
En→De      Baseline        ✗        21.57      21.33       23.45
           Baseline†       ✗        21.4       21.16       22.1
           Baseline†       ✓        21.65      21.69       22.55
           PAG             ✗        21.92      21.93       22.42
           PAG             ✓        22.44      22.59       23.18
           rPAG            ✗        21.98      22.17       22.85
           rPAG            ✓        22.33      22.35       22.83
En→Cs      Baseline        ✗        17.68      19.27       16.98
           Baseline†       ✓        19.1       21.35       18.79
           PAG             ✗        18.9       20.6        18.88
           PAG             ✓        19.44      21.64       19.48
           rPAG            ✗        18.66      21.18       19.14
En→Fi      Baseline        ✗        11.19        —         10.93
           Baseline†       ✓        11.26        —         10.71
           PAG             ✗        12.09        —         11.08
           PAG             ✓        12.85        —         12.15
           rPAG            ✗        11.76        —         11.02
Table 1: The results of different models on the WMT'15 tasks for English to German, English to
Czech, and English to Finnish language pairs. We report BLEU scores of each model computed via the
multi-bleu.perl script. The best score of each model for each language pair appears in bold-face. We use
newstest2013 as our development set, newstest2014 as our "Test 2014" set, and newstest2015 as our "Test
2015" set. † denotes the results of the baseline that we trained using the hyperparameters reported in
Chung et al. (2016) and the code provided with that paper. For our baseline, we only report the median
result, and do not have multiple runs of our models. On WMT'14 and WMT'15 for En→De
character-level NMT, Kalchbrenner et al. (2016) have reported better results with deeper auto-regressive
convolutional models (ByteNets), 23.75 and 26.26 respectively.
[Plot: training NLL (log scale) versus number of updates (×100) for PAG, PAG + LayerNorm, rPAG, rPAG + LayerNorm, and the baseline.]
Figure 4: Learning curves for different models on WMT'15 for En→De. Models with the planning
mechanism converge faster than our baseline (which has larger capacity).
5 Conclusion
In this work we addressed a fundamental issue in neural generation of long sequences by integrating
planning into the alignment mechanism of sequence-to-sequence architectures. We proposed two different
planning mechanisms: PAG, which constructs explicit plans in the form of stored matrices, and rPAG,
which plans implicitly and is computationally cheaper. The (r)PAG approach empirically improves
alignments over long input sequences. We demonstrated our models' capabilities through results on
character-level machine translation, an algorithmic task, and question generation. In machine translation,
models with planning outperform a state-of-the-art baseline on almost all language pairs using fewer
parameters. We also showed that our model outperforms baselines with the same architecture (minus
planning) on question-generation and algorithmic tasks. The introduction of planning improves training
convergence, and the alignment-repeat variant can potentially improve speed as well.
References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and
Yoshua Bengio. 2016. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086 .
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to
align and translate. International Conference on Learning Representations (ICLR) .
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic
neurons for conditional computation. arXiv preprint arXiv:1308.3432 .
William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint
arXiv:1508.01211 .
Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural
machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 .
Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk,
and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine
translation. arXiv preprint arXiv:1406.1078 .
Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A character-level decoder without explicit segmentation
for neural machine translation. arXiv preprint arXiv:1603.06147 .
Thomas G Dietterich. 2000. Hierarchical reinforcement learning.
Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown
words. arXiv preprint arXiv:1603.08148 .
Caglar Gulcehre, Sarath Chandar, and Yoshua Bengio. 2017. Memory augmented neural networks with wormhole
connections. arXiv preprint arXiv:1701.08718 .
Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint
arXiv:1611.01144 .
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu.
2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099 .
Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine translation without
explicit segmentation. arXiv preprint arXiv:1610.03017 .
Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. arXiv
preprint arXiv:1511.05493 .
Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever. 2016. Learning online alignments with
continuous rewards policy gradient. arXiv preprint arXiv:1608.01281 .
Minh-Thang Luong and Christopher D Manning. 2016. Achieving open vocabulary neural machine translation with
hybrid word-character models. arXiv preprint arXiv:1604.00788 .
Chris J Maddison, Andriy Mnih, and Yee Whye Teh. 2016. The concrete distribution: A continuous relaxation of
discrete random variables. arXiv preprint arXiv:1611.00712 .
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2013a. How to construct deep recurrent
neural networks. arXiv preprint arXiv:1312.6026 .
Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013b. On the difficulty of training recurrent neural networks.
ICML (3) 28:1310–1318.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine
comprehension of text. arXiv preprint arXiv:1606.05250 .
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword
units. arXiv preprint arXiv:1508.07909 .
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian
Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of go
with deep neural networks and tree search. Nature 529(7587):484–489.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances
in neural information processing systems. pages 3104–3112.
Alexander Vezhnevets, Volodymyr Mnih, John Agapiou, Simon Osindero, Alex Graves, Oriol Vinyals, and Koray
Kavukcuoglu. 2016. Strategic attentive writer for learning macro-actions. In Advances in Neural Information
Processing Systems. pages 3486–3494.
Ronald J Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning.
Machine learning 8(3-4):229–256.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning. pages 2048–2057.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL-HLT. pages 1480–1489.
Xingdi Yuan, Tong Wang, Caglar Gulcehre, Alessandro Sordoni, Philip Bachman, Sandeep Subramanian, Saizheng
Zhang, and Adam Trischler. 2017. Machine comprehension by text-to-text neural question generation. arXiv
preprint arXiv:1705.02012 .
Task-based End-to-end Model Learning
in Stochastic Optimization
Priya L. Donti
Dept. of Computer Science
Dept. of Engr. & Public Policy
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Brandon Amos
Dept. of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
J. Zico Kolter
Dept. of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However,
the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for
learning probabilistic machine learning models in a manner that directly captures
the ultimate task-based objective for which they will be used, within the context
of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid
scheduling task, and a real-world energy storage arbitrage task. We show that the
proposed approach can outperform both traditional modeling and purely black-box
policy optimization approaches in these applications.
1
Introduction
While prediction algorithms commonly operate within some larger process, the criteria by which
we train these algorithms often differ from the ultimate criteria on which we evaluate them: the
performance of the full "closed-loop" system on the ultimate task at hand. For instance, instead of
merely classifying images in a standalone setting, one may want to use these classifications within
planning and control tasks such as autonomous driving. While a typical image classification algorithm
might optimize accuracy or log likelihood, in a driving task we may ultimately care more about the
difference between classifying a pedestrian as a tree vs. classifying a garbage can as a tree. Similarly,
when we use a probabilistic prediction algorithm to generate forecasts of upcoming electricity demand,
we then want to use these forecasts to minimize the costs of a scheduling procedure that allocates
generation for a power grid. As these examples suggest, instead of using a "generic loss," we instead
may want to learn a model that approximates the ultimate task-based "true loss."
This paper considers an end-to-end approach for learning probabilistic machine learning models
that directly capture the objective of their ultimate task. Formally, we consider probabilistic models
in the context of stochastic programming, where the goal is to minimize some expected cost over
the models? probabilistic predictions, subject to some (potentially also probabilistic) constraints.
As mentioned above, it is common to approach these problems in a two-step fashion: first to fit a
predictive model to observed data by minimizing some criterion such as negative log-likelihood,
and then to use this model to compute or approximate the necessary expected costs in the stochastic
programming setting. While this procedure can work well in many instances, it ignores the fact
that the true cost of the system (the optimization objective evaluated on actual instantiations in the
real world) may benefit from a model that actually attains worse overall likelihood, but makes more
accurate predictions over certain manifolds of the underlying space.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
We propose to train a probabilistic model not (solely) for predictive accuracy, but so that, when it is
later used within the loop of a stochastic programming procedure, it produces solutions that minimize
the ultimate task-based loss. This formulation may seem somewhat counterintuitive, given that a
"perfect" predictive model would of course also be the optimal model to use within a stochastic
programming framework. However, the reality that all models do make errors illustrates that we
should indeed look to a final task-based objective to determine the proper error tradeoffs within a
machine learning setting. This paper proposes one way to evaluate task-based tradeoffs in a fully
automated fashion, by computing derivatives through the solution to the stochastic programming
problem in a manner that can improve the underlying model.
We begin by presenting background material and related work in areas spanning stochastic programming, end-to-end training, optimizing alternative loss functions, and the classic generative/discriminative tradeoff in machine learning. We then describe our approach within the formal
context of stochastic programming, and give a generic method for propagating task loss through these
problems in a manner that can update the models. We report on three experimental evaluations of the
proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task,
and a real-world energy storage arbitrage task. We show that the proposed approach outperforms
traditional modeling and purely black-box policy optimization approaches.
2 Background and related work
Stochastic programming Stochastic programming is a method for making decisions under uncertainty by modeling or optimizing objectives governed by a random process. It has applications
in many domains such as energy [1], finance [2], and manufacturing [3], where the underlying
probability distributions are either known or can be estimated. Common considerations include how
to best model or approximate the underlying random variable, how to solve the resulting optimization
problem, and how to then assess the quality of the resulting (approximate) solution [4].
In cases where the underlying probability distribution is known but the objective cannot be solved
analytically, it is common to use Monte Carlo sample average approximation methods, which
draw multiple iid samples from the underlying probability distribution and then use deterministic
optimization methods to solve the resultant problems [5]. In cases where the underlying distribution
is not known, it is common to learn or estimate some model from observed samples [6].
End-to-end training Recent years have seen a dramatic increase in the number of systems building
on so-called "end-to-end" learning. Generally speaking, this term refers to systems where the end
goal of the machine learning process is directly predicted from raw inputs [e.g. 7, 8]. In the context
of deep learning systems, the term now traditionally refers to architectures where, for example, there
is no explicit encoding of hand-tuned features on the data, but the system directly predicts what the
image, text, etc. is from the raw inputs [9, 10, 11, 12, 13]. The context in which we use the term
end-to-end is similar, but slightly more in line with its older usage: instead of (just) attempting to learn
an output (with known and typically straightforward loss functions), we are specifically attempting to
learn a model based upon an end-to-end task that the user is ultimately trying to accomplish. We feel
that this concept, of describing the entire closed-loop performance of the system as evaluated on the
real task at hand, is beneficial to add to the notion of end-to-end learning.
Also highly related to our work are recent efforts in end-to-end policy learning [14], using value
iteration effectively as an optimization procedure in similar networks [15], and multi-objective
optimization [16, 17, 18, 19]. These lines of work fit more with the "pure" end-to-end approach
we discuss later on (where models are eschewed for pure function approximation methods), but
conceptually the approaches have similar motivations in modifying typically-optimized policies to
address some task(s) directly. Of course, the actual methodological approaches are quite different,
given our specific focus on stochastic programming as the black box of interest in our setting.
Optimizing alternative loss functions There has been a great deal of work in recent years on
using machine learning procedures to optimize different loss criteria than those "naturally" optimized
by the algorithm. For example, Stoyanov et al. [20] and Hazan et al. [21] propose methods for
optimizing loss criteria in structured prediction that are different from the inference procedure of
the prediction algorithm; this work has also recently been extended to deep networks [22]. Recent
work has also explored using auxiliary prediction losses to satisfy multiple objectives [23], learning
dynamics models that maximize control performance in Bayesian optimization [24], and learning
adaptive predictive models via differentiation through a meta-learning optimization objective [25].
The work we have found in the literature that most closely resembles our approach is the work of
Bengio [26], which uses a neural network model for predicting financial prices, and then optimizes the
model based on returns obtained via a hedging strategy that employs it. We view this approach, of both
using a model and then tuning that model to adapt to a (differentiable) procedure, as a philosophical
predecessor to our own work. In concurrent work, Elmachtoub and Grigas [27] also propose
an approach for tuning model parameters given optimization results, but in the context of linear
programming and outside the context of deep networks. Whereas Bengio [26] and Elmachtoub and
Grigas [27] use hand-crafted (but differentiable) algorithms to approximately attain some objective
given a predictive model, our approach is tightly coupled to stochastic programming, where the
explicit objective is to attempt to optimize the desired task cost via an exact optimization routine, but
given underlying randomness. The notions of stochasticity are thus naturally quite different in our
work, but we do hope that our work can bring back the original idea of task-based model learning.
(Despite Bengio [26]'s original paper being nearly 20 years old, virtually all follow-on work has
focused on the financial application, and not on what we feel is the core idea of using a surrogate
model within a task-driven optimization procedure.)
3 End-to-end model learning in stochastic programming
We first formally define the stochastic modeling and optimization problems with which we are
concerned. Let (x ∈ X, y ∈ Y) ∼ D denote standard input-output pairs drawn from some
(real, unknown) distribution D. We also consider actions z ∈ Z that incur some expected loss
L_D(z) = E_{x,y∼D}[f(x, y, z)]. For instance, a power systems operator may try to allocate power
generators z given past electricity demand x and future electricity demand y; this allocation's loss
corresponds to the over- or under-generation penalties incurred given future demand instantiations.
If we knew D, then we could select optimal actions z*_D = argmin_z L_D(z). However, in practice,
the true distribution D is unknown. In this paper, we are interested in modeling the conditional
distribution y|x using some parameterized model p(y|x; θ) in order to minimize the real-world cost of
the policy implied by this parameterization. Specifically, we find some parameters θ to parameterize
p(y|x; θ) (as in the standard statistical setting) and then determine optimal actions z*(x; θ) (via
stochastic optimization) that correspond to our observed input x and the specific choice of parameters
θ in our probabilistic model. Upon observing the costs of these actions z*(x; θ) relative to true
instantiations of x and y, we update our parameterized model p(y|x; θ) accordingly, calculate the
resultant new z*(x; θ), and repeat. The goal is to find parameters θ such that the corresponding policy
z*(x; θ) optimizes loss under the true joint distribution of x and y.
Explicitly, we wish to choose θ to minimize the task loss L(θ) in the context of x, y ∼ D, i.e.

$$\underset{\theta}{\text{minimize}} \;\; L(\theta) = \mathbb{E}_{x,y\sim D}\left[f(x, y, z^\star(x;\theta))\right]. \qquad (1)$$
Since in reality we do not know the distribution D, we obtain z*(x; θ) via a proxy stochastic
optimization problem for a fixed instantiation of parameters θ, i.e.

$$z^\star(x;\theta) = \underset{z}{\operatorname{argmin}} \;\; \mathbb{E}_{y\sim p(y|x;\theta)}\left[f(x, y, z)\right]. \qquad (2)$$

The above setting specifies z*(x; θ) using a simple (unconstrained) stochastic program, but in reality
our decision may be subject to both probabilistic and deterministic constraints. We therefore consider
more general decisions produced through a generic stochastic programming problem¹

$$\begin{aligned}
z^\star(x;\theta) = \underset{z}{\operatorname{argmin}} \;\; & \mathbb{E}_{y\sim p(y|x;\theta)}\left[f(x, y, z)\right] \\
\text{subject to} \;\; & \mathbb{E}_{y\sim p(y|x;\theta)}\left[g_i(x, y, z)\right] \le 0, \quad i = 1, \ldots, n_{ineq} \\
& h_i(z) = 0, \quad i = 1, \ldots, n_{eq}.
\end{aligned} \qquad (3)$$
¹ It is standard to presume in stochastic programming that equality constraints depend only on decision
variables (not random variables), as non-trivial random equality constraints are typically not possible to satisfy.
In this setting, the full task loss is more complex, since it captures both the expected cost and any
deviations from the constraints. We can write this, for instance, as

$$L(\theta) = \mathbb{E}_{x,y\sim D}\left[f(x, y, z^\star(x;\theta))\right] + \sum_{i=1}^{n_{ineq}} \mathbb{I}\left\{\mathbb{E}_{x,y\sim D}\left[g_i(x, y, z^\star(x;\theta))\right] \le 0\right\} + \sum_{i=1}^{n_{eq}} \mathbb{E}_x\left[\mathbb{I}\left\{h_i(z^\star(x;\theta)) = 0\right\}\right] \qquad (4)$$

(where I(·) is the indicator function that is zero when its constraints are satisfied and infinite otherwise). However, the basic intuition behind our approach remains the same for both the constrained
and unconstrained cases: in both settings, we attempt to learn parameters of a probabilistic model not
to produce strictly "accurate" predictions, but such that when we use the resultant model within a
stochastic programming setting, the resulting decisions perform well under the true distribution.
Actually solving this problem requires that we differentiate through the "argmin" operator z*(x; θ)
of the stochastic programming problem. This differentiation is not possible for all classes of optimization problems (the argmin operator may be discontinuous), but as we will show shortly, in many
practical cases (including cases where the function and constraints are strongly convex) we can indeed
efficiently compute these gradients even in the context of constrained optimization.
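As a concrete sanity check of the quantities defined above, the following is a minimal Python sketch of a Monte Carlo estimate of the (unconstrained) task loss (1); `f` and `solve_stochastic_program` are hypothetical callables standing in for the cost function and the proxy problem (3), not part of any library.

```python
# Sketch: Monte Carlo estimate of the task loss L(theta) in (1).
import numpy as np

def task_loss_estimate(theta, data, f, solve_stochastic_program, n=1000):
    """Average f(x, y, z*(x; theta)) over held-out (x, y) pairs."""
    losses = []
    for x, y in data[:n]:
        z_star = solve_stochastic_program(x, theta)  # argmin of the proxy (3)
        losses.append(f(x, y, z_star))               # realized task cost
    return np.mean(losses)
```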
3.1 Discussion and alternative approaches
We highlight our approach in contrast to two alternative existing methods: traditional model learning
and model-free black-box policy optimization. In traditional machine learning approaches, it is
common to choose θ to minimize the (conditional) log-likelihood of observed data under the model
p(y|x; θ). This method corresponds to approximately solving the optimization problem

$$\underset{\theta}{\text{minimize}} \;\; \mathbb{E}_{x,y\sim D}\left[-\log p(y|x;\theta)\right]. \qquad (5)$$
If we then need to use the conditional distribution y|x to determine actions z within some later
optimization setting, we commonly use the predictive model obtained from (5) directly. This
approach has obvious advantages, in that the model-learning phase is well-justified independent of
any future use in a task. However, it is also prone to poor performance in the common setting where
the true distribution y|x cannot be represented within the class of distributions parameterized by θ, i.e.
where the procedure suffers from model bias. Conceptually, the log-likelihood objective implicitly
trades off between model error in different regions of the input/output space, but does so in a manner
largely opaque to the modeler, and may ultimately not employ the correct tradeoffs for a given task.
In contrast, there is an alternative approach to solving (1) that we describe as the model-free
"black-box" policy optimization approach. Here, we forgo learning any model at all of the random variable y. Instead, we attempt to learn a policy mapping directly from inputs x to actions
z̄(x; θ̄) that minimize the loss L(θ̄) presented in (4) (where here θ̄ defines the form of the policy itself, not a predictive model). While such model-free methods can perform well in many
settings, they are often very data-inefficient, as the policy class must have enough representational power to describe sufficiently complex policies without recourse to any underlying model.²
Our approach offers an intermediate setting, where we do still use a surrogate model to determine an optimal decision z*(x; θ), yet we adapt this model based on the task loss instead of any model prediction accuracy. In practice, we typically want to minimize some weighted combination of log-likelihood and task loss, which can be easily accomplished given our approach.

3.2 Optimizing task loss

To solve the generic optimization problem (4), we can in principle adopt a straightforward (constrained) stochastic gradient approach, as detailed in Algorithm 1.

Algorithm 1 Task Loss Optimization
  Input: Example x, y ∼ D.
  1:  initialize θ // some initial parameterization
  2:  for t = 1, . . . , T do
  3:    compute z*(x; θ) via Equation (3)
  4:    initialize γ_t // step size
  5:    // step in violated constraint or objective
  6:    if ∃i s.t. g_i(x, y, z*(x; θ)) > 0 then
  7:      update θ ← θ − γ_t ∇_θ g_i(x, y, z*(x; θ))
  8:    else
  9:      update θ ← θ − γ_t ∇_θ f(x, y, z*(x; θ))
  10:   end if
  11: end for

² This distinction is roughly analogous to the policy search vs. model-based settings in reinforcement learning.
However, for the purposes of this paper, we consider much simpler stochastic programs without the multiple
rounds that occur in RL, and the extension of these techniques to a full RL setting remains as future work.
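The following is a minimal PyTorch sketch of one iteration of Algorithm 1, under the assumption that `solver` is a differentiable argmin layer (in the spirit of [31]) so gradients flow from the task loss back to the model parameters; `model`, `solver`, `f`, and `g_list` are hypothetical placeholders rather than the authors' implementation.

```python
# Sketch of one step of Algorithm 1 with PyTorch autograd.
import torch

def task_loss_step(model, solver, f, g_list, x, y, opt):
    theta = model(x)                        # distribution parameters theta(x)
    z_star = solver(theta)                  # differentiable argmin of (3)
    violations = [g(x, y, z_star) for g in g_list]
    violated = [v for v in violations if v.item() > 0]
    # Step in a violated constraint if any exists, otherwise in the objective.
    loss = violated[0] if violated else f(x, y, z_star)
    opt.zero_grad()
    loss.backward()                         # gradients flow through z_star
    opt.step()
```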
[Figure 1: Features x, model predictions y, and policy z for the three experiments. (a) Inventory stock problem: past demand, past temperature, and temporal features feed a model p(y|x; θ) of the (uncertain, discrete) demand, which drives the newspaper stocking decision. (b) Load forecasting problem: past temperature, temporal features, and load forecasts feed a model of the (uncertain) demand, which drives the generation schedule. (c) Price forecasting problem: past prices, past temperature, temporal features, and load forecasts feed a model of the (uncertain) prices, which drives the battery schedule.]

At each iteration, we
solve the proxy stochastic programming problem (3) to obtain z*(x, θ), using the distribution defined
by our current values of θ. Then, we compute the true loss L(θ) using the observed value of y.
If any of the inequality constraints gi in L(?) are violated, we take a gradient step in the violated
constraint; otherwise, we take a gradient step in the optimization objective f . We note that if any
inequality constraints are probabilistic, Algorithm 1 must be adapted to employ mini-batches in order
to determine whether these probabilistic constraints are satisfied. Alternatively, because even the gi
constraints are probabilistic, it is common in practice to simply move a weighted version of these
constraints to the objective, i.e., we modify the objective by adding some appropriate penalty times
the positive part of the function, λ·g_i(x, y, z)_+, for some λ > 0. In practice, this has the effect of
taking gradient steps jointly in all the violated constraints and the objective in the case that one or
more inequality constraints are violated, often resulting in faster convergence. Note that we need
only move stochastic constraints into the objective; deterministic constraints on the policy itself will
always be satisfied by the optimizer, as they are independent of the model.
3.3 Differentiating the optimization solution to a stochastic programming problem
While the above presentation highlights the simplicity of the proposed approach, it avoids the chief
technical challenge of this approach, which is computing the gradient of an objective that
depends upon the argmin operation z*(x; θ). Specifically, we need to compute the term

$$\frac{\partial L}{\partial \theta} = \frac{\partial L}{\partial z^\star} \cdot \frac{\partial z^\star}{\partial \theta}, \qquad (6)$$

which involves the Jacobian ∂z*/∂θ. This is the Jacobian of the optimal solution with respect to the
distribution parameters θ. Recent approaches have looked into similar argmin differentiations [28, 29],
though the methodology we present here is more general and handles the stochasticity of the objective.
At a high level, we begin by writing the KKT optimality conditions of the general stochastic
programming problem (3). Differentiating these equations and applying the implicit function theorem
gives a set of linear equations that we can solve to obtain the necessary Jacobians (with expectations
over the distribution y ∼ p(y|x; θ) denoted E_y, and where g is the vector of inequality constraints):

$$
\begin{bmatrix}
\nabla_z^2 \mathbb{E}_y f(z) + \sum_{i=1}^{n_{ineq}} \lambda_i \nabla_z^2 \mathbb{E}_y g_i(z) & \left(\frac{\partial \mathbb{E}_y g(z)}{\partial z}\right)^{T} & A^T \\
\operatorname{diag}(\lambda)\,\frac{\partial \mathbb{E}_y g(z)}{\partial z} & \operatorname{diag}\left(\mathbb{E}_y g(z)\right) & 0 \\
A & 0 & 0
\end{bmatrix}
\begin{bmatrix}
\frac{\partial z}{\partial \theta} \\ \frac{\partial \lambda}{\partial \theta} \\ \frac{\partial \nu}{\partial \theta}
\end{bmatrix}
= -
\begin{bmatrix}
\frac{\partial \nabla_z \mathbb{E}_y f(z)}{\partial \theta} + \frac{\partial \sum_{i=1}^{n_{ineq}} \lambda_i \nabla_z \mathbb{E}_y g_i(z)}{\partial \theta} \\
\operatorname{diag}(\lambda)\,\frac{\partial \mathbb{E}_y g(z)}{\partial \theta} \\ 0
\end{bmatrix}
\qquad (7)
$$
The terms in these equations look somewhat complex, but fundamentally, the left side gives the
optimality conditions of the convex problem, and the right side gives the derivatives of the relevant
functions at the achieved solution with respect to the governing parameter θ. In practice, we calculate
the right-hand terms by employing sequential quadratic programming [30] to find the optimal policy
z*(x; θ) for the given parameters θ, using a recently-proposed approach for fast solution of the argmin
differentiation for QPs [31] to solve the necessary linear equations; we then take the derivatives at the
optimum produced by this strategy. Details of this approach are described in the appendix.
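To make the structure of this linear solve concrete, here is a minimal numpy sketch of the simplest special case of (7): a deterministic, equality-constrained QP, where differentiating the KKT conditions reduces to a single linear system. This is only an illustration under that simplifying assumption, not the paper's implementation (which uses the QP differentiation of [31]).

```python
# Sketch: differentiating z*(q) = argmin_z 0.5 z^T Q z + q^T z  s.t.  A z = b.
# KKT:  [[Q, A^T], [A, 0]] [z; nu] = [-q; b];  differentiating w.r.t. q gives
#       [[Q, A^T], [A, 0]] [dz/dq; dnu/dq] = [-I; 0].
import numpy as np

Q = np.array([[3.0, 0.5], [0.5, 1.0]])
q = np.array([1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])    # KKT matrix
sol = np.linalg.solve(K, np.concatenate([-q, b]))  # primal/dual solution
z_star = sol[:2]

rhs = np.vstack([-np.eye(2), np.zeros((1, 2))])    # right-hand side
dz_dq = np.linalg.solve(K, rhs)[:2]                # Jacobian dz*/dq
```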
4 Experiments
We consider three applications of our task-based method: a synthetic inventory stock problem, a
real-world energy scheduling task, and a real-world battery arbitrage task. We demonstrate that the
task-based end-to-end approach can substantially improve upon other alternatives. Source code for
all experiments is available at https://github.com/locuslab/e2e-model-learning.
4.1 Inventory stock problem
Problem definition To highlight the performance of the algorithm in a setting where the true
underlying model is known to us, we consider a "conditional" variation of the classical inventory
stock problem [4]. In this problem, a company must order some quantity z of a product to minimize
costs over some stochastic demand y, whose distribution in turn is affected by some observed features
x (Figure 1a). There are linear and quadratic costs on the amount of product ordered, plus different
linear/quadratic costs on over-orders [z − y]_+ and under-orders [y − z]_+. The objective is given by

$$f_{stock}(y, z) = c_0 z + \frac{1}{2} q_0 z^2 + c_b [y - z]_+ + \frac{1}{2} q_b ([y - z]_+)^2 + c_h [z - y]_+ + \frac{1}{2} q_h ([z - y]_+)^2, \qquad (8)$$

where [v]_+ ≡ max{v, 0}. For a specific choice of probability model p(y|x; θ), our proxy stochastic
programming problem can then be written as

$$\underset{z}{\text{minimize}} \;\; \mathbb{E}_{y\sim p(y|x;\theta)}\left[f_{stock}(y, z)\right]. \qquad (9)$$
To simplify the setting, we further assume that the demands are discrete, taking on values d_1, . . . , d_k
with probabilities (conditional on x) (p_θ)_i ≡ p(y = d_i|x; θ). Thus our stochastic programming
problem (A.5) can be written succinctly as a joint quadratic program³

$$\begin{aligned}
\underset{z\in\mathbb{R},\, z_b, z_h \in\mathbb{R}^k}{\text{minimize}} \;\; & c_0 z + \frac{1}{2} q_0 z^2 + \sum_{i=1}^{k} (p_\theta)_i \left( c_b (z_b)_i + \frac{1}{2} q_b (z_b)_i^2 + c_h (z_h)_i + \frac{1}{2} q_h (z_h)_i^2 \right) \\
\text{subject to} \;\; & d - z\mathbf{1} \le z_b, \quad z\mathbf{1} - d \le z_h, \quad z, z_h, z_b \ge 0.
\end{aligned} \qquad (10)$$

Further details of this approach are given in the appendix.
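As an illustration of how (10) can be posed to an off-the-shelf solver, here is a minimal cvxpy sketch; the cost coefficients mirror the representative values reported in Figure 2, and the demand grid and probabilities are arbitrary choices for the example.

```python
# Sketch of the discrete-demand inventory QP (10) in cvxpy.
import cvxpy as cp
import numpy as np

def solve_inventory_qp(p_theta, d, c0=10., q0=2., cb=30., qb=14., ch=10., qh=2.):
    k = len(d)
    z = cp.Variable()        # order quantity
    zb = cp.Variable(k)      # under-order amount for each demand value
    zh = cp.Variable(k)      # over-order amount for each demand value
    expected = cp.sum(cp.multiply(p_theta,
                                  cb * zb + 0.5 * qb * cp.square(zb)
                                  + ch * zh + 0.5 * qh * cp.square(zh)))
    cost = c0 * z + 0.5 * q0 * cp.square(z) + expected
    cons = [d - z <= zb, z - d <= zh, z >= 0, zb >= 0, zh >= 0]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return z.value

d = np.linspace(0, 10, 11)    # demand support d_1, ..., d_k
p = np.ones(11) / 11          # uniform p_theta, for illustration only
print(solve_inventory_qp(p, d))
```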
Experimental setup We examine our algorithm under two main conditions: where the true model
is linear, and where it is nonlinear. In all cases, we generate problem instances by randomly sampling
some x ∈ R^n and then generating p(y|x; θ) according to either p(y|x; θ) ∝ exp(θᵀx) (linear true
model) or p(y|x; θ) ∝ exp((θᵀx)²) (nonlinear true model) for some θ ∈ R^{n×k}. We compare
the following approaches on these tasks: 1) The QP allocation based upon the true model (which
performs optimally); 2) MLE approaches (with linear or nonlinear probability models) that fit a model
to the data, and then compute the allocation by solving the QP; 3) pure end-to-end policy-optimizing
models (using linear or nonlinear hypotheses for the policy); and 4) our task-based learning models
(with linear or nonlinear probability models). In all cases we evaluate test performance by running on
1000 random examples, and evaluate performance over 10 folds of different true θ* parameters.
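A minimal numpy sketch of this data-generation process follows; the dimensions, sample count, and random seed are illustrative assumptions.

```python
# Sketch: sampling (x, y) pairs with softmax demand probabilities under the
# linear true model p(y|x; theta) proportional to exp(theta^T x).
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 20, 11, 1000                      # feature dim, demand levels, samples
theta_true = rng.normal(size=(n, k))
X = rng.normal(size=(m, n))
logits = X @ theta_true                     # use (X @ theta_true)**2 for the
p = np.exp(logits - logits.max(axis=1, keepdims=True))  # nonlinear variant
p /= p.sum(axis=1, keepdims=True)
y = np.array([rng.choice(k, p=pi) for pi in p])  # demand index per sample
```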
Figures 2(a) and (b) show the performance of these methods given a linear true model, with linear
and nonlinear model hypotheses, respectively. As expected, the linear MLE approach performs best,
as the true underlying model is in the class of distributions that it can represent and thus solving the
stochastic programming problem is a very strong proxy for solving the true optimization problem
under the real distribution. While the true model is also contained within the nonlinear MLE's generic
nonlinear distribution class, we see that this method requires more data to converge, and when given
less data makes error tradeoffs that are ultimately not the correct tradeoffs for the task at hand; our
task-based approach thus outperforms this approach. The task-based approach also substantially
outperforms the policy-optimizing neural network, highlighting the fact that it is more data-efficient
to run the learning process "through" a reasonable model. Note that here it does not make a difference
whether we use the linear or nonlinear model in the task-based approach.
Figures 2(c) and (d) show performance in the case of a nonlinear true model, with linear and
nonlinear model hypotheses, respectively. Case (c) represents the "non-realizable" case, where the
true underlying distribution cannot be represented by the model hypothesis class. Here, the linear
MLE, as expected, performs very poorly: it cannot capture the true underlying distribution, and thus
the resultant stochastic programming solution would not be expected to perform well. The linear
policy model similarly performs poorly. Importantly, the task-based approach with the linear model
performs much better here: despite the fact that it still has a misspecified model, the task-based
nature of the learning process lets us learn a different linear model than the MLE version, which is
³ This is referred to as a two-stage stochastic programming problem (though a very trivial example of one),
where first stage variables consist of the amount of product to buy before observing demand, and second-stage
variables consist of how much to sell back or additionally purchase once the true demand has been revealed.
Figure 2: Inventory problem results for 10 runs over a representative instantiation of true parameters
(c0 = 10, q0 = 2, cb = 30, qb = 14, ch = 10, qh = 2). Cost is evaluated over 1000 testing samples
(lower is better). The linear MLE performs best for a true linear model. In all other cases, the
task-based models outperform their MLE and policy counterparts.
particularly tuned to the distribution and loss of the task. Finally, also as to be expected, the non-linear
models perform better than the linear models in this scenario, but again with the task-based non-linear
model outperforming the nonlinear MLE and end-to-end policy approaches.
4.2 Load forecasting and generator scheduling
We next consider a more realistic grid-scheduling task, based upon over 8 years of real electrical
grid data. In this setting, a power system operator must decide how much electricity generation
z ∈ R²⁴ to schedule for each hour in the next 24 hours based on some (unknown) distribution over
electricity demand (Figure 1b). Given a particular realization y of demand, we impose penalties for
both generation excess (γ_e) and generation shortage (γ_s), with γ_s > γ_e. We also add a quadratic
regularization term, indicating a preference for generation schedules that closely match demand
realizations. Finally, we impose a ramping constraint cr restricting the change in generation between
consecutive timepoints, reflecting physical limitations associated with quick changes in electricity
output levels. These are reasonable proxies for the actual economic costs incurred by electrical grid
operators when scheduling generation, and can be written as the stochastic programming problem
$$\begin{aligned}
\underset{z\in\mathbb{R}^{24}}{\text{minimize}} \;\; & \mathbb{E}_{y\sim p(y|x;\theta)}\left[\sum_{i=1}^{24} \gamma_s [y_i - z_i]_+ + \gamma_e [z_i - y_i]_+ + \frac{1}{2}(z_i - y_i)^2\right] \\
\text{subject to} \;\; & |z_i - z_{i-1}| \le c_r \;\; \forall i,
\end{aligned} \qquad (11)$$
where [v]_+ ≡ max{v, 0}. Assuming (as we will in our model) that y_i is a Gaussian random
variable with mean μ_i and variance σ_i², then this expectation has a closed form that can be computed
via analytically integrating the Gaussian PDF.4 We then use sequential quadratic programming
(SQP) to iteratively approximate the resultant convex objective as a quadratic objective, iterate until
convergence, and then compute the necessary Jacobians using the quadratic approximation at the
solution, which gives the correct Hessian and gradient terms. Details are given in the appendix.
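For reference, here is a minimal scipy sketch of the closed-form expectation of the piecewise-linear terms in (11) under the Gaussian model, together with a Monte Carlo check; it uses the standard identity E[(y − z)_+] = (μ − z)Φ(d) + σφ(d) with d = (μ − z)/σ.

```python
# Sketch: closed form of E[(y - z)_+] for Gaussian y ~ N(mu, sigma^2).
import numpy as np
from scipy.stats import norm

def expected_shortage(z, mu, sigma):
    d = (mu - z) / sigma
    return (mu - z) * norm.cdf(d) + sigma * norm.pdf(d)

def expected_excess(z, mu, sigma):
    return expected_shortage(-z, -mu, sigma)   # symmetry: E[(z - y)_+]

# Monte Carlo check of the identity:
mu, sigma, z = 3.0, 1.5, 2.0
samples = np.random.default_rng(0).normal(mu, sigma, 10**6)
mc = np.mean(np.maximum(samples - z, 0.0))
assert abs(mc - expected_shortage(z, mu, sigma)) < 1e-2
```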
To develop a predictive model, we make use of a highly-tuned load forecasting methodology. Specifically, we input the past day's electrical load and temperature, the next day's temperature forecast,
and additional features such as non-linear functions of the temperatures, binary indicators of weekends or holidays, and yearly sinusoidal features. We then predict the electrical load over all 24
⁴ Part of the philosophy behind applying our approach here is that we know the Gaussian assumption
is incorrect: the true underlying load is neither Gaussian distributed nor homoskedastic. However, these
assumptions are exceedingly common in practice, as they enable easy model learning and exact analytical
solutions. Thus, training the (still Gaussian) system with a task-based loss retains computational tractability
while still allowing us to modify the distribution?s parameters to improve actual performance on the task at hand.
Figure 4: Results for 10 runs of the generation-scheduling problem for representative decision
parameters γ_e = 0.5, γ_s = 50, and c_r = 0.4. (Lower loss is better.) As expected, the RMSE net
achieves the lowest RMSE for its predictions. However, the task net outperforms the RMSE net on
task loss by 38.6%, and the cost-weighted RMSE on task loss by 8.6%.
hours of the next day. We employ a 2-hidden-layer neural network for this purpose, with an additional residual connection from the inputs to the outputs initialized to the linear regression solution.
An illustration of the architecture is shown in Figure 3. We train the model to minimize the mean
squared error between its predictions and the actual load (giving the mean prediction μ_i), and compute
σ_i² as the (constant) empirical variance between the predicted and actual values. In all cases we use 7
years of data to train the model, and 1.75 subsequent years for testing.
Using the (mean and variance) predictions of this base model, we obtain z*(x; θ) by solving the generator scheduling problem (11) and then adjusting network parameters to minimize the resultant task
loss. We compare against a traditional stochastic programming model that minimizes just the RMSE,
as well as a cost-weighted RMSE that periodically reweights training samples given their task loss.⁵ (A
pure policy-optimizing network is not shown, as it could not sufficiently learn the ramp constraints.
We could not obtain good performance for the policy optimizer even ignoring this infeasibility.)

[Figure 3: 2-hidden-layer neural network to predict hourly electric load for the next day. Inputs include past load, past temperature and (past temperature)², future temperature and (future temperature)², weekday, holiday, and DST indicators, and the yearly features sin(2π · DOY) and cos(2π · DOY); two hidden layers of 200 units map these to the future load.]
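A minimal PyTorch sketch of a network with this shape follows; the layer sizes come from Figure 3, while the activations and other details are assumptions.

```python
# Sketch of the 2-hidden-layer forecasting network with a residual linear path.
import torch.nn as nn

class LoadForecastNet(nn.Module):
    def __init__(self, n_in, n_out=24, hidden=200):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )
        # Residual connection; in the paper this path is initialized to the
        # linear regression solution.
        self.linear = nn.Linear(n_in, n_out)

    def forward(self, x):
        return self.mlp(x) + self.linear(x)
```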
Figure 4 shows the performance of the three models. As expected, the RMSE model performs
best with respect to the RMSE of its predictions (its objective). However, the task-based model
substantially outperforms the RMSE model when evaluated on task loss, the actual objective that
the system operator cares about: specifically, we improve upon the performance of the traditional
stochastic programming method by 38.6%. The cost-weighted RMSE's performance is extremely
variable, and overall, the task net improves upon this method by 8.6%.
4.3 Price forecasting and battery storage
Finally, we consider a battery arbitrage task, based upon 6 years of real electrical grid data. Here, a
grid-scale battery must operate over a 24 hour period based on some (unknown) distribution over
future electricity prices (Figure 1c). For each hour, the operator must decide how much to charge
(z_in ∈ R²⁴) or discharge (z_out ∈ R²⁴) the battery, thus inducing a particular state of charge in the
battery (z_state ∈ R²⁴). Given a particular realization y of prices, the operator optimizes over: 1)
profits, 2) flexibility to participate in other markets, by keeping the battery near half its capacity B
(with weight λ), and 3) battery health, by discouraging rapid charging/discharging (with weight ε,
⁵ It is worth noting that a cost-weighted RMSE approach is only possible when direct costs can be assigned
independently to each decision point, i.e. when costs do not depend on multiple decision points (as in this
experiment). Our task-based method, however, accommodates the (typical) more general setting.
Hyperparameters (λ, ε)         (0.1, 0.05)     (1, 0.5)        (10, 5)           (35, 15)
RMSE net                       1.45 ± 4.67     4.96 ± 4.85     131.08 ± 144.86   172.66 ± 7.38
Task-based net (our method)    2.92 ± 0.30     2.28 ± 2.99     95.88 ± 29.83     169.84 ± 2.16
% Improvement                  1.02            0.54            0.27              0.02

Table 1: Task loss results for 10 runs each of the battery storage problem, given a lithium-ion battery
with attributes B = 1, γ_eff = 0.9, c_in = 0.5, and c_out = 0.2. (Lower loss is better.) Our task-based net
on average somewhat improves upon the RMSE net, and demonstrates more reliable performance.
ε < λ). The battery also has a charging efficiency (γ_eff), limits on speed of charge (c_in) and discharge
(c_out), and begins at half charge. This can be written as the stochastic programming problem

$$\begin{aligned}
\underset{z_{in}, z_{out}, z_{state} \in \mathbb{R}^{24}}{\text{minimize}} \;\; & \mathbb{E}_{y\sim p(y|x;\theta)}\left[\sum_{i=1}^{24} y_i (z_{in} - z_{out})_i + \lambda \left\lVert z_{state} - \frac{B}{2}\right\rVert^2 + \epsilon\lVert z_{in}\rVert^2 + \epsilon\lVert z_{out}\rVert^2\right] \\
\text{subject to} \;\; & z_{state,i+1} = z_{state,i} - z_{out,i} + \gamma_{eff}\, z_{in,i} \;\; \forall i, \quad z_{state,1} = B/2, \\
& 0 \le z_{in} \le c_{in}, \quad 0 \le z_{out} \le c_{out}, \quad 0 \le z_{state} \le B.
\end{aligned} \qquad (12)$$
Assuming (as we will in our model) that y_i is a random variable with mean μ_i, then this expectation
has a closed form that depends only on the mean. Further details are given in the appendix.
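Since the objective of (12) is linear in y, the proxy problem only needs the mean price forecast. Here is a minimal cvxpy sketch built on that observation, with the battery attributes of Table 1 as defaults and the weights λ, ε passed as `lam`, `eps`; it is an illustration, not the authors' code.

```python
# Sketch of the battery arbitrage problem (12) with a point forecast y_hat.
import cvxpy as cp
import numpy as np

def battery_schedule(y_hat, B=1., eff=0.9, c_in=0.5, c_out=0.2,
                     lam=0.1, eps=0.05):
    T = len(y_hat)
    z_in, z_out = cp.Variable(T), cp.Variable(T)
    z_state = cp.Variable(T + 1)
    cost = (cp.sum(cp.multiply(y_hat, z_in - z_out))
            + lam * cp.sum_squares(z_state - B / 2)
            + eps * cp.sum_squares(z_in) + eps * cp.sum_squares(z_out))
    cons = [z_state[0] == B / 2,
            z_state[1:] == z_state[:-1] - z_out + eff * z_in,
            z_in >= 0, z_in <= c_in,
            z_out >= 0, z_out <= c_out,
            z_state >= 0, z_state <= B]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return z_in.value, z_out.value, z_state.value

prices = np.sin(np.linspace(0, 2 * np.pi, 24)) + 1.0  # toy 24-hour forecast
z_in, z_out, z_state = battery_schedule(prices)
```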
To develop a predictive model for the mean, we use an architecture similar to that described in
Section 4.2. In this case, we input the past day's prices and temperature, the next day's load forecasts
and temperature forecasts, and additional features such as non-linear functions of the temperatures
and temporal features similar to those in Section 4.2. We again train the model to minimize the
mean squared error between the model's predictions and the actual prices (giving the mean prediction
μ_i), using about 5 years of data to train the model and 1 subsequent year for testing. Using the
mean predictions of this base model, we then solve the storage scheduling problem by solving the
optimization problem (12), again learning network parameters by minimizing the task loss. We
compare against a traditional stochastic programming model that minimizes just the RMSE.
Table 1 shows the performance of the two models. As energy prices are difficult to predict due
to numerous outliers and price spikes, the models in this case are not as well-tuned as in our load
forecasting experiment; thus, their performance is relatively variable. Even then, in all cases, our
task-based model demonstrates better average performance than the RMSE model when evaluated
on task loss, the objective most important to the battery operator (although the improvements are
not statistically significant). More interestingly, our task-based method shows less (and in some
cases, far less) variability in performance than the RMSE-minimizing method. Qualitatively, our
task-based method hedges against perverse events such as price spikes that could substantially affect
the performance of a battery charging schedule. The task-based method thus yields more reliable
performance than a pure RMSE-minimizing method in the case the models are inaccurate due to a
high level of stochasticity in the prediction task.
5 Conclusions and future work
This paper proposes an end-to-end approach for learning machine learning models that will be used in
the loop of a larger process. Specifically, we consider training probabilistic models in the context of
stochastic programming to directly capture a task-based objective. Preliminary experiments indicate
that our task-based learning model substantially outperforms MLE and policy-optimizing approaches
in all but the (rare) case that the MLE model "perfectly" characterizes the underlying distribution.
Our method also achieves a 38.6% performance improvement over a highly-optimized real-world
stochastic programming algorithm for scheduling electricity generation based on predicted load.
In the case of energy price prediction, where there is a high degree of inherent stochasticity in
the problem, our method demonstrates more reliable task performance than a traditional predictive
method. The task-based approach thus demonstrates promise in optimizing in-the-loop predictions.
Future work includes an extension of our approach to stochastic learning models with multiple rounds,
and further to model predictive control and full reinforcement learning settings.
Acknowledgments
This material is based upon work supported by the National Science Foundation Graduate Research
Fellowship Program under Grant No. DGE1252522, and by the Department of Energy Computational
Science Graduate Fellowship.
References
[1] Stein W Wallace and Stein-Erik Fleten. Stochastic programming models in energy. Handbooks
in Operations Research and Management Science, 10:637–677, 2003.
[2] William T Ziemba and Raymond G Vickson. Stochastic optimization models in finance,
volume 1. World Scientific, 2006.
[3] John A Buzacott and J George Shanthikumar. Stochastic models of manufacturing systems,
volume 4. Prentice Hall Englewood Cliffs, NJ, 1993.
[4] Alexander Shapiro and Andy Philpott. A tutorial on stochastic programming. Manuscript.
Available at www2.isye.gatech.edu/ashapiro/publications.html, 17, 2007.
[5] Jeff Linderoth, Alexander Shapiro, and Stephen Wright. The empirical behavior of sampling
methods for stochastic programming. Annals of Operations Research, 142(1):215–241, 2006.
[6] R Tyrrell Rockafellar and Roger J-B Wets. Scenarios and policy aggregation in optimization
under uncertainty. Mathematics of Operations Research, 16(1):119–147, 1991.
[7] Yann LeCun, Urs Muller, Jan Ben, Eric Cosatto, and Beat Flepp. Off-road obstacle avoidance
through end-to-end learning. In NIPS, pages 739–746, 2005.
[8] Ryan W Thomas, Daniel H Friend, Luiz A DaSilva, and Allen B MacKenzie. Cognitive networks:
adaptation and learning to achieve end-to-end performance objectives. IEEE Communications
Magazine, 44(12):51–57, 2006.
[9] Kai Wang, Boris Babenko, and Serge Belongie. End-to-end scene text recognition. In Computer
Vision (ICCV), 2011 IEEE International Conference on, pages 1457–1464. IEEE, 2011.
[10] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 770–778, 2016.
[11] Tao Wang, David J Wu, Adam Coates, and Andrew Y Ng. End-to-end text recognition
with convolutional neural networks. In Pattern Recognition (ICPR), 2012 21st International
Conference on, pages 3304–3308. IEEE, 2012.
[12] Alex Graves and Navdeep Jaitly. Towards end-to-end speech recognition with recurrent neural
networks. In ICML, volume 14, pages 1764–1772, 2014.
[13] Dario Amodei, Rishita Anubhai, Eric Battenberg, Carl Case, Jared Casper, Bryan Catanzaro,
Jingdong Chen, Mike Chrzanowski, Adam Coates, Greg Diamos, et al. Deep speech 2: End-to-end speech recognition in English and Mandarin. arXiv preprint arXiv:1512.02595, 2015.
[14] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep
visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
[15] Aviv Tamar, Sergey Levine, Pieter Abbeel, Yi Wu, and Garrett Thomas. Value iteration
networks. In Advances in Neural Information Processing Systems, pages 2146–2154, 2016.
[16] Ken Harada, Jun Sakuma, and Shigenobu Kobayashi. Local search for multiobjective function
optimization: Pareto descent method. In Proceedings of the 8th Annual Conference on Genetic
and Evolutionary Computation, pages 659–666. ACM, 2006.
[17] Kristof Van Moffaert and Ann Nowé. Multi-objective reinforcement learning using sets of
Pareto dominating policies. Journal of Machine Learning Research, 15(1):3483–3512, 2014.
[18] Hossam Mossalam, Yannis M Assael, Diederik M Roijers, and Shimon Whiteson. Multi-objective deep reinforcement learning. arXiv preprint arXiv:1610.02707, 2016.
[19] Marco A Wiering, Maikel Withagen, and Mădălina M Drugan. Model-based multi-objective
reinforcement learning. In Adaptive Dynamic Programming and Reinforcement Learning
(ADPRL), 2014 IEEE Symposium on, pages 1–6. IEEE, 2014.
[20] Veselin Stoyanov, Alexander Ropson, and Jason Eisner. Empirical risk minimization of graphical
model parameters given approximate inference, decoding, and model structure. In International
Conference on Artificial Intelligence and Statistics, pages 725–733. ISSN 15324435.
[21] Tamir Hazan, Joseph Keshet, and David A McAllester. Direct loss minimization for structured
prediction. In Advances in Neural Information Processing Systems, pages 1594–1602, 2010.
[22] Yang Song, Alexander G Schwing, Richard S Zemel, and Raquel Urtasun. Training deep neural
networks via direct loss minimization. In Proceedings of The 33rd International Conference on
Machine Learning, pages 2169–2177, 2016.
[23] Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo,
David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary
tasks. arXiv preprint arXiv:1611.05397, 2016.
[24] Somil Bansal, Roberto Calandra, Ted Xiao, Sergey Levine, and Claire J Tomlin. Goal-driven
dynamics learning via bayesian optimization. arXiv preprint arXiv:1703.09260, 2017.
[25] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
[26] Yoshua Bengio. Using a financial training criterion rather than a prediction criterion. International Journal of Neural Systems, 8(04):433–443, 1997.
[27] Adam N Elmachtoub and Paul Grigas. Smart "predict, then optimize". arXiv preprint
arXiv:1710.08005, 2017.
[28] Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and
Edison Guo. On differentiating parameterized argmin and argmax problems with application to
bi-level optimization. arXiv preprint arXiv:1607.05447, 2016.
[29] Brandon Amos, Lei Xu, and J Zico Kolter. Input convex neural networks. arXiv preprint
arXiv:1609.07152, 2016.
[30] Paul T Boggs and Jon W Tolle. Sequential quadratic programming. Acta Numerica, 4:1–51,
1995.
[31] Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural
networks. arXiv preprint arXiv:1703.00443, 2017.
[32] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[33] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
6,780 | 7,133 | ALICE: Towards Understanding Adversarial
Learning for Joint Distribution Matching
Chunyuan Li¹, Hao Liu², Changyou Chen³, Yunchen Pu¹, Liqun Chen¹,
Ricardo Henao¹ and Lawrence Carin¹
¹ Duke University  ² Nanjing University  ³ University at Buffalo
[email protected]
Abstract
We investigate the non-identifiability issues associated with bidirectional adversarial training for joint distribution matching. Within a framework of conditional
entropy, we propose both adversarial and non-adversarial approaches to learn
desirable matched joint distributions for unsupervised and supervised tasks. We
unify a broad family of adversarial models as joint distribution matching problems.
Our approach stabilizes learning of unsupervised bidirectional adversarial learning
methods. Further, we introduce an extension for semi-supervised learning tasks.
Theoretical results are validated in synthetic data and real-world applications.
1 Introduction
Deep directed generative models are a powerful framework for modeling complex data distributions.
Generative Adversarial Networks (GANs) [1] can implicitly learn the data generating distribution;
more specifically, GAN can learn to sample from it. In order to do this, GAN trains a generator to
mimic real samples, by learning a mapping from a latent space (where the samples are easily drawn)
to the data space. Concurrently, a discriminator is trained to distinguish between generated and real
samples. The key idea behind GAN is that if the discriminator finds it difficult to distinguish real from
artificial samples, then the generator is likely to be a good approximation to the true data distribution.
In its standard form, GAN only yields a one-way mapping, i.e., it lacks an inverse mapping mechanism
(from data to latent space), preventing GAN from being able to do inference. The ability to compute
a posterior distribution of the latent variable conditioned on a given observation may be important
for data interpretation and for downstream applications (e.g., classification from the latent variable)
[2, 3, 4, 5, 6, 7]. Efforts have been made to simultaneously learn an efficient bidirectional model
that can produce high-quality samples for both the latent and data spaces [3, 4, 8, 9, 10, 11]. Among
them, the recently proposed Adversarially Learned Inference (ALI) [4, 10] casts the learning of such
a bidirectional model in a GAN-like adversarial framework. Specifically, a discriminator is trained to
distinguish between two joint distributions: that of the real data sample and its inferred latent code,
and that of the real latent code and its generated data sample.
While ALI is an inspiring and elegant approach, it tends to produce reconstructions that are not
necessarily faithful reproductions of the inputs [4]. This is because ALI only seeks to match two
joint distributions, but the dependency structure (correlation) between the two random variables
(conditionals) within each joint is not specified or constrained. In practice, this results in solutions
that satisfy ALI?s objective and that are able to produce real-looking samples, but have difficulties
reconstructing observed data [4]. ALI also has difficulty discovering the correct pairing relationship
in domain transformation tasks [12, 13, 14].
In this paper, (i) we first describe the non-identifiability issue of ALI. To solve this problem, we
propose to regularize ALI using the framework of Conditional Entropy (CE), hence we call the
proposed approach ALICE. (ii) Adversarial learning schemes are proposed to estimate the conditional
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
entropy, for both unsupervised and supervised learning paradigms. (iii) We provide a unified view
for a family of recently proposed GAN models from the perspective of joint distribution matching,
including ALI [4, 10], CycleGAN [12, 13, 14] and Conditional GAN [15]. (iv) Extensive experiments
on synthetic and real data demonstrate that ALICE is significantly more stable to train than ALI, in
that it consistently yields more viable solutions (good generation and good reconstruction), without
being too sensitive to perturbations of the model architecture, i.e., hyperparameters. We also show
that ALICE results in more faithful image reconstructions. (v) Further, our framework can leverage
paired data (when available) for semi-supervised tasks. This is empirically demonstrated on the
discovery of relationships for cross domain tasks based on image data.
2 Background
Consider two general marginal distributions q(x) and p(z) over x ∈ X and z ∈ Z. One domain
can be inferred based on the other using conditional distributions, q(z|x) and p(x|z). Further, the
combined structure of both domains is characterized by joint distributions q(x, z) = q(x)q(z|x) and
p(x, z) = p(z)p(x|z).
To generate samples from these random variables, adversarial methods [1] provide a sampling
mechanism that only requires gradient backpropagation, without the need to specify the conditional
densities. Specifically, instead of sampling directly from the desired conditional distribution, the
random variable is generated as a deterministic transformation of two inputs, the variable in the source
domain, and an independent noise, e.g., a Gaussian distribution. Without loss of generality, we use a
universal distribution approximator specification [9], i.e., the sampling procedure for conditionals
x̃ ∼ p_θ(x|z) and z̃ ∼ q_φ(z|x) is carried out through the following two generating processes:

$$\tilde{x} = g_\theta(z, \epsilon), \; z \sim p(z), \; \epsilon \sim \mathcal{N}(0, I), \quad \text{and} \quad \tilde{z} = g_\phi(x, \zeta), \; x \sim q(x), \; \zeta \sim \mathcal{N}(0, I), \qquad (1)$$

where g_θ(·) and g_φ(·) are two generators, specified as neural networks with parameters θ and
φ, respectively. In practice, the inputs of g_θ(·) and g_φ(·) are simple concatenations, [z ε] and
[x ζ], respectively. Note that (1) implies that p_θ(x|z) and q_φ(z|x) are parameterized by θ and φ,
respectively, hence the subscripts.
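A minimal PyTorch sketch of the concatenation-based samplers in (1) follows; the hidden width and activation are illustrative assumptions.

```python
# Sketch: a universal-approximator sampler, x_tilde = g_theta([z, eps]) or
# z_tilde = g_phi([x, zeta]), with Gaussian noise concatenated to the input.
import torch
import torch.nn as nn

class Sampler(nn.Module):
    def __init__(self, d_in, d_noise, d_out, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in + d_noise, hidden), nn.ReLU(),
            nn.Linear(hidden, d_out),
        )
        self.d_noise = d_noise

    def forward(self, v):
        noise = torch.randn(v.shape[0], self.d_noise, device=v.device)
        return self.net(torch.cat([v, noise], dim=1))
```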
The goal of GAN [1] is to match the marginal p_θ(x) = ∫ p_θ(x|z)p(z)dz to q(x). Note that q(x)
denotes the true distribution of the data (from which we have samples) and p(z) is specified as a
simple parametric distribution, e.g., isotropic Gaussian. In order to do the matching, GAN trains
an ω-parameterized adversarial discriminator network, f_ω(x), to distinguish between samples from
p_θ(x) and q(x). Formally, the minimax objective of GAN is given by the following expression:

$$\min_\theta \max_\omega \; \mathcal{L}_{GAN}(\theta, \omega) = \mathbb{E}_{x\sim q(x)}[\log \sigma(f_\omega(x))] + \mathbb{E}_{\tilde{x}\sim p_\theta(x|z),\, z\sim p(z)}[\log(1 - \sigma(f_\omega(\tilde{x})))], \qquad (2)$$
where ?(?) is the sigmoid function. The following lemma characterizes the solutions of (2) in terms
of marginals p? (x) and q(x).
Lemma 1 ([1]) The optimal decoder and discriminator, parameterized by {θ*, ω*}, correspond to
a saddle point of the objective in (2), if and only if p_θ*(x) = q(x).
Alternatively, ALI [4] matches the joint distributions p_θ(x, z) = p_θ(x|z)p(z) and q_φ(x, z) =
q(x)q_φ(z|x), using an adversarial discriminator network similar to (2), f_ω(x, z), parameterized by
ω. The minimax objective of ALI can then be written as

    min_{θ,φ} max_ω L_ALI(θ, φ, ω) = E_{x∼q(x), z̃∼q_φ(z|x)}[log σ(f_ω(x, z̃))]
                                   + E_{x̃∼p_θ(x|z), z∼p(z)}[log(1 − σ(f_ω(x̃, z)))].   (3)
Lemma 2 ([4]) The optimum of the two generators and the discriminator, with parameters
{θ*, φ*, ω*}, forms a saddle point of the objective in (3), if and only if p_θ*(x, z) = q_φ*(x, z).
From Lemma 2, if a solution of (3) is achieved, it is guaranteed that all marginals and conditional
distributions of the pair {x, z} match. Note that this implies that q_φ(z|x) and p_θ(z|x) match;
however, (3) imposes no restrictions on these two conditionals. This is key for the identifiability
issues of ALI described below.
3 Adversarial Learning with Information Measures
The relationship (mapping) between random variables x and z is not specified or constrained by
ALI. As a result, it is possible that the matched distribution π(x, z) ≜ p_θ*(x, z) = q_φ*(x, z) is
undesirable for a given application.
To illustrate this issue, Figure 1 shows all solutions (saddle points) to the ALI objective on a simple toy
problem. The data and latent random variables can take two possible values, X = {x1, x2} and Z = {z1, z2},
respectively. In this case, their marginals q(x) and p(z) are known, i.e., q(x = x1) = 0.5 and p(z = z1) = 0.5.
The matched joint distribution, π(x, z), can be represented as a 2 × 2 contingency table. Figure 1(a)
represents all possible solutions of the ALI objective in (3), for any ε ∈ [0, 1]. Figures 1(b) and 1(c)
represent the opposite extreme solutions, when ε = 1 and ε = 0, respectively.

[Figure 1: Illustration of possible solutions to the ALI objective. The first row shows the mappings
between the two domains; the second row shows the matched joint distribution, π(x, z), as contingency
tables parameterized by ε ∈ [0, 1]: panel (a) has entries {ε/2, (1 − ε)/2}, and panels (b) and (c) show
the two deterministic extremes.]

Note that although we can generate "realistic" values of x from any sample of p(z), for 0 < ε < 1 we
will have poor reconstruction ability, since the sequence x ∼ q(x), z̃ ∼ q_φ(z|x), x̃ ∼ p_θ(x|z̃) can easily
result in x̃ ≠ x. The two (trivial) exceptions where the model can achieve perfect reconstruction
correspond to ε ∈ {1, 0}, and are illustrated in Figures 1(b) and 1(c), respectively. From this simple
example, we see that due to the flexibility of the joint distribution, π(x, z), it is quite likely to obtain
an undesirable solution to the ALI objective: for instance, (i) one with poor reconstruction ability, or
(ii) one where a single instance of z can potentially map to any possible value in X, e.g., in Figure 1(a)
with ε = 0.5, z1 can generate either x1 or x2 with equal probability.
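This ambiguity can be checked numerically. The sketch below (our own illustration, anticipating the conditional entropy introduced formally in Section 3.1) builds the ε-parameterized table of Figure 1(a), verifies its marginals, and shows that H(x|z) vanishes only at the deterministic extremes ε ∈ {0, 1}:

```python
import numpy as np

def cond_entropy_x_given_z(table):
    """H(x|z) = -sum_{x,z} pi(x,z) * log pi(x|z) for a joint 2x2 table."""
    p_z = table.sum(axis=0)               # column marginals p(z)
    h = 0.0
    for i in range(table.shape[0]):
        for j in range(table.shape[1]):
            if table[i, j] > 0:
                h -= table[i, j] * np.log(table[i, j] / p_z[j])
    return h

for eps in [0.0, 0.25, 0.5, 0.75, 1.0]:
    # rows x1, x2; columns z1, z2 -- the table of Figure 1(a)
    table = np.array([[eps / 2, (1 - eps) / 2],
                      [(1 - eps) / 2, eps / 2]])
    ok = np.allclose(table.sum(axis=0), 0.5) and np.allclose(table.sum(axis=1), 0.5)
    print(f"eps={eps:4.2f}  marginals ok: {ok}  H(x|z)={cond_entropy_x_given_z(table):.3f}")
# H(x|z) = 0 only at eps in {0, 1}; every 0 < eps < 1 is a valid ALI solution
# with strictly positive reconstruction uncertainty.
```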
Many applications require meaningful mappings. Consider two scenarios:
• A1: In unsupervised learning, one desirable property is cycle-consistency [12], meaning that the
inferred z of a corresponding x can reconstruct x itself with high probability. In Figure 1 this
corresponds to either ε → 1 or ε → 0, as in Figures 1(b) and 1(c).
• A2: In supervised learning, the pre-specified correspondence between samples imposes restrictions
on the mapping between x and z, e.g., in image tagging, x are images and z are tags. In this case,
paired samples from the desired joint distribution are usually available, thus we can leverage this
supervised information to resolve the ambiguity between Figures 1(b) and 1(c).
From our simple example in Figure 1, we see that in order to alleviate the identifiability issues
associated with the solutions to the ALI objective, we have to impose constraints on the conditionals
q_φ(z|x) and p_θ(z|x). Furthermore, to fully mitigate the identifiability issues we require supervision,
i.e., paired samples from domains X and Z.

To deal with the problem of undesirable but matched joint distributions, below we propose to use
an information-theoretic measure to regularize ALI. This is done by controlling the "uncertainty"
between pairs of random variables, i.e., x and z, using conditional entropies.
3.1 Conditional Entropy
Conditional Entropy (CE) is an information-theoretic measure that quantifies the uncertainty of
random variable x when conditioned on z (or the other way around), under the joint distribution π(x, z):

    H^π(x|z) ≜ −E_{π(x,z)}[log π(x|z)],  and  H^π(z|x) ≜ −E_{π(x,z)}[log π(z|x)].   (4)

The uncertainty of x given z is linked with H^π(x|z); in fact, H^π(x|z) = 0 if and only if x is a
deterministic mapping of z. Intuitively, by controlling the uncertainty of q_φ(z|x) and p_θ(z|x), we
can restrict the solutions of the ALI objective to joint distributions whose mappings result in better
reconstruction ability. Therefore, we propose to use the CE in (4), denoted as L^π_CE(θ, φ) = H^π(x|z)
or H^π(z|x) (depending on the task; see below), as a regularization term in our framework, termed
ALI with Conditional Entropy (ALICE), and defined as the following minimax objective:

    min_{θ,φ} max_ω L_ALICE(θ, φ, ω) = L_ALI(θ, φ, ω) + L^π_CE(θ, φ).   (5)

L^π_CE(θ, φ) depends on the underlying distributions of the random variables, parameterized by
(θ, φ), as made clearer below. Ideally, we could select the desirable solutions of (5) by evaluating
their CE, once all the saddle points of the ALI objective have been identified. However, in practice,
L^π_CE(θ, φ) is intractable because we do not have access to the saddle points beforehand. Below, we
propose to approximate the CE in (5) during training, for both unsupervised and supervised tasks.
propose to approximate the CE in (5) during training for both unsupervised and supervised tasks.
Since x and z are symmetric in terms of CE according to (4), we use x to derive our theoretical
results. Similar arguments hold for z, as discussed in the Supplementary Material (SM).
3.2 Unsupervised Learning
In the absence of explicit probability distributions needed for computing the CE, we can bound
the CE using the criterion of cycle-consistency [12]. We denote the reconstruction of x as x̃, via the
generating procedure (cycle) x̃ ∼ p_θ(x̃|z), z ∼ q_φ(z|x), x ∼ q(x). We desire that p_θ(x̃|z) have
high likelihood for x̃ = x, for the x ∼ q(x) that begins the cycle x → z → x̃, and hence that x̃
be similar to the original x. Lemma 3 below shows that cycle-consistency is an upper bound of the
conditional entropy in (4).

Lemma 3 For joint distributions p_θ(x, z) or q_φ(x, z), we have

    H^{q_φ}(x|z) ≜ −E_{q_φ(x,z)}[log q_φ(x|z)] = −E_{q_φ(x,z)}[log p_θ(x|z)] − E_{q_φ(z)}[KL(q_φ(x|z) ‖ p_θ(x|z))]
                 ≤ −E_{q_φ(x,z)}[log p_θ(x|z)] ≜ L_Cycle(θ, φ),   (6)
where q_φ(z) = ∫ q_φ(x, z) dx. The proof is in the SM. Note that the latent z is implicitly involved
in L_Cycle(θ, φ) via E_{q_φ(x,z)}[·]. For the unsupervised case we want to leverage (6) to optimize the
following upper bound of (5):

    min_{θ,φ} max_ω L_ALI(θ, φ, ω) + L_Cycle(θ, φ).   (7)

Note that as ALI reaches its optimum, p_θ(x, z) and q_φ(x, z) reach the saddle point π(x, z); then
L_Cycle(θ, φ) → H^{q_φ}(x|z) → H^π(x|z) in (4) accordingly, and thus (7) effectively approaches (5)
(ALICE). Unlike L^π_CE(θ, φ) in (4), its upper bound, L_Cycle(θ, φ), can be easily approximated via
Monte Carlo simulation. Importantly, (7) can be readily added to ALI's objective without additional
changes to the original training procedure.
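As a concrete instance of how L_Cycle(θ, φ) is approximated by Monte Carlo, the sketch below assumes a fixed-variance Gaussian decoder p_θ(x|z) = N(x; μ_θ(z), σ²I), under which the negative log-likelihood reduces to an ℓ2 reconstruction loss plus a constant; the toy linear encoder/decoder and all names are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
W_enc = 0.1 * rng.standard_normal((4, 2))    # toy encoder weights (hypothetical)
W_dec = 0.1 * rng.standard_normal((2, 4))    # toy decoder mean weights
sigma2 = 0.1                                 # assumed fixed decoder variance

def l_cycle(x_batch):
    """Monte Carlo estimate of L_Cycle = -E_{q(x) q_phi(z|x)}[log p_theta(x|z)].

    With p_theta(x|z) = N(x; mu_theta(z), sigma2 * I), the negative
    log-likelihood is an l2 reconstruction loss plus a constant.
    """
    z = x_batch @ W_enc + 0.1 * rng.standard_normal((x_batch.shape[0], 2))
    mu = z @ W_dec
    d = x_batch.shape[1]
    nll = (((x_batch - mu) ** 2).sum(axis=1) / (2 * sigma2)
           + 0.5 * d * np.log(2 * np.pi * sigma2))
    return nll.mean()

x = rng.standard_normal((256, 4))            # pretend draws from q(x)
print("L_Cycle estimate:", l_cycle(x))
```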
The cycle-consistency property has been previously leveraged in CycleGAN [12], DiscoGAN [13]
and DualGAN [14]. However, in [12, 13, 14], cycle-consistency, L_Cycle(θ, φ), is implemented via ℓ_k
losses, for k = 1, 2, on real-valued data such as images. As a consequence of an ℓ2-based pixel-wise
loss, the generated samples tend to be blurry [8]. Recognizing this limitation, we further suggest
to enforce cycle-consistency (for better reconstruction) using fully adversarial training (for better
generation), as an alternative to L_Cycle(θ, φ) in (7). Specifically, to reconstruct x, we specify an
η-parameterized discriminator f_η(x, x̃) to distinguish between x and its reconstruction x̃:

    min_{θ,φ} max_η L^A_Cycle(θ, φ, η) = E_{x∼q(x)}[log σ(f_η(x, x))]
                                       + E_{x̃∼p_θ(x̃|z), z∼q_φ(z|x)}[log(1 − σ(f_η(x, x̃)))].   (8)

Finally, the fully adversarial training algorithm for unsupervised learning using the ALICE framework
is the result of replacing L_Cycle(θ, φ) with L^A_Cycle(θ, φ, η) in (7); thus, for fixed (θ, φ), we maximize
with respect to {ω, η}.
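A minimal sketch of the discriminator side of (8): the critic is rewarded for labelling the true pair (x, x) as real and the reconstructed pair (x, x̃) as fake. The linear critic `f_eta` and the fabricated reconstructions are our own stand-ins for the neural networks used in practice:

```python
import numpy as np

rng = np.random.default_rng(2)
W_f = 0.1 * rng.standard_normal((8, 1))      # toy linear critic weights

def f_eta(a, b):
    # stand-in for the discriminator f_eta(x, x~) on the concatenated pair
    return (np.concatenate([a, b], axis=1) @ W_f).ravel()

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def l_cycle_adv(x, x_tilde):
    """Monte Carlo estimate of (8): label (x, x) as real, (x, x~) as fake."""
    real = np.log(sigmoid(f_eta(x, x)) + 1e-12).mean()
    fake = np.log(1.0 - sigmoid(f_eta(x, x_tilde)) + 1e-12).mean()
    return real + fake

x = rng.standard_normal((64, 4))
x_tilde = x + 0.5 * rng.standard_normal(x.shape)   # pretend reconstructions
print("L^A_Cycle estimate:", l_cycle_adv(x, x_tilde))
```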
The use of paired samples {x, x̃} in (8) is critical. It encourages the generators to mimic the
reconstruction relationship implied in the first joint; otherwise, the model may reduce to the
basic GAN discussed in Section 2, and generate any realistic sample in X. The objective in (8)
enjoys many theoretical properties of GAN. In particular, Proposition 1 guarantees the existence of
the optimal generators and discriminator.

Proposition 1 The optimum of the generators and discriminator {θ*, φ*, η*} of the objective in (8) is
achieved, if and only if E_{q_φ*(z|x)}[p_θ*(x̃|z)] = δ(x − x̃).
The proof is provided in the SM. Together with Lemmas 2 and 3, we can also show that:

Corollary 1 When cycle-consistency is satisfied (the optimum in (8) is achieved), (i) a deterministic
mapping enforces E_{q_φ(z)}[KL(q_φ(x|z) ‖ p_θ(x|z))] = 0, which indicates the conditionals are
matched; (ii) on the contrary, matched conditionals enforce H^{q_φ}(x|z) = 0, which indicates the
corresponding mapping becomes deterministic.
3.3 Semi-supervised Learning
When the objective in (7) is optimized in an unsupervised way, the identifiability issues associated
with ALI are largely reduced due to the cycle-consistency-enforcing bound in Lemma 3. This
means that samples in the training data have been probabilistically "paired" with high certainty
by the conditionals p_θ(x|z) and q_φ(z|x), though perhaps not in the desired configuration. In
real-world applications, obtaining correctly paired data samples for the entire dataset is expensive or
even impossible. However, in some situations obtaining paired data for a very small subset of the
observations may be feasible. In such a case, we can leverage the small set of empirically paired
samples to further provide guidance on selecting the correct configuration. This suggests that ALICE
is suitable for semi-supervised classification.
For a paired sample drawn from the empirical distribution π̃(x, z), its desirable joint distribution is well
specified. Thus, one can directly approximate the CE as

    H^{π̃}(x|z) ≈ −E_{π̃(x,z)}[log p_θ(x|z)] ≜ L_Map(θ),   (9)

where the approximation (≈) arises from the fact that p_θ(x|z) is an approximation to π̃(x|z). For
the supervised case we leverage (9) to approximate (5) using the following minimax objective:

    min_{θ,φ} max_ω L_ALI(θ, φ, ω) + L_Map(θ).   (10)

Note that as ALI reaches its optimum, p_θ(x, z) and q_φ(x, z) reach the saddle point π(x, z); then
L_Map(θ) → H^{π̃}(x|z) ≈ H^π(x|z) in (4) accordingly, and thus (10) approaches (5) (ALICE).
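For intuition, the following sketch estimates L_Map(θ) on a small paired subset, again assuming a fixed-variance Gaussian decoder so that the negative log-likelihood becomes an ℓ2 loss; the toy weights and data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
W_dec = 0.1 * rng.standard_normal((2, 4))    # toy decoder mean weights
sigma2 = 0.1                                 # assumed fixed decoder variance

def l_map(x_pair, z_pair):
    """Estimate of L_Map(theta) = -E_{pi~(x,z)}[log p_theta(x|z)], evaluated
    only on the empirically paired subset; a Gaussian decoder gives an l2 loss."""
    mu = z_pair @ W_dec
    d = x_pair.shape[1]
    return ((((x_pair - mu) ** 2).sum(axis=1) / (2 * sigma2))
            + 0.5 * d * np.log(2 * np.pi * sigma2)).mean()

z_pair = rng.standard_normal((32, 2))        # a small supervised subset
x_pair = z_pair @ (0.1 * rng.standard_normal((2, 4)))
print("L_Map estimate:", l_map(x_pair, z_pair))
```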
We can employ standard losses for supervised learning objectives to approximate L_Map(θ) in (10),
such as the cross-entropy or ℓ_k loss in (9). Alternatively, to also improve generation ability, we propose
an adversarial learning scheme to directly match p_θ(x|z) to the paired empirical conditional π̃(x|z),
using conditional GAN [15] as an alternative to L_Map(θ) in (10). The η-parameterized discriminator
f_η is used to distinguish the true pair {x, z} from the artificially generated one {x̃, z} (conditioned
on z), using

    min_θ max_η L^A_Map(θ, η) = E_{x,z∼π̃(x,z)}[log σ(f_η(x, z))] + E_{x̃∼p_θ(x̃|z)}[log(1 − σ(f_η(x̃, z)))].   (11)

The fully adversarial training algorithm for supervised learning using ALICE in (11) is the result
of replacing L_Map(θ) with L^A_Map(θ, η) in (10); thus, for fixed (θ, φ), we maximize with respect to {ω, η}.
Proposition 2 The optimum of the generators and discriminator {θ*, η*} forms a saddle point of the
objective in (11), if and only if π̃(x|z) = p_θ*(x|z) and π̃(x, z) = p_θ*(x, z).

The proof is provided in the SM. Proposition 2 enforces that the generator will map to the correctly
paired sample in the other space. Together with the theoretical result for ALI in Lemma 2, we have

Corollary 2 When the optimum in (10) is achieved, π̃(x, z) = p_θ*(x, z) = q_φ*(x, z).
Corollary 2 indicates that ALI's drawbacks associated with identifiability issues can be alleviated in
the fully supervised learning scenario. Two conditional GANs can be used to boost the performance,
one for each direction of the mapping. When tying the weights of the discriminators of the two
conditional GANs, ALICE recovers Triangle GAN [16]. In practice, samples from the paired set
π̃(x, z) often contain enough information to readily approximate the sufficient statistics of the entire
dataset. In such a case, we may use the following objective for semi-supervised learning:

    min_{θ,φ} max_ω L_ALI(θ, φ, ω) + L_Cycle(θ, φ) + L_Map(θ).   (12)

The first two terms operate on the entire set, while the last term only applies to the paired subset. Note
that we can train (12) fully adversarially by replacing L_Cycle(θ, φ) and L_Map(θ) with L^A_Cycle(θ, φ, η)
and L^A_Map(θ, η) in (8) and (11), respectively. In (12) each of the three terms is treated with equal
weighting in the experiments if not specifically mentioned, but of course one may introduce additional
hyperparameters to adjust the relative emphasis of each term.
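Assembling (12) is then just a weighted sum of the three estimates, evaluated on the appropriate subsets; a trivial sketch with placeholder loss values and an optional re-weighting vector (our own addition):

```python
def alice_semi_sup_loss(l_ali, l_cycle, l_map, w=(1.0, 1.0, 1.0)):
    """Combine the three terms of (12); equal weights by default (as in the
    experiments), with optional hyperparameters for re-weighting."""
    return w[0] * l_ali + w[1] * l_cycle + w[2] * l_map

# e.g., values produced by the estimators sketched above
print(alice_semi_sup_loss(1.38, 0.52, 0.31))
print(alice_semi_sup_loss(1.38, 0.52, 0.31, w=(1.0, 0.5, 2.0)))
```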
4 Related Work: A Unified Perspective for Joint Distribution Matching
Connecting ALI and CycleGAN. We provide an information-theoretical interpretation for cycle-consistency, and show that it is equivalent to controlling conditional entropies and matching conditional distributions. When cycle-consistency is satisfied, Corollary 1 shows that the conditionals
are matched in CycleGAN. They also train additional discriminators to guarantee the matching of
the marginals for x and z using the original GAN objective in (2). This reveals the equivalence between
ALI and CycleGAN, as the latter can also guarantee the matching of the joint distributions p_θ(x, z) and
q_φ(x, z). In practice, CycleGAN is easier to train, as it decomposes the joint distribution matching
objective (as in ALI) into four subproblems. Our approach leverages a similar idea, and further
improves it with adversarially learned cycle-consistency, when high quality samples are of interest.
[Figure 2: Quantitative evaluation of generation (c) and reconstruction (d) results on toy data (a, b).
Panels: (a) true x, (b) true z, (c) Inception Score, (d) MSE.]
Stochastic Mapping vs. Deterministic Mapping. We propose to enforce cycle-consistency in
ALI for the case when two stochastic mappings are specified as in (1). When cycle-consistency is
achieved, Corollary 1 shows that the bounded conditional entropy vanishes, and thus the corresponding
mapping reduces to a deterministic one. In the literature, one deterministic mapping has been empirically
tested in ALI's framework [4], without explicitly specifying cycle-consistency. BiGAN [10] uses
two deterministic mappings. In theory, deterministic mappings guarantee cycle-consistency in ALI's
framework. However, to achieve this, the model has to fit a delta distribution (deterministic mapping)
to another distribution in the sense of KL divergence (see Lemma 3). Due to the asymmetry of
KL, the cost function will pay extremely low cost for generating fake-looking samples [17]. This
explains the underfitting reasoning in [4] behind the subpar reconstruction ability of ALI. Therefore,
in ALICE, we explicitly add a cycle-consistency regularization to accelerate and stabilize training.
Conditional GANs as Joint Distribution Matching. Conditional GAN and its variants [15, 18, 19,
20] have been widely used in supervised tasks. Our scheme to learn the conditional entropy borrows the
formulation of conditional GAN [15]. To the authors' knowledge, this is the first attempt to study the
conditional GAN formulation as a joint distribution matching problem. Moreover, we add the potential
to leverage the well-defined distribution implied by paired data, to resolve the ambiguity issues of
unsupervised ALI variants [4, 10, 12, 13, 14].
5 Experimental Results
The code to reproduce these experiments is at https://github.com/ChunyuanLI/ALICE
5.1 Effectiveness and Stability of Cycle-Consistency
To highlight the role of the CE regularization for unsupervised learning, we perform an experiment
on a toy dataset. q(x) is a 2D Gaussian Mixture Model (GMM) with 5 mixture components, and
p(z) is chosen as a standard Gaussian, N(0, I). Following [4], the covariance matrices and centroids
are chosen such that the distribution exhibits severely separated modes, which makes it a relatively
hard task despite its 2D nature. Following [21], to study stability, we run an exhaustive grid search
over a set of architectural choices and hyper-parameters, with 576 experiments for each method. We report
Mean Squared Error (MSE) and Inception Score (denoted ICP) [22] to quantitatively evaluate the
performance of the generative models. MSE is a proxy for reconstruction quality, while ICP reflects the
plausibility and variety of sample generation. Lower MSE and higher ICP indicate better results. See
the SM for the details of the grid search and the calculation of ICP.

We train on 2048 samples, and test on 1024 samples. The ground-truth test samples for x and z are
shown in Figures 2(a) and (b), respectively. We compare ALICE, ALI and Denoising Auto-Encoders
(DAEs) [23], and report the distribution of ICP and MSE values over all 576 experiments in Figures 2(c)
and (d), respectively. For reference, samples drawn from the "oracle" (ground-truth) GMM yield
ICP = 4.977 ± 0.016. ALICE yields an ICP larger than 4.5 in 77% of experiments, while ALI's ICP
varies wildly across different runs. These results demonstrate that ALICE is more consistent and
quantitatively reliable than ALI. The DAE yields the lowest MSE, as expected, but it also has
the weakest generation ability. The comparatively low MSE of ALICE demonstrates its acceptable
reconstruction ability compared to DAE, and a very significant improvement over ALI.
Figure 3 shows the qualitative results on the test set. Since ALI's results vary largely from trial to
trial, we present the one with the highest ICP. In the figure, we color samples from different mixture
components to highlight their correspondence between the ground truth, in Figure 2(a), and their
reconstructions, in Figure 3 (first row, columns 2, 4 and 6, for ALICE, ALI and DAE, respectively).
Importantly, though the reconstruction of ALI can recover the shape of the manifold in x (Gaussian
mixture), each individual reconstructed sample can be substantially far away from its "original"
mixture component (note the highly mixed coloring), hence the poor MSE. This occurs because the
adversarial training in ALI only requires that the generated samples look realistic, i.e., to be located
[Figure 3: Qualitative results on toy data. Two-column blocks represent the results of each method
((a) ALICE, (b) ALI, (c) DAEs), with left for z and right for x. For the first row, left is sampling of z,
and right is reconstruction of x. Colors indicate mixture component membership. The second row
shows reconstructions, x, from linearly interpolated samples in z.]
near true samples in X, but the mapping between observed and latent spaces (x → z and z → x) is
not specified. In the SM we also consider ALI with various combinations of stochastic/deterministic
mappings, and conclude that models with deterministic mappings tend to have lower reconstruction
ability but higher generation ability. In terms of the estimated latent space, z, in Figure 3 (first row,
columns 1, 3 and 5, for ALICE, ALI and DAE, respectively), we see that ALICE results in a better
latent representation, in the sense of mapping consistency (samples from different mixture components
remain clustered) and distribution consistency (samples approximate a Gaussian distribution). The
results for reconstruction of z and sampling of x are shown in the SM.
In Figure 3 (second row), we also investigate latent space interpolation between a pair of test set
examples. We use x_1 = [−2.2, −2.2] and x_9 = [2.2, 2.2], map them into z_1 and z_9, linearly
interpolate between z_1 and z_9 to get intermediate points z_2, . . . , z_8, and then map them back to the
original space as x_2, . . . , x_8. We only show the index of the samples for better visualization. Figure 3
shows that ALICE's interpolation is smooth and consistent with the ground-truth distributions.
Interpolation using ALI results in realistic samples (within mixture components), but the transition is
not order-wise consistent. DAEs provide smooth transitions, but the samples in the original space
look unrealistic, as some of them are located in low probability density regions of the true model.
We investigate the impact of different amounts of regularization on three datasets, including the toy
dataset, MNIST and CIFAR-10, in SM Section D. The results show that our regularizer can improve
image generation and reconstruction of ALI over a large range of weighting hyperparameter values.
5.2 Reconstruction and Cross-Domain Transformation on Real Datasets
Two image-to-image translation tasks are considered. (i) Car-to-Car [24]: each domain (x and z)
includes car images in 11 different angles, on which we seek to demonstrate the power of adversarially
learned reconstruction and weak supervision. (ii) Edge-to-Shoe [25]: x domain consists of shoe
photos and z domain consists of edge images, on which we report extensive quantitative comparisons.
Cycle-consistency is applied on both domains. The goal is to discover the cross-domain relationship
(i.e., cross-domain prediction), while maintaining reconstruction ability on each domain.
Adversarially learned reconstruction. To demonstrate the effectiveness of our fully adversarial
scheme in (8) (Joint A.) on real datasets, we use it in place of the ℓ2 losses in DiscoGAN [13]. In
practice, feature matching [22] is used to help the adversarial objective in (8) reach its optimum.
We also compare with a baseline scheme (Marginal A.) in [12], which adversarially discriminates
between x and its reconstruction x̃.

The results are shown in Figure 4(a). From top to bottom, each row shows ground-truth images,
DiscoGAN (with the Joint A., ℓ2-loss and Marginal A. schemes, respectively) and BiGAN [10].
Note that BiGAN is the best ALI variant in our grid search comparison. The proposed Joint A.
scheme can retain the crispness characteristic of adversarially trained models, while ℓ2 tends to be
blurry.

[Figure 4: Results on the Car-to-Car task. (a) Reconstruction. (b) Prediction: classification accuracy
(%) versus the number of paired angles, with curves for Joint A., ALICE (10% sup.), ALICE (1% sup.),
DiscoGAN and BiGAN.]
[Figure 5: SSIM and generated images on the Edge-to-Shoe dataset. (a) Cross-domain transformation;
(b) Reconstruction; (c) Generated edges. Curves compare L_Cycle+ℓ_A and L_Cycle+ℓ_2 against BiGAN.]

Marginal A. provides realistic car images, but not faithful reproductions of the inputs. This explains
the observations in [12] in terms of no performance gain. The BiGAN learns the shapes of cars, but
misses the textures. This is a sign of underfitting, which indicates that BiGAN is not easy to train.

Weak supervision. DiscoGAN and BiGAN are unsupervised methods, and exhibit very different
cross-domain pairing configurations during different training epochs, which is indicative of non-identifiability issues. We leverage very weak supervision to help with convergence and guide the
pairing. The results are shown in Figure 4(b). We run each method 5 times; the width of the
colored lines reflects the standard deviation. We start with 1% true pairs for supervision, which yields
significantly higher accuracy than DiscoGAN/BiGAN. We then provide 10% supervision in only 2
or 6 angles (of 11 total angles), which yields angle prediction accuracy comparable to full-angle
supervision in testing. This shows ALICE's ability in terms of zero-shot learning, i.e., predicting
unseen pairs. In the SM, we show that enforcing different weak supervision strategies affects the final
pairing configurations, i.e., we can leverage supervision to obtain the desirable joint distribution.
Quantitative comparison. To quantitatively assess the generated images, we use structural similarity
(SSIM) [26], which is an established image quality metric that correlates well with human visual
perception. SSIM values lie in [0, 1]; higher is better. The SSIM of ALICE on prediction and
reconstruction is shown in Figures 5(a) and (b) for the Edge-to-Shoe task. As a baseline, we use DiscoGAN
with ℓ2-based supervision (ℓ2-sup). BiGAN/ALI, highlighted with a circle, is outperformed by
ALICE in two aspects: (i) in the unpaired setting (0% supervision), cycle-consistency regularization
(L_Cycle) shows significant performance gains, particularly on reconstruction; (ii) when supervision
is leveraged (10%), SSIM is significantly increased on prediction. The adversarial-based supervision
(ℓ_A-sup) shows higher prediction performance than ℓ2-sup. ALICE achieves very similar performance with the
50% and full supervision setups, indicating its advantage in semi-supervised learning. Several
generated edge images (with 50% supervision) are shown in Figure 5(c); ℓ_A-sup tends to provide
more details than ℓ2-sup. Both methods generate correctly paired edges, and quality is higher than
BiGAN and DiscoGAN. In the SM, we also report MSE metrics, and results on the edge domain only,
which are consistent with the results presented here.
One-side cycle-consistency. When uncertainty in one domain is desirable, we consider one-side
cycle-consistency. This is demonstrated on the CelebA face dataset [27]. Each face is associated
with a 40-dimensional attribute vector. The results are in Figure 8 of the SM. In the first task, we
consider images x generated from a 128-dimensional Gaussian latent space z, and apply
L_Cycle on x. We compare ALICE and ALI on reconstruction in Figure 8(a)(b). ALICE shows more
faithful reproduction of the input subjects. In the second task, we consider z as the attribute space,
from which the images x are generated. The mapping from x to z is then attribute classification. We
only apply L_Cycle on the attribute domain, and L^A_Map on both domains. When 10% paired samples
are considered, the predicted attributes still reach 86% accuracy, which is comparable with the fully
supervised case. To test the diversity on x, we first predict the attributes of a true face image, and
then generate multiple images conditioned on the predicted attributes. Four examples are shown in
Figure 8(c).
6 Conclusion
We have studied the problem of non-identifiability in bidirectional adversarial networks. A unified
perspective that views various GAN models as joint distribution matching is provided to tackle this problem.
This insight enables us to propose ALICE (with both adversarial and non-adversarial solutions) to
reduce the ambiguity and control the conditionals in unsupervised and semi-supervised learning. For
future work, the proposed view can provide opportunities to leverage the advantages of each model,
to advance joint-distribution modeling.
Acknowledgements We acknowledge Shuyang Dai, Chenyang Tao and Zihang Dai for helpful
feedback/editing. This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.
References
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio.
Generative adversarial nets. In NIPS, 2014.
[2] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[3] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable
representation learning by information maximizing generative adversarial nets. In NIPS, 2016.
[4] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially
learned inference. ICLR, 2017.
[5] L. Mescheder, S. Nowozin, and A. Geiger. Adversarial variational bayes: Unifying variational autoencoders
and generative adversarial networks. ICML, 2017.
[6] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin. Variational autoencoder for deep
learning of images, labels and captions. In NIPS, 2016.
[7] Y. Pu, Z. Gan, R. Henao, C. Li, S. Han, and L. Carin. Vae learning via Stein variational gradient descent.
NIPS, 2017.
[8] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a
learned similarity metric. ICML, 2016.
[9] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint
arXiv:1511.05644, 2015.
[10] J. Donahue, K. Philipp, and T. Darrell. Adversarial feature learning. ICLR, 2017.
[11] Y. Pu, W. Wang, R. Henao, L. Chen, Z. Gan, C. Li, and L. Carin. Adversarial symmetric variational
autoencoder. NIPS, 2017.
[12] J. Zhu, T. Park, P. Isola, and A. Efros. Unpaired image-to-image translation using cycle-consistent
adversarial networks. ICCV, 2017.
[13] T. Kim, M. Cha, H. Kim, J. Lee, and J. Kim. Learning to discover cross-domain relations with generative
adversarial networks. ICML, 2017.
[14] Z. Yi, H. Zhang, and P. Tan. DualGAN: Unsupervised dual learning for image-to-image translation. ICCV,
2017.
[15] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[16] Z. Gan, L. Chen, W. Wang, Y. Pu, Y. Zhang, H. Liu, C. Li, and L. Carin. Triangle generative adversarial
networks. NIPS, 2017.
[17] M. Arjovsky and L. Bottou. Towards principled methods for training generative adversarial networks. In
ICLR, 2017.
[18] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image
synthesis. In ICML, 2016.
[19] P. Isola, J. Zhu, T. Zhou, and A. Efros. Image-to-image translation with conditional adversarial networks.
CVPR, 2017.
[20] C. Li, K. Xu, J. Zhu, and B. Zhang. Triple generative adversarial nets. NIPS, 2017.
[21] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. ICLR, 2017.
[22] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for
training GANs. In NIPS, 2016.
[23] P. Vincent, H. Larochelle, Y. Bengio, and P. Manzagol. Extracting and composing robust features with
denoising autoencoders. In ICML, 2008.
[24] S. Fidler, S. Dickinson, and R. Urtasun. 3D object detection and viewpoint estimation with a deformable
3D cuboid model. In NIPS, 2012.
[25] A. Yu and K. Grauman. Fine-grained visual comparisons with local learning. In CVPR, 2014.
[26] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to
structural similarity. IEEE Transactions on Image Processing, 2004.
[27] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, 2015.
Finite sample analysis of the GTD Policy Evaluation Algorithms in Markov Setting
Yue Wang*
School of Science
Beijing Jiaotong University
[email protected]
(* This work was done when the first author was visiting Microsoft Research Asia.)
Wei Chen
Microsoft Research
[email protected]
Zhi-Ming Ma
Academy of Mathematics and Systems Science
Chinese Academy of Sciences
[email protected]
Yuting Liu
School of Science
Beijing Jiaotong University
[email protected]
Tie-Yan Liu
Microsoft Research
[email protected]
Abstract
In reinforcement learning (RL), one of the key components is policy evaluation,
which aims to estimate the value function (i.e., expected long-term accumulated
reward) of a policy. With a good policy evaluation method, the RL algorithms
will estimate the value function more accurately and find a better policy. When
the state space is large or continuous, Gradient-based Temporal Difference (GTD)
policy evaluation algorithms with linear function approximation are widely used.
Considering that the collection of the evaluation data is both time and reward
consuming, a clear understanding of the finite sample performance of the policy
evaluation algorithms is very important to reinforcement learning. Under the
assumption that data are i.i.d. generated, previous work provided the finite sample
analysis of the GTD algorithms with constant step size by converting them into
convex-concave saddle point problems. However, it is well known that in RL
problems the data are generated from Markov processes rather than i.i.d. In this
paper, in the realistic Markov setting, we derive the finite sample bounds for the
general convex-concave saddle point problems, and hence for the GTD algorithms.
We have the following discussions based on our bounds. (1) With variants of step
size, GTD algorithms converge. (2) The convergence rate is determined by the
step size, with the mixing time of the Markov process as the coefficient. The faster
the Markov processes mix, the faster the convergence. (3) We explain that the
experience replay trick is effective by improving the mixing property of the Markov
process. To the best of our knowledge, our analysis is the first to provide finite
sample bounds for the GTD algorithms in Markov setting.
1 Introduction
Reinforcement Learning (RL) (Sutton and Barto [1998]) technologies are very powerful for learning how
to interact with environments, and have various important applications, such as robotics, computer
games and so on (Kober et al. [2013], Mnih et al. [2015], Silver et al. [2016], Bahdanau et al. [2016]).
In an RL problem, an agent observes the current state, takes an action following a policy at the current
state, receives a reward from the environment, while the environment transits to the next state in a Markov
fashion, and then repeats these steps. The goal of the RL algorithms is to find the optimal policy which
This work was done when the first author was visiting Microsoft Research Asia.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
leads to the maximum long-term reward. The value function of a fixed policy for a state is defined as
the expected long-term accumulated reward the agent would receive by following the fixed policy
starting from this state. Policy evaluation aims to accurately estimate the value of all states under a
given policy, which is a key component in RL (Sutton and Barto [1998], Dann et al. [2014]). A better
policy evaluation method will help us to better improve the current policy and find the optimal policy.
When the state space is large or continuous, it is inefficient to represent the value function over all the
states by a look-up table. A common approach is to extract features for states and use a parameterized
function over the feature space to approximate the value function. In applications, there are linear
approximations and non-linear approximations (e.g., neural networks) to the value function. In this
paper, we will focus on the linear approximation (Sutton et al. [2009a], Sutton et al. [2009b], Liu et al.
[2015]). Leveraging the localization technique in Bhatnagar et al. [2009], the results can be generalized
to non-linear cases with extra effort. We leave this as future work.
In policy evaluation with linear approximation, there has been substantial work on the temporal-difference
(TD) method, which uses the Bellman equation to update the value function during the learning
process (Sutton [1988], Tsitsiklis et al. [1997]). Recently, Sutton et al. [2009a] and Sutton et al. [2009b]
proposed Gradient-based Temporal Difference (GTD) algorithms which use gradient information
of the error from the Bellman equation to update the value function. It is shown that GTD algorithms
achieve the lower bound of the storage and computational complexity, making them powerful
for handling high-dimensional big data. Therefore, GTD algorithms are now widely used in policy
evaluation problems and in the policy evaluation step of practical RL algorithms (Bhatnagar et al.
[2009], Silver et al. [2014]).
However, we do not have sufficient theory to tell us about the finite sample performance of the GTD
algorithms. To be specific: will the evaluation process converge as the number of samples increases?
If yes, how many samples do we need to reach a target evaluation error? Will the step
size in GTD algorithms influence the finite sample error? How can we explain the effectiveness of
practical tricks, such as experience replay? Considering that the collection of the evaluation data is
very likely to be both time and reward consuming, a clear understanding of the finite sample
performance of the GTD algorithms is very important to the efficiency of policy evaluation and of the
entire RL algorithms.
Previous work (Liu et al. [2015]) converted the objective function of the GTD algorithms into a convex-concave saddle point problem and conducted the finite sample analysis for GTD with constant step size,
under the assumption that the data are i.i.d. generated. However, in the RL problem, the data are generated
by an agent who interacts with the environment step by step, and the state transits in a Markov fashion as
introduced previously. As a result, the data are generated from a Markov process and are not i.i.d. In
addition, that work did not study decreasing step sizes, which are also commonly used in many
gradient-based algorithms (Sutton et al. [2009a], Sutton et al. [2009b], Yu [2015]). Thus, the results
from previous work cannot provide satisfactory answers to the above questions about the finite sample
performance of the GTD algorithms.
In this paper, we perform the finite sample analysis for the GTD algorithms in the more realistic
Markov setting. To achieve this goal, first of all, as in Liu et al. [2015], we consider the stochastic
gradient descent algorithms for general convex-concave saddle point problems, which include
the GTD algorithms. The optimality of the solution is measured by the primal-dual gap (Liu et al.
[2015], Nemirovski et al. [2009]). The finite sample analysis for convex-concave optimization in
the Markov setting is challenging. On one hand, in the Markov setting, the non-i.i.d. sampled gradients
are no longer unbiased estimates of the true gradients. Thus, the proof technique for the convergence
of the convex-concave problem in the i.i.d. setting cannot be applied. On the other hand, although SGD
converges for convex optimization problems with Markov gradients, it is much more difficult to
obtain the same results for the more complex convex-concave optimization problem.
To overcome the challenge, we design a novel decomposition of the error function (i.e., Eqn (3.1)).
The intuition of the decomposition and the key techniques are as follows: (1) Although samples are not
i.i.d., for large enough τ, the sample at time t + τ is "nearly independent" of the sample at time t,
and its distribution is "very close" to the stationary distribution. (2) We split the random variables in
the objective related to the E operator and the variables related to the max operator into different terms, in
order to control them respectively. This is non-trivial, and we construct a sequence of auxiliary random
variables to do so. (3) All the constructions above need careful treatment of the measurability issues
in the Markov setting. (4) We construct new martingale difference sequences and apply Azuma's
inequality to derive the high-probability bound from the in-expectation bound.
By using the above techniques, we prove a novel finite sample bound for the convex-concave
saddle point problem. Considering that the GTD algorithms are specific convex-concave saddle point
optimization methods, we finally obtain the finite sample bounds for the GTD algorithms, in the
realistic Markov setting for RL. To the best of our knowledge, our analysis is the first to provide finite
sample bounds for the GTD algorithms in the Markov setting.
We have the following discussions based on our finite sample bounds.
1. GTD algorithms do converge, under a flexible condition on the step size α_t, i.e.,
Σ_{t=1}^T α_t → ∞ and (Σ_{t=1}^T α_t²) / (Σ_{t=1}^T α_t) → 0, as T → ∞. Most step sizes used in practice
satisfy this condition (see the numerical check after this list).
2. The convergence rate is

    O( [ (1 + τ(ε)) Σ_{t=1}^T α_t² + √( τ(ε) log(τ(ε)/δ) Σ_{t=1}^T α_t² ) ] / Σ_{t=1}^T α_t ),

where τ(ε) is the mixing time of the Markov process, and δ is a constant. Different step sizes will lead to
different convergence rates.
3. The experience replay trick is effective, since it can improve the mixing property of the
Markov process.
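As a quick numerical sanity check of the step-size condition in point 1 (our own illustration, not part of the analysis), the common choice α_t = 1/√t makes Σ_t α_t grow like 2√T while the ratio Σ_t α_t² / Σ_t α_t decays like (log T)/(2√T):

```python
import numpy as np

for T in [10**2, 10**4, 10**6]:
    t = np.arange(1, T + 1)
    alpha = 1.0 / np.sqrt(t)                 # a typical decreasing step size
    s1, s2 = alpha.sum(), (alpha ** 2).sum()
    print(f"T={T:>7}  sum alpha_t = {s1:9.1f}   ratio = {s2 / s1:.5f}")
# sum_t alpha_t grows like 2*sqrt(T) while the ratio decays like log(T)/(2*sqrt(T)),
# so alpha_t = 1/sqrt(t) satisfies both parts of the condition in point 1.
```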
Finally, we conduct simulation experiments to verify our theoretical findings. All the conclusions
from the analysis are consistent with our empirical observations.
2 Preliminaries
In this section, we briefly introduce the GTD algorithms and related works.
2.1 Gradient-based TD algorithms
Consider the reinforcement learning problem with a Markov decision process (MDP) (S, A, P, R, γ),
where S is the state space, A is the action space, P = {P^a_{s,s′} ; s, s′ ∈ S, a ∈ A} is the transition
matrix and P^a_{s,s′} is the transition probability from state s to state s′ after taking action a, R =
{R(s, a); s ∈ S, a ∈ A} is the reward function and R(s, a) is the reward received at state s when
taking action a, and 0 < γ < 1 is the discount factor. A policy function π : A × S → [0, 1]
indicates the probability of taking each action at each state. The value function for policy π is defined as

    V^π(s) ≜ E[ Σ_{t=0}^∞ γ^t R(s_t, a_t) | s_0 = s, π ].
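For intuition, V^π can be estimated by rolling out trajectories; below is a toy Monte Carlo sketch on a randomly drawn tabular MDP (all sizes, the number of trajectories and the truncation horizon are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n_s, n_a, gamma = 3, 2, 0.9                        # toy sizes (our choice)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))   # P[s, a] = P^a_{s, .}
R = rng.uniform(size=(n_s, n_a))                   # R(s, a)
pi = rng.dirichlet(np.ones(n_a), size=n_s)         # pi(a | s)

def mc_value(s0, n_traj=500, horizon=100):
    """Monte Carlo estimate of V^pi(s0) = E[sum_t gamma^t R(s_t, a_t) | s_0, pi]."""
    total = 0.0
    for _ in range(n_traj):
        s, ret, disc = s0, 0.0, 1.0
        for _ in range(horizon):                   # truncate the infinite sum
            a = rng.choice(n_a, p=pi[s])
            ret += disc * R[s, a]
            disc *= gamma
            s = rng.choice(n_s, p=P[s, a])
        total += ret
    return total / n_traj

print("V^pi(0) approx:", mc_value(0))
```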
In order to perform policy evaluation in a large state space, states are represented by a feature vector
φ(s) ∈ R^d, and a linear function v̂(s) = φ(s)^⊤ θ is used to approximate the value function. The
evaluation error is defined as ‖V(s) − v̂(s)‖_{s∼ρ}, which can be decomposed into an approximation
error and an estimation error. In this paper, we will focus on the estimation error with linear function
approximation.
As we know, the value function in RL satisfies the following Bellman equation: V^π(s) =
E_{π,P}[R(s_t, a_t) + γV^π(s_{t+1}) | s_t = s] ≜ T^π V^π(s), where T^π is called the Bellman operator for policy
π. Gradient-based TD (GTD) algorithms (including GTD and GTD2), proposed by Sutton et al.
[2009a] and Sutton et al. [2009b], update the approximated value function by minimizing objective
functions related to Bellman equation errors, i.e., the norm of the expected TD update (NEU) and the
mean-square projected Bellman error (MSPBE), respectively (Maei [2011], Liu et al. [2015]):

    GTD:  J_NEU(θ) = ‖Φ^⊤ K (T^π v̂ − v̂)‖²,   (2.1)
    GTD2: J_MSPBE(θ) = ‖v̂ − P T^π v̂‖²_ρ = ‖Φ^⊤ K (T^π v̂ − v̂)‖²_{C⁻¹},   (2.2)

where K is a diagonal matrix whose elements are ρ(s), C = E_ρ(φ_i φ_i^⊤), and ρ is a distribution over
the state space S.
Actually, the two objective functions in GTD and GTD2 can be unified as below:

    J(θ) = ‖b − Aθ‖²_{M⁻¹},   (2.3)
Algorithm 1 GTD Algorithms
1: for t = 1, . . . , T do
2:    Update parameters:
         y_{t+1} = P_{X_y}( y_t + α_t (b̂_t − Â_t x_t − M̂_t y_t) )
         x_{t+1} = P_{X_x}( x_t + α_t Â_t^⊤ y_t )
3: end for
Output: x̄_T = (Σ_{t=1}^T α_t x_t) / (Σ_{t=1}^T α_t),  ȳ_T = (Σ_{t=1}^T α_t y_t) / (Σ_{t=1}^T α_t)
where M = I in GTD and M = C in GTD2, A = E_ρ[ρ(s, a)φ(s)(φ(s) − γφ(s′))^⊤], b =
E_ρ[ρ(s, a)φ(s)r], and ρ(s, a) = π(a|s)/π_b(a|s) is the importance weighting factor. Since the underlying
distribution is unknown, we use the data D = {ξ_i = (s_i, a_i, r_i, s′_i)}_{i=1}^n to estimate the value function
by minimizing the empirical estimation error, i.e.,

    Ĵ(θ) = (1/T) Σ_{i=1}^T ‖b̂_i − Â_i θ‖²_{M̂⁻¹},

where Â_i = ρ(s_i, a_i)φ(s_i)(φ(s_i) − γφ(s′_i))^⊤, b̂_i = ρ(s_i, a_i)φ(s_i)r_i, Ĉ_i = φ(s_i)φ(s_i)^⊤.
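Below is a minimal numpy sketch of Algorithm 1 with M̂_t = Ĉ_t (the GTD2 case; M̂_t = I gives GTD), run on fabricated i.i.d. transitions purely to show the update mechanics; the projection radius, step size and random data are our own assumptions, and real evaluation data would be a Markov trajectory:

```python
import numpy as np

rng = np.random.default_rng(5)
d, T, radius, gamma = 4, 5000, 10.0, 0.9   # toy settings (our choice)

def project(v):
    n = np.linalg.norm(v)
    return v if n <= radius else v * (radius / n)

x, y = np.zeros(d), np.zeros(d)            # primal (theta) and auxiliary variables
x_sum, w_sum = np.zeros(d), 0.0

for t in range(1, T + 1):
    # one fabricated transition (phi, r, phi', rho), standing in for agent data
    phi, phi_next = rng.standard_normal(d), rng.standard_normal(d)
    r, rho = rng.uniform(), 1.0
    A_t = rho * np.outer(phi, phi - gamma * phi_next)   # \hat{A}_t
    b_t = rho * phi * r                                 # \hat{b}_t
    M_t = np.outer(phi, phi)                            # \hat{M}_t = \hat{C}_t (GTD2)
    alpha = 1.0 / np.sqrt(t)                            # decreasing step size
    y = project(y + alpha * (b_t - A_t @ x - M_t @ y))
    x = project(x + alpha * (A_t.T @ y))
    x_sum += alpha * x
    w_sum += alpha

print("weighted-average output x_bar_T:", x_sum / w_sum)
```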
Liu et al. [2015] derived that the GTD algorithms minimizing (2.3) are equivalent to stochastic
gradient algorithms solving the following convex-concave saddle point problem:

    min_x max_y L(x, y) = ⟨b − Ax, y⟩ − (1/2)‖y‖²_M,   (2.4)
with x as the parameter θ in the value function, and y as the auxiliary variable used in GTD algorithms.
Therefore, we consider the general convex-concave stochastic saddle point problem

    min_{x∈X_x} max_{y∈X_y} { φ(x, y) = E_ξ[Φ(x, y, ξ)] },   (2.5)

where X_x ⊂ R^n and X_y ⊂ R^m are bounded closed convex sets, ξ ∈ Ξ is a random variable with
distribution Π(ξ), and the expected function φ(x, y) is convex in x and concave in y. Denote
z = (x, y) ∈ X_x × X_y ≜ X, the gradient of φ(z) as g(z), and the gradient of Φ(z, ξ) as G(z, ξ).
In the stochastic gradient algorithm, the model is updated as z_{t+1} = P_X(z_t − α_t G(z_t, ξ_t)),
where P_X is the projection onto X and α_t is the step size. After T iterations, we get the averaged
model z̄_1^T = (Σ_{t=1}^T α_t z_t) / (Σ_{t=1}^T α_t). The error of the model z̄_1^T is measured by the
primal-dual gap error

    Err_φ(z̄_1^T) = max_{y∈X_y} φ(x̄_1^T, y) − min_{x∈X_x} φ(x, ȳ_1^T).   (2.6)
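To see the mechanics of (2.5)-(2.6), the following sketch runs the projected stochastic gradient method with weighted averaging on the toy bilinear problem φ(x, y) = xy over [−1, 1]², where the primal-dual gap has the closed form |x̄| + |ȳ|. The i.i.d. gradient noise here is only for illustration (the paper's point is precisely that RL data are not i.i.d.):

```python
import numpy as np

rng = np.random.default_rng(6)
x, y = 0.8, -0.6                  # phi(x, y) = x * y on X = [-1, 1]^2
x_sum = y_sum = w_sum = 0.0

for t in range(1, 20001):
    alpha = 0.5 / np.sqrt(t)
    g_x = y + 0.1 * rng.standard_normal()    # noisy partial gradient in x
    g_y = x + 0.1 * rng.standard_normal()    # noisy partial gradient in y
    x = np.clip(x - alpha * g_x, -1.0, 1.0)  # descent step + projection
    y = np.clip(y + alpha * g_y, -1.0, 1.0)  # ascent step + projection
    x_sum += alpha * x; y_sum += alpha * y; w_sum += alpha

x_bar, y_bar = x_sum / w_sum, y_sum / w_sum
# For phi(x, y) = x*y on the box, max_y phi(x_bar, y) = |x_bar| and
# min_x phi(x, y_bar) = -|y_bar|, so the gap (2.6) equals |x_bar| + |y_bar|.
print("primal-dual gap:", abs(x_bar) + abs(y_bar))
```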
Liu et al. [2015] proved that the estimation error of the GTD algorithms can be upper bounded by
their corresponding primal-dual gap error multiplied by a factor. Therefore, we first derive the
finite sample primal-dual gap error bound for the convex-concave saddle point problem, and
then extend it to the finite sample estimation error bound for the GTD algorithms.
Details of the GTD algorithms used to optimize (2.4) are given in Algorithm 1 (Liu et al. [2015]).
2.2 Related work
The TD algorithms for policy evaluation can be divided into two categories: gradient-based methods
and least-squares (LS) based methods (Dann et al. [2014]). Since LS-based algorithms need O(d²)
storage and computational complexity while GTD algorithms are of O(d) complexity, gradient-based
algorithms are more commonly used when the feature dimension is large. Thus, in this paper,
we focus on GTD algorithms.

Sutton et al. [2009a] proposed the gradient-based temporal difference (GTD) algorithm for the off-policy
policy evaluation problem with linear function approximation. Sutton et al. [2009b] proposed the GTD2
algorithm, which shows faster convergence in practice. Liu et al. [2015] connected GTD algorithms
to a convex-concave saddle point problem and derived a finite sample bound in both on-policy and
off-policy cases, for constant step size, in the i.i.d. setting.

In the realistic Markov setting, although finite sample bounds for LS-based algorithms such as LSTD(λ)
have been proved (Lazaric et al. [2012], Tagorti and Scherrer [2015]), to the best of our knowledge,
there is no previous finite sample analysis for GTD algorithms.
3 Main Theorems
In this section, we will present our main results. In Theorem 1, we present our finite sample bound
for the general convex-concave saddle point problem; in Theorem 2, we provide the finite sample
bounds for GTD algorithms in both on-policy and off-policy cases. Please refer to the supplementary
materials for the complete proofs.
Our results are derived based on the following common assumptions (Nemirovski [2004], Duchi et al.
[2012], Liu et al. [2015]). Please note that the bounded-data property in Assumption 4 in RL can
guarantee the Lipschitz and smoothness properties in Assumptions 5-6 (see Proposition 1).

Assumption 1 (Bounded parameter). There exists D > 0, such that ‖z − z′‖ ≤ D, for ∀z, z′ ∈ X.
Assumption 2 (Step size). The step size α_t is non-increasing.
Assumption 3 (Problem solvable). The matrices A and C in Problem (2.4) are non-singular.
Assumption 4 (Bounded data). Features are bounded by L, rewards are bounded by R_max and
importance weights are bounded by ρ_max.
Assumption 5 (Lipschitz). For Π-almost every ξ, the function Φ(x, y, ξ) is Lipschitz in both x and
y, with finite constants L_{1x}, L_{1y}, respectively. We denote L_1 ≜ √(2(L_{1x}² + L_{1y}²)).
Assumption 6 (Smooth). For Π-almost every ξ, the partial gradient function of Φ(x, y, ξ) is Lipschitz
in both x and y, with finite constants L_{2x}, L_{2y}, respectively. We denote L_2 ≜ √(2(L_{2x}² + L_{2y}²)).
For a Markov process, the mixing time characterizes how fast the process converges to its stationary
distribution. Following the notation of Duchi et al. [2012], we denote the conditional probability
distribution P(ξ_t ∈ A | F_s) as P^t_{[s]}(A) and the corresponding probability density as p^t_{[s]}. Similarly, we
denote the stationary distribution of the data-generating stochastic process as Π and its density as π.

Definition 1. The mixing time τ(P_{[t]}, ε) of the sampling distribution P conditioned on
the σ-field of the initial t samples F_t = σ(ξ_1, . . . , ξ_t) is defined as

    τ(P_{[t]}, ε) ≜ inf{ Δ : Δ ∈ N, ∫ |p^{t+Δ}_{[t]}(ξ) − π(ξ)| d(ξ) ≤ ε },

where p^{t+Δ}_{[t]} is the conditional probability density at time t + Δ, given F_t.
Assumption 7 (Mixing time). The mixing times of the stochastic process $\{\xi_t\}$ are uniform, i.e., there exists a uniform mixing time $\tau(P, \epsilon) < \infty$ such that, with probability 1, we have $\tau(P_{[s]}, \epsilon) \le \tau(P, \epsilon)$ for all $\epsilon > 0$ and $s \in \mathbb{N}$.
Please note that any time-homogeneous Markov chain with finite state space and any uniformly ergodic Markov chain with general state space satisfies the above assumption (Meyn and Tweedie [2012]). For simplicity and without confusion, we will denote $\tau(P, \epsilon)$ as $\tau(\epsilon)$.
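As an illustration of Definition 1 for a finite state space, the following sketch estimates $\tau(\epsilon)$ from a transition matrix by iterating the chain until the worst-case total-variation distance to the stationary distribution drops below $\epsilon$ (the integral in Definition 1 becomes a sum). This is hypothetical helper code, not from the paper.

import numpy as np

def mixing_time(P, eps, t_max=10000):
    # stationary distribution: left eigenvector of P for eigenvalue 1
    w, V = np.linalg.eig(P.T)
    pi = np.real(V[:, np.argmax(np.real(w))])
    pi = pi / pi.sum()
    Pt = np.eye(P.shape[0])
    for t in range(1, t_max + 1):
        Pt = Pt @ P                     # t-step transition probabilities
        # worst L1 distance to pi over all starting states
        dist = np.abs(Pt - pi[None, :]).sum(axis=1).max()
        if dist <= eps:
            return t
    return t_max                        # did not mix within the horizon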
3.1 Finite Sample Bound for the Convex-Concave Saddle Point Problem
Theorem 1. Consider the convex-concave problem in Eqn. (2.6). Suppose Assumptions 1, 2, 5, 6 hold. Then for the gradient algorithm optimizing the convex-concave saddle point problem in (2.5), for all $\delta > 0$ and all $\epsilon > 0$ such that $\tau(\epsilon) \le T/2$, with probability at least $1 - \delta$, we have
$$\mathrm{Err}_\phi(\bar{z}_1^T) \le \frac{1}{\sum_{t=1}^T \alpha_t}\Bigg[A + B \sum_{t=1}^T \alpha_t^2 + C\tau(\epsilon) \sum_{t=1}^T \alpha_t^2 + F\epsilon \sum_{t=1}^T \alpha_t + H\tau(\epsilon) + 8DL_1\Bigg(\sqrt{2\tau(\epsilon)\log\frac{\tau(\epsilon)}{\delta}\sum_{t=1}^T \alpha_t^2} + \tau(\epsilon)\alpha_0\Bigg)\Bigg],$$
where
$$A = D^2, \quad B = \frac{5}{2}L_1^2, \quad C = 6L_1^2 + 2L_1 L_2 D, \quad F = 2L_1 D, \quad H = 6L_1 D \alpha_0.$$
Proof Sketch of Theorem 1. By the definition of the error function in (2.6) and the property that $\phi(x, y)$ is convex in $x$ and concave in $y$, the error can be bounded as below:
$$\mathrm{Err}_\phi(\bar{z}_1^T) \le \max_{z} \frac{1}{\sum_{t=1}^T \alpha_t} \sum_{t=1}^T \big[\alpha_t (z_t - z)^\top g(z_t)\big].$$
Denote $\Lambda_t \triangleq g(z_t) - G(z_t, \xi_t)$, $\Lambda'_t \triangleq g(z_t) - G(z_t, \xi_{t+\tau})$, $\Lambda''_t \triangleq G(z_t, \xi_{t+\tau}) - G(z_t, \xi_t)$. Construct $\{v_t\}_{t \ge 1}$, measurable with respect to $\mathcal{F}_{t-1}$, by $v_{t+1} = P_X\big(v_t - \alpha_t (g(z_t) - G(z_t, \xi_t))\big)$. We have the following key decomposition of the right hand side of the above inequality; its derivation and explanation are placed in the supplementary materials. For all $\tau \ge 0$:
$$\max_z \sum_{t=1}^T \big[\alpha_t (z_t - z)^\top g(z_t)\big] = \max_z \Bigg[\sum_{t=1}^{T-\tau} \Big(\underbrace{\alpha_t (z_t - z)^\top G(z_t, \xi_t)}_{(a)} + \underbrace{\alpha_t (z_t - v_t)^\top \Lambda'_t}_{(b)} + \underbrace{\alpha_t (z_t - v_t)^\top \Lambda''_t}_{(c)} + \underbrace{\alpha_t (v_t - z)^\top \Lambda_t}_{(d)}\Big) + \underbrace{\sum_{t=T-\tau+1}^{T} \alpha_t (z_t - z)^\top g(z_t)}_{(e)}\Bigg]. \qquad (3.1)$$
For term (a), we split $G(z_t, \xi_t)$ into three terms by the definition of the $L_2$-norm and the iteration formula of $z_t$, and then bound its summation by $\sum_{t=1}^{T-\tau} \|\alpha_t G(z_t, \xi_t)\|^2 + \|z_t - z\|^2 - \|z_{t+1} - z\|^2$. In the summation, the last two terms telescope, so only their first and last terms remain. Swapping the max and $\sum$ operators and using the Lipschitz Assumption 5, the first term can be bounded. Term (c) includes the sum of $G(z_t, \xi_{t+\tau}) - G(z_t, \xi_t)$, which might be large in the Markov setting. We reformulate it into the sum of $G(z_{t-\tau}, \xi_t) - G(z_t, \xi_t)$ and use the smoothness Assumption 6 to bound it. Term (d) is similar to term (a) except that $g(z_t) - G(z_t, \xi_t)$ is the gradient used to update $v_t$; we can bound it similarly to term (a). Term (e) is a constant that does not change much with $T - \tau$, and we can bound it directly through an upper bound on each of its own terms. Finally, we combine the upper bounds of all terms and use the mixing time Assumption 7 to choose $\tau = \tau(\epsilon)$, obtaining the error bound in Theorem 1.
We decompose term (b) into a martingale part and an expectation part. By constructing a martingale difference sequence and using Azuma's inequality together with Assumption 7, we can bound term (b) and finally obtain the high probability error bound.
Remark: (1) As $T \to \infty$, the error bound approaches 0 at rate $O\big(\sum_{t=1}^T \alpha_t^2 \big/ \sum_{t=1}^T \alpha_t\big)$. (2) The mixing time $\tau(\epsilon)$ influences the convergence rate: if the Markov process has a better mixing property with smaller $\tau(\epsilon)$, the algorithm converges faster. (3) If the data are generated i.i.d. (so the mixing time $\tau(\epsilon) = 0$ for all $\epsilon$) and the step size is set to the constant $\frac{c}{L_1\sqrt{T}}$, our bound reduces to $\mathrm{Err}_\phi(\bar{z}_1^T) \le \frac{1}{\sum_{t=1}^T \alpha_t}\big[A + B \sum_{t=1}^T \alpha_t^2\big] = O(\frac{L_1}{\sqrt{T}})$, which is identical to previous work with constant step size in the i.i.d. setting (Liu et al. [2015], Nemirovski et al. [2009]). (4) The high probability bound is similar to the expectation bound in the following Lemma 1 except for the last term. This is because we consider the deviation of the data around its expectation to derive the high probability bound.
Lemma 1. Consider the convex-concave problem (2.6). Under the same assumptions as Theorem 1, for all $\epsilon > 0$ we have
$$\mathbb{E}_D[\mathrm{Err}_\phi(\bar{z}_1^T)] \le \frac{1}{\sum_{t=1}^T \alpha_t}\Bigg[A + B \sum_{t=1}^T \alpha_t^2 + C\tau(\epsilon) \sum_{t=1}^T \alpha_t^2 + F\epsilon \sum_{t=1}^T \alpha_t + H\tau(\epsilon)\Bigg].$$
Proof Sketch of Lemma 1. We start from the key decomposition (3.1) and bound each term in expectation this time. Each term can be bounded as before except for term (b). For term (b), since $(z_t - v_t)$ does not involve the max operator and is measurable with respect to $\mathcal{F}_{t-1}$, we can bound term (b) through the definition of the mixing time and finally obtain the expectation bound.
3.2 Finite Sample Bounds for GTD Algorithms
As a specific convex-concave saddle point problem, the error bounds in Theorems 1 and 2 also provide error bounds for GTD with the following specifications of the Lipschitz constants.
Proposition 1. Suppose Assumptions 1-4 hold. Then the objective function in the GTD algorithms is Lipschitz and smooth with the following coefficients:
$$L_1 \le \sqrt{2}\big(2D(1+\gamma)\rho_{\max} L^2 d + \rho_{\max} L R_{\max} + \sigma_M\big), \qquad L_2 \le \sqrt{2}\big(2(1+\gamma)\rho_{\max} L^2 d + \sigma_M\big),$$
where $\sigma_M$ is the largest singular value of $M$.
Theorem 2. Suppose Assumptions 1-4 hold. Then we have the following finite sample bounds for the error $\|V - \bar{v}_1^T\|_{\Xi}$ of the GTD algorithms. In the on-policy case, the bound in expectation is
$$O\Bigg(\frac{\sqrt{L^4 d^3 \sigma_M \rho_{\max}}\,(1+\tau(\epsilon))\,\rho_{\max}}{\xi_C}\, o_1(T)\Bigg)$$
and with probability at least $1 - \delta$ it is
$$O\Bigg(\frac{\sqrt{L^4 d^2 \sigma_M \rho_{\max}}}{\delta\,\xi_C}\bigg((1+\tau(\epsilon))L^2 d\, o_1(T) + \sqrt{\tau(\epsilon)\log\frac{\tau(\epsilon)}{\delta}}\, o_2(T)\bigg)\Bigg).$$
In the off-policy case, the bound in expectation is
$$O\Bigg(\frac{2\rho_C \sigma_M \rho_{\max}(1+\tau(\epsilon))}{\lambda(A^\top M^{-1}A)}\, o_1(T)\Bigg)$$
and with probability at least $1 - \delta$ it is
$$O\Bigg(\frac{2\rho_C \sigma_M \rho_{\max}}{\lambda(A^\top M^{-1}A)}\sqrt{L^4 d^2\bigg((1+\tau(\epsilon))\, o_1(T) + \sqrt{\tau(\epsilon)\log\frac{\tau(\epsilon)}{\delta}}\, o_2(T)\bigg)}\Bigg),$$
where $\xi_C$ and $\lambda(A^\top M^{-1}A)$ are the smallest eigenvalues of $C$ and $A^\top M^{-1}A$ respectively, $\rho_C$ is the largest singular value of $C$, $o_1(T) = \sum_{t=1}^T \alpha_t^2 \big/ \sum_{t=1}^T \alpha_t$, and $o_2(T) = \sqrt{\sum_{t=1}^T \alpha_t^2} \big/ \sum_{t=1}^T \alpha_t$.
We would like to make the following discussions of Theorem 2.
The GTD algorithms do converge in the realistic Markov setting. As in Theorem 2, the bound in expectation is $O\big(\sqrt{(1+\tau(\epsilon))\,o_1(T)}\big)$ and with probability $1 - \delta$ is $O\Big(\sqrt{(1+\tau(\epsilon))\,o_1(T) + \sqrt{\tau(\epsilon)\log(\frac{\tau(\epsilon)}{\delta})}\,o_2(T)}\Big)$. If the step size $\alpha_t$ makes $o_1(T) \to 0$ and $o_2(T) \to 0$ as $T \to \infty$, the GTD algorithms will converge. Additionally, in the high probability bound, if $\sum_{t=1}^T \alpha_t^2 > 1$ then $o_1(T)$ dominates the order; if $\sum_{t=1}^T \alpha_t^2 < 1$, $o_2(T)$ dominates.
The setup of the step size can be flexible. Our finite sample bounds for GTD algorithms converge to 0 if the step size satisfies $\sum_{t=1}^T \alpha_t \to \infty$ and $\sum_{t=1}^T \alpha_t^2 \big/ \sum_{t=1}^T \alpha_t \to 0$ as $T \to \infty$. This condition on the step size is much weaker than the constant step size in previous work (Liu et al. [2015]), and the commonly used step sizes $\alpha_t = O(\frac{1}{\sqrt{t}})$, $\alpha_t = O(\frac{1}{t})$, $\alpha_t = c = O(\frac{1}{\sqrt{T}})$ all satisfy the condition. To be specific, for $\alpha_t = O(\frac{1}{\sqrt{t}})$, the convergence rate is $O(\frac{\ln(T)}{\sqrt{T}})$; for $\alpha_t = O(\frac{1}{t})$, the convergence rate is $O(\frac{1}{\ln(T)})$; for the constant step size, the optimal setup is $\alpha_t = O(\frac{1}{\sqrt{T}})$ considering the trade-off between $o_1(T)$ and $o_2(T)$, and the convergence rate is $O(\frac{1}{\sqrt{T}})$.
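The claimed rates are easy to check numerically. The following sketch (with illustrative constants) evaluates $o_1(T)$ and $o_2(T)$ for the three schedules:

import numpy as np

def o1_o2(alphas):
    s1, s2 = alphas.sum(), (alphas ** 2).sum()
    return s2 / s1, np.sqrt(s2) / s1

for T in (10**3, 10**5):
    t = np.arange(1, T + 1)
    for name, a in [("1/sqrt(t)", 1 / np.sqrt(t)),
                    ("1/t", 1 / t),
                    ("const 1/sqrt(T)", np.full(T, 1 / np.sqrt(T)))]:
        print(T, name, o1_o2(a))
# One sees o1 decay like log(T)/sqrt(T), 1/log(T), and 1/sqrt(T)
# respectively, matching the rates claimed above.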
The mixing time matters. If the data are generated from a Markov process with smaller mixing time, the error bound will be smaller, and we need fewer samples to achieve a fixed estimation error. This finding can explain why the experience replay trick (Lin [1993]) works (see the sketch after this paragraph). With experience replay, we store the agent's experiences (or data samples) at each step, and randomly sample one from the pool of stored samples to update the policy function. By Theorems 1.19-1.23 of Durrett [2016], it can be proved that, for arbitrary $\epsilon > 0$, there exists $t_0$ such that for all $t > t_0$, $\max_i |\frac{N_t(i)}{t} - \pi(i)| \le \epsilon$. That is to say, when the size of the stored samples is larger than $t_0$, the mixing time of the new data process with experience replay is 0. Thus, the experience replay trick improves the mixing property of the data process, and hence improves the convergence rate.
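A minimal sketch of the experience replay trick described above; the class name and capacity are illustrative, not from the paper:

import random

class ReplayBuffer:
    def __init__(self, capacity=100000):
        self.data, self.capacity, self.pos = [], capacity, 0

    def add(self, transition):
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:                       # overwrite the oldest sample once full
            self.data[self.pos] = transition
            self.pos = (self.pos + 1) % self.capacity

    def sample(self):
        return random.choice(self.data)   # uniform over stored samples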
Other factors that influence the finite sample bound: (1) As the feature norm $L$ increases, the finite sample bound increases. This is consistent with the empirical finding by Dann et al. [2014] that the normalization of features is crucial for the estimation quality of GTD algorithms. (2) As the feature dimension $d$ increases, the bound increases. Intuitively, we need more samples for a linear approximation in a higher dimensional feature space.
4 Experiments
In this section, we report our simulation results to validate our theoretical findings. We consider the general convex-concave saddle problem
$$\min_x \max_y L(x, y) = \langle b - Ax, y \rangle + \frac{1}{2}\|x\|^2 - \frac{1}{2}\|y\|^2, \qquad (4.1)$$
where $A$ is an $n \times n$ matrix and $b$ is an $n \times 1$ vector; here we set $n = 10$. We conduct three experiments, setting the step size to $\alpha_t = c = 0.001$, $\alpha_t = O(\frac{1}{\sqrt{t}}) = \frac{0.015}{\sqrt{t}}$, and $\alpha_t = O(\frac{1}{t}) = \frac{0.03}{t}$ respectively. In each experiment we sample the data $\hat{A}, \hat{b}$ in three ways: from two Markov chains with different mixing times but the same stationary distribution, or i.i.d. from the stationary distribution directly. We sample $\hat{A}$ and $\hat{b}$ from a Markov chain using the MCMC Metropolis-Hastings algorithm. Specifically, noticing that the mixing time of a Markov chain is positively correlated with the second largest eigenvalue of its transition probability matrix (Levin et al. [2009]), we first construct two transition probability matrices with different second largest eigenvalues (both with 1001 states; the second largest eigenvalues are 0.634 and 0.31 respectively), and then use the Metropolis-Hastings algorithm to construct two Markov chains with the same stationary distribution.
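For intuition, the following sketch shows one way to realize a Metropolis-Hastings chain with a prescribed stationary distribution pi; the random-walk proposal is an assumption for illustration, and changing the proposal changes the second eigenvalue (hence the mixing time) without changing pi:

import numpy as np

def mh_chain(pi, T, seed=0):
    rng = np.random.default_rng(seed)
    n, x = len(pi), 0
    out = np.empty(T, dtype=int)
    for t in range(T):
        y = (x + rng.choice([-1, 1])) % n          # symmetric proposal
        if rng.random() < min(1.0, pi[y] / pi[x]):
            x = y                                  # Metropolis acceptance
        out[t] = x
    return out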
We run the gradient algorithm for the objective in (4.1) on the simulated data, without and with the experience replay trick. The primal-dual gap error curves are plotted in Figure 1.
We have the following observations. (1) The error curves converge in the Markov setting with all three setups of the step size. (2) The error curves for data generated from the process with the smaller mixing time converge faster; the error curve for i.i.d. generated data converges fastest. (3) The error curves for different step sizes converge at different rates. (4) With the experience replay trick, the error curves in the Markov settings converge faster than before. All these observations are consistent with our theoretical findings.
Figure 1: Experimental results. Panels: (a) $\alpha_t = c$; (b) $\alpha_t = O(\frac{1}{\sqrt{t}})$; (c) $\alpha_t = O(\frac{1}{t})$; (d) $\alpha_t = c$ with the experience replay trick; (e) $\alpha_t = O(\frac{1}{\sqrt{t}})$ with the trick; (f) $\alpha_t = O(\frac{1}{t})$ with the trick.
5 Conclusion
In this paper, working in the more realistic Markov setting, we proved finite sample bounds for convex-concave saddle problems, both in high probability and in expectation. We then obtained finite sample bounds for GTD algorithms in both the on-policy and off-policy cases, since the GTD algorithms are specific convex-concave saddle point problems. Our finite sample bounds provide important theoretical guarantees for the GTD algorithms, as well as insights to improve them, including how to set up the step size and the need to improve the mixing property of the data, e.g., via experience replay. In the future, we will study finite sample bounds for policy evaluation with nonlinear function approximation.
Acknowledgment
This work was supported by A Foundation for the Author of National Excellent Doctoral Dissertation
of RP China (FANEDD 201312) and National Center for Mathematics and Interdisciplinary Sciences
of CAS.
References
Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. An actor-critic algorithm for sequence prediction. arXiv preprint arXiv:1607.07086, 2016.
Shalabh Bhatnagar, Doina Precup, David Silver, Richard S Sutton, Hamid R Maei, and Csaba Szepesvári. Convergent temporal-difference learning with arbitrary smooth function approximation. In Advances in Neural Information Processing Systems, pages 1204-1212, 2009.
Christoph Dann, Gerhard Neumann, and Jan Peters. Policy evaluation with temporal differences: a survey and comparison. Journal of Machine Learning Research, 15(1):809-883, 2014.
John C Duchi, Alekh Agarwal, Mikael Johansson, and Michael I Jordan. Ergodic mirror descent. SIAM Journal on Optimization, 22(4):1549-1578, 2012.
Richard Durrett. Poisson processes. In Essentials of Stochastic Processes, pages 95-124. Springer, 2016.
Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274, 2013.
Alessandro Lazaric, Mohammad Ghavamzadeh, and Rémi Munos. Finite-sample analysis of least-squares policy iteration. Journal of Machine Learning Research, 13(1):3041-3074, 2012.
David Asher Levin, Yuval Peres, and Elizabeth Lee Wilmer. Markov chains and mixing times. American Mathematical Soc., 2009.
Long-Ji Lin. Reinforcement learning for robots using neural networks. PhD thesis, Fujitsu Laboratories Ltd, 1993.
Bo Liu, Ji Liu, Mohammad Ghavamzadeh, Sridhar Mahadevan, and Marek Petrik. Finite-sample analysis of proximal gradient TD algorithms. In UAI, pages 504-513. Citeseer, 2015.
Hamid Reza Maei. Gradient temporal-difference learning algorithms. PhD thesis, University of Alberta, 2011.
Sean P Meyn and Richard L Tweedie. Markov chains and stochastic stability. Springer Science & Business Media, 2012.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
Arkadi Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229-251, 2004.
Arkadi Nemirovski, Anatoli Juditsky, Guanghui Lan, and Alexander Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, pages 387-395, 2014.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484-489, 2016.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3(1):9-44, 1988.
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT Press, Cambridge, 1998.
Richard S Sutton, Hamid R Maei, and Csaba Szepesvári. A convergent O(n) temporal-difference algorithm for off-policy learning with linear function approximation. In Advances in Neural Information Processing Systems, pages 1609-1616, 2009a.
Richard S Sutton, Hamid Reza Maei, Doina Precup, Shalabh Bhatnagar, David Silver, Csaba Szepesvári, and Eric Wiewiora. Fast gradient-descent methods for temporal-difference learning with linear function approximation. In Proceedings of the 26th International Conference on Machine Learning, pages 993-1000, 2009b.
Manel Tagorti and Bruno Scherrer. On the rate of convergence and error bounds for LSTD(lambda). In Proceedings of the 32nd International Conference on Machine Learning, pages 1521-1529, 2015.
John N Tsitsiklis, Benjamin Van Roy, et al. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674-690, 1997.
H Yu. On convergence of emphatic temporal-difference learning. In Proceedings of The 28th Conference on Learning Theory, pages 1724-1751, 2015.
On the Complexity of Learning Neural Networks
Le Song
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Santosh Vempala
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
John Wilmes
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Bo Xie
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Abstract
The stunning empirical successes of neural networks currently lack rigorous theoretical explanation. What form would such an explanation take, in the face of
existing complexity-theoretic lower bounds? A first step might be to show that
data generated by neural networks with a single hidden layer, smooth activation
functions and benign input distributions can be learned efficiently. We demonstrate
here a comprehensive lower bound ruling out this possibility: for a wide class
of activation functions (including all currently used), and inputs drawn from any
logconcave distribution, there is a family of one-hidden-layer functions whose
output is a sum gate, that are hard to learn in a precise sense: any statistical query
algorithm (which includes all known variants of stochastic gradient descent with
any loss function) needs an exponential number of queries even using tolerance
inversely proportional to the input dimensionality. Moreover, this hard family of
functions is realizable with a small (sublinear in dimension) number of activation
units in the single hidden layer. The lower bound is also robust to small perturbations of the true weights. Systematic experiments illustrate a phase transition in the
training error as predicted by the analysis.
1 Introduction
It is well-known that neural networks (NNs) provide universal approximate representations [11, 6, 2]: under mild assumptions, any real-valued function can be approximated by a NN. This holds for a wide class of activation functions (hidden layer units) and even with only a single hidden layer (although there is a trade-off between depth and width [8, 20]). Typically, learning a NN is done by stochastic gradient descent applied to a loss function comparing the network's current output to the values of the given training data; for regression, typically the function is just the least-squares error. Variants of gradient descent include drop-out, regularization, perturbation, batch gradient descent, etc. In all cases, the training algorithm has the following form:
Repeat:
1. Compute a fixed function $F_W(\cdot)$ defined by the current network weights $W$ on a subset of training examples.
2. Use $F_W(\cdot)$ to update the current weights $W$.
The empirical success of this approach raises the question: what can NNs learn efficiently in theory? In spite of much effort, at the moment there are no satisfactory answers to this question, even with reasonable assumptions on the function being learned and the input distribution.
When learning involves some computationally intractable optimization problem, e.g., learning an intersection of halfspaces over the uniform distribution on the Boolean hypercube, then any training algorithm is unlikely to be efficient. This is the case even for improper learning (when the complexity of the hypothesis class being used to learn can be greater than that of the target class). Such lower bounds are unsatisfactory to the extent that they rely on discrete (or at least nonsmooth) functions and distributions. What if we assume that the function to be learned is generated by a NN with a single hidden layer of smooth activation units, and the input distribution is benign? Can such functions be learned efficiently by gradient descent?
Our main result is a lower bound, showing that a simple and natural family of functions generated by one-hidden-layer NNs using any known activation function (e.g., sigmoid, ReLU), with each input drawn from a logconcave input distribution (e.g., Gaussian, uniform in an interval), is hard to learn by a wide class of algorithms, including those of the general form above. Our finding implies that efficient NN training algorithms need to use stronger assumptions on the target function and input distribution, beyond Lipschitzness and smoothness, even when the true data is generated by a NN with a single hidden layer.
The idea of the lower bound has two parts. First, NN updates can be viewed as statistical queries to the input distribution. Second, there are many very different one-layer networks, and in order to learn the correct one, any algorithm that makes only statistical queries of not too small accuracy has to make an exponential number of queries. The lower bound uses the SQ framework of Kearns [13] as generalized by Feldman et al. [9].
1.1 Statistical query algorithms
A statistical query (SQ) algorithm is one that solves a computational problem over an input distribution; its interaction with the input is limited to querying the expected value of a bounded function up to a desired accuracy. More precisely, for any integer $t > 0$ and distribution $D$ over $X$, a VSTAT$(t)$ oracle takes as input a query function $f : X \to [0, 1]$ with expectation $p = \mathbb{E}_D(f(x))$ and returns a value $v$ such that
$$\Big|\mathbb{E}_{x \sim D}(f(x)) - v\Big| \le \max\Bigg\{\frac{1}{t},\; \sqrt{\frac{p(1-p)}{t}}\Bigg\}.$$
The bound on the RHS is the standard deviation of $t$ independent Bernoulli coins with the desired expectation, i.e., the error that even a random sample of size $t$ would yield. In this paper, we study SQ algorithms that access the input distribution only via the VSTAT$(t)$ oracle. The remaining computation is unrestricted and can use randomization (e.g., to determine which query to ask next).
In the case of an algorithm training a neural network via gradient descent, the relevant query functions
are derivatives of the loss function.
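A VSTAT$(t)$ query can be answered empirically from $t$ samples, since the sample mean of a $[0,1]$-valued function deviates from its expectation by roughly the oracle's tolerance with constant probability. The following sketch is an illustrative simulation, not an exact implementation of the oracle:

import numpy as np

def vstat_answer(f, sampler, t, seed=0):
    # f maps X -> [0, 1]; sampler(rng) draws one x ~ D
    rng = np.random.default_rng(seed)
    vals = np.array([f(sampler(rng)) for _ in range(t)])
    # within max(1/t, sqrt(p(1-p)/t)) of E[f] with constant probability
    return vals.mean()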
The statistical query framework was first introduced by Kearns for supervised learning problems [14] using the STAT$(\tau)$ oracle, which, for $\tau \in \mathbb{R}^+$, responds to a query function $f : X \to [0, 1]$ with a value $v$ such that $|\mathbb{E}_D(f) - v| \le \tau$. The STAT$(\sqrt{\tau})$ oracle can be simulated by the VSTAT$(O(1/\tau))$ oracle. The VSTAT oracle was introduced by [9], who extended these oracles to general problems over distributions.
1.2 Main result
We will describe a family $\mathcal{C}$ of functions $f : \mathbb{R}^n \to \mathbb{R}$ that can be computed exactly by a small NN, but cannot be efficiently learned by an SQ algorithm. While our result applies to all commonly used activation units, we will use sigmoids as a running example. Let $\sigma(z)$ be the sigmoid gate that goes to 0 for $z < 0$ and goes to 1 for $z > 0$. The sigmoid gates have sharpness parameter $s$, i.e., $\sigma(x) = \sigma_s(x) = (1 + e^{-sx})^{-1}$. Note that the parameter $s$ also bounds the Lipschitz constant of $\sigma(x)$.
A function $f : \mathbb{R}^n \to \mathbb{R}$ can be computed exactly by a single hidden layer NN with sigmoid gates precisely when it is of the form $f(x) = h(\sigma(g(x)))$, where $g : \mathbb{R}^n \to \mathbb{R}^m$ and $h : \mathbb{R}^m \to \mathbb{R}$ are affine, and $\sigma$ acts component-wise. Here, $m$ is the number of hidden units, or sigmoid gates, of the NN.
In the case of a learning problem for a class $\mathcal{C}$ of functions $f : X \to \mathbb{R}$, the input distribution to the algorithm is over labeled examples $(x, f^*(x))$, where $x \sim D$ for some underlying distribution $D$ on $X$, and $f^* \in \mathcal{C}$ is a fixed concept (function).
As mentioned in the introduction, we can view a typical NN learning algorithm as a statistical query (SQ) algorithm: in each iteration, the algorithm constructs a function based on its current weights (typically a gradient or subgradient), evaluates it on a batch of random examples from the input distribution, then uses the evaluations to update the weights of the NN. Then we have the following result.
Theorem 1.1. Let $n \in \mathbb{N}$, and let $\lambda, s \ge 1$. There exists an explicit family $\mathcal{C}$ of functions $f : \mathbb{R}^n \to [-1, 1]$, representable as a single hidden layer neural network with $O(s\sqrt{n}\log(\lambda s n))$ sigmoid units of sharpness $s$, a single output sum gate and a weight matrix with condition number $O(\mathrm{poly}(n, s, \lambda))$, and an integer $t = O(s^2 n)$ s.t. the following holds. Any (randomized) SQ algorithm $\mathcal{A}$ that uses $\lambda$-Lipschitz queries to VSTAT$(t)$ and weakly learns $\mathcal{C}$ with probability at least $1/2$, to within regression error $1/\sqrt{t}$ less than any constant function, over i.i.d. inputs from any logconcave distribution of unit variance on $\mathbb{R}$, requires $2^{\Omega(n)}/(\lambda s^2)$ queries.
The Lipschitz assumption on the statistical queries is satisfied by all commonly used algorithms for training neural networks (e.g., gradients of natural loss functions with regularizers). This assumption can be omitted if the output of the hard-to-learn family $\mathcal{C}$ is represented with bounded precision.
Informally, Theorem 1.1 shows that there exist simple realizable functions that are not efficiently learnable by NN training algorithms with polynomial batch sizes, assuming the algorithm allows for error as much as the standard deviation of random samples for each query. We remark that in practice, large batch sizes are seldom used for training NNs, not just for efficiency, but also since moderately noisy gradient estimates are believed to be useful for avoiding bad local minima. Even NN training algorithms with larger batch sizes will require $\Omega(t)$ samples to achieve lower error, whereas the NNs that represent functions in our class $\mathcal{C}$ have only $\tilde{O}(\sqrt{t})$ parameters.
Our lower bound extends to a broad family of activation units, including all the well-known ones (ReLU, sigmoid, softplus, etc.; see Section 3.1). In the case of sigmoid gates, the functions of $\mathcal{C}$ take the following form (cf. Figure 1.1). For a set $S \subseteq \{1, \dots, n\}$, we define $f_{m,S}(x_1, \dots, x_n) = \psi_m\big(\sum_{i \in S} x_i\big)$, where
$$\psi_m(x) = -(2m+1) + \sum_{k=-m}^{m} \Bigg[\sigma\Big(\frac{4k+1}{s} - x\Big) + \sigma\Big(x - \frac{4k-1}{s}\Big)\Bigg]. \qquad (1.1)$$
Then $\mathcal{C} = \{f_{m,S} : S \subseteq \{1, \dots, n\}\}$. We call the functions $f_{m,S}$, along with $\psi_m$, the $s$-wave functions. It is easy to see that they are smooth and bounded. Furthermore, the size of the NN representing this hard-to-learn family of functions is only $\tilde{O}(s\sqrt{n})$, assuming the query functions (e.g., gradients of the loss function) are $\mathrm{poly}(s, n)$-Lipschitz. We note that the lower bounds hold regardless of the architecture of the model, i.e., the NN used to learn.
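A minimal sketch of the $s$-wave functions of Eq. (1.1) in Python; the function and variable names are illustrative. Note that psi_m is literally a single hidden layer of $2(2m+1)$ sigmoid units feeding a sum gate:

import numpy as np

def sigmoid(z, s):
    # sigmoid gate with sharpness s
    return 1.0 / (1.0 + np.exp(-s * z))

def psi_m(x, m, s):
    # the "wave" function of Eq. (1.1)
    x = np.asarray(x, dtype=float)
    total = -(2 * m + 1) * np.ones_like(x)
    for k in range(-m, m + 1):
        total += (sigmoid((4 * k + 1) / s - x, s)
                  + sigmoid(x - (4 * k - 1) / s, s))
    return total

def f_mS(X, S, m, s):
    # X: (num_points, n) inputs; S: list of coordinate indices
    return psi_m(X[:, S].sum(axis=1), m, s)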
Our lower bounds are asymptotic, but we show empirically in Section 4 that they apply even at practical values of $n$ and $s$. We experimentally observe a threshold for the quantity $s\sqrt{n}$, above which stochastic gradient descent fails to train the NN to low error (that is, regression error below that of the best constant approximation), regardless of the choices of gates, architecture used for learning, learning rate, batch size, etc.
The condition number upper bound for $\mathcal{C}$ is significant in part because there do exist SQ algorithms for learning certain families of simple NNs with time complexity polynomial in the condition number of the weight matrix (the tensor factorization based algorithm of Janzamin et al. [12] can easily be seen to be SQ). Our results imply that this dependence cannot be substantially improved (see Section 1.3).
Figure 1.1: (a) The sigmoid function, the $L^1$-function $\Lambda(x) = \sigma(1/s + x) + \sigma(1/s - x) - 1$ constructed from sigmoid functions, and the nearly-periodic "wave" function $\psi_m(x) = \Lambda(x) + \Lambda(x - 4/s) + \Lambda(x + 4/s) + \cdots$ constructed from $\Lambda$. (b) The architecture of the NNs computing the wave functions.
Remark 1. The class of input distributions can be relaxed further. Rather than being a product distribution, it suffices if the distribution is in isotropic position and invariant under reflections across and permutations of coordinate axes. And instead of being logconcave, it suffices for the marginals to be unimodal with variance $\sigma$, density $O(1/\sigma)$ at the mode, and density $\Omega(1/\sigma)$ within a standard deviation of the mode.
Overall, our lower bounds suggest that even the combination of small network size, smooth, standard
activation functions, and benign input distributions is insufficient to make learning a NN easy, even
improperly via a very general family of algorithms. Instead, stronger structural assumptions on the
NN, such as a small condition number, and very strong structural properties on the input distribution,
are necessary to make learning tractable. It is our hope that these insights will guide the discovery of
provable efficiency guarantees.
1.3 Related Work
There is much work on complexity-theoretic hardness of learning neural networks [4, 7, 15]. These
results have shown the hardness of learning functions representable as small (depth 2) neural networks
over discrete input distributions. Since these input distributions bear little resemblance to the real-world data sets on which NNs have seen great recent empirical success, it is natural to wonder whether
more realistic distributional assumptions might make learning NNs tractable. Our results suggest
that benign input distributions are insufficient, even for functions realized as small networks with
standard, smooth activation units.
Recent independent work of Shamir [17] shows a smooth family of functions for which the gradient
of the squared loss function is not informative for training a NN over a Gaussian input distribution
(more generally, for distributions with rapidly decaying Fourier coefficients). In fact, for this setting
the paper shows an exponentially small bound on the gradient, relying on the fine structure of the
Gaussian distribution and of the smooth functions (see [16] for a follow-up with experiments and
further ideas). These smooth functions cannot be realized in small NNs using the most commonly
studied activation units (though a related non-smooth family of functions for which the bounds apply
can be realized by larger NNs using ReLU units). In contrast our bounds are (a) in the more general
SQ framework, and in particular apply regardless of the loss function, regularization scheme, or
specific variant of gradient descent (b) apply to functions actually realized as small NNs using any of
a wide family of activation units (c) apply to any logconcave input distribution and (d) are robust to
small perturbations of the input layer weights.
Also related is the tensor-based algorithm of Janzamin et al. [12] to learn a 1-layer network under
nondegeneracy assumptions on the weight matrix. The complexity is polynomial in the dimension,
size of network being learned and condition number of the weight matrix. Since their tensor
decomposition can also be implemented as a statistical query algorithm, our results give a lower
bound indicating that such a polynomial dependence on the dimension and condition number is
unavoidable.
Other algorithmic results for learning NNs apply in very restricted settings. For example, polynomial-time bounds are known for learning NNs with a single hidden ReLU layer over Gaussian inputs under
the assumption that the hidden units use disjoint sets of inputs [5], as well as for learning a single
ReLU [10] and for learning sparse polynomials via NNs [1].
1.4 Proof ideas
To prove Theorem 1.1, we wish to estimate the number of queries used by a statistical query algorithm
learning the family of s-wave functions, regardless of the strategy employed by the algorithm. To that
end, we estimate the statistical dimension of the family of s-wave functions. Statistical dimension
is a key concept in the study of SQ algorithms, and is known to characterize the query complexity
of supervised learning via SQ algorithms [3, 19, 9]. Briefly, a family C of distributions (e.g., over
labeled examples) has "statistical dimension $d$ with average correlation $\bar{\gamma}$" if every $(1/d)$-fraction of $\mathcal{C}$ has average correlation $\bar{\gamma}$; this condition implies that $\mathcal{C}$ cannot be learned with fewer than $O(d)$ queries to VSTAT$(O(1/\bar{\gamma}))$. See Section 2 for precise statements.
The SQ literature for supervised learning of Boolean functions is rich. However, lower bounds for
regression problems in the SQ framework have so far not appeared in the literature, and the existing
notions of statistical dimension are too weak for this setting. We state a new, strengthened notion
of statistical dimension for regression problems (Definition 2), and show that lower bounds for this
dimension transfer to query complexity bounds (Theorem 2.1). The essential difference from the
statistical dimension for learning is that we must additionally bound the average covariances of
indicator functions (or, rather, continuous analogues of indicators) on the outputs of functions in
C. The essential claim in our lower bounds is therefore in showing that a typical pair of (indicator
functions on outputs of) s-wave functions has small covariance.
In other words, to prove Theorem 1.1, it suffices to upper-bound the quantity
$$\mathbb{E}[(\kappa \circ f_{m,S})(\kappa \circ f_{m,T})] - \mathbb{E}[\kappa \circ f_{m,S}]\,\mathbb{E}[\kappa \circ f_{m,T}] \qquad (1.2)$$
for most pairs $f_{m,S}, f_{m,T}$ of $s$-wave functions, where $\kappa$ is some smoothed version of an indicator function. Write $h(t) = \kappa(\psi_m(t))$, so $\kappa(f_{m,S}(x_1, \dots, x_n)) = h\big(\sum_{i \in S} x_i\big)$. We have
$$\mathbb{E}_{(x_1,\dots,x_n) \sim D}\Big(h\Big(\sum_{i \in S} x_i\Big)\, h\Big(\sum_{i \in T} x_i\Big)\;\Big|\; \sum_{i \in S \cap T} x_i = z\Big) = \mathbb{E}_{x_i,\, i \in S \setminus T}\Big(h\Big(\sum_{i \in S \setminus T} x_i + z\Big)\Big)\; \mathbb{E}_{x_i,\, i \in T \setminus S}\Big(h\Big(\sum_{i \in T \setminus S} x_i + z\Big)\Big).$$
So to estimate Eq. (1.2), it suffices to show that the expectation of $h\big(\sum_{i \in S} x_i\big)$ doesn't change much when we condition on the value of $z = \sum_{i \in S \cap T} x_i$.
We now observe that if $\kappa$ is Lipschitz, and $\psi_m$ is "close to" a periodic function with period $\Delta > 0$, then $h$ is also "close to" a periodic function with period $\Delta > 0$ (see Section 3 for a precise statement). Under this near-periodicity assumption, we are now able to show for any logconcave distribution $D'$ on $\mathbb{R}$ of variance $\sigma > \Delta$, and any translation $z \in \mathbb{R}$, that
$$\mathbb{E}_{x \sim D'}\big(h(x + z) - h(x)\big) = O\Big(\frac{\Delta}{\sigma}\Big)\, \mathbb{E}_{x \sim D'}\big(|h(x)|\big).$$
In particular, conditioning on the value of $z = \sum_{i \in S \cap T} x_i$ has little effect on the value of $h\big(\sum_{i \in S} x_i\big)$. The combination of these observations gives the query complexity lower bound. Precise statements of some of the technical lemmas are given in Section 3; the complete proof appears in the full version of this paper [18].
2 Statistical dimension
We now give a precise definition of the statistical dimension with average correlation for regression problems, extending the concept introduced in [9].
Let $\mathcal{C}$ be a finite family of functions $f : X \to \mathbb{R}$ over some domain $X$, and let $D$ be a distribution over $X$. The average covariance and the average correlation of $\mathcal{C}$ with respect to $D$ are
$$\mathrm{Cov}_D(\mathcal{C}) = \frac{1}{|\mathcal{C}|^2} \sum_{f,g \in \mathcal{C}} \mathrm{Cov}_D(f, g) \qquad \text{and} \qquad \rho_D(\mathcal{C}) = \frac{1}{|\mathcal{C}|^2} \sum_{f,g \in \mathcal{C}} \rho_D(f, g),$$
where $\rho_D(f, g) = \mathrm{Cov}_D(f, g)/\sqrt{\mathrm{Var}(f)\,\mathrm{Var}(g)}$ when both $\mathrm{Var}(f)$ and $\mathrm{Var}(g)$ are nonzero, and $\rho_D(f, g) = 0$ otherwise.
For $y \in \mathbb{R}$ and $\epsilon > 0$, we define the $\epsilon$-soft indicator function $\kappa_y^{(\epsilon)} : \mathbb{R} \to \mathbb{R}$ as
$$\kappa_y^{(\epsilon)}(x) = \kappa_y(x) = \max\{0,\; 1/\epsilon - (1/\epsilon)^2 |x - y|\}.$$
So $\kappa_y$ is $(1/\epsilon)^2$-Lipschitz, is supported on $(y - \epsilon, y + \epsilon)$, and has norm $\|\kappa_y\|_1 = 1$.
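For illustration, the soft indicator and a Monte Carlo estimate of the average correlation $\rho_D(\mathcal{C})$ can be sketched as follows (assuming every function in the family has nonzero empirical variance; names are illustrative):

import numpy as np

def kappa(x, y, eps):
    # eps-soft indicator: (1/eps^2)-Lipschitz, supported on (y-eps, y+eps)
    return np.maximum(0.0, 1.0 / eps - np.abs(x - y) / eps**2)

def avg_correlation(fs, xs):
    # fs: list of callables (the family C); xs: array of samples from D
    F = np.stack([f(xs) for f in fs])         # |C| x num_samples values
    F = F - F.mean(axis=1, keepdims=True)     # center each function
    C = (F @ F.T) / F.shape[1]                # empirical covariances
    sd = np.sqrt(np.diag(C))
    rho = C / np.outer(sd, sd)                # pairwise correlations
    return rho.mean()                         # average over all pairs (f, g)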
Definition 2. Let $\bar{\gamma} > 0$, let $D$ be a probability distribution over some domain $X$, and let $\mathcal{C}$ be a family of functions $f : X \to [-1, 1]$ that are identically distributed as random variables over $D$. The statistical dimension of $\mathcal{C}$ relative to $D$ with average covariance $\bar{\gamma}$ and precision $\epsilon$, denoted by $\epsilon$-SDA$(\mathcal{C}, D, \bar{\gamma})$, is defined to be the largest integer $d$ such that the following holds: for every $y \in \mathbb{R}$ and every subset $\mathcal{C}' \subseteq \mathcal{C}$ of size $|\mathcal{C}'| > |\mathcal{C}|/d$, we have $\rho_D(\mathcal{C}') \le \bar{\gamma}$. Moreover, $\mathrm{Cov}_D(\mathcal{C}'_y) \le (\max\{\epsilon, \beta(y)\})^2\, \bar{\gamma}$, where $\mathcal{C}'_y = \{\kappa_y^{(\epsilon)} \circ f : f \in \mathcal{C}'\}$ and $\beta(y) = \mathbb{E}_D(\kappa_y^{(\epsilon)} \circ f)$ for some $f \in \mathcal{C}$.
Note that the parameter $\beta(y)$ is independent of the choice of $f \in \mathcal{C}$. The application of this notion of dimension is given by the following theorem.
Theorem 2.1. Let $D$ be a distribution on a domain $X$ and let $\mathcal{C}$ be a family of functions $f : X \to [-1, 1]$ identically distributed as random variables over $D$. Suppose there are $d \in \mathbb{R}$ and $\lambda \ge 1 \ge \bar{\gamma} > 0$ such that $\epsilon$-SDA$(\mathcal{C}, D, \bar{\gamma}) \ge d$, where $\epsilon \le \bar{\gamma}/(2\lambda)$. Let $\mathcal{A}$ be a randomized algorithm learning $\mathcal{C}$ over $D$ with probability greater than $1/2$ to regression error $2\sqrt{\bar{\gamma}}$ less than that of any constant function. If $\mathcal{A}$ only uses queries to VSTAT$(t)$ for some $t = O(1/\bar{\gamma})$, which are $\lambda$-Lipschitz at any fixed $x \in X$, then $\mathcal{A}$ uses $\Omega(d)$ queries.
A version of the theorem for Boolean functions is proved in [9]. For completeness, in the full version
of this paper [18] we include a proof of Theorem 2.1, following ideas in [19, Theorem 2].
As a consequence of Theorem 2.1, there is no need to consider an SQ algorithm's query strategy in
order to obtain lower bounds on its query complexity. Instead, the lower bounds follow directly from
properties of the concept class itself, in particular from bounds on average covariances of indicator
functions. Theorem 1.1 will therefore follow from Theorem 2.1 by analyzing the statistical dimension
of the s-wave functions.
3 Estimates of statistical dimension for one-layer functions
We now present the most general context in which we obtain SQ lower bounds.
A function $\varphi : \mathbb{R} \to \mathbb{R}$ is $(M, \nu, \Delta)$-quasiperiodic if there exists a function $\tilde{\varphi} : \mathbb{R} \to \mathbb{R}$ which is periodic with period $\Delta$ such that $|\varphi(x) - \tilde{\varphi}(x)| < \nu$ for all $x \in [-M, M]$. In particular, any periodic function with period $\Delta$ is $(M, \nu, \Delta)$-quasiperiodic for all $M, \nu > 0$.
Lemma 3.1. Let $n \in \mathbb{N}$ and let $\Delta > 0$. There exists $\bar{\gamma} = O(\Delta^2/n)$ such that for all $\epsilon > 0$, there exist $M = O(\sqrt{n}\log(n/(\epsilon\Delta)))$ and $\nu = \Omega(\epsilon^3 \Delta/\sqrt{n})$ and a family $\mathcal{C}_0$ of affine functions $g : \mathbb{R}^n \to \mathbb{R}$ of bounded operator norm with the following property. Suppose $\varphi : \mathbb{R} \to [-1, 1]$ is $(M, \nu, \Delta)$-quasiperiodic and $\mathrm{Var}_{x \sim U(0,\Delta)}(\varphi(x)) = \Omega(1)$. Let $D$ be a logconcave distribution with unit variance on $\mathbb{R}$. Then for $\mathcal{C} = \{\varphi \circ g : g \in \mathcal{C}_0\}$, we have $\epsilon$-SDA$(\mathcal{C}, D^n, \bar{\gamma}) \ge 2^{\Omega(n)}\epsilon^2$. Furthermore, the functions of $\mathcal{C}$ are identically distributed as random variables over $D^n$.
In other words, we have statistical dimension bounds (and hence query complexity bounds) for functions that are sufficiently close to periodic. However, the activation units of interest are generally monotonic increasing functions such as sigmoids and ReLUs that are quite far from periodic. Hence, in order to apply Lemma 3.1 in our context, we must show that the activation units of interest can be combined to make nearly periodic functions.
As an intermediate step, we analyze activation functions in $L^1(\mathbb{R})$, i.e., functions whose absolute value has bounded integral over the whole real line. These $L^1$-functions analyzed in our framework are themselves constructed as affine combinations of the usual activation functions. For example, for the sigmoid unit with sharpness $s$, we study the following $L^1$-function (cf. (1.1)):
$$\Lambda(x) = \sigma\Big(\frac{1}{s} + x\Big) + \sigma\Big(\frac{1}{s} - x\Big) - 1. \qquad (3.1)$$
We now describe the properties of the integrable functions $\Lambda$ that will be used in the proof.
Definition 3. For $\Lambda \in L^1(\mathbb{R})$, we say the essential radius of $\Lambda$ is the number $r \in \mathbb{R}$ such that $\int_{-r}^{r} |\Lambda| = (5/6)\|\Lambda\|_1$.
Definition 4. We say $\Lambda \in L^1(\mathbb{R})$ has the mean bound property if for all $x \in \mathbb{R}$ and $\epsilon > 0$, we have
$$\Lambda(x) = O\bigg(\frac{1}{\epsilon}\int_{x-\epsilon}^{x+\epsilon} |\Lambda(x')|\, dx'\bigg).$$
In particular, if $\Lambda$ is bounded, and monotonic nonincreasing (resp. nondecreasing) for sufficiently large positive (resp. negative) inputs, then $\Lambda$ satisfies Definition 4. Alternatively, it suffices for $\Lambda$ to have bounded first derivative.
To complete the proof of Theorem 1.1, we show that we can combine activation units $\Lambda$ satisfying the above properties into a function which is close to periodic, i.e., which satisfies the hypotheses of Lemma 3.1 above.
Lemma 3.2. Let $\Lambda \in L^1(\mathbb{R})$ have the mean bound property and let $r > 0$ be such that $\Lambda$ has essential radius at most $r$ and $\|\Lambda\|_1 = \Omega(r)$. Let $M, \nu > 0$. Then there is a pair of affine functions $h : \mathbb{R}^m \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}^m$ such that if $\varphi(x) = h(\Lambda(g(x)))$, where $\Lambda$ is applied component-wise, then $\varphi$ is $(M, \nu, 4r)$-quasiperiodic. Furthermore, $\varphi(x) \in [-1, 1]$ for all $x \in \mathbb{R}$, and $\mathrm{Var}_{x \sim U(0, 4r)}(\varphi(x)) = \Omega(1)$, and we may take $m = (1/r) \cdot O(\max\{m_1, M\})$, where $m_1$ satisfies
$$\int_{m_1}^{\infty} \big(|\Lambda(x)| + |\Lambda(-x)|\big)\, dx < 4\nu r.$$
We now sketch how Lemmas 3.1 and 3.2 imply Theorem 1.1 for sigmoid units.
Sketch of proof of Theorem 1.1. The sigmoid function $\sigma$ with sharpness $s$ is not even in $L^1(\mathbb{R})$, so it is unsuitable as the function $\Lambda$ of Lemma 3.2. Instead, we define $\Lambda$ to be an affine combination of $\sigma$ gates as in Eq. (3.1). Then $\Lambda$ satisfies the hypotheses of Lemma 3.2.
Let $\Delta = 4r$ and let $\bar{\gamma} = O(\Delta^2/n)$ be as given by the statement of Lemma 3.1. Let $\epsilon = \bar{\gamma}/(2\lambda)$, and let $M = O(\sqrt{n}\log(n/(\epsilon\Delta)))$ and $\nu = \Omega(\epsilon^3\Delta/\sqrt{n})$ be as given by the statement of Lemma 3.1. By Lemma 3.2, there are $m \in \mathbb{N}$ and functions $h : \mathbb{R}^m \to \mathbb{R}$ and $g : \mathbb{R} \to \mathbb{R}^m$ such that $\varphi = h \circ \Lambda \circ g$ is $(M, \nu, \Delta)$-quasiperiodic and satisfies the hypotheses of Lemma 3.1. Therefore, we have a family $\mathcal{C}_0$ of affine functions $f : \mathbb{R}^n \to \mathbb{R}$ such that $\mathcal{C} = \{\varphi \circ f : f \in \mathcal{C}_0\}$ satisfies $\epsilon$-SDA$(\mathcal{C}, D, \bar{\gamma}) \ge 2^{\Omega(n)}\epsilon^2$. Therefore, the functions in $\mathcal{C}$ satisfy the hypothesis of Theorem 2.1, giving the query complexity lower bound.
All details are given in the full version of the paper [18].
3.1 Different activation functions
Similar proofs give corresponding lower bounds for activation functions other than sigmoids. In every case, we reduce to gates satisfying the hypotheses of Lemma 3.2 by constructing an appropriate $L^1$-function $\Lambda$ as an affine combination of the activation functions.
For example, let $\sigma(x) = \sigma_s(x) = \max\{0, sx\}$ denote the ReLU unit with slope $s$. Then the affine combination
$$\Lambda(x) = \sigma(x + 1/s) - \sigma(x) + \sigma(-x + 1/s) - \sigma(-x) - 1 \qquad (3.2)$$
is in $L^1(\mathbb{R})$, and is zero for $|x| \ge 1/s$ (and hence has the mean bound property and essential radius $O(1/s)$). The proof of Theorem 1.1 therefore goes through almost identically, the slope-$s$ ReLU units replacing the $s$-sharp sigmoid units. In particular, there is a family of single hidden layer NNs using $O(s\sqrt{n}\log(\lambda s n))$ slope-$s$ ReLU units, which is not learned by any SQ algorithm using fewer than $2^{\Omega(n)}/(\lambda s^2)$ queries to VSTAT$(O(s^2 n))$, when inputs are drawn i.i.d. from a logconcave distribution.
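A small sketch of the tent function of Eq. (3.2); one can check numerically that it equals 1 at $x = 0$ and vanishes for $|x| \ge 1/s$:

import numpy as np

def relu(z, s):
    # ReLU unit with slope s
    return np.maximum(0.0, s * z)

def tent(x, s):
    # the L1 bump of Eq. (3.2), supported on [-1/s, 1/s]
    x = np.asarray(x, dtype=float)
    return (relu(x + 1 / s, s) - relu(x, s)
            + relu(-x + 1 / s, s) - relu(-x, s) - 1.0)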
Similarly, we can consider the $s$-sharp softplus function $\sigma(x) = \log(\exp(sx) + 1)$. Then Eq. (3.2) again gives an appropriate $L^1(\mathbb{R})$ function to which we can apply Lemma 3.2 and therefore follow the proof of Theorem 1.1. For softsign functions $\sigma(x) = x/(|x| + 1)$, we use the affine combination
$$\Lambda(x) = \sigma(x + 1) + \sigma(-x + 1).$$
Figure 4.1: Test error vs. sharpness times square root of dimension, for three input distributions: (a)/(d) normal distribution, (b)/(e) coordinate-wise $\exp(-|x_i|)$ distribution, (c)/(f) uniform $\ell_1$ ball. Each curve corresponds to a different input dimension $n$. The flat line corresponds to the best error achieved by a constant function.
In the case of softsign functions, this function $\Lambda$ converges much more slowly to zero as $|x| \to \infty$ compared to sigmoid units. Hence, in order to obtain an adequate quasiperiodic function as an affine combination of $\sigma$-units, a much larger number of $\sigma$-units is needed: the bound on the number $m$ of units in this case is polynomial in the Lipschitz parameter $\lambda$ of the query functions, and a larger polynomial in the input dimension $n$. The case of other commonly used activation functions, such as ELU (exponential linear) or LReLU (leaky ReLU), is similar to those discussed above.
4 Experiments
In the experiments, we show how the errors, $\mathbb{E}(f(x) - y)^2$, change with respect to the sharpness parameter $s$ and the input dimension $n$ for three input distributions: 1) multivariate normal distribution, 2) coordinate-wise independent $\exp(-|x_i|)$, and 3) uniform in the $\ell_1$ ball $\{x : \sum_i |x_i| \le n\}$.
For a given sharpness parameter $s \in \{0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1, 2\}$, input dimension $d \in \{50, 100, 200\}$ and input distribution, we generate the true function according to Eqn. (1.1). There are a total of 50,000 training data points and 1000 test data points. We then learn the true function with fully-connected neural networks with both ReLU and sigmoid activation functions. The best test error is reported among the following different hyper-parameters.
The number of hidden layers we used is 1, 2, and 4. The number of hidden units per layer varies from $4n$ to $8n$. The training is carried out using SGD with 0.9 momentum, and we enumerate learning rates from 0.1, 0.01 and 0.001 and batch sizes from 64, 128 and 256.
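A minimal sketch of this pipeline, reusing the f_mS sketch above for labels and assuming PyTorch is available for the learner; all hyperparameters here are illustrative, not the paper's exact configuration:

import numpy as np
import torch
import torch.nn as nn

n, s, m, S = 50, 1.0, 8, list(range(10))      # illustrative parameters
X = np.random.randn(50000, n).astype(np.float32)
y = f_mS(X, S, m, s).astype(np.float32)       # labels from the earlier sketch

net = nn.Sequential(nn.Linear(n, 4 * n), nn.ReLU(), nn.Linear(4 * n, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.01, momentum=0.9)
Xt, yt = torch.from_numpy(X), torch.from_numpy(y).unsqueeze(1)
for epoch in range(10):
    for i in range(0, len(Xt), 128):          # mini-batches of size 128
        xb, yb = Xt[i:i + 128], yt[i:i + 128]
        loss = ((net(xb) - yb) ** 2).mean()   # least-squares regression loss
        opt.zero_grad()
        loss.backward()
        opt.step()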
From Theorem 1.1, learning such functions should become difficult as $s\sqrt{n}$ increases over a threshold. In Figure 4.1, we illustrate this phenomenon. Each curve corresponds to a particular input dimension $n$ and each point in the curve corresponds to a particular smoothness parameter $s$. The x-axis is $s\sqrt{n}$ and the y-axis denotes the test error. We can see that at roughly $s\sqrt{n} = 5$, the problem becomes hard even empirically.
Acknowledgments
The authors are grateful to Vitaly Feldman for discussions about statistical query lower bounds, and
for suggestions that simplified the presentation of our results, and also to Adam Kalai for an inspiring
discussion. This research was supported in part by NSF grants CCF-1563838 and CCF-1717349.
References
[1] Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. Learning polynomials with neural networks. In International Conference on Machine Learning, pages 1908-1916, 2014.
[2] Andrew R Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930-945, 1993.
[3] Avrim Blum, Merrick Furst, Jeffrey Jackson, Michael Kearns, Yishay Mansour, and Steven Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. In STOC, pages 253-262, 1994.
[4] Avrim Blum and Ronald L. Rivest. Training a 3-node neural network is NP-complete. Neural Networks, 5(1):117-127, 1992.
[5] Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. CoRR, abs/1702.07966, 2017.
[6] George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems (MCSS), 2(4):303-314, 1989.
[7] Amit Daniely and Shai Shalev-Shwartz. Complexity theoretic limitations on learning DNF's. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, pages 815-830, 2016.
[8] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on Learning Theory, pages 907-940, 2016.
[9] Vitaly Feldman, Elena Grigorescu, Lev Reyzin, Santosh Vempala, and Ying Xiao. Statistical algorithms and a lower bound for planted clique. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing, pages 655-664. ACM, 2013.
[10] Surbhi Goel, Varun Kanade, Adam R. Klivans, and Justin Thaler. Reliably learning the ReLU in polynomial time. CoRR, abs/1611.10258, 2016.
[11] Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359-366, 1989.
[12] Majid Janzamin, Hanie Sedghi, and Anima Anandkumar. Generalization bounds for neural networks through tensor factorization. CoRR, abs/1506.08473, 2015.
[13] Michael Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 45(6):983-1006, 1998.
[14] Michael J. Kearns. Efficient noise-tolerant learning from statistical queries. In Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, May 16-18, 1993, San Diego, CA, USA, pages 392-401, 1993.
[15] Adam R. Klivans. Cryptographic hardness of learning. In Encyclopedia of Algorithms, pages 475-477. 2016.
[16] Shai Shalev-Shwartz, Ohad Shamir, and Shaked Shammah. Failures of deep learning. CoRR, abs/1703.07950, 2017.
[17] Ohad Shamir. Distribution-specific hardness of learning neural networks. CoRR, abs/1609.01037, 2016.
[18] Le Song, Santosh Vempala, John Wilmes, and Bo Xie. On the complexity of learning neural networks. arXiv preprint arXiv:1707.04615, 2017.
[19] B. Szörényi. Characterizing statistical query learning: simplified notions and proofs. In ALT, pages 186-200, 2009.
[20] Matus Telgarsky. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016.
6,783 | 7,136 | Hierarchical Implicit Models and
Likelihood-Free Variational Inference
Dustin Tran
Columbia University
Rajesh Ranganath
Princeton University
David M. Blei
Columbia University
Abstract
Implicit probabilistic models are a flexible class of models defined by a simulation process for data. They form the basis for theories which encompass our
understanding of the physical world. Despite this fundamental nature, the use
of implicit models remains limited due to challenges in specifying complex latent
structure in them, and in performing inferences in such models with large data sets.
In this paper, we first introduce hierarchical implicit models (HIMs). HIMs combine the idea of implicit densities with hierarchical Bayesian modeling, thereby
defining models via simulators of data with rich hidden structure. Next, we develop likelihood-free variational inference (LFVI), a scalable variational inference
algorithm for HIMs. Key to LFVI is specifying a variational family that is also implicit. This matches the model's flexibility and allows for accurate approximation
of the posterior. We demonstrate diverse applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian generative adversarial
network for discrete data; and a deep implicit model for text generation.
1 Introduction
Consider a model of coin tosses. With probabilistic models, one typically posits a latent probability,
and supposes each toss is a Bernoulli outcome given this probability [36, 15]. After observing a collection of coin tosses, Bayesian analysis lets us describe our inferences about the probability.
However, we know from the laws of physics that the outcome of a coin toss is fully determined by
its initial conditions (say, the impulse and angle of flip) [25, 9]. Therefore a coin toss's randomness
does not originate from a latent probability but in noisy initial parameters. This alternative model
incorporates the physical system, better capturing the generative process. Furthermore the model is
implicit, also known as a simulator: we can sample data from its generative process, but we may not
have access to calculate its density [11, 20].
Coin tosses are simple, but they serve as a building block for complex implicit models. These
models, which capture the laws and theories of real-world physical systems, pervade fields such as
population genetics [40], statistical physics [1], and ecology [3]; they underlie structural equation
models in economics and causality [39]; and they connect deeply to generative adversarial networks
(GANs) [18], which use neural networks to specify a flexible implicit density [35].
Unfortunately, implicit models, including GANs, have seen limited success outside specific domains.
There are two reasons. First, it is unknown how to design implicit models for more general applications, exposing rich latent structure such as priors, hierarchies, and sequences. Second, existing
methods for inferring latent structure in implicit models do not sufficiently scale to high-dimensional
or large data sets. In this paper, we design a new class of implicit models and we develop a new
algorithm for accurate and scalable inference.
For modeling, § 2 describes hierarchical implicit models, a class of Bayesian hierarchical models
which only assume a process that generates samples. This class encompasses both simulators in the
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
classical literature and those employed in GANs. For example, we specify a Bayesian GAN, where
we place a prior on its parameters. The Bayesian perspective allows GANs to quantify uncertainty
and improve data efficiency. We can also apply them to discrete data; this setting is not possible
with traditional estimation algorithms for GANs [27].
For inference, § 3 develops likelihood-free variational inference (LFVI), which combines variational
inference with density ratio estimation [49, 35]. Variational inference posits a family of distributions
over latent variables and then optimizes to find the member closest to the posterior [23]. Traditional
approaches require a likelihood-based model and use crude approximations, employing a simple
approximating family for fast computation. LFVI expands variational inference to implicit models
and enables accurate variational approximations with implicit variational families: LFVI does not
require the variational density to be tractable. Further, unlike previous Bayesian methods for implicit
models, LFVI scales to millions of data points with stochastic optimization.
This work has diverse applications. First, we analyze a classical problem from the approximate
Bayesian computation (ABC) literature, where the model simulates an ecological system [3]. We
analyze 100,000 time series which is not possible with traditional methods. Second, we analyze a
Bayesian GAN, which is a GAN with a prior over its weights. Bayesian GANs outperform corresponding Bayesian neural networks with known likelihoods on several classification tasks. Third,
we show how injecting noise into hidden units of recurrent neural networks corresponds to a deep
implicit model for flexible sequence generation.
Related Work. This paper connects closely to three lines of work. The first is Bayesian inference
for implicit models, known in the statistics literature as approximate Bayesian computation (ABC)
[3, 33]. ABC steps around the intractable likelihood by applying summary statistics to measure
the closeness of simulated samples to real observations. While successful in many domains, ABC
has shortcomings. First, the results generated by ABC depend heavily on the chosen summary
statistics and the closeness measure. Second, as the dimensionality grows, closeness becomes harder
to achieve. This is the classic curse of dimensionality.
The second is GANs [18]. GANs have seen much interest since their conception, providing an efficient method for estimation in neural network-based simulators. Larsen et al. [28] propose a hybrid
of variational methods and GANs for improved reconstruction. Chen et al. [7] apply information
penalties to disentangle factors of variation. Donahue et al. [12], Dumoulin et al. [13] propose to
match on an augmented space, simultaneously training the model and an inverse mapping from
data to noise. Unlike any of the above, we develop models with explicit priors on latent variables,
hierarchies, and sequences, and we generalize GANs to perform Bayesian inference.
The final thread is variational inference with expressive approximations [45, 48, 52]. The idea of
casting the design of variational families as a modeling problem was proposed in Ranganath et al.
[44]. Further advances have analyzed variational programs [42], a family of approximations which only requires a process returning samples, and which has seen further interest [30]. Implicit-like
variational approximations have also appeared in auto-encoder frameworks [32, 34] and message
passing [24]. We build on variational programs for inferring implicit models.
2 Hierarchical Implicit Models
Hierarchical models play an important role in sharing statistical strength across examples [16]. For
a broad class of hierarchical Bayesian models, the joint distribution of the hidden and observed
variables is
p(x, z, β) = p(β) ∏_{n=1}^{N} p(x_n | z_n, β) p(z_n | β),   (1)

where x_n is an observation, z_n are latent variables associated to that observation (local variables), and β are latent variables shared across observations (global variables). See Fig. 1 (left).
With hierarchical models, local variables can be used for clustering in mixture models, mixed memberships in topic models [4], and factors in probabilistic matrix factorization [47]. Global variables
can be used to pool information across data points for hierarchical regression [16], topic models [4],
and Bayesian nonparametrics [50].
Hierarchical models typically use a tractable likelihood p(x_n | z_n, β). But many likelihoods of
interest, such as simulator-based models [20] and generative adversarial networks [18], admit high fidelity to the true data generating process and do not admit a tractable likelihood. To overcome this limitation, we develop hierarchical implicit models (HIMs).

Figure 1: (left) Hierarchical model, with local variables z and global variables β. (right) Hierarchical implicit model. It is a hierarchical model where x is a deterministic function (denoted with a square) of noise (denoted with a triangle).
Hierarchical implicit models have the same joint factorization as Eq. 1 but only assume that one can sample from the likelihood. Rather than define p(x_n | z_n, β) explicitly, HIMs define a function g that takes in random noise ε_n ~ s(·) and outputs x_n given z_n and β,

x_n = g(ε_n | z_n, β),   ε_n ~ s(·).

The induced, implicit likelihood of x_n ∈ A given z_n and β is

P(x_n ∈ A | z_n, β) = ∫_{{g(ε_n | z_n, β) = x_n ∈ A}} s(ε_n) dε_n.

This integral is typically intractable. It is difficult to find the set to integrate over, and the integration itself may be expensive for arbitrary noise distributions s(·) and functions g.
Fig. 1 (right) displays the graphical model for HIMs. Noise terms (ε_n) are denoted by triangles; deterministic computations (x_n) are denoted by squares. We illustrate two examples.

Example: Physical Simulators. Given initial conditions, simulators describe a stochastic process that generates data. For example, in population ecology, the Lotka-Volterra model simulates predator-prey populations over time via a stochastic differential equation [55]. For prey and predator populations x_1, x_2 ∈ ℝ_+ respectively, one process is

dx_1/dt = β_1 x_1 − β_2 x_1 x_2 + ε_1,   ε_1 ~ Normal(0, 10),
dx_2/dt = −β_2 x_2 + β_3 x_1 x_2 + ε_2,   ε_2 ~ Normal(0, 10),

where Gaussian noises ε_1, ε_2 are added at each full time step. The simulator runs for T time steps given initial population sizes for x_1, x_2. Lognormal priors are placed over β. The Lotka-Volterra model is grounded by theory but features an intractable likelihood. We study it in § 4.
Example: Bayesian Generative Adversarial Network. Generative adversarial networks (GANs)
define an implicit model and a method for parameter estimation [18]. They are known to perform
well on image generation [41]. Formally, the implicit model for a GAN is
x_n = g(ε_n; θ),   ε_n ~ s(·),   (2)

where g is a neural network with parameters θ, and s is a standard normal or uniform. The neural network g is typically not invertible; this makes the likelihood intractable.

The parameters θ in GANs are estimated by divergence minimization between the generated and real data. We make GANs amenable to Bayesian analysis by placing a prior on the parameters θ. We call this a Bayesian GAN. Bayesian GANs enable modeling of parameter uncertainty and are inspired by Bayesian neural networks, which have been shown to improve the uncertainty and data efficiency of standard neural networks [31, 37]. We study Bayesian GANs in § 4; Appendix B provides example implementations in the Edward probabilistic programming language [53].
3 Likelihood-Free Variational Inference
We described hierarchical implicit models, a rich class of latent variable models with local and
global structure alongside an implicit density. Given data, we aim to calculate the model's posterior p(z, β | x) = p(x, z, β)/p(x). This is difficult as the normalizing constant p(x) is typically intractable. With implicit models, the lack of a likelihood function introduces an additional source of intractability.
We use variational inference [23]. It posits an approximating family q ∈ Q and optimizes to find the member closest to p(z, β | x). There are many choices of variational objectives that measure closeness [42, 29, 10]. To choose an objective, we lay out desiderata for a variational inference algorithm for implicit models:

1. Scalability. Machine learning hinges on stochastic optimization to scale to massive data [6]. The variational objective should admit unbiased subsampling with the standard technique,

∑_{n=1}^{N} f(x_n) ≈ (N/M) ∑_{m=1}^{M} f(x_m),

where some computation f(·) over the full data is approximated with a mini-batch of data {x_m}.

2. Implicit Local Approximations. Implicit models specify flexible densities; this induces very complex posterior distributions. Thus we would like a rich approximating family for the per-data point approximations q(z_n | x_n, β). This means the variational objective should only require that one can sample z_n ~ q(z_n | x_n, β) and not evaluate its density.
One variational objective meeting our desiderata is based on the classical minimization of the
Kullback-Leibler (KL) divergence. (Surprisingly, Appendix C details how the KL is the only possible objective among a broad class.)
3.1 KL Variational Objective

Classical variational inference minimizes the KL divergence from the variational approximation q to the posterior. This is equivalent to maximizing the evidence lower bound (ELBO),

L = E_{q(β, z | x)}[log p(x, z, β) − log q(β, z | x)].   (3)

Let q factorize in the same way as the posterior,

q(β, z | x) = q(β) ∏_{n=1}^{N} q(z_n | x_n, β),

where q(z_n | x_n, β) is an intractable density and, since the data x is constant during inference, we drop conditioning for the global q(β). Substituting p and q's factorizations yields

L = E_{q(β)}[log p(β) − log q(β)] + ∑_{n=1}^{N} E_{q(β) q(z_n | x_n, β)}[log p(x_n, z_n | β) − log q(z_n | x_n, β)].

This objective presents difficulties: the local densities p(x_n, z_n | β) and q(z_n | x_n, β) are both intractable. To solve this, we consider ratio estimation.
3.2 Ratio Estimation for the KL Objective

Let q(x_n) be the empirical distribution on the observations x and consider using it in a "variational joint" q(x_n, z_n | β) = q(x_n) q(z_n | x_n, β). Now subtract the log empirical log q(x_n) from the ELBO above. The ELBO reduces to

L ∝ E_{q(β)}[log p(β) − log q(β)] + ∑_{n=1}^{N} E_{q(β) q(z_n | x_n, β)}[ log ( p(x_n, z_n | β) / q(x_n, z_n | β) ) ].   (4)

(Here the proportionality symbol means equality up to additive constants.) Thus the ELBO is a function of the ratio of two intractable densities. If we can form an estimator of this ratio, we can proceed with optimizing the ELBO.

We apply techniques for ratio estimation [49]. It is a key idea in GANs [35, 54], and similar ideas have rearisen in statistics and physics [19, 8]. In particular, we use class probability estimation: given a sample from p(·) or q(·) we aim to estimate the probability that it belongs to p(·). We model this using σ(r(·; θ)), where r is a parameterized function (e.g., a neural network) taking sample inputs and outputting a real value; σ is the logistic function outputting the probability.

We train r(·; θ) by minimizing a loss function known as a proper scoring rule [17]. For example, in experiments we use the log loss,

D_log = E_{p(x_n, z_n | β)}[−log σ(r(x_n, z_n, β; θ))] + E_{q(x_n, z_n | β)}[−log(1 − σ(r(x_n, z_n, β; θ)))].   (5)

The loss is zero if σ(r(·; θ)) returns 1 when a sample is from p(·) and 0 when a sample is from q(·). (We also experiment with the hinge loss; see § 4.) If r(·; θ) is sufficiently expressive, minimizing the loss returns the optimal function [35],

r*(x_n, z_n, β) = log p(x_n, z_n | β) − log q(x_n, z_n | β).

As we minimize Eq. 5, we use r(·; θ) as a proxy to the log ratio in Eq. 4. Note r estimates the log ratio; it's of direct interest and more numerically stable than the ratio.

The gradient of D_log with respect to θ is

E_{p(x_n, z_n | β)}[∇_θ log σ(r(x_n, z_n, β; θ))] + E_{q(x_n, z_n | β)}[∇_θ log(1 − σ(r(x_n, z_n, β; θ)))].   (6)

We compute unbiased gradients with Monte Carlo.
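As a concrete sketch of this step, the log loss in Eq. 5 can be written with standard binary cross-entropy primitives. This is our own PyTorch illustration (the paper's reference implementation is in Edward); `r_net` is a hypothetical network mapping a flattened (x_n, z_n, β) triple to the scalar estimate r.

```python
import torch
import torch.nn.functional as F

def ratio_log_loss(r_net, samples_p, samples_q):
    """D_log of Eq. 5: drives r toward log p(x,z|beta) - log q(x,z|beta).

    samples_p / samples_q: batches of flattened (x_n, z_n, beta) triples
    drawn from the model joint p and the variational joint q, respectively.
    """
    r_p = r_net(samples_p)  # want sigma(r_p) -> 1 on model samples
    r_q = r_net(samples_q)  # want sigma(r_q) -> 0 on variational samples
    # -log sigma(r_p) and -log(1 - sigma(r_q)), in numerically stable form.
    loss_p = F.binary_cross_entropy_with_logits(r_p, torch.ones_like(r_p))
    loss_q = F.binary_cross_entropy_with_logits(r_q, torch.zeros_like(r_q))
    return loss_p + loss_q
```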
3.3 Stochastic Gradients of the KL Objective

To optimize the ELBO, we use the ratio estimator,

L = E_{q(β | x)}[log p(β) − log q(β)] + ∑_{n=1}^{N} E_{q(β | x) q(z_n | x_n, β)}[r(x_n, z_n, β)].   (7)

All terms are now tractable. We can calculate gradients to optimize the variational family q. Below we assume the priors p(β), p(z_n | β) are differentiable. (We discuss methods to handle discrete global variables in the next section.)

We focus on reparameterizable variational approximations [26, 46]. They enable sampling via a differentiable transformation T of random noise, ε ~ s(·). Due to Eq. 7, we require the global approximation q(β; λ) to admit a tractable density. With reparameterization, its sample is

β = T_global(ε_global; λ),   ε_global ~ s(·),

for a choice of transformation T_global(·; λ) and noise s(·). For example, setting s(·) = N(0, 1) and T_global(ε_global) = μ + σ ε_global induces a normal distribution N(μ, σ²).

Similarly for the local variables z_n, we specify

z_n = T_local(ε_n, x_n, β; φ),   ε_n ~ s(·).

Unlike the global approximation, the local variational density q(z_n | x_n; φ) need not be tractable: the ratio estimator relaxes this requirement. It lets us leverage implicit models not only for data but also for approximate posteriors. In practice, we also amortize computation with inference networks, sharing parameters φ across the per-data point approximate posteriors.

The gradient with respect to global parameters λ under this approximating family is

∇_λ L = E_{s(ε_global)}[∇_λ (log p(β) − log q(β))] + ∑_{n=1}^{N} E_{s(ε_global) s(ε_n)}[∇_λ r(x_n, z_n, β)].   (8)

The gradient backpropagates through the local sampling z_n = T_local(ε_n, x_n, β; φ) and the global reparameterization β = T_global(ε_global; λ). We compute unbiased gradients with Monte Carlo. The gradient with respect to local parameters φ is

∇_φ L = ∑_{n=1}^{N} E_{q(β) s(ε_n)}[∇_φ r(x_n, z_n, β)],   (9)

where the gradient backpropagates through T_local.¹
Algorithm 1: Likelihood-free variational inference (LFVI)
Input: Model x_n, z_n ~ p(· | β), p(β)
       Variational approximation z_n ~ q(· | x_n, β; φ), q(β | x; λ)
       Ratio estimator r(·; θ)
Output: Variational parameters λ, φ
Initialize θ, λ, φ randomly.
while not converged do
    Compute unbiased estimates of ∇_θ D (Eq. 6), ∇_λ L (Eq. 8), ∇_φ L (Eq. 9).
    Update θ, λ, φ using stochastic gradient descent.
end
3.4 Algorithm

Algorithm 1 outlines the procedure. We call it likelihood-free variational inference (LFVI). LFVI is black box: it applies to models in which one can simulate data and local variables, and calculate densities for the global variables. LFVI first updates θ to improve the ratio estimator r. Then it uses r to update the parameters {λ, φ} of the variational approximation q. We optimize r and q simultaneously. The algorithm is available in Edward [53].
LFVI is scalable: we can unbiasedly estimate the gradient over the full data set with mini-batches
[22]. The algorithm can also handle models of either continuous or discrete data. The requirement
for differentiable global variables and reparameterizable global approximations can be relaxed using
score function gradients [43].
Point estimates of the global parameters β suffice for many applications [18, 46]. Algorithm 1 can find point estimates: place a point mass approximation q on the parameters β. This simplifies gradients and corresponds to variational EM.
4 Experiments
We developed new models and inference. For experiments, we study three applications: a large-scale physical simulator for predator-prey populations in ecology; a Bayesian GAN for supervised classification; and a deep implicit model for symbol generation. In addition, Appendix F provides practical advice on how to address the stability of the ratio estimator by analyzing a toy experiment.
We initialize parameters from a standard normal and apply gradient descent with ADAM.
Lotka-Volterra Predator-Prey Simulator. We analyze the Lotka-Volterra simulator of § 2 and follow the same setup and hyperparameters of Papamakarios and Murray [38]. Its global variables β govern rates of change in a simulation of predator-prey populations. To infer them, we posit a
mean-field normal approximation (reparameterized to be on the same support) and run Algorithm 1
with both a log loss and hinge loss for the ratio estimation problem; Appendix D details the hinge
loss. We compare to rejection ABC, MCMC-ABC, and SMC-ABC [33]. MCMC-ABC uses a spherical Gaussian proposal; SMC-ABC is manually tuned with a decaying epsilon schedule; all ABC
methods are tuned to use the best performing hyperparameters such as the tolerance error.
Fig. 2 displays results on two data sets. In the top figures and bottom left, we analyze data consisting
of a simulation for T = 30 time steps, with recorded values of the populations every 0.2 time
units. The bottom left figure calculates the negative log probability of the true parameters over the
tolerance error for ABC methods; smaller tolerances result in more accuracy but slower runtime.
The top figures compare the marginal posteriors for two parameters using the smallest tolerance for
the ABC methods. Rejection ABC, MCMC-ABC, and SMC-ABC all contain the true parameters in
their 95% credible interval but are less confident than our methods. Further, they required 100,000
simulations from the model, with an acceptance rate of 0.004% and 2.990% for rejection ABC and
MCMC-ABC respectively.
¹The ratio r indirectly depends on φ but its gradient w.r.t. φ disappears. This is derived via the score function identity and the product rule (see, e.g., Ranganath et al. [43, Appendix]).
Figure 2: (top) Marginal posterior for first two parameters. (bot. left) ABC methods over tolerance error. (bot. right) Marginal posterior for first parameter on a large-scale data set. Our inference achieves more accurate results and scales to massive data.
                          Test Set Error
Model + Inference      Crabs   Pima    Covertype   MNIST
Bayesian GAN + VI      0.03    0.232   0.0136      0.154
Bayesian GAN + MAP     0.12    0.240   0.0283      0.185
Bayesian NN + VI       0.02    0.242   0.0311      0.164
Bayesian NN + MAP      0.05    0.320   0.0623      0.188

Table 1: Classification accuracy of Bayesian GAN and Bayesian neural networks across small to medium-size data sets. Bayesian GANs achieve comparable or better performance to their Bayesian neural net counterpart.
The bottom right figure analyzes data consisting of 100,000 time series, each of the same size as the
single time series analyzed in the previous figures. This size is not possible with traditional methods.
Further, we see that with our methods, the posterior concentrates near the truth. We also experienced
little difference in accuracy between using the log loss or the hinge loss for ratio estimation.
Bayesian Generative Adversarial Networks. We analyze Bayesian GANs, described in § 2. Mimicking a use case of Bayesian neural networks [5, 21], we apply Bayesian GANs for classification on small to medium-size data. The GAN defines a conditional p(y_n | x_n), taking a feature x_n ∈ ℝ^D as input and generating a label y_n ∈ {1, . . . , K}, via the process

y_n = g(x_n, ε_n | θ),   ε_n ~ N(0, 1),   (10)

where g(· | θ) is a 2-layer multilayer perceptron with ReLU activations and batch normalization, and is parameterized by weights and biases θ. We place normal priors, θ ~ N(0, 1).
We analyze two choices of the variational model: one with a mean-field normal approximation for q(θ | x), and another with a point mass approximation (equivalent to maximum a posteriori). We
compare to a Bayesian neural network, which uses the same generative process as Eq.10 but draws
from a Categorical distribution rather than feeding noise into the neural net. We fit it separately
using a mean-field normal approximation and maximum a posteriori. Table 1 shows that Bayesian GANs generally outperform their Bayesian neural net counterpart.
Note that Bayesian GANs can analyze discrete data such as in generating a classification label.
Training traditional GANs on discrete data is an open challenge [27]. In Appendix E, we compare Bayesian GANs with point estimation to typical GANs. Bayesian GANs are also able to leverage parameter uncertainty for analyzing these small to medium-size data sets.
Figure 3: (a) A deep implicit model for sequences. It is a recurrent neural network (RNN) with noise injected into each hidden state. The hidden state is now an implicit latent variable. The same occurs for generating outputs. (b) Generated symbols from the implicit model. Good samples place arithmetic operators between the variable x. The implicit model learned to follow rules from the context free grammar up to some multiple operator repeats.

One problem with Bayesian GANs is that they cannot work with very large neural networks: the ratio estimator is a function of global parameters, and thus the input size grows with the size of the neural network. One approach is to make the ratio estimator not a function of the global parameters.
Instead of optimizing model parameters via variational EM, we can train the model parameters by
backpropagating through the ratio objective instead of the variational objective. An alternative is to
use the hidden units as input which is much lower dimensional [51, Appendix C].
Injecting Noise into Hidden Units. In this section, we show how to build a hierarchical implicit model by simply injecting randomness into hidden units. We model sequences x = (x_1, . . . , x_T) with a recurrent neural network. For t = 1, . . . , T,

z_t = g_z(x_{t−1}, z_{t−1}, ε_{t,z}),   ε_{t,z} ~ N(0, 1),
x_t = g_x(z_t, ε_{t,x}),   ε_{t,x} ~ N(0, 1),

where g_z and g_x are both 1-layer multilayer perceptrons with ReLU activation and layer normalization. We place standard normal priors over all weights and biases. See Fig. 3a.
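A sketch of one transition of this stochastic RNN in PyTorch (our assumption; the noise dimensions, hidden sizes, and the ordering of ReLU and layer normalization are illustrative choices).

```python
import torch

class StochasticRNNCell(torch.nn.Module):
    """One step of z_t = g_z(x_{t-1}, z_{t-1}, eps) and x_t = g_x(z_t, eps)."""
    def __init__(self, x_dim, z_dim, hidden=64):
        super().__init__()
        self.g_z = torch.nn.Sequential(
            torch.nn.Linear(x_dim + 2 * z_dim, hidden), torch.nn.ReLU(),
            torch.nn.LayerNorm(hidden), torch.nn.Linear(hidden, z_dim))
        self.g_x = torch.nn.Sequential(
            torch.nn.Linear(z_dim + x_dim, hidden), torch.nn.ReLU(),
            torch.nn.LayerNorm(hidden), torch.nn.Linear(hidden, x_dim))

    def forward(self, x_prev, z_prev):
        eps_z = torch.randn_like(z_prev)   # noise enters the MLP nonlinearly,
        z_t = self.g_z(torch.cat([x_prev, z_prev, eps_z], dim=1))
        eps_x = torch.randn_like(x_prev)   # so p(z_t | .) is implicit
        x_t = self.g_x(torch.cat([z_t, eps_x], dim=1))
        return x_t, z_t

cell = StochasticRNNCell(x_dim=10, z_dim=32)
x, z = torch.zeros(4, 10), torch.zeros(4, 32)
for t in range(5):                         # unroll a short sequence
    x, z = cell(x, z)
```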
If the injected noise ε_{t,z} combines linearly with the output of g_z, the induced distribution p(z_t | x_{t−1}, z_{t−1}) is Gaussian parameterized by that output. This defines a stochastic RNN [2, 14], which generalizes its deterministic connection. With nonlinear combinations, the implicit density is more flexible (and intractable), making previous methods for inference not applicable. In our method, we perform variational inference and specify q to be implicit; we use the same architecture as the probability model's implicit priors.
We follow the same setup and hyperparameters as Kusner and Hernández-Lobato [27] and generate simple one-variable arithmetic sequences following a context free grammar,

S → x | S + S | S − S | S ∗ S | S / S,

where | divides possible productions of the grammar. We concatenate the inputs and point estimate
the global variables (model parameters) using variational EM. Fig. 3b displays samples from the
inferred model, training on sequences with a maximum of 15 symbols. It achieves sequences which
roughly follow the context free grammar.
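For reference, a tiny sampler for this grammar in plain Python; the depth cap is our own device to keep samples finite and is not part of the paper's setup.

```python
import random

def sample_expr(depth=0, max_depth=4):
    """Sample a string from S -> x | S+S | S-S | S*S | S/S."""
    if depth >= max_depth or random.random() < 0.5:
        return "x"
    op = random.choice("+-*/")
    return sample_expr(depth + 1, max_depth) + op + sample_expr(depth + 1, max_depth)

print([sample_expr() for _ in range(5)])  # e.g. ['x', 'x+x*x', ...]
```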
5 Discussion
We developed a class of hierarchical implicit models and likelihood-free variational inference, merging the idea of implicit densities with hierarchical Bayesian modeling and approximate posterior
inference. This expands Bayesian analysis with the ability to apply neural samplers, physical simulators, and their combination with rich, interpretable latent structure.
More stable inference with ratio estimation is an open challenge. This is especially important when
we analyze large-scale real world applications of implicit models. Recent work for genomics offers
a promising solution [51].
Acknowledgements. We thank Balaji Lakshminarayanan for discussions which helped motivate
this work. We also thank Christian Naesseth, Jaan Altosaar, and Adji Dieng for their feedback
and comments. DT is supported by a Google Ph.D. Fellowship in Machine Learning and an
Adobe Research Fellowship. This work is also supported by NSF IIS-0745520, IIS-1247664, IIS1009542, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, N66001-15-C-4032, Facebook,
Adobe, Amazon, and the John Templeton Foundation.
References
[1] Anelli, G., Antchev, G., Aspell, P., Avati, V., Bagliesi, M., Berardi, V., Berretti, M., Boccone, V.,
Bottigli, U., Bozzo, M., et al. (2008). The TOTEM experiment at the CERN Large Hadron Collider.
Journal of Instrumentation, 3(08):S08007.
[2] Bayer, J. and Osendorfer, C. (2014). Learning stochastic recurrent networks. arXiv preprint
arXiv:1411.7610.
[3] Beaumont, M. A. (2010). Approximate Bayesian computation in evolution and ecology. Annual
Review of Ecology, Evolution and Systematics, 41(379-406):1.
[4] Blei, D. M., Ng, A. Y., and Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine
Learning Research, 3(Jan):993?1022.
[5] Blundell, C., Cornebise, J., Kavukcuoglu, K., and Wierstra, D. (2015). Weight uncertainty in
neural network. In International Conference on Machine Learning.
[6] Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT?2010, pages 177?186. Springer.
[7] Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. (2016). InfoGAN:
Interpretable representation learning by information maximizing generative adversarial nets. In
Neural Information Processing Systems.
[8] Cranmer, K., Pavez, J., and Louppe, G. (2015). Approximating likelihood ratios with calibrated
discriminative classifiers. arXiv preprint arXiv:1506.02169.
[9] Diaconis, P., Holmes, S., and Montgomery, R. (2007). Dynamical bias in the coin toss. SIAM,
49(2):211?235.
[10] Dieng, A. B., Tran, D., Ranganath, R., Paisley, J., and Blei, D. M. (2017). The χ-Divergence
for Approximate Inference. In Neural Information Processing Systems.
[11] Diggle, P. J. and Gratton, R. J. (1984). Monte Carlo methods of inference for implicit statistical
models. Journal of the Royal Statistical Society: Series B (Methodological), pages 193?227.
[12] Donahue, J., Krähenbühl, P., and Darrell, T. (2017). Adversarial feature learning. In International Conference on Learning Representations.
[13] Dumoulin, V., Belghazi, I., Poole, B., Lamb, A., Arjovsky, M., Mastropietro, O., and Courville,
A. (2017). Adversarially learned inference. In International Conference on Learning Representations.
[14] Fraccaro, M., Sønderby, S. K., Paquet, U., and Winther, O. (2016). Sequential neural models
with stochastic layers. In Neural Information Processing Systems.
[15] Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013).
Bayesian data analysis. Texts in Statistical Science Series. CRC Press, Boca Raton, FL.
[16] Gelman, A. and Hill, J. (2006). Data analysis using regression and multilevel/hierarchical
models. Cambridge University Press.
[17] Gneiting, T. and Raftery, A. E. (2007). Strictly proper scoring rules, prediction, and estimation.
Journal of the American Statistical Association, 102(477):359?378.
[18] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville,
A., and Bengio, Y. (2014). Generative adversarial nets. In Neural Information Processing Systems.
[19] Gutmann, M. U., Dutta, R., Kaski, S., and Corander, J. (2014). Statistical Inference of Intractable Generative Models via Classification. arXiv preprint arXiv:1407.4981.
[20] Hartig, F., Calabrese, J. M., Reineking, B., Wiegand, T., and Huth, A. (2011). Statistical
inference for stochastic simulation models – theory and application. Ecology Letters, 14(8):816–827.
[21] Hernández-Lobato, J. M., Li, Y., Rowland, M., Hernández-Lobato, D., Bui, T., and Turner, R. E. (2016). Black-box α-divergence minimization. In International Conference on Machine
Learning.
[22] Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. W. (2013). Stochastic variational
inference. Journal of Machine Learning Research, 14(1):1303?1347.
[23] Jordan, M. I., Ghahramani, Z., Jaakkola, T. S., and Saul, L. K. (1999). An introduction to
variational methods for graphical models. Machine Learning.
[24] Karaletsos, T. (2016). Adversarial message passing for graphical models. In NIPS Workshop.
[25] Keller, J. B. (1986).
93(3):191?197.
The probability of heads.
The American Mathematical Monthly,
[26] Kingma, D. P. and Welling, M. (2014). Auto-Encoding Variational Bayes. In International
Conference on Learning Representations.
[27] Kusner, M. J. and Hernández-Lobato, J. M. (2016). GANs for sequences of discrete elements
with the Gumbel-Softmax distribution. In NIPS Workshop.
[28] Larsen, A. B. L., Sønderby, S. K., Larochelle, H., and Winther, O. (2016). Autoencoding beyond pixels using a learned similarity metric. In International Conference on Machine Learning.
[29] Li, Y. and Turner, R. E. (2016). Rényi Divergence Variational Inference. In Neural Information
Processing Systems.
[30] Liu, Q. and Feng, Y. (2016). Two methods for wild variational inference. arXiv preprint
arXiv:1612.00081.
[31] MacKay, D. J. C. (1992). Bayesian methods for adaptive models. PhD thesis, California
Institute of Technology.
[32] Makhzani, A., Shlens, J., Jaitly, N., and Goodfellow, I. (2015). Adversarial autoencoders.
arXiv preprint arXiv:1511.05644.
[33] Marin, J.-M., Pudlo, P., Robert, C. P., and Ryder, R. J. (2012). Approximate Bayesian computational methods. Statistics and Computing, 22(6):1167?1180.
[34] Mescheder, L., Nowozin, S., and Geiger, A. (2017). Adversarial variational Bayes: Unifying
variational autoencoders and generative adversarial networks. arXiv preprint arXiv:1701.04722.
[35] Mohamed, S. and Lakshminarayanan, B. (2016). Learning in implicit generative models. arXiv
preprint arXiv:1610.03483.
[36] Murphy, K. (2012). Machine Learning: A Probabilistic Perspective. MIT Press.
[37] Neal, R. M. (1994). Bayesian Learning for Neural Networks. PhD thesis, University of
Toronto.
[38] Papamakarios, G. and Murray, I. (2016). Fast ε-free inference of simulation models with
Bayesian conditional density estimation. In Neural Information Processing Systems.
[39] Pearl, J. (2000). Causality. Cambridge University Press.
[40] Pritchard, J. K., Seielstad, M. T., Perez-Lezaun, A., and Feldman, M. W. (1999). Population
growth of human Y chromosomes: a study of Y chromosome microsatellites. Molecular Biology
and Evolution, 16(12):1791?1798.
[41] Radford, A., Metz, L., and Chintala, S. (2016). Unsupervised representation learning with
deep convolutional generative adversarial networks. In International Conference on Learning
Representations.
[42] Ranganath, R., Altosaar, J., Tran, D., and Blei, D. M. (2016a). Operator variational inference.
In Neural Information Processing Systems.
[43] Ranganath, R., Gerrish, S., and Blei, D. M. (2014). Black box variational inference. In Artificial Intelligence and Statistics.
[44] Ranganath, R., Tran, D., and Blei, D. M. (2016b). Hierarchical variational models. In International Conference on Machine Learning.
[45] Rezende, D. J. and Mohamed, S. (2015). Variational inference with normalizing flows. In
International Conference on Machine Learning.
[46] Rezende, D. J., Mohamed, S., and Wierstra, D. (2014). Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning.
[47] Salakhutdinov, R. and Mnih, A. (2008). Bayesian probabilistic matrix factorization using
Markov chain Monte Carlo. In International Conference on Machine Learning, pages 880?887.
ACM.
[48] Salimans, T., Kingma, D. P., and Welling, M. (2015). Markov chain Monte Carlo and variational inference: Bridging the gap. In International Conference on Machine Learning.
[49] Sugiyama, M., Suzuki, T., and Kanamori, T. (2012). Density-ratio matching under the Bregman divergence: A unified framework of density-ratio estimation. Annals of the Institute of
Statistical Mathematics.
[50] Teh, Y. W. and Jordan, M. I. (2010). Hierarchical Bayesian nonparametric models with applications. Bayesian Nonparametrics, 1.
[51] Tran, D. and Blei, D. M. (2017). Implicit causal models for genome-wide association studies.
arXiv preprint arXiv:1710.10742.
[52] Tran, D., Blei, D. M., and Airoldi, E. M. (2015). Copula variational inference. In Neural
Information Processing Systems.
[53] Tran, D., Kucukelbir, A., Dieng, A. B., Rudolph, M., Liang, D., and Blei, D. M. (2016).
Edward: A library for probabilistic modeling, inference, and criticism. arXiv preprint
arXiv:1610.09787.
[54] Uehara, M., Sato, I., Suzuki, M., Nakayama, K., and Matsuo, Y. (2016). Generative adversarial
nets from a density ratio estimation perspective. arXiv preprint arXiv:1610.02920.
[55] Wilkinson, D. J. (2011). Stochastic modelling for systems biology. CRC press.
6,784 | 7,137 | Semi-supervised Learning with GANs: Manifold
Invariance with Improved Inference
Abhishek Kumar?
IBM Research AI
Yorktown Heights, NY
[email protected]
Prasanna Sattigeri?
IBM Research AI
Yorktown Heights, NY
[email protected]
P. Thomas Fletcher
University of Utah
Salt Lake City, UT
[email protected]
Abstract
Semi-supervised learning methods using Generative adversarial networks (GANs)
have shown promising empirical success recently. Most of these methods use a
shared discriminator/classifier which discriminates real examples from fake while
also predicting the class label. Motivated by the ability of the GANs generator to
capture the data manifold well, we propose to estimate the tangent space to the data
manifold using GANs and employ it to inject invariances into the classifier. In the
process, we propose enhancements over existing methods for learning the inverse
mapping (i.e., the encoder) which greatly improves in terms of semantic similarity
of the reconstructed sample with the input sample. We observe considerable
empirical gains in semi-supervised learning over baselines, particularly in the cases
when the number of labeled examples is low. We also provide insights into how
fake examples influence the semi-supervised learning procedure.
1 Introduction
Deep generative models (both implicit [11, 23] as well as prescribed [16]) have become widely
popular for generative modeling of data. Generative adversarial networks (GANs) [11] in particular
have shown remarkable success in generating very realistic images in several cases [30, 4]. The
generator in a GAN can be seen as learning a nonlinear parametric mapping g : Z → X to the data manifold. In most applications of interest (e.g., modeling images), we have dim(Z) ≪ dim(X). A
distribution pz over the space Z (e.g., uniform), combined with this mapping, induces a distribution
pg over the space X and a sample from this distribution can be obtained by ancestral sampling, i.e.,
z ~ p_z, x = g(z). GANs use adversarial training where the discriminator approximates (lower
bounds) a divergence measure (e.g., an f -divergence) between pg and the real data distribution px by
solving an optimization problem, and the generator tries to minimize this [28, 11]. It can also be seen
from another perspective where the discriminator tries to tell apart real examples x ~ p_x from fake examples x_g ~ p_g by minimizing an appropriate loss function [10, Ch. 14.2.4] [21], and the generator
tries to generate samples that maximize that loss [39, 11].
One of the primary motivations for studying deep generative models is for semi-supervised learning.
Indeed, several recent works have shown promising empirical results on semi-supervised learning
with both implicit as well as prescribed generative models [17, 32, 34, 9, 20, 29, 35]. Most state-of-the-art semi-supervised learning methods using GANs [34, 9, 29] use the discriminator of the GAN
as the classifier which now outputs k + 1 probabilities (k probabilities for the k real classes and one
probability for the fake class).
When the generator of a trained GAN produces very realistic images, it can be argued to capture
the data manifold well whose properties can be used for semi-supervised learning. In particular, the
∗ Contributed equally.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
tangent spaces of the manifold can inform us about the desirable invariances one may wish to inject
in a classifier [36, 33]. In this work we make the following contributions:
• We propose to use the tangents from the generator's mapping to automatically infer the desired invariances and further improve on semi-supervised learning. This can be contrasted with methods that assume the knowledge of these invariances (e.g., rotation, translation, horizontal flipping, etc.) [36, 18, 25, 31].
• Estimating tangents for a real sample x requires us to learn an encoder h that maps from data to latent space (inference), i.e., h : X → Z. We propose enhancements over existing methods for learning the encoder [8, 9] which improve the semantic match between x and g(h(x)) and counter the problem of class-switching.
• Further, we provide insights into the workings of GAN based semi-supervised learning methods [34] on how fake examples affect the learning.
2 Semi-supervised learning using GANs
Most of the existing methods for semi-supervised learning using GANs modify the regular GAN
discriminator to have k outputs corresponding to k real classes [38], and in some cases a (k + 1)-th output that corresponds to fake samples from the generator [34, 29, 9]. The generator is mainly used as a source of additional data (fake samples) which the discriminator tries to classify under the (k + 1)-th label. We propose to use the generator to obtain the tangents to the image manifold and use
these to inject invariances into the classifier [36].
2.1 Estimating the tangent space of data manifold
Earlier work has used contractive autoencoders (CAE) to estimate the local tangent space at each
point [33]. CAEs optimize the regular autoencoder loss (reconstruction error) augmented with an
additional ℓ2-norm penalty on the Jacobian of the encoder mapping. Rifai et al. [33] intuitively reason
that the encoder of the CAE trained in this fashion is sensitive only to the tangent directions and use
the dominant singular vectors of the Jacobian of the encoder as the tangents. This, however, involves
extra computational overhead of doing an SVD for every training sample which we will avoid in
our GAN based approach. GANs have also been established to generate better quality samples than
prescribed models (e.g., reconstruction loss based approaches) like VAEs [16] and hence can be
argued to learn a more accurate parameterization of the image manifold.
The trained generator of the GAN serves as a parametric mapping from a low dimensional space Z to a manifold M embedded in the higher dimensional space X, g : Z → X, where Z is an open subset of ℝ^d and X is an open subset of ℝ^D under the standard topologies on ℝ^d and ℝ^D, respectively (d ≪ D). This map is not surjective and the range of g is restricted to M.² We assume g is a smooth, injective mapping, so that M is an embedded manifold. The Jacobian of a function f : ℝ^d → ℝ^D at z ∈ ℝ^d, J_z f, is the matrix of partial derivatives (of shape D × d). The Jacobian of g at z ∈ Z, J_z g, provides a mapping from the tangent space at z ∈ Z into the tangent space at x = g(z) ∈ X, i.e., J_z g : T_z Z → T_x X. It should be noted that T_z Z is isomorphic to ℝ^d and T_x X is isomorphic to ℝ^D. However, this mapping is not surjective and the range of J_z g is restricted to the tangent space of the manifold M at x = g(z), denoted as T_x M (for all z ∈ Z). As GANs are capable of generating realistic samples (particularly for natural images), one can argue that M approximates the true data manifold well and hence the tangents to M obtained using J_z g are close to the tangents to the true data manifold. The problem of learning a smooth manifold from finite samples has been studied in the literature [5, 2, 27, 6, 40, 19, 14, 3], and it is an interesting problem in its own right to study the manifold approximation error of GANs, which minimize a chosen divergence measure between the data distribution and the fake distribution [28, 23] using finite samples; however, this is outside the scope of the current work.
For a given data sample x ∈ X, we need to find its corresponding latent representation z before we can use J_z g to get the tangents to the manifold M at x. For our current discussion we assume the availability of a so-called encoder h : X → Z, such that h(g(z)) = z ∀ z ∈ Z. By definition, the

²We write g as a map from Z to X to avoid the unnecessary (in our context) burden of manifold terminologies and still being technically correct. This also enables us to get the Jacobian of g as a regular matrix in ℝ^{D×d}, instead of working with the differential if g was taken as a map from Z to M.
Jacobian of the generator at z, J_z g, can be used to get the tangent directions to the manifold at a point x = g(z) ∈ M. The following lemma specifies the conditions for the existence of the encoder h and shows that such an encoder can also be used to get tangent directions. Later we will come back to the issues involved in training such an encoder.

Lemma 2.1. If the Jacobian of g at z ∈ Z, J_z g, is full rank then g is locally invertible in the open neighborhood g(S) (S being an open neighborhood of z), and there exists a smooth h : g(S) → S such that h(g(y)) = y, ∀ y ∈ S. In this case, the Jacobian of h at x = g(z), J_x h, spans the tangent space of M at x.
Proof. We refer the reader to standard textbooks on multivariate calculus and differentiable manifolds
for the first statement of the lemma (e.g., [37]).
The second statement can be easily deduced by looking at the Jacobian of the composition of functions
h ? g. We have Jz (h ? g) = Jg(z) h Jz g = Jx h Jz g = Id?d , since h(g(z)) = z. This implies that the
row span of Jx h coincides with the column span of Jz g. As the columns of Jz g span the tangent
space Tg(z) M, so do the the rows of Jx h.
2.1.1 Training the inverse mapping (the encoder)
To estimate the tangents for a given real data point x ∈ X, we need its corresponding latent
representation z = h(x) ∈ Z, such that g(h(x)) = x in an ideal scenario. However, in practice
g will only learn an approximation to the true data manifold, and the mapping g ∘ h will act like
a projection of x (which will almost always be off the manifold M) onto the manifold M, yielding
some approximation error. This projection may not be orthogonal, i.e., to the nearest point on M.
Nevertheless, it is desirable that x and g(h(x)) are semantically close, and that, at the very least, the class
label is preserved by the mapping g ∘ h. We studied the following three approaches for training the
inverse map h with regard to this desideratum:
• Decoupled training. This is similar to an approach outlined by Donahue et al. [8], where the
generator is trained first and fixed thereafter, and the encoder is trained by optimizing a suitable
reconstruction loss in the Z space, L(z, h(g(z))) (e.g., cross entropy, ℓ₂). This approach does not
yield good results, and we observe that most of the time g(h(x)) is not semantically similar to
the given real sample x, with a change in the class label. One of the reasons, as noted by Donahue
et al. [8], is that the encoder never sees real samples during training. To address this, we also
experimented with the combined objective min_h L_z(z, h(g(z))) + L_h(x, g(h(x)));
however, this too did not yield any significant improvements in our early explorations.
• BiGAN. Donahue et al. [8] propose to jointly train the encoder and generator using adversarial
training, where the pair (z, g(z)) is considered a fake example (z ∼ p_z) and the pair (h(x), x) is
considered a real example by the discriminator. A similar approach is proposed by Dumoulin
et al. [9], where h(x) gives the parameters of the posterior p(z|x), and a stochastic sample from
the posterior paired with x is taken as a real example. We use BiGAN [8] in this work, with one
modification: we use the feature matching loss [34] (computed using features from an intermediate
layer ℓ of the discriminator f), i.e., ‖E_x f_ℓ(h(x), x) − E_z f_ℓ(z, g(z))‖₂², to optimize the generator
and encoder, which we found greatly helps with convergence.³ We observe better results in
terms of the semantic match between x and g(h(x)) than in the decoupled training approach; however,
we still observe a considerable fraction of instances where the class of g(h(x)) is changed (let us
refer to this as class-switching).
• Augmented-BiGAN. To address the still-persistent problem of class-switching of the reconstructed samples g(h(x)), we propose to construct a third pair (h(x), g(h(x))), which is also considered by the discriminator as a fake example, in addition to (z, g(z)). Our Augmented-BiGAN
objective is given as

E_{x∼p_x}[log f(h(x), x)] + (1/2) E_{z∼p_z}[log(1 − f(z, g(z)))] + (1/2) E_{x∼p_x}[log(1 − f(h(x), g(h(x))))],   (1)

where f(·, ·) is the probability of the pair being a real example, as assigned by the discriminator
f. We optimize the discriminator using the above objective (1). The generator and encoder are
again optimized using the feature matching [34] loss on an intermediate layer ℓ of the discriminator,
i.e., L_gh = ‖E_x f_ℓ(h(x), x) − E_z f_ℓ(z, g(z))‖₂², to help with convergence. Minimizing L_gh
will make x and g(h(x)) similar (through the lens of f_ℓ) as in the case of BiGAN; however,
the discriminator tries to make the features at layer ℓ harder to match by directly
optimizing the third term in the objective (1). This results in improved semantic similarity between
x and g(h(x)); a minimal code sketch of this objective is given after this discussion.

³ Note that other recently proposed methods for training GANs based on Integral Probability Metrics
[1, 13, 26, 24] could also improve the convergence and stability during training.
We empirically evaluate these approaches with regard to similarity between x and g(h(x)) both
quantitatively and qualitatively, observing that Augmented-BiGAN works significantly better than
BiGAN. We note that ALI [9] also has the problems of semantic mismatch and class switching for
reconstructed samples as reported by the authors, and a stochastic version of the proposed third term
in the objective (1) can potentially help there as well, investigation of which is left for future work.
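To make the Augmented-BiGAN objective concrete, here is a minimal PyTorch-style sketch (ours, not the authors' released code); f, g and h are placeholder callables, with f returning the probability of a pair being real:

```python
import torch

def augmented_bigan_d_loss(f, g, h, x, z, eps=1e-6):
    """Negative of Eq. (1): the discriminator minimizes this quantity."""
    p_real = f(h(x), x)          # (h(x), x): real pair
    p_fake = f(z, g(z))          # (z, g(z)): fake pair
    p_rec  = f(h(x), g(h(x)))    # (h(x), g(h(x))): additional fake pair
    return -(torch.log(p_real + eps).mean()
             + 0.5 * torch.log(1.0 - p_fake + eps).mean()
             + 0.5 * torch.log(1.0 - p_rec + eps).mean())
```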
2.1.2 Estimating the dominant tangent space
Once we have a trained encoder h such that g(h(x)) is a good approximation to x and h(g(z)) is a
good approximation to z, we can use either J_{h(x)}g or J_x h to get an estimate of the tangent space.
Specifically, the columns of J_{h(x)}g and the rows of J_x h are the directions that approximately span
the tangent space of the data manifold at x. Almost all deep learning packages implement reverse-mode
differentiation (to do backpropagation), which is computationally cheaper than forward-mode
differentiation for computing the Jacobian when the output dimension of the function is low (and
vice versa when the output dimension is high). Hence we use J_x h in all our experiments to get the
tangents.
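As an illustration, a minimal sketch (with a hypothetical encoder architecture, not the one used in the experiments) of extracting tangents from a trained encoder via reverse-mode autodiff:

```python
import torch

D, d = 784, 32  # ambient and latent dimensions (illustrative)
h = torch.nn.Sequential(  # stand-in for a trained encoder h : R^D -> R^d
    torch.nn.Linear(D, 256), torch.nn.ELU(), torch.nn.Linear(256, d))

x = torch.randn(D)  # a (flattened) data point
# J_x h has shape (d, D); its rows approximately span the tangent space at x.
# Reverse mode is cheap here because the output dimension d is small.
Jxh = torch.autograd.functional.jacobian(h, x)
tangents = Jxh / Jxh.norm(dim=1, keepdim=True)  # row-normalized tangents
```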
As there are approximation errors at several places (M ≈ data manifold, g(h(x)) ≈ x, h(g(z)) ≈ z),
it is preferable to only consider dominant tangent directions in the row span of J_x h. These can be
obtained using the SVD of the matrix J_x h and taking the right singular vectors corresponding to the
top singular values, as done in [33], where h is trained using a contractive auto-encoder. However,
this process is expensive as the SVD needs to be done independently for every data sample. We
adopt an alternative approach to get dominant tangent directions: we take the pre-trained model with
the encoder-generator-discriminator (h-g-f) triple and insert two extra functions p : ℝ^d → ℝ^{d_p} and
p̄ : ℝ^{d_p} → ℝ^d (with d_p < d), which are learned by optimizing

min_{p, p̄} E_x [ ‖g(h(x)) − g(p̄(p(h(x))))‖₁ + ‖f^X_{−1}(g(h(x))) − f^X_{−1}(g(p̄(p(h(x)))))‖ ]

while g, h and f are kept fixed from the pre-trained model.
Note that our discriminator f has two pipelines f^Z and f^X for the latent z ∈ Z and the data x ∈ X,
respectively, which share parameters in the last few layers (following [8]), and we use the last layer of
f^X in this loss (denoted f^X_{−1} above). This enables us to learn a nonlinear (low-dimensional) approximation in the Z space
such that g(p̄(p(h(x)))) is close to g(h(x)). We use the Jacobian of p ∘ h, J_x(p ∘ h), as an estimate of
the d_p dominant tangent directions (d_p = 10 in all our experiments).⁴
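A minimal sketch of this bottleneck step under our reading of the procedure (module shapes and the frozen callables g, h and fX_last are illustrative assumptions):

```python
import torch

d, dp = 64, 10
p    = torch.nn.Sequential(torch.nn.Linear(d, dp), torch.nn.Tanh())  # R^d -> R^dp
pbar = torch.nn.Linear(dp, d)                                        # R^dp -> R^d
opt = torch.optim.Adam(list(p.parameters()) + list(pbar.parameters()), lr=1e-4)

def bottleneck_loss(g, h, fX_last, x):
    z  = h(x)
    x1 = g(z)            # reconstruction through the full latent code
    x2 = g(pbar(p(z)))   # reconstruction through the dp-dim bottleneck
    pixel_term   = (x1 - x2).abs().flatten(1).sum(dim=1).mean()  # L1 term
    feature_term = (fX_last(x1) - fX_last(x2)).flatten(1).norm(dim=1).mean()
    return pixel_term + feature_term
```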
2.2 Injecting invariances into the classifier using tangents
We use the tangent propagation approach (TangentProp) [36] to make the classifier invariant to the
estimated tangent directions from the previous section. Apart from the regular classification loss on
labeled examples, it uses a regularizer of the form Σ_{i=1}^n Σ_{v∈T_{x_i}} ‖(J_{x_i} c) v‖₂², where J_{x_i} c ∈ ℝ^{k×D}
is the Jacobian of the classifier function c at x = x_i (with k the number of classes) and T_{x_i} is the
set of tangent directions we want the classifier to be invariant to. This term penalizes the linearized
variations of the classifier output along the tangent directions. Simard et al. [36] get the tangent
directions using slight rotations and translations of the images, whereas we use the GAN to estimate
the tangents to the data manifold.
We can go one step further and make the classifier invariant to small perturbations in all directions
emanating from a point x. This leads to the regularizer

sup_{v : ‖v‖_p ≤ ε} ‖(J_x c) v‖_j^j ≤ Σ_{i=1}^k sup_{v : ‖v‖_p ≤ ε} |(J_x c)_{i:} v|^j = ε^j Σ_{i=1}^k ‖(J_x c)_{i:}‖_q^j,   (2)

where ‖·‖_q is the dual norm of ‖·‖_p (i.e., 1/p + 1/q = 1), and ‖·‖_j^j denotes the j-th power of the
ℓ_j-norm. This reduces to the squared Frobenius norm of the Jacobian matrix J_x c for p = j = 2.
⁴ Training the GAN with z ∈ Z ⊆ ℝ^{d_p} results in a bad approximation of the data manifold. Hence we first
learn the GAN with Z ⊆ ℝ^d and then approximate the smooth manifold M parameterized by the generator
using p and p̄ to get the dominant d_p tangent directions to M.
The penalty in Eq. (2) is closely related to the recent work on virtual adversarial training (VAT) [22], which uses the
regularizer (ref. Eq. (1), (2) in [22])

sup_{v : ‖v‖₂ ≤ ε} KL[c(x) ‖ c(x + v)],   (3)

where c(x) are the classifier outputs (class probabilities). VAT [22] approximately estimates the v* that
yields the sup using the gradient of KL[c(x) ‖ c(x + v)], calling (x + v*) a virtual adversarial
example (due to its resemblance to adversarial training [12]), and uses KL[c(x) ‖ c(x + v*)] as the
regularizer in the classifier objective. If we replace the KL-divergence in Eq. (3) with the total-variation
distance and optimize its first-order approximation, it becomes equivalent to the regularizer in Eq. (2)
for j = 1 and p = 2.
In practice, it is computationally expensive to optimize these Jacobian-based regularizers. Hence in all
our experiments we use a stochastic finite-difference approximation for all Jacobian-based regularizers.
For TangentProp, we use ‖c(x_i + v) − c(x_i)‖₂² with v randomly sampled (i.i.d.) from the set of
tangents T_{x_i} every time example x_i is visited by the SGD. For the Jacobian-norm regularizer of Eq. (2),
we use ‖c(x + δ) − c(x)‖₂² with δ ∼ N(0, σ²I) (i.i.d.) every time an example x is visited by the SGD,
which approximates an upper bound on Eq. (2) in expectation (up to scaling) for j = 2 and p = 2.
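The following sketch (ours; the noise scale and tensor layout are illustrative assumptions) shows both stochastic surrogates, with c the classifier and tangents a tensor holding, for each example, its set of tangent directions (same trailing shape as x):

```python
import torch

def tangent_reg(c, x, tangents):
    # TangentProp surrogate: ||c(x + v) - c(x)||_2^2 with one tangent v
    # sampled i.i.d. per example at each SGD step.
    idx = torch.randint(tangents.size(1), (x.size(0),))
    v = tangents[torch.arange(x.size(0)), idx]
    return ((c(x + v) - c(x)) ** 2).sum(dim=1).mean()

def jacobian_norm_reg(c, x, sigma=1e-2):
    # Surrogate for Eq. (2) with p = j = 2 (squared Frobenius norm):
    # ||c(x + delta) - c(x)||_2^2 with delta ~ N(0, sigma^2 I).
    delta = sigma * torch.randn_like(x)
    return ((c(x + delta) - c(x)) ** 2).sum(dim=1).mean()
```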
2.3 GAN discriminator as the classifier for semi-supervised learning: effect of fake examples
Recent works have used GANs for semi-supervised learning, where the discriminator also serves as a
classifier [34, 9, 29]. For a semi-supervised learning problem with k classes, the discriminator has
k + 1 outputs, with the (k+1)-th output corresponding to the fake examples originating from the
generator of the GAN. The loss for the discriminator f is given as [34]

L_f = L^f_sup + L^f_unsup, where
L^f_sup = −E_{(x,y)∼p_d(x,y)} [log p_f(y | x, y ≤ k)], and
L^f_unsup = −E_{x∼p_g(x)} [log p_f(y = k+1 | x)] − E_{x∼p_d(x)} [log(1 − p_f(y = k+1 | x))].   (4)

The term p_f(y = k+1 | x) is the probability of x being a fake example, and 1 − p_f(y = k+1 | x)
is the probability of x being a real example (as assigned by the model). The loss component
L^f_unsup is the same as the regular GAN discriminator loss, with the only modification that the probabilities
for real vs. fake are compiled from the (k+1) outputs. Salimans et al. [34] proposed training the
generator using feature matching, where the generator minimizes the mean discrepancy between the
features for real and fake examples obtained from an intermediate layer ℓ of the discriminator f,
i.e., L_g = ‖E_x f_ℓ(x) − E_z f_ℓ(g(z))‖₂². Using the feature matching loss for the generator was empirically
shown to result in much better accuracy for semi-supervised learning compared to other training
methods, including minibatch discrimination and the regular GAN generator loss [34].
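For concreteness, a minimal sketch (ours, not the implementation of [34]) of these losses; logits_* are the k+1 discriminator outputs with the last column treated as the fake class, and feat_* are features from an intermediate layer ℓ:

```python
import torch
import torch.nn.functional as F

def discriminator_loss(logits_lab, y, logits_real, logits_fake, eps=1e-6):
    # L_sup: cross-entropy over the k real classes only, i.e. p_f(y | x, y <= k)
    l_sup = F.cross_entropy(logits_lab[:, :-1], y)
    p_fake_real = F.softmax(logits_real, dim=1)[:, -1]  # p_f(k+1 | x), x ~ p_d
    p_fake_gen  = F.softmax(logits_fake, dim=1)[:, -1]  # p_f(k+1 | x), x ~ p_g
    l_unsup = -(torch.log(p_fake_gen + eps).mean()
                + torch.log(1.0 - p_fake_real + eps).mean())
    return l_sup + l_unsup                              # Eq. (4)

def generator_fm_loss(feat_real, feat_fake):
    # feature matching: || E_x f_l(x) - E_z f_l(g(z)) ||_2^2
    return ((feat_real.mean(0) - feat_fake.mean(0)) ** 2).sum()
```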
Here we attempt to develop an intuitive understanding of how fake examples influence the learning
of the classifier, and why the feature matching loss may work much better for semi-supervised learning
compared to the regular GAN. We will use the terms classifier and discriminator interchangeably based
on the context; however, they are really the same network, as mentioned earlier. Following [34] we
assume the (k+1)-th logit is fixed to 0, as subtracting a term v(x) from all logits does not change the
softmax probabilities. Rewriting the unlabeled loss of Eq. (4) in terms of the logits l_i(x), i = 1, 2, ..., k,
we have

L^f_unsup = E_{x_g∼p_g} [ log( 1 + Σ_{i=1}^k e^{l_i(x_g)} ) ] − E_{x∼p_d} [ log( Σ_{i=1}^k e^{l_i(x)} ) − log( 1 + Σ_{i=1}^k e^{l_i(x)} ) ].   (5)
Taking the derivative w.r.t. the discriminator's parameters θ, followed by some basic algebra, we get

∇_θ L^f_unsup = E_{x_g∼p_g} [ Σ_{i=1}^k p_f(y=i | x_g) ∇l_i(x_g) ] − E_{x∼p_d} [ Σ_{i=1}^k p_f(y=i | x, y ≤ k) ∇l_i(x) − Σ_{i=1}^k p_f(y=i | x) ∇l_i(x) ]
            = E_{x_g∼p_g} Σ_{i=1}^k a_i(x_g) ∇l_i(x_g) − E_{x∼p_d} Σ_{i=1}^k b_i(x) ∇l_i(x),   (6)

where a_i(x_g) := p_f(y=i | x_g) and b_i(x) := p_f(y=i | x, y ≤ k) · p_f(y=k+1 | x); the second line
uses the identity p_f(y=i | x, y ≤ k) − p_f(y=i | x) = p_f(y=i | x, y ≤ k) p_f(y=k+1 | x), which follows
from p_f(y=i | x) = p_f(y=i | x, y ≤ k)(1 − p_f(y=k+1 | x)).
Minimizing L^f_unsup will move the parameters θ so as to decrease l_i(x_g) and increase l_i(x) (i =
1, . . . , k). The rate of increase of l_i(x) is also modulated by p_f(y = k+1 | x). This results in warping
of the functions l_i(·) around each real example x, with more warping around examples about which
the current model f is more confident that they belong to class i: l_i(·) becomes locally concave
around those real examples x if the x_g are loosely scattered around x. Let us consider the following three
cases:
Weak fake examples. When the fake examples coming from the generator are very weak (i.e.,
very easy for the current discriminator to distinguish from real examples), we will have p_f(y =
k+1 | x_g) ≈ 1, p_f(y = i | x_g) ≈ 0 for 1 ≤ i ≤ k, and p_f(y = k+1 | x) ≈ 0. Hence there is almost no
gradient flow from Eq. (6), rendering unlabeled data almost useless for semi-supervised learning.
Strong fake examples. When the fake examples are very strong (i.e., difficult for the current
discriminator to distinguish from real ones), we have p_f(k+1 | x_g) ≈ 0.5 + ε₁, p_f(y = i_max | x_g) ≈
0.5 − ε₂ for some i_max ∈ {1, . . . , k}, and p_f(y = k+1 | x) ≈ 0.5 − ε₃ (with ε₂ > ε₁ ≥ 0 and ε₃ ≥ 0).
Note that b_i(x) in this case would be smaller than a_i(x_g), since it is a product of two probabilities. If
two examples x and x_g are close to each other with i_max = arg max_i l_i(x) = arg max_i l_i(x_g) (e.g.,
x is a cat image and x_g is a highly realistic generated image of a cat), the optimization will push
l_{i_max}(x) up by some amount and will pull l_{i_max}(x_g) down by a larger amount. We further want to
consider two cases here: (i) Classifier with enough capacity: if the classifier has enough capacity,
this will make the curvature of l_{i_max}(·) around x really high (with l_{i_max}(·) locally concave around x),
since x and x_g are very close. This results in over-fitting around the unlabeled examples, and for a test
example x_t closer to x_g (which is quite likely to happen, since x_g itself was a very realistic sample), the
model will more likely misclassify x_t. (ii) Controlled-capacity classifier: suppose the capacity of
the classifier is controlled with adequate regularization. In that case the curvature of the function
l_{i_max}(·) around x cannot increase beyond a point. However, this results in l_{i_max}(x) being pulled down
by the optimization process, since a_i(x_g) > b_i(x). This is more pronounced for examples x on which
the classifier is not so confident (i.e., p_f(y = i_max | x, y ≤ k) is low, although still assigning the highest
probability to class i_max), since the gap between a_i(x_g) and b_i(x) becomes higher. For these examples,
the entropy of the distribution {p_f(y = i | x, y ≤ k)}_{i=1}^k may actually increase as the training proceeds,
which can hurt the test performance.
Moderate fake examples. When the fake examples from the generator are neither too weak nor too
strong for the current discriminator (i.e., x_g is a somewhat distorted version of x), the unsupervised
gradient will push l_{i_max}(x) up while pulling l_{i_max}(x_g) down, giving rise to a moderate curvature of
l_i(·) around real examples x, since x_g and x are sufficiently far apart (consider multiple distorted cat
images scattered around a real cat image at moderate distances). This results in a smooth decision
function around real unlabeled examples. Again, the curvatures of l_i(·) around x for classes i which
the current classifier does not trust for the example x are not affected much. Further, p_f(y = k+1 | x)
will be less than in the case when fake examples are very strong. Similarly, p_f(y = i_max | x_g) (where
i_max = arg max_{1≤i≤k} l_i(x_g)) will be less than in the case of strong fake examples. Hence the norm
of the gradient in Eq. (6) is lower, and the contribution of unlabeled data to the overall gradient of
L_f (Eq. (4)) is lower than in the case of strong fake examples. This intuitively seems beneficial, as the
classifier gets ample opportunity to learn on the supervised loss and get confident on the right class for
unlabeled examples, and then boost this confidence slowly using the gradient of Eq. (6) as the training
proceeds.
We experimented with the regular GAN loss (i.e., L_g = E_{x∼p_g} [log p_f(y = k+1 | x)]) and the feature
matching loss for the generator [34], plotting several of the quantities of interest discussed above
for MNIST (with 100 labeled examples) and SVHN (with 1000 labeled examples) in Fig. 1.
A generator trained with the feature matching loss corresponds to the case of moderate fake examples
discussed above (as it generates blurry and distorted samples, as mentioned in [34]). A generator
trained with the regular GAN loss corresponds to the case of strong fake examples discussed above.
We plot E_{x_g}[a_{i_max}(x_g)] for i_max = arg max_{1≤i≤k} l_i(x_g) and E_{x_g}[(1/(k−1)) Σ_{1≤i≠i_max≤k} a_i(x_g)] separately
to look into the behavior of the i_max logit. Similarly, we plot E_x[b_t(x)] separately, where t is the true
label for unlabeled example x (we assume knowledge of the true label only for plotting these
quantities and not while training the semi-supervised GAN). Other quantities in the plots are self-explanatory. As expected, the unlabeled loss L^f_unsup for the regular GAN becomes quite high early on,
implying that fake examples are strong. The gap between a_{i_max}(x_g) and b_t(x) is also higher for the
regular GAN, pointing towards the case of strong fake examples with a controlled-capacity classifier as
discussed above. Indeed, we see that the average of the entropies for the distributions p_f(y|x) (i.e.,
E_x H(p_f(y | x, y ≤ k))) is much lower for the feature-matching GAN compared to the regular GAN (seven
times lower for SVHN, ten times lower for MNIST). Test errors on MNIST for the regular GAN and
FM-GAN were 2.49% (500 epochs) and 0.86% (300 epochs), respectively. Test errors on SVHN
were 13.36% (regular GAN at 738 epochs) and 5.89% (FM-GAN at 883 epochs), respectively.⁵

Figure 1: Plots of entropy, L^f_unsup (Eq. (4)), a_i(x_g), b_i(x) and other probabilities (Eq. (6)) for the regular
GAN generator loss and the feature-matching GAN generator loss.
It should also be emphasized that semi-supervised learning heavily depends on the generator
dynamically adapting its fake examples to the current discriminator: we observed that freezing the
training of the generator at any point results in the discriminator being able to classify the fake examples easily
(i.e., p_f(y = k+1 | x_g) ≈ 1), thus stopping the contribution of unlabeled examples to the learning.

Our final loss for semi-supervised learning. We use the feature matching GAN with the semi-supervised
loss of Eq. (4) as our classifier objective and incorporate the invariances from Sec. 2.2 into it. Our final
objective for the GAN discriminator is

L_f = L^f_sup + L^f_unsup + λ₁ E_{x∼p_d(x)} Σ_{v∈T_x} ‖(J_x f) v‖₂² + λ₂ E_{x∼p_d(x)} ‖J_x f‖²_F.   (7)

The third term in the objective makes the classifier decision function change slowly along tangent
directions around a real example x. As mentioned in Sec. 2.2, we use stochastic finite-difference
approximations for both Jacobian terms due to computational reasons.
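To tie everything together, a minimal sketch (ours) of Eq. (7), reusing the illustrative helpers discriminator_loss, tangent_reg and jacobian_norm_reg defined in the sketches above:

```python
def full_discriminator_objective(c, logits_lab, y, logits_real, logits_fake,
                                 x_unl, tangents, lam1=1.0, lam2=1.0):
    # Eq. (7): supervised + unsupervised GAN terms plus the two (stochastic)
    # Jacobian regularizers on real unlabeled examples.
    return (discriminator_loss(logits_lab, y, logits_real, logits_fake)
            + lam1 * tangent_reg(c, x_unl, tangents)
            + lam2 * jacobian_norm_reg(c, x_unl))
```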
3 Experiments
Implementation Details. The architectures of the encoder, generator and discriminator closely
follow the network structures in ALI [9]. We remove the stochastic layer from the ALI encoder (i.e.,
h(x) is deterministic). For estimating the dominant tangents, we employ a fully connected two-layer
network with a tanh non-linearity in the hidden layer to represent the composition of p and p̄; the output of p is taken from
the hidden layer. Batch normalization was replaced by weight normalization in all the modules to
make the output h(x) (similarly, g(z)) dependent only on the given input x (similarly, z) and not on
the whole minibatch. This is necessary to make the Jacobians J_x h and J_z g independent of other
examples in the minibatch. We replaced all ReLU nonlinearities in the encoder and the generator
with Exponential Linear Units (ELU) [7] to ensure smoothness of the functions g and h. We
follow [34] completely for optimization (using the ADAM optimizer [15] with the same learning rates as
in [34]). Generators (and encoders, if applicable) in all the models are trained using the feature matching
loss.
⁵ We also experimented with the minibatch-discrimination (MD) GAN [34], but the minibatch features are not
suited for classification, as the prediction for an example x is adversely affected by the features of all other examples
(note that this is different from batch normalization). Indeed, we notice that the training error for MD-GAN is
10x that of the regular GAN and FM-GAN. MD-GAN gave a similar test error as the regular GAN.
Figure 2: Comparing BiGAN with Augmented-BiGAN based on the classification error on the
reconstructed test images. Left column: CIFAR10; right column: SVHN. In the images, the top row
corresponds to the original images, followed by the BiGAN reconstructions in the middle row and the
Augmented-BiGAN reconstructions in the bottom row. More images can be found in the appendix.

Figure 3: Visualizing tangents. Top: CIFAR10; bottom: SVHN. Odd rows: tangents using our
method for estimating the dominant tangent space. Even rows: tangents using the SVD on J_{h(x)}g and
J_x h. First column: original image. Second column: reconstructed image using g ∘ h. Third column:
reconstructed image using g ∘ p̄ ∘ p ∘ h. Columns 4-13: tangents using the encoder. Columns 14-23:
tangents using the generator.
Semantic Similarity. The image samples x and their reconstructions g(h(x)) for BiGAN and
Augmented-BiGAN can be seen in Fig. 2. To quantitatively measure the semantic similarity of the
reconstructions to the original images, we learn a supervised classifier using the full training set and
obtain the classification accuracy on the reconstructions of the test images. The architectures of the
classifiers for CIFAR10 and SVHN are similar to their corresponding GAN discriminator architectures.
The lower error rates with our Augmented-BiGAN suggest that it leads to reconstructions
with reduced class-switching.

Tangent approximations. Tangents for CIFAR10 and SVHN are shown in Fig. 3. We show a visual
comparison of tangents from J_x(p ∘ h), from J_{p(h(x))}(g ∘ p̄), and from J_x h and J_{h(x)}g followed
by the SVD to get the dominant tangents. It can be seen that the proposed method for getting
dominant tangent directions gives similar tangents as the SVD. The tangents from the generator (columns
14-23) look different (more colorful) from the tangents from the encoder (columns 4-13), though they
do trace the boundaries of the objects in the image (just like the tangents from the encoder). We
also empirically quantify our method for dominant tangent subspace estimation against the SVD
estimation by computing the geodesic distances and principal angles between the two estimates.
These results are shown in Table 2.

Semi-supervised learning results. Table 1 shows the results for SVHN and CIFAR10 with various
numbers of labeled examples. For all experiments with the tangent regularizer, for both CIFAR10
and SVHN, we use 10 tangents. The hyperparameters λ₁ and λ₂ in Eq. (7) are set to 1. We obtain
significant improvements over baselines, particularly for SVHN, and more so for the case of 500
labeled examples.
Model                           | SVHN Nl=500 | SVHN Nl=1000 | CIFAR-10 Nl=1000 | CIFAR-10 Nl=4000
VAE (M1+M2) [17]                | –           | 36.02 ± 0.10 | –                | –
SWWAE with dropout [41]         | –           | 23.56        | –                | –
VAT [22]                        | –           | 24.63        | –                | –
Skip DGM [20]                   | –           | 16.61 ± 0.24 | –                | –
Ladder network [32]             | –           | –            | –                | 20.40
ALI [9]                         | –           | 7.41 ± 0.65  | 19.98 ± 0.89     | 17.99 ± 1.62
FM-GAN [34]                     | 18.44 ± 4.8 | 8.11 ± 1.3   | 21.83 ± 2.01     | 18.63 ± 2.32
Temporal ensembling [18]        | 5.12 ± 0.13 | 4.42 ± 0.16  | –                | 12.16 ± 0.24
FM-GAN + Jacob.-reg (Eq. (2))   | 10.28 ± 1.8 | 4.74 ± 1.2   | 20.87 ± 1.7      | 16.84 ± 1.5
FM-GAN + Tangents               | 5.88 ± 1.5  | 5.26 ± 1.1   | 20.23 ± 1.3      | 16.96 ± 1.4
FM-GAN + Jacob.-reg + Tangents  | 4.87 ± 1.6  | 4.39 ± 1.2   | 19.52 ± 1.5      | 16.20 ± 1.6

Table 1: Test error with semi-supervised learning on SVHN and CIFAR-10 (Nl is the number of
labeled examples). All results for the proposed methods (last 3 rows) are obtained by training the
model for 600 epochs for SVHN and 900 epochs for CIFAR10, and are averaged over 5 runs.
                      | d(S1,S2) | θ1 | θ2 | θ3 | θ4 | θ5 | θ6 | θ7 | θ8 | θ9 | θ10
Rand-Rand             | 4.5      | 14 | 83 | 85 | 86 | 87 | 87 | 88 | 88 | 88 | 89
SVD-Approx. (CIFAR)   | 2.6      | 2  | 15 | 21 | 26 | 34 | 40 | 50 | 61 | 73 | 85
SVD-Approx. (SVHN)    | 2.3      | 1  | 7  | 12 | 16 | 22 | 30 | 41 | 51 | 67 | 82

Table 2: Dominant tangent subspace approximation quality. Columns show the geodesic distance
and 10 principal angles between the two subspaces. The top row shows results for two randomly sampled
10-dimensional subspaces in 3072-dimensional space; the middle and bottom rows show results for the
dominant subspace obtained using the SVD of J_x h vs. the dominant subspace obtained using our method,
for CIFAR-10 and SVHN, respectively. All numbers are averages over 10 randomly sampled test examples.
We do not get as good results on CIFAR10, which may be due to the fact that
our encoder for CIFAR10 is still not able to approximate the inverse of the generator well (which
is evident from the sub-optimal reconstructions we get for CIFAR10), and hence the tangents we
get are not good enough. We think that obtaining better estimates of the tangents for CIFAR10 has the
potential to further improve the results. The ALI [9] accuracy for CIFAR (Nl = 1000) is also close to
our results; however, the ALI results were obtained by running the optimization for 6475 epochs with a
slower learning rate, as mentioned in [9]. Temporal ensembling [18] uses explicit data augmentation,
assuming knowledge of the class-preserving transformations on the input, while our method estimates
these transformations from the data manifold in the form of tangent vectors. It outperforms our
method by a significant margin on CIFAR-10, which could be due to the fact that it uses horizontal-flipping-based augmentation for CIFAR-10, which cannot be learned through the tangents as it is a
non-smooth transformation. The use of temporal ensembling in conjunction with our method has the
potential to further improve the semi-supervised learning results.
4 Discussion
Our empirical results show that using the tangents of the data manifold (as estimated by the generator
of the GAN) to inject invariances into the classifier improves the performance on semi-supervised
learning tasks. In particular, we observe impressive accuracy gains on SVHN (more so for the
case of 500 labeled examples), for which the tangents obtained are of good quality. We also observe
improvements on CIFAR10, though not as impressive as on SVHN. We think that improving the
quality of the tangents for CIFAR10 has the potential to further improve the results there, which is a
direction for future exploration. We also shed light on the effect of fake examples in the common
framework used for semi-supervised learning with GANs, where the discriminator predicts real
class labels along with the fake label. Explicitly controlling the difficulty level of fake examples
(i.e., p_f(y = k+1 | x_g), and hence indirectly p_f(y = k+1 | x) in Eq. (6)) to do more effective
semi-supervised learning is another direction for future work. One possible way to do this is to have a
distortion model for the real examples (i.e., replace the generator with a distorter that takes the real examples
as input) whose strength is controlled for more effective semi-supervised learning.
References
[1] Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[2] Alexander V Bernstein and Alexander P Kuleshov. Tangent bundle manifold learning via Grassmann & Stiefel eigenmaps. arXiv preprint arXiv:1212.6031, 2012.
[3] AV Bernstein and AP Kuleshov. Data-based manifold reconstruction via tangent bundle manifold learning. In ICML-2014, Topological Methods for Machine Learning Workshop, Beijing, volume 25, pages 1–6, 2014.
[4] David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717, 2017.
[5] Guillermo Canas, Tomaso Poggio, and Lorenzo Rosasco. Learning manifolds with k-means and k-flats. In Advances in Neural Information Processing Systems, pages 2465–2473, 2012.
[6] Guangliang Chen, Anna V Little, Mauro Maggioni, and Lorenzo Rosasco. Some recent advances in multiscale geometric analysis of point clouds. In Wavelets and Multiscale Analysis, pages 199–225. Springer, 2011.
[7] Djork-Arné Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv preprint arXiv:1511.07289, 2015.
[8] Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[9] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[10] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[12] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[13] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.
[14] Kui Jia, Lin Sun, Shenghua Gao, Zhan Song, and Bertram E Shi. Laplacian auto-encoders: An explicit learning of nonlinear data manifold. Neurocomputing, 160:250–260, 2015.
[15] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[16] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[17] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[18] Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[19] G Lerman and T Zhang. Probabilistic recovery of multiple subspaces in point clouds by geometric ℓ_p minimization. Preprint, 2010.
[20] Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, and Ole Winther. Auxiliary deep generative models. arXiv preprint arXiv:1602.05473, 2016.
[21] Aditya Menon and Cheng Soon Ong. Linking losses for density ratio and class-probability estimation. In International Conference on Machine Learning, pages 304–313, 2016.
[22] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. arXiv preprint arXiv:1507.00677, 2015.
[23] Shakir Mohamed and Balaji Lakshminarayanan. Learning in implicit generative models. arXiv preprint arXiv:1610.03483, 2016.
[24] Youssef Mroueh and Tom Sercu. Fisher GAN. In NIPS, 2017.
[25] Youssef Mroueh, Stephen Voinea, and Tomaso A Poggio. Learning with group invariant features: A kernel perspective. In Advances in Neural Information Processing Systems, pages 1558–1566, 2015.
[26] Youssef Mroueh, Tom Sercu, and Vaibhava Goel. McGan: Mean and covariance feature matching GAN. In ICML, 2017.
[27] Partha Niyogi, Stephen Smale, and Shmuel Weinberger. A topological view of unsupervised learning from noisy data. SIAM Journal on Computing, 40(3):646–663, 2011.
[28] Sebastian Nowozin, Botond Cseke, and Ryota Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271–279, 2016.
[29] Augustus Odena. Semi-supervised learning with generative adversarial networks. arXiv preprint arXiv:1606.01583, 2016.
[30] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[31] Anant Raj, Abhishek Kumar, Youssef Mroueh, P Thomas Fletcher, and Bernhard Schölkopf. Local group invariant representations via orbit embeddings. In AISTATS, 2017.
[32] Antti Rasmus, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems, pages 3546–3554, 2015.
[33] Salah Rifai, Yann Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. In NIPS, 2011.
[34] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, 2016.
[35] Cicero Nogueira dos Santos, Kahini Wadhawan, and Bowen Zhou. Learning loss functions for semi-supervised learning via discriminative adversarial networks. arXiv preprint arXiv:1707.02198, 2017.
[36] Patrice Y Simard, Yann A LeCun, John S Denker, and Bernard Victorri. Transformation invariance in pattern recognition: tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, pages 239–274. Springer, 1998.
[37] M. Spivak. A Comprehensive Introduction to Differential Geometry, volume 1. Publish or Perish, 3rd edition, 1999.
[38] Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative adversarial networks. arXiv preprint arXiv:1511.06390, 2015.
[39] Zhuowen Tu. Learning generative models via discriminative approaches. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pages 1–8. IEEE, 2007.
[40] Rene Vidal, Yi Ma, and Shankar Sastry. Generalized principal component analysis (GPCA). IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1945–1959, 2005.
[41] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. arXiv preprint arXiv:1506.02351, 2015.
6,785 | 7,138 | Approximation and Convergence Properties of
Generative Adversarial Learning
Shuang Liu
University of California, San Diego
[email protected]
Olivier Bousquet
Google Brain
[email protected]
Kamalika Chaudhuri
University of California, San Diego
[email protected]
Abstract
Generative adversarial networks (GAN) approximate a target data distribution by
jointly optimizing an objective function through a "two-player game" between a
generator and a discriminator. Despite their empirical success, however, two very
basic questions on how well they can approximate the target distribution remain
unanswered. First, it is not known how restricting the discriminator family affects
the approximation quality. Second, while a number of different objective functions
have been proposed, we do not understand when convergence to the global minima
of the objective function leads to convergence to the target distribution under
various notions of distributional convergence.
In this paper, we address these questions in a broad and unified setting by defining
a notion of adversarial divergences that includes a number of recently proposed
objective functions. We show that if the objective function is an adversarial
divergence with some additional conditions, then using a restricted discriminator
family has a moment-matching effect. Additionally, we show that for objective
functions that are strict adversarial divergences, convergence in the objective
function implies weak convergence, thus generalizing previous results.
1 Introduction
Generative adversarial networks (GANs) have attracted an enormous amount of recent attention in
machine learning. In a generative adversarial network, the goal is to produce an approximation to
a target data distribution from which only samples are available. This is done iteratively via two
components: a generator and a discriminator, which are usually implemented by neural networks.
The generator takes in random (usually Gaussian or uniform) noise as input and attempts to transform
it to match the target distribution; the discriminator aims to accurately discriminate between
samples from the target distribution and those produced by the generator. Estimation proceeds by
iteratively refining the generator and the discriminator to optimize an objective function until the
target distribution is indistinguishable from the distribution induced by the generator. The practical
success of GANs has led to a large volume of recent literature on variants which have many desirable
properties; examples are the f-GAN [10], the MMD-GAN [5, 9], the Wasserstein-GAN [2], among
many others.
In spite of their enormous practical success, unlike more traditional methods such as maximum
likelihood inference, GANs are theoretically rather poorly-understood. In particular, two very basic
questions on how well they can approximate the target distribution, even in the presence of a very
large number of samples and perfect optimization, remain largely unanswered. The first relates to the
role of the discriminator in the quality of the approximation. In practice, the discriminator is usually
restricted to belong to some family, and it is not understood in what sense this restriction affects the
distribution output by the generator. The second question relates to convergence; different variants of
GANs have been proposed that involve different objective functions (to be optimized by the generator
and the discriminator). However, it is not understood under what conditions minimizing the objective
function leads to a good approximation of the target distribution. More precisely, does a sequence
of distributions output by the generator that converges to the global minimum under the objective
function always converge to the target distribution under some standard notion of distributional
convergence?
In this work, we consider these two questions in a broad setting. We first characterize a very general
class of objective functions that we call adversarial divergences, and we show that they capture
the objective functions used by a variety of existing procedures that include the original GAN [7],
f-GAN [10], MMD-GAN [5, 9], WGAN [2], improved WGAN [8], as well as a class of entropic
regularized optimal transport problems [6]. We then define the class of strict adversarial divergences,
a subclass of adversarial divergences where the minimizer of the objective function is uniquely the
target distribution. This characterization allows us to address the two questions above in a unified
setting, and translate the results to an entire class of GANs with little effort.
First, we address the role of the discriminator in the approximation in Section 4. We show that if the
objective function is an adversarial divergence that obeys certain conditions, then using a restricted
class of discriminators has the effect of matching generalized moments. A concrete consequence of
this result is that in linear f-GANs, where the discriminator family is the set of all affine functions over
a vector of feature maps ψ, and the objective function is an f-GAN, the optimal distribution ν output
by the GAN will satisfy E_{x∼μ*}[ψ(x)] = E_{x∼ν}[ψ(x)] regardless of the specific f-divergence chosen
in the objective function. Furthermore, we show that a neural network GAN is just a supremum of
linear GANs, and therefore has the same moment-matching effect.

We next address convergence in Section 5. We show that convergence in an adversarial divergence
implies some standard notion of topological convergence. In particular, we show that provided an
objective function is a strict adversarial divergence, convergence to the target distribution μ* in the
objective function implies weak convergence of the output distribution to μ*. While convergence
properties of some isolated objective functions were known before [2], this result extends them to a
broad class of GANs. An additional consequence of this result is the observation that, as the Wasserstein
distance metrizes weak convergence of probability distributions (see e.g. [14]), Wasserstein-GANs have
the weakest¹ objective functions in the class of strict adversarial divergences.
2 Notations
We use bold constants (e.g., 0, 1, x₀) to denote constant functions. We denote by f ∘ g the function
composition of f and g. We denote by Y^X the set of functions mapping from the set X to the set Y. We
denote by μ ⊗ ν the product measure of μ and ν. We denote by int(X) the interior of the set X. We
denote by E_μ[f] the integral of f with respect to the measure μ.

Let f : ℝ → ℝ ∪ {+∞} be a convex function; we denote by dom f the effective domain of f,
that is, dom f = {x ∈ ℝ : f(x) < +∞}; and we denote by f* the convex conjugate of f, that is,
f*(x*) = sup_{x∈ℝ} {x*·x − f(x)}.

For a topological space Ω, we denote by C(Ω) the set of continuous functions on Ω, C_b(Ω) the set of
bounded continuous functions on Ω, rca(Ω) the set of finite signed regular Borel measures on Ω, and
P(Ω) the set of probability measures on Ω.

Given a non-empty subspace Y of a topological space X, denote by X/Y the quotient space equipped
with the quotient topology ∼_Y, where for any a, b ∈ X, a ∼_Y b if and only if a = b or a, b both
belong to Y. The equivalence class of each element a ∈ X is denoted as [a] = {b : a ∼_Y b}.
¹ Weakness is actually a desirable property, since it prevents the divergence from being too discriminative
(saturating), thus providing more information about how to modify the model to approximate the true distribution.

3 General Framework
Let μ* be the target data distribution from which we can draw samples. Our goal is to find a generative
model ν to approximate μ*. Informally, most GAN-style algorithms model this approximation as
solving the following problem

inf_ν sup_{f∈F} E_{x∼μ*, y∼ν} [f(x, y)],

where F is a class of functions. The process is usually considered adversarial in the sense that it
can be thought of as a two-player minimax game, where a generator ν is trying to mimic the true
distribution μ*, and an adversary f is trying to distinguish between the true and generated distributions.
However, another way to look at it is as the minimization of the following objective function:

ν ↦ sup_{f∈F} E_{x∼μ*, y∼ν} [f(x, y)].   (1)

This objective function measures how far the target distribution μ* is from the current estimate ν.
Hence, minimizing this function can lead to a good approximation of the target distribution μ*.
This leads us to the concept of adversarial divergence.
Definition 1 (Adversarial divergence). Let X be a topological space and F ⊆ C_b(X²), F ≠ ∅. An
adversarial divergence τ over X is a function

τ : P(X) × P(X) → ℝ ∪ {+∞},
(μ, ν) ↦ τ(μ ‖ ν) = sup_{f∈F} E_{μ⊗ν} [f].   (2)

Observe that in Definition 1, if we have a fixed target distribution μ*, then (2) reduces to the objective
function (1). Also, notice that because τ is the supremum of a family of linear functions (in each of
the variables μ and ν separately), it is convex in each of its variables.
Definition 1 captures the objective functions used by a variety of existing GAN-style procedures. In
practice, although the function class F can be complicated, it is usually a transformation of a simple
function class V, which is the set of discriminators or critics, as they have been called in the GAN
literature. We give some examples by specifying F and V for each objective function.

(a) GAN [7].
F = {x, y ↦ log(u(x)) + log(1 − u(y)) : u ∈ V},
V = (0, 1)^X ∩ C_b(X).

(b) f-GAN [10]. Let f : ℝ → ℝ ∪ {+∞} be a convex lower semi-continuous function. Assume
f*(x) ≥ x for any x ∈ ℝ, f* is continuously differentiable on int(dom f*), and there
exists x₀ ∈ int(dom f*) such that f*(x₀) = x₀.
F = {x, y ↦ v(x) − f*(v(y)) : v ∈ V},
V = (dom f*)^X ∩ C_b(X).

(c) MMD-GAN [5, 9]. Let k : X² → ℝ be a universal reproducing kernel. Let M be the set of
signed measures on X.
F = {x, y ↦ v(x) − v(y) : v ∈ V},
V = {x ↦ E_μ[k(x, ·)] : μ ∈ M, E_{μ⊗μ}[k] ≤ 1}.

(d) Wasserstein-GAN (WGAN) [2]. Assume X is a metric space.
F = {x, y ↦ v(x) − v(y) : v ∈ V},
V = {v ∈ C_b(X) : ‖v‖_Lip ≤ K},
where K is a positive constant and ‖·‖_Lip denotes the Lipschitz constant.

(e) WGAN-GP (Improved WGAN) [8]. Assume X is a convex subset of a Euclidean space.
F = {x, y ↦ v(x) − v(y) − λ E_{t∼U} [(‖∇v(tx + (1 − t)y)‖₂ − 1)^p] : v ∈ V},
V = C¹(X),
where U is the uniform distribution on [0, 1], λ is a positive constant, and p ∈ (1, ∞).

(f) (Regularized) Optimal Transport [6].² Let c : X² → ℝ be some transportation cost
function, and let ε ≥ 0 be the strength of the regularization. If ε = 0 (no regularization), then
F = {x, y ↦ u(x) + v(y) : (u, v) ∈ V},
V = {(u, v) ∈ C_b(X) × C_b(X) : u(x) + v(y) ≤ c(x, y) for any x, y ∈ X};   (3)
if ε > 0, then
F = {x, y ↦ u(x) + v(y) − ε exp((u(x) + v(y) − c(x, y))/ε) : u, v ∈ V},
V = C_b(X).   (4)
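As a concrete sanity check of the assumptions in example (b) (this worked instance is ours, not part of the original text), take f to be the generator of the KL divergence:

```latex
f(u) = u \log u, \qquad
f^*(t) = \sup_{u > 0}\,\{tu - u\log u\} = e^{t-1}, \qquad
\operatorname{dom} f^* = \mathbb{R}.
```

Here f*(t) = e^{t−1} ≥ t for all t ∈ ℝ (with equality at t = 1), f* is continuously differentiable on int(dom f*) = ℝ, and x₀ = 1 satisfies f*(x₀) = e⁰ = 1 = x₀, so all three assumptions hold.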
In order to study an adversarial divergence τ, it is critical to first understand at which points the divergence is minimized. More precisely, let τ be an adversarial divergence and μ* be the target probability
measure. We are interested in the set of probability measures that minimize the divergence τ when
the first argument of τ is set to μ*, i.e., the set arg min_ν τ(μ* ‖ ν) = {ν : τ(μ* ‖ ν) = inf_{ν′} τ(μ* ‖ ν′)}.
Formally, we define the set OPT_{τ,μ*} as follows.

Definition 2 (OPT_{τ,μ}). Let τ be an adversarial divergence over a topological space X, and let μ ∈ P(X).
Define OPT_{τ,μ} to be the set of probability measures that minimize the function τ(μ ‖ ·). That is,

OPT_{τ,μ} ≜ {ν ∈ P(X) : τ(μ ‖ ν) = inf_{ν′∈P(X)} τ(μ ‖ ν′)}.

Ideally, the target probability measure μ should be the one and only one that minimizes the objective
function. The notion of a strict adversarial divergence captures this property.

Definition 3 (Strict adversarial divergence). Let τ be an adversarial divergence over a topological
space X; τ is called a strict adversarial divergence if for any μ ∈ P(X), OPT_{τ,μ} = {μ}.
For example, if the underlying space X is a compact metric space, then examples (c) and (d) induce
metrics on P(X) (see, e.g., [12]), and are therefore strict adversarial divergences.

In the next two sections, we will answer two questions regarding the set OPT_{τ,μ}: how well do the
elements in OPT_{τ,μ} approximate the target distribution μ when restricting the class of discriminators? (Section 4); and does a sequence of distributions that converges in an adversarial divergence
also converge to OPT_{τ,μ} under some standard notion of distributional convergence? (Section 5)
4 Generalized Moment Matching

To motivate the discussion in this section, recall example (b) in Section 3. It can be shown that
under some mild conditions, τ, the objective function of the f-GAN, is actually the f-divergence, and
the minimizer of τ(μ* ‖ ·) is only μ* [10]. However, in practice, the discriminator class V is usually
implemented by a feedforward neural network, and it is known that a fixed neural network has limited
capacity (e.g., it cannot implement the set of all bounded continuous functions). Therefore, one
could ask what will happen if we restrict V to a sub-class V′? Obviously, one would expect μ* not to be
the unique minimizer of τ(μ* ‖ ·) anymore; that is, OPT_{τ,μ*} contains elements other than μ*. What
can we say about the elements of OPT_{τ,μ*} now? Are all of them close to μ* in a certain sense? In this
section we will answer these questions.
More formally, we consider $\mathcal{F} = \{m_\theta - r_\theta : \theta \in \Theta\}$ to be a function class indexed by a set $\Theta$. We can think of $\Theta$ as the parameter set of a feedforward neural network. Each $m_\theta$ is thought to be a matching between two distributions, in the sense that $\mu$ and $\nu$ are matched under $m_\theta$ if and only if $\mathbb{E}_{\mu \otimes \nu}[m_\theta] = 0$. In particular, if each $m_\theta$ corresponds to some function $v_\theta$ such that $m_\theta(x, y) = v_\theta(x) - v_\theta(y)$, then $\mu$ and $\nu$ are matched under $m_\theta$ if and only if some generalized moment of $\mu$ and $\nu$ are equal: $\mathbb{E}_\mu[v_\theta] = \mathbb{E}_\nu[v_\theta]$. Each $r_\theta$ can be thought of as a residual.
We will now relate the matching condition to the optimality of the divergence. In particular, define
    $M_\mu \triangleq \{\nu : \forall \theta \in \Theta,\ \mathbb{E}_\nu[v_\theta] = \mathbb{E}_\mu[v_\theta]\}.$
We will give sufficient conditions for members of $M_\mu$ to be in $\mathrm{OPT}_{\tau,\mu}$.
² To the best of our knowledge, neither (3) nor (4) was used in any GAN algorithm. However, since our focus in this paper is not implementing new algorithms, we leave experiments with this formulation for future work.
Theorem 4. Let $\mathcal{X}$ be a topological space, $\Theta \subseteq \mathbb{R}^n$, $\mathcal{V} = \{v_\theta \in C_b(\mathcal{X}) : \theta \in \Theta\}$, $\mathcal{R} = \{r_\theta \in C_b(\mathcal{X}) : \theta \in \Theta\}$. Let $m_\theta(x, y) = v_\theta(x) - v_\theta(y)$. If there exists $c \in \mathbb{R}$ such that for any $\mu, \nu \in \mathcal{P}(\mathcal{X})$, $\inf_{\theta \in \Theta} \mathbb{E}_{\mu \otimes \nu}[r_\theta] = c$ and there exists some $\theta^* \in \Theta$ such that $\mathbb{E}_{\mu \otimes \nu}[r_{\theta^*}] = c$ and $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] \ge 0$, then $\tau(\cdot \| \cdot) = \sup_{\theta \in \Theta} \mathbb{E}_{\cdot \otimes \cdot}[m_\theta - r_\theta]$ is an adversarial divergence over $\mathcal{X}$ and for any $\mu \in \mathcal{P}(\mathcal{X})$, $\mathrm{OPT}_{\tau,\mu} \supseteq M_\mu$.
We now review the examples (a)-(e) in Section 3, show how to write each $f_\theta \in \mathcal{F}$ as $m_\theta - r_\theta$, and specify $\theta^*$ in each case such that the conditions of Theorem 4 can be satisfied.
(a) GAN. Note that for any $x \in (0,1)$, $\log(1/(x(1-x))) \ge \log(4)$. Let $u_{\theta^*} \equiv \frac{1}{2}$, so that
    $f_\theta(x, y) = \log(u_\theta(x)) + \log(1 - u_\theta(y)) = \underbrace{\log(u_\theta(x)) - \log(u_\theta(y))}_{m_\theta(x,y)} - \underbrace{\log\big(1/(u_\theta(y)(1 - u_\theta(y)))\big)}_{r_\theta(x,y)}$;
note $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] = 0$ and $r_\theta(x, y) \ge \log(4) = r_{\theta^*}(x, y)$.
(b) f-GAN. Recall that $f^*(x) \ge x$ for any $x \in \mathbb{R}$ and $f^*(x_0) = x_0$. Let $v_{\theta^*} \equiv x_0$, so that
    $f_\theta(x, y) = v_\theta(x) - f^*(v_\theta(y)) = \underbrace{v_\theta(x) - v_\theta(y)}_{m_\theta(x,y)} - \underbrace{\big(f^*(v_\theta(y)) - v_\theta(y)\big)}_{r_\theta(x,y)}$;
note $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] = 0$ and $r_\theta(x, y) \ge 0 = r_{\theta^*}(x, y)$.
(c, d) MMD-GAN or Wasserstein-GAN. Let $v_{\theta^*} \equiv 0$, so that
    $f_\theta(x, y) = \underbrace{v_\theta(x) - v_\theta(y)}_{m_\theta(x,y)} - \underbrace{0}_{r_\theta(x,y)}$;   (5)
note $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] = 0$ and $r_\theta(x, y) = r_{\theta^*}(x, y) = 0$.
(e) WGAN-GP. Note that the function $x \mapsto x^p$ is nonnegative on $\mathbb{R}$. Let
    $v_{\theta^*} = \begin{cases} (x_1, x_2, \ldots, x_n) \mapsto \sum_{i=1}^n x_i / \sqrt{n}, & \text{if } \mathbb{E}_\mu[\sum_{i=1}^n x_i] \ge \mathbb{E}_\nu[\sum_{i=1}^n x_i], \\ (x_1, x_2, \ldots, x_n) \mapsto -\sum_{i=1}^n x_i / \sqrt{n}, & \text{otherwise}; \end{cases}$
    $f_\theta(x, y) = \underbrace{v_\theta(x) - v_\theta(y)}_{m_\theta(x,y)} - \underbrace{\lambda\, \mathbb{E}_{t \sim U}[(\|\nabla v_\theta(tx + (1-t)y)\|_2 - 1)^p]}_{r_\theta(x,y)}$;
note $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] \ge 0$ and $r_\theta(x, y) \ge 0 = r_{\theta^*}(x, y)$.
We now refine the previous result and show that under some additional conditions on $m_\theta$ and $r_\theta$, the optimal elements of $\tau$ are fully characterized by the matching condition, i.e., $\mathrm{OPT}_{\tau,\mu} = M_\mu$.
Theorem 5. Under the assumptions of Theorem 4, if $\theta^* \in \mathrm{int}(\Theta)$, both $\theta \mapsto \mathbb{E}_{\mu \otimes \nu}[m_\theta]$ and $\theta \mapsto \mathbb{E}_{\mu \otimes \nu}[r_\theta]$ have gradients at $\theta^*$, and
    $\mathbb{E}_{\mu \otimes \nu}[m_{\theta^*}] = 0 \text{ and } \big(\exists \theta',\ \mathbb{E}_{\mu \otimes \nu}[m_{\theta'}] \ne 0\big) \implies \nabla_\theta\, \mathbb{E}_{\mu \otimes \nu}[m_\theta]\big|_{\theta = \theta^*} \ne 0,$   (6)
then for any $\mu \in \mathcal{P}(\mathcal{X})$, $\mathrm{OPT}_{\tau,\mu} = M_\mu$.
We remark that Theorem 4 is relatively intuitive, while Theorem 5 requires extra conditions and is quite counter-intuitive, especially for algorithms like f-GANs.
4.1 Example: Linear f-GAN
We first consider a simple algorithm called linear f-GAN. Suppose we are provided with a feature map $\psi$ that maps each point $x$ in the sample space $\mathcal{X}$ to a feature vector $(\psi_1(x), \psi_2(x), \ldots, \psi_n(x))$ where each $\psi_i \in C_b(\mathcal{X})$. We are satisfied that any distribution $\nu$ is a good approximation of the target distribution $\mu$ as long as $\mathbb{E}_\nu[\psi] = \mathbb{E}_\mu[\psi]$. For example, if $\mathcal{X} \subseteq \mathbb{R}$ and $\psi_k(x) = x^k$, to say $\mathbb{E}_\nu[\psi] = \mathbb{E}_\mu[\psi]$ is equivalent to saying the first $n$ moments of $\mu$ and $\nu$ are matched. Recall that in the standard f-GAN (example (b) in Section 3), $\mathcal{V} = (\mathrm{dom}\, f^*)^{\mathcal{X}} \cap C_b(\mathcal{X})$. Now instead of using the discriminator class $\mathcal{V}$, we use a restricted discriminator class $\mathcal{V}' \subseteq \mathcal{V}$, containing the linear (or more precisely, affine) transformations of $\psi$: $\mathcal{V}' = \{\theta^\top (\psi; 1) : \theta \in \Theta\}$, where $\Theta = \{\theta \in \mathbb{R}^{n+1} : \forall x \in \mathcal{X},\ \theta^\top (\psi(x); 1) \in \mathrm{dom}\, f^*\}$. We will show that now $\mathrm{OPT}_{\tau,\mu}$ contains exactly those $\nu$ such that $\mathbb{E}_\nu[\psi] = \mathbb{E}_\mu[\psi]$, regardless of the specific $f$ chosen. Formally,
Corollary 6 (linear f-GAN). Let $\mathcal{X}$ be a compact topological space. Let $f^*$ be a function as defined in example (b) of Section 3. Let $\psi = (\psi_i)_{i=1}^n$ be a vector of continuously differentiable functions on $\mathcal{X}$. Let $\Theta = \{\theta \in \mathbb{R}^{n+1} : \forall x \in \mathcal{X},\ \theta^\top (\psi(x); 1) \in \mathrm{dom}\, f^*\}$. Let $\tau$ be the objective function of the linear f-GAN:
    $\tau(\mu \| \nu) = \sup_{\theta \in \Theta} \mathbb{E}_\mu[\theta^\top (\psi; 1)] - \mathbb{E}_\nu[f^*(\theta^\top (\psi; 1))].$
Then for any $\mu \in \mathcal{P}(\mathcal{X})$, $\mathrm{OPT}_{\tau,\mu} = \{\nu : \tau(\mu \| \nu) = 0\} = \{\nu : \mathbb{E}_\nu[\psi] = \mathbb{E}_\mu[\psi]\} \ni \mu$.
A very concrete example of Corollary 6 could be, for example, the linear KL-GAN, where $f(u) = u \log u$, $f^*(t) = \exp(t - 1)$, $\psi = (\psi_i)_{i=1}^n$, $\Theta = \mathbb{R}^{n+1}$. The objective function is
    $\tau(\mu \| \nu) = \sup_{\theta \in \mathbb{R}^{n+1}} \mathbb{E}_\mu[\theta^\top (\psi; 1)] - \mathbb{E}_\nu[\exp(\theta^\top (\psi; 1) - 1)].$
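As a quick numerical illustration of Corollary 6, the sketch below maximizes this linear KL-GAN objective over $\theta$ by plain gradient ascent on samples; the feature map $\psi(x) = (x, x^2)$, the optimizer, and all step sizes are our own illustrative choices rather than details from the text. When the chosen moments of $\mu$ and $\nu$ match, the estimated divergence is approximately zero; otherwise it is positive.

```python
import numpy as np

def feats(x):
    # psi(x) = (x, x^2), augmented with the constant 1 to form (psi; 1)
    return np.stack([x, x ** 2, np.ones_like(x)], axis=1)

def linear_kl_gan(mu, nu, steps=2000, lr=0.02):
    """Sample estimate of tau(mu||nu) = sup_theta E_mu[theta^T(psi;1)] - E_nu[exp(theta^T(psi;1)-1)].
    The objective is concave in theta, so plain gradient ascent suffices for this sketch."""
    fmu, fnu = feats(mu), feats(nu)
    theta = np.zeros(fmu.shape[1])
    for _ in range(steps):
        e = np.exp(fnu @ theta - 1.0)
        theta += lr * (fmu.mean(0) - (e[:, None] * fnu).mean(0))
    return (fmu @ theta).mean() - np.exp(fnu @ theta - 1.0).mean()

rng = np.random.default_rng(1)
mu = rng.normal(0.0, 1.0, 4000)
print(linear_kl_gan(mu, rng.normal(0.0, 1.0, 4000)))  # matched moments: approximately 0
print(linear_kl_gan(mu, rng.normal(0.5, 1.0, 4000)))  # mismatched mean: strictly positive
```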
4.2 Example: Neural Network f-GAN
Next we consider a more general and practical example: an f-GAN where the discriminator class $\mathcal{V}' = \{v_\theta : \theta \in \Theta\}$ is implemented through a feedforward neural network with weight parameter set $\Theta$. We assume that all the activation functions are continuously differentiable (e.g., sigmoid, tanh), and the last layer of the network is a linear transformation plus a bias. We also assume $\mathrm{dom}\, f^* = \mathbb{R}$ (e.g., the KL-GAN where $f^*(t) = \exp(t - 1)$).
Now observe that when all the weights before the last layer are fixed, the last layer acts as a discriminator in a linear f-GAN. More precisely, let $\Theta_{\mathrm{pre}}$ be the index set for the weights before the last layer. Then each $\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}$ corresponds to a feature map $\psi_{\theta_{\mathrm{pre}}}$. Let the linear f-GAN that corresponds to $\psi_{\theta_{\mathrm{pre}}}$ be $\tau_{\theta_{\mathrm{pre}}}$; the adversarial divergence $\tau$ induced by the Neural Network f-GAN is
    $\tau(\mu \| \nu) = \sup_{\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}} \tau_{\theta_{\mathrm{pre}}}(\mu \| \nu).$
Clearly $\mathrm{OPT}_{\tau,\mu} \supseteq \bigcap_{\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}} \mathrm{OPT}_{\tau_{\theta_{\mathrm{pre}}},\mu}$. For the other direction, note that by Corollary 6, for any $\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}$, $\tau_{\theta_{\mathrm{pre}}}(\mu \| \cdot) \ge 0$ and $\tau_{\theta_{\mathrm{pre}}}(\mu \| \mu) = 0$. Therefore $\tau(\mu \| \cdot) \ge 0$ and $\tau(\mu \| \mu) = 0$. If $\nu \in \mathrm{OPT}_{\tau,\mu}$, then $\tau(\mu \| \nu) = 0$. As a consequence, $\tau_{\theta_{\mathrm{pre}}}(\mu \| \nu) = 0$ for any $\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}$. Therefore $\mathrm{OPT}_{\tau,\mu} \subseteq \bigcap_{\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}} \mathrm{OPT}_{\tau_{\theta_{\mathrm{pre}}},\mu}$. Therefore, by Corollary 6,
    $\mathrm{OPT}_{\tau,\mu} = \bigcap_{\theta_{\mathrm{pre}} \in \Theta_{\mathrm{pre}}} \mathrm{OPT}_{\tau_{\theta_{\mathrm{pre}}},\mu} = \{\nu : \forall \theta \in \Theta,\ \mathbb{E}_\nu[v_\theta] = \mathbb{E}_\mu[v_\theta]\}.$
That is, the minimizers of the Neural Network f-GAN are exactly those distributions that are indistinguishable under the expectation of any discriminator network $v_\theta$.
5 Convergence
To motivate the discussion in this section, consider the following question. Let $\delta_{x_0}$ be the delta distribution at $x_0 \in \mathbb{R}$, that is, $x = x_0$ with probability 1. Now, does the sequence of delta distributions $\delta_{1/n}$ converge to $\delta_1$? Almost all people would answer no. However, does the sequence of delta distributions $\delta_{1/n}$ converge to $\delta_0$? Most people would answer yes based on the intuition that $1/n \to 0$ and so does the sequence of corresponding delta distributions, even though the support of $\delta_{1/n}$ never has any intersection with the support of $\delta_0$. Therefore, convergence can be defined for distributions not only in a point-wise way, but in a way that takes into consideration the underlying structure of the sample space.
Now we return to our adversarial divergence framework. Given an adversarial divergence $\tau$, is it possible that $\tau(\delta_1 \| \delta_{1/n})$ converges to the global minimum of $\tau(\delta_1 \| \cdot)$? How do we define convergence to a set of points instead of only one point, in order to explain the convergence behaviour of any adversarial divergence? In this section we will answer these questions.
We start from two standard notions from functional analysis.
Definition 7 (Weak-* topology on $\mathcal{P}(\mathcal{X})$ (see e.g. [11])). Let $\mathcal{X}$ be a compact metric space. By associating with each $\mu \in \mathrm{rca}(\mathcal{X})$ a linear function $f \mapsto \mathbb{E}_\mu[f]$ on $C(\mathcal{X})$, we have that $\mathrm{rca}(\mathcal{X})$ is the continuous dual of $C(\mathcal{X})$ with respect to the uniform norm on $C(\mathcal{X})$ (see e.g. [4]). Therefore we can equip $\mathrm{rca}(\mathcal{X})$ (and therefore $\mathcal{P}(\mathcal{X})$) with a weak-* topology, which is the coarsest topology on $\mathrm{rca}(\mathcal{X})$ such that $\{\mu \mapsto \mathbb{E}_\mu[f] : f \in C(\mathcal{X})\}$ is a set of continuous linear functions on $\mathrm{rca}(\mathcal{X})$.
Definition 8 (Weak convergence of probability measures (see e.g. [11])). Let $\mathcal{X}$ be a compact metric space. A sequence of probability measures $(\mu_n)$ in $\mathcal{P}(\mathcal{X})$ is said to weakly converge to a measure $\mu \in \mathcal{P}(\mathcal{X})$ if $\forall f \in C(\mathcal{X})$, $\mathbb{E}_{\mu_n}[f] \to \mathbb{E}_\mu[f]$; or equivalently, if $(\mu_n)$ is weak-* convergent to $\mu$.
The definitions of weak-* topology and weak convergence respect the topological structure of the sample space. For example, it is easy to check that the sequence of delta distributions $\delta_{1/n}$ weakly converges to $\delta_0$, but not to $\delta_1$.
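This is easy to verify numerically. The sketch below uses SciPy's one-dimensional Wasserstein distance, which metrizes weak convergence here, together with an arbitrary bounded continuous test function as in Definition 8:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Represent the delta distribution delta_{1/n} as a one-point sample.
for n in [1, 10, 100, 1000]:
    d_to_0 = wasserstein_distance([1.0 / n], [0.0])  # tends to 0: weak convergence to delta_0
    d_to_1 = wasserstein_distance([1.0 / n], [1.0])  # tends to 1: no convergence to delta_1
    print(n, d_to_0, d_to_1)

# The test-function view of Definition 8 gives the same verdict:
f = np.cos  # any f in C(X)
print([float(f(1.0 / n) - f(0.0)) for n in [1, 10, 100, 1000]])  # E_{delta_{1/n}}[f] -> E_{delta_0}[f]
```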
Now note that Definition 8 only defines weak convergence of a sequence of probability measures to a single target measure. Here we generalize the definition from a single target measure to a set of target measures through the quotient topology as follows.
Definition 9 (Weak convergence of probability measures to a set). Let $\mathcal{X}$ be a compact metric space, equip $\mathcal{P}(\mathcal{X})$ with the weak-* topology, and let $A$ be a non-empty subspace of $\mathcal{P}(\mathcal{X})$. A sequence of probability measures $(\mu_n)$ in $\mathcal{P}(\mathcal{X})$ is said to weakly converge to the set $A$ if $([\mu_n])$ converges to $[A]$ in the quotient space $\mathcal{P}(\mathcal{X})/A$.
With everything properly defined, we are now ready to state our convergence result. Note that an adversarial divergence is not necessarily a metric, and therefore does not necessarily induce a topology. However, convergence in an adversarial divergence can still imply some type of topological convergence. More precisely, we show a convergence result that holds for any adversarial divergence as long as the sample space is a compact metric space. Informally, we show that for any target probability measure $\mu$, if $\tau(\mu \| \nu_n)$ converges to the global minimum of $\tau(\mu \| \cdot)$, then $\nu_n$ weakly converges to the set of measures that achieve the global minimum. Formally,
Theorem 10. Let $\mathcal{X}$ be a compact metric space, $\tau$ be an adversarial divergence over $\mathcal{X}$, and $\mu \in \mathcal{P}(\mathcal{X})$; then $\mathrm{OPT}_{\tau,\mu} \ne \emptyset$. Let $(\nu_n)$ be a sequence of probability measures in $\mathcal{P}(\mathcal{X})$. If $\tau(\mu \| \nu_n) \to \inf_{\nu'} \tau(\mu \| \nu')$, then $(\nu_n)$ weakly converges to the set $\mathrm{OPT}_{\tau,\mu}$.
As a special case of Theorem 10, if $\tau$ is a strict adversarial divergence, i.e., $\mathrm{OPT}_{\tau,\mu} = \{\mu\}$, then converging to the minimizer of the objective function implies the usual weak convergence to the target probability measure. For example, it can be checked that the objective function of f-GAN is a strict adversarial divergence; therefore converging in the objective function of an f-GAN implies the usual weak convergence to the target probability measure.
To compare this result with our intuition, we return to the example of a sequence of delta distributions and show that as long as $\tau$ is a strict adversarial divergence, $\tau(\delta_1 \| \delta_{1/n})$ does not converge to the global minimum of $\tau(\delta_1 \| \cdot)$. Observe that if $\tau(\delta_1 \| \delta_{1/n})$ converged to the global minimum of $\tau(\delta_1 \| \cdot)$, then according to Theorem 10, $\delta_{1/n}$ would weakly converge to $\delta_1$, which leads to a contradiction.
However, Theorem 10 does more than exclude undesired possibilities. It also enables us to give general statements about the structure of the class of adversarial divergences. The structural result can be easily stated under the notion of relative strength between adversarial divergences, which is defined as follows.
Definition 11 (Relative strength between adversarial divergences). Let $\tau_1$ and $\tau_2$ be two adversarial divergences. If for any sequence of probability measures $(\mu_n)$ and any target probability measure $\mu$, $\tau_1(\mu \| \mu_n) \to \inf_\nu \tau_1(\mu \| \nu)$ implies $\tau_2(\mu \| \mu_n) \to \inf_\nu \tau_2(\mu \| \nu)$, then we say $\tau_1$ is stronger than $\tau_2$ and $\tau_2$ is weaker than $\tau_1$. We say $\tau_1$ is equivalent to $\tau_2$ if $\tau_1$ is both stronger and weaker than $\tau_2$. We say $\tau_1$ is strictly stronger (strictly weaker) than $\tau_2$ if $\tau_1$ is stronger (weaker) than $\tau_2$ but not equivalent. We say $\tau_1$ and $\tau_2$ are not comparable if $\tau_1$ is neither stronger nor weaker than $\tau_2$.
Not much is known about the relative strength between different adversarial divergences. If the
underlying sample space is nice (e.g., subset of Euclidean space), then the variational (GAN-style)
formulation of f -divergences using bounded continuous functions coincides with the original definition [15], and therefore f -divergences are adversarial divergences. [2] showed that the KL-divergence
is stronger than the JS-divergence, which is equivalent to the total variation distance, which is strictly
stronger than the Wasserstein-1 distance.
[Figure 1: Structure of the class of strict adversarial divergences]
However, the novel fact is that we can reach the weakest strict adversarial divergence. Indeed, one implication of Theorem 10 is that if $\mathcal{X}$ is a compact metric space and $\tau$ is a strict adversarial divergence over $\mathcal{X}$, then $\tau$-convergence implies the usual weak convergence on probability measures. In particular, since the Wasserstein distance metrizes weak convergence of probability distributions (see e.g. [14]), as a direct consequence of Theorem 10, the Wasserstein distance is in the equivalence class of the weakest strict adversarial divergences. In the other direction, there exists a trivial strict adversarial divergence
    $\tau_{\mathrm{Trivial}}(\mu \| \nu) \triangleq \begin{cases} 0, & \text{if } \mu = \nu, \\ +\infty, & \text{otherwise}, \end{cases}$   (7)
that is stronger than any other strict adversarial divergence. We now incorporate our convergence
results with some previous results and get the following structural result.
Corollary 12. The class of strict adversarial divergences over a bounded and closed subset of a Euclidean space has the structure shown in Figure 1, where $\tau_{\mathrm{Trivial}}$ is defined as in (7), $\tau_{\mathrm{MMD}}$ corresponds to example (c) in Section 3, $\tau_{\mathrm{Wasserstein}}$ corresponds to example (d) in Section 3, and $\tau_{\mathrm{KL}}$, $\tau_{\mathrm{Reverse\text{-}KL}}$, $\tau_{\mathrm{TV}}$, $\tau_{\mathrm{JS}}$, $\tau_{\mathrm{Hellinger}}$ correspond to example (b) in Section 3 with $f(x)$ being $x \log x$, $-\log x$, $\frac{1}{2}|x - 1|$, $(x + 1)\log(\frac{x+1}{2}) + x \log x$, and $(\sqrt{x} - 1)^2$, respectively. Each rectangle in Figure 1 represents an equivalence class, inside of which are some examples. In particular, $\tau_{\mathrm{Trivial}}$ is in the equivalence class of the strongest strict adversarial divergences, while $\tau_{\mathrm{MMD}}$ and $\tau_{\mathrm{Wasserstein}}$ are in the equivalence class of the weakest strict adversarial divergences.
6 Related Work
There has been an explosion of work on GANs over the past couple of years; however, most of the work has been empirical in nature. A body of literature has looked at designing variants of GANs which use different objective functions. Examples include [10], which proposes using the f-divergence between the target $\mu^*$ and the generated distribution $\nu$, and [5, 9], which propose the MMD distance. Inspired by previous work, we identify a family of GAN-style objective functions in full generality and show general properties of the objective functions in this family.
There has also been some work on comparing different GAN-style objective functions in terms of their convergence properties, either in a GAN-related setting [2], or in a general IPM setting [12]. Unlike these results, which look at the relationship between several specific strict adversarial divergences, our results apply to an entire class of GAN-style objective functions and establish their convergence properties. For example, [2] shows that the KL-divergence, JS-divergence, and total-variation distance are all stronger than the Wasserstein distance, while our results generalize this part of their result and say that any strict adversarial divergence is stronger than the Wasserstein distance and its equivalences. Furthermore, our results also apply to non-strict adversarial divergences.
That being said, this does not mean our results are a complete generalization of previous convergence results such as [2, 12]. Our results do not provide any method to compare two strict adversarial divergences if neither of them is equivalent to the Wasserstein distance or the trivial divergence. In contrast, [2] shows that the KL-divergence is stronger than the JS-divergence, which is equivalent to the total variation distance, which is strictly stronger than the Wasserstein-1 distance.
Finally, there has been some additional theoretical literature on understanding GANs, which consider
orthogonal aspects of the problem. [3] address the question of whether we can achieve generalization
bounds when training GANs. [13] focus on optimizing the estimating power of kernel distances. [5]
study generalization bounds for MMD-GAN in terms of fat-shattering dimension.
7 Discussion and Conclusions
In conclusion, our results provide insights on the cost or loss functions that should be used in GANs. The choice of cost function plays a very important role in this case, more so, for example, than data domains or network architectures. For example, most works still use the DCGAN architecture, while changing the cost functions to achieve different levels of performance, and which cost function is better is still a matter of debate. In particular, we provide a framework for studying many different GAN criteria in a way that makes them more directly comparable, and under this framework, we study both approximation and convergence properties of various loss functions.
8 Acknowledgments
We thank Iliya Tolstikhin, Sylvain Gelly, and Robert Williamson for helpful discussions. The work
of KC and SL were partially supported by NSF under IIS 1617157.
References
[1] C. D. Aliprantis and O. Burkinshaw. Principles of real analysis. Academic Press, 1998.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. CoRR, abs/1701.07875, 2017.
[3] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang. Generalization and equilibrium in generative
adversarial nets (gans). CoRR, abs/1703.00573, 2017.
[4] H. G. Dales, J. F.K. Dashiell, A.-M. Lau, and D. Strauss. Banach Spaces of Continuous
Functions as Dual Spaces. CMS Books in Mathematics. Springer International Publishing,
2016.
[5] G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via
maximum mean discrepancy optimization. In UAI 2015.
[6] A. Genevay, M. Cuturi, G. Peyré, and F. R. Bach. Stochastic optimization for large-scale
optimal transport. In NIPS 2016.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and
Y. Bengio. Generative adversarial nets. In NIPS 2014.
[8] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of
wasserstein gans. CoRR, abs/1704.00028, 2017.
[9] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML 2015.
[10] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using
variational divergence minimization. In NIPS 2016.
[11] W. Rudin. Functional Analysis. International Series in Pure and Applied Mathematics. McGraw-Hill, Inc., 1991.
[12] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert
space embeddings and metrics on probability measures. Journal of Machine Learning Research,
11:1517?1561, 2010.
[13] D. J. Sutherland, H. F. Tung, H. Strathmann, S. De, A. Ramdas, A. J. Smola, and A. Gretton.
Generative models and model criticism via optimized maximum mean discrepancy. In ICLR
2017.
[14] C. Villani. Optimal transport, old and new. Grundlehren der mathematischen Wissenschaften.
Springer-Verlag Berlin Heidelberg, 2009.
[15] Y. Wu. Lecture notes: Information-theoretic methods for high-dimensional statistics. 2017.
6,786 | 7,139 | From Bayesian Sparsity to Gated Recurrent Nets
Hao He
Massachusetts Institute of Technology
[email protected]
Satoshi Ikehata
National Institute of Informatics
[email protected]
Bo Xin
Microsoft Research, Beijing, China
[email protected]
David Wipf
Microsoft Research, Beijing, China
[email protected]
Abstract
The iterations of many first-order algorithms, when applied to minimizing common
regularized regression functions, often resemble neural network layers with prespecified weights. This observation has prompted the development of learningbased approaches that purport to replace these iterations with enhanced surrogates
forged as DNN models from available training data. For example, important NP-hard sparse estimation problems have recently benefitted from this genre of upgrade,
with simple feedforward or recurrent networks ousting proximal gradient-based
iterations. Analogously, this paper demonstrates that more powerful Bayesian
algorithms for promoting sparsity, which rely on complex multi-loop majorizationminimization techniques, mirror the structure of more sophisticated long short-term
memory (LSTM) networks, or alternative gated feedback networks previously
designed for sequence prediction. As part of this development, we examine the
parallels between latent variable trajectories operating across multiple time-scales
during optimization, and the activations within deep network structures designed
to adaptively model such characteristic sequences. The resulting insights lead to
a novel sparse estimation system that, when granted training data, can estimate
optimal solutions efficiently in regimes where other algorithms fail, including
practical direction-of-arrival (DOA) and 3D geometry recovery problems. The
underlying principles we expose are also suggestive of a learning process for a
richer class of multi-loop algorithms in other domains.
1 Introduction
Many practical iterative algorithms for minimizing an energy function $\mathcal{L}_y(x)$, parameterized by some vector $y$, adopt the updating prescription
    $x^{(t+1)} = f\big(A x^{(t)} + B y\big),$   (1)
(1)
where t is the iteration count, A and B are fixed matrices/filters, and f is a point-wise nonlinear
operator. When we treat By as a bias or exogenous input, then the progression of these iterations
through time resembles activations passing through the layers (indexed by t) of a deep neural network
(DNN) [20, 30, 34, 38]. It then naturally begs the question: If we have access to an ensemble of pairs
$\{y, x^*\}$, where $x^* = \arg\min_x \mathcal{L}_y(x)$, can we train an appropriately structured DNN to produce a minimum of $\mathcal{L}_y(x)$ when presented with an arbitrary new $y$ as input? If $A$ and $B$ are fixed for all $t$,
this process can be interpreted as training a recurrent neural network (RNN), while if they vary, a
deep feedforward network with independent weights on each layer is a more apt description.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Although many of our conclusions may ultimately have broader implications, in this work we focus
on minimizing the ubiquitous sparse estimation problem
    $\mathcal{L}_y(x) = \|y - \Phi x\|_2^2 + \lambda \|x\|_0,$   (2)
where $\Phi \in \mathbb{R}^{n \times m}$ is an overcomplete matrix of feature vectors, $\|\cdot\|_0$ is the $\ell_0$ norm equal to a count of the nonzero elements in a vector, and $\lambda > 0$ is a trade-off parameter. Although crucial to many applications [2, 9, 13, 17, 23, 27], solving (2) is NP-hard, and therefore efficient approximations are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as $\ell_1$-norm regularization [4, 8, 32] and many flavors of iterative hard-thresholding (IHT) [5, 6].
In most cases, these approximate algorithms can be implemented via (1), where $A$ and $B$ are functions of $\Phi$, and the nonlinearity $f$ is, for example, a hard-thresholding operator for IHT or soft-thresholding for convex relaxations. However, the Achilles' heel of all these approaches is that they will generally not converge to good approximate minimizers of (2) if $\Phi$ has columns with a high degree of correlation [5, 8], which is unfortunately often the case in practice [35].
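For concreteness, the following sketch implements one such algorithm, iterative hard thresholding, written explicitly in the form (1); the step size $1/\alpha$ and the fixed sparsity level $k$ are conventional choices from the IHT literature rather than details fixed by the text.

```python
import numpy as np

def iht(y, Phi, k, iters=200):
    """Iterative hard thresholding for (2): x <- H_k(x + (1/alpha) Phi^T (y - Phi x)).
    This matches template (1) with A = I - (1/alpha) Phi^T Phi, B = (1/alpha) Phi^T,
    and f = H_k, the operator that keeps the k largest-magnitude entries."""
    alpha = np.linalg.norm(Phi.T @ Phi, 2)  # spectral norm, a standard stable step-size choice
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = x + Phi.T @ (y - Phi @ x) / alpha
        z[np.argsort(np.abs(z))[:-k]] = 0.0  # zero all but the k largest magnitudes
        x = z
    return x
```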
To mitigate the effects of such correlations, we could leverage the aforementioned correspondence
with common DNN structures to learn something like a correlation-invariant algorithm or update
rules [38], although in this scenario our starting point would be an algorithmic format with known
deficiencies. But if our ultimate goal is to learn a new sparse estimation algorithm that efficiently
compensates for structure in $\Phi$, then it seems reasonable to invoke iterative algorithms known a priori
to handle such correlations directly as our template for learned network layers. One important example
is sparse Bayesian learning (SBL) [33], which has been shown to solve (2) using a principled, multi-loop majorization-minimization approach [22] even in cases where $\Phi$ displays strong correlations
[35]. Herein we demonstrate that, when judiciously unfolded, SBL iterations can be formed into
variants of long short-term memory (LSTM) cells, one of the more popular recurrent deep neural
network architectures [21], or gated extensions thereof [12]. The resulting network dramatically
outperforms existing methods in solving (2) with a minimal computational budget. Our high-level
contributions can be summarized as follows:
• Quite surprisingly, we demonstrate that the SBL objective, which explicitly compensates for
correlated dictionaries, can be optimized using iteration structures that map directly to popular
LSTM cells despite its radically different origin. This association significantly broadens recent
work connecting elementary, one-step iterative sparsity algorithms like (1) with simple recurrent
or feedforward deep network architectures [20, 30, 34, 38].
• At its core, any SBL algorithm requires coordinating inner- and outer-loop computations that
produce expensive latent posterior variances (or related, derived quantities) and optimized
coefficient estimates respectively. Although this process can in principle be accommodated via
canonical LSTM cells, such an implementation will enforce that computation of latent variables
rigidly map to predefined subnetworks corresponding with various gating structures, ultimately
administering a fixed schedule of switching between loops. To provide greater flexibility in
coordinating inner- and outer-loops, we propose a richer gated-feedback LSTM structure for
sparse estimation.
• We achieve state-of-the-art performance on several empirical tasks, including direction-of-arrival (DOA) estimation [28] and 3D geometry recovery via photometric stereo [37]. In
these and other cases, our approach produces higher accuracy estimates at a fraction of the
computational budget. These results are facilitated by a novel online data generation process.
• Although learning-to-learn style approaches [1, 20, 30, 34] have been commonly applied to
relatively simple gradient descent optimization templates, this is the first successful attempt
we are aware of to learn a complex, multi-loop, majorization-minimization algorithm [22]. We
envision that such a strategy can have wide-ranging implications beyond the sparse estimation
problems explored herein given that it is often not obvious how to optimally tune loop execution
to balance both complexity and estimation accuracy in practice.
2 Connecting SBL and LSTM Networks
This section first reviews the basic SBL model, followed by an algorithmic characterization of how correlation structure can be handled during sparse estimation. Later we derive specialized SBL update rules that reveal a close association with LSTM cells.
2.1 Original SBL Model
Given an observed vector $y \in \mathbb{R}^n$ and feature dictionary $\Phi \in \mathbb{R}^{n \times m}$, SBL assumes the Gaussian likelihood model and a parameterized zero-mean Gaussian prior for the unknown coefficients $x \in \mathbb{R}^m$ given by
    $p(y|x) \propto \exp\Big[-\tfrac{1}{2\lambda}\|y - \Phi x\|_2^2\Big], \quad \text{and} \quad p(x; \gamma) \propto \exp\Big[-\tfrac{1}{2} x^\top \Gamma^{-1} x\Big], \quad \Gamma \triangleq \mathrm{diag}[\gamma],$   (3)
where $\lambda > 0$ is a fixed variance factor and $\gamma$ denotes a vector of unknown hyperparameters [33].
Because both likelihood and prior are Gaussian, the posterior $p(x|y; \gamma)$ is also Gaussian, with mean $\hat{x}$ satisfying
    $\hat{x} = \Gamma \Phi^\top \Sigma_y^{-1} y, \quad \text{with } \Sigma_y \triangleq \Phi \Gamma \Phi^\top + \lambda I.$   (4)
Given the left-hand-side multiplication by $\Gamma$ in (4), $\hat{x}$ will have a matching sparsity profile or support pattern as $\gamma$, meaning that the locations of zero-valued elements will align, or $\mathrm{supp}[\hat{x}] = \mathrm{supp}[\gamma]$. Ultimately then, the SBL strategy shifts from directly searching for some optimally sparse $\hat{x}$, to an optimally sparse $\gamma$. For this purpose we marginalize over $x$ (treating it initially as hidden or nuisance data) and then maximize the resulting type-II likelihood function with respect to $\gamma$ [26]. Conveniently, the resulting convolution-of-Gaussians integral is available in closed-form [33] such that we can equivalently minimize the negative log-likelihood
    $\mathcal{L}(\gamma) = -\log \int p(y|x)\, p(x; \gamma)\, dx \;\equiv\; y^\top \Sigma_y^{-1} y + \log|\Sigma_y|.$   (5)
Given an optimal $\gamma$ so obtained, we can compute the posterior mean estimator $\hat{x}$ via (4). Equivalently, this same posterior mean estimator can be obtained by an iterative reweighted $\ell_1$ process described next that exposes subtle yet potent sparsity-promotion mechanisms.
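A minimal sketch of these computations for a fixed $\gamma$ appears below; the EM-style update $\gamma_i \leftarrow \hat{x}_i^2 + (\Sigma_x)_{ii}$ mentioned in the comments is the standard SBL variant from the literature, included only to indicate how the pieces fit together.

```python
import numpy as np

def sbl_posterior_and_cost(y, Phi, gamma, lam):
    """Posterior mean (4) and type-II negative log-likelihood (5) for a fixed gamma.
    Alternating this computation with the standard EM update
    gamma_i <- xhat_i**2 + Sigma_x[i, i] yields one classic SBL implementation."""
    n = len(y)
    Gamma = np.diag(gamma)
    Sigma_y = Phi @ Gamma @ Phi.T + lam * np.eye(n)        # Sigma_y = Phi Gamma Phi^T + lam I
    xhat = Gamma @ Phi.T @ np.linalg.solve(Sigma_y, y)     # posterior mean, eq. (4)
    Sigma_x = Gamma - Gamma @ Phi.T @ np.linalg.solve(Sigma_y, Phi @ Gamma)  # posterior covariance
    _, logdet = np.linalg.slogdet(Sigma_y)
    cost = y @ np.linalg.solve(Sigma_y, y) + logdet        # negative log-likelihood, eq. (5)
    return xhat, Sigma_x, cost
```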
2.2 Iterative Reweighted $\ell_1$ Implementation
Although not originally derived this way, SBL can be implemented using a modified form of iterative reweighted $\ell_1$-norm optimization that exposes its agency for producing sparse estimates. In general, if we replace the $\ell_0$ norm from (2) with any smooth approximation $g(|x|)$, where $g$ is a concave, non-decreasing function and $|\cdot|$ applies elementwise, then cost function descent¹ can be guaranteed using iterations of the form [36]
    $x^{(t+1)} \leftarrow \arg\min_x \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda \sum_i w_i^{(t)} |x_i|, \qquad w_i^{(t+1)} \leftarrow \partial g(u)/\partial u_i \big|_{u = |x^{(t+1)}|},\ \forall i.$   (6)
This process can be viewed as a multi-loop, majorization-minimization algorithm [22] (a generalization of the EM algorithm [15]), whereby the inner-loop involves computing $x^{(t+1)}$ by minimizing a first-order, upper-bounding approximation $\|y - \Phi x\|_2^2 + \lambda \sum_i w_i^{(t)} |x_i|$, while the outer-loop updates the bound/majorizer itself as parameterized by the weights $w^{(t+1)}$. Obviously, if $g(u) = u$, then $w^{(t)} = 1$ for all $t$, and (6) reduces to the Lasso objective for $\ell_1$ norm regularized sparse regression [32], and only a single iteration is required. However, one popular non-trivial instantiation of this approach assumes $g(u) = \sum_i \log(u_i + \epsilon)$ with $\epsilon > 0$ a user-defined parameter [10]. The corresponding weights then become $w_i^{(t+1)} = \big(|x_i^{(t+1)}| + \epsilon\big)^{-1}$, and we observe that once any particular $|x_i^{(t+1)}|$ becomes large, the corresponding weight becomes small and at the next iteration a weaker penalty will be applied. This prevents the over-shrinkage of large coefficients, a well-known criticism of $\ell_1$ norm penalties [16].
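A minimal sketch of this two-loop scheme with the log-based penalty appears below; the plain ISTA inner solver, iteration counts, and regularization constants are illustrative choices.

```python
import numpy as np

def ista_weighted_l1(y, Phi, w, lam, alpha, iters=200, x0=None):
    # Inner loop of (6): first-order solver for 0.5||y - Phi x||^2 + lam * sum_i w_i |x_i|.
    x = np.zeros(Phi.shape[1]) if x0 is None else x0.copy()
    for _ in range(iters):
        z = x + Phi.T @ (y - Phi @ x) / alpha
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / alpha, 0.0)  # soft threshold
    return x

def reweighted_l1_log(y, Phi, lam=1e-2, eps=1e-2, outer=10):
    """Outer loop of (6) with g(u) = sum_i log(u_i + eps), so that
    w_i^(t+1) = 1 / (|x_i^(t+1)| + eps): large coefficients earn weak penalties."""
    alpha = np.linalg.norm(Phi.T @ Phi, 2)
    w = np.ones(Phi.shape[1])  # first pass is an ordinary Lasso problem
    x = None
    for _ in range(outer):
        x = ista_weighted_l1(y, Phi, w, lam, alpha, x0=x)
        w = 1.0 / (np.abs(x) + eps)
    return x
```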
(t+1)
In the context of SBL, there is no closed-form wi
update except in special cases. However, if we
allow for additional latent structure, which we later show is akin to the memory unit of LSTM cells, a
viable recurrency emerges for computing these weights and elucidating their effectiveness in dealing
with correlated dictionaries. In particular we have:
Proposition 1. If weights $w^{(t+1)}$ satisfy
    $\big(w_i^{(t+1)}\big)^2 = \min_{z:\, \mathrm{supp}[z] \subseteq \mathrm{supp}[\gamma^{(t)}]} \frac{1}{\lambda}\big\|\phi_i - \Phi z\big\|_2^2 + \sum_{j \in \mathrm{supp}[\gamma^{(t)}]} \frac{z_j^2}{\gamma_j^{(t)}}$   (7)
for all $i$, then the iterations (6), with $\gamma_j^{(t+1)} = \big|x_j^{(t+1)}\big|\big[w_j^{(t)}\big]^{-1}$, are guaranteed to reduce or leave unchanged the SBL objective (5). Also, at each iteration, $\gamma^{(t+1)}$ and $x^{(t+1)}$ will satisfy (4).
¹ Or global convergence to some stationary point with mild additional assumptions [31].
Unlike the traditional sparsity penalty mentioned above, with SBL we see that the $i$-th weight $w_i^{(t+1)}$ is not dependent solely on the value of the $i$-th coefficient $x_i^{(t+1)}$, but rather on all of the latent hyperparameters $\gamma^{(t)}$ and therefore ultimately on prior-iteration weights as well. Moreover, because the fate of each sparse coefficient is linked together, correlation structure can be properly accounted for in a progressive fashion.
More concretely, from (7) it is immediately apparent that if $\phi_i \approx \phi_{i'}$ for some indices $i$ and $i'$ (meaning a large degree of correlation), then it is highly likely that $w_i^{(t+1)} \approx w_{i'}^{(t+1)}$. This is simply because the regularized residual error that emerges from solving (7) will tend to be quite similar when $\phi_i \approx \phi_{i'}$. In this situation, a suboptimal solution will not be prematurely enforced by weights with large, spurious variance across a correlated group of basis vectors. Instead, weights will differ substantially only when the corresponding columns have meaningful differences relative to the dictionary as a whole, in which case such differences can help to avoid over-shrinkage as before.
A crucial exception to this perspective occurs when $\gamma^{(t+1)}$ is highly sparse, or nearly so, in which case there are limited degrees of freedom with which to model even small differences between some $\phi_i$ and $\phi_{i'}$. However, such cases can generally only occur when we are in the neighborhood of ideal, maximally sparse solutions by definition [35], when different weights are actually desirable even among correlated columns for resolving the final sparse estimates.
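This effect is easy to reproduce numerically. The sketch below evaluates the weights through the equivalent closed form that appears in (8) below (the text notes the two coincide) for a dictionary whose first two columns are nearly identical; the resulting weights are nearly equal, exactly as predicted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 20, 50, 1e-2
Phi = rng.normal(size=(n, m))
Phi[:, 1] = Phi[:, 0] + 1e-3 * rng.normal(size=n)   # make columns 0 and 1 nearly identical
Phi /= np.linalg.norm(Phi, axis=0)

gamma = rng.uniform(0.5, 1.5, size=m)               # a generic dense gamma^(t)
Sigma_y = Phi @ (gamma[:, None] * Phi.T) + lam * np.eye(n)
w = np.sqrt(np.sum(Phi * np.linalg.solve(Sigma_y, Phi), axis=0))  # diag[Phi^T Sigma_y^-1 Phi]^(1/2)
print(w[0], w[1])   # nearly equal: correlated columns receive nearly identical weights
```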
2.3 Revised SBL Iterations
Although presumably there are multiple ways such an architecture could be developed, in this section
we derive specialized SBL iterations that will directly map to one of the most common RNN structures,
namely LSTM networks. With this in mind, the notation we adopt has been intentionally chosen to
facilitate later association with LSTM cells. We first define
    $w^{(t)} \triangleq \mathrm{diag}\Big[\Phi^\top \big(\lambda I + \Phi \Gamma^{(t)} \Phi^\top\big)^{-1} \Phi\Big]^{\frac{1}{2}} \quad \text{and} \quad \nu^{(t)} \triangleq u^{(t)} + \tfrac{1}{\alpha}\Phi^\top\big(y - \Phi u^{(t)}\big),$   (8)
where $\Gamma^{(t)} \triangleq \mathrm{diag}\big[\gamma^{(t)}\big]$, $u^{(t)} \triangleq \Gamma^{(t)} \Phi^\top \big(\lambda I + \Phi \Gamma^{(t)} \Phi^\top\big)^{-1} y$, and $\alpha > 0$ is a constant. As
will be discussed further below, $w^{(t)}$ serves the exact same role as the weights from (7), hence the identical notation. We then partition our revised SBL iterations as so-called gate updates
    $\sigma_{in}^{(t)} \leftarrow \psi\big(\nu^{(t)}\big) \odot \Big[\big|\nu^{(t)}\big| - \tfrac{2\lambda}{\alpha} w^{(t)}\Big]_+, \qquad \sigma_f^{(t)} \leftarrow \phi\big(\nu^{(t)}\big), \qquad \sigma_{out}^{(t)} \leftarrow \big[w^{(t)}\big]^{-1},$   (9)
cell updates
    $\bar{x}^{(t+1)} \leftarrow \mathrm{sign}\big[\nu^{(t)}\big], \qquad x^{(t+1)} \leftarrow \sigma_f^{(t)} \odot x^{(t)} + \sigma_{in}^{(t)} \odot \bar{x}^{(t+1)},$   (10)
and output updates
    $\gamma^{(t+1)} \leftarrow \sigma_{out}^{(t)} \odot \big|x^{(t+1)}\big|.$   (11)
Here the inverse and absolute-value operators are applied element-wise when a vector is the argument, and at least for now, $\phi$ and $\psi$ define arbitrary functions. Moreover, $\odot$ denotes the Hadamard product and $[\cdot]_+$ sets negative values to zero and leaves positive quantities unchanged, also in an element-wise fashion, i.e., it acts just like a rectilinear (ReLU) unit [29]. Note also that the gate and cell updates in isolation can be viewed as computing a first-order, partial solution to the inner-loop weighted $\ell_1$ optimization problem from (6).
Starting from some initial $\gamma^{(0)}$ and $x^{(0)}$, we will demonstrate in the next section that these computations closely mirror a canonical LSTM network unfolded in time with $y$ acting as a constant input applied at each step. Before doing so however, we must first demonstrate that (8)-(11) indeed serve to reduce the SBL objective. For this purpose we require the following definition:
Definition 2. We say that the iterations (8)-(11) satisfy the monotone cell update property if
    $\|y - \Phi u^{(t)}\|_2^2 + 2\lambda \sum_i w_i^{(t)} |u_i^{(t)}| \;\ge\; \|y - \Phi x^{(t+1)}\|_2^2 + 2\lambda \sum_i w_i^{(t)} |x_i^{(t+1)}|, \quad \forall t.$   (12)
Note that for rather inconsequential technical reasons this definition involves $u^{(t)}$, which can be viewed as a proxy for $x^{(t)}$. We then have the following:
Proposition 3. The iterations (8)-(11) will reduce or leave unchanged (5) for all $t$ provided that $\lambda \ge 0$, $\alpha \ge \|\Phi^\top \Phi\|$, and $\phi$ and $\psi$ are chosen such that the monotone cell update property holds.
In practical terms, the simple selections $\psi(\cdot) = 1$ and $\phi(\cdot) = 0$ will provably satisfy the monotone cell update property (see proof details in the supplementary). However, for additional flexibility, $\phi$ and $\psi$ could be selected to implement various forms of momentum, ultimately leading to cell updates akin to the popular FISTA [4] or monotonic FISTA [3] algorithms. In both cases, old values $x^{(t)}$ are precisely mixed with new factors $\bar{x}^{(t+1)}$ to speed convergence (in the present circumstances, $\sigma_f^{(t)}$ and $\sigma_{in}^{(t)}$ respectively modulate this mixing process via (10)). Of course the whole point of casting the SBL iterations as an RNN structure to begin with is so that we may ultimately learn these types of functions, without the need for hand-crafting suboptimal iterations up front.
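Putting the pieces together, here is a minimal sketch of the full recurrence (8)-(11) with the provably monotone selections $\psi(\cdot) = 1$ and $\phi(\cdot) = 0$; the precise threshold constant reflects our reading of (8)-(9) above and should be treated as an assumption of the sketch.

```python
import numpy as np

def sbl_lstm_iterations(y, Phi, lam=1e-2, T=100):
    """Revised SBL iterations (8)-(11) with psi(.) = 1 and phi(.) = 0, so the forget
    gate is closed and the new cell state is just the soft-thresholded candidate."""
    n, m = Phi.shape
    alpha = np.linalg.norm(Phi.T @ Phi, 2)                 # satisfies alpha >= ||Phi^T Phi||
    gamma, x = np.ones(m), np.zeros(m)
    for _ in range(T):
        Sigma_y = Phi @ (gamma[:, None] * Phi.T) + lam * np.eye(n)
        u = gamma * (Phi.T @ np.linalg.solve(Sigma_y, y))                  # u^(t)
        w = np.sqrt(np.sum(Phi * np.linalg.solve(Sigma_y, Phi), axis=0))   # w^(t), eq. (8)
        nu = u + Phi.T @ (y - Phi @ u) / alpha                             # nu^(t), eq. (8)
        sig_in = np.maximum(np.abs(nu) - 2.0 * lam * w / alpha, 0.0)       # input gate, eq. (9)
        sig_out = 1.0 / w                                                  # output gate, eq. (9)
        x = sig_in * np.sign(nu)                      # cell update (10) with forget gate zero
        gamma = sig_out * np.abs(x)                   # output update, eq. (11)
    return x, gamma
```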
2.4 Correspondences with LSTM Components
We will now flesh out how the SBL iterations presented in Section 2.3 display the same structure as a canonical LSTM cell, the only differences being the shape of the nonlinearities and the exact details of the gate subnetworks. To facilitate this objective, Figure 1 contains a canonical LSTM network structure annotated with SBL-derived quantities. We now walk through these correspondences.
First, the exogenous input to the network is the observation vector $y$, which does not change from time-step to time-step. This is much like the strategy used by feedback networks for obtaining incrementally refined representations [40]. The output at time-step $t$ is $\gamma^{(t)}$, which serves as the current estimate of the SBL hyperparameters. In contrast, we treat $x^{(t)}$ as the internal LSTM memory cell, or the latent cell state.² This deference to $\gamma^{(t)}$ directly mirrors the emphasis SBL places on learning variances per the marginalized cost from (5) while treating $x^{(t)}$ as hidden data, and in some sense flips the coefficient-centric script used in producing (6).³
Proceeding further, $\gamma^{(t)}$ is fed to four separate layers/subnetworks (represented by yellow boxes in Figure 1): (i) the forget gate $\sigma_f^{(t)}$, (ii) the input gate $\sigma_{in}^{(t)}$, (iii) the output gate $\sigma_{out}^{(t)}$, and (iv) the candidate input update $\bar{x}^{(t)}$. The forget gate computes scaling factors for each element of $x^{(t)}$, with small values of the gate output suggesting that we "forget" the corresponding old cell state elements. Similarly the input gate determines how large we rescale signals from the candidate input update $\bar{x}^{(t)}$. These two re-weighted quantities are then mixed together to form the new cell state $x^{(t+1)}$. Finally, the output gate modulates how new $\gamma^{(t+1)}$ are created as scaled versions of the updated cell state.
Regarding details of these four subnetworks, based on the update templates from (9) and (10), we immediately observe that the required quantities depend directly on (8). Fortunately, both $\nu^{(t)}$ and $w^{(t)}$ can be naturally computed using simple feedforward subnetwork structures.⁴ These values can either be computed in full (ideal case), or partially to reduce the computational burden. In any event, once obtained, the respective gates and candidate cell input updates can be computed by applying final non-linearities. Note that $\phi$ and $\psi$ are treated as arbitrary subnetwork structures at this point that can be learned.
² If we allow for peephole connections [18], it is possible to reverse these roles; however, for simplicity and the most direct mapping to LSTM cells we do not pursue this alternative here.
³ Incidentally, this association also suggests that the role of hidden cell updates in LSTM networks can be reinterpreted as an analog to the expectation step (or E-step) for estimating hidden data in a suitably structured EM algorithm.
⁴ For $w^{(t)}$ the result of Proposition 1 suggests that these weights can be computed as the solution of a simple regularized regression problem, which can easily be replaced with a small network analogous to that used in [18]; similarly for $\nu^{(t)}$.
A few cosmetic differences remain between this SBL implementation and a canonical LSTM network. First, the final non-linearity for LSTM gating subnetworks is often a sigmoidal activation, whereas SBL is flexible with the forget gate (via $\phi$), while effectively using a ReLU unit for the input gate and an inverse function for the output gate. Moreover, for the candidate cell update subnetwork, SBL replaces the typical tanh nonlinearity with a quantized version, the sign function, and likewise, for the output nonlinearity an absolute value operator (abs) is used. Finally, in terms of internal subnetwork structure, there is some parameter sharing since $\sigma_{in}^{(t)}$, $\sigma_{out}^{(t)}$, and $\bar{x}^{(t)}$ are connected via $\nu^{(t)}$ and $w^{(t)}$.
Of course in all cases we need not necessarily share parameters nor abide by these exact structures.
In fact there is nothing inherently optimal about the particular choices used by SBL; rather it is
merely that these structures happen to reproduce the successful, yet hand-crafted SBL iterations. But
certainly there is potential in replacing such iterations with learned LSTM-like surrogates, at least
when provided with access to sufficient training data as in prior attempts to learn sparse estimation
algorithms [20, 34, 38].
[Figure 1: LSTM/SBL Network. Schematic of the unfolded SBL iterations arranged as an LSTM cell; annotations mark subnetworks, pointwise operations, vector transfers, and concatenate/copy junctions.]
[Figure 2: SBL Dynamics. Sample trajectories of the $w^{(t)}$ magnitudes (top) and $x^{(t)}$ magnitudes (bottom) plotted against iteration number.]

3 Extension to Gated Feedback Networks
Although SBL iterations can be molded into an LSTM structure as we have shown, there remain hints that the full potential of this association may be presently undercooked. Here we first empirically examine the trajectories of SBL iterations produced via the LSTM-like rules derived in Section 2.3. This process will later serve to unmask certain characteristic dynamics operating across different time scales that are suggestive of a richer class of gated recurrent network structures inspired by sequence prediction tasks [12].
3.1 Trajectory Analysis of SBL Iterations
To begin, Figure 2 displays sample trajectories of $w^{(t)} \in \mathbb{R}^{100}$ (top) and $x^{(t)} \in \mathbb{R}^{100}$ (bottom) during execution of (8)-(11) on a simple representative problem, where each colored line represents a different element $w_i^{(t)}$ or $|x_i^{(t)}|$ respectively. All details of the data generation process, as well as comprehensive attendant analyses, are deferred to the supplementary. To summarize here though, in the top plot the elements of $w^{(t)}$, which represent the non-negative weights forming the outer-loop majorization step from (6) and reflect coarse correlation structure in $\Phi$, converge very quickly ($\approx$3-5 iterations). Moreover, the observed bifurcation of magnitudes ultimately helps to screen many (but not necessarily all) elements of $x^{(t)}$ that are the most likely to be zero in the maximally sparse representation (i.e., a stable, higher weighting value $w_i^{(t)}$ is likely to eventually cause $x_i^{(t)} \to 0$). In contrast, the actual coefficients $x^{(t)}$ themselves converge much more slowly, with final destinations still unclear even after 50+ iterations. Hence $w^{(t)}$ need not be continuously updated after rapid initial convergence, provided that we retain a memory of the optimal value during periods when it is static.
This discrepancy in convergence rates occurs in part because, as mentioned previously, the gate and cell updates do not fully solve the inner-loop weighted $\ell_1$ optimization needed to compute a globally optimal $x^{(t+1)}$ given $w^{(t)}$. Varying the number of inner-loop iterations, meaning additional executions of (8)-(11) with $w^{(t)}$ fixed, is one heuristic for normalizing across different trajectory frequencies, but this requires additional computational overhead, and prior knowledge is needed to micro-manage iteration counts for either efficiency or final estimation quality. With respect to the latter, we conduct additional experiments in the supplementary which reveal that indeed the number of inner-loop updates per outer-loop cycle can affect the quality of sparse solutions, with no discernible rule of thumb for enhancing solution quality.⁵ For example, navigating around suboptimal local minima could require adaptively adjusting the number of inner-loop iterations in subtle, non-obvious ways. We therefore arrive at an unresolved state of affairs:
1. The latent variables which define SBL iterations can potentially follow optimization trajectories
with radically different time scales, or both long- and short-term dependencies.
2. But there is no intrinsic mechanism within the SBL framework itself (or most multi-loop
optimization problems in general either) for automatically calibrating the differing time scales
for optimal performance.
These same issues are likely to arise in other non-convex multi-loop optimization algorithms as well.
It therefore behooves us to consider a broader family of model structures that can adapt to these
scales in a data-dependent fashion.
3.2 Modeling via Gated Feedback Nets
In addressing this fundamental problem, we make the following key observation: If the trajectories of
various latent variables can be interpreted as activations passing through an RNN with both long- and
short-term dependencies, then in developing a pipeline for optimizing such trajectories it makes sense
to consider learning deep architectures explicitly designed to adaptively model such characteristic
sequences. Interestingly, in the context of sequence prediction, the clockwork RNN (CW-RNN) has
been proposed to cope with temporal dependencies engaged across multiple scales [25]. As shown in
the supplementary however, the CW-RNN enforces dynamics synced to pre-determined clock rates
exactly analogous to the fixed, manual schedule for terminating inner-loops in existing multi-loop
iterative algorithms such as SBL. So we are back at our starting point.
Fortunately though, the gated feedback RNN (GF-RNN) [12] was recently developed to update the
CW-RNN with an additional set of gated connections that, in effect, allow the network to learn
its own clock rates. In brief, the GF-RNN involves stacked LSTM layers (or somewhat simpler
gated recurrent unit (GRU) layers [11]), that are permitted to communicate bilaterally via additional,
data-dependent gates that can open and close on different time-scales. In the context of SBL, this
means that we no longer need strain a specialized LSTM structure with the burden of coordinating
trajectory dynamics. Instead, we can stack layers that are, at least from a conceptual standpoint,
designed to reflect the different dynamics of disparate variable sets such as w(t) or x(t) . In doing
so, we are then positioned to learn new SBL update rules from training pairs {y, x? } as described
previously. At the very least, this structure should include SBL-like iterations within its capacity, but
of course it is also free to explore something even better.
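To make the architectural idea tangible, the toy sketch below wires two recurrent layers with gated feedback in the spirit of [12]: layer $j$ receives every layer's previous hidden state, each scaled by a scalar gate computed from the concatenated previous states, so effective clock rates become data-dependent. Simplified tanh units stand in for the LSTM/GRU cells actually used, the gating parameterization is a simplification of the original, and the weights are random placeholders rather than trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gf_rnn_forward(x_seq, params):
    """Toy 2-layer gated-feedback recurrence: feedback from every layer's previous
    hidden state h_i is scaled by a scalar gate g[i][j] before entering layer j."""
    Wx, U, vg = params["Wx"], params["U"], params["vg"]
    n_hid = Wx[0].shape[0]
    h = [np.zeros(n_hid), np.zeros(n_hid)]
    outputs = []
    for x in x_seq:
        h_cat = np.concatenate(h)                  # all previous hidden states
        inp, new_h = [x, None], []
        for j in range(2):
            if j == 1:
                inp[1] = new_h[0]                  # layer 2 is fed layer 1's fresh output
            fb = sum(sigmoid(vg[i][j] @ h_cat) * (U[i][j] @ h[i]) for i in range(2))
            new_h.append(np.tanh(Wx[j] @ inp[j] + fb))
        h = new_h
        outputs.append(h[1])
    return outputs

rng = np.random.default_rng(0)
n_in, n_hid = 8, 16
params = {
    "Wx": [0.1 * rng.normal(size=(n_hid, n_in)), 0.1 * rng.normal(size=(n_hid, n_hid))],
    "U": [[0.1 * rng.normal(size=(n_hid, n_hid)) for _ in range(2)] for _ in range(2)],
    "vg": [[0.1 * rng.normal(size=2 * n_hid) for _ in range(2)] for _ in range(2)],
}
out = gf_rnn_forward([rng.normal(size=n_in) for _ in range(5)], params)
print(len(out), out[-1].shape)  # 5 time-steps, each output of dimension n_hid
```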
3.3 Network Design and Training Protocol
We stack two gated recurrent layers loosely designed to mimic the relatively fast SBL adaptation to basic correlation structure, as well as the slower resolution of final support patterns and coefficient estimates. These layers are formed from either LSTM or GRU base architectures. For the final output layer we adopt a multi-label classification loss for predicting $\mathrm{supp}[x^*]$, which is the well-known "NP-hard" part of sparse estimation (determining final coefficient amplitudes just requires a simple least squares fit given the correct support pattern). Full network details are deferred to the supplementary, including special modifications to handle complex data as required by DOA applications.
For a given dictionary $\Phi$ a separate network must be trained via SGD, to which we add a unique extra dimension of randomness via an online stochastic data-generation strategy. In particular, to create samples in each mini-batch, we first generate a vector $x^*$ with random support pattern and nonzero amplitudes. We then compute $y = \Phi x^* + \epsilon$, where $\epsilon$ is a small Gaussian noise component. This $y$ forms a training input sample, while $\mathrm{supp}[x^*]$ represents the corresponding labels. For all mini-batches, novel samples are drawn, which we have found boosts performance considerably over the fixed training sets used by current DNN approaches to sparse estimation (see supplementary).
⁵ In brief, these experiments demonstrate a situation where executing either 1, 10, or 1000 inner-loop iterations per outer loop fails to produce the optimal solution, while 100 inner-loop iterations is successful.
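A minimal sketch of this online generator is given below; the amplitude range mirrors the synthetic setup described in Section 4.1, and the noise level is an illustrative choice.

```python
import numpy as np

def online_batch(Phi, d, batch_size, noise_std=0.01, rng=None):
    """Freshly drawn mini-batch: random support, random nonzero amplitudes
    (uniform in +/-[0.1, 0.5], matching Section 4.1), y = Phi x* + eps, and
    multi-label support indicators as targets."""
    rng = rng or np.random.default_rng()
    n, m = Phi.shape
    X = np.zeros((batch_size, m))
    for b in range(batch_size):
        supp = rng.choice(m, size=d, replace=False)
        X[b, supp] = rng.uniform(0.1, 0.5, size=d) * rng.choice([-1.0, 1.0], size=d)
    Y = X @ Phi.T + noise_std * rng.normal(size=(batch_size, n))
    labels = (X != 0).astype(np.float32)   # supp[x*] as multi-label targets
    return Y, labels
```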
4 Experiments
This section presents experiments involving synthetic data and two applications.
[Figure 3: (a) Strict Accuracy and (b) Loose Accuracy plot correct support recovery versus d/n for Ours-GFLSTM, SBL, MaxSparseNet, $\ell_1$-norm, IHT, ISTA-Net, and IHT-Net; (c) Architecture Comparisons plots strict support recovery across GRU/LSTM models of two sizes, with and without gated feedback; (d) DOA plots Chamfer distance versus SNR (dB) for SBL and Ours-GFLSTM.]
Figure 3: Plots (a), (b), and (c) show sparse recovery results involving synthetic correlated dictionaries.
Plot (d) shows Chamfer distance-based errors [7] from the direction-of-arrival (DOA) experiment.
4.1 Evaluations via Synthetic Correlated Dictionaries
To reproduce experiments from [38], we generate correlated synthetic features via $\Phi = \sum_{i=1}^n \frac{1}{i^2} u_i v_i^\top$, where $u_i \in \mathbb{R}^n$ and $v_i \in \mathbb{R}^m$ are drawn iid from a unit Gaussian distribution, and each column of $\Phi$ is subsequently rescaled to unit $\ell_2$ norm. Ground truth samples $x^*$ have $d$ nonzero elements drawn randomly from $U[-0.5, 0.5]$ excluding the interval $[-0.1, 0.1]$. We use $n = 20$, $m = 100$, and vary $d$, with larger values producing a much harder combinatorial estimation problem (exhaustive search is not feasible here). All algorithms are presented with $y$ and attempt to estimate $\mathrm{supp}[x^*]$. We evaluate using strict accuracy, meaning the percentage of trials with exact support recovery, and loose accuracy, which quantifies the percentage of true positives among the top $n$ "guesses" (i.e., largest predicted outputs).
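For reference, a minimal sketch of this dictionary construction, together with a quick check of the resulting column correlations:

```python
import numpy as np

def correlated_dictionary(n=20, m=100, rng=None):
    """Phi = sum_{i=1}^n (1/i^2) u_i v_i^T with iid unit-Gaussian u_i, v_i and
    unit-norm columns; the rapidly decaying spectrum induces strong correlations."""
    rng = rng or np.random.default_rng(0)
    Phi = np.zeros((n, m))
    for i in range(1, n + 1):
        Phi += rng.normal(size=(n, 1)) @ rng.normal(size=(1, m)) / i ** 2
    return Phi / np.linalg.norm(Phi, axis=0)

Phi = correlated_dictionary()
G = np.abs(Phi.T @ Phi)        # magnitude of pairwise column correlations
np.fill_diagonal(G, 0.0)
print(G.max())                 # large off-diagonal coherence, unlike an iid Gaussian Phi
```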
Figures 3(a) and 3(b) evaluate our model, averaged across $10^5$ trials, against an array of optimization-based approaches: SBL [33], $\ell_1$ norm minimization [4], and IHT [5]; and existing learning-based DNN models: an ISTA-inspired network [20], an IHT-inspired network [34], and the best maximal sparsity net (MaxSparseNet) from [38] (detailed settings in the supplementary). With regard to strict accuracy, only SBL is somewhat competitive with our approach and other learning-based models are much worse; however, using loose accuracy our method is far superior to all others. Note that this is the first approach we are aware of in the literature that can convincingly outperform SBL in recovering sparse solutions when a heavily correlated dictionary is present, and we hypothesize that this is largely possible because our design principles were directly inspired by SBL itself.
To isolate architectural factors affecting performance we conducted ablation studies: (i) with or without gated feedback, (ii) LSTM or GRU cells, and (iii) small or large (4x) model size; for each model type, the small and respectively large versions have roughly the same number of parameters. The supplementary also contains a much broader set of self-comparison tests. Figure 3(c), which shows strict accuracy results with $d/n = 0.4$, indicates the importance of gated feedback and to a lesser degree network size, while LSTM and GRU cells perform similarly as expected.
4.2 Practical Application I: Direction-of-Arrival (DOA) Estimation
DOA estimation is a fundamental problem in sonar/radar processing [28]. Given an array of n
omnidirectional sensors with d signal waves impinging upon them, the objective is to estimate the
angular direction of the wave sources with respect to the sensors. For certain array geometries and
known propagation mediums, estimation of these angles can be mapped directly to solving (2) in the
complex domain. In this scenario, the i-th column of Φ represents the sensor array output (a point in
C^n) from a hypothetical source with unit strength at angular location θ_i, and can be computed using
a wave propagation formula [28]. The entire dictionary can be constructed by concatenating columns
associated with angles forming some spacing of interest, e.g., every 1° across a half circle, and will
be highly correlated. Given measurements y ∈ C^n, we can solve (2), with ε reflecting the noise level.
The indices of nonzero elements of x* will then reveal the angular locations/directions of putative
sources.
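As an illustration only, one standard way such a dictionary could be assembled for a uniform linear array is sketched below; the steering-vector formula, half-wavelength spacing, and angular grid are our assumptions, since the paper defers its exact propagation model to the supplementary.

```python
import numpy as np

def ula_dictionary(n=10, angles_deg=np.arange(-90.0, 90.0, 1.0), spacing=0.5):
    # Column i is a hypothetical steering vector for a source at angle theta_i:
    # a(theta)[k] = exp(-2j * pi * spacing * k * sin(theta)), k = 0..n-1.
    # Neighbouring angles yield highly correlated, unit-norm columns.
    theta = np.deg2rad(angles_deg)
    k = np.arange(n)[:, None]
    Phi = np.exp(-2j * np.pi * spacing * k * np.sin(theta)[None, :])
    return Phi / np.linalg.norm(Phi, axis=0, keepdims=True)
```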
Recently SBL-based algorithms have produced state-of-the-art results solving the DOA problem
[14, 19, 39], and we compare our approach against SBL here. We apply a typical experimental
design from the literature involving a uniform linear array with n = 10 sensors; see supplementary
for background and details on how to compute ?, as well as specifics on how to adapt and train
our GFLSTM using complex data. Four sources are then placed in random angular locations, with
nonzero coefficients at {±1 ± i}, and we compute measurements y = Φx* + ε, with ε chosen from
a complex Gaussian distribution to produce different SNR. Because the nonzero positions in x* now
have physical meaning, we apply the Chamfer distance [7] as the error metric, which quantifies how
close we are to true source locations (lower is better). Figure 3(d) displays the results, where our
learned network outperforms SBL across a range of SNR values.
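A minimal sketch of a symmetric Chamfer distance between estimated and true angle sets is given below; whether [7] uses absolute or squared differences, and sums or means, is an assumption here.

```python
import numpy as np

def chamfer_distance(est_angles, true_angles):
    # Each estimated angle's distance to its nearest true angle, plus each
    # true angle's distance to its nearest estimate (lower is better).
    est = np.asarray(est_angles, dtype=float)
    true = np.asarray(true_angles, dtype=float)
    d = np.abs(est[:, None] - true[None, :])
    return d.min(axis=1).sum() + d.min(axis=0).sum()
```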
Table 1: Photometric stereo results

                 Average angular error (degrees)          Runtime (sec.)
                 Bunny               Caesar               Bunny                    Caesar
Algorithm        r=10  r=20  r=40    r=10  r=20  r=40     r=10   r=20   r=40       r=10   r=20   r=40
SBL              4.02  1.86  0.50    4.79  2.07  0.34     35.46  22.66  32.20      86.96  64.67  90.48
MaxSparseNet     1.48  1.95  1.20    3.51  2.51  1.18     0.90   0.87   0.92       2.13   2.12   2.20
Ours             1.35  1.55  1.12    2.39  1.80  0.60     0.63   0.67   0.85       1.48   1.70   2.08

4.3 Practical Application II: 3D Geometry Recovery via Photometric Stereo
Photometric stereo represents another application domain whereby approximately solving (2) using
SBL has recently produced state-of-the-art results [24]. The objective here is to recover the 3D
surface normals of a given scene using r images taken from a single camera but with different lighting
conditions. Under the assumption that these images can be approximately decomposed into a diffuse
Lambertian component and sparse corruptions such as shadows and specular highlights, surface
normals at each pixel can be recovered using (2) to isolate these sparse factors followed by a final
least squares post-processing step [24]. In this context, ? is constructed using the known camera and
lighting geometry, and y represents intensity measurements for a given pixel across images projected
onto the nullspace of a special transposed lighting matrix (see supplementary for more details and
our full experimental design). However, because a sparse regression problem must be computed for
every pixel to recover the full scene geometry, a fast, efficient solver is paramount.
We compare our GFLSTM model against both SBL and the MaxSparseNet [38] (both of which
outperform other existing methods). Tests are performed using the 32-bit HDR gray-scale images
of objects "Bunny" (256 × 256) and "Caesar" (300 × 400) as in [24]. For (very) weakly-supervised
training data, we apply the same approach as before, only we use nonzero magnitudes drawn from a
Gaussian, with mean and variance loosely tuned to the photometric stereo data, consistent with [38].
Results are shown in Table 1, where we observe in all cases the DNN models are faster by a wide
margin, and in the hard cases (smaller r) our approach produces the lowest angular error. The
only exception is with r = 40; however, this is quite an easy scenario, with so many images that
SBL can readily find a near optimal solution, albeit at a high computational cost. See supplementary
for error surface visualizations.
5 Conclusion
In this paper we have examined the structural similarities between multi-loop iterative algorithms
and multi-scale sequence prediction neural networks. This association is suggestive of a learning
process for a richer class of algorithms that employ multiple loops and latent states, such as the EM
algorithm or general majorization-minimization approaches. For example, in a narrower sense, we
have demonstrated that specialized gated recurrent nets carefully patterned to reflect the multi-scale
optimization trajectories of multi-loop SBL iterations can lead to a considerable boost in both accuracy
and efficiency. Note that simpler first-order, gradient descent-style algorithms can be ineffective
when applied to sparsity-promoting energy functions with a combinatorial number of bad local
optima and highly concave or non-differentiable surfaces in the neighborhood of minima. Moreover,
implementing smoother approximations such as SBL with gradient descent is impractical since each
gradient calculation would be prohibitively expensive. Therefore, recent learning-to-learn approaches
such as [1] that explicitly rely on gradient calculations are difficult to apply in the present setting.
Acknowledgments
This work was accomplished while Hao He was an intern at Microsoft Research, Beijing.
References
[1] M. Andrychowicz, M. Denil, S. Gomez, M.W. Hoffman, D. Pfau, T. Schaul, B. Shillingford,
and N. de Freitas. Learning to learn by gradient descent by gradient descent. arXiv:1606.04474,
2016.
[2] S. Baillet, J.C. Mosher, and R.M. Leahy. Electromagnetic brain mapping. IEEE Signal
Processing Magazine, pages 14–30, Nov. 2001.
[3] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image
denoising and deblurring problems. IEEE Trans. Image Processing, 18(11), 2009.
[4] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse
problems. SIAM J. Imaging Sciences, 2(1), 2009.
[5] T. Blumensath and M.E. Davies. Iterative hard thresholding for compressed sensing. Applied
and Computational Harmonic Analysis, 27(3), 2009.
[6] T. Blumensath and M.E. Davies. Normalized iterative hard thresholding: Guaranteed stability
and performance. IEEE J. Selected Topics Signal Processing, 4(2), 2010.
[7] G. Borgefors. Distance transformations in arbitrary dimensions. Computer Vision, Graphics,
and Image Processing, 27(3):321–345, 1984.
[8] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction
from highly incomplete frequency information. IEEE Trans. Information Theory, 52(2):489–
509, Feb. 2006.
[9] E. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Information Theory,
51(12), 2005.
[10] E. Candès, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. J.
Fourier Anal. Appl., 14(5):877–905, 2008.
[11] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning
phrase representations using RNN encoder-decoder for statistical machine translation. Conference on Empirical Methods in Natural Language Processing, 2014.
[12] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. In
International Conference on Machine Learning, 2015.
[13] S.F. Cotter and B.D. Rao. Sparse channel estimation via matching pursuit with application to
equalization. IEEE Trans. on Communications, 50(3), 2002.
[14] J. Dai, X. Bao, W. Xu, and C. Chang. Root sparse Bayesian learning for off-grid DOA estimation.
IEEE Signal Processing Letters, 24(1), 2017.
[15] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the
EM algorithm. J. Royal Statistical Society, Series B (Methodological), 39(1):1–38, 1977.
[16] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties.
J. American Statistical Assoc., 96, 2001.
[17] M.A.T. Figueiredo. Adaptive sparseness using Jeffreys prior. NIPS, 2002.
[18] F.A. Gers and J. Schmidhuber. Recurrent nets that time and count. International Joint Conference on Neural Networks, 2000.
[19] P. Gerstoft, C.F. Mecklenbrauker, A. Xenaki, and S. Nannuru. Multi snapshot sparse Bayesian
learning for DOA. IEEE Signal Processing Letters, 23(20), 2016.
[20] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, 2010.
[21] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8), 1997.
[22] D.R. Hunter and K. Lange. A tutorial on MM algorithms. American Statistician, 58(1), 2004.
[23] S. Ikehata, D.P. Wipf, Y. Matsushita, and K. Aizawa. Robust photometric stereo using sparse
regression. In Computer Vision and Pattern Recognition, 2012.
[24] S. Ikehata, D.P. Wipf, Y. Matsushita, and K. Aizawa. Photometric stereo using sparse Bayesian
regression for general diffuse surfaces,. IEEE Trans. Pattern Analysis and Machine Intelligence,
36(9):1816–1831, 2014.
[25] J. Koutnik, K. Greff, F. Gomez, and J. Schmidhuber. A clockwork RNN. International
Conference on Machine Learning, 2014.
[26] D.J.C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
[27] D.M. Malioutov, M. Çetin, and A.S. Willsky. Sparse signal reconstruction perspective for source
localization with sensor arrays. IEEE Trans. Signal Processing, 53(8), 2005.
[28] D.G. Manolakis, V.K. Ingle, and S.M. Kogon. Statistical and Adaptive Signal Processing.
McGraw-Hill, Boston, 2000.
[29] V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. International Conference on Machine Learning, 2010.
[30] P. Sprechmann, A.M. Bronstein, and G. Sapiro. Learning efficient sparse and low rank models.
IEEE Trans. Pattern Analysis and Machine Intelligence, 37(9), 2015.
[31] B.K. Sriperumbudu and G.R.G. Lanckriet. A proof of convergence of the concave-convex
procedure using Zangwill?s theory. Neural computation, 24, 2012.
[32] R. Tibshirani. Regression shrinkage and selection via the lasso. J. of the Royal Statistical
Society, 1996.
[33] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine
Learning Research, 1, 2001.
[34] Z. Wang, Q. Ling, and T. Huang. Learning deep `0 encoders. AAAI Conference on Artificial
Intelligence, 2016.
[35] D.P. Wipf. Sparse estimation with structured dictionaries. Advances in Neural Information
Processing Systems 24, 2012.
[36] D.P. Wipf and S. Nagarajan. Iterative reweighted `1 and `2 methods for finding sparse solutions.
Journal of Selected Topics in Signal Processing (Special Issue on Compressive Sensing), 4(2),
April 2010.
[37] R.J. Woodham. Photometric method for determining surface orientation from multiple images.
Optical Engineering, 19(1), 1980.
[38] B. Xin, Y. Wang, W. Gao, and D.P. Wipf. Maximal sparsity with deep networks? Advances in
Neural Information Processing Systems 29, 2016.
[39] Z. Yang, L. Xie, and C. Zhang. Off-grid direction of arrival estimation using sparse Bayesian
inference. IEEE Trans. Signal Processing, 61(1):38–43, 2013.
[40] A.R. Zamir, T.L. Wu, L. Sun, W. Shen, J. Malik, and S. Savarese. Feedback networks.
arXiv:1612.09508, 2016.
6,787 | 714 | Feudal Reinforcement Learning
Peter Dayan
CNL
The Salk Institute
PO Box 85800
San Diego CA 92186-5800, USA
Geoffrey E Hinton
Department of Computer Science
University of Toronto
6 King's College Road, Toronto,
Canada M5S 1A4
dayan@helmholtz.sdsc.edu
hinton@ai.toronto.edu
Abstract
One way to speed up reinforcement learning is to enable learning to
happen simultaneously at multiple resolutions in space and time.
This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their sub-managers who, in turn, learn how to satisfy them. Sub-managers
need not initially understand their managers' commands. They
simply learn to maximise their reinforcement in the context of the
current command.
We illustrate the system using a simple maze task. As the system
learns how to get around, satisfying commands at the multiple
levels, it explores more efficiently than standard, flat Q-learning
and builds a more comprehensive map.
1 INTRODUCTION
Straightforward reinforcement learning has been quite successful at some relatively complex tasks like playing backgammon (Tesauro, 1992). However, the
learning time does not scale well with the number of parameters. For agents solving rewarded Markovian decision tasks by learning dynamic programming value
functions, some of the main bottlenecks (Singh, 1992b) are temporal resolution - expanding the unit of learning from the smallest possible step in the task, division-and-conquest - finding smaller subtasks that are easier to solve, exploration, and
structural generalisation - generalisation of the value function between different locations. These are obviously related - for instance, altering the temporal resolution
can have a dramatic effect on exploration.
Consider a control hierarchy in which managers have sub-managers, who work
for them, and super-managers, for whom they work. If the hierarchy is strict
in the sense that managers control exactly the sub-managers at the level below
them and only the very lowest level managers can actually act in the world, then
intermediate level managers have essentially two instruments of control over their
sub-managers at any time - they can choose amongst them and they can set them
sub-tasks. These sub-tasks can be incorporated into the state of the sub-managers
so that they in turn can choose their own sub-sub-tasks and sub-sub-managers to
execute them based on the task selection at the higher level.
An appropriate hierarchy can address the first three bottlenecks. Higher level
managers should sustain a larger grain of temporal resolution, since they leave the
sub-sub-managers to do the actual work. Exploration for actions leading to rewards
can be more efficient since it can be done non-uniformly - high level managers can
decide that reward is best found in some other region of the state space and send
the agent there directly, without forcing it to explore in detail on the way.
Singh (1992a) has studied the case in which a manager picks one of its sub-managers
rather than setting tasks. He used the degree of accuracy of the Q-values of sub-managerial Q-learners (Watkins, 1989) to train a gating system (Jacobs, Jordan,
Nowlan & Hinton, 1991) to choose the one that matches best in each state. Here
we study the converse case, in which there is only one possible sub-manager active
at any level, and so the only choice a manager has is over the tasks it sets. Such
systems have been previously considered (Hinton, 1987; Watkins, 1989).
The next section considers how such a strict hierarchical scheme can learn to choose
appropriate tasks at each level, section 3 describes a maze learning example for
which the hierarchy emerges naturally as a multi-grid division of the space in
which the agent moves, and section 4 draws some conclusions.
2 FEUDAL CONTROL
We sought to build a system that mirrored the hierarchical aspects of a feudal
fiefdom, since this is one extreme for models of control. Managers are given
absolute power over their sub-managers - they can set them tasks and reward
and punish them entirely as they see fit. However managers ultimately have to
satisfy their own super-managers, or face punishment themselves - and so there is
recursive reinforcement and selection until the whole system satisfies the goal of the
highest level manager. This can all be made to happen without the sub-managers
initially "understanding" the sub-tasks they are set. Every component just acts to
maximise its expected reinforcement, so after learning, the meaning it attaches to a
specification of a sub-task consists of the way in which that specification influences
its choice of sub-sub-managers and sub-sub-tasks. Two principles are key:
Reward Hiding Managers must reward sub-managers for doing their bidding
whether or not this satisfies the commands of the super-managers. Sub-managers
should just learn to obey their managers and leave it up to them to determine what
it is best to do at the next level up. So if a sub-manager fails to achieve the sub-goal
set by its manager it is not rewarded, even if its actions result in the satisfaction of
of the manager's own goal. Conversely, if a sub-manager achieves the sub-goal it
is given it is rewarded, even if this does not lead to satisfaction of the manager's
own goal. This allows the sub-manager to learn to achieve sub-goals even when
the manager was mistaken in setting these sub-goals. So in the early stages of
learning, low-level managers can become quite competent at achieving low-level
goals even if the highest level goal has never been satisfied.
Information Hiding Managers only need to know the state of the system at the
granularity of their own choices of tasks. Indeed, allowing some decision making
to take place at a coarser grain is one of the main goals of the hierarchical decomposition. Information is hidden both downwards - sub-managers do not know
the task the super-manager has set the manager - and upwards - a super-manager
does not know what choices its manager has made to satisfy its command. However managers do need to know the satisfaction conditions for the tasks they set
and some measure of the actual cost to the system for achieving them using the
sub-managers and tasks it picked on any particular occasion.
For the special case to be considered here, in which managers are given no choice
of which sub-manager to use in a given state, their choice of a task is very similar
to that of an action for a standard Q-learning system. If the task is completed
successfully, the cost is determined by the super-manager according to how well (e.g.
how quickly, or indeed whether) the manager satisfied its super-tasks. Depending
on how its own task is accomplished, the manager rewards or punishes the sub-manager responsible. When a manager chooses an action, control is passed to the
sub-manager and is only returned when the state changes at the managerial level.
3 THE MAZE TASK
To illustrate this feudal system, consider a standard maze task (Barto, Sutton &
Watkins, 1989) in which the agent has to learn to find an initially unknown goal.
The grid is split up at successively finer grains (see figure 1) and managers are
assigned to separable parts of the maze at each level. So, for instance, the level 1
manager of area 1-(1,1) sets the tasks for and reinforcement given to the level 2
managers for areas 2-(1,1), 2-(1,2), 2-(2,1) and 2-(2,2). The successive separation
into quarters is fairly arbitrary - however if the regions at high levels did not cover
contiguous areas at lower levels, then the system would not perform very well.
At all times, the agent is effectively performing an action at every level. There are
five actions, NSEW and *, available to the managers at all levels other than the first
and last. NSEW represent the standard geographical moves and * is a special action
that non-hierarchical systems do not require. It specifies that lower level managers
should search for the goal within the confines of the current larger state instead of
trying to move to another region of the space at the same level. At the top level,
the only possible action is *; at the lowest level, only the geographical moves are
allowed, since the agent cannot search at a finer granularity than it can move.
Figure 1: The Grid Task. This shows how the maze is divided up at
different levels in the hierarchy. The 'u' shape is the barrier, and the shaded square
is the goal. Each high level state is divided into four low level ones at every step.

Each manager maintains Q values (Watkins, 1989; Barto, Bradtke & Singh, 1992)
over the actions it instructs its sub-managers to perform, based on the location of
the agent at the subordinate level of detail and the command it has received from
above. So, for instance, if the agent currently occupies 3-(6,6), and the instruction
from the level 0 manager is to move South, then the 1-(2,2) manager decides upon
an action based on the Q values for NSEW giving the total length of the path to
either 2-(3,2) or 2-(4,2). The action the 1-(2,2) manager chooses is communicated
one level down the hierarchy and becomes part of the state determining the level 2
Q values.
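As an illustration of this indexing (our own sketch, not code from the paper), the successive quartering means a cell at the finest level maps to its region at any coarser level by repeated integer halving, which reproduces the 3-(6,6) to 2-(3,3) to 1-(2,2) nesting implicit in the example above.

```python
def region(cell, level, finest=3):
    # Map a 1-indexed (row, col) cell at the finest grid (level 3, 8x8 here)
    # to the region containing it at a coarser `level`; each step up merges
    # 2x2 blocks, mirroring the quartering shown in Figure 1.
    r, c = cell
    for _ in range(finest - level):
        r, c = (r + 1) // 2, (c + 1) // 2
    return (r, c)

assert region((6, 6), level=2) == (3, 3)  # the agent's level 2 state
assert region((6, 6), level=1) == (2, 2)  # the region run by the 1-(2,2) manager
```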
When the agent starts, actions at successively lower levels are selected using the
standard Q-learning softmax method and the agent moves according to the finest
grain action (at level 3 here). The Q values at every level at which this causes
a state transition are updated according to the length of the path at that level, if the
state transition is what was ordered at all lower levels. This restriction comes from
the constraint that super-managers should only learn from the fruits of the honest
labour of sub-managers, i.e. only if they obey their managers.

[Figure 2 plot: steps to goal on a log scale (roughly 1e+02 to 1e+04) versus learning iterations (0 to 500) for F-Q Task 1, F-Q Task 2, S-Q Task 1, and S-Q Task 2.]
Figure 2: Learning Performance. F-Q shows the performance of the feudal architecture and S-Q of the standard Q-learning architecture.
Figure 2 shows how the system performs compared with standard, one-step Q-learning, first in finding a goal in a maze similar to that in Figure 1, only having
32×32 squares, and second in finding the goal after it is subsequently moved. Points
on the graph are averages of the number of steps it takes the agent to reach the goal
across all possible testing locations, after the given number of learning iterations.
Little effort was made to optimise the learning parameters, so care is necessary in
interpreting the results.
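The per-level update just described can be sketched as follows; this is our own hedged reading of the text (one Q table per manager, keyed by subordinate-level state, current command and chosen action, with the path length at that level as the cost), and details such as the learning rate and the exact cost target are assumptions, not the authors' code.

```python
def feudal_update(Q, state, command, action, path_length, obeyed, alpha=0.1):
    # Move the manager's Q value for (state, command, action) toward the
    # length of the path taken at this level, but only if every lower level
    # carried out what it was ordered to do.
    if not obeyed:
        return
    key = (state, command, action)
    old = Q.get(key, 0.0)
    Q[key] = old + alpha * (path_length - old)
```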
For the first task the feudal system is initially slower, but after a while, it learns
much more quickly how to navigate to the goal. The early sloth is due to the
fact that many low level actions are wasted, since they do not implement desired
higher level behaviour and the system has to learn not to try impossible actions or
* in inappropriate places. The late speed comes from the feudal system's superior
exploratory behaviour. If it decides at a high level that the goal is in one part of the
maze, then it has the capacity to specify large scale actions at that level to take it
there. This is the same advantage that Singh's (1992b) variable temporal resolution
system garners, although this is over a single task rather than explicitly composite
sub-tasks. Tests on mazes of different sizes suggested that the number of iterations
after which the advantage of exploration outweighs the disadvantage of wasted
actions gets less as the complexity of the task increases.
A similar pattern emerges for the second task. Low level Q values embody an
implicit knowledge of how to get around the maze, and so the feudal system can
explore efficiently once it (slowly) learns not to search in the original place.
Figure 3: The Learned Actions. The area of the boxes and the radius of the central
circle give the probabilities of taking action NSEW and * respectively.
Figure 3 shows the probabilities of each move at each location once the agent has
learnt to find the goal at 3-(3,3). The length of the NSEW bars and the radius
of the central circle are proportional to the probability of selecting actions NSEW
or * respectively, and action choice flows from top to bottom. For instance, the
probability of choosing action S at state 2-(1,3) is the sum of the products of the
probabilities of choosing actions NSEW and * at state 1-(1,2) and the probabilities,
conditional on this higher level selection, of choosing action S at state 2-(1,3). Apart
from the right hand side of the barrier, the actions are generally correct - however
there are examples of sub-optimal behaviour caused by the decomposition of the
space, e.g. the system decides to move North at 3-(8,5) despite it being more felicitous
to move South.
Closer investigation of the course of learning reveals that, as might be expected from
the restrictions in updating the Q values, the system initially learns in a completely
bottom-up manner. However after a while, it learns appropriate actions at the
highest levels, and so top-down learning happens too. This generally beneficial
effect arises because there are far fewer states at coarse resolutions, and so it is
easier for the agent to calculate what to do.
4 DISCUSSION
The feudal architecture partially addresses one of the major concerns in reinforcement learning about how to divide a single task up into sub-tasks at multiple levels.
A demonstration was given of how this can be done separately from choosing between different possible sub-managers at a given level.
It depends on there being a plausible managerial system, preferably based on a
natural hierarchical division of the available state space. For some tasks it can be
very inefficient, since it forces each sub-manager to learn how to satisfy all the
sub-tasks set by its manager, whether or not those sub-tasks are appropriate. It
is therefore more likely to be useful in environments in which the set tasks can
change. Managers need not necessarily know in advance the consequences of their
actions. They could learn, in a self-supervised manner, information about the state
transitions that they have experienced. These observed next states can be used as
goals for their sub-managers - consistency in providing rewards for appropriate
transitions is the only requirement.
Although the system gains power through hiding information, which reduces
the size of the state spaces that must be searched, such a step also introduces
inefficiencies. In some cases, if a sub-manager only knew the super-task of its
super-manager then it could bypass its manager with advantage. However the
reductio of this would lead to each sub-manager having as large a state space as
the whole problem, negating the intent of the feudal architecture. A more serious
concern is that the non-Markovian nature of the task at the higher levels (the future
course of the agent is determined by more detailed information than just the high
level states) can render the problem insoluble. Moore and Atkeson's (1993) system
for detecting such cases and choosing finer resolutions accordingly should integrate
well with the feudal system.
For the maze task, the feudal system learns much more about how to navigate than
the standard Q-learning system. Whereas the latter is completely concentrated on
a particular target, the former knows how to execute arbitrary high level moves
efficiently, even ones that are not used to find the current goal such as going East
from one quarter of the space 1-(2,2) to another 1-(1,2). This is why exploration
can be more efficient. It doesn't require a map of the space, or even a model of
state x action - 4 next state to be learned explicitly.
Jameson (1992) independently studied a system with some similarities to the feudal architecture. In one case, a high level agent learned on the basis of external
reinforcement to provide on a slow timescale direct commands (like reference trajectories) to a low level agent - which learned to obey it based on reinforcement
proportional to the square trajectory error. In another, low and high level agents
received the same reinforcement from the world, but the former was additionally
tasked on making its prediction of future reinforcement significantly dependent
on the output of the latter. Both systems learned very effectively to balance an
upended pole for long periods. They share the notion of hierarchical structure
with the feudal architecture, but the notion of control is somewhat different.
Multi-resolution methods have long been studied as ways of speeding up dynamic
programming (see Morin, 1978, for numerous examples and references). Standard
methods focus effectively on having a single task at every level and just having
coarser and finer representations of the value function. However, here we have
studied a slightly different problem in which managers have the flexibility to specify
different tasks which the sub-managers have to learn how to satisfy. This is more
complicated, but also more powerful.
From a psychological perspective, we have replaced a system in which there is a
single external reinforcement schedule with a system in which the rat's mind is
composed of a hierarchy of little Skinners.
Acknowledgements
We are most grateful to Andrew Moore, Mark Ring, Jürgen Schmidhuber, Satinder
Singh, Sebastian Thrun and Ron Williams for helpful discussions. This work
was supported by SERC, the Howard Hughes Medical Institute and the Canadian
Institute for Advanced Research (CIAR). GEH is the Noranda fellow of the CIAR.
References
[1] Barto, AC, Bradtke, SJ & Singh, SP (1991). Real-Time Learning and Control using Asynchronous Dynamic Programming. COINS technical report 91-57. Amherst: University of
Massachusetts.
[2] Barto, AC, Sutton, RS & Watkins, CJCH (1989). Learning and sequential decision
making. In M Gabriel & J Moore, editors, Learning and Computational Neuroscience:
Foundations of Adaptive Networks. Cambridge, MA: MIT Press, Bradford Books.
[3] Hinton, GE (1987). Connectionist Learning Procedures. Technical Report CMU-CS-87-115,
Department of Computer Science, Carnegie-Mellon University.
[4] Jacobs, RA, Jordan, MI, Nowlan, SJ & Hinton, GE. Adaptive mixtures of local experts.
Neural Computation, 3, pp 79-87.
[5] Jameson, JW (1992). Reinforcement control with hierarchical backpropagated adaptive
critics. Submitted to Neural Networks.
[6] Moore, AW & Atkeson, CC (1993). Memory-based reinforcement learning: efficient
computation with prioritized sweeping. In SJ Hanson, CL Giles & JD Cowan, editors
Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann.
[7] Morin, TL (1978). Computational advances in dynamic programming. In ML Puterman,
editor, Dynamic Programming and its Applications. New York: Academic Press.
[8] Moore, AW (1991). Variable resolution dynamic programming: Efficiently learning
action maps in multivariate real-valued state spaces. Proceedings of the Eighth Machine
Learning Workshop . San Mateo, CA: Morgan Kaufmann.
[9] Singh, SP (1992a). Transfer of learning by composing solutions for elemental sequential
tasks. Machine Learning, 8, pp 323-340.
[10] Singh, SP (1992b). Scaling reinforcement learning algorithms by learning variable temporal resolution models. Submitted to Machine Learning.
[11] Tesauro, G (1992). Practical issues in temporal difference learning. Machine Learning, 8,
pp 257-278.
[12J Watkins, qCH (1989). Learning from Delayed Rewards. PhD Thesis. University of Cambridge, England .
6,788 | 7,140 | Min-Max Propagation
Christopher Srinivasa
University of Toronto
Borealis AI
christopher.srinivasa@gmail.com

Inmar Givoni
University of Toronto
inmar.givoni@gmail.com

Siamak Ravanbakhsh
University of British Columbia
siamakx@cs.ubc.ca

Brendan J. Frey
University of Toronto
Vector Institute
Deep Genomics
frey@psi.toronto.edu
Abstract
We study the application of min-max propagation, a variation of belief propagation,
for approximate min-max inference in factor graphs. We show that for "any" high-order function that can be minimized in $O(\omega)$, the min-max message update can be
obtained using an efficient $O(K(\omega + \log(K)))$ procedure, where K is the number
of variables. We demonstrate how this generic procedure, in combination with
efficient updates for a family of high-order constraints, enables the application of
min-max propagation to efficiently approximate the NP-hard problem of makespan
minimization, which seeks to distribute a set of tasks on machines, such that the
worst case load is minimized.
1 Introduction
Min-max is a common optimization problem that involves minimizing a function with respect to
some variables X and maximizing it with respect to others Z: $\min_X \max_Z f(X, Z)$. For example,
f (X, Z) may be the cost or loss incurred by a system X under different operating conditions Z, in
which case the goal is to select the system whose worst-case cost is lowest. In Section 2, we show
that factor graphs present a desirable framework for solving min-max problems and in Section 3 we
review min-max propagation, a min-max based belief propagation algorithm.
Sum-product and min-sum inference using message passing has repeatedly produced groundbreaking
results in various fields, from low-density parity-check codes in communication theory (Kschischang
et al., 2001), to satisfiability in combinatorial optimization and latent-factor analysis in machine
learning.
An important question is whether "min-max" propagation can also yield good approximate solutions
when dealing with NP-hard problems? In this paper we answer this question in two parts.
I) Our main contribution is the introduction of an efficient min-max message passing procedure
for a generic family of high-order factors in Section 4. This enables us to approach new problems
through their factor graph formulation. Section 5.2 leverages our solution for high-order factors
to efficiently approximate the problem of makespan minimization using min-max propagation. II)
To better understand the pros and cons of min-max propagation, Section 5.1 compares it with the
alternative approach of reducing min-max inference to a sequence of Constraint Satisfaction Problems
(CSPs).
The feasibility of "exact" inference in a min-max semiring using the junction-tree method goes back
to (Aji and McEliece, 2000). More recent work of (Vinyals et al., 2013) presents the application
of min-max for junction-tree in a particular setting of the makespan problem. In this paper, we
investigate the usefulness of min-max propagation in the loopy case and more importantly provide an
efficient and generic algorithm to perform message passing with high-order factors.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
2 Min-Max Optimization on Factor Graphs
We are interested in factorizable min-max problems $\min_X \max_Z f(X, Z)$, i.e. min-max problems
that can be efficiently factored into a group of more simple functions. These have the following
properties:
1. The cardinality of either X or Z (say Z) is linear in available computing resources (e.g. Z
is an indexing variable a which is linear in the number of indices)
2. The other variable can be decomposed, so that X = (x1 , . . . , xN )
3. Given Z, the function f () depends on only a subset of the variables in X and/or exhibits a
form which is easier to minimize individually than when combined with f (X, Z)
Using $a \in \mathcal{F} = \{1, \ldots, F\}$ to index the values of Z and $X_{\partial a}$ to denote the subset of variables that
$f()$ depends on when $Z = a$, the min-max problem can be formulated as
$\min_X \max_a f_a(X_{\partial a})$.    (1)
In the following we use $i, j \in \mathcal{N} = \{1, \ldots, N\}$ to denote variable indices and $a, b \in \{1, \ldots, F\}$ for
factor indices. A Factor Graph (FG) is a bipartite graphical representation of the above factorization
properties. In it, each function (i.e. factor fa ) is represented by a square node and each variable is
represented by a circular node. Each factor node is connected via individual edges to the variables
on which it depends. We use $\partial i$ to denote the set of neighbouring factor indices for variable i, and
similarly we use $\partial a$ to denote the index set of variables connected to factor a.
This problem is related to the problems commonly analyzed using FGs (Bishop, 2006): the sum-product
problem, $\sum_X \prod_a f_a(X_{\partial a})$; the min-sum problem, $\min_X \sum_a f_a(X_{\partial a})$; and the max-product
problem, $\max_X \prod_a f_a(X_{\partial a})$, in which case we would respectively take the product, sum, and product
rather than the max of the factors in the FG.
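To make the correspondence concrete, here is a small illustrative brute-force solver over binary variables for the four problems above; this is our own sketch, with hypothetical factor tables, intended only to pin down which operations play which role.

```python
import itertools
import math

def brute_force(mode, factors, num_vars):
    # factors: list of (var_indices, table) pairs, where table maps a tuple
    # of binary values for those variables to a real number.
    agg, comb = {"sum-product": (sum, math.prod),
                 "min-sum":     (min, sum),
                 "max-product": (max, math.prod),
                 "min-max":     (min, max)}[mode]
    scores = (comb(table[tuple(X[i] for i in idx)] for idx, table in factors)
              for X in itertools.product((0, 1), repeat=num_vars))
    return agg(scores)

f = ([0, 1], {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 0.5})
print(brute_force("min-max", [f], num_vars=2))  # -> 0.5
```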
When dealing with NP-hard problems, the FG contains one or more loop(s). While NP-hard problems
have been represented and (approximately) solved directly using message passing on FGs in the
sum-product, min-sum, and max-product cases, to our knowledge this has never been done in the
min-max case.
3 Min-Max Propagation
An important question is how min-max can be computed on FGs. Consider the sum-product
algorithm on FGs which relies on the sum and product operations satisfying the distributive law
a(b + c) = ab + ac (Aji and McEliece, 2000).
Min and max operators also satisfy the distributive law: $\min(\max(\alpha, \beta), \max(\alpha, \gamma)) = \max(\alpha, \min(\beta, \gamma))$.
Using the $(\min, \max, \mathbb{R})$ semiring, the belief propagation updates are as follows.
Note that these updates are analogous to sum-product belief propagation updates, where the sum is
replaced by min and the product operation is replaced by max.
Variable-to-Factor Messages. The message sent from variable $x_i$ to function $f_a$ is
$\mu_{ia}(x_i) = \max_{b \in \partial i \setminus a} \mu_{bi}(x_i)$    (2)
where $\mu_{bi}(x_i)$ is the message sent from function $f_b$ to variable $x_i$ (as
shown in Fig. 1) and $\partial i \setminus a$ is the set of all neighbouring factors of
variable i, with a removed.
Figure 1: Variable-to-factor message.
Factor-to-Variable Messages. The message sent from function $f_a$ to variable $x_i$ is computed using
$\mu_{ai}(x_i) = \min_{X_{\partial a \setminus i}} \max\big(f_a(X_{\partial a}),\ \max_{j \in \partial a \setminus i} \mu_{ja}(x_j)\big)$    (3)
Figure 2: Factor-to-variable message.
Initialization Using the Identity. In the sum-product algorithm,
messages are usually initialized using knowledge of the identity of
the product operation. For example, if the FG is a tree with some node chosen as a root, messages can
be passed from the leaves to the root and back to the leaves. The initial message sent from a variable
that is a leaf involves taking the product for an empty set of incoming messages, and therefore the
message is initialized to the identity of the group $(\mathbb{R}^+, \times)$, which is $1^{\times} = 1$.
In this case, we need the identity of the $(\mathbb{R}, \max)$ semi-group, where $\max(1^{\max}, x) = x\ \forall x \in \mathbb{R}$,
that is $1^{\max} = -\infty$. Examining Eq. (3), we see that the message sent from a function that is a leaf will
involve maximizing over the empty set of incoming messages. So, we can initialize the message sent
from function $f_a$ to variable $x_i$ using $\mu_{ai}(x_i) = \min_{X_{\partial a \setminus i}} f_a(X_{\partial a})$.
Marginals. Min-max marginals, which involve "minimizing" over all variables except some $x_i$, can be computed by taking the max of all incoming messages at $x_i$ as in Fig. 3:
$$m(x_i) = \min_{X_{\mathcal{N} \setminus i}} \max_a f_a(X_{\partial a}) = \max_{b \in \partial i} \lambda_{bi}(x_i) \qquad (4)$$
The value of $x_i$ that achieves the global solution is given by $\arg\min_{x_i} m(x_i)$.
Figure 3: Marginals.
4 Efficient Update for High-Order Factors
When passing messages from factors to variables, we are interested in efficiently evaluating Eq. (3). In its original form, this computation is exponential in the number of neighbouring variables $|\partial a|$. Since many interesting problems require high-order factors in their FG formulation, many have investigated efficient min-sum and sum-product message passing through special families of, often sparse, factors (e.g. Tarlow et al., 2010; Potetz and Lee, 2008).
For the time being, consider factors over binary variables $x_i \in \{0, 1\}\ \forall i \in \partial a$ and further assume that efficient minimization of the factor $f_a$ is possible.
Assumption 1. The function $f_a : X_{\partial a} \to \mathbb{R}$ can be minimized in time $O(\omega)$ with any subset $B \subseteq \partial a$ of its variables fixed.
In the following we show how to calculate min-max factor-to-variable messages in $O(K(\omega + \log K))$, where $K = |\partial a| - 1$. In comparison to the limited settings in which high-order factors allow efficient min-sum and sum-product inference, we believe this result to be quite general.¹
The idea is to break the problem in half at each iteration. We show that, for one of these halves, we can obtain the min-max value using a single evaluation of $f_a$. By reducing the size of the original problem in this way, we only need to choose the final min-max message value from a set of candidates that is at most linear in $|\partial a|$.
Procedure. According to Eq. (3), in calculating the factor-to-variable message $\lambda_{ai}(x_i)$ for a fixed $x_i = c_i$, we are interested in efficiently solving the following optimization problem
$$\min_{X_{\partial a \setminus i}} \max\big( \mu_1(x_1), \mu_2(x_2), \ldots, \mu_K(x_K),\ f(X_{\partial a \setminus i}, x_i = c_i) \big) \qquad (5)$$
where, without loss of generality, we assume $\partial a \setminus i = \{1, \ldots, K\}$ and, for better readability, we drop the index $a$ in factors ($f_a$), messages ($\mu_{ka}$, $\lambda_{ai}$) and elsewhere when it is clear from the context.
There are $2^K$ configurations of $X_{\partial a \setminus i}$, one of which is the minimizing solution. We will divide this set in half in each iteration and save the minimum of one of these halves in the min-max candidate list $C$. The maximization part of the expression is equivalent to $\max\big( \max(\mu_1(x_1), \mu_2(x_2), \ldots, \mu_K(x_K)),\ f(X_{\partial a}, x_i = c_i) \big)$.
Let $\mu_{j_1}(c_{j_1})$ be the largest $\mu$ value, obtained at some index $j_1$ for some value $c_{j_1} \in \{0, 1\}$; in other words, $\mu_{j_1}(c_{j_1}) = \max(\mu_1(0), \mu_1(1), \ldots, \mu_K(0), \mu_K(1))$. For future use, let $j_2, \ldots, j_K$ be the indices of the next largest message values, and let $c_{j_2}, \ldots, c_{j_K}$ be their corresponding assignments. Note that the same variable's messages (e.g., $\mu_3(0)$ and $\mu_3(1)$) could appear at different locations in this sorted list.
¹Here the minimization problem on any particular factor is assumed to be solvable in a fixed amount of time. In many applications, doing this might itself involve running another entire inference algorithm. However, our algorithm is agnostic to such choices for the optimization of individual factors.
We then partition the set of all assignments to $X_{\partial a \setminus i}$ into two sets of size $2^{K-1}$ depending on the assignment to $x_{j_1}$: 1) $x_{j_1} = c_{j_1}$ or 2) $x_{j_1} = 1 - c_{j_1}$. The minimization of Eq. (5) can likewise be divided into two minimizations, each having $x_{j_1}$ set to a different value. For $x_{j_1} = c_{j_1}$, Eq. (5) simplifies to
$$\gamma^{(j_1)} = \max\Big( \mu_{j_1}(c_{j_1}),\ \min_{X_{\partial a \setminus \{i, j_1\}}} f(X_{\partial a \setminus \{i, j_1\}}, x_i = c_i, x_{j_1} = c_{j_1}) \Big) \qquad (6)$$
where we need to minimize $f$ subject to fixed $x_i, x_{j_1}$. We repeat the procedure above at most $K$ times, for $j_1, \ldots, j_k, \ldots, j_K$, where at each iteration we obtain a candidate solution $\gamma^{(j_m)}$ that we add to the candidate set $C = \{\gamma^{(j_1)}, \ldots, \gamma^{(j_K)}\}$. The final solution is the smallest value in the candidate set, $\min C$.
Early Termination. If $j_k = j_{k'}$ for some $1 \le k, k' \le K$, it means that we have performed the minimization of Eq. (5) for both $x_{j_k} = 0$ and $x_{j_k} = 1$. This means that we can terminate the iterations and report the minimum of the current candidate set. Adding the cost of sorting, $O(K \log K)$, to the worst-case cost of minimizing $f()$ in Eq. (6) gives a total cost of $O(K(\log K + \omega))$.
Arbitrary Discrete Variables. This algorithm is not limited to binary variables. The main difference in dealing with cardinality $D > 2$ is that we run the procedure for at most $K(D-1)$ iterations, and for early termination, all of a variable's values should appear among the top $K(D-1)$ incoming message values.
For some factors, we can go further and calculate all factor-to-variable messages leaving $f_a$ in time linear in $|\partial a|$. The following section derives such an update rule for a type of factor that we use in the makespan application of Section 5.2.
4.1 Choose-One Constraint
If $f_a(X_{\partial a})$ implements a constraint such that only a subset of configurations $\mathcal{X}_A \subseteq \mathcal{X}_{\partial a}$ of the possible configurations of $X_{\partial a}$ are allowed, then the message from function $f_a$ to $x_i$ simplifies to
$$\lambda_{ai}(x'_i) = \min_{X_{\partial a} \in \mathcal{X}_A \,|\, x_i = x'_i}\ \max_{j \in \partial a \setminus i} \mu_{ja}(x_j) \qquad (7)$$
In many applications, this can be further simplified by taking into account properties of the constraints. Here, we describe such a procedure for factors which enforce that exactly one of their binary variables be set to one and all others to zero. Consider the constraint $f(x_1, \ldots, x_K) = \delta(\sum_k x_k, 1)$ for binary variables $x_k \in \{0, 1\}$, where $\delta(x, x')$ evaluates to $-\infty$ iff $x = x'$ and $\infty$ otherwise.²
Using $X_{\setminus i} = (x_1, x_2, \ldots, x_{i-1}, x_{i+1}, \ldots, x_K)$ for $X$ with $x_i$ removed, Eq. (7) becomes
$$\lambda_i(x_i) = \min_{X_{\setminus i} \,|\, \sum_{k=1}^K x_k = 1}\ \max_{k \neq i} \mu_k(x_k) = \begin{cases} \max_{k \neq i} \mu_k(0) & \text{if } x_i = 1 \\ \min_{X_{\setminus i} \in \{(1,0,\ldots,0), (0,1,\ldots,0), \ldots, (0,0,\ldots,1)\}} \max_{k \neq i} \mu_k(x_k) & \text{if } x_i = 0 \end{cases} \qquad (8)$$
A naive implementation of the above update is $O(K^2)$ for each $x_i$, or $O(K^3)$ for sending messages to all neighbouring $x_i$. However, further simplification is possible. Consider the calculation of $\max_{k \neq i} \mu_k(x_k)$ for $X_{\setminus i} = (1, 0, \ldots, 0)$ and $X_{\setminus i} = (0, 1, \ldots, 0)$. All but the first two terms in these two sets are the same (all zero), so most of the comparisons made when computing $\max_{k \neq i} \mu_k(x_k)$ for the first set can be reused when computing it for the second set. This extends to all $K - 1$ sets $(1, 0, \ldots, 0), \ldots, (0, 0, \ldots, 1)$, and also extends across the message updates for different $x_i$'s. After examining the shared terms in the maximizations, we see that all that is needed is
$$k_i^{(1)} = \arg\max_{k \neq i} \mu_k(0), \qquad k_i^{(2)} = \arg\max_{k \neq i,\, k \neq k_i^{(1)}} \mu_k(0), \qquad (9)$$
the indices of the largest and second-largest values of $\mu_k(0)$ with $i$ removed from consideration. Note that these can be computed for all neighbouring $x_i$ in time linear in $K$, by finding the top three values of $\mu_k(0)$ and selecting two of them appropriately depending on whether $\mu_i(0)$ is among the three values. Using this notation, the above update simplifies as follows:
²Similar to any other semiring, $\pm\infty$ as the identities of min and max have a special role in defining constraints.
$$\lambda_i(x_i) = \begin{cases} \mu_{k_i^{(1)}}(0) & \text{if } x_i = 1 \\ \min\Big( \min_{k \neq i, k_i^{(1)}} \max\big(\mu_{k_i^{(1)}}(0), \mu_k(1)\big),\ \max\big(\mu_{k_i^{(1)}}(1), \mu_{k_i^{(2)}}(0)\big) \Big) & \text{if } x_i = 0 \end{cases}$$
$$= \begin{cases} \mu_{k_i^{(1)}}(0) & \text{if } x_i = 1 \\ \min\Big( \max\big(\mu_{k_i^{(1)}}(0),\ \min_{k \neq i, k_i^{(1)}} \mu_k(1)\big),\ \max\big(\mu_{k_i^{(1)}}(1), \mu_{k_i^{(2)}}(0)\big) \Big) & \text{if } x_i = 0 \end{cases} \qquad (10)$$
The term $\min_{k \neq i, k_i^{(1)}} \mu_k(1)$ also need not be recomputed for every $x_i$, since terms are shared. Define
$$s_i = \arg\min_{k \neq i,\, k \neq k_i^{(1)}} \mu_k(1), \qquad (11)$$
the index of the smallest value of $\mu_k(1)$ with $i$ and $k_i^{(1)}$ removed from consideration. This can be computed efficiently for all $i$ in time linear in $K$ by finding the smallest three values of $\mu_k(1)$ and selecting one of them appropriately, depending on whether $\mu_i(1)$ and/or $\mu_{k_i^{(1)}}(1)$ are among the three values. The resulting message update for the $K$-choose-1 constraint becomes
$$\lambda_i(x_i) = \begin{cases} \mu_{k_i^{(1)}}(0) & \text{if } x_i = 1 \\ \min\Big( \max\big(\mu_{k_i^{(1)}}(0), \mu_{s_i}(1)\big),\ \max\big(\mu_{k_i^{(1)}}(1), \mu_{k_i^{(2)}}(0)\big) \Big) & \text{if } x_i = 0 \end{cases} \qquad (12)$$
This shows that messages to all neighbouring variables x1 , ..., xK can be obtained in time that is
linear in K. This type of constraint also has a tractable form in min-sum and sum-product inference,
albeit of a different form (e.g. see Gail et al., 1981; Gupta et al., 2007).
5 Experiments and Applications
In the first part of this section we compare min-max propagation with the only alternative min-max
inference method over FGs that relies on sum-product reduction. In the second part, we formulate
the real-world problem of makespan minimization as a min-max inference problem, with high-order
factors. In this application, the sum-product reduction is not tractable; to formulate the makespan
problem using a FG we need to use high-order factors that do not allow an efficient (polynomial
time) sum-product message update. However, min-max propagation can be applied using the efficient
updates of the previous section.
5.1 Sum-Product Reduction vs. Min-Max Propagation
Like all belief propagation algorithms, min-max propagation is exact when the FG is a tree. However, our first point of interest is how min-max propagation performs on loopy graphs. For this, we compare its performance against the sum-product (or CSP) reduction.
The sum-product reduction of (Ravanbakhsh et al., 2014) seeks the min-max value using bisection search over all values in the range of all factors in the FG, i.e. $\mathcal{Y} = \{f_a(X_{\partial a})\ \forall a, X_{\partial a}\}$. In each step of the search, a value $y \in \mathcal{Y}$ is used to reduce the min-max problem to a CSP. This CSP is satisfiable iff the min-max solution $y^* = \min_X \max_a f_a(X_{\partial a})$ is at most the current $y$. The complexity of this search procedure is $O(\log(|\mathcal{Y}|)\,\tau)$, where $\tau$ is the complexity of solving the CSP. Following that paper, we use Perturbed Belief Propagation (PBP) (Ravanbakhsh and Greiner, 2015) to solve the resulting CSPs.
Experimental Setup. Our setup is based on the following observations.
Observation 1. For any strictly monotonically increasing function $g : \mathbb{R} \to \mathbb{R}$,
$$\arg\min_X \max_a f_a(X_{\partial a}) = \arg\min_X \max_a g(f_a(X_{\partial a}))$$
that is, only the ordering of the factor values affects the min-max assignment. By the same argument, applying a monotonic $g()$ does not inherently change the behaviour of min-max propagation either.
Figure 4: Min-max performance of different methods on Erdos-Renyi random graphs (mean min-max solution vs. connectivity). Methods compared: Min-Max Propagation (random, max-support, and min-value decimation), the PBP CSP solver (run for the min-max propagation iteration budget and for 1000 iterations), an upper bound, and brute force. Top: N=10, Bottom: N=100; Left: D=4, Middle: D=6, Right: D=8.
Observation 2. Only the factor(s) which output the max value, i.e. the max factor(s), matter. For all other factors, the variables involved can be set in any way as long as the factor's value remains smaller than or equal to that of the max factor.
This means that variables that do not appear in the max factor(s), which we call free variables, could
potentially assume any value without affecting the min-max value. Free variables can be identified
from their uniform min-max marginals. This also means that the min-max assignment is not unique.
This phenomenon is unique to min-max inference and does not appear in min-sum and sum-product
counterparts.
We rely on this observation in designing benchmark random min-max inference problems: i) we use
integers as the range of factor values; ii) by selecting all factor values in the same range, we can use
the number of factors as a control parameter for the difficulty of the inference problem.
For $N$ variables $x_1, \ldots, x_N$, where each $x_i \in \{1, \ldots, D\}$, we draw Erdos-Renyi graphs with edge probability $p \in (0, 1]$ and treat each edge as a pairwise factor. Consider the factor $f_a(x_i, x_j) = \min(\pi(x_i), \pi'(x_j))$, where $\pi, \pi'$ are permutations of $\{1, \ldots, D\}$. With $D = 2$, this definition of factor $f_a$ reduces to a 2-SAT factor. This setup for random min-max instances therefore generalizes different K-SAT settings, where a min-max solution of $\min_X \max_a f_a(X_{\partial a}) = 1$ for $D = 2$ corresponds to a satisfying assignment. The same argument with $K > 2$ establishes the "NP-hardness" of min-max inference in factor graphs.
We test our setup on graphs with $N \in \{10, 100\}$ variables and cardinality $D \in \{4, 6, 8\}$. For each choice of $D$ and $N$, we run min-max propagation and the sum-product reduction for various connectivity values in the Erdos-Renyi graph. Both methods use random sequential updates. For $N = 10$ we also report the exact min-max solutions.
Min-max propagation is run for a maximum of $T = 1000$ iterations or until convergence, whichever comes first. The number of iterations actually taken by min-max propagation is reported in the appendix. The PBP used in the sum-product reduction requires a fixed $T$; we report results for $T$ equal to the worst-case min-max convergence iterations (see appendix) and for $T = 1000$ iterations. Each setting is repeated 10 times for a random graph of a fixed connectivity value $p \in (0, 1]$.
Decimation. To obtain a final min-max assignment we need to fix the free variables. For this we use
a decimation scheme similar to what is used with min-sum inference or in finding a satisfying CSP
assignment in sum-product. We consider three different decimation procedures:
Random: Randomly choose a variable and set it to the state with the minimum min-max marginal value.
Min-value: Fix the variable with the minimum min-max marginal value.
Max-support: Choose the variable for which its min value occurs with the highest frequency.
Results. Fig. 4 compares the performance of sum-product reduction that relies on PBP with min-max
propagation and brute-force. For min-max propagation we report the results for three different
decimation procedures. Each column uses a different variable cardinality D. While this changes
the range of values in the factors, we observe a similar trend in performance of different methods.
In the top row, we also report the exact min-max value. As expected, by increasing the number of
factors (connectivity) the min-max value increases. Overall, the sum-product reduction (although
asymptotically more expensive), produces slightly better results. Also different decimation schemes
do not significantly affect the results in these experiments.
5.2 Makespan Minimization
The objective in the makespan problem is to schedule a set of given jobs, each with a load, on machines which operate in parallel, such that the total load of the machine with the largest total load (i.e. the makespan) is minimized (Pinedo, 2012). This problem has a range of applications, for example in the energy sector, where the machines represent turbines and the jobs represent electrical power demands.
Given $N$ distinct jobs $\mathcal{N} = \{1, \ldots, n, \ldots, N\}$ and $M$ machines $\mathcal{M} = \{1, \ldots, m, \ldots, M\}$, where $p_{nm}$ represents the load job $n$ places on machine $m$, we denote by the assignment variable $x_{nm}$ whether or not job $n$ is assigned to machine $m$. The task is to find the set of assignments $x_{nm}\ \forall n \in \mathcal{N}, \forall m \in \mathcal{M}$ which minimizes the total cost function below, while satisfying the associated set of constraints:
Figure 5: Makespan FG.
$$\min_X \max_m \Big( \sum_{n=1}^N p_{nm} x_{nm} \Big) \quad \text{s.t.} \quad \sum_{m=1}^M x_{nm} = 1,\ \ x_{nm} \in \{0, 1\}\ \ \forall n \in \mathcal{N}, m \in \mathcal{M} \qquad (13)$$
The makespan minimization problem is NP-hard for $M = 2$ and strongly NP-hard for $M > 2$ (Garey and Johnson, 1979). Two well-known approximation algorithms are the 2-approximation greedy algorithm and the 4/3-approximation greedy algorithm. In the former, all machines are initialized as empty. We then select one job at random and assign it to the machine with the least total load given the current job assignments. We repeat this process until no jobs remain. This algorithm is guaranteed to give a schedule with a makespan no more than 2 times that of the optimal schedule (Behera, 2012; Behera and Laha, 2012). The 4/3-approximation algorithm, a.k.a. the Longest Processing Time (LPT) algorithm, operates like the 2-approximation algorithm with the exception that, at each iteration, we always take the job with the next largest load rather than selecting one of the remaining jobs at random (Graham, 1966).
Figure 6: Min-max ratio to a lower bound (lower is better) obtained by LPT with a 4/3-approximation guarantee versus min-max propagation using different decimation procedures. N is the number of jobs and M is the number of machines. In this setting, all jobs have the same run-time across all machines.
M   N    LPT    Min-Max Prop   Min-Max Prop        Min-Max Prop
                (Random Dec.)  (Max-Support Dec.)  (Min-Value Dec.)
8   25   1.178  1.183          1.091               1.128
8   26   1.144  1.167          1.079               1.112
8   33   1.135  1.144          1.081               1.093
8   34   1.117  1.132          1.071               1.086
8   41   1.112  1.117          1.055               1.077
8   42   1.094  1.109          1.079               1.074
10  31   1.184  1.168          1.110               1.105
10  32   1.165  1.186          1.109               1.111
10  41   1.138  1.183          1.077               1.088
10  42   1.124  1.126          1.074               1.090
10  51   1.112  1.131          1.077               1.081
10  52   1.102  1.100          1.051               1.076
FG Representation. Fig. 5 shows the FG with binary variables $x_{nm}$, where the factors are
$$f_m(x_{1m}, \ldots, x_{Nm}) = \sum_{n=1}^N p_{nm} x_{nm}\ \ \forall m; \qquad g_n(x_{n1}, \ldots, x_{nM}) = \begin{cases} 0, & \sum_{m=1}^M x_{nm} = 1 \\ \infty, & \text{otherwise} \end{cases}\ \ \forall n$$
where $f()$ computes the total load for a machine and $g()$ enforces the constraint in Eq. (13). We see that the following min-max problem over this FG minimizes the makespan:
$$\min_X \max\Big( \max_m f_m(x_{1m}, \ldots, x_{Nm}),\ \max_n g_n(x_{n1}, \ldots, x_{nM}) \Big) \qquad (14)$$
Using the procedure for passing messages through the $g$ constraints in Section 4.1 and the procedure of Section 4 for $f$, we can efficiently approximate the min-max solution of Eq. (14) by message passing. Note that the factor $f()$ in the sum-product reduction of this FG has a non-trivial form that does not allow an efficient message update.
Results. In an initial set of experiments, we compare min-max propagation (with different decimation procedures) against LPT on a set of benchmark experiments designed in (Gupta and Ruiz-Torres, 2001) for the identical machine version of the problem, i.e. a task has the same processing time on all machines.
Fig. 6 shows the scenario where min-max propagation performs best against the LPT algorithm. We see that this scenario involves large instances (from the additional results in the appendix, we see that our framework does not perform as well on small instances). From this table, we also see that max-support decimation almost always outperforms the other decimation schemes.
Figure 7: Min-max ratio (LP relaxation to that of each method) of min-max propagation versus the same ratio for the method of (Vinyals et al., 2013) (higher is better). Modes 0, 1 and 2 correspond to uncorrelated, machine-correlated and machine-task-correlated instances, respectively.
Mode  N/M  (Vinyals et al., 2013)  Min-Max Prop
0     5    0.93(0.03)              0.95(0.01)
0     10   0.94(0.01)              0.93(0.01)
0     15   0.94(0.00)              0.90(0.01)
1     5    0.90(0.01)              0.86(0.07)
1     10   0.90(0.00)              0.88(0.00)
1     15   0.87(0.01)              0.73(0.03)
2     5    0.81(0.01)              0.89(0.01)
2     10   0.81(0.01)              0.89(0.01)
2     15   0.78(0.01)              0.86(0.01)
We then test min-max propagation with max-support decimation against a more difficult version of the problem: the unrelated machine model, where each job has a different processing time on each machine. Specifically, we compare our method against that of (Vinyals et al., 2013), which also uses the distributive law for min-max inference to solve a load balancing problem. However, that paper studies a sparsified version of the unrelated machines problem where tasks are restricted to a subset of machines (i.e. they have infinite processing time for particular machines). This restriction allows for decomposition of their loopy graph into an almost equivalent tree structure, something which cannot be done in the general setting. Nevertheless, we can still compare their results to what we can achieve using min-max propagation with infinite-time constraints.
We use the same problem setup with three different ways of generating the processing times (uncorrelated, machine correlated, and machine/task correlated) and compare our answers to IBM's CPLEX solver exactly as the authors do in that paper (where a high ratio is better). Fig. 7 shows a subset of results. Here again, min-max propagation works best for large instances. Overall, despite the generality of our approach, the results are comparable.
6 Conclusion
This paper demonstrates that FGs are well suited to modeling min-max optimization problems with factorization characteristics. To solve such problems we introduced and evaluated min-max propagation, a variation of the well-known belief propagation algorithm. In particular, we introduced an efficient procedure for passing min-max messages through high-order factors that applies to a wide range of functions. This procedure equips min-max propagation with ammunition unavailable to min-sum and sum-product message passing, and it could enable its application to a wide range of problems. In this work we demonstrated how to leverage efficient min-max propagation in the presence of high-order factors to approximate the NP-hard makespan problem. In the future, we plan to investigate the application of min-max propagation to a variety of combinatorial problems, known as bottleneck problems (Edmonds and Fulkerson, 1970), that can be naturally formulated as min-max inference problems over FGs.
References
S. M. Aji and R. J. McEliece. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325–343, 2000.
D. Behera. Complexity on parallel machine scheduling: A review. In S. Sathiyamoorthy, B. E. Caroline, and J. G. Jayanthi, editors, Emerging Trends in Science, Engineering and Technology, Lecture Notes in Mechanical Engineering, pages 373–381. Springer India, 2012.
D. K. Behera and D. Laha. Comparison of heuristics for identical parallel machine scheduling. Advanced Materials Research, 488:1708–1712, 2012.
C. M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2006.
J. Edmonds and D. R. Fulkerson. Bottleneck extrema. Journal of Combinatorial Theory, 8(3):299–306, 1970.
M. H. Gail, J. H. Lubin, and L. V. Rubinstein. Likelihood calculations for matched case-control studies and survival studies with tied death times. Biometrika, pages 703–707, 1981.
M. R. Garey and D. S. Johnson. Computers and Intractability, volume 174. Freeman, San Francisco, 1979.
R. L. Graham. Bounds for certain multiprocessing anomalies. Bell System Technical Journal, 45(9):1563–1581, 1966.
J. N. D. Gupta and A. J. Ruiz-Torres. A LISTFIT heuristic for minimizing makespan on identical parallel machines. Production Planning & Control, 12(1):28–36, 2001.
R. Gupta, A. A. Diwan, and S. Sarawagi. Efficient inference with cardinality-based clique potentials. In Proceedings of the 24th International Conference on Machine Learning, pages 329–336. ACM, 2007.
F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
M. Pinedo. Scheduling: Theory, Algorithms, and Systems. Springer, 2012.
B. Potetz and T. S. Lee. Efficient belief propagation for higher-order cliques using linear constraint nodes. Computer Vision and Image Understanding, 112(1):39–54, 2008.
S. Ravanbakhsh and R. Greiner. Perturbed message passing for constraint satisfaction problems. Journal of Machine Learning Research, 16:1249–1274, 2015.
S. Ravanbakhsh, C. Srinivasa, B. Frey, and R. Greiner. Min-max problems on factor graphs. In Proceedings of the 31st International Conference on Machine Learning, ICML '14, 2014.
D. Tarlow, I. Givoni, and R. Zemel. HOP-MAP: Efficient message passing with high order potentials. Journal of Machine Learning Research - Proceedings Track, 9:812–819, 2010.
M. Vinyals, K. S. Macarthur, A. Farinelli, S. D. Ramchurn, and N. R. Jennings. A message-passing approach to decentralized parallel machine scheduling. The Computer Journal, 2013.
6,789 | 7,141 | What Uncertainties Do We Need in Bayesian Deep
Learning for Computer Vision?
Alex Kendall
University of Cambridge
[email protected]
Yarin Gal
University of Cambridge
[email protected]
Abstract
There are two major types of uncertainty one can model. Aleatoric uncertainty
captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model: uncertainty which can be explained
away given enough data. Traditionally it has been difficult to model epistemic
uncertainty in computer vision, but with new Bayesian deep learning tools this
is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present
a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework
with per-pixel semantic segmentation and depth regression tasks. Further, our
explicit uncertainty formulation leads to new loss functions for these tasks, which
can be interpreted as learned attenuation. This makes the loss more robust to noisy
data, also giving new state-of-the-art results on segmentation and depth regression
benchmarks.
1 Introduction
Understanding what a model does not know is a critical part of many machine learning systems.
Today, deep learning algorithms are able to learn powerful representations which can map high dimensional data to an array of outputs. However these mappings are often taken blindly and assumed
to be accurate, which is not always the case. In two recent examples this has had disastrous consequences. In May 2016 there was the first fatality from an assisted driving system, caused by the
perception system confusing the white side of a trailer for bright sky [1]. In a second recent example, an image classification system erroneously identified two African Americans as gorillas [2],
raising concerns of racial discrimination. If both these algorithms were able to assign a high level
of uncertainty to their erroneous predictions, then the system may have been able to make better
decisions and likely avoid disaster.
Quantifying uncertainty in computer vision applications can be largely divided into regression settings such as depth regression, and classification settings such as semantic segmentation. Existing
approaches to model uncertainty in such settings in computer vision include particle filtering and
conditional random fields [3, 4]. However many modern applications mandate the use of deep learning to achieve state-of-the-art performance [5], with most deep learning models not able to represent
uncertainty. Deep learning does not allow for uncertainty representation in regression settings for
example, and deep learning classification models often give normalised score vectors, which do not
necessarily capture model uncertainty. For both settings uncertainty can be captured with Bayesian
deep learning approaches, which offer a practical framework for understanding uncertainty with
deep learning models [6].
In Bayesian modeling, there are two main types of uncertainty one can model [7]. Aleatoric uncertainty captures noise inherent in the observations. This could be for example sensor noise or motion
noise, resulting in uncertainty which cannot be reduced even if more data were to be collected. On
the other hand, epistemic uncertainty accounts for uncertainty in the model parameters: uncertainty
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
Figure 1: Illustrating the difference between aleatoric and epistemic uncertainty for semantic segmentation on the CamVid dataset [8]. Panels: (a) input image, (b) ground truth, (c) semantic segmentation, (d) aleatoric uncertainty, (e) epistemic uncertainty. Aleatoric uncertainty captures noise inherent in the observations: in (d) our model exhibits increased aleatoric uncertainty on object boundaries and for objects far from the camera. Epistemic uncertainty accounts for our ignorance about which model generated our collected data; this is a notably different measure of uncertainty, and in (e) our model exhibits increased epistemic uncertainty for semantically and visually challenging pixels. The bottom row shows a failure case of the segmentation model, where the model fails to segment the footpath due to increased epistemic uncertainty, but not aleatoric uncertainty.
which captures our ignorance about which model generated our collected data. This uncertainty
can be explained away given enough data, and is often referred to as model uncertainty. Aleatoric
uncertainty can further be categorized into homoscedastic uncertainty, uncertainty which stays constant for different inputs, and heteroscedastic uncertainty. Heteroscedastic uncertainty depends on
the inputs to the model, with some inputs potentially having more noisy outputs than others. Heteroscedastic uncertainty is especially important for computer vision applications. For example, for
depth regression, highly textured input images with strong vanishing lines are expected to result in
confident predictions, whereas an input image of a featureless wall is expected to have very high
uncertainty.
In this paper we make the observation that in many big data regimes (such as the ones common
to deep learning with image data), it is most effective to model aleatoric uncertainty, uncertainty
which cannot be explained away. This is in comparison to epistemic uncertainty which is mostly
explained away with the large amounts of data often available in machine vision. We further show
that modeling aleatoric uncertainty alone comes at a cost. Out-of-data examples, which can be
identified with epistemic uncertainty, cannot be identified with aleatoric uncertainty alone.
For this we present a unified Bayesian deep learning framework which allows us to learn mappings from input data to aleatoric uncertainty and compose these together with epistemic uncertainty approximations. We derive our framework for both regression and classification applications
and present results for per-pixel depth regression and semantic segmentation tasks (see Figure 1 and
the supplementary video for examples). We show how modeling aleatoric uncertainty in regression
can be used to learn loss attenuation, and develop a complementary approach for the classification
case. This demonstrates the efficacy of our approach on difficult and large scale tasks.
The main contributions of this work are;
1. We capture an accurate understanding of aleatoric and epistemic uncertainties, in particular
with a novel approach for classification,
2. We improve model performance by 1–3% over non-Bayesian baselines by reducing the
effect of noisy data with the implied attenuation obtained from explicitly representing
aleatoric uncertainty,
3. We study the trade-offs between modeling aleatoric or epistemic uncertainty by characterizing the properties of each uncertainty and comparing model performance and inference
time.
2 Related Work
Existing approaches to Bayesian deep learning capture either epistemic uncertainty alone, or
aleatoric uncertainty alone [6]. These uncertainties are formalised as probability distributions over
either the model parameters, or model outputs, respectively. Epistemic uncertainty is modeled by
placing a prior distribution over a model's weights, and then trying to capture how much these weights vary given some data. Aleatoric uncertainty on the other hand is modeled by placing a distribution over the output of the model. For example, in regression our outputs might be modeled as corrupted with Gaussian random noise. In this case we are interested in learning the noise's variance
as a function of different inputs (such noise can also be modeled with a constant value for all data
points, but this is of less practical interest). These uncertainties, in the context of Bayesian deep
learning, are explained in more detail in this section.
2.1 Epistemic Uncertainty in Bayesian Deep Learning
To capture epistemic uncertainty in a neural network (NN) we put a prior distribution over its weights, for example a Gaussian prior distribution: $W \sim \mathcal{N}(0, I)$.
Such a model is referred to as a Bayesian neural network (BNN) [9–11]. Bayesian neural networks replace the deterministic network's weight parameters with distributions over these parameters, and instead of optimising the network weights directly we average over all possible weights (referred to as marginalisation). Denoting the random output of the BNN as $f^W(x)$, we define the model likelihood $p(y | f^W(x))$. Given a dataset $X = \{x_1, \ldots, x_N\}$, $Y = \{y_1, \ldots, y_N\}$, Bayesian inference is used to compute the posterior over the weights $p(W | X, Y)$. This posterior captures the set of plausible model parameters, given the data.
For regression tasks we often define our likelihood as a Gaussian with mean given by the model output: $p(y | f^W(x)) = \mathcal{N}(f^W(x), \sigma^2)$, with an observation noise scalar $\sigma$. For classification, on the other hand, we often squash the model output through a softmax function, and sample from the resulting probability vector: $p(y | f^W(x)) = \text{Softmax}(f^W(x))$.
BNNs are easy to formulate, but difficult to perform inference in. This is because the marginal probability $p(Y | X)$, required to evaluate the posterior $p(W | X, Y) = p(Y | X, W)\,p(W)/p(Y | X)$, cannot be evaluated analytically. Different approximations exist [12–15]. In these approximate inference techniques, the posterior $p(W | X, Y)$ is fitted with a simple distribution $q_\theta(W)$, parameterised by $\theta$. This replaces the intractable problem of averaging over all weights in the BNN with an optimisation task, where we seek to optimise over the parameters of the simple distribution instead of the original neural network's parameters.
Dropout variational inference is a practical approach for approximate inference in large and complex
models [15]. This inference is done by training a model with dropout before every weight layer,
and by also performing dropout at test time to sample from the approximate posterior (stochastic
forward passes, referred to as Monte Carlo dropout). More formally, this approach is equivalent
to performing approximate variational inference where we find a simple distribution $q_\theta(W)$ in a
tractable family which minimises the Kullback-Leibler (KL) divergence to the true model posterior
p(W|X, Y). Dropout can be interpreted as a variational Bayesian approximation, where the approximating distribution is a mixture of two Gaussians with small variances and the mean of one of
the Gaussians is fixed at zero. The minimisation objective is given by [16]:
$$\mathcal{L}(\theta, p) = -\frac{1}{N} \sum_{i=1}^N \log p(y_i | f^{\widehat{W}_i}(x_i)) + \frac{1-p}{2N} \|\theta\|^2 \qquad (1)$$
with $N$ data points, dropout probability $p$, samples $\widehat{W}_i \sim q_\theta(W)$, and $\theta$ the set of the simple distribution's parameters to be optimised (weight matrices in dropout's case).
In regression, for example, the negative log likelihood can be further simplified as
$$-\log p(y_i | f^{\widehat{W}_i}(x_i)) \propto \frac{1}{2\sigma^2} \|y_i - f^{\widehat{W}_i}(x_i)\|^2 + \frac{1}{2} \log \sigma^2 \qquad (2)$$
for a Gaussian likelihood, with $\sigma$ the model's observation noise parameter, capturing how much noise we have in the outputs.
Epistemic uncertainty in the weights can be reduced by observing more data. This uncertainty induces prediction uncertainty by marginalising over the (approximate) weights posterior distribution.
For classification this can be approximated using Monte Carlo integration as follows:
$$p(y = c \,|\, x, X, Y) \approx \frac{1}{T} \sum_{t=1}^T \text{Softmax}(f^{\widehat{W}_t}(x)) \qquad (3)$$
with $T$ sampled masked model weights $\widehat{W}_t \sim q_\theta(W)$, where $q_\theta(W)$ is the Dropout distribution [6]. The uncertainty of this probability vector $p$ can then be summarised using the entropy of the probability vector: $H(p) = -\sum_{c=1}^C p_c \log p_c$. For regression this epistemic uncertainty is captured by the predictive variance, which can be approximated as:
$$\text{Var}(y) \approx \sigma^2 + \frac{1}{T} \sum_{t=1}^T f^{\widehat{W}_t}(x)^T f^{\widehat{W}_t}(x) - E(y)^T E(y) \qquad (4)$$
with predictions in this epistemic model done by approximating the predictive mean $E(y) \approx \frac{1}{T} \sum_{t=1}^T f^{\widehat{W}_t}(x)$. The first term in the predictive variance, $\sigma^2$, corresponds to the amount of noise inherent in the data (which will be explained in more detail soon). The second part of the predictive variance measures how much the model is uncertain about its predictions; this term will vanish when we have zero parameter uncertainty (i.e. when all draws $\widehat{W}_t$ take the same constant value).
2.2 Heteroscedastic Aleatoric Uncertainty
In the above we captured model uncertainty (uncertainty over the model parameters) by approximating the distribution $p(W | X, Y)$. To capture aleatoric uncertainty in regression, we would have to tune the observation noise parameter $\sigma$.
Homoscedastic regression assumes a constant observation noise $\sigma$ for every input point $x$. Heteroscedastic regression, on the other hand, assumes that observation noise can vary with input $x$ [17, 18]. Heteroscedastic models are useful in cases where parts of the observation space might have higher noise levels than others. In non-Bayesian neural networks, this observation noise parameter is often fixed as part of the model's weight decay, and ignored. However, when made data-dependent, it can be learned as a function of the data:
$$\mathcal{L}_{\text{NN}}(\theta) = \frac{1}{N} \sum_{i=1}^N \frac{1}{2\sigma(x_i)^2} \|y_i - f(x_i)\|^2 + \frac{1}{2} \log \sigma(x_i)^2 \qquad (5)$$
with added weight decay parameterised by $\lambda$ (and similarly for $l_1$ loss). Note that here, unlike the above, variational inference is not performed over the weights; instead we perform MAP inference, finding a single value for the model parameters $\theta$. This approach does not capture epistemic model uncertainty, as epistemic uncertainty is a property of the model and not of the data.
In the next section we will combine these two types of uncertainty in a single model. We will see how heteroscedastic noise can be interpreted as model attenuation, and develop a complementary approach for the classification case.
3 Combining Aleatoric and Epistemic Uncertainty in One Model
In the previous section we described existing Bayesian deep learning techniques. In this section we
present novel contributions which extend this existing literature. We develop models that will allow
us to study the effects of modeling either aleatoric uncertainty alone, epistemic uncertainty alone,
or modeling both uncertainties together in a single model. This is followed by an observation that
aleatoric uncertainty in regression tasks can be interpreted as learned loss attenuation, making the
loss more robust to noisy data. We follow that by extending the ideas of heteroscedastic regression
to classification tasks. This allows us to learn loss attenuation for classification tasks as well.
3.1 Combining Heteroscedastic Aleatoric Uncertainty and Epistemic Uncertainty
We wish to capture both epistemic and aleatoric uncertainty in a vision model. For this we turn the heteroscedastic NN in §2.2 into a Bayesian NN by placing a distribution over its weights, with our construction in this section developed specifically for the case of vision models¹.
We need to infer the posterior distribution for a BNN model $f$ mapping an input image, $x$, to a unary output, $\hat{y} \in \mathbb{R}$, and a measure of aleatoric uncertainty given by variance, $\sigma^2$. We approximate the posterior over the BNN with a dropout variational distribution using the tools of §2.1. As before,
¹Although this construction can be generalised for any heteroscedastic NN architecture.
we draw model weights from the approximate posterior $\widehat{W} \sim q(W)$ to obtain a model output, this time composed of both predictive mean as well as predictive variance:
$$[\hat{y}, \hat{\sigma}^2] = f^{\widehat{W}}(x) \qquad (6)$$
where $f$ is a Bayesian convolutional neural network parametrised by model weights $\widehat{W}$. We can use a single network to transform the input $x$, with its head split to predict both $\hat{y}$ as well as $\hat{\sigma}^2$.
We fix a Gaussian likelihood to model our aleatoric uncertainty. This induces a minimisation objective given labeled output points $x$:
$$\mathcal{L}_{\text{BNN}}(\theta) = \frac{1}{D} \sum_i \frac{1}{2} \hat{\sigma}_i^{-2} \|y_i - \hat{y}_i\|^2 + \frac{1}{2} \log \hat{\sigma}_i^2 \qquad (7)$$
where $D$ is the number of output pixels $y_i$ corresponding to input image $x$, indexed by $i$ (additionally, the loss includes weight decay which is omitted for brevity). For example, we may set $D = 1$ for image-level regression tasks, or $D$ equal to the number of pixels for dense prediction tasks (predicting a unary corresponding to each input image pixel). $\hat{\sigma}_i^2$ is the BNN output for the predicted variance for pixel $i$.
This loss consists of two components: the residual regression obtained with a stochastic sample through the model (making use of the uncertainty over the parameters), and an uncertainty regularization term. We do not need "uncertainty labels" to learn uncertainty. Rather, we only need to supervise the learning of the regression task. We learn the variance, $\sigma^2$, implicitly from the loss function. The second regularization term prevents the network from predicting infinite uncertainty (and therefore zero loss) for all data points.
In practice, we train the network to predict the log variance, $s_i := \log \hat{\sigma}_i^2$:
$$\mathcal{L}_{\text{BNN}}(\theta) = \frac{1}{D} \sum_i \frac{1}{2} \exp(-s_i) \|y_i - \hat{y}_i\|^2 + \frac{1}{2} s_i. \qquad (8)$$
This is because it is more numerically stable than regressing the variance, $\sigma^2$, as the loss avoids a potential division by zero. The exponential mapping also allows us to regress unconstrained scalar values, where $\exp(-s_i)$ is resolved to the positive domain, giving valid values for variance.
To summarize, the predictive uncertainty for pixel $y$ in this combined model can be approximated using:
$$\text{Var}(y) \approx \frac{1}{T} \sum_{t=1}^T \hat{y}_t^2 - \Big( \frac{1}{T} \sum_{t=1}^T \hat{y}_t \Big)^2 + \frac{1}{T} \sum_{t=1}^T \hat{\sigma}_t^2 \qquad (9)$$
with $\{\hat{y}_t, \hat{\sigma}_t^2\}_{t=1}^T$ a set of $T$ sampled outputs: $\hat{y}_t, \hat{\sigma}_t^2 = f^{\widehat{W}_t}(x)$ for randomly masked weights $\widehat{W}_t \sim q(W)$.
3.2 Heteroscedastic Uncertainty as Learned Loss Attenuation
We observe that allowing the network to predict uncertainty effectively lets it temper the residual loss by $\exp(-s_i)$, which depends on the data. This acts similarly to an intelligent robust regression function. It allows the network to adapt the residual's weighting, and even allows the network to learn to attenuate the effect of erroneous labels. This makes the model more robust to noisy data: inputs for which the model learned to predict high uncertainty will have a smaller effect on the loss.
The model is discouraged from predicting high uncertainty for all points (in effect ignoring the data) through the $\log \sigma^2$ term. Large uncertainty increases the contribution of this term, and in turn penalizes the model: the model can learn to ignore the data, but is penalised for that. The model is also discouraged from predicting very low uncertainty for points with high residual error, as low $\sigma^2$ will exaggerate the contribution of the residual and will penalize the model. It is important to stress that this learned attenuation is not an ad-hoc construction, but a consequence of the probabilistic interpretation of the model.
3.3 Heteroscedastic Uncertainty in Classification Tasks
This learned loss attenuation property of heteroscedastic NNs in regression is a desirable effect for
classification models as well. However, heteroscedastic NNs in classification are peculiar models
because technically any classification task has input-dependent uncertainty. Nevertheless, the ideas
above can be extended from regression heteroscedastic NNs to classification heteroscedastic NNs.
For this we adapt the standard classification model to marginalise over intermediate heteroscedastic
regression uncertainty placed over the logit space. We therefore explicitly refer to our proposed
model adaptation as a heteroscedastic classification NN.
For classification tasks our NN predicts a vector of unaries $f_i$ for each pixel $i$, which, when passed through a softmax operation, forms a probability vector $p_i$. We change the model by placing a Gaussian distribution over the unaries vector:
$$\hat{x}_i \,|\, W \sim \mathcal{N}(f_i^W, (\sigma_i^W)^2), \qquad \hat{p}_i = \text{Softmax}(\hat{x}_i). \qquad (10)$$
Here $f_i^W, \sigma_i^W$ are the network outputs with parameters $W$. The vector $f_i^W$ is corrupted with Gaussian noise with variance $(\sigma_i^W)^2$ (a diagonal matrix with one element for each logit value), and the corrupted vector is then squashed with the softmax function to obtain $\hat{p}_i$, the probability vector for pixel $i$.
Our expected log likelihood for this model is given by:
$$\log E_{\mathcal{N}(\hat{x}_i; f_i^W, (\sigma_i^W)^2)}[\hat{p}_{i,c}] \qquad (11)$$
with $c$ the observed class for input $i$, which gives us our loss function. Ideally, we would want to analytically integrate out this Gaussian distribution, but no analytic solution is known. We therefore approximate the objective through Monte Carlo integration, and sample unaries through the softmax function. We note that this operation is extremely fast because we perform the computation once (passing inputs through the model to get the logits). We only need to sample from the logits, which is a fraction of the network's compute, and therefore does not significantly increase the model's test time. We can rewrite the above and obtain the following numerically-stable stochastic loss:
$$\hat{x}_{i,t} = f_i^W + \sigma_i^W \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, I)$$
$$\mathcal{L}_x = \sum_i \log \frac{1}{T} \sum_t \exp\Big( \hat{x}_{i,t,c} - \log \sum_{c'} \exp \hat{x}_{i,t,c'} \Big) \qquad (12)$$
with $\hat{x}_{i,t,c'}$ the $c'$ element of the logit vector $\hat{x}_{i,t}$.
This objective can be interpreted as learning loss attenuation, similarly to the regression case. We
next assess the ideas above empirically.
4 Experiments
In this section we evaluate our methods with pixel-wise depth regression and semantic segmentation. An analysis of these results is given in the following section. To show the robustness of our learned loss attenuation (a side-effect of modeling uncertainty) we present results on an array of popular datasets, CamVid, Make3D, and NYUv2 Depth, where we set new state-of-the-art benchmarks.
For the following experiments we use the DenseNet architecture [19], which has been adapted for dense prediction tasks by [20]. We use our own independent implementation of the architecture using TensorFlow [21] (which slightly outperforms the original authors' implementation on CamVid by 0.2%, see Table 1a). For all experiments we train on 224×224 crops with batch size 4, and then fine-tune on full-size images with a batch size of 1. We train with RMS-Prop with a constant learning rate of 0.001 and weight decay 10⁻⁴.
We compare the results of the Bayesian neural network models outlined in §3. We model epistemic uncertainty using Monte Carlo dropout (§2.1). The DenseNet architecture places dropout with p = 0.2 after each convolutional layer. Following [22], we use 50 Monte Carlo dropout samples. We model aleatoric uncertainty with MAP inference using loss functions (8) and (12), for regression and classification respectively (§2.2). However, we derive the loss function using a Laplacian prior, as opposed to the Gaussian prior used for the derivations in §3, because it results in a loss function which applies an L1 distance on the residuals; typically, we find this to outperform an L2 loss for regression tasks in vision. We model the benefit of combining both epistemic and aleatoric uncertainty using our developments presented in §3.
4.1 Semantic Segmentation
To demonstrate our method for semantic segmentation, we use two datasets, CamVid [8] and NYU
v2 [23]. CamVid is a road scene understanding dataset with 367 training images and 233 test images,
of day and dusk scenes, with 11 classes. We resize images to 360×480 pixels for training and evaluation. In Table 1a we present results for our architecture. Our method sets a new state-of-the-art
(a) CamVid dataset for road scene segmentation.
Method                         IoU
SegNet [28]                    46.4
FCN-8 [29]                     57.0
DeepLab-LFOV [24]              61.6
Bayesian SegNet [22]           63.1
Dilation8 [30]                 65.3
Dilation8 + FSO [31]           66.1
DenseNet [20]                  66.9
This work:
DenseNet (Our Implementation)  67.1
+ Aleatoric Uncertainty        67.4
+ Epistemic Uncertainty        67.2
+ Aleatoric & Epistemic        67.5
(b) NYUv2 40-class dataset for indoor scenes.
Method                   Accuracy  IoU
SegNet [28]              66.1      23.6
FCN-8 [29]               61.8      31.6
Bayesian SegNet [22]     68.0      32.4
Eigen and Fergus [32]    65.6      34.1
This work:
DeepLabLargeFOV          70.1      36.5
+ Aleatoric Uncertainty  70.4      37.1
+ Epistemic Uncertainty  70.2      36.7
+ Aleatoric & Epistemic  70.6      37.3
Table 1: Semantic segmentation performance. Modeling both aleatoric and epistemic uncertainty gives a notable improvement in segmentation accuracy over state-of-the-art baselines.
(a) Make3D depth dataset [25].
Method                   rel    rms   log10
Karsch et al. [33]       0.355  9.20  0.127
Liu et al. [34]          0.335  9.49  0.137
Li et al. [35]           0.278  7.19  0.092
Laina et al. [26]        0.176  4.46  0.072
This work:
DenseNet Baseline        0.167  3.92  0.064
+ Aleatoric Uncertainty  0.149  3.93  0.061
+ Epistemic Uncertainty  0.162  3.87  0.064
+ Aleatoric & Epistemic  0.149  4.08  0.063
(b) NYUv2 depth dataset [23].
Method                   rel    rms    log10  δ1     δ2     δ3
Karsch et al. [33]       0.374  1.12   0.134  -      -      -
Ladicky et al. [36]      -      -      -      54.2%  82.9%  91.4%
Liu et al. [34]          0.335  1.06   0.127  -      -      -
Li et al. [35]           0.232  0.821  0.094  62.1%  88.6%  96.8%
Eigen et al. [27]        0.215  0.907  -      61.1%  88.7%  97.1%
Eigen and Fergus [32]    0.158  0.641  -      76.9%  95.0%  98.8%
Laina et al. [26]        0.127  0.573  0.055  81.1%  95.3%  98.8%
This work:
DenseNet Baseline        0.117  0.517  0.051  80.2%  95.1%  98.8%
+ Aleatoric Uncertainty  0.112  0.508  0.046  81.6%  95.8%  98.8%
+ Epistemic Uncertainty  0.114  0.512  0.049  81.1%  95.4%  98.8%
+ Aleatoric & Epistemic  0.110  0.506  0.045  81.7%  95.9%  98.9%
Table 2: Monocular depth regression performance. Comparison to previous approaches on the depth regression datasets Make3D and NYUv2 Depth. Modeling the combination of uncertainties improves accuracy.
on this dataset with mean intersection over union (IoU) score of 67.5%. We observe that modeling
both aleatoric and epistemic uncertainty improves over the baseline result. The implicit attenuation
obtained from the aleatoric loss provides a larger improvement than the epistemic uncertainty model.
However, the combination of both uncertainties improves performance even further. This shows that
for this application it is more important to model aleatoric uncertainty, suggesting that epistemic
uncertainty can be mostly explained away in this large data setting.
Secondly, NYUv2 [23] is a challenging indoor segmentation dataset with 40 different semantic
classes. It has 1449 images with resolution 640 × 480 from 464 different indoor scenes. Table 1b
shows our results. This dataset is much harder than CamVid because there is significantly less structure in indoor scenes compared to street scenes, and because of the increased number of semantic
classes. We use DeepLabLargeFOV [24] as our baseline model. We observe a similar result (qualitative results given in Figure 4); we improve baseline performance by giving the model flexibility
to estimate uncertainty and attenuate the loss. The effect is more pronounced, perhaps because the
dataset is more difficult.
4.2 Pixel-wise Depth Regression
We demonstrate the efficacy of our method for regression using two popular monocular depth regression datasets, Make3D [25] and NYUv2 Depth [23]. The Make3D dataset consists of 400 training
and 134 testing images, gathered using a 3-D laser scanner. We evaluate our method using the same
standard as [26], resizing images to 345 × 460 pixels and evaluating on pixels with depth less than
70m. NYUv2 Depth is taken from the same dataset used for classification above. It contains RGB-D
imagery from 464 different indoor scenes. We compare to previous approaches for Make3D in Table
2a and NYUv2 Depth in Table 2b, using standard metrics (for a description of these metrics please
see [27]).
These results show that aleatoric uncertainty is able to capture many aspects of this task which
are inherently difficult. For example, in the qualitative results in Figure 5 and 6 we observe that
aleatoric uncertainty is greater for large depths, reflective surfaces and occlusion boundaries in the
image. These are common failure modes of monocular depth algorithms [26]. On the other hand,
these qualitative results show that epistemic uncertainty captures difficulties due to lack of data. For
[Figure 2 plots omitted: precision vs. recall curves for aleatoric and epistemic uncertainty; (a) Classification (CamVid), (b) Regression (Make3D, precision measured as RMS error).]
Figure 2: Precision-recall plots demonstrating both measures of uncertainty can effectively capture accuracy,
as precision decreases with increasing uncertainty.
[Figure 3 plots omitted: calibration curves of observed frequency vs. predicted probability; legend calibration MSE values: (a) Regression (Make3D): Aleatoric MSE = 0.031, Epistemic MSE = 0.00364, Epistemic+Aleatoric MSE = 0.00214; (b) Classification (CamVid): Non-Bayesian MSE = 0.00501, Aleatoric MSE = 0.00272, Epistemic MSE = 0.007, Epistemic+Aleatoric MSE = 0.00214.]
Figure 3: Uncertainty calibration plots. This plot shows how well uncertainty is calibrated, where perfect
calibration corresponds to the line y = x, shown in black. We observe an improvement in calibration mean
squared error with aleatoric, epistemic and the combination of uncertainties.
example, we observe larger uncertainty for objects which are rare in the training set such as humans
in the third example of Figure 5.
In summary, we have demonstrated that our model can improve performance over non-Bayesian
baselines by implicitly learning attenuation of systematic noise and difficult concepts. For example
we observe high aleatoric uncertainty for distant objects and on object and occlusion boundaries.
5 Analysis: What Do Aleatoric and Epistemic Uncertainties Capture?
In §4 we showed that modeling aleatoric and epistemic uncertainties improves prediction performance, with the combination performing even better. In this section we wish to study the effectiveness of modeling aleatoric and epistemic uncertainty. In particular, we wish to quantify the
performance of these uncertainty measurements and analyze what they capture.
5.1 Quality of Uncertainty Metric
Firstly, in Figure 2 we show precision-recall curves for regression and classification models. They
show how our model performance improves by removing pixels with uncertainty larger than various
percentile thresholds. This illustrates two behaviors of aleatoric and epistemic uncertainty measures.
Firstly, it shows that the uncertainty measurements are able to correlate well with accuracy, because
all curves are strictly decreasing functions. We observe that precision is lower when we have more
points that the model is not certain about. Secondly, the curves for epistemic and aleatoric uncertainty models are very similar. This shows that each uncertainty ranks pixel confidence similarly to
the other uncertainty, in the absence of the other uncertainty. This suggests that when only one uncertainty is explicitly modeled, it attempts to compensate for the lack of the alternative uncertainty
when possible.
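For concreteness, a precision-recall curve of this kind can be traced by sweeping uncertainty percentiles. The sketch below is our own illustrative code (names such as `precision_recall_by_uncertainty` are hypothetical): it computes precision over the retained pixels as the most uncertain ones are removed.

```python
import numpy as np

def precision_recall_by_uncertainty(correct, uncertainty, n_points=20):
    # correct: boolean array (prediction correct per pixel);
    # uncertainty: matching array of per-pixel uncertainty scores.
    curve = []
    for q in np.linspace(100.0, 5.0, n_points):
        thresh = np.percentile(uncertainty, q)
        kept = uncertainty <= thresh       # drop pixels above the percentile
        recall = kept.mean()               # fraction of pixels retained
        precision = correct[kept].mean()   # accuracy on the retained pixels
        curve.append((recall, precision))
    return curve
```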
Secondly, in Figure 3 we analyze the quality of our uncertainty measurement using calibration plots
from our model on the test set. To form calibration plots for classification models, we discretize our
model?s predicted probabilities into a number of bins, for all classes and all pixels in the test set. We
then plot the frequency of correctly predicted labels for each bin of probability values. Better performing uncertainty estimates should correlate more accurately with the line y = x in the calibration
plots. For regression models, we can form calibration plots by comparing the frequency of residuals
lying within varying thresholds of the predicted distribution. Figure 3 shows the calibration of our
classification and regression uncertainties.
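As a sketch of how such a classification calibration plot can be computed (our own illustrative code, not the authors'): bin the predicted probabilities and compare each bin's empirical frequency of correct labels against its mean predicted probability.

```python
import numpy as np

def calibration_curve(probs, correct, n_bins=10):
    # probs: predicted probability per (pixel, class); correct: 0/1 outcomes.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mean_prob, freq = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            mean_prob.append(probs[mask].mean())
            freq.append(correct[mask].mean())
    # Perfect calibration lies on y = x; a calibration mean squared error can
    # be taken as mean((freq - mean_prob)**2), matching the spirit of Figure 3.
    return np.array(mean_prob), np.array(freq)
```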
(a) Regression (Make3D)

Train dataset   Test dataset   RMS    Aleatoric variance   Epistemic variance
Make3D / 4      Make3D         5.76   0.506                7.73
Make3D / 2      Make3D         4.62   0.521                4.38
Make3D          Make3D         3.87   0.485                2.78
Make3D / 4      NYUv2          -      0.388                15.0
Make3D          NYUv2          -      0.461                4.87

(b) Classification (CamVid)

Train dataset   Test dataset   IoU    Aleatoric entropy   Epistemic logit variance (×10⁻³)
CamVid / 4      CamVid         57.2   0.106               1.96
CamVid / 2      CamVid         62.9   0.156               1.66
CamVid          CamVid         67.5   0.111               1.36
CamVid / 4      NYUv2          -      0.247               10.9
CamVid          NYUv2          -      0.264               11.8

Table 3: Accuracy and aleatoric and epistemic uncertainties for a range of different train and test dataset
combinations. We show aleatoric and epistemic uncertainty as the mean value of all pixels in the test dataset. We
compare reduced training set sizes (1, 1/2, 1/4) and unrelated test datasets. This shows that aleatoric uncertainty
remains approximately constant, while epistemic uncertainty decreases the closer the test data is to the training
distribution, demonstrating that epistemic uncertainty can be explained away with sufficient training data (but
not for out-of-distribution data).
5.2 Uncertainty with Distance from Training Data
In this section we show two results:
1. Aleatoric uncertainty cannot be explained away with more data,
2. Aleatoric uncertainty does not increase for out-of-data examples (situations different from
training set), whereas epistemic uncertainty does.
In Table 3 we give accuracy and uncertainty for models trained on increasing sized subsets of
datasets. This shows that epistemic uncertainty decreases as the training dataset gets larger. It also
shows that aleatoric uncertainty remains relatively constant and cannot be explained away with more
data. Testing the models with a different test set (bottom two lines) shows that epistemic uncertainty
increases considerably on those test points which lie far from the training sets.
These results reinforce the case that epistemic uncertainty can be explained away with enough data,
but is required to capture situations not encountered in the training set. This is particularly important
for safety-critical systems, where epistemic uncertainty is required to detect situations which have
never been seen by the model before.
5.3 Real-Time Application
Our model based on DenseNet [20] can process a 640×480 resolution image in 150 ms on an NVIDIA
Titan X GPU. The aleatoric uncertainty models add negligible compute. However, epistemic models require expensive Monte Carlo dropout sampling. For models such as ResNet [4], this is possible to achieve economically because only the last few layers contain dropout. Other models, like
DenseNet, require the entire architecture to be sampled. This is difficult to parallelize due to GPU
memory constraints, and often results in a 50× slow-down for 50 Monte Carlo samples.
6 Conclusions
We presented a novel Bayesian deep learning framework to learn a mapping to aleatoric uncertainty
from the input data, which is composed on top of epistemic uncertainty models. We derived our
framework for both regression and classification applications. We showed that it is important to
model aleatoric uncertainty for:
- Large data situations, where epistemic uncertainty is explained away,
- Real-time applications, because we can form aleatoric models without expensive Monte Carlo samples.
And epistemic uncertainty is important for:
- Safety-critical applications, because epistemic uncertainty is required to understand examples which are different from training data,
- Small datasets where the training data is sparse.
However aleatoric and epistemic uncertainty models are not mutually exclusive. We showed that
the combination is able to achieve new state-of-the-art results on depth regression and semantic
segmentation benchmarks.
The first paragraph in this paper presented two recent disasters which could have been averted by real-time Bayesian deep learning tools. Therefore, we leave finding a method for real-time epistemic
uncertainty in deep learning as an important direction for future research.
References
[1] NHTSA. PE 16-007. Technical report, U.S. Department of Transportation, National Highway Traffic Safety Administration, Jan 2017. Tesla Crash Preliminary Evaluation Report.
[2] Jessica Guynn. Google photos labeled black people 'gorillas'. USA Today, 2015.
[3] Andrew Blake, Rupert Curwen, and Andrew Zisserman. A framework for spatiotemporal control in the tracking of visual contours. International Journal of Computer Vision, 11(2):127–145, 1993.
[4] Xuming He, Richard S. Zemel, and Miguel Á. Carreira-Perpiñán. Multiscale conditional random fields for image labeling. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), volume 2, pages II–II. IEEE, 2004.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[6] Y. Gal. Uncertainty in Deep Learning. PhD thesis, University of Cambridge, 2016.
[7] Armen Der Kiureghian and Ove Ditlevsen. Aleatory or epistemic? Does it matter? Structural Safety, 31(2):105–112, 2009.
[8] Gabriel J. Brostow, Julien Fauqueur, and Roberto Cipolla. Semantic object classes in video: A high-definition ground truth database. Pattern Recognition Letters, 30(2):88–97, 2009.
[9] John Denker and Yann LeCun. Transforming neural-net output levels to probability distributions. In Advances in Neural Information Processing Systems 3. Citeseer, 1991.
[10] David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992.
[11] Radford M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[12] Alex Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 2348–2356, 2011.
[13] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In ICML, 2015.
[14] José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Thang Bui, and Richard E. Turner. Black-box alpha divergence minimization. In Proceedings of the 33rd International Conference on Machine Learning, pages 1511–1520, 2016.
[15] Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. ICLR workshop track, 2016.
[16] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[17] David A. Nix and Andreas S. Weigend. Estimating the mean and variance of the target probability distribution. In Proceedings of the 1994 IEEE International Conference on Neural Networks (IEEE World Congress on Computational Intelligence), volume 1, pages 55–60. IEEE, 1994.
[18] Quoc V. Le, Alex J. Smola, and Stéphane Canu. Heteroscedastic Gaussian process regression. In Proceedings of the 22nd International Conference on Machine Learning, pages 489–496. ACM, 2005.
[19] Gao Huang, Zhuang Liu, Kilian Q. Weinberger, and Laurens van der Maaten. Densely connected convolutional networks. arXiv preprint arXiv:1608.06993, 2016.
[20] Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero, and Yoshua Bengio. The one hundred layers tiramisu: Fully convolutional DenseNets for semantic segmentation. arXiv preprint arXiv:1611.09326, 2016.
[21] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Savannah, Georgia, USA, 2016.
[22] Alex Kendall, Vijay Badrinarayanan, and Roberto Cipolla. Bayesian SegNet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. arXiv preprint arXiv:1511.02680, 2015.
[23] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from RGBD images. In European Conference on Computer Vision, pages 746–760. Springer, 2012.
[24] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. arXiv preprint arXiv:1412.7062, 2014.
[25] Ashutosh Saxena, Min Sun, and Andrew Y. Ng. Make3D: Learning 3D scene structure from a single still image. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(5):824–840, 2009.
[26] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 2016 Fourth International Conference on 3D Vision (3DV), pages 239–248. IEEE, 2016.
[27] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In Advances in Neural Information Processing Systems, pages 2366–2374, 2014.
[28] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. SegNet: A deep convolutional encoder-decoder architecture for scene segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[29] Evan Shelhamer, Jonathon Long, and Trevor Darrell. Fully convolutional networks for semantic segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[30] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122, 2015.
[31] Abhijit Kundu, Vibhav Vineet, and Vladlen Koltun. Feature space optimization for semantic video segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3168–3175, 2016.
[32] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In Proceedings of the IEEE International Conference on Computer Vision, pages 2650–2658, 2015.
[33] Kevin Karsch, Ce Liu, and Sing Bing Kang. Depth extraction from video using non-parametric sampling. In European Conference on Computer Vision, pages 775–788. Springer, 2012.
[34] Miaomiao Liu, Mathieu Salzmann, and Xuming He. Discrete-continuous depth estimation from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 716–723, 2014.
[35] Bo Li, Chunhua Shen, Yuchao Dai, Anton van den Hengel, and Mingyi He. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1119–1127, 2015.
[36] Lubor Ladicky, Jianbo Shi, and Marc Pollefeys. Pulling things out of perspective. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 89–96, 2014.
Gradient descent GAN optimization is locally stable
Vaishnavh Nagarajan
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213
[email protected]
J. Zico Kolter
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Despite the growing prominence of generative adversarial networks (GANs), optimization in GANs is still a poorly understood topic. In this paper, we analyze
the ?gradient descent? form of GAN optimization i.e., the natural setting where
we simultaneously take small gradient steps in both generator and discriminator
parameters. We show that even though GAN optimization does not correspond to a
convex-concave game (even for simple parameterizations), under proper conditions,
equilibrium points of this optimization procedure are still locally asymptotically
stable for the traditional GAN formulation. On the other hand, we show that the
recently proposed Wasserstein GAN can have non-convergent limit cycles near
equilibrium. Motivated by this stability analysis, we propose an additional regularization term for gradient descent GAN updates, which is able to guarantee local
stability for both the WGAN and the traditional GAN, and also shows practical
promise in speeding up convergence and addressing mode collapse.
1 Introduction
Since their introduction a few years ago, Generative Adversarial Networks (GANs) [Goodfellow et al.,
2014] have gained prominence as one of the most widely used methods for training deep generative
models. GANs have been successfully deployed for tasks such as photo super-resolution, object
generation, video prediction, language modeling, vocal synthesis, and semi-supervised learning,
amongst many others [Ledig et al., 2017, Wu et al., 2016, Mathieu et al., 2016, Nguyen et al., 2017,
Denton et al., 2015, Im et al., 2016].
At the core of the GAN methodology is the idea of jointly training two networks: a generator network,
meant to produce samples from some distribution (that ideally will mimic examples from the data
distribution), and a discriminator network, which attempts to differentiate between samples from
the data distribution and the ones produced by the generator. This problem is typically written as a
min-max optimization problem of the following form:
$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_{\mathrm{latent}}}[\log(1 - D(G(z)))]. \qquad (1)$$
For the purposes of this paper, we will shortly consider a more general form of the optimization problem, which also includes the recent Wasserstein GAN (WGAN) [Arjovsky et al., 2017] formulation.
Despite their prominence, the actual task of optimizing GANs remains a challenging problem, both
from a theoretical and a practical standpoint. Although the original GAN paper included some
analysis on the convergence properties of the approach [Goodfellow et al., 2014], it assumed that
updates occurred in pure function space, allowed arbitrarily powerful generator and discriminator
networks, and modeled the resulting optimization objective as a convex-concave game, therefore
yielding well-defined global convergence properties. Furthermore, this analysis assumed that the
discriminator network is fully optimized between generator updates, an assumption that does not
mirror the practice of GAN optimization. Indeed, in practice, there exist a number of well-documented
failure modes for GANs such as mode collapse or vanishing gradient problems.
Our contributions. In this paper, we consider the ?gradient descent? formulation of GAN optimization, the setting where both the generator and the discriminator are updated simultaneously via
simple (stochastic) gradient updates; that is, there are no inner and outer optimization loops, and
neither the generator nor the discriminator are assumed to be optimized to convergence. Despite the
fact that, as we show, this does not correspond to a convex-concave optimization problem (even for
simple linear generator and discriminator representations), we show that:
Under suitable conditions on the representational powers of the discriminator and the generator,
the resulting GAN dynamical system is locally exponentially stable.
That is, for some region around an equilibrium point of the updates, the gradient updates will converge
to this equilibrium point at an exponential rate. Interestingly, our conditions can be satisfied by the
traditional GAN but not by the WGAN, and we indeed show that WGANs can have non-convergent
limit cycles in the gradient descent case.
Our theoretical analysis also suggests a natural method for regularizing GAN updates by adding
an additional regularization term on the norm of the discriminator gradient. We show that the
addition of this term leads to locally exponentially stable equilibria for all classes of GANs, including
WGANs. The additional penalty is highly related to (but also notably different from) recent proposals
for practical GAN optimization, such as the unrolled GAN [Metz et al., 2017] and the improved
Wasserstein GAN training [Gulrajani et al., 2017]. In practice, the approach is simple to implement,
and preliminary experiments show that it helps avert mode collapse and leads to faster convergence.
2 Background and related work
GAN optimization and theory. Although the theoretical analysis of GANs has been far outpaced
by their practical application, there have been some notable results in recent years, in addition
to the aforementioned work in the original GAN paper. For the most part, this work is entirely
complementary to our own, and studies a very different set of questions. Arjovsky and Bottou
[2017] provide important insights into instability that arises when the supports of the generated
distribution and the true distribution are disjoint. In contrast, in this paper we delve into an equally
important question of whether the updates are stable even when the generator is in fact very close
to the true distribution (and we answer in the affirmative). Arora et al. [2017], on the other hand,
explore questions relating to the sample complexity and expressivity of the GAN architecture and
their relation to the existence of an equilibrium point. However, it is still unknown as to whether,
given that an equilibrium exists, the GAN update procedure will converge locally.
From a more practical standpoint, there have been a number of papers that address the topic of
optimization in GANs. Several methods have been proposed that introduce new objectives or
architectures for improving the (practical and theoretical) stability of GAN optimization [Arjovsky
et al., 2017, Poole et al., 2016]. A wide variety of optimization heuristics and architectures have
also been proposed to address challenges such as mode collapse [Salimans et al., 2016, Metz et al.,
2017, Che et al., 2017, Radford et al., 2016]. Our own proposed regularization term falls under
this same category, and hopefully provides some context for understanding some of these methods.
Specifically, our regularization term (motivated by stability analysis) captures a degree of ?foresight?
of the generator in the optimization procedure, similar to the unrolled GANs procedure [Metz et al.,
2017]. Indeed, we show that our gradient penalty is closely related to 1-unrolled GANs, but also
provides more flexibility in leveraging this foresight. Finally, gradient-based regularization has been
explored for GANs, with one of the most recent works being that of Gulrajani et al. [2017], though
their penalty is on the discriminator rather than the generator as in our case.
Finally, there are several works that have simultaneously addressed similar issues as this paper. Of
particular similarity to the methodology we propose here are the works by Roth et al. [2017] and
Mescheder et al. [2017]. The first of these two presents a stabilizing regularizer that is based on a
gradient norm, where the gradient is calculated with respect to the datapoints. Our regularizer on the
other hand is based on the norm of a gradient calculated with respect to the parameters. Our approach
has some strong similarities with that of the second work noted above; however, the authors there
do not establish or disprove stability, and instead note the presence of zero eigenvalues (which we
will treat in some depth) as a motivation for their alternative optimization method. Thus, we feel the
works as a whole are quite complementary, and signify the growing interest in GAN optimization
issues.
Stochastic approximation algorithms and analysis of nonlinear systems. The technical tools
we use to analyze the GAN optimization dynamics in this paper come from the fields of stochastic
approximation algorithms and the analysis of nonlinear differential equations, notably the 'ODE
method' for analyzing convergence properties of dynamical systems [Borkar and Meyn, 2000].
Consider a general stochastic process driven by the updates $\theta_{t+1} = \theta_t + \alpha_t (h(\theta_t) + \varepsilon_t)$ for vector
$\theta_t \in \mathbb{R}^n$, step size $\alpha_t > 0$, function $h : \mathbb{R}^n \to \mathbb{R}^n$ and a martingale difference sequence $\varepsilon_t$.¹ Under
fairly general conditions, namely: 1) bounded second moments of $\varepsilon_t$, 2) Lipschitz continuity of $h$, and
3) summable but not square-summable step sizes, the stochastic approximation algorithm converges
to an equilibrium point of the (deterministic) ordinary differential equation $\dot\theta(t) = h(\theta(t))$.
Thus, to understand stability of the stochastic approximation algorithm, it suffices to understand
the stability and convergence of the deterministic differential equation. Though such analysis is
typically used to show global asymptotic convergence of the stochastic approximation algorithm to
an equilibrium point (assuming the related ODE also is globally asymptotically stable), it can also be
used to analyze the local asymptotic stability properties of the stochastic approximation algorithm
around equilibrium points.2 This is the technique we follow throughout this entire work, though for
brevity we will focus entirely on the analysis of the continuous time ordinary differential equation,
and appeal to these standard results to imply similar properties regarding the discrete updates.
Given the above consideration, our focus will be on proving stability of the dynamical system around
equilibrium points, i.e., points $\theta^*$ for which $h(\theta^*) = 0$.³ Specifically, we appeal to the well-known
linearization theorem [Khalil, 1996, Sec 4.3], which states that if the Jacobian of the dynamical
system $J = \partial h(\theta)/\partial\theta\,|_{\theta=\theta^*}$ evaluated at an equilibrium point is Hurwitz (has all strictly negative
eigenvalues, $\mathrm{Re}(\lambda_i(J)) < 0$ for all $i = 1, \dots, n$), then the ODE will converge to $\theta^*$ for some non-empty
region around $\theta^*$, at an exponential rate. This means that the system is locally asymptotically stable,
or more precisely, locally exponentially stable (see Definition A.1 in Appendix A).
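As a small numerical companion to this criterion (our own sketch, not part of the paper), one can test whether a given Jacobian is Hurwitz directly:

```python
import numpy as np

def is_hurwitz(J, tol=1e-12):
    # Linearization criterion: every eigenvalue of the Jacobian at the
    # equilibrium must have strictly negative real part.
    return bool(np.all(np.linalg.eigvals(J).real < -tol))
```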
Thus, an important contribution of this paper is a proof of this seemingly simple fact: under some
conditions, the Jacobian of the dynamical system given by the GAN update is a Hurwitz matrix at
an equilibrium (or, if there are zero-eigenvalues, if they correspond to a subspace of equilibria, the
system is still asymptotically stable). While this is a trivial property to show for convex-concave
games, the fact that the GAN is not convex-concave leads to a substantially more challenging analysis.
In addition to this, we provide an analysis that is based on Lyapunov's stability theorem (described
in Appendix A). The crux of the idea is that to prove convergence it is sufficient to identify a non-negative 'energy' function for the linearized system which always decreases with time (specifically,
the energy function will be a distance from the equilibrium, or from the subspace of equilibria). Most
importantly, this analysis provides insights into the dynamics that lead to GAN convergence.
3 GAN optimization dynamics
This section comprises the main results of this paper, showing that under proper conditions the
gradient descent updates for GANs (that is, updating both the generator and discriminator locally and
simultaneously), are locally exponentially stable around 'good' equilibrium points (where 'good' will
be defined shortly). This requires that the GAN loss be strictly concave, which is not the case for
WGANs, and we indeed show that the updates for WGANs can cycle indefinitely. This leads us to
propose a simple regularization term that is able to guarantee exponential stability for any concave
GAN loss, including the WGAN, rather than requiring strict concavity.
1 Stochastic gradient descent on an objective $f(\theta)$ can be expressed in this framework as $h(\theta) = -\nabla_\theta f(\theta)$.
2 Note that the local analysis does not show that the stochastic approximation algorithm will necessarily
converge to an equilibrium point, but still provides a valuable characterization of how the algorithm will behave
around these points.
3 Note that this is a slightly different usage of the term equilibrium as typically used in the GAN literature,
where it refers to a Nash equilibrium of the min-max optimization problem. These two definitions (assuming we
mean just a local Nash equilibrium) are equivalent for the ODE corresponding to the min-max game, but we use
the dynamical systems meaning throughout this paper, that is, any point where the gradient update is zero.
3.1 The generalized GAN setting
For the remainder of the paper, we consider a slightly more general formulation of the GAN
optimization problem than the one presented earlier, given by the following min/max problem:
$$\min_G \max_D V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[f(D(x))] + \mathbb{E}_{z \sim p_{\mathrm{latent}}}[f(-D(G(z)))] \qquad (2)$$
where $G : \mathcal{Z} \to \mathcal{X}$ is the generator network, which maps from the latent space $\mathcal{Z}$ to the input space
$\mathcal{X}$; $D : \mathcal{X} \to \mathbb{R}$ is the discriminator network, which maps from the input space to a classification
of the example as real or synthetic; and $f : \mathbb{R} \to \mathbb{R}$ is a concave function. We can recover the
traditional GAN formulation [Goodfellow et al., 2014] by taking f to be the (negated) logistic loss
$f(x) = -\log(1 + \exp(-x))$; note that this convention slightly differs from the standard formulation
in that in this case the discriminator outputs the real-valued 'logits' and the loss function would
implicitly scale this to a probability. We can recover the Wasserstein GAN by simply taking $f(x) = x$.
Assuming the generator and discriminator networks to be parameterized by some set of parameters,
$\theta_D$ and $\theta_G$ respectively, we analyze the simple stochastic gradient descent approach to solving this
optimization problem. That is, we take simultaneous gradient steps in both $\theta_D$ and $\theta_G$, which in our
'ODE method' analysis leads to the following differential equation:
$$\dot\theta_D = \nabla_{\theta_D} V(\theta_G, \theta_D), \qquad \dot\theta_G := -\nabla_{\theta_G} V(\theta_G, \theta_D). \qquad (3)$$
A note on alternative updates. Rather than updating both the generator and discriminator according to the min-max problem above, Goodfellow et al. [2014] also proposed a modified update for
just the generator that minimizes a different objective, $V'(G, D) = -\mathbb{E}_{z \sim p_{\mathrm{latent}}}[f(D(G(z)))]$ (the
negative sign is pulled out from inside $f$). In fact, all the analyses we consider in this paper apply
equally to this case (or any convex combination of both updates), as the ODE of the update equations
have the same Jacobians at equilibrium.
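In code, a single Euler step of the ODE in Equation (3) amounts to taking both gradients at the same point before either parameter set is touched. Below is a minimal PyTorch sketch (our own, with assumed names `V`, `d_params`, `g_params`), not the authors' implementation.

```python
import torch

def simultaneous_gradient_step(d_params, g_params, V, lr):
    # V is a closure returning the scalar objective V(theta_D, theta_G).
    v = V()
    d_grads = torch.autograd.grad(v, d_params, retain_graph=True)
    g_grads = torch.autograd.grad(v, g_params)
    with torch.no_grad():
        for p, g in zip(d_params, d_grads):
            p.add_(lr * g)    # discriminator: gradient ascent on V
        for p, g in zip(g_params, g_grads):
            p.sub_(lr * g)    # generator: gradient descent on V
```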
3.2 Why is proving stability hard for GANs?
Before presenting our main results, we first highlight why understanding the local stability of GANs
is non-trivial, even when the generator and discriminator have simple forms. As stated above, GAN
optimization consists of a min-max game, and gradient descent algorithms will converge if the game
is convex-concave: the objective must be convex in the term being minimized and concave in the
term being maximized. Indeed, this was a crucial assumption in the convergence proof in the original
GAN paper. However, for virtually any parameterization of the real GAN generator and discriminator,
even if both representations are linear, the GAN objective will not be a convex-concave game:
Proposition 3.1. The GAN objective in Equation 2 can be a concave-concave objective i.e., concave
with respect to both the discriminator and generator parameters, for a large part of the discriminator
space, including regions arbitrarily close to the equilibrium.
To see why, consider a simple GAN over 1-dimensional data and latent space with linear generator
and discriminator, i.e. $D(x) = \theta_D x + \theta_D^0$ and $G(z) = \theta_G z + \theta_G^0$. Then the GAN objective is:
$$V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[f(\theta_D x + \theta_D^0)\right] + \mathbb{E}_{z \sim p_{\mathrm{latent}}}\left[f(-\theta_D(\theta_G z + \theta_G^0) - \theta_D^0)\right].$$
Because $f$ is concave, by inspection we can see that $V$ is concave in $\theta_D$ and $\theta_D^0$; but it is also
concave (not convex) in $\theta_G$ and $\theta_G^0$, for the same reason. Thus, the optimization involves concave
minimization, which in general is a difficult problem. To prove that this is not a peculiarity of the
above linear discriminator system, in Appendix B, we show similar observations for a more general
parametrization, and also for the case where $f''(x) = 0$ (which happens in the case of WGANs).
Thus, a major question remains as to whether or not GAN optimization is stable at all (most concave
maximization is not). Indeed, there are several well-known properties of GAN optimization that may
make it seem as though gradient descent optimization may not work in theory. For instance, it is
well-known that at the optimal location $p_g = p_{\mathrm{data}}$, the optimal discriminator will output zero on all
examples, which in turn means that any generator distribution will be optimal for this discriminator. This
would seem to imply that the system can not be stable around such an equilibrium.
However, as we will show, gradient descent GAN optimization is locally asymptotically stable, even
for natural parameterizations of generator-discriminator pairs (which still make up concave-concave
optimization problems). Furthermore, at equilibrium, although the zero-discriminator property
means that the generator is not stable ?independently?, the joint dynamical system of generator and
discriminator is locally asymptotically stable around certain equilibrium points.
3.3 Local stability of general GAN systems
This section contains our first technical result, establishing that GANs are locally stable under proper
local conditions. Although the proofs are deferred to the appendix, the elements that we do emphasize
here are the conditions that we identified for local stability to hold. Indeed, because the proof rests on
these conditions (some of which are fairly strong), we want to highlight them as much as possible, as
they themselves also convey valuable intuition as to what is required for GAN convergence.
To formalize our conditions, we denote the support of a distribution with probability density function
(p.d.f.) $p$ by $\mathrm{supp}(p)$, and the p.d.f. of the generator $\theta_G$ by $p_{\theta_G}$. Let $B_\epsilon(\cdot)$ denote the Euclidean $L_2$-ball
of radius $\epsilon$. Let $\lambda_{\max}(\cdot)$ and $\lambda^{(+)}_{\min}(\cdot)$ denote the largest and the smallest non-zero eigenvalues of a
non-zero positive semidefinite matrix. Let $\mathrm{Col}(\cdot)$ and $\mathrm{Null}(\cdot)$ denote the column space and null space
of a matrix respectively. Finally, we define two key matrices that will be integral to our analyses:
$$K_{DD} \triangleq \mathbb{E}_{p_{\mathrm{data}}}\!\left[\nabla_{\theta_D} D_{\theta_D}(x)\, \nabla^T_{\theta_D} D_{\theta_D}(x)\right]\Big|_{(\theta_D^*,\, \theta_G^*)}, \qquad K_{DG} \triangleq \int_{\mathcal{X}} \nabla_{\theta_D} D_{\theta_D}(x)\, \nabla^T_{\theta_G}\, p_{\theta_G}(x)\, dx \,\Big|_{(\theta_D^*,\, \theta_G^*)}$$
Here, the matrices are evaluated at an equilibrium point $(\theta_D^*, \theta_G^*)$, which we will characterize shortly.
These matrices also occur in similar positions in the Jacobian of the system at equilibrium.
We now discuss conditions under which we can guarantee exponential stability. All our conditions
?
?
are imposed on both (?D
, ?G
) and all equilibria in a small neighborhood around it, though we do
not state this explicitly in every assumption. First, we define the ?good? equilibria we care about as
those that correspond to a generator which matches the true distribution and a discriminator that is
identically zero on the support of this distribution. As described next, implicitly, this also assumes
that the discriminator and generator representations are powerful enough to guarantee that there are
no ?bad? equilibria in a local neighborhood of this equilibrium.
Assumption I. $p_{\theta_G^*} = p_{\mathrm{data}}$, and $D_{\theta_D^*}(x) = 0$ for all $x \in \mathrm{supp}(p_{\mathrm{data}})$.
The assumption that the generator matches the true distribution is a rather strong assumption, as
it limits us to the ?realizable? case, where the generator is capable of creating the underlying data
distribution. Furthermore, this means the discriminator is (locally) powerful enough that for any other
generator distribution it is not at equilibrium (i.e., discriminator updates are non-zero). Since we
do not typically expect this to be the case, we also provide an alternative non-realizable assumption
below that is also sufficient for our results i.e., the system is still stable. In both the realizable and
non-realizable cases the requirement of an all-zero discriminator remains. This implicitly requires
even the generator representation be (locally) rich enough so that when the discriminator is not
identically zero, the generator is not at equilibrium (i.e., generator updates are non-zero). Finally,
note that these conditions do not disallow bad equilibria outside of this neighborhood, which may
potentially even be unstable.
Assumption I (Non-realizable). The discriminator is linear in its parameters $\theta_D$ and furthermore,
for any equilibrium point $(\theta_D^*, \theta_G^*)$, $D_{\theta_D^*}(x) = 0$ for all $x \in \mathrm{supp}(p_{\mathrm{data}}) \cup \mathrm{supp}(p_{\theta_G^*})$.
This alternative assumption is largely a weakening of Assumption I, as the condition on the discriminator remains, but there is no requirement that the generator give rise to the true distribution.
However, the requirement that the discriminator be linear in the parameters (not in its input), is an
additional restriction that seems unavoidable in this case for technical reasons. Further, note that
the fact that $D_{\theta_D^*}(x) = 0$ and that the generator/discriminator are both at equilibrium still means
that although it may be that $p_{\theta_G^*} \neq p_{\mathrm{data}}$, these distributions are (locally) indistinguishable as far as
the discriminator is concerned. Indeed, this is a nice characterization of 'good' equilibria, that the
discriminator cannot differentiate between the real and generated samples.
The next assumption is straightforward, making it necessary that the loss f be strictly concave. (As
we will show, for non-strictly concave losses, there need not be local asymptotic convergence).
Assumption II. The function $f$ satisfies $f''(0) < 0$ and $f'(0) \neq 0$.
The goal of our third assumption will be to allow systems with multiple equilibria in the neighborhood
of $(\theta_D^*, \theta_G^*)$ in a limited sense. To state our assumption, we first define the following property for a
function, say g, at a specific point in its domain: along any direction either the second derivative of g
must be non-zero or all derivatives must be zero. For example, at the origin, $g(x, y) = x^2 + x^2y^2$ is
flat along $y$, and along any other direction at an angle $\theta \neq 0$ with the $y$ axis, the second derivative
is $2\sin^2\theta$. For the GAN system, we will require this property, formalized in Property I, for two
important convex functions whose Hessians are proportional to $K_{DD}$ and $K_{DG}^T K_{DG}$. We provide
more intuition for these functions below.
Property I. $g : \Theta \to \mathbb{R}$ satisfies Property I at $\theta^* \in \Theta$ if, for any $v \in \mathrm{Null}(\nabla^2_\theta g(\theta)\,|_{\theta^*})$, the function
is locally constant along $v$ at $\theta^*$, i.e., $\exists\, \epsilon > 0$ such that for all $\epsilon' \in (-\epsilon, \epsilon)$, $g(\theta^*) = g(\theta^* + \epsilon' v)$.
Assumption III. At an equilibrium $(\theta_D^*, \theta_G^*)$, the functions $\mathbb{E}_{p_{\mathrm{data}}}\!\left[D_{\theta_D}^2(x)\right]$ and
$\big\| \mathbb{E}_{p_{\mathrm{data}}}[\nabla_{\theta_D} D_{\theta_D}(x)] - \mathbb{E}_{p_{\theta_G}}[\nabla_{\theta_D} D_{\theta_D}(x)] \big\|^2 \big|_{\theta_D = \theta_D^*}$
must satisfy Property I in the discriminator and generator space respectively.
Here is an intuitive explanation of these two non-negative functions. The first function is a function
of $\theta_D$ which measures how far $\theta_D$ is from an all-zero state, and the second is a function of $\theta_G$ which
measures how far $\theta_G$ is from the true distribution: at equilibrium these functions are zero. We
will see later that, given $f''(0) < 0$, the curvature of the first function at $\theta_D^*$ is representative of the
curvature of $V(\theta_D, \theta_G^*)$ in the discriminator space; similarly, given $f'(0) \neq 0$, the curvature of the
second function at $\theta_G^*$ is representative of the curvature of the magnitude of the discriminator update
on $\theta_D^*$ in the generator space. The intuition behind this particular relation is that, when $\theta_G$ moves
away from the true distribution, while the second function in Assumption III increases, $\theta_D^*$ also
becomes more suboptimal for that generator; as a result, the magnitude of the update on $\theta_D^*$ increases
too. Besides this, we show in Lemma C.2 that the Hessians of the two functions in Assumption III, in
the discriminator and the generator space respectively, are proportional to $K_{DD}$ and $K_{DG}^T K_{DG}$.
The above relations involving the two functions and the GAN objective, together with Assumption III,
basically allow us to consider systems with many equilibria in a local neighborhood in a specific
sense. In particular, if the curvature of the first function is flat along a direction $u$ (which also means
that $K_{DD} u = 0$), we can perturb $\theta_D^*$ slightly along $u$ and still have an 'equilibrium discriminator'
as defined in Assumption I, i.e., $\forall x \in \mathrm{supp}(p_{\theta_G})$, $D_{\theta_D}(x) = 0$. Similarly, for any direction $v$
along which the curvature of the second function is flat (i.e., $K_{DG} v = 0$), we can perturb $\theta_G^*$ slightly
along that direction such that $\theta_G$ remains an 'equilibrium generator' as defined in Assumption I, i.e.,
$p_{\theta_G} = p_{\mathrm{data}}$. We prove this formally in Lemma C.2. Perturbations along any other directions do not
yield equilibria because then either $\theta_D$ is no longer in an all-zero state or $\theta_G$ does not match the
true distribution. Thus, we consider a setup where the rank deficiencies of $K_{DD}$ and $K_{DG}^T K_{DG}$, if any,
correspond to equivalent equilibria, which typically exist for neural networks, though in practice
they may not correspond to 'linear' perturbations as modeled here.
As a final assumption, we require that all the generators in a sufficiently small neighborhood of the
equilibrium have distributions with the same support as the true distribution.
Assumption IV. $\exists\, \epsilon_G > 0$ such that $\forall\, \theta_G \in B_{\epsilon_G}(\theta_G^*)$, $\mathrm{supp}(p_{\theta_G}) = \mathrm{supp}(p_{\mathrm{data}})$.
We can replace this assumption with a more realistic smoothness condition on the discriminator,
which is sufficient for our results as we prove in Appendix C.1. The motivation is that Assumption IV
may typically hold if the support covers the whole space $\mathcal{X}$; but when the true distribution has
support in some smaller disjoint parts of the space $\mathcal{X}$, nearby generators may correspond to slightly
displaced versions of this distribution with a different support. Perhaps a fairer requirement from
the system would be to hope that the union of the supports of the generator and the generators in its
neighborhood does not cover too large a space, and furthermore, that the equilibrium discriminator is zero
on the union of all these supports, a property that is likely to be satisfied if we restrict ourselves to
smooth discriminators. We mathematically state this assumption as follows:
Assumption IV (Relaxed). $\exists\, \epsilon_G > 0$ such that $D_{\theta_D^*}(x) = 0$ for all $x \in \bigcup_{\theta_G \in B_{\epsilon_G}(\theta_G^*)} \mathrm{supp}(p_{\theta_G})$.
We now state our result.
Theorem 3.1. The dynamical system defined by the GAN objective in Equation 2 and the updates in
Equation 3 is locally exponentially stable with respect to an equilibrium point $(\theta_D^*, \theta_G^*)$ when
Assumptions I, II, III, IV hold for $(\theta_D^*, \theta_G^*)$ and other equilibria in a small neighborhood around it.
Furthermore, the rate of convergence is governed only by the eigenvalues $\lambda$ of the Jacobian $J$ of the
system at equilibrium with a strict negative real part upper bounded as:
- If $\mathrm{Im}(\lambda) = 0$, then $\mathrm{Re}(\lambda) \leq \dfrac{2f''(0)\,f'(0)^2\,\lambda^{(+)}_{\min}(K_{DD})\,\lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}{4f''(0)^2\,\lambda_{\max}(K_{DD})\,\lambda^{(+)}_{\min}(K_{DD}) + f'(0)^2\,\lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}$
- If $\mathrm{Im}(\lambda) \neq 0$, then $\mathrm{Re}(\lambda) \leq f''(0)\,\lambda^{(+)}_{\min}(K_{DD})$
The vast majority of our proofs are deferred to the appendix, but we briefly describe the intuition
here. It is straightforward to show that the Jacobian $J$ of the system at equilibrium can be written as:
$$J = \begin{bmatrix} J_{DD} & J_{DG} \\ -J_{DG}^T & J_{GG} \end{bmatrix} = \begin{bmatrix} 2f''(0)K_{DD} & f'(0)K_{DG} \\ -f'(0)K_{DG}^T & 0 \end{bmatrix}$$
Recall that we wish to show this is Hurwitz. First note that $J_{DD}$ (the Hessian of the objective with
respect to the discriminator) is negative semi-definite if and only if $f''(0) < 0$. Next, a crucial
observation is that $J_{GG} = 0$, i.e., the Hessian term w.r.t. the generator vanishes because for the all-zero
discriminator, all generators result in the same objective value. Fortunately, this means at equilibrium
we do not have non-convexity in $\theta_G$ precluding local stability. Then, we make use of the crucial
Lemma G.2 we prove in the appendix, showing that any matrix of the form $\begin{bmatrix} Q & P \\ -P^T & 0 \end{bmatrix}$ is
Hurwitz provided that $Q$ is strictly negative definite and $P$ has full column rank.
However, this property holds only when $K_{DD}$ is positive definite and $K_{DG}$ is full column rank.
Now, if $K_{DD}$ or $K_{DG}$ do not have this property, recall that the rank deficiency is due to a subspace
of equilibria around $(\theta_D^*, \theta_G^*)$. Consequently, we can analyze the stability of the system projected
onto a subspace orthogonal to these equilibria (Theorem A.4). Additionally, we also prove stability
using Lyapunov's stability theorem (Theorem A.1) by showing that the squared $L_2$ distance to the subspace of
equilibria always either decreases or only instantaneously remains constant.
Additional results. In order to illustrate our assumptions in Theorem 3.1, in Appendix D we
consider a simple GAN that learns a multi-dimensional Gaussian using a quadratic discriminator and
a linear generator. In a similar setup, in Appendix E, we consider the case where $f(x) = x$, i.e., the
Wasserstein GAN, for which $f''(x) = 0$, and we show that the system can perennially cycle around an
equilibrium point without converging. A simple two-dimensional example is visualized in Section 4.
Thus, gradient descent WGAN optimization is not necessarily asymptotically stable.
3.4 Stabilizing optimization via gradient-based regularization
Motivated by the considerations above, in this section we propose a regularization penalty for the
generator update, which uses a term based upon the gradient of the discriminator. Crucially, the
regularization term does not change the parameter values at the equilibrium point, and at the same
time enhances the local stability of the optimization procedure, both in theory and practice. Although
these update equations do require that we differentiate with respect to a function of another gradient
term, such "double backprop" terms (see e.g., Drucker and Le Cun [1992]) are easily computed by
modern automatic differentiation tools. Specifically, we propose the regularized update
$$\theta_G := \theta_G - \nabla_{\theta_G}\!\left( V(D_{\theta_D}, G_{\theta_G}) + \eta \left\| \nabla_{\theta_D} V(D_{\theta_D}, G_{\theta_G}) \right\|^2 \right) \qquad (4)$$
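In an autodiff framework the update in Equation 4 is a few lines of double backprop. A minimal PyTorch sketch, assuming `V` returns the scalar GAN objective and `theta_D`, `theta_G` are lists of parameters (the names are illustrative, not the authors' code):

```python
import torch

def generator_step(theta_D, theta_G, V, opt_G, eta=0.5):
    """One regularized generator update (Eq. 4): descend on
    V + eta * ||grad_{theta_D} V||^2 via double backprop."""
    obj = V(theta_D, theta_G)
    grad_D = torch.autograd.grad(obj, theta_D, create_graph=True)
    penalty = sum((g ** 2).sum() for g in grad_D)
    opt_G.zero_grad()
    (obj + eta * penalty).backward()  # differentiates through grad_D
    opt_G.step()  # opt_G holds only theta_G, so only the generator moves
```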
Local Stability The intuition of this regularizer is perhaps most easily understood by considering
how it changes the Jacobian at equilibrium (though there are other means of motivating the update as
well, discussed further in Appendix F.2). In the Jacobian of the new update, although there are now
non-antisymmetric diagonal blocks, the block diagonal terms are now negative definite:
$$J' = \begin{pmatrix} J_{DD} & J_{DG} \\ -J_{DG}^T (I + 2\eta J_{DD}) & -2\eta J_{DG}^T J_{DG} \end{pmatrix}$$
As we show below in Theorem 3.2 (proved in Appendix F), as long as we choose $\eta$ small enough so that $I + 2\eta J_{DD} \succ 0$, this guarantees the updates are locally asymptotically stable for any concave $f$.
In addition to stability properties, this regularization term also addresses a well known failure state
in GANs called mode collapse, by lending more "foresight" to the generator. The way our updates
provide this foresight is very similar to the unrolled updates proposed in Metz et al. [2017], although our regularization is much simpler and provides more flexibility to leverage the foresight. In practice,
we see that our method can be as powerful as the more complex and slower 10-unrolled GANs. We
discuss this and other intuitive ways of motivating our regularizer in Appendix F.
Theorem 3.2. The dynamical system defined by the GAN objective in Equation 2 and the updates
in Equation 4, is locally exponentially stable at the equilibrium, under the same conditions as in
Theorem 3.1, if $\eta < \frac{1}{2\lambda_{\max}(-J_{DD})}$. Further, under appropriate conditions similar to these, the WGAN system is locally exponentially stable at the equilibrium for any $\eta$. The rate of convergence for the WGAN is governed only by the eigenvalues $\lambda$ of the Jacobian at equilibrium with a strict negative real part upper bounded as:
- If $\mathrm{Im}(\lambda) = 0$, then $\mathrm{Re}(\lambda) \le -\dfrac{2 f'^2(0)\, \eta\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})}{4 f'^2(0)\, \eta^2\, \lambda_{\max}(K_{DG}^T K_{DG}) + 1}$
- If $\mathrm{Im}(\lambda) \ne 0$, then $\mathrm{Re}(\lambda) \le -\eta f'^2(0)\, \lambda^{(+)}_{\min}(K_{DG}^T K_{DG})$

4 Experimental results
We very briefly present experimental results that demonstrate that our regularization term also has
substantial practical promise.4 In Figure 1, we compare our gradient regularization to 10-unrolled
GANs on the same architecture and dataset (a mixture of eight gaussians) as in Metz et al. [2017].
Our system quickly spreads out all the points instead of first exploring only a few modes and then
redistributing its mass over all the modes gradually. Note that the conventional GAN updates are
known to enter mode collapse for this setup. We see similar results (see Figure 2 here, and Figure 4
in the Appendix for a more detailed figure) in the case of a stacked MNIST dataset using a DCGAN
[Radford et al., 2016] i.e., three random digits from MNIST are stacked together so as to create a
distribution over 1000 modes. Finally, Figure 3, presents streamline plots for a 2D system where both
the true and the latent distribution is uniform over $[-1, 1]$, the discriminator is $D(x) = w_2 x^2$, and the generator is $G(z) = az$. Observe that while the WGAN system goes in orbits as expected,
the original GAN system converges. With our updates, both these systems converge quickly to the
true equilibrium.
[Figure 1 panels, left to right: Iteration 0, Iteration 3000, Iteration 8000, Iteration 50000, Iteration 70000]
Figure 1: Gradient regularized GAN, $\eta = 0.5$ (top row) vs. 10-unrolled with $\eta = 10^{-4}$ (bottom row)
Figure 2: Gradient regularized (left) and traditional (right) DCGAN architectures on stacked MNIST
examples, after 1, 4, and 20 epochs.
4 We provide an implementation of this technique at https://github.com/locuslab/gradient_regularized_gan
[Figure 3 panels: GAN with η = 0, 0.25, 0.5, 1.0 (top row) and WGAN with η = 0, 0.25, 0.5, 1 (bottom row); each panel plots a (vertical axis, roughly 0 to 2) against w2 (horizontal axis, roughly -1 to 1).]
Figure 3: Streamline plots around the equilibrium (0, 1) for the conventional GAN (top) and the WGAN (bottom) for $\eta = 0$ (vanilla updates) and $\eta = 0.25, 0.5, 1$ (left to right).
5 Conclusion
In this paper, we presented a theoretical analysis of the local asymptotic stability of GAN optimization
under proper conditions. We further showed that the recently proposed WGAN is not asymptotically
stable under the same conditions, but we introduced a gradient-based regularizer which stabilizes
both traditional GANs and the WGANs, and can improve convergence speed in practice.
The results here provide substantial insight into the nature of GAN optimization, perhaps even
offering some clues as to why these methods have worked so well despite not being convex-concave.
However, we also emphasize that there are substantial limitations to the analysis, and directions for
future work. Perhaps most notably, the analysis here only provides an understanding of what happens
locally, close to an equilibrium point. For non-convex architectures this may be all that is possible, but
it seems plausible that much stronger global convergence results could hold for simple settings like
the linear quadratic GAN (indeed, as the streamline plots show, we observe this in practice for simple
domains). Second, the analysis here does not show the equilibrium points necessarily exist, but only
illustrates convergence if there do exist points that satisfy certain criteria: the existence question has
been addressed by previous work [Arora et al., 2017], but much more analysis remains to be done
here. GANs are rapidly becoming a cornerstone of deep learning methods, and the theoretical and
practical understanding of these methods will prove crucial in moving the field forward.
References
Martin Arjovsky and Léon Bottou. Towards principled methods for training generative adversarial
networks. In International Conference on Learning Representations (ICLR), 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein generative adversarial networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 214-223, 2017.
Sanjeev Arora, Rong Ge, Yingyu Liang, Tengyu Ma, and Yi Zhang. Generalization and equilibrium
in generative adversarial nets (GANs). In Proceedings of the 34th International Conference on
Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 224?232,
2017.
Vivek S Borkar and Sean P Meyn. The ODE method for convergence of stochastic approximation and reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447-469, 2000.
Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, and Wenjie Li. Mode regularized generative
adversarial networks. In Fifth International Conference on Learning Representations (ICLR).
2017.
Emily L Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, pages 1486-1494. 2015.
Harris Drucker and Yann Le Cun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991-997, 1992.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural
Information Processing Systems 27, pages 2672-2680. 2014.
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Improved
training of wasserstein GANs. In Thirty-first Annual Conference on Neural Information Processing
Systems (NIPS). 2017.
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with
recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
Hassan K Khalil. Nonlinear Systems. Prentice-Hall, New Jersey, 1996.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta,
Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic
single image super-resolution using a generative adversarial network. In The IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), July 2017.
Jan R Magnus, Heinz Neudecker, et al. Matrix differential calculus with applications in statistics and
econometrics. 1995.
Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond
mean square error. In Fourth International Conference on Learning Representations (ICLR). 2016.
L. Mescheder, S. Nowozin, and A. Geiger. The numerics of GANs. In Thirty-first Annual Conference
on Neural Information Processing Systems (NIPS). 2017.
Luke Metz, Ben Poole, David Pfau, and Jascha Sohl-Dickstein. Unrolled generative adversarial
networks. In Fifth International Conference on Learning Representations (ICLR). 2017.
Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play
generative networks: Conditional iterative generation of images in latent space. In The IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
Ben Poole, Alexander A Alemi, Jascha Sohl-Dickstein, and Anelia Angelova. Improved generator
objectives for GANs. arXiv preprint arXiv:1612.02780, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. In Fourth International Conference on Learning
Representations (ICLR). 2016.
K. Roth, A. Lucchi, S. Nowozin, and T. Hofmann. Stabilizing training of generative adversarial networks through regularization. In Thirty-first Annual Conference on Neural Information Processing
Systems (NIPS). 2017.
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen.
Improved techniques for training GANs. In Advances in Neural Information Processing Systems
29, pages 2234-2242. 2016.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, Bill Freeman, and Josh Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in Neural
Information Processing Systems 29, pages 82-90. 2016.
Toward Robustness against Label Noise in
Training Deep Discriminative Neural Networks
Arash Vahdat
D-Wave Systems Inc.
Burnaby, BC, Canada
[email protected]
Abstract
Collecting large training datasets, annotated with high-quality labels, is costly
and time-consuming. This paper proposes a novel framework for training deep
convolutional neural networks from noisy labeled datasets that can be obtained
cheaply. The problem is formulated using an undirected graphical model that
represents the relationship between noisy and clean labels, trained in a semi-supervised setting. In our formulation, the inference over latent clean labels is
tractable and is regularized during training using auxiliary sources of information.
The proposed model is applied to the image labeling problem and is shown to be
effective in labeling unseen images as well as reducing label noise in training on
CIFAR-10 and MS COCO datasets.
1 Introduction
The availability of large annotated data collections such as ImageNet [1] is one of the key reasons
why deep convolutional neural networks (CNNs) have been successful in the image classification
problem. However, collecting training data with such high-quality annotation is very costly and time
consuming. In some applications, annotators are required to be trained before identifying classes in
data, and feedback from many annotators is aggregated to reduce labeling error. On the other hand,
many inexpensive approaches for collecting labeled data exist, such as data mining on social media
websites, search engines, querying fewer annotators per instance, or the use of amateur annotators
instead of experts. However, all these low-cost approaches have one common side effect: label noise.
This paper tackles the problem of training deep CNNs for the image labeling task from datapoints
with noisy labels. Most previous work in this area has focused on modeling label noise for multiclass
classification1 using a directed graphical model similar to Fig. 1.a. It is typically assumed that the
clean labels are hidden during training, and they are marginalized by enumerating all possible classes.
These techniques cannot be extended to the multilabel classification problem, where exponentially
many configurations exist for labels, and the explaining-away phenomenon makes inference over
latent clean labels difficult.
We propose a conditional random field (CRF) [2] model to represent the relationship between noisy
and clean labels, and we show how modern deep CNNs can gain robustness against label noise using
our proposed structure. We model the clean labels as latent variables during training, and we design
our structure such that the latent variables can be inferred efficiently.
The main challenge in modeling clean labels as latent is the lack of semantics on latent variables.
In other words, latent variables may not semantically correspond to the clean labels when the joint
probability of clean and noisy labels is parameterized such that latent clean labels can take any
1 Each sample is assumed to belong to only one class.
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 panels: (a) a directed model linking x, ŷ, y; (b) the proposed undirected model over x, ŷ, y, and h]
Figure 1: a) The general directed graphical model used for modeling noisy labels. $x$, $\hat{y}$, $y$ represent a data instance, its clean label, and its noisy label, respectively. b) We represent the interactions between clean and noisy labels using an undirected graphical model with hidden binary random variables ($h$).
configuration. To solve this problem, most previous work relies on either carefully initializing the
conditionals [3], fine-tuning the model on the noisy set after pretraining on a clean set [4], or
regularizing the transition parameters [5]. In contrast, we inject semantics to the latent variables
by formulating the training problem as a semi-supervised learning problem, in which the model is
trained using a large set of noisy training examples and a small set of clean training examples. To
overcome the problem of inferring clean labels, we introduce a novel framework equipped with an
auxiliary distribution that represents the relation between noisy and clean labels while relying on
information sources different than the image content.
This paper makes the following contributions: i) A generic CRF model is proposed for training deep
neural networks that is robust against label noise. The model can be applied to both multiclass and
multilabel classification problems, and it can be understood as a robust loss layer, which can be
plugged into any existing network. ii) We propose a novel objective function for training the deep
structured model that benefits from sources of information representing the relation between clean
and noisy labels. iii) We demonstrate that the model outperforms previous techniques.
2 Previous Work
Learning from Noisy Labels: Learning discriminative models from noisy-labeled data is an active
area of research. A comprehensive overview of previous work in this area can be found in [6].
Previous research on modeling label noise can be grouped into two main groups: class-conditional
and class-and-instance-conditional label noise models. In the former group, the label noise is assumed
to be independent of the instance, and the transition probability from clean classes to the noisy classes
is modeled. For example, class conditional models for binary classification problems are considered
in [7, 8] whereas multiclass counterparts are targeted in [9, 5]. In the class-and-instance-conditional
group, label noise is explicitly conditioned on each instance. For example, Xiao et al. [3] developed a
model in which the noisy observed annotation is conditioned on binary random variables indicating
if an instance's label is mistaken. Reed et al. [10] fix noisy labels by "bootstrapping" on the
labels predicted by a neural network. These techniques are all applied to either binary or multiclass
classification problems in which marginalization over classes is possible. Among methods proposed
for noise-robust training, Misra et al. [4] target the image multilabeling problem but model the label
noise for each label independently. In contrast, our proposed CRF model represents the relation
between all noisy and clean labels while the inference over latent clean labels is still tractable.
Many works have focused on semi-supervised learning using a small clean dataset combined with
noisy labeled data, typically obtained from the web. Zhu et al. [11] used a pairwise similarity measure
to propagate labels from labeled dataset to unlabeled one. Fergus et al. [12] proposed a graph-based
label propagation, and Chen and Gupta [13] employed the weighted cross entropy loss. Recently Veit
et al. [14] proposed a multi-task network containing i) a regression model that maps noisy labels and
image features to clean labels ii) an image classification model that labels input. However, the model
in this paper is trained using a principled objective function that regularizes the inference model using
extra sources of information without the requirement for oversampling clean instances.
Deep Structured Models: Conditional random fields (CRFs) [2] are discriminative undirected
graphical models, originally proposed for modeling sequential and structured data. Recently, they have
shown state-of-the-art results in segmentation [15, 16] when combined with deep neural networks [17,
18, 19]. The main challenge in training deep CNN-CRFs is how to do inference and back-propagate
gradients of the loss function through the inference. Previous approaches have focused on mean-field
2
approximation [16, 20], belief propagation [21, 22], unrolled inference [23, 24], and sampling [25].
The CNN-CRFs used in this work are extensions of hidden CRFs introduced in [26, 27].
3 Robust Discriminative Neural Network
Our goal in this paper is to train deep neural networks given a set of noisy labeled data and a small
set of cleaned data. A datapoint (an image in our case) is represented by x , and its noisy annotation
by a binary vector $y = \{y_1, y_2, \dots, y_N\} \in \mathcal{Y}_N$, where $y_i \in \{0, 1\}$ indicates whether the $i$th label is
present in the noisy annotation. We are interested in inferring a set of clean labels for each datapoint.
The clean labels may be defined on a set different than the set of noisy labels. This is typically the
case in the image annotation problem where noisy labels obtained from user tags are defined over a
large set of textual tags (e.g., "cat", "kitten", "kitty", "puppy", "pup", etc.), whereas clean labels are defined on a small set of representative labels (e.g., "cat", "dog", etc.). In this paper, the clean label is represented by a stochastic binary vector $\hat{y} = \{\hat{y}_1, \hat{y}_2, \dots, \hat{y}_C\} \in \mathcal{Y}_C$.
We use the CRF model shown in Fig. 1.b. In our formulation, both $\hat{y}$ and $y$ may conditionally depend on the image $x$. The link between $\hat{y}$ and $y$ captures the correlations between clean and noisy labels. These correlations help us infer latent clean labels when only the noisy labels are observed. Since noisy labels are defined over a large set of overlapping (e.g., "cat" and "pet") or co-occurring (e.g., "road" and "car") entities, $p(y | \hat{y}, x)$ may have a multimodal form. To keep the inference simple and still be able to model these correlations, we introduce a set of hidden binary variables represented by $h \in \mathcal{H}$. In this case, the correlations between components of $y$ are modeled through $h$. These hidden variables are not connected to $\hat{y}$ in order to keep the CRF graph bipartite.
The CRF model shown in Fig. 1.b defines the joint probability distribution of $y$, $\hat{y}$, and $h$ conditioned on $x$ using a parameterized energy function $E_\theta : \mathcal{Y}_N \times \mathcal{Y}_C \times \mathcal{H} \times \mathcal{X} \to \mathbb{R}$. The energy function assigns a potential score $E_\theta(y, \hat{y}, h, x)$ to the configuration of $(y, \hat{y}, h, x)$, and is parameterized by a parameter vector $\theta$. This conditional probability distribution is defined using a Boltzmann distribution:
$$p_\theta(y, \hat{y}, h | x) = \frac{1}{Z_\theta(x)} \exp(-E_\theta(y, \hat{y}, h, x)) \qquad (1)$$
where $Z_\theta(x)$ is the partition function defined by $Z_\theta(x) = \sum_{y \in \mathcal{Y}_N} \sum_{\hat{y} \in \mathcal{Y}_C} \sum_{h \in \mathcal{H}} \exp(-E_\theta(y, \hat{y}, h, x))$. The
energy function in Fig. 1.b is defined by the quadratic function:
$$E_\theta(y, \hat{y}, h, x) = -a_\phi^T(x)\hat{y} - b_\phi^T(x)y - c^T h - \hat{y}^T W y - h^T W' y \qquad (2)$$
where the vectors $a_\phi(x)$, $b_\phi(x)$, $c$ are the bias terms and the matrices $W$ and $W'$ are the pairwise interactions. In our formulation, the bias terms on the clean and noisy labels are functions of the input $x$ and are defined using a deep CNN parameterized by $\phi$. The deep neural network together with the introduced CRF forms our CNN-CRF model, parameterized by $\theta = \{\phi, c, W, W'\}$. Note that in order to regularize $W$ and $W'$, these matrices are not a function of $x$.
The structure of this graph is designed such that the conditional distribution $p_\theta(\hat{y}, h | y, x)$ takes a simple factorial form that can be calculated analytically given $\theta$ using $p_\theta(\hat{y}, h | y, x) = \prod_i p_\theta(\hat{y}_i | y, x) \prod_j p_\theta(h_j | y)$, where $p_\theta(\hat{y}_i = 1 | y, x) = \sigma(a_\phi(x)_{(i)} + W_{(i,:)} y)$ and $p_\theta(h_j = 1 | y) = \sigma(c_{(j)} + W'_{(j,:)} y)$, in which $\sigma(u) = \frac{1}{1 + \exp(-u)}$ is the logistic function, and $a_\phi(x)_{(i)}$ or $W_{(i,:)}$ indicate the $i$th element and row in the corresponding vector or matrix respectively.
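Concretely, the analytic posterior amounts to two sigmoid layers applied to the noisy label vector. A minimal numpy sketch (shapes are illustrative assumptions: C clean labels, N noisy labels, M hidden units):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def posterior(a_x, c, W, W_prime, y):
    """Analytic conditionals of the bipartite CRF given noisy labels y.
    a_x = a_phi(x) from the CNN; a_x: (C,), c: (M,), W: (C, N), W_prime: (M, N)."""
    q_yhat = sigmoid(a_x + W @ y)      # p_theta(yhat_i = 1 | y, x)
    q_h = sigmoid(c + W_prime @ y)     # p_theta(h_j = 1 | y)
    return q_yhat, q_h
```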
3.1 Semi-Supervised Learning Approach
The main challenge here is how to train the parameters of the CNN-CRF model defined in Eq. 1. To
tackle this problem, we define the training problem as a semi-supervised learning problem where
clean labels are observed in a small subset of a larger training set annotated with noisy labels. In this
case, one can form an objective function by combining the marginal data likelihood defined on both
the fully labeled clean set and noisy labeled set, and using the maximum likelihood method to learn the parameters of the model. Assume that $\mathcal{D}_N = \{(x^{(n)}, y^{(n)})\}$ and $\mathcal{D}_C = \{(x^{(c)}, \hat{y}^{(c)}, y^{(c)})\}$ are two disjoint sets representing the noisy labeled and clean labeled training datasets, respectively. In the
maximum likelihood method, the parameters are trained by maximizing the marginal log likelihood:
$$\max_\theta \; \frac{1}{|\mathcal{D}_N|} \sum_n \log p_\theta(y^{(n)} | x^{(n)}) + \frac{1}{|\mathcal{D}_C|} \sum_c \log p_\theta(y^{(c)}, \hat{y}^{(c)} | x^{(c)}) \qquad (3)$$
where $p_\theta(y^{(n)} | x^{(n)}) = \sum_{\hat{y}, h} p_\theta(y^{(n)}, \hat{y}, h | x^{(n)})$ and $p_\theta(y^{(c)}, \hat{y}^{(c)} | x^{(c)}) = \sum_h p_\theta(y^{(c)}, \hat{y}^{(c)}, h | x^{(c)})$. Due
to the marginalization of hidden variables in log terms, the objective function cannot be analytically
optimized. A common approach to optimizing the log marginals is to use the stochastic maximum
likelihood method which is also known as persistent contrastive divergence (PCD) [28, 29, 25].
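Note that the clean-set marginal only requires summing out $h$, and the bipartite structure makes that sum analytic: since $h$ is binary and enters the energy linearly, $\sum_h \exp(c^T h + h^T W' y) = \prod_j (1 + \exp(c_{(j)} + W'_{(j,:)} y))$. A sketch of the resulting unnormalized log-probability (the intractable $Z_\theta(x)$ is of course not computed, so this is only useful for comparing configurations):

```python
import numpy as np

def log_ptilde(a_x, b_x, c, W, W_prime, y, yhat):
    """Unnormalized log p_theta(y, yhat | x), with h marginalized
    in closed form via the softplus identity log(1 + e^u)."""
    softplus = np.logaddexp(0.0, c + W_prime @ y)  # elementwise log(1 + e^u)
    return a_x @ yhat + b_x @ y + yhat @ (W @ y) + softplus.sum()
```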
P
The stochastic maximum likelihood method, or equivalently PCD, can be fundamentally viewed
as an Expectation-Maximization (EM) approach to training. The EM algorithm maximizes the
variational lower bound that is formed by subtracting the Kullback?Leibler (KL) divergence between
a variational approximating distribution q and the true conditional distribution from the log marginal
probability. For example, consider the bound for the first term in the objective function:
x) ? log p? (yy |x
x) ? KL[q(?
y , h|yy , x)||p? (?
y , h|yy , x)]
log p? (yy |x
(4)
x)] ? Eq(?y ,hh|yy ,xx) [log q(?
x, y ). (5)
= Eq(?y ,hh|yy ,xx) [log p? (yy , y?, h |x
y , h |yy , x )] = U? (x
x, y )
If the incremental EM approach[30] is taken for training the parameters ? , the lower bound U? (x
is maximized over the noisy training set by iterating between two steps. In the Expectation step
(E step), $\theta$ is fixed and the lower bound is optimized with respect to the conditional distribution $q(\hat{y}, h | y, x)$. Since this distribution is only present in the KL term in Eq. 4, the lower bound is maximized simply by setting $q(\hat{y}, h | y, x)$ to the analytic $p_\theta(\hat{y}, h | y, x)$. In the Maximization step (M step), $q$ is fixed, and the bound is maximized with respect to the model parameters $\theta$,
which occurs only in the first expectation term in Eq. 5. This expectation can be written as
$\mathbb{E}_{q(\hat{y}, h | x, y)}[-E_\theta(y, \hat{y}, h, x)] - \log Z_\theta(x)$, which is maximized by updating $\theta$ in the direction of its gradient, computed using $-\mathbb{E}_{q(\hat{y}, h | x, y)}[\frac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)] + \mathbb{E}_{p_\theta(y, \hat{y}, h | x)}[\frac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)]$. Noting that $q(\hat{y}, h | y, x)$ is set to $p_\theta(\hat{y}, h | y, x)$ in the E step, it becomes clear that the M step is equivalent to the
parameter updates in PCD.
3.2 Semi-Supervised Learning Regularized by Auxiliary Distributions
The semi-supervised approach infers the latent variables using the conditional $q(\hat{y}, h | y, x) = p_\theta(\hat{y}, h | y, x)$. However, at the beginning of training, when the model's parameters are not trained yet, sampling from the conditional distribution $p_\theta(\hat{y}, h | y, x)$ does not necessarily generate the clean labels
accurately. The problem is more severe with the strong representation power of CNN-CRFs, as they
can easily fit to poor conditional distributions that occur at the beginning of training. That is why the
impact of the noisy set on training must be reduced by oversampling clean instances [14, 3].
In contrast, there may exist auxiliary sources of information that can be used to extract the relationship
between noisy and clean labels. For example, non-image-related sources may be formed from
semantic relatedness of labels [31]. We assume that, in using such sources, we can form an auxiliary distribution $p_{\text{aux}}(y, \hat{y}, h)$ representing the joint probability of noisy and clean labels and some hidden binary states. Here, we propose a framework to use this distribution to train parameters in the semi-supervised setting by guiding the variational distribution to infer the clean labels more accurately. To
do so, we add a new regularization term in the lower bound that penalizes the variational distribution
for being different from the conditional distribution resulting from the auxiliary distribution as
follows:
$$\log p_\theta(y | x) \ge \mathcal{U}^{\text{aux}}_\theta(x, y) = \log p_\theta(y | x) - \mathrm{KL}[q(\hat{y}, h | y, x) \,\|\, p_\theta(\hat{y}, h | y, x)] - \alpha\, \mathrm{KL}[q(\hat{y}, h | y, x) \,\|\, p_{\text{aux}}(\hat{y}, h | y)]$$
where $\alpha$ is a non-negative scalar hyper-parameter that controls the impact of the added KL term. Setting $\alpha = 0$ recovers the original variational lower bound defined in Eq. 4, whereas $\alpha \to \infty$ forces the variational distribution $q$ to ignore the $p_\theta(\hat{y}, h | y, x)$ term. A value between these two extremes makes the inference distribution intermediate between $p_\theta(\hat{y}, h | y, x)$ and $p_{\text{aux}}(\hat{y}, h | y)$. Note that this
new lower bound is actually looser than the original bound. This may be undesired if we were actually
interested in predicting noisy labels. However, our goal is to predict clean labels, and the proposed
framework benefits from the regularization that is imposed on the variational distribution. Similar
ideas have been explored in the posterior regularization approach [32].
Similarly, we also define a new lower bound on the second log marginal in Eq. 3 by:
$$\log p_\theta(y, \hat{y} | x) \ge \mathcal{L}^{\text{aux}}_\theta(x, y, \hat{y}) = \log p_\theta(y, \hat{y} | x) - \mathrm{KL}[q(h | y) \,\|\, p_\theta(h | y)] - \alpha\, \mathrm{KL}[q(h | y) \,\|\, p_{\text{aux}}(h | y)].$$
Auxiliary Distribution: In this paper, the auxiliary joint distribution $p_{\text{aux}}(y, \hat{y}, h)$ is modeled by an undirected graphical model in a special form of a restricted Boltzmann machine (RBM), and is trained on the clean training set. The structure of the RBM is similar to the CRF model shown in Fig. 1.b with the fundamental difference that the parameters of the model do not depend on $x$:
$$p_{\text{aux}}(y, \hat{y}, h) = \frac{1}{Z_{\text{aux}}} \exp(-E_{\text{aux}}(y, \hat{y}, h)) \qquad (6)$$
where the energy function is defined by the quadratic function:
$$E_{\text{aux}}(y, \hat{y}, h) = -a_{\text{aux}}^T \hat{y} - b_{\text{aux}}^T y - c_{\text{aux}}^T h - \hat{y}^T W_{\text{aux}} y - h^T W'_{\text{aux}} y \qquad (7)$$
and $Z_{\text{aux}}$ is the partition function, defined similarly to the CRF's partition function. The number of
hidden variables is set to 200 and the parameters of this generative model are trained using the PCD
algorithm [28], and are fixed while the CNN-CRF model is being trained.
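Since $\hat{y}$ and $h$ are each coupled only to $y$, the auxiliary model is exactly an RBM with visible units $v = y$ and hidden units $u = (\hat{y}, h)$ stacked. A sketch of one PCD update under that view (the minibatch shapes and the single Gibbs step per update are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def pcd_step(b, c, W, v_data, v_chain, lr=0.01, k=1):
    """One PCD update for an RBM with energy -b^T v - c^T u - u^T W v,
    where v = y (noisy labels) and u stacks (yhat, h)."""
    pu_data = sigmoid(c + v_data @ W.T)                # positive phase, analytic
    v = v_chain
    for _ in range(k):                                 # negative phase: block Gibbs
        u = (rng.random(pu_data.shape) < sigmoid(c + v @ W.T)).astype(float)
        v = (rng.random(v.shape) < sigmoid(b + u @ W)).astype(float)
    pu_model = sigmoid(c + v @ W.T)
    W += lr * (pu_data.T @ v_data - pu_model.T @ v) / len(v_data)
    b += lr * (v_data - v).mean(axis=0)
    c += lr * (pu_data - pu_model).mean(axis=0)
    return v                                           # persist for the next step
```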
3.3 Training Robust CNN-CRF
In training, we seek $\theta$ that maximizes the proposed lower bounds on the noisy and clean training sets:
$$\max_\theta \; \frac{1}{|\mathcal{D}_N|} \sum_n \mathcal{U}^{\text{aux}}_\theta(x^{(n)}, y^{(n)}) + \frac{1}{|\mathcal{D}_C|} \sum_c \mathcal{L}^{\text{aux}}_\theta(x^{(c)}, y^{(c)}, \hat{y}^{(c)}). \qquad (8)$$
The optimization problem is solved in a two-step iterative procedure as follows:
E step: The objective function is optimized with respect to $q(\hat{y}, h | y, x)$ for a fixed $\theta$. For $\mathcal{U}^{\text{aux}}_\theta(x, y)$, this is done by solving the following problem:
$$\min_q \; \mathrm{KL}[q(\hat{y}, h | y, x) \,\|\, p_\theta(\hat{y}, h | y, x)] + \alpha\, \mathrm{KL}[q(\hat{y}, h | y, x) \,\|\, p_{\text{aux}}(\hat{y}, h | y)]. \qquad (9)$$
The weighted average of KL terms above is minimized with respect to $q$ when:
$$q(\hat{y}, h | y, x) \propto \left[ p_\theta(\hat{y}, h | y, x) \cdot p_{\text{aux}}^{\alpha}(\hat{y}, h | y) \right]^{\frac{1}{\alpha + 1}}, \qquad (10)$$
which is a weighted geometric mean of the true conditional distribution and auxiliary distribution.
Given the factorial structure of these distributions, $q(\hat{y}, h | y, x)$ is also a factorial distribution:
$$q(\hat{y}_i = 1 | y, x) = \sigma\!\left( \tfrac{1}{\alpha + 1} \left( a_\phi(x)_{(i)} + W_{(i,:)} y + \alpha\, a_{\text{aux}(i)} + \alpha\, W_{\text{aux}(i,:)} y \right) \right)$$
$$q(h_j = 1 | y) = \sigma\!\left( \tfrac{1}{\alpha + 1} \left( c_{(j)} + W'_{(j,:)} y + \alpha\, c_{\text{aux}(j)} + \alpha\, W'_{\text{aux}(j,:)} y \right) \right).$$
Optimizing $\mathcal{L}^{\text{aux}}_\theta(x, y, \hat{y})$ w.r.t. $q(h | y)$ gives a similar factorial result:
$$q(h | y) \propto \left[ p_\theta(h | y) \cdot p_{\text{aux}}^{\alpha}(h | y) \right]^{\frac{1}{\alpha + 1}}. \qquad (11)$$
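In logit space, a weighted geometric mean of factorial Bernoulli distributions is again factorial with averaged logits, so the regularized E step costs no more than the plain one. A numpy sketch of Eqs. 10-11 (shapes as before):

```python
import numpy as np

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def regularized_posterior(a_x, c, W, W_prime,
                          a_aux, c_aux, W_aux, W_prime_aux, y, alpha):
    """E step with the auxiliary regularizer: logits of p_theta and p_aux
    are mixed with weights 1/(1+alpha) and alpha/(1+alpha)."""
    q_yhat = sigmoid((a_x + W @ y + alpha * (a_aux + W_aux @ y))
                     / (1.0 + alpha))
    q_h = sigmoid((c + W_prime @ y + alpha * (c_aux + W_prime_aux @ y))
                  / (1.0 + alpha))
    return q_yhat, q_h
```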
M step: Holding $q$ fixed, the objective function is optimized with respect to $\theta$. This is achieved by updating $\theta$ in the direction of the gradient of $\mathbb{E}_{q(\hat{y}, h | x, y)}[\log p_\theta(y, \hat{y}, h | x)]$, which is:
$$\frac{\partial}{\partial \theta} \mathcal{U}^{\text{aux}}_\theta(x, y) = \frac{\partial}{\partial \theta} \mathbb{E}_{q(\hat{y}, h | x, y)}[\log p_\theta(y, \hat{y}, h | x)] = -\mathbb{E}_{q(\hat{y}, h | x, y)}\!\left[\tfrac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)\right] + \mathbb{E}_{p_\theta(y, \hat{y}, h | x)}\!\left[\tfrac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)\right], \qquad (12)$$
where the first expectation (the positive phase) is defined under the variational distribution $q$ and the second expectation (the negative phase) is defined under the CRF model $p_\theta(y, \hat{y}, h | x)$. With the factorial form of $q$, the first expectation is analytically tractable. The second expectation is estimated by PCD [28, 29, 25]. This approach requires maintaining a set of particles for each training instance that are used for seeding the Markov chains at each iteration of training.
The gradient of the lower bound on the clean set is defined similarly:
$$\frac{\partial}{\partial \theta} \mathcal{L}^{\text{aux}}_\theta(x, y, \hat{y}) = \frac{\partial}{\partial \theta} \mathbb{E}_{q(h | y)}[\log p_\theta(y, \hat{y}, h | x)] = -\mathbb{E}_{q(h | y)}\!\left[\tfrac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)\right] + \mathbb{E}_{p_\theta(y, \hat{y}, h | x)}\!\left[\tfrac{\partial}{\partial \theta} E_\theta(y, \hat{y}, h, x)\right] \qquad (13)$$
with the minor difference that in the positive phase the clean label $\hat{y}$ is given for each instance and the
variational distribution is defined over only the hidden variables.
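The negative phase in Eqs. 12-13 only needs samples from $p_\theta(y, \hat{y}, h | x)$, and the bipartite layout makes block Gibbs natural: $(\hat{y}, h)$ are resampled jointly given $y$, then $y$ given $(\hat{y}, h)$. A sketch of the sweeps that refresh one persistent particle (array shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def gibbs_sweeps(a_x, b_x, c, W, W_prime, y, yhat, h, sweeps=100):
    """Block Gibbs over p_theta(y, yhat, h | x) for the negative phase."""
    for _ in range(sweeps):
        yhat = (rng.random(yhat.shape) < sigmoid(a_x + W @ y)).astype(float)
        h = (rng.random(h.shape) < sigmoid(c + W_prime @ y)).astype(float)
        y = (rng.random(y.shape) <
             sigmoid(b_x + W.T @ yhat + W_prime.T @ h)).astype(float)
    return y, yhat, h
```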
Scheduling $\alpha$: Instead of setting $\alpha$ to a fixed value during training, it is set to a very large value at the beginning of training and is slowly decreased to smaller values. The rationale behind this is that at the beginning of training, when $p_\theta(\hat{y}, h | y, x)$ cannot predict the clean labels accurately, it is intuitive to rely more on the pretrained $p_{\text{aux}}(\hat{y}, h | y)$ when inferring the latent variables. As training proceeds, we
shift the variational distribution q more toward the true conditional distribution.
Algorithm 1 summarizes the learning procedure proposed for training our CRF-CNN. The training is
done end-to-end for both CNN and CRF parameters together. At test time, samples generated by Gibbs sampling from $p_\theta(y, \hat{y}, h | x)$ for the test image $x$ are used to compute the marginal $p_\theta(\hat{y} | x)$.
Algorithm 1: Train robust CNN-CRF with simple gradient descent
Input: Noisy dataset $\mathcal{D}_N$ and clean dataset $\mathcal{D}_C$, auxiliary distribution $p_{\text{aux}}(y, \hat{y}, h)$, a learning rate parameter $\epsilon$, and a schedule for $\alpha$
Output: Model parameters $\theta = \{\phi, c, W, W'\}$
Initialize model parameters
while stopping criteria is not met do
    foreach minibatch $\{(x^{(n)}, y^{(n)}), (x^{(c)}, \hat{y}^{(c)}, y^{(c)})\} = \text{getMinibatch}(\mathcal{D}_N, \mathcal{D}_C)$ do
        Compute $q(\hat{y}, h | y^{(n)}, x^{(n)})$ by Eq. 10 for each noisy instance
        Compute $q(h | y^{(c)})$ by Eq. 11 for each clean instance
        Do Gibbs sweeps to sample from the current $p_\theta(y, \hat{y}, h | x^{(\cdot)})$ for each clean/noisy instance
        $(m_n, m_c) \leftarrow$ (# noisy instances in minibatch, # clean instances in minibatch)
        $\theta \leftarrow \theta + \epsilon \big( \frac{1}{m_n} \sum_n \frac{\partial}{\partial \theta} \mathcal{U}^{\text{aux}}_\theta(x^{(n)}, y^{(n)}) + \frac{1}{m_c} \sum_c \frac{\partial}{\partial \theta} \mathcal{L}^{\text{aux}}_\theta(x^{(c)}, y^{(c)}, \hat{y}^{(c)}) \big)$ by Eqs. 12 and 13
    end
end
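The one piece Algorithm 1 leaves unspecified is the $\alpha$ schedule. Section 4 anneals $\alpha$ from a large to a small value (e.g., 40 to 5 over 11 epochs), but the shape of the schedule is not stated, so the linear ramp below is an assumption:

```python
def alpha_schedule(epoch, start=40.0, end=5.0, anneal_epochs=11):
    """Anneal the KL regularization weight alpha (linear shape assumed)."""
    t = min(epoch / float(anneal_epochs), 1.0)
    return (1.0 - t) * start + t * end
```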
4 Experiments
In this section, we examine the proposed robust CNN-CRF model for the image labeling problem.
4.1 Microsoft COCO Dataset
The Microsoft COCO 2014 dataset is one of the largest publicly available datasets that contains both
noisy and clean object labels. Created from challenging Flickr images, it is annotated with 80 object
categories as well as captions describing the images. Following [4], we use the 1000 most common
words in the captions as the set of noisy labels. We form a binary vector of this length for each
image representing the words present in the caption. We use 73 object categories as the set of clean
labels, and form binary vectors indicating whether the object categories are present in the image.
We follow the same 87K/20K/20K train/validation/test split as [4], and use mean average precision
(mAP) measure over these 73 object categories as the performance assessment. Finally, we use 20%
of the training data as the clean labeled training set (DC ). The rest of data was used as the noisy
training set (DN ), in which clean labels were ignored in training.
Network Architectures: We use the implementation of ResNet-50 [33] and VGG-16 [34] in TensorFlow as the neural networks that compute the bias coefficients in the energy function of our CRF
(Eq. 2). These two networks are applied in a fully convolutional setting to each image. Their features
in the final layer are pooled in the spatial domain using an average pooling operation, and these are
passed through a fully connected linear layer to generate the bias terms. VGG-16 is used intentionally
in order to compare our method directly with [4] that uses the same network. ResNet-50 experiments
enable us to examine how our model works with other modern architectures. Misra et al. [4] have
reported results when the images were upsampled to 565 pixels. Using upsampled images improves
the performance significantly, but they make cross validation significantly slower. Here, we report
our results for image sizes of both 224 (small) and 565 pixels (large).
Parameters Update: The parameters of all the networks were initialized from ImageNet-trained
models that are provided in TensorFlow. The other terms in the energy function of our CRF were all
6
[Figure 2 panels: (a) Clean, (b) Noisy, (c) No link, (d) CRF w/o h, (e) CRF w/ h, (f) CRF w/o x → y]
Figure 2: Visualization of different variations of the model examined in the experiments.
initialized to zero. Our gradient estimates can have high variance as they are based on a Monte Carlo
estimate. For training, we use Adam [35] updates that are shown to be robust against noisy gradients.
The learning rate and epsilon for the optimizer are set to (0.001, 1) and (0.0003, 0.1) respectively in
VGG-16 and ResNet-50. We anneal $\alpha$ from 40 to 5 in 11 epochs.
Sampling Overhead: Fifty Markov chains per datapoint are maintained for PCD. In each iteration
of the training, the chains are retrieved for the instances in the current minibatch, and 100 iterations
of Gibbs sampling are applied for negative phase samples. After parameter updates, the final state of
chains is stored in memory for the next epoch. Note that we are only required to store the state of the
chains for either $(\hat{y}, h)$ or $y$. In this experiment, since the size of $h$ is 200, the former case is more
memory efficient. Storing persistent chains in this dataset requires only about 1 GB of memory. In
ResNet-50, sampling increases the training time only by 16% and 8% for small and large images
respectively. The overhead is 9% and 5% for small and large images in VGG-16.
Baselines: Our proposed method is compared against several baselines visualized in Fig. 2:
- Cross entropy loss with clean labels: The networks are trained using cross entropy loss with all the clean labels. This defines a performance upper bound for each network.
- Cross entropy loss with noisy labels: The model is trained using only noisy labels. Then, predictions on the noisy labels are mapped to clean labels using the manual mapping in [4].
- No pairwise terms: All the pairwise terms are removed and the model is trained using analytic gradients without any sampling using our proposed objective function in Eq. 8.
- CRF without hidden: $W$ is trained but $W'$ is omitted from the model.
- CRF with hidden: Both $W$ and $W'$ are present in the model.
- CRF without x → y link: Same as the previous model but $b$ is not a function of $x$.
- CRF without x → y link ($\alpha = 0$): Same as the previous model but trained with $\alpha = 0$.
The experimental results are reported in Table 1 under "Caption Labels." A performance increase is observed after adding each component to the model. However, removing the x → y link generally
improves the performance significantly. This may be because removing this link forces the model
to rely on $\hat{y}$ and its correlations with $y$ for predicting $y$ on the noisy labeled set. This can translate to better recognition of clean labels. Last but not least, the CRF model with no x → y connection trained using $\alpha = 0$ performed very poorly on this dataset. This demonstrates the importance of the
introduced regularization in training.
4.2 Microsoft COCO Dataset with Flickr Tags
The images in the COCO dataset were originally gathered and annotated from the Flickr website. This
means that these image have actual noisy Flickr tags. To examine the performance of our model on
actual noisy labels, we collected these tags for the COCO images using Flickr?s public API. Similar
to the previous section, we used the 1024 most common tags as the set of noisy labels. We observed
that these tags have significantly more noise compared to the noisy labels in the previous section;
therefore, it is more challenging to predict clean labels from them using the auxiliary distribution.
In this section, we only examine the ResNet-50 architecture for both small and large image sizes.
The different baselines introduced in the previous section are compared against each other in Table 1
under "Flickr Tags."
Auxiliary Distribution vs. Variational Distribution: As the auxiliary distribution $p_{\text{aux}}$ is fixed,
and the variational distribution q is updated using Eq. 10 in each iteration, a natural question is how
Table 1: The performance of different baselines on the COCO dataset in terms of mAP (%).
                                  Caption Labels (Sec. 4.1)            Flickr Tags (Sec. 4.2)
                                  ResNet-50         VGG-16             ResNet-50
Baseline                          Small   Large     Small   Large      Small   Large
Cross entropy loss w/ clean       68.57   78.38     71.99   75.50      68.57   78.38
Cross entropy loss w/ noisy       56.88   64.13     58.59   62.75      58.01   67.84
No pairwise link                  63.67   73.19     66.18   71.78      59.04   67.22
CRF w/o hidden                    64.26   73.23     67.73   71.78      59.19   67.33
CRF w/ hidden                     65.73   74.04     68.35   71.92      60.97   67.57
CRF w/o x → y link                66.61   75.00     69.89   73.16      -       -
CRF w/o x → y link (α = 0)        48.53   56.53     56.76   56.39      47.25   58.74
Misra et al. [4]                  -       -         -       66.8       -       -
Fang et al. [36] reported in [4]  -       -         -       63.7       -       -
q differs from $p_{\text{aux}}$. Since we have access to the clean labels in the COCO dataset, we examine
the accuracy of q in terms of predicting clean labels on the noisy training set (DN ) using the mAP
measurement at the beginning and end of training the CRF-CNN model (ResNet-50 on large images).
We observed that at the beginning of training, when $\alpha$ is big, $q$ is almost equal to $p_{\text{aux}}$, which obtains
49.4% mAP on this set. As training iterations proceed, the accuracy of q increases to 69.4% mAP.
Note that the 20.0% gain in terms of mAP is very significant, and it demonstrates that combining the
auxiliary distribution with our proposed CRF can yield a significant performance gain in inferring
latent clean labels. In other words, our proposed model is capable of cleaning the noisy labels
and proposing more accurate labels on the noisy set as training continues. Please refer to our
supplementary material for a qualitative comparison between $q$ and $p_{\text{aux}}$.
4.3 CIFAR-10 Dataset
We also apply our proposed learning framework to the object classification problem in the CIFAR-10
dataset. This dataset contains images of 10 objects resized to 32x32-pixel images. We follow the
settings in [9] and we inject synthesized noise to the original labels in training. Moreover, we
implement the forward and backward losses proposed in [9] and we use them to train ResNet [33] of
depth 32 with the ground-truth noise transition matrix.
Here, we only train the variant of our model shown in Fig. 2.c that can be trained analytically. For
the auxiliary distribution, we trained a simple linear multinomial logistic regression representing the
conditional $p_{\text{aux}}(\hat{y} | y)$ with no hidden variables ($h$). We trained this distribution such that the output probabilities match the ground-truth noise transition matrix. We trained all models for 200 epochs. For our model, we anneal $\alpha$ from 8 to 1 in 10 epochs. Similar to the previous section, we empirically observed that it is better to stop annealing $\alpha$ before it reaches zero. Here, to compare our method
with the previous work, we do not work in a semi-supervised setting, and we assume that we have
access only to the noisy training dataset.
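For a multiclass problem the auxiliary conditional can be derived directly from the known transition matrix. The paper fits a linear multinomial logistic regression to match it; the sketch below instead obtains $p_{\text{aux}}(\hat{y} | y)$ by Bayes inversion under an assumed uniform prior over clean classes:

```python
import numpy as np

def paux_from_transition(T):
    """p_aux(yhat | y) from a noise transition matrix T with
    T[i, j] = p(noisy = j | clean = i); uniform clean prior assumed."""
    joint = T / T.shape[0]                            # p(clean = i, noisy = j)
    post = joint / joint.sum(axis=0, keepdims=True)   # p(clean = i | noisy = j)
    return post.T                                     # row y -> dist over yhat
```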
Our goal for this experiment is to demonstrate that a simple variant of our model can be used for
training from images with only noisy labels and to show that our model can clean the noisy labels.
To do so, we report not only the average accuracy on the clean test dataset, but also the recovery
accuracy. The recovery accuracy for our method is defined as the accuracy of q in predicting the
clean labels in the noisy training set at the end of learning. For the baselines, we measure the accuracy
of the trained neural network $p(\hat{y} | x)$ on the same set. The results are reported in Table 2. Overall, our
method achieves slightly better prediction accuracy on the CIFAR-10 dataset than the baselines. And,
in terms of recovering clean labels on the noisy training set, our model significantly outperforms the
baselines. Examples of the recovered clean labels are visualized for the CIFAR-10 experiment in the
supplementary material.
5 Conclusion
We have proposed a general undirected graphical model for modeling label noise in training deep
neural networks. We formulated the problem as a semi-supervised learning problem, and we proposed
a novel objective function equipped with a regularization term that helps our variational distribution
Table 2: Prediction and recovery accuracy of different baselines on the CIFAR-10 dataset.
                      Prediction Accuracy (%)            Recovery Accuracy (%)
Noise (%)             10     20     30     40     50     10     20     30     40     50
Cross entropy loss    91.2   90.0   89.1   87.1   80.2   94.1   92.4   89.6   85.2   74.6
Backward [9]          87.4   87.4   84.6   76.5   45.6   88.0   87.4   84.0   75.3   44.0
Forward [9]           90.9   90.3   89.4   88.4   80.0   94.6   93.6   92.3   91.1   83.1
Our model             91.6   91.0   90.6   89.4   84.3   97.7   96.4   95.1   93.5   88.1
infer latent clean labels more accurately using auxiliary sources of information. Our model not only
predicts clean labels on unseen instances more accurately, but also recovers clean labels on noisy
training sets with a higher precision. We believe the ability to clean noisy annotations is a very
valuable property of our framework that will be useful in many application domains.
Acknowledgments
The author thanks Jason Rolfe, William Macready, Zhengbing Bian, and Fabian Chudak for their
helpful discussions and comments. This work would not be possible without the excellent technical
support provided by Mani Ranjbar and Oren Shklarsky.
References
[1] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical
image database. In Computer Vision and Pattern Recognition (CVPR), 2009.
[2] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In International Conference on Machine Learning (ICML), 2001.
[3] Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled
data for image classification. In Computer Vision and Pattern Recognition (CVPR), 2015.
[4] Ishan Misra, C. Lawrence Zitnick, Margaret Mitchell, and Ross Girshick. Seeing through the human
reporting bias: Visual classifiers from noisy human-centric labels. In CVPR, 2016.
[5] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. arXiv preprint arXiv:1406.2080, 2014.
[6] B. Frenay and M. Verleysen. Classification in the presence of label noise: A survey. IEEE Transactions on
Neural Networks and Learning Systems, 25(5):845-869, 2014.
[7] Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep K. Ravikumar, and Ambuj Tewari. Learning with noisy
labels. In Advances in neural information processing systems, pages 1196-1204, 2013.
[8] Volodymyr Mnih and Geoffrey E. Hinton. Learning to label aerial images from noisy data. In International
Conference on Machine Learning (ICML), pages 567-574, 2012.
[9] Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, and Lizhen Qu. Making neural networks
robust to label noise: A loss correction approach. In Computer Vision and Pattern Recognition, 2017.
[10] Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich.
Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
[11] Xiaojin Zhu, John Lafferty, and Zoubin Ghahramani. Combining active learning and semi-supervised
learning using Gaussian fields and harmonic functions. In ICML, 2003.
[12] Rob Fergus, Yair Weiss, and Antonio Torralba. Semi-supervised learning in gigantic image collections. In
Advances in neural information processing systems, pages 522-530, 2009.
[13] Xinlei Chen and Abhinav Gupta. Webly supervised learning of convolutional networks. In International
Conference on Computer Vision (ICCV), 2015.
[14] Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, and Serge Belongie. Learning from
noisy large-scale datasets with minimal supervision. arXiv preprint arXiv:1701.01619, 2017.
[15] Guosheng Lin, Chunhua Shen, Anton van den Hengel, and Ian Reid. Efficient piecewise training of deep
structured models for semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2016.
[16] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du,
Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In International
Conference on Computer Vision (ICCV), 2015.
[17] Jian Peng, Liefeng Bo, and Jinbo Xu. Conditional neural fields. In Advances in neural information
processing systems, pages 1419-1427, 2009.
[18] Thierry Artieres et al. Neural conditional random fields. In Proceedings of the Thirteenth International
Conference on Artificial Intelligence and Statistics, pages 177-184, 2010.
[19] Rohit Prabhavalkar and Eric Fosler-Lussier. Backpropagation training for multilayer conditional random field based phone recognition. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE
International Conference on, pages 5534?5537. IEEE, 2010.
[20] Philipp Krähenbühl and Vladlen Koltun. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Advances in Neural Information Processing Systems (NIPS), pages 109-117, 2011.
[21] Liang-Chieh Chen, Alexander G. Schwing, Alan L. Yuille, and Raquel Urtasun. Learning deep structured
models. In ICML, pages 1785?1794, 2015.
[22] Alexander G. Schwing and Raquel Urtasun. Fully connected deep structured networks. arXiv preprint
arXiv:1503.02351, 2015.
[23] Zhiwei Deng, Arash Vahdat, Hexiang Hu, and Greg Mori. Structure inference machines: Recurrent neural
networks for analyzing relations in group activity recognition. In CVPR, 2016.
[24] Stephane Ross, Daniel Munoz, Martial Hebert, and J. Andrew Bagnell. Learning message-passing inference
machines for structured prediction. In Computer Vision and Pattern Recognition (CVPR), 2011.
[25] Alexander Kirillov, Dmitrij Schlesinger, Shuai Zheng, Bogdan Savchynskyy, Philip HS Torr, and Carsten
Rother. Joint training of generic CNN-CRF models with stochastic optimization. arXiv preprint
arXiv:1511.05067, 2015.
[26] Ariadna Quattoni, Sybor Wang, Louis-Philippe Morency, Morency Collins, and Trevor Darrell. Hidden
conditional random fields. IEEE transactions on pattern analysis and machine intelligence, 29(10), 2007.
[27] Laurens Maaten, Max Welling, and Lawrence K. Saul. Hidden-unit conditional random fields. In
International Conference on Artificial Intelligence and Statistics, pages 479?488, 2011.
[28] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient.
In Proceedings of the 25th international conference on Machine learning, pages 1064?1071. ACM, 2008.
[29] Laurent Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability theory and
related fields, 1989.
[30] Radford M. Neal and Geoffrey E. Hinton. A view of the em algorithm that justifies incremental, sparse,
and other variants. In Learning in graphical models. 1998.
[31] Marcus Rohrbach, Michael Stark, Gy?rgy Szarvas, Iryna Gurevych, and Bernt Schiele. What helps where?
and why? Semantic relatedness for knowledge transfer. In Computer Vision and Pattern Recognition
(CVPR), 2010.
[32] Kuzman Ganchev, Jennifer Gillenwater, Ben Taskar, et al. Posterior regularization for structured latent
variable models. Journal of Machine Learning Research, 2010.
[33] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
In Computer Vision and Pattern Recognition, 2016.
[34] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[35] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
arXiv:1412.6980, 2014.
[36] Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K. Srivastava, Li Deng, Piotr Doll?r, Jianfeng Gao,
Xiaodong He, Margaret Mitchell, John C Platt, et al. From captions to visual concepts and back. In
Conference on Computer Vision and Pattern Recognition, 2015.
10
Yujia Li1?
Alexander Schwing3
Kuan-Chieh Wang1,2
Richard Zemel1,2
1
2
Department of Computer Science, University of Toronto
Vector Institute
3
Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
{yujiali, wangkua1, zemel}@cs.toronto.edu
[email protected]
Abstract
Generative adversarial nets (GANs) are a promising technique for modeling a
distribution from samples. It is however well known that GAN training suffers from
instability due to the nature of its saddle point formulation. In this paper, we explore
ways to tackle the instability problem by dualizing the discriminator. We start from
linear discriminators in which case conjugate duality provides a mechanism to
reformulate the saddle point objective into a maximization problem, such that both
the generator and the discriminator of this 'dualing GAN' act in concert. We then
demonstrate how to extend this intuition to non-linear formulations. For GANs
with linear discriminators our approach is able to remove the instability in training,
while for GANs with nonlinear discriminators our approach provides an alternative
to the commonly used GAN training algorithm.
1 Introduction
Generative adversarial nets (GANs) [5] are, among others like variational auto-encoders [10] and
auto-regressive models [19], a promising technique for modeling a distribution from samples. A lot
of empirical evidence shows that GANs are able to learn to generate images with good visual quality
at unprecedented resolution [22, 17], and recently there has been a lot of research interest in GANs,
to better understand their properties and the training process.
Training GANs can be viewed as a duel between a discriminator and a generator. Both players
are instantiated as deep nets. The generator is required to produce realistic-looking samples that
cannot be differentiated from real data by the discriminator. In turn, the discriminator does as good a
job as possible to tell the samples apart from real data. Due to the complexity of the optimization
problem, training GANs is notoriously hard, and usually suffers from problems such as mode collapse,
vanishing gradient, and divergence. Moreover, the training procedures are very unstable and sensitive
to hyper-parameters. Therefore, a number of techniques have been proposed to address these issues,
some empirically justified [17, 18], and others more theoretically motivated [15, 1, 16, 23].
This tremendous amount of recent work, together with the wide variety of heuristics applied by
practitioners, indicates that many questions regarding the properties of GANs are still unanswered. In
this work we provide another perspective on the properties of GANs, aiming toward better training
algorithms in some cases. Our study in this paper is motivated by the alternating gradient update
between discriminator and generator, employed during training of GANs. This form of update is one
source of instability, and it is known to diverge even for some simple problems [18]. Ideally, when
the discriminator is optimized to optimality, the GAN objective is a deterministic function of the
generator. In this case, the optimization problem would be much easier to solve. This motivates our
idea to dualize parts of the GAN objective, offering a mechanism to better optimize the discriminator.
Interestingly, our dual formulation provides a direct relationship between the GAN objective and the
maximum mean-discrepancy framework discussed in [6]. When restricted to linear discriminators,
where we can find the optimal discriminator by solving the dual, this formulation permits the
derivation of an optimization algorithm that monotonically increases the objective. Moreover, for
non-linear discriminators we can apply trust-region type optimization techniques to obtain more
accurate discriminators. Our work brings to the table some additional optimization techniques beyond
stochastic gradient descent; we hope this encourages other researchers to pursue this direction.
2 Background
In generative training we are interested in modeling of and sampling from an unknown distribution
P, given a set D = {x_1, ..., x_N} ~ P of datapoints, for example images. GANs use a generator
network G_θ(z) parameterized by θ, that maps samples z drawn from a simple distribution, e.g.,
Gaussian or uniform, to samples in the data space x̂ = G_θ(z). A separate discriminator D_w(x)
parameterized by w maps a point x in the data space to the probability of it being a real sample.
The discriminator is trained to minimize a classification loss, typically the cross-entropy, and the
generator is trained to maximize the same loss. On sets of real data samples {x_1, ..., x_n} and
noise samples {z_1, ..., z_n}, using the (averaged) cross-entropy loss results in the following joint
optimization problem:

    max_θ min_w f(θ, w)   where   f(θ, w) = −(1/(2n)) Σ_i log D_w(x_i) − (1/(2n)) Σ_i log(1 − D_w(G_θ(z_i))).    (1)

We adhere to the formulation of a fixed batch of samples for clarity of the presentation, but also point
out how this process is adapted to the stochastic optimization setting later in the paper as well as in
the supplementary material.
To solve this saddle point optimization problem, ideally, we want to solve for the optimal discriminator
parameters w*(θ) = argmin_w f(θ, w), in which case the GAN program given in Eq. (1) can be
reformulated as a maximization for θ using max_θ f(θ, w*(θ)). However, typical GAN training only
alternates two gradient updates w ← w − η_w ∇_w f(θ, w) and θ ← θ + η_θ ∇_θ f(θ, w), and usually
just one step for each of θ and w in each round. In this case, the objective maximized by the generator
is f(θ, w) instead. This objective is always an upper bound on the correct objective f(θ, w*(θ)),
since w*(θ) is the optimal w for θ. Maximizing an upper bound has no guarantee on maximizing the
correct objective, which leads to instability. Therefore, many practically useful techniques have been
proposed to circumvent the difficulties of the original program definition presented in Eq. (1).
Another widely employed technique is a separate loss −Σ_i log(D_w(G_θ(z_i))) to update θ in order
to avoid vanishing gradients during early stages of training when the discriminator can get too
strong. This technique can be combined with our approach, but in what follows, we keep the elegant
formulation of the GAN program specified in Eq. (1).
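To make the alternating-update scheme around Eq. (1) concrete, here is a minimal NumPy sketch of one training round. The linear generator G_θ(z) = Θz, the linear scoring discriminator D_w(x) = σ(w^⊤x), and the step sizes are illustrative assumptions made only so the gradients can be written in closed form; the paper itself places no such restriction at this point.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def alternating_gan_round(X, Z, Theta, w, eta_w=0.1, eta_theta=0.1):
    """One round of the alternating updates for Eq. (1), written out for the
    assumed choices G_theta(z) = Theta @ z and D_w(x) = sigmoid(w @ x).
    X: (n, d) real samples, Z: (n, k) noise draws, Theta: (d, k), w: (d,)."""
    n = X.shape[0]
    fake = Z @ Theta.T                        # rows are the generated samples G_theta(z_i)
    # Discriminator step: w <- w - eta_w * grad_w f(theta, w).
    grad_w = (-(1.0 - sigmoid(X @ w)) @ X
              + sigmoid(fake @ w) @ fake) / (2.0 * n)
    w = w - eta_w * grad_w
    # Generator step: theta <- theta + eta_theta * grad_theta f(theta, w);
    # f depends on Theta only through the generated samples.
    coef = sigmoid(fake @ w) / (2.0 * n)      # d f / d (w^T G_theta(z_i))
    grad_Theta = np.outer(w, coef @ Z)        # sum_i coef_i * w z_i^T
    Theta = Theta + eta_theta * grad_Theta
    return Theta, w
```

Running this round repeatedly is exactly the scheme whose instability the discussion above attributes to maximizing an upper bound instead of f(θ, w*(θ)).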
3 Dualing GANs
The main idea of 'Dualing GANs' is to represent the discriminator program min_w f(θ, w) in Eq. (1)
using its dual, max_α g(θ, α). Hereby, g is the dual objective of f w.r.t. w, and α are the dual variables.
Instead of gradient descent on f to update w, we solve the dual instead. This results in a maximization
problem max_θ max_α g(θ, α).
Using the dual is beneficial for two reasons. First, note that for any α, g(θ, α) is a lower bound on the
objective with optimal discriminator parameters f(θ, w*(θ)). Staying in the dual domain, it is then
guaranteed that optimization of g w.r.t. θ makes progress in terms of the original program. Second,
the dual problem usually involves a much smaller number of variables, and can therefore be solved
much more easily than the primal formulation. This provides opportunities to obtain more accurate
estimates for the discriminator parameters w, which is in turn beneficial for stabilizing the learning
of the generator parameters θ. In the following, we start by studying linear discriminators, before
extending our technique to training with non-linear discriminators. Also, we use cross-entropy as the
classification loss, but emphasize that other convex loss functions, e.g., the hinge-loss, can be applied
equivalently.
3.1 Linear Discriminator
We start from linear discriminators that use a linear scoring function F(w, x) = w^⊤x, i.e., the
discriminator D_w(x) = p_w(y = 1|x) = σ(F(w, x)) = 1/[1 + exp(−w^⊤x)]. Here, y = 1 indicates
real data, while y = −1 for a generated sample. The distribution p_w(y = −1|x) = 1 − p_w(y = 1|x)
characterizes the probability of x being a generated sample.
We only require the scoring function F to be linear in w and any (nonlinear) differentiable features
φ(x) can be used in place of x in this formulation. Substituting the linear scoring function into the
objective given in Eq. (1), results in the following program for w:

    min_w (C/2)‖w‖₂² + (1/(2n)) Σ_i log(1 + exp(−w^⊤x_i)) + (1/(2n)) Σ_i log(1 + exp(w^⊤G_θ(z_i))).    (2)

Here we also added an L2-norm regularizer on w. We note that the program presented in Eq. (2) is
convex in the discriminator parameters w. Hence, we can equivalently solve it in the dual domain as
discussed in the following claim, with proof provided in the supplementary material.
Claim 1. The dual program to the task given in Eq. (2) reads as follows:

    max_α g(θ, α) = −(1/(2C)) ‖ Σ_i α_{x_i} x_i − Σ_i α_{z_i} G_θ(z_i) ‖² + (1/(2n)) Σ_i H(2n α_{x_i}) + (1/(2n)) Σ_i H(2n α_{z_i}),
    s.t. ∀i,  0 ≤ α_{x_i} ≤ 1/(2n),  0 ≤ α_{z_i} ≤ 1/(2n),    (3)

with binary entropy H(u) = −u log u − (1 − u) log(1 − u). The optimal solution to the original
problem w* can be obtained from the optimal α*_{x_i} and α*_{z_i} via

    w* = (1/C) ( Σ_i α*_{x_i} x_i − Σ_i α*_{z_i} G_θ(z_i) ).
Remarks: Intuitively, considering the last two terms of the program given in Claim 1 as well as its
constraints, we aim at assigning weights α_x, α_z close to half of 1/(2n) to as many data points and to as
many artificial samples as possible. More carefully investigating the first part, which can at most
reach zero, reveals that we aim to match the empirical data observation Σ_i α_{x_i} x_i and the generated
artificial sample observation Σ_i α_{z_i} G_θ(z_i). Note that this resembles the moment matching property
obtained in other maximum likelihood models. Importantly, this objective also resembles the (kernel)
maximum mean discrepancy (MMD) framework, where the empirical squared MMD is estimated via
‖(1/n) Σ_i x_i − (1/n) Σ_i G_θ(z_i)‖₂². Generative models that learn to minimize the MMD objective, like
the generative moment matching networks [13, 3], can therefore be included in our framework, using
fixed α's and proper scaling of the first term.
Combining the result obtained in Claim 1 with the training objective for the generator yields the task
max_{θ,α} g(θ, α) for training of GANs with linear discriminators. Hence, instead of searching for a
saddle point, we strive to find a maximizer, a task which is presumably easier. The price to pay is the
restriction to linear discriminators and the fact that every randomly drawn artificial sample z_i has its
own dual variable α_{z_i}.
own dual variable ?zi .
In the non-stochastic optimization setting, where we optimize for fixed sets of data samples {xi } and
randomizations {zi }, it is easy to design a learning algorithm for GANs with linear discriminators
that monotonically improves the objective g(?, ?) based on line search. Although this approach is
not practical for very large data sets, such a property is convenient for smaller scale data sets. In
addition, linear models are favorable in scenarios in which we know informative features that we
want the discriminator to pay attention to.
When optimizing with mini-batches we introduce new data samples {xi } and randomizations {zi }
in every iteration. In the supplementary material we show that this corresponds to maximizing a
lower bound on the full expectation objective. Since the dual variables vary from one mini-batch
to the next, we need to solve for the newly introduced dual variables to a reasonable accuracy. For
small minibatch sizes commonly used in deep learning literature, like 100, calling a constrained
optimization solver to solve the dual problem is quite cheap. We used Ipopt [20], which typically
solves this dual problem to a good accuracy in negligible time; other solvers can also be used and
may lead to improved performance.
Utilizing a log-linear discriminator reduces the model?s expressiveness and complexity. We therefore
now propose methods to alleviate this restriction.
3.2 Non-linear Discriminator
General non-linear discriminators use non-convex scoring functions F(w, x), parameterized by a
deep net. The non-convexity of F makes it hard to directly convert the problem into its dual form.
Therefore, our approach for training GANs with non-convex discriminators is based on repeatedly
linearizing and dualizing the discriminator locally. At first sight this seems restrictive, however, we
will show that a specific setup of this technique recovers the gradient direction employed in the
regular GAN training mechanism while providing additional flexibility.
We consider locally approximating the primal objective f around a point w_k using a model function
m_{k,θ}(s) ≈ f(θ, w_k + s). We phrase the update w.r.t. the discriminator parameters w as a search
for a step s, i.e., w_{k+1} = w_k + s where k indicates the current iteration. In order to guarantee the
quality of the approximation, we introduce a trust-region constraint (1/2)‖s‖₂² ≤ Δ_k ∈ ℝ₊ where Δ_k
specifies the trust-region size. More concretely, we search for a step s by solving

    min_s m_{k,θ}(s)   s.t.   (1/2)‖s‖₂² ≤ Δ_k,    (4)

given generator parameters θ. Rather than optimizing the GAN objective f(θ, w) with stochastic
gradient descent, we can instead employ this model function and use the algorithm outlined in Alg. 1.
It proceeds by first performing a gradient ascent w.r.t. the generator parameters θ. Afterwards, we
find a step s by solving the program given in Eq. (4). We then apply this step, and repeat.
Different model functions m_{k,θ}(s) result in variants of the algorithm. If we choose m_{k,θ}(s) =
f(θ, w_k + s), model m and function f are identical but the program given in Eq. (4) is hard to
solve. Therefore, in the following, we propose two model functions that we have found to be
useful. The first one is based on linearization of the cost function f(θ, w) and recovers the step s
employed by gradient-based discriminator updates in standard GAN training. The second one is
based on linearization of the score function F(w, x) while keeping the loss function intact; this
second approximation is hence accurate in a larger region. Many more models m_{k,θ}(s) exist and we
leave further exploration of this space to future work.
(A). Cost function linearization: A local approximation to the cost function f(θ, w) can be constructed by using the first order Taylor approximation

    m_{k,θ}(s) = f(w_k, θ) + ∇_w f(w_k, θ)^⊤ s.

Such a model function is appealing because step 2 of Alg. 1, i.e., minimization of the model function
subject to trust-region constraints as specified in Eq. (4), has the analytically computable solution

    s = −( √(2Δ_k) / ‖∇_w f(w_k, θ)‖₂ ) ∇_w f(w_k, θ).

Consequently step 3 of Alg. 1 is a step of length √(2Δ_k) into the negative gradient direction of the
cost function f(θ, w). We can use the trust region parameter Δ_k to tune the step size just like it
is common to specify the step size for standard GAN training. As mentioned before, using the
first order Taylor approximation as our model m_{k,θ}(s) recovers the same direction that is employed
during standard GAN training. The value of the Δ_k parameters can be fixed or adapted; see the
supplementary material for more details.
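The analytic solution admits a one-line implementation; this NumPy sketch assumes the gradient ∇_w f(w_k, θ) has already been computed and simply rescales it to the trust-region boundary.

```python
import numpy as np

def cost_linearization_step(grad_w, delta_k):
    """Analytic minimizer of the linearized model under (1/2)||s||_2^2 <= delta_k:
    a step of length sqrt(2 * delta_k) along the negative gradient."""
    norm = np.linalg.norm(grad_w)
    if norm == 0.0:
        return np.zeros_like(grad_w)
    return -(np.sqrt(2.0 * delta_k) / norm) * grad_w
```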
Using the first order Taylor approximation as a model is not the only choice. While some choices like
quadratic approximation are fairly obvious, we present another intriguing option in the following.
(B). Score function linearization: Instead of linearizing the entire cost function as demonstrated in
the previous part, we can choose to only linearize the score function F, locally around w_k, via

    F(w_k + s, x) ≈ F̂(s, x) = F(w_k, x) + s^⊤ ∇_w F(w_k, x),   ∀x.

Note that the overall objective f is itself a nonlinear function of F. Substituting the approximation
for F into the overall objective, results in the following model function:

    m_{k,θ}(s) = (C/2)‖w_k + s‖₂² + (1/(2n)) Σ_i log(1 + exp(−F(w_k, x_i) − s^⊤ ∇_w F(w_k, x_i)))
               + (1/(2n)) Σ_i log(1 + exp(F(w_k, G_θ(z_i)) + s^⊤ ∇_w F(w_k, G_θ(z_i)))).    (5)

This approximation keeps the nonlinearities of the surrogate loss function intact, therefore we expect
it to be more accurate than linearization of the whole cost function f(θ, w). When F is already linear
in w, linearization of the score function introduces no approximation error, and the formulation can
be naturally reduced to the discussion presented in Sec. 3.1; non-negligible errors are introduced
when linearizing the whole cost function f in this case.
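A direct NumPy evaluation of the model in Eq. (5) might look as follows; the precomputed scores and gradients at w_k are passed in as arrays, which is an interface assumption made only for illustration.

```python
import numpy as np

def model_score_lin(s, wk, C, F_x, gradF_x, F_z, gradF_z):
    """m_{k,theta}(s) from Eq. (5). F_x[i] = F(w_k, x_i), F_z[i] = F(w_k, G_theta(z_i));
    gradF_x and gradF_z hold the corresponding gradients nabla_w F as rows."""
    n = F_x.shape[0]
    reg = 0.5 * C * np.sum((wk + s) ** 2)
    # np.logaddexp(0, a) is a numerically stable log(1 + exp(a)).
    real = np.logaddexp(0.0, -(F_x + gradF_x @ s)).sum() / (2.0 * n)
    fake = np.logaddexp(0.0, F_z + gradF_z @ s).sum() / (2.0 * n)
    return reg + real + fake
```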
Algorithm 1 GAN optimization with model function.
Initialize θ, w_0, k = 0 and iterate:
  1. One or few gradient ascent steps on f(θ, w_k) w.r.t. generator parameters θ
  2. Find step s using min_s m_{k,θ}(s) s.t. (1/2)‖s‖₂² ≤ Δ_k
  3. Update w_{k+1} ← w_k + s
  4. k ← k + 1
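In code, Algorithm 1 is just the following loop; grad_theta_f and find_step are hypothetical callbacks standing in for the generator gradient and the trust-region subproblem of Eq. (4), and the fixed generator step size is an assumption for the sketch.

```python
def train_with_model_function(theta, w, grad_theta_f, find_step, delta,
                              num_rounds, eta_theta=1e-3):
    """Skeleton of Algorithm 1. grad_theta_f(theta, w) and find_step(theta, w, delta)
    are placeholder callbacks; theta and w are assumed to be NumPy arrays."""
    for k in range(num_rounds):
        theta = theta + eta_theta * grad_theta_f(theta, w)  # step 1: ascent on f
        s = find_step(theta, w, delta)   # step 2: min_s m(s) s.t. (1/2)||s||^2 <= delta
        w = w + s                        # step 3: apply the step
    return theta, w                      # step 4 (k <- k + 1) is the loop itself
```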
For general non-linear discriminators, however, no analytic solution can be computed for the program
given in Eq. (4) when using this model. Nonetheless, the model function fulfills m_{k,θ}(0) = f(w_k, θ)
and it is convex in s. Exploiting this convexity, we can derive the dual for this trust-region optimization
problem as presented in the following claim. The proof is included in the supplementary material.
Claim 2. The dual program to min_s m_{k,θ}(s) s.t. (1/2)‖s‖₂² ≤ Δ_k with model function as in Eq. (5) is:

    max_{λ_T, α}  (C/2)‖w_k‖₂² − (1/(2(C + λ_T))) ‖ −C w_k + Σ_i α_{x_i} ∇_w F(w_k, x_i) − Σ_i α_{z_i} ∇_w F(w_k, G_θ(z_i)) ‖²
                 − Σ_i α_{x_i} F_{x_i} + Σ_i α_{z_i} F_{z_i} − λ_T Δ_k + (1/(2n)) Σ_i H(2n α_{x_i}) + (1/(2n)) Σ_i H(2n α_{z_i})
    s.t.  λ_T ≥ 0,   ∀i,  0 ≤ α_{x_i} ≤ 1/(2n),  0 ≤ α_{z_i} ≤ 1/(2n),

where F_{x_i} = F(w_k, x_i) and F_{z_i} = F(w_k, G_θ(z_i)). The optimal s* to the original
problem can be expressed through optimal λ*_T, α*_{x_i}, α*_{z_i} as

    s* = (1/(C + λ*_T)) ( Σ_i α*_{x_i} ∇_w F(w_k, x_i) − Σ_i α*_{z_i} ∇_w F(w_k, G_θ(z_i)) ) − (C/(C + λ*_T)) w_k.
Combining the dual formulation with the maximization of the generator parameters θ results in a
maximization as opposed to a search for a saddle point. However, unlike the linear case, it is not
possible to design an algorithm that is guaranteed to monotonically increase the cost function f(θ, w).
The culprit is step 3 of Alg. 1, which adapts the model m_{k,θ}(s) in every iteration.
Intuitively, the program illustrated in Claim 2 aims at choosing dual variables α_{x_i}, α_{z_i} such that the
weighted means of derivatives as well as scores match. Note that this program searches for a direction
s as opposed to searching for the weights w, hence the term −Cw_k inside the squared norm.
In practice, we use Ipopt [20] to solve the dual problem. The form of this dual is more ill-conditioned
than the linear case. The solution found by Ipopt sometimes contains errors, however, we found the
errors to be generally tolerable and not to affect the performance of our models.
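Given a dual solution, recovering the step is again a one-liner; in this sketch gradF_x and gradF_z hold the rows ∇_w F(w_k, x_i) and ∇_w F(w_k, G_θ(z_i)), and the formula is the s* expression of Claim 2 as reconstructed above.

```python
import numpy as np

def recover_step(alpha_x, alpha_z, lam_T, wk, gradF_x, gradF_z, C):
    """s* from Claim 2, given the optimal dual variables alpha and the
    optimal trust-region multiplier lam_T."""
    direction = alpha_x @ gradF_x - alpha_z @ gradF_z
    return direction / (C + lam_T) - (C / (C + lam_T)) * wk
```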
4 Experiments
In this section, we empirically study the proposed dual GAN algorithms. In particular, we show
the stable and monotonic training for linear discriminators and study its properties. For nonlinear
GANs we show good quality samples and compare it with standard GAN training methods. Overall
the results show that our proposed approaches work across a range of problems and provide good
alternatives to the standard GAN training method.
4.1 Dual GAN with linear discriminator
We explore the dual GAN with linear discriminator on a synthetic 2D dataset generated by sampling
points from a mixture of 5 2D Gaussians, as well as the MNIST [12] dataset. Through these
experiments we show that (1) with the proposed dual GAN algorithm, training is very stable; (2) the
dual variables α can be used as an extra informative signal for monitoring the training process; (3)
features matter, and we can train good generative models even with linear discriminators when we
have good features. In all experiments, we compare our proposed dual GAN with the standard GAN
when training the same generator and discriminator models. Additional experimental details and
results are included in the supplementary material.
The discussion of linear discriminators presented in Sec. 3.1 works with any feature representation
φ(x) in place of x as long as φ is differentiable to allow gradients flow through it. For the simple
Figure 1: We show the learning curves and samples from two models of the same architecture, one
optimized in dual space (left), and one in the primal space (i.e., typical GAN) up to 5000 iterations.
Samples are shown at different points during training, as well as at the very end (right top - dual, right
bottom - primal). Despite having similar sample qualities in the end, they demonstrate drastically
different training behavior. In the typical GAN setup, loss oscillates and has no clear trend, whereas
in the dual setup, loss monotonically increases and shows much smaller oscillation. Sample quality is
nicely correlated with the dual objective during training.
Figure 2: Training GANs with linear discriminators on the simple 5-Gaussians dataset. Here we
are showing typical runs with the compared methods (not cherry-picked). Top: training curves and
samples from a single experiment: left - dual with full batch, middle - dual with minibatch, right - standard GAN with minibatch. The real data from this dataset are drawn in blue, generated samples
in green. Below: distribution of α's during training for the two dual GAN experiments, as a histogram
at each x-value (iteration) where intensity depicts frequency for values ranging from 0 to 1 (red are
data, and green are samples).
5-Gaussian dataset, we use RBF features based on 100 sample training points. For the MNIST dataset,
we use a convolutional neural net, and concatenate the hidden activations on all layers as the features.
The dual GAN formulation has a single hyper-parameter C, but we found the algorithm not to be
sensitive to it, and set it to 0.0001 in all experiments. We used Adam [9] with fixed learning rate and
momentum to optimize the generator.
Stable Training: The main results illustrating stable training are provided in Fig. 1 and 2, where
we show the learning curves as well as model samples at different points during training. Both the
dual GAN and the standard GAN use minibatches of the same size, and for the synthetic dataset
we did an extra experiment doing full-batch training. From these curves we can see the stable
monotonic increase of the dual objective, contrasted with standard GAN's spiky training curves. On
the synthetic data, we see that increasing the minibatch size leads to significantly improved stability.
In the supplementary material we include an extra experiment to quantify the stability of the proposed
method on the synthetic dataset.
Dataset     | mini-batch size | generator learnrate | generator momentum | C          | discriminator learnrate* | generator architecture                 | max iterations
5-Gaussians | randint[20,200] | enr([0,10])         | rand[.1,.9]        | enr([0,6]) | enr([0,10])              | fc-small, fc-large                     | randint[400,2000]
MNIST       | randint[20,200] | enr([0,10])         | rand[.1,.9]        | enr([0,6]) | enr([0,10])              | fc-small, fc-large, dcgan, dcgan-no-bn | 20000

Table 1: Ranges of hyperparameters for sensitivity experiment. randint[a,b] means samples were
drawn from uniformly distributed integers in the closed interval of [a,b], similarly rand[a,b] for
real numbers. enr([a,b]) is shorthand for exp(-randint[a,b]), which was used for hyperparameters
commonly explored in log-scale. For generator architectures, for the 5-Gaussians dataset we tried
2 3-layer fully-connected networks, with 20 and 40 hidden units. For MNIST, we tried 2 3-layer
fully-connected networks, with 256 and 1024 hidden units, and a DCGAN-like architecture with and
without batch normalization.
[Figure 3: two histograms comparing 'gan' vs. 'dual' (y-axis: normalized counts); left panel: 5-Gaussians, x-axis: # modes covered (/5); right panel: MNIST, x-axis: discretized inception scores.]
Figure 3: Results for hyperparameter sensitivity experiment. For 5-Gaussians dataset, the x-axis
represents the number of modes covered. For MNIST, the x-axis represents discretized Inception
score. Overall, the proposed dual GAN results concentrate significantly more mass on the right side,
demonstrating its better robustness to hyperparameters than standard GANs.
Sensitivity to Hyperparameters: Sensitivity to hyperparameters is another important aspect of
training stability. Successful GAN training typically requires carefully tuned hyperparameters,
making it difficult for non-experts to adopt these generative models. In an attempt to quantify this
sensitivity, we investigated the robustness of the proposed method to the hyperparameter choice.
For both the 5-Gaussians and MNIST datasets, we randomly sampled 100 hyperparameter settings
from ranges specified in Table 1, and compared learning using both the proposed dual GAN and the
standard GAN. On the 5-Gaussians dataset, we evaluated the performance of the models by how well
the model samples covered the 5 modes. We defined successfully covering a mode as having > 100
out of 1000 samples falling within a distance of 3 standard deviations to the center of the Gaussian.
Our dual linear GAN succeeded in 49% of the experiments (note that there are a significant number
of bad hyperparameter combinations in the search range), and standard GAN succeeded in only 32%,
demonstrating our method was significantly easier to train and tune. On MNIST, the mean Inception
scores were 2.83, 1.99 for the proposed method and GAN training respectively. A more detailed
breakdown of mode coverage and Inception score can be found in Figure 3.
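The mode-coverage criterion described above is straightforward to implement; this sketch assumes isotropic Gaussians with a shared standard deviation std, which matches the description while filling in details the text leaves open.

```python
import numpy as np

def modes_covered(samples, centers, std, min_hits=100):
    """Count covered modes: a mode counts as covered when more than min_hits
    of the (1000) model samples fall within 3 standard deviations of its center."""
    covered = 0
    for c in centers:
        dists = np.linalg.norm(samples - c, axis=1)
        covered += int(np.sum(dists < 3.0 * std) > min_hits)
    return covered
```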
Distribution of α During Training: The dual formulation allows us to monitor the training process
through a unique perspective by monitoring the dual variables α. Fig. 2 shows the evolution of the
distribution of α during training for the synthetic 2D dataset. At the beginning of training the α's
are on the low side as the generator is not good and the α's are encouraged to be small to minimize the
moment matching cost. As the generator improves, more attention is devoted to the entropy term in
the dual objective, and the α's start to converge to the value of 1/(4n).
Comparison of Different Features: The qualitative differences of the learned models with different
features can be observed in Fig. 4. In general, the more information the features carry about the
data, the better the learned generative models. On MNIST, even with random features and linear
discriminators we can learn reasonably good generative models. On the other hand, these results also
[Figure 4: sample grids with rows 'Trained' and 'Random' features and columns 'Layer: All', 'Conv1', 'Conv2', 'Conv3', 'Fc4', 'Fc5'.]
Figure 4: Samples from dual linear GAN using pretrained and random features on MNIST. Each
column shows a set of different features, utilizing all layers in a convnet and then successive single
layers in the network.
Score Type                | GAN         | Score Lin   | Cost Lin    | Real Data
Inception (end)           | 5.61 ± 0.09 | 5.40 ± 0.12 | 5.43 ± 0.10 | 10.72 ± 0.38
Internal classifier (end) | 3.85 ± 0.08 | 3.52 ± 0.09 | 4.42 ± 0.09 | 8.03 ± 0.07
Inception (avg)           | 5.59 ± 0.38 | 5.44 ± 0.08 | 5.16 ± 0.37 | -
Internal classifier (avg) | 3.64 ± 0.47 | 3.70 ± 0.27 | 4.04 ± 0.37 | -
Table 2: Inception Score [18] for different GAN training methods. Since the score depends on the
classifier, we used code from [18] as well as our own small convnet CIFAR-10 classifier for evaluation
(achieves 83% accuracy). All scores are computed using 10,000 samples. The top pair are scores on
the final models. GANs are known to be unstable, and results are sometimes cherry-picked. So, the
bottom pair are scores averaged across models sampled from different iterations of training after it
stopped improving.
indicate that if the features are bad then it is hard to learn good models. This leads us to the nonlinear
discriminators presented below, where the discriminator features are learned together with the last
layer, which may be necessary for more complicated problem domains where features are potentially
difficult to engineer.
4.2 Dual GAN with non-linear discriminator
Next we assess the applicability of our proposed technique for non-linear discriminators, and focus
on training models on MNIST and CIFAR-10 [11].
As discussed in Sec. 3.2, when the discriminator is non-linear, we can only approximate the discriminator locally. Therefore we do not have monotonic convergence guarantees. However, through better
approximation and optimization of the discriminator we may expect the proposed dual GAN to work
better than standard gradient based GAN training in some cases. Since GAN training is sensitive to
hyperparameters, to make the comparison fair, we tuned the parameters for both the standard GANs
and our approaches extensively and compare the best results for each.
Fig. 5 and 6 show the samples generated by models learned using different approaches. Visually
samples of our proposed approaches are on par with the standard GANs. As an extra quantitative
metric for performance, we computed the Inception Score [18] for each of them on CIFAR-10 in
Table 2. The Inception Score is a surrogate metric which highly depends on the network architecture.
Therefore we computed the score using our own classifier and the one proposed in [18]. As can be
seen in Table 2, both score and cost linearization are competitive with standard GANs. From the
training curves we can also see that score linearization does the best in terms of approximating the
objective, and both score linearization and cost linearization oscillate less than standard GANs.
5 Related Work
A thorough review of the research devoted to generative modeling is beyond the scope of this paper.
In this section we focus on GANs [5] and review the most related work that has not been discussed
throughout the paper.
[Figure 5 panels, left to right: Score Linearization, Cost Linearization, GAN.]
Figure 5: Nonlinear discriminator experiments on MNIST, and their training curves, showing the
primal objective, the approximation, and the discriminator accuracy. Here we are showing typical
runs with the compared methods (not cherry-picked).
[Figure 6 panels, left to right: Score Linearization, Cost Linearization, GAN.]
Figure 6: Nonlinear discriminator experiments on CIFAR-10; learning curves and samples organized
by class are provided in the supplementary material.
Our dual formulation reveals a close connection to moment-matching objectives widely seen in many
other models. MMD [6] is one such related objective, and has been used in deep generative models
in [13, 3]. [18] proposed a range of techniques to improve GAN training, including the usage of
feature matching. Similar techniques are also common in style transfer [4]. In addition to these,
moment-matching objectives are very common for exponential family models [21]. Common to all
these works is the use of fixed moments. The Wasserstein objective proposed for GAN training in [1]
can also be thought of as a form of moment matching, where the features are part of the discriminator
and they are adaptive. The main difference between our dual GAN with linear discriminators and
other forms of adaptive moment matching is that we adapt the weighting of features by optimizing
non-parametric dual parameters, while other works mostly adopt a parametric model to adapt features.
Duality has also been studied to understand and improve GAN training. [16] pioneered work that uses
duality to derive new GAN training objectives from other divergences. [1] also used duality to derive
a practical objective for training GANs from other distance metrics. Compared to previous work,
instead of coming up with new objectives, we instead used duality on the original GAN objective and
aim to better optimize the discriminator.
Beyond what has already been discussed, there has been a range of other techniques developed to
improve or extend GAN training, e.g., [8, 7, 22, 2, 23, 14] just to name a few.
6 Conclusion
To conclude, we introduced 'Dualing GANs,' a framework which considers duality based formulations
for the duel between the discriminator and the generator. Using the dual formulation provides
opportunities to better train the discriminator. This helps remove the instability in training for linear
discriminators, and we also adapted this framework to non-linear discriminators. The dual formulation
also provides connections to other techniques. In particular, we discussed a close link to moment
matching techniques, and showed that the cost function linearization for non-linear discriminators
recovers the original gradient direction in standard GANs. We hope that our results spur further
research in this direction to obtain a better understanding of the GAN objective and its intricacies.
Acknowledgments: This material is based upon work supported in part by the National Science
Foundation under Grant No. 1718221, and grants from NSERC, Samsung and CIFAR.
References
[1] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. In https://arxiv.org/abs/1701.07875,
2017.
[2] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In
https://arxiv.org/pdf/1606.03657v1.pdf, 2016.
[3] Gintare Karolina Dziugaite, Daniel M Roy, and Zoubin Ghahramani. Training generative neural
networks via maximum mean discrepancy optimization. arXiv preprint arXiv:1505.03906,
2015.
[4] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. A neural algorithm of artistic style.
arXiv preprint arXiv:1508.06576, 2015.
[5] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville,
and Yoshua Bengio. Generative Adversarial Networks. In https://arxiv.org/abs/1406.2661,
2014.
[6] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A Kernel Two-Sample
Test. JMLR, 2012.
[7] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked Generative Adversarial
Networks. In https://arxiv.org/abs/1612.04357, 2016.
[8] D. J. Im, C. D. Kim, H. Jiang, and R. Memisevic. Generating images with recurrent adversarial
networks. In https://arxiv.org/abs/1602.05110, 2016.
[9] D. P. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. In Proc. ICLR, 2015.
[10] D. P. Kingma and M. Welling.
https://arxiv.org/abs/1312.6114, 2013.
Auto-Encoding Variational Bayes.
In
[11] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images,
2009.
[12] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. IEEE, 1998.
[13] Y. Li, K. Swersky, and R. Zemel. Generative Moment Matching Networks. In abs/1502.02761,
2015.
[14] B. London and A. G. Schwing. Generative Adversarial Structured Networks. In Proc. NIPS
Workshop on Adversarial Training, 2016.
[15] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein. Unrolled Generative Adversarial Networks.
In https://arxiv.org/abs/1611.02163, 2016.
[16] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training Generative Neural Samplers using
Variational Divergence Minimization. In https://arxiv.org/abs/1606.00709, 2016.
[17] A. Radford, L. Metz, and S. Chintala. Unsupervised Representation Learning with Deep
Convolutional Generative Adversarial Networks. In https://arxiv.org/abs/1511.06434, 2015.
[18] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved
Techniques for Training GANs. In https://arxiv.org/abs/1606.03498, 2016.
[19] A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu.
Conditional Image Generation with PixelCNN Decoders. In https://arxiv.org/abs/1606.05328,
2016.
[20] A. Wächter and L. T. Biegler. On the Implementation of a Primal-Dual Interior Point Filter Line
Search Algorithm for Large-Scale Nonlinear Programming. Mathematical Programming, 2006.
[21] Martin J. Wainwright, Michael I. Jordan, et al. Graphical models, exponential families, and
variational inference. Foundations and Trends® in Machine Learning, 1(1–2):1–305, 2008.
[22] H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. StackGAN:
Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks. In
https://arxiv.org/abs/1612.03242, 2016.
[23] J. Zhao, M. Mathieu, and Y. LeCun. Energy-based Generative Adversarial Network. In Proc.
ICLR, 2017.
A Benchmark and A New Model
Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
{xshiab,zgaoag,lelausen,hwangaz,dyyeung}@cse.ust.hk
Wai-kin Wong, Wang-chun Woo
Hong Kong Observatory
Hong Kong, China
{wkwong,wcwoo}@hko.gov.hk
Abstract
With the goal of making high-resolution forecasts of regional rainfall, precipitation nowcasting has become an important and fundamental technology underlying
various public services ranging from rainstorm warnings to flight safety. Recently,
the Convolutional LSTM (ConvLSTM) model has been shown to outperform traditional optical flow based methods for precipitation nowcasting, suggesting that deep
learning models have a huge potential for solving the problem. However, the convolutional recurrence structure in ConvLSTM-based models is location-invariant
while natural motion and transformation (e.g., rotation) are location-variant in general. Furthermore, since deep-learning-based precipitation nowcasting is a newly
emerging area, clear evaluation protocols have not yet been established. To address
these problems, we propose both a new model and a benchmark for precipitation
nowcasting. Specifically, we go beyond ConvLSTM and propose the Trajectory
GRU (TrajGRU) model that can actively learn the location-variant structure for
recurrent connections. Besides, we provide a benchmark that includes a real-world
large-scale dataset from the Hong Kong Observatory, a new training loss, and a
comprehensive evaluation protocol to facilitate future research and gauge the state
of the art.
1 Introduction
Precipitation nowcasting refers to the problem of providing very short range (e.g., 0-6 hours) forecast
of the rainfall intensity in a local region based on radar echo maps^1, rain gauge and other observation
data as well as the Numerical Weather Prediction (NWP) models. It significantly impacts the daily
lives of many and plays a vital role in many real-world applications. Among other possibilities,
it helps to facilitate drivers by predicting road conditions, enhances flight safety by providing
weather guidance for regional aviation, and avoids casualties by issuing citywide rainfall alerts.
In addition to the inherent complexities of the atmosphere and relevant dynamical processes, the
ever-growing need for real-time, large-scale, and fine-grained precipitation nowcasting poses extra
challenges to the meteorological community and has aroused research interest in the machine learning
community [23, 25].
^1 The radar echo maps are Constant Altitude Plan Position Indicator (CAPPI) images which can be converted to rainfall intensity maps using the Marshall-Palmer relationship or Z-R relationship [19].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
The conventional approaches to precipitation nowcasting used by existing operational systems rely
on optical flow [28]. In a modern day nowcasting system, the convective cloud movements are
first estimated from the observed radar echo maps by optical flow and are then used to predict the
future radar echo maps using semi-Lagrangian advection. However, these methods are unsupervised
from the machine learning point of view in that they do not take advantage of the vast amount of
existing radar echo data. Recently, progress has been made by utilizing supervised deep learning [15]
techniques for precipitation nowcasting. Shi et al. [23] formulated precipitation nowcasting as a
spatiotemporal sequence forecasting problem and proposed the Convolutional Long Short-Term
Memory (ConvLSTM) model, which extends the LSTM [7] by having convolutional structures in both
the input-to-state and state-to-state transitions, to solve the problem. Using the radar echo sequences
for model training, the authors showed that ConvLSTM is better at capturing the spatiotemporal
correlations than the fully-connected LSTM and gives more accurate predictions than the Real-time
Optical flow by Variational methods for Echoes of Radar (ROVER) algorithm [28] currently used by
the Hong Kong Observatory (HKO).
However, despite their pioneering effort in this interesting direction, the paper has some deficiencies.
First, the deep learning model is only evaluated on a relatively small dataset containing 97 rainy
days and only the nowcasting skill score at the 0.5mm/h rain-rate threshold is compared. As
real-world precipitation nowcasting systems need to pay additional attention to heavier rainfall
events such as rainstorms which cause more threat to the society, the performance at the 0.5mm/h
threshold (indicating raining or not) alone is not sufficient for demonstrating the algorithm's overall
performance [28]. In fact, as the area Deep Learning for Precipitation Nowcasting is still in its early
stage, it is not clear how models should be evaluated to meet the need of real-world applications.
Second, although the convolutional recurrence structure used in ConvLSTM is better than the fullyconnected recurrent structure in capturing spatiotemporal correlations, it is not optimal and leaves
room for improvement. For motion patterns like rotation and scaling, the local correlation structure
of consecutive frames will be different for different spatial locations and timestamps. It is thus
inefficient to use convolution which uses a location-invariant filter to represent such location-variant
relationship. Previous attempts have tried to solve the problem by revising the output of a recurrent
neural network (RNN) from the raw prediction to be some location-variant transformation of the
input, like optical flow or dynamic local filter [5, 3]. However, not much research has been conducted
to address the problem by revising the recurrent structure itself.
In this paper, we aim to address these two problems by proposing both a benchmark and a new
model for precipitation nowcasting. For the new benchmark, we build the HKO-7 dataset which
contains radar echo data from 2009 to 2015 near Hong Kong. Since the radar echo maps arrive in
a stream in the real-world scenario, the nowcasting algorithms can adopt online learning to adapt
to the newly emerging patterns dynamically. To take into account this setting, we use two testing
protocols in our benchmark: the offline setting in which the algorithm can only use a fixed window
of the previous radar echo maps and the online setting in which the algorithm is free to use all the
historical data and any online learning algorithm. Another issue for the precipitation nowcasting
task is that the proportions of rainfall events at different rain-rate thresholds are highly imbalanced.
Heavier rainfall occurs less often but has a higher real-world impact. We thus propose the Balanced
Mean Squared Error (B-MSE) and Balanced Mean Absolute Error (B-MAE) measures for training
and evaluation, which assign more weights to heavier rainfalls in the calculation of MSE and MAE.
We empirically find that the balanced variants of the loss functions are more consistent with the
overall nowcasting performance at multiple rain-rate thresholds than the original loss functions.
Moreover, our experiments show that training with the balanced loss functions is essential for deep
learning models to achieve good performance at higher rain-rate thresholds. For the new model, we
propose the Trajectory Gated Recurrent Unit (TrajGRU) model which uses a subnetwork to output the
state-to-state connection structures before state transitions. TrajGRU allows the state to be aggregated
along some learned trajectories and thus is more flexible than the Convolutional GRU (ConvGRU) [2]
whose connection structure is fixed. We show that TrajGRU outperforms ConvGRU, Dynamic Filter
Network (DFN) [3] as well as 2D and 3D Convolutional Neural Networks (CNNs) [20, 27] in both a
synthetic MovingMNIST++ dataset and the HKO-7 dataset.
Using the new dataset, testing protocols, training loss and model, we provide extensive empirical
evaluation of seven models, including a simple baseline model which always predicts the last frame,
two optical flow based models (ROVER and its nonlinear variant), and four representative deep
learning models (TrajGRU, ConvGRU, 2D CNN, and 3D CNN). We also provide a large-scale
benchmark for precipitation nowcasting. Our experimental validation shows that (1) all the deep
learning models outperform the optical flow based models, (2) TrajGRU attains the best overall
performance among all the deep learning models, and (3) after applying online fine-tuning, the models
tested in the online setting consistently outperform those in the offline setting. To the best of our
knowledge, this is the first comprehensive benchmark of deep learning models for the precipitation
nowcasting problem. Besides, since precipitation nowcasting can be viewed as a video prediction
problem [22, 27], our work is the first to provide evidence and justification that online learning could
potentially be helpful for video prediction in general.
2 Related Work
Deep learning for precipitation nowcasting and video prediction For the precipitation nowcasting problem, the reflectivity factors in radar echo maps are first transformed to grayscale images
before being fed into the prediction algorithm [23]. Thus, precipitation nowcasting can be viewed
as a type of video prediction problem with a fixed "camera", which is the weather radar. Therefore,
methods proposed for predicting future frames in natural videos are also applicable to precipitation
nowcasting and are related to our paper. There are three types of general architecture for video
prediction: RNN based models, 2D CNN based models, and 3D CNN based models. Ranzato et
al. [22] proposed the first RNN based model for video prediction, which uses a convolutional RNN
with 1×1 state-to-state kernel to encode the observed frames. Srivastava et al. [24] proposed the LSTM
encoder-decoder network which uses one LSTM to encode the input frames and another LSTM to
predict multiple frames ahead. The model was generalized in [23] by replacing the fully-connected
LSTM with ConvLSTM to capture the spatiotemporal correlations better. Later, Finn et al. [5] and De
Brabandere et al. [3] extended the model in [23] by making the network predict the transformation of
the input frame instead of directly predicting the raw pixels. Ruben et al. [26] proposed to use both an
RNN that captures the motion and a CNN that captures the content to generate the prediction. Along
with RNN based models, 2D and 3D CNN based models were proposed in [20] and [27] respectively.
Mathieu et al. [20] treated the frame sequence as multiple channels and applied 2D CNN to generate
the prediction while [27] treated them as the depth and applied 3D CNN. Both papers show that
Generative Adversarial Network (GAN) [6] is helpful for generating sharp predictions.
Structured recurrent connection for spatiotemporal modeling From a higher-level perspective,
precipitation nowcasting and video prediction are intrinsically spatiotemporal sequence forecasting
problems in which both the input and output are spatiotemporal sequences [23]. Recently, there is
a trend of replacing the fully-connected structure in the recurrent connections of RNN with other
topologies to enhance the network's ability to model the spatiotemporal relationship. Other than the
ConvLSTM which replaces the full-connection with convolution and is designed for dense videos, the
SocialLSTM [1] and the Structural-RNN (S-RNN) [11] have been proposed sharing a similar notion.
SocialLSTM defines the topology based on the distance between different people and is designed for
human trajectory prediction while S-RNN defines the structure based on the given spatiotemporal
graph. All these models are different from our TrajGRU in that our model actively learns the recurrent
connection structure. Liang et al. [17] have proposed the Structure-evolving LSTM, which also has the
ability to learn the connection structure of RNNs. However, their model is designed for the semantic
object parsing task and learns how to merge the graph nodes automatically. It is thus different from
TrajGRU which aims at learning the local correlation structure for spatiotemporal data.
Benchmark for video tasks There exist benchmarks for several video tasks like online object
tracking [29] and video object segmentation [21]. However, there is no benchmark for the precipitation
nowcasting problem, which is also a video task but has its unique properties since radar echo map is
a completely different type of data and the data is highly imbalanced (as mentioned in Section 1).
The large-scale benchmark created as part of this work could help fill the gap.
3 Model
In this section, we present our new model for precipitation nowcasting. We first introduce the general
encoding-forecasting structure used in this paper. Then we review the ConvGRU model and present
our new TrajGRU model.
3.1 Encoding-forecasting Structure
We adopt a similar formulation of the precipitation nowcasting problem as in [23]. Assume that the
radar echo maps form a spatiotemporal sequence ⟨I_1, I_2, ...⟩. At a given timestamp t, our model generates the most likely K-step predictions, Î_{t+1}, Î_{t+2}, ..., Î_{t+K}, based on the previous J observations including the current one: I_{t−J+1}, I_{t−J+2}, ..., I_t. Our encoding-forecasting network first encodes the observations into n layers of RNN states: H_t^1, H_t^2, ..., H_t^n = h(I_{t−J+1}, I_{t−J+2}, ..., I_t), and then uses another n layers of RNNs to generate the predictions based on these encoded states: Î_{t+1}, Î_{t+2}, ..., Î_{t+K} = g(H_t^1, H_t^2, ..., H_t^n). Figure 1 illustrates our encoding-forecasting structure for n = 3, J = 2, K = 2. We insert downsampling and upsampling layers between the RNNs, which
are implemented by convolution and deconvolution with stride. The reason to reverse the order of the
forecasting network is that the high-level states, which have captured the global spatiotemporal representation, could guide the update of the low-level states. Moreover, the low-level states could further
influence the prediction. This structure is more reasonable than the previous structure [23] which does
not reverse the link of the forecasting network because we are free to plug in additional RNN layers
on top and no skip-connection is required to aggregate the low-level information. One can choose any
type of RNNs like ConvGRU or our newly proposed TrajGRU in this general encoding-forecasting
structure as long as their states correspond to tensors.
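To make the reversed ordering concrete, here is a minimal sketch of one forecaster step in Python/PyTorch; the cell and upsample callables and the zero input for the missing link are illustrative assumptions, not the paper's exact implementation.

import torch

def forecaster_step(states, cells, upsamples):
    # states: per-level hidden states from the encoder (index 0 = lowest level).
    # cells[lvl](x, h) -> new hidden state; upsamples[lvl] maps a state to the
    # spatial size and channel count expected by the level below.
    x = torch.zeros_like(states[-1])           # no input link at the top: feed zeros
    for lvl in range(len(cells) - 1, -1, -1):  # reversed: high-level states guide low-level
        states[lvl] = cells[lvl](x, states[lvl])
        x = upsamples[lvl](states[lvl])        # pass guidance downward
    return x, states                           # x then feeds the output convolutions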
3.2 Convolutional GRU
The main formulas of the ConvGRU used in this paper are given as follows:
    Z_t = σ(W_xz ∗ X_t + W_hz ∗ H_{t−1}),
    R_t = σ(W_xr ∗ X_t + W_hr ∗ H_{t−1}),
    H′_t = f(W_xh ∗ X_t + R_t ◦ (W_hh ∗ H_{t−1})),
    H_t = (1 − Z_t) ◦ H′_t + Z_t ◦ H_{t−1}.    (1)

The bias terms are omitted for notational simplicity. "∗" is the convolution operation and "◦" is the Hadamard product. Here, H_t, R_t, Z_t, H′_t ∈ R^{C_h×H×W} are the memory state, reset gate, update gate, and new information, respectively. X_t ∈ R^{C_i×H×W} is the input and f is the activation, which is chosen to be leaky ReLU with negative slope equal to 0.2 [18] throughout the paper. H, W are the height and width of the state and input tensors and C_h, C_i are the channel sizes of the state and input tensors, respectively. Every time a new input arrives, the reset gate will control whether to clear the previous state and the update gate will control how much the new information will be written to the state.
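For reference, a minimal PyTorch sketch of Eq. (1) follows; fusing the three input-to-state (and state-to-state) convolutions into one layer is a choice of this sketch, not something the paper prescribes.

import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=5):
        super().__init__()
        # One convolution computes W_xz, W_xr, W_xh (and likewise for the state).
        self.conv_x = nn.Conv2d(in_ch, 3 * hid_ch, k, padding=k // 2)
        self.conv_h = nn.Conv2d(hid_ch, 3 * hid_ch, k, padding=k // 2)
        self.f = nn.LeakyReLU(0.2)              # the activation f used in the paper

    def forward(self, x, h):
        xz, xr, xh = self.conv_x(x).chunk(3, dim=1)
        hz, hr, hh = self.conv_h(h).chunk(3, dim=1)
        z = torch.sigmoid(xz + hz)              # update gate Z_t
        r = torch.sigmoid(xr + hr)              # reset gate R_t
        h_new = self.f(xh + r * hh)             # new information H'_t
        return (1 - z) * h_new + z * h          # H_t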
3.3 Trajectory GRU
When used for capturing spatiotemporal correlations, the deficiency of ConvGRU and other
ConvRNNs is that the connection structure and weights are fixed for all the locations. The convolution
operation basically applies a location-invariant filter to the input. If the inputs are all zero and the
reset gates are all one, we could rewrite the computation process of the new information at a specific location (i, j) at timestamp t, i.e., H′_{t,:,i,j}, as follows:

    H′_{t,:,i,j} = f(W_hh concat(⟨H_{t−1,:,p,q} | (p, q) ∈ N^h_{i,j}⟩)) = f( Σ_{l=1}^{|N^h_{i,j}|} W^l_{hh} H_{t−1,:,p_{l,i,j},q_{l,i,j}} ).    (2)
Here, N^h_{i,j} is the ordered neighborhood set at location (i, j) defined by the hyperparameters of the state-to-state convolution such as kernel size, dilation and padding [30]. (p_{l,i,j}, q_{l,i,j}) is the lth neighborhood location of position (i, j). The concat(·) function concatenates the inner vectors in the set and W_hh is the matrix representation of the state-to-state convolution weights.
As the hyperparameter of convolution is fixed, the neighborhood set N^h_{i,j} stays the same for all
locations. However, most motion patterns have different neighborhood sets for different locations.
For example, rotation and scaling generate flow fields with different angles pointing to different
directions. It would thus be more reasonable to have a location-variant connection structure as
    H′_{t,:,i,j} = f( Σ_{l=1}^{L} W^l_{hh} H_{t−1,:,p_{l,i,j}(θ),q_{l,i,j}(θ)} ),    (3)
[Figure 1 diagram: a three-level encoder (input convolution, then RNN layers with downsampling between levels) feeding a three-level forecaster (RNN layers with upsampling between levels, then output convolutions).]
Figure 1: Example of the encoding-forecasting structure used
in the paper. In the figure, we use three RNNs to predict two
future frames Î_3, Î_4 given the two input frames I_1, I_2. The spatial
coordinates G are concatenated to the input frame to ensure the
network knows the observations are from different locations. The
RNNs can be either ConvGRU or TrajGRU. Zeros are fed as input
to the RNN if the input link is missing.
(a) For convolutional RNN, the recurrent
connections are fixed over time.
(b) For trajectory RNN, the recurrent connections are dynamically determined.
Figure 2: Comparison of the connection
structures of convolutional RNN and trajectory RNN. Links with the same color
share the same transition weights. (Best
viewed in color)
where L is the total number of local links, and (p_{l,i,j}(θ), q_{l,i,j}(θ)) is the lth neighborhood parameterized by θ.
Based on this observation, we propose the TrajGRU, which uses the current input and previous
state to generate the local neighborhood set for each location at each timestamp. Since the location
indices are discrete and non-differentiable, we use a set of continuous optical flows to represent these
"indices". The main formulas of TrajGRU are given as follows:

    U_t, V_t = γ(X_t, H_{t−1}),
    Z_t = σ(W_xz ∗ X_t + Σ_{l=1}^{L} W^l_{hz} ∗ warp(H_{t−1}, U_{t,l}, V_{t,l})),
    R_t = σ(W_xr ∗ X_t + Σ_{l=1}^{L} W^l_{hr} ∗ warp(H_{t−1}, U_{t,l}, V_{t,l})),
    H′_t = f(W_xh ∗ X_t + R_t ◦ (Σ_{l=1}^{L} W^l_{hh} ∗ warp(H_{t−1}, U_{t,l}, V_{t,l}))),
    H_t = (1 − Z_t) ◦ H′_t + Z_t ◦ H_{t−1}.    (4)
Here, L is the total number of allowed links. U_t, V_t ∈ R^{L×H×W} are the flow fields that store the local connection structure generated by the structure generating network γ. The W^l_{hz}, W^l_{hr}, W^l_{hh} are the weights for projecting the channels, which are implemented by 1×1 convolutions. The warp(H_{t−1}, U_{t,l}, V_{t,l}) function selects the positions pointed out by U_{t,l}, V_{t,l} from H_{t−1} via the bilinear sampling kernel [10, 9]. If we denote M = warp(I, U, V) where M, I ∈ R^{C×H×W} and U, V ∈ R^{H×W}, we have:

    M_{c,i,j} = Σ_{m=1}^{H} Σ_{n=1}^{W} I_{c,m,n} max(0, 1 − |i + V_{i,j} − m|) max(0, 1 − |j + U_{i,j} − n|).    (5)
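Below is a direct NumPy transcription of Eq. (5), written for readability rather than speed; because the bilinear kernel gives nonzero weight only to the two nearest integer coordinates in each dimension, the inner loops visit just those neighbors.

import numpy as np

def warp(I, U, V):
    # I: (C, H, W) feature map; U, V: (H, W) horizontal/vertical flow fields.
    C, H, W = I.shape
    M = np.zeros_like(I)
    for i in range(H):
        for j in range(W):
            y, x = i + V[i, j], j + U[i, j]
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            for m in (y0, y0 + 1):              # only |i + V - m| < 1 contributes
                for n in (x0, x0 + 1):          # only |j + U - n| < 1 contributes
                    if 0 <= m < H and 0 <= n < W:
                        w = max(0.0, 1 - abs(y - m)) * max(0.0, 1 - abs(x - n))
                        M[:, i, j] += w * I[:, m, n]
    return M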
The advantage of such a structure is that we could learn the connection topology by learning the parameters of the subnetwork γ. In our experiments, γ takes the concatenation of X_t and H_{t−1} as the input and is fixed to be a one-hidden-layer convolutional neural network with 5×5 kernel size and 32 feature maps. Thus, γ has only a small number of parameters and adds nearly no cost to the overall computation. Compared to a ConvGRU with K×K state-to-state convolution, TrajGRU is able to learn a more efficient connection structure with L < K². For ConvGRU and TrajGRU, the number of model parameters is dominated by the size of the state-to-state weights, which is O(L × C_h²) for TrajGRU and O(K² × C_h²) for ConvGRU. If L is chosen to be smaller than K², the
Table 1: Comparison of TrajGRU and the baseline models in the MovingMNIST++ dataset. "Conv-K?-D?" refers to the ConvGRU with kernel size ?×? and dilation ?×?. "Traj-L?" refers to the TrajGRU with ? links. We replace the output layer of the ConvGRU-K5-D1 model to get the DFN.

Model        | #Parameters | Test MSE ×10^-2 | Standard Deviation ×10^-2
Conv-K3-D2   | 2.84M       | 1.495           | 0.003
Conv-K5-D1   | 4.77M       | 1.310           | 0.004
Conv-K7-D1   | 8.01M       | 1.254           | 0.006
Traj-L5      | 2.60M       | 1.351           | 0.020
Traj-L9      | 3.42M       | 1.247           | 0.015
Traj-L13     | 4.00M       | 1.170           | 0.022
TrajGRU-L17  | 4.77M       | 1.138           | 0.019
DFN          | 4.83M       | 1.461           | 0.002
Conv2D       | 29.06M      | 1.681           | 0.001
Conv3D       | 32.52M      | 1.637           | 0.002
number of parameters of TrajGRU can also be smaller than the ConvGRU and the TrajGRU model
is able to use the parameters more efficiently. Illustration of the recurrent connection structures of
ConvGRU and TrajGRU is given in Figure 2. Recently, Jeon & Kim [12] has used similar ideas to
extend the convolution operations in CNN. However, their proposed Active Convolution Unit (ACU)
focuses on the images where the need for location-variant filters is limited. Our TrajGRU focuses on
videos where location-variant filters are crucial for handling motion patterns like rotations. Moreover,
we are revising the structure of the recurrent connection and have tested different number of links
while [12] fixes the link number to 9.
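Following the description above, the flow-generating network γ can be sketched as below; the padding and the packing of the 2L output maps into U and V are assumptions of this illustration.

import torch.nn as nn

def make_gamma(input_channels, state_channels, L):
    # One hidden convolutional layer with 5x5 kernels and 32 feature maps,
    # producing 2L output maps (the L horizontal and L vertical flow fields).
    return nn.Sequential(
        nn.Conv2d(input_channels + state_channels, 32, kernel_size=5, padding=2),
        nn.LeakyReLU(0.2),
        nn.Conv2d(32, 2 * L, kernel_size=5, padding=2),
    )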
4 Experiments on MovingMNIST++
Before evaluating our model on the more challenging precipitation nowcasting task, we first compare
TrajGRU with ConvGRU, DFN and 2D/3D CNNs on a synthetic video prediction dataset to justify
its effectiveness.
The previous MovingMNIST dataset [24, 23] only moves the digits with a constant speed and is not
suitable for evaluating different models? ability in capturing more complicated motion patterns. We
thus design the MovingMNIST++ dataset by extending MovingMNIST to allow random rotations,
scale changes, and illumination changes. Each frame is of size 64×64 and contains three moving
digits. We use 10 frames as input to predict the next 10 frames. As the frames have illumination
changes, we use MSE instead of cross-entropy for training and evaluation^2. We train all models using the Adam optimizer [14] with learning rate equal to 10^-4 and momentum equal to 0.5. For
the RNN models, we use the encoding-forecasting structure introduced previously with three RNN
layers. All RNNs are either ConvGRU or TrajGRU and all use the same set of hyperparameters. For
TrajGRU, we initialize the weight of the output layer of the structure generating network to zero.
The strides of the middle downsampling and upsampling layers are chosen to be 2. The numbers
of filters for the three RNNs are 64, 96, 96 respectively. For the DFN model, we replace the output
layer of ConvGRU with an 11×11 local filter and transform the previous frame to get the prediction.
For the RNN models, we train them for 200,000 iterations with norm clipping threshold equal to
1 and batch size equal to 4. For the CNN models, we train them for 100,000 iterations with norm
clipping threshold equal to 50 and batch size equal to 32. The detailed experimental configuration of
the models for the MovingMNIST++ experiment can be found in the appendix. We have also tried to
use conditional GAN for the 2D and 3D models but have failed to get reasonable results.
Table 1 gives the results of different models on the same test set that contains 10,000 sequences. We
train all models using three different seeds to report the standard deviation. We can find that TrajGRU
with only 5 links outperforms ConvGRU with state-to-state kernel size 3×3 and dilation 2×2 (9 links). Also, the performance of TrajGRU improves as the number of links increases. TrajGRU with L = 13 outperforms ConvGRU with 7×7 state-to-state kernel and yet has fewer parameters.
Another observation from the table is that DFN does not perform well in this synthetic dataset. This
is because DFN uses softmax to enhance the sparsity of the learned local filters, which fails to model
illumination change because the maximum value always gets smaller after convolving with a positive
kernel whose weights sum up to 1. For DFN, when the pixel values get smaller, it is impossible for
them to increase again. Figure 3 visualizes the learned structures of TrajGRU. We can see that the
network has learned reasonable local link patterns.
^2 The MSE for the MovingMNIST++ experiment is averaged by both the frame size and the length of the predicted sequence.
Figure 3: Selected links of TrajGRU-L13 at different frames and layers. We choose one of the 13 links and
plot an arrow starting from each pixel to the pixel that is referenced by the link. From left to right we display
the learned structure at the first, second and third layer of the encoder. The links displayed here have learned
behaviour for rotations. We sub-sample the displayed links for the first layer for better readability. We include
animations for all layers and links in the supplementary material. (Best viewed when zoomed in.)
5 Benchmark for Precipitation Nowcasting
5.1 HKO-7 Dataset
The HKO-7 dataset used in the benchmark contains radar echo data from 2009 to 2015 collected by
HKO. The radar CAPPI reflectivity images, which have resolution of 480×480 pixels, are taken from an altitude of 2km and cover a 512km × 512km area centered in Hong Kong. The data are recorded every 6 minutes and hence there are 240 frames per day. The raw logarithmic radar reflectivity factors are linearly transformed to pixel values via pixel = ⌊255 × (dBZ + 10)/70 + 0.5⌋ and are clipped to be between 0 and 255. The raw radar echo images generated by Doppler weather radar are noisy due to
factors like ground clutter, sea clutter, anomalous propagation and electromagnetic interference [16].
To alleviate the impact of noise in training and evaluation, we filter the noisy pixels in the dataset and
generate the noise masks by a two-stage process described in the appendix.
As rainfall events occur sparsely, we select the rainy days based on the rain barrel information to form
our final dataset, which has 812 days for training, 50 days for validation and 131 days for testing.
Our current treatment is close to the real-life scenario as we are able to train an additional model that
classifies whether or not it will rain on the next day and applies our precipitation nowcasting model if
this coarser-level model predicts that it will be rainy. The radar reflectivity values are converted to
rainfall intensity values (mm/h) using the Z-R relationship: dBZ = 10 log a + 10b log R where R is
the rain-rate level, a = 58.53 and b = 1.56. The overall statistics and the average monthly rainfall
distribution of the HKO-7 dataset are given in the appendix.
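Both transformations are straightforward to mirror in code; the sketch below assumes log denotes log10 in the Z-R relationship.

import numpy as np

def dbz_to_pixel(dbz):
    # pixel = floor(255 * (dBZ + 10) / 70 + 0.5), clipped to [0, 255]
    return np.clip(np.floor(255.0 * (dbz + 10.0) / 70.0 + 0.5), 0, 255)

def dbz_to_rain_rate(dbz, a=58.53, b=1.56):
    # Invert dBZ = 10 * log10(a) + 10 * b * log10(R) for the rain rate R (mm/h)
    return 10.0 ** ((dbz - 10.0 * np.log10(a)) / (10.0 * b))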
5.2 Evaluation Methodology
As the radar echo maps arrive in a stream, nowcasting algorithms can apply online learning to adapt
to the newly emerging spatiotemporal patterns. We propose two settings in our evaluation protocol:
(1) the offline setting in which the algorithm always receives 5 frames as input and predicts 20 frames
ahead, and (2) the online setting in which the algorithm receives segments of length 5 sequentially and
predicts 20 frames ahead for each new segment received. The evaluation protocol is described more
systematically in the appendix. The testing environment guarantees that the same set of sequences is
tested in both the offline and online settings for fair comparison.
For both settings, we evaluate the skill scores for multiple thresholds that correspond to different
rainfall levels to give an all-round evaluation of the algorithms? nowcasting performance. Table 2
shows the distribution of different rainfall levels in our dataset. We choose to use the thresholds 0.5,
2, 5, 10, 30 to calculate the CSI and Heidke Skill Score (HSS) [8]. For calculating the skill score at a
specific threshold ? , which is 0.5, 2, 5, 10 or 30, we first convert the pixel values in prediction and
ground-truth to 0/1 by thresholding with ? . We then calculate the TP (prediction=1, truth=1), FN
(prediction=0, truth=1), FP (prediction=1, truth=0), and TN (prediction=0, truth=0). The CSI score is
TP
TP?TN?FN?FP
calculated as TP+FN+FP
and the HSS score is calculated as (TP+FN)(FN+TN)+(TP+FP)(FP+TN)
. During
the computation, the masked points are ignored.
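A compact sketch of this computation, with an optional mask for the filtered noisy pixels, is given below.

import numpy as np

def csi_hss(pred, truth, tau, mask=None):
    # Binarize at the rain-rate threshold tau, then count TP/FN/FP/TN over valid pixels.
    p, t = pred >= tau, truth >= tau
    valid = np.ones_like(p) if mask is None else mask.astype(bool)
    tp = np.sum(p & t & valid)
    fn = np.sum(~p & t & valid)
    fp = np.sum(p & ~t & valid)
    tn = np.sum(~p & ~t & valid)
    csi = tp / (tp + fn + fp)
    hss = (tp * tn - fn * fp) / ((tp + fn) * (fn + tn) + (tp + fp) * (fp + tn))
    return csi, hss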
Table 2: Rain rate statistics in the HKO-7 benchmark.

Rain Rate x (mm/h) | Proportion (%) | Rainfall Level
0 ≤ x < 0.5        | 90.25          | No / Hardly noticeable
0.5 ≤ x < 2        | 4.38           | Light
2 ≤ x < 5          | 2.46           | Light to moderate
5 ≤ x < 10         | 1.35           | Moderate
10 ≤ x < 30        | 1.14           | Moderate to heavy
x ≥ 30             | 0.42           | Rainstorm warning
As shown in Table 2, the frequencies of different rainfall levels are highly imbalanced. We propose
to use the weighted loss function to help solve this problem. Specifically,
we assign a weight w(x) to each pixel according to its rainfall intensity x:

    w(x) = 1 if x < 2;  2 if 2 ≤ x < 5;  5 if 5 ≤ x < 10;  10 if 10 ≤ x < 30;  30 if x ≥ 30.

Also, the masked pixels have weight 0. The resulting B-MSE and B-MAE scores are computed as

    B-MSE = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{480} Σ_{j=1}^{480} w_{n,i,j} (x_{n,i,j} − x̂_{n,i,j})²,
    B-MAE = (1/N) Σ_{n=1}^{N} Σ_{i=1}^{480} Σ_{j=1}^{480} w_{n,i,j} |x_{n,i,j} − x̂_{n,i,j}|,

where N is the total number of frames and w_{n,i,j} is the weight corresponding to the (i, j)th
pixel in the nth frame. For the conventional MSE and MAE measures, we simply set all the weights
to 1 except the masked points.
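The weighting scheme and both balanced scores translate directly into code; the sketch below assumes rainfall-intensity arrays of shape (N, 480, 480) and an optional validity mask.

import numpy as np

def rain_weight(x):
    # w(x) from the piecewise definition above
    w = np.ones_like(x, dtype=np.float64)
    w[x >= 2] = 2
    w[x >= 5] = 5
    w[x >= 10] = 10
    w[x >= 30] = 30
    return w

def balanced_scores(truth, pred, mask=None):
    w = rain_weight(truth)
    if mask is not None:
        w = w * mask          # masked pixels get weight 0
    n = truth.shape[0]
    b_mse = np.sum(w * (truth - pred) ** 2) / n
    b_mae = np.sum(w * np.abs(truth - pred)) / n
    return b_mse, b_mae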
5.3 Evaluated Algorithms
We have evaluated seven nowcasting algorithms, including the simplest model which always predicts
the last frame, two optical flow based methods (ROVER and its nonlinear variant), and four deep
learning methods (TrajGRU, ConvGRU, 2D CNN, and 3D CNN). Specifically, we have evaluated
the performance of deep learning models in the online setting by fine-tuning the algorithms using
AdaGrad [4] with learning rate equal to 10^-4. We optimize the sum of B-MSE and B-MAE during offline training and online fine-tuning. During the offline training process, all models are optimized by the Adam optimizer with learning rate equal to 10^-4 and momentum equal to 0.5 and we train
these models with early-stopping on the sum of B-MSE and B-MAE. For RNN models, the training
batch size is set to 4. For the CNN models, the training batch size is set to 8. For TrajGRU and
ConvGRU models, we use a 3-layer encoding-forecasting structure with the number of filters for the
RNNs set to 64, 192, 192. We use kernel size equal to 5×5, 5×5, 3×3 for the ConvGRU models while the number of links is set to 13, 13, 9 for the TrajGRU model. We also train the ConvGRU model with the original MSE and MAE loss, which is named "ConvGRU-nobal", to evaluate the
improvement by training with the B-MSE and B-MAE loss. The other model configurations including
ROVER, ROVER-nonlinear and deep models are included in the appendix.
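One way to realize this online setting is to take a single fine-tuning step on each newly received segment before issuing the next prediction; the sketch below assumes a model that maps 5 input frames to 20 predicted frames, a balanced loss function, and a torch.optim.Adagrad optimizer, mirroring the setup described above.

import torch

def online_step(model, optimizer, past_frames, future_frames, balanced_loss):
    # past_frames: the newly received 5-frame segment; future_frames: its 20-frame target.
    optimizer.zero_grad()
    pred = model(past_frames)
    loss = balanced_loss(pred, future_frames)  # e.g., the sum of B-MSE and B-MAE
    loss.backward()
    optimizer.step()
    return loss.item()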
5.4 Evaluation Results
The overall evaluation results are summarized in Table 3. In order to analyze the confidence interval
of the results, we train 2D CNN, 3D CNN, ConvGRU and TrajGRU models using three different
random seeds and report the standard deviation in Table 4. We find that training with balanced loss
functions is essential for good nowcasting performance of heavier rainfall. The ConvGRU model that
is trained without balanced loss, which best represents the model in [23], has worse nowcasting score
than the optical flow based methods at the 10mm/h and 30mm/h thresholds. Also, we find that all the
deep learning models that are trained with the balanced loss outperform the optical flow based models.
Among the deep learning models, TrajGRU performs the best and 3D CNN outperforms 2D CNN,
which shows that an appropriate network structure is crucial to achieving good performance. The
improvement of TrajGRU over the other models is statistically significant because the differences in
B-MSE and B-MAE are larger than three times their standard deviation. Moreover, the performance
with online fine-tuning enabled is consistently better than that without online fine-tuning, which
verifies the effectiveness of online learning at least for this task.
Table 3: HKO-7 benchmark result. We mark the best result within a specific setting with bold face and the second best result by underlining. Each cell contains the mean score of the 20 predicted frames. In the online setting, all algorithms have used the online learning strategy described in the paper. "↑" means that the score is higher the better while "↓" means that the score is lower the better. "r ≥ τ" means the skill score at the τ mm/h rainfall threshold. For 2D CNN, 3D CNN, ConvGRU and TrajGRU models, we train the models with three different random seeds and report the mean scores.

Offline Setting
Algorithm          | CSI↑ r≥0.5 | r≥2    | r≥5    | r≥10   | r≥30   | HSS↑ r≥0.5 | r≥2    | r≥5    | r≥10   | r≥30   | B-MSE↓ | B-MAE↓
Last Frame         | 0.4022     | 0.3266 | 0.2401 | 0.1574 | 0.0692 | 0.5207     | 0.4531 | 0.3582 | 0.2512 | 0.1193 | 15274  | 28042
ROVER + Linear     | 0.4762     | 0.4089 | 0.3151 | 0.2146 | 0.1067 | 0.6038     | 0.5473 | 0.4516 | 0.3301 | 0.1762 | 11651  | 23437
ROVER + Non-linear | 0.4655     | 0.4074 | 0.3226 | 0.2164 | 0.0951 | 0.5896     | 0.5436 | 0.4590 | 0.3318 | 0.1576 | 10945  | 22857
2D CNN             | 0.5095     | 0.4396 | 0.3406 | 0.2392 | 0.1093 | 0.6366     | 0.5809 | 0.4851 | 0.3690 | 0.1885 | 7332   | 18091
3D CNN             | 0.5109     | 0.4411 | 0.3415 | 0.2424 | 0.1185 | 0.6334     | 0.5825 | 0.4862 | 0.3734 | 0.2034 | 7202   | 17593
ConvGRU-nobal      | 0.5476     | 0.4661 | 0.3526 | 0.2138 | 0.0712 | 0.6756     | 0.6094 | 0.4981 | 0.3286 | 0.1160 | 9087   | 19642
ConvGRU            | 0.5489     | 0.4731 | 0.3720 | 0.2789 | 0.1776 | 0.6701     | 0.6104 | 0.5163 | 0.4159 | 0.2893 | 5951   | 15000
TrajGRU            | 0.5528     | 0.4759 | 0.3751 | 0.2835 | 0.1856 | 0.6731     | 0.6126 | 0.5192 | 0.4207 | 0.2996 | 5816   | 14675

Online Setting
2D CNN             | 0.5112     | 0.4363 | 0.3364 | 0.2435 | 0.1263 | 0.6365     | 0.5756 | 0.4790 | 0.3744 | 0.2162 | 6654   | 17071
3D CNN             | 0.5106     | 0.4344 | 0.3345 | 0.2427 | 0.1299 | 0.6355     | 0.5736 | 0.4766 | 0.3733 | 0.2220 | 6690   | 16903
ConvGRU            | 0.5511     | 0.4737 | 0.3742 | 0.2843 | 0.1837 | 0.6712     | 0.6105 | 0.5183 | 0.4226 | 0.2981 | 5724   | 14772
TrajGRU            | 0.5563     | 0.4798 | 0.3808 | 0.2914 | 0.1933 | 0.6760     | 0.6164 | 0.5253 | 0.4308 | 0.3111 | 5589   | 14465
Table 4: Confidence intervals of selected deep models in the HKO-7 benchmark. We train 2D CNN, 3D CNN, ConvGRU and TrajGRU using three different random seeds and report the standard deviation of the test scores.

Offline Setting
Algorithm | CSI r≥0.5 | r≥2    | r≥5    | r≥10   | r≥30   | HSS r≥0.5 | r≥2    | r≥5    | r≥10   | r≥30   | B-MSE | B-MAE
2D CNN    | 0.0032    | 0.0023 | 0.0015 | 0.0001 | 0.0025 | 0.0032    | 0.0025 | 0.0018 | 0.0003 | 0.0002 | 90    | 95
3D CNN    | 0.0043    | 0.0027 | 0.0016 | 0.0024 | 0.0024 | 0.0042    | 0.0028 | 0.0018 | 0.0031 | 0.0003 | 44    | 26
ConvGRU   | 0.0022    | 0.0018 | 0.0031 | 0.0008 | 0.0022 | 0.0022    | 0.0021 | 0.0040 | 0.0010 | 0.0019 | 52    | 81
TrajGRU   | 0.0020    | 0.0024 | 0.0025 | 0.0031 | 0.0031 | 0.0019    | 0.0024 | 0.0028 | 0.0039 | 0.0002 | 18    | 32

Online Setting
2D CNN    | 0.0002    | 0.0005 | 0.0002 | 0.0002 | 0.0012 | 0.0002    | 0.0005 | 0.0003 | 0.0043 | 0.0019 | 12    | 12
3D CNN    | 0.0004    | 0.0003 | 0.0002 | 0.0003 | 0.0008 | 0.0004    | 0.0004 | 0.0004 | 0.0041 | 0.0001 | 23    | 27
ConvGRU   | 0.0006    | 0.0012 | 0.0017 | 0.0019 | 0.0024 | 0.0006    | 0.0012 | 0.0023 | 0.0038 | 0.0031 | 30    | 69
TrajGRU   | 0.0008    | 0.0004 | 0.0002 | 0.0002 | 0.0002 | 0.0007    | 0.0004 | 0.0002 | 0.0045 | 0.0003 | 10    | 20
Table 5: Kendall's τ coefficients between skill scores. Higher absolute value indicates stronger correlation. The numbers with the largest absolute values are shown in bold face.

Skill Scores | CSI r≥0.5 | r≥2   | r≥5   | r≥10  | r≥30  | HSS r≥0.5 | r≥2   | r≥5   | r≥10  | r≥30
MSE          | -0.24     | -0.39 | -0.39 | -0.07 | -0.01 | -0.33     | -0.42 | -0.39 | -0.06 | 0.01
MAE          | -0.41     | -0.57 | -0.55 | -0.25 | -0.27 | -0.50     | -0.60 | -0.55 | -0.24 | -0.26
B-MSE        | -0.70     | -0.57 | -0.61 | -0.86 | -0.84 | -0.62     | -0.55 | -0.61 | -0.86 | -0.84
B-MAE        | -0.74     | -0.59 | -0.58 | -0.82 | -0.92 | -0.67     | -0.57 | -0.59 | -0.83 | -0.92
Based on the evaluation results, we also compute the Kendall's τ coefficients [13] between the MSE, MAE, B-MSE, B-MAE and the CSI, HSS at different thresholds. As shown in Table 5, B-MSE and B-MAE have stronger correlation with the CSI and HSS in most cases.
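Such coefficients can be computed with scipy.stats.kendalltau; the values below are taken from Table 3 (offline B-MSE versus CSI at r ≥ 30 for the five trainable models) purely for illustration.

from scipy.stats import kendalltau

b_mse  = [7332, 7202, 9087, 5951, 5816]        # 2D CNN, 3D CNN, ConvGRU-nobal, ConvGRU, TrajGRU
csi_30 = [0.1093, 0.1185, 0.0712, 0.1776, 0.1856]
tau, p_value = kendalltau(b_mse, csi_30)
print(f"Kendall tau = {tau:.2f}")              # negative: lower B-MSE goes with higher CSI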
6 Conclusion and Future Work
In this paper, we have provided the first large-scale benchmark for precipitation nowcasting and have
proposed a new TrajGRU model with the ability of learning the recurrent connection structure. We
have shown TrajGRU is more efficient in capturing the spatiotemporal correlations than ConvGRU.
For future work, we plan to test if TrajGRU helps improve other spatiotemporal learning tasks like
visual object tracking and video segmentation. We will also try to build an operational nowcasting
system using the proposed algorithm.
Acknowledgments
This research has been supported by General Research Fund 16207316 from the Research Grants
Council and Innovation and Technology Fund ITS/205/15FP from the Innovation and Technology
Commission in Hong Kong. The first author has also been supported by the Hong Kong PhD
Fellowship.
References
[1] Alexandre Alahi, Kratarth Goel, Vignesh Ramanathan, Alexandre Robicquet, Li Fei-Fei, and Silvio
Savarese. Social LSTM: Human trajectory prediction in crowded spaces. In CVPR, 2016.
[2] Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for
learning video representations. In ICLR, 2016.
[3] Bert De Brabandere, Xu Jia, Tinne Tuytelaars, and Luc Van Gool. Dynamic filter networks. In NIPS, 2016.
[4] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and
stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
[5] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through
video prediction. In NIPS, 2016.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron
Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[7] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[8] Robin J Hogan, Christopher AT Ferro, Ian T Jolliffe, and David B Stephenson. Equitability revisited: Why the "equitable threat score" is not equitable. Weather and Forecasting, 25(2):710–726, 2010.
[9] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox.
Flownet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, 2017.
[10] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In NIPS, 2015.
[11] Ashesh Jain, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. Structural-RNN: Deep learning on
spatio-temporal graphs. In CVPR, 2016.
[12] Yunho Jeon and Junmo Kim. Active convolution: Learning the shape of convolution for image classification.
In CVPR, 2017.
[13] Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
[14] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[15] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[16] Hansoo Lee and Sungshin Kim. Ensemble classification for anomalous propagation echo detection with
clustering-based subset-selection method. Atmosphere, 8(1):11, 2017.
[17] Xiaodan Liang, Liang Lin, Xiaohui Shen, Jiashi Feng, Shuicheng Yan, and Eric P Xing. Interpretable
structure-evolving LSTM. In CVPR, 2017.
[18] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network
acoustic models. In ICML, 2013.
[19] John S Marshall and W Mc K Palmer. The distribution of raindrops with size. Journal of Meteorology,
5(4):165–166, 1948.
[20] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean
square error. In ICLR, 2016.
[21] Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander
Sorkine-Hornung. A benchmark dataset and evaluation methodology for video object segmentation. In
CVPR, 2016.
[22] MarcAurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit
Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint
arXiv:1412.6604, 2014.
[23] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In NIPS, 2015.
[24] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML, 2015.
[25] Juanzhen Sun, Ming Xue, James W Wilson, Isztar Zawadzki, Sue P Ballard, Jeanette Onvlee-Hooimeyer, Paul Joe, Dale M Barker, Ping-Wah Li, Brian Golding, et al. Use of NWP for nowcasting convective precipitation: Recent progress and challenges. Bulletin of the American Meteorological Society, 95(3):409–426, 2014.
[26] Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, and Honglak Lee. Decomposing motion and content for natural video sequence prediction. In ICLR, 2017.
[27] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In NIPS, 2016.
[28] Wang-chun Woo and Wai-kin Wong. Operational application of optical flow techniques to radar-based rainfall nowcasting. Atmosphere, 8(3):48, 2017.
[29] Yi Wu, Jongwoo Lim, and Ming-Hsuan Yang. Online object tracking: A benchmark. In CVPR, 2013.
[30] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. In ICLR, 2016.
6,794 | 7,146 | Do Deep Neural Networks Suffer from Crowding?
Anna Volokitin 1,3          Gemma Roig 1,2,4          Tomaso Poggio 1,2
[email protected]
[email protected]
[email protected]
1 Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Cambridge, MA
2 Istituto Italiano di Tecnologia at Massachusetts Institute of Technology, Cambridge, MA
3 Computer Vision Laboratory, ETH Zurich, Switzerland
4 Singapore University of Technology and Design, Singapore
Abstract
Crowding is a visual effect suffered by humans, in which an object that can be
recognized in isolation can no longer be recognized when other objects, called
flankers, are placed close to it. In this work, we study the effect of crowding in
artificial Deep Neural Networks (DNNs) for object recognition. We analyze both
deep convolutional neural networks (DCNNs) as well as an extension of DCNNs
that are multi-scale and that change the receptive field size of the convolution filters
with their position in the image. The latter networks, that we call eccentricitydependent, have been proposed for modeling the feedforward path of the primate
visual cortex. Our results reveal that the eccentricity-dependent model, trained on
target objects in isolation, can recognize such targets in the presence of flankers,
if the targets are near the center of the image, whereas DCNNs cannot. Also, for
all tested networks, when trained on targets in isolation, we find that recognition
accuracy of the networks decreases the closer the flankers are to the target and
the more flankers there are. We find that visual similarity between the target and
flankers also plays a role and that pooling in early layers of the network leads
to more crowding. Additionally, we show that incorporating flankers into the
images of the training set for learning the DNNs does not lead to robustness against
configurations not seen at training.
1 Introduction
Despite stunning successes in many computer vision problems [1, 2, 3, 4, 5], Deep Neural Networks
(DNNs) lack interpretability in terms of how the networks make predictions, as well as how an
arbitrary transformation of the input, such as addition of clutter in images in an object recognition
task, will affect the function value.
Examples of an empirical approach to this problem are testing the network with adversarial examples [6, 7] or images with different geometrical transformations such as scale, position and rotation,
as well as occlusion [8]. In this paper, we add clutter to images to analyze the crowding in DNNs.
Crowding is a well known effect in human vision [9, 10], in which objects (targets) that can be
recognized in isolation can no longer be recognized in the presence of nearby objects (flankers), even
though there is no occlusion. We believe that crowding is a special case of the problem of clutter in
object recognition. In crowding studies, human subjects are asked to fixate at a cross at the center of
a screen, and objects are presented at the periphery of their visual field in a flash such that the subject
has no time to move their eyes. Experimental data suggests that crowding depends on the distance of
the target and the flankers [11], eccentricity (the distance of the target to the fixation point), as well as
the similarity between the target and the flankers [12, 13] or the configuration of the flankers around
the target object [11, 14, 15].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
[Figure 1 image panels: (a) whole image; (b) MNIST; (c) notMNIST; (d) Omniglot; (e) Places background.]
Figure 1: (a) Example image used to test the models, with even MNIST as target and two odd MNIST flankers.
(b-d) Close-up views with odd MNIST, notMNIST and Omniglot datasets as flankers, respectively. (e) An even
MNIST target embedded into a natural image.
Many computational models of crowding have been proposed e.g. [16, 17]. Our aim is not to model
human crowding. Instead, we characterize the crowding effect in DNNs trained for object recognition,
and analyze which models and settings suffer less from such effects.
We investigate two types of DNNs for crowding: traditional deep convolutional neural networks and
an extension of these which is multi-scale, and called eccentricity-dependent model [18]. Inspired
by the retina, the receptive field size of the convolutional filters in this model grows with increasing
distance from the center of the image, called the eccentricity. Cheung et al. [19] explored the
emergence of such property when the visual system has an eye-fixation mechanism.
We investigate under which conditions crowding occurs in DNNs that have been trained with images
of target objects in isolation. We test the DNNs with images that contain the target object as well
as clutter, which the network has never seen at training. Examples of the generated images using
MNIST [20], notMNIST [21], and Omniglot [22] datasets are depicted in Fig 1, in which even
MNIST digits are the target objects. As done in human psychophysics studies, we take recognition
accuracy to be the measure of crowding. If a DNN can recognize a target object correctly despite the
presence of clutter, crowding has not occurred.
Our experiments reveal the dependence of crowding on image factors, such as flanker configuration,
target-flanker similarity, and target eccentricity. Our results also show that prematurely pooling
signals increases crowding. This result is related to the theories of crowding in humans. In addition,
we show that training the models with cluttered images does not make models robust to clutter and
flankers configurations not seen in training. Thus, training a model to be robust to general clutter is
prohibitively expensive.
We also discover that the eccentricity-dependent model, trained on isolated targets, can recognize
objects even in very complex clutter, i.e. when they are embedded into images of places (Fig 1(e)).
Thus, if such models are coupled with a mechanism for selecting eye fixation locations, they can be
trained with objects in isolation being robust to clutter, reducing the amount of training data needed.
2
Models
In this section we describe the DNN architectures for which we characterize the crowding effect. We
consider two kinds of DNN models: Deep Convolutional Neural Networks and eccentricity-dependent
networks, each with different pooling strategies across space and scale. We investigate pooling in
particular, because we [18, 23] as well as others [24] have suggested that feature integration by
pooling may be the cause of crowding in human perception.
2.1
Deep Convolutional Neural Networks
The first set of models we investigate are deep convolutional neural networks (DCNN) [25], in which
the image is processed by three rounds of convolution and max pooling across space, and then passed
to one fully connected layer for the classification. We investigate crowding under three different
spatial pooling configurations, listed below and shown in Fig 2. The word pooling in the names of
the model architectures below refers to how quickly we decrease the feature map size in the model.
All architectures have 3×3 max pooling with various strides, and are:
Figure 2: DCNN architectures with three convolutional layers and one fully connected, trained to recognize
even MNIST digits. These are used to investigate the role of pooling in crowding. The grey arrow indicates
downsampling.
scale
inverted pyramid
sampling
image sampled at
different scales
before downsampling
input to model
(b)
(c)
(d)
filter
x
y
(a)
Figure 3: Eccentricity-dependent model: Inverted pyramid with sampling points. Each circle
represents a filter with its respective receptive field. For simplicity, the model is shown with 3 scales.
- No total pooling: Feature map sizes decrease only due to boundary effects, as the 3×3 max pooling has stride 1. The square feature map sizes after each pool layer are 60-54-48-42.
- Progressive pooling: 3×3 pooling with a stride of 2 halves the square size of the feature maps, until we pool over what remains in the final layer, getting rid of any spatial information before the fully connected layer. (60-27-11-1)
- At end pooling: Same as no total pooling, but before the fully connected layer, max-pool over the entire feature map. (60-54-48-1)
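To make the size bookkeeping concrete, the following sketch recomputes the reported feature-map sequences. It assumes 5×5 convolutions with stride 1 and no padding before each pool, which is not stated explicitly above but is the only standard choice consistent with the 60-54-48-42 sequence:

```python
def out_size(size, k, stride):
    """Side length after a k x k window with the given stride, no padding."""
    return (size - k) // stride + 1

def dcnn_sizes(pool_strides, input_size=60, conv_k=5, pool_k=3):
    """Feature-map side lengths after each conv + 3x3 max-pool block.
    A stride of None means pooling over whatever spatial extent remains."""
    sizes, s = [input_size], input_size
    for stride in pool_strides:
        s = out_size(s, conv_k, 1)                       # 5x5 convolution, stride 1
        s = 1 if stride is None else out_size(s, pool_k, stride)
        sizes.append(s)
    return sizes

print(dcnn_sizes([1, 1, 1]))       # no total pooling:    [60, 54, 48, 42]
print(dcnn_sizes([2, 2, None]))    # progressive pooling: [60, 27, 11, 1]
print(dcnn_sizes([1, 1, None]))    # at end pooling:      [60, 54, 48, 1]
```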
The data in each layer in our model is a tensor of minibatch size × x × y × number
of channels, in which x defines the width and y the height of the input. The input image to the
model is resized to 60 × 60 pixels. In our training, we used minibatches of 128 images, 32 feature
channels for all convolutional layers, and convolutional filters of size 5 × 5 with stride 1.
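As a concrete reference, here is a minimal sketch of the at end pooling variant under the hyper-parameters just listed; the ReLU activations and the single-channel input are assumptions, since the text does not specify them:

```python
import torch.nn as nn

# Sketch of the "at end pooling" DCNN; spatial sizes in comments follow 60-54-48-1.
model = nn.Sequential(
    nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(3, stride=1),   # 60 -> 56 -> 54
    nn.Conv2d(32, 32, 5), nn.ReLU(), nn.MaxPool2d(3, stride=1),  # 54 -> 50 -> 48
    nn.Conv2d(32, 32, 5), nn.ReLU(), nn.AdaptiveMaxPool2d(1),    # 48 -> 44 -> 1
    nn.Flatten(),
    nn.Linear(32, 5),   # classify the five even MNIST digits
)
```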
2.2
Eccentricity-dependent Model
The second type of DNN model we consider is an eccentricity-dependent deep neural network,
proposed by Poggio et al. in [18] as a model of the human visual cortex and further studied in [23].
Its eccentricity dependence is based on the human retina, which has receptive fields which increase
in size with eccentricity. [18] argues that the computational reason for this property is the need to
compute a scale- and translation-invariant representation of objects. [18] conjectures that this model
is robust to clutter when the target is near the fixation point.
As discussed in [18], the set of all scales and translations for which invariant representations can be
computed lie within an inverted truncated pyramid shape, as shown in Fig 3(a). The width of the
pyramid at a particular scale is roughly related to the amount of translation invariance for objects
of that size. Scale invariance is prioritized over translation invariance in this model, in contrast to
3
classical DCNNs. From a biological point of view, the limitation of translation invariance can be
compensated for by eye movements, whereas to compensate for a lack of scale invariance the human
would have to move their entire body to change their distance to the object.
The eccentricity-dependent model computes an invariant representation by sampling the inverted
pyramid at a discrete set of scales with the same number of filters at each scale. At larger scales,
the receptive fields of the filters are also larger to cover a larger image area, see Fig 3(a). Thus, the
model constructs a multi-scale representation of the input, where smaller sections (crops) of the
image are sampled densely at a high resolution, and larger sections (crops) are sampled at a
lower resolution, with each scale represented using the same number of pixels, as shown in Fig 3(b-d).
Each scale is treated as an input channel to the network and then processed by convolutional filters,
the weights of which are shared also across scales as well as space. Because of the downsampling of
the input image, this is equivalent to having receptive fields of varying sizes. These shared parameters
also allow the model to learn a scale invariant representation of the image.
Each processing step in this model consists of convolution-pooling, as above, as well as max pooling
across different scales. Scale pooling reduces the number of scales by taking the maximum value
of corresponding locations in the feature maps across multiple scales. We set the spatial pooling
constant using At end pooling, as described above. The type of scale pooling is indicated by writing
the number of scales remaining in each layer, e.g. 11-1-1-1-1. The three configurations tested for
scale pooling are (1) at the beginning, in which all the different scales are pooled together after the
first layer (11-1-1-1-1); (2) progressively (11-7-5-3-1); and (3) at the end (11-11-11-11-1), in which all
11 scales are pooled together at the last layer.
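The following numpy sketch illustrates the scale max-pooling operation itself; how the 11 scales are grouped when reducing to 7, 5 or 3 scales in the progressive configuration is not specified above, so the grouping interface here is an assumption:

```python
import numpy as np

def scale_pool(x, groups):
    """Max-pool across the scale axis.

    x:      array of shape (scales, channels, height, width)
    groups: list of (start, end) index pairs; each group of scales is
            reduced to a single map by an element-wise maximum.
    """
    return np.stack([x[a:b].max(axis=0) for a, b in groups])

x = np.random.rand(11, 32, 60, 60)
print(scale_pool(x, [(0, 11)]).shape)   # pool all 11 scales at once: (1, 32, 60, 60)
```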
The parameters of this model are the same as for the DCNN explained above, except that now there
are extra filters for the scales. Note that because of weight sharing across scales, the number of
parameters in the eccentricity-dependent model is equal to that in a standard DCNN. We use 11 crops,
with the smallest crop of 60 × 60 pixels, increasing in size by a factor of √2. Exponentially interpolated
crops produce fewer boundary effects than linearly interpolated crops, while having qualitatively
the same behavior. Results with linearly extracted crops are shown in Fig 7 of the supplementary
material. All the crops are resized to 60 × 60 pixels, which is the same input image size used for the
DCNN above. Image crops are shown in Fig 9.
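A minimal numpy sketch of the crop sampling: center crops whose side grows by √2 per scale (60 · (√2)^10 = 1920, the full image), each subsampled back to 60 × 60. Nearest-neighbour subsampling stands in for proper interpolation here:

```python
import numpy as np

def multiscale_crops(image, n_scales=11, base=60):
    """Center crops of side base * sqrt(2)**i, each resized to base x base."""
    h, w = image.shape
    crops = []
    for i in range(n_scales):
        side = min(int(round(base * 2 ** (i / 2))), min(h, w))
        top, left = (h - side) // 2, (w - side) // 2
        idx = np.linspace(0, side - 1, base).astype(int)  # nearest-neighbour resize
        crops.append(image[top:top + side, left:left + side][np.ix_(idx, idx)])
    return np.stack(crops)                                # (n_scales, base, base)

print(multiscale_crops(np.zeros((1920, 1920))).shape)     # (11, 60, 60)
```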
Contrast Normalization We also investigate the effect of input normalization so that the sum of
the pixel intensities in each scale is in the same range. To de-emphasize the smaller crops, which
will have the most non-black pixels and therefore dominate the max-pooling across scales, in some
experiments we rescale all the pixel intensities to the [0, 1] interval, and then divide them by a factor
proportional to the crop area ((√2)^(11−i), where i = 1 for the smallest crop).
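In code, this normalization might look as follows; whether intensities are rescaled per crop or per image is not stated, so the per-crop min-max rescaling is an assumption:

```python
import numpy as np

def contrast_normalize(crops):
    """Rescale each crop to [0, 1], then divide crop i (i = 1 is the
    smallest) by (sqrt(2))**(11 - i), de-emphasizing the small crops."""
    out = []
    for i, c in enumerate(crops, start=1):
        c = (c - c.min()) / max(c.max() - c.min(), 1e-8)   # to [0, 1]
        out.append(c / 2 ** ((11 - i) / 2))
    return np.stack(out)
```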
3
Experimental Set-up
Models are trained with back-propagation to recognize a set of objects, which we call targets. During
testing, we present the models with images which contain a target object as well as other objects
which the model has not been trained to recognize, which we call flankers. The flanker acts as clutter
with respect to the target object.
Specifically, we train our models to recognize even MNIST digits, i.e. numbers 0, 2, 4, 6, 8, shifted
at different locations of the image along the horizontal axis, which are the target objects in our
experiments. We compare performance when we use images with the target object in isolation, or
when flankers are also embedded in the training images. The flankers are selected from odd MNIST
digits, notMNIST dataset [21] which contains letters of different typefaces, and Omniglot [22] which
was introduced for one-shot character recognition. Also, we evaluate recognition when the target is
embedded into images of the Places dataset [26].
The images are of size 1920 × 1920 pixels, in which we embedded target objects of 120 × 120 px,
and flankers of the same size, unless stated otherwise. Recall that the images are resized to 60 × 60
as input to the networks. We keep the training and testing splits provided by the MNIST dataset,
and use them respectively for training and testing. We illustrate some examples of target and flanker
configuration in Fig 1. We refer to the target as a and to the flanker as x and use this shorthand in the
plots. All experiments are done in the right half of the image plane. We do this to check if there is a
difference between central and peripheral flankers. We test the models under 4 conditions (a sketch of how such test images can be composed follows the list):
- No flankers. Only the target object. (a in the plots)
- One central flanker closer to the center of the image than the target. (xa)
- One peripheral flanker closer to the boundary of the image than the target. (ax)
- Two flankers spaced equally around the target, being both the same object, see Fig 1. (xax)
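The composition itself reduces to pasting 120 px patches onto a 1920 px canvas. The sketch below assumes the target sits to the right of the image center, matching the right-half-of-the-plane protocol above; the function names are illustrative:

```python
import numpy as np

def paste(canvas, patch, cx, cy):
    """Paste a square patch centred at (cx, cy) onto the canvas, in place."""
    s = patch.shape[0]
    canvas[cy - s // 2:cy - s // 2 + s, cx - s // 2:cx - s // 2 + s] = patch

def make_test_image(target, flanker, config, ecc, spacing=120, size=1920):
    """Compose one test image. `ecc` is the target's horizontal offset (px)
    from the image centre; `config` is one of 'a', 'xa', 'ax', 'xax'."""
    img = np.zeros((size, size), dtype=np.float32)
    cx, cy = size // 2 + ecc, size // 2
    paste(img, target, cx, cy)
    i = config.index('a')
    if 'x' in config[:i]:                       # flanker on the central side
        paste(img, flanker, cx - spacing, cy)
    if 'x' in config[i + 1:]:                   # flanker on the peripheral side
        paste(img, flanker, cx + spacing, cy)
    return img

digit = np.ones((120, 120), dtype=np.float32)   # stand-in for a 120 px digit
img = make_test_image(digit, 0.5 * digit, 'xax', ecc=240)
```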
4
Experiments
In this section, we investigate the crowding effect in DNNs. We first carry out experiments on models
that have been trained with images containing both targets and flankers. We then repeat our analysis
with the models trained with images of the targets in isolation, shifted at all positions in the horizontal
axis. We analyze the effect of flanker configuration, flanker dataset, pooling in the model architecture,
and model type, by evaluating the recognition accuracy of the target objects.1
4.1
DNNs Trained with Target and Flankers
In this setup we trained DNNs with images in which there were two identical flankers randomly
chosen from the training set of MNIST odd digits, placed at a distance of 120 pixels on either side
of the target (xax). The target is shifted horizontally, while keeping the distance between target and
flankers constant, called the constant spacing setup, and depicted in Fig 1(a) of the supplementary
material. We evaluate (i) DCNN with at the end pooling, and (ii) eccentricity-dependent model with
11-11-11-11-1 scale pooling, at the end spatial pooling and contrast normalization. We report the
results using the different flanker types at test with xax, ax, xa and a target flanker configuration, in
which a represents the target and x the flanker, as described in Section 3 (see footnote 2).
Results are in Fig 4. In the plots with 120 px spacing, we see that the models are better at recognizing
objects in clutter than isolated objects for all image locations tested, especially when the configuration
of target and flanker is the same in the training images as in the testing images (xax). However, in
the plots where target-flanker spacing is 240 px recognition accuracy falls to less than the accuracy of
recognizing isolated target objects. Thus, in order for a model to be robust to all kinds of clutter, it
needs to be trained with all possible target-flanker configurations, which is infeasible in practice.
Interestingly, we see that the eccentricity model is much better at recognizing objects in isolation
than the DCNN. This is because the multi-scale crops divide the image into discrete regions, letting
the model learn from image parts as well as the whole image.
We performed an additional experiment training the network with images that contain the same
target-flanker configuration as above (xax), but with different spacings between the target and the
flankers, including different spacings on either side of the target. Left spacing and right spacing are
sampled from 120 px, 240 px and 480 px independently. Train and test images are shown in Fig. 2 of the
supplementary material.
We test two conditions: (1) with flankers on both sides of the target (xax) at a spacing not seen
in the training set (360 px); (2) with 360 px spacing, including 2 flankers on both sides (4 flankers
total, xxaxx). In Fig 5, we show the accuracy for images with 360 px target-flanker spacing, and
see that accuracy is not impaired, neither for the DCNN nor the eccentricity model. Yet, the DCNN
accuracy for images with four flankers is impaired, while the eccentricity model still has unimpaired
recognition accuracy provided that the target is in the center of the image.
Thus, the recognition accuracy is not impaired for all tested models when flankers are in a similar
configuration in testing as in training. This is even when the flankers at testing are placed at a spacing
that is in between two seen spacings used at training. The models can interpolate to new spacings of
flankers when using similar configurations in test images as seen during training, e.g. (xax), arguably
due to the pooling operators. Yet, DCNN recognition is still severely impaired and does not generalize
to new flanker configurations, such as adding more flankers, when there are 2 flankers on both sides of
the target (xxaxx). To gain robustness to such configurations, each of these cases should be explicitly
included in the training set. Only the eccentricity-dependent model is robust to different flanker
1 Code to reproduce experiments is available at https://github.com/CBMM/eccentricity
2 The ax flanker line starts at 120 px of target eccentricity, because nothing was put at negative eccentricities. For the case of 2 flankers, when the target was at 0 (the image center), the flankers were put at -120 px and 120 px.
[Figure 4 panels: rows show the DCNN and the Ecc.-dependent model at constant spacings of 120 px and of 240 px; columns show odd MNIST, notMNIST and Omniglot test flankers.]
Figure 4: Even MNIST recognition accuracy of the DCNN (at end pooling) and the Eccentricity Model (11-11-11-11-1, At End spatial pooling with contrast normalization), trained with odd MNIST flankers at 120 px constant spacing. The target eccentricity is in pixels.
[Figure 5 panels: DCNN, Ecc. with contrast norm, and Ecc. without contrast norm; MNIST flankers trained at 120, 240, 480 px with different left and right spacings.]
Figure 5: All models tested at 360 px target-flanker spacing. All models can recognize the digit in the presence of clutter at a spacing that is in between spacings seen at training time. However, the eccentricity model (11-11-11-11-1, At End spatial pooling with contrast normalization) and the DCNN fail to generalize to new types of flanker configurations (two flankers on each side, xxaxx) at 360 px spacing between the target and the inner flanker.
configurations not included in training, when the target is centered. We will explore the role of
contrast normalization in Sec 4.3.
4.2
DNNs Trained with Images with the Target in Isolation
For these experiments, we train the models with the target object in isolation and in different positions
of the image horizontal axis. We test the models on images with target-flanker configurations a, ax,
xa, xax.
DCNN We examine the crowding effect with different spatial pooling in the DCNN hierarchy: (i) no
total pooling, (ii) progressive pooling, (iii) at end pooling (see Section 2.1 and Fig 2).
[Figure 6 panels: spatial pooling variants (No Total Pooling, Progressive, At End) with MNIST flankers, shown at 120 px constant spacing and at 0 px constant target eccentricity.]
Figure 6: Accuracy results of the 4-layer DCNN with different pooling schemes, trained with targets shifted across the image and tested with different flanker configurations. Eccentricity is in pixels.
[Figure 7 panels: MNIST, notMNIST and Omniglot flanker datasets; DCNN at 120 px constant spacing.]
Figure 7: Effect on the recognition accuracy of the DCNN with at end pooling when using different flanker datasets at testing.
Results are shown in Fig 6. In addition to the constant spacing experiment (see Section 4.1), we also
evaluate the models in a setup called constant target eccentricity, in which we have fixed the target in
the center of the visual field, and change the spacing between the target and the flanker, as shown in
Fig 1(b) of the supplementary material. Since the target is already at the center of the visual field,
a flanker can not be more central in the image than the target. Thus, we only show a, ax and xax
conditions.
From Fig 6, we observe that the more flankers are present in the test image, the worse recognition
gets. In the constant spacing plots, we see that recognition accuracy does not change with eccentricity,
which is expected, as translation invariance is built into the structure of convolutional networks. We
attribute the difference between the ax and xa conditions to boundary effects. Results for notMNIST
and Omniglot flankers are shown in Fig 4 of the supplementary material.
From the constant target eccentricity plots, we see that recognition improves as the distance between
target and flanker increases. This is mainly due to the pooling operation that merges the
neighboring input signals. Results with the target at the image boundary is shown in Fig 3 of the
supplementary material.
Furthermore, we see that the network called no total pooling performs worse in the no flanker setup
than the other two models. We believe that this is because pooling across spatial locations helps the
network learn invariance. However, in the below experiments, we will see that there is also a limit to
how much pooling across scales of the eccentricity model improves performance.
We test the effect of flankers from different datasets evaluating DCNN model with at end pooling
in Fig 7. Omniglot flankers crowd slightly less than odd MNIST flankers. The more similar the
flankers are to the target object?even MNIST, the more recognition impairment they produce. Since
Omniglot flankers are visually similar to MNIST digits, but not digits, we see that they activate the
convolutional filters of the model less than MNIST digits, and hence impair recognition less.
[Figure 8 panels: scale pooling variants (11-1-1-1-1, 11-7-5-3-1, 11-11-11-11-1), without and with contrast normalization, at 120 px constant spacing.]
Figure 8: Accuracy of the eccentricity-dependent model with At End spatial pooling, under different contrast normalization and scale pooling strategies. Flankers are odd MNIST digits.
We also observe that notMNIST flankers crowd much more than either MNIST or Omniglot flankers,
even though notMNIST characters are much more different to MNIST digits than Omniglot flankers.
This is because notMNIST is sampled from special font characters and these have many more edges
and white image pixels than handwritten characters. In fact, both MNIST and Omniglot have about
20% white pixels in the image, while notMNIST has 40%. Fig 5 of the supplementary material
shows the histogram of the flanker image intensities. The high number of edges in the notMNIST
dataset has a higher probability of activating the convolutional filters and thus influencing the final
classification decision more, leading to more crowding.
Eccentricity Model We now repeat the above experiment with different configurations of the eccentricity-dependent model. In this experiment, we choose to keep the spatial pooling constant (at end
pooling), and investigate the effect of pooling across scales, as described in Section 2.2. The three
configurations for scale pooling are (1) at the beginning, (2) progressively and (3) at the end. The
numbers indicate the number of scales at each layer, so 11-11-11-11-1 is a network in which all 11
scales have been pooled together at the last layer.
Results with odd MNIST flankers are shown in Fig 8. Our conclusions for the effect of the flanker
dataset are similar to the experiment above with DCNN. (Results with other flanker datasets shown in
Fig 6 of the supplementary material.)
In this experiment, there is a dependence of accuracy on target eccentricity. The model without
contrast normalization is robust to clutter at the fovea, but cannot recognize cluttered objects in the
periphery. Interestingly, also in psychophysics experiments little effect of crowding is observed at the
fovea [10]. The effect of adding one central flanker (ax) is the same as adding two flankers on either
side (xax). This is because the highest resolution area in this model is in the center, so this part of the
image contributes more to the classification decision. If a flanker is placed there instead of a target,
the model tries to classify the flanker, and, it being an unfamiliar object, fails. The dependence of
accuracy on eccentricity can however be mitigated by applying contrast normalization. In this case,
all scales contribute equally in contrast, and dependence of accuracy on eccentricity is removed.
Finally, we see that if scale pooling happens too early in the model, such as in the 11-1-1-1-1
architecture, there is more crowding. Thus, pooling too early in the architecture prevents useful
information from being propagated to later processing in the network. For the rest of the experiments,
we always use the 11-11-11-11-1 configuration of the model with spatial pooling at the end.
4.3
Complex Clutter
Previous experiments show that training with clutter does not give robustness to clutter not seen in
training, e.g. more or fewer flankers, or different spacing. They also show that the eccentricity-dependent model is
more robust to clutter when the target is at the image center and no contrast normalization is applied,
Fig 8. To further analyze the models robustness to other kinds of clutter, we test models trained
Figure 9: (a-b) An example of how multiple crops of an input image look, as well as (c) recognition accuracy
when MNIST targets are embedded into images of places.
on images with isolated targets shifted along the horizontal axis, with images in which the target is
embedded into randomly selected images of Places dataset [26], shown in Fig 1(e) and Fig 9(a), (b).
We tested the DCNN and the eccentricity model (11-11-11-11-1) with and without contrast normalization, both with at end pooling. The results are in Fig 9(c): only the eccentricity model without
contrast normalization can recognize the target and only when the target is close to the image center.
This implies that the eccentricity model is robust to clutter: it doesn't need to be trained with all
different kinds of clutter. If it can fixate on the relevant part of the image, it can still discriminate the
object, even at multiple object scales because this model is scale invariant [18].
5
Discussion
We investigated whether DNNs suffer from crowding, and if so, under which conditions, and what
can be done to reduce the effect. We found that DNNs suffer from crowding. We also explored the
most obvious approach to mitigate this problem, by including clutter in the training set of the model.
Yet, this approach does not help recognition in crowding, unless, of course, a similar configuration of
clutter is used for training and testing.
We explored conditions under which DNNs trained with images of targets in isolation are robust to
clutter. We trained various architectures of both DCNNs and eccentricity-dependent models with
images of isolated targets, and tested them with images containing a target at varying image locations
and 0, 1 or 2 flankers, as well as with the target object embedded into complex scenes. We found the
four following factors influenced the amount of crowding in the models:
- Flanker Configuration: When models are trained with images of objects in isolation, adding flankers harms recognition. Adding two flankers is the same or worse than adding just one, and the smaller the spacing between flanker and target, the more crowding occurs. This is because the pooling operation merges nearby responses, such as the target and flankers if they are close.
- Similarity between target and flanker: Flankers more similar to targets cause more crowding, because of the selectivity property of the learned DNN filters.
- Dependence on target location and contrast normalization: In DCNNs and eccentricity-dependent models with contrast normalization, recognition accuracy is the same across all eccentricities. In eccentricity-dependent networks without contrast normalization, recognition does not decrease despite the presence of clutter when the target is at the center of the image.
- Effect of pooling: adding pooling leads to better recognition accuracy of the models. Yet, in the eccentricity model, pooling across the scales too early in the hierarchy leads to lower accuracy.
Our main conclusion is that when testing recognition accuracy for the target embedded in (place)
images, the eccentricity-dependent model (without contrast normalization and with spatial and scale
pooling at the end of the hierarchy) is robust to complex types of clutter, even though it had been
trained on images of objects in isolation. Yet, this occurs only when the target is at the center of the
image, as it is when fixated by a human observer. Our analysis suggests that if we had access
to a system for selecting the target object location, such as the one proposed by [27], the
eccentricity-dependent model could be trained with lower sample complexity than other DCNNs because it is
robust to some factors of image variation, such as clutter and scale changes. Translation invariance
would mostly be achieved through foveation.
Acknowledgments
This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF
STC award CCF-1231216. A. Volokitin was also funded by the Swiss Commission for Technology
and Innovation (KTI, Grant No 2-69723-16), and thanks Luc Van Gool for his support. G. Roig was
partly funded by SUTD SRG grant (SRG ISTD 2017 131). We also thank Xavier Boix, Francis Chen
and Yena Han for helpful discussions.
References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton, ?Imagenet classification with deep convolutional neural
networks,? in Advances in Neural Information Processing Systems 25 (F. Pereira, C. J. C. Burges, L. Bottou,
and K. Q. Weinberger, eds.), pp. 1097?1105, Curran Associates, Inc., 2012.
[2] K. Simonyan and A. Zisserman, ?Very deep convolutional networks for large-scale image recognition,?
CoRR, vol. abs/1409.1556, 2014.
[3] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich,
?Going deeper with convolutions,? in Computer Vision and Pattern Recognition (CVPR), 2015.
[4] K. He, X. Zhang, S. Ren, and J. Sun, ?Deep residual learning for image recognition,? in Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, pp. 770?778, 2016.
[5] S. Ren, K. He, R. Girshick, and J. Sun, ?Faster r-cnn: Towards real-time object detection with region
proposal networks,? in Advances in neural information processing systems, pp. 91?99, 2015.
[6] I. J. Goodfellow, J. Shlens, and C. Szegedy, ?Explaining and harnessing adversarial examples,? arXiv
preprint arXiv:1412.6572, 2014.
[7] Y. Luo, X. Boix, G. Roig, T. A. Poggio, and Q. Zhao, ?Foveation-based mechanisms alleviate adversarial
examples,? arXiv:1511.06292, 2015.
[8] M. D. Zeiler and R. Fergus, ?Visualizing and understanding convolutional networks,? in European conference on computer vision, pp. 818?833, Springer, 2014.
[9] D. Whitney and D. M. Levi, ?Visual crowding: A fundamental limit on conscious perception and object
recognition,? Trends in cognitive sciences, vol. 15, no. 4, pp. 160?168, 2011.
[10] D. M. Levi, ?Crowding?an essential bottleneck for object recognition: A mini-review,? Vision research,
vol. 48, no. 5, pp. 635?654, 2008.
[11] H. Bouma, ?Interaction effects in parafoveal letter recognition,? Nature, vol. 226, pp. 177?178, 1970.
[12] F. L. Kooi, A. Toet, S. P. Tripathy, and D. M. Levi, ?The effect of similarity and duration on spatial
interaction in peripheral vision,? Spatial vision, vol. 8, no. 2, pp. 255?279, 1994.
[13] T. A. Nazir, ?Effects of lateral masking and spatial precueing on gap-resolution in central and peripheral
vision,? Vision research, vol. 32, no. 4, pp. 771?777, 1992.
[14] W. P. Banks, K. M. Bachrach, and D. W. Larson, ?The asymmetry of lateral interference in visual letter
identification,? Perception & Psychophysics, vol. 22, no. 3, pp. 232?240, 1977.
[15] G. Francis, M. Manassi, and M. Herzog, ?Cortical dynamics of perceptual grouping and segmentation:
Crowding,? Journal of Vision, vol. 16, no. 12, pp. 1114?1114, 2016.
[16] J. Freeman and E. P. Simoncelli, ?Metamers of the ventral stream,? Nature neuroscience, vol. 14, no. 9,
pp. 1195?1201, 2011.
[17] B. Balas, L. Nakano, and R. Rosenholtz, ?A summary-statistic representation in peripheral vision explains
visual crowding,? Journal of vision, vol. 9, no. 12, pp. 13?13, 2009.
[18] T. Poggio, J. Mutch, and L. Isik, ?Computational role of eccentricity dependent cortical magnification,?
arXiv preprint arXiv:1406.1770, 2014.
[19] B. Cheung, E. Weiss, and B. A. Olshausen, ?Emergence of foveal image sampling from learning to attend
in visual scenes,? International Conference on Learning Representations, 2016.
[20] Y. LeCun, C. Cortes, and C. J. Burges, ?The mnist database of handwritten digits,? 1998.
[21] Y. Bulatov, "notMNIST dataset." http://yaroslavvb.blogspot.ch/2011/09/notmnist-dataset.html, 2011. Accessed: 2017-05-16.
[22] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, ?Human-level concept learning through probabilistic
program induction,? Science, vol. 350, no. 6266, pp. 1332?1338, 2015.
[23] F. Chen, G. Roig, X. Isik, L. Boix, and T. Poggio, ?Eccentricity dependent deep neural networks: Modeling
invariance in human vision,? in AAAI Spring Symposium Series, Science of Intelligence, 2017.
[24] S. Keshvari and R. Rosenholtz, ?Pooling of continuous features provides a unifying account of crowding,?
Journal of Vision, vol. 16, no. 3, pp. 39?39, 2016.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, ?Gradient-based learning applied to document recognition,?
Proceedings of the IEEE, vol. 86, no. 11, pp. 2278?2324, 1998.
[26] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, ?Learning deep features for scene recognition
using places database,? in Advances in neural information processing systems, pp. 487?495, 2014.
[27] V. Mnih, N. Heess, A. Graves, et al., ?Recurrent models of visual attention,? in Advances in neural
information processing systems, pp. 2204?2212, 2014.
6,795 | 7,147 | Learning from Complementary Labels
Takashi Ishida1,2,3 Gang Niu2,3 Weihua Hu2,3 Masashi Sugiyama3,2
1 Sumitomo Mitsui Asset Management, Tokyo, Japan
2 The University of Tokyo, Tokyo, Japan
3 RIKEN, Tokyo, Japan
{ishida@ms., gang@ms., hu@ms., sugi@}k.u-tokyo.ac.jp
Abstract
Collecting labeled data is costly and thus a critical bottleneck in real-world classification tasks. To mitigate this problem, we propose a novel setting, namely learning from complementary labels for multi-class classification. A complementary
label specifies a class that a pattern does not belong to. Collecting complementary
labels would be less laborious than collecting ordinary labels, since users do not
have to carefully choose the correct class from a long list of candidate classes.
However, complementary labels are less informative than ordinary labels and thus
a suitable approach is needed to better learn from them. In this paper, we show
that an unbiased estimator to the classification risk can be obtained only from
complementarily labeled data, if a loss function satisfies a particular symmetric
condition. We derive estimation error bounds for the proposed method and prove
that the optimal parametric convergence rate is achieved. We further show that
learning from complementary labels can be easily combined with learning from
ordinary labels (i.e., ordinary supervised learning), providing a highly practical
implementation of the proposed method. Finally, we experimentally demonstrate
the usefulness of the proposed methods.
1
Introduction
In ordinary supervised classification problems, each training pattern is equipped with a label which
specifies the class the pattern belongs to. Although supervised classifier training is effective, labeling
training patterns is often expensive and takes a lot of time. For this reason, learning from less
expensive data has been extensively studied in the last decades, including but not limited to, semisupervised learning [4, 38, 37, 13, 1, 21, 27, 20, 35, 16, 18], learning from pairwise/triple-wise
constraints [34, 12, 6, 33, 25], and positive-unlabeled learning [7, 11, 32, 2, 8, 9, 26, 17].
In this paper, we consider another weakly supervised classification scenario with less expensive
data: instead of any ordinary class label, only a complementary label which specifies a class that
the pattern does not belong to is available. If the number of classes is large, choosing the correct
class label from many candidate classes is laborious, while choosing one of the incorrect class
labels would be much easier and thus less costly. In the binary classification setup, learning with
complementary labels is equivalent to learning with ordinary labels, because complementary label 1
(i.e., not class 1) immediately means ordinary label 2. On the other hand, in K-class classification
for K > 2, complementary labels are less informative than ordinary labels because complementary
label 1 only means either of the ordinary labels 2, 3, . . . , K.
The complementary classification problem may be solved by the method of learning from partial labels [5], where multiple candidate class labels are provided to each training pattern; complementary
label ȳ can be regarded as an extreme case of partial labels given to all K − 1 classes other than class
ȳ. Another possibility to solve the complementary classification problem is to consider a multi-label
setup [3], where each pattern can belong to multiple classes; complementary label ȳ is translated
into a negative label for class ȳ and positive labels for the other K − 1 classes.
Our contribution in this paper is to give a direct risk minimization framework for the complementary
classification problem. More specifically, we consider a complementary loss that incurs a large loss
if a predicted complementary label is not correct. We then show that the classification risk can be
empirically estimated in an unbiased fashion if the complementary loss satisfies a certain symmetric
condition; the sigmoid loss and the ramp loss (see Figure 1) are shown to satisfy this symmetric
condition. Theoretically, we establish estimation error bounds for the proposed method, showing
that learning from complementary labels is also consistent; the order of these bounds achieves the
optimal parametric rate O_p(1/√n), where O_p denotes the order in probability and n is the number
of complementarily labeled data.
We further show that our proposed complementary classification can be easily combined with ordinary classification, providing a highly data-efficient classification method. This combination method
is particularly useful, e.g., when labels are collected through crowdsourcing [14]: Usually, crowdworkers are asked to give a label to a pattern by selecting the correct class from the list of all
candidate classes. This process is highly time-consuming when the number of classes is large. We
may instead choose one of the classes randomly and ask crowdworkers whether a pattern belongs to
the chosen class or not. Such a yes/no question can be much easier and quicker to be answered than
selecting the correct class out of a long list of candidates. Then the pattern is treated as ordinarily
labeled if the answer is yes; otherwise, the pattern is regarded as complementarily labeled.
Finally, we demonstrate the practical usefulness of the proposed methods through experiments.
2
Review of ordinary multi-class classification
Suppose that d-dimensional pattern x ∈ R^d and its class label y ∈ {1, . . . , K} are sampled independently from an unknown probability distribution with density p(x, y). The goal of ordinary multi-class classification is to learn a classifier f(x) : R^d → {1, . . . , K} that minimizes the classification risk with multi-class loss L(f(x), y):

    R(f) = \mathbb{E}_{p(x,y)}[L(f(x), y)],    (1)

where \mathbb{E} denotes the expectation. Typically, a classifier f(x) is assumed to take the following form:

    f(x) = \arg\max_{y \in \{1,\ldots,K\}} g_y(x),    (2)

where g_y(x) : R^d → R is a binary classifier for class y versus the rest. Then, together with a binary loss ℓ(z) : R → R that incurs a large loss for small z, the one-versus-all (OVA) loss1 or the pairwise-comparison (PC) loss defined as follows are used as the multi-class loss [36]:

    L_{\mathrm{OVA}}(f(x), y) = \ell(g_y(x)) + \frac{1}{K-1} \sum_{y' \neq y} \ell(-g_{y'}(x)),    (3)

    L_{\mathrm{PC}}(f(x), y) = \sum_{y' \neq y} \ell(g_y(x) - g_{y'}(x)).    (4)
Finally, the expectation over unknown p(x, y) in Eq.(1) is empirically approximated using training
samples to give a practical classification formulation.
3
Classification from complementary labels
In this section, we formulate the problem of complementary classification and propose a risk minimization framework.
We consider the situation where, instead of ordinary class label y, we are given only complementary label ȳ which specifies a class that pattern x does not belong to. Our goal is to still learn a classifier that minimizes the classification risk (1), but only from complementarily labeled training samples {(x_i, ȳ_i)}_{i=1}^n. We assume that {(x_i, ȳ_i)}_{i=1}^n are drawn independently from an unknown probability distribution with density:2

    \bar{p}(x, \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} p(x, y).    (5)

1 We normalize the "rest" loss by K − 1 to be consistent with the discussion in the following sections.
Let us consider a complementary loss L̄(f(x), ȳ) for a complementarily labeled sample (x, ȳ). Then we have the following theorem, which allows unbiased estimation of the classification risk from complementarily labeled samples:

Theorem 1. The classification risk (1) can be expressed as

    R(f) = (K-1)\,\mathbb{E}_{\bar{p}(x,\bar{y})}[\bar{L}(f(x), \bar{y})] - M_1 + M_2,    (6)

if there exist constants M_1, M_2 \geq 0 such that for all x and y, the complementary loss satisfies

    \sum_{\bar{y}=1}^{K} \bar{L}(f(x), \bar{y}) = M_1 \quad \text{and} \quad L(f(x), y) + \bar{L}(f(x), y) = M_2.    (7)
Proof. According to (5),

    (K-1)\,\mathbb{E}_{\bar{p}(x,\bar{y})}[\bar{L}(f(x),\bar{y})]
        = (K-1) \int \sum_{\bar{y}=1}^{K} \bar{L}(f(x),\bar{y})\, \bar{p}(x,\bar{y})\, dx
        = (K-1) \int \sum_{\bar{y}=1}^{K} \bar{L}(f(x),\bar{y}) \Big( \frac{1}{K-1} \sum_{y \neq \bar{y}} p(x,y) \Big) dx
        = \int \sum_{y=1}^{K} \sum_{\bar{y} \neq y} \bar{L}(f(x),\bar{y})\, p(x,y)\, dx
        = \mathbb{E}_{p(x,y)}\Big[ \sum_{\bar{y} \neq y} \bar{L}(f(x),\bar{y}) \Big]
        = \mathbb{E}_{p(x,y)}[M_1 - \bar{L}(f(x),y)]
        = M_1 - \mathbb{E}_{p(x,y)}[\bar{L}(f(x),y)],

where the fifth equality follows from the first constraint in (7). Subsequently,

    (K-1)\,\mathbb{E}_{\bar{p}(x,\bar{y})}[\bar{L}(f(x),\bar{y})] - \mathbb{E}_{p(x,y)}[L(f(x),y)]
        = M_1 - \mathbb{E}_{p(x,y)}[L(f(x),y) + \bar{L}(f(x),y)]
        = M_1 - \mathbb{E}_{p(x,y)}[M_2]
        = M_1 - M_2,

where the second equality follows from the second constraint in (7).
The first constraint in (7) can be regarded as a multi-class loss version of a symmetric constraint that
we later use in Theorem 2. The second constraint in (7) means that the smaller L is, the larger L̄
should be, i.e., if "pattern x belongs to class y" is correct, "pattern x does not belong to class y"
should be incorrect.
With the expression (6), the classification risk (1) can be naively approximated in an unbiased fashion by the sample average as
    \hat{R}(f) = \frac{K-1}{n} \sum_{i=1}^{n} \bar{L}(f(x_i), \bar{y}_i) - M_1 + M_2.    (8)
Let us define the complementary losses corresponding to the OVA loss L_{OVA}(f(x), y) and the PC loss L_{PC}(f(x), y) as

    \bar{L}_{\mathrm{OVA}}(f(x), \bar{y}) = \frac{1}{K-1} \sum_{y \neq \bar{y}} \ell(g_y(x)) + \ell(-g_{\bar{y}}(x)),    (9)

    \bar{L}_{\mathrm{PC}}(f(x), \bar{y}) = \sum_{y \neq \bar{y}} \ell(g_y(x) - g_{\bar{y}}(x)).    (10)
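As a minimal sketch, (9) and (10) can be written directly in numpy for a single pattern, given the score vector g = (g_1(x), . . . , g_K(x)) and any binary loss ℓ, such as the sigmoid loss introduced in (13) below:

```python
import numpy as np

def bar_L_ova(g, ybar, loss):
    """Complementary OVA loss (9) for a score vector g of shape (K,)."""
    K = len(g)
    return loss(np.delete(g, ybar)).sum() / (K - 1) + loss(-g[ybar])

def bar_L_pc(g, ybar, loss):
    """Complementary PC loss (10)."""
    return loss(np.delete(g, ybar) - g[ybar]).sum()
```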
Then we have the following theorem (its proof is given in Appendix A):

2 The coefficient 1/(K − 1) is for the normalization purpose: it would be natural to assume \bar{p}(x, \bar{y}) = (1/Z) \sum_{y \neq \bar{y}} p(x, y) since all p(x, y) for y ≠ ȳ equally contribute to \bar{p}(x, \bar{y}); in order to ensure that \bar{p}(x, \bar{y}) is a valid joint density such that \mathbb{E}_{\bar{p}(x,\bar{y})}[1] = 1, we must take Z = K − 1.
Figure 1: Examples of binary losses that satisfy the symmetric condition (11).
Theorem 2. If binary loss ℓ(z) satisfies

    \ell(z) + \ell(-z) = 1,    (11)

then \bar{L}_{OVA} satisfies conditions (7) with M_1 = K and M_2 = 2, and \bar{L}_{PC} satisfies conditions (7) with M_1 = K(K-1)/2 and M_2 = K-1.

For example, the following binary losses satisfy the symmetric condition (11) (see Figure 1):

    Zero-one loss: \ell_{0\text{-}1}(z) = \begin{cases} 0 & \text{if } z > 0, \\ 1 & \text{if } z \leq 0, \end{cases}    (12)

    Sigmoid loss: \ell_{S}(z) = \frac{1}{1 + e^{z}},    (13)

    Ramp loss: \ell_{R}(z) = \frac{1}{2} \max\big(0, \min(2, 1-z)\big).    (14)
Note that these losses are non-convex [8]. In practice, the sigmoid loss or ramp loss may be used for
training a classifier, while the zero-one loss may be used for tuning hyper-parameters (see Section 6
for the details).
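As a sanity check, the following minimal numpy sketch verifies the symmetric condition (11), the two conditions in (7) with the Theorem 2 constants for the PC formulation, and the unbiasedness of estimator (8). The toy distribution over a single fixed pattern is an illustrative assumption:

```python
import numpy as np

sigmoid_loss = lambda z: 1.0 / (1.0 + np.exp(z))
ramp_loss = lambda z: 0.5 * np.maximum(0.0, np.minimum(2.0, 1.0 - z))

def L_pc(g, y, loss):                        # ordinary PC loss (4)
    return loss(g[y] - np.delete(g, y)).sum()

def bar_L_pc(g, ybar, loss):                 # complementary PC loss (10)
    return loss(np.delete(g, ybar) - g[ybar]).sum()

rng = np.random.default_rng(0)
K, g = 4, np.random.default_rng(0).normal(size=4)
z = rng.normal(size=100)
assert np.allclose(sigmoid_loss(z) + sigmoid_loss(-z), 1.0)    # condition (11)
assert np.allclose(ramp_loss(z) + ramp_loss(-z), 1.0)
M1, M2 = K * (K - 1) / 2, K - 1                                # Theorem 2 constants
assert np.isclose(sum(bar_L_pc(g, yb, sigmoid_loss) for yb in range(K)), M1)
assert np.isclose(L_pc(g, 2, sigmoid_loss) + bar_L_pc(g, 2, sigmoid_loss), M2)

# Monte-Carlo check of (6)/(8): complementary labels drawn uniformly from
# the K - 1 classes other than y, matching the density (5).
p = rng.dirichlet(np.ones(K))                # class prior at this fixed pattern
true_risk = sum(p[y] * L_pc(g, y, sigmoid_loss) for y in range(K))
y = rng.choice(K, size=200000, p=p)
ybar = (y + rng.integers(1, K, size=y.size)) % K
est = (K - 1) * np.mean([bar_L_pc(g, yb, sigmoid_loss) for yb in ybar]) - M1 + M2
assert abs(est - true_risk) < 0.05
```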
4
Estimation Error Bounds
In this section, we establish the estimation error bounds for the proposed method.
Let G = {g(x)} be a function class for empirical risk minimization, and let \sigma_1, . . . , \sigma_n be n Rademacher variables; then the Rademacher complexity of G for X of size n drawn from p(x) is defined as follows [23]:

    \mathfrak{R}_n(G) = \mathbb{E}_X \mathbb{E}_{\sigma_1,\ldots,\sigma_n}\Big[ \sup_{g \in G} \frac{1}{n} \sum_{x_i \in X} \sigma_i g(x_i) \Big];

define the Rademacher complexity of G for \bar{X} of size n drawn from \bar{p}(x) as

    \bar{\mathfrak{R}}_n(G) = \mathbb{E}_{\bar{X}} \mathbb{E}_{\sigma_1,\ldots,\sigma_n}\Big[ \sup_{g \in G} \frac{1}{n} \sum_{x_i \in \bar{X}} \sigma_i g(x_i) \Big].

Note that \bar{p}(x) = p(x) and thus \bar{\mathfrak{R}}_n(G) = \mathfrak{R}_n(G), which enables us to express the obtained theoretical results using the standard Rademacher complexity \mathfrak{R}_n(G).
To begin with, let \tilde{\ell}(z) = \ell(z) - \ell(0) be the shifted loss such that \tilde{\ell}(0) = 0 (in order to apply Talagrand's contraction lemma [19] later), and let \tilde{L}_{OVA} and \tilde{L}_{PC} be losses defined following (9) and (10) but with \tilde{\ell} instead of \ell; let L_\ell be any (not necessarily the best) Lipschitz constant of \ell. Define the corresponding function classes as follows:

    H_{OVA} = \{(x, \bar{y}) \mapsto \tilde{L}_{OVA}(f(x), \bar{y}) \mid g_1, \ldots, g_K \in G\},
    H_{PC} = \{(x, \bar{y}) \mapsto \tilde{L}_{PC}(f(x), \bar{y}) \mid g_1, \ldots, g_K \in G\}.
Then we can obtain the following lemmas (their proofs are given in Appendices B and C):
Lemma 3. Let \bar{\mathfrak{R}}_n(H_{OVA}) be the Rademacher complexity of H_{OVA} for \bar{S} of size n drawn from \bar{p}(x, \bar{y}), defined as

    \bar{\mathfrak{R}}_n(H_{OVA}) = \mathbb{E}_{\bar{S}} \mathbb{E}_{\sigma_1,\ldots,\sigma_n}\Big[ \sup_{h \in H_{OVA}} \frac{1}{n} \sum_{(x_i,\bar{y}_i) \in \bar{S}} \sigma_i h(x_i, \bar{y}_i) \Big].

Then,

    \bar{\mathfrak{R}}_n(H_{OVA}) \leq K L_\ell\, \mathfrak{R}_n(G).

Lemma 4. Let \bar{\mathfrak{R}}_n(H_{PC}) be the Rademacher complexity of H_{PC} defined similarly to \bar{\mathfrak{R}}_n(H_{OVA}). Then,

    \bar{\mathfrak{R}}_n(H_{PC}) \leq 2K(K-1) L_\ell\, \mathfrak{R}_n(G).
Based on Lemmas 3 and 4, we can derive the uniform deviation bounds of \hat{R}(f) as follows (the proof is given in Appendix D):

Lemma 5. For any \delta > 0, with probability at least 1 - \delta,

    \sup_{g_1,\ldots,g_K \in G} \big| \hat{R}(f) - R(f) \big| \leq 2K(K-1) L_\ell\, \mathfrak{R}_n(G) + (K-1) \sqrt{\frac{2\ln(2/\delta)}{n}},

where \hat{R}(f) is w.r.t. \bar{L}_{OVA}, and

    \sup_{g_1,\ldots,g_K \in G} \big| \hat{R}(f) - R(f) \big| \leq 4K(K-1)^2 L_\ell\, \mathfrak{R}_n(G) + (K-1) \sqrt{\frac{\ln(2/\delta)}{2n}},

where \hat{R}(f) is w.r.t. \bar{L}_{PC}.
Let (g_1^*, \ldots, g_K^*) be the true risk minimizer and (\hat{g}_1, \ldots, \hat{g}_K) be the empirical risk minimizer, i.e.,

    (g_1^*, \ldots, g_K^*) = \arg\min_{g_1,\ldots,g_K \in G} R(f) \quad \text{and} \quad (\hat{g}_1, \ldots, \hat{g}_K) = \arg\min_{g_1,\ldots,g_K \in G} \hat{R}(f).

Let also

    f^*(x) = \arg\max_{y \in \{1,\ldots,K\}} g_y^*(x) \quad \text{and} \quad \hat{f}(x) = \arg\max_{y \in \{1,\ldots,K\}} \hat{g}_y(x).
Finally, based on Lemma 5, we can establish the estimation error bounds as follows:
Theorem 6. For any \delta > 0, with probability at least 1 - \delta,

    R(\hat{f}) - R(f^*) \leq 4K(K-1) L_\ell\, \mathfrak{R}_n(G) + (K-1) \sqrt{\frac{8\ln(2/\delta)}{n}},

if (\hat{g}_1, \ldots, \hat{g}_K) is trained by minimizing \hat{R}(f) w.r.t. \bar{L}_{OVA}, and

    R(\hat{f}) - R(f^*) \leq 8K(K-1)^2 L_\ell\, \mathfrak{R}_n(G) + (K-1) \sqrt{\frac{2\ln(2/\delta)}{n}},

if (\hat{g}_1, \ldots, \hat{g}_K) is trained by minimizing \hat{R}(f) w.r.t. \bar{L}_{PC}.
Proof. Based on Lemma 5, the estimation error bounds can be proven through

    R(\hat{f}) - R(f^*) = \big( \hat{R}(\hat{f}) - \hat{R}(f^*) \big) + \big( R(\hat{f}) - \hat{R}(\hat{f}) \big) + \big( \hat{R}(f^*) - R(f^*) \big)
                       \leq 0 + 2 \sup_{g_1,\ldots,g_K \in G} \big| \hat{R}(f) - R(f) \big|,

where we used that \hat{R}(\hat{f}) \leq \hat{R}(f^*) by the definition of \hat{f}.
Theorem 6 also guarantees that learning from complementary labels is consistent: as n → ∞, R(\hat{f}) → R(f^*). Consider a linear-in-parameter model defined by

    G = \{ g(x) = \langle w, \phi(x) \rangle_{\mathcal{H}} \mid \|w\|_{\mathcal{H}} \leq C_w,\ \|\phi(x)\|_{\mathcal{H}} \leq C_\phi \},

where \mathcal{H} is a Hilbert space with an inner product \langle \cdot, \cdot \rangle_{\mathcal{H}}, w ∈ \mathcal{H} is a normal, \phi : R^d → \mathcal{H} is a feature map, and C_w > 0 and C_\phi > 0 are constants [29]. It is known that \mathfrak{R}_n(G) \leq C_w C_\phi / \sqrt{n} [23], and thus R(\hat{f}) → R(f^*) in O_p(1/\sqrt{n}) if this G is used, where O_p denotes the order in probability. This order is already the optimal parametric rate and cannot be improved without additional strong assumptions on p(x, y), \ell and G jointly.
5
Incorporation of ordinary labels
In many practical situations, we may also have ordinarily labeled data in addition to complementarily labeled data. In such cases, we want to leverage both kinds of labeled data to obtain more accurate classifiers. To this end, motivated by [28], let us consider a convex combination of the classification risks derived from ordinarily labeled data and complementarily labeled data:

    R(f) = \alpha\, \mathbb{E}_{p(x,y)}[L(f(x), y)] + (1-\alpha) \Big( (K-1)\, \mathbb{E}_{\bar{p}(x,\bar{y})}[\bar{L}(f(x), \bar{y})] - M_1 + M_2 \Big),    (15)

where \alpha \in [0, 1] is a hyper-parameter that interpolates between the two risks. The combined risk (15) can be naively approximated by the sample averages as

    \hat{R}(f) = \frac{\alpha}{m} \sum_{j=1}^{m} L(f(x_j), y_j) + \frac{(1-\alpha)(K-1)}{n} \sum_{i=1}^{n} \bar{L}(f(x_i), \bar{y}_i),    (16)

where \{(x_j, y_j)\}_{j=1}^m are ordinarily labeled data and \{(x_i, \bar{y}_i)\}_{i=1}^n are complementarily labeled data.
As explained in the introduction, we can naturally obtain both ordinarily and complementarily labeled data through crowdsourcing [14]. Our risk estimator (16) can utilize both kinds of labeled data
to obtain better classifiers.3 We will experimentally demonstrate the usefulness of this combination
method in Section 6.
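A sketch of how estimator (16) could be computed given both kinds of data; the function names and interface here are illustrative, not from the paper:

```python
import numpy as np

def combined_risk(scores, X_ord, y_ord, X_bar, y_bar, alpha, K, L, L_bar):
    """Empirical combined risk (16). `scores` maps a pattern to the vector
    (g_1(x), ..., g_K(x)); `L` and `L_bar` are the ordinary and complementary
    losses. Additive constants independent of f are dropped, since they do
    not affect the minimizer."""
    ord_term = np.mean([L(scores(x), y) for x, y in zip(X_ord, y_ord)])
    bar_term = np.mean([L_bar(scores(x), yb) for x, yb in zip(X_bar, y_bar)])
    return alpha * ord_term + (1.0 - alpha) * (K - 1) * bar_term
```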
6
Experiments
In this section, we experimentally evaluate the performance of the proposed methods.
6.1
Comparison of different losses
Here we first compare the performance among four variations of the proposed method with different
loss functions: OVA (9) and PC (10), each with the sigmoid loss (13) and ramp loss (14). We used
the MNIST hand-written digit dataset, downloaded from the website of the late Sam Roweis4 (with
all patterns standardized to have zero mean and unit variance), with different numbers of classes: 3
classes (digits "1" to "3") to 10 classes (digits "1" to "9" and "0"). From each class, we randomly
sampled 500 data for training and 500 data for testing, and generated complementary labels by
randomly selecting one of the complementary classes. From the training dataset, we left out 25% of
the data for validating hyperparameter based on (8) with the zero-one loss plugged in (9) or (10).
3 Note that when pattern x has already been equipped with ordinary label y, giving complementary label ȳ does not bring us any additional information (unless the ordinary label is noisy).
4 See http://cs.nyu.edu/~roweis/data.html.
Table 1: Means and standard deviations of classification accuracy over five trials in percentage, when the number of classes ("cls") is changed for the MNIST dataset. "PC" is (10), "OVA" is (9), "Sigmoid" is (13), and "Ramp" is (14). Best and equivalent methods (with 5% t-test) are highlighted in boldface.
Method      | 3 cls      | 4 cls      | 5 cls      | 6 cls      | 7 cls      | 8 cls      | 9 cls      | 10 cls
OVA Sigmoid | 95.2 (0.9) | 91.4 (0.5) | 87.5 (2.2) | 82.0 (1.3) | 74.5 (2.9) | 73.9 (1.2) | 63.6 (4.0) | 57.2 (1.6)
OVA Ramp    | 95.1 (0.9) | 90.8 (1.0) | 86.5 (1.8) | 79.4 (2.6) | 73.9 (3.9) | 71.4 (4.0) | 66.1 (2.1) | 56.1 (3.6)
PC Sigmoid  | 94.9 (0.5) | 90.9 (0.8) | 88.1 (1.8) | 80.3 (2.5) | 75.8 (2.5) | 72.9 (3.0) | 65.0 (3.5) | 58.9 (3.9)
PC Ramp     | 94.5 (0.7) | 90.8 (0.5) | 88.0 (2.2) | 81.0 (2.2) | 74.0 (2.3) | 71.4 (2.4) | 69.0 (2.8) | 57.3 (2.0)
For all the methods, we used a linear-in-input model g_k(x) = w_k^T x + b_k as the binary classifier, where ^T denotes the transpose, w_k ∈ R^d is the weight parameter, and b_k ∈ R is the bias parameter for class k ∈ {1, . . . , K}. We added an ℓ2-regularization term, with the regularization parameter chosen from {10^{-4}, 10^{-3}, . . . , 10^4}. Adam [15] was used for optimization with 5,000 iterations, with mini-batch size 100. We reported the test accuracy of the model with the best validation score out of all iterations. All experiments were carried out with Chainer [30].
We reported means and standard deviations of the classification accuracy over five trials in Table 1.
From the results, we can see that the performance of all four methods deteriorates as the number
of classes increases. This is intuitive because supervised information that complementary labels
contain becomes weaker with more classes.
The table also shows that there is no significant difference in classification accuracy among the four
losses. Since the PC formulation is regarded as a more direct approach for classification [31] (it
takes the sign of the difference of the classifiers, instead of the sign of each classifier as in OVA)
and the sigmoid loss is smooth, we use PC with the sigmoid loss as a representative of our proposed
method in the following experiments.
6.2
Benchmark experiments
Next, we compare our proposed method, PC with the sigmoid loss (PC/S), with two baseline methods. The first baseline is one of the state-of-the-art partial label (PL) methods [5] with the squared hinge loss:5

    \ell(z) = \big(\max(0, 1-z)\big)^2.
The second baseline is a multi-label (ML) method [3], where every complementary label ȳ is translated into a negative label for class ȳ and positive labels for the other K − 1 classes. This yields the following loss:

    L_{\mathrm{ML}}(f(x), \bar{y}) = \sum_{y \neq \bar{y}} \ell(g_y(x)) + \ell(-g_{\bar{y}}(x)),

where we used the same sigmoid loss as the proposed method for ℓ. We used a one-hidden-layer neural network (d-3-1) with rectified linear units (ReLU) [24] as activation functions, and weight-decay candidates were chosen from {10^{-7}, 10^{-4}, 10^{-1}}. Standardization, validation and optimization details follow the previous experiments.
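For comparison with the sketches in Section 3, the multi-label baseline loss can be written as:

```python
import numpy as np

def L_ml(g, ybar, loss):
    """Multi-label baseline loss: class ybar gets a negative label and
    every other class a positive one (a sketch for scores g of shape (K,))."""
    return loss(np.delete(g, ybar)).sum() + loss(-g[ybar])
```

Up to the 1/(K − 1) normalization of the positive terms, this coincides with the complementary OVA loss (9).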
We evaluated the classification performance on the following benchmark datasets: WAVEFORM1,
WAVEFORM2, SATIMAGE, PENDIGITS, DRIVE, LETTER, and USPS. USPS can be downloaded from the website of the late Sam Roweis6 , and all other datasets can be downloaded from the
UCI machine learning repository7 . We tested several different settings of class labels, with equal
number of data in each class.
5 We decided to use the squared hinge loss (which is convex) here since it was reported to work well in the original paper [5].
6 See http://cs.nyu.edu/~roweis/data.html.
7 See http://archive.ics.uci.edu/ml/.
Table 2: Means and standard deviations of classification accuracy over 20 trials in percentage. "PC/S" is the proposed method for the pairwise-comparison formulation with the sigmoid loss, "PL" is the partial label method with the squared hinge loss, and "ML" is the multi-label method with the sigmoid loss. Best and equivalent methods (with 5% t-test) are highlighted in boldface. "Class" denotes the class labels used for the experiment and "Dim" denotes the dimensionality d of patterns to be classified. "# train" denotes the total number of training and validation samples in each class. "# test" denotes the number of test samples in each class.
Dataset     Class    Dim  # train  # test  PC/S        PL          ML
WAVEFORM1   1–3      21   1226     398     85.8(0.5)   85.7(0.9)   79.3(4.8)
WAVEFORM2   1–3      40   1227     408     84.7(1.3)   84.6(0.8)   74.9(5.2)
SATIMAGE    1–7      36   415      211     68.7(5.4)   60.7(3.7)   33.6(6.2)
PENDIGITS   1–5      16   719      336     87.0(2.9)   76.2(3.3)   44.7(9.6)
            6–10          719      335     78.4(4.6)   71.1(3.3)   38.4(9.6)
            even #        719      336     90.8(2.4)   76.8(1.6)   43.8(5.1)
            odd #         719      335     76.0(5.4)   67.4(2.6)   40.2(8.0)
            1–10          719      335     38.0(4.3)   33.2(3.8)   16.1(4.6)
DRIVE       1–5      48   3955     1326    89.1(4.0)   77.7(1.5)   31.1(3.5)
            6–10          3923     1313    88.8(1.8)   78.5(2.6)   30.4(7.2)
            even #        3925     1283    81.8(3.4)   63.9(1.8)   29.7(6.3)
            odd #         3939     1278    85.4(4.2)   74.9(3.2)   27.6(5.8)
            1–10          3925     1269    40.8(4.3)   32.0(4.1)   12.7(3.1)
LETTER      1–5      16   565      171     79.7(5.3)   75.1(4.4)   28.3(10.4)
            6–10          550      178     76.2(6.2)   66.8(2.5)   34.0(6.9)
            11–15         556      177     78.3(4.1)   67.4(3.3)   28.6(5.0)
            16–20         550      184     77.2(3.2)   68.4(2.1)   32.7(6.4)
            21–25         585      167     80.4(4.2)   75.1(1.9)   32.0(5.7)
            1–25          550      167     5.1(2.1)    5.0(1.0)    5.2(1.1)
USPS        1–5      256  652      166     79.1(3.1)   70.3(3.2)   44.4(8.9)
            6–10          542      147     69.5(6.5)   66.1(2.4)   37.3(8.8)
            even #        556      147     67.4(5.4)   66.2(2.3)   35.7(6.6)
            odd #         542      147     77.5(4.5)   69.3(3.1)   36.6(7.5)
            1–10          542      127     30.7(4.4)   26.0(3.5)   13.3(5.4)
In Table 2, we summarized the specifications of the datasets and reported the means and standard deviations of the classification accuracy over 20 trials. From the results, we can see that the proposed method is either comparable to or better than the baseline methods on many of the datasets.
6.3 Combination of ordinary and complementary labels
Finally, we demonstrate the usefulness of combining ordinarily and complementarily labeled data. We used (16), with hyperparameter α fixed at 1/2 for simplicity. We divided our training dataset by a 1 : (K − 1) ratio, where one subset was labeled ordinarily while the other was labeled complementarily (footnote 8). From the training dataset, we left out 25% of the data for validating hyperparameters based on the zero-one loss version of (16). Other details such as standardization, the model and optimization, and weight-decay candidates follow the previous experiments.
We compared three methods: the ordinary label (OL) method corresponding to α = 1, the complementary label (CL) method corresponding to α = 0, and the combination (OL & CL) method with α = 1/2. The PC and sigmoid losses were commonly used for all methods.
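Although (16) itself appears earlier in the paper, the combination is, under the assumption made in this sketch, a convex mixture of the two risk estimators, with α = 1/2 as in the experiments:

    def combined_risk(risk_ol, risk_cl, alpha=0.5):
        """Assumed form of (16): alpha-weighted ordinary-label risk plus
        (1 - alpha)-weighted complementary-label risk."""
        return alpha * risk_ol + (1.0 - alpha) * risk_cl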
We reported the means and standard deviations of the classification accuracy over 10 trials in Table 3. From the results, we can see that OL & CL tends to outperform both OL and CL, demonstrating the usefulness of combining ordinarily and complementarily labeled data.
8. We used K − 1 times more complementarily labeled data than ordinarily labeled data, since a single ordinary label corresponds to K − 1 complementary labels.
Table 3: Means and standard deviations of classification accuracy over 10 trials in percentage. “OL” is the ordinary label method, “CL” is the complementary label method, and “OL & CL” is a combination method that uses both ordinarily and complementarily labeled data. Best and equivalent methods are highlighted in boldface. “Class” denotes the class labels used for the experiment and “Dim” denotes the dimensionality d of patterns to be classified. “# train” denotes the number of ordinarily/complementarily labeled data for training and validation in each class. “# test” denotes the number of test data in each class.
Dataset     Class    Dim  # train    # test  OL (α=1)    CL (α=0)    OL & CL (α=1/2)
WAVEFORM1   1–3      21   413/826    408     85.3(0.8)   86.0(0.4)   86.9(0.5)
WAVEFORM2   1–3      40   411/821    411     82.7(1.3)   82.0(1.7)   84.7(0.6)
SATIMAGE    1–7      36   69/346     211     74.9(4.9)   70.1(5.6)   81.2(1.1)
PENDIGITS   1–5      16   144/575    336     91.3(2.1)   84.7(3.2)   93.1(2.0)
            6–10          144/575    335     86.3(3.5)   78.3(6.2)   87.8(2.8)
            even #        144/575    336     94.3(1.7)   91.0(4.3)   95.8(0.6)
            odd #         144/575    335     85.6(2.0)   75.9(3.1)   86.9(1.1)
            1–10          72/647     335     61.7(4.3)   41.1(5.7)   66.9(2.0)
DRIVE       1–5      48   780/3121   1305    92.1(2.6)   89.0(2.1)   94.2(1.0)
            6–10          795/3180   1290    87.0(3.0)   86.5(3.1)   89.5(2.1)
            even #        657/3284   1314    91.4(2.9)   81.8(4.6)   91.8(3.3)
            odd #         790/3161   1255    91.1(1.5)   86.7(2.9)   93.4(0.5)
            1–10          397/3570   1292    75.2(2.8)   40.5(7.2)   77.6(2.2)
LETTER      1–5      16   113/452    171     85.2(1.3)   77.2(6.1)   89.5(1.6)
            6–10          110/440    178     81.0(1.7)   77.6(3.7)   84.6(1.0)
            11–15         111/445    177     81.1(2.7)   76.0(3.2)   87.3(1.6)
            16–20         110/440    184     81.3(1.8)   77.9(3.1)   84.7(2.0)
            21–25         117/468    167     86.8(2.7)   81.2(3.4)   91.1(1.0)
            1–25          22/528     167     11.9(1.7)   6.5(1.7)    31.0(1.7)
USPS        1–5      256  130/522    166     83.8(1.7)   76.5(5.3)   89.5(1.3)
            6–10          108/434    147     79.2(2.1)   67.6(4.3)   85.5(2.4)
            even #        108/434    166     79.6(2.7)   67.4(4.4)   84.8(1.4)
            odd #         111/445    147     82.7(1.9)   72.9(6.2)   87.3(2.2)
            1–10          54/488     147     43.7(2.6)   28.5(3.6)   59.3(2.2)

7 Conclusions
We proposed a novel problem setting called learning from complementary labels, and showed that an unbiased estimator of the classification risk can be obtained only from complementarily labeled data, if the loss function satisfies a certain symmetric condition. Our risk estimator can easily be minimized by any stochastic optimization algorithm such as Adam [15], allowing large-scale training. We theoretically established estimation error bounds for the proposed method, and proved that the proposed method achieves the optimal parametric rate. We further showed that our proposed complementary classification can be easily combined with ordinary classification. Finally, we experimentally demonstrated the usefulness of the proposed methods.
The formulation of learning from complementary labels may also be useful in the context of privacy-aware machine learning [10]: a subject may need to answer private questions, such as in psychological counseling, which can make him/her hesitant to answer directly. In such a situation, providing a complementary label, i.e., one of the incorrect answers to the question, would be mentally less demanding. We will investigate this issue in the future.
It is noteworthy that the symmetric condition (11), which the loss should satisfy in our complementary classification framework, also appears in other weakly supervised learning formulations, e.g., in positive-unlabeled learning [8]. It would be interesting to more closely investigate the role of this symmetric condition to gain further insight into these different weakly supervised learning problems.
Acknowledgements
GN and MS were supported by JST CREST JPMJCR1403. We thank Ikko Yamane for the helpful
discussions.
References
[1] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11:2973–3009, 2010.
[3] M. R. Boutell, J. Luo, X. Shen, and C. M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757–1771, 2004.
[4] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
[5] T. Cour, B. Sapp, and B. Taskar. Learning from partial labels. Journal of Machine Learning Research, 12:1501–1536, 2011.
[6] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. In ICML, 2007.
[7] F. Denis. PAC learning from positive statistical queries. In ALT, 1998.
[8] M. C. du Plessis, G. Niu, and M. Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, 2014.
[9] M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. In ICML, 2015.
[10] C. Dwork. Differential privacy: A survey of results. In TAMC, 2008.
[11] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In KDD, 2008.
[12] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2004.
[13] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. In NIPS, 2004.
[14] J. Howe. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. Crown Publishing Group, 2009.
[15] D. P. Kingma and J. L. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[16] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
[17] R. Kiryo, G. Niu, M. C. du Plessis, and M. Sugiyama. Positive-unlabeled learning with non-negative risk estimator. In NIPS, 2017.
[18] S. Laine and T. Aila. Temporal ensembling for semi-supervised learning. In ICLR, 2017.
[19] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
[20] Y.-F. Li and Z.-H. Zhou. Towards making unlabeled data never hurt. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(1):175–188, 2015.
[21] G. Mann and A. McCallum. Simple, robust, scalable semi-supervised learning via expectation regularization. In ICML, 2007.
[22] C. McDiarmid. On the method of bounded differences. In J. Siemons, editor, Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
[23] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[24] V. Nair and G. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[25] G. Niu, B. Dai, M. Yamada, and M. Sugiyama. Information-theoretic semi-supervised metric learning via entropy regularization. Neural Computation, 26(8):1717–1762, 2014.
[26] G. Niu, M. C. du Plessis, T. Sakai, Y. Ma, and M. Sugiyama. Theoretical comparisons of positive-unlabeled learning against positive-negative learning. In NIPS, 2016.
[27] G. Niu, W. Jitkrittum, B. Dai, H. Hachiya, and M. Sugiyama. Squared-loss mutual information regularization: A novel information-theoretic approach to semi-supervised learning. In ICML, 2013.
[28] T. Sakai, M. C. du Plessis, G. Niu, and M. Sugiyama. Semi-supervised classification based on classification from positive and unlabeled data. In ICML, 2017.
[29] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2001.
[30] S. Tokui, K. Oono, S. Hido, and J. Clayton. Chainer: A next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems in NIPS, 2015.
[31] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, 1998.
[32] G. Ward, T. Hastie, S. Barry, J. Elith, and J. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
[33] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
[34] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS, 2002.
[35] Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In ICML, 2016.
[36] T. Zhang. Statistical analysis of some multi-category large margin classification methods. Journal of Machine Learning Research, 5:1225–1251, 2004.
[37] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2003.
[38] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
| 7147 |@word trial:6 private:1 version:2 kulis:1 open:1 hu:1 contraction:1 incurs:2 score:1 selecting:3 luo:1 activation:1 goldberger:1 dx:3 must:1 written:1 john:1 informative:2 kdd:1 enables:1 intelligence:1 website:2 mccallum:1 yamada:1 contribute:1 denis:1 mcdiarmid:1 zhang:1 five:2 direct:2 differential:1 incorrect:3 prove:1 privacy:1 theoretically:2 jpmjcr1403:1 pairwise:3 multi:11 ol:8 salakhutdinov:2 equipped:2 becomes:1 provided:1 begin:1 bounded:1 kind:2 minimizes:2 guarantee:1 temporal:1 mitigate:1 masashi:1 collecting:3 every:1 classifier:11 unit:3 positive:11 local:1 tends:1 niu:7 noteworthy:1 pendigits:3 studied:1 limited:1 decided:1 practical:4 elith:1 yj:2 testing:1 practice:1 digit:3 empirical:2 cannot:1 unlabeled:9 risk:19 context:1 equivalent:4 map:1 demonstrated:1 independently:2 convex:4 survey:2 formulate:1 boutell:1 simplicity:1 shen:1 immediately:1 m2:9 estimator:5 insight:1 regarded:4 crowdsourcing:3 variation:1 hurt:1 suppose:1 user:1 us:1 elkan:1 lova:7 expensive:3 particularly:1 approximated:3 recognition:1 labeled:27 ep:13 role:1 taskar:1 quicker:1 solved:1 revisiting:1 russell:1 complexity:5 asked:1 trained:2 weakly:3 usps:4 translated:2 easily:4 joint:1 riken:1 train:4 jain:1 effective:1 query:1 labeling:1 hyper:2 choosing:2 crowd:1 larger:1 solve:1 ramp:7 otherwise:1 niyogi:1 ward:1 g1:13 jointly:1 crowdworkers:2 noisy:1 highlighted:3 ledoux:1 propose:2 product:1 uci:2 combining:2 roweis:3 intuitive:1 kh:1 normalize:1 convergence:1 cour:1 rademacher:6 adam:3 derive:2 blitzer:1 ac:1 nearest:1 odd:6 op:4 eq:1 strong:1 predicted:1 c:2 hova:6 closely:1 tokyo:5 correct:6 subsequently:1 stochastic:2 jst:1 mann:1 pl:3 ic:1 normal:1 driving:1 achieves:2 noto:1 purpose:1 estimation:8 label:67 him:1 hpc:4 minimization:4 mit:3 gaussian:1 zhou:2 takashi:1 derived:1 plessis:5 baseline:4 rostamizadeh:1 talwalkar:1 dim:4 helpful:1 typically:1 hidden:1 her:1 arg:5 classification:44 among:2 html:2 issue:1 art:1 mutual:1 equal:1 field:1 never:1 beach:1 ng:1 y6:8 icml:8 future:2 minimized:1 belkin:1 randomly:3 detection:1 highly:3 possibility:1 investigate:2 dwork:1 laborious:2 loss1:1 extreme:1 pc:13 accurate:1 partial:5 unless:1 biometrics:1 plugged:1 theoretical:2 psychological:1 gn:1 ordinary:22 deviation:7 subset:1 uniform:1 usefulness:6 reported:5 answer:4 combined:4 st:1 density:3 tokui:1 lee:1 together:1 squared:4 management:1 choose:2 li:1 japan:3 gy:11 summarized:1 wk:2 blanchard:1 satisfy:4 combinatorics:1 hu2:1 later:2 lot:1 sup:6 xing:1 contribution:1 ni:2 accuracy:8 convolutional:1 variance:1 yield:1 yes:2 lkopf:3 asset:1 rectified:2 drive:3 classified:2 hachiya:1 definition:1 against:1 sugi:1 naturally:1 proof:5 sampled:2 gain:1 dataset:7 proved:1 ask:1 jitkrittum:1 dimensionality:2 sapp:1 hilbert:1 carefully:1 appears:1 supervised:18 follow:2 improved:1 formulation:6 evaluated:1 smola:1 talagrand:2 hand:2 navin:1 semisupervised:1 usa:1 contain:1 unbiased:5 y2:3 true:1 brown:1 regularization:6 equality:2 symmetric:9 dhillon:1 davis:1 m:4 theoretic:3 demonstrate:4 bring:1 crown:1 wise:1 harmonic:1 novel:3 sigmoid:14 mentally:1 empirically:2 cohen:1 jp:1 banach:1 belong:5 m1:11 significant:1 cambridge:1 rd:5 tuning:1 consistency:1 similarly:1 sugiyama:7 ishida:1 kipf:1 chapelle:1 specification:1 showed:2 belongs:3 scenario:1 certain:2 binary:7 additional:2 dai:2 novelty:1 barry:1 semi:11 zien:1 multiple:2 smooth:1 long:3 divided:1 equally:1 hido:1 scalable:1 expectation:3 metric:4 iteration:2 normalization:1 kernel:1 achieved:1 hesitate:1 addition:1 want:1 howe:1 
source:1 sch:3 rest:2 archive:1 subject:1 validating:2 lafferty:1 siemons:1 jordan:1 leverage:1 presence:1 yang:1 bengio:1 embeddings:1 xj:2 relu:1 hastie:1 gbk:4 inner:1 bottleneck:1 whether:1 expression:1 motivated:1 interpolates:1 deep:1 useful:2 oono:1 extensively:1 category:1 http:3 specifies:4 chainer:2 exist:1 percentage:3 outperform:1 shifted:1 sign:2 estimated:1 deteriorates:1 hyperparameter:2 express:1 group:1 four:3 loss5:1 demonstrating:1 drawn:4 utilize:1 graph:2 laine:1 letter:3 appendix:3 comparable:1 bound:9 layer:1 gang:2 constraint:6 incorporation:1 scene:1 bousquet:1 answered:1 min:3 complementarily:16 according:1 combination:6 smaller:1 son:1 sam:2 aila:1 em:1 making:1 explained:1 restricted:1 lml:1 ln:4 needed:1 end:1 available:1 apply:1 neighbourhood:1 batch:1 weinberger:1 original:1 denotes:12 standardized:1 ensure:1 clustering:1 publishing:1 hinge:3 giving:1 ghahramani:1 establish:3 question:3 already:2 added:1 parametric:4 costly:2 iclr:3 cw:3 thank:1 distance:2 manifold:1 collected:1 reason:1 boldface:3 mini:1 providing:3 minimizing:2 ratio:1 setup:2 gk:10 negative:4 ordinarily:11 ba:1 implementation:1 boltzmann:1 unknown:3 allowing:1 datasets:4 benchmark:2 yamane:1 situation:3 hinton:2 rn:18 leathwick:1 bk:2 clayton:1 namely:1 kl:1 lal:1 established:1 kingma:1 nip:9 kwkh:1 usually:1 pattern:21 scott:1 lpc:7 including:1 max:5 power:1 critical:1 suitable:1 treated:1 natural:1 demanding:1 business:1 isoperimetry:1 zhu:1 improve:1 carried:1 gy0:2 review:1 geometric:1 acknowledgement:1 loss:50 interesting:1 generation:1 proven:1 versus:2 triple:1 validation:4 foundation:1 downloaded:3 consistent:3 standardization:2 editor:2 grandvalet:1 pi:1 changed:1 mohri:1 supported:1 last:1 transpose:1 ovum:7 bias:1 weaker:1 side:1 saul:1 neighbor:1 fifth:1 world:1 valid:1 sakai:2 fb:11 commonly:1 welling:1 transaction:1 crest:1 ml:4 global:1 assumed:1 consuming:1 xi:11 decade:1 why:1 table:7 learn:3 robust:1 ca:1 sra:1 du:5 necessarily:1 cl:17 hyperparameters:1 complementary:42 ensembling:1 representative:1 g2g:2 fashion:2 wiley:1 candidate:7 late:2 hw:1 theorem:7 showing:1 pac:1 list:3 nyu:2 decay:2 alt:1 naively:2 workshop:1 ih:2 mnist:2 vapnik:1 margin:2 easier:2 entropy:2 ez:1 expressed:1 sindhwani:1 springer:1 corresponds:1 minimizer:2 satisfies:7 ma:1 nair:1 weston:1 goal:2 towards:1 satimage:3 lipschitz:1 experimentally:4 specifically:1 lemma:8 total:1 called:1 e:1 evaluate:1 tested:1 ex:2 |
6,796 | 7,148 | Online control of the false discovery rate with
decaying memory
Aaditya Ramdas
Fanny Yang Martin J. Wainwright Michael I. Jordan
University of California, Berkeley
{aramdas, fanny-yang, wainwrig, jordan} @berkeley.edu
Abstract
In the online multiple testing problem, p-values corresponding to different null hypotheses are observed one by one, and the decision of whether or not to reject the current hypothesis must be made immediately, after which the next p-value is observed. Alpha-investing algorithms to control the false discovery rate (FDR), formulated by Foster and Stine, have been generalized and applied to many settings, including quality-preserving databases in science and multiple A/B or multi-armed bandit tests for internet commerce. This paper improves the class of generalized alpha-investing algorithms (GAI) in four ways: (a) we show how to uniformly improve the power of the entire class of monotone GAI procedures by awarding more alpha-wealth for each rejection, giving a win-win resolution to a recent dilemma raised by Javanmard and Montanari, (b) we demonstrate how to incorporate prior weights to indicate domain knowledge of which hypotheses are likely to be non-null, (c) we allow for differing penalties for false discoveries to indicate that some hypotheses may be more important than others, (d) we define a new quantity called the decaying memory false discovery rate (mem-FDR) that may be more meaningful for truly temporal applications, and which alleviates problems that we describe and refer to as “piggybacking” and “alpha-death.” Our GAI++ algorithms incorporate all four generalizations simultaneously, and reduce to more powerful variants of earlier algorithms when the weights and decay are all set to unity. Finally, we also describe a simple method to derive new online FDR rules based on an estimated false discovery proportion.
1 Introduction
The problem of multiple comparisons was first recognized in the seminal monograph by Tukey [12]:
simply stated, given a collection of multiple hypotheses to be tested, the goal is to distinguish between the nulls and non-nulls, with suitable control on different types of error. We are given access
to one p-value for each hypothesis, which we use to decide which subset of hypotheses to reject, effectively proclaiming the rejected hypotheses as being non-null. The rejected hypotheses are called discoveries, and the subset of these that were truly null (and hence mistakenly rejected) are called false discoveries. In this work, we measure a method's performance using the false discovery rate (FDR) [2], defined as the expected ratio of false discoveries to total discoveries. Specifically, we require that any procedure must guarantee that the FDR is bounded by a pre-specified constant α.
The traditional form of multiple testing is offline in nature, meaning that an algorithm testing N
hypotheses receives the entire batch of p-values {P1 , . . . , PN } at one time instant. In the online
version of the problem, we do not know how many hypotheses we are testing in advance; instead, a
possibly infinite sequence of p-values appear one by one, and a decision about rejecting the null must
be made before the next p-value is received. There are at least two different motivating justifications
for considering the online setting:
M1. We may have the entire batch of p-values available at our disposal from the outset, but we may
nevertheless choose to process the p-values one by one in a particular order. Indeed, if one
can use prior knowledge to ensure that non-nulls typically appear earlier in the ordering, then
carefully designed online procedures could result in more discoveries than offline algorithms
(that operate without prior knowledge) such as the classical Benjamini-Hochberg algorithm [2],
while having the same guarantee on FDR control. This motivation underlies one of the original
online multiple testing paper, namely that of Foster and Stine [5].
M2. We may genuinely conduct a sequence of tests one by one, where both the choice of the next
null hypothesis and the level at which it is tested may depend on the results of the previous tests.
Motivating applications include the desire to provide anytime guarantees for (i) internet companies running a sequence of A/B tests over time [9], (ii) pharmaceutical companies conducting a
sequence of clinical trials using multi-armed bandits [13], or (iii) quality-preserving databases
in which different research teams test different hypotheses on the same data over time [1].
The algorithms developed in this paper apply to both settings, with emphasis on motivation M2.
Let us first reiterate the need for corrections when testing a sequence of hypotheses in the online
setting, even when all the p-values are independent. If each hypothesis i is tested independently
of the total number of tests either performed before it or to be performed after it, then we have no
control over the number of false discoveries made over time. Indeed, if our test for every P_i takes the form 1{P_i ≤ α} for some fixed α, then, while the type I error for any individual test is bounded by α, the set of discoveries could have arbitrarily poor FDR control. For example, under the “global null” where every hypothesis is truly null, as long as the number of tests N is large and the null p-values are uniform, this method will make at least one rejection with high probability (w.h.p.), and since in this setting every discovery is a false discovery, w.h.p. the FDR will equal one.
A natural alternative that takes multiplicity into account is the Bonferroni correction. If one knew the total number N of tests to be performed, the decision rule 1{P_i ≤ α/N} for each i ∈ {1, . . . , N} controls the probability of even a single false discovery (a quantity known as the familywise error rate, or FWER) at level α, as can be seen by applying the union bound. The natural extension of this solution to an unknown and potentially infinite number of tests is called alpha-spending. Specifically, we choose any sequence of constants {α_i}_{i∈N} such that Σ_i α_i ≤ α, and on receiving P_i, our decision is simply 1{P_i ≤ α_i}. However, such methods typically make very few discoveries (meaning that they have very low power) when the number of tests is large, because they must divide their error budget of α, also called alpha-wealth, among a large number of tests.
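For illustration, the alpha-spending rule is only a few lines; the particular summable sequence α_t = 6α/(π²t²) below is one convenient choice, not one prescribed by the text.

    import numpy as np

    def alpha_spending(pvals, alpha=0.05):
        """Online alpha-spending: test P_t at level alpha_t, with sum_t alpha_t <= alpha."""
        decisions = []
        for t, p in enumerate(pvals, start=1):
            alpha_t = alpha * 6.0 / (np.pi ** 2 * t ** 2)  # any sequence summing to <= alpha works
            decisions.append(int(p <= alpha_t))
        return decisions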
Since the FDR is less stringent than FWER, procedures that guarantee FDR control are generally
more powerful, and often far more powerful, than those controlling FWER. This fact has led to
the wide adoption of FDR as a de-facto standard for offline multiple testing (note, e.g., that the
Benjamini-Hochberg paper [2] currently has over 40,000 citations).
Foster and Stine [5] designed the first online alpha-investing procedures that use and earn alpha-wealth in order to control a modified definition of FDR. Aharoni and Rosset [1] further extended this to a class of generalized alpha-investing (GAI) methods, but once more for the modified FDR. It was only recently that Javanmard and Montanari [9] demonstrated that monotone GAI algorithms, appropriately parameterized, can control the (unmodified) FDR for independent p-values. It is this last work that our paper directly improves upon and generalizes; however, as we summarize below, many of our modifications and generalizations are immediately applicable to all previous algorithms.
Contributions and outline. Instead of presenting the most general and improved algorithms immediately, we choose to present results in a bottom-up fashion, introducing one new concept at a time
so as to lighten the symbolic load on the reader. For this purpose, we set up the problem formally
in Section 2. Our contributions are organized as follows:
1. Power. In Section 3, we introduce the generalized alpha-investing (GAI) procedures, and
demonstrate how to uniformly improve the power of monotone GAI procedures that control
FDR for independent p-values, resulting in a win-win resolution to a dilemma posed by Javanmard and Montanari [9]. This improvement is achieved by a somewhat subtle modification that
allows the algorithm to reward more alpha-wealth at every rejection but the first. We refer to our
algorithms as improved generalized alpha-investing (GAI++) procedures, and provide intuition
for why they work through a general super-uniformity lemma (see Lemma 1 in Section 3.2). We
also provide an alternate way of deriving online FDR procedures, by defining and bounding a natural estimator FDP-hat of the false discovery proportion.
2. Weights. In Section 5, we demonstrate how to incorporate certain types of prior information
about the different hypotheses. For example, we may have a prior weight for each hypothesis,
indicating whether it is more or less likely to be null. Additionally, we may have a different
penalty weight for each hypothesis, indicating differing importance of hypotheses. These prior
and penalty weights have been incorporated successfully into offline procedures [3, 6, 11]. In the
online setting, however, there are some technical challenges that prevent immediate application
of these offline procedures. For example, in the offline setting all the weights are constants,
but in the online setting, we allow them to be random variables that depend on the sequence
of past rejections. Further, in the offline setting all provided weights are renormalized to have
an empirical mean of one, but in the truly online setting (motivation M2) we do not know the
sequence of hypotheses or their random weights in advance, and hence we cannot perform any
such renormalization. We clearly outline and handle such issues and design novel prior- and/or
penalty-weighted GAI++ algorithms that control the penalty-weighted FDR at any time. This
may be seen as an online analog of doubly-weighted procedures for the offline setting [4, 11].
Setting the weights to unity recovers the original class of GAI++ procedures.
3. Decaying memory. In Section 6, we discuss some implications of the fact that existing algorithms have an infinite memory and treat all past rejections equally, no matter when they occurred. This causes phenomena that we term “piggybacking” (a string of bad decisions, riding on past earned alpha-wealth) and “alpha-death” (a permanent end to decision-making when the alpha-wealth is essentially zero). These phenomena may be desirable or acceptable under motivation M1 when dealing with batch problems, but are generally undesirable under motivation M2. To address these issues, we propose a new error metric called the decaying memory false discovery rate, abbreviated as mem-FDR, that we view as better suited to multiple testing for truly temporal problems. Briefly, mem-FDR pays more attention to recent discoveries by introducing a user-defined discount factor 0 < δ ≤ 1 into the definition of FDR. We demonstrate how to design GAI++ procedures that control online mem-FDR, and show that they have a stable and robust behavior over time. Using δ < 1 allows these procedures to slowly forget their past decisions (reducing piggybacking), or to temporarily “abstain” from decision-making (allowing rebirth after alpha-death). Instantiating δ = 1 recovers the class of GAI++ procedures.
We note that the generalizations to incorporate weights and decaying memory are entirely orthogonal
to the improvements that we introduce to yield GAI++ procedures, and hence these ideas immediately extend to other GAI procedures for non-independent p-values. We also describe simulations
involving several of the aforementioned generalizations in Appendix C.
2 Problem Setup
At time t = 0, before the p-values begin to appear, we fix the level α at which we wish to control the FDR over time. At each time step t = 1, 2, . . . , we observe a p-value P_t corresponding to some null hypothesis H_t, and we must immediately decide whether to reject H_t or not. If the null hypothesis is true, p-values are stochastically larger than the uniform distribution (“super-uniform,” for short), formulated as follows: if H^0 is the set of true null hypotheses, then for any null H_t ∈ H^0, we have

    Pr{P_t ≤ x} ≤ x for any x ∈ [0, 1].    (1)

We do not make assumptions on the marginal distribution of the p-values for hypotheses that are non-null / false. Although they can be arbitrary, it is useful to think of them as being stochastically smaller than the uniform distribution, since only then do they carry signal that differentiates them from nulls. Our task is to design threshold levels α_t according to which we define the rejection decision as R_t = 1{P_t ≤ α_t}, where 1{·} is the indicator function. Since the aim is to control the FDR at the fixed level α at any time t, each α_t must be set according to the past decisions of the algorithm, meaning that α_t = α_t(R_1, . . . , R_{t−1}). Note that, in accordance with past work, we require that α_t does not directly depend on the observed p-values but only on past rejections. Formally, we define the sigma-field at time t as F^t = σ(R_1, . . . , R_t), and insist that

    α_t ∈ F^{t−1}  ⟺  α_t is F^{t−1}-measurable  ⟺  α_t is predictable.    (2)

As studied by Javanmard and Montanari [8], and as is predominantly the case in offline multiple testing, we consider monotone decision rules, where α_t is a coordinatewise nondecreasing function:

    if R̃_i ≥ R_i for all i ≤ t − 1, then α_t(R̃_1, . . . , R̃_{t−1}) ≥ α_t(R_1, . . . , R_{t−1}).    (3)
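A minimal sketch of the online protocol makes the predictability requirement (2) concrete: the level for the t-th test is computed from past decisions alone, before P_t is revealed (the names alpha_rule and online_testing are illustrative, not from the paper).

    def online_testing(pvals, alpha_rule):
        """Generic online protocol: alpha_t depends only on (R_1, ..., R_{t-1})."""
        decisions = []
        for p in pvals:
            alpha_t = alpha_rule(tuple(decisions))  # F^{t-1}-measurable, hence predictable
            decisions.append(int(p <= alpha_t))
        return decisions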
Existing online multiple testing algorithms control some variant of the FDR over time, as we now define. At any time T, let R(T) = Σ_{t=1}^T R_t be the total number of rejections/discoveries made by the algorithm so far, and let V(T) = Σ_{t∈H^0} R_t be the number of false rejections/discoveries. Then, the false discovery proportion and rate are defined as

    FDP(T) := V(T) /̇ R(T)  and  FDR(T) = E[ V(T) /̇ R(T) ],

where the dotted-fraction notation is shorthand for a /̇ b = a/(b ∨ 1). Two variants of the FDR studied in earlier online FDR works [5, 8] are the marginal FDR, given by mFDR_η(T) = E[V(T)] / (E[R(T)] + η), with a special case being mFDR(T) = E[V(T)] / E[R(T) ∨ 1], and the smoothed FDR, given by sFDR_η(T) = E[ V(T) / (R(T) + η) ]. In Appendix A, we summarize a variety of algorithms and dependence assumptions considered in previous work.
3 Generalized alpha-investing (GAI) rules
The generalized class of alpha-investing rules [1] essentially covers most rules that have been proposed thus far, and includes a wide range of algorithms with different behaviors. In this section, we present a uniform improvement to monotone GAI algorithms for FDR control under independence.

Any algorithm of the GAI type begins with an alpha-wealth of W(0) = W_0 > 0, and keeps track of the wealth W(t) available after t steps. At any time t, a part of this alpha-wealth is used to test the t-th hypothesis at level α_t, and the wealth is immediately decreased by an amount φ_t. If the t-th hypothesis is rejected, that is if R_t := 1{P_t ≤ α_t} = 1, then we award extra wealth equaling an amount ψ_t. Recalling the definition F^t := σ(R_1, . . . , R_t), we require that α_t, φ_t, ψ_t ∈ F^{t−1}, meaning they are predictable, and W(t) ∈ F^t, with the explicit update W(t) := W(t−1) − φ_t + R_t ψ_t. The parameter W_0 and the sequences α_t, φ_t, ψ_t are all user-defined. They must be chosen so that the total wealth W(t) is always non-negative, and hence that φ_t ≤ W(t−1). If the wealth ever equals zero, the procedure is not allowed to reject any more hypotheses, since it has to choose α_t equal to zero from then on. The only real restriction on α_t, φ_t, ψ_t arises from the goal of controlling FDR. This condition takes a natural form: whenever a rejection takes place, we cannot be allowed to award an arbitrary amount of wealth. Formally, for some user-defined constant B_0, we must have

    ψ_t ≤ min{ φ_t + B_0, φ_t/α_t + B_0 − 1 }.    (4)
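One step of a generic GAI rule then looks as follows; this is a sketch that simply enforces the wealth constraint and condition (4) before updating.

    def gai_step(wealth, alpha_t, phi_t, psi_t, p_t, B0):
        """One GAI update: spend phi_t, earn psi_t on rejection (sketch)."""
        assert phi_t <= wealth, "penalty may not exceed current wealth"
        assert psi_t <= min(phi_t + B0, phi_t / alpha_t + B0 - 1), "condition (4)"
        R_t = int(p_t <= alpha_t)
        return R_t, wealth - phi_t + R_t * psi_t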
Many GAI rules are not monotone (cf. equation (3)), meaning that α_t is not always a coordinatewise nondecreasing function of R_1, . . . , R_{t−1}, as mentioned in the last column of Table 2 (Appendix A). Table 1 has some examples, where τ_k := min{s ∈ N : Σ_{t=1}^s R_t = k} is the time of the k-th rejection.
Name                             Parameters          Level α_t                               Penalty φ_t    Reward ψ_t
[5] Alpha-investing (AI)         —                   φ_t/(1 + φ_t)                           ≤ W(t−1)       φ_t + B_0
[1] Alpha-spending with rewards  κ ≤ 1, c            c W(t−1)                                κ W(t−1)       satisfy (4)
[9] LORD’17                      Σ_{i=1}^∞ γ_i = 1   γ_t W_0 + B_0 Σ_{j:τ_j<t} γ_{t−τ_j}     α_t            B_0 = α − W_0

Table 1: Examples of GAI rules.

3.1 Improved monotone GAI rules (GAI++) under independence
In their initial work on GAI rules, Aharoni and Rosset [1] did not incorporate an explicit parameter B_0; rather, they proved that choosing W_0 = B_0 = α suffices for mFDR_1 control. In subsequent work, Javanmard and Montanari [9] introduced the parameter B_0 and proved that for monotone GAI rules, the same choice W_0 = B_0 = α suffices for sFDR_1 control, whereas the choice B_0 = α − W_0 suffices for FDR control, with both results holding under independence. In fact, their monotone GAI rules with B_0 = α − W_0 are the only known methods that control FDR. This state of affairs leads to the following dilemma raised in their paper [9]:

    A natural question is whether, in practice, we should choose W_0, B_0 so as to guarantee FDR control (and hence set B_0 = α − W_0 ≤ α) or instead be satisfied with mFDR or sFDR control, which allow for B_0 = α and hence potentially larger statistical power.
Our first contribution is a “win-win” resolution to this dilemma: more precisely, we prove that we can choose B_0 = α while maintaining FDR control, with the small catch that at the very first rejection only, we need B_0 = α − W_0. Of course, in this case B_0 is not constant, and hence we replace it by the random variable b_t ∈ F^{t−1}, and we prove that choosing W_0, b_t such that b_t + W_0 = α for the first rejection, and simply b_t = α for every future rejection, suffices for formally proving FDR control under independence. This achieves the best of both worlds (guaranteeing FDR control, and handing out the largest possible reward of α), as posed by the above dilemma. To restate our contribution, we effectively prove that the power of monotone GAI rules can be uniformly improved without changing the FDR guarantee.
Formally, we define our improved generalized alpha-investing (GAI++) algorithm as follows. It sets W(0) = W_0 with 0 ≤ W_0 ≤ α, and chooses α_t ∈ F^{t−1} to make decisions R_t = 1{P_t ≤ α_t}, updating the wealth as W(t) = W(t−1) − φ_t + R_t ψ_t ∈ F^t, using some φ_t ≤ W(t−1) with φ_t ∈ F^{t−1} and some reward ψ_t ≤ min{φ_t + b_t, φ_t/α_t + b_t − 1} with ψ_t ∈ F^{t−1}, where

    b_t = α − W_0 when R(t−1) = 0, and b_t = α otherwise;  b_t ∈ F^{t−1}.
As an explicit example, given an infinite nonincreasing sequence of positive constants {γ_j} that sums to one, the LORD++ algorithm effectively makes the choice

    α_t = γ_t W_0 + (α − W_0) γ_{t−τ_1} + α Σ_{j: τ_j < t, τ_j ≠ τ_1} γ_{t−τ_j},    (5)

recalling that τ_j is the time of the j-th rejection. Reasonable default choices include W_0 = α/2 and γ_j = 0.0722 · log(j ∨ 2) / (j e^{√(log j)}), the latter derived in the context of testing whether a Gaussian has zero mean [9].
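The following sketch implements the LORD++ level (5) with the default choices quoted above; variable names are illustrative, and the gamma sequence is simply truncated at the horizon.

    import numpy as np

    def lord_plus_plus(pvals, alpha=0.05, W0=None):
        """Sketch of LORD++: alpha_t from (5), with the default gamma_j sequence."""
        W0 = alpha / 2.0 if W0 is None else W0
        T = len(pvals)
        j = np.arange(1, T + 1, dtype=float)
        gamma = 0.0722 * np.log(np.maximum(j, 2)) / (j * np.exp(np.sqrt(np.log(j))))
        decisions, rej_times = [], []  # rej_times holds tau_1, tau_2, ...
        for t in range(1, T + 1):
            alpha_t = gamma[t - 1] * W0
            for k, tau in enumerate(rej_times):  # gamma_{t - tau_j} terms of (5)
                bonus = (alpha - W0) if k == 0 else alpha
                alpha_t += bonus * gamma[t - tau - 1]
            R_t = int(pvals[t - 1] <= alpha_t)
            decisions.append(R_t)
            if R_t:
                rej_times.append(t)
        return decisions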
Any monotone GAI++ rule comes with the following guarantee.

Theorem 1. Any monotone GAI++ rule satisfies the bound E[ (V(T) + W(T)) /̇ R(T) ] ≤ α for all T ∈ N under independence. Since W(T) ≥ 0 for all T ∈ N, any such rule (a) controls FDR at level α under independence, and (b) has power at least as large as the corresponding GAI algorithm.
The proof of this theorem is provided in Appendix F. Note that for monotone rules, a larger alpha-wealth reward at each rejection yields possibly higher power, but never lower power, immediately implying statement (b). Consequently, we provide only a proof for statement (a) in Appendix F. For the reader interested in technical details, a key super-uniformity lemma (Lemma 1) and associated intuition for online FDR algorithms are provided in Section 3.2.
3.2 Intuition for larger rewards via a super-uniformity lemma

For the purpose of providing some intuition for why we are able to obtain larger rewards than Javanmard and Montanari [9], we present the following lemma. To set things up, recall that R_t = 1{P_t ≤ α_t} and note that α_t is F^{t−1}-measurable, being a coordinatewise nondecreasing function of R_1, . . . , R_{t−1}. Hence, the marginal super-uniformity assumption (1) immediately implies that for independent p-values, we have

    Pr( P_t ≤ α_t | F^{t−1} ) ≤ α_t,  or equivalently,  E[ 1{P_t ≤ α_t} / α_t | F^{t−1} ] ≤ 1.    (6)

Lemma 1 states that under independence, the above statement remains valid in much more generality.
Given a sequence P_1, P_2, . . . of independent p-values, define a filtration via the sigma-fields F^{i−1} := σ(R_1, . . . , R_{i−1}), where R_i := 1{P_i ≤ f_i(R_1, . . . , R_{i−1})} for some coordinatewise nondecreasing function f_i : {0, 1}^{i−1} → R. With this set-up, we have the following guarantee:

Lemma 1. Let g : {0, 1}^T → R be any coordinatewise nondecreasing function such that g(x⃗) > 0 for any vector x⃗ ≠ (0, . . . , 0). Then for any index t ≤ T such that H_t ∈ H^0, we have

    E[ 1{P_t ≤ f_t(R_1, . . . , R_{t−1})} / g(R_1, . . . , R_T) | F^{t−1} ] ≤ E[ f_t(R_1, . . . , R_{t−1}) / g(R_1, . . . , R_T) | F^{t−1} ].    (7)
This super-uniformity lemma is analogous to others used in offline multiple testing [4, 11], and will be needed in its full generality later in the paper. The proof of this lemma in Appendix E is based on a leave-one-out technique which is common in the multiple testing literature [7, 10, 11]; ours specifically generalizes a lemma in the Appendix of Javanmard and Montanari [9].

As mentioned, this lemma helps to provide some intuition for the condition on ψ_t and the unorthodox condition on b_t. Indeed, note that by definition,

    FDR(T) = E[ V(T) /̇ R(T) ] = E[ Σ_{t∈H^0} 1{P_t ≤ α_t} /̇ Σ_{t=1}^T R_t ] ≤ E[ Σ_{t∈H^0} α_t /̇ R(T) ],

where we applied Lemma 1 to the coordinatewise nondecreasing function g(R_1, . . . , R_T) = R(T). From this equation, we may infer the following: if Σ_t R_t = k, then the FDR will be bounded by α as long as the total alpha-wealth Σ_t α_t that was used for testing is smaller than kα. In other words, with every additional rejection that adds one to the denominator, the algorithm is allowed extra alpha-wealth equaling α for testing.

In order to see where this shows up in the algorithm design, assume for a moment that we choose our penalty as φ_t = α_t. Then, our condition on rewards ψ_t simply reduces to ψ_t ≤ b_t. Furthermore, since we choose b_t = α after every rejection except the first, our total earned alpha-wealth is approximately αR(T), which also upper bounds the total alpha-wealth used for testing.

The intuitive reason that b_t cannot equal α at the very first rejection can also be inferred from the above equation. Indeed, note that because of the definition of FDR, we have V(T) /̇ R(T) := V(T)/(R(T) ∨ 1), and the denominator R(T) ∨ 1 = 1 when the number of rejections equals zero or one. Therefore, the denominator only starts incrementing at the second rejection. Hence, the sum of W_0 and the first reward must be at most α, following which one may award α at every rejection. This is the central piece of intuition behind the GAI algorithm design, its improvement in this paper, and the FDR control analysis. To the best of our knowledge, this is the first explicit presentation of the intuition for online FDR control.
4 A direct method for deriving new online FDR rules

Many offline FDR procedures can be derived in terms of an estimate FDP-hat of the false discovery proportion; see Ramdas et al. [11] and references therein. The discussion in Section 3.2 suggests that it is also possible to write online FDR rules in this fashion. Indeed, given any non-negative, predictable sequence {α_t}, we propose the following definition:

    FDP-hat(t) := Σ_{j=1}^t α_j /̇ R(t).

This definition is intuitive because FDP-hat(t) approximately overestimates the unknown FDP(t):

    FDP-hat(t) ≥ Σ_{j≤t, j∈H^0} α_j /̇ R(t) ⪆ Σ_{j≤t, j∈H^0} 1{P_j ≤ α_j} /̇ R(t) = FDP(t).

A more direct way to construct new online FDR procedures is to ensure that sup_{t∈N} FDP-hat(t) ≤ α, bypassing the use of wealth, penalties and rewards in GAI. This idea is formalized below.

Theorem 2. For any predictable sequence {α_t} such that sup_{t∈N} FDP-hat(t) ≤ α, we have:
(a) If the p-values are super-uniform conditional on all past discoveries, meaning that Pr( P_j ≤ α_j | F^{j−1} ) ≤ α_j, then the associated procedure has sup_{T∈N} mFDR(T) ≤ α.
(b) If the p-values are independent and if {α_t} is monotone, then we also have sup_{T∈N} FDR(T) ≤ α.

The proof of this theorem is given in Appendix D. In our opinion, it is more transparent to verify that LORD++ controls both mFDR and FDR using Theorem 2 than using Theorem 1.
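A sketch of this direct construction: greedily release at each step the largest level that keeps FDP-hat at or below α. Whether the resulting sequence is monotone, as part (b) requires, must be checked separately; the greedy choice here is for illustration only.

    def fdp_hat_rule(pvals, alpha=0.05):
        """Spend the largest alpha_t keeping FDPhat(t) = (sum_j alpha_j)/max(R(t),1) <= alpha."""
        spent, R, decisions = 0.0, 0, []
        for p in pvals:
            alpha_t = max(0.0, alpha * max(R, 1) - spent)
            R_t = int(p <= alpha_t)
            spent += alpha_t
            R += R_t
            decisions.append(R_t)
        return decisions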
5 Incorporating prior and penalty weights

Here, we develop GAI++ algorithms that incorporate prior weights w_t, which allow the user to exploit domain knowledge about which hypotheses are more likely to be non-null, as well as penalty weights u_t to differentiate more important hypotheses from the rest. The weights must be strictly positive, predictable (meaning that w_t, u_t ∈ F^{t−1}) and monotone (in the sense of definition (3)).
Penalty weights. For many motivating applications, including internet companies running a series of A/B tests over time, or drug companies conducting a series of clinical trials over time, it is natural to assume that some tests are more important than others, in the sense that some false discoveries may have more lasting positive/negative effects than others. To incorporate this in the offline setting, Benjamini and Hochberg [3] suggested associating each hypothesis H_i with a positive penalty weight u_i. Choosing u_i > 1 indicates a more impactful or important test, while u_i < 1 means the opposite. Although algorithms exist in the offline setting that can intelligently incorporate penalty weights, no such flexibility currently exists for online FDR algorithms. With this motivation in mind, and following Benjamini and Hochberg [3], define the penalty-weighted FDR as

    FDR_u(T) := E[ V_u(T) /̇ R_u(T) ],    (8)

where V_u(T) := Σ_{t∈H^0} u_t R_t = V_u(T−1) + u_T R_T 1{T ∈ H^0} and R_u(T) := R_u(T−1) + u_T R_T. One may set u_t = 1 to recover the special case of no penalty weights. In the offline setting, a given set of penalty weights can be rescaled to make the average penalty weight equal unity, without affecting the associated procedure. However, in the online setting, we choose penalty weights u_t one at a time, possibly not knowing the total number of hypotheses ahead of time. As a consequence, these weights cannot be rescaled in advance to keep their average equal to unity. It is important to note that we allow u_t ∈ F^{t−1} to be determined after viewing the past rejections, another important difference from the offline setting. Indeed, if the hypotheses are logically related (even if the p-values are independent), then the current hypothesis can be more or less critical depending on which other ones are already rejected.
Prior weights. In many applications, one may have access to prior knowledge about the underlying state of nature (that is, whether the hypothesis is truly null or non-null). For example, an older published biological study might have made significant discoveries, or an internet company might know the results of past A/B tests or decisions made by other companies. This knowledge may be incorporated through a weight w_t indicating the strength of a prior belief about whether the hypothesis is null or not: typically, a larger w_t > 1 can be interpreted as a greater likelihood of being non-null, indicating that the algorithm may be more aggressive in deciding whether to reject H_t. Such p-value weighting was first suggested in the offline FDR context by [6], though earlier work employed it in the context of FWER control. As with penalty weights in the offline setting, offline prior weights are also usually rescaled to have unit mean, and then existing offline algorithms simply replace the p-value P_t by the weighted p-value P_t/w_t. However, it is not obvious how to incorporate prior weights in the online setting. As we will see in the sections to come, the online FDR algorithms we propose will also use p-value reweighting; moreover, the rewards must be prudently adjusted to accommodate the fact that an a-priori rescaling is not feasible. Furthermore, as opposed to the offline case, the weights w_t ∈ F^{t−1} are allowed to depend on past rejections. This additional flexibility allows one to set the weights not only based on our prior knowledge of the current hypothesis being tested, but also based on properties of the sequence of discoveries (for example, whether we recently saw a string of rejections or non-rejections). We point out some practical subtleties with the use and interpretation of prior weights in Appendix C.4.
Doubly-weighted GAI++ rules. Given a testing level α_t and weights w_t, u_t, all three being predictable and monotone, we make the decision

    R_t := 1{ P_t ≤ α_t u_t w_t }.    (9)

This agrees with the intuition that larger prior weights should be reflected in an increased willingness to reject the null, and that we should favor rejecting more important hypotheses. As before, our rejection reward strategy differs before and after τ_1, the time of the first rejection. Starting with some W(0) = W_0 ≤ α, we update the wealth as W(t) = W(t−1) − φ_t + R_t ψ_t, where w_t, u_t, α_t, φ_t, ψ_t ∈ F^{t−1} must be chosen so that φ_t ≤ W(t−1), and the rejection reward ψ_t must obey the condition

    0 ≤ ψ_t ≤ min{ φ_t + u_t b_t, φ_t/(u_t w_t α_t) + u_t b_t − u_t },  where    (10a)

    b_t := α − (W_0/u_t) 1{τ_1 > t − 1} ∈ F^{t−1}.    (10b)

Notice that setting w_t = u_t = 1 immediately recovers the GAI++ updates. Let us provide some intuition for the form of the rewards ψ_t, which involves an interplay between the weights w_t, u_t,
the testing levels α_t and the testing penalties φ_t. First note that large weights u_t, w_t > 1 result in a smaller earning of alpha-wealth, and if α_t, φ_t are fixed, then the maximum “common-sense” weights are determined by requiring ψ_t ≥ 0. The requirements of lower rewards for larger weights and of a maximum allowable weight should both seem natural; indeed, there must be some price one must pay for an easier rejection, otherwise we would always use a high prior weight or penalty weight to get more power, no matter the hypothesis! We show that such a price does not have to be paid in terms of the FDR guarantee (we prove that FDR_u is controlled for any choice of weights), but a price is paid in terms of power, specifically the ability to make rejections in the future. Indeed, the combined use of u_t, w_t in both the decision rule R_t and the earned reward ψ_t keeps us honest; if we overstate our prior belief in the hypothesis being non-null, or its importance, by assigning a large u_t, w_t > 1, we will not earn much of a reward (or even a negative reward!), while if we understate our prior beliefs by assigning a small u_t, w_t < 1, then we may not reject this hypothesis. Hence, it is prudent not to misuse or overuse the weights, and we recommend that the scientist use the default u_t = w_t = 1 in practice unless there truly is prior evidence against the null or a reason to believe the finding would be of importance, perhaps due to past studies by other groups or companies, logical relationships between hypotheses, or extraneous reasons suggested by the underlying science.
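One step of the doubly-weighted rule, as a sketch: b_t is taken from (10b), and the reward is set to the largest value permitted by (10a) purely for illustration.

    def weighted_gai_step(wealth, alpha_t, phi_t, w_t, u_t, p_t,
                          alpha, W0, no_rejection_yet):
        """Decision rule (9) with the reward at the upper bound of (10a)."""
        b_t = alpha - (W0 / u_t) * no_rejection_yet           # (10b)
        psi_t = max(0.0, min(phi_t + u_t * b_t,
                             phi_t / (u_t * w_t * alpha_t) + u_t * b_t - u_t))
        R_t = int(p_t <= alpha_t * u_t * w_t)                 # (9)
        return R_t, wealth - phi_t + R_t * psi_t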
We are now ready to state a theoretical guarantee for the doubly-weighted GAI++ procedure:

Theorem 3. Under independence, the doubly-weighted GAI++ algorithm satisfies the bound E[ (V_u(T) + W(T)) /̇ R_u(T) ] ≤ α for all T ∈ N. Since W(T) ≥ 0, we also have FDR_u(T) ≤ α for all T ∈ N.

The proof of this theorem is given in Appendix G. It is important to note that although we provide the proof here only for GAI++ rules under independence, the ideas carry forward in an analogous fashion for GAI rules under various other forms of dependence.
6 From infinite to decaying memory

Here, we summarize two phenomena: (i) the “piggybacking” problem that can occur with a non-stationary null proportion, and (ii) the “alpha-death” problem that can occur with a long sequence of nulls. We propose a new error metric, the decaying-memory FDR (mem-FDR), that is better suited to truly temporal multiple testing scenarios, and propose an adjustment of our GAI++ algorithms to control this quantity.
Piggybacking. As outlined in motivation M1, when the full batch of p-values is available offline, online FDR algorithms have an inherent asymmetry in their treatment of different p-values, and make different rejections depending on the order in which they process the batch. Indeed, Foster and Stine [5] demonstrated that if one knew a reasonably good ordering (with non-nulls arriving earlier), then their online alpha-investing procedures could attain higher power than the offline BH procedure. This is partly due to a phenomenon that we term “piggybacking”: if a lot of rejections are made early, these algorithms earn and accumulate enough alpha-wealth to reject later hypotheses more easily, by testing them at more lenient thresholds than earlier ones. In essence, later tests “piggyback” on the success of earlier tests. While piggybacking may be desirable or acceptable under motivation M1, such behavior may be unwarranted and unwanted under motivation M2. We argue that piggybacking may lead to a spike in the false discovery rate locally in time, even though the FDR over all time is controlled. This may occur when the sequence of hypotheses is non-stationary and clustered, where strings of nulls may follow strings of non-nulls. For concreteness, consider the setting in Javanmard and Montanari [8] where an internet company conducts many A/B tests over time. In “good times”, when a large fraction of tests are truly non-null, the company may accumulate wealth due to frequent rejections. We demonstrate using simulations that such accumulated wealth can lead to a string of false discoveries when there is a quick transition to a “bad period” where the proportion of non-nulls is much lower, causing a spike in the false discovery proportion locally in time.
Alpha-death. Suppose we test a long stretch of nulls, followed by a stretch of non-nulls. In this
setting, GAI algorithms will make (almost) no rejections in the first stretch, losing nearly all of
its wealth. Thereafter, the algorithm may be effectively condemned to have no power, unless a
non-null with extremely strong signal is observed. Such a situation, from which no recovery is
possible, is perfectly reasonable under motivation M1. The alpha-wealth has been used up fully, and
those are the only rejections we are allowed to make with that batch of p-values. However, for an
internet company operating with motivation M2, it might be unacceptable to inform them that they
essentially cannot run any more tests, or that they may perhaps never make another useful discovery.
Both of these problems, demonstrated in simulations in Appendix C.2, are due to the fact that the process effectively has an infinite memory. In the following, we propose one way to smoothly forget the past and, to some extent, alleviate the negative effects of the aforementioned phenomena.
Decaying memory. For a user-defined decay parameter δ ∈ (0, 1], define V^δ(0) = R^δ(0) = 0 and define the decaying memory FDR as follows:

    mem-FDR(T) := E[ V^δ(T) /̇ R^δ(T) ],

where V^δ(T) := δ V^δ(T−1) + R_T 1{T ∈ H^0} = Σ_{t∈H^0} δ^{T−t} R_t 1{t ∈ H^0}, and analogously R^δ(T) := δ R^δ(T−1) + R_T = Σ_t δ^{T−t} R_t. This notion of FDR control, which is arguably natural for modern temporal applications, appears to be novel in the multiple testing literature. The parameter δ is reminiscent of the discount factor in reinforcement learning.
Penalty-weighted decaying-memory FDR. We may naturally extend the notion of decaying-memory FDR to encompass penalty weights. Setting V_u^δ(0) = R_u^δ(0) = 0, we define

    mem-FDR_u(T) := E[ V_u^δ(T) /̇ R_u^δ(T) ],

where V_u^δ(T) := δ V_u^δ(T−1) + u_T R_T 1{T ∈ H^0} = Σ_{t=1}^T δ^{T−t} u_t R_t 1{t ∈ H^0}, and R_u^δ(T) := δ R_u^δ(T−1) + u_T R_T = Σ_{t=1}^T δ^{T−t} u_t R_t.
mem-GAI++ algorithms with decaying memory and weights. Given a testing level α_t, we make the decision using equation (9) as before, starting with a wealth of W(0) = W_0 ≤ α. Also, recall that τ_k is the time of the k-th rejection. On making the decision R_t, we update the wealth as

    W(t) := δ W(t−1) + (1 − δ) W_0 1{τ_1 > t − 1} − φ_t + R_t ψ_t,    (11)

so that

    W(T) = W_0 δ^{T − min{τ_1, T}} + Σ_{t=1}^T δ^{T−t} ( −φ_t + R_t ψ_t ).

The first term in equation (11) indicates that the wealth must decay in order to forget the old earnings from rejections far in the past. If we were to keep the first term and drop the second, then the effect of the initial wealth (not just the post-rejection earnings) would also decay to zero. Intuitively, the correction from the second term says that even if one forgets all the past post-rejection earnings, the algorithm should behave as if it started from scratch, which means that its initial wealth should not decay. This does not contradict the fact that initial wealth can be consumed by the testing penalties φ_t, but it should not decay with time: the decay was only introduced to avoid piggybacking, which is an effect of post-rejection earnings and not of the initial wealth.

A natural restriction on φ_t is the bound φ_t ≤ δ W(t−1) + (1 − δ) W_0 1{τ_1 > t − 1}, which ensures that the wealth stays non-negative. Further, w_t, u_t, α_t, φ_t ∈ F^{t−1} must be chosen so that the rejection reward ψ_t obeys conditions (10a) and (10b). Notice that setting w_t = u_t = δ = 1 recovers the GAI++ updates. As an example, mem-LORD++ would use:

    α_t = γ_t W_0 δ^{t − min{τ_1, t}} + Σ_{j: τ_j < t} γ_{t−τ_j} ψ_{τ_j} δ^{t−τ_j}.
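Both the wealth recursion (11) and the decaying counts admit one-line recursive updates; the sketch below tracks them jointly, with δ = 1 recovering the un-decayed GAI++ quantities.

    def mem_updates(wealth, V, R, delta, W0, phi_t, psi_t, R_t, is_null,
                    no_rejection_yet, u_t=1.0):
        """Recursions for W(t) in (11) and for V_u, R_u with decay delta (sketch)."""
        wealth = (delta * wealth + (1 - delta) * W0 * no_rejection_yet
                  - phi_t + R_t * psi_t)                      # (11)
        V = delta * V + u_t * R_t * is_null                   # decaying false-discovery count
        R = delta * R + u_t * R_t                             # decaying discovery count
        return wealth, V, R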
We are now ready to present our last main result.

Theorem 4. Under independence, the doubly-weighted mem-GAI++ algorithm satisfies the bound E[ (V_u^δ(T) + W(T)) /̇ R_u^δ(T) ] ≤ α for all T ∈ N. Since W(T) ≥ 0, we have mem-FDR_u(T) ≤ α for all T ∈ N.

See Appendix H for the proof of this claim. Appendix B discusses how to use “abstaining” to provide a smooth restart after alpha-death, whereas Appendix C contains a numerical simulation demonstrating the use of decaying memory.
7 Summary

In this paper, we make four main contributions: more powerful procedures under independence, an alternate viewpoint for deriving online FDR procedures, incorporation of prior and penalty weights, and the introduction of a decaying-memory false discovery rate to handle piggybacking and alpha-death. Numerical simulations in Appendix C complement the theoretical results.
Acknowledgments
We thank A. Javanmard, R. F. Barber, K. Johnson, E. Katsevich, W. Fithian and L. Lei for related
discussions, and A. Javanmard for sharing code to reproduce experiments in Javanmard and Montanari [9]. This material is based upon work supported in part by the Army Research Office under
grant number W911NF-17-1-0304, and National Science Foundation grant NSF-DMS-1612948.
References
[1] Ehud Aharoni and Saharon Rosset. Generalized α-investing: definitions, optimality results and application to public databases. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(4):771–794, 2014.
[2] Yoav Benjamini and Yosef Hochberg. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B, 57(1):289–300, 1995.
[3] Yoav Benjamini and Yosef Hochberg. Multiple hypotheses testing with weights. Scandinavian Journal of Statistics, 24(3):407–418, 1997.
[4] Gilles Blanchard and Etienne Roquain. Two simple sufficient conditions for FDR control. Electronic Journal of Statistics, 2:963–992, 2008.
[5] Dean P. Foster and Robert A. Stine. α-investing: a procedure for sequential control of expected false discoveries. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(2):429–444, 2008.
[6] Christopher R. Genovese, Kathryn Roeder, and Larry Wasserman. False discovery control with p-value weighting. Biometrika, 93(3):509–524, 2006.
[7] Philipp Heesen and Arnold Janssen. Dynamic adaptive multiple tests with finite sample FDR control. arXiv preprint arXiv:1410.6296, 2014.
[8] Adel Javanmard and Andrea Montanari. On online control of false discovery rate. arXiv preprint arXiv:1502.06197, 2015.
[9] Adel Javanmard and Andrea Montanari. Online rules for control of false discovery rate and false discovery exceedance. The Annals of Statistics, 2017.
[10] Ang Li and Rina Foygel Barber. Multiple testing with the structure adaptive Benjamini–Hochberg algorithm. arXiv preprint arXiv:1606.07926, 2016.
[11] Aaditya Ramdas, Rina Foygel Barber, Martin J. Wainwright, and Michael I. Jordan. A unified treatment of multiple testing with prior knowledge. arXiv preprint arXiv:1703.06222, 2017.
[12] John Tukey. The Problem of Multiple Comparisons: Introduction and Parts A, B, and C. Princeton University, 1953.
[13] Fanny Yang, Aaditya Ramdas, Kevin Jamieson, and Martin J. Wainwright. A framework for Multi-A(rmed)/B(andit) testing with online FDR control. Advances in Neural Information Processing Systems, 2017.
6,797 | 7,149 | Learning from uncertain curves:
The 2-Wasserstein metric for Gaussian processes
Anton Mallasto
Department of Computer Science
University of Copenhagen
[email protected]
Aasa Feragen
Department of Computer Science
University of Copenhagen
[email protected]
Abstract
We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain
curves. This allows inherent variation or uncertainty in function-valued data to be
properly incorporated in the population analysis. Using the 2-Wasserstein metric
we geometrize the space of GPs with L2 mean and covariance functions over
compact index spaces. We prove existence and uniqueness of the barycenter of
a population of GPs, as well as convergence of the metric and the barycenter of
their finite-dimensional counterparts. This justifies practical computations. Finally,
we demonstrate our framework through experimental validation on GP datasets
representing brain connectivity and climate development. A MATLAB library for relevant computations will be published at https://sites.google.com/view/antonmallasto/software.
1 Introduction
Gaussian processes (GPs, see Fig. 1) are the
counterparts of Gaussian distributions (GDs)
over functions, making GPs natural objects to
model uncertainty in estimated functions. With
the rise of GP modelling and probabilistic numerics, GPs are increasingly used to model uncertainty in function-valued data such as segmentation boundaries [17, 19, 29], image registration [38] or time series [27]. Centered GPs, or
covariance operators, appear as image features
in computer vision [12,16,24,25] and as features
of phonetic language structure [22]. A natural
next step is therefore to analyze populations of GPs, where performance depends crucially on proper incorporation of inherent uncertainty or variation. This paper contributes a principled framework for population analysis of GPs based on Wasserstein, a.k.a. earth mover's, distances.
Figure 1: An illustration of a GP, with mean function (in black) and confidence bound (in grey). The colorful curves are sample paths of this GP.
The importance of incorporating uncertainty into population analysis is emphasized by the example
in Fig. 2, where each data point is a GP representing the minimal temperature in the Siberian city
Vanavara over the course of one year [9, 33]. A naïve way to compute its average temperature curve
is to compute the per-day mean and standard deviation of the yearly GP mean curves. This is shown
in the bottom right plot, and it is clear that the temperature variation is grossly underestimated,
Figure 2: Left: Example GPs describing the daily minimum temperatures in a Siberian city (see Sec. 4). Right top: The mean GP temperature curve, computed as a Wasserstein barycenter. Note that the inherent variability in the daily temperature is realistically preserved, in contrast with the naïve approach. Right bottom: A naïve estimation of the mean and standard deviation of the daily temperature, obtained by taking the day-by-day mean and standard deviation of the temperature. All figures show a 95% confidence interval.
especially in the summer season. The top right figure shows the mean GP obtained with our proposed
framework, which preserves a far more accurate representation of the natural temperature variation.
We propose analyzing populations of GPs by geometrizing the space of GPs through the Wasserstein
distance, which yields a metric between probability measures with rich geometric properties. We
contribute i) closed-form solutions for arbitrarily good approximation of the Wasserstein distance by
showing that the 2-Wasserstein distance between two finite-dimensional GP representations converges
to the 2-Wasserstein distance of the two GPs; and ii) a proof that the barycenter of a population of
GPs exists, is unique, and can be approximated by its finite-dimensional counterpart.
We evaluate the Wasserstein distance in two applications. First, we illustrate the use of the Wasserstein
distance for processing of uncertain white-matter trajectories in the brain segmented from noisy
diffusion-weighted imaging (DWI) data using tractography. It is well known that the noise level and
the low resolution of DWI images result in unreliable trajectories (tracts) [23]. This is problematic as
the estimated tracts are e.g. used for surgical planning [8]. Recent work [17, 29] utilizes probabilistic
numerics [28] to return uncertain tracts represented as GPs. We utilize the Wasserstein distance to
incorporate the estimated uncertainty into typical DWI analysis tools such as tract clustering [37]
and visualization. Our second study quantifies recent climate development based on data from
Russian meteorological stations using permutation testing on population barycenters, and supplies
interpretability of the climate development using GP-valued kernel regression.
Related work. Multiple frameworks exist for comparing Gaussian distributions (GDs) represented
by their covariance matrices, including the Frobenius, Fisher-Rao (affine-invariant), log-Euclidean
and Wasserstein metrics. Particularly relevant to our work is the 2-Wasserstein metric on GDs, whose
Riemannian geometry is studied in [32], and whose barycenters are well understood [1, 4].
A body of work exists on generalizing the aforementioned metrics to the infinite-dimensional
covariance operators. As pointed out in [22], extending the affine-invariant and Log-Euclidean
metrics is problematic as covariance operators are not compatible with logarithmic maps and their
inverses are unbounded. These problems are avoided in [24, 25] by regularizing the covariance
operators, but unfortunately, this also alters the data in a non-unique way. The Procrustes metric
from [22] avoids this, but as it is, only defines a metric between covariance operators.
The 2-Wasserstein metric, on the other hand, generalizes naturally from GDs to GPs, does not require
regularization, and can be arbitrarily well approximated by a closed form expression, making the
computations cheap. Moreover, the theory of optimal transport [5, 6, 36] shows that the Wasserstein
metric yields a rich geometry, which is further demonstrated by the previous work on GDs [32].
Structure. Prior to introducing the Wasserstein distance between GPs, we review GPs, their Hilbert
space covariance operators and the corresponding Gaussian measures in Sec. 2. In Sec. 3 we introduce
the Wasserstein metric and its barycenters for GPs and prove convergence properties of the metric
and barycenters, when GPs are approximated by finite-dimensional GDs. Experimental validation is
found in Sec. 4, followed by discussion and conclusion in Sec. 5.
2 Prerequisites
Gaussian processes and measures. A Gaussian process (GP) f is a collection of random variables,
such that any finite restriction of its values $(f(x_i))_{i=1}^{N}$ has a joint Gaussian distribution, where $x_i \in X$, and $X$ is the index set. A GP is entirely characterized by the pair
$$m(x) = \mathbb{E}[f(x)], \qquad k(x, x') = \mathbb{E}[(f(x) - m(x))(f(x') - m(x'))], \qquad (1)$$
where $m$ and $k$ are called the mean function and covariance function, respectively. We use the notation $f \sim GP(m, k)$ for a GP $f$ with mean function $m$ and covariance function $k$. It follows from the definition that the covariance function $k$ is symmetric and positive semidefinite. We say that $f$ is non-degenerate if $k$ is positive definite. We will assume the GPs used to be non-degenerate.
GPs relate closely to Gaussian measures on Hilbert spaces. Given probability spaces $(X, \Sigma_X, \mu)$ and $(Y, \Sigma_Y, \nu)$, we say that the measure $\nu$ is a push-forward of $\mu$ if $\nu(A) = \mu(T^{-1}(A))$ for a measurable $T : X \to Y$ and any $A \in \Sigma_Y$. Denote this by $T_{\#}\mu = \nu$. A Borel measure $\mu$ on a separable Hilbert space $H$ is a Gaussian measure, if its push-forward with respect to any non-zero continuous element of the dual space of $H$ is a Gaussian measure on $\mathbb{R}$ (i.e., the push-forward gives a univariate Gaussian distribution). A Borel-measurable set $B$ is a Gaussian null set, if $\mu(B) = 0$ for any Gaussian measure $\mu$ on $X$. A measure $\mu$ on $H$ is regular if $\mu(B) = 0$ for any Gaussian null set $B$.
Covariance operators. Denote by $L^2(X)$ the space of $L^2$-integrable functions from $X$ to $\mathbb{R}$. The covariance function $k$ has an associated integral operator $K : L^2(X) \to L^2(X)$ defined by
$$[K\phi](x) = \int_X k(x, s)\,\phi(s)\,ds, \qquad \forall \phi \in L^2(X), \qquad (2)$$
called the covariance operator associated with $k$. As a by-product of the 2-Wasserstein metric on centered GPs, we get a metric on covariance operators. The operator $K$ is Hilbert-Schmidt, self-adjoint, compact, positive, and of trace class, and the space of such covariance operators is a convex space. Furthermore, the assignment $k \mapsto K$ from $L^2(X \times X)$ to the covariance operators is an isometric isomorphism onto the space of positive Hilbert-Schmidt operators on $L^2(X)$ [7, Prop. 2.8.6]. This justifies us to write both $f \sim GP(m, K)$ and $f \sim GP(m, k)$.
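As a concrete illustration, the integral operator (2) can be discretized on a grid by a quadrature rule. The following is a minimal sketch (Python/NumPy); the squared-exponential kernel and the grid are assumed examples, not choices made in the paper.

import numpy as np

x = np.linspace(0.0, 1.0, 200)        # compact index set X = [0, 1]
dx = x[1] - x[0]
k = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)  # kernel values k(x, s)

def apply_K(phi):
    # [K phi](x) = int_X k(x, s) phi(s) ds, approximated by a Riemann sum
    return k @ phi * dx

phi = np.sin(2 * np.pi * x)
print(apply_K(phi)[:5])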
Trace of an operator. The Wasserstein distance between GPs admits an analytical formula using
traces of their covariance operators, as we will see below. Let $(H, \langle\cdot,\cdot\rangle)$ be a separable Hilbert space with the orthonormal basis $\{e_k\}_{k=1}^{\infty}$. Then the trace of a bounded linear operator $T$ on $H$ is given by
$$\operatorname{Tr} T := \sum_{k=1}^{\infty} \langle T e_k, e_k \rangle, \qquad (3)$$
which is absolutely convergent and independent of the choice of the basis if $\operatorname{Tr}(T^* T)^{1/2} < \infty$, where $T^*$ denotes the adjoint operator of $T$ and $T^{1/2}$ is the square-root of $T$. In this case $T$ is called a trace class operator. For positive self-adjoint operators, the trace is the sum of their eigenvalues.
The Wasserstein metric. The Wasserstein metric on probability measures derives from the optimal
transport problem introduced by Monge and made rigorous by Kantorovich. The p-Wasserstein
distance describes the minimal cost of transporting the unit mass of one probability measure into the
unit mass of another probability measure, when the cost is given by a Lp distance [5, 6, 36].
Let $(M, d)$ be a Polish space (complete and separable metric space) and denote by $P_p(M)$ the set of all probability measures $\mu$ on $M$ satisfying $\int_M d^p(x, x_0)\,d\mu(x) < \infty$ for some $x_0 \in M$. The $p$-Wasserstein distance between two probability measures $\mu, \nu \in P_p(M)$ is given by
$$W_p(\mu, \nu) = \left( \inf_{\gamma \in \Gamma[\mu, \nu]} \int_{M \times M} d^p(x_1, x_2)\, d\gamma(x_1, x_2) \right)^{1/p}, \quad (x_1, x_2) \in M \times M, \qquad (4)$$
where $\Gamma[\mu, \nu]$ is the set of joint measures on $M \times M$ with marginals $\mu$ and $\nu$. Defined as above, $W_p$ satisfies the properties of a metric. Furthermore, a minimizer in (4) is always achieved.
3 The Wasserstein metric for GPs
We will now study the Wasserstein metric with p = 2 between GPs. For GDs, this has been studied
in [11, 14, 18, 21, 32].
From now on, assume that all GPs $f \sim GP(m, k)$ are indexed over a compact $X \subset \mathbb{R}^n$ so that $H := L^2(X)$ is separable. Furthermore, we assume $m \in L^2(X)$, $k \in L^2(X \times X)$, so that observations of $f$ live almost surely in $H$. Let $f_1 \sim GP(m_1, k_1)$ and $f_2 \sim GP(m_2, k_2)$ be GPs with associated covariance operators $K_1$ and $K_2$, respectively. As the sample paths of $f_1$ and $f_2$ are in $H$, they induce Gaussian measures $\mu_1, \mu_2 \in P_2(H)$ on $H$, as there is a 1-1 correspondence between GPs having sample paths almost surely in an $L^2(X)$ space and Gaussian measures on $L^2(X)$ [26].
The 2-Wasserstein metric between the Gaussian measures $\mu_1, \mu_2$ is given by [13]
$$W_2^2(\mu_1, \mu_2) = d_2^2(m_1, m_2) + \operatorname{Tr}\left( K_1 + K_2 - 2\left( K_1^{1/2} K_2 K_1^{1/2} \right)^{1/2} \right), \qquad (5)$$
where $d_2$ is the canonical metric on $L^2(X)$. Using this, we get the following definition.
Definition 1. Let $f_1, f_2$ be GPs as above, and the induced Gaussian measures of $f_1$ and $f_2$ be $\mu_1$ and $\mu_2$, respectively. Then, their squared 2-Wasserstein distance is given by
$$W_2^2(f_1, f_2) := W_2^2(\mu_1, \mu_2) = d_2^2(m_1, m_2) + \operatorname{Tr}\left( K_1 + K_2 - 2\left( K_1^{1/2} K_2 K_1^{1/2} \right)^{1/2} \right).$$
Remark 2. Note that the case $m_1 = m_2 = 0$ defines a metric for the covariance operators $K_1, K_2$, as (5) shows that the space of GPs is isometric to the Cartesian product of $L^2(X)$ and the covariance operators. We will denote this metric by $W_2^2(K_1, K_2)$. Furthermore, as GDs are just a subset of GPs, $W_2^2$ yields also the 2-Wasserstein metric between GDs studied in [11, 14, 18, 21, 32].
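For finite-dimensional restrictions, Definition 1 can be evaluated directly with a matrix square root; this is justified by the convergence results below (Theorem 7). A minimal sketch (Python/SciPy), assuming the means and covariances are given on a grid with spacing dx so that the quadrature weight accounts for the L^2(X) inner product:

import numpy as np
from scipy.linalg import sqrtm

def w2_squared_gp(m1, K1, m2, K2, dx=1.0):
    # W_2^2 = d_2^2(m1, m2) + Tr(K1 + K2 - 2 (K1^{1/2} K2 K1^{1/2})^{1/2}),
    # with the covariance operators represented by the scaled matrices K * dx
    S1 = sqrtm(K1 * dx)
    cross = sqrtm(S1 @ (K2 * dx) @ S1)
    mean_part = np.sum((m1 - m2) ** 2) * dx     # d_2^2(m1, m2) on the grid
    trace_part = np.trace(K1 * dx + K2 * dx - 2 * cross)
    return float(np.real(mean_part + trace_part))  # discard tiny imaginary noise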
Barycenters of Gaussian processes. Next, we define and study barycenters of populations of GPs,
in a similar fashion as the GD case in [1].
Given a population $\{\mu_i\}_{i=1}^{N} \subset P_2(H)$ and weights $\{\lambda_i \geq 0\}_{i=1}^{N}$ with $\sum_{i=1}^{N} \lambda_i = 1$, and $H$ a separable Hilbert space, the solution $\bar\mu$ of the problem
$$(P) \qquad \inf_{\mu \in P_2(H)} \sum_{i=1}^{N} \lambda_i W_2^2(\mu_i, \mu),$$
is the barycenter of the population $\{\mu_i\}_{i=1}^{N}$ with barycentric coordinates $\{\lambda_i\}_{i=1}^{N}$. The barycenter for GPs is defined to be the barycenter of the associated Gaussian measures.
We now state the main theorem of this section, which we will prove using Prop. 4 and Prop. 5 below.
Theorem 3. Let $\{f_i\}_{i=1}^{N}$ be a population of GPs with $f_i \sim GP(m_i, K_i)$; then the unique barycenter with barycentric coordinates $(\lambda_i)_{i=1}^{N}$ is $\bar f \sim GP(\bar m, \bar K)$, where $\bar m$ and $\bar K$ satisfy
$$\bar m = \sum_{i=1}^{N} \lambda_i m_i, \qquad \sum_{i=1}^{N} \lambda_i \left( \bar K^{1/2} K_i \bar K^{1/2} \right)^{1/2} = \bar K.$$
Proposition 4. Let $\{\mu_i\}_{i=1}^{N} \subset P_2(H)$ and $\bar\mu$ be a barycenter with barycentric coordinates $(\lambda_i)_{i=1}^{N}$. Assume $\mu_i$ is regular for some $i$; then $\bar\mu$ is the unique minimizer of (P).
Proof. We first show that the map $\mu \mapsto W_2^2(\nu, \mu)$ is convex, and strictly convex if $\nu$ is a regular measure. To see this, let $\mu_i \in P_2(H)$ and $\gamma_i^* \in \Gamma[\nu, \mu_i]$ be the optimal transport plans between $\nu$ and $\mu_i$ for $i = 1, 2$; then $\lambda\gamma_1^* + (1-\lambda)\gamma_2^* \in \Gamma[\nu, \lambda\mu_1 + (1-\lambda)\mu_2]$ for $\lambda \in [0, 1]$. Therefore
$$W_2^2(\nu, \lambda\mu_1 + (1-\lambda)\mu_2) = \inf_{\gamma \in \Gamma[\nu, \lambda\mu_1 + (1-\lambda)\mu_2]} \int_{H \times H} d^2(x, y)\, d\gamma \leq \int_{H \times H} d^2(x, y)\, d(\lambda\gamma_1^* + (1-\lambda)\gamma_2^*) = \lambda W_2^2(\nu, \mu_1) + (1-\lambda) W_2^2(\nu, \mu_2),$$
which gives convexity. Note that for $\lambda \in \,]0, 1[$, the transport plan $\lambda\gamma_1^* + (1-\lambda)\gamma_2^*$ splits mass. Therefore it cannot be the unique optimal plan between $\nu$ and $\lambda\mu_1 + (1-\lambda)\mu_2$. As $\nu$ is regular, the optimal plan does not split mass, as it is induced by a map [3, Thm. 6.2.10], so we have strict convexity. From this follows the strict convexity of the objective function in (P).
Next we characterize the barycenter in the spirit of the finite-dimensional case in [1, Thm. 6.1].
Proposition 5. Let $\{f_i\}_{i=1}^{N}$ be a population of centered GPs, $f_i \sim GP(0, K_i)$. Then (P) has a unique solution $\bar f \sim GP(0, \bar K)$, where $\bar K$ is the unique bounded self-adjoint positive linear operator satisfying
$$F(K) := \sum_{i=1}^{N} \lambda_i \left( K^{1/2} K_i K^{1/2} \right)^{1/2} = K. \qquad (6)$$
Proof. First we show that (6) has a solution. Following the proof presented in [1, Thm. 6.1], let $\lambda_{\max}(K_i)$ be the maximum eigenvalue of $K_i$. Then pick $\beta$ such that $\beta \geq \left( \sum_{i=1}^{N} \lambda_i \sqrt{\lambda_{\max}(K_i)} \right)^2$ and define the convex set $\mathcal{K}_\beta = \{ K \mid \beta I \geq K > 0 \}$, where $A \geq B$ denotes that the operator $A - B$ is positive.
Then note that the map $F$ in (6) is a compact operator, as the set of compact operators forms a two-sided ideal and is closed under taking the square-root, $\mathcal{K}_\beta$ is bounded, and so by the definition of a compact operator, $F(\mathcal{K}_\beta)$ is contained in a compact set (the closure of the image). Finally, one can check that $\beta I \geq F(K) > 0$, so therefore by Schauder's fixed point theorem, there exists a solution for (6).
Next, we show that the solution to (6) is the barycenter. Let $\bar K$ satisfy (6) and $0 < \theta_1, \theta_2, \ldots$ be its eigenvalues with eigenfunctions $e_1, e_2, \ldots$. By [10, Prop. 2.2] the transport map between $\bar\mu$ and $\mu_k$ is given by
$$T_k(x) = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} \frac{\langle x, e_j \rangle \,\left\langle \left( \bar K^{1/2} K_k \bar K^{1/2} \right)^{1/2} e_j, e_i \right\rangle}{\theta_i^{1/2}\, \theta_j^{1/2}}\; e_i(x), \qquad (7)$$
for almost surely any $x \in \operatorname{supp}(\bar\mu)$, which equals the whole of $H$ [34, Thm. 1].
Then one can check the identity $\left( \bar K^{1/2} K_k \bar K^{1/2} \right)^{1/2} x = \bar K^{1/2} T_k \bar K^{1/2} x$, which gives
$$F(\bar K)x = \sum_{k=1}^{N} \lambda_k \left( \bar K^{1/2} K_k \bar K^{1/2} \right)^{1/2} x = \sum_{k=1}^{N} \lambda_k \bar K^{1/2} T_k \bar K^{1/2} x = \bar K x.$$
By noting that $\bar K^{1/2}$ is bijective, we get
$$\bar K^{1/2} \left( \sum_{k=1}^{N} \lambda_k T_k \bar K^{1/2} x \right) = \bar K x \;\Rightarrow\; \sum_{k=1}^{N} \lambda_k T_k \bar K^{1/2} x = \bar K^{1/2} x \;\overset{y := \bar K^{1/2} x}{\Longrightarrow}\; \sum_{k=1}^{N} \lambda_k T_k y = y, \quad \forall y \in H.$$
Therefore, by (3 ⇒ 1) in Proposition 3.8 in [1] we are done (replacing measures vanishing on small sets on $\mathbb{R}^n$ by regular measures on $H$, the proof carries over). Also, by Proposition 4, this is the unique barycenter.
Proof of Theorem 3. Use Prop. 5, the properties of a barycenter in a Hilbert space, and that the space of GPs is isometric to the Cartesian product of $L^2(X)$ and the covariance operators.
Remark 6. For the practical computations of barycenters of GDs approximating GPs, to be discussed
below, a fixed-point iteration scheme with a guarantee of convergence exists [4, Thm. 4.2].
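A minimal sketch (Python/SciPy) of such a fixed-point iteration in the finite-dimensional (matrix) case. The update below is one common form of the map studied in [4], written without convergence checks or numerical safeguards; the barycenter mean is simply the weighted average of the means (Theorem 3).

import numpy as np
from scipy.linalg import sqrtm, inv

def barycenter_cov(Ks, lambdas, iters=50):
    K = sum(l * Kk for l, Kk in zip(lambdas, Ks))  # initial guess: linear mixture
    for _ in range(iters):
        S = np.real(sqrtm(K))                      # S = K^{1/2}
        Sinv = inv(S)
        T = sum(l * np.real(sqrtm(S @ Kk @ S)) for l, Kk in zip(lambdas, Ks))
        K = Sinv @ T @ T @ Sinv                    # K <- K^{-1/2} T^2 K^{-1/2}
    return K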
Convergence properties. Now, we show that the 2-Wasserstein metric for GPs can be approximated arbitrarily well by the 2-Wasserstein metric for GDs. This is important, as in real life we
observe finite-dimensional representations of the covariance operators.
Let $\{e_i\}_{i=1}^{\infty}$ be an orthonormal basis for $L^2(X)$. Then we define the GDs given by restrictions $m_{in}$ and $K_{in}$ of $m_i$ and $K_i$, $i = 1, 2$, on $V_n = \operatorname{span}(e_1, \ldots, e_n)$ by
$$m_{in}(x) = \sum_{k=1}^{n} \langle m_i, e_k \rangle\, e_k(x), \qquad K_{in}\phi = \sum_{k=1}^{n} \langle \phi, e_k \rangle\, K_i e_k, \quad \forall \phi \in V_n, \;\forall x \in X, \qquad (8)$$
and prove the following:
Theorem 7. The 2-Wasserstein metric between GDs on finite samples converges to the Wasserstein metric between GPs, that is, if $f_{in} \sim N(m_{in}, K_{in})$ and $f_i \sim GP(m_i, K_i)$ for $i = 1, 2$, then
$$\lim_{n \to \infty} W_2^2(f_{1n}, f_{2n}) = W_2^2(f_1, f_2).$$
By the same argument, it also follows that $W_2^2(\cdot, \cdot)$ is continuous in both arguments in the operator norm topology.
Proof. $K_{in} \to K_i$ in operator norm as $n \to \infty$. Because taking a sum, product and square-root of operators are all continuous with respect to the operator norm, it follows that
$$K_{1n} + K_{2n} - 2\left( K_{1n}^{1/2} K_{2n} K_{1n}^{1/2} \right)^{1/2} \to K_1 + K_2 - 2\left( K_1^{1/2} K_2 K_1^{1/2} \right)^{1/2}.$$
Note that for any sequence $A_n \to A$ with convergence in operator norm, we have
$$|\operatorname{Tr} A - \operatorname{Tr} A_n| \leq \sum_{k=1}^{\infty} |\langle (A - A_n) e_k, e_k \rangle| \overset{\text{Cauchy–Schwarz}}{\leq} \sum_{k=1}^{\infty} \|(A - A_n) e_k\|_{L^2} \overset{\text{MCT}}{\longrightarrow} 0, \qquad (9)$$
as $\lim_{n \to \infty} \sup_{v \in L^2(X),\, \|v\| \leq 1} \|(A - A_n) v\|_{L^2} = 0$ due to the convergence in operator norm. Here MCT stands for the monotone convergence theorem. Thus we have
$$W_2^2(f_{1n}, f_{2n}) = d_2^2(m_{1n}, m_{2n}) + \operatorname{Tr}\left( K_{1n} + K_{2n} - 2\left( K_{1n}^{1/2} K_{2n} K_{1n}^{1/2} \right)^{1/2} \right) \xrightarrow{\,n \to \infty\,} d_2^2(m_1, m_2) + \operatorname{Tr}\left( K_1 + K_2 - 2\left( K_1^{1/2} K_2 K_1^{1/2} \right)^{1/2} \right) = W_2^2(f_1, f_2).$$
The importance of Theorem 7 is that it justifies computations of distances using finite representations of GPs as approximations for the infinite-dimensional case.
Next, we show that we can also approximate the barycenter of a population of GPs by computing the
barycenters of populations of GDs converging to these GPs.
Theorem 8. The barycenter of a population of GPs varies continuously, that is, the map $(f_1, \ldots, f_N) \mapsto \bar f$ is continuous in the operator norm. Especially, this implies that the barycenter $\bar f_n$ of the finite-dimensional restrictions $\{f_{in}\}_{i=1}^{N}$ converges to $\bar f$.
First, we show that if $f_i \sim GP(m_i, K_i)$ and $\bar f \sim GP(\bar m, \bar K)$, then the map $(K_1, \ldots, K_N) \mapsto \bar K$ is continuous. Continuity of $(m_1, \ldots, m_N) \mapsto \bar m$ is clear.
Let $K$ be a covariance operator, and denote its maximal eigenvalue by $\lambda_{\max}(K)$. Note that this map is well-defined: as $K$ is also a bounded, normal operator, $\lambda_{\max}(K) = \|K\|_{\mathrm{op}} < \infty$ holds. Now let $a = (K_1, \ldots, K_N)$ be a population of covariance operators, denote the $i$th as $a(i) = K_i$, and define the continuous function $\beta$ and correspondence (a set-valued map) $\kappa$ as follows:
$$\beta : a \mapsto \left( \sum_{i=1}^{N} \lambda_i \sqrt{\lambda_{\max}(a(i))} \right)^2, \qquad \kappa : a \mapsto \mathcal{K}_{\beta(a)} = \{ K \in \operatorname{HS}(H) \mid \beta(a) I \geq K \geq 0 \}.$$
Recall that $\beta$ and $\kappa$ were already implicitly used in the proof of Proposition 5.
We want to show that this correspondence is continuous in order to put the Maximum theorem to use. A correspondence $\kappa : A \rightrightarrows B$ is upper hemi-continuous at $a \in A$ if, for all convergent sequences $(a_n) \subset A$ and $(b_n) \subset \kappa(a_n)$ with $\lim_{n \to \infty} b_n = b$ and $\lim_{n \to \infty} a_n = a$, we have $b \in \kappa(a)$. The correspondence is lower hemi-continuous at $a \in A$, if for all convergent sequences $a_n \to a$ in $A$ and any $b \in \kappa(a)$, there is a subsequence $a_{n_k}$, so that we have a sequence $b_k \in \kappa(a_{n_k})$ which satisfies $b_k \to b$. If the correspondence is both upper and lower hemi-continuous, we say that it is continuous. For more about the Maximum theorem and hemi-continuity, see [2].
Lemma 9. The correspondence $\kappa : a \mapsto \mathcal{K}_{\beta(a)}$ is continuous as a correspondence.
Proof. First, we show the correspondence is lower hemi-continuous. Let $(a_n)_{n=1}^{\infty}$ be a sequence of populations of covariance operators of size $N$ that converges, $a_n \to a$. Use the shorthand notation $\beta_n := \beta(a_n)$; then $\beta_n \to \beta_\infty := \beta(a)$, and let $b \in \kappa(a) = \mathcal{K}_{\beta_\infty}$.
Pick a subsequence $(a_{n_k})_{k=1}^{\infty}$ so that $(\beta_{n_k})_{k=1}^{\infty}$ is increasing or decreasing. If it was decreasing, then $\mathcal{K}_{\beta_\infty} \subseteq \mathcal{K}_{\beta_{n_k}}$ for every $n_k$. Thus the proof would be finished by choosing $b_k = b$ for every $k$.
Hence assume the sequence is increasing, so that $\mathcal{K}_{\beta_{n_k}} \subseteq \mathcal{K}_{\beta_{n_{k+1}}}$. Now let $\gamma(t) = (1-t) b_1 + t b$, where $b_1 \in \mathcal{K}_{\beta_1}$, and let $t_{n_k}$ be the solution to $(1-t)\beta_1 + t\beta_\infty = \beta_{n_k}$; then $b_k := \gamma(t_{n_k}) \in \mathcal{K}_{\beta_{n_k}}$ and $b_k \to b$.
For upper hemi-continuity, assume that $a_n \to a$, $b_n \in \mathcal{K}_{\beta_n}$ and that $b_n \to b$. Then using the definition of $\kappa$, we get the positive sequence $\langle (\beta_n I - b_n) x, x \rangle \geq 0$ indexed by $n$; then by continuity and the positivity of this sequence it follows that
$$0 \leq \lim_{n \to \infty} \langle (\beta_n I - b_n) x, x \rangle = \langle (\beta_\infty I - b) x, x \rangle.$$
One can check the criterion $b \geq 0$ similarly, and so we are done.
Proof of Theorem 8. Now let $a = (K_1, \ldots, K_N)$, $f(K, a) := \sum_{i=1}^{N} \lambda_i W_2^2(K, K_i)$ and $F(K) := \sum_{i=1}^{N} \lambda_i \left( K^{1/2} K_i K^{1/2} \right)^{1/2}$; then the unique minimizer $\bar K$ of $f$ is the fixed point of $F$. Furthermore, the closure $\operatorname{cl}(F(\mathcal{K}_{\beta(a)}))$ is compact, and $a \mapsto \operatorname{cl}(F(\mathcal{K}_{\beta(a)}))$ is a continuous correspondence as the closure of a composition of two continuous correspondences. Additionally, we know that $\bar K \in \operatorname{cl}(F(\mathcal{K}_{\beta(a)}))$, so applying the maximum theorem, we have shown that the barycenter of a population of covariance operators varies continuously, i.e. the map $(K_1, \ldots, K_N) \mapsto \bar K$ is continuous, finishing the proof.
4 Experiments
We illustrate the utility of the Wasserstein metric in two different applications: Processing of uncertain
white-matter tracts estimated from DWI, and analysis of climate development via temperature curve
GPs.
Experimental setup. The white-matter tract GPs are estimated for a single subject from the
Human Connectome Project [15, 31, 35], using probabilistic shortest-path tractography [17]. See
the supplementary material for details on the data and its preprocessing. From daily minimum
temperatures measured at a set of 30 randomly sampled Russian meteorological stations [9, 33],
GP regression was used to estimate a GP temperature curve per year and station for the period
1940–2009 using maximum likelihood parameters. All code for computing Wasserstein distances and barycenters was implemented in MATLAB and ran on a laptop with a 2.7 GHz Intel Core i5 processor and 8 GB 1867 MHz DDR3 memory. On the temperature GP curves (represented by 50 samples), the average runtime of the 2-Wasserstein distance computation was 0.048 ± 0.014 seconds (estimated from 1000 pairwise distance computations), and the average runtime of the 2-Wasserstein barycenter of a sample of size 10 was 0.69 ± 0.11 seconds (estimated from 200 samples).
White-matter tract processing. The inferior longitudinal fasciculus is a white-matter bundle which
splits into two separate bundles. Fig. 3 (top) shows the results of agglomerative hierarchical clustering
of the GP tracts using average Wasserstein distance. The per-cluster Wasserstein barycenter can
be used to represent the tracts; its overlap with the individual GP mean curves is shown in Fig. 3
(bottom).
The individual GP tracts are visualized via their mean curves, but they are in fact a population of GPs.
To confirm that the two clusters are indeed different also when the covariance function is taken into
account, we perform a permutation test for difference between per-cluster Wasserstein barycenters,
and already with 50 permutations we observe a p-value of p = 0.0196, confirming that the two
clusters are significantly different at a 5% significance level.
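A minimal sketch (Python/SciPy) of this pipeline, assuming a w2_squared_gp routine as in the earlier sketch; the pairwise distance matrix feeds standard average-linkage agglomerative clustering, and the two-cluster cut mirrors the tract experiment.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_gps(means, covs, dx, n_clusters=2):
    n = len(means)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sqrt(w2_squared_gp(means[i], covs[i], means[j], covs[j], dx))
            D[i, j] = D[j, i] = d
    Z = linkage(squareform(D), method='average')  # agglomerative, average linkage
    return fcluster(Z, t=n_clusters, criterion='maxclust')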
Quantifying climate change. Using the Wasserstein
barycenters we perform nonparametric kernel regression to
visualize how yearly temperature curves evolve with time,
based on the Russian yearly temperature GPs. Fig. 4 shows
snapshots from this evolution, and a continuous movie version climate.avi is found in the supplementary material.
The regressed evolution indicates an increase in overall
temperature as we reach the final year 2009. To quantify this observation, we perform a permutation test using
the Wasserstein distance between population Wasserstein
barycenters to compare the final 10 years 2000-2009 with
the years 1940-1999. Using 50 permutations we obtain a
p-value of 0.0392, giving significant difference in temperature curves at a 95% confidence level.
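A minimal sketch (Python/NumPy) of such a permutation test: the statistic is the 2-Wasserstein distance between the two groups' barycenters, recomputed under random relabelings. The w2_squared_gp and barycenter_cov routines are the hypothetical sketches from earlier; labels and the number of permutations are illustrative.

import numpy as np

def permutation_test(means, covs, labels, dx, n_perm=50, seed=0):
    rng = np.random.default_rng(seed)
    def statistic(lab):
        groups = []
        for g in (0, 1):
            idx = np.where(lab == g)[0]
            lam = np.full(len(idx), 1.0 / len(idx))        # uniform barycentric weights
            m = sum(means[i] for i in idx) / len(idx)      # barycenter mean (Theorem 3)
            K = barycenter_cov([covs[i] for i in idx], lam)
            groups.append((m, K))
        (m0, K0), (m1, K1) = groups
        return w2_squared_gp(m0, K0, m1, K1, dx)
    obs = statistic(np.asarray(labels))
    perms = [statistic(rng.permutation(labels)) for _ in range(n_perm)]
    return float(np.mean([p >= obs for p in perms]))       # one-sided p-value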
Figure 3: Top: The mean functions of the individual GPs, colored by cluster membership, in the context of the corresponding T1-weighted MRI slices. Bottom: The tract GP mean functions and the cluster mean GPs with 95% confidence bounds.
Significance. Note that the state-of-the-art in tract analysis as well as in functional data analysis would be to ignore the covariance of the estimated curves and treat the mean curves as observations. We contribute a framework to incorporate the uncertainty into the population analysis. But why would we want to retain uncertainty? In the white-matter tracts, the GP covariance represents spatial uncertainty in the estimated curve trajectory. The individual GPs represent connections between different endpoints. Thus, they do not represent observations of the exact same trajectory, but rather of distinct, nearby trajectories. It is common in diffusion MRI to represent
such sets of estimated trajectories by a few prototype trajectories for visualization and comparative
analysis; we obtain prototypes through the Wasserstein barycenter. To correctly interpret the spatial
uncertainty, e.g. for a brain surgeon [8], it is crucial that the covariance of the prototype GP represents
the covariances of the individual GPs, and not smaller. If you wanted to reduce uncertainty by
increasing sample size, you would need more images, not more curves ? because the noise is in the
image. But more images are not usually available. In the climate data, the GP covariance models
natural temperature variation, not measurement noise. Increasing the sample size decreases the error
of the temperature distribution, but should not decrease this natural variation (i.e. the covariance).
Figure 4: Snapshots from the kernel regression giving yearly temperature curves 1940-2009. We
observe an apparent temperature increase which is confirmed by the permutation test.
5 Discussion and future work
We have shown that the Wasserstein metric for GPs is both theoretically and computationally well-founded for statistics on GPs: It defines unique barycenters, and allows efficient computations
through finite-dimensional representations. We have illustrated its use in two different applications:
Processing of uncertain estimates of white-matter trajectories in the brain, and analysis of climate
development via GP representations of temperature curves. We have seen that the metric itself is
discriminative for clustering and permutation testing, and we have seen how the GP barycenters allow
truthful interpretation of uncertainty in the white matter tracts and of variation in the temperature
curves.
Future work includes more complex learning algorithms, starting with preprocessing tools such as
PCA [30], and moving on to supervised predictive models. This includes a better understanding of
the potentially Riemannian structure of the infinite-dimensional Wasserstein space, which would
enable us to draw on existing results for learning with manifold-valued data [20].
The Wasserstein distance allows the inherent uncertainty in the estimated GP data points to be
appropriately accounted for in every step of the analysis, giving truthful analysis and subsequent
interpretation. This is particularly important in applications where uncertainty or variation is crucial:
Variation in temperature is an important feature in climate change, and while estimated white-matter
trajectories are known to be unreliable, they are used in surgical planning, making uncertainty about
their trajectories a highly relevant parameter.
6 Acknowledgements
This research was supported by Centre for Stochastic Geometry and Advanced Bioimaging, funded
by a grant from the Villum Foundation. Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (Principal Investigators: David Van Essen and Kamil Ugurbil;
1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint
for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. The authors would also like to thank Mads Nielsen for valuable discussions and
supervision.
References
[1] M. Agueh and G. Carlier. Barycenters in the Wasserstein space. SIAM Journal on Mathematical Analysis, 43(2):904–924, 2011.
[2] C. Aliprantis and K. Border. Infinite dimensional analysis: a hitchhiker's guide. Studies in Economic Theory, 4, 1999.
[3] P. Álvarez-Esteban, E. del Barrio, J. Cuesta-Albertos, C. Matrán, et al. Uniqueness and approximate computation of optimal incomplete transportation plans. In Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, volume 47, pages 358–375. Institut Henri Poincaré, 2011.
[4] P. C. Álvarez-Esteban, E. del Barrio, J. Cuesta-Albertos, and C. Matrán. A fixed-point approach to barycenters in Wasserstein space. Journal of Mathematical Analysis and Applications, 441(2):744–762, 2016.
[5] L. Ambrosio and N. Gigli. A user's guide to optimal transport. In Modelling and optimisation of flows on networks, pages 1–155. Springer, 2013.
[6] L. Ambrosio, N. Gigli, and G. Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.
[7] W. Arveson. A short course on spectral theory, volume 209. Springer Science & Business Media, 2006.
[8] J. Berman. Diffusion MR tractography as a tool for surgical planning. Magnetic Resonance Imaging Clinics of North America, 17(2):205–214, 2009.
[9] O. Bulygina and V. Razuvaev. Daily temperature and precipitation data for 518 Russian meteorological stations. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, US Department of Energy, Oak Ridge, Tennessee, 2012.
[10] J. Cuesta-Albertos, C. Matrán-Bea, and A. Tuero-Diaz. On lower bounds for the L2-Wasserstein metric in a Hilbert space. Journal of Theoretical Probability, 9(2):263–283, 1996.
[11] D. Dowson and B. Landau. The Fréchet distance between multivariate normal distributions. Journal of Multivariate Analysis, 12(3):450–455, 1982.
[12] M. Faraki, M. T. Harandi, and F. Porikli. Approximate infinite-dimensional region covariance descriptors for image classification. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 1364–1368. IEEE, 2015.
[13] M. Gelbrich. On a formula for the L2 Wasserstein metric between measures on Euclidean and Hilbert spaces. Mathematische Nachrichten, 147(1):185–203, 1990.
[14] C. R. Givens, R. M. Shortt, et al. A class of Wasserstein metrics for probability distributions. The Michigan Mathematical Journal, 31(2):231–240, 1984.
[15] M. F. Glasser, S. N. Sotiropoulos, J. A. Wilson, T. S. Coalson, B. Fischl, J. L. Andersson, J. Xu, S. Jbabdi, M. Webster, J. R. Polimeni, et al. The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage, 80:105–124, 2013.
[16] M. Harandi, M. Salzmann, and F. Porikli. Bregman divergences for infinite dimensional covariance matrices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1003–1010, 2014.
[17] S. Hauberg, M. Schober, M. Liptrot, P. Hennig, and A. Feragen. A random Riemannian metric for probabilistic shortest-path tractography. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 597–604. Springer, 2015.
[18] M. Knott and C. S. Smith. On the optimal mapping of distributions. Journal of Optimization Theory and Applications, 43(1):39–49, 1984.
[19] M. Lê, J. Unkelbach, N. Ayache, and H. Delingette. GPSSI: Gaussian process for sampling segmentations of images. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 38–46. Springer, 2015.
[20] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks on Riemannian manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37–45, 2015.
[21] I. Olkin and F. Pukelsheim. The distance between two random vectors with given dispersion matrices. Linear Algebra and its Applications, 48:257–263, 1982.
[22] D. Pigoli, J. A. Aston, I. L. Dryden, and P. Secchi. Distances and inference for covariance operators. Biometrika, 101(2):409–422, 2014.
[23] S. Pujol, W. Wells, C. Pierpaoli, C. Brun, J. Gee, G. Cheng, B. Vemuri, O. Commowick, S. Prima, A. Stamm, et al. The DTI challenge: toward standardized evaluation of diffusion tensor imaging tractography for neurosurgery. Journal of Neuroimaging, 25(6):875–882, 2015.
[24] M. H. Quang and V. Murino. From covariance matrices to covariance operators: Data representation from finite to infinite-dimensional settings. In Algorithmic Advances in Riemannian Geometry and Applications, pages 115–143. Springer, 2016.
[25] M. H. Quang, M. San Biagio, and V. Murino. Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces. In Advances in Neural Information Processing Systems, pages 388–396, 2014.
[26] B. S. Rajput. Gaussian measures on Lp spaces, 1 ≤ p < ∞. Journal of Multivariate Analysis, 2(4):382–403, 1972.
[27] S. Roberts, M. Osborne, M. Ebden, S. Reece, N. Gibson, and S. Aigrain. Gaussian processes for time-series modelling. Phil. Trans. R. Soc. A, 371(1984):20110550, 2013.
[28] M. Schober, D. K. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge-Kutta means. In Advances in Neural Information Processing Systems, pages 739–747, 2014.
[29] M. Schober, N. Kasenburg, A. Feragen, P. Hennig, and S. Hauberg. Probabilistic shortest path tractography in DTI using Gaussian Process ODE solvers. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 265–272. Springer, 2014.
[30] V. Seguy and M. Cuturi. Principal geodesic analysis for probability measures under the optimal transport metric. In Advances in Neural Information Processing Systems, pages 3312–3320, 2015.
[31] S. Sotiropoulos, S. Moeller, S. Jbabdi, J. Xu, J. Andersson, E. Auerbach, E. Yacoub, D. Feinberg, K. Setsompop, L. Wald, et al. Effects of image reconstruction on fiber orientation mapping from multichannel diffusion MRI: reducing the noise floor using SENSE. Magnetic Resonance in Medicine, 70(6):1682–1689, 2013.
[32] A. Takatsu et al. Wasserstein geometry of Gaussian measures. Osaka Journal of Mathematics, 48(4):1005–1026, 2011.
[33] R. Tatusko and J. A. Mirabito. Cooperation in climate research: An evaluation of the activities conducted under the US-USSR agreement for environmental protection since 1974. National Climate Program Office, 1990.
[34] N. Vakhania. The topological support of Gaussian measure in Banach space. Nagoya Mathematical Journal, 57:59–63, 1975.
[35] D. C. Van Essen, S. M. Smith, D. M. Barch, T. E. Behrens, E. Yacoub, K. Ugurbil, W.-M. H. Consortium, et al. The WU-Minn Human Connectome Project: an overview. Neuroimage, 80:62–79, 2013.
[36] C. Villani. Topics in optimal transportation. Number 58. American Mathematical Soc., 2003.
[37] D. Wassermann, L. Bloy, E. Kanterakis, R. Verma, and R. Deriche. Unsupervised white matter fiber clustering and tract probability map generation: Applications of a Gaussian process framework for white matter fibers. NeuroImage, 51(1):228–241, 2010.
[38] X. Yang and M. Niethammer. Uncertainty quantification for LDDMM using a low-rank Hessian approximation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 289–296. Springer, 2015.
6,798 | 715 | A Boundary Hunting Radial Basis Function
Classifier Which Allocates Centers
Constructively
Eric I. Chang and Richard P. Lippmann
MIT Lincoln Laboratory
Lexington, MA 02173-0073, USA
Abstract
A new boundary hunting radial basis function (BH-RBF) classifier
which allocates RBF centers constructively near class boundaries is
described. This classifier creates complex decision boundaries only in
regions where confusions occur and corresponding RBF outputs are
similar. A predicted square error measure is used to determine how
many centers to add and to determine when to stop adding centers. Two
experiments are presented which demonstrate the advantages of the BH-RBF classifier. One uses artificial data with two classes and two input
features where each class contains four clusters but only one cluster is
near a decision region boundary. The other uses a large seismic database
with seven classes and 14 input features. In both experiments the BH-RBF classifier provides a lower error rate with fewer centers than are
required by more conventional RBF, Gaussian mixture, or MLP
classifiers.
1 INTRODUCTION
Radial basis function (RBF) classifiers have been successfully applied to many pattern
classification problems (Broomhead, 1988, Ng, 1991). These classifiers have the advantages of short training times and high classification accuracy. In addition, RBF outputs
estimate minimum-error Bayesian a posteriori probabilities (Richard, 1991). Performing
classification with RBF outputs requires selecting the output which is highest for each
input. In regions where one class dominates, the Bayesian a posteriori probability for that
class will be uniformly "high" and near 1.0. Detailed modeling of the variation of the
Bayesian a posteriori probability in these regions is not necessary for classification. Only
at the boundary between different classes is accurate estimation of the Bayesian a posteriori probability necessary for high classification accuracy. If the boundary between different classes can be located in the input space, RBF centers can be judiciously allocated in
those regions without wasting RBF centers in regions where accurate estimation of the
Bayesian a posteriori probability does not improve classification performance.
In general, having more RBF centers allows better approximation of the desired output.
While training an RBF classifier, the number of RBF centers must be selected. The traditional approach has been to randomly choose patterns from the training set as centers, or to
perform K-means clustering on the data and then to use these centers as the RBF centers.
Frequently the correct number of centers to use is not known a priori and the number of
centers has to be tuned. Also, with K-means clustering, the centers are distributed without
considering their usefulness in classification. In contrast, a constructive approach to adding RBF centers based on modeling Bayesian a posteriori probabilities accurately only
near class boundaries provides good performance with fewer centers than are required to
separately model class PDF's.
Many algorithms have been proposed for constructively building up the structure of an RBF
network (Mel, 1991). However, the algorithms proposed have all been designed for training an RBF network to perform function mapping. For mapping tasks, accuracy is important throughout the input region and the mean squared error is the criterion that is
minimized. In classification tasks, only boundaries between different classes are important
and the overall mean squared error is not as important as the error in class boundaries.
2 ALGORITHM DESCRIPTION
A block diagram of a new boundary hunting RBF (BH-RBF) classifier that adds centers
constructively near class boundaries is presented in Figure 1. A simple unimodal Gaussian
classifier is first formed by clustering the training patterns from a randomly selected class
and assigning a center to that class. The confusion matrix generated by using this simple
classifier is then examined to determine the pair of classes A and B, which have the most
mutual confusion. Training patterns that are close to the boundary between these two
classes are determined by looking at the outputs of the RBF classifier. Boundary patterns
[Figure 1: Block Diagram of Training of BH-RBF Network. The diagram shows the training loop: an initial RBF network with one RBF center; new RBF centers added to the class pair responsible for most errors and overlap, yielding intermediate RBF networks; a predicted squared error score calculated for each intermediate network; and the final network selected.]
which produce similar "high" outputs for both classes that are different by less than a
"closecall" threshold are used to produce new cluster centers.
Figure 2 shows RBF outputs corresponding to class A and B as the input varies over a
small range. This figure illustrates how network outputs are used to determine the "closecall" region between classes. Network outputs are high in regions dominated by a particular class and therefore these regions are outside the boundary between different classes.
Network outputs are close in the region where the absolute difference of the two highest
network outputs is less than the closecall threshold. Training patterns which fall into this
closecall region plus all the points that are misclassified as the other class in the class pair
are considered to be points in the boundary. For example, a pattern in class A which is
misclassified as class B would be considered to be in the boundary between class A and B.
On the other hand, a pattern in class A which is misclassified as class C would not be
placed in the boundary between class A and B.
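To make the rule above concrete, the following sketch shows one way the boundary-pattern test could be coded. The function name, array layout, and the 0.75 default are illustrative assumptions, not code from the original system.

import numpy as np

def find_boundary_patterns(outputs, labels, class_a, class_b, closecall=0.75):
    """Return indices of training patterns lying in the boundary between
    class_a and class_b, following the closecall rule described above.

    outputs: (n_patterns, n_classes) array of RBF network outputs.
    labels:  (n_patterns,) array of true class indices.
    """
    boundary = []
    for i, (out, y) in enumerate(zip(outputs, labels)):
        pred = int(np.argmax(out))
        # Closecall region: the outputs for the two classes are nearly equal.
        if y in (class_a, class_b) and abs(out[class_a] - out[class_b]) < closecall:
            boundary.append(i)
        # A pattern misclassified as the *other* class of the pair also counts.
        elif y == class_a and pred == class_b:
            boundary.append(i)
        elif y == class_b and pred == class_a:
            boundary.append(i)
    return np.array(boundary)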
[Figure 2: Using the Network Output to Determine Closecall Regions. The plot shows the network outputs F(A) and F(B) for classes A and B as the input varies from -3 to 3; the closecall region is the interval where the two outputs differ by less than the closecall threshold.]
After the patterns which belong in the boundary are determined, clustering is performed
separately on boundary patterns from different classes using K-means clustering and a
number of centers ranging from zero to a preset maximum number of centers. After the
centers are found, new RBF classifiers are trained using the new sets of centers plus the
original set of centers. The combined set of centers that provides the best performance is
saved and the cycle repeats again by finding the next class pair which accounts for the
most remaining confusions. Overfitting by adding too many centers at a time is avoided by
using the predicted squared error (PSE) as the criterion for choosing new centers (Barron,
1984):
PSE = RMS + (C · σ²) / N
In this equation, RMS is the root mean squared error on the training set, σ² estimates the variance of the error, C is the total number of centers in the RBF classifier, and N is the total number of patterns in the training set. The error variance σ² is selected empirically using left-out evaluation data. Different values of σ² are tried and the value which provides the best performance on the evaluation data is chosen. On each cycle, different numbers of centers are tried for each class of the selected class pair and the PSE is used to select the best subset of centers. The best PSE on each cycle is used to determine when training should be stopped to prevent overfitting. Training stops after the PSE has not decreased for five consecutive cycles.
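The PSE formula and the stopping rule translate directly into code. This is a minimal sketch; the sigma2 argument stands for the empirically tuned σ² and the patience helper is an assumed interface, not part of the original LNKnet implementation.

def predicted_squared_error(rms, n_centers, n_patterns, sigma2):
    """PSE = RMS + C * sigma^2 / N (Barron, 1984)."""
    return rms + n_centers * sigma2 / n_patterns

def should_stop(pse_history, patience=5):
    """Stop once the best PSE has not improved for `patience` cycles."""
    if len(pse_history) <= patience:
        return False
    best_recent = min(pse_history[-patience:])
    best_before = min(pse_history[:-patience])
    return best_recent >= best_before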
3 EXPERIMENTAL RESULTS
Two experiments were performed using the new BH-RBF classifier, a more conventional
RBF classifier, a Gaussian mixture classifier (Ng, 1991), and an MLP classifier. Five regular RBF classifiers (RBF) were trained by assigning 1, 2, 3, 4, or 5 centers to each class.
Similarly, five Gaussian mixture classifiers (GMIX) were trained with 1, 2, 3, 4, or 5 centers in each class. The means of each center were trained individually using K-means clustering to find the centers for patterns from each class. The diagonal covariance of each
center was set using all the patterns that were assigned to a cluster during the last pass of
K-means clustering. The structure of the regular RBF classifier and the Gaussian mixture
classifiers are identical when the number of centers is the same. The only difference
between the classifiers is the method used to train parameters.
MLP classifiers were trained for 10 independent trials for each data set. The number of
hidden nodes was varied from 2 to 30 in increments of 2. The goal of the experiment was
to explore the relationship between the complexity of the classifier and the classification
accuracy of the classifier. Training was stopped using cross validation to avoid overfitting.
3.1 FOUR-CLUSTER DATABASE
The first problem is an artificial data set designed to illustrate the difference between BH-RBF and other classifiers. There are two classes; each class consists of one large Gaussian
cluster with 700 random points and three smaller clusters with 100 points each. Figure 3
shows the distribution of the data and the ideal decision boundary if the actual centers and
variances are used to train a Bayesian minimum error classifier. There were 2000 training
patterns, 2000 evaluation patterns, and 2000 test patterns. The BH-RBF classifier was
trained with the closecall threshold set to 0.75, σ² set to 0.5, and a maximum of two extra
centers per class between each pair of classes. The theoretically optimal Bayesian classifier for this database provides an error rate of 1.95% on the test set. This optimal Bayesian classifier is obtained using the actual centers, variances, and a priori probability used
to generate the data in a Gaussian mixture classifier. In a real classification task, these center parameters are not known and have to be estimated from training data.
Figure 4 shows the testing error rate of the three different classifiers. The BH-RBF classifier was able to achieve 2.35% error rate with only 5 centers and the error rate gradually
decreased to 2.15% with 15 centers. The BH-RBF classifier performed well with few centers because it allocated these centers near the boundary between the two classes. On the
other hand, the performance of the RBF classifier and the Gaussian mixture classifier was
worse with few centers. These classifiers perfonned worse because they allocated centers
[Figure 3: The Artificially Generated Four-Cluster Problem. Scatter plot of the two classes in the (X, Y) input plane, together with the ideal Bayesian decision boundary.]
[Figure 4: Testing Error Rate of the BH-RBF Classifier, the Gaussian Mixture Classifier, and the Regular RBF Classifier on the Four-Cluster Problem. Percent error is plotted against the number of centers (0 to 20) for the RBF, GMIX, and BH-RBF classifiers.]
in regions that had many patterns. The training algorithm did not distinguish between patterns that are easily confusable between classes (i.e. near the class boundary) and patterns that clearly belong in a given class. Furthermore, adding more centers did not monotonically decrease the error rate. For example, the RBF classifier had 5% error using two centers, but when the number of centers was increased to four, the error rate jumped to 11%.
Only when the number of centers increased above 14 did the error rates of the RBF classifier and the Gaussian mixture classifier converge. The RBF and the Gaussian mixture classifiers
performed poorly with few centers because the centers were concentrated away from the
decision boundary due to the high concentration of data far away from the boundary.
Thus, there weren't enough centers to model the decision boundary accurately. The BH-RBF classifier added centers near the boundary and thus was able to define an accurate
boundary with fewer centers.
Figure 5 presents the results from training MLP classifiers on the same data set using different numbers of hidden nodes. The learning rate was set to 0.001, the momentum term
was set to 0.6, and each classifier was trained for 100 epochs. The error rate on a left-out
evaluation set was checked to ensure that the net had not overfit the training data. As
the number of hidden nodes increased, the MLP classifier generally performed better.
However, the testing error rate did not decrease monotonically as the number of hidden
nodes increased. Furthermore, the random initial condition set by the different random
seeds affected the classification error rate of each classifier. In comparison, the training
algorithms used for BH-RBF, RBF, and GMIX classifiers do not exhibit such sensitivity
to initial conditions.
[Figure 5: Testing Error Rate of the MLP Classifiers on the Four-Cluster Problem. Minimum and maximum percent error over the 10 trials is plotted against the number of hidden nodes (2 to 28).]
3.2 SEISMIC DATABASE
The second problem consists of data for classification of seismic events. The input consists
of 14 continuous and binary measurements derived from seismic waveform signals. These
features are used to classify a waveform as belonging to one of 7 classes which represent
different seismic phases. There were 3038 training, 3033 evaluation, and 3034 testing patterns.
[Figure 6: Error Rate Comparison Between the BH-RBF Classifier, the Regular RBF Classifier, and the Gaussian Mixture Classifier on the Seismic Problem. Percent error is plotted against the number of centers (0 to 40) for the GMIX, RBF, and BH-RBF classifiers.]
Once again, the number of centers per class was varied from 1 to 5 for the regular
RBF classifier and the Gaussian mixture classifier, while the BH-RBF classifier was
started with 1 center in the first class and then more centers were automatically assigned.
The BH-RBF classifier was trained with the closecall threshold set to 0.75, σ² set to 0.5,
and a maximum of one extra center per class at each boundary. The parameters were chosen according to the performance of the classifier on the left-out evaluation data. For this
problem, the closecall threshold and σ² turned out to be the same as the ones used in the
four-cluster problem.
Figure 6 shows the error rate on the testing patterns for all three classifiers. The BH-RBF
classifier clearly performed better than the regular RBF classifier and the Gaussian mixture classifier. The BH-RBF classifier added centers only at the boundary region where
they improved discrimination. Also, the diagonal covariances of the added centers are more
local in their influence and can improve discrimination of a particular boundary without
affecting other decision region boundaries.
MLP classifiers were also trained on this data set with the number of hidden nodes varying
from 2 to 32 in increments of 2. The learning rate was set to 0.001, the momentum term
was set to 0.6, and each classifier was trained for 100 epochs. The classification error rate
on the left-out evaluation set showed that the network had not overfitted on the training
data. Once more, the MLP classifiers exhibited great sensitivity to initial conditions, especially when the number of hidden nodes were small. Also, for this high dimensionality
classification task, even the best performance of the MLP classifier (15.5%) did not match
the best performance of the BH-RBF classifier. This result suggests that for this high
dimensionality data, the radially symmetric boundaries formed with local basis functions
such as the RBF classifier are more appropriate than the ridge-like boundaries formed with
the MLP classifier.
4 CONCLUSION
A new boundary-hunting RBF classifier was developed which adds RBF centers constructively near boundaries of classes which produce classification confusions. Experimental
results from two problems differing in input dimension, number of classes, and difficulty
show that the BH-RBF classifier performed better than traditional training algorithms used
for RBF, Gaussian mixture, and MLP classifiers. Experiments have also been conducted
on other problems such as Peterson and Barney's vowel database and the disjoint database
used by Ng (Peterson, 1952, Ng, 1990). In all experiments, the BH-RBF constructive
algorithm performed at least as well as the traditional RBF training algorithm. These
results, and the experiments described above, confirm the hypothesis that better discrimination performance can be achieved by training a classifier to perform discrimination
instead of probability density function estimation.
Acknowledgments
This work was supported by DARPA. The views expressed are those of the authors and do
not reflect the official policy or position of the U.S. Government. Experiments were conducted using LNKnet, a general purpose classifier program developed at Lincoln Laboratory by Richard Lippmann, Dave Nation, and Linda Kukolich.
References
G. E. Peterson and H. L. Barney. (1952) Control Methods Used in a Study of Vowels. The
Journal of the Acoustical Society of America 24:2, 175-84.
A. Barron. (1984) Predicted squared error: a criterion for automatic model selection. In S.
Farlow, Editor. Self-Organizing Methods in Modeling. New York, Marcel Dekker.
D. S. Broomhead and D. Lowe. (1988) Radial Basis Functions, multi-variable functional
interpolation and adaptive networks. Technical Report RSRE Memorandum No. 4148,
Royal Speech and Radar Establishment, Malvern, Worcester, Great Britain.
B. W. Mel and S. M. Omohundro. (1991) How Receptive Field Parameters Affect Neural
Learning. In R. Lippmann, J. Moody and D. Touretzky (Eds.), Advances in Neural Information Processing Systems 3, 1991. San Mateo, CA: Morgan Kaufmann.
K. Ng and R. Lippmann. (1991) A Comparative Study of the Practical Characteristics of
Neural Networks and Conventional Pattern Classifiers. In R. Lippmann, 1. Moody and D.
Touretzky (Eds.), Advances in Neural Information Processing Systems 3, 1991. San
Mateo, CA: Morgan Kaufmann.
M.D. Richard and R. P. Lippmann. (1991) Neural Network Classifier Estimates Bayesian
a posteriori Probabilities. Neural Computation, Volume 3, Number 4.
6,799 | 7,150 | Discriminative State-Space Models
Vitaly Kuznetsov
Google Research
New York, NY 10011, USA
[email protected]
Mehryar Mohri
Courant Institute and Google Research
New York, NY 10011, USA
[email protected]
Abstract
We introduce and analyze Discriminative State-Space Models for forecasting nonstationary time series. We provide data-dependent generalization guarantees for
learning these models based on the recently introduced notion of discrepancy. We
provide an in-depth analysis of the complexity of such models. We also study the
generalization guarantees for several structural risk minimization approaches to
this problem and provide an efficient implementation for one of them which is
based on a convex objective.
1 Introduction
Time series data is ubiquitous in many domains including such diverse areas as finance, economics,
climate science, healthcare, transportation and online advertisement. The field of time series analysis
consists of many different problems, ranging from analysis to classification, anomaly detection, and
forecasting. In this work, we focus on the problem of forecasting, which is probably one of the most
challenging and important problems in the field.
Traditionally, time series analysis and time series prediction, in particular, have been approached
from the perspective of generative modeling: a particular generative parametric model is postulated that
is assumed to generate the observations and these observations are then used to estimate unknown
parameters of the model. Autoregressive models are among the most commonly used types of
generative models for time series [Engle, 1982, Bollerslev, 1986, Brockwell and Davis, 1986, Box
and Jenkins, 1990, Hamilton, 1994]. These models typically assume that the stochastic process that
generates the data is stationary up to some known transformation, such as differencing or composition
with natural logarithms.
In many modern real world applications, the stationarity assumption does not hold, which has led
to the development of more flexible generative models that can account for non-stationarity in the
underlying stochastic process. State-Space Models [Durbin and Koopman, 2012, Commandeur and
Koopman, 2007, Kalman, 1960] provide a flexible framework that captures many of such generative
models as special cases, including autoregressive models, hidden Markov models, Gaussian linear
dynamical systems and many other models. This framework typically assumes that the time series Y
is a noisy observation of some dynamical system S that is hidden from the practitioner:
Y_t = h(S_t) + ε_t,    S_t = g(S_{t−1}) + η_t    for all t.    (1)
In (1), h, g are some unknown functions estimated from data, {ε_t}, {η_t} are sequences of random
variables and {S_t} is an unobserved sequence of states of a hidden dynamical system.¹ While this
class of models provides a powerful and flexible framework for time series analysis, the theoretical
learning properties of these models are not sufficiently well understood. The statistical guarantees
available in the literature rely on strong assumptions about the noise terms (e.g. {ε_t} and {η_t} are
Gaussian white noise). Furthermore, these results are typically asymptotic and require the model
¹A more general formulation is given in terms of the distribution of Y_t: p_h(Y_t | S_t) p_g(S_t | S_{t−1}).
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
to be correctly specified. This last requirement places a significant burden on a practitioner since
the choice of the hidden state-space is often a challenging problem and typically requires extensive
domain knowledge.
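As a concrete illustration of (1), here is a minimal simulation of its linear-Gaussian special case (a Kalman-filter-style system). The dimensions, transition matrix, and noise scales are arbitrary choices for illustration only.

import numpy as np

rng = np.random.default_rng(0)
T, d = 200, 2
G = np.array([[0.95, 0.10], [0.0, 0.90]])  # linear state map g
h = np.array([1.0, 0.5])                    # linear observation map h

s = np.zeros(d)
ys = []
for t in range(T):
    s = G @ s + rng.normal(scale=0.1, size=d)  # S_t = g(S_{t-1}) + eta_t
    ys.append(h @ s + rng.normal(scale=0.2))   # Y_t = h(S_t) + eps_t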
In this work, we introduce and study Discriminative State-Space Models (DSSMs). We provide the
precise mathematical definition of this class of models in Section 2. Roughly speaking, a DSSM
follows the same general structure as in (1) and consists of a state predictor g and an observation
predictor h. However, no assumption is made about the form of the stochastic process used to
generate observations. This family of models includes existing generative models and other state-based discriminative models (e.g. RNNs) as special cases, but also consists of some novel algorithmic
solutions explored in this paper.
The material we present is organized as follows. In Section 3, we generalize the notion of discrepancy,
recently introduced by Kuznetsov and Mohri [2015] to derive learning guarantees for DSSMs. We
show that our results can be viewed as a generalization of those of these authors. Our notion of
discrepancy is finer, taking into account the structure of state-space representations, and leads to
tighter learning guarantees. Additionally, our results provide the first high-probability generalization guarantees for state-space models with possibly incorrectly specified models. Structural Risk
Minimization (SRM) for DSSMs is analyzed in Section 4. As mentioned above, the choice of the
state-space representation is a challenging problem since it requires carefully balancing the accuracy
of the model on the training sample with the complexity of DSSM to avoid overfitting. We show
that it is possible to adaptively learn a state-space representation in a principled manner using the
SRM technique. This requires analyzing the complexity of several families of DSSMs of interest
in Appendix B. In Section 5, we use our theory to design an efficient implementation of our SRM
technique. Remarkably, the resulting optimization problem turns out to be convex. This should be
contrasted with traditional SSMs that are often derived via Maximum Likelihood Estimation (MLE)
with a non-convex objective. We conclude with some promising preliminary experimental results in
Appendix D.
2 Preliminaries
In this section, we introduce the general scenario of time series prediction as well as the broad family
of DSSMs considered in this paper.
We study the problem of time series forecasting in which the learner observes a realization (X_1, Y_1), ..., (X_T, Y_T) of some stochastic process, with (X_t, Y_t) ∈ Z = X × Y. We assume that the learner has access to a family of observation predictors H = {h : X × S → Y} and state predictors G = {g : X × S → S}, where S is some pre-defined space. We refer to any pair f = (h, g) ∈ H × G = F as a DSSM, which is used to make predictions as follows:

y_t = h(X_t, s_t),    s_t = g(X_t, s_{t−1})    for all t.    (2)
Observe that this formulation includes the hypothesis sets used in (1) as special cases. In our setting, h and g both accept an additional argument x ∈ X. In practice, if X_t = (Y_{t−1}, ..., Y_{t−p}) ∈ X = Y^p
for some p, then Xt represents some recent history of the stochastic process that is used to make a
prediction of Yt . More generally, X may also contain some additional side information. Elements of
the output space Y may further be multi-dimensional, which covers both multi-variate time series
forecasting and multi-step forecasting.
The performance of the learner is measured using a bounded loss function L : H × S × Z → [0, M], for some upper bound M ≥ 0. A commonly used loss function is the squared loss: L(h, s, z) = (h(x, s) − y)².
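A generic implementation of the recursion (2) together with the squared loss can be sketched as follows. Here h and g are passed as callables, and the initial state s0 is an assumed argument, since initialization is left unspecified in the text.

def dssm_forecast(h, g, xs, s0):
    """Run the DSSM recursion (2): s_t = g(x_t, s_{t-1}), y_hat_t = h(x_t, s_t)."""
    s, preds = s0, []
    for x in xs:
        s = g(x, s)
        preds.append(h(x, s))
    return preds, s

def squared_loss(y_hat, y):
    return (y_hat - y) ** 2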
The objective of the learner is to use the observed realization of the stochastic process up to time T to determine a DSSM f = (h, g) ∈ F that has the smallest expected loss at time T + 1, conditioned on the given realization of the stochastic process:²

L_{T+1}(f | Z_1^T) = E[L(h, s_{T+1}, Z_{T+1}) | Z_1^T],    (3)

²An alternative performance metric commonly considered in the time series literature is the averaged generalization error L_{T+1}(f) = E[L(f, s_{T+1}, Z_{T+1})]. The path-dependent generalization error that we consider in this work is a finer measure of performance since it only takes into consideration the realized history of the stochastic process, as opposed to an average trajectory.
where s_t for all t is specified by g via the recursive computation in (2). We will use the notation a_s^r to denote (a_s, a_{s+1}, ..., a_r).
In the rest of this section, we will introduce the tools needed for the analysis of this problem. The key technical tool that we require is the notion of state-space discrepancy:

disc(s) = sup_{h∈H} ( E[L(h, s_{T+1}, Z_{T+1}) | Z_1^T] − (1/T) Σ_{t=1}^T E[L(h, s_t, Z_t) | Z_1^{t−1}] ),    (4)
where, for simplicity, we used the shorthand s = s_1^{T+1}. This definition is a strict generalization of the q-weighted discrepancy of Kuznetsov and Mohri [2015]. In particular, redefining L(h, s, z) = s · L̃(h, z) and setting s_t = T q_t for 1 ≤ t ≤ T and s_{T+1} = 1 recovers the definition of q-weighted
discrepancy. The discrepancy disc defines an integral probability pseudo-metric on the space of
probability distributions that serves as a measure of the non-stationarity of the stochastic process
Z with respect to both the loss function L and the hypothesis set H, conditioned on the given state
sequence s. For example, if the process Z is i.i.d., then we simply have disc(s) = 0 provided that
s is a constant sequence. See [Cortes et al., 2017, Kuznetsov and Mohri, 2014, 2017, 2016, Zimin
and Lampert, 2017] for further examples and bounds on discrepancy in terms of other divergences.
However, the most important property of the discrepancy disc(s) is that, as shown in Appendix C,
under some additional mild assumptions, it can be estimated from data.
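The estimation procedure itself is deferred to Appendix C. To fix intuition, one common heuristic surrogate replaces the conditional expectations with empirical losses and the supremum with a maximum over a finite pool of hypotheses; the sketch below is that heuristic under those assumptions, not the estimator analyzed in the paper.

def empirical_discrepancy(hypotheses, loss_fn, data, states, window):
    """Heuristic proxy for disc(s): compare average loss over the last
    `window` points (a stand-in for time T+1) against the full average.

    hypotheses: finite pool approximating H; loss_fn(h, z, s) -> loss.
    """
    T = len(data)
    best = 0.0
    for h in hypotheses:
        ls = [loss_fn(h, data[t], states[t]) for t in range(T)]
        recent = sum(ls[T - window:]) / window
        full = sum(ls) / T
        best = max(best, recent - full)
    return best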
The learning guarantees that we present are given in terms of data-dependent measures of sequential
complexity, such as expected sequential covering number [Rakhlin et al., 2010], that are modified to
account for the state-space structure in the hypothesis set. The following definition of a complete binary tree is used throughout this paper: a Z-valued complete binary tree z is a sequence (z_1, ..., z_T) of T mappings z_t : {±1}^{t−1} → Z, t ∈ [1, T]. A path in the tree is σ = (σ_1, ..., σ_{T−1}) ∈ {±1}^{T−1}. We write z_t(σ) instead of z_t(σ_1, ..., σ_{t−1}) to simplify the notation. Let R = R_0 × G be any function class where G is a family of state predictors and R_0 = {r : Z × S → ℝ}. A set V of ℝ-valued trees of depth T is a sequential α-cover (with respect to the ℓ_p norm) of R on a tree z of depth T if for all (r, g) ∈ R and all σ ∈ {±1}^T, there is v ∈ V such that

[ (1/T) Σ_{t=1}^T |v_t(σ) − r(z_t(σ), s_t)|^p ]^{1/p} ≤ α,

where s_t = g(z_t(σ), s_{t−1}). The (sequential) covering number N_p(α, R, z) on a given tree z is defined to be the size of the minimal sequential cover. We call N_p(α, R) = sup_z N_p(α, R, z) the maximal covering number. See Figure 1 for an example.
We define the expected covering number to be E_{z∼T(p)}[N_p(α, R, z)], where T(p) denotes the distribution of z implicitly defined via the following sampling procedure. Given a stochastic process distributed according to the distribution p with p_t(·|z_1^{t−1}) denoting the conditional distribution at time t, sample Z_1, Z_1′ from p_1 independently. In the left child of the root sample Z_2, Z_2′ according to p_2(·|Z_1) and in the right child according to p_2(·|Z_1′), all independent from each other. For a node that can be reached by a path (σ_1, ..., σ_t), we draw Z_t, Z_t′ according to p_t(·|S_1(σ_1), ..., S_{t−1}(σ_{t−1})), where S_t(1) = Z_t and S_t(−1) = Z_t′. Expected sequential covering numbers are a finer measure of
complexity since they directly take into account the distribution of the underlying stochastic process.
For further details on sequential complexity measures, we refer the reader to [Littlestone, 1987,
Rakhlin et al., 2010, 2011, 2015a,b].
3 Theory
In this section, we present our generalization bounds for learning with DSSMs. For our first result, we assume that the sequence of states s (or equivalently the state predictor g) is fixed and we are only learning the observation predictor h.

Theorem 1. Fix s ∈ S^{T+1}. For any δ > 0, with probability at least 1 − δ, for all h ∈ H and all α > 0, the following inequality holds:

L_{T+1}(f | Z_1^T) ≤ (1/T) Σ_{t=1}^T L(h, X_t, s_t) + disc(s) + 2α + M √( 2 log( E_{v∼T(P)}[N_1(α, R_s, v)] / δ ) / T ),

where R_s = {(z, s) → L(h, s, z) : h ∈ H} × {s}.
The proof of Theorem 1 (as well as the proofs of all other results in this paper) is given in Appendix A.
Note that this result is a generalization of the learning guarantees of Kuznetsov and Mohri [2015]. Indeed, setting s = (T q_1, ..., T q_T, 1) for some weight vector q and L(h, s, z) = s · L̃(h, z) recovers Corollary 2 of Kuznetsov and Mohri [2015]. Zimin and Lampert [2017] show that, under some additional assumptions on the underlying stochastic process (e.g. Markov processes, uniform martingales), it is possible to choose these weights to guarantee that the discrepancy disc(s) is small. Alternatively, Kuznetsov and Mohri [2015] show that if the distribution of the stochastic process at times T + 1 and [T − s, T] is sufficiently close (in terms of discrepancy) then disc(s) can be estimated from data. In Theorem 5 in Appendix C, we show that this property holds for arbitrary state sequences s. Therefore, one can use the bound of Theorem 1, which can be computed from data, to search for the predictor h ∈ H that minimizes this quantity. The quality of the result will depend on the given state-space sequence s. Our next result shows that it is possible to learn h ∈ H and s generated by some state predictor g ∈ G jointly.
Theorem 2. For any δ > 0, with probability at least 1 − δ, for all f = (h, g) ∈ H × G and all α > 0, the following inequality holds:

L_{T+1}(f | Z_1^T) ≤ (1/T) Σ_{t=1}^T L(h, X_t, s_t) + disc(s) + 2α + M √( 2 log( E_{v∼T(P)}[N_1(α, R, v)] / δ ) / T ),

where s_t = g(X_t, s_{t−1}) for all t and R = {(z, s) → L(h, s, z) : h ∈ H} × G.
The cost of this significantly more general result is a slightly larger complexity term: N_1(α, R, v) ≥ N_1(α, R_s, v). This bound is also much tighter than the one that can be obtained by applying the result of Kuznetsov and Mohri [2015] directly to F = H × G, which would lead to the same bound as in Theorem 2 but with disc(s) replaced by sup_{g∈G} disc(s). Not only is sup_{g∈G} disc(s) an upper bound on disc(s), but it is possible to construct examples that lead to learning bounds that are too loose. Consider the stochastic process generated as follows. Let X be uniformly distributed on {±1}. Suppose Y_1 = 1 and Y_t = Y_{t−1} for all t > 1 if X = 1, and Y_t = −Y_{t−1} for all t > 1 otherwise. In other words, Y is either periodic or a constant state sequence. If L is the squared loss, for G = {g_1, g_2} with g_1(s) = s and g_2(s) = −s and H = {h} with h(s) = s, for odd T, sup_{g∈G} disc(s) ≥ 1/2. On the other hand, the bound in terms of disc(s) is much finer and helps us select g such that disc(s) = 0 for that g. This example shows that even for simple deterministic dynamics our learning bounds are finer than existing ones.
Since the guarantees of Theorem 2 are data-dependent and hold uniformly over F, they allow us to seek a solution f ∈ F that would directly optimize this bound and that could be computed from the
given sample. As our earlier example shows, the choice of the family of state predictors G is crucial
to achieve good guarantees. For instance, if G = {g1 } then it may be impossible to have a non-trivial
bound. In other words, if the family of state predictors is not rich enough, then it may not be possible to handle the non-stationarity of the data. On the other hand, if G is chosen to be too large, then the
complexity term may be too large. In Section 4, we present an SRM technique that enables us to
learn the state-space representation and adapt to non-stationarity in a principled way.
4 Structural Risk Minimization
Suppose we are given a sequence of families of observation predictors H_1 ⊆ H_2 ⊆ ··· ⊆ H_n ⊆ ··· and a sequence of families of state predictors G_1 ⊆ G_2 ⊆ ··· ⊆ G_n ⊆ ···. Let R_k = {(s, z) → L(h, s, z) : h ∈ H_k} × G_k and R = ∪_{k=1}^∞ R_k. Consider the following objective function:

F(h, g, k) = (1/T) Σ_{t=1}^T L(h, s_t, Z_t) + Δ(s) + B_k + M √( log k / T ),    (5)

where Δ(s) is any upper bound on disc(s) and B_k is any upper bound on M √( 2 log( E_{v∼T(P)}[N_1(α, R_k, v)] ) / T ). We present an estimable upper bound on disc(s) in Appendix C, which provides one particular choice for Δ(s). In Appendix B, we also prove upper bounds on the expected sequential covering numbers for several families of hypotheses. Then, we define the SRM solution as follows:
(h̃, g̃, k̃) = argmin_{(h,g)∈H_k×G_k, k≥1} F(h, g, k).    (6)
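In practice, (6) can be approached by solving the inner fit for each candidate k and then comparing penalized objectives. The fit_model, train_loss, delta_bound, and b_bound callables below are assumed interfaces for whatever learner and bounds Δ(s), B_k one uses; this is a sketch, not the paper's implementation.

import math

def srm_select(ks, fit_model, train_loss, delta_bound, b_bound, T):
    """Minimize F(h, g, k) over k: fit within (H_k, G_k), then penalize.
    M is omitted (losses assumed to lie in [0, 1])."""
    best = None
    for k in ks:  # ks: candidate complexity indices, k >= 1
        h, g, states = fit_model(k)      # empirical risk minimizer in H_k x G_k
        obj = (train_loss(h, g, states)
               + delta_bound(states)     # Delta(s): upper bound on disc(s)
               + b_bound(k)              # B_k: complexity term for (H_k, G_k)
               + math.sqrt(math.log(k) / T))
        if best is None or obj < best[0]:
            best = (obj, h, g, k)
    return best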
We also define f* by f* = (h*, g*) ∈ argmin_{f∈F} L_{T+1}(f | Z_1^T). Then, the following result holds.

Theorem 3. For any δ > 0, with probability at least 1 − δ, for all α > 0, the following bound holds:

L_{T+1}(h̃, g̃ | Z_1^T) ≤ L_{T+1}(f* | Z_1^T) + 2Δ(s*) + 2α + 2B_{k(f*)} + M √( log(2/δ) / T ) + 2M √( log k(f*) / T ),

where s*_t = g*(X_t, s*_{t−1}), and where k(f*) is the smallest integer k such that f* ∈ H_k × G_k.
Theorem 3 provides a learning guarantee for the solution of SRM problem (5). This result guarantees
for the SRM solution a performance close to that of the best-in-class model f ? modulo a penalty
term that includes the discrepancy (of the best-in-class state predictor), similar to the guarantees
of Section 3. This guarantee can be viewed as a worst case bound when we are unsure if the
state-space predictor captures the non-stationarity of the problem correctly. However, in most cases,
by introducing a state-space representation, we hope that it will help us model (at least to some
degree) the non-stationarity of the underlying stochastic process. In what follows, we present a
more optimistic best-case analysis which shows that, under some additional mild assumptions on
the complexity of the hypothesis space with respect to the stochastic process, we can simultaneously
simplify the SRM optimization and give tighter learning guarantees for this modified version.
Assumption 1 (Stability of state trajectories). Assume that there is a decreasing function r such that for any ε > 0 and δ > 0, with probability 1 − δ, if (h*, g*) = argmin_{(h,g)∈F} L_{T+1}(h, g | Z_1^T) and (h, g) ∈ F is such that

(1/T) Σ_{t=1}^T [ L_t(h, g | Z_1^{t−1}) − L_t(h*, g* | Z_1^{t−1}) ] ≤ ε,    (7)

then the following holds:

L_{T+1}(h, g | Z_1^T) − L_{T+1}(h*, g* | Z_1^T) ≤ r(ε).    (8)
Roughly speaking, this assumption states that, given a sequence of states s_1, ..., s_T generated by g such that the performance of some observation predictor h along this sequence of states is close to the performance of the ideal pair h* along the ideal sequence generated by g*, the performance of h in the near future (at state s_{T+1}) will remain close to that of h* (in state s*_{T+1}). Note that, in most cases of interest, r has the form r(ε) = aε, for some a > 0.
Consider the following optimization problem, which is similar to (5) but omits the discrepancy upper bound Δ:

F_0(h, g, k) = (1/T) Σ_{t=1}^T L(h, s_t, Z_t) + B_k + M √( log k / T ),    (9)
We will refer to F0 as an optimistic SRM objective and we let (h0 , g0 ) be a minimizer of F0 . Then,
we have the following learning guarantee.
Theorem 4. Under Assumption 1, for any δ > 0, with probability at least 1 − δ, for all α > 0, the inequality L_{T+1}(h_0, g_0 | Z_1^T) − L_{T+1}(f* | Z_1^T) < r(ε) holds with

ε = 2α + 2B_{k(f*)} + M √( log(2/δ) / T ) + 2M √( log k(f*) / T ),

where s*_t = g*(X_t, s*_{t−1}), and where k(f*) is the smallest integer k such that f* ∈ H_k × G_k.
We remark that a finer analysis can be used to show that Assumption 1 only needs to be satisfied for k ≤ k(f*) for Theorem 4 to hold. Furthermore, observe that for linear functions r(ε) = aε, one recovers
a guarantee similar to the bound in Theorem 3, but the discrepancy term is omitted making this result
tighter. This result suggests that in the optimistic scenarios where our hypothesis set contains a good
state predictor that can capture the data non-stationarity, it is possible to achieve a tighter guarantee
that avoids the pessimistic discrepancy term. Note that, increasing the capacity of the family of
state predictors makes it easier to find such a good state predictor but it also may make the learning
problem harder and lead to the violation of Assumption 1. This further motivates the use of an SRM
technique for this problem to find the right balance between capturing the non-stationarity in data and
the complexity of the models that are being used. Theorem 4 formalizes this intuition by providing
theoretical guarantees for this approach.
We now consider several illustrative examples showing that this assumption holds in a variety of
cases of interest. In all our examples, we will use the squared loss but it is possible to generalize all
of them to other sufficiently regular losses.
Linear models. Let F be defined by F = {f : y → w · Φ(y), ‖w‖ ≤ Λ} for some Λ > 0 and some feature map Φ. Consider a separable case where Y_t = w* · Φ(Y_{t−p}^{t−1}) + ε_t, where ε_t represents white noise. One can verify that the following equality holds:

L_t(w | Z_1^{t−1}) = E[ (w · Φ(Y_{t−p}^{t−1}) − Y_t)² | Y_1^{t−1} ] = [ (w − w*) · Φ(Y_{t−p}^{t−1}) ]².
In view of that, it follows that (7) is equal to

(1/T) Σ_{t=1}^T [ (w − w*) · Φ(Y_{t−p}^{t−1}) ]² ≥ (1/T) Σ_{t=1}^T (w_j − w*_j)² Φ_j(Y_{t−p}^{t−1})²

for any coordinate j ∈ [1, N]. Thus, for any coordinate j ∈ [1, N], by Hölder's inequality, we have

L_{T+1}(h, g | Z_1^T) − L_{T+1}(h*, g* | Z_1^T) = [ (w − w*) · Φ(Y_{T−p+1}^T) ]² ≤ r ε Σ_{j=1}^N 1/λ_j,

where λ_j = (1/T) Σ_{t=1}^T Φ_j(Y_{t−p}^{t−1})² is the empirical variance of the j-th coordinate and where r = sup_y ‖Φ(y)‖²_∞ is the empirical ℓ_∞-norm radius of the data. The special case where Φ is the identity map covers standard autoregressive models. These often serve as basic building blocks for other state-space models, as discussed below. More generally, other feature maps Φ may be induced by
a positive definite kernel K. Alternatively, we may take as our hypothesis set F the convex hull of
all decision trees of certain depth d. In that case, we can view each coordinate j as the output of a
particular decision tree on the given input.
Linear trend models. For simplicity, in this example, we consider univariate time series with linear
trend. However, this can be easily generalized to the multi-variate setting with different trend models.
Define G as G = {s → s + c : |c| ≤ Λ} for some Λ > 0 and let H be a singleton consisting of the identity map. Assume that Y_t = c* t + ε_t, where ε_t is white noise. As in the previous example, it is easy to check that L_t(h, g | Z_1^{t−1}) = |c − c*|² t². Therefore, in this case, one can show that (7) reduces to (1/3)(T + 1)(2T + 1)|c − c*|², and therefore, if ε = O(√(1/T)), then we have |c − c*|² = O(1/T^{5/2}) and thus (8) is |c − c*|²(T + 1)² = O(√(1/T)).
Periodic signals. We study a multi-resolution setting where the time series of interest are modeled
as a linear combination of periodic signals at different frequencies. We express this as a state-space
model as follows. Define
A_d = [ −1       −1 ]
      [ I_{d−1}   0 ],

where 1 is a (d − 1)-dimensional row vector of 1s, 0 is a (d − 1)-dimensional column vector of 0s, and I_{d−1} is an identity matrix. It is easy to verify that, under the map s → A_d s, the sequence s_1 · e_1, s_2 · e_1, ..., s_t · e_1, ..., where e_1 = (1, 0, ..., 0)^T, is a periodic sequence with period d. Let D = d_1, ..., d_k be any collection of positive integers and let A be a block-diagonal matrix with A_{d_1}, ..., A_{d_k} on the diagonal. We set G = {s → A · s} and H = {s → w · s : ‖w‖ < Λ}, where we also restrict w to be non-zero only at coordinates 1, 1 + d_1, 1 + d_1 + d_2, ..., 1 + Σ_{j=1}^{k−1} d_j. Once again, to simplify our presentation, we assume that Y_t satisfies Y_t = w* · s_t + ε_t. Using arguments similar to those of the previous examples, one can show that (7) is lower bounded by (w_j − w*_j)² (1/T) Σ_{t=1}^T s_{t,j}² for any coordinate j. Therefore, as before, if (7) is upper bounded by ε > 0, then (8) is upper bounded by r ε Σ_{j=1}^N 1/λ_j, where r is the maximal radius of any state and λ_j the variance of the j-th state sequence.
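The block-diagonal construction above is a few lines of code. In the sketch below a cyclic-shift block is used in place of the companion-style A_d from the text, as the simplest matrix whose powers repeat with period exactly d; it plays the same role of generating periodic state trajectories, and the helper names are illustrative.

import numpy as np

def cyclic_block(d):
    """d x d cyclic shift: its powers repeat with period exactly d."""
    P = np.zeros((d, d))
    P[0, -1] = 1.0
    P[1:, :-1] = np.eye(d - 1)
    return P

def block_diagonal(blocks):
    """Block-diagonal A with one block per period, as in the construction above."""
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n))
    i = 0
    for b in blocks:
        d = b.shape[0]
        A[i:i + d, i:i + d] = b
        i += d
    return A

A = block_diagonal([cyclic_block(7), cyclic_block(12)])
s = np.zeros(A.shape[0]); s[0] = 1.0; s[7] = 1.0  # e_1 of each block
for _ in range(5):
    s = A @ s  # the first coordinate of each block cycles with periods 7 and 12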
Trajectory ensembles. Note that, in our previous example, we did not exploit the fact that the
sequences were periodic. Indeed, our argument holds for any g that generates a multi-dimensional
trajectory h ∈ H = {s → w · s : ‖w‖ < Λ}, which can be interpreted as learning an ensemble of
different state-space trajectories.
Structural Time Series Models (STSMs). STSMs are a popular family of state-space models that
combine all of the previous examples. For this model, we use (h, g) 2 H ? G that have the following
structure: h(x_t, g(s_t)) = w · Φ(x_t) + c t + w′ · s_t, where s_t is a vector of periodic sequences described
in the previous examples and xt is the vector representing the most recent history of the time series.
Note that our formulation is very general and allows for arbitrary feature maps that can correspond
either to kernel-based or tree-based models. Arguments similar to those given in previous examples
show that Assumption 1 holds in this case.
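Putting the pieces together, an STSM observation predictor is just the sum of the three components above. In this sketch, phi stands for the feature map Φ and the argument names are assumed; it illustrates the composition only.

import numpy as np

def stsm_predict(w, c, w_seasonal, phi, x_t, t, s_t):
    """h(x_t, s_t) = w . phi(x_t) + c * t + w_seasonal . s_t (structural model above)."""
    return float(np.dot(w, phi(x_t)) + c * t + np.dot(w_seasonal, s_t))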
Shifting parameters. We consider the non-realizable case where H is a set of linear models but
where the data is generated according to the following procedure. The first T /2 rounds obey the
formula Y_t = w_0 Y_{t−1} + ε_t, the subsequent rounds the formula Y_t = w* Y_{t−1} + ε_t. Note that, in this case, we have | (1/T) Σ_{t=1}^T [ L_t(w_0 | Z_1^{t−1}) − L_t(w* | Z_1^{t−1}) ] | = 0. However, if w_0 and w* are sufficiently far apart, it is possible to show that there is a constant lower bound on L_{T+1}(w_0 | Z_1^T) − L_{T+1}(w* | Z_1^T).
resulting learning problem is separable. However, that requires us to know the exact nature of the
underlying stochastic process. An alternative agnostic approach, is to consider a sequence of states
(or equivalently weights) that can assign different weights qt to different training points.
Finally, observe that our learning guarantees in Sections 3 and 4 are expressed in terms of the expected
sequential covering numbers of the family of DSSMs that we are seeking to learn. A priori, it is
not clear if it is possible to control the complexity of such models in a meaningful way. However,
in Appendix B, we present explicit upper bounds on the expected sequential covering numbers of
several families of DSSMs, including several of those discussed above: linear models, tree-based
hypothesis, and trajectory ensembles.
5 Algorithmic Solutions
The generic SRM procedures described in Section 4 can lead to the design of a range of different
algorithmic solutions for forecasting time series, depending on the choice of the families H_k and F_k. The key challenge for algorithm design in this setting is to come up with a
tractable procedure for searching through sets of increasing complexity. In this section, we describe
one such procedure that leads to a boosting-style algorithm. Our algorithm learns a structural time
series model by adaptively adding various structural subcomponents to the model in order to balance
model complexity and the ability of the model to handle non-stationarity in data. We refer to our
algorithm as Boosted Structural Time Series Models (BOOSTSM).
We will discuss BOOSTSM in the context of the squared loss, but most of our results can be
straightforwardly extended to other convex loss functions. The hypothesis set used by our algorithm
admits the following form: H = {(x, s) → w · Φ(x) + w′ · s : ‖w‖₁ ≤ Λ, ‖w′‖₁ ≤ Λ′}. Each coordinate Φ_j is a binary-valued decision tree that maps its inputs to a bounded set. For simplicity, we also assume that Λ = Λ′ = 1. We choose G to be any set of state trajectories. For instance, this set
may include periodic or trend sequences as described in Section 4.
Note that, to make the discussion concrete, we impose an ℓ₁ constraint on the parameter vectors, but other regularization penalties are also possible. The particular choice of the regularization defined by
H would also lead to sparser solutions, which is an additional advantage given that our state-space
representation is high-dimensional.
For the squared loss and the aforementioned H, the optimistic SRM objective (9) is given by
F(w, w′) = (1/T) Σ_{t=1}^T ( y_t − w · Φ(x_t) − w′ · s_t )² + λ( ‖w‖₁ + ‖w′‖₁ ),    (10)

where we omit log(k) because the index k in our setting tracks the maximal depth of the tree and it suffices to restrict the search to the case k < T, as for deeper trees we can achieve zero empirical error. With this upper bound on k, O(√(log T / T)) is small and hence not included in the objective.
BOOSTSM(S = ((x_t, y_t))_{t=1}^T)
 1   f_0 ← 0
 2   for k ← 1 to K do
 3       j ← argmin_j  ε_{k,j} + sgn(w_j)
 4       j′ ← argmin_{j′}  δ_{k,j′} + sgn(w′_{j′})
 5       if ε_{k,j} + sgn(w_j) ≤ δ_{k,j′} + sgn(w′_{j′}) then
 6           η_k ← argmin_η F(w + η e_j, w′)
 7           f_k ← f_{k−1} + η_k Φ_j
 8       else η_k ← argmin_η F(w, w′ + η e_{j′})
 9           f_k ← f_{k−1} + η_k s_{·,j′}
10   return f_K
Figure 1: Pseudocode of the BOOSTSM algorithm. On lines 3 and 4, two candidates are selected to be
added to the ensemble: a state trajectory with index j′ or a tree-based predictor with index j. Both of these
minimize their subgradients within their family of weak learners. Subgradients are defined by (11).
The candidate with the smallest gradient is added to the ensemble. The weight of the new ensemble
member is found via line search (line 6 and 8).
The regularization penalty is directly derived from the bounds on the expected sequential covering
numbers of H given in Appendix B in Lemma 4 and Lemma 5.
Observe that (10) is a convex objective function. Our BOOSTSM algorithm is defined by the
application of coordinate descent to this objective. Figure 1 gives its pseudocode. The algorithm
proceeds in K rounds. At each round, we either add a new predictor tree or a new state-space
trajectory to the model, depending on which results in a greater decrease in the objective. In particular,
with the following definitions:
ε_{k,j} = (1/T) Σ_{t=1}^T ( y_t − f_{k−1}(x_t, s_t) ) Φ_j(x_t),        δ_{k,j} = (1/T) Σ_{t=1}^T ( y_t − f_{k−1}(x_t, s_t) ) s_{t,j},    (11)
the subgradient in the tree-space direction j at round k is given by ε_{k,j} + sgn(w_{k,j}). We use the notation w_k to denote the tree-space parameter vector after k − 1 rounds. Similarly, the subgradient in the trajectory-space direction j′ is given by δ_{k,j′} + sgn(w′_{k,j′}), where w′_k represents the trajectory-space parameter vector after k − 1 rounds.
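One round of the coordinate descent can be sketched as follows. The residual-correlation quantities follow (11), the ε/δ names mirror the reconstruction above, and the closed-form line search is an assumed simplification (exact for the unregularized squared loss), not the paper's implementation.

import numpy as np

def boostsm_round(w, w_s, Phi, S, y):
    """One BOOSTSM round. Phi: (T, N) tree outputs; S: (T, K) state trajectories."""
    T = len(y)
    resid = y - Phi @ w - S @ w_s               # y_t - f_{k-1}(x_t, s_t)
    eps = resid @ Phi / T                       # eps_{k,j} from (11)
    dlt = resid @ S / T                         # delta_{k,j'} from (11)
    j = int(np.argmin(eps + np.sign(w)))        # line 3
    jp = int(np.argmin(dlt + np.sign(w_s)))     # line 4
    if eps[j] + np.sign(w[j]) <= dlt[jp] + np.sign(w_s[jp]):  # line 5
        # Closed-form step along coordinate j (assumes a nonzero column).
        eta = resid @ Phi[:, j] / (Phi[:, j] @ Phi[:, j])
        w = w.copy(); w[j] += eta
    else:
        eta = resid @ S[:, jp] / (S[:, jp] @ S[:, jp])
        w_s = w_s.copy(); w_s[jp] += eta
    return w, w_s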
By standard results in optimization theory [Luo and Tseng, 1992], BOOSTSM admits a linear
convergence guarantee.
6 Conclusion
We introduced a new family of models for forecasting non-stationary time series, Discriminative State-Space Models. This family includes existing generative models and other state-based discriminative
models (e.g. RNNs) as special cases, but also covers several novel algorithmic solutions explored
in this paper. We presented an analysis of the problem of learning DSSMs in the most general
setting of non-stationary stochastic processes and proved finite-sample data-dependent generalization
bounds. These learning guarantees are novel even for traditional state-space models since the existing
guarantees are only asymptotic and require the model to be correctly specified. We fully analyzed the
complexity of several DSSMs that are useful in practice. Finally, we also studied the generalization
guarantees of several structural risk minimization approaches to this problem and provided an
efficient implementation of one such algorithm which is based on a convex objective. We report some
promising preliminary experimental results in Appendix D.
Acknowledgments
This work was partly funded by NSF CCF-1535987 and NSF IIS-1618662, as well as a Google
Research Award.
References
Rakesh D. Barve and Philip M. Long. On the complexity of learning from drifting distributions. In
COLT, 1996.
Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. J Econometrics, 1986.
George Edward Pelham Box and Gwilym Jenkins. Time Series Analysis, Forecasting and Control.
Holden-Day, Incorporated, 1990.
Peter J Brockwell and Richard A Davis. Time Series: Theory and Methods. Springer-Verlag, New
York, 1986.
J.J.F. Commandeur and S.J. Koopman. An Introduction to State Space Time Series Analysis. OUP
Oxford, 2007.
Corinna Cortes, Giulia DeSalvo, Vitaly Kuznetsov, Mehryar Mohri, and Scott Yang. Multi-armed bandits with non-stationary rewards. CoRR, abs/1710.10657, 2017.
Victor H. de la Peña and Evarist Giné. Decoupling: from dependence to independence: randomly stopped processes, U-statistics and processes, martingales and beyond. Probability and its applications. Springer, NY, 1999.
J. Durbin and S.J. Koopman. Time Series Analysis by State Space Methods: Second Edition. Oxford
Statistical Science Series. OUP Oxford, 2012.
Robert Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United
Kingdom inflation. Econometrica, 50(4):987-1007, 1982.
James D. Hamilton. Time series analysis. Princeton, 1994.
Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. Transactions of
the ASME?Journal of Basic Engineering, 82(Series D), 1960.
Vitaly Kuznetsov and Mehryar Mohri. Generalization bounds for time series prediction with nonstationary processes. In ALT, 2014.
Vitaly Kuznetsov and Mehryar Mohri. Learning theory and algorithms for forecasting non-stationary
time series. In Advances in Neural Information Processing Systems 28, pages 541?549, 2015.
Vitaly Kuznetsov and Mehryar Mohri. Time series prediction and on-line learning. In Proceedings of
The 29th Conference on Learning Theory, COLT 2016, 2016.
Vitaly Kuznetsov and Mehryar Mohri. Generalization bounds for non-stationary mixing processes.
Machine Learning, 106(1):93?117, 2017.
M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag, 1991.
Nick Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold
algorithm. Machine Learning, 1987.
Zhi-Quan Luo and Paul Tseng. On the convergence of coordinate descent method for convex
differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages,
combinatorial parameters, and learnability. In NIPS, 2010.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Stochastic, constrained,
and smoothed adversaries. In NIPS, 2011.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform
martingale laws of large numbers. Probability Theory and Related Fields, 2015a.
Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning via sequential complexities. JMLR, 16(1), January 2015b.
Alexander Zimin and Christopher H. Lampert. Learning theory for conditional risk minimization. In
AISTATS, 2017.