Stein Variational Gradient Descent: A General
Purpose Bayesian Inference Algorithm
Qiang Liu
Dilin Wang
Department of Computer Science
Dartmouth College
Hanover, NH 03755
{qiang.liu, dilin.wang.gr}@dartmouth.edu
Abstract
We propose a general purpose variational inference algorithm that forms a natural
counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of
functional gradient descent that minimizes the KL divergence. Empirical studies
are performed on various real world models and datasets, on which our method is
competitive with existing state-of-the-art methods. The derivation of our method
is based on a new theoretical result that connects the derivative of KL divergence
under smooth transforms with Stein's identity and a recently proposed kernelized
Stein discrepancy, which is of independent interest.
1 Introduction
Bayesian inference provides a powerful tool for modeling complex data and reasoning under uncertainty, but poses the long-standing challenge of computing intractable posterior distributions. Markov chain Monte Carlo (MCMC) has been widely used to draw approximate posterior samples, but is often slow and has difficulty assessing convergence. Variational inference instead frames Bayesian inference as a deterministic optimization problem that approximates the target distribution with a simpler distribution by minimizing their KL divergence. This makes variational methods efficiently solvable using off-the-shelf optimization techniques, and easily applicable to large datasets (i.e., "big data") using the stochastic gradient descent trick [e.g., 1]. In contrast, it is much more challenging to scale up MCMC to big data settings [see e.g., 2, 3].
Meanwhile, both the accuracy and computational cost of variational inference critically depend on the set of distributions in which the approximation is sought. Simple approximation sets, such as those used in traditional mean field methods, are too restrictive to resemble the true posterior distributions, while more advanced choices cast more difficulties on the subsequent optimization tasks. For this reason, efficient variational methods often need to be derived on a model-by-model basis, which is a major barrier to developing general purpose, user-friendly variational tools applicable to different kinds of models and accessible to non-ML experts in application domains.
This situation is in contrast with maximum a posteriori (MAP) optimization for finding the posterior mode (sometimes known as the poor man's Bayesian estimator, in contrast with full Bayesian inference, which approximates the full posterior distribution), for which variants of (stochastic) gradient descent serve as a simple, generic, yet extremely powerful toolbox. There has been a recent growth of interest in creating user-friendly variational inference tools [e.g., 4-7], but more effort is still needed to develop more efficient general purpose algorithms.
In this work, we propose a new general purpose variational inference algorithm which can be treated
as a natural counterpart of gradient descent for full Bayesian inference (see Algorithm 1). Our
algorithm uses a set of particles for approximation, on which a form of (functional) gradient descent
is performed to minimize the KL divergence and drive the particles to fit the true posterior distribution.
Our algorithm has a simple form, and can be applied whenever gradient descent can be applied. In
fact, it reduces to gradient descent for MAP when using only a single particle, and automatically turns into a full Bayesian sampling approach with more particles.
Underlying our algorithm is a new theoretical result that connects the derivative of the KL divergence w.r.t. smooth variable transforms to a recently introduced kernelized Stein discrepancy [8-10], which allows us to derive a closed form solution for the optimal smooth perturbation direction that gives the steepest descent on the KL divergence within the unit ball of a reproducing kernel Hilbert space (RKHS). This new result is of independent interest, and can find wide application in machine learning and statistics beyond variational inference.
2 Background
Preliminary. Let $x$ be a continuous random variable or parameter of interest taking values in $\mathcal{X} \subseteq \mathbb{R}^d$, and let $\{D_k\}$ be a set of i.i.d. observations. With prior $p_0(x)$, Bayesian inference of $x$ involves reasoning with the posterior distribution $p(x) := \bar p(x)/Z$, where $\bar p(x) := p_0(x) \prod_{k=1}^N p(D_k \mid x)$ and $Z = \int \bar p(x)\, dx$ is the troublesome normalization constant. We have dropped the conditioning on the data $\{D_k\}$ in $p(x)$ for convenience.
Let $k(x, x')\colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a positive definite kernel. The reproducing kernel Hilbert space (RKHS) $\mathcal{H}$ of $k(x, x')$ is the closure of the linear span $\{f : f(x) = \sum_{i=1}^m a_i k(x, x_i),\ a_i \in \mathbb{R},\ m \in \mathbb{N},\ x_i \in \mathcal{X}\}$, equipped with inner product $\langle f, g\rangle_{\mathcal{H}} = \sum_{ij} a_i b_j k(x_i, x_j)$ for $g(x) = \sum_i b_i k(x, x_i)$. Denote by $\mathcal{H}^d$ the space of vector functions $f = [f_1, \ldots, f_d]^\top$ with $f_i \in \mathcal{H}$, equipped with inner product $\langle f, g\rangle_{\mathcal{H}^d} = \sum_{i=1}^d \langle f_i, g_i\rangle_{\mathcal{H}}$. We assume all vectors are column vectors. Let $\nabla_x f = [\nabla_x f_1, \ldots, \nabla_x f_d]$.
Stein's Identity and Kernelized Stein Discrepancy. Stein's identity plays a fundamental role in our framework. Let $p(x)$ be a continuously differentiable (also called smooth) density supported on $\mathcal{X} \subseteq \mathbb{R}^d$, and $\phi(x) = [\phi_1(x), \cdots, \phi_d(x)]^\top$ a smooth vector function. Stein's identity states that for sufficiently regular $\phi$, we have

$$\mathbb{E}_{x\sim p}[\mathcal{A}_p \phi(x)] = 0, \quad \text{where} \quad \mathcal{A}_p \phi(x) = \nabla_x \log p(x)\, \phi(x)^\top + \nabla_x \phi(x), \tag{1}$$

where $\mathcal{A}_p$ is called the Stein operator, which acts on the function $\phi$ and yields a zero-mean function $\mathcal{A}_p \phi(x)$ under $x \sim p$. This identity can be easily checked using integration by parts, assuming mild zero boundary conditions on $\phi$: either $p(x)\phi(x) = 0,\ \forall x \in \partial\mathcal{X}$ when $\mathcal{X}$ is compact, or $\lim_{r\to\infty} \oint_{B_r} p(x)\phi(x)^\top n(x)\, dS = 0$ when $\mathcal{X} = \mathbb{R}^d$, where $\oint_{B_r}$ is the surface integral on the sphere $B_r$ of radius $r$ centered at the origin and $n(x)$ is the unit normal to $B_r$. We say that $\phi$ is in the Stein class of $p$ if Stein's identity (1) holds.
Now let $q(x)$ be a different smooth density also supported in $\mathcal{X}$, and consider the expectation of $\mathcal{A}_p \phi(x)$ under $x \sim q$; then $\mathbb{E}_{x\sim q}[\mathcal{A}_p \phi(x)]$ no longer equals zero for general $\phi$. Instead, the magnitude of $\mathbb{E}_{x\sim q}[\mathcal{A}_p \phi(x)]$ relates to how different $p$ and $q$ are, and can be leveraged to define a discrepancy measure, known as the Stein discrepancy, by considering the "maximum violation of Stein's identity" for $\phi$ in some proper function set $\mathcal{F}$:

$$\mathbb{D}(q, p) = \max_{\phi \in \mathcal{F}} \mathbb{E}_{x\sim q}[\operatorname{trace}(\mathcal{A}_p \phi(x))].$$

Here the choice of the function set $\mathcal{F}$ is critical, and determines both the discriminative power and the computational tractability of the Stein discrepancy. Traditionally, $\mathcal{F}$ is taken to be a set of functions with bounded Lipschitz norms, which unfortunately casts a challenging functional optimization problem that is computationally intractable or requires special considerations (see Gorham and Mackey [11] and references therein).
Kernelized Stein discrepancy (KSD) bypasses this difficulty by maximizing $\phi$ in the unit ball of a reproducing kernel Hilbert space (RKHS), for which the optimization has a closed form solution. KSD is defined as

$$\mathbb{D}(q, p) = \max_{\phi \in \mathcal{H}^d} \mathbb{E}_{x\sim q}[\operatorname{trace}(\mathcal{A}_p \phi(x))], \quad \text{s.t.} \quad \|\phi\|_{\mathcal{H}^d} \le 1, \tag{2}$$
where we assume the kernel $k(x, x')$ of the RKHS $\mathcal{H}$ is in the Stein class of $p$ as a function of $x$ for any fixed $x' \in \mathcal{X}$. The optimal solution of (2) has been shown to be $\phi(x) = \phi^*_{q,p}(x)/\|\phi^*_{q,p}\|_{\mathcal{H}^d}$ [8-10], where

$$\phi^*_{q,p}(\cdot) = \mathbb{E}_{x\sim q}[\mathcal{A}_p k(x, \cdot)], \quad \text{for which we have} \quad \mathbb{D}(q, p) = \|\phi^*_{q,p}\|_{\mathcal{H}^d}. \tag{3}$$

One can further show that $\mathbb{D}(q, p)$ equals zero (and equivalently $\phi^*_{q,p}(x) \equiv 0$) if and only if $p = q$, once $k(x, x')$ is strictly positive definite in a proper sense [see 8, 10]; this is satisfied by commonly used kernels such as the RBF kernel $k(x, x') = \exp(-\frac{1}{h}\|x - x'\|_2^2)$. Note that the RBF kernel is also in the Stein class of smooth densities supported in $\mathcal{X} = \mathbb{R}^d$ because of its decaying property.
Both the Stein operator and KSD depend on $p$ only through the score function $\nabla_x \log p(x)$, which can be calculated without knowing the normalization constant of $p$, because $\nabla_x \log p(x) = \nabla_x \log \bar p(x)$ when $p(x) = \bar p(x)/Z$. This property makes Stein's identity a powerful tool for handling unnormalized distributions that appear widely in machine learning and statistics.
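As a quick sanity check of this property, the following sketch (our own illustration, not the paper's released code) verifies Stein's identity (1) by Monte Carlo for a one-dimensional standard normal $p$ and $\phi(x) = x$; the score is computed from the unnormalized density $\bar p(x) = \exp(-x^2/2)$, with no reference to $Z$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

# Score from the unnormalized density p_bar(x) = exp(-x^2 / 2):
# d/dx log p_bar(x) = -x, independent of the normalizer Z.
score = -x
# Stein operator with phi(x) = x: A_p phi(x) = phi(x) * score + d/dx phi(x).
stein_values = x * score + 1.0
print(stein_values.mean())  # close to 0, as Stein's identity (1) predicts
```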
3 Variational Inference Using Smooth Transforms
Variational inference approximates the target distribution $p(x)$ using a simpler distribution $q^*(x)$ found in a predefined set $\mathcal{Q} = \{q(x)\}$ of distributions by minimizing the KL divergence, that is,

$$q^* = \arg\min_{q \in \mathcal{Q}} \mathrm{KL}(q \,\|\, p) \equiv \mathbb{E}_q[\log q(x)] - \mathbb{E}_q[\log \bar p(x)] + \log Z, \tag{4}$$

where we do not need to calculate the constant $\log Z$ to solve the optimization. The choice of the set $\mathcal{Q}$ is critical and defines different types of variational inference methods. The best set $\mathcal{Q}$ should strike a balance between i) accuracy, broad enough to closely approximate a large class of target distributions; ii) tractability, consisting of simple distributions that are easy for inference; and iii) solvability, so that the subsequent KL minimization problem can be efficiently solved.
In this work, we focus on sets $\mathcal{Q}$ consisting of distributions obtained by smooth transforms from a tractable reference distribution; that is, we take $\mathcal{Q}$ to be the set of distributions of random variables of the form $z = T(x)$, where $T\colon \mathcal{X} \to \mathcal{X}$ is a smooth one-to-one transform and $x$ is drawn from a tractable reference distribution $q_0(x)$. By the change of variables formula, the density of $z$ is

$$q_{[T]}(z) = q(T^{-1}(z)) \cdot |\det(\nabla_z T^{-1}(z))|,$$

where $T^{-1}$ denotes the inverse map of $T$ and $\nabla_z T^{-1}$ the Jacobian matrix of $T^{-1}$. Such distributions are computationally tractable, in the sense that expectations under $q_{[T]}$ can be easily evaluated by averaging $\{z_i\}$ with $z_i = T(x_i)$ and $x_i \sim q_0$. Such $\mathcal{Q}$ can also in principle closely approximate almost arbitrary distributions: it can be shown that there always exists a measurable transform $T$ between any two distributions without atoms (i.e., no single point carries positive mass); in addition, for Lipschitz continuous densities $p$ and $q$, there always exist transforms between them that are at least as smooth as both $p$ and $q$. We refer the reader to Villani [12] for an in-depth discussion of this topic.
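As a concrete instance of this formula, the following sketch (our own illustration, with made-up values for $A$ and $b$) evaluates the density of $z = T(x)$ for an affine transform $T(x) = Ax + b$ applied to a standard Gaussian reference $q_0$:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Affine transform T(x) = A x + b with invertible A (hypothetical values).
A = np.array([[2.0, 0.3], [0.0, 1.5]])
b = np.array([1.0, -1.0])
q0 = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))

def q_T(z):
    # Change of variables: q_[T](z) = q0(T^{-1}(z)) * |det(grad_z T^{-1}(z))|,
    # where for an affine map |det(grad_z T^{-1})| = 1 / |det A|.
    x = np.linalg.solve(A, z - b)
    return q0.pdf(x) / abs(np.linalg.det(A))

print(q_T(np.array([1.0, -1.0])))  # density of z = T(x) at the image of x = 0
```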
In practice, however, we need to restrict the set of transforms $T$ properly to make the corresponding variational optimization in (4) practically solvable. One approach is to consider $T$ with a certain parametric form and optimize the corresponding parameters [e.g., 13, 14]. However, this introduces the difficult problem of selecting a proper parametric family to balance accuracy, tractability and solvability, especially considering that $T$ has to be a one-to-one map with an efficiently computable Jacobian matrix.

Instead, we propose a new algorithm that iteratively constructs incremental transforms that effectively perform steepest descent on $T$ in RKHS. Our algorithm requires neither an explicit parametric form nor calculation of the Jacobian matrix, and has a particularly simple form that mimics the typical gradient descent algorithm, making it easily implementable even for non-experts in variational inference.
3.1 Stein Operator as the Derivative of KL Divergence
To explain how we minimize the KL divergence in (4), we consider an incremental transform formed by a small perturbation of the identity map: $T(x) = x + \epsilon\phi(x)$, where $\phi(x)$ is a smooth function
that characterizes the perturbation direction and the scalar $\epsilon$ represents the perturbation magnitude. When $|\epsilon|$ is sufficiently small, the Jacobian of $T$ is full rank (close to the identity matrix), and hence $T$ is guaranteed to be a one-to-one map by the inverse function theorem.
The following result, which forms the foundation of our method, draws an insightful connection between the Stein operator and the derivative of the KL divergence w.r.t. the perturbation magnitude $\epsilon$.

Theorem 3.1. Let $T(x) = x + \epsilon\phi(x)$ and $q_{[T]}(z)$ the density of $z = T(x)$ when $x \sim q(x)$; we have

$$\nabla_\epsilon \mathrm{KL}(q_{[T]} \,\|\, p)\big|_{\epsilon=0} = -\mathbb{E}_{x\sim q}[\operatorname{trace}(\mathcal{A}_p \phi(x))], \tag{5}$$

where $\mathcal{A}_p \phi(x) = \nabla_x \log p(x)\, \phi(x)^\top + \nabla_x \phi(x)$ is the Stein operator.
Relating this to the definition of KSD in (2), we can identify $\phi^*_{q,p}$ in (3) as the optimal perturbation direction that gives the steepest descent on the KL divergence in zero-centered balls of $\mathcal{H}^d$.

Lemma 3.2. Assume the conditions in Theorem 3.1. Among all perturbation directions $\phi$ in the ball $\mathcal{B} = \{\phi \in \mathcal{H}^d : \|\phi\|_{\mathcal{H}^d} \le \mathbb{D}(q, p)\}$ of the vector-valued RKHS $\mathcal{H}^d$, the direction of steepest descent that maximizes the negative gradient in (5) is $\phi^*_{q,p}$ in (3), i.e.,

$$\phi^*_{q,p}(\cdot) = \mathbb{E}_{x\sim q}[\nabla_x \log p(x)\, k(x, \cdot) + \nabla_x k(x, \cdot)], \tag{6}$$

for which (5) equals the square of the KSD, that is, $\nabla_\epsilon \mathrm{KL}(q_{[T]} \,\|\, p)\big|_{\epsilon=0} = -\mathbb{D}^2(q, p)$.
The result in Lemma 3.2 suggests an iterative procedure that transforms an initial reference distribution $q_0$ to the target distribution $p$: we start by applying the transform $T^*_0(x) = x + \epsilon_0 \cdot \phi^*_{q_0,p}(x)$ on $q_0$, which decreases the KL divergence by an amount of $\epsilon_0 \cdot \mathbb{D}^2(q_0, p)$, where $\epsilon_0$ is a small step size; this gives a new distribution $q_1(x) = q_{0[T^*_0]}(x)$, on which a further transform $T^*_1(x) = x + \epsilon_1 \cdot \phi^*_{q_1,p}(x)$ can further decrease the KL divergence by $\epsilon_1 \cdot \mathbb{D}^2(q_1, p)$. Repeating this process, one constructs a path of distributions $\{q_\ell\}_{\ell=1}^n$ between $q_0$ and $p$ via

$$q_{\ell+1} = q_{\ell[T^*_\ell]}, \quad \text{where} \quad T^*_\ell(x) = x + \epsilon_\ell \cdot \phi^*_{q_\ell,p}(x). \tag{7}$$

This eventually converges to the target $p$ with sufficiently small step sizes $\{\epsilon_\ell\}$, under which $\phi^*_{p,q_\infty}(x) \equiv 0$ and $T^*_\infty$ reduces to the identity map. Recall that $q_\infty = p$ if and only if $\phi^*_{p,q_\infty}(x) \equiv 0$.
Functional Gradient. To gain further intuition on this process, we now reinterpret (6) as a functional gradient in RKHS. For any functional $F[f]$ of $f \in \mathcal{H}^d$, its (functional) gradient $\nabla_f F[f]$ is a function in $\mathcal{H}^d$ such that $F[f + \epsilon g(x)] = F[f] + \epsilon \langle \nabla_f F[f], g\rangle_{\mathcal{H}^d} + O(\epsilon^2)$ for any $g \in \mathcal{H}^d$ and $\epsilon \in \mathbb{R}$.

Theorem 3.3. Let $T(x) = x + f(x)$, where $f \in \mathcal{H}^d$, and $q_{[T]}$ the density of $z = T(x)$ when $x \sim q$; then

$$\nabla_f \mathrm{KL}(q_{[T]} \,\|\, p)\big|_{f=0} = -\phi^*_{q,p}(x),$$

whose RKHS norm is $\|\phi^*_{q,p}\|_{\mathcal{H}^d} = \mathbb{D}(q, p)$.
This suggests that $T^*(x) = x + \epsilon \cdot \phi^*_{q,p}(x)$ is equivalent to a step of functional gradient descent in RKHS. What is critical in the iterative procedure (7), however, is that we also iteratively apply the variable transform, so that at each step we only need to evaluate the functional gradient at zero perturbation $f = 0$ on the identity map $T(x) = x$. This brings a critical advantage, since the gradient at $f \neq 0$ is more complex and would require calculating the inverse Jacobian matrix $[\nabla_x T(x)]^{-1}$, which casts computational or implementation hurdles.
3.2 Stein Variational Gradient Descent
To implement the iterative procedure (7) in practice, one needs to approximate the expectation for calculating $\phi^*_{q,p}(x)$ in (6). To do this, we first draw a set of particles $\{x_i^0\}_{i=1}^n$ from the initial distribution $q_0$, and then iteratively update the particles with an empirical version of the transform in (7), in which the expectation under $q_\ell$ in $\phi^*_{q_\ell,p}$ is approximated by the empirical mean of the particles $\{x_i^\ell\}_{i=1}^n$ at the $\ell$-th iteration. This procedure is summarized in Algorithm 1, which allows us to (deterministically) transport a set of points to match our target distribution $p(x)$, effectively providing
Algorithm 1 Bayesian Inference via Variational Gradient Descent
Input: A target distribution with density function $p(x)$ and a set of initial particles $\{x_i^0\}_{i=1}^n$.
Output: A set of particles $\{x_i\}_{i=1}^n$ that approximates the target distribution $p(x)$.
for iteration $\ell$ do

$$x_i^{\ell+1} \leftarrow x_i^\ell + \epsilon_\ell\, \hat\phi^*(x_i^\ell), \quad \text{where} \quad \hat\phi^*(x) = \frac{1}{n}\sum_{j=1}^n \Big[ k(x_j^\ell, x)\, \nabla_{x_j^\ell} \log p(x_j^\ell) + \nabla_{x_j^\ell} k(x_j^\ell, x) \Big], \tag{8}$$

where $\epsilon_\ell$ is the step size at the $\ell$-th iteration.
end for
a sampling method for $p(x)$. We can see that the implementation of this procedure does not depend on the initial distribution $q_0$ at all; in practice, we can start with a set of arbitrary points $\{x_i\}_{i=1}^n$, possibly generated by a complex (random or deterministic) black-box procedure.
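To make update (8) concrete, here is a minimal NumPy sketch of one SVGD iteration, assuming an RBF kernel with a given bandwidth $h$; the function names and vectorization are ours, not the paper's released code:

```python
import numpy as np

def rbf_kernel(X, h):
    """Kernel matrix K[i, j] = k(x_i, x_j) = exp(-||x_i - x_j||^2 / h),
    plus the gradients grad_{x_j} k(x_j, x_i) needed by the repulsive term."""
    diffs = X[:, None, :] - X[None, :, :]       # (n, n, d): diffs[i, j] = x_i - x_j
    K = np.exp(-(diffs ** 2).sum(-1) / h)       # (n, n)
    # grad_{x_j} k(x_j, x_i) = (2 / h) * (x_i - x_j) * k(x_j, x_i)
    grad_K = (2.0 / h) * diffs * K[:, :, None]  # (n, n, d)
    return K, grad_K

def svgd_step(X, grad_log_p, eps, h):
    """One update of Algorithm 1. X is (n, d); grad_log_p maps (n, d) -> (n, d)
    and returns the score of the (possibly unnormalized) target at each particle."""
    n = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    # First term: kernel-smoothed gradient; second term: repulsive force.
    phi = (K @ grad_log_p(X) + grad_K.sum(axis=1)) / n
    return X + eps * phi
```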
We can expect that $\{x_i^\ell\}_{i=1}^n$ forms an increasingly better approximation of $q_\ell$ as $n$ increases. To see this, denote by $\Phi_\ell$ the nonlinear map that takes the measure of $q_\ell$ and outputs that of $q_{\ell+1}$ in (7), that is, $q_{\ell+1} = \Phi_\ell(q_\ell)$, where $q_\ell$ enters the map through both $q_{\ell[T^*_\ell]}$ and $\phi^*_{q_\ell,p}$. Then the updates in Algorithm 1 can be seen as applying the same map $\Phi_\ell$ on the empirical measure $\hat q_\ell$ of the particles $\{x_i^\ell\}$ to get the empirical measure $\hat q_{\ell+1}$ of the particles $\{x_i^{\ell+1}\}$ at the next iteration, that is, $\hat q_{\ell+1} = \Phi_\ell(\hat q_\ell)$. Since $\hat q_0$ converges to $q_0$ as $n$ increases, $\hat q_\ell$ should also converge to $q_\ell$ when the map $\Phi_\ell$ is "continuous" in a proper sense. Rigorous theoretical results on such convergence have been established in the mean field theory of interacting particle systems [e.g., 15], which in general guarantees that $\sum_{i=1}^n h(x_i^\ell)/n - \mathbb{E}_{q_\ell}[h(x)] = O(1/\sqrt{n})$ for bounded test functions $h$. In addition, the distribution of each particle $x_{i_0}^\ell$, for any fixed $i_0$, also tends to $q_\ell$, and becomes independent of any other finite subset of particles as $n \to \infty$, a phenomenon called propagation of chaos [16]. We leave concrete theoretical analysis for future work.
Algorithm 1 mimics a gradient dynamics at the particle level, where the two terms in $\hat\phi^*(x)$ in (8) play different roles: the first term drives the particles towards the high probability areas of $p(x)$ by following a smoothed gradient direction, which is the weighted sum of the gradients of all the points weighted by the kernel function. The second term acts as a repulsive force that prevents all the points from collapsing together into local modes of $p(x)$; to see this, consider the RBF kernel $k(x, x') = \exp(-\frac{1}{h}\|x - x'\|^2)$, for which the second term reduces to $\sum_j \frac{2}{h}(x - x_j)\, k(x_j, x)$, which drives $x$ away from its neighboring points $x_j$ that have large $k(x_j, x)$. If we let the bandwidth $h \to 0$, the repulsive term vanishes, and update (8) reduces to a set of independent chains of typical gradient ascent for maximizing $\log p(x)$ (i.e., MAP), and all the particles collapse into the local modes.
Another interesting case is when we use only a single particle ($n = 1$), in which case Algorithm 1 reduces to a single chain of typical gradient ascent for MAP for any kernel that satisfies $\nabla_x k(x, x) = 0$ (which holds for the RBF kernel). This suggests that our algorithm can generalize well for supervised learning tasks even with a very small number $n$ of particles, since gradient ascent for MAP ($n = 1$) has been shown to be very successful in practice. This property distinguishes our particle method from typical Monte Carlo methods, which require averaging over many points. The key difference is that we use a deterministic repulsive force, rather than Monte Carlo randomness, to obtain diverse points for distributional approximation.
Complexity and Efficient Implementation. The major computational bottleneck in (8) lies in calculating the gradient $\nabla_x \log p(x)$ for all the points $\{x_i\}_{i=1}^n$; this is especially the case in big data settings when $p(x) \propto p_0(x)\prod_{k=1}^N p(D_k \mid x)$ with a very large $N$. We can conveniently address this problem by approximating $\nabla_x \log p(x)$ with subsampled mini-batches $\Omega \subset \{1, \ldots, N\}$ of the data:

$$\nabla_x \log p(x) \approx \nabla_x \log p_0(x) + \frac{N}{|\Omega|} \sum_{k\in\Omega} \nabla_x \log p(D_k \mid x). \tag{9}$$
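A sketch of the mini-batch score estimator (9), continuing the conventions of the earlier snippets (the callback names grad_log_prior and grad_log_lik are ours):

```python
def grad_log_p_minibatch(X, data, grad_log_prior, grad_log_lik, batch, rng):
    """Stochastic approximation (9): subsample a mini-batch Omega of the data
    and rescale the summed per-datum likelihood scores by N / |Omega|."""
    N = len(data)
    omega = rng.choice(N, size=batch, replace=False)
    lik = sum(grad_log_lik(X, data[k]) for k in omega)
    return grad_log_prior(X) + (N / batch) * lik
```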
Additional speedup can be obtained by parallelizing the gradient evaluation of the n particles.
The update (8) also requires computing the kernel matrix $\{k(x_i, x_j)\}$, which costs $O(n^2)$; in practice, this cost can be relatively small compared with the cost of gradient evaluation, since it is often sufficient to use a relatively small $n$ (e.g., several hundred). If there is a need for very large $n$, one can approximate the summation $\sum_{i=1}^n$ in (8) by subsampling the particles, or by using a random feature expansion of the kernel $k(x, x')$ [17].
4 Related Works
Our work is most closely related to Rezende and Mohamed [13], which also considers variational inference over a set of transformed random variables, but focuses on transforms of the parametric form $T(x) = f_\ell(\cdots(f_1(f_0(x))))$, where $f_i(\cdot)$ is a predefined simple parametric transform and $\ell$ a predefined length; this essentially creates a feedforward neural network with $\ell$ layers, whose invertibility requires further conditions on the parameters and needs to be established case by case. A similar idea is discussed in Marzouk et al. [14], which also considers transforms parameterized in special ways to ensure invertibility and the computational tractability of the Jacobian matrix. Recently, Tran et al. [18] constructed a variational family that achieves universal approximation based on Gaussian processes (equivalent to a single-layer, infinitely-wide neural network), which does not have a Jacobian matrix but needs to calculate the inverse of the kernel matrix of the Gaussian process. Our algorithm has a simpler form, and does not require calculating any matrix determinant or inversion. Several other works also leverage variable transforms in variational inference, but with more limited forms; examples include affine transforms [19, 20], and recently the copula models that correspond to element-wise transforms over the individual variables [21, 22].
Our algorithm maintains and updates a set of particles, and is similar in style to the Gaussian mixture variational inference methods whose mean parameters can be treated as a set of particles [23-26, 5]. Optimizing such mixture KL objectives often requires certain approximations; most recently, Gershman et al. [5] approximated the entropy using Jensen's inequality and the expectation term using a Taylor approximation. There is also a large set of particle-based Monte Carlo methods, including variants of sequential Monte Carlo [e.g., 27, 28], as well as a recent particle mirror descent for optimizing the variational objective function [7]; compared with these methods, our method does not suffer from the weight degeneration problem, and is much more "particle-efficient" in that we reduce to MAP with only a single particle.
5 Experiments
We test our algorithm on both toy and real world examples, on which we find our method tends to outperform a variety of baseline methods. Our code is available at https://github.com/DartML/Stein-Variational-Gradient-Descent.
For all our experiments, we use the RBF kernel $k(x, x') = \exp(-\frac{1}{h}\|x - x'\|_2^2)$, and take the bandwidth to be $h = \mathrm{med}^2/\log n$, where $\mathrm{med}$ is the median of the pairwise distances between the current points $\{x_i\}_{i=1}^n$; this is based on the intuition that we would then have $\sum_j k(x_i, x_j) \approx n \exp(-\frac{1}{h}\mathrm{med}^2) = 1$, so that for each $x_i$ the contribution from its own gradient and the influence from the other points balance each other. Note that in this way the bandwidth $h$ changes adaptively across the iterations. We use AdaGrad for the step size and initialize the particles using the prior distribution unless otherwise specified.
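The adaptive bandwidth rule is a few lines on top of the earlier sketch (we reuse np from above; the function name is ours):

```python
from scipy.spatial.distance import pdist

def median_bandwidth(X):
    # h = med^2 / log n, where med is the median pairwise distance, so that
    # sum_j k(x_i, x_j) is roughly n * exp(-med^2 / h) = 1 for each particle.
    med = np.median(pdist(X))
    return med ** 2 / np.log(X.shape[0])
```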
Toy Example on 1D Gaussian Mixture. We set our target distribution to be $p(x) = \frac{1}{3}\mathcal{N}(x; -2, 1) + \frac{2}{3}\mathcal{N}(x; 2, 1)$, and initialize the particles using $q_0(x) = \mathcal{N}(x; -10, 1)$. This creates a challenging situation, since the probability masses of $p(x)$ and $q_0(x)$ are far away from each other (with almost zero overlap). Figure 1 shows how the distribution of the particles ($n = 100$) of our method evolves over the iterations. We see that despite the small overlap between $q_0(x)$ and $p(x)$, our method can push the particles towards the target distribution, and even recover the mode that is further away from the initial point. We found that other particle based algorithms, such as Dai et al. [7], tend to experience weight degeneracy on this toy example due to the poor choice of $q_0(x)$.
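A hypothetical driver reproducing this setup with the sketches above (the paper uses AdaGrad, whereas we fix the step size for brevity):

```python
from scipy.stats import norm

def grad_log_p_gmm(X):
    # Score of p(x) = (1/3) N(x; -2, 1) + (2/3) N(x; 2, 1), via p'(x) / p(x).
    p1, p2 = norm.pdf(X, -2, 1), norm.pdf(X, 2, 1)
    p = p1 / 3 + 2 * p2 / 3
    dp = -(X + 2) * p1 / 3 - (X - 2) * 2 * p2 / 3
    return dp / p

rng = np.random.default_rng(0)
X = rng.normal(-10.0, 1.0, size=(100, 1))  # particles from q0 = N(-10, 1)
for _ in range(500):
    X = svgd_step(X, grad_log_p_gmm, eps=0.05, h=median_bandwidth(X))
```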
Figure 2 compares our method with Monte Carlo sampling when using the obtained particles to estimate expectations $\mathbb{E}_p[h(x)]$ with different test functions $h(\cdot)$. We see that the MSE of our method tends to be similar to or better than that of exact Monte Carlo sampling. This may be because our particles are more spread out than i.i.d. samples due to the repulsive force, and hence give higher estimation accuracy. It remains an open question to formally establish the error rate of our method.
[Figure 1 panels: particle densities at the 0th, 50th, 75th, 100th, 150th, and 500th iterations.]
Figure 1: Toy example with 1D Gaussian mixture. The red dashed lines are the target density function and the solid green lines are the densities of the particles at different iterations of our algorithm (estimated using a kernel density estimator). Note that the initial distribution is set to have almost zero overlap with the target distribution, and our method demonstrates the ability to escape the local mode on the left and recover the mode on the right that is further away. We use $n = 100$ particles.
[Figure 2 panels: $\log_{10}$ MSE vs. sample size $n \in \{10, 50, 250\}$ for (a) estimating $\mathbb{E}(x)$, (b) estimating $\mathbb{E}(x^2)$, and (c) estimating $\mathbb{E}(\cos(\omega x + b))$; legend: Monte Carlo vs. Stein Variational Gradient Descent.]
Figure 2: We use the same setting as Figure 1, except varying the number $n$ of particles. (a)-(c) show the mean square errors when using the obtained particles to estimate the expectation $\mathbb{E}_p[h(x)]$ for $h(x) = x$, $x^2$, and $\cos(\omega x + b)$; for $\cos(\omega x + b)$, we draw $\omega \sim \mathcal{N}(0, 1)$ and $b \sim \mathrm{Uniform}([0, 2\pi])$ and report the average MSE over 20 random draws of $\omega$ and $b$.
Bayesian Logistic Regression. We consider Bayesian logistic regression for binary classification using the same setting as Gershman et al. [5], which assigns the regression weights $w$ a Gaussian prior $p_0(w \mid \alpha) = \mathcal{N}(w, \alpha^{-1})$ with $p_0(\alpha) = \mathrm{Gamma}(\alpha; 1, 0.01)$. The inference is applied to the posterior $p(x \mid D)$ with $x = [w, \log\alpha]$. We compared our algorithm with the no-U-turn sampler (NUTS)1 [29] and non-parametric variational inference (NPV)2 [5] on the 8 datasets ($N > 500$) used in Gershman et al. [5], and find they tend to give very similar results on these (relatively simple) datasets; see the Appendix for more details.
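For concreteness, here is a hedged sketch of a score function for this model in the parameterization $x = [w, \log\alpha]$, with labels $y_i \in \{0, 1\}$; the derivation of the $\log\alpha$ term (including the change-of-variables Jacobian) and all names are ours:

```python
from scipy.special import expit

def grad_log_post(w, log_alpha, feats, ys, a0=1.0, b0=0.01):
    """Score of the Bayesian logistic regression posterior in (w, log alpha)."""
    alpha = np.exp(log_alpha)
    d = w.size
    # Logistic likelihood plus Gaussian prior N(w; 0, alpha^{-1} I):
    grad_w = feats.T @ (ys - expit(feats @ w)) - alpha * w
    # d/d(log alpha) of: Gamma(alpha; a0, b0) prior + Gaussian normalizer
    # + log-Jacobian of the map alpha = exp(log alpha).
    grad_log_alpha = a0 + d / 2 - alpha * (b0 + 0.5 * (w @ w))
    return grad_w, grad_log_alpha
```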
We further test the binary Covertype dataset3 with 581,012 data points and 54 features. This dataset is too large for batch methods, so stochastic gradients are needed for speed. Because NUTS and NPV do not have a mini-batch option in their code, we instead compare with the stochastic gradient Langevin dynamics (SGLD) of Welling and Teh [2], the particle mirror descent (PMD) of Dai et al. [7], and the doubly stochastic variational inference (DSVI) of Titsias and Lázaro-Gredilla [19].4 We also compare with a parallel version of SGLD that runs $n$ parallel chains and takes the last point of each chain as the result. This parallel SGLD is similar to our method, and for fair comparison we use the same step size $\epsilon_\ell = a/(t+1)^{0.55}$ for both, as suggested by Welling and Teh [2];5 we select $a$ using a validation set within the training set. For PMD, we use a step size of $a\sqrt{N}/(100 + t)$ and the RBF kernel $k(x, x') = \exp(-\|x - x'\|^2/h)$ with bandwidth $h = 0.002 \times \mathrm{med}^2$, following the guidance of Dai et al. [7], which we find works most efficiently for PMD. Figure 3(a)-(b) shows the results when we initialize our method and both versions of SGLD using the prior $p_0(\alpha)p_0(w \mid \alpha)$; we find that PMD tends to be unstable with this initialization because it generates weights $w$ with large magnitudes, so we divided the initialized weights by 10 for PMD; as shown in Figure 3(a), this gives some advantage to PMD in the initial stage. We find our method generally performs the best, followed by the parallel SGLD, which is much better than its sequential counterpart; this comparison is of course in favor of parallel SGLD, since each of its iterations requires $n = 100$ times as many likelihood evaluations as sequential SGLD. However, by leveraging the matrix operations in MATLAB, we find that each iteration of parallel SGLD is only about 3 times more expensive than sequential SGLD.
1 code: http://www.cs.princeton.edu/~mdhoffma/
2 code: http://gershmanlab.webfactional.com/pubs/npv.v1.zip
3 https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
4 code: http://www.aueb.gr/users/mtitsias/code/dsvi_matlabv1.zip
5 We scale the gradient of SGLD by a factor of $1/n$ to make it match the scale of our gradient in (8).
[Figure 3 panels: testing accuracy vs. number of epochs for particle size $n = 100$ (a), and testing accuracy vs. particle size $n \in \{1, 10, 50, 250\}$ at 3000 iterations ($\approx 0.32$ epochs) (b); legend: Stein Variational Gradient Descent (Our Method), Stochastic Langevin (Parallel SGLD), Particle Mirror Descent (PMD), Doubly Stochastic (DSVI), Stochastic Langevin (Sequential SGLD).]
Figure 3: Results on Bayesian logistic regression on the Covertype dataset w.r.t. epochs and the particle size $n$. We use $n = 100$ particles for our method, parallel SGLD and PMD, and average the last 100 points for sequential SGLD. The "particle-based" methods (solid lines) in principle require 100 times as many likelihood evaluations per iteration as DSVI and sequential SGLD (dashed lines), but are implemented efficiently using MATLAB matrix operations (e.g., each iteration of parallel SGLD is about 3 times slower than sequential SGLD). We partition the data into 80% for training and 20% for testing and average over 50 random trials. A mini-batch size of 50 is used for all the algorithms.
Bayesian Neural Network. We compare our algorithm with the probabilistic back-propagation (PBP) algorithm of Hernández-Lobato and Adams [30] on Bayesian neural networks. Our experimental settings are almost identical, except that we use a $\mathrm{Gamma}(1, 0.1)$ prior for the inverse covariances and do not use the trick of scaling the input of the output layer. We use neural networks with one hidden layer, and take 50 hidden units for most datasets, except for Protein and Year, which are relatively large and for which we take 100 units; all the datasets are randomly partitioned into 90% for training and 10% for testing, and the results are averaged over 20 random trials, except for Protein and Year, on which 5 and 1 trials are repeated, respectively. We use $\mathrm{ReLU}(x) = \max(0, x)$ as the activation function, whose weak derivative is $\mathbb{I}[x > 0]$ (Stein's identity also holds for weak derivatives; see Stein et al. [31]). PBP is run using the default settings of the authors' code.6 For our algorithm, we use only 20 particles, and use AdaGrad with momentum, as is standard in deep learning. The mini-batch size is 100, except for Year, on which we use 1000.
We find our algorithm consistently improves over PBP both in terms of accuracy and speed; this is encouraging since PBP was specifically designed for Bayesian neural networks. We also find that our results are comparable with more recent results reported on the same datasets [e.g., 32-34], which leverage some advanced techniques that we could also benefit from.
Dataset   | Avg. Test RMSE                | Avg. Test LL                      | Avg. Time (Secs)
          | PBP           | Our Method    | PBP             | Our Method      | PBP  | Ours
Boston    | 2.977 ± 0.093 | 2.957 ± 0.099 | −2.579 ± 0.052  | −2.504 ± 0.029  | 18   | 16
Concrete  | 5.506 ± 0.103 | 5.324 ± 0.104 | −3.137 ± 0.021  | −3.082 ± 0.018  | 33   | 24
Energy    | 1.734 ± 0.051 | 1.374 ± 0.045 | −1.981 ± 0.028  | −1.767 ± 0.024  | 25   | 21
Kin8nm    | 0.098 ± 0.001 | 0.090 ± 0.001 |  0.901 ± 0.010  |  0.984 ± 0.008  | 118  | 41
Naval     | 0.006 ± 0.000 | 0.004 ± 0.000 |  3.735 ± 0.004  |  4.089 ± 0.012  | 173  | 49
Combined  | 4.052 ± 0.031 | 4.033 ± 0.033 | −2.819 ± 0.008  | −2.815 ± 0.008  | 136  | 51
Protein   | 4.623 ± 0.009 | 4.606 ± 0.013 | −2.950 ± 0.002  | −2.947 ± 0.003  | 682  | 68
Wine      | 0.614 ± 0.008 | 0.609 ± 0.010 | −0.931 ± 0.014  | −0.925 ± 0.014  | 26   | 22
Yacht     | 0.778 ± 0.042 | 0.864 ± 0.052 | −1.211 ± 0.044  | −1.225 ± 0.042  | 25   | 25
Year      | 8.733 ± NA    | 8.684 ± NA    | −3.586 ± NA     | −3.580 ± NA     | 7777 | 684
6 Conclusion
We propose a simple general purpose variational inference algorithm for fast and scalable Bayesian inference. Future directions include deeper theoretical understanding of our method, more practical applications in deep learning models, and other potential applications of our basic theorem in Section 3.1.
Acknowledgement. This work is supported in part by NSF CRII 1565796.

6 https://github.com/HIPS/Probabilistic-Backpropagation
References
[1] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 2013.
[2] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
[3] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with subsets of data. In UAI, 2014.
[4] R. Ranganath, S. Gerrish, and D. M. Blei. Black box variational inference. In AISTATS, 2014.
[5] S. Gershman, M. Hoffman, and D. Blei. Nonparametric variational inference. In ICML, 2012.
[6] A. Kucukelbir, R. Ranganath, A. Gelman, and D. Blei. Automatic variational inference in STAN. In NIPS, 2015.
[7] B. Dai, N. He, H. Dai, and L. Song. Provable Bayesian inference via particle mirror descent. In AISTATS, 2016.
[8] Q. Liu, J. D. Lee, and M. I. Jordan. A kernelized Stein discrepancy for goodness-of-fit tests and model evaluation. arXiv preprint arXiv:1602.03253, 2016.
[9] C. J. Oates, M. Girolami, and N. Chopin. Control functionals for Monte Carlo integration. Journal of the Royal Statistical Society, Series B, 2017.
[10] K. Chwialkowski, H. Strathmann, and A. Gretton. A kernel test of goodness-of-fit. arXiv preprint arXiv:1602.02964, 2016.
[11] J. Gorham and L. Mackey. Measuring sample quality with Stein's method. In NIPS, pages 226-234, 2015.
[12] C. Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008.
[13] D. J. Rezende and S. Mohamed. Variational inference with normalizing flows. In ICML, 2015.
[14] Y. Marzouk, T. Moselhy, M. Parno, and A. Spantini. An introduction to sampling via measure transport. arXiv preprint arXiv:1602.05023, 2016.
[15] P. Del Moral. Mean field simulation for Monte Carlo integration. CRC Press, 2013.
[16] M. Kac. Probability and related topics in physical sciences, volume 1. American Mathematical Soc., 1959.
[17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177-1184, 2007.
[18] D. Tran, R. Ranganath, and D. M. Blei. Variational Gaussian process. In ICLR, 2016.
[19] M. Titsias and M. Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In ICML, pages 1971-1979, 2014.
[20] E. Challis and D. Barber. Affine independent variational inference. In NIPS, 2012.
[21] S. Han, X. Liao, D. B. Dunson, and L. Carin. Variational Gaussian copula inference. In AISTATS, 2016.
[22] D. Tran, D. M. Blei, and E. M. Airoldi. Copula variational inference. In NIPS, 2015.
[23] C. M. Bishop, N. Lawrence, T. Jaakkola, and M. I. Jordan. Approximating posterior distributions in belief networks using mixtures. In NIPS, 1998.
[24] T. S. Jaakkola and M. I. Jordan. Improving the mean field approximation via the use of mixture distributions. In Learning in graphical models, pages 163-173. MIT Press, 1999.
[25] N. D. Lawrence. Variational inference in probabilistic models. PhD thesis, University of Cambridge, 2001.
[26] T. D. Kulkarni, A. Saeedi, and S. Gershman. Variational particle approximations. arXiv preprint arXiv:1402.5715, 2014.
[27] C. Robert and G. Casella. Monte Carlo statistical methods. Springer Science & Business Media, 2013.
[28] A. Smith, A. Doucet, N. de Freitas, and N. Gordon. Sequential Monte Carlo methods in practice. Springer Science & Business Media, 2013.
[29] M. D. Hoffman and A. Gelman. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1):1593-1623, 2014.
[30] J. M. Hernández-Lobato and R. P. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015.
[31] C. Stein, P. Diaconis, S. Holmes, G. Reinert, et al. Use of exchangeable pairs in the analysis of simulations. In Stein's Method, pages 1-25. Institute of Mathematical Statistics, 2004.
[32] Y. Li, J. M. Hernández-Lobato, and R. E. Turner. Stochastic expectation propagation. In NIPS, 2015.
[33] Y. Li and R. E. Turner. Variational inference with Renyi divergence. arXiv preprint arXiv:1602.02311, 2016.
[34] Y. Gal and Z. Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv preprint arXiv:1506.02142, 2015.
Measuring Neural Net Robustness with Constraints
Osbert Bastani
Stanford University
[email protected]
Dimitrios Vytiniotis
Microsoft Research
[email protected]
Yani Ioannou
University of Cambridge
[email protected]
Leonidas Lampropoulos
University of Pennsylvania
[email protected]
Aditya V. Nori
Microsoft Research
[email protected]
Antonio Criminisi
Microsoft Research
[email protected]
Abstract
Despite having high accuracy, neural nets have been shown to be susceptible to
adversarial examples, where a small perturbation to an input can cause it to become
mislabeled. We propose metrics for measuring the robustness of a neural net and
devise a novel algorithm for approximating these metrics based on an encoding of
robustness as a linear program. We show how our metrics can be used to evaluate
the robustness of deep neural nets with experiments on the MNIST and CIFAR-10
datasets. Our algorithm generates more informative estimates of robustness metrics
compared to estimates based on existing algorithms. Furthermore, we show how
existing approaches to improving robustness "overfit" to adversarial examples generated using a specific algorithm. Finally, we show that our techniques can be used to improve neural net robustness both according to the metrics that we propose and according to previously proposed metrics.
1 Introduction
Recent work [21] shows that it is often possible to construct an input mislabeled by a neural net
by perturbing a correctly labeled input by a tiny amount in a carefully chosen direction. Lack of
robustness can be problematic in a variety of settings, such as changing camera lens or lighting
conditions, successive frames in a video, or adversarial attacks in security-critical applications [18].
A number of approaches have since been proposed to improve robustness [6, 5, 1, 7, 20]. However, work in this direction has been handicapped by the lack of objective measures of robustness. A typical approach to improving the robustness of a neural net $f$ is to use an algorithm $A$ to find adversarial examples, augment the training set with these examples, and train a new neural net $f'$ [5]. Robustness is then evaluated by using the same algorithm $A$ to find adversarial examples for $f'$: if $A$ discovers fewer adversarial examples for $f'$ than for $f$, then $f'$ is concluded to be more robust than $f$. However, $f'$ may have overfit to adversarial examples generated by $A$; in particular, a different algorithm $A'$ may find as many adversarial examples for $f'$ as for $f$. Having an objective robustness measure is vital not only to reliably compare different algorithms, but also to understand the robustness of production neural nets; e.g., when deploying a login system based on face recognition, a security team may need to evaluate the risk of an attack using adversarial examples.

In this paper, we study the problem of measuring robustness. We propose to use two statistics of the robustness $\rho(f, x^*)$ of $f$ at a point $x^*$ (i.e., the $L_\infty$ distance from $x^*$ to the nearest adversarial example) [21]. The first one measures the frequency with which adversarial examples occur; the other measures the severity of such adversarial examples. Both statistics depend on a parameter $\epsilon$, which intuitively specifies the threshold below which adversarial examples should not exist (i.e., points $x$ with $L_\infty$ distance to $x^*$ less than $\epsilon$ should be assigned the same label as $x^*$).
The key challenge is efficiently computing $\rho(f, x^*)$. We give an exact formulation of this problem as an intractable optimization problem. To recover tractability, we approximate this optimization problem by constraining the search to a convex region $Z(x^*)$ around $x^*$. Furthermore, we devise an iterative approach to solving the resulting linear program that produces an order of magnitude speed-up. Common neural nets (specifically, those using rectified linear units as activation functions) are in fact piecewise linear functions [15]; we choose $Z(x^*)$ to be the region around $x^*$ on which $f$ is linear. Since the linear nature of neural nets is often the cause of adversarial examples [5], our choice of $Z(x^*)$ focuses the search where adversarial examples are most likely to exist.
We evaluate our approach on a deep convolutional neural network $f$ for MNIST. We estimate $\rho(f, x^*)$ using both our algorithm $A_{\mathrm{LP}}$ and (as a baseline) the algorithm $A_{\mathrm{L\text{-}BFGS}}$ introduced by [21]. We show that $A_{\mathrm{LP}}$ produces a substantially more accurate estimate of $\rho(f, x^*)$ than $A_{\mathrm{L\text{-}BFGS}}$. We then use data augmentation with each algorithm to improve the robustness of $f$, resulting in fine-tuned neural nets $f_{\mathrm{LP}}$ and $f_{\mathrm{L\text{-}BFGS}}$. According to $A_{\mathrm{L\text{-}BFGS}}$, $f_{\mathrm{L\text{-}BFGS}}$ is more robust than $f$, but not according to $A_{\mathrm{LP}}$. In other words, $f_{\mathrm{L\text{-}BFGS}}$ overfits to adversarial examples computed using $A_{\mathrm{L\text{-}BFGS}}$. In contrast, $f_{\mathrm{LP}}$ is more robust according to both $A_{\mathrm{L\text{-}BFGS}}$ and $A_{\mathrm{LP}}$. Furthermore, to demonstrate scalability, we apply our approach to evaluate the robustness of the 23-layer network-in-network (NiN) neural net [13] for CIFAR-10, and reveal a surprising lack of robustness. We fine-tune NiN and show that robustness improves, albeit only by a small amount. In summary, our contributions are:
- We formalize the notion of pointwise robustness studied in previous work [5, 21, 6] and propose two statistics for measuring robustness based on this notion (§2).
- We show how computing pointwise robustness can be encoded as a constraint system (§3). We approximate this constraint system with a tractable linear program and devise an optimization for solving this linear program an order of magnitude faster (§4).
- We demonstrate experimentally that our algorithm produces substantially more accurate measures of robustness compared to algorithms based on previous work, and show evidence that neural nets fine-tuned to improve robustness (§5) can overfit to adversarial examples identified by a specific algorithm (§6).
1.1 Related work
The susceptibility of neural nets to adversarial examples was discovered by [21]. Given a test point $x^*$ with predicted label $\ell^*$, an adversarial example is an input $x^* + r$ with predicted label $\ell \neq \ell^*$, where the adversarial perturbation $r$ is small (in $L_\infty$ norm). Then, [21] devises an approximate algorithm for finding the smallest possible adversarial perturbation $r$. Their approach is to minimize the combined objective $\mathrm{loss}(f(x^* + r), \ell) + c\|r\|_\infty$, which is an instance of box-constrained convex optimization that can be solved using L-BFGS-B. The constant $c$ is optimized using line search.

Our formalization of the robustness $\rho(f, x^*)$ of $f$ at $x^*$ corresponds to the notion in [21] of finding the minimal $\|r\|_\infty$. We propose an exact algorithm for computing $\rho(f, x^*)$ as well as a tractable approximation. The algorithm in [21] can also be used to approximate $\rho(f, x^*)$; we show experimentally that our algorithm is substantially more accurate than [21].
There has been a range of subsequent work studying robustness; [17] devises an algorithm for finding purely synthetic adversarial examples (i.e., with no initial image $x^*$), [22] searches for adversarial examples using random perturbations, showing that adversarial examples in fact exist in large regions of the pixel space, [19] shows that even intermediate layers of neural nets are not robust to adversarial noise, and [3] seeks to explain why neural nets may generalize well despite poor robustness properties.

Starting with [5], a major focus has been on devising faster algorithms for finding adversarial examples. Their idea is that adversarial examples can then be computed on-the-fly and used as training examples, analogous to the data augmentation approaches typically used to train neural nets [10]. To find adversarial examples quickly, [5] chooses the adversarial perturbation $r$ to be in the direction of the signed gradient of $\mathrm{loss}(f(x^* + r), \ell)$ with fixed magnitude. Intuitively, given only the gradient of the loss function, this choice of $r$ is most likely to produce an adversarial example with $\|r\|_\infty \le \epsilon$. In this direction, [16] improves upon [5] by taking multiple gradient steps, [7] extends this idea to norms beyond the $L_\infty$ norm, [6] takes the approach of [21] but fixes $c$, and [20] formalizes [5] as robust optimization.
A key shortcoming of these lines of work is that robustness is typically measured using the same algorithm used to find adversarial examples, in which case the resulting neural net may have overfit to adversarial examples generated using that algorithm. For example, [5] shows improved accuracy on adversarial examples generated using their own signed gradient method, but does not consider whether robustness increases for adversarial examples generated using more precise approaches such as [21]. Similarly, [7] compares accuracy on adversarial examples generated using both itself and [5] (but not [21]), and [20] only considers accuracy on adversarial examples generated using their own approach on the baseline network. The aim of our paper is to provide metrics for evaluating robustness, and to demonstrate the importance of using such impartial measures to compare robustness.
Additionally, there has been work on designing neural network architectures [6] and learning procedures [1] that improve robustness to adversarial perturbations, though they do not obtain state-of-the-art accuracy on the unperturbed test sets. There has also been work using smoothness regularization related to [5] to train neural nets, focusing on improving accuracy rather than robustness [14]. Robustness has also been studied in more general contexts; [23] studies the connection between robustness and generalization, [2] establishes theoretical lower bounds on the robustness of linear and quadratic classifiers, and [4] seeks to improve robustness by promoting resilience to deleting features during training. More broadly, robustness has been identified as a desirable property of classifiers beyond prediction accuracy. Traditional metrics such as (out-of-sample) accuracy, precision, and recall help users assess the prediction accuracy of trained models; our work aims to develop analogous metrics for assessing robustness.
2 Robustness Metrics
Consider a classifier $f\colon \mathcal{X} \to \mathcal{L}$, where $\mathcal{X} \subseteq \mathbb{R}^n$ is the input space and $\mathcal{L} = \{1, \ldots, L\}$ are the labels. We assume that training and test points $x \in \mathcal{X}$ have distribution $\mathcal{D}$. We first formalize the notion of robustness at a point, and then describe two statistics to measure robustness. Our two statistics depend on a parameter $\epsilon$, which captures the idea that we only care about robustness below a certain threshold: we disregard adversarial examples $x$ whose $L_\infty$ distance to $x^*$ is greater than $\epsilon$. We use $\epsilon = 20$ in our experiments on MNIST and CIFAR-10 (on the pixel scale 0-255).
Pointwise robustness. Intuitively, $f$ is robust at $x^* \in \mathcal{X}$ if a "small" perturbation to $x^*$ does not affect the assigned label. We are interested in perturbations sufficiently small that they do not affect human classification; an established condition is $\|x - x^*\|_\infty \le \epsilon$ for some parameter $\epsilon$. Formally, we say $f$ is $(x^*, \epsilon)$-robust if for every $x$ such that $\|x - x^*\|_\infty \le \epsilon$, we have $f(x) = f(x^*)$. Finally, the pointwise robustness $\rho(f, x^*)$ of $f$ at $x^*$ is the minimum $\epsilon$ for which $f$ fails to be $(x^*, \epsilon)$-robust:

$$\rho(f, x^*) \overset{\mathrm{def}}{=} \inf\{\epsilon \ge 0 \mid f \text{ is not } (x^*, \epsilon)\text{-robust}\}. \tag{1}$$
This definition formalizes the notion of robustness in [5, 6, 21].
Adversarial frequency. Given a parameter $\epsilon$, the adversarial frequency

$$\phi(f, \epsilon) \overset{\mathrm{def}}{=} \Pr_{x^*\sim\mathcal{D}}[\rho(f, x^*) \le \epsilon]$$

measures how often $f$ fails to be $(x^*, \epsilon)$-robust. In other words, if $f$ has high adversarial frequency, then it fails to be $(x^*, \epsilon)$-robust for many inputs $x^*$.
Adversarial severity. Given a parameter $\epsilon$, the adversarial severity

$$\mu(f, \epsilon) \overset{\mathrm{def}}{=} \mathbb{E}_{x^*\sim\mathcal{D}}[\rho(f, x^*) \mid \rho(f, x^*) \le \epsilon]$$

measures the severity with which $f$ fails to be robust at $x^*$, conditioned on $f$ not being $(x^*, \epsilon)$-robust. We condition on pointwise robustness since once $f$ is $(x^*, \epsilon)$-robust at $x^*$, the degree to which $f$ is robust at $x^*$ does not matter. Smaller $\mu(f, \epsilon)$ corresponds to worse adversarial severity, since $f$ is more susceptible to adversarial examples if the distances to the nearest adversarial example are small.
The frequency and severity capture different robustness behaviors. A neural net may have high adversarial frequency but low adversarial severity, indicating that most adversarial examples are about $\epsilon$ away from the original point $x^\star$. Conversely, a neural net may have low adversarial frequency but high adversarial severity, indicating that it is typically robust, but occasionally severely fails to be robust. Frequency is typically the more important metric, since a neural net with low adversarial frequency is robust most of the time. Indeed, adversarial frequency corresponds to the accuracy on adversarial examples used to measure robustness in [5, 20].
Figure 1: Neural net with a single hidden layer and ReLU activations trained on a dataset with binary labels. (a) The training data and loss surface. (b) The linear region corresponding to the red training point.
Figure 2: For MNIST, (a) an image classified 1, (b) its adversarial example classified 3, and (c) the (scaled) adversarial perturbation. For CIFAR-10, (d) an image classified as "automobile", (e) its adversarial example classified as "truck", and (f) the (scaled) adversarial perturbation.
Severity can be used to differentiate between neural nets with similar adversarial frequency.
Given a set of samples $X \subseteq \mathcal{X}$ drawn i.i.d. from $D$, we can estimate $\phi(f, \epsilon)$ and $\mu(f, \epsilon)$ using the following standard estimators, assuming we can compute $\rho$:
$$\hat{\phi}(f, \epsilon, X) \stackrel{\text{def}}{=} \frac{|\{x^\star \in X \mid \rho(f, x^\star) \le \epsilon\}|}{|X|}$$
$$\hat{\mu}(f, \epsilon, X) \stackrel{\text{def}}{=} \frac{\sum_{x^\star \in X} \rho(f, x^\star)\, \mathbb{I}[\rho(f, x^\star) \le \epsilon]}{|\{x^\star \in X \mid \rho(f, x^\star) \le \epsilon\}|}.$$
An approximation $\hat{\rho}(f, x^\star) \ge \rho(f, x^\star)$ of $\rho$, such as the one we describe in Section 4, can be used in place of $\rho$. In practice, $X$ is taken to be the test set $X_{\text{test}}$.
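As a concrete illustration, the following Python sketch (our own, not code from the paper) computes the two estimators from precomputed pointwise-robustness values; the values themselves would come from any procedure returning $\hat{\rho}(f, x^\star)$.

```python
import numpy as np

def adversarial_frequency(rho, eps):
    """Estimated phi: fraction of points with pointwise robustness <= eps."""
    rho = np.asarray(rho, dtype=float)
    return float(np.mean(rho <= eps))

def adversarial_severity(rho, eps):
    """Estimated mu: mean robustness among points failing to be (x, eps)-robust."""
    rho = np.asarray(rho, dtype=float)
    failing = rho[rho <= eps]
    return float(failing.mean()) if failing.size else float("nan")

# Made-up robustness values on the 0-255 pixel scale:
rho_values = [4.0, 17.0, 35.0, 12.0, 151.0, 9.5]
print(adversarial_frequency(rho_values, eps=20))  # 4 of 6 points fail
print(adversarial_severity(rho_values, eps=20))   # mean distance among failures
```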
3 Computing Pointwise Robustness
3.1 Overview
Consider the training points in Figure 1 (a), colored based on the ground truth label. To classify this data, we train a two-layer neural net $f(x) = \arg\max_\ell \{(W_2\, g(W_1 x))_\ell\}$, where the ReLU function $g$ is applied pointwise. Figure 1 (a) includes contours of the per-point loss function of this neural net.
Exhaustively searching the input space to determine the distance $\rho(f, x^\star)$ to the nearest adversarial example for input $x^\star$ (labeled $\ell^\star$) is intractable. Recall that neural nets with rectified-linear (ReLU) units as activations are piecewise linear [15]. Since adversarial examples exist because of this linearity in the neural net [5], we restrict our search to the region $Z(x^\star)$ around $x^\star$ on which the neural net is linear. This region around $x^\star$ is defined by the activations of the ReLU function: for each $i$, if $(W_1 x^\star)_i \ge 0$ (resp., $(W_1 x^\star)_i \le 0$), we constrain to the half-space $\{x \mid (W_1 x)_i \ge 0\}$ (resp., $\{x \mid (W_1 x)_i \le 0\}$). The intersection of these half-spaces is convex, so it admits efficient search. Figure 1 (b) shows one such convex region.¹
Additionally, $x$ is labeled $\ell$ exactly when $f(x)_\ell \ge f(x)_{\ell'}$ for each $\ell' \ne \ell$. These constraints are linear since $f$ is linear on $Z(x^\star)$. Therefore, we can find the distance to the nearest input with label $\ell \ne \ell^\star$ by minimizing $\|x - x^\star\|_\infty$ on $Z(x^\star)$. Finally, we can perform this search for each label $\ell \ne \ell^\star$, though for efficiency we take $\ell$ to be the label assigned the second-highest score by $f$. Figure 1 (b) shows the adversarial example found by our algorithm in our running example. In Figure 1, note that the direction of the nearest adversarial example is not necessarily aligned with the signed gradient of the loss function, as observed by others [7].
¹Our neural net has 8 hidden units, but for this $x^\star$, 6 of the half-spaces entirely contain the convex region.
3.2 Formulation as Optimization
We compute $\rho(f, x^\star)$ by expressing (1) as constraints $C$, which consist of:
- Linear relations; specifically, inequalities $C \equiv (w^T x + b \ge 0)$ and equalities $C \equiv (w^T x + b = 0)$, where $x \in \mathbb{R}^m$ (for some $m$) are variables and $w \in \mathbb{R}^m$, $b \in \mathbb{R}$ are constants.
- Conjunctions $C \equiv C_1 \wedge C_2$, where $C_1$ and $C_2$ are themselves constraints. Both constraints must be satisfied for the conjunction to be satisfied.
- Disjunctions $C \equiv C_1 \vee C_2$, where $C_1$ and $C_2$ are themselves constraints. One of the constraints must be satisfied for the disjunction to be satisfied.
The feasible set $\mathcal{F}(C)$ of $C$ is the set of $x \in \mathbb{R}^m$ that satisfy $C$; $C$ is satisfiable if $\mathcal{F}(C)$ is nonempty. In the next section, we show that the condition $f(x) = \ell$ can be expressed as constraints $C_f(x, \ell)$; i.e., $f(x) = \ell$ if and only if $C_f(x, \ell)$ is satisfiable. Then, $\rho(f, x^\star)$ can be computed as follows:
$$\rho(f, x^\star) = \min_{\ell \ne \ell^\star} \rho(f, x^\star, \ell) \quad (2)$$
$$\rho(f, x^\star, \ell) \stackrel{\text{def}}{=} \inf\{\epsilon \ge 0 \mid C_f(x, \ell) \wedge \|x - x^\star\|_\infty \le \epsilon \text{ satisfiable}\}. \quad (3)$$
The optimization problem is typically intractable; we describe a tractable approximation in Section 4.
3.3 Encoding a Neural Network
We show how to encode the constraint $f(x) = \ell$ as constraints $C_f(x, \ell)$ when $f$ is a neural net. We assume $f$ has form $f(x) = \arg\max_{\ell \in L} \big( f^{(k)}(f^{(k-1)}(\ldots(f^{(1)}(x))\ldots)) \big)_\ell$, where the $i$-th layer of the network is a function $f^{(i)} : \mathbb{R}^{n_{i-1}} \to \mathbb{R}^{n_i}$, with $n_0 = n$ and $n_k = |L|$. We describe the encoding of fully-connected and ReLU layers; convolutional layers are encoded similarly to fully-connected layers and max-pooling layers are encoded similarly to ReLU layers. We introduce the variables $x^{(0)}, \ldots, x^{(k)}$ into our constraints, with the interpretation that $x^{(i)}$ represents the output vector of layer $i$ of the network; i.e., $x^{(i)} = f^{(i)}(x^{(i-1)})$. The constraint $C_{\mathrm{in}}(x) \equiv (x^{(0)} = x)$ encodes the input layer. For each layer $f^{(i)}$, we encode the computation of $x^{(i)}$ given $x^{(i-1)}$ as a constraint $C_i$.
Fully-connected layer. In this case, $x^{(i)} = f^{(i)}(x^{(i-1)}) = W^{(i)} x^{(i-1)} + b^{(i)}$, which we encode using the constraints $C_i \equiv \bigwedge_{j=1}^{n_i} \big( x_j^{(i)} = W_j^{(i)} x^{(i-1)} + b_j^{(i)} \big)$, where $W_j^{(i)}$ is the $j$-th row of $W^{(i)}$.
ReLU layer. In this case, $x_j^{(i)} = \max\{x_j^{(i-1)}, 0\}$ (for each $1 \le j \le n_i$), which we encode using the constraints $C_i \equiv \bigwedge_{j=1}^{n_i} C_{ij}$, where $C_{ij} \equiv (x_j^{(i-1)} < 0 \wedge x_j^{(i)} = 0) \vee (x_j^{(i-1)} \ge 0 \wedge x_j^{(i)} = x_j^{(i-1)})$.
Finally, the constraints $C_{\mathrm{out}}(\ell) \equiv \bigwedge_{\ell' \ne \ell} \big( x_\ell^{(k)} \ge x_{\ell'}^{(k)} \big)$ ensure that the output label is $\ell$. Together, the constraints $C_f(x, \ell) \equiv C_{\mathrm{in}}(x) \wedge \bigwedge_{i=1}^{k} C_i \wedge C_{\mathrm{out}}(\ell)$ encode the computation of $f$:
Theorem 1. For any $x \in \mathcal{X}$ and $\ell \in L$, we have $f(x) = \ell$ if and only if $C_f(x, \ell)$ is satisfiable.
4 Approximate Computation of Pointwise Robustness
Convex restriction. The challenge to solving (3) is the non-convexity of the feasible set of $C_f(x, \ell)$. To recover tractability, we approximate (3) by constraining the feasible set to $x \in Z(x^\star)$, where $Z(x^\star) \subseteq \mathcal{X}$ is carefully chosen so that the constraints $\hat{C}_f(x, \ell) \equiv C_f(x, \ell) \wedge (x \in Z(x^\star))$ have a convex feasible set. We call $\hat{C}_f(x, \ell)$ the convex restriction of $C_f(x, \ell)$. In some sense, convex restriction is the opposite of convex relaxation. Then, we can approximately compute robustness:
$$\hat{\rho}(f, x^\star, \ell) \stackrel{\text{def}}{=} \inf\{\epsilon \ge 0 \mid \hat{C}_f(x, \ell) \wedge \|x - x^\star\|_\infty \le \epsilon \text{ satisfiable}\}. \quad (4)$$
The objective is optimized over $x \in Z(x^\star)$, which approximates the optimum over $x \in \mathcal{X}$.
Choice of $Z(x^\star)$. We construct $Z(x^\star)$ as the feasible set of constraints $D(x^\star)$; i.e., $Z(x^\star) = \mathcal{F}(D(x^\star))$. We now describe how to construct $D(x^\star)$.
Note that $\mathcal{F}(w^T x + b = 0)$ and $\mathcal{F}(w^T x + b \ge 0)$ are convex sets. Furthermore, if $\mathcal{F}(C_1)$ and $\mathcal{F}(C_2)$ are convex, then so is their conjunction $\mathcal{F}(C_1 \wedge C_2)$. However, their disjunction $\mathcal{F}(C_1 \vee C_2)$ may not be convex; for example, $\mathcal{F}((x \ge 0) \vee (y \ge 0))$. The potential non-convexity of disjunctions makes (3) difficult to optimize.
We can eliminate disjunction operations by choosing one of the two disjuncts to hold. For example, note that for $C_1 \equiv C_2 \vee C_3$, we have both $\mathcal{F}(C_2) \subseteq \mathcal{F}(C_1)$ and $\mathcal{F}(C_3) \subseteq \mathcal{F}(C_1)$. In other words, if we replace $C_1$ with either $C_2$ or $C_3$, the feasible set of the resulting constraints can only become smaller. Taking $D(x^\star) \equiv C_2$ (resp., $D(x^\star) \equiv C_3$) effectively replaces $C_1$ with $C_2$ (resp., $C_3$).
To restrict (3), for every disjunction $C_1 \equiv C_2 \vee C_3$, we systematically choose either $C_2$ or $C_3$ to replace the constraint $C_1$. In particular, we choose $C_2$ if $x^\star$ satisfies $C_2$ (i.e., $x^\star \in \mathcal{F}(C_2)$) and choose $C_3$ otherwise. In our constraints, disjunctions are always mutually exclusive, so $x^\star$ never simultaneously satisfies both $C_2$ and $C_3$. We then take $D(x^\star)$ to be the conjunction of all our choices. The resulting constraints $\hat{C}_f(x, \ell)$ contain only conjunctions of linear relations, so the feasible set is convex. In fact, the problem can be expressed as a linear program (LP) and can be solved using any standard LP solver.
For example, consider a rectified linear layer (as before, max-pooling layers are similar). The original constraint added for unit $j$ of rectified linear layer $f^{(i)}$ is
$$\big( x_j^{(i-1)} \le 0 \wedge x_j^{(i)} = 0 \big) \vee \big( x_j^{(i-1)} \ge 0 \wedge x_j^{(i)} = x_j^{(i-1)} \big).$$
To restrict this constraint, we evaluate the neural network on the seed input $x^\star$ and look at the input to $f^{(i)}$, which equals $x^{\star(i-1)} = f^{(i-1)}(\ldots(f^{(1)}(x^\star))\ldots)$. Then, for each $1 \le j \le n_i$:
$$D(x^\star) \leftarrow D(x^\star) \wedge \begin{cases} x_j^{(i-1)} \ge 0 \,\wedge\, x_j^{(i)} = x_j^{(i-1)} & \text{if } (x^{\star(i-1)})_j \ge 0 \\ x_j^{(i-1)} \le 0 \,\wedge\, x_j^{(i)} = 0 & \text{if } (x^{\star(i-1)})_j < 0. \end{cases}$$
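To make the restriction concrete, here is a small self-contained sketch (our own illustration, not the authors' implementation) that fixes the ReLU activation pattern of a tiny two-layer network at the seed input $x^\star$ and solves the resulting linear program for the minimal $L_\infty$ perturbation reaching a target label; all weights and sizes are invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
# Toy two-layer ReLU net: f(x) = argmax(W2 @ relu(W1 @ x + b1) + b2).
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(3, 4)), rng.normal(size=3)

x_star = np.array([0.5, -1.0])
act = W1 @ x_star + b1 >= 0                        # ReLU pattern fixed by D(x*)
logits = W2 @ np.maximum(W1 @ x_star + b1, 0) + b2
target = int(np.argsort(logits)[-2])               # second most probable label

# On Z(x*) the network is affine: logits(x) = M @ x + c0.
M = W2 @ (act[:, None] * W1)
c0 = W2 @ (act * b1) + b2

# LP variables [x_1, x_2, eps]; minimize eps.
n = x_star.size
A_ub, b_ub = [], []
for d in range(n):                                 # |x_d - x*_d| <= eps
    row = np.zeros(n + 1); row[d], row[-1] = 1.0, -1.0
    A_ub.append(row); b_ub.append(x_star[d])
    row = np.zeros(n + 1); row[d], row[-1] = -1.0, -1.0
    A_ub.append(row); b_ub.append(-x_star[d])
for j in range(len(b1)):                           # keep the activation pattern
    sgn = -1.0 if act[j] else 1.0                  # active: W1_j x + b1_j >= 0
    A_ub.append(np.append(sgn * W1[j], 0.0)); b_ub.append(-sgn * b1[j])
for l in range(len(b2)):                           # the target label must win
    if l != target:
        A_ub.append(np.append(M[l] - M[target], 0.0))
        b_ub.append(c0[target] - c0[l])

res = linprog(np.append(np.zeros(n), 1.0), A_ub=np.array(A_ub),
              b_ub=np.array(b_ub), bounds=[(None, None)] * n + [(0, None)])
if res.success:
    print("rho_hat(f, x*, target) =", res.fun)     # minimal L-infinity distance
else:
    print("no adversarial example with this activation pattern (LP infeasible)")
```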
Iterative constraint solving. We implement an optimization for solving LPs by lazily adding constraints as necessary. Given all constraints $C$, we start by solving the LP with only the subset of equality constraints $\hat{C} \subseteq C$, which yields a (possibly infeasible) solution $z$. If $z$ is feasible, then $z$ is also an optimal solution to the original LP; otherwise, we add to $\hat{C}$ the constraints in $C$ that are not satisfied by $z$ and repeat the process. This process always yields the correct solution, since in the worst case $\hat{C}$ becomes equal to $C$. In practice, this optimization is an order of magnitude faster than directly solving the LP with all constraints $C$.
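A minimal sketch of this lazy scheme (again our own illustration, not the authors' code) might look as follows; rows of the inequality system are activated only once the current relaxed solution violates them.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_lazily(obj, A_ub, b_ub, box=(None, None), tol=1e-9):
    """min obj @ x  s.t.  A_ub @ x <= b_ub, with box bounds on every variable.
    Inequality rows are added only when the current solution violates them."""
    A_ub, b_ub = np.asarray(A_ub), np.asarray(b_ub)
    bounds = [box] * len(obj)
    active = np.zeros(len(b_ub), dtype=bool)       # start with no inequalities
    while True:
        res = linprog(obj,
                      A_ub=A_ub[active] if active.any() else None,
                      b_ub=b_ub[active] if active.any() else None,
                      bounds=bounds)
        if res.status == 3 and not active.all():   # relaxation unbounded:
            active[:] = True                       # fall back to the full LP
            continue
        if not res.success:
            return res
        violated = A_ub @ res.x > b_ub + tol
        if not violated.any():                     # z satisfies all dropped rows,
            return res                             # hence optimal for the full LP
        active |= violated                         # add them and re-solve

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 3)); b = rng.uniform(1.0, 2.0, size=40)
obj = np.array([1.0, 2.0, -1.0])
lazy = solve_lp_lazily(obj, A, b, box=(-5.0, 5.0))
full = linprog(obj, A_ub=A, b_ub=b, bounds=[(-5.0, 5.0)] * 3)
print(np.isclose(lazy.fun, full.fun))              # both give the same optimum
```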
Single target label. For simplicity, rather than minimizing over $\rho(f, x^\star, \ell)$ for each $\ell \ne \ell^\star$, we fix $\ell$ to be the second most probable label $\tilde{f}(x^\star)$; i.e.,
$$\hat{\rho}(f, x^\star) \stackrel{\text{def}}{=} \inf\{\epsilon \ge 0 \mid \hat{C}_f(x, \tilde{f}(x^\star)) \wedge \|x - x^\star\|_\infty \le \epsilon \text{ satisfiable}\}. \quad (5)$$
Approximate robustness statistics. We can use $\hat{\rho}$ in our statistics $\hat{\phi}$ and $\hat{\mu}$ defined in Section 2. Because $\hat{\rho}$ is an overapproximation of $\rho$ (i.e., $\hat{\rho}(f, x^\star) \ge \rho(f, x^\star)$), the estimates $\hat{\phi}$ and $\hat{\mu}$ may not be unbiased (in particular, $\hat{\phi}(f, \epsilon) \le \phi(f, \epsilon)$). In Section 6, we show empirically that our algorithm produces substantially less biased estimates than existing algorithms for finding adversarial examples.
5 Improving Neural Net Robustness
Finding adversarial examples. We can use our algorithm for estimating $\hat{\rho}(f, x^\star)$ to compute adversarial examples. Given $x^\star$, the value of $x$ computed by the optimization procedure used to solve (5) is an adversarial example for $x^\star$ with $\|x - x^\star\|_\infty = \hat{\rho}(f, x^\star)$.
Fine-tuning. We use fine-tuning to reduce a neural net's susceptibility to adversarial examples. First, we use an algorithm $A$ to compute adversarial examples for each $x^\star \in X_{\text{train}}$ and add them to the training set. Then, we continue training the network on the augmented training set at a reduced training rate. We can repeat this process for multiple rounds (denoted $T$); at each round, we only consider $x^\star$ in the original training set (rather than the augmented training set).
                                   Adversarial Frequency (%)   Adversarial Severity (pixels)
Neural Net           Accuracy (%)    Baseline     Our Algo.       Baseline     Our Algo.
LeNet (Original)        99.08          1.32         7.15            11.9         12.4
Baseline (T = 1)        99.14          1.02         6.89            11.0         12.3
Baseline (T = 2)        99.15          0.99         6.97            10.9         12.4
Our Algo. (T = 1)       99.17          1.18         5.40            12.8         12.2
Our Algo. (T = 2)       99.23          1.12         5.03            12.2         11.7

Table 1: Evaluation of fine-tuned networks. Our method discovers more adversarial examples than the baseline [21] for each neural net, hence producing better estimates. LeNet fine-tuned for T = 1, 2 rounds (bottom four rows) exhibits a notable increase in robustness compared to the original LeNet.
Figure 3: The cumulative number of test points $x^\star$ such that $\hat{\rho}(f, x^\star) \le \epsilon$ as a function of $\epsilon$. In (a) and (b), the neural nets are the original LeNet (black), LeNet fine-tuned with the baseline and T = 2 (red), and LeNet fine-tuned with our algorithm and T = 2 (blue); in (a), $\hat{\rho}$ is measured using the baseline, and in (b), $\hat{\rho}$ is measured using our algorithm. In (c), the neural nets are the original NiN (black) and NiN fine-tuned with our algorithm, and $\hat{\rho}$ is estimated using our algorithm.
Rounding errors. MNIST images are represented as integers, so we must round the perturbation to obtain an image, which oftentimes results in non-adversarial examples. When fine-tuning, we add a constraint $x_\ell^{(k)} \ge x_{\ell'}^{(k)} + \gamma$ for all $\ell' \ne \ell$, which eliminates this problem by ensuring that the neural net has high confidence on its adversarial examples. In our experiments, we fix $\gamma = 3.0$.
Similarly, we modified the L-BFGS-B baseline so that during the line search over $c$, we only count $x^\star + r$ as adversarial if $x_\ell^{(k)} \ge x_{\ell'}^{(k)} + \gamma$ for all $\ell' \ne \ell$. We choose $\gamma = 0.15$, since larger $\gamma$ causes the baseline to find significantly fewer adversarial examples, and smaller $\gamma$ results in a smaller improvement in robustness. With this choice, rounding errors occur on 8.3% of the adversarial examples we find on the MNIST training set.
6 Experiments
6.1 Adversarial Images for CIFAR-10 and MNIST
We find adversarial examples for the neural net LeNet [12] (modified to use ReLUs instead of sigmoids) trained to classify MNIST [11], and for the network-in-network (NiN) neural net [13] trained to classify CIFAR-10 [9]. Both neural nets are trained using Caffe [8]. For MNIST, Figure 2 (b) shows an adversarial example (labeled 1) we find for the image in Figure 2 (a) labeled 3, and Figure 2 (c) shows the corresponding adversarial perturbation scaled so the difference is visible (it has $L_\infty$ norm 17). For CIFAR-10, Figure 2 (e) shows an adversarial example labeled "truck" for the image in Figure 2 (d) labeled "automobile", and Figure 2 (f) shows the corresponding scaled adversarial perturbation (which has $L_\infty$ norm 3).
6.2 Comparison to Other Algorithms on MNIST
We compare our algorithm for estimating $\rho$ to the baseline L-BFGS-B algorithm proposed by [21]. We use the tool provided by [22] to compute this baseline. For both algorithms, we use adversarial target label $\ell = \tilde{f}(x^\star)$. We use LeNet in our comparisons, since we find that it is substantially more robust than the neural nets considered in most previous work (including [21]). We also use versions of LeNet fine-tuned using both our algorithm and the baseline with T = 1, 2. To focus on the most severe adversarial examples, we use a stricter robustness threshold of $\epsilon = 20$ pixels.
We performed a similar comparison to the signed gradient algorithm proposed by [5] (with the signed gradient multiplied by $\epsilon = 20$ pixels). For LeNet, this algorithm found only one adversarial example on the MNIST test set (out of 10,000) and four adversarial examples on the MNIST training set (out of 60,000), so we omit the results.²
Results. In Figure 3, we plot the number of test points $x^\star$ for which $\hat{\rho}(f, x^\star) \le \epsilon$, as a function of $\epsilon$, where $\hat{\rho}(f, x^\star)$ is estimated using (a) the baseline and (b) our algorithm. These plots compare the robustness of each neural network as a function of $\epsilon$. In Table 1, we show results evaluating the robustness of each neural net, including the adversarial frequency and the adversarial severity. The running times of our algorithm and the baseline algorithm are very similar; in both cases, computing $\hat{\rho}(f, x^\star)$ for a single input $x^\star$ takes about 1.5 seconds. For comparison, without our iterative constraint solving optimization, our algorithm took more than two minutes to run.
Discussion. For every neural net, our algorithm produces substantially higher estimates of the adversarial frequency. In other words, our algorithm estimates $\rho(f, x^\star)$ with substantially better accuracy compared to the baseline.
According to the baseline metrics shown in Figure 3 (a), the baseline neural net (red) is similarly robust to our neural net (blue), and both are more robust than the original LeNet (black). Our neural net is actually more robust than the baseline neural net for smaller values of $\epsilon$, whereas the baseline neural net eventually becomes slightly more robust (i.e., where the red line dips below the blue line). This behavior is captured by our robustness statistics: the baseline neural net has lower adversarial frequency (so it has fewer adversarial examples with $\hat{\rho}(f, x^\star) \le \epsilon$) but also has worse adversarial severity (since its adversarial examples are on average closer to the original points $x^\star$).
However, according to our metrics shown in Figure 3 (b), our neural net is substantially more robust than the baseline neural net. Again, this is reflected by our statistics: our neural net has substantially lower adversarial frequency compared to the baseline neural net, while maintaining similar adversarial severity. Taken together, our results suggest that the baseline neural net is overfitting to the adversarial examples found by the baseline algorithm. In particular, the baseline neural net does not learn the adversarial examples found by our algorithm. On the other hand, our neural net learns both the adversarial examples found by our algorithm and those found by the baseline algorithm.
6.3 Scaling to CIFAR-10
We also implemented our approach for the CIFAR-10 network-in-network (NiN) neural net [13], which obtains 91.31% test set accuracy. Computing $\hat{\rho}(f, x^\star)$ for a single input on NiN takes about 10-15 seconds on an 8-core CPU. Unlike LeNet, NiN suffers severely from adversarial examples: we measure a 61.5% adversarial frequency and an adversarial severity of 2.82 pixels. Our neural net (NiN fine-tuned using our algorithm and T = 1) has test set accuracy 90.35%, which is similar to the test set accuracy of the original NiN. As can be seen in Figure 3 (c), our neural net improves slightly in terms of robustness, especially for smaller $\epsilon$. As before, these improvements are reflected in our metrics: the adversarial frequency of our neural net drops slightly to 59.6%, and the adversarial severity improves to 3.88. Nevertheless, unlike LeNet, our fine-tuned version of NiN remains very prone to adversarial examples. In this case, we believe that new techniques are required to significantly improve robustness.
7 Conclusion
We have shown how to formulate, efficiently estimate, and improve the robustness of neural nets by encoding the robustness property as a constraint system. Future work includes devising better approaches to improving robustness on large neural nets such as NiN and studying properties beyond robustness.
²Furthermore, the signed gradient algorithm cannot be used to estimate adversarial severity, since all the adversarial examples it finds have $L_\infty$ norm exactly $\epsilon$.
References
[1] K. Chalupka, P. Perona, and F. Eberhardt. Visual causal feature learning. 2015.
[2] A. Fawzi, O. Fawzi, and P. Frossard. Analysis of classifiers' robustness to adversarial perturbations. ArXiv e-prints, 2015.
[3] Jiashi Feng, Tom Zahavy, Bingyi Kang, Huan Xu, and Shie Mannor. Ensemble robustness of deep learning
algorithms. arXiv preprint arXiv:1602.02389, 2016.
[4] Amir Globerson and Sam Roweis. Nightmare at test time: robust learning by feature deletion. In
Proceedings of the 23rd international conference on Machine learning, pages 353?360. ACM, 2006.
[5] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial
examples. 2015.
[6] S. Gu and L. Rigazio. Towards deep neural network architectures robust to adversarial examples. 2014.
[7] Ruitong Huang, Bing Xu, Dale Schuurmans, and Csaba Szepesvári. Learning with a strong adversary.
CoRR, abs/1511.03034, 2015.
[8] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio
Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv
preprint arXiv:1408.5093, 2014.
[9] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional
neural networks. 2012.
[11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278?2324, 1998.
[12] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. In S. Haykin and B. Kosko, editors, Intelligent Signal Processing, pages 306?351.
IEEE Press, 2001.
[13] Min Lin, Qiang Chen, and Shuicheng Yan. Network In Network. CoRR, abs/1312.4400, 2013.
[14] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing
with virtual adversarial training. stat, 1050:25, 2015.
[15] Guido F. Montúfar, Razvan Pascanu, KyungHyun Cho, and Yoshua Bengio. On the number of linear
regions of deep neural networks. In Advances in Neural Information Processing Systems 27: Annual
Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec,
Canada, pages 2924?2932, 2014.
[16] Seyed Mohsen Moosavi Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate
method to fool deep neural networks. In Proceedings of 2016 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), number EPFL-CONF-218057, 2016.
[17] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence
predictions for unrecognizable images. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE
Conference on, pages 427?436. IEEE, 2015.
[18] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami.
Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint
arXiv:1602.02697, 2016.
[19] Sara Sabour, Yanshuai Cao, Fartash Faghri, and David J Fleet. Adversarial manipulation of deep representations. arXiv preprint arXiv:1511.05122, 2015.
[20] Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: Increasing local
stability of neural nets through robust optimization. arXiv preprint arXiv:1511.05432, 2015.
[21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and
Rob Fergus. Intriguing properties of neural networks. 2014.
[22] Pedro Tabacof and Eduardo Valle. Exploring the space of adversarial images. CoRR, abs/1510.05328,
2015.
[23] Huan Xu and Shie Mannor. Robustness and generalization. Machine learning, 86(3):391?423, 2012.
Weight Space Probability Densities
in Stochastic Learning:
I. Dynamics and Equilibria
Todd K. Leen and John E. Moody
Department of Computer Science and Engineering
Oregon Graduate Institute of Science & Technology
19600 N.W. von Neumann Dr.
Beaverton, OR 97006-1999
Abstract
The ensemble dynamics of stochastic learning algorithms can be
studied using theoretical techniques from statistical physics. We
develop the equations of motion for the weight space probability
densities for stochastic learning algorithms. We discuss equilibria
in the diffusion approximation and provide expressions for special
cases of the LMS algorithm. The equilibrium densities are not in
general thermal (Gibbs) distributions in the objective function being minimized, but rather depend upon an effective potential that
includes diffusion effects. Finally we present an exact analytical
expression for the time evolution of the density for a learning algorithm with weight updates proportional to the sign of the gradient.
1 Introduction: Theoretical Framework
Stochastic learning algorithms involve weight updates of the form
$$w(n+1) = w(n) + \mu(n)\, H[w(n), x(n)] \quad (1)$$
where $w \in \mathbb{R}^m$ is the vector of $m$ weights, $\mu$ is the learning rate, $H[\cdot] \in \mathbb{R}^m$ is the update function, and $x(n)$ is the exemplar (input or input/target pair) presented
to the network at the $n$th iteration of the learning rule. Often the update function is based on the gradient of a cost function, $H(w, x) = -\partial \mathcal{E}(w, x)/\partial w$. We assume that the exemplars are i.i.d. with underlying probability density $p(x)$.
We are interested in studying the time evolution and steady state behavior of
the weight space probability density P(w, n) for ensembles of networks trained by
stochastic learning. Stochastic process theory and classical statistical mechanics
provide tools for doing this. As we shall see, the ensemble behavior of stochastic learning algorithms is similar to that of diffusion processes in physical systems,
although significant differences do exist.
1.1 Dynamics of the Weight Space Probability Density
Equation (1) defines a Markov process on the weight space. Given the particular input $x$, the single time-step transition probability density for this process is a Dirac delta function whose arguments satisfy the weight update (1):
$$W(w' \to w \mid x) = \delta\big(w - w' - \mu H[w', x]\big). \quad (2)$$
From this conditional transition probability, we calculate the total single time-step transition probability (Leen and Orr 1992, Ritter and Schulten 1988)
$$W(w' \to w) = \big\langle\, \delta\big(w - w' - \mu H[w', x]\big)\, \big\rangle_x, \quad (3)$$
where $\langle \cdots \rangle_x$ denotes integration over the measure on the random variable $x$.
The time evolution of the density is given by the Kolmogorov equation
$$P(w, n+1) = \int dw'\, P(w', n)\, W(w' \to w), \quad (4)$$
which forms the basis for our dynamical description of the weight space probability density.
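The ensemble view in (4) is easy to simulate; the sketch below (our own, with a 1-D LMS-style update and arbitrary constants) propagates a large ensemble of networks under the stochastic rule (1) and histograms the weights, giving a Monte Carlo estimate of P(w, n).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, w_true, noise_std = 0.05, 1.0, 0.5             # made-up constants
ensemble = np.full(100_000, -1.0)                  # every network starts at w = -1

for n in range(200):                               # iterate the update rule (1)
    s = rng.normal(size=ensemble.shape)            # exemplar inputs s(n)
    t = w_true * s + noise_std * rng.normal(size=ensemble.shape)
    ensemble += mu * (t - ensemble * s) * s        # LMS-style H[w, x]

hist, edges = np.histogram(ensemble, bins=80, density=True)
print("ensemble mean and variance:", ensemble.mean(), ensemble.var())
```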
Stationary, or equilibrium, probability distributions are eigenfunctions of the transition probability:
$$P_s(w) = \int dw'\, P_s(w')\, W(w' \to w). \quad (5)$$
It is particularly interesting to note that for problems in which there exists an optimal weight $w_\star$ such that $H(w_\star, x) = 0$ for all $x$, one stationary solution is a delta function at $w = w_\star$. An important class of such examples are noise-free mapping problems for which weight values exist that realize the desired mapping over all possible input/target pairs. For such problems, the ensemble can settle into a sharp distribution at the optimal weights (for examples see Leen and Orr 1992, Orr and Leen 1993).
Although the Kolmogorov equation can be integrated numerically, we would like to make further analytic progress. Towards this end we convert the Kolmogorov equation into a differential-difference equation by expanding (3) as a power series in $\mu$. (An alternative is to base the time evolution on a suitable master equation; both approaches give the same results.) Since the transition probability is defined in the sense of generalized functions (i.e., distributions), the proper way to proceed is to smear (4) with a smooth test
function of compact support $f(w)$ to obtain
$$\int dw\, f(w)\, P(w, n+1) = \int dw\, dw'\, f(w)\, P(w', n)\, W(w' \to w). \quad (6)$$
Next we use the transition probability (3) to perform the integration over $w$ and expand the resulting expression as a power series in $\mu$. Finally, we integrate by parts to take derivatives off $f$, dropping the surface terms. This results in a discrete-time version of the classic Kramers-Moyal expansion (Risken 1989):
$$P(w, n+1) - P(w, n) = \sum_{l=1}^{\infty} \frac{(-\mu)^l}{l!}\; \frac{\partial^{\,l}}{\partial w_{j_1} \cdots \partial w_{j_l}} \Big[\, \big\langle H_{j_1}[w, x] \cdots H_{j_l}[w, x] \big\rangle_x\; P(w, n) \Big] \quad (7)$$
where $H_{j_a}$ denotes the $j_a$-th component of the $m$-component vector $H$.
In section 3, we present an algorithm for which the Kramers-Moyal expansion can be explicitly summed. In general the full expansion is not analytically tractable, and to make further analytic progress we will truncate it at second order to obtain the Fokker-Planck equation.
1.2 The Fokker-Planck (Diffusion) Approximation
For small enough $|\mu H|$, the Kramers-Moyal expansion (7) can be truncated at second order to obtain a Fokker-Planck equation:²
$$P(w, n+1) - P(w, n) = -\mu\, \frac{\partial}{\partial w_i}\big[ A_i(w)\, P(w, n) \big] + \frac{\mu^2}{2}\, \frac{\partial^2}{\partial w_i\, \partial w_j}\big[ B_{ij}(w)\, P(w, n) \big]. \quad (8)$$
In (8), and throughout the remainder of the paper, repeated indices are summed over. In the Fokker-Planck approximation, only two coefficients appear: $A_i(w) = \langle H_i \rangle_x$, called the drift vector, and $B_{ij}(w) = \langle H_i H_j \rangle_x$, called the diffusion matrix. The drift vector is simply the average update applied at $w$. Since the diffusion coefficients can be strongly dependent on the position in weight space, the equilibrium densities will, in general, not be thermal (Gibbs) distributions in the potential corresponding to $\langle H(w, x) \rangle_x$. This is exemplified in our discussion of equilibrium densities for the LMS algorithm in section 2.1 below.³
²Radons et al. (1990) independently derived a Fokker-Planck equation for backpropagation. Earlier, Ritter and Schulten (1988) derived a Fokker-Planck equation (for Kohonen's self-ordering feature map) that is valid in the neighborhood of a local optimum.
³See (Leen and Orr 1992, Orr and Leen 1993) for further examples.
2 Equilibrium Densities in the Fokker-Planck Approximation
In equilibrium the probability density is stationary, $P(w, n+1) = P(w, n) = P_s(w)$, so the Fokker-Planck equation (8) becomes
$$0 = -\frac{\partial}{\partial w_i} J_i(w) \equiv -\frac{\partial}{\partial w_i}\left[\, \mu\, A_i(w)\, P_s(w) - \frac{\mu^2}{2}\, \frac{\partial}{\partial w_j}\big( B_{ij}(w)\, P_s(w) \big) \right]. \quad (9)$$
Here, we have implicitly defined the probability density current $J(w)$. In equilibrium, its divergence is zero.
If the drift and diffusion coefficients satisfy potential conditions, then the equilibrium current itself is zero and detailed balance is obtained. The potential conditions are (Gardiner, 1990)
$$\frac{\partial Z_k}{\partial w_l} - \frac{\partial Z_l}{\partial w_k} = 0, \qquad \text{where} \qquad Z_k(w) = B_{kl}^{-1}(w)\left[\, \frac{\mu}{2}\, \frac{\partial B_{lj}(w)}{\partial w_j} - A_l(w) \right]. \quad (10)$$
Under these conditions the solution to (9) for the equilibrium density is
$$P_s(w) = \frac{1}{K}\, e^{-2 F(w)/\mu}, \qquad F(w) = \int^{w} dw_k\, Z_k(w), \quad (11)$$
where $K$ is a normalization constant and $F(w)$ is called the effective potential.
In general, the potential conditions are not satisfied for stochastic learning algorithms in multiple dimensions.⁴ In this respect, stochastic learning differs from most physical diffusion processes. However, for LMS with inputs whose correlation matrix is isotropic, the conditions are satisfied and the equilibrium density can be reduced to the quadrature in (11).
2.1 Equilibrium Density for the LMS Algorithm
The best known on-line learning system is the LMS adaptive filter. For the LMS algorithm, the training examples consist of input/target pairs $x(n) = \{s(n), t(n)\}$, the model output is $u(n) = w \cdot s(n)$, and the cost function is the squared error:
$$\mathcal{E}(w, x(n)) = \frac{1}{2}\,[t(n) - u(n)]^2 = \frac{1}{2}\,[t(n) - w \cdot s(n)]^2. \quad (12)$$
The resulting update equations (for constant learning rate $\mu$) are
$$w(n+1) = w(n) + \mu\,[t(n) - w \cdot s(n)]\, s(n). \quad (13)$$
We assume that the training data are generated according to a "signal plus noise" model:
$$t(n) = w_\star \cdot s(n) + \epsilon(n), \quad (14)$$
where $w_\star$ is the "true" weight vector and $\epsilon(n)$ is i.i.d. noise with mean zero and variance $\sigma^2$. We denote the correlation matrix of the inputs $s(n)$ by $R$ and the fourth-order correlation tensor of the inputs by $S$. It is convenient to shift the origin of coordinates in weight space and define the weight error vector $v = w - w_\star$.
⁴For one-dimensional algorithms, the potential conditions are trivially satisfied.
In terms of $v$, the weight update is
$$v(n+1) = v(n) - \mu\,[s(n) \cdot v(n)]\, s(n) + \mu\,\epsilon(n)\, s(n). \quad (15)$$
The drift vector and diffusion matrix are given by
$$A_i = -\langle s_i s_j \rangle_s\, v_j = -R_{ij}\, v_j \qquad \text{and} \qquad B_{ij} = \langle s_i s_j s_k s_l \rangle_s\, v_k v_l + \sigma^2 \langle s_i s_j \rangle_s = S_{ijkl}\, v_k v_l + \sigma^2 R_{ij}, \quad (16)$$
respectively. Notice that the diffusion matrix is quadratic in $v$. Thus as we move away from the global minimum at $v = 0$, diffusive spreading of the probability density is enhanced. Notice also that, in general, both terms of the diffusion matrix contribute an anisotropy.
We further assume that the inputs are drawn from a zero-mean Gaussian process. This assumption allows us to appeal to the Gaussian moment factoring theorem (Haykin, 1991, p. 318) to express the fourth-order correlation $S$ in terms of $R$:
$$S_{ijkl} = R_{ij} R_{kl} + R_{ik} R_{jl} + R_{il} R_{jk}.$$
The diffusion matrix reduces to
$$B_{ij} = R_{ij}\,\big(v^T R\, v + \sigma^2\big) + 2\,(R v)_i\,(R v)_j. \quad (17)$$
To compute the effective potential (10 and 11) the diffusion matrix is inverted using the Sherman-Morrison formula (Press, 1987, p. 67). As a final simplification, we assume that the input distribution is spherically symmetric. Thus $R = rI$, where $I$ denotes the identity matrix.
Together these assumptions insure detailed balance, and we can integrate (11) in closed form. In figure 1, we compare the effective potential $F(v)$ (for 1-D LMS) with the potential corresponding to the quadratic cost function.
Fig. 1: Effective potential (dashed curve) and cost function (solid curve) for 1-D LMS.
The spatial dependence of the diffusion coefficient forces the effective potential to soften relative to the cost function for large $|v|$. This accentuates the tails of the distribution relative to a Gaussian.
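The softening of F(v) is easy to reproduce numerically; the sketch below (our own, with arbitrary constants) evaluates the 1-D effective potential by quadrature of Z(v) = B(v)^{-1}[(mu/2) B'(v) - A(v)] and compares it with the quadratic cost (r/2)v^2, assuming Gaussian inputs so that B(v) = 3r^2 v^2 + sigma^2 r.

```python
import numpy as np

mu, r, sigma2 = 0.01, 4.0, 1.0                     # made-up constants

def Z(v):
    A = -r * v                                     # drift
    B = 3 * r**2 * v**2 + sigma2 * r               # diffusion (Gaussian inputs)
    dB = 6 * r**2 * v
    return (0.5 * mu * dB - A) / B

v = np.linspace(-2.0, 2.0, 2001)
z = Z(v)
F = np.concatenate(([0.0], np.cumsum(0.5 * (z[1:] + z[:-1]) * np.diff(v))))
F -= F[np.argmin(np.abs(v))]                       # anchor F(0) = 0
cost = 0.5 * r * v**2
print(F[-1], cost[-1])                             # F grows far slower than cost
```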
The equilibrium density is
$$P_s(v) = \frac{1}{K}\left[\, 1 + \frac{3r}{\sigma^2}\,|v|^2 \right]^{-\left( \frac{1}{3\mu r} + \frac{m+2}{3} \right)}, \quad (18)$$
where, as before, $m$ and $K$ denote the dimension of the weight vector and the normalization constant for the density, respectively. For a 1-D filter, the equilibrium density can be found in closed form without assuming Gaussian input data. We find
$$P_s(v) = \frac{1}{K}\left[\, 1 + \frac{S}{\sigma^2 r}\, v^2 \right]^{-\left( 1 + \frac{r}{\mu S} \right)}. \quad (19)$$
With Gaussian inputs (for which $S = 3r^2$), (19) properly reduces to (18) with $m = 1$.
The equilibrium densities (18) and (19) are clearly not Gaussian; however, in the limit of very small $\mu r$ they reduce to Gaussian distributions with variance $\mu\sigma^2/2$. Figure 2 shows a comparison between the theoretical result and a histogram of 200,000 values of $v$ generated by simulation with $\mu = 0.005$ and $\sigma^2 = 1.0$. The input data were drawn from a zero-mean Gaussian distribution with $r = 4.0$.
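The comparison in Figure 2 can be reproduced with a short simulation; the sketch below (ours, using the constants quoted above) iterates the 1-D weight-error update (15) to equilibrium and normalizes the analytic density (18) for m = 1 on a grid.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, r = 0.005, 1.0, 4.0
v, samples = 0.0, []
for n in range(400_000):
    s = rng.normal(scale=np.sqrt(r))              # input with <s^2> = r
    eps = rng.normal(scale=sigma)
    v -= mu * (s * v) * s - mu * eps * s          # update (15)
    if n >= 200_000:                              # discard burn-in
        samples.append(v)

samples = np.asarray(samples)
grid = np.linspace(-0.25, 0.25, 201)
dens = (1 + 3 * r / sigma**2 * grid**2) ** (-(1 / (3 * mu * r) + 1.0))
dens /= np.trapz(dens, grid)                      # normalized eq. (18), m = 1
print("empirical var:", samples.var(), " small-mu prediction:", mu * sigma**2 / 2)
```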
Fig. 2: Equilibrium density for 1-D LMS (theoretical curve and simulation histogram; horizontal axis v from -0.2 to 0.2).
3 An Exactly Summable Model
As in the case of LMS learning above, stochastic gradient descent algorithms update weights based on an instantaneous estimate of the gradient of some average cost function $\mathcal{E}(w) = \langle \mathcal{E}(w, x) \rangle_x$. That is, the update is given by
$$H_i(w, x) = -\frac{\partial}{\partial w_i}\, \mathcal{E}(w, x).$$
An alternative is to increment or decrement each weight by a fixed amount depending only on the sign of $\partial\mathcal{E}/\partial w_i$. We formulated this alternative update rule because it avoids a common problem for sigmoidal networks: getting stuck on "flat spots" or "plateaus". The standard gradient descent update rule yields very slow movement on plateaus, while second-order methods such as Gauss-Newton can be unstable. The sign-of-gradient update rule suffers from neither of these problems.⁵
⁵The use of the sign of the gradient has been suggested previously in the stochastic approximation literature by Fabian (1960) and in the neural network literature by Derthick (1984).
If at each iteration one chooses a weight at random for updating, then the Kramers-Moyal expansion can be exactly summed. Thus at each iteration we 1) choose a weight $w_i$ and an exemplar $x$ at random, and 2) update $w_i$ with
$$H_i(w, x) = -\,\mathrm{sign}\!\left( \frac{\partial \mathcal{E}(w, x(n))}{\partial w_i} \right). \quad (20)$$
With this update rule, $H_i = \pm 1$ or 0 and $H_i H_j = \delta_{ij}$ (or 0). All of the coefficients $\langle H_i H_j H_k \cdots \rangle_x$ in the Kramers-Moyal expansion (7) vanish unless $i = j = k = \cdots$. The remaining series can be summed by breaking it into odd and even parts. This leaves
$$P(w, n+1) - P(w, n) = -\frac{1}{2m} \sum_{j=1}^{m} \Big\{ P(w + \mu_j, n)\, A_j(w + \mu_j) - P(w - \mu_j, n)\, A_j(w - \mu_j) \Big\}$$
$$+\; \frac{1}{2m} \sum_{j=1}^{m} \Big\{ P(w + \mu_j, n)\, B_{jj}(w + \mu_j) - 2 P(w, n)\, B_{jj}(w) + P(w - \mu_j, n)\, B_{jj}(w - \mu_j) \Big\} \quad (21)$$
where $\mu_j$ denotes a displacement along $w_j$ of distance $\mu$, $A_j(w) = \langle H_j(w, x) \rangle_x$, and $B_{jj}(w) = \langle H_j^2(w, x) \rangle_x$. Note that $B_{jj}(w) = 1$ unless $H(w, x) = 0$ for all $x$, in which case $B_{jj}(w) = 0$. Although exact, (21) curiously has the form of a second-order finite difference approximation to the Fokker-Planck equation with diagonal diffusion matrix. This form is understandable, since the dynamics (20) restrict the weight values $w$ to a hypercubic lattice with cell length $\mu$ and generate only nearest neighbor interactions.
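Equation (21) can be iterated directly on the lattice; the sketch below (our own one-dimensional toy, with a tilted double-well cost standing in for the XOR loss of figure 3, its global minimum near w = 0) estimates A(w) and B(w) by sampling sign updates and then propagates P(w, n) with m = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.05
w = np.arange(-0.5, 2.5 + 1e-9, mu)                # lattice of weight values

def grad(w, x):                                    # noisy tilted double-well
    return 4 * w * (w - 1) * (w - 0.5) + 0.2 + 0.3 * x

xs = rng.normal(size=2000)                         # a fixed sample of exemplars
H = -np.sign(grad(w[:, None], xs[None, :]))        # sign updates, eq. (20)
A, B = H.mean(axis=1), (H ** 2).mean(axis=1)       # drift and diffusion

P = np.zeros_like(w)
P[np.argmin(np.abs(w - 1.0))] = 1.0                # start near the local minimum
for n in range(5000):                              # iterate eq. (21), m = 1
    Pp, Pm = np.roll(P, -1), np.roll(P, 1)
    drift = 0.5 * (Pp * np.roll(A, -1) - Pm * np.roll(A, 1))
    diff = 0.5 * (Pp * np.roll(B, -1) - 2 * P * B + Pm * np.roll(B, 1))
    P = np.clip(P - drift + diff, 0.0, None)       # clip guards lattice borders
    P /= P.sum()
print("probability mass near w = 0:", P[np.abs(w) < 0.2].sum())
```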
Fig. 3: Sequence of densities for the XOR problem (snapshots at n = 500 and n = 5000; horizontal axis v from -0.5 to 2.5).
As an example, figure 3 shows the cost function evaluated along a 1-D slice through
the weight space for the XOR problem. Along this line are local and global minima
at v = 1 and v = 0 respectively. Also shown is the probability density (vertical
lines). The sequence shows the spreading of the density from its initialization at
the local minimum, and its eventual collection at the global minimum.
4 Discussion
A theoretical approach that focuses on the dynamics of the weight space probability
density, as we do here, provides powerful tools to extend understanding of stochastic
search. Both transient and equilibrium behavior can be studied using these tools.
We expect that knowledge of equilibrium weight space distributions can be used in
conjunction with theories of generalization (e.g. Moody, 1992) to assess the influence
of stochastic search on prediction error. Characterization of transient phenomena
should facilitate the design and evaluation of search strategies such as data batching
and adaptive learning rate schedules. Transient phenomena are treated in greater
depth in the companion paper in this volume (Orr and Leen, 1993).
Acknowledgements
T. Leen was supported under grants N00014-91-J-1482 and N00014-90-J-1349 from
ONR. J. Moody was supported under grants 89-0478 from AFOSR, ECS-9114333
from NSF, and N00014-89-J-1228 and N00014-92-J-4062 from ONR.
References
Todd K. Leen and Genevieve B. Orr (1992), Weight-space probability densities and convergence times for stochastic learning. In International Joint Conference on Neural Networks,
pages IV 158-164. IEEE, June.
H. Ritter and K. Schulten (1988), Convergence properties of Kohonen's topology conserving maps: Fluctuations, stability and dimension selection, Biol. Cybern., 60, 59-71.
Genevieve B. Orr and Todd K. Leen (1993), Probability densities in stochastic learning;
II. Transients and Basin Hopping Times. In Giles, C.L., Hanson, S.J., and Cowan, J.D.
(eds.), Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan
Kaufmann Publishers.
H. Risken (1989), The Fokker-Planck Equation, Springer-Verlag, Berlin.
G. Radons, H.G. Schuster and D. Werner (1990), Fokker-Planck description of learning in backpropagation networks, International Neural Network Conference - INNC 90, Paris, II
993-996, Kluwer Academic Publishers.
C.W. Gardiner (1990), Handbook of Stochastic Methods, 2nd Ed. Springer-Verlag, Berlin.
Simon Haykin (1991), Adaptive Filter Theory, 2nd edition. Prentice Hall, Englewood
Cliffs, N.J.
W.H. Press, B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling (1987) Numerical Recipes
- the Art of Scientific Computing. Cambridge University Press, Cambridge I New York.
V. Fabian (1960), Stochastic approximation methods. Czechoslovak Math J., 10, 123-159.
Mark Derthick (1984), Variations on the Boltzmann machine learning algorithm. Technical
Report CMU-CS-84-120, Department of Computer Science, Carnegie-Mellon University,
Pittsburgh, PA, August.
John E. Moody (1992), The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems. In J .E. Moody, S.J. Hanson, and
R.P. Lipmann, editors, Advances in Neural Information Processing Systems 4. Morgan
Kaufmann Publishers, San Mateo, CA.
Doubly Convolutional Neural Networks
Yu Cheng
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598, USA
[email protected]
Shuangfei Zhai
Binghamton University
Vestal, NY 13902, USA
[email protected]
Weining Lu
Tsinghua University
Beijing 10084, China
[email protected]
Zhongfei (Mark) Zhang
Binghamton University
Vestal, NY 13902, USA
[email protected]
Abstract
Building large models with parameter sharing accounts for most of the success of
deep convolutional neural networks (CNNs). In this paper, we propose doubly convolutional neural networks (DCNNs), which significantly improve the performance
of CNNs by further exploring this idea. Instead of allocating a set of convolutional
filters that are independently learned, a DCNN maintains groups of filters where
filters within each group are translated versions of each other. Practically, a DCNN
can be easily implemented by a two-step convolution procedure, which is supported
by most modern deep learning libraries. We perform extensive experiments on
three image classification benchmarks: CIFAR-10, CIFAR-100 and ImageNet, and
show that DCNNs consistently outperform other competing architectures. We have
also verified that replacing a convolutional layer with a doubly convolutional layer
at any depth of a CNN can improve its performance. Moreover, various design
choices of DCNNs are demonstrated, which shows that DCNN can serve the dual
purpose of building more accurate models and/or reducing the memory footprint
without sacrificing the accuracy.
1 Introduction
In recent years, convolutional neural networks (CNNs) have achieved great success to solve many
problems in machine learning and computer vision. CNNs are extremely parameter efficient due
to exploring the translation invariant property of images, which is the key to training very deep
models without severe overfitting. While considerable progresses have been achieved by aggressively
exploring deeper architectures [1, 2, 3, 4] or novel regularization techniques [5, 6] with the standard
"convolution + pooling" recipe, we contribute from a different view by providing an alternative to the
default convolution module, which can lead to models with even better generalization abilities and/or
parameter efficiency.
Our intuition originates from observing well trained CNNs, where many of the learned filters are slightly translated versions of each other. To quantify this in a more formal fashion, we define the k-translation correlation between two convolutional filters within a same layer, $W_i$ and $W_j$, as:
$$\rho_k(W_i, W_j) = \max_{x, y \in \{-k, \ldots, k\},\, (x, y) \ne (0, 0)} \frac{\langle W_i,\; T(W_j, x, y) \rangle_f}{\|W_i\|_2\, \|W_j\|_2}, \quad (1)$$
where $T(\cdot, x, y)$ denotes the translation of the first operand by $(x, y)$ along its spatial dimensions, with proper zero padding at borders to maintain the shape; $\langle \cdot, \cdot \rangle_f$ denotes the flattened inner product, where the two operands are flattened into column vectors before taking the standard inner product; $\|\cdot\|_2$ denotes the $\ell_2$ norm of its flattened operand.
Figure 1: Visualization of the 11 × 11 sized first-layer filters learned by AlexNet [1]. Each column shows a filter in the first row along with its three most 3-translation-correlated filters. Only the first 32 filters are shown for brevity.
Figure 2: Illustration of the averaged maximum 1-translation correlation, together with the standard
deviation, of each convolutional layer for AlexNet [1] (left), and the 19-layer VGGNet [2] (right),
respectively. For comparison, for each convolutional layer in each network, we generate a filter set
with the same shape from the standard Gaussian distribution (the blue bars). For both networks, all
the convolutional layers have averaged maximum 1-translation correlations that are significantly
larger than their random counterparts.
In other words, the k-translation correlation between a pair of filters indicates the maximum correlation achieved by translating one filter up to k steps along any spatial dimension. As a concrete example, Figure 1 demonstrates the 3-translation correlation of the first-layer filters learned by AlexNet [1], with the weights obtained from the Caffe model zoo [7]. In each column, we show a filter in the first row and its three most 3-translation-correlated filters (that is, filters with the highest 3-translation correlations) in the second to fourth rows. Only the first 32 filters are shown for brevity. It is interesting to see that for most filters there exist several others that are roughly its translated versions.
higher layers and/or in deeper models. To this end, we define the averaged maximum k-translation
PN
correlation of a layer W as ??k (W) = N1 i=1 maxN
j=1,j6=i ?k (Wi , Wj ), where N is the number
of filters. Intuitively, the ??k of a convolutional layer characterizes the average level of translation
correlation among the filters within it. We then load the weights of all the convolutional layers of
AlexNet as well as the 19-layer VGGNet [2] from the Caffe model zoo, and report the averaged
maximum 1-translation correlation of each layer in Figure 2. In each graph, the height of the red
bars indicates the ??1 calculated with the weights of the corresponding layer. As a comparison,
for each layer we have also generated a filter bank with the same shape but filled with standard
Gaussian samples, whose ??1 are shown as the blue bars. We clearly see that all the layers in both
models demonstrate averaged maximum translation correlations that are significantly higher than
their random counterparts. In addition, it appears that lower convolutional layers generally have
higher translation correlations, although this does not strictly hold (e.g., conv3_4 in VGGNet).
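A layer-level sketch of $\bar{\rho}_k$, reusing the pairwise function from the previous example:

    def averaged_max_k_translation_correlation(W, k):
        """W: array of shape (N, c, z, z) holding the N filters of one layer."""
        N = W.shape[0]
        return float(np.mean([
            max(k_translation_correlation(W[i], W[j], k)
                for j in range(N) if j != i)
            for i in range(N)]))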
Motivated by the evidence shown above, we propose the doubly convolutional layer (with the double
convolution operation), which can be plugged in place of a convolutional layer in CNNs, yielding
the doubly convolutional neural networks (DCNNs). The idea of double convolution is to learn
groups of filters where filters within each group are translated versions of each other. To achieve this, a
doubly convolutional layer allocates a set of meta filters whose filter sizes are larger than the
effective filter size. Effective filters can then be extracted from each meta filter, which corresponds to
convolving the meta filters with an identity kernel. All the extracted filters are then concatenated, and
convolved with the input. Optionally, one can also choose to pool along activations produced by filters
from the same meta filter, in a similar spirit to the maxout networks [8]. We also show that double
convolution can be easily implemented with available deep learning libraries by utilizing the efficient
convolutional kernel. In our experiments, we show that the additional level of parameter sharing by
double convolution allows one to build DCNNs that yield an excellent performance on several popular
image classification benchmarks, consistently outperforming all the competing architectures by a
margin. We have also confirmed that replacing a convolutional layer with a doubly convolutional
layer consistently improves the performance, regardless of the depth of the layer. Last but not least,
we show that one is able to balance the trade-off between performance and parameter efficiency by
leveraging the architecture of a DCNN.

Figure 3: The architecture of a convolutional layer (left) and a doubly convolutional layer (right). A
doubly convolutional layer maintains meta filters whose spatial size $z' \times z'$ is larger than the effective
filter size $z \times z$. By pooling and flattening the convolution output, a doubly convolutional layer
produces $\left(\frac{z'-z+1}{s}\right)^2$ times more channels for the output image, with $s \times s$ being the pooling size.
2
Model
2.1
Convolution
We define an image $I \in \mathbb{R}^{c \times w \times h}$ as a real-valued 3D tensor, where $c$ is the number of channels;
$w, h$ are the width and height, respectively. We define the convolution operation, denoted by
$I^{\ell+1} = I^{\ell} * W^{\ell}$, as follows:

$$I^{\ell+1}_{k,i,j} = \sum_{c' \in [1,c],\ i' \in [1,z],\ j' \in [1,z]} W^{\ell}_{k,c',i',j'} \, I^{\ell}_{c',\, i+i'-1,\, j+j'-1}, \qquad k \in [1, c^{\ell+1}],\ i \in [1, w^{\ell+1}],\ j \in [1, h^{\ell+1}]. \qquad (2)$$

Here $I^{\ell} \in \mathbb{R}^{c^{\ell} \times w^{\ell} \times h^{\ell}}$ is the input image; $W^{\ell} \in \mathbb{R}^{c^{\ell+1} \times c^{\ell} \times z \times z}$ is a set of $c^{\ell+1}$ filters, with each
filter of shape $c^{\ell} \times z \times z$; $I^{\ell+1} \in \mathbb{R}^{c^{\ell+1} \times w^{\ell+1} \times h^{\ell+1}}$ is the output image. The spatial dimensions
of the output image, $w^{\ell+1}, h^{\ell+1}$, are by default $w^{\ell} - z + 1$ and $h^{\ell} - z + 1$, respectively (aka valid
convolution), but one can also pad a number of zeros at the borders of $I^{\ell}$ to achieve different output
spatial dimensions (e.g., keeping the spatial dimensions unchanged). In this paper, we use a loose
notation by freely allowing both the LHS and RHS of $*$ to be either a single image (filter) or a set of
images (filters), with proper convolution along the non-spatial dimensions.
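As a sanity check on the notation, Equation 2 admits a direct (if slow) transcription; the helper below is our own illustrative implementation, not the optimized kernels one would use in practice:

    import numpy as np

    def conv_valid(I, W):
        """I: (c, w, h) input image; W: (c_out, c, z, z) filter bank.
        Returns a (c_out, w - z + 1, h - z + 1) output, as in Equation 2."""
        c_out, c, z, _ = W.shape
        _, w, h = I.shape
        out = np.zeros((c_out, w - z + 1, h - z + 1))
        for k in range(c_out):
            for i in range(w - z + 1):
                for j in range(h - z + 1):
                    out[k, i, j] = np.sum(W[k] * I[:, i:i + z, j:j + z])
        return out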
A convolutional layer can thus be implemented with a convolution operation followed by a nonlinearity
function such as ReLU, and a convolutional neural network (CNN) is constructed by interweaving
several convolutional and spatial pooling layers.
2.2
Double convolution
We next introduce and define the double convolution operation, denoted by $I^{\ell+1} = I^{\ell} \circledast W^{\ell}$, as
follows:

$$O^{\ell+1}_{i,j,k} = W^{\ell}_k * I^{\ell}_{:,\, i:(i+z-1),\, j:(j+z-1)},$$
$$I^{\ell+1}_{(nk+1):n(k+1),\, i,\, j} = \mathrm{pool}_s\!\left(O^{\ell+1}_{i,j,k}\right), \quad n = \left(\frac{z'-z+1}{s}\right)^2, \qquad (3)$$
$$k \in [1, c^{\ell+1}],\ i \in [1, w^{\ell+1}],\ j \in [1, h^{\ell+1}].$$

Here $I^{\ell} \in \mathbb{R}^{c^{\ell} \times w^{\ell} \times h^{\ell}}$ and $I^{\ell+1} \in \mathbb{R}^{n c^{\ell+1} \times w^{\ell+1} \times h^{\ell+1}}$ are the input and output image, respectively.
$W^{\ell} \in \mathbb{R}^{c^{\ell+1} \times c^{\ell} \times z' \times z'}$ is a set of $c^{\ell+1}$ meta filters, with filter size $z' \times z'$, $z' > z$; $O^{\ell+1}_{i,j,k} \in
\mathbb{R}^{(z'-z+1) \times (z'-z+1)}$ is the intermediate output of double convolution; $\mathrm{pool}_s(\cdot)$ defines a spatial
pooling function with pooling size $s \times s$ (and optionally reshaping the output to a column vector,
inferred from the context); $*$ is the convolution operator defined previously in Equation 2.
In words, a double convolution applies a set of $c^{\ell+1}$ meta filters with spatial dimensions $z' \times z'$,
which are larger than the effective filter size $z \times z$. Image patches of size $z \times z$ at each location
$(i, j)$ of the input image, denoted by $I^{\ell}_{:,\, i:(i+z-1),\, j:(j+z-1)}$, are then convolved with each meta filter,
resulting in an output of size $(z'-z+1) \times (z'-z+1)$ for each $(i, j)$. A spatial pooling of size $s \times s$ is
then applied along this resulting output map, whose output is flattened into a column vector. This
produces an output feature map with $n c^{\ell+1}$ channels. The above procedure can be viewed as a two-step
convolution, where image patches are first convolved with the meta filters, and the meta filters then
slide across and convolve with the image, hence the name double convolution.
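A literal rendering of Equation 3, reusing conv_valid from the previous example, is sketched below; we use max pooling for $\mathrm{pool}_s$ and assume $z'-z+1$ is divisible by $s$ (both are our own simplifying choices):

    def double_conv(I, W_meta, z, s):
        """I: (c, w, h); W_meta: (c_out, c, zp, zp) meta filters with zp > z.
        Returns (n * c_out, w - z + 1, h - z + 1), n = ((zp - z + 1) // s) ** 2."""
        c_out, c, zp, _ = W_meta.shape
        _, w, h = I.shape
        m = zp - z + 1                      # side length of the intermediate map O
        n = (m // s) ** 2
        out = np.zeros((n * c_out, w - z + 1, h - z + 1))
        for k in range(c_out):
            for i in range(w - z + 1):
                for j in range(h - z + 1):
                    patch = I[:, i:i + z, j:j + z]
                    # convolve the z x z patch against the k-th meta filter: (m, m)
                    O = conv_valid(W_meta[k], patch[None])[0]
                    # s x s spatial max pooling of O, flattened into n channels
                    pooled = O.reshape(m // s, s, m // s, s).max(axis=(1, 3))
                    out[n * k:n * (k + 1), i, j] = pooled.ravel()
        return out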
A doubly convolutional layer is by analogy defined as a double convolution followed by a nonlinearity;
and substituting the convolutional layers in a CNN with doubly convolutional layers yields a doubly
convolutional neural network (DCNN). In Figure 3 we have illustrated the difference between a
convolutional layer and a doubly convolutional layer. It is possible to vary the combination of $z, z', s$
for each doubly convolutional layer of a DCNN to yield different variants, among which three extreme
cases are:

(1) CNN: Setting $z' = z$ recovers the standard CNN; hence, DCNN is a generalization of CNN.

(2) ConcatDCNN: Setting $s = 1$ produces a DCNN variant that is maximally parameter efficient.
This corresponds to extracting all sub-regions of size $z \times z$ from a $z' \times z'$ sized meta filter, which
are then stacked to form a set of $(z'-z+1)^2$ filters with size $z \times z$. With the same amount of
parameters, this produces $\frac{(z'-z+1)^2 z^2}{(z')^2}$ times more channels for a single layer.

(3) MaxoutDCNN: Setting $s = z'-z+1$, i.e., applying global pooling on $O^{\ell+1}$, produces a DCNN
variant where the output image channel size is equal to the number of the meta filters. Interestingly,
this yields a parameter efficient implementation of the maxout network [8]. To be concrete, the
maxout units in a maxout network are equivalent to pooling along the channel (feature) dimension,
where each channel corresponds to a distinct filter. MaxoutDCNN, on the other hand, pools along
channels which are produced by the filters that are translated versions of each other. Besides the
obvious advantage of reducing the number of parameters required, this also acts as an effective
regularizer, which is verified later in the experiments at Section 4.
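The channel and parameter arithmetic behind these cases is easy to verify; the short sketch below (names are ours) reproduces, for instance, that DC-128-4-3-2 matches the 128 output channels of C-128-3 while carrying 16/9 ≈ 1.78 times its parameters:

    def dcnn_layer_stats(c_out, zp, z, s, c_in):
        """Output channels and parameter count of one doubly convolutional layer."""
        n = ((zp - z + 1) // s) ** 2        # channels produced per meta filter
        channels = n * c_out
        params = c_out * c_in * zp * zp     # the meta filters are the only parameters
        return channels, params

    print(dcnn_layer_stats(128, 4, 3, 2, 128))   # -> (128, 262144)
    print(128 * 128 * 3 * 3)                     # C-128-3 -> 147456; ratio ~1.78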
Implementing a double convolution is also readily supported by most mainstream GPU-compatible
deep learning libraries (e.g., Theano which is used in our experiments), which we have summarized
in Algorithm 1. In particular, we are able to perform double convolution by two steps of convolution,
corresponding to line 4 and line 6, together with proper reshaping and pooling operations. The first
convolution extracts overlapping patches of size z ? z from the meta filters, which are then convolved
with the input image. Although it is possible to further reduce the time complexity by designing a
specialized double convolution module, we find that Algorithm 1 scales well to deep DCNNs, and
large datasets such as ImageNet.
3
Related work
The spirit of DCNNs is to further push the idea of parameter sharing of the convolutional layers,
which is shared by several recent efforts. [9] explores the rotation symmetry of certain classes of
images, and hence proposes to rotate each filter (or alternatively, the input) by a multiple of
90°, which produces four times the filters with the same amount of parameters for a single layer. [10]
observes that filters learned by ReLU CNNs often contain pairs with opposite phases in the lower
layers. The authors accordingly propose the concatenated ReLU where the linear activations are
concatenated with their negations and then passed to ReLU, which effectively doubles the number of
filters. [11] proposes the dilated convolutions, where additional filters with larger sizes are generated
by dilating the base convolutional filters, which is shown to be effective in dense prediction tasks
such as image segmentation. [12] proposes a multi-bias activation scheme where $k$, $k \ge 1$, bias
terms are learned for each filter, which produces a $k$ times larger channel size for the convolution output.
Algorithm 1: Implementation of double convolution with convolution.
Input: Input image $I^{\ell} \in \mathbb{R}^{c^{\ell} \times w^{\ell} \times h^{\ell}}$, meta filters $W^{\ell} \in \mathbb{R}^{c^{\ell+1} \times c^{\ell} \times z' \times z'}$, effective filter size
$z \times z$, pooling size $s \times s$.
Output: Output image $I^{\ell+1} \in \mathbb{R}^{n c^{\ell+1} \times w^{\ell+1} \times h^{\ell+1}}$, with $n = \frac{(z'-z+1)^2}{s^2}$.
1 begin
2   $\tilde{I}^{\ell} \leftarrow$ IdentityMatrix($c^{\ell} z^2$);
3   Reorganize $\tilde{I}^{\ell}$ to shape $c^{\ell} z^2 \times c^{\ell} \times z \times z$;
4   $\tilde{W}^{\ell} \leftarrow W^{\ell} * \tilde{I}^{\ell}$;  /* output shape: $c^{\ell+1} \times c^{\ell} z^2 \times (z'-z+1) \times (z'-z+1)$ */
5   Reorganize $\tilde{W}^{\ell}$ to shape $c^{\ell+1}(z'-z+1)^2 \times c^{\ell} \times z \times z$;
6   $O^{\ell+1} \leftarrow I^{\ell} * \tilde{W}^{\ell}$;  /* output shape: $c^{\ell+1}(z'-z+1)^2 \times w^{\ell+1} \times h^{\ell+1}$ */
7   Reorganize $O^{\ell+1}$ to shape $c^{\ell+1} w^{\ell+1} h^{\ell+1} \times (z'-z+1) \times (z'-z+1)$;
8   $I^{\ell+1} \leftarrow \mathrm{pool}_s(O^{\ell+1})$;  /* output shape: $c^{\ell+1} w^{\ell+1} h^{\ell+1} \times \frac{z'-z+1}{s} \times \frac{z'-z+1}{s}$ */
9   Reorganize $I^{\ell+1}$ to shape $c^{\ell+1}\left(\frac{z'-z+1}{s}\right)^2 \times w^{\ell+1} \times h^{\ell+1}$;
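The identity-kernel trick of Algorithm 1 can be sketched in NumPy as follows, substituting our conv_valid helper from Section 2.1 for the library convolution (an illustration only, not the Theano code used in the experiments; divisibility of $z'-z+1$ by $s$ is again assumed):

    def double_conv_fast(I, W_meta, z, s):
        """Double convolution as two ordinary convolutions (Algorithm 1)."""
        c_out, c, zp, _ = W_meta.shape
        _, w, h = I.shape
        m = zp - z + 1
        n = (m // s) ** 2
        # lines 2-3: identity matrix reorganized into c*z*z one-hot filters
        I_id = np.eye(c * z * z).reshape(c * z * z, c, z, z)
        # lines 4-5: first convolution extracts every z x z patch of each meta filter
        W_eff = np.zeros((c_out * m * m, c, z, z))
        for k in range(c_out):
            patches = conv_valid(W_meta[k], I_id)          # (c*z*z, m, m)
            for a in range(m):
                for b in range(m):
                    W_eff[k * m * m + a * m + b] = patches[:, a, b].reshape(c, z, z)
        # line 6: second convolution applies all effective filters to the input
        O = conv_valid(I, W_eff)                           # (c_out*m*m, w-z+1, h-z+1)
        # lines 7-9: per meta filter, pool the m x m group of output channels
        out = np.zeros((n * c_out, w - z + 1, h - z + 1))
        for k in range(c_out):
            grp = O[k * m * m:(k + 1) * m * m].reshape(m, m, w - z + 1, h - z + 1)
            pooled = grp.reshape(m // s, s, m // s, s,
                                 w - z + 1, h - z + 1).max(axis=(1, 3))
            out[n * k:n * (k + 1)] = pooled.reshape(n, w - z + 1, h - z + 1)
        return out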
Additionally, [13, 14] have investigated the combination of more than one transformation of filters,
such as rotation, flipping and distortion. Note that all the aforementioned approaches are orthogonal
to DCNNs and can theoretically be combined in a single model. The need of correlated filters in
CNNs is also studied in [15], where similar filters are explicitly learned and grouped with a group
sparsity penalty.
While DCNNs are designed with better performance and generalization ability in mind, they are
also closely related to the thread of work on parameter reduction in deep neural networks. The
work of Vikas and Tara [16] addresses the problem of compressing deep networks by applying
structured transforms. [17] exploits the redundancy in the parametrization of deep architectures by
imposing a circulant structure on the projection matrix, while allowing the use of FFT for faster
computations. [18] attempts to obtain the compression of the fully-connected layers of the AlexNet-type network with the Fastfood method. Novikov et al. [19] use a multi-linear transform (Tensor-Train
decomposition) to attain reduction of the number of parameters in the linear layers of CNNs. These
works differ from DCNNs as most of their focus is on the fully connected layers, which often
accounts for most of the memory consumption. DCNNs, on the other hand, apply directly to the
convolutional layers, which provides a complementary view to the same problem.
4
Experiments
4.1
Datasets
We conduct several sets of experiments with DCNN on three image classification benchmarks:
CIFAR-10, CIFAR-100, and ImageNet. CIFAR-10 and CIFAR-100 both contain 50,000 training
and 10,000 testing 32 × 32 sized RGB images, evenly drawn from 10 and 100 classes, respectively.
ImageNet is the dataset used in the ILSVRC-2012 challenge, which consists of about 1.2 million
images for training and 50,000 images for validation, sampled from 1,000 classes.
4.2
Is DCNN an effective architecture?
4.2.1
Model specifications
In the first set of experiments, we study the effectiveness of DCNN compared with two different
CNN designs. The three types of architectures subject to evaluation are:
(1) CNN: This corresponds to models using the standard convolutional layers. A convolutional layer
is denoted as C-<c>-<z>, where c, z are the number of filters and the filter size, respectively.
(2) MaxoutCNN: This corresponds to the maxout convolutional networks [8], which uses the maxout
unit to pool along the channel (feature) dimensions with a stride k. A maxout convolutional layer is
denoted as MC-<c>-<z>-<k>, where c, z, k are the number of filters, the filter size, and the feature
pooling stride, respectively.
Table 1: The configurations of the models used in Section 4.2. The architectures on the CIFAR-10
and CIFAR-100 datasets are the same, except for the top softmax layer (top). The architectures on the
ImageNet dataset are variants of the 16-layer VGGNet [2] (bottom). See the details about the naming
convention in Section 4.2.1.

CIFAR-10 / CIFAR-100:
CNN      | DCNN         | MaxoutCNN
C-128-3  | DC-128-4-3-2 | MC-512-3-4
C-128-3  | DC-128-4-3-2 | MC-512-3-4
P-2      | P-2          | P-2
C-128-3  | DC-128-4-3-2 | MC-512-3-4
C-128-3  | DC-128-4-3-2 | MC-512-3-4
P-2      | P-2          | P-2
C-128-3  | DC-128-4-3-2 | MC-512-3-4
C-128-3  | DC-128-4-3-2 | MC-512-3-4
P-2      | P-2          | P-2
C-128-3  | DC-128-4-3-2 | MC-512-3-4
C-128-3  | DC-128-4-3-2 | MC-512-3-4
P-2      | P-2          | P-2
Global Average Pooling
Softmax

ImageNet:
CNN      | DCNN         | MaxoutCNN
C-64-3   | DC-64-4-3-2  | MC-256-3-4
C-64-3   | DC-64-4-3-2  | MC-256-3-4
P-2      | P-2          | P-2
C-128-3  | DC-128-4-3-2 | MC-512-3-4
C-128-3  | DC-128-4-3-2 | MC-512-3-4
P-2      | P-2          | P-2
C-256-3  | DC-256-4-3-2 | MC-1024-3-4
C-256-3  | DC-256-4-3-2 | MC-1024-3-4
C-256-3  | DC-256-4-3-2 | MC-1024-3-4
P-2      | P-2          | P-2
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
P-2      | P-2          | P-2
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
C-512-3  | DC-512-4-3-2 | MC-2048-3-4
P-2      | P-2          | P-2
Global Average Pooling
Softmax
(3) DCNN: This corresponds to using the doubly convolutional layers. We denote a doubly convolutional layer with c filters as DC-<c>-<z'>-<z>-<s>, where z', z, s are the meta filter size, effective
filter size and pooling size, respectively, as in Equation 3. In this set of experiments, we use the
MaxoutDCNN variant, whose layers are readily represented as DC-<c>-<z'>-<z>-<z'−z+1>.
We denote a spatial max pooling layer as P-<s> with s as the pooling size. For all the models, we
apply batch normalization [6] immediately after each convolution layer, after which ReLU is used as
the nonlinearity (including MaxoutCNN, which makes our implementation slightly different from
[8]). Our model design is similar to VGGNet [2] where 3 × 3 filter sizes are used, as well as Network
in Network [20] where fully connected layers are completely eliminated. Zero padding is used before
each convolutional layer to maintain the spatial dimensions unchanged after convolution. Dropout is
applied after each pooling layer. Global average pooling is applied on top of the last convolutional
layer, which is fed to a Softmax layer with a proper number of outputs.
All the three models on each dataset are of the same architecture w.r.t. the number of layers and the
number of units per layer. The only difference thus resides in the choice of the convolutional layers.
Note that the architecture we have used on the ImageNet dataset resembles the 16-layer VGGNet [2],
but without the fully connected layers. The full specification of the model architectures is shown in
Table 1.
4.2.2
Training protocols
We preprocess all the datasets by subtracting the mean for each pixel and each channel, calculated on
the training sets. All the models are trained with Adadelta [21] on NVIDIA K40 GPUs. Batch size is
set as 200 for CIFAR-10 and CIFAR-100, and 128 for ImageNet.
Data augmentation has also been explored. On CIFAR-10 and CIFAR-100, we follow the simple
data augmentation as in [2]. For training, 4 pixels are padded on each side of the images, from which
32 × 32 crops are sampled with random horizontal flipping. For testing, only the original 32 × 32
images are used. On ImageNet, 224 × 224 crops are sampled with random horizontal flipping; the
standard color augmentation and the 10-crop testing are also applied as in AlexNet [1].
4.2.3
Results
The test errors are summarized in Table 2 and Table 3, where the relative # parameters of DCNN and
MaxoutCNN compared with the standard CNN are also shown. On the moderately-sized datasets
CIFAR-10 and CIFAR-100, DCNN achieves the best results of the three control experiments, with
and without data augmentation. Notably, DCNN consistently improves over the standard CNN by a
margin. More remarkably, DCNN also consistently outperforms MaxoutCNN, with 2.25 times fewer
parameters. This on the one hand shows that the doubly convolutional layers greatly improve the
model capacity, and on the other hand verifies our hypothesis that the parameter sharing introduced
by double convolution indeed acts as a very effective regularizer. The results achieved by DCNN on
the two datasets are also among the best published results compared with [20, 22, 23, 24].
Besides, we also note that DCNN does not have difficulty scaling up to a dataset as large as ImageNet, where consistent performance gains over the other baseline architectures are again observed.
Compared with the results of the 16-layer VGGNet in [2] with multiscale evaluation, our DCNN
implementation achieves comparable results, with significantly fewer parameters.
Table 2: Test errors on CIFAR-10 and CIFAR-100 with and without data augmentation, together with
the relative # parameters compared with the standard CNN.
Model      | # Parameters | CIFAR-10 (no aug.) | CIFAR-100 (no aug.) | CIFAR-10 (aug.) | CIFAR-100 (aug.)
CNN        | 1.           | 9.85%              | 34.26%              | 9.59%           | 33.04%
MaxoutCNN  | 4.           | 9.56%              | 33.52%              | 9.23%           | 32.37%
DCNN       | 1.78         | 8.58%              | 30.35%              | 7.24%           | 26.53%
NIN [20]   | 0.92         | 10.41%             | 35.68%              | 8.81%           | -
DSN [22]   | -            | 9.78%              | 34.57%              | 8.22%           | -
APL [23]   | -            | 9.59%              | 34.40%              | 7.51%           | 30.83%
ELU [24]   | -            | -                  | -                   | 6.55%           | 24.28%
4.3
Does double convolution contribute to every layer?
In the next set of experiments, we study the effect of applying double convolution to layers at
various depths. To this end, we replace the convolutional layers at each level of the standard CNN
defined in 4.2.1 with a doubly convolutional layer counterpart (e.g., replacing a C-128-3 layer with a
DC-128-4-3-2 layer). We hence define DCNN[i-j] as the network resulted from replacing the i ? jth
convolutional layer of a CNN with its doubly convolutional layer counterpart, and train {DCNN[1-2],
DCNN[3-4], DCNN[5-6], DCNN[7-8]} on CIFAR-10 and CIFAR-100 following the same protocol
as that in Section 4.2.2. The results are shown in Table 4. Interestingly, the doubly convolutional
layer is able to consistently improve the performance over that of the standard CNN regardless of the
depth with which it is plugged in. Also, it seems that applying double convolution at lower layers
contributes more to the performance, which is consistent with the trend of translation correlation
observed in Figure 2.
Table 3: Test errors on ImageNet, evaluated on the validation set, together with the relative #
parameters compared with the standard CNN.

Model           | Top-5 Error | Top-1 Error | # Parameters
CNN             | 10.59%      | 29.42%      | 1.
MaxoutCNN       | 9.82%       | 28.4%       | 4.
DCNN            | 8.23%       | 26.27%      | 1.78
VGG-16 [2]      | 7.5%        | 24.8%       | 9.3
ResNet-152 [4]  | 5.71%       | 21.43%      | 4.1
GoogLeNet [3]   | 7.9%        | -           | 0.47
Table 4: Inserting the doubly convolutional layer at different depths of the network.
Model      | CIFAR-10 | CIFAR-100
CNN        | 9.85%    | 34.26%
DCNN[1-2]  | 9.12%    | 32.91%
DCNN[3-4]  | 9.23%    | 33.27%
DCNN[5-6]  | 9.45%    | 33.58%
DCNN[7-8]  | 9.57%    | 33.72%
DCNN[1-8]  | 8.58%    | 30.35%
4.4
Performance vs. parameter efficiency
In the last set of experiments, we study the behavior of DCNNs under various combinations of its
hyper-parameters $z', z, s$. To this end, we train three more DCNNs on CIFAR-10 and CIFAR-100,
namely {DCNN-32-6-3-2, DCNN-16-6-3-1, DCNN-4-10-3-1}. Here we have overloaded the notation
for a doubly convolutional layer to denote a DCNN which contains correspondingly shaped doubly
convolutional layers (the DCNN in Table 1 thus corresponds to DCNN-128-4-3-2). In particular,
DCNN-32-6-3-2 produces a DCNN with the exact same shape and number of parameters as those of
the reference CNN; DCNN-16-6-3-1 and DCNN-4-10-3-1 are two ConcatDCNN instances from Section
2.2, which produce larger sized models with the same or a smaller amount of parameters. The results, together
with the effective layer size and the relative number of parameters, are listed in Table 5. We see that
all the variants of DCNN consistently outperform the standard CNN, even when fewer parameters
are used (DCNN-4-10-3-1). This verifies that DCNN is a flexible framework which allows one to
either maximize the performance with a fixed memory budget, or on the other hand, minimize the
memory footprint without sacrificing the accuracy. One can choose the most suitable architecture of a
DCNN by balancing the trade-off between performance and the memory footprint.
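Under the simplifying assumption that each layer's input channel count equals its own output channel count (as in the homogeneous stacks of Table 1), the "Layer size" and relative-parameter columns of Table 5 below can be reproduced in a few lines (our own sketch):

    def relative_params(c_out, zp, z, s, ref_channels=128, ref_z=3):
        n = ((zp - z + 1) // s) ** 2
        channels = n * c_out                    # "Layer size" column
        params = c_out * channels * zp * zp     # input channels = own output channels
        return channels, params / (ref_channels * ref_channels * ref_z * ref_z)

    for cfg in [(128, 4, 3, 2), (32, 6, 3, 2), (16, 6, 3, 1), (4, 10, 3, 1)]:
        print(cfg, relative_params(*cfg))
    # (128, 4, 3, 2) -> (128, ~1.78); (32, 6, 3, 2) -> (128, 1.0)
    # (16, 6, 3, 1)  -> (256, 1.0);   (4, 10, 3, 1) -> (256, ~0.69)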
Table 5: Different architecture configurations of DCNNs.
Model           | CIFAR-10 | CIFAR-100 | Layer size | # Parameters
CNN             | 9.85%    | 34.26%    | 128        | 1.
DCNN-32-6-3-2   | 9.05%    | 32.28%    | 128        | 1.
DCNN-16-6-3-1   | 9.16%    | 32.54%    | 256        | 1.
DCNN-4-10-3-1   | 9.65%    | 33.57%    | 256        | 0.69
DCNN-128-4-3-2  | 8.58%    | 30.35%    | 128        | 1.78
5
Conclusion
We have proposed the doubly convolutional neural networks (DCNNs), which utilize a novel double
convolution operation to provide an additional level of parameter sharing over CNNs. We show that
DCNNs generalize standard CNNs, and relate to several recent proposals that explore parameter
redundancy in CNNs. A DCNN can be easily implemented by modern deep learning libraries
by reusing the efficient convolution module. DCNNs can be used to serve the dual purpose of 1)
improving the classification accuracy as a regularized version of maxout networks, and 2) being
parameter efficient by flexibly varying their architectures. In the extensive experiments on CIFAR-10,
CIFAR-100, and ImageNet datasets, we have shown that DCNNs significantly improve over other
architecture counterparts. In addition, we have shown that introducing the doubly convolutional
layer to any layer of a CNN improves its performance. We have also experimented with various
configurations of DCNNs, all of which are able to outperform the CNN counterpart with the same
number of parameters or fewer.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
[2] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[3] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru
Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
[4] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
arXiv preprint arXiv:1512.03385, 2015.
[5] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout:
A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research,
15(1):1929–1958, 2014.
[6] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing
internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[7] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio
Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In
Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014.
[8] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout
networks. arXiv preprint arXiv:1302.4389, 2013.
[9] Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional
neural networks. arXiv preprint arXiv:1602.02660, 2016.
[10] Wenling Shang, Kihyuk Sohn, Diogo Almeida, and Honglak Lee. Understanding and improving convolutional neural networks via concatenated rectified linear units. arXiv preprint arXiv:1603.05201,
2016.
[11] Fisher Yu and Vladlen Koltun. Multi-scale context aggregation by dilated convolutions. arXiv preprint
arXiv:1511.07122, 2015.
[12] Hongyang Li, Wanli Ouyang, and Xiaogang Wang. Multi-bias non-linear activation in deep neural networks.
arXiv preprint arXiv:1604.00676, 2016.
[13] Robert Gens and Pedro M Domingos. Deep symmetry networks. In Advances in neural information
processing systems, pages 2537–2545, 2014.
[14] Taco S Cohen and Max Welling. Group equivariant convolutional networks. arXiv preprint
arXiv:1602.07576, 2016.
[15] Koray Kavukcuoglu, Rob Fergus, Yann LeCun, et al. Learning invariant features through topographic
filter maps. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages
1605–1612. IEEE, 2009.
[16] Vikas Sindhwani, Tara Sainath, and Sanjiv Kumar. Structured transforms for small-footprint deep learning.
In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural
Information Processing Systems 28, pages 3088–3096. Curran Associates, Inc., 2015.
[17] Yu Cheng, Felix X. Yu, Rogerio Feris, Sanjiv Kumar, and Shih-Fu Chang. An exploration of parameter
redundancy in deep networks with circulant projections. In International Conference on Computer Vision
(ICCV), 2015.
[18] Zichao Yang, Marcin Moczulski, Misha Denil, Nando de Freitas, Alex Smola, Le Song, and Ziyu Wang.
Deep fried convnets. In International Conference on Computer Vision (ICCV), 2015.
[19] Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. Tensorizing neural networks.
In Advances in Neural Information Processing Systems 28 (NIPS). 2015.
[20] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv preprint arXiv:1312.4400, 2013.
[21] Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[22] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets.
arXiv preprint arXiv:1409.5185, 2014.
[23] Forest Agostinelli, Matthew Hoffman, Peter J. Sadowski, and Pierre Baldi. Learning activation functions
to improve deep neural networks. CoRR, abs/1412.6830, 2014.
[24] Djork-Arn? Clevert, Thomas Unterthiner, and Sepp Hochreiter. Fast and accurate deep network learning
by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015.
Competing with Optimal Sequences
Oren Anava
The Voleon Group
Berkeley, CA
[email protected]
Zohar Karnin
Yahoo! Research
New York, NY
[email protected]
Abstract
We consider sequential decision making problem in the adversarial setting, where
regret is measured with respect to the optimal sequence of actions and the feedback
adheres the bandit setting. It is well-known that obtaining sublinear regret in
this setting is impossible in general, which arises the question of when can we
do better than linear regret? Previous works show that when the environment is
guaranteed to vary slowly and furthermore we are given prior knowledge regarding
its variation (i.e., a limit on the amount of changes suffered by the environment),
then this task is feasible. The caveat however is that such prior knowledge is not
likely to be available in practice, which causes the obtained regret bounds to be
somewhat irrelevant.
Our main result is a regret guarantee that scales with the variation parameter of the
environment, without requiring any prior knowledge about it whatsoever. By that,
we also resolve an open problem posted by Gur, Zeevi and Besbes [8]. An important
key component in our result is a statistical test for identifying non-stationarity
in a sequence of independent random variables. This test either identifies nonstationarity or upper-bounds the absolute deviation of the corresponding sequence
of mean values in terms of its total variation. This test is interesting on its own
right and has the potential to be found useful in additional settings.
1
Introduction
Multi-Armed Bandit (MAB) problems have been studied extensively in the past, with two important
special cases: the Stochastic Multi-Armed Bandit, and the Adversarial (Non-Stochastic) Multi-Armed
Bandit. In both formulations, the problem can be viewed as a T -round repeated game between
a player and nature. In each round, the player chooses one of k actions1 and observes the loss
corresponding to this action only (the so-called bandit feedback). In the adversarial formulation, it is
usually assumed that the losses are chosen by an all-powerful adversary that has full knowledge of
our algorithm. In particular, the loss sequences need not comply with any distributional assumptions.
On the other hand, in the stochastic formulation each action is associated with some mean value
that does not change throughout the game. The feedback from choosing an action is an i.i.d. noisy
observation of this action?s mean value.
The performance of the player is traditionally measured using the static regret, which compares
the total loss of the player with the total loss of the benchmark playing the best fixed action in
hindsight. A stronger measure of the player?s performance, sometimes referred to as dynamic regret2
(or just regret for brevity), compares the total loss of the player with this of the optimal benchmark,
playing the best possible sequence of actions. Notice that in the stochastic formulation both measures
coincide, assuming that the benchmark has access to the parameters defining the random process of
1
2
We sometimes use the terminology arm for an action throughout.
The dynamic regret is occasionally referred to as shifting regret or tracking regret in the literature.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
the losses but not to the random bits generating the loss sequences. In the adversarial formulation this
is clearly not the case, and it is not hard to show that attaining sublinear regret is impossible in general,
whereas obtaining sublinear static regret is possible indeed. This can perhaps explain why most of
the literature is concerned with optimizing the static regret rather than its dynamic counterpart.
Previous attempts to tackle the problem of regret minimization in the adversarial formulation mostly
took advantage of some niceness parameter of nature (that is, some non-adversarial behavior of the
loss sequences). This line of research becomes more and more popular, as full characterizations of the
regret turn out to be feasible with respect to specific niceness parameters. In this work we focus on a
broad family of such niceness parameters ?usually called variation type parameters? originated
from the work of [8] in the context of (dynamic) regret minimization. Essentially, we consider a
MAB setting in which the mean value of each action can vary over time in an adversarial manner,
and the feedback to the player is a noisy observation of that mean value. The variation is then defined
as the sum of distances between the vectors of mean values over consecutive rounds, or formally,
def
VT =
T
X
max |?t (i) ? ?t?1 (i)|,
t=2
i
(1)
where ?t (i) denotes the mean value of action i at round t. Despite the presentation of VT using the
maximum norm, any other norm will lead to similar qualitative formulations. Previous approaches to
the problem at hand relied on strong (and sometimes even unrealistic) assumptions on the variation
(we refer the reader to Section 1.3, in which related work is discussed in detail). The natural question
is whether it is possible to design an algorithm that does not require any assumptions on the variation,
yet can achieve o(T ) regret whenever VT = o(T ). In this paper we answer this question in the
affirmative and prove the following.
Theorem (Informal). Consider a MAB setting with two arms and time horizon T . Assume that at
each round t ? {1, . . . , T }, the random variables of obtainable losses correspond to a vector of
? T 0.771 + T 0.82 V 0.18 .
mean values ?t . Then, Algorithm 1 achieves a regret bound of O
T
Our techniques rely on statistical tests designed to identify changes in the environment on the one
hand, but exploit the best option observed so far in case there was no such significant environment
change. We elaborate on the key ideas behind our techniques in Section 1.2.
1.1
Model and Motivation
A player is faced with a sequential decision making task: In each round t ? {1, . . . , T } = [T ], the
player chooses an action it ? {1, . . . , k} = [k] and observes loss `t (it ) ? [0, 1]. We assume that
E [`t (i)] = ?t (i) for any i ? [k] and t ? [T ], where {?t (i)}Tt=1 are fixed beforehand by the adversary
(that is, the adversary is oblivious). For simplicity, we assume that {`t (i)}Tt=1 are also generated
beforehand. The goal of the player is to minimize the regret, which is henceforth defined as
RT =
T
X
?t (it ) ?
t=1
T
X
?t (i?t ),
t=1
i?t
where = arg mini?[k] {?t (i)}. A sequence of actions {it }Tt=1 has no-regret if RT = o(T ). It is
well-known that generating no-regret sequences in our setting is generally impossible, unless the
benchmark sequence is somehow limited (for example, in its total number of action switches) or
alternatively, some characterization of {?t (i)}Tt=1 is given (in our case, {?t (i)}Tt=1 are characterized
via the variation). While limiting the benchmark makes sense only when we have a strong reason to
believe that an action sequence from the limited class has satisfactory performance, characterizing
the environment is an approach that leads to guarantees of the following type:
If the environment is well-behaved (w.r.t. our characterization), then our performance is comparable with the optimal sequence of actions. If not, then no
algorithm is capable of obtaining sublinear regret without further assumptions on
the environment.
Obtaining algorithms with such guarantee is an important task in many real-world applications. For
example, an online forecaster must respond to time-related trends in her data, an investor seeks
to detect trading trends as quickly as possible, a salesman should adjust himself to the constantly
changing taste of his audience, and many other examples can be found. We believe that in many of
2
these examples the environment is often likely to change slowly, making guarantees of the type we
present highly desirable.
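As an illustration, the dynamic regret of a played action sequence is measured against the per-round best arm; a sketch under the notation above (with mu as in the previous snippet):

    import numpy as np

    def dynamic_regret(mu, actions):
        """mu: (T, k) matrix of mean losses; actions: length-T integer array."""
        T = mu.shape[0]
        return mu[np.arange(T), actions].sum() - mu.min(axis=1).sum()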
1.2
Our Techniques
An intermediate (noiseless) setting. We begin with an easier setting, in which the observable
losses are deterministic. That is, by choosing arm $i$ at round $t$, rather than observing the realization
of a random variable with mean value $\mu_t(i)$ we simply observe $\mu_t(i)$ itself. Note that $\{\mu_t\}_{t=1}^T$ are still
assumed to be generated adversarially. In this setting, the following intuitive solution can be shown
to work (for two arms): Pull each arm once and observe two values. Now, in each round pull the arm
with the smaller loss w.p. 1 ? o(1) and the other arm w.p. o(1), where the latter is decreasing with
time. As long as the mean values of the arms did not significantly shift compared to their original
values, continue. Once a significant shift is observed, reset all counters and start over. We note that
while the algorithm is simple, its analysis is not straightforward and contains some counterintuitive
ideas. In particular, we show that the true (unknown) variation can be replaced with a crude proxy
called the observed variation (to be later defined), while still maintaining mathematical tractability of
the problem.
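A sketch of this restart strategy (the fixed threshold and the particular exploration schedule below are illustrative choices of ours, not the tuned values from Appendix A):

    import random

    def noiseless_two_arm_strategy(losses, threshold=0.1, seed=0):
        """losses[t][i]: deterministic loss of arm i at round t. Exploit the better
        arm, explore with vanishing probability, and restart the epoch once an
        arm's loss shifts from its recorded value by more than `threshold`."""
        rng = random.Random(seed)
        T, total, t = len(losses), 0.0, 0
        while t + 1 < T:
            base = [losses[t][0], losses[t + 1][1]]   # pull each arm once
            total += base[0] + base[1]
            t += 2
            step = 0
            while t < T:
                step += 1
                best = 0 if base[0] <= base[1] else 1
                arm = best if rng.random() > 1.0 / (step + 1) else 1 - best
                loss = losses[t][arm]
                total += loss
                t += 1
                if abs(loss - base[arm]) > threshold:
                    break                             # significant shift: start over
        return total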
To see the importance of this proxy, let us first describe a different approach to the problem at hand
that in particular extends directly the approach of [8] who show that if an upper-bound on VT is
known in advance, then the optimal regret is attainable. Therefore, one might guess that having
an unbiased estimator for VT will eliminate the need in this prior knowledge. Obtaining such an
unbiased estimator is not hard (via importance sampling), but it turns out that it is not sufficient: the
values of the variation to be identified are simply too small in order to be accurately estimated. Here
comes into the picture the observed variation, which is loosely defined as the loss difference between
two successive pulls of the same arm. Clearly, the true variation is only larger, but as we show, it
cannot be much larger without us noticing it. We provide a complete analysis of the noiseless setting
in Appendix A. This analysis is not directly used for dealing with the noisy setting but acts as a warm
up and contains some of the key techniques used for it.
Back to our (noisy) setting. Here also we focus on the case of k = 2 arms. When the losses are
stochastic the same basic ideas apply but several new major issues come up. In particular, here as
well we present an algorithm that resets all counters and starts over once a significant change in the
environment is detected. The similarity however ends here, mainly because of the noisy feedback that
makes it hard to determine whether the changes we see are due to some environmental shift or due
to the stochastic nature of the problem. The straightforward way of overcoming this is to forcefully
divide the time into "bins" in which we continuously pull the same arm. By doing this, and averaging
the observed losses within a bin we can obtain feedback that is not as noisy. This meta-technique
raises two major issues: The first is, how long should these bins be? A long period would eliminate
the noise originating from the stochastic feedback but cripple our adaptive capabilities and make us
more vulnerable to changes in the environment. The second issue is, if there was a change in the
environment that is in some sense local to a single bin, how can we identify it? and assuming we did,
when should we tolerate it?
The algorithm we present overcomes the first issue by starting with an exploration phase, where both
arms are queried with equal probability. We advance to the next phase only once it is clear that the
average loss of one arm is greater than the other, and furthermore, we have a solid estimate of the
gap between them. In the next exploitation phase, we mimic the above algorithm for deterministic
feedback by pulling the arms in bins of length proportional to the (inverted squared) gap between the
arms. The techniques from above take care of the regret compared to a strategy that must be fixed
inside the bins, or alternatively, against the optimal strategy if we were somehow guaranteed that
there are no significant environment changes within bins. This leads us to the second issue, but first
consider the following example.
Example 1. During the exploration phase, we associated arm #1 with an expected loss of 0.5 and
arm #2 with an expected loss of 0.6. Now, consider a bin in which we pull arm #1. In the first half
of the bin the expected loss is 0.25 and in the second it is 0.75. The overall expected loss is 0.5,
hence without performing some test w.r.t. the pulls inside the bin we do not see any change in the
environment, and as far as we know we mimicked the optimal strategy. The optimal strategy however
can clearly do much better and we suffer a linear regret in this bin. Furthermore, the variation during
the bin is constant! The good news is that in this scenario a simple test would determine that the
outcome of the arm pulls inside the bin do not resemble those of an i.i.d. random variables, meaning
that the environment change can be detected.
Figure 1: The optimal policy of an adversary that minimizes variation while maximizing deviation.
Example 1 clearly demonstrates the necessity of a statistical test inside a bin. However, there are
cases in which the changes of the environment are unidentifiable and the regret suffered by any
algorithm will be linear, as can be seen in the following example.
Example 2. Assume that arm #1 has mean value of 0.5 for all t, and arm #2 has mean value of
1 with probability 0.5 and 0 otherwise. The feedback from pulling arm i at a specific round is a
Bernoulli random variable with the mean value of that arm. Clearly, there is no way to distinguish
between these arms, and thus any algorithm would suffer linear regret. The point, however, is that
the variation in this example is also linear, and thus linear regret is unavoidable in general.
Example 2 shows that if the adversary is willing to put enough effort (in terms of variation), then
linear regret is unavoidable. The intriguing question is whether the adversary can put some less
effort (that is, to invest less than linear variation) and still cause us to suffer linear regret, while not
providing us the ability to notice that the environment has changed. The crux of our analysis is the
design of two tests, one per phase (exploration or exploitation), each able to identify changes
whenever it is possible or to ensure they do not hurt the regret too much whenever it is not. This
building block, along with the "outer bin regret" analysis mentioned above, allows us to achieve our
result in this setting. The essence of our statistical tests is presented here, while formal statements
and proofs are deferred to Section 2.
Our statistical tests (informal presentation). Let $X_1, \dots, X_n \in [0, 1]$ be a sequence of realizations, such that each $X_i$ is generated from an arbitrary distribution with mean value $\mu_i$. Our task is
to determine whether it is likely that $\mu_i = \mu_0$ for all $i$, where $\mu_0$ is a given constant. In case there
is not enough evidence to reject this hypothesis, the test is required to bound the absolute deviation
of $\mu^n = \{\mu_i\}_{i=1}^n$ (henceforth denoted by $\|\mu^n\|_{ad}$) in terms of its total variation³ $\|\mu^n\|_{tv}$. Assume
for simplicity that $\bar{\mu}_{1:n} = \frac{1}{n}\sum_{i=1}^{n}\mu_i$ is close enough to $\mu_0$ (or even exactly equal to it), which
eliminates the need to check the deviation of the average from $\mu_0$. We are thus left with checking the
inner-sequence dynamics.
It is worthwhile to consider the problem from the adversary's point of view: The adversary has full
control of the values of $\{\mu_i\}_{i=1}^n$, and his task is to deviate as much as he can from the average without
providing us the ability to identify this deviation. Now, consider a partition of $[n]$ into consecutive
segments, such that $(\mu_i - \mu_0)$ has the same sign for any $i$ within a segment. Given this partition, it can
be shown that the optimal policy of an adversary that tries to minimize the total variation of $\{\mu_i\}_{i=1}^n$
while maximizing its absolute deviation, is to set $\mu_i$ to be equal within each segment. The length
of a segment $[a, b]$ is thus limited to be at most $1/|\bar{\mu}_{a:b} - \mu_0|^2$, or otherwise the deviation is notable
(this follows by standard concentration arguments). Figure 1 provides a visualization of this optimal
policy. Summing the absolute deviation over the segments and using Hölder's inequality ensures
that $\|\mu^n\|_{ad} \le n^{2/3}\|\mu^n\|_{tv}^{1/3}$, or otherwise there exists a segment in which the distance between the
realization average and $\mu_0$ is significantly large. Our test is thus the simple test that measures this
distance for every segment. Notice that the test runs in polynomial time; further optimization might
improve the polynomial degree, but is outside the scope of this paper.
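A sketch of this segment test (the threshold follows the concentration width of Definition 2.2 in Section 2; the exact constants and bookkeeping of the paper's Test 1 may differ):

    import numpy as np

    def segment_test(X, mu0, c=1.0, T=None):
        """Return True if every segment average of X stays within the concentration
        width of mu0 (weak stationarity not rejected), False otherwise."""
        n = len(X)
        T = T if T is not None else n
        S = np.concatenate(([0.0], np.cumsum(X)))
        for a in range(n):
            for b in range(a, n):
                length = b - a + 1
                seg_mean = (S[b + 1] - S[a]) / length
                width = np.sqrt(2.5 * c**2 * np.log(T) / length)
                if abs(seg_mean - mu0) > width:
                    return False      # some segment deviates notably
        return True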
The test presented above aims to bound the absolute deviation w.r.t. some given mean value. As such,
it is appropriate only for the exploitation phase of our algorithm, in which a solid estimation of each
arm's mean value is given. However, it turns out that bounding the absolute deviation with respect to
some unknown mean value can be done using similar ideas, yet is slightly more complicated.
³ We use standard notions of total variation and absolute deviation. That is, the total variation of a sequence
$\{\mu_i\}_{i=1}^n$ is defined as $\|\mu^n\|_{tv} = \sum_{i=2}^{n}|\mu_i - \mu_{i-1}|$, and its absolute deviation is $\|\mu^n\|_{ad} = \sum_{i=1}^{n}|\mu_i - \bar{\mu}_{1:n}|$,
where $\bar{\mu}_{1:n} = \frac{1}{n}\sum_{i=1}^{n}\mu_i$.
Alternative approaches. We point out that the approach of running a meta-bandit algorithm over
(logarithmically many) instances of the algorithm proposed by [8] will be very difficult to pursue. In this
approach, whenever an EXP3 instance is not chosen by the meta-bandit algorithm it is still forced to
play an arm chosen by a different EXP3 instance. We are not aware of an analysis of EXP3 nor of any other
algorithm equipped to handle such a harsh setting. Another idea that will be hard to pursue is tackling
the problem using a doubling trick. This idea is common when parameters needed for the execution
of an algorithm are unknown in advance, but can in fact be guessed and updated if necessary. In
our case, the variation is not observed due to the bandit feedback, and moreover, estimating it using
importance sampling will lead to estimators that are too crude to allow a doubling trick.
1.3 Related Work
The question of whether (and when) it is possible to obtain bounds on notions other than the static regret has
long been studied in a variety of settings including Online Convex Optimization (OCO), Bandit Convex
Optimization (BCO), Prediction with Expert Advice, and Multi-Armed bandits (MAB). Stronger
notions of regret include the dynamic regret (see for instance [17, 4]), the adaptive regret [11], the
strongly adaptive regret [5], and more. From now on, we focus on the dynamic regret only. Regardless
of the setting considered, it is not hard to construct a loss sequence such that obtaining sublinear
dynamic regret is impossible (in general). Thus, the problem of minimizing it is usually weakened in
one of the two following forms: (1) restricting the benchmark; and (2) characterizing the niceness of
the environment.
With respect to the first weakening form, [17] showed that in the OCO setting the dynamic regret
can be bounded in terms of $C_T = \sum_{t=2}^{T}\|a_t - a_{t-1}\|$, where $\{a_t\}_{t=1}^T$ is the benchmark sequence. In
particular, restricting the benchmark sequence with $C_T = 0$ gives the standard static regret result.
[6] suggested that this type of result is attainable in the BCO setting as well, but we are not familiar
with such a result. In the MAB setting, [1] defined the hardness of a benchmark sequence as the number
of its action switches, and bounded the dynamic regret in terms of this hardness. Here again, the
standard static regret bound is obtained if the hardness is restricted to 0. The concept of bounding the
dynamic regret in terms of the total number of action switches was studied by [14], in the setting of
Prediction with Expert Advice.
With respect to the second weakening form, one can find an immense amount of MAB literature that
uses stochastic assumptions to model the environment. In particular, [16] coined the term restless
bandits; a model in which the loss sequences change in time according to an arbitrary, yet known in
advance, stochastic process. To cope with the hard nature of this model, subsequent works offered
approximations, relaxations, and more detailed models [3, 7, 15, 2]. Perhaps the first attempt to
handle arbitrary loss sequences in the context of dynamic regret and MAB, appears in the work of [8].
In a setting identical to ours, the authors fully characterize the dynamic regret: $\Theta(T^{2/3} V_T^{1/3})$, if a
bound on $V_T$ is known in advance. We provide a high-level description of their approach.
Roughly speaking, their algorithm divides the time horizon into (equally-sized) blocks and applies
the EXP3 algorithm of [1] in each of them. This guarantees sublinear static regret w.r.t. the best fixed
action in the block. Now, since the number of blocks is set to be much larger than the value of $V_T$ (if
$V_T = o(T)$), it can be shown that in most blocks, the variation inside the block is $o(1)$ and the total
loss of the best fixed action (within a block) turns out to be not very far from the total loss of the best
sequence of actions. The size of the blocks (which is fixed and determined in advance as a function
of $T$ and $V_T$) is tuned accordingly to obtain the optimal rate in this case. The main shortcomings of
this algorithm are the reliance on prior knowledge of $V_T$, and the restarting procedure that does not
take the variation into account.
We also note the work of [13], in which the two forms of weakening are combined together to obtain
dynamic regret bounds that depend both on the complexity of the benchmark and on the niceness of
the environment. Another line of work that is close to ours (at least in spirit) aims to minimize the
static regret in terms of the variation (see for instance [9, 10]).
A word about existing statistical tests. There are actually many different statistical tests such as
z-test, t-test, and more, that aim to determine whether a sample data comes from a distribution with
a particular mean value. These tests however are not suitable for our setting since (1) they mostly
require assumptions on the data generation (e.g., Gaussianity), and (2) they lack our desired bound
on the total absolute deviation of the mean sequence in terms of its total variation. The latter is
especially important in light of Example 2, which demonstrates that a mean sequence can deviate
from its average without providing us any hint.
2
Competing with Optimal Sequences
Before presenting our algorithm and analysis we introduce some general notation and definitions. Let $X^n = \{X_i\}_{i=1}^n \subseteq [0, c]^n$ be a sequence of independent random variables, and denote $\mu_i = \mathbb{E}[X_i]$. For any $n_1, n_2 \in [n]$, where $n_1 \le n_2$, we denote by $\bar{X}_{n_1:n_2}$ the average of $X_{n_1}, \dots, X_{n_2}$, and by $\bar{X}^c_{n_1:n_2}$ the average of the other random variables. That is,
$$\bar{X}_{n_1:n_2} = \frac{1}{n_2 - n_1 + 1}\sum_{i=n_1}^{n_2} X_i \quad\text{and}\quad \bar{X}^c_{n_1:n_2} = \frac{1}{n - n_2 + n_1 - 1}\left(\sum_{i=1}^{n_1 - 1} X_i + \sum_{i=n_2 + 1}^{n} X_i\right).$$
We sometimes use the notation $\sum_{i \notin \{n_1,\dots,n_2\}}$ for the second sum when $n$ is implied from the context. The expected values of $\bar{X}_{n_1:n_2}$ and $\bar{X}^c_{n_1:n_2}$ are denoted by $\bar{\mu}_{n_1:n_2}$ and $\bar{\mu}^c_{n_1:n_2}$, respectively. We use two additional quantities defined w.r.t. $n_1, n_2$:
$$\rho_1(n_1, n_2) \stackrel{\text{def}}{=} \left(\frac{1}{n_2 - n_1 + 1}\right)^{1/2} \quad\text{and}\quad \rho_2(n_1, n_2) \stackrel{\text{def}}{=} \left(\frac{1}{n_2 - n_1 + 1} + \frac{1}{n - n_2 + n_1 - 1}\right)^{1/2}.$$
We slightly abuse notation and define $V_{n_1:n_2} \stackrel{\text{def}}{=} \sum_{i=n_1+1}^{n_2} |\mu_i - \mu_{i-1}|$ as the total variation of a mean sequence $\mu^n = \{\mu_i\}_{i=1}^n \subseteq [0, 1]^n$ over the interval $\{n_1, \dots, n_2\}$.
Definition 2.1. (weakly stationary, non-stationary) We say that $\mu^n = \{\mu_i\}_{i=1}^n \subseteq [0, 1]^n$ is $\lambda$-weakly stationary if $V_{1:n} \le \lambda$. We say that $\mu^n$ is $\lambda$-non-stationary if it is not $\lambda$-weakly stationary.[4]
Throughout the paper, we mostly use these definitions with $\lambda = 1/\sqrt{n}$. In this case we will shorthand the notation and simply say that a sequence is weakly stationary (or non-stationary). In the sequel, we somewhat abuse notation and use capital letters $(X_1, \dots, X_n)$ both for random variables and realizations. The specific use should be clear from the context, if not spelled out explicitly. Next, we define a notion of a concentrated sequence that depends on a parameter $T$. In what follows, $T$ will always be used as the time horizon.
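To make these definitions concrete, the following is a minimal numpy sketch (our own code and naming, not the authors'): it computes the interval averages, the normalizers $\rho_1, \rho_2$, and the total variation, using 0-based inclusive indices. These helpers are reused in the test sketches further below.

```python
import numpy as np

def total_variation(mu, n1=0, n2=None):
    """V_{n1:n2} = sum_{i=n1+1}^{n2} |mu_i - mu_{i-1}| (0-based, inclusive)."""
    n2 = len(mu) - 1 if n2 is None else n2
    return float(np.sum(np.abs(np.diff(np.asarray(mu)[n1:n2 + 1]))))

def interval_mean(x, n1, n2):
    """X-bar_{n1:n2}: the average of X_{n1}, ..., X_{n2}."""
    return float(np.mean(np.asarray(x)[n1:n2 + 1]))

def complement_mean(x, n1, n2):
    """X-bar^c_{n1:n2}: the average of the remaining variables."""
    x = np.asarray(x)
    return float(np.mean(np.concatenate([x[:n1], x[n2 + 1:]])))

def rho1(n1, n2):
    return (1.0 / (n2 - n1 + 1)) ** 0.5

def rho2(n, n1, n2):
    return (1.0 / (n2 - n1 + 1) + 1.0 / (n - n2 + n1 - 1)) ** 0.5

def is_weakly_stationary(mu, lam=None):
    """lambda defaults to 1/sqrt(n), matching the convention in the text."""
    lam = 1.0 / np.sqrt(len(mu)) if lam is None else lam
    return total_variation(mu) <= lam
```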
Definition 2.2. (concentrated, strongly concentrated) We say that a sequence $X^n = \{X_i\}_{i=1}^n \subseteq [0, c]^n$ is concentrated w.r.t. $\mu^n$ if for any $n_1, n_2 \in [n]$ it holds that:
$$(1)\quad \left|\bar{X}_{n_1:n_2} - \bar{\mu}_{n_1:n_2}\right| \le \left(2.5\, c^2 \log(T)\right)^{1/2} \rho_1(n_1, n_2).$$
$$(2)\quad \left|\bar{X}_{n_1:n_2} - \bar{X}^c_{n_1:n_2} - \bar{\mu}_{n_1:n_2} + \bar{\mu}^c_{n_1:n_2}\right| \le \left(2.5\, c^2 \log(T)\right)^{1/2} \rho_2(n_1, n_2).$$
We further say that $X^n$ is strongly concentrated w.r.t. $\mu^n$ if any successive sub-sequence $\{X_i\}_{i=n_1}^{n_2} \subseteq X^n$ is concentrated w.r.t. $\{\mu_i\}_{i=n_1}^{n_2}$.
Whenever the mean sequence is inferred from the context, we will simply say that $X^n$ is concentrated (or strongly concentrated). The parameters in the above definition are set so that standard concentration bounds lead to the statement that any sequence of independent random variables is strongly concentrated with high probability. The formal statement is given below and is proven in Appendix B.
Claim 2.3. Let $X^T = \{X_i\}_{i=1}^T \subseteq [0, c]^T$ be a sequence of independent random variables, such that $T \ge 2$ and $c > 0$. Then, $X^T$ is strongly concentrated with probability at least $1 - \frac{1}{T}$.
2.1 Statistical Tests for Identifying Non-Stationarity
TEST 1 (the offline test). The goal of the offline test is to determine whether a sequence of realizations $X^n$ is likely to be generated from a mean sequence $\mu^n$ that is close (in a sense) to some given value $\mu_0$. This will later be used to determine whether a series of pulls of the same arm (inside a single bin) in the exploitation phase exhibits the same behavior as those observed in the exploration phase. We would like to have a two-sided guarantee. If the means did not significantly shift, the algorithm must state that the sequence is weakly stationary. On the other hand, if the algorithm states that the sequence is weakly stationary, we require the absolute deviation of $\mu^n$ to be bounded in terms of its total variation. We provide an analysis of Test 1 in Appendix B.
Input: a sequence $X^n = \{X_i\}_{i=1}^n \subseteq [0, c]^n$, and a constant $\mu_0 \in [0, 1]$.
The test: for any two indices $n_1, n_2 \in [n]$ such that $n_1 < n_2$, check whether
$$\left|\bar{X}_{n_1:n_2} - \mu_0\right| \;\ge\; \left(\sqrt{2.5}\, c + 2\right) \log^{1/2}(T)\, \rho_1(n_1, n_2).$$
Output: non-stationary if such $n_1, n_2$ were found; weakly stationary otherwise.
TEST 1: (the offline test) The test aims to identify variation during the exploitation phase.
Input: a sequence $X^Q = \{X_i\}_{i=1}^Q \subseteq [0, c]^Q$, that is revealed gradually (one $X_i$ after the other).
The test: for $n = 2, 3, \dots, Q$:
(1) observe $X_n$ and set $X^n = \{X_i\}_{i=1}^n$.
(2) for any two indices $n_1, n_2 \in [n]$ such that $n_1 < n_2$, check whether
$$\left|\bar{X}_{n_1:n_2} - \bar{X}^c_{n_1:n_2}\right| \;\ge\; \left(\sqrt{2.5}\, c + 1\right) \log^{1/2}(T)\, \rho_2(n_1, n_2),$$
and terminate the loop if such $n_1, n_2$ were found.
Output: non-stationary if the loop was terminated before $n = Q$, weakly stationary otherwise.
TEST 2: (the online test) The test aims to identify variation during the exploration phase.
TEST 2 (the online test). The online test gets a sequence $X^Q$ in an online manner (one variable after the other), and has to stop whenever non-stationarity is exhibited (or the sequence ends). Here, the value of $Q$ is unknown to us beforehand, and might depend on the values of the sequence elements $X_i$. The rationale here is the following: in the exploration phase of the main algorithm we sample the arms uniformly until discovering a significant gap between their average losses. While doing so, we would like to make sure that the regret is not large due to environment changes within the exploration process. We require a similar two-sided guarantee as in the previous test, with an additional requirement informally ensuring that if we exit the block in the exploration phase, the bound on the absolute deviation still applies. We provide the formal analysis in Appendix B.
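Under the reconstruction of the two boxes above, an illustrative Python sketch of both tests follows; it reuses the helper functions from the earlier snippet, and the brute-force scan over all index pairs mirrors the definitions rather than aiming for efficiency (the online test rescans after every observation).

```python
import numpy as np

def test1_offline(x, mu0, c, log_T):
    """Offline test: scan all index pairs for a deviation from mu0."""
    n = len(x)
    thr = (np.sqrt(2.5) * c + 2.0) * np.sqrt(log_T)
    for n1 in range(n):
        for n2 in range(n1 + 1, n):
            if abs(interval_mean(x, n1, n2) - mu0) >= thr * rho1(n1, n2):
                return "non-stationary"
    return "weakly stationary"

def test2_online(stream, c, log_T):
    """Online test: consume observations one by one; stop on non-stationarity."""
    xs = []
    for x_new in stream:
        xs.append(x_new)
        n = len(xs)
        if n < 2:
            continue
        x = np.asarray(xs)
        thr = (np.sqrt(2.5) * c + 1.0) * np.sqrt(log_T)
        for n1 in range(n):
            for n2 in range(n1 + 1, n):
                if n - (n2 - n1 + 1) == 0:   # complement empty: skip this pair
                    continue
                gap = abs(interval_mean(x, n1, n2) - complement_mean(x, n1, n2))
                if gap >= thr * rho2(n, n1, n2):
                    return "non-stationary", n
    return "weakly stationary", len(xs)
```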
2.2 Algorithm and Analysis
Having this set of testing tools, we proceed to provide a non-formal description of our algorithm.
Basically, the algorithm divides the time horizon into blocks according to the variation it identifies.
The blocks are denoted by $\{B_j\}_{j=1}^N$, where $N$ is the total number of blocks generated by the algorithm. The rounds within block $B_j$ are split into an exploration and exploitation phase, henceforth denoted $E_{j,1}$ and $E_{j,2}$ respectively. Each exploitation phase is further divided into bins, where the size of the bins within a block is determined in the exploration phase and does not change throughout the block. The bins within block $B_j$ are denoted by $\{A_{j,a}\}_{a=1}^{N_j}$, where $N_j$ is the total number of bins in block $B_j$. Note that both $N$ and $N_j$ are random variables. We use $t(j, \tau)$ to denote the $\tau$-th round in the exploration phase of block $B_j$, and $t(j, a, \tau)$ to denote the $\tau$-th round of the $a$-th bin in the exploitation phase of block $B_j$. As before, notice that $t(j, \tau)$ might vary from one run of the algorithm to another, yet is uniquely defined per one run of the algorithm (and the same holds for $t(j, a, \tau)$). Our algorithm is formally given in Algorithm 1, and the working scheme is visually presented in Figure 2. We discuss the extension of the algorithm to k arms in Appendix D.
Theorem 2.4. Set $\alpha = \frac{1}{2}$ and $\delta = \frac{\sqrt{37} - 5}{2}$. Then, with probability of at least $1 - \frac{10}{T}$, the regret of Algorithm 1 is
$$R_T = \sum_{t=1}^{T} \ell_t(i_t) - \sum_{t=1}^{T} \ell_t(i_t^*) \le O\left(\log(T)\, T^{0.82}\, V_T^{0.18} + \log(T)\, T^{0.771}\right).$$
Proof sketch. Notice that the feedback we receive throughout the game is strongly concentrated with high probability, and thus it suffices to prove the theorem for this case. We analyze separately (a) blocks in which the algorithm did not reach the exploitation phase, and (b) blocks in which it did.
[4] We use stationarity-related terms to classify mean sequences. Our definition might not be consistent with stationarity-related definitions in the statistical literature, which are usually used to classify sequences of random variables based on higher moments or CDFs.
Figure 2: The time horizon is divided into blocks, where each block is split into an exploration phase and an
exploitation phase. The exploitation phase is further divided into bins.
Input: parameters $\delta$ and $\alpha$.
Algorithm: In each block $j = 1, 2, \dots$
(Exploration phase) In each round $\tau = 1, 2, \dots$
(1) Select action $i_{t(j,\tau)} \sim \text{Uni}\{1, 2\}$ and observe loss $\ell_{t(j,\tau)}(i_{t(j,\tau)})$.
(2) Set $X_{t(j,\tau)}(i) = \begin{cases} 2\, \ell_{t(j,\tau)}(i_{t(j,\tau)}) & \text{if } i = i_{t(j,\tau)} \\ 0 & \text{otherwise,} \end{cases}$
and add $X_{t(j,\tau)}(i)$ (separately, for $i \in \{1, 2\}$) as an input to TEST 2.
(3) If the test identifies non-stationarity (on either one of the actions), exit block. Otherwise, if
$$\Delta \stackrel{\text{def}}{=} \left|\bar{X}_{t(j,1):t(j,\tau)}(1) - \bar{X}_{t(j,1):t(j,\tau)}(2)\right| \ge 16\sqrt{2}\left(\sqrt{10} + 2\right)\log(T)\, \tau^{-\alpha/2},$$
move to the next phase with $\hat{\mu}_0(i) = \bar{X}_{t(j,1):t(j,\tau)}(i)$ for $i \in \{1, 2\}$.
(Exploitation phase) Play in bins, each of size $n = 4/\Delta^2$. During each bin $a = 1, 2, \dots$
(1) Select actions $i_{t(j,a,1)}, \dots, i_{t(j,a,n)} = \begin{cases} \arg\min_i \{\hat{\mu}_0(i)\} & \text{w.p. } 1 - a^{-\delta} \\ \text{Uni}\{1, 2\} & \text{otherwise,} \end{cases}$
and observe losses $\{\ell_{t(j,a,\tau)}(i_{t(j,a,\tau)})\}_{\tau=1}^{n}$.
(2) Run TEST 1 on $\{\ell_{t(j,a,\tau)}(i_{t(j,a,\tau)})\}_{\tau=1}^{n}$, and exit the block if it returned non-stationary.
Algorithm 1: An algorithm for the non-stationary multi-armed bandit problem.
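To illustrate the control flow, here is a schematic, runnable Python simulation of Algorithm 1 on a toy two-armed environment of our own invention. The exploration threshold uses a small demo constant in place of the paper's $16\sqrt{2}(\sqrt{10}+2)\log(T)$ factor (which would never trigger on a short horizon), and a crude mean-shift rule stands in for TEST 1; treat this as structure, not a faithful implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, alpha, delta = 10_000, 0.5, (np.sqrt(37) - 5) / 2

def loss(t, i):
    # toy drifting environment (our invention): arm means move slowly over time
    mu = 0.5 + 0.3 * np.sin(2 * np.pi * t / T + i)
    return float(np.clip(rng.normal(mu, 0.1), 0.0, 1.0))

t = 0
while t < T:
    # exploration phase: uniform play with importance-weighted loss estimates
    est = {1: [], 2: []}
    tau, gap, means = 0, 0.0, {1: 0.5, 2: 0.5}
    while t < T:
        tau += 1
        i = int(rng.integers(1, 3))
        ell = loss(t, i)
        t += 1
        for a in (1, 2):              # X_t(a) = 2 * loss when arm a was pulled
            est[a].append(2.0 * ell if a == i else 0.0)
        means = {a: float(np.mean(est[a])) for a in (1, 2)}
        gap = abs(means[1] - means[2])
        # demo threshold: far smaller than the paper's constant, so that phase
        # transitions actually occur on this toy horizon
        if tau >= 10 and gap >= 2.0 * tau ** (-alpha / 2):
            break
    if t >= T or gap == 0.0:
        break
    # exploitation phase: play in bins of size n = 4 / gap^2
    n = max(1, int(4.0 / gap ** 2))
    best = min((1, 2), key=lambda a: means[a])
    a_idx = 0
    while t < T:
        a_idx += 1
        explore = rng.random() < a_idx ** (-delta)
        arm = int(rng.integers(1, 3)) if explore else best
        bin_losses = [loss(s, arm) for s in range(t, min(t + n, T))]
        t += len(bin_losses)
        # crude stand-in for TEST 1: restart the block on a large mean shift
        if abs(float(np.mean(bin_losses)) - means[arm]) > 0.25:
            break
```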
Analysis of part (a). From TEST 2, we know that as long as the test does not identify non-stationarity in the exploration phase $E_1$, we can "trust" the feedback we observe as if we are in the stationary setting, i.e. standard stochastic MAB, up to an additive factor of $|E_1|^{2/3} V_{E_1}^{1/3}$ to the regret. This argument holds even if TEST 2 identified non-stationarity, by simply excluding the last round. Now, since our stopping condition of the exploration phase is roughly $\Delta \approx \tau^{-\alpha/2}$, we suffer an additional regret of $|E_1|^{1-\alpha/2}$ throughout the exploration phase. This gives an overall bound of $|E_1|^{2/3} V_{E_1}^{1/3} + |E_1|^{1-\alpha/2}$ for the regret (formally proven in Lemma C.4). The terms of the form $|E_1|^{1-\alpha/2}$ are problematic, as summing them may lead to an expression linear in $T$. To avoid this we use a lower bound on the variation $V_{E_1}$ guaranteed by the fact that TEST 2 caused the block to end during the exploration phase. This lower bound allows us to express $|E_1|^{1-\alpha/2}$ as $|E_1|^{1-\alpha/3} V_{E_1}^{\alpha/3}$, leading to a meaningful regret bound on the entire time horizon (as detailed in Lemma C.5).
Analysis of part (b). The regret suffered in the exploration phase is bounded by the same arguments as before, where the bound on $|E_1|^{1-\alpha/2}$ is replaced by $|E_1|^{1-\alpha/2} \le |B|^{1-\alpha/3} V_B^{\alpha/3}$, with $B$ being the set of block rounds. This bound is achieved via a lower bound on $V_B$, the variation in the block, guaranteed by the algorithm's behavior along with the fact that the block ended in the exploitation phase. For the regret in the exploitation phase, we first utilize the guarantees of TEST 1 to show that at the expense of an additive cost of $|E_2|^{2/3} V_{E_2}^{1/3}$ to the regret, we may assume that there is no change to the environment inside bins. From here on the analysis becomes very similar to that of the deterministic setting, as the noise corresponding to a bin is guaranteed to be lower than the gap $\Delta$ between the arms, and thus has no effect on the algorithm's performance. The final regret bound for blocks of type (b) comes from adding up the above mentioned bounds and is formally given in Lemma C.10.
References
[1] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48-77, 2002.
[2] Mohammad Gheshlaghi Azar, Alessandro Lazaric, and Emma Brunskill. Online stochastic optimization under correlated bandit feedback. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pages 1557-1565, 2014.
[3] Dimitris Bertsimas and José Niño-Mora. Restless bandits, linear programming relaxations, and a primal-dual index heuristic. Operations Research, 48(1):80-90, 2000.
[4] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363-396, 2002.
[5] Amit Daniely, Alon Gonen, and Shai Shalev-Shwartz. Strongly adaptive online learning. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 1405-1411, 2015.
[6] Abraham Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA, pages 385-394. SIAM, 2005.
[7] Sudipto Guha and Kamesh Munagala. Approximation algorithms for partial-information based stochastic control with markovian rewards. In FOCS, pages 483-493. IEEE Computer Society, 2007.
[8] Yonatan Gur, Assaf J. Zeevi, and Omar Besbes. Stochastic multi-armed-bandit problem with non-stationary rewards. In NIPS, pages 199-207, 2014.
[9] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: regret bounded by variation in costs. Machine Learning, 80(2-3):165-188, 2010.
[10] Elad Hazan and Satyen Kale. Better algorithms for benign bandits. Journal of Machine Learning Research, 12:1287-1311, 2011.
[11] Elad Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In ICML, volume 382 of ACM International Conference Proceeding Series, pages 393-400. ACM, 2009.
[12] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13-30, 1963.
[13] Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online optimization: Competing with dynamic comparators. In AISTATS, volume 38 of JMLR Workshop and Conference Proceedings, 2015.
[14] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212-261, 1994.
[15] Ronald Ortner, Daniil Ryabko, Peter Auer, and Rémi Munos. Regret bounds for restless markov bandits. Theor. Comput. Sci., 558:62-76, 2014.
[16] Peter Whittle. Restless bandits: Activity allocation in a changing world. Journal of Applied Probability, pages 287-298, 1988.
[17] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936. AAAI Press, 2003.
| 6341 |@word exploitation:14 polynomial:2 stronger:2 norm:2 open:1 willing:1 seek:1 forecaster:1 attainable:2 solid:2 moment:1 necessity:1 contains:2 series:2 ktv:3 tuned:1 ours:2 past:2 existing:1 com:2 yet:4 intriguing:1 must:3 tackling:1 ronald:1 subsequent:1 partition:2 additive:2 benign:1 designed:1 stationary:16 half:1 discovering:1 guess:1 warmuth:2 accordingly:1 manfred:2 caveat:1 characterization:3 provides:1 successive:2 mathematical:1 along:2 c2:2 qualitative:1 prove:2 shorthand:1 focs:1 wassily:1 assaf:1 inside:7 emma:1 introduce:1 manner:2 expected:5 hardness:3 roughly:2 indeed:1 nor:1 behavior:3 multi:7 decreasing:1 resolve:1 armed:7 equipped:1 becomes:2 spain:1 begin:1 moreover:1 estimating:1 bounded:6 notation:5 what:1 minimizes:1 affirmative:1 pursue:2 whatsoever:1 hindsight:1 ve1:4 nj:3 ended:1 guarantee:8 certainty:1 berkeley:1 every:1 act:1 ti:1 tackle:1 seshadhri:1 exactly:1 demonstrates:2 control:2 before:4 t1:1 local:1 limit:1 despite:1 kad:3 abuse:2 might:5 studied:3 weakened:1 limited:3 testing:1 practice:1 regret:61 block:31 kat:1 cn1:2 procedure:1 significantly:3 reject:1 word:1 get:1 cannot:1 close:3 put:2 context:5 impossible:4 zinkevich:1 deterministic:3 pn2:1 maximizing:2 straightforward:2 regardless:1 starting:1 kale:2 convex:4 simplicity:2 identifying:2 estimator:3 counterintuitive:1 pull:8 gur:2 his:2 mora:1 handle:2 notion:3 variation:34 traditionally:1 hurt:1 limiting:1 updated:1 pt:1 play:2 programming:2 olivier:1 us:1 hypothesis:1 trick:2 trend:2 element:1 distributional:1 observed:7 ensures:1 news:1 ryabko:1 counter:2 observes:2 mentioned:2 alessandro:1 environment:23 complexity:1 reward:2 dynamic:15 raise:1 depend:2 segment:7 weakly:8 ali:1 exit:3 forced:1 describe:1 shortcoming:1 detected:2 choosing:2 outcome:1 outside:1 shalev:1 heuristic:1 larger:3 elad:3 say:6 otherwise:8 lder:1 ability:2 satyen:2 noisy:6 final:1 online:11 sequence:46 advantage:1 took:1 reset:2 loop:2 realization:5 mixing:1 achieve:2 sudipto:1 intuitive:1 description:2 invest:1 requirement:1 generating:2 adam:1 spelled:1 alon:1 measured:2 strong:2 resemble:1 trading:1 come:4 stochastic:13 exploration:18 munagala:1 forcefully:1 bin:26 require:4 crux:1 suffices:1 mab:8 theor:1 extension:1 hold:3 considered:1 visually:1 scope:1 bj:6 zeevi:2 claim:1 major:2 vary:3 consecutive:2 achieves:1 estimation:1 tool:1 weighted:1 minimization:2 clearly:5 always:1 aim:5 shahin:1 rather:2 vn1:1 avoid:1 ej:2 kalai:1 focus:3 bernoulli:1 check:3 mainly:1 adversarial:7 brendan:1 sense:3 detect:1 stopping:1 eliminate:2 weakening:3 entire:1 her:1 bandit:21 originating:1 arg:2 issue:5 overall:2 dual:1 denoted:5 yahoo:2 voleon:2 special:1 equal:3 construct:1 karnin:1 once:4 having:2 sampling:2 aware:1 identical:1 adversarially:1 broad:1 icml:4 oco:2 comparators:1 mimic:1 hint:1 oblivious:1 ortner:1 familiar:1 replaced:2 phase:30 n1:28 karthik:1 attempt:2 stationarity:7 highly:1 adjust:1 deferred:1 light:1 behind:1 primal:1 immense:1 beforehand:3 capable:1 partial:1 necessary:1 unless:1 loosely:1 divide:3 littlestone:1 desired:1 instance:5 classify:2 markovian:1 yoav:1 tractability:1 cost:2 deviation:15 daniely:1 guha:1 too:3 daniil:1 characterize:1 answer:1 chooses:2 combined:1 international:1 siam:2 sequel:1 jadbabaie:1 jos:1 together:1 continuously:1 quickly:1 squared:1 aaai:1 cesa:1 again:1 unavoidable:2 slowly:2 hoeffding:1 henceforth:2 expert:3 american:1 leading:1 account:1 potential:1 attaining:1 whittle:1 gaussianity:1 inc:1 notable:1 explicitly:1 caused:1 depends:1 bco:2 later:2 view:1 try:1 
hazan:3 observing:1 doing:2 analyze:1 start:2 relied:1 option:1 investor:1 capability:1 complicated:1 vt0:1 shai:1 minimize:3 ni:11 who:1 correspond:1 identify:7 guessed:1 accurately:1 basically:1 explain:1 reach:1 nonstationarity:2 whenever:6 definition:7 infinitesimal:1 against:1 e2:1 associated:2 proof:2 mi:1 static:8 stop:1 popular:1 knowledge:6 obtainable:1 actually:1 back:1 auer:2 appears:1 tolerate:1 higher:1 formulation:7 unidentifiable:1 done:1 strongly:8 furthermore:3 just:1 until:1 hand:5 working:1 sketch:1 trust:1 lack:1 somehow:2 aj:1 perhaps:2 behaved:1 pulling:2 believe:2 building:1 requiring:1 true:2 unbiased:2 counterpart:1 concept:1 hence:1 satisfactory:1 round:15 game:3 during:6 uniquely:1 essence:1 generalized:1 presenting:1 tt:7 complete:1 mohammad:1 hereon:1 meaning:1 common:1 volume:4 discussed:1 he:1 association:1 refer:1 significant:5 multiarmed:1 queried:1 access:1 similarity:1 add:1 posterior:1 own:1 showed:1 optimizing:1 irrelevant:1 inf:1 scenario:1 occasionally:1 yonatan:1 meta:3 inequality:2 continue:1 vt:11 inverted:1 seen:1 additional:4 somewhat:2 greater:1 care:1 determine:6 period:1 full:3 desirable:1 anava:1 characterized:1 long:5 divided:3 equally:1 e1:10 ensuring:1 prediction:2 basic:1 essentially:1 himself:1 noiseless:2 sometimes:4 oren:2 achieved:1 audience:1 receive:1 whereas:1 separately:2 interval:1 suffered:3 eliminates:1 exhibited:1 sure:1 ascent:1 spirit:1 sridharan:1 extracting:1 intermediate:1 besbes:2 enough:3 concerned:1 niceness:5 switch:3 variety:1 revealed:1 split:2 affect:1 nonstochastic:1 competing:3 identified:2 inner:1 regarding:1 idea:6 shift:4 whether:10 expression:1 effort:2 suffer:4 peter:3 returned:1 york:1 cause:2 speaking:1 action:23 proceed:1 useful:1 generally:1 clear:2 detailed:2 informally:1 amount:2 extensively:1 concentrated:11 schapire:1 problematic:1 notice:5 sign:1 estimated:1 lazaric:1 per:2 express:1 group:1 key:3 terminology:1 reliance:1 capital:1 changing:3 utilize:1 v1:1 bertsimas:1 relaxation:2 sum:3 run:4 noticing:1 powerful:1 respond:1 letter:1 soda:1 uncertainty:1 extends:1 throughout:6 family:1 reader:1 decision:2 appendix:5 vb:2 comparable:1 bit:1 def:5 bound:23 ct:2 guaranteed:5 distinguish:1 activity:1 bousquet:1 argument:3 performing:1 martin:1 according:2 smaller:1 slightly:2 making:3 restricted:1 gradually:1 sided:2 visualization:1 turn:4 discus:1 needed:1 know:2 end:3 informal:2 salesman:1 available:1 operation:1 apply:1 observe:6 worthwhile:1 appropriate:1 mimicked:1 alternative:1 original:1 denotes:1 running:1 ensure:1 include:1 maintaining:1 exploit:1 coined:1 especially:1 amit:1 society:1 implied:1 move:1 question:5 quantity:1 strategy:4 concentration:2 rt:3 exhibit:1 gradient:3 distance:3 sci:1 majority:1 outer:1 nx:1 omar:1 reason:1 zkarnin:1 assuming:2 length:2 index:3 mini:2 providing:3 minimizing:1 nc:4 difficult:1 mostly:3 robert:1 statement:3 expense:1 design:2 policy:3 unknown:4 bianchi:1 upper:2 observation:2 markov:1 benchmark:9 descent:1 kamesh:1 defining:1 excluding:1 arbitrary:3 overcoming:1 inferred:1 required:1 nick:1 barcelona:1 nip:2 zohar:1 able:1 adversary:9 suggested:1 usually:4 below:1 gheshlaghi:1 dimitris:1 cripple:1 gonen:1 max:1 including:1 shifting:1 unrealistic:1 suitable:1 natural:1 rely:1 warm:1 advanced:1 arm:30 scheme:1 improve:1 picture:1 identifies:3 harsh:1 log1:2 flaxman:1 xq:2 faced:1 prior:5 comply:1 literature:4 taste:1 checking:1 deviate:2 nicol:1 freund:1 loss:29 fully:1 rationale:1 sublinear:6 interesting:1 generation:1 proportional:1 allocation:1 proven:2 
degree:1 offered:1 sufficient:1 proxy:2 xp:4 consistent:1 playing:2 changed:1 last:1 offline:3 formal:4 allow:1 characterizing:2 munos:1 absolute:11 tauman:1 feedback:13 xn:12 world:2 author:1 adaptive:4 coincide:1 far:3 cope:1 restarting:1 observable:1 uni:2 overcomes:1 dealing:1 summing:2 assumed:2 xi:14 shwartz:1 alternatively:2 why:1 nature:4 terminate:1 ca:1 obtaining:6 adheres:1 posted:1 did:5 aistats:1 main:3 terminated:1 motivation:1 noise:2 bounding:2 azar:1 n2:33 abraham:1 repeated:1 x1:2 advice:2 referred:2 elaborate:1 ny:1 sub:1 brunskill:1 originated:1 comput:3 mcmahan:1 crude:2 jmlr:3 theorem:3 specific:3 xt:5 rakhlin:1 evidence:1 exists:1 workshop:3 restricting:2 sequential:2 adding:1 importance:3 execution:1 horizon:6 gap:4 easier:1 restless:4 shahrampour:1 logarithmic:1 simply:5 likely:4 tracking:2 doubling:2 vulnerable:1 applies:2 environmental:1 constantly:1 acm:2 cdf:1 viewed:1 presentation:2 goal:2 sized:1 feasible:2 change:18 hard:6 determined:2 uniformly:1 averaging:1 lemma:3 total:18 called:3 player:10 est:10 meaningful:1 formally:4 select:2 latter:2 arises:1 brevity:1 alexander:1 correlated:1 |
5,905 | 6,342 | Learning Infinite RBMs with Frank-Wolfe
Wei Ping*    Qiang Liu†    Alexander Ihler*
*Computer Science, UC Irvine    †Computer Science, Dartmouth College
{wping,ihler}@ics.uci.edu    qiang.liu@dartmouth.edu
Abstract
In this work, we propose an infinite restricted Boltzmann machine (RBM), whose
maximum likelihood estimation (MLE) corresponds to a constrained convex optimization. We consider the Frank-Wolfe algorithm to solve the program, which
provides a sparse solution that can be interpreted as inserting a hidden unit at each
iteration, so that the optimization process takes the form of a sequence of finite
models of increasing complexity. As a side benefit, this can be used to easily and
efficiently identify an appropriate number of hidden units during the optimization.
The resulting model can also be used as an initialization for typical state-of-the-art
RBM training algorithms such as contrastive divergence, leading to models with
consistently higher test likelihood than random initialization.
1 Introduction
Restricted Boltzmann machines (RBMs) are two-layer latent variable models that use a layer of
hidden units h to model the distribution of visible units v [Smolensky, 1986, Hinton, 2002]. RBMs
have been widely used to capture complex distributions in numerous application domains, including
image modeling [Krizhevsky et al., 2010], human motion capture [Taylor et al., 2006] and collaborative filtering [Salakhutdinov et al., 2007], and are also widely used as building blocks for deep
generative models, such as deep belief networks [Hinton et al., 2006] and deep Boltzmann machines
[Salakhutdinov and Hinton, 2009]. Due to the intractability of the likelihood function, RBMs are
usually learned using the contrastive divergence (CD) algorithm [Hinton, 2002, Tieleman, 2008],
which approximates the gradient of the likelihood using a Gibbs sampler.
One practical problem when using an RBM is that we need to decide the size of the hidden layer (number of hidden units) before performing learning, and it can be challenging to decide what is the optimal size. One simple heuristic is to search for the "best" number of hidden units using cross
validation or testing likelihood within a pre-defined candidate set. Unfortunately, this is extremely
time consuming; it involves running a full training algorithm (e.g., CD) for each possible size, and
thus we can only search over a relatively small set of sizes using this approach.
In addition, because the log-likelihood of the RBM is highly non-convex, its performance is sensitive
to the initialization of the learning algorithm. Although random initializations (to relatively small
values) are routinely used in practice with algorithms like CD, it would be valuable to explore more
robust algorithms that are less sensitive to the initialization, as well as smarter initialization strategies
to obtain better results.
In this work, we propose a fast, greedy algorithm for training RBMs by inserting one hidden unit at
each iteration. Our algorithm provides an efficient way to determine the size of the hidden layer in
an adaptive fashion, and can also be used as an initialization for a full CD-like learning algorithm.
Our method is based on constructing a convex relaxation of the RBM that is parameterized by a
distribution over the weights of the hidden units, for which the training problem can be framed as
a convex functional optimization and solved using an efficient Frank-Wolfe algorithm [Frank and
Wolfe, 1956, Jaggi, 2013] that effectively adds one hidden unit at each iteration by solving a relatively
fast inner loop optimization.
Related Work Our contributions connect to a number of different themes of existing work within
machine learning and optimization. Here we give a brief discussion of prior related work.
There have been a number of works on convex relaxations of latent variable models in functional
space, which are related to the gradient boosting method [Friedman, 2001]. In supervised learning,
Bengio et al. [2005] propose a convex neural network in which the number of hidden units is
unbounded and can be learned, and Bach [2014] analyzes the appealing theoretical properties of
such a model. For clustering problems, several works on convex functional relaxation have also
been proposed [e.g., Nowozin and Bakir, 2008, Bradley and Bagnell, 2009]. Other forms of convex
relaxation have also been developed for two layer latent variable models [e.g., Aslan et al., 2013].
There has also been considerable work on extending directed/hierarchical models into ?infinite?
models such that the dimensionality of the latent space can be automatically inferred during learning.
Most of these methods are Bayesian nonparametric models, and a brief overview can be found
in Orbanz and Teh [2011]. A few directions have been explored for undirected models, particularly
RBMs. Welling et al. [2002] propose a boosting algorithm in the feature space of the model; a new
feature is added into the RBM at each boosting iteration, instead of a new hidden unit. Nair and
Hinton [2010] conceptually tie the weights of an infinite number of binary hidden units, and connect
these sigmoid units with noisy rectified linear units (ReLUs). Recently, C?t? and Larochelle [2015]
extend an ordered RBM model with infinite number of hidden units, and Nalisnick and Ravi [2015]
use the same technique for word embedding. The ordered RBM is sensitive to the ordering of its
hidden units and can be viewed as a mixture of RBMs. In contrast, our model incorporates regular
RBMs as a special case, and enables model selection for standard RBMs.
The Frank-Wolfe method [Frank and Wolfe, 1956] (a.k.a. conditional gradient) is a classical algorithm
to solve constrained convex optimization. It has recently received much attention because it unifies a
large variety of sparse greedy methods [Jaggi, 2013], including boosting algorithms [e.g., Beygelzimer
et al., 2015], learning with dual structured SVM [Lacoste-Julien et al., 2013] and marginal inference
using MAP in graphical models [e.g., Belanger et al., 2013, Krishnan et al., 2015].
Verbeek et al. [2003] proposed a greedy learning algorithm for Gaussian mixture models, which
inserts a new component at each step and resembles our algorithm in its procedure. As one benefit,
it provides a better initialization for EM than random initialization. Likas et al. [2003] investigate
greedy initialization in k-means clustering.
2 Background
A restricted Boltzmann machine (RBM) is an undirected graphical model that defines a joint distribution over the vectors of visible units $v \in \{0, 1\}^{|v| \times 1}$ and hidden units $h \in \{0, 1\}^{|h| \times 1}$,
$$p(v, h \mid \theta) = \frac{1}{Z(\theta)} \exp\left(v^\top W h + b^\top v\right); \qquad Z(\theta) = \sum_{v} \sum_{h} \exp\left(v^\top W h + b^\top v\right), \tag{1}$$
where $|v|$ and $|h|$ are the dimensions of $v$ and $h$ respectively, and $\theta := \{W, b\}$ are the model parameters including the pairwise interaction term $W \in \mathbb{R}^{|v| \times |h|}$ and the bias term $b \in \mathbb{R}^{|v| \times 1}$ for the visible units. Here we drop the bias term for the hidden units $h$, since it can be achieved by introducing a dummy visible unit whose value is always one. The partition function $Z(\theta)$ serves to normalize the probability to sum to one, and is typically intractable to calculate exactly.
Because RBMs have a bipartite structure, the conditional distributions $p(v|h; \theta)$ and $p(h|v; \theta)$ are fully factorized and can be calculated in closed form,
$$p(h|v, \theta) = \prod_{i=1}^{|h|} p(h_i|v), \quad\text{with}\quad p(h_i = 1|v) = \sigma\left(v^\top W_{\cdot i}\right),$$
$$p(v|h, \theta) = \prod_{j=1}^{|v|} p(v_j|h), \quad\text{with}\quad p(v_j = 1|h) = \sigma\left(W_{j \cdot}\, h + b_j\right), \tag{2}$$
where $\sigma(u) = 1/(1 + \exp(-u))$ is the logistic function, and $W_{\cdot i}$ and $W_{j \cdot}$ are the $i$-th column and $j$-th row of $W$ respectively. Eq. (2) allows us to derive an efficient blocked Gibbs sampler that iteratively alternates between drawing $v$ and $h$.
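A minimal numpy sketch of this blocked Gibbs sampler (ours, for illustration; the dimensions and random initialization below are arbitrary):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def gibbs_step(v, W, b, rng):
    """One alternation v -> h -> v using the factorized conditionals of Eq. (2)."""
    p_h = sigmoid(v @ W)                  # p(h_i = 1 | v) = sigma(v^T W_.i)
    h = (rng.random(p_h.shape) < p_h).astype(np.float64)
    p_v = sigmoid(h @ W.T + b)            # p(v_j = 1 | h) = sigma(W_j. h + b_j)
    v_new = (rng.random(p_v.shape) < p_v).astype(np.float64)
    return v_new, h

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(784, 100))   # |v| = 784, |h| = 100 (arbitrary)
b = np.zeros(784)
v = rng.integers(0, 2, size=784).astype(np.float64)
for _ in range(100):
    v, h = gibbs_step(v, W, b, rng)
```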
The marginal log-likelihood of the RBM is
$$\log p(v \mid \theta) = \sum_{i=1}^{|h|} \log\left(1 + \exp(w_i^\top v)\right) + b^\top v - \log Z(\theta), \tag{3}$$
where $w_i := W_{\cdot i}$ is the $i$-th column of $W$ and corresponds to the weights connected to the $i$-th hidden unit. Because we assume each hidden unit $h_i$ takes values in $\{0, 1\}$, we get the softplus function $\log(1 + \exp(w_i^\top v))$ when we marginalize $h_i$. This form shows that the (marginal) free energy of the RBM is a sum of a linear term $b^\top v$ and a set of softplus functions with different weights $w_i$; this provides a foundation for our development.
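In code, this free-energy form is one line; the sketch below (reusing numpy and the $W$, $b$, $v$ from the previous snippet) evaluates $\log p(v \mid \theta) + \log Z(\theta)$, i.e. the unnormalized marginal log-likelihood, with a numerically stable softplus:

```python
def unnormalized_log_p(v, W, b):
    # sum_i log(1 + exp(w_i^T v)) + b^T v; np.logaddexp(0, x) = log(1 + e^x)
    return float(np.sum(np.logaddexp(0.0, v @ W)) + b @ v)
```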
Given a dataset $\{v^n\}_{n=1}^N$, the gradient of the log-likelihood for each data point $v^n$ is
$$\frac{\partial \log p(v^n|\theta)}{\partial W} = \mathbb{E}_{p(h|v^n;\theta)}\left[v^n h^\top\right] - \mathbb{E}_{p(v,h|\theta)}\left[v h^\top\right] = v^n (\mu^n)^\top - \mathbb{E}_{p(v,h|\theta)}\left[v h^\top\right], \tag{4}$$
where $\mu^n = \sigma(W^\top v^n)$ and the logistic function $\sigma$ is applied in an element-wise manner. The positive part of the gradient can be calculated exactly, since the conditional distribution $p(h|v^n)$ is fully factorized. The negative part arises from the derivatives of the log-partition function and is intractable. Stochastic optimization algorithms, such as CD [Hinton, 2002] and persistent CD [Tieleman, 2008], are popular methods to approximate the intractable expectation using Gibbs sampling.
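A sketch of the resulting CD-$k$ update (a standard construction, not code from the paper): the positive term uses the exact conditional of Eq. (2), while the negative term is approximated with $k$ steps of blocked Gibbs sampling started at the data; persistent CD would instead keep the chain state across updates.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def cd_k_gradient(V, W, b, k, rng):
    """V: N x |v| binary data batch; returns approximate ascent directions (dW, db)."""
    N = V.shape[0]
    pos_W = V.T @ sigmoid(V @ W) / N       # exact term: v^n times E[h | v^n]
    Vk = V.copy()
    for _ in range(k):                      # k blocked Gibbs steps for the model term
        H = (rng.random((N, W.shape[1])) < sigmoid(Vk @ W)).astype(float)
        Vk = (rng.random(Vk.shape) < sigmoid(H @ W.T + b)).astype(float)
    neg_W = Vk.T @ sigmoid(Vk @ W) / N
    return pos_W - neg_W, (V - Vk).mean(axis=0)
```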
3 RBM with Infinite Hidden Units
In this section, we first generalize the RBM model defined in Eq. (3) to a model with an infinite
number of hidden units, which can also be viewed as a convex relaxation of the RBM in functional
space. Then, we describe the learning algorithm.
3.1 Model Definition
Our general model is motivated by Eq. (3), in which the first term can be treated as an empirical
average of the softplus function log(1 + exp(w> v)) under an empirical distribution over the weights
{wi }. To extend this, we define a general distribution q(w) over the weight w, and replace the
empirical averaging with the expectation under q(w); this gives the following generalization of an
RBM with an infinite (possibly uncountable) number of hidden units,
$$\log p(v \mid q, \theta) = \alpha\, \mathbb{E}_{q(w)}\left[\log(1 + \exp(w^\top v))\right] + b^\top v - \log Z(q, \theta), \tag{5}$$
$$Z(q, \theta) = \sum_{v} \exp\left(\alpha\, \mathbb{E}_{q(w)}\left[\log(1 + \exp(w^\top v))\right] + b^\top v\right),$$
where $\theta := \{b, \alpha\}$ and $\alpha > 0$ is a temperature parameter which controls the "effective number" of hidden units in the model, and $\mathbb{E}_{q(w)}[f(w)] := \int_w q(w) f(w)\, dw$. Note that $q(w)$ is assumed to be properly normalized, i.e., $\int_w q(w)\, dw = 1$. Intuitively, (5) defines a semi-parametric model whose log probability is a sum of a linear bias term parameterized by $b$, and a nonlinear term parameterized by the weight distribution $q(w)$ and $\alpha$, which controls the magnitude of the nonlinear term. This model can be regarded as a convex relaxation of the regular RBM, as shown in the following result.
Proposition 3.1. The model in Eq. (5) includes the standard RBM (3) as a special case by constraining $q(w) = \frac{1}{|h|}\sum_{i=1}^{|h|} \mathbb{I}(w = w_i)$ and $\alpha = |h|$. Moreover, the log-likelihood of the model is concave w.r.t. the function $q(w)$, $\alpha$ and $b$ respectively, and is jointly concave with $q(w)$ and $b$.
We should point out that the parameter $\alpha$ plays a special role in this model: we reduce to the standard RBM only when $\alpha$ equals the number $|h|$ of particles in $q(w) = \frac{1}{|h|}\sum_{i=1}^{|h|} \mathbb{I}(w = w_i)$, and would otherwise get a fractional RBM. The fractional RBM leads to a more challenging inference problem than a standard RBM, since the standard Gibbs sampler is no longer directly applicable. We discuss this point further in Section 3.3.
Given a dataset $\{v^n\}_{n=1}^N$, we learn the parameters $q$ and $\alpha$ using a penalized maximum likelihood estimator (MLE) that involves a convex functional optimization:
$$\arg\max_{q \in \mathcal{M},\, \alpha}\; L(q, \alpha) \;\equiv\; \frac{1}{N}\sum_{n=1}^{N} \log p(v^n \mid q, \alpha) - \frac{\lambda}{2}\, \mathbb{E}_{q(w)}\left[\|w\|^2\right], \tag{6}$$
where $\mathcal{M}$ is the set of valid distributions and we introduce a functional $L_2$ norm regularization $\mathbb{E}_{q(w)}[\|w\|^2]$ to penalize the likelihood for large values of $w$. Alternatively, we could equivalently optimize the likelihood on $\mathcal{M}_C = \{q \mid q(w) \ge 0 \text{ and } \int_{\|w\|_2 \le C} q(w) = 1\}$, which restricts the probability mass to a 2-norm ball $\|w\|_2 \le C$.
3.2 Learning Infinite RBMs with Frank-Wolfe
It is challenging to directly solve the optimization in Eq. (6) by standard gradient descent methods,
because it involves optimizing the function q(w) with infinite dimensions. Instead, we propose to
solve it using the Frank-Wolfe algorithm [Jaggi, 2013], which is projection-free and provides a sparse
solution.
Assume we already have $q_t$ at iteration $t$; then Frank-Wolfe finds $q_{t+1}$ by maximizing the linearization of the objective function:
$$q_{t+1} \leftarrow (1 - \gamma_{t+1})\, q_t + \gamma_{t+1}\, r_{t+1}, \quad\text{where}\quad r_{t+1} \leftarrow \arg\max_{q \in \mathcal{M}} \langle q,\, \nabla_q L(q_t, \alpha_t)\rangle, \tag{7}$$
where $\gamma_{t+1} \in [0, 1]$ is a step size parameter, and the convex combination step guarantees the new $q_{t+1}$ remains a distribution after the update. A typical step-size is $\gamma_t = 1/t$, in which case we have $q_t(w) = \frac{1}{t}\sum_{s=1}^{t} r_s(w)$, that is, $q_t$ equals the average of all the earlier solutions obtained by the linear program.
To apply Frank-Wolfe to solve our problem, we need to calculate the functional gradient $\nabla_q L(q_t, \alpha_t)$ in Eq. (7). We can show that (see details in Appendix),
$$\nabla_q L(q_t, \alpha_t) = -\frac{\lambda}{2}\|w\|^2 + \alpha_t\left(\frac{1}{N}\sum_{n=1}^{N} \log(1 + \exp(w^\top v^n)) - \sum_{v} p(v \mid q_t, \alpha_t)\, \log(1 + \exp(w^\top v))\right),$$
where $p(v \mid q_t, \alpha_t)$ is the distribution parametrized by the weight density $q_t(w)$ and parameter $\alpha_t$,
$$p(v \mid q_t, \alpha_t) = \frac{\exp\left(\alpha_t\, \mathbb{E}_{q_t(w)}\left[\log(1 + \exp(w^\top v))\right] + b_t^\top v\right)}{Z(q_t, \alpha_t)}. \tag{8}$$
The (functional) linear program in Eq. (7) is equivalent to an optimization over the weight vector $w$:
$$\max_{q \in \mathcal{M}} \langle q,\, \nabla_q L(q_t, \alpha_t)\rangle = \max_{q \in \mathcal{M}} \mathbb{E}_{q(w)}\left[\nabla_q L(q_t, \alpha_t)\right]$$
$$= -\min_{w}\left\{\frac{\lambda}{2}\|w\|^2 + \sum_{v} p(v \mid q_t, \alpha_t)\, \log(1 + \exp(w^\top v)) - \frac{1}{N}\sum_{n=1}^{N} \log(1 + \exp(w^\top v^n))\right\}. \tag{9}$$
The gradient of the objective (9) is
$$\nabla_w = \lambda w + \mathbb{E}_{p(v \mid q_t, \alpha_t)}\left[\sigma(w^\top v) \cdot v\right] - \frac{1}{N}\sum_{n=1}^{N} \sigma(w^\top v^n) \cdot v^n,$$
where the expectation over $p(v \mid q_t, \alpha_t)$ can be intractable to calculate, and one may use stochastic optimization and draw samples using MCMC. Note that the second two terms in the gradient enforce an intuitive moment matching condition: the optimal $w$ introduces a set of "importance weights" $\sigma(w^\top v)$ that adjust the empirical data and the previous model, such that their moments match with each other.
Now, suppose $w_t^\star$ is the optimum of Eq. (9) at iteration $t$; the item $r_t(w)$ we added can be shown to be the delta over $w_t^\star$, that is, $r_t(w) = \mathbb{I}(w = w_t^\star)$; in addition, we have $q_t(w) = \frac{1}{t}\sum_{s=1}^{t} \mathbb{I}(w = w_s^\star)$ when the step size is taken to be $\gamma_t = 1/t$. Therefore, this Frank-Wolfe update can be naturally interpreted as greedily inserting a hidden unit into the current model $p(v \mid q_t, \alpha_t)$. In particular, if we update the temperature parameter as $\alpha_t \leftarrow t$, according to Proposition 3.1, we can directly transform our model $p(v \mid q_t, \alpha_t)$ to a regular RBM after each Frank-Wolfe step, which enables convenient blocked Gibbs sampling for inference.
Compared with the (regularized) MLE of the standard RBM (e.g. in Eq. (4)), the optimization in Eq. (9) has the following nice properties: (1) The current model $p(v \mid q_t, \alpha_t)$ does not depend on
Algorithm 1 Frank-Wolfe Learning Algorithm
Input: training data $\{v^n\}_{n=1}^N$; step size $\eta$; regularization $\lambda$.
Output: sparse solution $q^\star(w)$, and $\theta^\star$
Initialize $q_0(w) = \mathbb{I}(w = w_0)$ at random $w_0$; $b_0 = 0$; $\alpha_0 = 1$;
for $t = 1 : T$ [or, stopping criterion] do
  Draw samples $\{v^s\}_{s=1}^S$ from $p(v \mid q_{t-1}, \alpha_{t-1})$;
  $w_t^\star = \arg\min_w \left\{\frac{\lambda}{2}\|w\|^2 + \frac{1}{S}\sum_{s=1}^{S} \log(1 + \exp(w^\top v^s)) - \frac{1}{N}\sum_{n=1}^{N} \log(1 + \exp(w^\top v^n))\right\}$;
  Update $q_t(w) \leftarrow (1 - \frac{1}{t}) \cdot q_{t-1}(w) + \frac{1}{t} \cdot \mathbb{I}(w = w_t^\star)$;
  Update $\alpha_t \leftarrow t$ (optional: gradient descent);
  Set $b_t = b_{t-1}$;
  repeat
    Draw a mini-batch of samples $\{v^m\}_{m=1}^M$ from $p(v \mid q_t, \alpha_t)$
    Update $b_t \leftarrow b_t + \eta \cdot \left(\frac{1}{N}\sum_{n=1}^{N} v^n - \frac{1}{M}\sum_{m=1}^{M} v^m\right)$
  until convergence
end for
Return $q^\star(w) = q_t(w)$; $\theta^\star = \{b_t, \alpha_t\}$;
$w$, which means we can draw enough samples from $p(v \mid q_t, \alpha_t)$ at each iteration $t$, and reuse them during the optimization of $w$. (2) The objective function in Eq. (9) can be evaluated explicitly given a set of samples, and hence efficient off-the-shelf optimization tools such as L-BFGS can be used to solve the optimization very efficiently. (3) Each iteration of our method involves many fewer parameters (only the weights for a single hidden unit, a $|v| \times 1$ vector instead of the full $|v| \times |h|$ weight matrix, are updated), and hence defines a series of easier problems that can be less sensitive
to initialization. We note that a similar greedy learning strategy has been successfully applied for
learning mixture models [Verbeek et al., 2003], in which one greedily inserts a component at each
step, and that this approach can provide better initialization for EM optimization than using multiple
random initializations.
Once we obtain $q_{t+1}$, we can update the bias parameter $b_t$ by gradient descent,
$$\nabla_b L(q_{t+1}, \alpha_t) = \frac{1}{N}\sum_{n=1}^{N} v^n - \sum_{v} p(v \mid q_{t+1}, \alpha_t)\, v. \tag{10}$$
One can further optimize $\alpha_t$ by gradient descent,[1] but we find simply updating $\alpha_t \leftarrow t$ is more efficient and works well in practice. We summarize our Frank-Wolfe learning algorithm in Algorithm 1.
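To make the inner step concrete, here is a minimal sketch (function and variable names are ours) of the $w$-optimization in Eq. (9): the model expectation is replaced by an average over a fixed set of samples, and the resulting smooth objective is handed to off-the-shelf L-BFGS, as suggested in the text.

```python
import numpy as np
from scipy.optimize import minimize

def softplus(x):
    return np.logaddexp(0.0, x)    # stable log(1 + exp(x))

def inner_step(V_data, V_model, lam):
    """Solve Eq. (9) for the new hidden unit's weights w_t*.

    V_data: N x |v| training batch; V_model: S x |v| samples from the
    current model p(v | q_t, alpha_t); lam: the L2 penalty lambda.
    """
    def objective(w):
        return (0.5 * lam * w @ w
                + softplus(V_model @ w).mean()
                - softplus(V_data @ w).mean())

    def grad(w):
        s_m = 1.0 / (1.0 + np.exp(-(V_model @ w)))
        s_d = 1.0 / (1.0 + np.exp(-(V_data @ w)))
        return (lam * w
                + V_model.T @ s_m / V_model.shape[0]
                - V_data.T @ s_d / V_data.shape[0])

    w0 = np.zeros(V_data.shape[1])
    return minimize(objective, w0, jac=grad, method="L-BFGS-B").x
```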
Adding hidden units to an RBM. Besides initializing $q(w)$ to be a delta function at some random $w_0$ and learning the model from scratch, one can also adapt Algorithm 1 to incrementally add hidden units into an existing RBM in Eq. (3) (e.g., one that has been learned by CD). According to Proposition 3.1, one can simply initialize $q_t(w) = \frac{1}{|h|}\sum_{i=1}^{|h|} \mathbb{I}(w = w_i)$, $\alpha_t = |h|$, and continue the Frank-Wolfe iterations at $t = |h| + 1$.
Removing hidden units. Since the hidden units are added in a greedy manner, one may want to remove an old hidden unit during the Frank-Wolfe learning, provided it scores poorly with respect to our objective Eq. (9) after more hidden units have been added. A variant of Frank-Wolfe with away-steps [Guélat and Marcotte, 1986] fits this requirement and can be directly applied. As shown by [Clarkson, 2010], it can improve the sparsity of the final solution (i.e., fewer hidden units in the learned model).
3.3 MCMC Inference for Fractional RBMs
As we point out in Section 3.1, we need to take $\alpha$ equal to the number of particles in $q(w)$ (that is, $\alpha_t \leftarrow t$ in Algorithm 1) in order to have our model reduce to the standard RBM. If $\alpha$ takes a more general real value, we obtain a more general fractional RBM model, for which inference is more challenging because the standard block Gibbs sampler is not directly applicable. In practice, we find that setting $\alpha_t \leftarrow t$ to correspond to a regular RBM seems to give the best performance, but for completeness, we discuss the fractional RBM in more detail in this section, and propose a Metropolis-Hastings algorithm to draw samples from the fractional RBM. We believe that this fractional RBM framework provides an avenue for further improvements in future work.
[1] See Appendix for the definition of $\nabla_\alpha L(q_t, \alpha_t)$.
To frame the problem, let us assume $\alpha\, q(w) = \sum_i c_i \cdot \mathbb{I}(w = w_i)$, where $c_i$ is a general real number; the corresponding model is
$$\log p(v \mid q, \alpha) = \sum_i c_i \log\left(1 + \exp(w_i^\top v)\right) + b^\top v - \log Z(q, \alpha), \tag{11}$$
which differs from the standard RBM in (3) because each softplus function is multiplied by $c_i$.
Nevertheless, one may push the $c_i$ into the softplus function, and obtain a standard RBM that forms an approximation of (11):
$$\log \tilde{p}(v \mid q, \alpha) = \sum_i \log\left(1 + \exp(c_i \cdot w_i^\top v)\right) + b^\top v - \log \tilde{Z}(q, \alpha). \tag{12}$$
This approximation can be justified by considering the special case when the magnitude of the weights $w$ is very large, so that the softplus function essentially reduces to a ReLU function, that is, $\log(1 + \exp(w_i^\top v)) \approx \max(0, w_i^\top v)$. In this case, (11) and (12) become equivalent because $c_i \max(0, x) = \max(0, c_i x)$. More concretely, we can guarantee the following bound:
Proposition 3.2. For any $0 < c_i \le 1$, we have
$$\frac{1}{2^{1-c_i}}\left(1 + \exp(c_i \cdot w_i^\top v)\right) \le \left(1 + \exp(w_i^\top v)\right)^{c_i} \le 1 + \exp(c_i \cdot w_i^\top v).$$
The proof can be found in the Appendix. Note that we apply the bound when $c_i > 1$ by splitting $c_i$ into the sum of its integer part and fractional remainder, and applying the bound to the fractional part.
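As a quick numeric sanity check of this bound (our addition, not part of the paper), one can sample random activations $u = w_i^\top v$ and fractional coefficients and verify both inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(0.0, 5.0, size=10_000)       # stand-in for w_i^T v
c = rng.uniform(1e-3, 1.0, size=u.shape)    # fractional coefficients c_i
lower = (1 + np.exp(c * u)) / 2 ** (1 - c)
mid = (1 + np.exp(u)) ** c
upper = 1 + np.exp(c * u)
assert np.all(lower <= mid + 1e-9) and np.all(mid <= upper + 1e-9)
```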
Therefore, the fractional RBM (11) can be well approximated by the standard RBM (12), and this can be leveraged to design an inference algorithm for (11). As one example, we can use the Gibbs update of (12) as a proposal for a Metropolis-Hastings update for (11). To be specific, given a configuration $v$, we perform a Gibbs update in the RBM $\tilde{p}(v \mid q, \alpha)$ to get $v'$, and accept it with probability $\min(1, A(v \to v'))$, where
$$A(v \to v') = \frac{p(v')\, \widetilde{T}(v' \to v)}{p(v)\, \widetilde{T}(v \to v')},$$
and $\widetilde{T}(v \to v')$ is the Gibbs transition of the RBM $\tilde{p}(v \mid q, \alpha)$. Because the acceptance probability of a Gibbs sampler equals one, we have $\frac{\tilde{p}(v)\, \widetilde{T}(v \to v')}{\tilde{p}(v')\, \widetilde{T}(v' \to v)} = 1$. This gives
$$A(v \to v') = \frac{p(v')\, \tilde{p}(v)}{p(v)\, \tilde{p}(v')} = \frac{\prod_i \left(1 + \exp(w_i^\top v')\right)^{c_i} \cdot \prod_i \left(1 + \exp(c_i \cdot w_i^\top v)\right)}{\prod_i \left(1 + \exp(w_i^\top v)\right)^{c_i} \cdot \prod_i \left(1 + \exp(c_i \cdot w_i^\top v')\right)}.$$
4 Experiments
In this section, we test the performance of our Frank-Wolfe (FW) learning algorithm on two datasets:
MNIST [LeCun et al., 1998] and Caltech101 Silhouettes [Marlin et al., 2010]. The MNIST handwritten digits database contains 60,000 images in the training set and 10,000 test set images, where each image $v^n$ includes $28 \times 28$ pixels and is associated with a digit label $y^n$. We binarize the grayscale images by thresholding the pixels at 127, and randomly select 10,000 images from training as the validation set. The Caltech101 Silhouettes dataset [Marlin et al., 2010] has 8,671 images with $28 \times 28$ binary pixels, where each image represents an object silhouette and has a class label (overall 101 classes). The dataset is divided into three subsets: 4,100 examples for training, 2,264 for validation and 2,307 for testing.
[Figure 1: two panels, (a) MNIST and (b) Caltech101 Silhouettes, plotting average test log-likelihood against the number of hidden units for FW, CD (rand init.), and CD (FW init.).]
Figure 1: Average test log-likelihood on the two datasets as we increase the number of hidden
units. We can see that FW can correctly identify an appropriate hidden layer size with high test
log-likelihood (marked by the green dashed line). In addition, CD initialized by FW gives higher test
likelihood than random initialization for the same number of hidden units. Best viewed in color.
Training algorithms We train RBMs with the CD-10 algorithm.[2] A fixed learning rate is selected from the set {0.05, 0.02, 0.01, 0.005} using the validation set, and the mini-batch size is selected from the set {10, 20, 50, 100, 200}. We use 200 epochs for training on MNIST and 400 epochs on Caltech101. Early stopping is applied by monitoring the difference of average log-likelihood between training and validation data, so that the intractable log-partition function is cancelled [Hinton, 2010]. We train RBMs with {20, 50, 100, 200, 300, 400, 500, 600, 700} hidden units. We incrementally train an RBM model with the Frank-Wolfe (FW) Algorithm 1. A fixed step size $\eta$ is selected from the set {0.05, 0.02, 0.01, 0.005} using the validation data, and a regularization strength $\lambda$ is selected from the set {1, 0.5, 0.1, 0.05, 0.01}. We set $T = 700$ in Algorithm 1, and use the same early stopping criterion as CD. We randomly initialize the CD algorithm 5 times and select the best one on the validation set; meanwhile, we also initialize CD by the model learned from Frank-Wolfe.
Test likelihood To evaluate the test likelihood of the learned models, we estimate the partition function using annealed importance sampling (AIS) [Salakhutdinov and Murray, 2008]. The temperature
parameter is selected following the standard guidance: first 500 temperatures spaced uniformly from
0 to 0.5, and 4,000 spaced uniformly from 0.5 to 0.9, and 10,000 spaced uniformly from 0.9 to 1.0;
this gives a total of 14,500 intermediate distributions. We summarize the averaged test log-likelihood
of MNIST and Caltech101 Silhouettes in Figure 1, where we report the result averaged over 500 AIS
runs in all experiments, with the error bars indicating the 3 standard deviations of the estimations.
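For reference, here is a simplified AIS sketch (our construction, not the exact setup above): it anneals the RBM weights from zero along a uniform grid of temperatures, whereas the experiments use the three-segment schedule with 14,500 intermediate distributions following Salakhutdinov and Murray [2008].

```python
import numpy as np

def ais_log_z(W, b, n_chains=50, n_steps=1000, rng=None):
    """Estimate log Z of an RBM by annealing the weights from 0 to 1."""
    if rng is None:
        rng = np.random.default_rng(0)
    nv, nh = W.shape

    def log_f(V, beta):
        # unnormalized log prob of each row of V at inverse temperature beta
        return np.sum(np.logaddexp(0.0, beta * (V @ W)), axis=1) + beta * (V @ b)

    log_z0 = (nv + nh) * np.log(2.0)       # base model (beta = 0): all weights zero
    V = rng.integers(0, 2, size=(n_chains, nv)).astype(float)  # exact base samples
    log_w = np.zeros(n_chains)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += log_f(V, b1) - log_f(V, b0)
        P_h = 1.0 / (1.0 + np.exp(-b1 * (V @ W)))   # one Gibbs sweep at beta = b1
        H = (rng.random(P_h.shape) < P_h).astype(float)
        P_v = 1.0 / (1.0 + np.exp(-b1 * (H @ W.T + b)))
        V = (rng.random(P_v.shape) < P_v).astype(float)
    m = log_w.max()                         # log Z ~ log Z0 + log mean exp(log_w)
    return float(log_z0 + m + np.log(np.mean(np.exp(log_w - m))))
```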
We evaluate the test likelihood of the model in FW after adding every 20 hidden units. We perform
early stopping when the gap of average log-likelihood between training and validation data largely
increases. As shown in Figure 1, this procedure selects 460 hidden units on MNIST (as indicated by
the green dashed lines), and 550 hidden units on Caltech101; purely for illustration purposes, we
continue FW in the experiment until reaching T = 700 hidden units. We can see that the identified
number of hidden units roughly corresponds to the maximum of the test log-likelihood of all the
three algorithms, suggesting that FW can identify the appropriate number of hidden units during the
optimization.
We also use the model learned by FW as an initialization for CD (the blue lines in Figure 2), and
find it consistently performs better than the best result of CD with 5 random initializations. In our
implementation, the running time of the FW procedure is at most twice that of CD for the same number of hidden units. Therefore, FW-initialized CD provides a practical strategy for learning RBMs: it requires approximately three times the computation time of a single run of CD, while simultaneously identifying the proper number of hidden units and obtaining better test likelihood.
[2] CD-k refers to using a k-step Gibbs sampler to approximate the gradient of the log-partition function.
[Figure 2: two panels, (a) MNIST and (b) Caltech101 Silhouettes, plotting test error (%) against the number of hidden units for FW, CD (rand init.), and CD (FW init.).]
Figure 2: Classification error when using the learned hidden representations as features.
Classification The performance of our method is further evaluated using discriminative image classification tasks. We take the hidden units' activation vectors $\mathbb{E}_{p(h|v^n)}[h]$ generated by the three algorithms in Figure 1 and use them as the features in a multi-class logistic regression on the class labels $y^n$ in MNIST and Caltech101. From Figure 2, we find that our basic FW tends to be worse than the fully trained CD (best of 5 random initializations) when only small numbers of hidden units are added, but outperforms CD when more hidden units (about 450 in both cases) are added. Meanwhile, the CD initialized by FW outperforms CD using the best of 5 random initializations.
Conclusion
In this work, we propose a convex relaxation of the restricted Boltzmann machine with an infinite
number of hidden units, whose MLE corresponds to a constrained convex program in a function
space. We solve the program using Frank-Wolfe, which provides a sparse greedy solution that can be
interpreted as inserting a single hidden unit at each iteration. Our new method allows us to easily
identify the appropriate number of hidden units during the progress of learning, and can provide an
advanced initialization strategy for other state-of-the-art training methods such as CD to achieve
higher test likelihood than random initialization.
Acknowledgements
This work is sponsored in part by NSF grants IIS-1254071 and CCF-1331915. It is also funded
in part by the United States Air Force under Contract No. FA8750-14-C-0011 under the DARPA
PPAML program.
Appendix
Derivation of gradients The functional gradient of $L(q, \alpha)$ w.r.t. the density function $q(w)$ is
$$\nabla_q L(q, \alpha) = -\frac{\lambda}{2}\|w\|^2 + \alpha\left(\frac{1}{N}\sum_{n=1}^{N}\log(1 + \exp(w^\top v^n))\right) - \alpha\, \frac{\sum_{v} \exp\left(\alpha\, \mathbb{E}_{q(w)}[\log(1 + \exp(w^\top v))] + b^\top v\right)\log(1 + \exp(w^\top v))}{Z(q, b, \alpha)}$$
$$= -\frac{\lambda}{2}\|w\|^2 + \alpha\left(\frac{1}{N}\sum_{n=1}^{N}\log(1 + \exp(w^\top v^n)) - \sum_{v} p(v \mid q, \alpha)\, \log(1 + \exp(w^\top v))\right).$$
The gradient of $L(q, \alpha)$ w.r.t. the temperature parameter $\alpha$ is
$$\nabla_\alpha L(q, \alpha) = \frac{1}{N}\sum_{n=1}^{N}\mathbb{E}_{q(w)}\left[\log(1 + \exp(w^\top v^n))\right] - \sum_{v} p(v \mid q, \alpha)\, \mathbb{E}_{q(w)}\left[\log(1 + \exp(w^\top v))\right].$$
Proof of Proposition 3.2
Proof. For any $0 < c \le 1$, we have the following classical inequalities,
$$\frac{1}{2}\sum_k x_k^c \le \left(\frac{1}{2}\sum_k x_k\right)^{c} \quad\text{and}\quad \sum_k x_k \le \left(\sum_k x_k^c\right)^{1/c}.$$
Let $x_1 = 1$ and $x_2 = \exp(w_i^\top v)$; the proposition is a direct result of the above two inequalities.
References
Ö. Aslan, H. Cheng, X. Zhang, and D. Schuurmans. Convex two-layer modeling. In NIPS, 2013.
F. Bach. Breaking the curse of dimensionality with convex neural networks. arXiv:1412.8690, 2014.
D. Belanger, D. Sheldon, and A. McCallum. Marginal inference in MRFs using Frank-Wolfe. In NIPS Workshop
on Greedy Optimization, Frank-Wolfe and Friends, 2013.
Y. Bengio, N. L. Roux, P. Vincent, O. Delalleau, and P. Marcotte. Convex neural networks. In NIPS, 2005.
A. Beygelzimer, E. Hazan, S. Kale, and H. Luo. Online gradient boosting. In NIPS, 2015.
D. M. Bradley and J. A. Bagnell. Convex coding. In UAI, 2009.
K. L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Transactions on
Algorithms, 2010.
M.-A. Côté and H. Larochelle. An infinite restricted Boltzmann machine. Neural Computation, 2015.
M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 1956.
J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 2001.
J. Guélat and P. Marcotte. Some comments on Wolfe's "away step". Mathematical Programming, 1986.
G. Hinton. A practical guide to training restricted Boltzmann machines. UTML TR, 2010.
G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation,
2006.
G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 2002.
M. Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In ICML, 2013.
R. G. Krishnan, S. Lacoste-Julien, and D. Sontag. Barrier Frank-Wolfe for marginal inference. In NIPS, 2015.
A. Krizhevsky, G. E. Hinton, et al. Factored 3-way restricted Boltzmann machines for modeling natural images.
In AISTATS, 2010.
S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for
structural SVMs. In ICML, 2013.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 1998.
A. Likas, N. Vlassis, and J. J. Verbeek. The global k-means clustering algorithm. Pattern recognition, 2003.
B. M. Marlin, K. Swersky, B. Chen, and N. D. Freitas. Inductive principles for restricted Boltzmann machine
learning. In AISTATS, 2010.
V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
E. Nalisnick and S. Ravi. Infinite dimensional word embeddings. arXiv:1511.05392, 2015.
S. Nowozin and G. Bakir. A decoupled approach to exemplar-based unsupervised learning. In ICML, 2008.
P. Orbanz and Y. W. Teh. Bayesian nonparametric models. In Encyclopedia of Machine Learning. 2011.
R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, 2009.
R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In ICML, 2008.
R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In ICML,
2007.
P. Smolensky. Information processing in dynamical systems: Foundations of harmony theory. Technical report,
DTIC Document, 1986.
G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In NIPS,
2006.
T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML,
2008.
J. J. Verbeek, N. Vlassis, and B. Kröse. Efficient greedy learning of Gaussian mixture models. Neural
Computation, 2003.
M. Welling, R. S. Zemel, and G. E. Hinton. Self supervised boosting. In NIPS, 2002.
| 6342 |@word norm:2 seems:1 r:1 contrastive:3 tr:1 moment:2 liu:1 series:1 configuration:1 contains:1 united:1 document:2 fa8750:1 outperforms:2 existing:2 bradley:2 current:2 freitas:1 beygelzimer:2 luo:1 activation:1 visible:4 partition:5 enables:2 utml:1 remove:1 drop:1 sponsored:1 update:10 generative:1 greedy:11 fewer:2 item:1 selected:5 xk:3 mccallum:1 provides:8 boosting:7 completeness:1 zhang:1 unbounded:1 mathematical:1 direct:1 become:1 persistent:1 introduce:1 manner:2 pairwise:1 roughly:1 multi:1 salakhutdinov:6 automatically:1 curse:1 considering:1 increasing:1 spain:1 xx:1 moreover:1 provided:1 factorized:2 mass:1 what:1 interpreted:3 developed:1 marlin:3 guarantee:2 quantitative:1 every:1 concave:2 tie:1 exactly:2 control:2 unit:58 grant:1 before:1 positive:1 tends:1 approximately:1 twice:1 initialization:20 resembles:1 challenging:4 averaged:2 directed:1 practical:3 lecun:2 testing:2 practice:3 block:3 differs:1 digit:2 procedure:3 empirical:4 projection:2 matching:1 pre:1 word:2 regular:4 convenient:1 refers:1 get:3 marginalize:1 selection:1 optimize:2 equivalent:2 map:1 maximizing:1 annealed:1 kale:1 attention:1 convex:20 roux:1 splitting:1 identifying:1 factored:1 estimator:1 regarded:1 dw:2 embedding:1 coordinate:1 updated:1 annals:1 pt:2 play:1 suppose:1 programming:2 wolfe:29 element:1 approximated:1 particularly:1 updating:1 recognition:2 database:1 ep:5 role:1 solved:1 capture:2 initializing:1 calculate:3 revisiting:1 wj:2 connected:1 ordering:1 valuable:1 complexity:1 trained:1 depend:1 solving:1 purely:1 bipartite:1 gu:2 easily:2 joint:1 darpa:1 routinely:1 ppaml:1 derivation:1 train:3 fast:3 describe:1 effective:1 zemel:1 whose:4 heuristic:1 widely:2 solve:7 delalleau:1 drawing:1 otherwise:1 statistic:1 jointly:1 noisy:1 transform:1 final:1 online:1 sequence:1 net:1 propose:7 interaction:1 product:1 remainder:1 argminw:1 inserting:4 uci:1 loop:1 achieve:1 roweis:1 intuitive:1 normalize:1 optimum:1 extending:1 p:1 requirement:1 object:1 derive:1 friend:1 exemplar:1 qt:35 b0:1 received:1 progress:1 eq:21 c:1 involves:4 larochelle:2 direction:1 stochastic:2 human:2 generalization:1 proposition:6 insert:2 ic:1 exp:36 bj:1 early:3 purpose:1 estimation:2 applicable:2 harmony:1 label:3 sensitive:4 successfully:1 tool:1 gaussian:2 always:1 reaching:1 pn:2 shelf:1 naval:1 properly:1 consistently:2 improvement:1 likelihood:27 contrast:1 greedily:2 inference:8 mrfs:1 stopping:4 typically:1 bt:6 accept:1 hidden:55 selects:1 pixel:3 arg:2 dual:1 overall:1 classification:3 development:1 constrained:3 art:2 special:4 uc:1 marginal:5 equal:4 initialize:4 once:1 sampling:3 qiang:1 represents:1 icml:7 unsupervised:1 future:1 report:2 few:1 randomly:2 simultaneously:1 divergence:3 n1:2 friedman:2 acceptance:1 highly:1 investigate:1 mnih:1 adjust:1 introduces:1 mixture:4 decoupled:1 taylor:2 old:1 initialized:3 guidance:1 theoretical:1 column:2 modeling:4 earlier:1 introducing:1 deviation:1 subset:1 krizhevsky:2 osindero:1 connect:2 density:2 contract:1 off:1 leveraged:1 possibly:1 worse:1 expert:1 derivative:1 leading:1 return:1 suggesting:1 bfgs:1 coding:1 includes:2 coresets:1 explicitly:1 closed:1 hazan:1 relus:1 collaborative:2 contribution:1 air:1 largely:1 efficiently:2 correspond:1 identify:4 spaced:3 conceptually:1 generalize:1 bayesian:2 unifies:1 handwritten:1 vincent:1 mc:1 monitoring:1 rectified:2 ping:1 minist:1 definition:2 rbms:15 energy:1 naturally:1 proof:3 ihler:2 rbm:38 associated:1 irvine:1 dataset:4 popular:1 color:1 fractional:10 bakir:2 dimensionality:2 
higher:3 supervised:2 wei:1 rand:5 evaluated:2 until:2 belanger:2 hastings:2 nonlinear:2 incrementally:2 defines:3 logistic:3 indicated:1 believe:1 building:1 normalized:1 ccf:1 inductive:1 regularization:3 hence:2 q0:1 iteratively:1 qliu:1 during:6 self:1 criterion:2 performs:1 motion:2 temperature:5 image:10 wise:1 recently:2 sigmoid:1 functional:9 overview:1 extend:2 approximates:1 blocked:2 gibbs:11 ai:2 framed:1 pm:1 particle:2 funded:1 longer:1 add:2 nalisnick:2 jaggi:5 orbanz:2 optimizing:1 inequality:2 binary:3 continue:2 analyzes:1 determine:1 dashed:2 semi:1 ii:1 full:3 multiple:1 reduces:1 likas:2 technical:1 match:1 eqt:1 adapt:1 cross:1 bach:2 divided:1 mle:4 verbeek:4 variant:1 regression:1 basic:1 essentially:1 expectation:3 arxiv:2 iteration:10 smarter:1 achieved:1 penalize:1 justified:1 addition:3 background:1 want:1 proposal:1 comment:1 undirected:2 incorporates:1 marcotte:3 integer:1 structural:1 constraining:1 bengio:3 enough:1 intermediate:1 krishnan:2 variety:1 embeddings:1 fit:1 relu:1 dartmouth:2 identified:1 inner:1 reduce:2 avenue:1 haffner:1 motivated:1 reuse:1 clarkson:2 sontag:1 deep:6 se:1 nonparametric:2 encyclopedia:1 svms:1 restricts:1 nsf:1 delta:2 dummy:1 correctly:1 blue:1 nevertheless:1 ravi:2 lacoste:3 relaxation:7 sum:4 run:2 parameterized:3 swersky:1 decide:2 vn:3 draw:5 appendix:4 layer:7 hi:4 bound:3 cheng:1 quadratic:1 strength:1 x2:1 sheldon:1 extremely:1 min:2 performing:1 relatively:3 structured:1 according:2 alternate:1 ball:1 combination:1 em:2 wi:4 appealing:1 metropolis:2 s1:1 intuitively:1 restricted:11 taken:1 remains:1 discus:2 serf:1 end:1 multiplied:1 apply:3 quarterly:1 hierarchical:1 away:2 appropriate:4 enforce:1 cancelled:1 batch:2 schmidt:1 uncountable:1 running:2 clustering:3 lat:2 graphical:2 murray:2 classical:2 objective:4 added:6 already:1 xck:1 strategy:4 parametric:1 rt:4 bagnell:2 gradient:20 parametrized:1 w0:3 discriminant:1 binarize:1 besides:1 mini:2 illustration:1 minimizing:1 equivalently:1 unfortunately:1 frank:28 negative:1 design:1 implementation:1 proper:1 boltzmann:13 perform:2 teh:3 datasets:2 finite:1 descent:4 optional:1 logistics:1 hinton:16 vlassis:2 frame:1 inferred:1 learned:8 barcelona:1 nip:8 bar:1 usually:1 pattern:1 dynamical:1 smolensky:2 sparsity:1 summarize:2 program:6 including:3 max:5 green:2 belief:3 treated:1 force:1 regularized:1 natural:1 advanced:1 pletscher:1 improve:2 brief:2 numerous:1 julien:3 vh:2 prior:1 nice:1 l2:1 epoch:2 acknowledgement:1 fully:3 aslan:2 filtering:2 validation:8 foundation:2 thresholding:1 principle:1 intractability:1 nowozin:2 cd:31 row:1 penalized:1 caltech101:8 repeat:1 free:3 side:1 bias:4 guide:1 barrier:1 sparse:7 benefit:2 dimension:2 calculated:2 valid:1 transition:1 concretely:1 adaptive:1 avg:2 welling:2 transaction:1 approximate:2 silhouette:6 global:1 uai:1 assumed:1 consuming:1 alternatively:1 grayscale:1 search:2 latent:5 learn:1 robust:1 init:8 obtaining:1 schuurmans:1 bottou:1 complex:1 meanwhile:2 constructing:1 domain:1 vj:2 aistats:3 x1:1 fashion:1 theme:1 candidate:1 pe:3 breaking:1 removing:1 bad:1 specific:1 explored:1 svm:1 intractable:5 workshop:1 mnist:7 adding:2 effectively:1 importance:2 ci:19 kr:1 magnitude:2 linearization:1 te:5 push:1 dtic:1 gap:1 easier:1 chen:1 simply:2 explore:1 ordered:2 corresponds:4 tieleman:3 acm:1 nair:2 conditional:3 viewed:3 marked:1 replace:1 considerable:1 fw:20 infinite:13 typical:2 uniformly:3 sampler:6 averaging:1 wt:1 total:1 indicating:1 select:2 college:1 softplus:6 arises:1 alexander:1 evaluate:2 
mcmc:2 scratch:1 |
5,906 | 6,343 | Object based Scene Representations using Fisher
Scores of Local Subspace Projections
Mandar Dixit and Nuno Vasconcelos
Department of Electrical and Computer Engineering
University of California, San Diego
{mdixit, nvasconcelos}@ucsd.edu
Abstract
Several works have shown that deep CNNs can be easily transferred across datasets,
e.g. the transfer from object recognition on ImageNet to object detection on Pascal
VOC. Less clear, however, is the ability of CNNs to transfer knowledge across tasks.
A common example of such transfer is the problem of scene classification, which
should leverage localized object detections to recognize holistic visual concepts.
While this problem is currently addressed with Fisher vector representations, these
are now shown ineffective for the high-dimensional and highly non-linear features
extracted by modern CNNs. It is argued that this is mostly due to the reliance on
a model, the Gaussian mixture of diagonal covariances, which has a very limited
ability to capture the second order statistics of CNN features. This problem is
addressed by the adoption of a better model, the mixture of factor analyzers (MFA),
which approximates the non-linear data manifold by a collection of local sub-spaces.
The Fisher score with respect to the MFA (MFA-FS) is derived and proposed as an
image representation for holistic image classifiers. Extensive experiments show
that the MFA-FS has state of the art performance for object-to-scene transfer and
this transfer actually outperforms the training of a scene CNN from a large scene
dataset. The two representations are also shown to be complementary, in the sense
that their combination outperforms each of the representations by itself. When
combined, they produce a state-of-the-art scene classifier.
1 Introduction
In recent years, convolutional neural networks (CNNs) trained on large scale datasets have achieved
remarkable performance on traditional vision problems such as image classification [8, 18, 26], object
detection and localization [5, 16] and others. The success of CNNs can be attributed to their ability
to learn highly discriminative, non-linear, visual transformations with the help of supervised backpropagation [9]. Beyond the impressive, sometimes even superhuman, results on certain datasets,
a remarkable property of these classifiers is the solution of the dataset bias problem [20] that has
plagued computer vision for decades. It has now been shown many times that a network trained to
solve a task on a certain dataset (e.g. object recognition on ImageNet) can be very easily fine-tuned to
solve a related problem on another dataset (e.g. object detection on the Pascal VOC or MS-COCO).
Less clear, however, is the robustness of current CNNs to the problem of task bias, i.e. their ability to
generalize across tasks. Given the large number of possible vision tasks, it is impossible to train a
CNN from scratch for each. In fact, it is likely not even feasible to collect the large number of images
needed to train effective deep CNNs for every task. Hence, there is a need to investigate the problem
of task transfer.
In this work, we consider a very common class of such problems, where a classifier trained on a class
of instances is to be transferred to a second class of instances, which are loose combinations of the
original ones. In particular, we consider the problem where the original instances are objects and
the target instances are scene-level concepts that somehow depend on those objects. Examples of
this problem include the transfer of object classifiers to tasks such as scene classification [6, 11, 2] or
image captioning [23]. In all these cases, the goal is to predict holistic scene tags from the scores (or
features) from an object CNN classifier. The dependence of the holistic descriptions on these objects
could range from very explicit to very subtle. For example, on the explicit end of the spectrum, an
image captioning system could produce a sentence such as "a person is sitting on a stool and feeding a
zebra." On the other hand, on the subtle end of the spectrum, a scene classification system would
leverage the recognition of certain rocks, tree stumps, bushes and a particular lizard species to label
an image with the tag "Joshua Tree National Park". While it is obviously possible 1) to collect a large
dataset of images, and 2) use them to train a CNN to directly solve each of these tasks, this approach
has two main limitations. First, it is extremely time consuming. Second, the "directly learned" CNN
will typically not accommodate explicit relations between the holistic descriptions and the objects in
the scene. This has, for example, been documented in the scene classification literature, where the
performance of the best "directly learned" CNNs [26] can be substantially improved by fusion with
object recognition CNNs [6, 11, 2].
So far, the transfer from object CNNs to holistic scene description has been most extensively studied
in the area of scene classification, where state of the art results have been obtained with the bag of
semantics representation of [2]. This consists of feeding image patches through an object recognition
CNN, collecting a bag vectors of object recognition scores, and embedding this bag into a fixed
dimensional vector space with recourse to a Fisher vector [7]. While there are variations of detail,
all other competitive methods are based on a similar architecture [6, 11]. This observation is, in
principle, applicable to other tasks. For example, the state of the art in image captioning is to use a
CNN as an image encoder that extracts a feature vector from the image. This feature vector is then fed
to a natural language decoder (typically an LSTM) that produces sentences. While there has not yet
been an extensive investigation of the best image encoder, it is likely that the best representations for
scene classification should also be effective encodings for language generation. For these reasons,
we restrict our attention to the scene classification problem in the remainder of this work, focusing
on the question of how to address possible limitations of the Fisher vector embedding. We note,
in particular, that while Fisher vectors have been classically defined using gradients of the image log-likelihood with respect to the means and variances of a Gaussian mixture model (GMM) [13], this
definition has not been applied universally in the CNN transfer context, where variance statistics are
often disregarded [6, 2].
In this work we make several contributions to the use of Fisher vector type of representations for
object to scene transfer. The first is to show that, for object recognition scores produced by a CNN [2],
variance statistics are much less informative of scene class distributions than the mean gradients, and
can even degrade scene classification performance. We then argue that this is due to the inability of the
standard GMM of diagonal covariances to provide a good approximation to the non-linear manifold
of CNN responses. This leads to the adoption of a richer generative model, the mixture of factor
analyzers (MFA) [4, 22], which locally approximates the scene class manifold by low-dimensional
linear spaces. Our second contribution is to show that, by locally projecting the feature data into
these spaces, the MFA can efficiently model its local covariance structure. For this, we derive the
Fisher score of the MFA model, denoted the MFA Fisher score (MFA-FS), a representation similar to
the GMM Fisher vector of [13, 17]. We show that, for high dimensional CNN features, the MFA-FS
captures highly discriminative covariance statistics, which were previously unavailable in [6, 2],
producing significantly improved scene classification over the conventional GMM Fisher vector. The
third contribution is a detailed experimental investigation of the MFA-FS. Since this can be seen
as a second order pooling mechanism, we compare it to a number of recent methods for second
order pooling of CNN features [21, 3]. Although these methods describe global covariance structure,
they lack the ability of the MFA-FS to capture that information along locally linear approximations
of the highly non-linear CNN feature manifold. This is shown to be important, as the MFA-FS is
shown to outperform all these representations by non-trivial margins. Finally, we show that the
MFA-FS enables effective task transfer, by showing that MFA-FS vectors extracted from deep CNNs
trained for ImageNet object recognition [8, 18], achieve state-of-the-art results on challenging scene
recognition benchmarks, such as SUN [25] and MIT Indoor Scenes [14].
2 Fisher scores
In computer vision, an image $I$ is frequently interpreted as a set of descriptors $D = \{x_1, \ldots, x_n\}$ sampled from some generative model $p(x;\theta)$. Since most classifiers require fixed-length inputs, it is common to map the set $D$ into a fixed-length vector. A popular mapping consists of computing the gradient (with respect to $\theta$) of the log-likelihood, $\nabla_\theta \mathcal{L}(\theta) = \frac{\partial}{\partial\theta}\log p(D;\theta)$, for a model $\theta^b$. This is known as the Fisher score of $\theta$. This gradient vector is often normalized by the square root of the Fisher information matrix $F$, according to $F^{-1/2}\nabla_\theta \mathcal{L}(\theta)$. This is referred to as the Fisher vector (FV) [7] representation of $I$. While the Fisher vector is frequently used with a Gaussian mixture model (GMM) [13, 17], any generative model $p(x;\theta)$ can be used. However, the information matrix is not always easy to compute. When this is the case, it is common to rely on the simpler representation of $I$ by the score $\nabla_\theta \mathcal{L}(\theta)$. This is, for example, the case with the sparse coded gradient vectors in [11]. We next show that, for models with hidden variables, the Fisher score can be obtained trivially from the steps of the expectation maximization (EM) algorithm commonly used to learn such models.
2.1 Fisher Scores from EM
Consider the log-likelihood of $D$ under a latent-variable model, $\log p(D;\theta) = \log \int p(D,z;\theta)\,dz$, with hidden variable $z$. Since the left-hand side does not depend on the hidden variable, this can be written in an alternate form, which is widely used in the EM literature,

$$\log p(D;\theta) = \int q(z)\log p(D,z;\theta)\,dz - \int q(z)\log q(z)\,dz + \int q(z)\log\frac{q(z)}{p(z|D;\theta)}\,dz = Q(q;\theta) + H(q) + KL(q\|p;\theta) \quad (1)$$

where $Q(q;\theta)$ is the "Q" function, $q(z)$ a general probability distribution, $H(q)$ its differential entropy and $KL(q\|p;\theta)$ the Kullback-Leibler divergence between the posterior $p(z|D;\theta)$ and $q(z)$. Hence,

$$\frac{\partial}{\partial\theta}\log p(D;\theta) = \frac{\partial}{\partial\theta}Q(q;\theta) + \frac{\partial}{\partial\theta}KL(q\|p;\theta) \quad (2)$$

where

$$\frac{\partial}{\partial\theta}KL(q\|p;\theta) = -\int \frac{q(z)}{p(z|D;\theta)}\frac{\partial}{\partial\theta}p(z|D;\theta)\,dz. \quad (3)$$

In each iteration of the EM algorithm the $q$ distribution is chosen as $q(z) = p(z|D;\theta^b)$, where $\theta^b$ is a reference parameter vector (the parameter estimates from the previous EM iteration) and

$$Q(q;\theta) = \int p(z|D;\theta^b)\log p(D,z;\theta)\,dz = E_{z|D;\theta^b}[\log p(D,z;\theta)]. \quad (4)$$

It follows that

$$\frac{\partial}{\partial\theta}KL(q\|p;\theta)\Big|_{\theta=\theta^b} = -\int \frac{p(z|D;\theta^b)}{p(z|D;\theta^b)}\frac{\partial}{\partial\theta}p(z|D;\theta)\Big|_{\theta=\theta^b}dz = -\frac{\partial}{\partial\theta}\int p(z|D;\theta)\,dz\Big|_{\theta=\theta^b} = 0$$

and

$$\frac{\partial}{\partial\theta}\log p(D;\theta)\Big|_{\theta=\theta^b} = \frac{\partial}{\partial\theta}Q(p(z|D;\theta^b);\theta)\Big|_{\theta=\theta^b}. \quad (5)$$

In summary, the Fisher score $\nabla_\theta \mathcal{L}(\theta)|_{\theta=\theta^b}$ of background model $\theta^b$ is the gradient of the Q-function of EM evaluated at reference model $\theta^b$. The computation of the score thus simplifies into the two steps of EM. First, the E step computes the Q function $Q(p(z|x;\theta^b);\theta)$ at the reference $\theta^b$. Second, the M-step evaluates the gradient of the Q function with respect to $\theta$ at $\theta = \theta^b$. This interpretation of the Fisher score is particularly helpful when efficient implementations of the EM algorithm are available, e.g. the recursive Baum-Welch computations commonly used to learn hidden Markov models [15]. For more tractable distributions, such as the GMM, it enables the simple reuse of the EM equations, which are always required to learn the reference model $\theta^b$, to compute the Fisher score.
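To make the E-step/M-step recipe concrete, the following is a minimal numerical sketch (ours, not from the paper): for a toy 1-D GMM with fixed weights and variances, it checks that the finite-difference gradient of the log-likelihood at a reference model matches the gradient of the Q function computed from the E-step responsibilities, as stated in (5). The toy setup and all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Toy 1-D GMM; theta = component means, weights/variances held fixed.
w, sig = np.array([0.4, 0.6]), np.array([1.0, 1.5])

def log_lik(D, mu):
    p = w * norm.pdf(D[:, None], mu[None, :], sig[None, :])
    return np.log(p.sum(axis=1)).sum()

def q_gradient(D, mu_b):
    # E-step at the reference mu_b: responsibilities h_{ik}
    p = w * norm.pdf(D[:, None], mu_b[None, :], sig[None, :])
    h = p / p.sum(axis=1, keepdims=True)
    # M-step: gradient of Q w.r.t. the means, evaluated at mu = mu_b
    return (h * (D[:, None] - mu_b[None, :]) / sig[None, :] ** 2).sum(axis=0)

rng = np.random.default_rng(0)
D = rng.normal(0.5, 1.2, size=200)
mu_b = np.array([-1.0, 1.0])

eps = 1e-6
fd = np.array([(log_lik(D, mu_b + eps * e) - log_lik(D, mu_b - eps * e)) / (2 * eps)
               for e in np.eye(2)])
print(np.allclose(fd, q_gradient(D, mu_b), atol=1e-4))  # True, cf. eq. (5)
```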
2.2 Bag of features
Fisher scores are usually combined with the bag-of-features representation, where an image is
described as an orderless collection of localized descriptors D = {x1 , x2 , . . . xn }. These were traditionally SIFT descriptors, but have more recently been replaced with responses of object recognition
CNNs [6, 1, 2]. In this work we use the semantic features proposed in [2], which are obtained by
transforming softmax probability vectors pi , obtained for image patches, into their natural parameter
form. These features were shown to perform better than activations of other CNN layers [2].
2.3 Gaussian Mixture Fisher Vectors
A GMM is a model with a discrete hidden variable that determines the mixture component which explains the observed data. The generative process is as follows. A mixture component $z_i$ is first sampled from a multinomial distribution $p(z=k) = w_k$. An observation $x_i$ is then sampled from the Gaussian component $p(x|z=k) \equiv G(x, \mu_k, \sigma_k)$ of mean $\mu_k$ and variance $\sigma_k$. Both the hidden and observed variables are sampled independently, and the Q function simplifies to

$$Q(p(z|D;\theta^b);\theta) = \sum_i E_{z_i|x_i;\theta^b}\Big[\sum_k I(z_i,k)\log p(x_i,k;\theta)\Big] = \sum_{i,k} h_{ik}\log p(x_i|z_i=k;\theta)\,w_k \quad (6)$$

where $I(\cdot)$ is the indicator function and $h_{ik}$ is the posterior probability $p(k|x_i;\theta^b)$. The probability vectors $h_i$ are the only quantities computed in the E-step.
In the Fisher vector literature [13, 17], the GMM is assumed to have diagonal covariances. This is denoted as the variance-GMM. Substituting the expressions of $p(x_i|z_i=k;\theta)$ and differentiating the Q function with respect to parameters $\theta = \{\mu_k, \sigma_k\}$ leads to the two components of the Fisher score

$$\mathcal{G}_{\mu_k^d}(I) = \frac{\partial}{\partial\mu_k^d}\mathcal{L}(\theta) = \sum_i p(k|x_i)\,\frac{x_i^d - \mu_k^d}{(\sigma_k^d)^2} \quad (7)$$

$$\mathcal{G}_{\sigma_k^d}(I) = \frac{\partial}{\partial\sigma_k^d}\mathcal{L}(\theta) = \sum_i p(k|x_i)\left[\frac{(x_i^d - \mu_k^d)^2}{(\sigma_k^d)^3} - \frac{1}{\sigma_k^d}\right]. \quad (8)$$

These quantities are evaluated using a reference model $\theta^b = \{\mu_k^b, \sigma_k^b\}$ learned (with EM) from all training data. To compute the Fisher vectors, scores in (7) and (8) are often scaled by an approximate Fisher information matrix, as detailed in [17]. When used with SIFT descriptors, these mean and variance scores usually capture complementary discriminative information, useful for image classification [13]. Yet, FVs computed from CNN features only use the mean gradients similar to (7), ignoring second-order statistics [6, 2]. In the experimental section, we show that the variance statistics of CNN features perform poorly compared to the mean gradients. This is perhaps due to the inability of the variance-GMM to accurately model data in high dimensions. We test this hypothesis by considering a model better suited for this task.
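As a reference for how (7)-(8) are pooled over an image, here is a minimal sketch (ours, not the authors' code) of the unnormalized GMM Fisher scores for a diagonal-covariance GMM; the constant $\log 2\pi$ terms are omitted since they cancel in the responsibilities.

```python
import numpy as np

def gmm_fisher_scores(X, w, mu, sigma):
    """X: (n, d) descriptors; w: (k,) weights; mu, sigma: (k, d)."""
    # E-step: responsibilities h_{ik} = p(k | x_i); log(2*pi) terms cancel
    log_p = (np.log(w)[None, :]
             - 0.5 * ((((X[:, None, :] - mu[None]) / sigma[None]) ** 2)
                      + 2 * np.log(sigma[None])).sum(-1))
    h = np.exp(log_p - log_p.max(1, keepdims=True))
    h /= h.sum(1, keepdims=True)
    # M-step gradients of eqs. (7)-(8), pooled over the descriptors
    diff = X[:, None, :] - mu[None]                       # (n, k, d)
    g_mu = (h[..., None] * diff / sigma[None] ** 2).sum(0)
    g_sig = (h[..., None] * (diff ** 2 / sigma[None] ** 3
                             - 1.0 / sigma[None])).sum(0)
    return np.concatenate([g_mu.ravel(), g_sig.ravel()])
```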
2.4 Fisher Scores for the Mixture of Factor Analyzers
A factor analyzer (FA) is a type of a Gaussian distribution that models high dimensional observations $x \in \mathbb{R}^D$ in terms of latent variables or "factors" $z \in \mathbb{R}^R$ defined on a low-dimensional subspace, $R \ll D$ [4]. The process can be written as $x = \Lambda z + \epsilon$, where $\Lambda$ is known as the factor loading matrix and $\epsilon$ models the additive noise in the dimensions of $x$. Factors $z$ are assumed distributed as $G(z, 0, I)$ and the noise $\epsilon$ is assumed to be $G(\epsilon, 0, \Psi)$, where $\Psi$ is a diagonal matrix. It can be shown that $x$ has full covariance $S = \Lambda\Lambda^T + \Psi$, making the FA better suited for high dimensional modeling than a Gaussian of diagonal covariance.

A mixture of factor analyzers (MFA) is an extension of the FA that allows a piece-wise linear approximation of a non-linear data manifold. Unlike the GMM, it has two hidden variables: a discrete variable $s$, $p(s=k) = w_k$, which determines the mixture assignments, and a continuous latent variable $z \in \mathbb{R}^R$, $p(z|s=k) = G(z, 0, I)$, which is a low dimensional projection of the observation variable $x \in \mathbb{R}^D$, $p(x|z, s=k) = G(x, \Lambda_k z + \mu_k, \Psi)$. Hence, the $k$-th MFA component is a FA of mean $\mu_k$ and subspace defined by $\Lambda_k$. Overall, the MFA components approximate the distribution of
the observations $x$ by a set of sub-spaces in observation space. The Q function is

$$Q(p(s,z|D;\theta^b);\theta) = \sum_i E_{z_i,s_i|x_i;\theta^b}\Big[\sum_k I(s_i,k)\log p(x_i, z_i, s_i=k;\theta)\Big] \quad (9)$$

$$= \sum_{i,k} h_{ik}\, E_{z_i|x_i;\theta^b}\big[\log G(x_i, \Lambda_k z_i + \mu_k, \Psi) + \log G(z_i, 0, I) + \log w_k\big] \quad (10)$$

where $h_{ik} = p(s_i = k|x_i;\theta^b)$. After some simplifications, the E step reduces to computing

$$h_{ik} = p(k|x_i;\theta^b) \propto w_k^b\, N(x_i, \mu_k^b, S_k^b) \quad (11)$$

$$E_{z_i|x_i;\theta^b}[z_i] = \beta_k^b (x_i - \mu_k^b) \quad (12)$$

$$E_{z_i|x_i;\theta^b}[z_i z_i^T] = I - \beta_k^b \Lambda_k^b + \beta_k^b (x_i - \mu_k^b)(x_i - \mu_k^b)^T \beta_k^{bT} \quad (13)$$

with $S_k^b = \Lambda_k^b \Lambda_k^{bT} + \Psi^b$ and $\beta_k^b = \Lambda_k^{bT}(S_k^b)^{-1}$. The M-step then evaluates the Fisher score of $\theta^b = \{\mu_k^b, \Lambda_k^b\}$. With some algebraic manipulations, this can be shown to have components

$$\mathcal{G}_{\mu_k}(I) = \sum_i p(k|x_i;\theta^b)\,\Psi^{b\,-1}\big(I - \Lambda_k^b \beta_k^b\big)\big(x_i - \mu_k^b\big) \quad (14)$$

$$\mathcal{G}_{\Lambda_k}(I) = \sum_i p(k|x_i;\theta^b)\,\Psi^{b\,-1}\Big[\big(\Lambda_k^b \beta_k^b - I\big)\big(x_i - \mu_k^b\big)\big(x_i - \mu_k^b\big)^T \beta_k^{bT} - \Lambda_k^b\Big]. \quad (15)$$

For a detailed discussion of the Q function, the reader is referred to the EM derivation in [4]. Note that the scores with respect to the means are functionally similar to the first order residuals in (7). However, the scores with respect to the factor loading matrices $\Lambda_k$ account for covariance statistics of the observations $x_i$, not just variances. We refer to the representations (14) and (15) as MFA Fisher scores (MFA-FS). Note that these are not FVs due to the absence of normalization by the Fisher information, which is more complex to compute than for the variance-GMM.
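For illustration, a minimal sketch of the MFA-FS pooling of eqs. (11)-(15) follows; this is our reading of the equations above, not the authors' implementation, and the diagonal noise is passed as a vector for simplicity.

```python
import numpy as np

def mfa_fisher_scores(X, w, mu, Lam, psi):
    """X: (n, D) descriptors; w: (K,) mixture weights;
    mu: (K, D) means; Lam: (K, D, R) loadings; psi: (D,) diag of Psi."""
    n, D = X.shape
    K = len(w)
    log_p, beta, diff = np.zeros((n, K)), [], []
    for k in range(K):
        S = Lam[k] @ Lam[k].T + np.diag(psi)        # S_k^b = Lam Lam^T + Psi
        Sinv = np.linalg.inv(S)
        beta.append(Lam[k].T @ Sinv)                # beta_k^b = Lam^T S^-1
        diff.append(X - mu[k])
        _, logdet = np.linalg.slogdet(S)
        log_p[:, k] = (np.log(w[k]) - 0.5 * logdet
                       - 0.5 * np.einsum('nd,de,ne->n', diff[k], Sinv, diff[k]))
    h = np.exp(log_p - log_p.max(1, keepdims=True))
    h /= h.sum(1, keepdims=True)                    # responsibilities, eq. (11)

    Pinv, scores = 1.0 / psi, []
    for k in range(K):
        A = Lam[k] @ beta[k] - np.eye(D)            # Lam beta - I
        M = (diff[k] * h[:, [k]]).T @ diff[k]       # sum_i h_ik d_i d_i^T
        g_mu = Pinv * (-A @ (diff[k].T @ h[:, k]))  # eq. (14)
        g_Lam = Pinv[:, None] * (A @ M @ beta[k].T
                                 - h[:, k].sum() * Lam[k])  # eq. (15)
        scores += [g_mu.ravel(), g_Lam.ravel()]
    return np.concatenate(scores)
```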
3 Related work
The most popular approach to transfer object scores (usually from an ImageNet CNN) into a feature
vector for scene classification is to rely on FV-style pooling. Although most classifiers default to the
GMM-FV embedding [6, 1, 2, 24], some recent works have explored different encoding [11] and
pooling schemes [21, 3] with promising results. Liu et al. [11] derived an FV like representation from
sparse coding. Their model can be described as a factor analyzer with Gaussian observations $p(x|z) \sim \mathcal{N}(\Lambda z, \sigma^2 I)$ conditioned on Laplace factors $p(z) \propto \prod_r \exp(-|z_r|)$. While the sparse FA marginal $p(x)$ is intractable, it can be approximated by an evidence lower bound $p(x) \geq \int q(z)\frac{p(x,z)}{q(z)}\,dz$ derived from a suitable variational posterior $q(z)$. In [11], $q$ is a point posterior $\delta(z - z^*)$ and the MAP inference simplifies into sparse coding. The image representation is obtained using gradients of the sparse coding objective evaluated at the MAP factors $z^*$, with respect to the factor loadings $\Lambda$. [21] proposed an alternative bilinear pooling mechanism $\sum_i x_i x_i^T$. Similar to the MFA-FS, this
captures second order statistics of CNN feature space, albeit globally. Due to its simplicity, this
mechanism supports fine-tuning of the object CNN to scene classes. Gao et al. [3] have recently
shown that this representation can be compressed with minimal performance loss.
4 Experiments
The MFA-FS was evaluated on the scene classification problem, using the 67 class MIT Indoor scenes
dataset [14] and the 397 class MIT SUN dataset [25]. For Indoor scenes, a single training set of 80
images per class is provided by the authors. The test set consists of 20 images per class. Results are
reported as average per class classification accuracy. The authors of SUN provide multiple train/test
splits, each test set containing 50 images per class. Results are reported as mean average per class
classification accuracy over splits. Three object recognition CNNs, pre-trained on ImageNet, were
used to extract features: the 8 layer network of [8] (denoted as AlexNet) and the deeper 16 and 19
layer networks of [18] (denoted VGG-16 and VGG-19, respectively). These CNNs assign 1000
dimensional object recognition probabilities to P × P patches (sampled on a grid of fixed spacing) of the scene images, with P ∈ {128, 160, 96}. Patch probability vectors were converted into their natural parameter form and PCA-reduced to 500 dimensions as in [2].
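A sketch of this feature extraction step follows; it assumes, per our reading of [2], that the natural-parameter form of a multinomial is the log of the probability vector (up to an additive constant), and in practice the PCA basis would be fit once on training patches. All names are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def semantic_features(P, pca=None, n_components=500, eps=1e-8):
    """P: (n_patches, 1000) softmax outputs of the object CNN."""
    logits = np.log(P + eps)          # natural-parameter form
    if pca is None:                   # in practice: fit once on training data
        pca = PCA(n_components=n_components).fit(logits)
    return pca.transform(logits), pca
```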
Table 1: Classification accuracy (K = 50, R = 10).

| | MIT Indoor | SUN |
|---|---|---|
| GMM FV (μ) | 66.08 | 50.01 |
| GMM FV (σ) | 53.86 | 37.71 |
| MFA FS (μ) | 67.68 | 51.43 |
| MFA FS (Λ) | 71.11 | 53.38 |

[Figure 1 plots omitted; they showed classification accuracy vs. descriptor dimension for the MFA-FS(Λ) with K = 50 components and R ∈ {1, 2, 5, 10}, and the GMM-FV(σ) with K ∈ {50, 100, 200, 300, 400, 500}.]
Figure 1: Classification accuracy vs. descriptor size for MFA-FS(Λ) of K = 50 components and R factor dimensions and GMM-FV(σ) of K components. Left: MIT Indoor. Right: SUN.
Each image was mapped into a GMM-FV [13] using a background GMM, and an MFA-FS, using (14), (15) and a background MFA. As usual in the FV literature, these vectors were power normalized, L2 normalized, and classified with a cross-validated linear SVM. These classifiers were compared to scene CNNs, trained on the large scale Places dataset. In this case, the features from the penultimate CNN layer were used as a holistic scene representation and classified with a linear SVM, as in [26]. We used the Places CNNs with the AlexNet and VGG-16 architectures provided by the authors.
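The standard FV post-processing mentioned above amounts to a signed power normalization followed by L2 normalization; a minimal sketch (the exponent 0.5 is the common choice in [17], which the text does not state explicitly):

```python
import numpy as np

def normalize_fv(v, alpha=0.5):
    v = np.sign(v) * np.abs(v) ** alpha      # power ("square-root") normalization
    return v / (np.linalg.norm(v) + 1e-12)   # L2 normalization
```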
4.1 Impact of Covariance Modeling
We begin with an experiment to compare the modeling power of MFAs to variance-GMMs. This was based on AlexNet features, mixtures of K = 50 components, and an MFA latent space dimension of R = 10. Table 1 presents the classification accuracy of a GMM-FV that only considers the mean - GMM-FV(μ) - or variance - GMM-FV(σ) - parameters and an MFA-FS that only considers the mean - MFA-FS(μ) - or covariance - MFA-FS(Λ) - parameters. The most interesting observation is the complete failure of the GMM-FV(σ), which under-performs the GMM-FV(μ) by more than 10%. The difference between the two components of the GMM-FV is not as startling for lower dimensional SIFT features [13]. However, for CNN features, the discriminative power of variance statistics is exceptionally low. This explains why previous FV representations for CNNs [6, 2] only consider gradients with respect to the means. A second observation of importance is that the improved modeling of covariances by the MFA eliminates this problem. In fact, MFA-FS(Λ) is significantly better than both GMM-FVs. It could be argued that a fair comparison requires an increase in the GMM modeling capacity. Fig. 1 tests this hypothesis by comparing GMM-FVs(σ) and MFA-FS(Λ) for various numbers of GMM components (K ∈ {50, . . . , 500}) and MFA hidden sub-space dimensions (R ∈ {1, . . . , 10}). For comparable vector dimensions, the covariance based scores always significantly outperform the variance statistics on both datasets. A final observation is that, due to covariance modeling in MFAs, the MFA-FS(μ) performs better than the GMM-FV(μ). The first order residuals pooled to obtain the MFA-FS(μ) (14) are scaled by covariance matrices instead of variances. This local de-correlation provides a non-trivial improvement for the MFA-FS(μ) over the GMM-FV(μ) (≈ 1.5% points). Covariance modeling was previously used in [19] to obtain FVs w.r.t. Gaussian means and local subspace variances (eigenvalues of covariance). Their subspace variance FV, derived with our MFAs, performs much better than the variance GMM-FV(σ), due to a better underlying model (60.7% vs 53.86% on Indoor). It is, however, still inferior to the MFA-FS(Λ), which captures full covariance within local subspaces.

While a combination of the MFA-FS(μ) and MFA-FS(Λ) produces a small improvement (≈ 1%), we restrict to using the latter in the remainder of this work.
4.2 Multi-scale learning and Deep CNNs
Recent works have demonstrated value in combining deep CNN features extracted at multiple scales.
Table 2 presents the classification accuracies of the MFA-FS (?) based on AlexNet, and 16 and 19
layer VGG features extracted from 96x96, 128x128 and 160x160 pixel image patches, as well as
their concatenation (3 scales), as suggested by [2]. These results confirm the benefits of multi-scale
feature combination, which achieves the best performance for all CNNs and datasets.
Table 2: MFA-FS classification accuracy as a function of patch scale.

| | | MIT Indoor | SUN |
|---|---|---|---|
| AlexNet | 160x160 | 69.83 | 52.36 |
| | 128x128 | 71.11 | 53.38 |
| | 96x96 | 70.51 | 53.54 |
| | 3 scales | 73.58 | 55.95 |
| VGG-16 | 160x160 | 77.26 | 59.77 |
| | 128x128 | 77.28 | 60.99 |
| | 96x96 | 79.57 | 61.71 |
| | 3 scales | 80.1 | 63.31 |
| VGG-19 | 160x160 | 77.21 | - |
| | 128x128 | 79.39 | - |
| | 96x96 | 79.9 | - |
| | 3 scales | 81.43 | - |
Table 3: Performance of scene classification methods. *combination of patch scales (128, 96, 160).

| Method | MIT Indoor | SUN |
|---|---|---|
| MFA-FS + Places (VGG) | 87.23 | 71.06 |
| MFA-FS + Places (AlexNet) | 79.86 | 63.16 |
| MFA-FS (VGG) | 81.43 | 63.31 |
| MFA-FS (AlexNet) | 73.58 | 55.95 |
| Full BN (VGG) [3] | 77.55 | - |
| Compact BN (VGG) [3] | 76.17 | - |
| H-Sparse (VGG) [12] | 79.5 | - |
| Sparse Coding (VGG) [12] | 77.6 | - |
| Sparse Coding (AlexNet) [11] | 68.2 | - |
| MetaClass (AlexNet) + Places [24] | 78.9 | 58.11 |
| FV (AlexNet)(4 scales) + Places [2] | 79.0 | 61.72 |
| FV (AlexNet)(3 scales) + Places [2] | 78.5* | - |
| FV (AlexNet)(4 scales) [2] | 72.86 | 54.4 |
| FV (AlexNet)(3 scales) [2] | 71.24 | 53.0 |
| VLAD (AlexNet) [6] | 68.88 | 51.98 |
| FV+FC (VGG) [1] | 81.0 | - |
| Mid Level [10] | 70.46 | - |

4.3 Comparison with ImageNet based Classifiers
We next compared the MFA-FS to state of the art scene classifiers also based on transfer from
ImageNet CNN features [11, 1?3]. Since all these methods only report results for MIT Indoor, we
limited the comparison to this dataset, with the results of Table 4. The GMM-FV of [2] operates
on AlexNet CNN semantics extracted from image patches of multiple sizes (96, 128, 160, 80). The
FV in [1] is computed using convolutional features from AlexNet or VGG-16 extracted in a large
multi-scale setting. Liu et al. proposed a gradient representation based on sparse codes. Their initial
results were reported on a single patch scale of 128x128 using AlexNet features [11]. More recently,
they have proposed an improved H-Sparse representation, combined multiple patch scales and used
VGG features in [12]. The recently proposed bilinear (BN) descriptor pooling of [21] is similar to
the MFA-FS in the sense that it captures global second order descriptor statistics. The simplicity of
these descriptors enables the fine-tuning of the CNN layers to the scene classification task. However,
their results, reproduced in [3] for VGG-16 features, are clearly inferior to those of the MFA-FS
without fine-tuning. Gao et al. [3] propose a way to compress these bilinear statistics with trainable
transformations. For a compact image representation of size 8K, their accuracy is inferior to a
representation of 5K dimensions obtained by combining the MFA-FS with a simple PCA.
These experiments show that the MFA-FS is a state of the art procedure for task transfer from object
recognition (on ImageNet) to scene classification (e.g. on MIT Indoor or SUN). Its closest competitor
is the classifier of [1], which combines CNN features in a massive multiscale setting (~10 image sizes).
While MFA-FS outperforms [1] with only 3 image scales, its performance improves even further with
addition of more scales (82% with VGG, 4 patch sizes).
4.4 Task transfer performance
The next question is how object-to-scene transfer compares to the much more intensive process,
pursued by [26], of collecting a large scale labeled dataset and training a deep CNN from it. Their
scene dataset, known as Places, consists of 2.4M images, from which both AlexNet and VGG Net
CNNs were trained for scene classification. The fully connected features from the networks are used
as scene representations and classified with linear SVMs on Indoor scenes and SUN. The Places
CNN features are a direct alternatives to the MFA-FS. While the use of the former is an example of
dataset transfer (features trained on scenes to classify scenes) the use of the latter is an example of
task transfer (features trained on objects to classify scenes).
A comparison between the two transfer approaches is shown in table 5. Somewhat surprisingly,
task transfer with the MFA-FS outperformed dataset transfer with the Places CNN, on both MIT
Indoors and SUN and for both the AlexNet and VGG architectures. This supports the hypothesis that
the variability of configurations of most scenes makes scene classification much harder than object
recognition, to the point where CNN architectures that have close-to or above human performance for
Table 4: Comparison to task transfer methods (ImageNet CNNs) on MIT Indoor.

| Method | 1 scale | mscale |
|---|---|---|
| AlexNet | | |
| MFA-FS | 71.11 | 73.58 |
| GMM FV [2] | 68.5 | 72.86 |
| FV+FC [1] | - | 71.6 |
| Sparse Coding [11] | 68.2 | - |
| VGG | | |
| MFA-FS | 79.9 | 81.43 |
| Sparse Coding [12] | - | 77.6 |
| H-Sparse [12] | - | 79.5 |
| BN [3] | 77.55 | - |
| FV+FC [1] | - | 81.0 |
| VGG + dim. reduction | | |
| MFA-FS + PCA (5k) | - | 79.3 |
| BN (8k) [3] | 76.17 | - |
Table 5: Comparison with the Places trained Scene CNNs.

| Method | SUN | Indoor |
|---|---|---|
| AlexNet | | |
| MFA-FS | 55.95 | 73.58 |
| Places | 54.3 | 68.24 |
| Combined | 63.16 | 79.86 |
| VGG | | |
| MFA-FS | 63.31 | 81.43 |
| Places | 61.32 | 79.47 |
| Combined | 71.06 | 87.23 |
| AlexNet + VGG | | |
| Places (VGG + Alex) | 65.91 | 81.29 |
| MFA-FS(Alex) + Places(VGG) | 68.8 | 85.6 |
| MFA-FS(VGG) + Places(Alex) | 67.34 | 82.82 |
object recognition are much less effective for scenes. It is, instead, preferable to pool object detections
across the scene image, using a pooling mechanism such as the MFA-FS. This observation is in line
with an interesting result of [2], showing that the object-based and scene-based representations are
complementary, by concatenating ImageNet- and Places-based feature vectors. By replacing the
GMM-FV of [2] with the MFA-FS now proposed, we improve upon these results. For both the
AlexNet and VGG CNNs, the combination of the ImageNet-based MFA-FS and the Places CNN
feature vector outperformed both the MFA-FS and the Places CNN features by themselves, in both
SUN and MIT Indoor. To the best of our knowledge, no method using these or deeper CNNs has
reported better results than the combined MFA-FS and Places VGG features of Table 5.
It could be argued that this improvement is just an effect of the often observed benefits of fusing
different classifiers. Many works even resort to "bagging" of multiple CNNs to achieve performance
improvements [18]. To test this hypothesis we also implemented a classifier that combines two Places
CNNs with the AlexNet and VGG architectures. This is shown as Places (VGG+AlexNet) in the last
section of Table 5. While improving on the performance of both MFA-FS and Places, its performance
is not as good as that of the combination of the object-based and scene-based representations (MFA-FS + Places). As shown in the remainder of the last section of the table, any combination of an object
CNN with MFA-FS based transfer and a scene CNN outperforms this classifier.
Finally, table 3 compares results to the best recent scene classification methods in the literature.
This comparison shows that the MFA-FS + Places combination is a state-of-the-art classifier with
substantial gains over all other proposals. The results of 71.06% on SUN and 87.23% on Indoor
scenes substantially outperform the previous best results of 61.7% and 81.7%, respectively.
5 Conclusion
It is now well established that deep CNNs can be transferred across datasets that address similar
tasks. It is less clear, however, whether they are robust to transfer across tasks. In this work, we have
considered a class of problems that involve this type of transfer, namely problems that benefit from
transferring object detections into holistic scene level inference, e.g. scene classification. While such
problems have been addressed with FV-like representations in the past, we have shown that these are
not very effective for the high-dimensional CNN features. The reason is their reliance on a model, the
variance-GMM, with a limited flexibility. We have addressed this problem by adopting a better model,
the MFA, which approximates the non-linear data manifold by a set of local sub-spaces. We then
introduced the Fisher score with respect to this model, denoted as the MFA-FS. Through extensive
experiments, we have shown that the MFA-FS has state of the art performance for object-to-scene
transfer and this transfer actually outperforms a scene CNN trained on a large scene dataset. These
results are significant given that 1) MFA training takes only a few hours versus training a CNN, and
2) transfer requires a much smaller scene dataset.
References
[1] M. Cimpoi, S. Maji, I. Kokkinos, and A. Vedaldi. Deep filter banks for texture recognition, description, and segmentation. International Journal of Computer Vision, 2015.
[2] Mandar Dixit, Si Chen, Dashan Gao, Nikhil Rasiwasia, and Nuno Vasconcelos. Scene classification with semantic fisher vectors. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[3] Yang Gao, Oscar Beijbom, Ning Zhang, and Trevor Darrell. Compact bilinear pooling. CoRR, abs/1511.06062, 2015.
[4] Zoubin Ghahramani and Geoffrey E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical report, 1997.
[5] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[6] Yunchao Gong, Liwei Wang, Ruiqi Guo, and Svetlana Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Computer Vision ECCV 2014, volume 8695, pages 392-407, 2014.
[7] Tommi S. Jaakkola and David Haussler. Exploiting generative models in discriminative classifiers. In Proceedings of the 1998 conference on Advances in neural information processing systems II, pages 487-493, 1999.
[8] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1097-1105, 2012.
[9] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, November 1998.
[10] Yao Li, Lingqiao Liu, Chunhua Shen, and Anton van den Hengel. Mid-level deep pattern mining. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[11] Lingqiao Liu, Chunhua Shen, Lei Wang, Anton van den Hengel, and Chao Wang. Encoding high dimensional local features by sparse coding based fisher vectors. In Advances in Neural Information Processing Systems 27, pages 1143-1151, 2014.
[12] Lingqiao Liu, Peng Wang, Chunhua Shen, Lei Wang, Anton van den Hengel, Chao Wang, and Heng Tao Shen. Compositional model based fisher vector coding for image classification. CoRR, abs/1601.04143, 2016.
[13] Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the fisher kernel for large-scale image classification. In Proceedings of the 11th European conference on Computer vision: Part IV, ECCV'10, pages 143-156, 2010.
[14] A. Quattoni and A. Torralba. Recognizing indoor scenes. IEEE Conference on Computer Vision and Pattern Recognition, pages 413-420, 2009. doi: 10.1109/CVPRW.2009.5206537.
[15] L. R. Rabiner. A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257-286, Feb 1989. ISSN 0018-9219. doi: 10.1109/5.18626.
[16] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Neural Information Processing Systems (NIPS), 2015.
[17] Jorge Sánchez, Florent Perronnin, Thomas Mensink, and Jakob J. Verbeek. Image classification with the fisher vector: Theory and practice. International Journal of Computer Vision, 105(3):222-245, 2013.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[19] Masayuki Tanaka, Akihiko Torii, and Masatoshi Okutomi. Fisher vector based on full-covariance gaussian mixture model. IPSJ Transactions on Computer Vision and Applications, 5:50-54, 2013. doi: 10.2197/ipsjtcva.5.50.
[20] Antonio Torralba and Alexei A. Efros. Unbiased look at dataset bias. In CVPR'11, June 2011.
[21] Tsung-Yu Lin, Aruni RoyChowdhury, and Subhransu Maji. Bilinear cnn models for fine-grained visual recognition. In International Conference on Computer Vision (ICCV), 2015.
[22] Jakob Verbeek. Learning nonlinear image manifolds by global alignment of local linear models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(8):1236-1250, August 2006.
[23] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. CoRR, abs/1411.4555, 2014.
[24] Ruobing Wu, Baoyuan Wang, Wenping Wang, and Yizhou Yu. Harvesting discriminative meta objects with deep cnn features for scene classification. In The IEEE International Conference on Computer Vision (ICCV), December 2015.
[25] Jianxiong Xiao, J. Hays, K.A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 3485-3492, 2010.
[26] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning Deep Features for Scene Recognition using Places Database. NIPS, 2014.
| 6343 |@word cnn:41 ruiqi:1 loading:5 kokkinos:1 bn:5 covariance:19 harder:1 accommodate:1 reduction:1 initial:1 liu:5 configuration:1 score:28 tuned:1 document:1 outperforms:6 past:1 current:1 comparing:1 activation:2 yet:2 si:5 written:2 additive:1 informative:1 enables:3 x160:4 v:1 generative:5 pursued:1 selected:1 intelligence:1 dashan:1 harvesting:1 provides:1 org:1 simpler:1 x128:5 zhang:1 along:1 direct:1 differential:1 consists:4 combine:2 peng:1 themselves:1 frequently:2 multi:4 voc:2 globally:1 xti:1 accross:1 considering:1 spain:1 provided:2 begin:1 underlying:1 lapedriza:1 alexnet:25 interpreted:1 substantially:2 complimentary:1 transformation:2 every:1 collecting:2 preferable:1 classifier:17 scaled:2 producing:1 engineering:1 local:9 bilinear:5 encoding:3 studied:1 collect:2 challenging:1 limited:3 range:1 adoption:2 lecun:1 recursive:1 practice:1 backpropagation:1 procedure:1 area:1 significantly:3 vedaldi:1 projection:2 liwei:1 pre:1 ipsj:1 zoubin:1 close:1 context:1 impossible:1 conventional:1 map:3 demonstrated:1 dz:9 baum:1 attention:1 independently:1 welch:1 shen:4 simplicity:2 haussler:1 embedding:3 variation:1 traditionally:1 laplace:1 diego:1 target:1 hierarchy:1 massive:1 caption:1 samy:1 hypothesis:4 recognition:27 particularly:1 approximated:1 labeled:1 database:2 observed:3 electrical:1 capture:7 wang:8 region:1 connected:1 sun:15 substantial:1 transforming:1 trained:11 depend:2 localization:1 upon:1 easily:2 various:1 maji:2 derivation:1 train:4 effective:5 describe:1 doi:4 tell:1 richer:1 widely:1 solve:3 cvpr:5 loglikelihood:1 nikhil:1 compressed:1 encoder:2 ability:5 statistic:12 simonyan:1 itself:1 final:1 reproduced:1 obviously:1 rr:2 net:1 rock:1 propose:1 remainder:3 combining:2 holistic:8 poorly:1 achieve:2 flexibility:1 description:4 dixit:2 exploiting:1 sutskever:1 darrell:2 produce:4 captioning:3 object:40 help:1 derive:1 gong:1 zit:1 implemented:1 tommi:1 ning:1 cnns:27 filter:1 kb:9 human:1 explains:2 argued:3 require:1 feeding:2 hx:2 assign:1 investigation:2 extension:1 considered:1 plagued:1 exp:1 mapping:1 predict:1 substituting:1 efros:1 achieves:1 torralba:4 abbey:1 outperformed:2 applicable:1 bag:5 label:1 currently:1 ross:2 mit:12 clearly:1 gaussian:10 always:3 zhou:1 jaakkola:1 derived:4 validated:1 june:3 improvement:4 likelihood:2 sense:2 helpful:1 inference:2 dim:1 perronnin:2 typically:2 transferring:1 hidden:9 relation:1 semantics:2 tao:1 pixel:1 overall:1 classification:32 subhransu:1 pascal:2 denoted:5 art:9 softmax:1 marginal:1 vasconcelos:2 park:1 look:1 yu:2 others:1 report:2 few:1 modern:1 recognize:1 national:1 divergence:1 replaced:1 ab:4 detection:8 highly:4 investigate:1 mining:1 alexei:1 alignment:1 mixture:14 accurate:1 tree:2 iv:1 masayuki:1 girshick:2 minimal:1 instance:4 classify:2 modeling:7 assignment:1 maximization:1 fusing:1 krizhevsky:1 recognizing:1 reported:4 combined:6 person:1 lstm:1 international:4 pool:1 ilya:1 yao:1 containing:1 classically:1 resort:1 style:1 rasiwasia:1 li:1 wkb:1 account:1 converted:1 de:1 stump:1 coding:9 wk:4 pooled:1 jitendra:1 tsung:1 piece:1 root:1 competitive:1 contribution:3 square:1 accuracy:10 convolutional:5 variance:21 descriptor:10 efficiently:1 sitting:1 rabiner:1 generalize:1 anton:3 accurately:1 produced:1 ren:1 yizhou:1 zoo:1 startling:1 liebler:1 classified:3 quattoni:1 trevor:2 definition:1 failure:1 competitor:1 evaluates:2 nuno:2 attributed:1 sampled:5 gain:1 dataset:16 popular:2 vlad:1 knowledge:2 improves:1 segmentation:2 subtle:2 actually:2 focusing:1 supervised:1 
response:2 improved:4 zisserman:1 mensink:2 evaluated:4 just:2 correlation:1 hand:2 x96:4 replacing:1 multiscale:1 nonlinear:1 lack:1 somehow:1 perhaps:1 lei:2 effect:1 concept:2 normalized:3 unbiased:1 former:1 hence:3 semantic:3 eg:1 inferior:3 m:1 hik:5 complete:1 performs:3 image:37 wise:1 variational:1 lazebnik:1 recently:4 common:4 multinomial:1 volume:1 interpretation:1 approximates:3 he:1 functionally:1 refer:1 significant:1 zebra:1 stool:1 rd:2 tuning:3 trivially:1 grid:1 analyzer:6 language:2 impressive:1 ezi:5 feb:1 posterior:4 closest:1 recent:5 skb:3 coco:1 manipulation:1 chunhua:3 certain:3 hay:1 meta:1 success:1 jorge:2 joshua:1 seen:1 somewhat:1 ii:1 full:4 multiple:5 reduces:1 technical:1 faster:1 cross:1 lin:1 coded:1 impact:1 verbeek:2 oliva:2 vision:16 expectation:1 iteration:2 sometimes:1 normalization:1 adopting:1 kernel:1 achieved:1 proposal:2 background:3 addition:1 fine:5 spacing:1 addressed:4 jian:1 eliminates:1 unlike:1 ineffective:1 pooling:9 december:1 superhuman:1 gmms:1 leverage:2 yang:1 split:2 easy:1 bengio:2 zi:9 architecture:5 restrict:2 florent:2 simplifies:3 haffner:1 vgg:30 intensive:1 grad:2 whether:1 expression:1 pca:3 lingqiao:3 reuse:1 f:60 algebraic:1 speech:1 shaoqing:1 compositional:1 deep:14 antonio:1 useful:1 baoyuan:1 clear:3 detailed:3 indoors:1 involve:1 mid:2 extensively:1 locally:3 svms:1 documented:1 reduced:1 http:1 outperform:2 tutorial:1 roychowdhury:1 per:5 discrete:2 reliance:2 gmm:39 year:1 oscar:1 svetlana:1 place:26 reader:1 wu:1 patch:11 comparable:1 layer:6 hi:1 bound:1 simplification:1 alex:4 scene:64 x2:1 tag:2 toshev:1 extremely:1 transferred:3 department:1 according:1 alternate:1 combination:9 kd:4 across:5 smaller:1 em:12 making:1 projecting:1 den:2 iccv:2 recourse:1 equation:1 previously:2 loose:1 mechanism:4 needed:1 fed:1 tractable:1 end:2 available:1 alternative:2 robustness:1 eigen:1 original:2 compress:1 bagging:1 thomas:2 include:1 ghahramani:1 objective:1 malik:1 question:2 quantity:2 yunchao:1 fa:5 dependence:1 usual:1 diagonal:5 traditional:1 gradient:13 subspace:6 mapped:1 penultimate:1 decoder:1 capacity:1 concatenation:1 degrade:1 manifold:7 argue:1 considers:2 trivial:2 reason:2 length:2 code:1 issn:1 mostly:1 implementation:1 perform:2 observation:12 datasets:6 markov:2 benchmark:1 november:1 hinton:2 variability:1 ucsd:1 jakob:2 nvasconcelos:1 august:1 bk:15 introduced:1 namely:1 required:1 kl:5 extensive:3 sentence:2 imagenet:12 david:1 california:1 fv:36 learned:3 beijbom:1 established:1 barcelona:1 nip:3 hour:1 address:2 beyond:1 suggested:1 tanaka:1 usually:3 pattern:7 indoor:16 mfa:81 suitable:1 power:3 natural:3 rely:2 indicator:1 zr:1 residual:2 scheme:1 improve:1 extract:2 chao:2 literature:5 l2:1 loss:1 fully:1 generation:1 limitation:2 interesting:2 versus:1 localized:2 remarkable:2 geoffrey:2 generator:1 xiao:2 principle:1 bank:1 heng:1 pi:1 eccv:2 summary:1 surprisingly:1 last:2 bias:3 side:1 deeper:2 differentiating:1 sparse:14 orderless:2 distributed:1 benefit:3 van:2 dimension:10 xn:2 default:1 rich:1 computes:1 hengel:3 author:3 collection:2 commonly:2 san:1 universally:1 far:1 erhan:1 transaction:2 approximate:2 compact:3 kullback:1 confirm:1 global:3 cimpoi:1 assumed:3 consuming:1 discriminative:6 xi:29 spectrum:2 continuous:1 latent:4 decade:1 why:1 table:13 promising:1 learn:4 transfer:28 robust:1 ignoring:1 unavailable:1 improving:2 bottou:1 complex:1 european:1 main:1 noise:2 fair:1 complementary:2 x1:2 fig:1 referred:2 ehinger:1 sub:4 lizard:1 explicit:3 concatenating:1 third:1 
donahue:1 grained:1 dumitru:1 cvprw:1 showing:2 sift:3 explored:1 dk:4 svm:2 evidence:1 fusion:1 intractable:1 albeit:1 corr:4 importance:1 texture:1 conditioned:1 disregarded:1 margin:1 chen:1 suited:2 entropy:1 fc:3 likely:2 gao:4 ez:1 visual:3 nchez:2 vinyals:1 kaiming:1 determines:2 extracted:6 goal:1 torii:1 towards:1 jeff:1 fisher:38 feasible:1 absence:1 exceptionally:1 operates:1 specie:1 experimental:2 fvs:5 support:2 guo:1 latter:2 inability:2 jianxiong:1 alexander:1 bush:1 oriol:1 trainable:1 scratch:1 |
5,907 | 6,344 | Adaptive optimal training of animal behavior
Ji Hyun Bak1,4 Jung Yoon Choi2,3 Athena Akrami3,5 Ilana Witten2,3 Jonathan W. Pillow2,3
1
Department of Physics, 2 Department of Psychology, Princeton University
3
Princeton Neuroscience Institute, Princeton University
4
School of Computational Sciences, Korea Institute for Advanced Study
5
Howard Hughes Medical Institute
[email protected], {jungchoi,aakrami,iwitten,pillow}@princeton.edu
Abstract
Neuroscience experiments often require training animals to perform tasks designed
to elicit various sensory, cognitive, and motor behaviors. Training typically involves
a series of gradual adjustments of stimulus conditions and rewards in order to bring
about learning. However, training protocols are usually hand-designed, relying
on a combination of intuition, guesswork, and trial-and-error, and often require
weeks or months to achieve a desired level of task performance. Here we combine
ideas from reinforcement learning and adaptive optimal experimental design to
formulate methods for adaptive optimal training of animal behavior. Our work
addresses two intriguing problems at once: first, it seeks to infer the learning rules
underlying an animal's behavioral changes during training; second, it seeks to
exploit these rules to select stimuli that will maximize the rate of learning toward a
desired objective. We develop and test these methods using data collected from rats
during training on a two-interval sensory discrimination task. We show that we can
accurately infer the parameters of a policy-gradient-based learning algorithm that
describes how the animal's internal model of the task evolves over the course of
training. We then formulate a theory for optimal training, which involves selecting
sequences of stimuli that will drive the animal's internal policy toward a desired
location in the parameter space. Simulations show that our method can in theory
provide a substantial speedup over standard training methods. We feel these results
will hold considerable theoretical and practical implications both for researchers in
reinforcement learning and for experimentalists seeking to train animals.
1 Introduction
An important first step in many neuroscience experiments is to train animals to perform a particular
sensory, cognitive, or motor task. In many cases this training process is slow (requiring weeks to
months) or difficult (resulting in animals that do not successfully learn the task). This increases the
cost of research and the time taken for experiments to begin; poorly trained animals (for example,
animals that incorrectly base their decisions on trial history instead of the current stimulus) may
introduce variability in experimental outcomes, reducing interpretability and increasing the risk of
false conclusions.
In this paper, we present a principled theory for the design of normatively optimal adaptive training
methods. The core innovation is a synthesis of ideas from reinforcement learning and adaptive
experimental design: we seek to reverse engineer an animal's internal learning rule from its observed
behavior in order to select stimuli that will drive learning as quickly as possible toward a desired
objective. Our approach involves estimating a model of the animal's internal state as it evolves over
training sessions, including both the current policy governing behavior and the learning rule used to
[Figure 1 graphics omitted; panel labels: (A) stimulus space (stimulus 1 vs. stimulus 2); (B) active training schematic (past observations of stimulus, weights, and response; the animal's learning rule; optimal stimulus selection; target weights).]
Figure 1: (A) Stimulus space for a 2AFC discrimination task, with optimal separatrix between correct "left" and "right" choices shown in red. Filled circles indicate a "reduced" set of stimuli (consisting of those closest to the decision boundary) which have been used in several prominent studies [3, 6, 9]. (B) Schematic of active training paradigm. We infer the animal's current weights $w_t$ and the parameters of its learning rule ("RewardMax"), and use them to determine an optimal stimulus $x_t$ for the current trial ("AlignMax"), where optimality is determined by the expected weight change towards the target weights $w_{goal}$.
modify this policy in response to feedback. We model the animal as using a policy-gradient based
learning rule [15], and show that parameters of this learning model can be successfully inferred from
a behavioral time series dataset collected during the early stages of training. We then use the inferred
learning rule to compute an optimal sequence of stimuli, selected adaptively on a trial-by-trial basis,
that will drive the animal's internal model toward a desired state. Intuitively, optimal training involves
selecting stimuli that maximally align the predicted change in model parameters with the trained
behavioral goal, which is defined as a point in the space of model parameters. We expect this research
to provide both practical and theoretical benefits: the adaptive optimal training protocol promises a
significantly reduced training time required to achieve a desired level of performance, while providing
new scientific insights into how and what animals learn over the course of the training period.
2 Modeling animal decision-making behavior
Let us begin by defining the ingredients of a generic decision-making task. In each trial, the animal
is presented with a stimulus x from a bounded stimulus space X, and is required to make a choice
y among a finite set of available responses Y . There is a fixed reward map r : {X, Y } ? R. It
is assumed that this behavior is governed by some internal model, or the psychometric function,
described by a set of parameters or weights w. We introduce the ?y-bar? notation y?(x) to indicate the
correct choice for the given stimulus x, and let Xy denote the ?stimulus group? for a given y, defined
as the set of all stimuli x that map to the same correct choice y = y?(x).
For concreteness, we consider a two-alternative forced-choice (2AFC) discrimination task where the stimulus vector for each trial, $x = (x_1, x_2)$, consists of a pair of scalar-valued stimuli that are to be compared [6, 8, 9, 16]. The animal should report either $x_1 > x_2$ or $x_1 < x_2$, indicating its choice with a left ($y = L$) or right ($y = R$) movement, respectively. This results in a binary response space, $Y = \{L, R\}$. We define the reward function $r(x, y)$ to be a Boolean function that indicates whether a stimulus-response pair corresponds to a correct choice (which should therefore be rewarded) or not:

$$r(x, y) = \begin{cases} 1 & \text{if } \{x_1 > x_2,\, y = L\} \text{ or } \{x_1 < x_2,\, y = R\}; \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
Figure 1A shows an example 2-dimensional stimulus space for such a task, with circles representing
a discretized set of possible stimuli X, and the desired separatrix (the boundary separating the two
stimulus groups XL and XR ) shown in red. In some settings, the experimenter may wish to focus on
some "reduced" set of stimuli, as indicated here by filled symbols [3, 6, 9].
We model the animal's choice behavior as arising from a Bernoulli generalized linear model (GLM), also known as the logistic regression model. The choice probabilities for the two possible responses at trial $t$ are given by

$$p_R(x_t, w_t) = \frac{1}{1 + \exp(-g(x_t)^\top w_t)}, \qquad p_L(x_t, w_t) = 1 - p_R(x_t, w_t) \quad (2)$$

where $g(x) = (1, x^\top)^\top$ is the input carrier vector, and $w = (b, a^\top)^\top$ is the vector of parameters or weights governing behavior. Here $b$ describes the animal's internal bias to choosing "right" ($y = R$), and $a = (a_1, a_2)$ captures the animal's sensitivity to the stimulus.¹
We may also incorporate the trial history as additional dimensions of the input governing the animal's behavior; humans and animals alike are known to exhibit history-dependent behavior in trial-based tasks [1, 3, 5, 7]. Based on some preliminary observations from animal behavior (see Supplementary Material for details), we encode the trial history as a compressed stimulus history, using the binary variable $\bar{y}(x)$ defined as $L = -1$ and $R = +1$. Taking into account the previous $d$ trials, the input carrier vector and the weight vector become:

$$g(x_t) \rightarrow (1,\, x_t^\top,\, \bar{y}(x_{t-1}),\, \cdots,\, \bar{y}(x_{t-d}))^\top, \qquad w_t \rightarrow (b,\, a^\top,\, h_1,\, \cdots,\, h_d). \quad (3)$$

The history dependence parameter $h_d$ describes the animal's tendency to stick to the correct answer from the previous trial ($d$ trials back). Because varying the number of history terms $d$ gives a family of psychometric models, determining the optimal $d$ is a well-defined model selection problem.
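As a minimal sketch of eqs. (2)-(3) (illustrative names and weight values, not from the paper):

```python
import numpy as np

def carrier(x, prev_correct):
    """g(x) with d history terms: prev_correct holds ybar for the
    previous d trials, coded L = -1, R = +1 (eq. 3)."""
    return np.concatenate([[1.0], x, prev_correct])

def p_right(x, prev_correct, w):
    """Probability of a rightward choice under the logistic GLM (eq. 2)."""
    return 1.0 / (1.0 + np.exp(-carrier(x, prev_correct) @ w))

# e.g. bias b, sensitivities (a1, a2), one history weight h1
w = np.array([0.1, -1.2, 1.2, 0.4])
print(p_right(np.array([0.3, -0.5]), np.array([+1.0]), w))
```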
3 Estimating time-varying psychometric function
In order to drive the animal's performance toward a desired objective, we first need a framework to
describe, and accurately estimate, the time-varying model parameters of the animal behavior, which
is fundamentally non-stationary while training is in progress.
3.1 Constructing the random walk prior
We assume that the single-step weight change at each trial $t$ follows a random walk, $w_t - w_{t-1} = \xi_t$, where $\xi_t \sim \mathcal{N}(0, \sigma_t^2)$, for $t = 1, \cdots, N$. Let $w_0$ be some prior mean for the initial weight. We assume $\sigma_2 = \cdots = \sigma_N = \sigma$, which is to believe that although the behavior is variable, the variability of the behavior is a constant property of the animal. We can write this more concisely using a state-space representation [2, 11], in terms of the vector of time-varying weights $w = (w_1, w_2, \cdots, w_N)^\top$ and its prior mean $\mathbf{w}_0 = w_0 \mathbf{1}$:

$$D(w - \mathbf{w}_0) = \xi \sim \mathcal{N}(0, \Sigma), \quad (4)$$

where $\Sigma = \mathrm{diag}(\sigma_1^2, \sigma^2, \cdots, \sigma^2)$ is the $N \times N$ covariance matrix, and $D$ is the sparse banded matrix with first row of an identity matrix and subsequent rows computing first order differences. Rearranging, the full random walk prior on the $N$-dimensional vector $w$ is

$$w \sim \mathcal{N}(\mathbf{w}_0, C), \quad \text{where} \quad C^{-1} = D^\top \Sigma^{-1} D. \quad (5)$$
In many practical cases there are multiple weights in the model, say $K$ weights. The full set of parameters should now be arranged into an $N \times K$ array of weights $\{w_{ti}\}$, where the two subscripts consistently indicate the trial number ($t = 1, \cdots, N$) and the type of parameter ($i = 1, \cdots, K$), respectively. This gives a matrix

$$W = \{w_{ti}\} = (w_{\cdot 1}, \cdots, w_{\cdot i}, \cdots, w_{\cdot K}) = (w_{1\cdot}, \cdots, w_{t\cdot}, \cdots, w_{N\cdot})^\top \quad (6)$$

where we denote the vector of all weights at trial $t$ as $w_{t\cdot} = (w_{t1}, w_{t2}, \cdots, w_{tK})^\top$, and the time series of the $i$-th weight as $w_{\cdot i} = (w_{1i}, w_{2i}, \cdots, w_{Ni})^\top$.

Let $w = \mathrm{vec}(W) = (w_{\cdot 1}^\top, \cdots, w_{\cdot K}^\top)^\top$ be the vectorization of $W$, a long vector with the columns of $W$ stacked together. Equation (5) still holds for this extended weight vector $w$, where the extended $D$ and $\Sigma$ are written as block diagonal matrices $D = \mathrm{diag}(D_1, D_2, \cdots, D_K)$ and $\Sigma = \mathrm{diag}(\Sigma_1, \Sigma_2, \cdots, \Sigma_K)$, respectively, where $D_i$ is the weight-specific $N \times N$ difference matrix and $\Sigma_i$ is the corresponding covariance matrix. Within a linear model one can freely renormalize the units of the stimulus space in order to keep the sizes of all weights comparable, and keep all $\Sigma_i$'s equal. We used a transformed stimulus space in which the center is at 0 and the standard deviation is 1.

¹ We use a convention in which a single-indexed tensor object is automatically represented as a column vector (in boldface notation), and the operation $(\cdot, \cdot, \cdots)$ concatenates objects horizontally.
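A sketch of this prior construction, building $C^{-1}$ of eq. (5) as a sparse block-diagonal matrix (our illustration; the default value of $\sigma_1$ is an assumed placeholder):

```python
import numpy as np
import scipy.sparse as sp

def rw_prior_inverse_cov(N, K, sigma, sigma1=10.0):
    """Builds C^{-1} = D^T Sigma^{-1} D of eqs. (4)-(5), block-diagonal
    over the K weight types."""
    # First-differences matrix whose first row is the identity row
    D1 = sp.eye(N, format="csc") - sp.eye(N, k=-1, format="csc")
    D = sp.block_diag([D1] * K, format="csc")
    v = np.tile(np.r_[sigma1**2, np.full(N - 1, sigma**2)], K)
    return D.T @ sp.diags(1.0 / v) @ D
```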
3.2 Log likelihood
PN
Let us denote the log likelihood of the observed data by L = t=1 Lt , where Lt = log p(yt |xt , wt? )
is the trial-specific log likelihood. Within the binomial model we have
Lt = (1 ? ?yt ,R ) log(1 ? pR (xt , wt? )) + ?yt ,R log pR (xt , wt? ).
(7)
Abbreviating pR (xt , wt? ) = pt and pL (xt , wt? ) = 1 ? pt , the trial-specific derivatives are solved to
be ?Lt /?wt? = (?yt ,R ? pt ) g(xt ) ? ?t and ? 2 Lt /?wt? ?wt? = ?pt (1 ? pt )g(xt )g(xt )> ? ?t .
Extension to the full weight vector is straightforward because distinct trials do not interact. Working
out with the indices, we may write
$$\frac{\partial L}{\partial w} = \mathrm{vec}\left([\gamma_1, \cdots, \gamma_N]^\top\right), \qquad
\frac{\partial^2 L}{\partial w^2} = \begin{pmatrix} M_{11} & M_{12} & \cdots & M_{1K} \\ M_{21} & M_{22} & & M_{2K} \\ \vdots & & \ddots & \vdots \\ M_{K1} & M_{K2} & \cdots & M_{KK} \end{pmatrix} \tag{8}$$
where the $(i,j)$-th block of the full second derivative matrix is an $N \times N$ diagonal matrix defined by
$M_{ij} = \partial^2 L / \partial w_{\cdot i} \partial w_{\cdot j} = \mathrm{diag}((\Lambda_1)_{ij}, \cdots, (\Lambda_t)_{ij}, \cdots, (\Lambda_N)_{ij})$. After this point, we can simplify
our notation such that $w_t = w_{t\cdot}$. The weight-type-specific $w_{\cdot i}$ will no longer appear.
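As a sketch of these trial-wise quantities (our naming: `G` stacks the rows $g(x_t)^\top$ and `W` the rows $w_{t\cdot}^\top$; neither name appears in the paper), the gradient pieces $\gamma_t$ and Hessian pieces $\Lambda_t$ of Eqs. (7)-(8) can be computed in a vectorized way:

```python
import numpy as np

def trial_derivatives(y, G, W):
    """gamma_t = (delta_{y_t,R} - p_t) g(x_t) and
    Lambda_t = -p_t (1 - p_t) g(x_t) g(x_t)^T for every trial.
    y: (N,) 0/1 choices (1 = R); G, W: (N, K) features and weights."""
    p = 1.0 / (1.0 + np.exp(-np.sum(G * W, axis=1)))   # p_R per trial
    gamma = (y - p)[:, None] * G                        # (N, K) gradient pieces
    Lam = -(p * (1 - p))[:, None, None] * np.einsum('ti,tj->tij', G, G)
    return gamma, Lam                                   # Lam: (N, K, K) Hessian pieces
```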
3.3 MAP estimate of w

The posterior distribution of $w$ is a combination of the prior and the likelihood (Bayes' rule):
$$\log p(w | D) \propto \frac{1}{2} \log \left| C^{-1} \right| - \frac{1}{2} (w - w_0)^\top C^{-1} (w - w_0) + L. \tag{9}$$
We can perform a numerical maximization of the log posterior using Newton's method (we used the
Matlab function fminunc), knowing its gradient $j$ and the Hessian $H$ explicitly:
$$j = \frac{\partial (\log p)}{\partial w} = -C^{-1}(w - w_0) + \frac{\partial L}{\partial w}, \qquad
H = \frac{\partial^2 (\log p)}{\partial w^2} = -C^{-1} + \frac{\partial^2 L}{\partial w^2}. \tag{10}$$
The maximum a posteriori (MAP) estimate $\hat{w}$ is where the gradient vanishes, $j(\hat{w}) = 0$. If we work
with a Laplace approximation, the posterior covariance is $\mathrm{Cov} = -H^{-1}$ evaluated at $w = \hat{w}$.
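A minimal dense-matrix sketch of this Newton ascent follows (the paper used Matlab's fminunc; the callables `grad_L` and `hess_L` are hypothetical stand-ins returning $\partial L/\partial w$ and $\partial^2 L/\partial w^2$, and a practical implementation would exploit the sparse banded structure):

```python
import numpy as np

def map_newton(grad_L, hess_L, C_inv, w0, max_iter=100, tol=1e-9):
    """Maximize the log posterior of Eq. (9) via Newton's method,
    using the gradient and Hessian of Eq. (10).  Returns the MAP
    estimate and H, whose negative inverse is the Laplace covariance."""
    w = w0.copy()
    for _ in range(max_iter):
        j = -C_inv @ (w - w0) + grad_L(w)   # Eq. (10), gradient
        H = -C_inv + hess_L(w)              # Eq. (10), Hessian
        step = np.linalg.solve(H, j)        # Newton step H^{-1} j
        w = w - step
        if np.linalg.norm(step) < tol:
            break
    return w, H
```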
3.4 Hyperparameter optimization

The model hyperparameters consist of $\sigma_1$, governing the variance of $w_1$, the weights on the first trial
of a session, and $\sigma$, governing the variance of the trial-to-trial diffusive change of the weights. To set
these hyperparameters, we fixed $\sigma_1$ to a large default value, and used maximum marginal likelihood
or "evidence optimization" over a fixed grid of $\sigma$ [4, 11, 13]. The marginal likelihood is given by:
$$p(y|x,\sigma) = \int dw\, p(y|x,w)\, p(w|\sigma) = \left. \frac{p(y|x,w)\, p(w|\sigma)}{p(w|x,y,\sigma)} \right. \approx \frac{\exp(L)\cdot \mathcal{N}(w|w_0, C)}{\mathcal{N}(w|\hat{w}, -H^{-1})}, \tag{11}$$
where $\hat{w}$ is the MAP estimate of the entire vector of time-varying weights and $H$ is the Hessian of the
log-posterior over $w$ at its mode. This formula for marginal likelihood results from the well-known
Laplace approximation to the posterior [11, 12]. We found the estimate to be insensitive to $\sigma_1$ so
long as it is sufficiently large.
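In code, the Laplace evidence of Eq. (11) reduces to three terms evaluated at the MAP point. This is a sketch under our naming (`log_prior` must return $\log \mathcal{N}(w|w_0, C)$ for the given $\sigma$; evidence optimization then scans a grid of $\sigma$ values and keeps the maximizer):

```python
import numpy as np

def laplace_log_evidence(w_hat, H, log_lik, log_prior):
    """log p(y|x,sigma) ~= L(w_hat) + log N(w_hat|w0,C)
                           + (N/2) log(2*pi) - (1/2) log|-H|."""
    N = len(w_hat)
    _, logdet_negH = np.linalg.slogdet(-H)   # H is negative definite at the mode
    return (log_lik(w_hat) + log_prior(w_hat)
            + 0.5 * N * np.log(2 * np.pi) - 0.5 * logdet_negH)
```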
3.5 Application

We tested our method using a simulation, drawing binary responses from a stimulus-free GLM
$y_t \sim \mathrm{logistic}(w_t)$, where $w_t$ was diffused as $w_{t+1} \sim \mathcal{N}(w_t, \sigma^2)$ with a fixed hyperparameter
$\sigma$. Given the time series of responses $\{y_t\}$, our method captures the true $\sigma$ through evidence
maximization, and provides a good estimate of the time-varying $w = \{w_t\}$ (Figure 2A). Whereas the
estimate of the weight $w_t$ is robust over independent realizations of the responses, the instantaneous
weight changes $\Delta w = w_{t+1} - w_t$ are not reproducible across realizations (Figure 2B). Therefore it is
difficult to analyze the trial-to-trial weight changes directly from real data, where only one realization
of the learning process is accessible.
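A sketch of this simulation (our naming), useful for reproducing the setting of Figures 2A-2B:

```python
import numpy as np

def simulate_diffusing_learner(N, sigma, w_init=0.0, seed=0):
    """Stimulus-free GLM of Section 3.5: w_t is a Gaussian random walk
    and each binary response is y_t ~ Bernoulli(logistic(w_t))."""
    rng = np.random.default_rng(seed)
    w = np.empty(N)
    w[0] = w_init
    for t in range(1, N):
        w[t] = rng.normal(w[t - 1], sigma)   # diffusion step
    p = 1.0 / (1.0 + np.exp(-w))             # P(y_t = 1)
    y = (rng.random(N) < p).astype(int)
    return w, y
```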
Figure 2: Estimating time-varying model parameters. (A-B) Simulation: (A) Our method captures
the true underlying variability $\sigma$ by maximizing evidence. (B) Weight estimates are accurate and
robust over independent realizations of the responses, but weight changes across realizations are
not reproducible. (C-E) From the choice behavior of a rat under training, we could (C) estimate the
time-varying weights of its psychometric model, and (D) determine the characteristic variability by
evidence maximization. (E) The number of history terms to be included in the model was determined
by comparing the BIC, using the early/mid/late parts of the rat dataset. Because log-likelihood is
calculated up to a constant normalization, both log-evidence and BIC are shown in relative values.
We also applied our method to an actual experimental dataset from rats during the early training
period for a 2AFC discrimination task, as introduced in Section 2 (using classical training methods
[3], see Supplementary Material for detailed description). We estimated the time-varying weights
of the GLM (Figure 2C), and estimated the characteristic variability of the rat behavior $\sigma_{rat} = 2^{-7}$
by maximizing marginal likelihood (Figure 2D). To determine the length $d$ of the trial history
dependence, we fit models with varying $d$ and used the Bayesian Information Criterion $\mathrm{BIC}(d) =
-2 \log L(d) + K(d) \log N$ (Figure 2E). We found that animal behavior exhibits long-range history
dependence at the beginning of training, but this dependence becomes shorter as training progresses.
Near the end of the dataset, the behavior of the rat is best described by $d_{rat} = 1$ (single-trial history
dependence), and we use this value for the remainder of our analyses.
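The model-selection loop of Figure 2E is a short scan over history depths. In this sketch, `fit_model` is a hypothetical stand-in for refitting the GLM at each depth $d$:

```python
import numpy as np

def best_history_depth(max_d, fit_model, N):
    """BIC(d) = -2 log L(d) + K(d) log N; smaller is better.
    fit_model(d) returns (max log-likelihood, number of parameters K)."""
    bic = [-2.0 * logL + K * np.log(N)
           for logL, K in (fit_model(d) for d in range(max_d + 1))]
    return int(np.argmin(bic)), bic
```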
4 Incorporating learning
The fact that animals show improved performance, as training progresses, suggests that we need a
non-random component in our model that accounts for learning. We will first introduce a simple
model of weight change based on the ideas from reinforcement learning, then discuss how we can
incorporate the learning model into our time-varying estimate method.
A good candidate model for animal learning is the policy gradient update from reinforcement learning,
for example as in [15]. There are debates as to whether animals actually learn using policy-based
methods, but it is difficult to define a reasonable value function that is consistent with our preliminary
observations of rat behavior (e.g. win-stay/lose-switch). A recent experimental study supports the
use of policy-based models in human learning behavior [10].
4.1 RewardMax model of learning (policy gradient update)

Here we consider a simple model of learning, in which the learner attempts to update its policy (here
the weight parameters in the model) to maximize the expected reward. Given some fixed reward
function $r(x, y)$, the expected reward at the next-upcoming trial $t$ is defined as
$$\rho(w_t) = \Big\langle \big\langle r(x_t, y_t) \big\rangle_{p(y_t | x_t, w_t)} \Big\rangle_{P_X(x_t)} \tag{12}$$
where $P_X(x_t)$ reflects the subject animal's knowledge as to the probability that a given stimulus $x$
will be presented at trial $t$, which may be dynamically updated. One way to construct the empirical
Figure 3: Estimating the learning model. (A-B) Simulated learner with $\alpha_{sim} = \sigma_{sim} = 2^{-7}$. (A) The
four weight parameters of the simulated model are successfully recovered by our MAP estimate with
the learning effect incorporated, where (B) the learning rate $\alpha$ is accurately determined by evidence
maximization. (C) Evidence maximization analysis on the rat training dataset reveals $\alpha_{rat} = 2^{-6}$
and $\sigma_{rat} = 2^{-10}$. Displayed is a color plot of log evidence on the hyperparameter plane (in relative
values). The optimal set of hyperparameters is marked with a star.
$P_X$ is to accumulate the stimulus statistics up to some timescale $\tau \geq 0$; here we restrict to the simplest
limit $\tau = 0$, where only the most recent stimulus is remembered. That is, $P_X(x_t) = \delta(x_t - x_{t-1})$.
In practice $\rho$ can be evaluated at $w_t = w_{t-1}$, the posterior mean from previous observations.

Under the GLM (2), the choice probability is $p(y|x, w) = 1/(1 + \exp(-y\, g(x)^\top w))$, where
$L = -1$ and $R = +1$, trial index suppressed. Therefore the expected reward can be written out
explicitly, as well as its gradient with respect to $w$:
$$\frac{\partial \rho}{\partial w} = \sum_{x \in X} P_X(x)\, f(x)\, p_R(x, w)\, p_L(x, w)\, g(x) \tag{13}$$
where we define the effective reward function $f(x) \equiv \sum_{y \in Y} y\, r(x, y)$ for each stimulus. In the
spirit of the policy gradient update, we consider the RewardMax model of learning, which assumes
that the animal will try to climb up the gradient of the expected reward by
$$\Delta w_t = \alpha \left. \frac{\partial \rho}{\partial w} \right|_{t} \equiv v(w_t, x_t; \theta), \tag{14}$$
where $\Delta w_t = (w_{t+1} - w_t)$. In this simplest setting, the learning rate $\alpha$ is the only learning
hyperparameter $\theta = \{\alpha\}$. The model can be extended by incorporating more realistic aspects of
learning, such as the non-isotropic learning rate, the rate of weight decay (forgetting), or the skewness
between experienced and unexperienced rewards. For more discussion, see Supplementary Material.
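Equations (13)-(14) translate directly to code. The sketch below (our naming: `G` stacks the rows $g(x)^\top$ over candidate stimuli) returns the expected RewardMax weight change:

```python
import numpy as np

def rewardmax_step(w, G, f, P_X, alpha):
    """Expected RewardMax update, Eqs. (13)-(14).
    G: (S, K) feature rows g(x); f: (S,) effective rewards;
    P_X: (S,) stimulus beliefs; alpha: learning rate."""
    p_R = 1.0 / (1.0 + np.exp(-G @ w))
    grad_rho = (P_X * f * p_R * (1.0 - p_R)) @ G   # Eq. (13)
    return alpha * grad_rho                         # Eq. (14): alpha * d(rho)/dw
```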
4.2 Random walk prior with drift

Because our observation of a given learning process is stochastic and the estimate of the weight
change is not robust (Figure 2B), it is difficult to test the learning rule (14) on any individual dataset.
However, we can still assume that the learning rule underlies the observed weight changes as
$$\langle \Delta w \rangle = v(w, x; \theta) \tag{15}$$
where the average $\langle \cdot \rangle$ is over hypothetical repetitions of the same learning process. This effect of
non-random learning can be incorporated into our random walk prior as a drift term, to make a fully
Bayesian model for an imperfect learner. The new weight update prior is written as $D(w - w_0) =
v + \epsilon$, where $v$ is the "drift velocity" and $\epsilon \sim \mathcal{N}(0, \Sigma)$ is the noise. The modified prior is
$$w - D^{-1} v \sim \mathcal{N}(w_0, C), \qquad C^{-1} = D^\top \Sigma^{-1} D. \tag{16}$$
Equations (9-10) can be re-written with the additional term $D^{-1} v$. For the RewardMax model
$v = \alpha\, \partial\rho/\partial w$, in particular, the first and second derivatives of the modified log posterior can be
written out analytically. Details can be found in Supplementary Material.
4.3 Application

To test the model with drift, a simulated RewardMax learner was generated, based on the same task
structure as in the rat experiment. The two hyperparameters $\{\alpha_{sim}, \sigma_{sim}\}$ were chosen such that the
resulting time series data is qualitatively similar to the rat data. The simulated learning model can be
recovered by maximizing the evidence (11), now with the learning hyperparameter $\alpha$ as well as the
variability $\sigma$. The solution accurately reflects the true $\alpha_{sim}$, shown where $\sigma$ is fixed at the true $\sigma_{sim}$
(Figures 3A-3B). Likewise, the learning model of a real rat was obtained by performing a grid search
on the full hyperparameter plane $\{\alpha, \sigma\}$. We get $\alpha_{rat} = 2^{-6}$ and $\sigma_{rat} = 2^{-10}$ (Figure 3C).²

Can we determine whether the rat's behavior is in a regime where the effect of learning dominates the
effect of noise, or vice versa? The obtained values of $\alpha$ and $\sigma$ depend on our choice of units, which
is arbitrary; more precisely, $\alpha \sim [w^2]$ and $\sigma \sim [w]$, where $[w]$ scales as the weight. Dimensional
analysis suggests a (dimensionless) order parameter $\kappa = \alpha/\sigma^2$, where $\kappa \gg 1$ would indicate a regime
where the effect of learning is larger than the effect of noise. Our estimate of the hyperparameters
gives $\kappa_{rat} = \alpha_{rat}/\sigma_{rat}^2 \approx 4$, which leaves us optimistic.
5 AlignMax: Adaptive optimal training

Whereas the goal of the learner/trainee is (presumably) to maximize the expected reward, the trainer's
goal is to drive the behavior of the trainee as close as possible to some fixed model that corresponds
to a desirable, yet hypothetically achievable, performance. Here we propose a simple algorithm that
aims to align the expected model parameter change of the trainee $\langle \Delta w_t \rangle = v(w_t, x_t; \theta)$ towards a
fixed goal $w_{goal}$. We can summarize this in an AlignMax training formula
$$x_{t+1} = \underset{x}{\mathrm{argmax}}\ (w_{goal} - w_t)^\top \langle \Delta w_t \rangle. \tag{17}$$
Looking at Equations (13), (14) and (17), it is worth noting that $g(x)$ puts a heavier weight on more
distinguishable or "easier" stimuli (exploitation), while $p_L p_R$ puts more weight on more difficult
stimuli, with more uncertainty (exploration); an exploitation-exploration tradeoff emerges naturally.
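The AlignMax rule (17) then amounts to scoring every candidate stimulus by how well its expected weight change aligns with the goal direction. This sketch reuses the `rewardmax_step` sketch above and omits the handling of the history variable discussed in footnote 3:

```python
import numpy as np

def alignmax_next(w, w_goal, G, f, alpha):
    """Eq. (17): pick the stimulus maximizing (w_goal - w)^T <Delta w>."""
    scores = []
    for s in range(G.shape[0]):
        P_point = np.zeros(G.shape[0])
        P_point[s] = 1.0                              # learner sees stimulus s next
        dw = rewardmax_step(w, G, f, P_point, alpha)  # expected change, Eq. (14)
        scores.append((w_goal - w) @ dw)
    return int(np.argmax(scores))
```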
We tested the AlignMax training protocol³ using a simulated learner with fixed hyperparameters
$\alpha_{sim} = 0.005$ and $\sigma_{sim} = 0$, using $w_{goal} = (b, a_1, a_2, h)_{goal} = (0, -10, 10, 0)$ in the current
paradigm. We chose a noise-free learner for clear visualization, but the algorithm works as well
in the presence of noise ($\sigma > 0$, see Supplementary Material for a simulated noisy learner). As
expected, our AlignMax algorithm achieves a much faster training compared to the usual algorithm
where stimuli are presented randomly (Figure 4). The task performance was measured in terms of the
success rate, the expected reward (12), and the Kullback-Leibler (KL) divergence. The KL divergence
is defined as $D_{KL} = \sum_{x \in X} P_X(x) \sum_{y \in Y} \bar{p}_y(x) \log(\bar{p}_y(x)/p_y(x))$ where $\bar{p}_y(x) = r(x, y)$ is the
"correct" psychometric function, and a smaller value of $D_{KL}$ indicates a behavior that is closer to
the ideal. Both the expected reward and the KL divergence were evaluated using a uniform stimulus
distribution $P_X(x)$. The low success rate is a distinctive feature of the adaptive training algorithm,
which selects adversarial stimuli such that the "lazy flukes" are actively prevented (e.g. such that a
left-biased learner wouldn't get thoughtless rewards from the left side). It is notable that the AlignMax
training eliminates the bias $b$ and the history dependence $h$ (the two stimulus-independent parameters)
much more quickly compared to the conventional (random) algorithm, as shown in Figure 4A.
Two general rules were observed from the optimal trainer. First, while the history dependence $h$ is
non-zero, AlignMax alternates between different stimulus groups in order to suppress the win-stay
behavior; once $h$ vanishes, AlignMax tries to neutralize the bias $b$ by presenting more stimuli from the
"non-preferred" stimulus group, yet being careful not to re-install the history dependence. For example,
it would give LLRLLR... for an R-biased trainee. This suggests that a pre-defined, non-adaptive
de-biasing algorithm may be problematic, as it may reinforce an unwanted history dependence (see
Supp. Figure S1). Second, AlignMax exploits the full stimulus space by starting from some "easier"
stimuli in the early stage of training (farther away from the true separatrix $x_1 = x_2$), and presenting
progressively more difficult stimuli (closer to the separatrix) as the trainee performance improves.
This suggests that using the reduced stimulus space may be suboptimal for training purposes. Indeed,
training was faster on the full stimulus plane than on the reduced set (Figures 4B-4C).
²Based on a 2000-trial subset of the rat dataset.
³When implementing the algorithm within the current task paradigm, because of the way we model the
history variable as part of the stimulus, it is important to allow the algorithm to choose up to $d+1$ future stimuli,
in this case as a pair $\{x_{t+1}, x_{t+2}\}$, in order to generate a desired pattern of trial history.
Figure 4: AlignMax training (solid lines) compared to a random training (dashed lines), for a
simulated noise-free learner. (A) Weights evolving as training progresses, shown from a simulated
training on the full stimulus space shown in Figure 1A. (B-C) Performances measured in terms of
the success rate (moving average over 500 trials), the expected reward and the KL divergence. The
simulated learner was trained either (B) in the full stimulus space, or (C) in the reduced stimulus
space. The low success rate is a natural consequence of the active training algorithm, which tends to
select adversarial stimuli to facilitate learning.
6 Discussion
In this work, we have formulated a theory for designing an optimal training protocol of animal
behavior, which works adaptively to drive the current internal model of the animal toward a desired,
pre-defined objective state. To this end, we have first developed a method to accurately estimate the
time-varying parameters of the psychometric model directly from the animal's behavioral time series,
while characterizing the intrinsic variability $\sigma$ and the learning rate $\alpha$ of the animal by empirical
Bayes. Interestingly, a dimensional analysis based on our estimate of the learning model suggests
that the rat indeed lives in a regime where the effect of learning is stronger than the effect of noise.
Our method to infer the learning model from data is different from many conventional approaches of
inverse reinforcement learning, which also seek to infer the underlying learning rules from externally
observable behavior, but usually rely on the stationarity of the policy or the value function. On the
contrary, our method works directly on the non-stationary behavior. Our technical contribution is
twofold: first, building on the existing framework for estimation of state-space vectors [2, 11, 14],
we provide a case in which parameters of a non-stationary model are successfully inferred from real
time-series data; second, we develop a natural extension of the existing Bayesian framework where
non-random model change (learning) is incorporated into the prior information.
The AlignMax optimal trainer provides important insights into the general principles of effective
training, including a balanced strategy to neutralize both the bias and the history dependence of the
animal, and a dynamic tradeoff between difficult and easy stimuli that makes efficient use of a broad
range of the stimulus space. There are, however, two potential issues that may be detrimental to the
practical success of the algorithm: First, the animal may suffer a loss of motivation due to the low
success rate, which is a natural consequence of the adaptive training algorithm. Second, as with any
model-based approach, mismatch of either the psychometric model (logistic, or any generalization
model) or the learning model (RewardMax) may result in poor performances of the training algorithm.
These issues are subject to tests on real training experiments. Otherwise, the algorithm is readily
applicable. We expect it to provide both a significant reduction in training time and a set of reliable
measures to evaluate the training progress, powered by direct access to the internal learning model of
the animal.
Acknowledgments
JHB was supported by the Samsung Scholarship and the NSF PoLS program. JWP was supported by grants
from the McKnight Foundation, Simons Collaboration on the Global Brain (SCGB AWD1004351) and the NSF
CAREER Award (IIS-1150186). We thank Nicholas Roy for the careful reading of the manuscript.
References
[1] A. Abrahamyan, L. L. Silva, S. C. Dakin, M. Carandini, and J. L. Gardner. Adaptable history
biases in human perceptual decisions. Proc. Nat. Acad. Sci., 113(25):E3548–E3557, 2016.
[2] Y. Ahmadian, J. W. Pillow, and L. Paninski. Efficient Markov chain Monte Carlo methods for
decoding neural spike trains. Neural Computation, 23(1):46–96, 2011.
[3] A. Akrami, C. Kopec, and C. Brody. Trial history vs. sensory memory - a causal study of
the contribution of rat posterior parietal cortex (PPC) to history-dependent effects in working
memory. Society for Neuroscience Abstracts, 2016.
[4] C. M. Bishop. Pattern Recognition and Machine Learning. Information science and statistics.
Springer, 2006.
[5] L. Busse, A. Ayaz, N. T. Dhruv, S. Katzner, A. B. Saleem, M. L. Schölvinck, A. D. Zaharia,
and M. Carandini. The detection of visual contrast in the behaving mouse. J. Neurosci.,
31(31):11351–11361, 2011.
[6] A. Fassihi, A. Akrami, V. Esmaeili, and M. E. Diamond. Tactile perception and working
memory in rats and humans. Proc. Nat. Acad. Sci., 111(6):2331–2336, 2014.
[7] I. Fründ, F. A. Wichmann, and J. H. Macke. Quantifying the effect of intertrial dependence on
perceptual decisions. J. Vision, 14(7):9, 2014.
[8] D. M. Green and J. A. Swets. Signal Detection Theory and Psychophysics. Wiley, New York,
1966.
[9] A. Hernández, E. Salinas, R. García, and R. Romo. Discrimination in the sense of flutter: new
psychophysical measurements in monkeys. J. Neurosci., 17(16):6391–6400, 1997.
[10] J. Li and N. D. Daw. Signals in human striatum are appropriate for policy update rather than
value prediction. J. Neurosci., 31(14):5504–5511, 2011.
[11] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein,
and W. Wu. A new look at state-space models for neural data. J. Comp. Neurosci., 29(1):107–126,
2010.
[12] J. W. Pillow, Y. Ahmadian, and L. Paninski. Model-based decoding, information estimation, and
change-point detection techniques for multineuron spike trains. Neural Comput., 23(1):1–45,
Jan 2011.
[13] M. Sahani and J. F. Linden. Evidence optimization techniques for estimating stimulus-response
functions. In S. Becker, S. Thrun, and K. Obermayer, editors, Adv. Neur. Inf. Proc. Sys. 15,
pages 317–324. MIT Press, 2003.
[14] A. C. Smith, L. M. Frank, S. Wirth, M. Yanike, D. Hu, Y. Kubota, A. M. Graybiel, W. A.
Suzuki, and E. N. Brown. Dynamic analysis of learning in behavioral experiments. J. Neurosci.,
24(2):447–461, 2004.
[15] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement
learning with function approximation. In S. A. Solla, T. K. Leen, and K. Müller, editors, Adv.
Neur. Inf. Proc. Sys. 12, pages 1057–1063. MIT Press, 2000.
[16] C. W. Tyler and C.-C. Chen. Signal detection theory in the 2AFC paradigm: Attention, channel
uncertainty and probability summation. Vision Research, 40(22):3121–3144, 2000.
Finding significant combinations of features in the presence of categorical covariates
Laetitia Papaxanthos*, Felipe Llinares-López*, Dean Bodenham, Karsten Borgwardt
Machine Learning and Computational Biology Lab
D-BSSE, ETH Zurich
*Equally contributing authors.
Abstract
In high-dimensional settings, where the number of features p is much larger than the
number of samples n, methods that systematically examine arbitrary combinations
of features have only recently begun to be explored. However, none of the current
methods is able to assess the association between feature combinations and a target
variable while conditioning on a categorical covariate. As a result, many false
discoveries might occur due to unaccounted confounding effects.
We propose the Fast Automatic Conditional Search (FACS) algorithm, a significant
discriminative itemset mining method which conditions on categorical covariates
and only scales as O(k log k), where k is the number of states of the categorical
covariate. Based on the Cochran-Mantel-Haenszel Test, FACS demonstrates superior speed and statistical power on simulated and real-world datasets compared to
the state of the art, opening the door to numerous applications in biomedicine.
1 Introduction
In the last 10 years, the amount of data available has been growing at an unprecedented rate. However, in
many application domains, such as computational biology and healthcare, the number of features is
growing much faster than typical sample sizes. Therefore, statistical inference in high-dimensional
spaces has become a tool of the utmost importance for practitioners in those fields. Despite the great
success of approaches based on sparsity-inducing regularizers [16, 2], the development of methods to
systematically explore arbitrary combinations of features and assess their statistical association with
a target of interest has been less studied. Exploring all combinations of $p$ features is equivalent to
handling a $2^p$-dimensional space, thus combinatorial feature discovery exacerbates the challenges for
statistical inference in high-dimensional spaces even for moderate $p$.
Under the assumption that features and targets are binary random variables, recent work in the field
of significant discriminative itemset mining offers tools to solve the computational and statistical
challenges incurred by combinatorial feature discovery. However, all state-of-the-art approaches [15,
10, 13, 7, 8] share a key limitation: no method exists to assess the conditional association between
feature combinations and the target. The ability to condition the associations on an observed covariate
is fundamental to correct for confounding effects. If unaccounted for, one may find many false
positives that are actually associated with the covariate and not the class of interest [17]. For example,
in medical case/control association studies, it is common to search for combinations of genetic
variants that are associated with a disease of interest. In this setting, the class labels are the health
status of individuals, sick or healthy. The features represent binary genetic variants, encoded as 1
if the variant is altered and as 0 if not. Often, in high-order association studies, a subset of genetic
variants are combined to form a binary variable whose value is 1 if the subset only contains altered
genetic variants and is 0 otherwise. A subset of genetic variants is associated with the class label if the
frequencies of altered combinations in each class are statistically different. However, it is often the
case that the studied samples belong to several subpopulations, for example African-American, East
Asian or European Caucasian, which show differences in the prevalence of some altered combinations
of genetic variants because of systematic ancestry differences. When, additionally, the subpopulations
clusters are unevenly distributed across classes, it can result in false associations to the disease of
interest [12]. This is the reason why it is necessary to model ancestry differences between cases and
controls in the presence of population structure or to correct for covariates in more general settings.
Hence our goal in this article is to present the first approach to significant discriminative itemset
mining that allows one to correct for a confounding categorical covariate.
To reach this goal, we present the novel algorithm FACS, which enables significant discriminative itemset mining with categorical covariates through the Cochran-Mantel-Haenszel test [9] in
O(k log k) time, where k is the number of states of the categorical covariate, compared to the standard
implementation which is exponential in k.
The rest of this article is organized as follows: In Section 2 we define the problem to be solved
and introduce the main theoretical concepts from related work that FACS is based on, namely the
Cochran-Mantel-Haenszel test and Tarone's testability criterion. In Section 3, we describe in detail
our contribution, the FACS algorithm and its efficient implementation. Finally, Section 4 validates the
performance of our method on a set of simulated and biomedical datasets.
2 Problem statement and related work
In this section we introduce the necessary background, notation and terminology for the remainder
of this article. First, in Section 2.1 we rigorously define the problem we solve in this paper. Next,
in Sections 2.2 and 2.3 we describe two key elements on which our method is based: the Cochran-Mantel-Haenszel (CMH) test and Tarone's testability criterion.
2.1 Discovering significant feature combinations in the presence of a categorical covariate

We consider a dataset of $n$ observations $D = \{(u_i, y_i, c_i)\}_{i=1}^{n}$, where the $i$-th observation consists of:
(I) a feature vector $u_i$ consisting of $p$ binary features, $u_{i,j} \in \{0, 1\}$ for $j = 1, \ldots, p$; (II) a binary class
label, $y_i \in \{0, 1\}$; and (III) a categorical covariate $c_i$, which has $k$ categories, i.e. $c_i \in \{1, 2, \ldots, k\}$.
Given any subset of features $S \subseteq \{1, 2, \ldots, p\}$, we define its induced feature combination for the $i$-th
observation as $z_{i,S} = \prod_{j \in S} u_{i,j}$, such that $z_{i,S}$ takes value 1 if and only if $u_{i,j} = 1$ for all features in
$S$. Now, we use $Z_S$ to denote the feature combination induced by $S$, of which $z_{i,S}$ is the realization
for the $i$-th observation. Similarly, we use $Y$ to denote the label, and $C$ to denote the covariate, of
which $y_i$ and $c_i$ are realizations, respectively, for $i = 1, 2, \ldots, n$. Below we use the standard notation
$A \perp\!\!\!\perp B$ to denote "$A$ is statistically independent of $B$".
Typically, significant discriminative itemset mining aims to find all feature subsets $S$ for which a
statistical association test rejects the null hypothesis, namely $Z_S \perp\!\!\!\perp Y$, after a rigorous correction for
multiple hypothesis testing. However, for any feature subset such that $Z_S \not\perp\!\!\!\perp Y$ but $Z_S \perp\!\!\!\perp Y \mid C$, the
association between $Z_S$ and $Y$ is exclusively mediated by the covariate $C$, which acts in this case as
a confounder creating spurious associations.

Our goal: In this work, the aim is to find all feature subsets $S$ for which a statistical association
test rejects the null hypothesis $Z_S \perp\!\!\!\perp Y \mid C$, thus allowing to correct for a confounding categorical
covariate while keeping the computational efficiency, statistical power and the ability to correct for
multiple hypothesis testing of existing methods.

In the remainder of this section we will introduce two fundamental concepts our work relies upon.
The first one is the Cochran-Mantel-Haenszel (CMH) test, which offers a principled way to test if a
feature combination $Z_S$ is conditionally dependent on the class labels $Y$ given the covariate $C$, that
is, to test the null hypothesis $Z_S \perp\!\!\!\perp Y \mid C$. The second concept is Tarone's testability criterion, which
allows a correction for multiple hypothesis testing while retaining large statistical power, in scenarios
such as ours where billions or trillions of association tests must be performed.
Tarone's testability criterion has only been successfully applied to unconditional association tests,
such as Fisher's exact test [6] or Pearson's $\chi^2$ test [11]. Thus, the state-of-the-art in significant
discriminative itemset mining forces one to choose between: (a) using Bonferroni's correction,
resulting in very low statistical power or an arbitrary limit in the cardinality of feature subsets (e.g.
[18]), or (b) using Tarone's testability criterion, losing the ability to account for covariates and
resulting in potentially many confounded patterns being deemed significant [15, 13, 7, 8].
Our contribution: In this paper, we propose FACS, a novel algorithm that allows applying Tarone's
testability criterion to the CMH test, allowing to correct for a categorical covariate in significant
discriminative itemset mining for the first time. FACS will be introduced in detail in Section 3.
2.2 Conditional association testing with the Cochran-Mantel-Haenszel (CMH) test
To test if $Z_S \perp\!\!\!\perp Y \mid C$, the CMH test [9] arranges the $n$ realisations of $\{(z_{i,S}, y_i, c_i)\}_{i=1}^{n}$ into $k$
distinct $2 \times 2$ contingency tables, one table for each possible value of the covariate $c$, as:

Variables  | $z_S = 1$           | $z_S = 0$                     | Row totals
$y = 1$    | $a_{S,j}$           | $n_{1,j} - a_{S,j}$           | $n_{1,j}$
$y = 0$    | $x_{S,j} - a_{S,j}$ | $n_{2,j} - x_{S,j} + a_{S,j}$ | $n_{2,j}$
Col totals | $x_{S,j}$           | $n_j - x_{S,j}$               | $n_j$

where: (I) $n_j$ is the number of observations with $c = j$, $n_{1,j}$ of which have class label $y = 1$ and $n_{2,j}$
of which have class label $y = 0$; (II) $x_{S,j}$ is the number of observations with $c = j$ and $z_{i,S} = 1$;
(III) $a_{S,j}$ is the number of observations with $c = j$, class label $y = 1$ and $z_{i,S} = 1$. Based on
$\{n_j, n_{1,j}, x_{S,j}, a_{S,j}\}_{j=1}^{k}$, a p-value $p_S$ for feature combination $Z_S$ is computed as:
$$p_S = 1 - F_{\chi_1^2}\left( \frac{\left( \sum_{j=1}^{k} \left( a_{S,j} - \frac{x_{S,j}\, n_{1,j}}{n_j} \right) \right)^2}{\sum_{j=1}^{k} \frac{n_{1,j}}{n_j} \left( 1 - \frac{n_{1,j}}{n_j} \right) x_{S,j} \left( 1 - \frac{x_{S,j}}{n_j} \right)} \right) \tag{1}$$
where $F_{\chi_1^2}(\cdot)$ is the distribution function of a $\chi^2$ random variable with 1 degree of freedom. Finally,
the feature combination $Z_S$ and its corresponding feature subset $S$ will be deemed significantly
associated if the p-value $p_S$ falls below a corrected significance threshold $\delta$, that is, if $p_S \leq \delta$.

The CMH test can be understood as a form of meta-analysis applied to $k$ disjoint datasets $\{D_j\}_{j=1}^{k}$,
where $D_j = \{(u_i, y_i) \mid c_i = j\}$ contains only observations for which the covariate $c$ takes value $j$.
For confounded feature combinations, the association might be large in the entire dataset $D$, but small
for conditional datasets $D_j$. Thus, the CMH test will not deem such feature combinations significant.
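As a concrete reference, Eq. (1) can be evaluated in a few lines. This is a minimal sketch assuming NumPy/SciPy; the array names are ours and no continuity correction is applied:

```python
import numpy as np
from scipy.stats import chi2

def cmh_pvalue(a, x, n1, n2):
    """CMH p-value of Eq. (1).  All arguments are length-k arrays holding
    a_{S,j}, x_{S,j}, n_{1,j} and n_{2,j} for the k contingency tables."""
    n = n1 + n2
    numerator = np.sum(a - x * n1 / n) ** 2
    denominator = np.sum((n1 / n) * (1 - n1 / n) * x * (1 - x / n))
    return 1.0 - chi2.cdf(numerator / denominator, df=1)
```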
2.3 The multiple testing problem in discriminative itemset mining
In our setup, one must perform $2^p - 1$ association tests, one for each possible subset of features. Even
for moderate $p$, this leads to an enormous number of tests, resulting in a large multiple hypothesis
testing problem. To produce statistically reliable results, the significance threshold $\delta$ will be chosen to
guarantee that the Family-Wise Error Rate (FWER), defined as the probability of producing any false
positives, is upper-bounded by a significance level $\alpha$. FWER control is most commonly achieved with
Bonferroni's correction [3, 5], which in our setup would imply using $\delta = \alpha/(2^p - 1)$ as significance
threshold. However, Bonferroni's correction tends to be overly conservative, resulting in very low
statistical power when the number of tests performed is large. In contrast, recent work in significant
discriminative itemset mining [15, 10, 13, 7] showed that, in this setting, Bonferroni's correction can
be outperformed in terms of statistical power by Tarone's testability criterion [14].

Tarone's testability criterion is based on the observation that, for some discrete test statistics based on
contingency tables, a minimum attainable p-value can be computed as a function of the table margins.
Let $\Psi(S)$ denote the minimum attainable p-value corresponding to the contingency table of feature
combination $Z_S$. By definition, $p_S \geq \Psi(S)$, therefore $\Psi(S) > \delta$ implies that feature combination
$Z_S$ can never be deemed significantly associated, and hence it cannot cause a false positive. In other
words, feature subsets $S$ for which $\Psi(S) > \delta$ are irrelevant as far as the FWER is concerned. In
Tarone's terminology, $S$ is said to be untestable. Thus, defining the set of testable feature subsets at
level $\delta$ as $I_T(\delta) = \{S \mid \Psi(S) \leq \delta\}$, Tarone's testability criterion obtains the corrected significance
threshold as $\delta_{tar} = \max \{\delta : \mathrm{FWER}_{tar}(\delta) \leq \alpha\}$, where $\mathrm{FWER}_{tar}(\delta) = \delta\, |I_T(\delta)|$. Note that this
amounts to applying a Bonferroni correction to feature subsets $S$ in $I_T(\delta)$ only. FWER control
follows from the fact that untestable feature subsets cannot affect the FWER. Since in practice
$|I_T(\delta)| \ll 2^p - 1$, Tarone's testability criterion often outperforms Bonferroni's correction in terms
of statistical power by a large margin.
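For intuition, Tarone's corrected threshold can be illustrated as follows. This is a simplified sketch that pretends all minimum attainable p-values are available up front and restricts candidate thresholds to those observed values; FACS instead discovers them while pruning the search tree:

```python
import numpy as np

def tarone_threshold(min_pvals, alpha):
    """delta_tar = max{delta : delta * |I_T(delta)| <= alpha}, with delta
    restricted to the observed Psi values (ties handled by searchsorted)."""
    psi = np.sort(np.asarray(min_pvals))
    delta_tar = 0.0
    for delta in np.unique(psi):
        n_testable = np.searchsorted(psi, delta, side="right")  # |{Psi <= delta}|
        if delta * n_testable <= alpha:
            delta_tar = max(delta_tar, delta)
    return delta_tar
```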
The main practical limitation of Tarone's testability criterion is its computational complexity. Naively
computing $\delta_{tar}$ would involve explicitly enumerating all $2^p - 1$ feature subsets and evaluating their
respective minimum attainable p-values, something unfeasible even for moderate $p$. Existing work in
significant discriminative pattern mining solves that limitation by exploiting specific properties of
certain test statistics, such as Fisher's exact test or Pearson's $\chi^2$ test, that allow to apply branch-and-bound
algorithms to evaluate $\delta_{tar}$. However, the properties those algorithms rely on do not apply to
conditional statistical association tests, such as the CMH test. In the next section, we present in detail
our novel approach to apply Tarone's method to the CMH test.
3 Our contribution: The FACS algorithm
This section introduces the Fast Automatic Conditional Search (FACS) algorithm, the first approach
that allows the application of Tarone's testability criterion to the CMH test in a computationally
efficient manner. Section 3.1 discusses the main challenges facing FACS and summarizes how
FACS improves the state of the art. Section 3.2 provides a high-level description of the algorithm.
Finally, Sections 3.3 and 3.4 detail the two key steps of FACS, which are also the main algorithmic
contributions of this work.
3.1 Overview and Contributions
The main objective of the FACS algorithm, described in Section 3.2 below, can be summarised as:

Objective: Given a dataset $D = \{(u_i, y_i, c_i)\}_{i=1}^{n}$, the goal of FACS is to:
1. Compute Tarone's corrected significance threshold $\delta_{tar}$.
2. Retrieve all feature subsets $S$ whose p-value $p_S$ is below $\delta_{tar}$.

For both (1) and (2), the test statistic of choice will be the CMH test, thus allowing to correct for a
confounding categorical covariate as described in Section 2.2.

The key contribution of our work is to bridge the gap between Tarone's testability criterion and the
CMH test. Firstly, in Section 3.3, we show for the first time that Tarone's method can be applied to
the CMH test. More importantly, in Section 3.4 we introduce a novel branch-and-bound algorithm to
efficiently compute $\delta_{tar}$ without requiring the function $\Psi$ computing Tarone's minimum attainable
p-value to be monotonic. This allows us not only to apply Tarone's testability criterion to the CMH
test, but to do so as efficiently as existing methods not able to handle confounding covariates do.
3.2 High-level description of FACS
As shown in the pseudocode in Algorithm 1, conceptually, FACS performs two main operations:
Algorithm 1 FACS
Input: Dataset $D = \{(u_i, y_i, c_i)\}_{i=1}^{n}$, target FWER $\alpha$
Output: $\{S \mid p_S \leq \delta_{tar}\}$
1: Initialize global variables $\delta_{tar} = 1$ and $I_T(\delta_{tar}) = \emptyset$
2: $\delta_{tar}, I_T(\delta_{tar}) \leftarrow$ tarone_cmh($\emptyset$)
3: Return $\{S \in I_T(\delta_{tar}) \mid p_S \leq \delta_{tar}\}$

Algorithm 2 tarone_cmh
Input: Current feature subset being processed $S$
1: if is_testable_cmh($S$, $\delta_{tar}$) then {see Section 3.3}
2:   Append $S$ to $I_T(\delta_{tar})$
3:   $\mathrm{FWER}_{tar}(\delta_{tar}) \leftarrow \delta_{tar}\, |I_T(\delta_{tar})|$
4:   while $\mathrm{FWER}_{tar}(\delta_{tar}) > \alpha$ do
5:     Decrease $\delta_{tar}$
6:     $I_T(\delta_{tar}) \leftarrow \{S' \in I_T(\delta_{tar}) :$ is_testable_cmh($S'$, $\delta_{tar}$)$\}$
7:     $\mathrm{FWER}_{tar}(\delta_{tar}) \leftarrow \delta_{tar}\, |I_T(\delta_{tar})|$
8: if not is_prunable_cmh($S$, $\delta_{tar}$) then {see Section 3.4}
9:   for $S' \in$ Children($S$) do
10:    tarone_cmh($S'$)
Firstly, Line 2 invokes the routine tarone_cmh, described in Algorithm 2. This routine uses our
novel branch-and-bound approach to efficiently compute Tarone's corrected significance threshold
$\delta_{tar}$ and the set of testable feature subsets $I_T(\delta_{tar})$.

Secondly, using the significance threshold $\delta_{tar}$ obtained in the previous step, Line 3 evaluates the
conditional association of the feature combination $Z_S$ of each testable feature subset $S \in I_T(\delta_{tar})$
with the class labels, given the categorical covariate, using the CMH test as shown in Section 2.2.
Note that, according to Tarone's testability criterion, untestable feature subsets $S \notin I_T(\delta_{tar})$ cannot
be significant and therefore do not need to be considered in this step. Since in practice $|I_T(\delta_{tar})| \ll
2^p - 1$, the procedure tarone_cmh is the most critical part of FACS.

The routine tarone_cmh uses the enumeration scheme first proposed in [10, 13]. All $2^p$ feature
subsets are arranged in an enumeration tree such that $S' \in \mathrm{Children}(S) \Rightarrow S \subset S'$. In other words,
the children of a feature subset $S$ in the enumeration tree are obtained by adding an additional feature
to $S$. Before invoking tarone_cmh, in Line 1 of Algorithm 1 the significance threshold $\delta_{tar}$ is
initialized to 1, the largest value it can take, and the set of testable feature combinations $I_T(\delta_{tar})$ is
initialized to the empty set. The enumeration procedure is started by calling tarone_cmh with the
empty feature subset $S = \emptyset$, which acts as the root of the enumeration tree.¹ All $2^p - 1$ non-empty
feature subsets will then be explored recursively by traversing the enumeration tree depth-first.

¹We define $z_{i,\emptyset} = 1$ for all observations, so this artificial feature combination will never be significant.

Every time a feature subset $S$ in the tree is visited, Line 1 of Algorithm 2 checks if it is testable, as
detailed in Section 3.3. If it is, $S$ is appended to the set of testable feature subsets $I_T(\delta_{tar})$ in Line 2.
The FWER condition for Tarone's testability criterion is checked in Lines 3 and 4. If it is found
to be violated, the significance threshold $\delta_{tar}$ is decreased in Line 5 until the condition is satisfied
again, removing from $I_T(\delta_{tar})$ any feature subsets made untestable by decreasing $\delta_{tar}$ in Line 6 and
re-evaluating the FWER condition accordingly in Line 7. Before continuing the traversal of the tree
by exploring the children of the current feature subset $S$, Line 8 checks if our novel pruning criterion
applies, as described in Section 3.4. Only if it does not apply are all children of $S$ visited recursively
in Lines 9 and 10. The testability and pruning conditions in Lines 1 and 8 become more stringent
as $\delta_{tar}$ decreases. Because of this, as $\delta_{tar}$ decreases along the enumeration procedure (see Line 5),
increasingly larger parts of the search space are pruned. Thus, the algorithm terminates when, for the
current value of $\delta_{tar}$ and $I_T(\delta_{tar})$, all feature subsets that cannot be pruned have been visited.

The two most challenging steps in FACS are the design of an appropriate testability criterion,
is_testable_cmh($S$, $\delta$), and an efficient pruning criterion, is_prunable_cmh($S$, $\delta$), that circumvent
the limitations of the current state of the art. These are now each described in detail.
3.3 A testability criterion for the CMH test
As mentioned in Section 2.3, Tarone's testability criterion has only been applied to test statistics such
as Fisher's exact test, Pearson's $\chi^2$ test and the Mann-Whitney U Test, none of which allows for
incorporating covariates. However, the following proposition shows that the CMH test also has a
minimum attainable p-value $\Psi_{cmh}(S)$:

Proposition 1 The CMH test has a minimum attainable p-value $\Psi_{cmh}(S)$, which can be computed
in $O(k)$ time as a function of the margins $\{n_j, n_{1,j}, x_{S,j}\}_{j=1}^{k}$ of the $k$ $2 \times 2$ contingency tables.

The proof of Proposition 1, provided in the Supp. Material, involves showing that $\Psi_{cmh}(S)$ can be
computed from the $k$ $2 \times 2$ contingency tables corresponding to $Z_S$ (see Section 2.2) by optimising
the p-value $p_S$ with respect to $\{a_{S,j}\}_{j=1}^{k}$ while keeping the table margins $\{n_j, n_{1,j}, x_{S,j}\}_{j=1}^{k}$ fixed.
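A sketch of the resulting O(k) computation follows. It reflects our reading of the result: with the margins fixed, the $\chi^2$ statistic of Eq. (1) is maximized by pushing every $a_{S,j}$ to the same feasible extreme; the formal argument is in the Supp. Material:

```python
import numpy as np
from scipy.stats import chi2

def cmh_min_pvalue(x, n1, n2):
    """Minimum attainable CMH p-value from the margins only (Prop. 1)."""
    n = n1 + n2
    mu = x * n1 / n                                   # E[a_j] under the null
    var = (n1 / n) * (1 - n1 / n) * x * (1 - x / n)   # per-table variance terms
    a_hi = np.minimum(x, n1)                          # largest feasible a_j
    a_lo = np.maximum(0, x - n2)                      # smallest feasible a_j
    dev = max(np.sum(a_hi - mu), np.sum(mu - a_lo))   # best joint deviation
    return 1.0 - chi2.cdf(dev**2 / np.sum(var), df=1)
```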
3.4 A pruning criterion for the CMH test
State-of-the-art methods [15, 8], all of which are limited to unconditional association testing, exploit
the fact that the minimum attainable p-value function $\Psi(S)$, using either Fisher's exact test or
Pearson's $\chi^2$ test on a single contingency table, obeys a simple monotonicity property: $S \subseteq S' \Rightarrow
\Psi(S) \leq \Psi(S')$ provided that $x_S \leq \min(n_1, n_2)$. This leads to a remarkably simple pruning criterion:
if a feature subset $S$ is non-testable, i.e. $\Psi(S) > \delta$, and its support $x_S$ is smaller or equal to
$\min(n_1, n_2)$, then all children $S'$ of $S$, which satisfy $S \subset S'$ by construction of the enumeration tree,
will also be non-testable and can be pruned from the search space. However, such a monotonicity
property does not hold for the CMH minimum attainable p-value function $\Psi_{cmh}(S)$, severely
complicating the development of an effective pruning criterion.

In Section 3.4.1 we show how to circumvent this limitation by introducing a novel pruning criterion
based on defining a monotonic lower envelope $\widetilde{\Psi}_{cmh}(S) \leq \Psi_{cmh}(S)$ of the original minimum
attainable p-value function $\Psi_{cmh}(S)$ and prove that it leads to a valid pruning strategy. Finally, in
Section 3.4.2, we provide an efficient algorithm to evaluate $\widetilde{\Psi}_{cmh}(S)$ in $O(k \log k)$ time, instead of a
naive implementation whose computational complexity would scale exponentially with $k$, the number
of categories for the covariate. Due to space constraints, all proofs are in the Supp. Material.
3.4.1 Definition and correctness of the pruning criterion
As mentioned above, existing unconditional significant discriminative pattern mining methods only consider feature subsets $S$ with support $x_S \leq \min(n_1, n_2)$ to be potentially prunable.
Analogously, we consider as potentially prunable the set of feature subsets $I_{PP} =
\{S \mid x_{S,j} \leq \min(n_{1,j}, n_{2,j})\ \forall j = 1, \ldots, k\}$. Note that for $k = 1$, our definition reduces to that
of existing work. In itemset mining, a very large proportion of all feature subsets will have small
supports. Therefore, restricting the application of the pruning criterion to potentially prunable patterns
does not cause a loss of performance in practice. We can now state the definition of the lower envelope
for the CMH minimum attainable p-value:

Definition 1 Let $S \in I_{PP}$ be a potentially prunable feature subset. The lower envelope $\widetilde{\Psi}_{cmh}(S)$ is
defined as $\widetilde{\Psi}_{cmh}(S) = \min \{\Psi_{cmh}(S') \mid S' \supseteq S\}$.

Note that, by construction, $\widetilde{\Psi}_{cmh}(S)$ satisfies $\widetilde{\Psi}_{cmh}(S) \leq \Psi_{cmh}(S)$ for all feature subsets $S$ in the
set of potentially prunable patterns. Next, we show that unlike for the minimum attainable p-value
function $\Psi_{cmh}(S)$, the monotonicity property holds for the lower envelope $\widetilde{\Psi}_{cmh}(S)$:

Lemma 1 Let $S, S' \in I_{PP}$ be two potentially prunable feature subsets such that $S \subseteq S'$. Then,
$\widetilde{\Psi}_{cmh}(S) \leq \widetilde{\Psi}_{cmh}(S')$ holds.

Next, we state the main result of this section, which establishes our search space pruning criterion:

Theorem 1 Let $S \in I_{PP}$ be a potentially prunable feature subset such that $\widetilde{\Psi}_{cmh}(S) > \delta$. Then,
$\Psi_{cmh}(S') > \delta$ for all $S' \supseteq S$, i.e. all feature subsets containing $S$ are non-testable at level $\delta$ and
can be pruned from the search space.

To summarize, the pruning criterion is_prunable_cmh in Line 8 of Algorithm 2 evaluates to true
if and only if $S \in I_{PP}$, i.e. $x_{S,j} \leq \min(n_{1,j}, n_{2,j})\ \forall j = 1, \ldots, k$, and $\widetilde{\Psi}_{cmh}(S) > \delta_{tar}$.
3.4.2 Evaluating the pruning criterion in O(k log k) time
In FACS, the pruning criterion stated above will be applied to all enumerated feature subsets. Hence,
it is mandatory to have an efficient algorithm to compute the lower envelope for the CMH minimum
attainable p-value $\widetilde{\Psi}_{cmh}(S)$ for any potentially prunable feature subset $S \in I_{PP}$.

As shown in the proof of Proposition 1 in the Supp. Material, $\Psi_{cmh}(S)$ depends on the pattern $S$
through its $k$-dimensional vector of supports $x_S = (x_{S,1}, \ldots, x_{S,k})$. Also, the condition $S' \supseteq S$
implies that $x_{S',j} \leq x_{S,j}\ \forall j = 1, \ldots, k$. As a consequence, one can rewrite Definition 1 as
$\widetilde{\Psi}_{cmh}(S) = \min_{x_{S'} \leq x_S} \Psi_{cmh}(x_{S'})$, where the vector inequality $x_{S'} \leq x_S$ holds component-wise. Thus,
naively computing $\widetilde{\Psi}(S)$ would require optimizing $\Psi_{cmh}$ over a set of size $\prod_{j=1}^{k} x_{S,j} = O(m^k)$,
where $m$ is the geometric mean of $\{x_{S,j}\}_{j=1}^{k}$. This scaling is clearly impractical, as even for moderate
$k$ it would result in an overhead large enough to outweigh the benefits of pruning.

Because of this, in the remainder of this section we propose the last key part of FACS: an efficient
algorithm which evaluates $\widetilde{\Psi}(S)$ in only $O(k \log k)$ time. We will arrive at our final result in two
steps, contained in Lemma 2 and Theorem 2.

Lemma 2 Let $S \in I_{PP}$ be a potentially prunable feature subset. The optimum $x^*_{S'}$ of the discrete
optimization problem $\min_{x_{S'} \leq x_S} \Psi_{cmh}(x_{S'})$ satisfies $x^*_{S',j} = 0$ or $x^*_{S',j} = x_{S,j}$ for each $j = 1, \ldots, k$.

In short, Lemma 2 shows that the optimum $x^*_{S'} = \mathrm{argmin}\{\Psi_{cmh}(x_{S'}) \mid x_{S'} \leq x_S\}$ of the discrete
optimization problem defining $\widetilde{\Psi}(S)$ is always a vertex of the discrete hypercube $[\![0, x_S]\!]$. Thus, the
computational complexity of evaluating $\widetilde{\Psi}_{cmh}(S)$ can be reduced from $O(m^k)$ to $O(2^k)$, where
$m \geq 2$ for most patterns. Finally, building upon the result of Lemma 2, Theorem 2 below shows that
one can in fact find the optimal vertex out of all $O(2^k)$ vertices in $O(k \log k)$ time.
Theorem 2 Let $S \in I_{PP}$ be a potentially prunable feature subset and define $\beta^l_{S,j} = \frac{x_{S,j}}{n_j}\left(1 - \frac{n_{2,j}}{n_j}\right)$
and $\beta^r_{S,j} = \frac{x_{S,j}}{n_j}\left(1 - \frac{n_{1,j}}{n_j}\right)$ for $j = 1, \ldots, k$. Let $\sigma_l$ and $\sigma_r$ be permutations $\sigma_l, \sigma_r : [\![1, k]\!] \mapsto
[\![1, k]\!]$ such that $\beta^l_{S,\sigma_l(1)} \geq \ldots \geq \beta^l_{S,\sigma_l(k)}$ and $\beta^r_{S,\sigma_r(1)} \geq \ldots \geq \beta^r_{S,\sigma_r(k)}$, respectively.
Then, there exists an integer $\tau \in [\![1, k]\!]$ such that the optimum $x^*_{S'} = \mathrm{argmin}_{x_{S'} \leq x_S} \Psi_{cmh}(x_{S'})$ satisfies
one of the two possible conditions: (I) $x^*_{S',\sigma_l(j)} = x_{S,\sigma_l(j)}$ for all $j \leq \tau$ and $x^*_{S',\sigma_l(j)} = 0$ for all
$j > \tau$, or (II) $x^*_{S',\sigma_r(j)} = x_{S,\sigma_r(j)}$ for all $j \leq \tau$ and $x^*_{S',\sigma_r(j)} = 0$ for all $j > \tau$.
Figure 1: (a) Runtime as a function of the number of features, $p$. (b) Runtime as a function of the
number of categories of the covariate, $k$. (c) Precision as a function of the true signal strength, $\theta_{true}$.
(d) False detection proportion as a function of the strength of the signal $\theta_{conf}$. $n = 200$ samples
were used in (a), (b) and $n = 500$ in (c), (d). Also, we set $\theta_{true} = \theta_{conf} = \theta$.
In summary, Theorem 2 above implies that the 2^k candidates for the optimum x*_{S'} according to Lemma 2 can be narrowed down to only 2k vertices: k candidates satisfying the first condition and k the second condition. Moreover, evaluating α_cmh for all k candidates satisfying the first condition (resp. the second condition) can be done in O(k) time rather than O(k²). This is due to the fact that each of the k candidate vertices for each condition can be obtained by changing a single dimension with respect to the previous one. Therefore, the operation dominating the computational complexity is the sorting of the two k-vectors (β^l_{S,1}, ..., β^l_{S,k}) and (β^r_{S,1}, ..., β^r_{S,k}). As a consequence, the runtime required to evaluate the lower envelope α̃_cmh(S), and thus our novel pruning criterion is_prunable_cmh, scales as O(k log k) with the number of categories of the covariate.
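A possible rendering of the full O(k log k) procedure is sketched below. The exact forms of the sort keys beta_l and beta_r are assumptions reconstructed from Theorem 2 (the extraction of the original formulas is ambiguous), and the O(1) incremental update of the test statistic used in the paper is abstracted behind a plain call to the assumed alpha_cmh, so the stated asymptotics hold only under that incremental evaluation.

```python
import numpy as np

def min_envelope_fast(x_S, n, n1, alpha_cmh):
    """O(k log k) evaluation of the lower envelope via Theorem 2: only 2k
    prefix patterns (k per sorted order) can be optimal, so we sort once
    per order and grow the candidate support one coordinate at a time."""
    x_S, n, n1 = (np.asarray(a, dtype=float) for a in (x_S, n, n1))
    n2 = n - n1
    beta_l = (x_S / n) * (1.0 - n2 / n)   # assumed form of beta^l_{S,j}
    beta_r = (x_S / n) * (1.0 - n1 / n)   # assumed form of beta^r_{S,j}
    best = np.inf
    for beta in (beta_l, beta_r):
        order = np.argsort(-beta)          # descending sort: O(k log k)
        x = np.zeros_like(x_S)
        for j in order:                    # candidate prefixes kappa = 1..k
            x[j] = x_S[j]                  # each vertex differs in one entry
            best = min(best, alpha_cmh(x))
    return best
```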
4 Experiments
In Section 4.1 we describe a set of experiments on simulated datasets, evaluating the performance of
FACS in terms of runtime, precision and its ability to correct for confounding. Next, in Section 4.2,
we use our method in two applications in computational biology. Due to space constraints, only a
high-level summary of the experimental setup and results will be presented here. Additional details
can be found in the Supp. Material; code for FACS is available on GitHub (https://github.com/BorgwardtLab/FACS).
4.1 Runtime and power comparisons on simulated datasets
We compare FACS with four significant discriminative itemset mining methods: LAMP-χ², Bonf-CMH, 2^k-FACS and m^k-FACS. (1) LAMP-χ² [15, 10] is the state-of-the-art in significant discriminative itemset mining. It uses Tarone's testability criterion but is based on Pearson's χ² test and thus cannot account for covariates; (2) Bonf-CMH uses the CMH test, being able to correct for confounders, but uses Bonferroni's correction, resulting in a considerable loss of statistical power; (3) and (4) 2^k-FACS and m^k-FACS are two suboptimal versions of FACS, which implement the pruning criterion using the approach shown in Lemma 2, which scales as O(2^k), or via brute-force search, scaling as O(m^k).
Runtime evaluations: Figure 1(a) shows that FACS scales as the state-of-the-art LAMP-χ² when increasing the number of features p, while the Bonferroni-based method Bonf-CMH scales considerably worse. This indicates both that FACS is able to correct for covariates with virtually no runtime overhead with respect to LAMP-χ² and confirms the efficacy of Tarone's testability criterion. Figure 1(b) shows that FACS can handle categorical covariates of high cardinality k with almost no overhead, in contrast to m^k-FACS and 2^k-FACS, which are only applicable for low k. This demonstrates the importance of our efficient implementation of the pruning criterion.
Precision and false positive detection evaluations: We generated synthetic datasets with one truly associated feature subset S_true and one confounded feature subset S_conf to evaluate precision and the ability to correct for confounders. Figure 1(c) shows that FACS has a similar precision as LAMP-χ², being slightly worse for weak signals and slightly better for stronger signals. Again, the performance of the Bonferroni-based method Bonf-CMH is drastically worse. Most importantly, Figure 1(d) indicates that unlike LAMP-χ², FACS has the ability to greatly reduce false positive detections by conditioning on an appropriate categorical covariate.
Table 1: Total number of significant combinations (hits) found by LAMP-χ², FACS and Bonf-CMH, and average genomic inflation factor λ. λ for Bonf-CMH is similar to FACS since both use the CMH test.

              FACS              LAMP-χ²            Bonf-CMH
  Datasets    hits     λ        hits       λ       hits
  LY          433      1.17     100,883    3.18    19
  avrB        43       1.21     546        2.38    1
4.2 Applications to computational biology
In this section, we look for significant feature combinations in two widely investigated biological
applications: Genome-Wide Association Studies (GWAS), using two A. thaliana datasets, and a study
of combinatorial regulation of gene expression in breast cancer cells.
A. thaliana GWAS: We apply FACS, LAMP-χ² and Bonf-CMH to two datasets from the plant model organism A. thaliana [1], which contain 84 and 95 samples, respectively. The labels of each dataset indicate the presence/absence of a plant defense-related phenotype: LY and avrB. In the two datasets, each plant sample is represented by a sequence of approximately 214,000 genetic bases. The genetic bases are encoded as binary features which indicate whether the base at a specific locus is standard or altered. To minimize the effect of the evolutionary correlations between nearby bases (< 10 kilobases), we downsampled each of the five chromosomes of each dataset, evenly by a factor of 20, using 20 different offsets. This resulted in complementary datasets containing between 1,423 and 2,661 features. Our results for all methods are aggregated across all downsampled versions. In GWAS, one needs to correct for the confounding effect of population structure to avoid many spurious associations. For both datasets we condition on the ancestry, resulting in k = 5 and k = 3 categories for the covariate.
Table 1 shows the number of feature combinations (cf. Section 2.1) reported as significant by each method, as well as the corresponding genomic inflation factor λ [4], a popular criterion in statistical genetics to quantify confounding. When compared to LAMP-χ², we observe a severe reduction in the number of feature combinations deemed significant by FACS, as well as a sharp decrease in λ. This strongly indicates that many feature combinations reported by LAMP-χ² are affected by confounding. The λ values of LAMP-χ² show strong marginal associations between many feature combinations and labels, inflating the corresponding Pearson χ²-test statistic values compared to the expected χ² null distribution and resulting in many spurious associations. However, since most of those feature combinations are independent of the labels given the covariates, the CMH test statistic values are much closer to the χ² distribution, leading to a lower λ and resulting in hits that are corrected for the covariate. Moreover, the lack of power of Bonf-CMH results in a very small number of hits.
Combinatorial regulation of gene expression in breast cancer cells: The breast cancer data set, as used in [15], includes 12,773 genes classified into up-regulated or not up-regulated. Each gene is represented by 397 binary features which indicate the presence/absence of a sequence motif in the neighborhood of this gene. We aim to find combinations of motifs that are enriched in up-regulated genes. Two sets of experiments were conducted, conditioning on 8 and 16 categories respectively. In this case, the covariate groups together genes sharing similar sets of motifs. As previously, LAMP-χ² reports 1,214 motif combinations as significant, while FACS reports only 26, a reduction of over 97%. Further studies shown in the Supp. Material strongly suggest that most motif combinations found by LAMP-χ² but not FACS are indeed due to confounding.
5 Conclusions
This article has presented FACS, the first approach to significant discriminative itemset mining that (i) allows conditioning on a categorical covariate, (ii) corrects for the inherent multiple testing problem and (iii) retains high statistical power. Furthermore, we (iv) proved that the runtime of FACS scales as O(k log k), where k is the number of states of the categorical covariate. Regarding future work, generalizing the state-of-the-art to handle continuous data is a key open problem in significant discriminative itemset mining. Solving it would greatly help make the framework applicable to new domains. Another interesting improvement would be to combine FACS with the approach in [8]. In their work, Tarone's testability criterion is used along with permutation testing to increase statistical power by taking the redundancy between feature combinations into account. By using a similar approach in combination with the CMH test, one could further increase statistical power while retaining the ability to correct for a categorical covariate.
Acknowledgments: This work was funded in part by the SNSF Starting Grant "Significant Pattern Mining" (KB) and the Marie Curie ITN MLPM2012, Grant No. 316861 (KB, FLL).
References
[1] S. Atwell, Y. S. Huang, B. J. Vilhjálmsson, G. Willems, M. Horton, Y. Li, D. Meng, A. Platt, A. M. Tarone, T. T. Hu, et al. Genome-wide association study of 107 phenotypes in Arabidopsis thaliana inbred lines. Nature, 465(7298):627-631, 2010.
[2] C.-A. Azencott, D. Grimm, M. Sugiyama, Y. Kawahara, and K. M. Borgwardt. Efficient network-guided multi-locus association mapping with graph cuts. Bioinformatics, 29(13):i171-i179, 2013.
[3] C. E. Bonferroni. Teoria statistica delle classi e calcolo delle probabilità. Pubblicazioni del R Istituto Superiore di Scienze Economiche e Commerciali di Firenze, 8:3-62, 1936.
[4] B. Devlin and K. Roeder. Genomic control for association studies. Biometrics, 55(4):997-1004, 1999.
[5] O. J. Dunn. Estimation of the medians for dependent variables. Ann. Math. Statist., 30(1):192-197, 03 1959.
[6] R. A. Fisher. On the interpretation of χ² from contingency tables, and the calculation of P. Journal of the Royal Statistical Society, 85(1):87-94, 1922.
[7] F. Llinares-López, D. Grimm, D. A. Bodenham, U. Gieraths, M. Sugiyama, B. Rowan, and K. M. Borgwardt. Genome-wide detection of intervals of genetic heterogeneity associated with complex traits. Bioinformatics, 31(12):240-249, 2015.
[8] F. Llinares-López, M. Sugiyama, L. Papaxanthos, and K. M. Borgwardt. Fast and memory-efficient significant pattern mining via permutation testing. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, 2015, pages 725-734. ACM, 2015.
[9] N. Mantel and W. Haenszel. Statistical aspects of the analysis of data from retrospective studies of disease. Journal of the National Cancer Institute, 22(4):719, 1959.
[10] S. Minato, T. Uno, K. Tsuda, A. Terada, and J. Sese. A fast method of statistical assessment for combinatorial hypotheses based on frequent itemset enumeration. In ECMLPKDD, volume 8725 of LNCS, pages 422-436, 2014.
[11] K. Pearson. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can reasonably be supposed to have arisen from random sampling. Philosophical Magazine, 50:157-175, 1900.
[12] A. L. Price, N. J. Patterson, R. M. Plenge, M. E. Weinblatt, N. A. Shadick, and D. Reich. Principal components analysis corrects for stratification in genome-wide association studies. Nat Genet, 38(8):904-909, 08 2006.
[13] M. Sugiyama, F. Llinares-López, N. Kasenburg, and K. M. Borgwardt. Mining significant subgraphs with multiple testing correction. In SIAM Data Mining (SDM), 2015.
[14] R. E. Tarone. A modified Bonferroni method for discrete data. Biometrics, 46(2):515-522, 1990.
[15] A. Terada, M. Okada-Hatakeyama, K. Tsuda, and J. Sese. Statistical significance of combinatorial regulations. Proceedings of the National Academy of Sciences, 110(32):12996-13001, 2013.
[16] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267-288, 1996.
[17] B. J. Vilhjálmsson and M. Nordborg. The nature of confounding in genome-wide association studies. Nature Reviews Genetics, 14(1):1-2, Jan. 2013.
[18] G. Webb. Discovering significant rules. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, New York, 2006, pages 434-443. ACM, 2006.
Bo Xin1,2
Yizhou Wang1
Wen Gao1
Baoyuan Wang3
David Wipf2
2
3
Peking University
Microsoft Research, Beijing
Microsoft Research, Redmond
{boxin, baoyuanw, davidwip}@microsoft.com {yizhou.wang, wgao}@pku.edu.cn
1
Abstract
The iterations of many sparse estimation algorithms are comprised of a fixed linear filter cascaded with a thresholding nonlinearity, which collectively resemble a
typical neural network layer. Consequently, a lengthy sequence of algorithm iterations can be viewed as a deep network with shared, hand-crafted layer weights. It
is therefore quite natural to examine the degree to which a learned network model
might act as a viable surrogate for traditional sparse estimation in domains where
ample training data is available. While the possibility of a reduced computational
budget is readily apparent when a ceiling is imposed on the number of layers, our
work primarily focuses on estimation accuracy. In particular, it is well-known that
when a signal dictionary has coherent columns, as quantified by a large RIP constant, then most tractable iterative algorithms are unable to find maximally sparse
representations. In contrast, we demonstrate both theoretically and empirically the
potential for a trained deep network to recover minimal ℓ0-norm representations in
regimes where existing methods fail. The resulting system, which can effectively
learn novel iterative sparse estimation algorithms, is deployed on a practical photometric stereo estimation problem, where the goal is to remove sparse outliers
that can disrupt the estimation of surface normals from a 3D scene.
1 Introduction
Our launching point is the optimization problem
    min_x ‖x‖_0   s.t.   y = Φx,    (1)
where y ∈ R^n is an observed vector, Φ ∈ R^{n×m} is some known, overcomplete dictionary of feature/basis vectors with m > n, and ‖·‖_0 denotes the ℓ0 norm of a vector, or a count of the number of nonzero elements. Consequently, (1) can be viewed as the search for a maximally sparse feasible vector x* (or approximately feasible if the constraint is relaxed). Unfortunately however, direct assault on (1) involves an intractable, combinatorial optimization process, and therefore efficient alternatives that return a maximally sparse x* with high probability in restricted regimes are sought. Popular examples with varying degrees of computational overhead include convex relaxations such as ℓ1-norm minimization [2, 5, 21], greedy approaches like orthogonal matching pursuit (OMP) [18, 22], and many flavors of iterative hard-thresholding (IHT) [3, 4].
Variants of these algorithms find practical relevance in numerous disparate domains, including feature selection [7, 8], outlier removal [6, 13], compressive sensing [5], and source localization [1, 16].
However, a fundamental weakness underlies them all: if the Gram matrix ΦᵀΦ has significant off-diagonal energy, indicative of strong coherence between columns of Φ, then estimation of x* may be extremely poor. Loosely speaking this occurs because, as higher correlation levels are present, the null-space of Φ is more likely to include large numbers of approximately sparse vectors that
tend to distract existing algorithms in the feasible region, an unavoidable nuisance in many practical
applications.
In this paper we consider recent developments in the field of deep learning as an entry point for
improving the performance of sparse recovery algorithms. Although seemingly unrelated at first
glance, the layers of a deep neural network (DNN) can be viewed as iterations of some algorithm
that have been unfolded into a network structure [9, 11]. In particular, iterative thresholding approaches such as IHT mentioned above typically involve an update rule comprised of a fixed, linear
filter followed by a non-linear activation function that promotes sparsity. Consequently, algorithm
execution can be interpreted as passing an input through an extremely deep network with constant
weights (dependent on Φ) at every layer. This "unfolding" viewpoint immediately suggests that
we consider substituting discriminatively learned weights in place of those inspired by the original
sparse recovery algorithm. For example, it has been argued that, given access to a sufficient number
of {x*, y} pairs, a trained network may be capable of producing quality sparse estimates with a small number of layers. This in turn can lead to a dramatically reduced computational burden relative to purely optimization-based approaches [9, 19, 23] or to enhanced non-linearities for use with
traditional iterative algorithms [15].
While existing empirical results are promising, especially in terms of the reduction in computational
footprint, there is as yet no empirical demonstration of a learned deep network that can unequivocally recover maximally sparse vectors x* with greater accuracy than conventional, state-of-the-art optimization-based algorithms, especially with a highly coherent Φ. Nor is there supporting theoretical evidence elucidating the exact mechanism whereby learning may be expected to improve the
estimation accuracy, especially in the presence of coherent dictionaries. This paper attempts to fill
in some of these gaps, and our contributions can be distilled to the following points:
Quantifiable Benefits of Unfolding: We rigorously dissect the benefits of unfolding conventional
sparse estimation algorithms to produce trainable deep networks. This includes a precise characterization of exactly how different architecture choices can affect the ability to improve so-called
restricted isometry property (RIP) constants, which measure the degree of disruptive correlation in Φ. This helps to quantify the limits of shared layer weights, which are the standard template of existing methods [9, 19, 23], and motivates more flexible network constructions reminiscent of LSTM cells [12] that account for multi-resolution structure in Φ in a previously unexplored fashion.
Note that we defer all proofs, as well as many additional analyses and problem details, to a longer
companion paper [26].
Isolation of Important Factors: Based on these theoretical insights, and a better understanding
of the essential factors governing performance, we establish the degree to which it is favorable to
diverge from strict conformity to any particular unfolded algorithmic script. In particular, we argue
that layer-wise independent weights and/or activations are essential, while retainment of original
thresholding non-linearities and squared-error loss implicit to many sparse algorithms is not. We
also recast the the core problem as deep multi-label classification given that optimal support pattern
recovery is the primary concern. This allows us to adopt a novel training paradigm that is less
sensitive to the specific distribution encountered during testing. Ultimately, we development the
first, ultra-fast sparse estimation algorithm (or more precisely a learning procedure that produces
such an algorithm) that can effectively deal with coherent dictionaries and adversarial RIP constants.
State-of-the-Art Empirical Performance: We apply the proposed system to a practical photometric stereo computer vision problem, where the goal is to estimate the 3D geometry of an object
using only 2D photos taken from a single camera under different lighting conditions. In this context, shadows and specularities represent sparse outliers that must be simultaneously removed from
~10^4-10^6 surface points. We achieve state-of-the-art performance using only weak supervision
despite a minuscule computational budget appropriate for real-time mobile environments.
2 From Iterative Hard Thresholding (IHT) to Deep Neural Networks
Although any number of iterative algorithms could be adopted as our starting point, here we examine
IHT because it is representative of many other sparse estimation paradigms and is amenable to
theoretical analysis. With knowledge of an upper bound on the true cardinality, solving (1) can be
replaced by the equivalent problem
    min_x (1/2)‖y − Φx‖_2²   s.t.   ‖x‖_0 ≤ k.    (2)
IHT attempts to minimize (2) using what can be viewed as computationally-efficient projected gradient iterations [3]. Let x^(t) denote the estimate of some maximally sparse x* after t iterations. The aggregate IHT update computes

    x^(t+1) = H_k[ x^(t) − μ Φᵀ(Φx^(t) − y) ],    (3)

where μ is a step-size parameter and H_k[·] is a hard-thresholding operator that sets all but the k largest values (in magnitude) of a vector to zero. For the vanilla version of IHT, the step-size μ = 1 leads to a number of recovery guarantees whereby iterating (3), starting from x^(0) = 0, is guaranteed to reduce (2) at each step before eventually converging to the globally optimal solution. These results hinge on properties of Φ which relate to the coherence structure of dictionary columns as encapsulated by the following definition.
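For concreteness, a minimal numpy rendering of the vanilla update (3) with μ = 1 could read as follows; whether the iterates actually converge to x* depends on the RIP condition formalized next.

```python
import numpy as np

def iht(Phi, y, k, num_iters=100, mu=1.0):
    """Vanilla iterative hard thresholding: a gradient step on the squared
    error followed by projection onto the set of k-sparse vectors (H_k)."""
    x = np.zeros(Phi.shape[1])
    for _ in range(num_iters):
        x = x - mu * Phi.T @ (Phi @ x - y)      # gradient step
        x[np.argsort(np.abs(x))[:-k]] = 0.0     # keep only the k largest magnitudes
    return x
```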
Definition 1 (Restricted Isometry Property) A dictionary Φ satisfies the Restricted Isometry Property (RIP) with constant δ_k[Φ] < 1 if

    (1 − δ_k[Φ])‖x‖_2² ≤ ‖Φx‖_2² ≤ (1 + δ_k[Φ])‖x‖_2²    (4)

holds for all {x : ‖x‖_0 ≤ k}.
In brief, the smaller the value of the RIP constant δ_k[Φ], the closer any sub-matrix of Φ with k columns is to being orthogonal (i.e., it has less correlation structure). It is now well-established that dictionaries with smaller values of δ_k[Φ] lead to sparse recovery problems that are inherently easier to solve. For example, in the context of IHT, it has been shown [3] that if y = Φx*, with ‖x*‖_0 ≤ k and δ_3k[Φ] < 1/√32, then at iteration t of (3) we will have ‖x^(t) − x*‖_2 ≤ 2^{−t}‖x*‖_2. It follows that as t → ∞, x^(t) → x*, meaning that we recover the true, generating x*. Moreover, it can be shown that this x* is also the unique, optimal solution to (1) [5].
The success of IHT in recovering maximally sparse solutions crucially depends on the RIP-based condition that δ_3k[Φ] < 1/√32, which heavily constrains the degree of correlation structure in Φ that can be tolerated. While dictionaries with columns drawn independently and uniformly from the surface of a unit hypersphere (or with elements drawn iid from N(0, 1/n)) will satisfy this condition with high probability provided k is small enough [6], for many/most practical problems of interest we cannot rely on this type of IHT recovery guarantee. In fact, except for randomized dictionaries in high dimensions where tight bounds exist, we cannot even compute the value of δ_3k[Φ], which requires calculating the spectral norm of all (m choose 3k) subsets of dictionary columns.
There are many ways nature might structure a dictionary such that IHT (or most any other existing sparse estimation algorithm) will fail. Here we examine one of the most straightforward forms of dictionary coherence that can easily disrupt performance. Consider the situation where Φ = (εA + uvᵀ)N, where columns of A ∈ R^{n×m} and u ∈ R^n are drawn iid from the surface of a unit hypersphere, while v ∈ R^m is arbitrary. Additionally, ε > 0 is a scalar and N is a diagonal normalization matrix that scales each column of Φ to have unit ℓ2 norm. It then follows that if ε is sufficiently small, the rank-one component begins to dominate, and there is no value of 3k such that δ_3k[Φ] < 1/√32. In this type of problem we hypothesize that DNNs provide a potential avenue for improvement to the extent that they might be able to compensate for disruptive correlations in Φ.
For example, at the most basic level we might consider general networks with the layer t defined by
    x^(t+1) = f[ Ψx^(t) + Γy ],    (5)

where f : R^m → R^m is a non-linear activation function, and Ψ ∈ R^{m×m} and Γ ∈ R^{m×n} are arbitrary. Moreover, given access to training pairs {x*, y}, where x* is a sparse vector such that y = Φx*, we can optimize Ψ and Γ using traditional stochastic gradient descent just like any other DNN structure. We will first precisely characterize the extent to which this adaptation affords any benefit over IHT where f(·) = H_k[·]. Later we will consider flexible, layer-specific non-linearities f^(t) and parameters {Ψ^(t), Γ^(t)}.
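Written out, one layer of (5) is a single matrix-vector pipeline; the sketch below uses a ReLU stand-in for f, but any sparsity-promoting non-linearity (including H_k) could be substituted, and Ψ, Γ would be trained rather than fixed.

```python
import numpy as np

def unfolded_layer(x, y, Psi, Gamma, f=lambda z: np.maximum(z, 0.0)):
    """One generalized layer x^(t+1) = f(Psi x^(t) + Gamma y). Choosing
    Psi = I - Phi.T @ Phi, Gamma = Phi.T and f = H_k recovers update (3)."""
    return f(Psi @ x + Gamma @ y)
```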
3 Analysis of Adaptable Weights and Activations
For simplicity in this section we restrict ourselves to the fixed hard-threshold operator H_k[·] across all layers; however, many of the conclusions borne out of our analysis nonetheless carry over to a much wider range of activation functions f. In general it is difficult to analyze how arbitrary Ψ and Γ may improve upon the fixed parameterization from (3) where Ψ = I − ΦᵀΦ and Γ = Φᵀ (assuming μ = 1). Fortunately though, we can significantly collapse the space of potential weight matrices by including the natural requirement that if x* represents the true, maximally sparse solution, then it must be a fixed-point of (5). Indeed, without this stipulation the iterations could diverge away from the globally optimal value of x, something IHT itself will never do. These considerations lead to the following:
Proposition 1 Consider a generalized IHT-based network layer given by (5) with f(·) = H_k[·] and let x* denote any unique, maximally sparse feasible solution to y = Φx with ‖x‖_0 ≤ k. Then to ensure that any such x* is a fixed point it must be that Ψ = I − ΓΦ.
Although Γ remains unconstrained, this result has restricted Ψ to be a rank-n factor, parameterized by Γ, subtracted from an identity matrix. Certainly this represents a significant contraction of the space of "reasonable" parameterizations for a general IHT layer. In light of Proposition 1, we may then further consider whether the added generality of Γ (as opposed to the original fixed assignment Γ = Φᵀ) affords any further benefit to the revised IHT update

    x^(t+1) = H_k[ (I − ΓΦ) x^(t) + Γy ].    (6)
For this purpose we note that (6) can be interpreted as a projected gradient descent step for solving
    min_x (1/2) xᵀΓΦx − xᵀΓy   s.t.   ‖x‖_0 ≤ k.    (7)

However, if ΓΦ is not positive semi-definite, then this objective is no longer even convex, and combined with the non-convex constraint is likely to produce an even wider constellation of troublesome local minima with no clear affiliation with the global optimum of our original problem from (2). Consequently it does not immediately appear that Γ ≠ Φᵀ is likely to provide any tangible benefit. However, there do exist important exceptions. The first indication of how learning a general Γ might help comes from the following result:
Proposition 2 Suppose that Γ = DΦᵀWWᵀ, where W is an arbitrary matrix of appropriate dimension and D is a full-rank diagonal that jointly solve

    δ*_3k[Φ] ≜ inf_{W,D} δ_3k[WΦD].    (8)

Moreover, assume that Φ is substituted with ΦD in (6), meaning we have simply replaced Φ with a new dictionary that has scaled columns. Given these qualifications, if y = Φx*, with ‖x*‖_0 ≤ k and δ*_3k[Φ] < 1/√32, then at iteration t of (6)

    ‖D⁻¹x^(t) − D⁻¹x*‖_2 ≤ 2^{−t} ‖D⁻¹x*‖_2.    (9)
It follows that as t → ∞, x^(t) → x*, meaning that we recover the true, generating x*. Additionally, it can be guaranteed that after a finite number of iterations, the correct support pattern will be discovered. And it should be emphasized that rescaling Φ by some known diagonal D is a common prescription for sparse estimation (e.g., column normalization) that does not alter the optimal ℓ0-norm support pattern.¹
But the real advantage over regular IHT comes from the fact that δ*_3k[Φ] ≤ δ_3k[Φ], and in many practical cases, δ*_3k[Φ] ≪ δ_3k[Φ], which implies success can be guaranteed across a much wider range of RIP conditions. For example, if we revisit the dictionary Φ = (εA + uvᵀ)N, an immediate benefit can be observed. More concretely, for sufficiently small ε we argued that δ_3k[Φ] > 1/√32 for all k, and consequently convergence to the optimal solution may fail. In contrast, it can be shown that δ*_3k[Φ] will remain quite small, satisfying δ*_3k[Φ] ≤ δ_3k[A], implying that performance will nearly match that of an equivalent recovery problem using A (and as we discussed above, δ_3k[A] is likely to be relatively small per its unique, randomized design). The following result generalizes a sufficient regime whereby this is possible:
Corollary 1 Suppose Φ = [A + Δ_r]N, where elements of A are drawn iid from N(0, 1/n), Δ_r is any arbitrary matrix with rank[Δ_r] = r < n, and N is a diagonal matrix (e.g., one that enforces unit ℓ2 column norms). Then

    E(δ*_3k[Φ]) ≤ E(δ_3k[Ã]),    (10)

where à denotes the matrix A with any r rows removed.
¹ Inclusion of this diagonal factor D can be equivalently viewed as relaxing Proposition 1 to hold under some fixed rescaling of Φ, i.e., an operation that preserves the optimal support pattern.
Additionally, as the size of Φ grows proportionally larger, it can be shown that with overwhelming probability δ*_3k[Φ] ≤ δ_3k[Ã]. Overall, these results suggest that we can essentially annihilate any potentially disruptive rank-r component Δ_r at the cost of implicitly losing r measurements (linearly independent rows of A, and implicitly the corresponding elements of y). Therefore, at least provided that r is sufficiently small such that δ_3k[Ã] ≈ δ_3k[A], we can indeed be confident that a modified form of IHT can perform much like a system with an ideal RIP constant. And of course in practice we may not know how Φ decomposes as some Φ ≈ [A + Δ_r]N; however, to the extent that this approximation can possibly hold, the RIP constant can be improved nonetheless.
It should be noted that globally solving (8) is non-differentiable and intractable, but this is the whole point of incorporating a DNN to begin with. If we have access to a large number of training pairs {x*, y} generated using the true Φ, then during the course of the learning process a useful W and D can be implicitly estimated such that a maximal number of sparse vectors can be successfully recovered. Of course we will experience diminishing marginal returns as more non-ideal components enter the picture. In fact, it is not difficult to describe a slightly more sophisticated scenario such that layer-wise constant weights and activations are no longer capable of lowering δ*_3k[Φ] significantly at all, portending failure when it comes to accurate sparse recovery.
One such example is a clustered dictionary model (which we describe in detail in [26]), whereby columns of Φ are grouped into a number of tight clusters with minimal angular dispersion. While the clusters themselves may be well-separated, the correlation within clusters can be arbitrarily large. In some sense this model represents the simplest partitioning of dictionary column correlation structure into two scales: the inter- and intra-cluster structures. Assuming the number of such clusters is larger than n, then layer-wise constant weights and activations are unlikely to provide adequate relief, since the implicit Δ_r factor described above will be full rank.
Fortunately, simple adaptations of IHT, which are reflective of many generic DNN structures, can
remedy the problem. The core principle is to design a network such that earlier layers/iterations
are tasked with exposing the correct support at the cluster level, without concern for accuracy within
each cluster. Once the correct cluster support has been obtained, later layers can then be charged with
estimating the fine-grain details of within-cluster support. We believe this type of multi-resolution
sparse estimation is essential when dealing with highly coherent dictionaries. This can be accomplished with the following adaptations to IHT:
1. The hard-thresholding operator is generalized to "remember" previously learned cluster-level sparsity patterns, in much the same way that LSTM gates allow long-term dependencies to propagate [12] or highway networks [20] facilitate information flow unfettered to deeper layers. Practically speaking this adaptation can be computed by passing the prior layer's activations x^(t) through linear filters followed by indicator functions, again reminiscent of how DNN gating functions are typically implemented (see the sketch after this list).
2. We allow the layer weights {Ψ^(t), Γ^(t)} to vary from iteration to iteration t, sequencing through a fixed set akin to layers of a DNN.
In [26] we show that hand-crafted versions of these changes allow IHT to provably recover maximally sparse vectors x* in situations where existing algorithms fail.
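The sketch referenced in item 1 above: a gate computed from the previous layer's activations protects ("remembers") coordinates whose cluster-level support has already been exposed. The gating parameters W_g, b_g and the unit threshold are hypothetical placeholders, not quantities taken from the paper.

```python
import numpy as np

def gated_hard_threshold(x_prev, z, W_g, b_g, tau=1.0):
    """Thresholding with support memory: coordinates whose gate fires
    (a linear filter of x_prev passed through an indicator) bypass the
    ordinary magnitude test, so earlier cluster-level decisions persist."""
    gate = (W_g @ x_prev + b_g) > 0            # indicator-style gating
    kept = np.where(np.abs(z) > tau, z, 0.0)   # ordinary thresholding
    return np.where(gate, z, kept)             # remembered support passes through
```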
4 Discriminative Multi-Resolution Sparse Estimation
As implied previously, guaranteed success for most existing sparse estimation strategies hinges on
the dictionary Φ having columns drawn (approximately) from a uniform distribution on the surface
of a unit hypersphere, or some similar condition to ensure that subsets of columns behave approximately like an orthogonal basis. Essentially this confines the structure of the dictionary to operate on
a single universal scale. The clustered dictionary model described in the previous section considers
a dictionary built on two different scales, with a cluster-level distribution (coarse) and tightly-packed
within-cluster details (fine). But practical dictionaries may display structure operating across a variety of scales that interleave with one another, forming a continuum among multiple levels.
When the scales are clearly demarcated, we have argued that it is possible to manually define a
multi-resolution IHT-inspired algorithm that guarantees success in recovering the optimal support
pattern; and indeed, IHT could be extended to handle a clustered dictionary model with nested
structures across more than two scales. However, without clearly partitioned scales it is much less
obvious how one would devise an optimal IHT modification. It is in this context that learning flexible
algorithm iterations is likely to be most advantageous. In fact, the situation is not at all unlike
many computer vision scenarios whereby handcrafted features such as SIFT may work optimally in
confined, idealized domains, while learned CNN-based features are often more effective otherwise.
Given a sufficient corpus of {x*, y} pairs linked via some fixed Φ, we can replace manual filter
construction with a learning-based approach. On this point, although we view our results from
Section 3 as a convincing proof of concept, it is unlikely that there is anything intrinsically special
about the specific hard-threshold operator and layer-wise construction we employed per se, as long
as we allow for deep, adaptable layers that can account for structure at multiple scales. For example,
we expect that it is more important to establish a robust training pipeline that avoids stalling at the hands of vanishing gradients in a deep network, than to preserve the original IHT template analogous
to existing learning-based methods. It is here that we propose several deviations:
Multi-Label Classification Loss: We exploit the fact that in producing a maximally sparse vector x*, the main challenge is estimating supp[x*]. Once the support is obtained, computing the actual nonzero coefficients just boils down to solving a least squares problem. But any learning system will be unaware of this and could easily expend undue effort to match coefficient magnitudes at the expense of support recovery. Certainly the use of a data fit penalty of the form ‖y − Φx‖_2², as is adopted by nearly all sparse recovery algorithms, will expose us to this issue. Therefore we instead formulate sparse recovery as a multi-label classification problem. More specifically, instead of directly estimating x*, we attempt to learn s* = [s*_1, ..., s*_m]ᵀ, where s*_i equals the indicator function I[x*_i ≠ 0]. For this purpose we may then incorporate a traditional multi-label classification loss function via a final softmax output layer, which forces the network to only concern itself with learning support patterns. This substitution is further justified by the fact that even with traditional IHT, the support pattern will be accurately recovered before the iterations converge exactly to x*. Therefore we may expect that fewer layers (as well as training data) are required if all we seek is a support estimate, opening the door for weaker forms of supervision.
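As an illustration, the support-recovery objective amounts to one binary decision per dictionary column. The PyTorch sketch below uses a per-coordinate sigmoid cross-entropy, a common multi-label variant, in place of the softmax output layer mentioned above; it is our own illustration, not the paper's exact training head.

```python
import torch
import torch.nn.functional as F

def support_loss(logits, x_star):
    """Multi-label classification loss over support indicators.
    logits: (batch, m) network outputs; x_star: (batch, m) sparse targets."""
    s_star = (x_star != 0).float()   # s*_i = I[x*_i != 0]
    return F.binary_cross_entropy_with_logits(logits, s_star)
```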
Instruments for Avoiding Bad Local Solutions: Given that IHT can take many iterations to converge on challenging problems, we may expect that a relatively deep network structure will be
needed to obtain exact support recovery. We must therefore take care to avoid premature convergence to local minima or areas with vanishing gradient by incorporating several recent countermeasures proposed in the DNN community. For example, the adaptive variant of IHT described
previously is reminiscent of highway networks or LSTM cells, which have been proposed to allow longer range flow of gradient information to improve convergence through the use of gating
functions. An even simpler version of this concept involves direct, un-gated connections that allow
much deeper "residual" networks to be trained [10] (which is even suggestive of the residual factor
embedded in the original IHT iterations). We deploy this tool, along with batch-normalization [14]
to aid convergence, for our basic feedforward pipeline, along with an alternative structure based
on recurrent LSTM cells. Note that unfolded LSTM networks frequently receive a novel input for
every time step, whereas here y is applied unaltered at every layer (more on this in [26]). We also
replace the non-integrable hard-threshold operator with simple rectified linear (ReLU) units [17], which
are functionally equivalent to one-sided soft-thresholding; this convex selection likely reduces the
constellation of sub-optimal local minima during the training process.
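A hedged sketch of how one such layer might be assembled; the layer sizes, the placement of batch normalization, and the residual wiring are our assumptions rather than the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn

class UnfoldedResBlock(nn.Module):
    """One unfolded estimation layer: layer-specific linear filters of x and
    y, batch normalization, ReLU (one-sided soft-thresholding), and an
    un-gated residual connection to ease gradient flow through deep stacks."""
    def __init__(self, m, n):
        super().__init__()
        self.Psi = nn.Linear(m, m, bias=False)
        self.Gamma = nn.Linear(n, m, bias=False)
        self.bn = nn.BatchNorm1d(m)

    def forward(self, x, y):
        h = torch.relu(self.bn(self.Psi(x) + self.Gamma(y)))
        return x + h   # residual connection
```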
5 Experiments and Applications
Synthetic Tests with Correlated Dictionaries: We generate a dictionary matrix Φ ∈ R^{n×m} using Φ = Σ_{i=1}^{n} (1/i²) u_i v_iᵀ, where u_i ∈ R^n and v_i ∈ R^m have iid elements drawn from N(0, 1). We also
rescale each column of Φ to have unit ℓ2 norm. Φ generated in this way has super-linearly decaying singular values (indicating correlation between the columns) but is not constrained to any specific structure. Many dictionaries in real applications have such a property. As a basic experiment, we generate N = 700000 ground truth samples x* ∈ R^m by randomly selecting d nonzero entries, with nonzero amplitudes drawn iid from the uniform distribution U[−0.5, 0.5], excluding the interval [−0.1, 0.1] to avoid small, relatively inconsequential contributions to the support pattern. We then create y ∈ R^n via y = Φx*. As d increases, the estimation problem becomes more difficult. In fact, to guarantee success with such correlated data (and high RIP constant) requires evaluating on the order of (m choose n) linear systems of size n × n, which is infeasible even for small values, indicative of how challenging it can be to solve sparse inverse problems of any size. We set n = 20 and m = 100.
[Figure 1 plots omitted: support recovery accuracy (acc) versus cardinality d, comparing IHT, ISTA-Net, IHT-Net, Ours-Res and Ours-LSTM (left); the Res, NoRes, HardAct and LSLoss variants (middle); and the U2U/U2N/N2U/N2N train-test distribution pairs (right).]
Figure 1: Average support recovery accuracy. Left: Uniformly distributed nonzero elements. Middle: Different network variants. Right: Different training and testing distributions (LSTM-Net results).
We used N1 = 600000 samples for training and the remaining N2 = 100000 for testing. Echoing
our arguments in Section 4, we explored both a feedforward network with residual connections
[10] and a recurrent network with vanilla LSTM cells [12]. To evaluate the performance, we check
whether the d ground truth nonzeros are aligned with the predicted top-d values produced by our
network, a common all-or-nothing metric in the compressive sensing literature. Detailed network
design, optimization setup, and alternative metrics can be found in [26].
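In code, the all-or-nothing criterion is a comparison of index sets:

```python
import numpy as np

def exact_support_match(scores, x_star, d):
    """Returns 1.0 iff the top-d entries (by magnitude) of the network
    output coincide exactly with the d-element ground-truth support."""
    pred = set(np.argsort(-np.abs(scores))[:d])
    true = set(np.flatnonzero(x_star))
    return float(pred == true)
```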
Figure 1(left) shows comparisons against a battery of existing algorithms, both learning- and
optimization-based. These include standard ℓ1 minimization via ISTA iterations [2], IHT [3] (supplied with the ground truth number of nonzeros), an ISTA-based network [9], and an IHT-inspired network [23]. For both the ISTA- and IHT-based networks, we used the exact same training data described above. Note that given the correlated Φ matrix, the recovery performance of IHT, and to a lesser degree ℓ1 minimization using ISTA, is rather modest as expected given that the associated
RIP constant will be quite large by construction. In contrast our two methods achieve uniformly
higher accuracy, including over other learning-based methods trained with the same data. This improvement is likely the result of three significant factors: (i) Existing learning methods initialize
using weights derived from the original sparse estimation algorithms, but such an initialization will
be associated with locally optimal solutions in most cases with correlated dictionaries. (ii) As described in Section 3, constant weights across layers have limited capacity to unravel multi-resolution
dictionary structure, especially one that is not confined to only possess some low rank correlating
component. (iii) The quadratic loss function used by existing methods does not adequately focus
resources on the crux of the problem, which is accurate support recovery. In contrast we adopt
an initialization motivated by DNN-based training considerations, unique layer weights to handle a
multi-resolution dictionary, and a multi-label classification output layer to focus on support recovery.
To further isolate essential factors affecting performance, we next consider the following changes:
(1) We remove the residual connections from Res-Net. (2) We replace ReLU with hard-threshold
activations. In particular, we utilize the so-called HELUσ function introduced in [23], which is a continuous and piecewise linear approximation of the scalar hard-threshold operator. (3) We use a quadratic penalty layer instead of a multi-label classification loss layer, i.e., the loss function is changed to Σ_{i=1}^{N1} ‖a^(i) − y^(i)‖_2² (where a is the output of the last fully-connected layer) during
training. Figure 1(middle) displays the associated recovery percentages, where we observe that
in each case performance degrades. Without the residual design, and also with the inclusion of a
rigid, non-convex hard-threshold operator, local minima during training appear to be a likely culprit,
consistent with observations from [10]. Likewise, use of a least-squares loss function is likely to
over-emphasize the estimation of coefficient amplitudes rather than focusing on support recovery.
Finally, from a practical standpoint we may expect that the true amplitude distribution may deviate
at times from the original training set. To explore robustness to such mismatch, as well as different
amplitude distributions, we consider two sets of candidate data: the original data, and similarly-generated data but with the uniform distribution of nonzero elements replaced with Gaussians N(±0.3, 0.1), where the mean is selected with equal probability as either −0.3 or 0.3, thus avoiding tiny magnitudes with high probability. Figure 1(right) reports accuracies under different distributions for both training and testing, including mismatched cases. (The results are obtained using LSTM-Net, but the Res-Net showed a similar pattern.) The label "U2U" refers to training and testing with the uniformly distributed amplitudes, while "U2N" uses a uniform training set and a Gaussian test set. Analogous definitions apply for "N2N" and "N2U". In all cases we note that the performance is
[Figure 2 images omitted: surface normal error maps with a common color scale for (a) GT, (b) LS (E=12.1, T=4.1), (c) ℓ1 (E=7.1, T=33.7), and (d) Ours (E=1.5, T=1.2).]
Figure 2: Reconstruction error maps. Angular error in degrees (E) and runtime in seconds (T) are provided.
quite stable across training and testing conditions. We would argue that our recasting of the problem
as multi-label classification contributes, at least in part, to this robustness. The application example
described next demonstrates further tolerance of training-testing set mismatches.
Practical Application - Photometric Stereo: Suppose we have q observations of a given surface point from a Lambertian scene under different lighting directions. Then the resulting measurements from a standard calibrated photometric stereo design (linear camera response function, an orthographic camera projection, and known directional light sources), denoted o ∈ R^q, can be expressed as o = ρLn, where n ∈ R³ denotes the true 3D surface normal, each row of L ∈ R^{q×3} defines a lighting direction, and ρ is the diffuse albedo, acting here as a scalar multiplier [24]. If specular highlights, shadows, or other gross outliers are present, then the observations are more realistically modeled as o = ρLn + e, where e is an unknown sparse vector [13, 25]. It is apparent that, since n is unconstrained, e need not compensate for any component of o in the range of L. Given that null[Lᵀ] is the orthogonal complement to range[L], we may consider the following problem

    min_e ‖e‖_0   s.t.   Proj_{null[Lᵀ]}(o) = Proj_{null[Lᵀ]}(e),    (11)

which ultimately collapses to our canonical sparse estimation problem from (1), where lighting-hardware-dependent correlations may be unavoidable in the implicit dictionary.
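To see how the implicit dictionary arises, note that the projector onto null[Lᵀ] annihilates the Lambertian term. A numpy sketch (using a normal-equations projector for brevity; an SVD-based projector would be more stable numerically):

```python
import numpy as np

def nullspace_projector(L):
    """P = I - L (L^T L)^{-1} L^T projects onto null[L^T]. Applying P to
    o = rho*L*n + e removes the rho*L*n component, leaving the sparse
    outlier problem P o = P e, with P acting as the implicit dictionary."""
    q = L.shape[0]
    return np.eye(q) - L @ np.linalg.solve(L.T @ L, L.T)
```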
Following [13], we use 32-bit HDR gray-scale images of the object Bunny (256×256) with foreground masks under different lighting conditions whose directions, or rows of L, are randomly selected from a hemisphere with the object placed at the center. To apply our method, we first compute Φ using the appropriate projection operator derived from the lighting matrix L. As real-world
training data is expensive to acquire, we instead use weak supervision by synthetically generating a
training set as follows. First, we draw a support pattern for e randomly with cardinality d sampled
uniformly from the range [d1 , d2 ]. The values of d1 and d2 can be tuned in practice. Nonzero values
of e are assigned iid random values from a Gaussian distribution whose mean and variance are also
tunable. Beyond this, no attempt was made to match the true outlier distributions encountered in
applications of photometric stereo. Finally, for each e we can naturally compute observations via
the linear constraint in (11), which serve as candidate network inputs.
Given synthetic training data acquired in this way, we learn a network with the exact same structure
and optimization parameters as in Section 5; no application-specific tuning was introduced. We then
deploy the resulting network on the gray-scale Bunny images. For each surface point, we use our
DNN model to approximately solve (11). Since the network output will be a probability map for the
outlier support set instead of the actual values of e, we choose the 4 indices with the least probability
as inliers and use them to compute n via least squares.
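The final per-point computation is then a tiny least-squares solve; in this sketch, probs denotes the network's assumed per-observation outlier probabilities.

```python
import numpy as np

def estimate_normal(o, L, probs, num_inliers=4):
    """Select the observations least likely to be outliers and solve
    o ~ rho * L n in the least-squares sense; normalizing the solution
    absorbs the scalar albedo rho and returns the unit surface normal."""
    idx = np.argsort(probs)[:num_inliers]   # least-probable outliers
    b, *_ = np.linalg.lstsq(L[idx], o[idx], rcond=None)
    return b / np.linalg.norm(b)            # b = rho * n
```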
We compare our method against the baseline least squares estimate from [24] and ℓ1 norm minimization. We defer more quantitative comparisons to [26]. In Figure 2, we illustrate the recovered
surface normal error maps of the hardest case (fewest lighting directions). Here we observe that our
DNN estimates lead to far fewer regions of significant error and the runtime is orders of magnitude
faster. Overall though, this application example illustrates that weak supervision with mismatched
synthetic training data can, at least for some problem domains, be sufficient to learn a quite useful
sparse estimation DNN; here one that facilitates real-time 3D modeling in mobile environments.
Discussion: In this paper we have shown that deep networks with hand-crafted, multi-resolution
structure can provably solve certain specific classes of sparse recovery problems where existing
algorithms fail. However, much like CNN-based features can often outperform SIFT on many computer vision tasks, we argue that a discriminative approach can outperform manual structuring of
layers/iterations and compensate for dictionary coherence under more general conditions.
Acknowledgements: This work was done while the first author was an intern at Microsoft Research, Beijing. It is also funded by 973-2015CB351800, NSFC-61231010, NSFC-61527804,
NSFC-61421062, NSFC-61210005 and the MOE-Microsoft Key Laboratory, Peking University.
References
[1] S. Baillet, J.C. Mosher, and R.M. Leahy. Electromagnetic brain mapping. IEEE Signal Processing
Magazine, pages 14?30, Nov. 2001.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems.
SIAM J. Imaging Sciences, 2(1), 2009.
[3] T. Blumensath and M.E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3), 2009.
[4] T. Blumensath and M.E. Davies. Normalized iterative hard thresholding: Guaranteed stability and performance. IEEE J. Selected Topics Signal Processing, 4(2), 2010.
[5] E. Cand`es, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from
highly incomplete frequency information. IEEE Trans. Information Theory, 52(2):489?509, 2006.
[6] E. Cand`es and T. Tao. Decoding by linear programming. IEEE Trans. Information Theory, 51(12), 2005.
[7] S.F. Cotter and B.D. Rao. Sparse channel estimation via matching pursuit with application to equalization.
IEEE Trans. on Communications, 50(3), 2002.
[8] M.A.T. Figueiredo. Adaptive sparseness using Jeffreys prior. NIPS, 2002.
[9] K. Gregor and Y. LeCun. Learning fast approximations of sparse coding. In ICML, 2010.
[10] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. CVPR, 2016.
[11] J.R. Hershey, J. Le Roux, and F. Weninger. Deep unfolding: Model-based inspiration of novel deep
architectures. arXiv preprint arXiv:1409.2574v4, 2014.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8), 1997.
[13] S. Ikehata, D.P. Wipf, Y. Matsushita, and K. Aizawa. Robust photometric stereo using sparse regression.
In CVPR, 2012.
[14] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[15] U. Kamilov and H. Mansour. Learning optimal nonlinearities for iterative thresholding algorithms. arXiv
preprint arXiv:1512.04754, 2015.
[16] D.M. Malioutov, M. Çetin, and A.S. Willsky. Sparse signal reconstruction perspective for source localization with sensor arrays. IEEE Trans. Signal Processing, 53(8), 2005.
[17] V. Nair and G. Hinton. Rectified linear units improve restricted boltzmann machines. ICML, 2010.
[18] Y.C. Pati, R. Rezaiifar, and P.S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In 27th Asilomar Conference on Signals, Systems
and Computers, 1993.
[19] P. Sprechmann, A.M. Bronstein, and G. Sapiro. Learning efficient sparse and low rank models. IEEE
Trans. Pattern Analysis and Machine Intelligence, 37(9), 2015.
[20] R.K. Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. NIPS, 2015.
[21] R. Tibshirani. Regression shrinkage and selection via the lasso. J. of the Royal Statistical Society, 1996.
[22] J.A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, October 2004.
[23] Z. Wang, Q. Ling, and T. Huang. Learning deep ℓ0 encoders. arXiv preprint arXiv:1509.00153v2, 2015.
[24] R.J. Woodham. Photometric method for determining surface orientation from multiple images. Optical
Engineering, 19(1), 1980.
[25] L. Wu, A. Ganesh, B. Shi, Y. Matsushita, Y. Wang, and Y. Ma. Robust photometric stereo via low-rank
matrix completion and recovery. Asian Conference on Computer Vision, 2010.
[26] B. Xin, Y. Wang, W. Gao, and D. Wipf. Maximal sparsity with deep networks? arXiv preprint arXiv:1605.01636, 2016.
Computing and maximizing influence in linear
threshold and triggering models
Justin Khim
Department of Statistics
The Wharton School
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Varun Jog
Electrical & Computer Engineering Department
University of Wisconsin - Madison
Madison, WI 53706
[email protected]
Po-Ling Loh
Electrical & Computer Engineering Department
University of Wisconsin - Madison
Madison, WI 53706
[email protected]
Abstract
We establish upper and lower bounds for the influence of a set of nodes in certain
types of contagion models. We derive two sets of bounds, the first designed for
linear threshold models, and the second more broadly applicable to a general class
of triggering models, which subsumes the popular independent cascade models, as
well. We quantify the gap between our upper and lower bounds in the case of the
linear threshold model and illustrate the gains of our upper bounds for independent
cascade models in relation to existing results. Importantly, our lower bounds
are monotonic and submodular, implying that a greedy algorithm for influence
maximization is guaranteed to produce a maximizer within a $(1 - \frac{1}{e})$-factor of the
truth. Although the problem of exact influence computation is NP-hard in general,
our bounds may be evaluated efficiently. This leads to an attractive, highly scalable
algorithm for influence maximization with rigorous theoretical guarantees.
1 Introduction
Many datasets in contemporary scientific applications possess some form of network structure [20].
Popular examples include data collected from social media websites such as Facebook and Twitter [1],
or electrical recordings gathered from a physical network of firing neurons [22]. In settings involving
biological data, a common goal is to construct an abstract network representing interactions between
genes, proteins, or other biomolecules [8].
Over the last century, a vast body of work has been developed in the epidemiology literature to
model the spread of disease [10]. The most popular models include SI (susceptible, infected), SIS
(susceptible, infected, susceptible), and SIR (susceptible, infected, recovered), in which nodes may
infect adjacent neighbors according to a certain stochastic process. These models have recently been
applied to social network and viral marketing settings by computer scientists [6, 14]. In particular,
the notion of influence, which refers to the expected number of infected individuals in a network
at the conclusion of an epidemic spread, was studied by Kempe et al. [9]. However, determining
an influence-maximizing seed set of a certain cardinality was shown to be NP-hard; in fact, even
computing the influence exactly in certain simple models is #P-hard [3, 5]. Recent work in theoretical
computer science has therefore focused on maximizing influence up to constant factors [9, 2].
A series of recent papers [12, 21, 13] establish computable upper bounds on the influence when
information propagates in a stochastic manner according to an independent cascade model. In such a
model, the infection spreads in rounds, and each newly infected node may infect any of its neighbors
in the succeeding round. Central to the bounds is a matrix known as the hazard matrix, which encodes
the transmission probabilities across edges in the graph. A recent paper by Lee et al. [11] leverages
"sensitive" edges in the network to obtain tighter bounds via a conditioning argument. Such bounds
could be maximized to obtain a surrogate for the influence-maximizing set in the network; however,
the tightness of the proposed bounds is yet unknown. The independent cascade model may be viewed
as a special case of a more general triggering model, in which the infection status of each node in
the network is determined by a random subset of neighbors [9]. The class of triggering models also
includes another popular stochastic infection model known as the linear threshold model, and bounds
for the influence function in linear threshold models have been explored in an independent line of
work [5, 23, 4].
Naturally, one might wonder whether influence bounds might be derived for stochastic infection
models in the broader class of triggering models, unifying and extending the aforementioned results.
We answer this question affirmatively by establishing upper and lower bounds for the influence in
general triggering models. Our derived bounds are attractive for two reasons: First, we are able
to quantify the gap between our upper and lower bounds in the case of linear threshold models,
expressed in terms of properties of the graph topology and edge probabilities governing the likelihood
of infection. Second, maximizing a lower bound on the influence is guaranteed to yield a lower
bound on the true maximum influence in the graph. Furthermore, as shown via the theory of
submodular functions, the lower bounds in our paper may be maximized efficiently up to a constant-factor approximation via a greedy algorithm, leading to a highly scalable algorithm with provable
guarantees. To the best of our knowledge, the only previously established bounds for influence
maximization are those mentioned above for the special cases of independent cascade and linear
threshold models, and no theoretical or computational guarantees were known.
The remainder of our paper is organized as follows: In Section 2, we fix notation to be used in the
paper and describe the aforementioned infection models in greater detail. In Section 3, we establish
upper and lower bounds for the linear threshold model, which we extend to triggering models in
Section 4. Section 5 addresses the question of maximizing the lower bounds established in Sections 3
and 4, and discusses theoretical guarantees achievable using greedy algorithms and convex relaxations
of the otherwise intractable influence maximization problem. We report the results of simulations in
Section 6, and conclude the paper with a selection of open research questions in Section 7.
2 Preliminaries
In this section, we introduce basic notation and define the infection models to be analyzed in our
paper. The network of individuals is represented by a directed graph G = (V, E), where V is the
set of vertices and $E \subseteq V \times V$ is the set of edges. Furthermore, each directed edge (i, j) possesses a weight $b_{ij}$, whose interpretation changes with the specific model we consider. We denote the weighted adjacency matrix of G by $B = (b_{ij})$. We distinguish nodes as either being infected or uninfected based on whether or not the information contagion has reached them. Let $\bar A := V \setminus A$.
2.1 Linear threshold models
We first describe the linear threshold model, introduced by Kempe et al. [9]. In this model, the edge weights $(b_{ij})$ denote the influence that node i has on node j. The chance that a node is infected depends on two quantities: the set of infected neighbors at a particular time instant and a random node-specific threshold that remains constant over time. For each $i \in V$, we impose the condition $\sum_j b_{ji} \le 1$.
The thresholds $\{\theta_i : i \in V\}$ are i.i.d. uniform random variables on [0, 1]. Beginning from an initially infected set $A \subseteq V$, the contagion proceeds in discrete time steps, as follows: At every time step, each vertex i computes the total incoming weight from all infected neighbors, i.e., $\sum_{j \text{ infected}} b_{ji}$. If this quantity exceeds $\theta_i$, vertex i becomes infected. Once a node becomes infected, it remains infected for every succeeding time step. Note that the process necessarily stabilizes after at most |V| time steps. The expected size of the infection when the process stabilizes is known as the influence of A and is denoted by I(A). We may interpret the threshold $\theta_i$ as the level of immunity of node i.
Kempe et al. [9] established the monotonicity and submodularity of the function $I : 2^V \to \mathbb{R}$. As discussed in Section 5.1, these properties are key to the problem of influence maximization, which concerns maximizing I(A) for a fixed size of the set A. An important step used in describing the submodularity of I is the "reachability via live-edge paths" interpretation of the linear threshold model. Since this interpretation is also crucial to our analysis, we describe it below.
Reachability via live-edge paths: Consider the weighted adjacency matrix B of the graph. We create a subgraph of G by selecting a subset of "live" edges, as follows: Each vertex i designates at most one incoming edge as a live edge, with edge (j, i) being selected with probability $b_{ji}$. (No neighboring edge is selected with probability $1 - \sum_j b_{ji}$.) The "reach" of a set A is defined as the set of all vertices i such that a path exists from A to i consisting only of live edges. The distribution of the set of nodes infected in the final state of the threshold model is identical to the distribution of reachable nodes under the live-edge model, when both are seeded with the same set A.
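Both descriptions are easy to simulate; the following is a minimal sketch (B[j, i] holds $b_{ji}$, and the two estimators of I(A) agree in distribution, as stated above):

```python
import numpy as np
from collections import deque

def lt_influence(B, A, num_runs=50, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of I(A) under the threshold dynamic."""
    n, total = B.shape[0], 0
    for _ in range(num_runs):
        theta = rng.uniform(size=n)          # iid Uniform[0, 1] thresholds
        infected = np.zeros(n, dtype=bool)
        infected[list(A)] = True
        changed = True
        while changed:                       # stabilizes after at most n steps
            weight_in = B.T @ infected       # sum of b_{ji} over infected j
            newly = (~infected) & (weight_in > theta)
            changed = bool(newly.any())
            infected |= newly
        total += int(infected.sum())
    return total / num_runs

def live_edge_reach(B, A, rng=np.random.default_rng(0)):
    """Sample one live-edge subgraph and return the reach of A (same law)."""
    n = B.shape[0]
    children = [[] for _ in range(n)]
    for i in range(n):
        p = B[:, i]                          # p[j] = b_{ji}, sums to at most 1
        j = rng.choice(n + 1, p=np.append(p, 1.0 - p.sum()))
        if j < n:                            # index n stands for "no live edge"
            children[j].append(i)
    reached, queue = set(A), deque(A)
    while queue:                             # BFS along live edges out of A
        for i in children[queue.popleft()]:
            if i not in reached:
                reached.add(i)
                queue.append(i)
    return reached
```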
2.2 Independent cascade models
Kempe et al. [9] also analyzed the problem of influence maximization in independent cascade models,
a class of models motivated by interacting particle systems in probability theory [7, 15]. Similar to
the linear threshold model, the independent cascade model begins with a set A of initially infected
nodes in a directed graph G = (V, E), and the infection spreads in discrete time steps. If a vertex i
becomes infected at time t, it attempts to infect each uninfected neighbor j via the edge from i to j
at time t + 1. The entries (bij ) capture the probability that i succeeds in infecting j. This process
continues until no more infections occur, which again happens after at most |V | time steps.
The influence function for this model was also shown to be monotonic and submodular, where the
main step again relied on a "reachability via live-edge paths" model. In this case, the interpretation is straightforward: Given a graph G, every edge $(i, j) \in E$ is independently designated as a live edge
with probability bij . It is then easy to see that the reach of A again has the same distribution as the
set of infected nodes in the final state of the independent cascade model.
2.3 Triggering models
To unify the above models, Kempe et al. [9] introduced the "triggering model," which evolves as
follows: Each vertex i chooses a random subset of its neighbors as triggers, where the choice of
triggers for a given node is independent of the choice for all other nodes. If a node i is uninfected at
time t but a vertex in its trigger set becomes infected, vertex i becomes infected at time t + 1. Note
that the triggering model may be interpreted as a "reachability via live-edge paths" model if edge (j, i)
is designated as live when i chooses j to be in its trigger set. The entry bij represents the probability
that edge (i, j) is live. Clearly, the linear threshold and independent cascade models are special cases
of the triggering model when the distributions of the trigger sets are chosen appropriately.
2.4 Notation
Finally, we introduce some notational conventions. For a matrix $M \in \mathbb{R}^{n \times n}$, we write $\lambda(M)$ to denote the spectral radius of M. We write $\|M\|_{\infty,\infty}$ to denote the $\ell_\infty$-operator norm of M. The matrix Diag(M) denotes the matrix with diagonal entries equal to the diagonal entries of M and all other entries equal to 0. We write $\mathbf{1}_S$ to denote the all-ones vector supported on a set S.
For a given vertex subset $A \subseteq V$ in the graph with weighted adjacency matrix B, define the vector $b_{\bar A} \in \mathbb{R}^{|\bar A|}$, indexed by $i \in \bar A$, such that $b_{\bar A}(i) = \sum_{j \in A} b_{ji}$. Thus, $b_{\bar A}(i)$ records the total incoming weight from A into i. A walk in the graph G is a sequence of vertices $\{v_1, v_2, \ldots, v_r\}$ such that $(v_i, v_{i+1}) \in E$, for $1 \le i \le r - 1$. A path is a walk with no repeated vertices. We define the weight of a walk to be $\omega(w) := \prod_{e \in w} b_e$, where the product is over all edges $e \in E$ included in w. (The weight of a walk of length 0 is defined to be 1.) For a set of walks $W = \{w_1, w_2, \ldots, w_r\}$, we denote the sum of the weights of all walks in W by $\omega(W) = \sum_{i=1}^{r} \omega(w_i)$.
3 Influence bounds for linear threshold models
We now derive upper and lower bounds for the influence of a set $A \subseteq V$ in the linear threshold model.
3.1 Upper bound
We begin with upper bounds. We have the following main result, which bounds the influence as a
function of appropriate sub-blocks of the weighted adjacency matrix:
Theorem 1. For any set $A \subseteq V$, we have the bound
$$I(A) \le |A| + b_{\bar A}^T (I - B_{\bar A,\bar A})^{-1} \mathbf{1}_{\bar A}. \qquad (1)$$
In fact, the proof of Theorem 1 shows that the bound (1) may be strengthened to
$$I(A) \le |A| + b_{\bar A}^T \left( \sum_{i=1}^{n-|A|} B_{\bar A,\bar A}^{\,i-1} \right) \mathbf{1}_{\bar A}, \qquad (2)$$
since the upper bound is obtained by considering paths from vertices in A to vertices in $\bar A$ and summing over paths of various lengths (see also Theorem 4 below). The bound (2) is exact when the underlying graph G is a directed acyclic graph (DAG). However, the bound (1) may be preferable in some cases from the point of view of computation or interpretation.
3.2 Lower bounds
We also establish lower bounds on the influence. The following theorem provides a family of lower bounds, indexed by $m \ge 1$:
Theorem 2. For any $m \ge 1$, we have the following natural lower bound on the influence of A:
$$I(A) \ge \sum_{k=0}^{m} \omega(P_A^k), \qquad (3)$$
where $P_A^k$ is the set of all paths from A to $\bar A$ of length k, such that only the starting vertex lies in A. We note some special cases when the bounds may be written explicitly:
$$m = 1: \quad I(A) \ge |A| + b_{\bar A}^T \mathbf{1}_{\bar A} := LB_1(A), \qquad (4)$$
$$m = 2: \quad I(A) \ge |A| + b_{\bar A}^T (I + B_{\bar A,\bar A})\, \mathbf{1}_{\bar A} := LB_2(A), \qquad (5)$$
$$m = 3: \quad I(A) \ge |A| + b_{\bar A}^T \left( I + B_{\bar A,\bar A} + B_{\bar A,\bar A}^2 - \mathrm{Diag}(B_{\bar A,\bar A}^2) \right) \mathbf{1}_{\bar A}. \qquad (6)$$
Remark: As noted in Chen et al. [5], computing exact influence is #P-hard precisely because it is difficult to write down an expression for $\omega(P_A^k)$ for arbitrary values of k. When m > 3, we may use the techniques in Movarraei et al. [18, 16, 17] to obtain explicit lower bounds when $m \le 7$. Note that as m increases, the sequence of lower bounds approaches the true value of I(A).
The lower bound (4) has a very simple interpretation. When |A| is fixed, the function $LB_1(A)$ computes the aggregate weight of edges from A to $\bar A$. Furthermore, we may show that the function $LB_1$ is monotonic. Hence, maximizing $LB_1$ with respect to A is equivalent to finding a maximum cut in the directed graph. (For more details, see Section 5.) The lower bounds (5) and (6) also take into account the weight of paths of length 2 and 3 from A to $\bar A$.
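The closed-form bounds (4)-(6) are equally direct to evaluate; a minimal sketch:

```python
import numpy as np

def lower_bounds(B, A):
    """LB1, LB2, and the m = 3 bound from equations (4)-(6)."""
    n = B.shape[0]
    comp = [i for i in range(n) if i not in set(A)]
    b = B[np.ix_(list(A), comp)].sum(axis=0)   # b_Abar, incoming weight from A
    Bc = B[np.ix_(comp, comp)]                 # B_{Abar,Abar}
    one = np.ones(len(comp))
    eye = np.eye(len(comp))
    B2 = Bc @ Bc
    lb1 = len(A) + b @ one
    lb2 = len(A) + b @ (eye + Bc) @ one
    lb3 = len(A) + b @ (eye + Bc + B2 - np.diag(np.diag(B2))) @ one
    return lb1, lb2, lb3
```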
3.3 Closeness of bounds
A natural question concerns the proximity of the upper bound (1) to the lower bounds in Theorem 2.
The bounds may be far apart in general, as illustrated by the following example:
Example: Consider a graph G with vertex set $\{1, 2, \ldots, n\}$, and edge weights given by
$$w_{ij} = \begin{cases} 0.5, & \text{if } i = 1 \text{ and } j = 2, \\ 0.5, & \text{if } i = 2 \text{ and } 3 \le j \le n, \\ 0, & \text{otherwise.} \end{cases}$$
Let A = {1}. We may check that $LB_1(A) = 1.5$. Furthermore, $I(A) = \frac{n+2}{4}$, and any upper bound necessarily exceeds this quantity. Hence, the gap between the upper and lower bounds may grow linearly in the number of vertices. (Similar examples may be computed for $LB_2$, as well.)
The reason for the linear gap in the above example is that vertex 2 has a very large outgoing weight; i.e., it is highly infectious. Our next result shows that if the graph does not contain any highly-infectious vertices, the upper and lower bounds are guaranteed to differ by a constant factor. The result is stated in terms of the maximum row sum $\rho_{\bar A,\infty} = \|B_{\bar A,\bar A}\|_{\infty,\infty}$, which corresponds to the maximum outgoing weight of the nodes in $\bar A$.
Theorem 3. Suppose $\rho_{\bar A,\infty} < 1$. Then $\frac{UB}{LB_1} \le \frac{1}{1 - \rho_{\bar A,\infty}}$ and $\frac{UB}{LB_2} \le \frac{1}{1 - \rho_{\bar A,\infty}^2}$.
Since the column sums of B are bounded above by 1 in a linear threshold model, we have the
following corollary:
Corollary 1. Suppose B is symmetric and $A \subsetneq V$. Then $\frac{UB}{LB_1} \le \frac{1}{1 - \rho_{\bar A,\infty}}$ and $\frac{UB}{LB_2} \le \frac{1}{1 - \rho_{\bar A,\infty}^2}$.
Note that if $\rho_\infty = \|B\|_{\infty,\infty}$, we certainly have $\rho_{\bar A,\infty} \le \rho_\infty$ for any choice of $A \subseteq V$. Hence, Theorem 3 and Corollary 1 hold a fortiori with $\rho_{\bar A,\infty}$ replaced by $\rho_\infty$.
4 Influence bounds for triggering models
We now generalize our discussion to the broader class of triggering models. Recall that in this model,
bij records the probability that (i, j) is a live edge.
4.1 Upper bound
We begin by deriving an upper bound, which shows that inequality (2) holds for any triggering model:
Theorem 4. In a general triggering model, the influence of $A \subseteq V$ satisfies inequality (2).
The approach we use for general triggering models relies on slightly more sophisticated observations
than the proof for linear threshold models. Furthermore, the finite sum in inequality (2) may not
in general be replaced by an infinite sum, as in the statement of Theorem 1 for the case of linear threshold models. This is because if $\lambda(B_{\bar A,\bar A}) > 1$, the infinite series will not converge.
4.2 Lower bound
We also have a general lower bound:
Theorem 5. Let $A \subseteq V$. The influence of A satisfies the inequality
$$I(A) \ge \sum_{i \in V} \sup_{p \in P_{A \to i}} \omega(p) := LB_{trig}(A), \qquad (7)$$
where $P_{A \to i}$ is the set of all paths from A to i such that only the starting vertex lies in A.
The proof of Theorem 5 shows that the bound (7) is sharp when at most one path exists from A to
each vertex i. In the case of linear threshold models, the bound (7) is not directly comparable to
the bounds stated in Theorem 2, since it involves maximal-weight paths rather than paths of certain
lengths. Hence, situations exist in which one bound is tighter than the other, and vice versa (e.g., see
the Example in Section 3.3).
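Since every edge weight is a probability in (0, 1], the suprema in (7) can be computed exactly with a max-product variant of Dijkstra's algorithm (equivalently, shortest paths under the lengths −log b_e); this reduction is standard and is not an algorithm stated in the paper. A sketch:

```python
import heapq
import numpy as np

def lb_trig(B, A):
    """LB_trig(A): sum over i of the max-weight path from A to i (Theorem 5)."""
    n = B.shape[0]
    inA = np.zeros(n, dtype=bool)
    inA[list(A)] = True
    best = np.zeros(n)
    best[list(A)] = 1.0                  # the length-0 path at each seed has weight 1
    heap = [(-1.0, int(a)) for a in A]   # max-product Dijkstra via negated weights
    while heap:
        negw, j = heapq.heappop(heap)
        if -negw < best[j]:
            continue                     # stale heap entry
        for i in range(n):
            if inA[i] or B[j, i] <= 0.0:
                continue                 # only the starting vertex may lie in A
            w = -negw * B[j, i]
            if w > best[i]:
                best[i] = w
                heapq.heappush(heap, (-w, i))
    return float(best.sum())
```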
4.3 Independent cascade models
We now apply the general bounds obtained for triggering models to the case of independent cascade
models. Theorem 4 implies the following "worst-case" upper bounds on influence, which only depend on |A|:
Theorem 6. The influence of $A \subseteq V$ in an independent cascade model satisfies
$$I(A) \le |A| + \rho_\infty |A| \cdot \frac{1 - \rho_\infty^{\,n-|A|}}{1 - \rho_\infty}. \qquad (8)$$
In particular, if $\rho_\infty < 1$, we have
$$I(A) \le \frac{|A|}{1 - \rho_\infty}. \qquad (9)$$
Note that when $\rho_\infty > 1$, the bound (8) exceeds n for all large enough n, so the bound is trivial.
It is instructive to compare Theorem 6 with the results of Lemonnier et al. [13]. The hazard matrix of an independent cascade model with weighted adjacency matrix $(b_{ij})$ is defined by
$$H_{ij} = -\log(1 - b_{ij}), \quad \forall (i, j).$$
The following result is stated in terms of the spectral radius $\rho = \lambda\!\left(\frac{H + H^T}{2}\right)$:
Proposition 1 (Corollary 1 in Lemonnier et al. [13]). Let $A \subsetneq V$, and suppose $\rho < 1 - \gamma$, where $\gamma = \left( \frac{|A|}{4(n - |A|)} \right)^{1/3}$. Then $I(A) \le |A| + \sqrt{\frac{\rho}{1 - \rho}} \sqrt{|A|(n - |A|)}$.
As illustrated in the following example, the bound in Theorem 6 may be significantly tighter than the
bound provided in Proposition 1:
Example: Consider a directed Erdős–Rényi graph on n vertices, where each edge (i, j) is independently present with probability c/n. Suppose c < 1. For any set A, the bound (9) gives
$$I(A) \le \frac{|A|}{1 - c}. \qquad (10)$$
It is easy to check that $\lambda\!\left(\frac{H + H^T}{2}\right) = -(n - 1)\log\!\left(1 - \frac{c}{n}\right)$. For large values of n, we have $\lambda(H) \approx c < 1$, so Proposition 1 implies the (approximate) bound $I(A) \le |A| + \sqrt{\frac{c}{1 - c}}\sqrt{|A|(n - |A|)}$. In particular, this bound increases with n, unlike our bound (10). Although the example is specific to Erdős–Rényi graphs, we conjecture that whenever $\|B\|_{\infty,\infty} < 1$, the bound in Theorem 6 is tighter than the bound in Proposition 1.
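The comparison is easy to check numerically; a sketch using both formulas exactly as reconstructed above (the substitution ρ ≈ c is the large-n approximation from the text, so this is an approximate comparison, not a statement from the paper):

```python
import numpy as np

def bound_thm6(k, c):
    """Equation (10): independent of n."""
    return k / (1.0 - c)

def bound_prop1(k, n, c):
    """Proposition 1 with rho ~ c: grows like sqrt(n) for fixed k."""
    return k + np.sqrt(c / (1.0 - c)) * np.sqrt(k * (n - k))
```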
5 Maximizing influence
We now turn to the question of choosing a set $A \subseteq V$ of cardinality at most k that maximizes I(A).
5.1 Submodular maximization
We begin by reviewing the notion of submodularity, which will be crucial in our discussion of
influence maximization algorithms. We have the following definition:
Definition 1 (Submodularity). A set function $f : 2^V \to \mathbb{R}$ is submodular if either of the following equivalent conditions holds:
(i) For any two sets $S, T \subseteq V$,
$$f(S \cup T) + f(S \cap T) \le f(S) + f(T). \qquad (11)$$
(ii) For any two sets $S \subseteq T \subseteq V$ and any $x \notin T$, the following inequality holds:
$$f(T \cup \{x\}) - f(T) \le f(S \cup \{x\}) - f(S). \qquad (12)$$
The left and right sides of inequality (12) are the discrete derivatives of f evaluated at T and S.
Submodular functions arise in a wide variety of applications. Although submodular functions
resemble convex and concave functions, optimization may be quite challenging; in fact, many
submodular function maximization problems are NP-hard. However, positive submodular functions
may be maximized efficiently if they are also monotonic, where monotonicity is defined as follows:
Definition 2 (Monotonicity). A function $f : 2^V \to \mathbb{R}$ is monotonic if for any two sets $S \subseteq T \subseteq V$, $f(S) \le f(T)$.
Equivalently, a function is monotonic if its discrete derivative is nonnegative at all points.
We have the following celebrated result, which guarantees that the output of the greedy algorithm provides a $(1 - \frac{1}{e})$-approximation to the cardinality-constrained maximization problem:
Proposition 2 (Theorem 4.2 of Nemhauser and Wolsey [19]). Let $f : 2^V \to \mathbb{R}_+$ be a monotonic submodular function. For any $k \ge 0$, define $m^*(k) = \max_{|S| \le k} f(S)$. Suppose we construct a sequence of sets $\{S_0 = \emptyset, S_1, \ldots, S_k\}$ in a greedy fashion, such that $S_{i+1} = S_i \cup \{x\}$, where x maximizes the discrete derivative of f evaluated at $S_i$. Then $f(S_k) \ge (1 - \frac{1}{e})\, m^*(k)$.
5.2 Greedy algorithms
Kempe et al. [9] leverage Proposition 2 and the submodularity of the influence function to derive
guarantees for a greedy algorithm for influence maximization in the linear threshold model. However,
due to the intractability of exact influence calculations, each step of the greedy algorithm requires
approximating the influence of several augmented sets. This leads to an overall runtime of O(nk)
times the runtime for simulations and introduces an additional source of error.
As the results of this section establish, the lower bounds $\{LB_m\}_{m \ge 1}$ and $LB_{trig}$ appearing in Theorems 2 and 5 are also conveniently submodular, implying that Proposition 2 also applies when a greedy algorithm is employed. In contrast to the algorithm studied by Kempe et al. [9], however, our proposed greedy algorithms do not involve expensive simulations, since the functions $LB_m$ and $LB_{trig}$ are relatively straightforward to evaluate. This means the resulting algorithm is extremely fast
to compute even on large networks.
Theorem 7. The lower bounds $\{LB_m\}_{m \ge 1}$ are monotone and submodular. Thus, for any $k \le n$, a greedy algorithm that maximizes $LB_m$ at each step yields a $(1 - \frac{1}{e})$-approximation to $\max_{A \subseteq V : |A| \le k} LB_m(A)$.
Theorem 8. The function $LB_{trig}$ is monotone and submodular. Thus, for any $k \le n$, a greedy algorithm that maximizes $LB_{trig}$ at each step yields a $(1 - \frac{1}{e})$-approximation to $\max_{A \subseteq V : |A| \le k} LB_{trig}(A)$.
Note that maximizing $LB_m(A)$ or $LB_{trig}(A)$ necessarily provides a lower bound on $\max_{A \subseteq V} I(A)$.
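A sketch of the resulting procedure, which runs the greedy scheme of Proposition 2 on any surrogate score (for instance, LB1 or LB2 as computed above); by Theorems 7 and 8 this inherits the $(1 - \frac{1}{e})$ guarantee when the score is one of the lower bounds:

```python
def greedy_maximize(score, V, k):
    """Greedy maximization of a monotone submodular set function (Proposition 2)."""
    S = set()
    for _ in range(k):
        gains = {x: score(S | {x}) - score(S) for x in V if x not in S}
        S.add(max(gains, key=gains.get))   # add the element with the largest gain
    return S
```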
6 Simulations
In this section, we report the results of various simulations. In the first set of simulations, we generated an Erdős–Rényi graph with 900 vertices and edge probability 2/n; a preferential attachment graph with 900 vertices, 10 initial vertices, and 3 edges for each added vertex; and a 30 × 30 grid. We generated 33 instances of edge probabilities for each graph, as follows: For each instance and each vertex i, we chose θ(i) uniformly in [θ_min, 0.8], where θ_min ranged from 0.0075 to 0.75 in increments of 0.0075. The probability that each incoming edge was chosen was (1 − θ(i))/d(i), where d(i) is the degree of i. An initial infection set A of size 10 was chosen at random, and 50 simulations of the infection process were run to estimate the true influence. The upper and lower bounds and value of I(A) computed via simulations are shown in Figure 1. Note that the gap between the upper and lower bounds is indeed controlled for smaller values of $\rho_{\bar A,\infty}$, agreeing with the predictions of Theorem 3.
For the second set of simulations, we generated 10 of each of the following graphs: an Erdős–Rényi graph with 100 vertices and edge probability 2/n; a preferential attachment graph with 100 vertices, 10 initial vertices, and 3 additional edges for each added vertex; and a grid graph with 100 vertices. For each of the 10 realizations, we also picked a value of θ(i) for each vertex i uniformly in [0.075, 0.8]. The corresponding edge probabilities were assigned as before. We then selected sets A of size 10 using greedy algorithms to maximize LB1, LB2, and UB, as well as the estimated influence based on 50 simulated infections. Finally, we used 200 simulations to approximate the actual influence of each resulting set. The average influences, along with the average influence of a uniformly random subset of vertices of size 10, are plotted in Figure 2. Note that the greedy algorithms all perform comparably, although the sets selected using LB2 and UB appear slightly better. The fact that the algorithm that uses UB performs well is somewhat unsurprising, since it takes into account the influence from all paths. However, note that maximizing UB does not lead to the theoretical guarantees we have derived for LB1 and LB2. In Table 1, we report the runtimes scaled by the runtime of the LB1 algorithm. As expected, the LB1 algorithm is fastest, and the other algorithms may be much slower.
7 Discussion
We have developed novel upper and lower bounds on the influence function in various contagion
models, and studied the problem of influence maximization subject to a cardinality constraint. Note
that all of our methods may be extended via the conditional expectation decomposition employed
by Lee et al. [11], to obtain sharper influence bounds for certain graph topologies. It would be
interesting to derive theoretical guarantees for the quality of improvement in such cases; we leave this
[Figure 1: three panels, one each for an Erdős–Rényi graph, a preferential attachment graph, and a 2D grid, plotting the number of vertices infected (y-axis) against $\rho_{\bar A,\infty}$ (x-axis), with curves for LB1, LB2, the simulated influence, and UB.]
Figure 1: Lower bounds, upper bounds, and simulated influence for Erdős–Rényi, preferential attachment, and 2D-grid graphs, respectively. For small values of $\rho_{\bar A,\infty}$, our bounds are tight.
[Figure 2: three panels, one each for an Erdős–Rényi graph, a preferential attachment graph, and a 2D grid, plotting influence (y-axis) against the seed-set size |A| (x-axis), with curves for LB1, LB2, Simulation, UB, and Random.]
Figure 2: Simulated influence for sets |A| selected by greedy algorithms and uniformly at random on Erdős–Rényi, preferential attachment, and 2D-grid graphs, respectively. All greedy algorithms perform similarly, but the algorithms maximizing the simulated influence and UB are much more computationally intensive.
                          LB1     LB2     UB       Simulation
Erdős–Rényi               1.00    2.36    27.43    710.58
Preferential attachment   1.00    2.56    28.49    759.83
2D-grid                   1.00    2.43    47.08    1301.73
Table 1: Runtimes for the influence maximization algorithms, scaled by the runtime of the greedy LB1 algorithm. The corresponding lower bounds are much easier to compute, allowing for faster algorithms.
exploration for future work. Other open questions involve quantifying the gap between the upper and
lower bounds derived in the case of general triggering models, and obtaining theoretical guarantees
for non-greedy algorithms in our lower bound maximization problem.
References
[1] L. A. Adamic and E. Adar. Friends and neighbors on the Web. Social Networks, 25(3):211–230, 2003.
[2] C. Borgs, M. Brautbar, J. Chayes, and B. Lucier. Maximizing social influence in nearly optimal time. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 946–957. SIAM, 2014.
[3] W. Chen, C. Wang, and Y. Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1029–1038. ACM, 2010.
[4] W. Chen, Y. Wang, and S. Yang. Efficient influence maximization in social networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 199–208. ACM, 2009.
[5] W. Chen, Y. Yuan, and L. Zhang. Scalable influence maximization in social networks under the linear threshold model. In Proceedings of the 2010 IEEE International Conference on Data Mining, ICDM '10, pages 88–97, Washington, DC, USA, 2010. IEEE Computer Society.
[6] P. Domingos and M. Richardson. Mining the network value of customers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 57–66. ACM, 2001.
[7] R. Durrett. Lecture Notes on Particle Systems and Percolation. Wadsworth & Brooks/Cole
statistics/probability series. Wadsworth & Brooks/Cole Advanced Books & Software, 1988.
[8] M. Hecker, S. Lambeck, S. Toepfer, E. Van Someren, and R. Guthke. Gene regulatory network inference: data integration in dynamic models – a review. Biosystems, 96(1):86–103, 2009.
[9] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, pages 137–146, New York, NY, USA, 2003. ACM.
[10] W. O. Kermack and A. G. McKendrick. A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 115(772):700–721, 1927.
[11] E. J. Lee, S. Kamath, E. Abbe, and S. R. Kulkarni. Spectral bounds for independent cascade model with sensitive edges. In 2016 Annual Conference on Information Science and Systems (CISS), pages 649–653, March 2016.
[12] R. Lemonnier, K. Scaman, and N. Vayatis. Tight bounds for influence in diffusion networks and application to bond percolation and epidemiology. In Advances in Neural Information Processing Systems, pages 846–854, 2014.
[13] R. Lemonnier, K. Scaman, and N. Vayatis. Spectral Bounds in Random Graphs Applied to
Spreading Phenomena and Percolation. ArXiv e-prints, March 2016.
[14] J. Leskovec, L. A. Adamic, and B. A. Huberman. The dynamics of viral marketing. ACM
Transactions on the Web (TWEB), 1(1):5, 2007.
[15] T. Liggett. Interacting Particle Systems, volume 276. Springer Science & Business Media,
2012.
[16] N. Movarraei and S. Boxwala. On the number of paths of length 5 in a graph. International Journal of Applied Mathematical Research, 4(1):30–51, 2015.
[17] N. Movarraei and S. Boxwala. On the number of paths of length 6 in a graph. International Journal of Applied Mathematical Research, 4(2):267–280, 2015.
[18] N. Movarraei and M. Shikare. On the number of paths of lengths 3 and 4 in a graph. International Journal of Applied Mathematical Research, 3(2):178–189, 2014.
[19] G. L. Nemhauser and L. A. Wolsey. Best algorithms for approximating the maximum of a submodular set function. Mathematics of Operations Research, 3(3):177–188, 1978.
[20] M. E. J. Newman. The structure and function of complex networks. SIAM Review, 45(2):167–256, 2003.
[21] K. Scaman, R. Lemonnier, and N. Vayatis. Anytime influence bounds and the explosive behavior of continuous-time diffusion networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
[22] O. Sporns. The human connectome: A complex network. Annals of the New York Academy of Sciences, 1224(1):109–125, 2011.
[23] C. Zhou, P. Zhang, J. Guo, and L. Guo. An upper bound based greedy algorithm for mining top-k influential nodes in social networks. In Proceedings of the 23rd International Conference on World Wide Web, pages 421–422. ACM, 2014.
Finite-Dimensional BFRY Priors and Variational
Bayesian Inference for Power Law Models
Juho Lee
POSTECH, Korea
[email protected]
Lancelot F. James
HKUST, Hong Kong
[email protected]
Seungjin Choi
POSTECH, Korea
[email protected]
Abstract
Bayesian nonparametric methods based on the Dirichlet Process (DP), gamma process and beta process, have proven effective in capturing aspects of various datasets
arising in machine learning. However, it is now recognized that such processes
have their limitations in terms of the ability to capture power law behavior. As such
there is now considerable interest in models based on the Stable Process (SP),
Generalized Gamma process (GGP) and Stable-Beta Process (SBP). These models
present new challenges in terms of practical statistical implementation. In analogy
to tractable processes such as the finite-dimensional Dirichlet process, we describe
a class of random processes, which we call iid finite-dimensional BFRY processes, that
enables one to begin to develop efficient posterior inference algorithms such as
variational Bayes that readily scale to massive datasets. For illustrative purposes,
we describe a simple variational Bayes algorithm for normalized SP mixture models, and demonstrate its usefulness with experiments on synthetic and real-world
datasets.
1 Introduction
Bayesian non-parametric ideas have played a major role in various intricate applications in statistics
and machine learning. The Dirichlet process (DP) [1], due to its remarkable properties and relative
tractability, has been the primary choice for many applications. It has also inspired the development of
variants such as the HDP [2] which can be seen as an infinite-dimensional extension of latent Dirichlet
allocation [3]. While there are many possible descriptions of a DP, a most intuitive one is its view as
the limit, as $K \to \infty$, of a finite-dimensional Dirichlet process, $P_K(A) = \sum_{k=1}^{K} D_k \mathbb{I}_{\{V_k \in A\}}$, where one can take $(D_1, \ldots, D_K)$ to be a K-variate symmetric Dirichlet vector on the (K − 1)-simplex with parameters $(\theta/K, \ldots, \theta/K)$, for $\theta > 0$, and $\{V_k\}$ are an arbitrary i.i.d. sequence of variables over a space $\Omega$, with law $H(A) = \Pr(V_k \in A)$. Multiplying by $G_\theta$, an independent Gamma($\theta$, 1) variable, leads to a finite-dimensional Gamma process $\mu_K(A) = \sum_{k=1}^{K} G_k \mathbb{I}_{\{V_k \in A\}} := G_\theta P_K(A)$, where $\{G_k\}$ are i.i.d. Gamma($\theta/K$, 1) variables, and one may set $\mu_K(\Omega) = G_\theta$. It was shown that $\lim_{K \to \infty} (P_K, \mu_K) \overset{d}{=} (\tilde{F}_{0,\theta}, \tilde{\mu}_{0,\theta})$, where the limits correspond to a DP and a Gamma process (GP) [4]. While $(P_K, \mu_K)$ are often viewed as approximations to the DP and GP, the works of [5, 6, 7] and references therein demonstrate the general utility of these models.
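As a sketch, the pair $(P_K, \mu_K)$ is trivial to simulate, which is exactly the tractability that the ideal processes defined below are meant to preserve (`base_sampler` stands in for drawing atoms from H; the names are illustrative):

```python
import numpy as np

def finite_dim_gamma_process(K, theta, base_sampler, rng=np.random.default_rng(0)):
    """Atoms and jumps of mu_K; normalizing the jumps gives the finite-dim DP P_K."""
    G = rng.gamma(theta / K, 1.0, size=K)  # iid Gamma(theta/K, 1) jumps
    V = base_sampler(K, rng)               # iid atom locations drawn from H
    D = G / G.sum()                        # symmetric Dirichlet(theta/K, ..., theta/K)
    return V, G, D

# example with H a standard normal base measure on the real line
V, G, D = finite_dim_gamma_process(1000, theta=2.0,
                                   base_sampler=lambda K, r: r.normal(size=K))
```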
The relationship between the GP and DP shows that the GP is a more flexible random process. This
is borne out by its recognized applicability for a wider range of data structures. As such, it suffices
to focus on ?K as a tractable instance of what we refer to as an i.i.d. finite-dimensional process. In
PK
general, we say a random process, ?K (?) := k=1 Jk ?Vk , is an i.i.d. finite-dimensional process if
d
[(i)] For each fixed K, (J1 , . . . , JK ) are i.i.d. random variables [(ii)] limK?? ?K = ?
?, where ?
? is a
completely random measure (CRM) [8]. In fact, from [9] (Theorem 14), it follows that if the limit
exists ?
? must be a CRM and therefore T := ?
?(?) < ? is a non-negative infinitely divisible random
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
variable. On the other hand, it is important to note that $\{J_k\}$ and $T_K = \mu_K(\Omega) = \sum_{k=1}^K J_k$ need
not be infinitely divisible. We also point out there are many constructions of $\mu_K$ that converge to
the same $\tilde{\mu}$. According to [4], for every CRM $\tilde{\mu}$ one can construct special cases of $\mu_K$ that always
converge as follows: let $(C_1, \dots, C_K)$ denote a disjoint partition of $\Omega$ such that $H(C_k) = 1/K$ for
$k = 1, \dots, K$; then one can set $J_k \overset{d}{=} \tilde{\mu}(C_k)$, where the $\{J_k\}$ are i.i.d. infinitely divisible variables
and $T_K \overset{d}{=} T$. For reference we shall call such $\mu_K$ finite-dimensional Kingman processes or simply
Kingman processes. It is clear that the finite-dimensional gamma process satisfies such a construction
with $J_k = G_k$ and $T_K = G_\alpha$. However, the nice tractable features of this special case do not carry
over in general. This is because there are many cases where the distribution of $\tilde{\mu}(C_k)$ is
not tractable, either in the sense of not being easily simulated or of not having a relatively simple density.
The latter is of particular importance if one wants to develop inferential techniques for
CRM models that scale readily to large or massive data sets. An example of this would be variational
Bayes type methods, which would otherwise be well suited to the i.i.d. based models [10]. As such,
we consider a finer class of i.i.d. finite-dimensional processes as follows: we say $\mu_K$ is ideal if, in
addition to (i) and (ii), it satisfies (iii) the $J_k$ are easily simulated, and (iv) the density of $J_k$ has
an explicit closed form suitable for the application of techniques such as variational Bayes. We do not
attempt to specify any formal structure on what we mean by ideal, except to note that one can easily
recognize a choice of $\mu_K$ that is not ideal.
Our focus in this paper is not to explore the generalities of finite-dimensional processes. Rather,
it is to identify specific ideal processes which are suitable for the important cases where $\tilde{\mu}$ is a Stable
process (SP) or Generalized Gamma process (GGP). Furthermore, by a simple transformation we
can construct processes that have behaviour similar to a Stable-Beta process (SBP). The SP, GGP,
SBP, and processes constructed from them, are now regarded as important alternatives to the DP,
GP and beta process (BP), as they, unlike the (DP, GP, BP), are better able to capture the power-law
behavior inherent in many types of datasets [11, 12, 13, 14, 15]. Unfortunately, Kingman processes
based on SP, GGP or SBP are clearly not ideal. Indeed, if one considers for $0 < \sigma < 1$, $T = S_\sigma$
a positive stable random variable with density $f_\sigma$, then the corresponding stable process $\tilde{\mu}_{\sigma,0}$ is
such that $J_k \overset{d}{=} \tilde{\mu}_{\sigma,0}(C_k) \overset{d}{=} K^{-1/\sigma} S_\sigma$. While it is fairly easy to sample $S_\sigma$ and hence each $J_k$, it is
well known that the density $f_\sigma$ does not in general have a tractable form. Things become worse in
the GGP setting, as the relevant density is formed by exponentially tilting the density $f_\sigma$. Finally, it is
not clear from the literature how to sample $T$ for SBP, much less obtain a simple form for its
corresponding density. Here we construct ideal processes based on various manipulations of a
class of $\mu_K$ we call finite-dimensional BFRY [16] processes. We note that BFRY random variables
appear in machine learning contexts in recent work [17], albeit in a very different role.
Based on finite-dimensional BFRY processes, we provide simple variational Bayes algorithms for
mixture models based on the normalized SP and GGP. We also derive collapsed variational Bayes
algorithms where the jumps are marginalized out. We demonstrate the effectiveness of our approach
on both synthetic and real-world datasets. Our intent here is to demonstrate how these processes can
be used within the context of variational inference. This in turn hopefully helps to elucidate how to
implement such procedures, or other inference techniques that benefit from explicit densities, such as
hybrid Monte Carlo [18] or stochastic gradient MCMC algorithms [19].
2 Background

2.1 Completely random measures and Laplace functionals
Let $(\Omega, \mathcal{F})$ be a measurable space. A random measure $\mu$ on $\Omega$ is completely random [8] if for any
disjoint $A, B \in \mathcal{F}$, $\mu(A)$ and $\mu(B)$ are independent. It is known that any CRM can be written as
the sum of a deterministic measure, a measure with fixed atoms, and a random measure represented
as a linear functional of a Poisson process [8]. In this paper, we focus on CRMs with only the
third component. Let $N$ be a Poisson process on $\mathbb{R}_+ \times \Omega$ with mean intensity decomposed as
$\nu(ds, d\omega) = \rho(ds)H(d\omega)$. A realization of $N$ and the corresponding CRM is written as
$$N = \sum_{k=1}^{N(\mathbb{R}_+ \times \Omega)} \delta_{(s_k, \omega_k)}, \qquad \mu = \int_0^\infty \int_\Omega s\, N(ds, d\omega) = \sum_{k=1}^{N(\mathbb{R}_+ \times \Omega)} s_k \delta_{\omega_k}. \qquad (1)$$
We refer to $\rho$ as the Lévy measure of $\mu$ and $H$ as the base measure, and write $\mu \sim \mathrm{CRM}(\rho, H)$.
Examples of CRMs include the gamma process $\mathrm{GP}(\alpha, H)$ with Lévy measure $\rho(ds) = \alpha s^{-1} e^{-s}\, ds$
and the beta process $\mathrm{BP}(c, \alpha, H)$ with Lévy measure $\rho(du) = \alpha c u^{-1}(1-u)^{c-1} \mathbb{I}\{0 \le u \le 1\}\, du$. Stable,
generalized gamma, and stable-beta processes are also CRMs, and we will discuss them later.
A CRM is identified by its Laplace functional, just as a random variable is identified by its characteristic function [20]. For a random measure $\mu$ and a measurable function $f$, the Laplace functional
$L_\mu(f)$ is defined as
$$L_\mu(f) := \mathbb{E}[e^{-\mu(f)}], \qquad \mu(f) := \int_\Omega f(\omega)\, \mu(d\omega). \qquad (2)$$
When $\mu \sim \mathrm{CRM}(\rho, H)$, the Laplace functional can be computed using the following theorem.

Theorem 1. (Lévy–Khintchine Formula [21]) For $\mu \sim \mathrm{CRM}(\rho, H)$ and measurable $f$ on $\Omega$,
$$L_\mu(f) = \exp\left(-\int_\Omega \int_0^\infty \big(1 - e^{-s f(\omega)}\big)\, \rho(ds) H(d\omega)\right). \qquad (3)$$

2.2 Stable and related processes
A Stable Process $\mathrm{SP}(\alpha, \sigma, H)$ is a CRM with Lévy measure
$$\rho(ds) = \frac{\alpha}{\Gamma(1-\sigma)}\, s^{-\sigma-1}\, ds, \qquad (4)$$
and a Generalized Gamma Process $\mathrm{GGP}(\alpha, \sigma, \tau, H)$ is a CRM with Lévy measure
$$\rho(ds) = \frac{\alpha}{\Gamma(1-\sigma)}\, s^{-\sigma-1} e^{-\tau s}\, ds, \qquad (5)$$
where $\alpha > 0$, $0 < \sigma < 1$, and $\tau > 0$.
GGP is general in the sense that we can obtain many other processes from it. For example, by letting
$\sigma \to 0$ we get GP, and by setting $\tau = 0$ we get SP. Furthermore, while it is well known that the
Pitman–Yor process (see [22] and [23]) can be derived from SP, there is also a construction based
on GGP as follows. In particular, as a consequence of ([23], Proposition 21), if we randomize $\alpha \sim
\mathrm{Gamma}(\alpha_0/\sigma, 1)$ in SP and normalize the jumps, then we get the Pitman–Yor process $\mathrm{PYP}(\alpha_0, \sigma)$
for $\alpha_0 > 0$. The jumps of SP and GGP are known to be heavy-tailed, and this results in the power-law
behaviour of data drawn from models having those processes as priors.
The stable beta process $\mathrm{SBP}(\alpha, \sigma, c, H)$ is a CRM with Lévy measure
$$\rho(du) = \frac{\alpha\,\Gamma(1+c)}{\Gamma(1-\sigma)\Gamma(c+\sigma)}\, u^{-\sigma-1}(1-u)^{c+\sigma-1}\, \mathbb{I}\{0 \le u \le 1\}\, du, \qquad (6)$$
where $\alpha > 0$, $0 < \sigma < 1$, and $c > -\sigma$. SBP can be viewed as a heavy-tailed extension of BP, and the
special case of $c = 0$ can be obtained by applying the transformation $u = s/(s+1)$ in SP.
2.3 BFRY distributions

The BFRY distribution with parameter $0 < \sigma < 1$, written as $\mathrm{BFRY}(\sigma)$, is a random variable with
density
$$g_\sigma(s) = \frac{\sigma}{\Gamma(1-\sigma)}\, s^{-\sigma-1}\big(1 - e^{-s}\big). \qquad (7)$$
We can simulate $S \sim \mathrm{BFRY}(\sigma)$ with $S \overset{d}{=} G/B$, where $G \sim \mathrm{Gamma}(1-\sigma, 1)$ and $B \sim \mathrm{Beta}(\sigma, 1)$.
One can easily verify this by computing the density of the ratio distribution.
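Indeed, the verification is a short calculation: with $G \sim \mathrm{Gamma}(1-\sigma, 1)$ and $B \sim \mathrm{Beta}(\sigma, 1)$ independent, the density of $S = G/B$ is
$$f_{G/B}(s) = \int_0^1 f_G(sb)\, b\, f_B(b)\, db = \frac{\sigma s^{-\sigma}}{\Gamma(1-\sigma)} \int_0^1 e^{-sb}\, db = \frac{\sigma}{\Gamma(1-\sigma)}\, s^{-\sigma-1}\big(1 - e^{-s}\big) = g_\sigma(s).$$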
The name BFRY was coined in [16] after the work of Bertoin, Fujita, Roynette, and Yor [24],
who obtained explicit descriptions of the infinitely divisible random variable and subordinator
corresponding to the density. However, the density arises much earlier, and can be found in a variety
of contexts, for instance in [23] (Proposition 12, Corollary 13, and Eq. (124)) and [25]. See
[17] for the use of BFRY distributions to induce closed-form Indian buffet process type generative
processes that have a type III power-law behaviour.
We also explain some variations of BFRY distributions needed for the construction of finite-dimensional BFRY processes for SP and GGP. First, we can scale the BFRY random variables
by some scale $c > 0$. In that case, we write $S \sim \mathrm{BFRY}(c, \sigma)$, and the density is given as
$$g_{c,\sigma}(s) = \frac{c}{\Gamma(1-\sigma)}\, s^{-\sigma-1}\Big(1 - e^{-(\sigma/c)^{1/\sigma} s}\Big). \qquad (8)$$
We can easily sample $S \sim \mathrm{BFRY}(c, \sigma)$ as $S \overset{d}{=} (\sigma/c)^{-1/\sigma}\, T$ where $T \sim \mathrm{BFRY}(\sigma)$. We can also
exponentially tilt the scaled BFRY random variable, with a parameter $\tau > 0$. For that we write
$S \sim \mathrm{BFRY}(c, \sigma, \tau)$, and the density is given as
$$g_{c,\sigma,\tau}(s) = \frac{\sigma\, s^{-\sigma-1} e^{-\tau s}\Big(1 - e^{-(\sigma/c)^{1/\sigma} s}\Big)}{\Gamma(1-\sigma)\left\{\big(\tau + (\sigma/c)^{1/\sigma}\big)^\sigma - \tau^\sigma\right\}}. \qquad (9)$$
We can simulate $S \sim \mathrm{BFRY}(c, \sigma, \tau)$ as $S \overset{d}{=} GT$, where $G \sim \mathrm{Gamma}(1-\sigma, 1)$ and $T$ is a random
variable with density
$$h(t) = \frac{\sigma\, t^{-\sigma-1}}{\big(\tau + (\sigma/c)^{1/\sigma}\big)^\sigma - \tau^\sigma}\; \mathbb{I}\left\{\big(\tau + (\sigma/c)^{1/\sigma}\big)^{-1} \le t \le \tau^{-1}\right\}, \qquad (10)$$
which can easily be sampled using inverse transform sampling.
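To make the sampling recipes concrete, here is a minimal sketch in Python/NumPy of the three samplers (the function names are ours, for illustration). The $\mathrm{BFRY}(c,\sigma,\tau)$ sampler uses the closed-form inverse CDF of $h$ on its support $[(\tau + (\sigma/c)^{1/\sigma})^{-1},\, \tau^{-1}]$:

```python
import numpy as np

def sample_bfry(sigma, size=1, rng=None):
    # S ~ BFRY(sigma) via S = G / B, with G ~ Gamma(1 - sigma, 1), B ~ Beta(sigma, 1)
    rng = rng or np.random.default_rng()
    return rng.gamma(1.0 - sigma, 1.0, size) / rng.beta(sigma, 1.0, size)

def sample_bfry_scaled(c, sigma, size=1, rng=None):
    # S ~ BFRY(c, sigma) via S = (sigma / c)^(-1/sigma) * T, with T ~ BFRY(sigma)
    return (sigma / c) ** (-1.0 / sigma) * sample_bfry(sigma, size, rng)

def sample_bfry_tilted(c, sigma, tau, size=1, rng=None):
    # S ~ BFRY(c, sigma, tau) via S = G * T, with G ~ Gamma(1 - sigma, 1) and
    # T drawn from h(t) in Eq. (10) by inverse transform sampling
    rng = rng or np.random.default_rng()
    G = rng.gamma(1.0 - sigma, 1.0, size)
    a = 1.0 / (tau + (sigma / c) ** (1.0 / sigma))  # lower endpoint of the support of h
    b = 1.0 / tau                                   # upper endpoint of the support of h
    U = rng.uniform(size=size)
    # CDF of h on [a, b]: F(t) = (a^{-sigma} - t^{-sigma}) / (a^{-sigma} - b^{-sigma})
    T = (a ** -sigma - U * (a ** -sigma - b ** -sigma)) ** (-1.0 / sigma)
    return G * T
```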
3 Main Contributions

3.1 A motivating example
Before we jump into our method, we first revisit an example of ideal finite-dimensional processes.
Inspired by the constructions of DP and GP, the Indian buffet process (IBP, [26]) was developed as a
model for feature selection, by considering the limit $K \to \infty$ of an $M \times K$ binary matrix whose entries $\{Z_{m,k}\}$ are conditionally independent $\mathrm{Bern}(U_k)$ variables, where $\{U_k\}$ are i.i.d. $\mathrm{Beta}(\alpha/K, 1)$
variables. Although not explicitly described as such, this leads to the notion of a finite-dimensional
beta process $\mu_K = \sum_{k=1}^K U_k \delta_{V_k}$. In [26], IBP was obtained as the limit of the marginal distribution where $\mu_K$ was marginalized out, and this result coupled with [27] shows indirectly that
$\lim_{K\to\infty} \mu_K \to \mu \sim \mathrm{BP}(\alpha, H)$. Here, we give another proof of this convergence, by inspecting the
Laplace functional of $\mu_K$. The Laplace functional of $\mu_K$ is computed as follows:
$$L_{\mu_K}(f) = \mathbb{E}[e^{-\mu_K(f)}] = \left(\int_\Omega \int_0^1 \frac{\alpha}{K}\, u^{\frac{\alpha}{K}-1} e^{-u f(\omega)}\, du\, H(d\omega)\right)^K$$
$$= \left(1 - \frac{1}{K}\int_\Omega \int_0^1 \alpha\, u^{\frac{\alpha}{K}-1}\big(1 - e^{-u f(\omega)}\big)\, du\, H(d\omega)\right)^K. \qquad (11)$$
Since $u^{\alpha/K}$ is bounded by 1, the bounded convergence theorem implies
$$\lim_{K\to\infty} L_{\mu_K}(f) = \exp\left(-\int_\Omega \int_0^\infty \big(1 - e^{-u f(\omega)}\big)\, \alpha u^{-1}\, \mathbb{I}\{0 \le u \le 1\}\, du\, H(d\omega)\right), \qquad (12)$$
which exactly matches the Laplace functional of $\mu$ computed by Eq. (3). In contrast to the marginal
likelihood arguments, our proof illustrates the direct relationship between the random measures
and suggests a blueprint that can be applied to other CRMs. Note that the finite-dimensional beta
process is not a Kingman process, since the beta variables are not infinitely divisible and the total
mass $T$ is a Dickman variable. We can also apply our argument to the case of the finite-dimensional
gamma process, the proof of which is given in our supplementary material.
3.2 Finite-dimensional BFRY processes

Inspired by the finite-dimensional beta and gamma process examples, we propose finite-dimensional
BFRY processes, which converge to SP, GGP, and SBP as $K \to \infty$.
Theorem 2. (Finite-dimensional BFRY processes)
(i) Let $\mu \sim \mathrm{SP}(\alpha, \sigma, H)$. Construct $\mu_K$ as follows:
$$J_1, \dots, J_K \overset{\text{i.i.d.}}{\sim} \mathrm{BFRY}(\alpha/K, \sigma), \qquad V_1, \dots, V_K \overset{\text{i.i.d.}}{\sim} H, \qquad \mu_K = \sum_{k=1}^K J_k \delta_{V_k}. \qquad (13)$$
(ii) Let $\mu \sim \mathrm{GGP}(\alpha, \sigma, \tau, H)$. Construct $\mu_K$ as follows:
$$J_1, \dots, J_K \overset{\text{i.i.d.}}{\sim} \mathrm{BFRY}(\alpha/K, \sigma, \tau), \qquad V_1, \dots, V_K \overset{\text{i.i.d.}}{\sim} H, \qquad \mu_K = \sum_{k=1}^K J_k \delta_{V_k}. \qquad (14)$$
(iii) Let $\mu \sim \mathrm{SBP}(\alpha, \sigma, 0, H)$. Construct $\mu_K$ as follows:
$$S_1, \dots, S_K \overset{\text{i.i.d.}}{\sim} \mathrm{BFRY}(\alpha/K, \sigma), \qquad V_1, \dots, V_K \overset{\text{i.i.d.}}{\sim} H,$$
$$J_k = \frac{S_k}{S_k + 1} \text{ for } k = 1, \dots, K, \qquad \mu_K = \sum_{k=1}^K J_k \delta_{V_k}. \qquad (15)$$
For all three cases, $\lim_{K\to\infty} L_{\mu_K}(f) = L_\mu(f)$ for an arbitrary measurable $f$.
Proof. We first provide a proof for the SP case (i); the proof for GGP (ii) is almost identical. The
Laplace functional of $\mu_K$ is written as
$$L_{\mu_K}(f) = \left(\int_\Omega \int_0^\infty e^{-s f(\omega)}\, \frac{\alpha}{K\Gamma(1-\sigma)}\, s^{-\sigma-1}\Big(1 - e^{-(\sigma K/\alpha)^{1/\sigma} s}\Big)\, ds\, H(d\omega)\right)^K$$
$$= \left(1 - \frac{1}{K}\int_\Omega \int_0^\infty \frac{\alpha}{\Gamma(1-\sigma)}\big(1 - e^{-s f(\omega)}\big)\, s^{-\sigma-1}\Big(1 - e^{-(\sigma K/\alpha)^{1/\sigma} s}\Big)\, ds\, H(d\omega)\right)^K.$$
Since $1 - e^{-(\sigma K/\alpha)^{1/\sigma} s}$ is bounded by 1, the bounded convergence theorem implies
$$\lim_{K\to\infty} L_{\mu_K}(f) = \exp\left(-\int_\Omega \int_0^\infty \big(1 - e^{-s f(\omega)}\big)\, \frac{\alpha}{\Gamma(1-\sigma)}\, s^{-\sigma-1}\, ds\, H(d\omega)\right),$$
which exactly matches the Laplace functional of SP. The proof of (iii) is trivial from (i) and the
relationship between SP and SBP.

Corollary 1. Let $\tau = 1$ and $\sigma \to 0$ in (14). Then $\mu_K$ will converge to $\mu \sim \mathrm{GP}(\alpha, H)$.

Proof. The result is trivial by letting $\sigma \to 0$ in $L_{\mu_K}(f)$.
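As an illustration of Theorem 2 (reusing the samplers sketched in Section 2.3), the jumps of a finite-dimensional BFRY process can be drawn as follows; the function name and interface are ours, for illustration:

```python
import numpy as np

def sample_fd_bfry_jumps(alpha, sigma, K, kind="SP", tau=1.0, rng=None):
    # Jumps J_1, ..., J_K of mu_K in Theorem 2; the atoms V_k are i.i.d. draws
    # from the base measure H and can be sampled separately.
    rng = rng or np.random.default_rng()
    if kind == "SP":                                     # construction (13)
        return sample_bfry_scaled(alpha / K, sigma, K, rng)
    if kind == "GGP":                                    # construction (14)
        return sample_bfry_tilted(alpha / K, sigma, tau, K, rng)
    if kind == "SBP":                                    # construction (15), c = 0
        S = sample_bfry_scaled(alpha / K, sigma, K, rng)
        return S / (S + 1.0)
    raise ValueError("kind must be 'SP', 'GGP' or 'SBP'")
```

Normalizing the jumps, $J_k / \sum_j J_j$, gives a draw of the weights of the corresponding normalized random measure, which is what Section 3.3 builds on.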
Finite-dimensional BFRY processes are certainly ideal processes, since we can easily sample the
jumps $\{J_k\}$ and we have explicit closed-form densities written as (8) and (9). Hence, based on those
processes, we can develop efficient inference algorithms, such as variational Bayes, for power-law
models related to SP, GGP, and SBP that require explicit densities of jumps. Figure 1 illustrates the
log of average jump sizes of 100 normalized SPs drawn using finite-dimensional BFRY processes,
with $\alpha = 1$, $K = 1000$, and varying $\sigma$. As expected, the jumps generated with bigger $\sigma$ are more
heavy-tailed.

[Figure 1: Log of average jump sizes of NSPs. x-axis: atom indices (sorted); y-axis: log(jumps); curves labeled alpha = 0.8, 0.4, 0.2.]

3.3 Finite-dimensional normalized random measure mixture models
A normalized random measure (NRM) is obtained by normalizing a CRM by its total mass. An NRM
mixture model (NRMM) is then defined as a mixture model with an NRM prior, and its generative
process is written as follows:
$$\mu \sim \mathrm{CRM}(\rho, H), \qquad \theta_1, \dots, \theta_N \overset{\text{i.i.d.}}{\sim} \mu/\mu(\Omega), \qquad X_n \mid \theta_n \sim L(\theta_n), \qquad (16)$$
where $L$ is a likelihood distribution. One can easily do posterior inference by marginalizing out $\mu$
with an auxiliary variable. Once $\mu$ is marginalized out, we can develop a Gibbs sampler [28]. However,
this scales poorly, as mentioned earlier. On the other hand, one may replace $\mu$ with $\mu_K$, yielding
the finite-dimensional NRMM (FNRMM), for which efficient variational Bayes can be developed
provided that the finite-dimensional process is ideal.
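For concreteness, a draw from a finite-dimensional normalized SP mixture following (16) might be generated as sketched below; the Gaussian choices for $H$ and $L$ are illustrative assumptions, not the models used in our experiments:

```python
import numpy as np

def sample_fnspm_data(alpha, sigma, K, N, rng=None):
    # Generative process (16) with mu replaced by the finite-dimensional
    # BFRY process of Theorem 2(i); here H = N(0, 5^2) and L(theta) = N(theta, 1).
    rng = rng or np.random.default_rng()
    J = sample_fd_bfry_jumps(alpha, sigma, K, kind="SP", rng=rng)  # jumps
    V = rng.normal(0.0, 5.0, K)          # atoms V_k ~ H
    w = J / J.sum()                      # weights of the normalized random measure
    z = rng.choice(K, size=N, p=w)       # theta_n = V_{z_n}, drawn from mu / mu(Omega)
    X = rng.normal(V[z], 1.0)            # X_n | theta_n ~ L(theta_n)
    return X, z
```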
3.4 Variational Bayes for finite-dimensional mixture models

We first introduce a variational Bayes algorithm for the finite-dimensional normalized SP mixture
(FNSPM). The joint likelihood of the model is written as
$$\Pr(\{X_n \in dx_n, z_n\}, \{J_k \in ds_k, V_k \in d\theta_k\}) = s_\bullet^{-N} \prod_{k=1}^K s_k^{N_k}\, g_{\alpha/K,\sigma}(ds_k) \prod_{z_n = k} L(dx_n \mid \theta_k)\, H(d\theta_k), \qquad (17)$$
where $s_\bullet := \sum_k s_k$, and $z_n$ is an indicator variable such that $z_n = k$ if $\theta_n = \theta_k$. We found it
convenient to introduce an auxiliary variable $U \sim \mathrm{Gamma}(N, s_\bullet)$ as in [20] to remove $s_\bullet^{-N}$:
$$\Pr(\{X_n \in dx_n, z_n\}, \{J_k \in ds_k, V_k \in d\theta_k\}, U \in du) \propto u^{N-1}\, du \prod_{k=1}^K s_k^{N_k} e^{-u s_k}\, g_{\alpha/K,\sigma}(s_k)\, ds_k \prod_{z_n = k} L(dx_n \mid \theta_k)\, H(d\theta_k). \qquad (18)$$
Now we introduce variational distributions for $\{z, s, \theta, u\}$ and optimize the Evidence Lower BOund
(ELBO) with respect to the parameters of the variational distributions. The posterior statistics can be
simulated using the optimized variational distributions. We can also optimize the hyperparameters $\alpha$
and $\sigma$ with the ELBO. The detailed optimization procedure is described in the supplementary material.
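To fix ideas, under a standard mean-field factorization (a sketch; the exact variational family used here is specified in the supplementary material), the objective takes the familiar form
$$q(z, s, \theta, u) = q(u) \prod_{n=1}^N q(z_n) \prod_{k=1}^K q(s_k)\, q(\theta_k),$$
$$\mathrm{ELBO}(q) = \mathbb{E}_q\big[\log p(X, z, s, \theta, u)\big] - \mathbb{E}_q\big[\log q(z, s, \theta, u)\big] \le \log p(X),$$
and coordinate ascent updates each factor in turn via $q(\cdot) \propto \exp\big(\mathbb{E}_{q_{\text{rest}}}[\log p(X, z, s, \theta, u)]\big)$, with the explicit densities $g_{\alpha/K,\sigma}$ entering through (18); this is exactly where the ideal property of the finite-dimensional BFRY process is used.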
3.5 Collapsed Gibbs sampler for finite-dimensional mixture models

As with the NRMM, we can also marginalize out the jumps $\{J_k\}$ to obtain the collapsed model.
Marginalizing out $s$ in (18) gives
$$\Pr(\{X_n \in dx_n, z_n\}, \{V_k \in d\theta_k\}, U \in du) \propto u^{N-1}\, du \prod_{k=1}^K \left[\frac{\sigma\, \Gamma(N_k - \sigma)\big(1 - \kappa^{N_k - \sigma}\big)}{\Gamma(1-\sigma)\, u^{N_k - \sigma}}\right]^{\mathbb{I}\{N_k > 0\}} \Big[u^\sigma \kappa^{-\sigma}\big(1 - \kappa^{\sigma}\big)\Big]^{\mathbb{I}\{N_k = 0\}} \prod_{z_n = k} L(dx_n \mid \theta_k)\, H(d\theta_k), \qquad (19)$$
where $\kappa := \frac{u}{u + (\sigma K/\alpha)^{1/\sigma}}$. Based on this, we can derive the collapsed Gibbs sampler for FNSPM;
the detailed equations are in the supplementary material.
3.6 Collapsed variational Bayes for finite-dimensional mixture models

Based on the marginalized log-likelihood (19), we can develop a collapsed variational Bayes algorithm
for FNSPM, following the collapsed variational inference algorithm for DPM [29]. We introduce
variational distributions for $\{u, z, \theta\}$, and the update equation for $q(z)$ is then computed using the
conditional posterior $p(z \mid x)$. The hyperparameters can also be optimized; the detailed procedures
are explained in the supplementary material.
4 Experiments

4.1 Experiments on synthetic datasets

4.1.1 Data generation
We generated 10 datasets from PYP mixture models. Each dataset was generated as follows. We
first generated cluster labels for 2,000 data points from $\mathrm{PYP}(\alpha, \sigma)$ with $\alpha = 1$ and $\sigma = 0.7$. Given
the cluster labels, we generated data points from $\mathrm{Mult}(M, \xi)$, where the number of trials $M$ was
chosen uniformly from $[1, 50]$ and $\xi$ was sampled from $\mathrm{Dir}(0.05 \cdot \mathbf{1}_{200})$. We also generated another
10 datasets from CRP mixture models $\mathrm{CRP}(\alpha)$ with $\alpha = 1$, to see if FNSPM adapts to the change of
the underlying random measure. For each dataset, we used 80% of the data points for training and the
remaining 20% for testing.
4.1.2 Algorithm settings and performance measure

We compared six algorithms: Collapsed Gibbs (CG) for FDPM (CG/D), Variational Bayes (VB) for
FDPM (VB/D), Collapsed Variational Bayes (CVB) for FDPM (CVB/D), CG for FNSPM (CG/S),
VB for FNSPM (VB/S), and CVB for FNSPM (CVB/S).
[Figure 2: (Left) Comparison between the infinite-dimensional algorithm and the finite-dimensional algorithms: difference in test log-likelihood versus dimension K (CGNSPM minus CG/VB/CVB-FNSPM). (Middle) Average time per iteration (sec) of the infinite- and finite-dimensional algorithms versus K. (Right) Average number of iterations needed to converge for the variational algorithms versus K.]
Table 1: Comparison between six finite-dimensional algorithms on synthetic PYP, synthetic CRP, the AP corpus and the NIPS corpus. Average test log-likelihood values and $\sigma$ estimates are shown with standard deviations in parentheses.

        |  PYP                  |  CRP                  |  AP                     |  NIPS
        |  loglikel    sigma    |  loglikel    sigma    |  loglikel      sigma    |  loglikel      sigma
CG/D    |  -33.2078    --       |  -25.4076    --       |  -157.2228     --       |  -352.8909     --
        |  (1.5557)             |  (1.9081)             |  (0.0189)               |  (0.0070)
VB/D    |  -33.4480    --       |  -25.4148    --       |  -157.2379     --       |  -352.9104     --
        |  (1.6495)             |  (1.9120)             |  (0.0304)               |  (0.0172)
CVB/D   |  -33.4278    --       |  -25.4150    --       |  -157.2302     --       |  -352.8692     --
        |  (1.6525)             |  (1.9115)             |  (0.0280)               |  (0.0321)
CG/S    |  -33.1039    0.6940   |  -25.4079    0.2867   |  -157.1920     0.5261   |  -352.7487     0.5857
        |  (1.5676)    (0.0235) |  (1.9077)    (0.0762) |  (0.0036)      (0.0032) |  (0.0037)      (0.0032)
VB/S    |  -33.1861    0.4640   |  -25.5076    0.4770   |  -157.1391     0.4748   |  -352.6078     0.4945
        |  (1.5873)    (0.0085) |  (1.9122)    (0.0041) |  (0.1154)      (0.0434) |  (0.2599)      (0.0324)
CVB/S   |  -33.2031    0.7041   |  -25.4080    0.2925   |  -157.2182     0.5327   |  -352.7544     0.5899
        |  (1.5858)    (0.0322) |  (1.9085)    (0.0608) |  (0.0282)      (0.0060) |  (0.0088)      (0.0070)
All the algorithms were initialized with a single run of sequential collapsed Gibbs sampling starting
from zero clusters, and afterwards ran for 100 iterations. The variational algorithms were terminated
if the improvement of the ELBO was smaller than a threshold. The hyperparameters $\alpha$ and $\sigma$ were
initialized as $\alpha = 1$ and $\sigma = 0.5$ for all algorithms. The performance was measured by the average
test log-likelihood,
$$\frac{1}{N_{\mathrm{test}}} \sum_{n=1}^{N_{\mathrm{test}}} \log p(x_n \mid x_{\mathrm{train}}). \qquad (20)$$
For CG, we computed the average over samples collected every 10 iterations. For VB and CVB, we
computed the log-likelihood using the expectations of the variational distributions.
4.1.3 Effect of K on predictive performance and running time

To see the effect of $K$ on predictive performance, we first compared the finite-dimensional algorithms
(CG for FNSPM, VB for FNSPM and CVB for FNSPM) to the infinite-dimensional algorithm
(CG for NSPM [28]). We tested the four algorithms on 10 synthetic datasets generated from PYP
mixtures, with $K \in \{200, 400, 600, 800, 1000\}$ for the finite algorithms, and measured the difference
in average test log-likelihood relative to the infinite-dimensional algorithm. We also measured
the average running time per iteration of the four algorithms, and the average number of iterations
to convergence of the variational algorithms. Figure 2 shows the results. As expected, the difference
between the finite-dimensional algorithms and the infinite-dimensional algorithm decreases as $K$ grows.
The finite-dimensional algorithms have $O(NK)$ time complexity per iteration, and the infinite-dimensional algorithm has $O(N\bar{K})$, where $\bar{K}$ is the maximum number of clusters created during
clustering. However, in practice, variational algorithms can be implemented with efficient matrix
multiplications, and this makes them much faster than sampling algorithms. Moreover, as shown in
Figure 2, the variational algorithms usually converge within 50 iterations.
4.1.4 Comparing finite-dimensional algorithms on PYP and CRP datasets

We compared the six algorithms for finite mixture models (CG/D, VB/D, CVB/D, CG/S, VB/S and
CVB/S) on PYP mixture datasets and CRP mixture datasets, with $K = 1000$. The results are summarized in Table 1. On PYP datasets, in general, FNSPM outperformed FDPM and CG outperformed
VB and CVB. CG/S consistently outperformed CG/D, and the same relationship held between VB/S
and VB/D and between CVB/S and CVB/D. Even though VB/S and CVB/S are variational algorithms, the
performance gap between them and CG/S was not significant. Table 1 also shows the estimated $\sigma$ values
for CG/S, VB/S and CVB/S. CG/S and CVB/S seemed to recover the true value $\sigma = 0.7$, but VB/S
didn't. We found that VB/S tends to control the other parameter $\alpha$ while holding $\sigma$ near its initial
value 0.5. On CRP datasets, all the algorithms showed similar performance except for VB/S, which
was consistently worse than the others. This is probably due to bad estimates of $\sigma$.
4.2 Experiments on real-world documents

We compared the six algorithms on a real-world document clustering task by clustering the AP corpus
(http://www.cs.princeton.edu/~blei/lda-c/) and the NIPS corpus (https://archive.ics.uci.edu/ml/datasets/Bag+of+Words).
We preprocessed the corpora using latent Dirichlet allocation (LDA) [3]. We ran LDA
with 300 topics, and then gave each document a bag-of-words representation of topic assignments
to those 300 topics. We assumed that those representations were generated from the multinomial-Dirichlet model, and clustered them using FDPM and FNSPM. We used 80% of the documents for
training and the remaining 20% for computing the average test log-likelihood. We set $K = 2{,}000$ and ran
each algorithm for 200 iterations. We repeated this 10 times to measure the average performance. The
results are summarized in Table 1. In general, the algorithms based on FNSPM showed better
performance than the FDPM-based ones, implying that the FNSPM-based algorithms capture well
the heavy-tailed cluster distributions of the corpora. VB/S performed the best, even though
it sometimes converged to poor values.
5 Conclusion

In this paper, we proposed finite-dimensional BFRY processes that converge to SP, GGP and SBP. The
jumps of the finite-dimensional BFRY processes have nice closed-form densities, and this leads to
efficient posterior inference algorithms. With finite-dimensional BFRY processes, we developed
variational Bayes and collapsed variational Bayes algorithms for finite-dimensional normalized SP mixture
models, and demonstrated their performance on both synthetic and real-world datasets. As mentioned
earlier, with finite-dimensional BFRY processes one can develop variational Bayes or other posterior
inference algorithms for a variety of models with SP, GGP and SBP priors. This fact, along with
further theoretical properties of finite-dimensional processes, presents interesting avenues for future
research.

Acknowledgements: This work was supported by IITP (No. B0101-16-0307, Basic Software
Research in Human-Level Lifelong Machine Learning (Machine Learning Center)) and by the National
Research Foundation (NRF) of Korea (NRF-2013R1A2A2A01067464), and supported in part by
grant RGC-HKUST 601712 of the HKSAR.
References
[1] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
[2] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[3] D. M. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[4] J. F. C. Kingman. Random discrete distributions. Journal of the Royal Statistical Society, Series B (Methodological), 37(1):1–22, 1975.
[5] H. Ishwaran and L. F. James. Computational methods for multiplicative intensity models using weighted gamma processes: proportional hazards, marked point processes, and panel count data. Journal of the American Statistical Association, 99:175–190, 2004.
[6] H. Ishwaran, L. F. James, and J. Sun. Bayesian model selection in finite mixtures by marginal density decompositions. Journal of the American Statistical Association, 96:1316–1332, 2001.
[7] H. Ishwaran and M. Zarepour. Dirichlet prior sieves in finite normal mixtures. Statistica Sinica, 12(3):941–963, 2002.
[8] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[9] J. Pitman and N. M. Tran. Size-biased permutation of a finite sequence with independent and identically distributed terms. Bernoulli, 21(4):2484–2512, 2015.
[10] D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: a review for statisticians. 2016. arXiv:1601.00670.
[11] C. Chen, N. Ding, and W. Buntine. Dependent hierarchical normalized random measures for dynamic topic modeling. ICML, 2012.
[12] Y. W. Teh and D. Görür. Indian buffet processes with power-law behavior. NIPS, 2009.
[13] F. Caron and E. B. Fox. Sparse graphs using exchangeable random measures. 2014. arXiv:1401.1137.
[14] F. Caron. Bayesian nonparametric models for bipartite graphs. NIPS, 2012.
[15] V. Veitch and D. M. Roy. The class of random graphs arising from exchangeable random measures. 2015. arXiv:1512.03099.
[16] L. Devroye and L. F. James. On simulation and properties of the stable law. Statistical Methods and Applications, 23(3):307–343, 2014.
[17] L. F. James, P. Orbanz, and Y. W. Teh. Scaled subordinators and generalizations of the Indian buffet process. 2015. arXiv:1510.07309.
[18] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
[19] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. ICML, 2011.
[20] L. F. James. Bayesian Poisson process partition calculus with an application to Bayesian Lévy moving averages. The Annals of Statistics, 33(4):1771–1799, 2005.
[21] E. Çinlar. Probability and Stochastics. 2010.
[22] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[23] J. Pitman and M. Yor. The two-parameter Poisson–Dirichlet distribution derived from a stable subordinator. The Annals of Probability, 25(2):855–900, 1997.
[24] J. Bertoin, T. Fujita, B. Roynette, and M. Yor. On a particular class of self-decomposable random variables: the durations of Bessel excursions straddling independent exponential times. Probability and Mathematical Statistics, 26:315–366, 2006.
[25] M. Winkel. Electronic foreign-exchange markets and passage events of independent subordinators. Journal of Applied Probability, 42(1):138–152, 2005.
[26] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. NIPS, 2006.
[27] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. AISTATS, 2007.
[28] S. Favaro and Y. W. Teh. MCMC for normalized random measure mixture models. Statistical Science, 28(3):335–359, 2013.
[29] K. Kurihara, M. Welling, and Y. W. Teh. Collapsed variational Dirichlet process mixture models. IJCAI, 2007.
DECOrrelated feature space partitioning for
distributed sparse regression
Xiangyu Wang
Dept. of Statistical Science
Duke University
[email protected]
David Dunson
Dept. of Statistical Science
Duke University
[email protected]
Chenlei Leng
Dept. of Statistics
University of Warwick
[email protected]
Abstract
Fitting statistical models is computationally challenging when the sample size or
the dimension of the dataset is huge. An attractive approach for down-scaling the
problem size is to first partition the dataset into subsets and then fit using distributed
algorithms. The dataset can be partitioned either horizontally (in the sample space)
or vertically (in the feature space). While the majority of the literature focuses on
sample space partitioning, feature space partitioning is more effective when $p \gg n$.
Existing methods for partitioning features, however, are either vulnerable to high
correlations or inefficient in reducing the model dimension. In this paper, we solve
these problems through a new embarrassingly parallel framework named DECO
for distributed variable selection and parameter estimation. In DECO, variables
are first partitioned and allocated to m distributed workers. The decorrelated
subset data within each worker are then fitted via any algorithm designed for
high-dimensional problems. We show that by incorporating the decorrelation step,
DECO can achieve consistent variable selection and parameter estimation on each
subset with (almost) no assumptions. In addition, the convergence rate is nearly
minimax optimal for both sparse and weakly sparse models and does NOT depend
on the partition number m. Extensive numerical experiments are provided to
illustrate the performance of the new framework.
1 Introduction
In modern science and technology applications, it has become routine to collect complex datasets
with a huge number p of variables and/or enormous sample size n. Most of the emphasis in the
literature has been on addressing large n problems, with a common strategy relying on partitioning
data samples into subsets and fitting a model containing all the variables to each subset [1, 2, 3, 4, 5, 6].
In scientific applications, it is much more common to have huge p small n data sets. In such cases,
a sensible strategy is to break the features into groups, fit a model separately to each group, and
combine the results. We refer to this strategy as feature space partitioning, and to the large n strategy
as sample space partitioning.
There are several recent attempts on parallel variable selection by partitioning the feature space. [7]
proposed a Bayesian split-and-merge (SAM) approach in which variables are first partitioned into
subsets and then screened over each subset. A variable selection procedure is then performed on the
variables that survive for selecting the final model. One caveat for this approach is that the algorithm
cannot guarantee the efficiency of screening, i.e., the screening step taken on each subset might select
a large number of unimportant but correlated variables [7], so SAM could be ineffective in reducing
the model dimension. Inspired by a group test, [8] proposed a parallel feature selection algorithm by
repeatedly fitting partial models on a set of re-sampled features, and then aggregating the residuals to
form scores for each feature. This approach is generic and efficient, but the performance relies on a
strong condition that is almost equivalent to an independence assumption on the design.
Intuitively, feature space partitioning is much more challenging than sample space partitioning,
mainly because of the correlations between features. A partition of the feature space would succeed
only when the features across the partitioned subsets were mutually independent. Otherwise, it is
highly likely that any model posed on the subsets is mis-specified and the results are biased regardless
of the sample size. In reality, however, mutually independent groups of features may not exist; Even
if they do, finding these groups is likely more challenging than fitting a high-dimensional model.
Therefore, although conceptually attractive, feature space partitioning is extremely challenging.
On the other hand, feature space partitioning is straightforward if the features are independent.
Motivated by this key fact, we propose a novel embarrassingly-parallel framework named DECO by
decorrelating the features before partitioning. With the aid of decorrelation, each subset of data after
feature partitioning can now produce consistent estimates even though the model on each subset is
intrinsically mis-specified due to missing features. To the best of our knowledge, DECO is the first
embarrassingly parallel framework accommodating arbitrary correlation structures in the features. We
show, quite surprisingly, that the DECO estimate, by leveraging the estimates from subsets, achieves
the same convergence rate in $\ell_2$ norm and $\ell_\infty$ norm as the estimate obtained by using the full dataset,
and that the rate does not depend on the number of partitions. In view of the huge computational gain
and the easy implementation, DECO is extremely attractive for fitting large-p data.
The most related work to DECO is [9], where a similar procedure was introduced to improve
lasso. Our work differs substantially in various aspects. First, our motivation is to develop a parallel
computing framework for fitting large-p data by splitting features, which can potentially accommodate
any penalized regression method, while [9] aims solely at complying with the irrepresentable condition
for lasso. Second, the conditions posed on the feature matrix are more flexible in DECO, and our
theory, applicable not only to sparse signals but also to those in $\ell_r$ balls, can be readily applied to the
preconditioned lasso in [9].
The rest of the paper is organized as follows. In Section 2, we detail the proposed framework. Section
3 provides the theory of DECO. In particular, we show that DECO is consistent for both sparse and
weakly sparse models. Section 4 presents extensive simulation studies to illustrate the performance of
our framework. In Section 5, we outline future challenges and future work. All the technical details
are relegated to the Appendix.
2 Motivation and the DECO framework

Consider the linear regression model
$$Y = X\beta + \varepsilon, \qquad (1)$$
where $X$ is an $n \times p$ feature (design) matrix, $\varepsilon$ consists of $n$ i.i.d. random errors and $Y$ is the response
vector. A large class of approaches estimate $\beta$ by solving the following optimization problem:
$$\hat{\beta} = \arg\min_\beta \frac{1}{n}\|Y - X\beta\|_2^2 + 2\lambda_n \rho(\beta),$$
where $\|\cdot\|_2$ is the $\ell_2$ norm and $\rho(\beta)$ is a penalty function. In this paper, we specialize our discussion
to the $\ell_1$ penalty, where $\rho(\beta) = \sum_{j=1}^p |\beta_j|$ [10], to highlight the main message of the paper.
As discussed in the introduction, a naive partition of the feature space will usually give unsatisfactory
results under a parallel computing framework. That is why a decorrelation step is introduced.
For data with $p \le n$, the most intuitive way is to orthogonalize the features via the singular value
decomposition (SVD) of the design matrix as $X = UDV^T$, where $U$ is an $n \times p$ matrix, $D$ is a
$p \times p$ diagonal matrix and $V$ a $p \times p$ orthogonal matrix. If we pre-multiply both sides of (1) by
$\sqrt{p}\, U D^{-1} U^T = \big((XX^T/p)^{+}\big)^{1/2}$, where $A^+$ denotes the Moore–Penrose pseudo-inverse, we get
$$\underbrace{\big((XX^T/p)^{+}\big)^{1/2}\, Y}_{\widetilde{Y}} = \underbrace{\sqrt{p}\, U V^T}_{\widetilde{X}}\,\beta + \underbrace{\big((XX^T/p)^{+}\big)^{1/2}\, \varepsilon}_{\widetilde{\varepsilon}}. \qquad (2)$$
It is obvious that the new features (the columns of $\sqrt{p}\, U V^T$) are mutually orthogonal. Define the
new data as $(\widetilde{Y}, \widetilde{X})$. The mutual orthogonality allows us to decompose $\widetilde{X}$ column-wisely into
$m$ subsets $\widetilde{X}^{(i)}$, $i = 1, 2, \dots, m$, and still retain consistency if one fits a linear regression on each
subset. To see this, notice that each sub-model now takes the form $\widetilde{Y} = \widetilde{X}^{(i)}\beta^{(i)} + \widetilde{W}^{(i)}$, where
$\widetilde{W}^{(i)} = \widetilde{X}^{(-i)}\beta^{(-i)} + \widetilde{\varepsilon}$ and $\widetilde{X}^{(-i)}$ stands for the variables not included in the $i$th subset. If, for example,
we would like to compute the ordinary least squares estimates, it follows that
$$\hat{\beta}^{(i)} = \big(\widetilde{X}^{(i)T}\widetilde{X}^{(i)}\big)^{-1}\widetilde{X}^{(i)T}\widetilde{Y} = \beta^{(i)} + \big(\widetilde{X}^{(i)T}\widetilde{X}^{(i)}\big)^{-1}\widetilde{X}^{(i)T}\widetilde{W}^{(i)} = \beta^{(i)} + \big(\widetilde{X}^{(i)T}\widetilde{X}^{(i)}\big)^{-1}\widetilde{X}^{(i)T}\widetilde{\varepsilon},$$
where $\hat{\beta}^{(i)}$ converges at the same rate as if the full dataset were used.
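As a quick numerical sanity check of (2) in the $p \le n$ case, the following sketch (an illustration we add here, not part of the original analysis) verifies that the decorrelated features are exactly orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50                                    # p <= n: exact orthogonality
A = rng.standard_normal((p, p))
X = rng.standard_normal((n, p)) @ A               # a correlated random design
U, d, Vt = np.linalg.svd(X, full_matrices=False)  # X = U D V^T (economy SVD)
F = np.sqrt(p) * (U / d) @ U.T                    # sqrt(p) * U D^{-1} U^T
X_tilde = F @ X                                   # equals sqrt(p) * U V^T
print(np.allclose(X_tilde.T @ X_tilde, p * np.eye(p)))  # True: orthogonal columns
```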
When $p$ is larger than $n$, the new features are no longer exactly orthogonal to each other due to the high
dimension. Nevertheless, as proved later in the article, the correlations between different columns
are roughly of the order $\sqrt{\log p / n}$ for random designs, making the new features approximately
orthogonal when $\log(p) \ll n$. This allows us to follow the same strategy of partitioning the feature
space as in the low-dimensional case. It is worth noting that when $p > n$, the SVD decomposition of
$X$ induces a different form for the three matrices, i.e., $U$ is now an $n \times n$ orthogonal matrix, $D$ is an
$n \times n$ diagonal matrix, $V$ is an $n \times p$ matrix, and $\big((XX^T/p)^{+}\big)^{1/2}$ becomes $(XX^T/p)^{-1/2}$.

In this paper, we primarily focus on datasets where $p$ is so large that a single computer is only able to
store and perform operations on an $n \times q$ matrix ($n < q < p$) but not on an $n \times p$ matrix. Because
the two decorrelation matrices yield almost the same properties, we will only present the algorithm
and the theoretical analysis for $(XX^T/p)^{-1/2}$.
The concrete DECO framework consists of two main steps. Assume $X$ has been partitioned column-wisely into $m$ subsets $X^{(i)}$, $i = 1, 2, \dots, m$ (each with a maximum of $q$ columns) and distributed
onto $m$ machines together with $Y$. In the first stage, we obtain the decorrelation matrix $(XX^T/p)^{-1/2}$ or
$\sqrt{p}\, D^{-1} U^T$ by computing $XX^T$ in a distributed way as $XX^T = \sum_{i=1}^m X^{(i)} X^{(i)T}$ and performing the
SVD decomposition of $XX^T$ on a central machine. In the second stage, each worker receives the
decorrelation matrix, multiplies it with the local data $(Y, X^{(i)})$ to obtain $(\widetilde{Y}, \widetilde{X}^{(i)})$, and fits a penalized
regression. When the model is assumed to be exactly sparse, we can potentially apply a refinement
step by re-estimating the coefficients of all the selected variables simultaneously on the master machine
via ridge regression. The details are provided in Algorithm 1.
The entire Algorithm 1 contains only two map-reduce passes and is thus communication-efficient.
Lines 14–18 in Algorithm 1 are added only for the data analysis in Section 5.3, in which $p$ is massive
compared to $n$ in that $\log(p)$ is comparable to $n$, and the algorithm may not scale down the size of $p$
sufficiently for even obtaining a ridge regression estimator afterwards. Thus, a further sparsification
step is recommended. The condition in Line 16 is only triggered in our last experiment, but is crucial
for improving the performance in extreme cases. In Line 5, the algorithm inverts $XX^T + r_1 I$ instead
of $XX^T$ for robustness, because the rank of $XX^T$ after standardization will be $n - 1$. Using ridge
refinement instead of ordinary least squares is also for robustness. The precise choice of $r_1$ and $r_2$
will be discussed in the numerical section.

Penalized regression fitted using a regularization path usually involves a computational complexity of
$O(knp + kd^2)$, where $k$ is the number of path segments and $d$ is the number of features selected.
Although the segmentation number $k$ could be as bad as $(3^p + 1)/2$ in the worst case [11], real data
experience suggests that $k$ is on average $O(n)$ [12]; thus the complexity of DECO takes the form
$O\big(n^3 + \frac{n^2 p}{m} + m\big)$, in contrast to the full lasso, which takes the form $O(n^2 p)$.
3 Theory

In this section, we provide theoretical justification for DECO on random feature matrices. We
specialize our attention to lasso due to page limits and will provide the theory for general penalties in
the long version. We prove the consistency results for the estimator obtained after Stage 2 of DECO;
the consistency of Stage 3 then follows immediately. For simplicity, we assume that $\varepsilon$
follows a sub-Gaussian distribution and $X \sim N(0, \Sigma)$ throughout this section, although the theory
can be easily extended to the situation where $X$ follows an elliptical distribution and $\varepsilon$ is heavy-tailed.
Recall that DECO fits the following linear regression on each worker:
$$\widetilde{Y} = \widetilde{X}^{(i)}\beta^{(i)} + \widetilde{W}^{(i)}, \qquad \text{and} \quad \widetilde{W}^{(i)} = \widetilde{X}^{(-i)}\beta^{(-i)} + \widetilde{\varepsilon},$$
Algorithm 1 The DECO framework
Initialization:
1: Input $(Y, X)$, $p$, $n$, $m$, $\lambda_n$. Standardize $X$ and $Y$ to $x$ and $y$ with mean zero;
2: Partition (arbitrarily) $(y, x)$ into $m$ disjoint subsets $(y, x^{(i)})$ and distribute to $m$ machines;
Stage 1: Decorrelation
3: On each worker, compute $x^{(i)} x^{(i)T}$ and push to the center machine;
4: On the center machine, compute $F = \sum_{i=1}^m x^{(i)} x^{(i)T}$;
5: $\bar{F} = \sqrt{p}\,(F + r_1 I)^{-1/2}$;
6: Push $\bar{F}$ to each worker.
7: for $i = 1$ to $m$ do
8:   $\tilde{y} = \bar{F} y$ and $\tilde{x}^{(i)} = \bar{F} x^{(i)}$;  # obtain decorrelated data
9: end for
Stage 2: Estimation
10: On each worker, estimate $\hat{\beta}^{(i)} = \arg\min_\beta \frac{1}{n}\|\tilde{y} - \tilde{x}^{(i)}\beta\|_2^2 + 2\lambda_n \rho(\beta)$;
11: Push $\hat{\beta}^{(i)}$ to the center machine and combine $\hat{\beta} = (\hat{\beta}^{(1)}, \hat{\beta}^{(2)}, \dots, \hat{\beta}^{(m)})$;
12: $\hat{\beta}_0 = \mathrm{mean}(Y) - \mathrm{mean}(X)^T \hat{\beta}$ for the intercept.
Stage 3: Refinement (optional)
13: if $\#\{\hat{\beta} \neq 0\} \geq n$ then
14:   # Sparsification is needed before ridge regression.
15:   $M = \{k : |\hat{\beta}_k| \neq 0\}$;
16:   $\hat{\beta}_M = \arg\min_\beta \frac{1}{n}\|\tilde{y} - \tilde{x}_M \beta\|_2^2 + 2\lambda_n \rho(\beta)$;
17: end if
18: $M = \{k : |\hat{\beta}_k| \neq 0\}$;
19: $\hat{\beta}_M = (X_M^T X_M + r_2 I_{|M|})^{-1} X_M^T Y$;
20: Return $\hat{\beta}$;
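To clarify the data flow, here is a minimal single-process sketch of Stages 1–2; it is an illustration, not the implementation used in our experiments (it uses scikit-learn's Lasso with a user-supplied penalty rather than glmnet with EBIC tuning, and the $m$ blocks run in a loop rather than on separate workers):

```python
import numpy as np
from sklearn.linear_model import Lasso

def deco_fit(X, Y, m, lam, r1=1.0):
    # Stages 1-2 of Algorithm 1 on a single machine.
    n, p = X.shape
    x = X - X.mean(axis=0)                 # standardize to mean zero
    y = Y - Y.mean()
    # Stage 1: F = sum_i x^(i) x^(i)T (computed in one shot here), then
    # F_bar = sqrt(p) * (F + r1 I)^{-1/2} via an eigendecomposition.
    w, U = np.linalg.eigh(x @ x.T + r1 * np.eye(n))
    F_bar = np.sqrt(p) * (U / np.sqrt(w)) @ U.T
    y_tilde = F_bar @ y
    # Stage 2: lasso on each decorrelated feature block (embarrassingly parallel);
    # sklearn's alpha plays the role of lambda_n in line 10 (up to its 1/(2n) scaling).
    beta = np.zeros(p)
    for idx in np.array_split(np.arange(p), m):
        x_tilde = F_bar @ x[:, idx]
        fit = Lasso(alpha=lam, fit_intercept=False).fit(x_tilde, y_tilde)
        beta[idx] = fit.coef_
    return beta
```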
where $\widetilde{X}^{(-i)}$ stands for the variables not included in the $i$th subset. Our proof relies on verifying that each
pair $(\widetilde{Y}, \widetilde{X}^{(i)})$, $i = 1, 2, \dots, m$, satisfies the consistency condition of lasso for the random features.
Due to the page limit, we only state the main theorems in the article and defer all the proofs to the
supplementary materials.
Theorem 1 ($s$-sparse). Assume that $\beta^*$ is an $s$-sparse vector. Define $\sigma_0^2 = \mathrm{var}(Y)$. For any $A > 0$
we choose $\lambda_n = A\sigma_0\sqrt{\frac{\log p}{n}}$. Now if $p > c_0 n$ for some $c_0 > 1$ and $64 C_0^2 A^2 s^2 \frac{\log p}{n} \leq 1$, then with
probability at least $1 - 8p^{1 - C_1 A^2} - 18pe^{-Cn}$ we have
$$\|\hat{\beta} - \beta^*\|_\infty \leq \frac{5 C_0 A \sigma_0}{8}\sqrt{\frac{\log p}{n}} \qquad \text{and} \qquad \|\hat{\beta} - \beta^*\|_2^2 \leq \frac{9 C_0 A^2 \sigma_0^2}{8}\, \frac{s \log p}{n},$$
where $C_0$ and $C_1$ are two constants determined by $c_1, c_2, c_4, c_\ell, c_u$ and $C$, which are defined in
Lemma 6 in the supplementary materials. Furthermore, if we have
$$\min_{\beta_k \neq 0} |\beta_k| \geq \frac{C_0 A \sigma_0}{4}\sqrt{\frac{\log p}{n}},$$
then $\hat{\beta}$ is sign consistent, i.e., $\mathrm{sign}(\hat{\beta}_k) = \mathrm{sign}(\beta_k)$ for all $\beta_k \neq 0$, and $\hat{\beta}_k = 0$ for all $\beta_k = 0$.
Theorem 1 looks a bit surprising, since the convergence rate does not depend on $m$. This is mainly
because the bounds used to verify the consistency conditions for lasso hold uniformly over all subsets
of variables. For subsets to which no true signals are allocated, lasso will estimate all coefficients to be
zero, so the loss on these subsets is exactly zero. Thus, when summing over all subsets, we
retrieve the $\frac{s \log p}{n}$ rate. In addition, it is worth noting that Theorem 1 guarantees the $\ell_\infty$ convergence
and sign consistency of lasso without assuming the irrepresentable condition [13]. A similar but
weaker result was obtained in [9].

Theorem 2 ($\ell_r$-ball). Assume that $\beta^* \in B(r, R)$ and all conditions in Theorem 1, except that
$64 C_0^2 A^2 s^2 \frac{\log p}{n} \leq 1$ is now replaced by $64 C_0^2 A^2 R^2 \big(\frac{\log p}{n}\big)^{1-r} \leq 1$. Then with probability at least
$1 - 8p^{1 - C_1 A^2} - 18pe^{-Cn}$, we have
$$\|\hat{\beta} - \beta^*\|_\infty \leq \frac{3 C_0 A \sigma_0}{2}\sqrt{\frac{\log p}{n}} \qquad \text{and} \qquad \|\hat{\beta} - \beta^*\|_2^2 \leq \left(\frac{9 C_0}{8} + \frac{3}{8}\right)(A\sigma_0)^{2-r}\, R\, \left(\frac{\log p}{n}\right)^{1 - \frac{r}{2}}.$$
Note that $\sigma_0^2 = \mathrm{var}(Y)$ instead of $\sigma^2$ appears in the convergence rates in both Theorems 1 and 2, which
is inevitable due to the nonzero signals contained in $\widetilde{W}$. Compared to the estimation risk using the full
data, the results in Theorems 1 and 2 are similar up to a factor of $\sigma^2/\sigma_0^2 = 1 - \bar{R}^2$, where $\bar{R}^2$ is
the coefficient of determination. Thus, for a model with $\bar{R}^2 = 0.8$, the risk of DECO is upper
bounded by five times the risk of the full-data inference. The rates in Theorems 1 and 2 are nearly
minimax-optimal [14, 15], but the sample requirement $n \gg s^2$ is slightly off the optimal. This
requirement is rooted in the $\ell_\infty$-convergence and sign consistency and is almost unimprovable for
random designs. We will detail this argument in the long version of the paper.
4 Experiments

In this section, we present the empirical performance of DECO via extensive numerical experiments.
In particular, we compare DECO after two-stage fitting (DECO-2) and DECO after three-stage fitting
(DECO-3) with the full-data lasso (lasso-full), the full-data lasso with ridge refinement (lasso-refine)
and lasso with a naive feature partition without decorrelation (lasso-naive). This section consists of
three parts. In the first part, we run DECO-2 on some simulated data and monitor its performance on
one randomly chosen subset that contains part of the true signals. In the second part, we verify our
claim in Theorems 1 and 2 that the accuracy of DECO does not depend on the subset number. In the
last part, we provide a comprehensive evaluation of DECO's performance by comparing DECO with
other methods under various correlation structures.
The synthetic datasets are generated from model (1) with $X \sim N(0, \Sigma)$ and $\varepsilon \sim N(0, \sigma^2)$. The variance $\sigma^2$ is
chosen such that $\bar{R}^2 = \mathrm{var}(X\beta)/\mathrm{var}(Y) = 0.9$. We consider five different structures of $\Sigma$.
Model (i) Independent predictors. The support of $\beta$ is $S = \{1, 2, 3, 4, 5\}$. We generate $X_i$ from a
standard multivariate normal distribution with independent components. The coefficients are specified
as
$$\beta_i = \begin{cases} (-1)^{\mathrm{Ber}(0.5)}\left(|N(0, 1)| + 5\sqrt{\frac{\log p}{n}}\right) & i \in S, \\ 0 & i \notin S. \end{cases}$$
Model (ii) Compound symmetry. All predictors are equally correlated with correlation $\rho = 0.6$. The
coefficients are the same as those in Model (i).

Model (iii) Group structure. This example is Example 4 in [16], for which we allocate the 15 true
variables into three groups. Specifically, the predictors are generated as $x_{1+3m} = z_1 + N(0, 0.01)$,
$x_{2+3m} = z_2 + N(0, 0.01)$ and $x_{3+3m} = z_3 + N(0, 0.01)$, where $m = 0, 1, 2, 3, 4$ and $z_i \sim N(0, 1)$
are independent. The coefficients are set as $\beta_i = 3$ for $i = 1, 2, \dots, 15$, and $\beta_i = 0$ for $i = 16, \dots, p$.
Model (iv) Factor models. This model is considered in [17]. Let $\phi_j$, $j = 1, 2, \dots, k$, be independent
standard normal variables. We set the predictors as $x_i = \sum_{j=1}^k \phi_j f_{ij} + \eta_i$, where $f_{ij}$ and $\eta_i$ are
independent standard normal random variables. The number of factors is chosen as $k = 5$ in the
simulation, while the coefficients are specified as in Model (i).

Model (v) $\ell_1$-ball. This model takes the same correlation structure as Model (ii), with the coefficients
drawn from the Dirichlet distribution $\beta \sim \mathrm{Dir}\big(\frac{1}{p}, \frac{1}{p}, \dots, \frac{1}{p}\big) \times 10$. This model tests the performance
under a weakly sparse assumption on $\beta$, since $\beta$ is non-sparse, satisfying $\|\beta\|_1 = 10$.
Throughout this section, the performance of all the methods is evaluated in terms of four metrics:
the number of false positives (# FPs), the number of false negatives (# FNs), the mean squared error
$\|\hat{\beta} - \beta^*\|_2^2$ (MSE) and the computational time (runtime). We use glmnet [18] to fit lasso and
choose the tuning parameter using the extended BIC criterion [19] with $\gamma$ fixed at 0.5. For DECO, the
features are partitioned randomly in Stage 1 and the tuning parameter $r_1$ is fixed at 1 for DECO-3.
Since DECO-2 does not involve any refinement step, we choose $r_1$ to be 10 to aid robustness. The
ridge parameter $r_2$ is chosen by 5-fold cross-validation for both DECO-3 and lasso-refine.
[Figure 1: Performance of DECO on one subset with $p = 10{,}000$ and different $n$'s. Panels: False positives, False negatives, Estimation error ($\ell_2$ error) and Runtime (sec) versus sample size $n$, for DECO-2, lasso-full and lasso-naive.]
[Figure 2: Performance of DECO on one subset with $n = 500$ and different $p$'s. Panels: False positives, False negatives, Estimation error ($\ell_2$ error) and Runtime (sec) versus model dimension $p$, for DECO-2, lasso-full and lasso-naive.]
All the algorithms are coded and timed in MATLAB on computers with Intel i7-3770K cores. For any
embarrassingly parallel algorithm, we report the preprocessing time plus the longest runtime of a
single machine as its runtime.
4.1 Monitor DECO on one subset

In this part, using data generated from Model (ii), we illustrate the performance of DECO on one
randomly chosen subset after partitioning. The particular subset we examine contains two nonzero
coefficients, $\beta_1$ and $\beta_2$, together with 98 randomly chosen zero coefficients. We either fix $p = 10{,}000$
and change $n$ from 100 to 500, or fix $n$ at 500 and change $p$ from 2,000 to 10,000 to simulate
datasets. We fit DECO-2, lasso-full and lasso-naive to 100 simulated datasets, and monitor their
performance on that particular subset. The results are shown in Figures 1 and 2.

It can be seen that, though the sub-model on each subset is mis-specified, DECO performs as if the full
dataset were used, as its performance is on par with lasso-full. On the other hand, lasso-naive fails
completely. This result clearly highlights the advantage of decorrelation before feature partitioning.
4.2 Impact of the subset number m

As shown in Theorems 1 and 2, the performance of DECO does not depend on the number of partitions
$m$. We verify this property using Model (ii) again. This time, we fix $p = 10{,}000$ and $n = 500$, and
vary $m$ from 1 to 200. We compare the performance of DECO-2 and DECO-3 with lasso-full and
lasso-refine. The averaged results over 100 simulated datasets are plotted in Figure 3. Since $p$ and $n$ are
both fixed, lasso-full and lasso-refine are expected to perform stably over different $m$'s. DECO-2
and DECO-3 also maintain a stable performance regardless of the value of $m$. In addition, DECO-3
achieves a performance similar to, and sometimes better accuracy than, lasso-refine, possibly because
the irrepresentable condition is satisfied after decorrelation (see the discussion after Theorem 1).
4.3 Comprehensive comparison

In this section, we compare all the methods under the five different correlation structures. The model
dimension and the sample size are fixed at $p = 10{,}000$ and $n = 500$ respectively, and the number
of subsets is fixed at $m = 100$. For each model, we simulate 100 synthetic datasets and record the
average performance in Table 1.
[Figure 3: Performance of DECO with different numbers of subsets. Panels: False positives, False negatives, Estimation error ($\ell_2$ error) and Runtime (sec) versus subset number $m$, for DECO-2, DECO-3, lasso-refine and lasso-full.]
Table 1: Results for five models with $(n, p) = (500, 10000)$.

Model  Metric   DECO-3    DECO-2    lasso-refine  lasso-full  lasso-naive
(i)    MSE       0.102     3.502      0.104         0.924       3.667
       # FPs     0.470     0.570      0.420         0.420       0.650
       # FNs     0.010     0.020      0.000         0.000       0.010
       Time      65.5      60.3       804.5         802.5       9.0
(ii)   MSE       0.241     4.636      1.873         3.808       171.05
       # FPs     0.460     0.550      2.39          2.39        1507.2
       # FNs     0.010     0.030      0.160         0.160       1.290
       Time      66.9      61.8       809.2         806.3       13.1
(iii)  MSE       6.620     1220.5     57.74         105.99      1235.2
       # FPs     0.410     0.570      0.110         0.110       1.180
       # FNs     0.130     0.120      3.93          3.93        0.110
       Time      65.5      60.0       835.3         839.9       9.1
(iv)   MSE       0.787     5.648      11.15         6.610       569.56
       # FPs     0.460     0.410      19.90         19.90       1129.9
       # FNs     0.090     0.100      0.530         0.530       1.040
       Time      69.4      64.1       875.1         880.0       14.6
(v)    MSE       --        2.341      --            1.661       356.57
       Time      --        57.5       --            829.5       13.3
Several conclusions can be drawn from Table 1. First, when all variables are independent as in
Model (i), lasso-naive performs similarly to DECO-2 because no decorrelation is needed in this
simple case. However, lasso-naive fails completely for the other four models when correlations are
present. Second, DECO-3 achieves the overall best performance. The better estimation error over
lasso-refine is due to the better variable selection performance, since the irrepresentable condition
is not needed for DECO. Finally, DECO-2 performs similarly to lasso-full, and the difference is as
expected according to the discussion after Theorem 2.
5 Real data
We illustrate the competitive performance of DECO via three real datasets that cover a range of
high dimensionalities, by comparing DECO-3 to lasso-full, lasso-refine and lasso-naive in terms of
prediction error and computational time. The algorithms are configured in the same way as in Section
4. Although DECO allows arbitrary partitioning over the feature space, for simplicity, we confine
our attention to random partitioning. In addition, we perform DECO-3 multiple times on the same
dataset to ameliorate the uncertainty due to the randomness in partitioning.
Student performance dataset. We look at one of the two datasets used for evaluating student
achievement in two Portuguese schools [20]. The particular dataset used here provides the students'
performance in mathematics. The goal of the research is to predict the final grade (range from 0 to 20).
The original data set contains 395 students and 32 raw attributes. The raw attributes are recoded as
40 attributes and form 767 features after adding interaction terms. To reduce the condition number of the feature matrix, we remove features that are constant, giving 741 features. We standardize all features and randomly partition them into 5 subsets for DECO. To compare the performance
of all methods, we use 10-fold cross validation and record the prediction error (mean square error,
MSE), model size and runtime. The averaged results are summarized in Table 2. We also report the
performance of the null model which predicts the final grade on the test set using the mean final grade
in the training set.
Mammalian eye diseases. This dataset, taken from [21], was collected to study mammalian eye
diseases, with gene expression for the eye tissues of 120 twelve-week-old male F2 rats recorded. One
gene coded as TRIM32 responsible for causing Bardet-Biedl syndrome is the response of interest.
Following the method in [21], 18,976 probes were selected as they exhibited sufficient signal for
reliable analysis and at least 2-fold variation in expressions, and we confine our attention to the top
5,000 genes with the highest sample variance. The 5,000 genes are standardized and partitioned into
100 subsets for DECO. The performance is assessed via 10-fold cross validation, following the same approach as in Section 5.1. The results are summarized in Table 2. As a reference, we also report these
values for the null model.
Electricity load diagram. This dataset [22] consists of electricity load from 2011–2014 for 370 clients. The data are originally recorded in kW for every 15 minutes, resulting in 14,025 attributes. Our goal is to predict the most recent electricity load by using all previous data points. The variance of the 14,025 features ranges from 0 to 10^7. To reduce the condition number of the feature matrix, we remove features whose variances are below the lower 10% quantile (a value of 10^5) and retain 126,231 features. We then expand the feature sets by including the interactions between the first 1,500 attributes that have the largest correlation with the clients' most recent load. The resulting 1,251,980 features are then partitioned into 1,000 subsets for DECO. Because cross-validation is
computationally demanding for such a large dataset, we put the first 200 clients in the training set and
the remaining 170 clients in the testing set. We also scale the value of electricity load between 0 and
300, so that patterns are more visible. The results are summarized in Table 2.
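As a rough illustration of this preprocessing pipeline, the sketch below (Python/NumPy) filters low-variance features, adds pairwise interactions among the attributes most correlated with the response, and randomly partitions the resulting features. The variable names, helper structure and library calls are our own illustrative assumptions, not the authors' code.

import numpy as np

rng = np.random.default_rng(0)

def preprocess(X, y, n_subsets=1000, var_quantile=0.10, n_top=1500):
    # Drop features whose variance falls below the lower quantile threshold.
    var = X.var(axis=0)
    X = X[:, var >= np.quantile(var, var_quantile)]
    # Rank remaining attributes by |correlation| with the response and add
    # pairwise interactions among the top-ranked ones, as described above.
    corr = np.abs(np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]))
    top = np.argsort(corr)[::-1][:min(n_top, X.shape[1])]
    inter = [X[:, i] * X[:, j] for a, i in enumerate(top) for j in top[a + 1:]]
    if inter:
        X = np.hstack([X, np.column_stack(inter)])
    # Randomly partition the feature indices into subsets for DECO.
    subsets = np.array_split(rng.permutation(X.shape[1]), n_subsets)
    return X, subsets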
Table 2: The results of all methods on the three datasets.

             | Student Performance | Mammalian eye disease | Electricity load diagram
Method       | MSE   size  runtime | MSE    size  runtime  | MSE        size  runtime
DECO-3       | 3.64  1.5   37.0    | 0.012  4.3   9.6      | 0.691      4     67.9
lasso-full   | 3.79  2.2   60.8    | 0.012  11    139.0    | 2.205      6     23,515.5
lasso-refine | 3.89  2.2   70.9    | 0.010  11    139.7    | 1.790      6     22,260.9
lasso-naive  | 16.5  6.4   44.6    | 37.65  6.8   7.9      | 3.6 × 10^8 4966  52.9
Null         | 20.7  --    --      | 0.021  --    --       | 520.6      --    --
6 Concluding remarks
In this paper, we have proposed an embarrassingly parallel framework named DECO for distributed estimation. DECO is shown to be theoretically attractive, empirically competitive, and straightforward to implement. In particular, we have shown that DECO achieves the same minimax convergence rate as if the full data were used, and the rate does not depend on the number of partitions. We demonstrated the empirical performance of DECO via extensive experiments and compared it to various approaches for fitting the full data. As illustrated in the experiments, DECO can not only reduce the computational cost substantially, but also often outperform the full-data approaches in terms of model selection and parameter estimation.
Although DECO is designed to solve large-p-small-n problems, it can be extended to deal with large-p-large-n problems by adding a sample space partitioning step, for example, using the message approach [5]. More precisely, we first partition the large-p-large-n dataset in the sample space to obtain l row blocks such that each becomes a large-p-small-n dataset. We then partition the feature space of each row block into m subsets. This procedure is equivalent to partitioning the original data matrix X into l × m small blocks, each with a feasible size that can be stored and fitted in a computer. We then apply the DECO framework to the subsets in the same row block using Algorithm 1. The last step is to apply the message method to aggregate the l row block estimators to output the final estimate, as sketched below. This extremely scalable approach will be explored in future work.
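The blocking scheme just described can be written down compactly. In the following sketch (Python), deco_fit and message_aggregate are hypothetical stand-ins for Algorithm 1 and the message method of [5], which are not specified here:

import numpy as np

def deco_large_n_large_p(X, y, l, m, deco_fit, message_aggregate):
    # Split samples into l row blocks and features into m subsets, run DECO
    # within each row block, then aggregate the l row-block estimators.
    n, p = X.shape
    row_blocks = np.array_split(np.arange(n), l)    # sample-space partition
    col_blocks = np.array_split(np.arange(p), m)    # feature-space partition
    row_estimates = []
    for rows in row_blocks:
        # Each row block is a large-p-small-n problem; deco_fit stands in
        # for Algorithm 1 applied to the m feature subsets of this block.
        row_estimates.append(deco_fit(X[rows], y[rows], col_blocks))
    # message_aggregate stands in for the message method of [5].
    return message_aggregate(row_estimates)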
References
[1] Ryan McDonald, Mehryar Mohri, Nathan Silberman, Dan Walker, and Gideon S Mann. Efficient large-scale distributed training of conditional maximum entropy models. In Advances in Neural Information Processing Systems, pages 1231–1239, 2009.
[2] Yuchen Zhang, Martin J Wainwright, and John C Duchi. Communication-efficient algorithms for statistical optimization. In Advances in Neural Information Processing Systems, pages 1502–1510, 2012.
[3] Steven L Scott, Alexander W Blocker, Fernando V Bonassi, H Chipman, E George, and R McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. In EFaBBayes 250 conference, volume 16, 2013.
[4] Xiangyu Wang, Fangjian Guo, Katherine A Heller, and David B Dunson. Parallelizing MCMC with random partition trees. In Advances in Neural Information Processing Systems, pages 451–459, 2015.
[5] Xiangyu Wang, Peichao Peng, and David B Dunson. Median selection subset aggregation for parallel inference. In Advances in Neural Information Processing Systems, pages 2195–2203, 2014.
[6] Stanislav Minsker et al. Geometric median and robust estimation in Banach spaces. Bernoulli, 21(4):2308–2335, 2015.
[7] Qifan Song and Faming Liang. A split-and-merge Bayesian variable selection approach for ultrahigh dimensional regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2014.
[8] Yingbo Zhou, Utkarsh Porwal, Ce Zhang, Hung Q Ngo, Long Nguyen, Christopher Ré, and Venu Govindaraju. Parallel feature selection inspired by group testing. In Advances in Neural Information Processing Systems, pages 3554–3562, 2014.
[9] Jinzhu Jia and Karl Rohe. Preconditioning to comply with the irrepresentable condition. arXiv preprint arXiv:1208.5584, 2012.
[10] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), pages 267–288, 1996.
[11] Julien Mairal and Bin Yu. Complexity analysis of the lasso regularization path. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 353–360, 2012.
[12] Saharon Rosset and Ji Zhu. Piecewise linear regularized solution paths. The Annals of Statistics, pages 1012–1030, 2007.
[13] Peng Zhao and Bin Yu. On model selection consistency of lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
[14] Fei Ye and Cun-Hui Zhang. Rate minimaxity of the lasso and Dantzig selector for the ℓq loss in ℓr balls. The Journal of Machine Learning Research, 11:3519–3540, 2010.
[15] Garvesh Raskutti, Martin J Wainwright, and Bin Yu. Minimax rates of convergence for high-dimensional regression under ℓq-ball sparsity. In Communication, Control, and Computing, 2009. Allerton 2009. 47th Annual Allerton Conference on, pages 251–257. IEEE, 2009.
[16] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
[17] Nicolai Meinshausen and Peter Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010.
[18] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[19] Jiahua Chen and Zehua Chen. Extended Bayesian information criteria for model selection with large model spaces. Biometrika, 95(3):759–771, 2008.
[20] Paulo Cortez and Alice Maria Gonçalves Silva. Using data mining to predict secondary school student performance. 2008.
[21] Todd E Scheetz, Kwang-Youn A Kim, Ruth E Swiderski, Alisdair R Philp, Terry A Braun, Kevin L Knudtson, Anne M Dorrance, Gerald F DiBona, Jian Huang, Thomas L Casavant, et al. Regulation of gene expression in the mammalian eye and its relevance to eye disease. Proceedings of the National Academy of Sciences, 103(39):14429–14434, 2006.
[22] Artur Trindade. UCI machine learning repository, 2014.
5,913 | 635 |
Topography and Ocular Dominance
with Positive Correlations
Geoffrey J. Goodhill
University of Edinburgh
Centre for Cognitive Science
2 Buccleuch Place
Edinburgh EH8 9LW
SCOTLAND
Abstract
A new computational model that addresses the formation of both topography and ocular dominance is presented. This is motivated by experimental evidence that these phenomena may be subserved by the same
mechanisms. An important aspect of this model is that ocular dominance segregation can occur when input activity is both distributed, and
positively correlated between the eyes. This allows investigation of the
dependence of the pattern of ocular dominance stripes on the degree of
correlation between the eyes: it is found that increasing correlation leads
to narrower stripes. Experiments are suggested to test whether such behaviour occurs in the natural system.
1 INTRODUCTION
The development of topographic and interdigitated mappings in the nervous system has been much studied experimentally, especially in the visual system (e.g.
[8, 15]). Here, each eye projects in a topographic manner to more central brain
structures: i.e. neighbouring points in the eye map to neighbouring points in the
brain. In addition, when fibres from the two eyes invade the same target structure, a competitive interaction often appears to take place such that eventually
postsynaptic cells receive inputs from only one eye or the other, in a pattern of
interdigitating "ocular dominance" stripes.
These phenomena have received a great deal of theoretical attention: several models have been proposed, each based on a different variant of a Hebb-type rule (e.g.
[16, 17, 14, 13]). However, there are two aspects of the experimental data which
previous models have not satisfactorily accounted for.
Firstly, experimental manipulations in the frog and goldfish have shown that when
fibres from a second eye invade a region of brain which is normally innervated
by only one eye, ocular dominance stripes can be formed (e.g. [2]). This suggests
that ocular dominance may be a byproduct of the expression of the rules for topographic map formation, and does not require additional mechanisms [1]. However,
previous models of topography have required additional implausible assumptions
to account for ocular dominance (e.g. [16, 11], [17, 10]), while previous models of
ocular dominance (e.g. [14]) have not simultaneously addressed the development
of topography.
Secondly, the simulation results presented for most previous models of ocular dominance have used only localized rather than distributed patterns of input activity
(e.g. [11]), or zero or negative correlations in activity between the two eyes (e.g.
[3]). It is clear that, in reality, between-eye correlations have a minimum value of
zero (which might be achieved for instance in the case of strabismus), and in general these correlations will be positive after eye-opening. In the cat for instance, the
majority of ocular dominance segregation occurs anatomically three to six weeks
after birth, whereas eye opening occurs at postnatal day 7-10.
Here I present a new model that accounts for (a) both topography and ocular
dominance with the same mechanisms, and (b) ocular dominance segregation
and receptive field refinement for input patterns which are both distributed, and
positively correlated between the eyes.
2 OUTLINE OF THE MODEL
The model is formulated at a general enough level to be applicable to both the
retinocortical and the retinotectal systems. It consists of two two-dimensional sheets of input units (indexed by i) connected to one two-dimensional sheet of output units (indexed by j) by fibres with variable synaptic weights w_ij. It is assumed in the retinocortical case that the topography of the retina is essentially unchanged by the lateral geniculate nucleus (LGN) on its way to the cortex, and in addition, the effects of retinal and LGN processing are taken together. Thus for simplicity we refer to the input layers of the model as being retinae and the output layer as being the cortex. An earlier version of the model appeared in [4], and a fuller description can be found in [5].

Both retina and cortex are arranged in square arrays. All weights and unit activities are positive. Lateral interactions exist in the cortical sheet of a circular centre-surround excitation/inhibition form, although these are not modeled explicitly. Initially there is total connectivity of random strengths between retinal and cortical units, apart from a small bias that specifies an orientation for the map. At each time step, a pattern of activity is presented by setting the activities a_i of retinal units. Each cortical unit calculates its total input s_j according to a linear summation rule:

s_j = Σ_i w_ij a_i .

We use assumptions similar to [9] concerning the effect of inhibitory lateral connections, to obtain the learning rule:

Δw_ij = ε f(j, j*) a_i ,

where ε is a small positive constant, j* is the cortical unit with the largest input s_j, and f(j, j*) is the function that specifies how the activities of units near to j* decrease with distance from j*. We assume f to be a gaussian function of the Euclidean distance between units in the cortical sheet, with standard deviation σ_c.
Inputs to the model are random dot patterns with short range spatial correlation introduced by convolution with a blurring function. Locally correlated patterns of activity were generated by assigning the value 0 or 1 to each pixel in each eye with a probability of 50%, and then convolving each eye with a gaussian function of standard deviation σ_r. Between-eye correlations were produced in the following way. Once each retina has been convolved individually with a gaussian function, the activity a_i of each unit i in each retina is replaced with (a_i + s â_i)/(1 + s), where â_i is the activity of the unit corresponding to i in the other eye, and s specifies the similarity between the two eyes. Thus by varying s it is possible to vary the degree of correlation between the eyes: if s = 0 they are uncorrelated, and if s = 1 they are perfectly correlated (i.e. the pattern of activity is identical in the two eyes).
The correlations existing in the biological system will clearly be more complicated
than this. However, the simple correlational structure described above aims to
capture the key features of the biological system: on average, cells in each retina are
correlated to an extent that decreases with distance between cells, and (after eye
opening) corresponding positions in the two eyes are also on the average somewhat
correlated.
The sum of the weights for each postsynaptic unit is maintained at a constant fixed
value. However, whereas this constraint is most usually enforced by dividing each
weight by the sum of the weights for that postsynaptic unit ("divisive" normalization), it is enforced in this model by subtracting a constant amount from each
weight ("subtractive" normalization), as in [13].
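As a concrete illustration, the following sketch (Python/NumPy) implements one presentation step of the model as described above: correlated binary input patterns are blurred and mixed between the eyes, the winning cortical unit is found, and weights are updated with a gaussian neighbourhood function followed by subtractive normalization. All sizes and constants are illustrative only, and the clip that keeps weights positive is our own choice.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
R, C = 8, 8                               # retina and cortex side lengths (illustrative)
n_in = 2 * R * R                          # two retinae feed the cortex
W = rng.uniform(0.4, 0.6, (C * C, n_in))  # initial random connectivity
S0 = W.sum(axis=1, keepdims=True)         # fixed target weight sums

def input_pattern(s, sigma_r=1.0):
    # Correlated random-dot input: blur each eye, then mix each unit with
    # the corresponding unit in the other eye; s=0 uncorrelated, s=1 identical.
    left = gaussian_filter(rng.binomial(1, 0.5, (R, R)).astype(float), sigma_r)
    right = gaussian_filter(rng.binomial(1, 0.5, (R, R)).astype(float), sigma_r)
    return np.concatenate([((left + s * right) / (1 + s)).ravel(),
                           ((right + s * left) / (1 + s)).ravel()])

def present(W, a, eps=0.05, sigma_c=1.5):
    s_tot = W @ a                         # linear summation rule s_j = sum_i w_ij a_i
    wy, wx = divmod(int(np.argmax(s_tot)), C)
    yy, xx = np.mgrid[0:C, 0:C]           # gaussian neighbourhood around the winner
    f = np.exp(-((yy - wy) ** 2 + (xx - wx) ** 2) / (2 * sigma_c ** 2)).ravel()
    W = W + eps * f[:, None] * a[None, :]               # dw_ij = eps f(j, j*) a_i
    W = W - (W.sum(axis=1, keepdims=True) - S0) / n_in  # subtractive normalization
    return np.clip(W, 0.0, None)          # weights stay positive

for _ in range(1000):
    W = present(W, input_pattern(s=0.5))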
3 RESULTS
Typical results for the case of two positively correlated eyes are shown in figure
1. Gradually receptive fields refine over the course of development, and cortical
units eventually lose connections from one or the other eye (figure 1(a-c)). After
a large number of input patterns have been presented, cortical units are almost
entirely monocular, and units dominant for the left and right eyes are laid out in
a pattern of alternating stripes (figure 1(c)). In addition, maps from the two eyes
are in register and topographic (figure 1(d-f)). The map of cortical receptive fields
Figure 1: Typical results for two eyes. (a-c) show the ocular dominance of cortical
units after 0, 50,000, and 350,000 iterations respectively. Each cortical unit is represented by a square with colour showing the eye for which it is dominant (black for
right eye, white for left eye), and size showing the degree to which it is dominant.
(d-f) represent cortical topography. Here the centre of mass of weights for each
cortical unit is averaged over both eyes, imagining the retinae to be lying atop
one another, and neighbouring units are connected by lines to form a grid. This
type of picture reveals where the map is folded to take into account that the cortex
must represent both eyes. It can be seen that discontinuities in terms of folds tend
to follow stripe boundaries: first particular positions in one eye are represented,
and then the cortex "doubles back" as its ocularity changes in order to represent
corresponding positions in the other eye.
Figure 2: The receptive fields of cortical units, showing topography and eye preference. Units are coloured white if they are strongly dominant for the left eye,
black if they are strongly dominant for the right eye, and grey if they are primarily
binocular. "Strongly dominant" is taken to mean that at least 80% of the total
weight available to a cortical unit is concentrated in one eye. Within each unit is a
representation of its receptive field: there is a 16 by 16 grid within each cortical unit
with each grid point representing a retinal unit, and the size of the box at each grid
point encodes the strength of the connection between each retinal unit and the cortical unit. For binocular (grey) units, the larger of the two corresponding weights in
the two eyes is drawn at each position, coloured white or black according to which
eye that weight belongs. It can be seen that neighbouring positions in each eye
tend to be represented by neighbouring cortical units, apart from discontinuities
across stripe boundaries. For instance, the bottom right corner of the right retina is
represented by the bottom right cortical unit, but the bottom left corner of the right
retina is represented by cortical unit (3,3) (counting along and up from the bottom
left corner of the cortex), since unit (1,1) represents the left retina.
(figure 2) confirms that, as described for the natural system [8], there is a smooth
progression of retinal position represented across a stripe, followed by a doubling
back at stripe boundaries for the cortex to "pick up where it left off" in the other
eye.
An important aspect of the model is that the effect on stripe width of the strength
of correlation between the two eyes can be investigated, which has not been done
in previous models. Figure 3 shows a series of results for the model, from which
it can be seen that stronger between-eye correlations lead to narrower stripes. It
is interesting to note that a similar relationship is seen in the elastic net model of
topography and ocular dominance [6], even though this is formulated on a rather
different mathematical basis to the model presented here.
4 DISCUSSION
It has sometimes been argued that it is not necessary to also consider the development of topography in models for ocular dominance, since in the cat for instance,
topography develops first, and is established before ocular dominance segregation occurs. However, non-simultaneity of development does not imply different
mechanisms. As a theoretical example, in the elastic net model [6], minimisation
by gradient descent of a particular objective function produces two clear stages of
development: first topography formation, then ocular dominance segregation. A
similar (though less marked) effect can be seen with the present model in figure 1:
the rough form of the map is established before ocular dominance segregation is
complete.
Interest in the contribution to map formation and eye-specific segregation of previsual activity in the retina has been re-awakened recently by the finding that
spontaneous retinal activity takes the form of waves sweeping across the retina
in a random direction [12]. Although this finding has in turn generated a wave
of theoretical activity, it is important to note that the theoretical principles of how
correlated activity can guide map formation have been fairly well worked out since
the 1970s [16, 11]. Discovery of the precise form that these correlations take does
not invalidate earlier modelling studies.
Finally, the results for the model presented in figure 3 raise the question of whether
stronger between-eye correlations lead to narrower stripes in the natural system.
Perhaps the simplest way to test this experimentally would be to look for changes in
stripe width in the cat after artificially induced strabismus, which severely reduces
the correlations between the two eyes. Although the effect of strabismus on the
degree of monocularity of cortical cells has been extensively investigated (e.g.
[7]), the effect on stripe width has not been examined. Such experiments would
shed light on the extent to which the periodicity of ocular dominance stripes is
determined by environmental as opposed to innate factors.
Acknowledgements

This work was funded by an SERC postgraduate studentship, and an MRC/JCI postdoctoral training fellowship. I thank Harry Barrow for advice relating to this work, and David Willshaw and David Price for helpful comments on an earlier draft of this paper.

Figure 3: Effect on stripe width of the degree of correlation between the two eyes. Shown from left to right are the cortical topography averaged over both eyes, the stripe pattern, and the power spectrum of the fourier transform for each case; panels (a), (b) and (c) correspond to increasing values of the between-eye similarity s. Note that stripe width tends to decrease as s increases, and also that the topography becomes smoother.
References
[1] Constantine-Paton, M. (1983). Position and proximity in the development of maps
and stripes. Trends Neurosci. 6, 32-36.
[2] Constantine-Paton, M. & Law, M.I. (1978). Eye-specific termination bands in tecta of
three-eyed frogs. Science, 202, 639-641.
[3] Cowan, J.D. & Friedman, A.E. (1991). Studies of a model for the development and
regeneration of eye-brain maps. In D.S. Touretzky, ed, Advances in Neural Information
Processing Systems, III, 3-10.
[4] Goodhill, G.J. (1991). Topography and ocular dominance can arise from distributed
patterns of activity. International Joint Conference on Neural Networks, Seattle, July 1991,
II, 623-627.
[5] Goodhill, G.J. (1991). Correlations, Competition and Optimality: Modelling the Development of Topography and Ocular Dominance. PhD Thesis, Sussex University.
[6] Goodhill, G.J. & Willshaw, D.J. (1990). Application of the elastic net algorithm to the
formation of ocular dominance stripes. Network, 1, 41-59.
[7] Hubel, D.H. & Wiesel, T.N. (1965). Binocular interaction in striate cortex of kittens
reared with artificial squint. Journal of Neurophysiology, 28, 1041-1059.
[8] Hubel, D.H. & Wiesel, T.N. (1977). Functional architecture of the macaque monkey
visual cortex. Proc. R. Soc. Lond. B, 198, 1-59.
[9] Kohonen, T. (1988). Self-organization and associative memory (3rd Edition). Springer,
Berlin.
[10] Malsburg, C. von der (1979). Development of ocularity domains and growth behaviour
of axon terminals. Biol. Cybern., 32, 49-62.
[11] Malsburg, C. von der & Willshaw, D.J. (1976). A mechanism for producing continuous
neural mappings: ocularity dominance stripes and ordered retino-tectal projections.
Exp. Brain. Res. Supplementum 1, 463-469.
[12] Meister, M., Wong, R.O.L., Baylor, D.A. & Shatz, C.J. (1991). Synchronous bursts of
action potentials in ganglion cells of the developing mammalian retina. Science, 252,
939-943.
[13] Miller, K.D., Keller, J.B. & Stryker, M.P. (1989). Ocular dominance column development: Analysis and simulation. Science, 245, 605-615.
[14] Swindale, N.V. (1980). A model for the formation of ocular dominance stripes. Proc.
R. Soc. Lond. B, 208, 243-264.
[15] Udin, S.B. & Fawcett, J.W. (1988). Formation of topographic maps. Ann. Rev. Neurosci.,
11, 289-327.
[16] Willshaw, D.J. & Malsburg, C. von der (1976). How patterned neural connections can
be set up by self-organization. Proc. R. Soc. Lond. B, 194, 431-445.
[17] Willshaw, D.J. & Malsburg, C. von der (1979). A marker induction mechanism for the
establishment of ordered neural mappings: its application to the retinotectal problem.
Phil. Trans. Roy. Soc. B, 287, 203-243.
5,914 | 6,350 | Adaptive Skills Adaptive Partitions (ASAP)
Daniel J. Mankowitz, Timothy A. Mann? and Shie Mannor
The Technion - Israel Institute of Technology,
Haifa, Israel
[email protected], [email protected], [email protected]
?
Timothy Mann now works at Google Deepmind.
Abstract
We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1)
learns skills (i.e., temporally extended actions or options) as well as (2) where to
apply them. We believe that both (1) and (2) are necessary for a truly general skill
learning framework, which is a key building block needed to scale up to lifelong
learning agents. The ASAP framework can also solve related new tasks simply by
adapting where it applies its existing learned skills. We prove that ASAP converges
to a local optimum under natural conditions. Finally, our experimental results,
which include a RoboCup domain, demonstrate the ability of ASAP to learn where
to reuse skills as well as solve multiple tasks with considerably less experience
than solving each task from scratch.
1
Introduction
Human-decision making involves decomposing a task into a course of action. The course of action is
typically composed of abstract, high-level actions that may execute over different timescales (e.g.,
walk to the door or make a cup of coffee). The decision-maker chooses actions to execute to solve
the task. These actions may need to be reused at different points in the task. In addition, the actions
may need to be used across multiple, related tasks.
Consider, for example, the task of building a city. The course of action to building a city may involve
building the foundations, laying down sewage pipes as well as building houses and shopping malls.
Each action operates over multiple timescales and certain actions (such as building a house) may need
to be reused if additional units are required. In addition, these actions can be reused if a neighboring
city needs to be developed (multi-task scenario).
Reinforcement Learning (RL) represents actions that last for multiple timescales as Temporally
Extended Actions (TEAs) (Sutton et al., 1999), also referred to as options, skills (Konidaris & Barto,
2009) or macro-actions (Hauskrecht, 1998). It has been shown both experimentally (Precup & Sutton,
1997; Sutton et al., 1999; Silver & Ciosek, 2012; Mankowitz et al., 2014) and theoretically (Mann &
Mannor, 2014) that TEAs speed up the convergence rates of RL planning algorithms. TEAs are seen
as a potentially viable solution to making RL truly scalable. TEAs in RL have become popular in
many domains including RoboCup soccer (Bai et al., 2012), video games (Mann et al., 2015) and
Robotics (Fu et al., 2015). Here, decomposing the domains into temporally extended courses of
action (strategies in RoboCup, move combinations in video games and skill controllers in Robotics
for example) has generated impressive solutions. From here on in, we will refer to TEAs as skills.
A course of action is defined by a policy. A policy is a solution to a Markov Decision Process (MDP)
and is defined as a mapping from states to a probability distribution over actions. That is, it tells the
RL agent which action to perform given the agent?s current state. We will refer to an inter-skill policy
as being a policy that tells the agent which skill to execute, given the current state.
A truly general skill learning framework must (1) learn skills as well as (2) automatically compose
them together (as stated by Bacon & Precup (2015)) and determine where each skill should be
executed (the inter-skill policy). This framework should also determine (3) where skills can be reused
in different parts of the state space and (4) adapt to changes in the task itself. Finally it should also be able to (5) correct model misspecification (Mankowitz et al., 2014). Whilst different forms of model misspecification exist in RL, we define it here as having an unsatisfactory set of skills and inter-skill policy that provide a sub-optimal solution to a given task. This skill learning framework should be able to correct this misspecification to obtain a near-optimal solution. A number of works have addressed some of these issues separately, as shown in Table 1. However, no work, to the best of our knowledge, has combined all of these elements into a truly general skill-learning framework.

Table 1: Comparison of Approaches to ASAP

Approach               | Automated Skill Learning with Policy Gradient | Automatic Skill Composition | Continuous State Multitask Learning | Learning Reusable Skills | Correcting Model Misspecification
ASAP (this paper)      | Yes | Yes | Yes | Yes | Yes
da Silva et al. 2012   | Yes | No  | Yes | No  | No
Konidaris & Barto 2009 | No  | Yes | No  | No  | No
Bacon & Precup 2015    | Yes | No  | No  | No  | No
Eaton & Ruvolo 2013    | No  | No  | Yes | No  | No
Our framework, entitled "Adaptive Skills, Adaptive Partitions" (ASAP), is the first of its kind to
incorporate all of the above-mentioned elements into a single framework, as shown in Table 1, and
solve continuous state MDPs. It receives as input a misspecified model (a sub-optimal set of skills and
inter-skill policy). The ASAP framework corrects the misspecification by simultaneously learning a
near-optimal skill-set and inter-skill policy which are both stored, in a Bayesian-like manner, within
the ASAP policy. In addition, ASAP automatically composes skills together, learns where to reuse
them and learns skills across multiple tasks.
Main Contributions: (1) The Adaptive Skills, Adaptive Partitions (ASAP) algorithm that automatically corrects a misspecified model. It learns a set of near-optimal skills, automatically composes
skills together and learns an inter-skill policy to solve a given task. (2) Learning skills over multiple
different tasks by automatically adapting both the inter-skill policy and the skill set. (3) ASAP can
determine where skills should be reused in the state space. (4) Theoretical convergence guarantees.
2 Background
Reinforcement Learning Problem: A Markov Decision Process is defined by a 5-tuple ⟨X, A, R, γ, P⟩, where X is the state space, A is the action space, R ∈ [−b, b] is a bounded reward function, γ ∈ [0, 1] is the discount factor and P : X × A → [0, 1]^X is the transition probability function for the MDP. The solution to an MDP is a policy π : X → Δ_A, which is a function mapping states to a probability distribution over actions. An optimal policy π* : X → Δ_A determines the best actions to take so as to maximize the expected reward. The value function V^π(x) = E_{a∼π(·|x)}[R(x, a) + γ E_{x′∼P(·|x,a)} V^π(x′)] defines the expected reward for following a policy π from state x. The optimal expected reward V^{π*}(x) is the expected value obtained for following the optimal policy from state x.

Policy Gradient: Policy Gradient (PG) methods have enjoyed success in recent years, especially in the fields of robotics (Peters & Schaal, 2006, 2008). The goal in PG is to learn a policy π_θ that maximizes the expected return J(π_θ) = ∫_τ P(τ)R(τ)dτ, where τ is a trajectory, P(τ) is the probability of a trajectory and R(τ) is the reward obtained for a particular trajectory. P(τ) is defined as P(τ) = P(x_0) ∏_{k=0}^{T} P(x_{k+1}|x_k, a_k) π_θ(a_k|x_k). Here, x_k ∈ X is the state at the kth timestep of the trajectory; a_k ∈ A is the action at the kth timestep; T is the trajectory length. Only the policy, in the general formulation of policy gradient, is parameterized with parameters θ. The idea is then to update the policy parameters using stochastic gradient descent, leading to the update rule θ_{t+1} = θ_t + η ∇J(π_θ), where θ_t are the policy parameters at timestep t, ∇J(π_θ) is the gradient of the objective function with respect to the parameters, and η is the step size.
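As a concrete instance of this update rule, the following sketch implements a plain REINFORCE-style step for a linear softmax policy over discrete actions. The trajectory format and environment interface are assumptions, not part of the paper.

import numpy as np

def softmax_policy(theta, phi):
    # pi_theta(a|x) for a linear softmax policy; phi: (n_actions, d) features.
    z = phi @ theta
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def reinforce_update(theta, trajectory, eta=0.01, gamma=0.99):
    # One step of theta <- theta + eta * grad J(pi_theta), estimated from a
    # single trajectory given as a list of (phi, action, reward) triples.
    G = sum(gamma ** t * r for t, (_, _, r) in enumerate(trajectory))
    grad = np.zeros_like(theta)
    for phi, a, _ in trajectory:
        p = softmax_policy(theta, phi)
        grad += (phi[a] - p @ phi) * G   # grad log pi(a|x) times the return
    return theta + eta * grad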
3 Skills, Skill Partitions and Intra-Skill Policy
Skills: A skill is a parameterized Temporally Extended Action (TEA) (Sutton et al., 1999). The
power of a skill is that it incorporates both generalization (due to the parameterization) and temporal abstraction. Skills are a special case of options and therefore inherit many of their useful theoretical
properties (Sutton et al., 1999; Precup et al., 1998).
Definition 1. A Skill σ is a TEA that consists of the two-tuple σ = ⟨π_θ, p(x)⟩, where π_θ : X → Δ_A is a parameterized, intra-skill policy with parameters θ ∈ R^d and p : X → [0, 1] is the termination probability distribution of the skill.
Skill Partitions: A skill, by definition, performs a specialized task on a sub-region of a state space.
We refer to these sub-regions as Skill Partitions (SPs) which are necessary for skills to specialize
during the learning process. A given set of SPs covering a state space effectively define the inter-skill
policy as they determine where each skill should be executed. These partitions are unknown a-priori
and are generated using intersections of hyperplane half-spaces (described below). Hyperplanes
provide a natural way to automatically compose skills together. In addition, once a skill is being
executed, the agent needs to select actions from the skill's intra-skill policy π_θ. We next utilize SPs
and the intra-skill policy for each skill to construct the ASAP policy, defined in Section 4. We first
define a skill hyperplane.
Definition 2. Skill Hyperplane (SH): Let φ_{x,m} ∈ R^d be a vector of features that depend on a state x ∈ X and an MDP environment m. Let β_i ∈ R^d be a vector of hyperplane parameters. A skill hyperplane is defined as φ_{x,m}^T β_i = L, where L is a constant.
In this work, we interpret hyperplanes to mean that the intersection of skill hyperplane half spaces
form sub-regions in the state space called Skill Partitions (SPs), defining where each skill is executed.
Figure 1a contains two example skill hyperplanes h_1, h_2. Skill σ_1 is executed in the SP defined by the intersection of the positive half-space of h_1 and the negative half-space of h_2. The same argument applies for σ_0, σ_2, σ_3. From here on in, we will refer to skill σ_i interchangeably with its index i.
Skill hyperplanes have two functions: (1) They automatically compose skills together, creating
chainable skills as desired by Bacon & Precup (2015). (2) They define SPs which enable us to
derive the probability of executing a skill, given a state x and MDP m. First, we need to be able
to uniquely identify a skill. We define a binary vector B = [b_1, b_2, ..., b_K] ∈ {0, 1}^K, where b_k is a Bernoulli random variable and K is the number of skill hyperplanes. We define the skill index i = Σ_{k=1}^{K} 2^{k−1} b_k as a sum of Bernoulli random variables b_k. Note that this is but one approach to generate skills (and SPs). In principle this setup defines 2^K skills, but in practice, far fewer skills are typically used (see experiments). Furthermore, the complexity of the SP is governed by the VC-dimension. We can now define the probability of executing skill i as a Bernoulli likelihood in Equation 1.
"
#
K
X
Y
k?1
P (i|x, m) = P i =
2
bk =
pk (bk = ik |x, m) .
(1)
k=1
k
Here, ik ? {0, 1} is the value of the k th bit of B, x is the current state and m is a description of the
MDP. The probability pk (bk = 1|x, m) and pk (bk = 0|x, m) are defined in Equation 2.
p_k(b_k = 1|x, m) = 1 / (1 + exp(−α φ_{x,m}^T β_k)) ,   p_k(b_k = 0|x, m) = 1 − p_k(b_k = 1|x, m) .   (2)
We have made use of the logistic sigmoid function to ensure valid probabilities, where φ_{x,m}^T β_k is a skill hyperplane and α > 0 is a temperature parameter. The intuition here is that the kth bit of a skill is b_k = 1 if the skill hyperplane φ_{x,m}^T β_k > 0, meaning that the skill's partition is in the positive half-space of the hyperplane. Similarly, b_k = 0 if φ_{x,m}^T β_k < 0, corresponding to the negative half-space. Using skill 3 as an example with K = 2 hyperplanes in Figure 1a, we would define the Bernoulli likelihood of executing σ_3 as p(i = 3|x, m) = p_1(b_1 = 1|x, m) · p_2(b_2 = 1|x, m).
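A direct transcription of Equations 1 and 2 into code reads as follows. This is a sketch: the shapes, the zero-based bit convention and the default temperature are illustrative assumptions.

import numpy as np

def bit_probs(phi_xm, beta, alpha=1.0):
    # p_k(b_k = 1 | x, m) for each of the K hyperplanes (Equation 2);
    # phi_xm: hyperplane feature vector, beta: (dim, K) parameter matrix.
    return 1.0 / (1.0 + np.exp(-alpha * (phi_xm @ beta)))

def skill_prob(i, phi_xm, beta, alpha=1.0):
    # P(i | x, m) as a product of Bernoulli likelihoods (Equation 1).
    # Bit k of the (zero-based) skill index selects p_k or 1 - p_k.
    p1 = bit_probs(phi_xm, beta, alpha)
    return float(np.prod([p1[k] if (i >> k) & 1 else 1.0 - p1[k]
                          for k in range(beta.shape[1])]))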
Intra-Skill Policy: Now that we can define the probability of executing a skill based on its SP, we define the intra-skill policy π_θ for each skill. The Gibbs distribution is a commonly used function to define policies in RL (Sutton et al., 1999). Therefore we define the intra-skill policy for skill i, parameterized by θ_i ∈ R^d, as

π_{θ_i}(a|x) = exp(τ φ_{x,a}^T θ_i) / Σ_{b∈A} exp(τ φ_{x,b}^T θ_i) .   (3)
Here, τ > 0 is the temperature and φ_{x,a} ∈ R^d is a feature vector that depends on the current state x ∈ X and action a ∈ A. Now that we have a definition of both the probability of executing a skill and an
intra-skill policy, we need to incorporate these distributions into the policy gradient setting using a
generalized trajectory.
Generalized Trajectory: A generalized trajectory is necessary to derive policy gradient update rules with respect to the parameters θ, β, as will be shown in Section 4. A typical trajectory is usually defined as τ = (x_t, a_t, r_t, x_{t+1})_{t=0}^{T}, where T is the length of the trajectory. For a generalized trajectory, our algorithm emits a class i_t at each timestep t ≥ 1, which denotes the skill that was executed. The generalized trajectory is defined as g = (x_t, a_t, i_t, r_t, x_{t+1})_{t=0}^{T}. The probability of a generalized trajectory, as an extension to the PG trajectory in Section 2, is now P_{β,θ}(g) = P(x_0) Π_{t=0}^{T} P(x_{t+1}|x_t, a_t) P_β(i_t|x_t, m) π_{θ_i}(a_t|x_t), where P_β(i_t|x_t, m) is the probability of a skill being executed, given the state x_t ∈ X and environment m at time t ≥ 1; π_{θ_i}(a_t|x_t) is the probability of executing action a_t ∈ A at time t ≥ 1 given that we are executing skill i. The generalized trajectory is now a function of two parameter vectors θ and β.
4 Adaptive Skills, Adaptive Partitions (ASAP) Framework
The Adaptive Skills, Adaptive Partitions (ASAP) framework simultaneously learns a near-optimal set of skills and SPs (inter-skill policy), given an initially misspecified model. ASAP also automatically composes skills together and allows for a multi-task setting as it incorporates the environment m into its hyperplane feature set. We have previously defined two important distributions, P_β(i_t|x_t, m) and π_{θ_i}(a_t|x_t), respectively. These distributions are used to collectively define the ASAP policy, which is presented below. Using the notion of a generalized trajectory, the ASAP policy can be learned in a policy gradient setting.

ASAP Policy: Assume that we are given a probability distribution over MDPs with a d-dimensional state-action space and a z-dimensional vector describing each MDP. We define β as a (d + z) × K matrix where each column β_i represents a skill hyperplane, and θ as a (d × 2^K) matrix where each column θ_j parameterizes an intra-skill policy. Using the previously defined distributions, we now define the ASAP policy.
Definition 3. (ASAP Policy). Given K skill hyperplanes, a set of 2^K skills {σ_i | i = 1, ..., 2^K}, a state space x ∈ X, a set of actions a ∈ A and an MDP m from a hypothesis space of MDPs, the ASAP policy is defined as

π_{β,θ}(a|x, m) = Σ_{i=1}^{2^K} P_β(i|x, m) π_{θ_i}(a|x) ,   (4)

where P_β(i|x, m) and π_{θ_i}(a|x) are the distributions as defined in Equations 1 and 3 respectively.
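Sampling from the ASAP policy of Equation 4 then amounts to drawing a skill index from P_β(·|x, m) and an action from that skill's Gibbs policy of Equation 3. A self-contained sketch follows; the shapes and names are our own assumptions.

import numpy as np

rng = np.random.default_rng(0)

def asap_sample(phi_xm, phi_xa, beta, theta, alpha=1.0, tau=1.0):
    # Draw (skill, action) from the ASAP policy of Equation 4.
    # phi_xm: hyperplane features; phi_xa: (n_actions, d) state-action features;
    # beta: (dim, K) hyperplanes; theta: (d, 2^K) intra-skill parameters.
    K = beta.shape[1]
    p1 = 1.0 / (1.0 + np.exp(-alpha * (phi_xm @ beta)))   # Equation 2
    p_skill = np.ones(2 ** K)                              # Equation 1 for every i
    for i in range(2 ** K):
        for k in range(K):
            p_skill[i] *= p1[k] if (i >> k) & 1 else 1.0 - p1[k]
    skill = rng.choice(2 ** K, p=p_skill)
    z = tau * (phi_xa @ theta[:, skill])                   # Equation 3
    z = z - z.max()
    pa = np.exp(z)
    pa /= pa.sum()
    action = rng.choice(phi_xa.shape[0], p=pa)
    return skill, action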
This is a powerful description for a policy, which resembles a Bayesian approach, as the policy takes into account the uncertainty over the skills that are executing as well as the actions that each skill's intra-skill policy chooses. We now define the ASAP objective with respect to the ASAP policy.

ASAP Objective: We defined the policy with respect to a hypothesis space of MDPs m. We now need to define an objective function which takes this hypothesis space into account. Since we assume that we are provided with a distribution P_M : M → [0, 1] over possible MDP models m ∈ M, with a d-dimensional state-action space, we can incorporate this into the ASAP objective function:

ρ(π_{β,θ}) = ∫ P_M(m) J^{(m)}(π_{β,θ}) dm ,   (5)
where ??,? is the ASAP policy and J (m) (??,? ) is the expected return for MDP m with respect to
the ASAP policy. To simplify the notation, we group all of the parameters into a single parameter
vector ? =
R [vec(?), vec(?)]. We define the expected reward for generalized trajectories g as
J(?? ) = g P? (g)R(g)dg, where R(g) is reward obtained for a particular trajectory g. This is a
slight variation of the original policy gradient objective defined in Section 2. We then insert J(?? )
into Equation 5 and we get the ASAP objective
function
Z
?(?? ) = ?(m)J (m) (?? )dm ,
(6)
4
where J (m) (?? ) is the expected return for policy ?? in MDP m. Next, we need to derive gradient
?
update rules to learn the parameters of the optimal policy ??
that maximizes this objective.
ASAP Gradients: To learn both the intra-skill policy parameter matrix θ as well as the hyperplane parameter matrix β (and therefore implicitly the SPs), we derive an update rule for the policy gradient framework with generalized trajectories. The full derivation is in the supplementary material. The first step involves calculating the gradient of the ASAP objective function, yielding the ASAP gradient (Theorem 1).
Theorem 1. (ASAP Gradient Theorem). Suppose that the ASAP objective function is ρ(π_Ω) = ∫ P_M(m) J^{(m)}(π_Ω) dm, where P_M(m) is a distribution over MDPs m and J^{(m)}(π_Ω) is the expected return for MDP m whilst following policy π_Ω. Then the gradient of this objective is:

∇_Ω ρ(π_Ω) = E_{P_M(m)} E_{P^{(m)}(g)} [ Σ_{t=0}^{H^{(m)}} ∇_Ω Z_Ω^{(m)}(x_t, i_t, a_t) R^{(m)} ] ,

where Z_Ω^{(m)}(x_t, i_t, a_t) = log P_β(i_t|x_t, m) π_{θ_i}(a_t|x_t), H^{(m)} is the length of a trajectory for MDP m, and R^{(m)} = Σ_{i=0}^{H^{(m)}} γ^i r_i is the discounted cumulative reward for trajectory H^{(m)}.¹

¹ These expectations can easily be sampled (see supplementary material).
If we are able to derive ∇_Ω Z_Ω^{(m)}(x_t, i_t, a_t), then we can estimate the gradient ∇_Ω ρ(π_Ω). We will refer to Z_Ω^{(m)} = Z_Ω^{(m)}(x_t, i_t, a_t) where it is clear from context. It turns out that it is possible to derive this term as a result of the generalized trajectory. This yields the gradients ∇_θ Z_Ω^{(m)} and ∇_β Z_Ω^{(m)} in Theorems 2 and 3 respectively. The derivations can be found in the supplementary material.
Theorem 2 ($\Theta$ Gradient Theorem). Suppose that $\Theta$ is a $d \times 2^K$ matrix where each column $\theta_j$ parameterizes an intra-skill policy. Then the gradient $\nabla_{\theta_{i_t}} Z^{(m)}_\Omega$ corresponding to the intra-skill parameters of the $i$th skill at time $t$ is
$$\nabla_{\theta_{i_t}} Z^{(m)}_\Omega = \alpha\,\phi_{x_t,a_t} - \frac{\alpha \sum_{b \in A} \phi_{x_t,b}\, \exp\!\big(\alpha\, \phi_{x_t,b}^T \theta_{i_t}\big)}{\sum_{b \in A} \exp\!\big(\alpha\, \phi_{x_t,b}^T \theta_{i_t}\big)},$$
where $\alpha > 0$ is the temperature parameter and $\phi_{x_t,a_t} \in \mathbb{R}^{d \cdot 2^K}$ is a feature vector of the current state $x_t \in X$ and the current action $a_t \in A$.
Theorem 3 ($\beta$ Gradient Theorem). Suppose that $\beta$ is a $(d+z) \times K$ matrix where each column $\beta_k$ represents a skill hyperplane. Then the gradients $\nabla_{\beta_k} Z^{(m)}_\Omega$ corresponding to the parameters of the $k$th hyperplane are
$$\nabla_{\beta_{k,1}} Z^{(m)}_\Omega = \frac{\eta\, \phi_{x_t,m} \exp\!\big(-\eta\, \phi_{x_t,m}^T \beta_k\big)}{1 + \exp\!\big(-\eta\, \phi_{x_t,m}^T \beta_k\big)}, \qquad \nabla_{\beta_{k,0}} Z^{(m)}_\Omega = -\eta\, \phi_{x_t,m} + \frac{\eta\, \phi_{x_t,m} \exp\!\big(-\eta\, \phi_{x_t,m}^T \beta_k\big)}{1 + \exp\!\big(-\eta\, \phi_{x_t,m}^T \beta_k\big)}, \qquad (7)$$
where $\eta > 0$ is the hyperplane temperature parameter, $\phi_{x_t,m}^T \beta_k$ is the $k$th skill hyperplane for MDP $m$, $\beta_{k,1}$ corresponds to locations in the binary vector equal to $1$ ($b_k = 1$), and $\beta_{k,0}$ corresponds to locations in the binary vector equal to $0$ ($b_k = 0$).
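A small sketch of the two score terms, under the same illustrative conventions as the policy sketch above; comparing these closed forms against finite-difference gradients of the corresponding log-probabilities is a quick sanity check.

    import numpy as np

    def grad_log_intra_skill(phi_sa, a_t, theta_i, alpha):
        """Theorem 2: gradient of log pi_{theta_i}(a_t | x_t) for the Gibbs policy.
        phi_sa: (|A|, d) matrix whose rows are the features phi_{x_t, b}."""
        logits = alpha * (phi_sa @ theta_i)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # alpha * phi(x_t, a_t) minus the probability-weighted feature average
        return alpha * (phi_sa[a_t] - probs @ phi_sa)

    def grad_log_hyperplane(phi_xm, beta_k, eta, b_k):
        """Theorem 3: gradient of log p(b_k | x_t, m) for the sigmoid hyperplane."""
        u = eta * (phi_xm @ beta_k)
        s = np.exp(-u) / (1.0 + np.exp(-u))   # exp(-u) / (1 + exp(-u)) as in Equation 7
        return eta * phi_xm * s if b_k == 1 else -eta * phi_xm + eta * phi_xm * s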
Using these gradient updates, we can then order all of the gradients into a vector $\nabla_\Omega Z^{(m)}_\Omega = \langle \nabla_{\theta_1} Z^{(m)}_\Omega, \dots, \nabla_{\theta_{2^K}} Z^{(m)}_\Omega, \nabla_{\beta_1} Z^{(m)}_\Omega, \dots, \nabla_{\beta_K} Z^{(m)}_\Omega \rangle$ and update both the intra-skill policy parameters and the hyperplane parameters for the given task (learning a skill set and SPs). Note that the updates occur on a single time scale. This is formally stated in the ASAP algorithm.
5 ASAP Algorithm
We present the ASAP algorithm (Algorithm 1), which dynamically and simultaneously learns skills and the inter-skill policy, and automatically composes skills together by learning SPs. The skills ($\Theta$ matrix) and SPs ($\beta$ matrix) are initially arbitrary and therefore form a misspecified model. Line 2 combines the skill and hyperplane parameters into a single parameter vector $\Omega$. Lines 3-7 learn the skill and hyperplane parameters (and therefore, implicitly, the skill partitions). In line 4 a generalized trajectory is generated using the current ASAP policy. The gradient $\nabla_\Omega \rho(\pi_\Omega)$ is then estimated in line 5 from this trajectory, and the parameters are updated in line 6. This is repeated until the skill and hyperplane parameters have converged, thus correcting the misspecified model. Theorem 4 provides a convergence guarantee of ASAP to a local optimum (see the supplementary material for the proof).
¹ These expectations can easily be sampled (see supplementary material).
Algorithm 1 ASAP
Require: $\phi_{s,a} \in \mathbb{R}^d$ {state-action feature vector}, $\phi_{x,m} \in \mathbb{R}^{d+z}$ {skill hyperplane feature vector}, $K$ {the number of hyperplanes}, $\Theta \in \mathbb{R}^{d \times 2^K}$ {an arbitrary skill matrix}, $\beta \in \mathbb{R}^{(d+z) \times K}$ {an arbitrary skill hyperplane matrix}, $\Phi(m)$ {a distribution over MDP tasks}
1: $Z = (|d||2^K| + |(d+z)K|)$ {define the number of parameters}
2: $\Omega = [\mathrm{vec}(\Theta), \mathrm{vec}(\beta)] \in \mathbb{R}^Z$
3: repeat
4:   Perform a trial (which may consist of multiple MDP tasks) and obtain $x_{0:H}, i_{0:H}, a_{0:H}, r_{0:H}, m_{0:H}$ {states, skills, actions, rewards, task-specific information}
5:   $\nabla_\Omega \rho(\pi_\Omega) = \sum_m \sum_{t=0}^{T} \nabla_\Omega Z^{(m)}_\Omega\, R^{(m)}$ {$T$ is the task episode length}
6:   $\Omega \leftarrow \Omega + \alpha_k \nabla_\Omega \rho(\pi_\Omega)$
7: until parameters $\Omega$ have converged
8: return $\Omega$
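Lines 4-6 then amount to a REINFORCE-style update; a compact sketch follows, where `trajectories` and `grad_Z` (the per-step score of Theorem 1, assembled from the two gradients above) are assumed interfaces rather than constructs from the paper.

    import numpy as np

    def asap_step(Omega, trajectories, gamma, step_size, grad_Z):
        """One ASAP gradient update (lines 5-6 of Algorithm 1).
        trajectories: list of (m, [(x, i, a, r), ...]) pairs from the trial in line 4.
        grad_Z(Omega, m, x, i, a): per-step score vector of Theorem 1 (shape of Omega)."""
        g = np.zeros_like(Omega)
        for m, traj in trajectories:
            R = sum(gamma ** t * r for t, (_, _, _, r) in enumerate(traj))  # discounted return
            for x, i, a, _ in traj:
                g += grad_Z(Omega, m, x, i, a) * R      # REINFORCE-style estimator
        return Omega + step_size * g                    # line 6: ascend the objective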
Theorem 4 (Convergence of ASAP). Given an ASAP policy $\pi(\Omega)$, an ASAP objective over MDP models $\rho(\pi_\Omega)$, and the ASAP gradient update rules: if (1) the step sizes $\alpha_k$ satisfy $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_k \alpha_k = \infty$ (for example, $\alpha_k = 1/k$); and (2) the second derivative of the policy is bounded and the rewards are bounded; then the sequence $\{\rho(\pi_{\Omega,k})\}_{k=0}^\infty$ converges such that $\lim_{k \to \infty} \frac{\partial \rho(\pi_{\Omega,k})}{\partial \Omega} = 0$ almost surely.
6 Experiments
The experiments have been performed on four different continuous domains: the Two Rooms (2R) domain (Figure 1b), the Flipped 2R domain (Figure 1c), the Three Rooms (3R) domain (Figure 1d), and RoboCup domains (Figure 1e) that include a one-on-one scenario between a striker and a goalkeeper (R1), a two-on-one scenario of a striker against a goalkeeper and a defender (R2), and a striker against two defenders and a goalkeeper (R3) (see supplementary material). In each experiment, ASAP is provided with a misspecified model, that is, a set of skills and SPs (the inter-skill policy) that achieve degenerate, sub-optimal performance. ASAP corrects this misspecified model in each case to learn a set of near-optimal skills and SPs. For each experiment we implement ASAP using Actor-Critic Policy Gradient (AC-PG) as the learning algorithm.²
The Two-Room and Flipped Room Domains (2R): In both domains, the agent (red ball) needs to reach the goal location (blue square) in the shortest amount of time. The agent receives constant negative rewards and, upon reaching the goal, receives a large positive reward. There is a wall dividing the environment, which creates two rooms. The state space is a 4-tuple consisting of the continuous $\langle x_{agent}, y_{agent} \rangle$ location of the agent and the $\langle x_{goal}, y_{goal} \rangle$ location of the center of the goal. The agent can move in each of the four cardinal directions. For each experiment involving the two-room domains, a single hyperplane is learned (resulting in two SPs) with a linear feature vector representation $\phi_{x,m} = [1, x_{agent}, y_{agent}]$. In addition, a skill is learned in each of the two SPs. The intra-skill policies are represented as a probability distribution over actions.
Automated Hyperplane and Skill Learning: Using ASAP, the agent learned intuitive SPs and skills, as seen in Figures 1f and 1g. Each colored region corresponds to a SP. The white arrows have been superimposed onto the figures to indicate the skills learned for each SP. Since each intra-skill policy is a probability distribution over actions, each skill is unable to solve the entire task on its own. ASAP has taken this into account and has positioned the hyperplane accordingly such that the given skill representation can solve the task. Figure 2a shows that ASAP improves upon the initial misspecified partitioning to attain near-optimal performance, compared to executing ASAP on the fixed initial misspecified partitioning and on a fixed, approximately optimal partitioning.
² AC-PG works well in practice and can be trivially incorporated into ASAP with convergence guarantees.
Figure 1: (a) The intersection of skill hyperplanes $\{h_1, h_2\}$ forms four partitions, each of which defines a skill's execution region (the inter-skill policy). The (b) 2R, (c) Flipped 2R, (d) 3R and (e) RoboCup domains (with a varying number of defenders for R1, R2, R3). The learned skills and Skill Partitions (SPs) for the (f) 2R, (g) Flipped 2R, (h) 3R and (i) across multiple tasks.
Figure 2: Average reward of the learned ASAP policy compared to (1) the approximately optimal SPs and skill set as well as (2) the initial misspecified model, for the (a) 2R, (b) 3R, (c) 2R learning across multiple tasks and the (d) 2R without learning, by flipping the hyperplane. (e) The average reward of the learned ASAP policy for a varying number of hyperplanes $K$. (f) The learned SPs and skill set for the R1 domain. (g) The learned SPs using a polynomial hyperplane (1), (2) and linear hyperplane (3) representation. (h) The learned SPs using a polynomial hyperplane representation without the defender's location as a feature (1) and with the defender's $x$ location (2), $y$ location (3), and $\langle x, y \rangle$ location as a feature (4). (i) The dribbling behavior of the striker when taking the defender's $y$ location into account. (j) The average reward for the R1 domain.
Multiple Hyperplanes: We analyzed the ASAP framework when learning multiple hyperplanes in the two-room domain. As seen in Figure 2e, increasing the number of hyperplanes $K$ does not have an impact on the final solution in terms of average reward. However, it does increase the computational complexity of the algorithm, since $2^K$ skills need to be learned. The approximate points of convergence are marked in the figure as K1, K2 and K3, respectively. In addition, two skills dominate in each case, producing similar partitions to those seen in Figure 1a (see supplementary material), indicating that ASAP learns that not all skills are necessary to solve the task.
Multitask Learning: We first applied ASAP to the 2R domain (Task 1) and attained a near-optimal average reward (Figure 2c). It took approximately 35000 episodes to reach near-optimal performance and resulted in the SPs and skill set shown in Figure 1i (top). Using the learned SPs and skills, ASAP was then able to adapt and learn a new set of SPs and skills to solve a different task (Flipped 2R, Task 2) in only 5000 episodes (Figure 2c), indicating that the parameters learned from the old task provided a good initialization for the new task. The knowledge transfer is seen in Figure 1i (bottom), as the SPs do not significantly change between tasks, yet the skills are completely relearned.
We also wanted to see whether we could flip the SPs; that is, switch the sign of the hyperplane
parameters learned in the 2R domain and see whether ASAP can solve the Flipped 2R domain (Task
2) without any additional learning. Due to the symmetry of the domains, ASAP was indeed able to
solve the new domain and attained near-optimal performance (Figure 2d). This is an exciting result as
many problems, especially navigation tasks, possess symmetrical characteristics. This insight could
dramatically reduce the sample complexity of these problems.
The Three-Room Domain (3R): The 3R domain (Figure 1d) is similar to the 2R domain regarding the goal, state space, available actions and rewards. However, in this case, there are two walls, dividing the state space into three rooms. The hyperplane feature vector $\phi_{x,m}$ consists of a single Fourier feature. The intra-skill policy is a probability distribution over actions. The resulting learned hyperplane partitioning and skill set are shown in Figure 1h. Using this partitioning, ASAP achieved near-optimal performance (Figure 2b). This experiment shows an insightful and unexpected result.
Reusable Skills: Using this hyperplane representation, ASAP was able to learn not only the intra-skill policies and SPs, but also that skill "A" needed to be reused in two different parts of the state space (Figure 1h). ASAP therefore shows the potential to automatically create reusable skills.
RoboCup Domain: The RoboCup 2D soccer simulation domain (Akiyama & Nakashima, 2014) is a 2D soccer field (Figure 1e) with two opposing teams. We utilized three RoboCup sub-domains³ R1, R2 and R3, as mentioned previously. In these sub-domains, a striker (the agent) needs to learn to dribble the ball and try to score goals past the goalkeeper. State space: R1 domain - the continuous locations of the striker $\langle x_{striker}, y_{striker} \rangle$, the ball $\langle x_{ball}, y_{ball} \rangle$, the goalkeeper $\langle x_{goalkeeper}, y_{goalkeeper} \rangle$ and the constant goal location $\langle x_{goal}, y_{goal} \rangle$. R2 domain - we add the defender's location $\langle x_{defender}, y_{defender} \rangle$ to the state space. R3 domain - we add the locations of two defenders. Features: For the R1 domain, we tested both a linear and a degree-two polynomial feature representation for the hyperplanes. For the R2 and R3 domains, we also utilized a degree-two polynomial hyperplane feature representation. Actions: The striker has three actions: (1) move to the ball (M), (2) move to the ball and dribble towards the goal (D), and (3) move to the ball and shoot towards the goal (S). Rewards: The reward setup is consistent with logical football strategies (Hausknecht & Stone, 2015; Bai et al., 2012): small negative (positive) rewards for shooting from outside (inside) the box and dribbling when inside (outside) the box; large negative rewards for losing possession and kicking the ball out of bounds; and a large positive reward for scoring.
Different SP Optima: Since ASAP attains a locally optimal solution, it may sometimes learn different SPs. For the polynomial hyperplane feature representation, ASAP attained two different solutions, shown in Figure 2g(1) and Figure 2g(2), respectively. Both achieve near-optimal performance compared to the approximately optimal scoring controller (see supplementary material). For the linear feature representation, the SPs and skill set in Figure 2g(3) are obtained, which achieved near-optimal performance (Figure 2j), outperforming the polynomial representation.
SP Sensitivity: In the R2 domain, an additional player (the defender) is added to the game. It is expected that the presence of the defender will affect the shape of the learned SPs. ASAP again learns intuitive SPs. However, the shape of the learned SPs changes based on the pre-defined hyperplane feature vector $\phi_{x,m}$. Figure 2h(1) shows the learned SPs when the location of the defender is not used as a hyperplane feature. When the $x$ location of the defender is utilized, the "flatter" SPs are learned in Figure 2h(2). Using the $y$ location of the defender as a hyperplane feature causes the hyperplane offset shown in Figure 2h(3). This is due to the striker learning to dribble around the defender in order to score a goal, as seen in Figure 2i. Finally, taking the $\langle x, y \rangle$ location of the defender into account results in the "squashed" SPs shown in Figure 2h(4), clearly showing the sensitivity and adaptability of ASAP to dynamic factors in the environment.
7 Discussion
We have presented the Adaptive Skills, Adaptive Partitions (ASAP) framework, which automatically composes skills together and simultaneously learns a near-optimal skill set and skill partitions (the inter-skill policy) to correct an initially misspecified model. We derived the gradient update rules for both skill and skill hyperplane parameters and incorporated them into a policy gradient framework. This is possible due to our definition of a generalized trajectory. In addition, ASAP has shown the potential to learn across multiple tasks as well as automatically reuse skills. These are the necessary requirements for a truly general skill learning framework and can be applied to lifelong learning problems (Ammar et al., 2015; Thrun & Mitchell, 1995). An exciting extension of this work is to incorporate it into a deep reinforcement learning framework, where both the skills and the ASAP policy can be represented as deep networks.
Acknowledgements
The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Program (FP/2007-2013) / ERC Grant Agreement n. 306638.
³ https://github.com/mhauskn/HFO.git
References
Akiyama, Hidehisa and Nakashima, Tomoharu. HELIOS base: An open source package for the RoboCup soccer 2D simulation. In RoboCup 2013: Robot World Cup XVII, pp. 528-535. Springer, 2014.
Ammar, Haitham Bou, Tutunov, Rasul, and Eaton, Eric. Safe policy search for lifelong reinforcement learning with sublinear regret. arXiv preprint arXiv:1505.05798, 2015.
Bacon, Pierre-Luc and Precup, Doina. The option-critic architecture. In NIPS Deep Reinforcement Learning Workshop, 2015.
Bai, Aijun, Wu, Feng, and Chen, Xiaoping. Online planning for large MDPs with MAXQ decomposition. In AAMAS, 2012.
da Silva, B. C., Konidaris, G. D., and Barto, A. G. Learning parameterized skills. In ICML, 2012.
Eaton, Eric and Ruvolo, Paul L. ELLA: An efficient lifelong learning algorithm. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pp. 507-515, 2013.
Fu, Justin, Levine, Sergey, and Abbeel, Pieter. One-shot learning of manipulation skills with online dynamics adaptation and neural network priors. arXiv preprint arXiv:1509.06841, 2015.
Hausknecht, Matthew and Stone, Peter. Deep reinforcement learning in parameterized action space. arXiv preprint arXiv:1511.04143, 2015.
Hauskrecht, Milos, Meuleau, Nicolas, et al. Hierarchical solution of Markov decision processes using macro-actions. In UAI, pp. 220-229, 1998.
Konidaris, George and Barto, Andrew G. Skill discovery in continuous reinforcement learning domains using skill chaining. In NIPS, 2009.
Mankowitz, Daniel J., Mann, Timothy A., and Mannor, Shie. Time regularized interrupting options. In International Conference on Machine Learning, 2014.
Mann, Timothy A. and Mannor, Shie. Scaling up approximate value iteration with options: Better policies with fewer iterations. In Proceedings of the 31st International Conference on Machine Learning, 2014.
Mann, Timothy Arthur, Mankowitz, Daniel J., and Mannor, Shie. Learning when to switch between skills in a high dimensional domain. In AAAI Workshop, 2015.
Masson, Warwick and Konidaris, George. Reinforcement learning with parameterized actions. arXiv preprint arXiv:1509.01644, 2015.
Peters, Jan and Schaal, Stefan. Policy gradient methods for robotics. In Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pp. 2219-2225. IEEE, 2006.
Peters, Jan and Schaal, Stefan. Reinforcement learning of motor skills with policy gradients. Neural Networks, 21:682-691, 2008.
Precup, Doina and Sutton, Richard S. Multi-time models for temporally abstract planning. In Advances in Neural Information Processing Systems 10 (Proceedings of NIPS'97), 1997.
Precup, Doina, Sutton, Richard S., and Singh, Satinder. Theoretical results on reinforcement learning with temporally abstract options. In Machine Learning: ECML-98, pp. 382-393. Springer, 1998.
Silver, David and Ciosek, Kamil. Compositional planning using optimal option models. In Proceedings of the 29th International Conference on Machine Learning, Edinburgh, 2012.
Sutton, Richard S., Precup, Doina, and Singh, Satinder. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 1999.
Sutton, Richard S., McAllester, David, Singh, Satinder, and Mansour, Yishay. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pp. 1057-1063, 2000.
Thrun, Sebastian and Mitchell, Tom M. Lifelong robot learning. Springer, 1995.
5,915 | 6,351 | Sample Complexity of Automated Mechanism Design
Maria-Florina Balcan, Tuomas Sandholm, Ellen Vitercik
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{ninamf,sandholm,vitercik}@cs.cmu.edu
Abstract
The design of revenue-maximizing combinatorial auctions, i.e. multi-item auctions
over bundles of goods, is one of the most fundamental problems in computational
economics, unsolved even for two bidders and two items for sale. In the traditional
economic models, it is assumed that the bidders' valuations are drawn from an
underlying distribution and that the auction designer has perfect knowledge of
this distribution. Despite this strong and oftentimes unrealistic assumption, it is
remarkable that the revenue-maximizing combinatorial auction remains unknown.
In recent years, automated mechanism design has emerged as one of the most practical and promising approaches to designing high-revenue combinatorial auctions.
The most scalable automated mechanism design algorithms take as input samples
from the bidders' valuation distribution and then search for a high-revenue auction
in a rich auction class. In this work, we provide the first sample complexity analysis
for the standard hierarchy of deterministic combinatorial auction classes used in
automated mechanism design. In particular, we provide tight sample complexity
bounds on the number of samples needed to guarantee that the empirical revenue
of the designed mechanism on the samples is close to its expected revenue on the
underlying, unknown distribution over bidder valuations, for each of the auction
classes in the hierarchy. In addition to helping set automated mechanism design on
firm foundations, our results also push the boundaries of learning theory. In particular, the hypothesis functions used in our contexts are defined through multi-stage
combinatorial optimization procedures, rather than simple decision boundaries, as
are common in machine learning.
1
Introduction
Multi-item, multi-bidder auctions have been studied extensively in economics, operations research,
and computer science. In a combinatorial auction (CA), the bidders may submit bids on bundles of
goods, rather than on individual items alone, and thereby they may fully express their complex valuation functions. Notably, these functions may be non-additive due to the presence of complementary
or substitutable goods for sale. There are many important and practical applications of CAs, ranging
from the US government's wireless spectrum license auctions to sourcing auctions, through which
companies coordinate the procurement and distribution of equipment, materials and supplies.
One of the most important and tantalizing open questions in computational economics is the design
of optimal auctions, that is, auctions that maximize the seller's expected revenue. In the standard economic model, it is assumed that the bidders' valuations are drawn from an underlying distribution and that
the mechanism designer has perfect information about this distribution. Astonishingly, even with this
strong assumption, the optimal CA design problem is unsolved even for auctions with just two distinct
items for sale and two bidders. A monumental advance in the study of optimal auction design was
the characterization of the optimal 1-item auction [Myerson, 1981]. However, the problem becomes
significantly more challenging with multiple items for sale. In particular, Conitzer and Sandholm
proved that the problem of finding a revenue-maximizing deterministic CA is NP-complete [Conitzer
and Sandholm, 2004]. We note here that it is well-known that randomization can increase revenue in
CAs, but we focus on deterministic CAs in this work because in many applications, randomization is
not palatable and very few, if any, randomized CAs are used in practice.
In recent years, a novel approach known as automated mechanism design (AMD) has been adopted
to attack the revenue-maximizing auction design problem [Conitzer and Sandholm, 2002, Sandholm,
2003]. In the most scalable strand of AMD, algorithms have been developed which take samples
from the bidders' valuation distributions as input, optimize over a rich class of auctions, and return an
auction which is high-performing over the sample [Likhodedov and Sandholm, 2004, 2005, Sandholm
and Likhodedov, 2015]. AMD algorithms have yielded deterministic mechanisms with the highest
known revenues in the contexts used for empirical evaluations [Sandholm and Likhodedov, 2015].
This approach relaxes the unrealistic assumption that the mechanism designer has perfect information
about the bidders' valuation distribution.
However, until now, there was no formal characterization of the number of samples required to
guarantee that the empirical revenue of the designed mechanism on the samples is close to its
expected revenue on the underlying, unknown distribution over bidder valuations. In this paper,
we provide that missing link. We present tight sample complexity guarantees over an extensive
hierarchy of expressive CA families. These are the most commonly used auction families in AMD.
The classes in the hierarchy are based on the classic VCG mechanism, which is a generalization of
the well-known second-price, or Vickrey, single-item auction. The auctions we consider achieve
significantly higher revenue than the VCG baseline by weighting bidders (multiplicatively increasing
all of their bids) and boosting outcomes (additively increasing the likelihood that a particular outcome
will be the result of the auction).
A major strength of our results is their applicability to any algorithm that determines the optimal
auction over the sample, a nearly optimal approximation, or any other black box procedure. Therefore,
they apply to any automated mechanism design algorithm, optimal or not. One of the key challenges
in deriving these general sample complexity bounds is that to do so, we must develop deep insights
into how changes to the auction parameters (the bidder weights and allocation boosts) affect the
outcome of the auction (who wins which items and how much each bidder pays) and thereby the
revenue of the auction. In our context, we show that the functions which determine the outcome of an
auction are highly complex, consisting of multi-stage optimization procedures.
Therefore, the function classes we consider are much more challenging than those commonly found
in machine learning contexts. Typically, for well-understood classes of functions used in machine
learning, such as linear separators or other smooth curves in Euclidean spaces, there is a simple
mapping from the parameters of a specific hypothesis to its prediction on a given example and a
close connection between the distance in the parameter space between two parameter vectors and the
distance in function space between their associated hypotheses. Roughly speaking, it is necessary to
understand this connection in order to determine how many significantly different hypotheses there
are over the full range of parameters. In our context, due to the inherent complexity of the classes we
consider, connecting the parameter space to the space of revenue functions requires a much more
delicate analysis. The key technical part of our work involves understanding this connection from a
learning theoretic perspective. For the more general classes in the hierarchy, we use Rademacher
complexity to derive our bounds, and for the auction classes with more combinatorial structure, we
exploit that structure to prove pseudo-dimension bounds. This work is both of practical importance
since we fill a fundamental gap in AMD, and of learning theoretical interest, as our sample complexity
analysis requires a deep understanding of the structure of the revenue function classes we consider.
Related Work. In prior research, the sample complexity of revenue maximization has been studied
primarily in the single-item or the more general single-dimensional settings [Elkind, 2007, Cole and
Roughgarden, 2014, Huang et al., 2015, Medina and Mohri, 2014, Morgenstern and Roughgarden,
2015, Roughgarden and Schrijvers, 2016, Devanur et al., 2016], as well as some multi-dimensional
settings which are reducible to the single-bidder setting [Morgenstern and Roughgarden, 2016]. In
contrast, the combinatorial settings that we study are much more complex since the revenue functions
consist of multi-stage optimization procedures that cannot be reduced to a single-bidder setting. The
complexity intrinsic to the multi-item setting is explored in [Dughmi et al., 2014], who show that
for a single unit-demand bidder, when the bidder's values for the items may be correlated, $\Omega(2^m)$
samples are required to determine a constant-factor approximation to the optimal auction.
Learning theory tools such as pseudo-dimension and Rademacher complexity were used to prove
strong guarantees in [Medina and Mohri, 2014, Morgenstern and Roughgarden, 2015, 2016], which
analyze piecewise linear revenue functions and show that few samples are needed to learn over the
revenue function classes in question. In a similar direction, bounds on the sample complexity of
welfare-optimal item pricings have been developed [Feldman et al., 2015, Hsu et al., 2016]. Earlier
work of Balcan et al. [2008] addressed sample complexity results for revenue maximization in
unrestricted supply settings. In that context, the revenue function decomposes additively among
bidders and does not apply to our combinatorial setting.
Despite the inherent complexity of designing high-revenue CAs, Morgenstern and Roughgarden
use linear separability as a tool to prove that certain simple classes of multi-parameter auctions
have small sample complexity. The auctions they study are sequential auctions with item and grand
bundle pricings, as well as second-price item auctions with item reserve prices [Morgenstern and
Roughgarden, 2016]. In the item pricing auctions, the bidders show up one at a time and the
seller offers each item that remains at some price. Each buyer then chooses the subset of goods
that maximizes her utility. In the grand bundle pricing auctions, the bidders are each offered the
grand bundle in some fixed order, and the first bidder to have a value greater than the price buys it.
They show that bounding the sample complexity of these sequential auctions can be reduced to the
single-buyer setting.
In contrast, the auctions we study are more versatile than item pricing auctions, as they give the
mechanism designer many more degrees of freedom than the number of items. This level of
expressiveness allows the designer to increase competition between bidders, much like Myerson's
optimal auction, and thus boost revenue. It is easy to construct examples where even simple
AMAs achieve significantly greater revenue than sequential auctions with item and grand bundle
prices. Moreover, even the simpler auction classes we consider pose a unique challenge because the
parameters defining the auctions influence the multi-stage allocation procedure and resulting revenue
in non-intuitive ways. This is unlike item and grand bundle pricing auctions, as well as second-price
item auctions, which are simple by design. Our function classes therefore require us to understand
the specific form of the weighted VCG payment rule and its interaction with the parameter space.
Thus, our context and techniques diverge from those in [Morgenstern and Roughgarden, 2016].
Finally, there is a wealth of work on characterizing the optimal CA for restricted settings and designing
mechanisms which achieve high, if not optimal revenue in specific contexts. Due to space constraints,
in Section A of the supplementary materials, we describe these results as well as what is known
theoretically about the classes in the hierarchy of deterministic CAs we study.
2 Preliminaries, notation, and the combinatorial auction hierarchy
In the following section, we explain the basic mechanism design problem, fix notation, and then
describe the hierarchy of combinatorial auction families we study.
Mechanism Design Preliminaries. We consider the problem of selling m heterogeneous goods to
$n$ bidders. This means that there are $2^m$ different bundles of goods, $B = \{b_1, \dots, b_{2^m}\}$. Each bidder $i \in [n]$ is associated with a set-wise valuation function over the bundles, $v_i : B \to \mathbb{R}$. We assume that the bidders' valuations are drawn from a distribution $D$.
Every auction is defined by an allocation function and a payment function. The allocation function
determines which bidders receive which items based on their bids, and the payment function determines
how much the bidders need to pay based on their bids and the allocation. It is up to the mechanism
designer to determine which allocation and payment functions should be used. In our context, the
two functions are fixed based on the samples from D before the bidders submit their bids.
Each auction family that we consider has a design based on the classic Vickrey-Clarke-Groves
mechanism (VCG). The VCG mechanism, which we describe below, is the canonical strategy-proof
mechanism, which means that every bidder's dominant strategy is to bid truthfully. In other words, for every Bidder $i$, no matter the bids made by the other bidders, Bidder $i$ maximizes her expected utility (her value for her allocation minus the price she pays) by bidding her true value. Therefore, we describe the VCG mechanism assuming that the bids equal the bidders' true valuations.
The VCG mechanism allocates the items such that the social welfare of the bidders, that is, the sum of each bidder's value for the items she wins, is maximized. Intuitively, each winning bidder then pays her bid minus a "rebate" equal to the increase in welfare attributable to her presence in the auction. This form of the payment function is crucial to ensuring that the auction is strategy-proof. More concretely, the allocation of the VCG mechanism is the disjoint set of subsets $(b^*_1, \dots, b^*_n) \subseteq B$ that maximizes $\sum v_i(b^*_i)$. Meanwhile, let $b^{-i}_1, \dots, b^{-i}_n$ be the disjoint set of subsets that maximizes $\sum_{j \neq i} v_j(b^{-i}_j)$. Then Bidder $i$ must pay
$$\sum_{j \neq i} v_j(b^{-i}_j) - \sum_{j \neq i} v_j(b^*_j) = v_i(b^*_i) - \Big[\sum_j v_j(b^*_j) - \sum_{j \neq i} v_j(b^{-i}_j)\Big].$$
In the special case where there is one item for sale, the
VCG mechanism is known as the second price, or Vickrey, auction, where the highest bidder wins
the item and pays the second highest bid. We note that every auction in the classes we study is
strategy-proof, so we may assume that the bids equal the bidders' valuations.
Notation. We study auctions with $n$ bidders and $m$ items. We refer to the bundle of all $m$ items as the grand bundle. In total, there are $(n+1)^m$ possible allocations, which we denote as the vectors $O = \{\vec{o}_1, \dots, \vec{o}_{(n+1)^m}\}$. Each allocation vector $\vec{o}_i$ can be written as $(o_{i,1}, \dots, o_{i,n})$, where $o_{i,j} = b_\ell \in B$ denotes the bundle of items allocated to Bidder $j$ in allocation $\vec{o}_i$. We use the notation $\vec{v}_1 = (v_1(b_1), \dots, v_1(b_{2^m}))$ and $\vec{v} = (\vec{v}_1, \dots, \vec{v}_n)$ to denote a vector of bidder valuation functions. We say that $\mathrm{rev}_A(\vec{v})$ is the revenue of an auction $A$ on the valuation vector $\vec{v}$. Denoting the payment of any one bidder under auction $A$ given valuation vector $\vec{v}$ as $p_{i,A}(\vec{v})$, we have $\mathrm{rev}_A(\vec{v}) = \sum_{i=1}^n p_{i,A}(\vec{v})$. Finally, $U$ is an upper bound on the revenue achievable for any auction over the support of the bidders' valuation distribution.
Auction Classes. We now give formal definitions of the CA families in the hierarchy we study. See
Figure 1 for the hierarchical organization of the auction classes, together with the papers which
introduced each family.
Affine maximizer auctions (AMAs). An AMA $A$ is defined by a set of weights per bidder $(w_1, \dots, w_n) \in \mathbb{R}_{>0}^n$ and boosts per allocation $\lambda(\vec{o}_1), \dots, \lambda(\vec{o}_{(n+1)^m}) \in \mathbb{R}$. An auction $A$ uniquely corresponds to a set of these parameters, so we write $A = \big(w_1, \dots, w_n, \lambda(\vec{o}_1), \dots, \lambda(\vec{o}_{(n+1)^m})\big)$. To simplify notation, we write $\lambda_i = \lambda(\vec{o}_i)$ interchangeably. These parameters allow the mechanism designer to multiplicatively boost any bidder's bids by their corresponding weight and to increase the likelihood that any one allocation is returned as the output of an auction. More concretely, the allocation $\vec{o}^*$ of an AMA $A$ is the one which maximizes the weighted social welfare, i.e. $\vec{o}^* = \mathrm{argmax}_{\vec{o}_i \in O}\big\{\sum_{j=1}^n w_j v_j(o_{i,j}) + \lambda(\vec{o}_i)\big\}$. The payment function of $A$ has the same form as the VCG payment rule, with the parameters factored in to ensure that the auction remains strategy-proof. In particular, for all $j \in [n]$, the payments are
$$p_{j,A}(\vec{v}) = \frac{1}{w_j}\Bigg[\sum_{\ell \neq j} w_\ell v_\ell(o^{-j}_\ell) + \lambda(\vec{o}^{-j}) - \sum_{\ell \neq j} w_\ell v_\ell(o^*_\ell) - \lambda(\vec{o}^*)\Bigg],$$
where $\vec{o}^{-j} = \mathrm{argmax}_{\vec{o}_i \in O}\big\{\sum_{\ell \neq j} w_\ell v_\ell(o_{i,\ell}) + \lambda(\vec{o}_i)\big\}$.
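Since every class in the hierarchy shares this weighted-VCG allocation and payment structure, a brute-force sketch for tiny instances may be helpful. It enumerates all $(n+1)^m$ allocations (an item may also remain unsold), computes the boosted weighted welfare, and applies the payment rule above. The dictionary-of-frozensets encoding of valuations and the callable boost `lam` are illustrative assumptions; the classic VCG mechanism is recovered with unit weights and a zero boost.

    import itertools

    def ama_outcome(vals, w, lam, n, m):
        """Brute-force AMA allocation and payments for tiny n, m.
        vals[j]: dict mapping frozenset bundles (including frozenset()) to bidder j's value.
        w: list of n bidder weights; lam: callable boost on a tuple of n bundles."""
        allocs = list(itertools.product(range(n + 1), repeat=m))  # item -> bidder (n = unsold)
        def bundles(a):
            return tuple(frozenset(i for i in range(m) if a[i] == j) for j in range(n))
        def welfare(a, skip=None):  # boosted weighted welfare, optionally without one bidder
            bs = bundles(a)
            return sum(w[j] * vals[j][bs[j]] for j in range(n) if j != skip) + lam(bs)
        best = max(allocs, key=welfare)
        bs = bundles(best)
        pays = []
        for j in range(n):
            welfare_without_j = max(welfare(a, skip=j) for a in allocs)
            pays.append((welfare_without_j - (welfare(best) - w[j] * vals[j][bs[j]])) / w[j])
        return bs, pays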
We assume that $H_w \le w_i \le \bar{H}_w$, $|\lambda_i| \le H_\lambda$, and $v_i(b_\ell) \le H_v$ for some $H_w, \bar{H}_w, H_\lambda, H_v \in \mathbb{R}_{\ge 0}$. It is typical to assume an upper bound (here, $H_v$) on the bidders' valuation for any bundle. This is related to the fact that an upper bound on a target function's range is always assumed in standard machine learning sample complexity bounds. Intuitively, generalizability depends on how much any one sample can skew the empirical average of a hypothesis, or in this case, auction. The bounds on the AMA parameters are closely related to the bound $H_v$ on the bidders' valuations. For example, it is a simple exercise to see that we need not search for a $\lambda$ value which is greater than $H_v$.
Virtual valuation combinatorial auctions (VVCAs). VVCAs are a subset of AMAs. The defining characteristic of a VVCA is that each $\lambda(\vec{o}_j)$ is split into $n$ terms such that $\lambda(\vec{o}_j) = \sum_{i=1}^n \lambda_i(\vec{o}_j)$, where $\lambda_i(\vec{o}_j) = c_{i,b}$ for all allocations $\vec{o}_j$ that give Bidder $i$ exactly bundle $b \in B$.
$\lambda$-auctions. $\lambda$-auctions are the subclass of AMAs where $w_i = 1$ for all $i \in [n]$.
Mixed bundling auctions (MBAs). The class of MBAs is parameterized by a constant $c \ge 0$ which can be seen as a discount for any bidder who receives the grand bundle. Formally, the $c$-MBA is the $\lambda$-auction with $\lambda(\vec{o}) = c$ if some bidder receives the grand bundle in allocation $\vec{o}$, and $\lambda(\vec{o}) = 0$ otherwise.
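In the conventions of the sketch above, the $c$-MBA boost is a one-liner (again an illustrative assumption, not code from the paper):

    def mba_boost(c, m):
        """Boost of the c-MBA: c if some bidder receives the grand bundle, else 0."""
        grand = frozenset(range(m))
        return lambda bundles: c if any(b == grand for b in bundles) else 0.0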
Mixed bundling auctions with reserve prices (MBARPs). MBARPs are identical to MBAs though
with reserve prices. In a single-item VCG auction (i.e. second price auction) with a reserve price, the
item is only sold if the highest bidder's bid exceeds the reserve price, and the winner must pay the
maximum of the second highest bid and the reserve price. We describe how this intuition generalizes
to MBAs in Section 3.
Generalization bounds. In order to derive sample complexity bounds which apply to any algorithm that determines the optimal auction over the sample, a nearly optimal approximation, or any other black-box procedure, we derive uniform convergence sample complexity bounds with respect to the auction classes we examine. Formally, we define the sample complexity of uniform convergence over an auction class $\mathcal{A}$ as follows.
Definition 1 (Sample complexity of uniform convergence over $\mathcal{A}$). We say that $N(\epsilon, \delta, \mathcal{A})$ is the sample complexity of uniform convergence over $\mathcal{A}$ if for any $\epsilon, \delta \in (0,1)$, if $S = \{\vec{v}^1, \dots, \vec{v}^N\}$ is a sample of size $N \ge N(\epsilon, \delta, \mathcal{A})$ drawn at random from $D$, then with probability at least $1 - \delta$, for all auctions $A \in \mathcal{A}$, $\big|\frac{1}{N}\sum_{i=1}^N \mathrm{rev}_A(\vec{v}^i) - \mathbb{E}_{\vec{v} \sim D}[\mathrm{rev}_A(\vec{v})]\big| \le \epsilon$.
3 Sample complexity bounds over the hierarchy of auction classes
In this section, we provide an overview of our sample complexity guarantees over the hierarchy of
auction classes we consider (Sections 3.1 and 3.2). We show that more structured classes require
drastically fewer samples to learn over. We conclude with a note about sample complexity guarantees
for algorithms that find an approximately optimal mechanism over a sample, as opposed to the
optimal mechanism. All omitted proofs are presented in full in the supplementary material.
3.1 The sample complexity of AMA, VVCA, and $\lambda$-auction revenue maximization
We begin by analyzing the most general families in the CA hierarchy (AMAs, VVCAs, and $\lambda$-auctions), proving a general upper bound and class-specific lower bounds.
Theorem 1. The sample complexity of uniform convergence over the classes of $n$-bidder, $m$-item AMAs, VVCAs, and $\lambda$-auctions is $N = \tilde{O}\big(U n^m (\sqrt{m}\, U + n^{m/2}) / \epsilon^2\big)$. Moreover, for $\lambda$-auctions, $N = \Omega(n^m)$, and for VVCAs, $N = \Omega(2^m)$.
We derive the upper bound by analyzing the Rademacher complexity of the class of $n$-bidder, $m$-item AMA revenue functions. For a family of functions $G$ and a finite sample $S = \{x_1, \dots, x_N\}$ of size $N$, the empirical Rademacher complexity is defined as $\hat{R}_S(G) = \mathbb{E}_\sigma\big[\sup_{g \in G} \frac{1}{N} \sum \sigma_i\, g(x_i)\big]$, where $\sigma = (\sigma_1, \dots, \sigma_N)$, with the $\sigma_i$'s independent uniform random variables taking values in $\{-1, 1\}$. The Rademacher complexity of $G$ is defined as $R_N(G) = \mathbb{E}_{S \sim D^N}[\hat{R}_S(G)]$.
The AMA revenue function, defined in Section 2, can be summarized as a multi-stage optimization
procedure: determine the weighted-optimal allocation and then compute the n different payments,
each of which requires a separate optimization procedure. Luckily, we are able to decompose the
revenue functions into small components, each of which is easier to analyze on its own, and then
combine our results to prove the following theorem about this class of revenue functions as a whole.
Theorem 2. Let $\mathcal{F}$ be the set of $n$-bidder, $m$-item AMA revenue functions $\mathrm{rev}_A$ such that $A = \big(w_1, \dots, w_n, \lambda_1, \dots, \lambda_{(n+1)^m}\big)$, $H_w \le |w_i| \le \bar{H}_w$, $|\lambda_i| \le H_\lambda$. Then
$$R_N(\mathcal{F}) = O\left(\frac{n^{m+2}(\bar{H}_w H_v + H_\lambda)\sqrt{m \log n}}{H_w \sqrt{N}} + \frac{n \tilde{H}_v (n \bar{H}_w + H_\lambda)}{H_w}\sqrt{\frac{n^m \log N}{N}}\right),$$
where $\tilde{H}_v = \max\{H_v, 1\}$.
Proof sketch. First, we describe how we split each revenue function into smaller, easier-to-analyze atoms, which together allow us to bound the Rademacher complexity of the class of AMA revenue functions. To this end, it is well known (e.g. [Mohri et al., 2012]) that if every function $f$ in a class $\mathcal{F}$ can be written as the summation of two functions $g$ and $h$ from classes $G$ and $H$, respectively, then $R_N(\mathcal{F}) \le R_N(G) + R_N(H)$. Therefore, we split each revenue function into $n+1$ components such that the sum of these components equals the revenue function.
Figure 1: The hierarchy of deterministic CA families; generality increases upward. Affine maximizer auctions [Roberts, 1979] sit at the top, with $\lambda$-auctions [Jehiel et al., 2007] and virtual valuation CAs [Likhodedov and Sandholm, 2004] as subclasses; below these lie mixed bundling auctions with reserve prices [Tang and Sandholm, 2012], and at the bottom mixed bundling auctions [Jehiel et al., 2007].
With this objective in mind, let $\vec{o}^*_A(\vec{v})$ be the outcome of the AMA $A$ on the bidding instance $\vec{v}$, i.e. $\vec{o}^*_A = \mathrm{argmax}_{\vec{o}_i \in O}\big\{\sum_{j=1}^n w_j v_j(o_{i,j}) + \lambda_i\big\}$, and let $\Gamma_{A,-j}(\vec{v})$ be the weighted social welfare of the welfare-maximizing outcome without Bidder $j$'s participation. In other words, $\Gamma_{A,-j}(\vec{v}) = \max_{\vec{o}_i \in O}\big\{\sum_{\ell \neq j} w_\ell v_\ell(o_{i,\ell}) + \lambda_i\big\}$. Then we can write
$$\mathrm{rev}_A(\vec{v}) = \sum_{j=1}^n \frac{1}{w_j} \Gamma_{A,-j}(\vec{v}) - \sum_{i=1}^{(n+1)^m} \Bigg[\sum_{j=1}^n \frac{1}{w_j} \Big(\sum_{\ell \neq j} w_\ell v_\ell(o_{i,\ell}) + \lambda_i\Big)\Bigg] \cdot \mathbb{1}_{\vec{o}_i = \vec{o}^*_A(\vec{v})}.$$
We can now split $\mathrm{rev}_A$ into $n+1$ simpler functions: $\mathrm{rev}_{A,j}(\vec{v}) = \frac{1}{w_j}\Gamma_{A,-j}(\vec{v})$ for $j \in [n]$ and
$$\mathrm{rev}_{A,n+1}(\vec{v}) = -\sum_{i=1}^{(n+1)^m} \Bigg[\sum_{j=1}^n \frac{1}{w_j}\Big(\sum_{\ell \neq j} w_\ell v_\ell(o_{i,\ell}) + \lambda_i\Big)\Bigg] \cdot \mathbb{1}_{\vec{o}_i = \vec{o}^*_A(\vec{v})},$$
so $\mathrm{rev}_A(\vec{v}) = \sum_{j=1}^{n+1} \mathrm{rev}_{A,j}(\vec{v})$. Intuitively, for $j \in [n]$, $\mathrm{rev}_{A,j}$ is a weighted version of what the social welfare would be if Bidder $j$ had not participated in the auction, whereas $\mathrm{rev}_{A,n+1}(\vec{v})$ measures the amount of revenue subtracted to ensure that the resulting auction is strategy-proof.
As is to be expected, bounding the Rademacher complexity of each smaller class of functions $L_j = \big\{\mathrm{rev}_{A,j} \mid (w_1, \dots, w_n, \lambda_1, \dots, \lambda_{(n+1)^m}),\ H_w \le |w_i| \le \bar{H}_w,\ |\lambda_i| \le H_\lambda\big\}$ for $j \in [n+1]$ is simpler than bounding the Rademacher complexity of the class of revenue functions itself, and if $\mathcal{F}$ is the set of all $n$-bidder, $m$-item AMA revenue functions, then $R_N(\mathcal{F}) \le \sum_{j=1}^{n+1} R_N(L_j)$. In Lemma 2 and Lemma 3 of Section B.1 in the supplementary materials, we obtain bounds on $R_N(L_j)$ for $j \in [n+1]$, which lead us to our bound on $R_N(\mathcal{F})$.
3.2 The sample complexity of MBA revenue maximization
Fortunately, these negative sample complexity results are not the end of the story. We do achieve polynomial sample complexity upper bounds for the important classes of mixed bundling auctions (MBAs) and mixed bundling auctions with reserve prices (MBARPs). We derive these sample complexity bounds by analyzing the pseudo-dimensions of these classes of auctions. In this section, we present our results in increasing complexity, beginning with the class of $n$-bidder, $m$-item MBAs, which we show has a pseudo-dimension of 2. We build on the proof of this result to show that the class of $n$-bidder, $m$-item MBARPs has a pseudo-dimension of $O(m^3 \log n)$.
We note that when we analyze the class of MBARPs, we assume additive reserve prices, rather than
bundle reserve prices. In other words, each item has its own reserve price, and the reserve price of a
bundle is the sum of its components' reserve prices, as opposed to each bundle having its own reserve
price. We have good reason to make this restriction; in Section C.1, we prove that an exponential
number of samples are required to learn over the class of MBARPs with bundle reserve prices.
Before we prove our sample complexity results, we fix some notation. For any $c$-MBA, let $\mathrm{rev}_c(\vec{v})$ be its revenue on $\vec{v}$, which is determined in exactly the same way as the general AMA revenue function with the $\lambda$ terms set as described in Section 2. Similarly, let $\mathrm{rev}_{\vec{v}^\ell}(c)$ be the revenue of the $c$-MBA on $\vec{v}^\ell$ as a function of $c$. We will use the following result regarding the structure of $\mathrm{rev}_c(\vec{v})$ in order to derive our pseudo-dimension results. The proof is in Section C of the supplementary materials.
Figure 2: Example of $\mathrm{rev}_{\vec{v}^\ell}(c)$.
Lemma 1. There exists $c^* \in [0, \infty)$ such that $\mathrm{rev}_{\vec{v}}(c)$ is non-decreasing on the interval $[0, c^*]$ and non-increasing on the interval $(c^*, \infty)$.
The form of $\mathrm{rev}_{\vec{v}}(c)$ as described in Lemma 1 is depicted in Figure 2. The full proof of the following pseudo-dimension bound can be found in Section C of the supplementary materials.
Theorem 3. The pseudo-dimension of the class of n-bidder, m-item MBAs is 2.
Proof sketch. First, we recall what we must show in order to prove that the pseudo-dimension of this class is 2 (for more on pseudo-dimension, see, for example, [Mohri et al., 2012]). The proof structure is similar to those involved in VC dimension derivations. To begin with, we must provide a set of two valuation vectors $S = \{\vec{v}^1, \vec{v}^2\}$ that can be shattered by the class of MBA revenue functions. This means that there exist two targets $z^1, z^2 \in \mathbb{R}$ with the property that for any $T \subseteq S$, there exists a $c_T \in C$ such that if $\vec{v}^i \in T$, then $\mathrm{rev}_{c_T}(\vec{v}^i) \le z^i$, and if $\vec{v}^i \notin T$, then $\mathrm{rev}_{c_T}(\vec{v}^i) > z^i$. In other words, $S$ can be labeled in every possible way by MBA revenue functions (whether or not $\mathrm{rev}_c(\vec{v}^j)$ is greater than its target $z^j$). We must also prove that no set of three valuation vectors is shatterable.
Our construction of the set $S = \{\vec{v}^1, \vec{v}^2\}$ that can be shattered by the set of MBAs can be found in the full proof of this theorem in Section C of the supplementary materials. We now show that no set of size $N \ge 3$ can be shattered by the class of MBAs. Fix one sample $\vec{v}^i \in S$ and consider $\mathrm{rev}_{\vec{v}^i}(c)$. From Lemma 1, we know that there exists $c^*_i \in [0, \infty)$ such that $\mathrm{rev}_{\vec{v}^i}(c)$ is non-decreasing on the interval $[0, c^*_i]$ and non-increasing on the interval $(c^*_i, \infty)$. Therefore, there exist two thresholds $t^1_i \in [0, c^*_i]$ and $t^2_i \in (c^*_i, \infty) \cup \{\infty\}$ such that $\mathrm{rev}_{\vec{v}^i}(c)$ is below its threshold for $c \in [0, t^1_i)$, above its threshold for $c \in (t^1_i, t^2_i)$, and below its threshold for $c \in (t^2_i, \infty)$. Now, merge these thresholds for all $N$ samples on the real line and consider the interval $(t^1, t^2)$ between two adjacent thresholds. The binary labeling of the samples in $S$ on this interval is fixed. In other words, for any sample $\vec{v}^j \in S$, $\mathrm{rev}_{\vec{v}^j}(c)$ is either at least $z^j$ or strictly less than $z^j$ for all $c \in (t^1, t^2)$. There are at most $2N+1$ intervals between adjacent thresholds, so at most $2N+1$ different binary labelings of $S$. Since we assumed $S$ is shatterable, it must be that $2^N \le 2N+1$, so $N \le 2$.
This result allows us to prove the following sample complexity guarantee.
Theorem 4. The sample complexity of uniform convergence over the class of $n$-bidder, $m$-item MBAs is $N = O\big((U/\epsilon)^2(\log(U/\epsilon) + \log(1/\delta))\big)$.
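As an illustration of how such a guarantee is used in automated mechanism design: reusing `ama_outcome` and `mba_boost` from the sketches above, empirical revenue maximization over the one-parameter MBA class on a sample reduces to a one-dimensional search. The finite grid `cs` below is an illustrative stand-in for an exact search over the threshold structure described in the proof sketch above.

    def best_c_on_sample(samples, cs, n, m):
        """Empirical revenue maximization over c-MBAs on a sample of valuation vectors.
        samples: list of `vals` structures as in ama_outcome; cs: candidate values of c."""
        def avg_rev(c):
            boost = mba_boost(c, m)
            total = sum(sum(ama_outcome(v, [1.0] * n, boost, n, m)[1]) for v in samples)
            return total / len(samples)
        return max(cs, key=avg_rev)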
Mixed bundling auctions with reserve prices (MBARPs). MBARPs are a variation on MBAs, with the addition of reserve prices. Reserve prices in the single-item case, as described in Section 2, can be generalized to the multi-item case as follows. We enlarge the set of agents to include the seller, who is now Bidder 0 and whose valuation for a set of items is the set's reserve price. Working in this expanded set of agents, the bidder weights are all 1 and the $\lambda$ terms are the same as in the standard MBA setup. Importantly, the seller makes no payments, no matter her allocation. More formally, given a vector of valuation functions $\vec{v}$, the MBARP allocation is $\vec{o}^* = \mathrm{argmax}_{\vec{o} \in O}\big\{\sum_{i=0}^n v_i(o_i) + \lambda(\vec{o})\big\}$. For each $i \in \{1, \dots, n\}$, Bidder $i$'s payment is
$$p_{A,i}(\vec{v}) = \sum_{j \in \{0,\dots,n\} \setminus \{i\}} v_j(o^{-i}_j) + \lambda(\vec{o}^{-i}) - \sum_{j \in \{0,\dots,n\} \setminus \{i\}} v_j(o^*_j) - \lambda(\vec{o}^*),$$
where
$$\vec{o}^{-i} = \mathrm{argmax}_{\vec{o} \in O} \sum_{j \in \{0,\dots,n\} \setminus \{i\}} v_j(o_j) + \lambda(\vec{o}).$$
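This seller-as-Bidder-0 construction drops straight into the earlier AMA sketch: give agent 0 additive reserve valuations, run the same allocation and payment rules over $n+1$ unweighted agents, and discard agent 0's slot. A sketch under those assumptions (with the grand-bundle discount restricted to the real bidders):

    import itertools

    def mbarp_outcome(vals, reserves, c, n, m):
        """MBARP with item-specific reserves, built on ama_outcome above.
        reserves[i]: reserve price of item i; the seller is agent 0 and pays nothing."""
        subsets = [frozenset(s) for r in range(m + 1)
                   for s in itertools.combinations(range(m), r)]
        seller = {S: sum(reserves[i] for i in S) for S in subsets}  # additive reserve values
        boost = mba_boost(c, m)
        lam = lambda bs: boost(bs[1:])  # the grand-bundle discount applies to real bidders only
        bundles, pays = ama_outcome([seller] + list(vals), [1.0] * (n + 1), lam, n + 1, m)
        return bundles[1:], pays[1:]    # drop the seller's slot; she makes no payment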
As mentioned, we restrict our attention to item-specific reserve prices. In this case, the reserve price of a bundle is the sum of the reserve prices of the items in the bundle. Each MBARP is therefore parameterized by $m+1$ values $(c, r_1, \dots, r_m)$, where $r_i$ is the reserve price for the $i$th good. For a fixed valuation function vector $\vec{v} = (v_1(b_1), \dots, v_1(b_{2^m}), \dots, v_n(b_1), \dots, v_n(b_{2^m}))$, we can analyze the MBARP revenue function on $\vec{v}$ as a mapping $\mathrm{rev}_{\vec{v}} : \mathbb{R}^{m+1} \to \mathbb{R}$, where $\mathrm{rev}_{\vec{v}}(c, r_1, \dots, r_m)$ is the revenue of the MBARP parameterized by $(c, r_1, \dots, r_m)$ on $\vec{v}$.
Theorem 5. The pseudo-dimension of the class of $n$-bidder, $m$-item MBARPs with item-specific reserve prices is $O(m^3 \log n)$.
Proof sketch. Let $S = \{\vec{v}^1, \dots, \vec{v}^N\}$ be a set of $n$-bidder valuation function samples of size $N$ that can be shattered by a set $C$ of $2^N$ MBARPs. This means that there exist $N$ targets $z^1, \dots, z^N$ such that each MBARP in $C$ induces a binary labeling of the samples $\vec{v}^j$ in $S$ (whether the revenue of the MBARP on $\vec{v}^j$ is greater than or less than $z^j$). Since $S$ is shatterable, we can thus label $S$ in every possible way using MBARPs in $C$.
This proof is similar to the proof of Theorem 3, where we split the real line into a set of intervals $\mathcal{I}$ such that for any $I \in \mathcal{I}$, the binary labeling of $S$ by the $c$-MBA revenue function was fixed for all $c \in I$. In the case of MBARPs, however, the domain is $\mathbb{R}^{m+1}$, so we cannot split the domain into intervals in the same way. Instead, we show that we can split the domain into cells such that the MBARP revenue function is a fixed linear function as we range over parameters in a single cell. In this way, we show that $N = O(m^3 \log n)$.
This is enough to prove the following guarantee.
Theorem 6. The sample complexity of uniform convergence over the class of $n$-bidder, $m$-item MBARPs with item-specific reserve prices is $N = O\big((U/\epsilon)^2\big(m^3 \log n \log(U/\epsilon) + \log(1/\delta)\big)\big)$.
3.3 Sample complexity bounds for approximation algorithms
It may not always be computationally feasible to solve for the best auction over $S$ for the given auction family. Rather, we may only be able to determine an auction $A$ whose average revenue over $S$ is within a $(1+\gamma)$ multiplicative factor of that of the revenue-maximizing auction over $S$ within the family. Nonetheless, in Theorem 11 of the supplementary materials, we prove that with slightly more samples, we can ensure that the expected revenue of $A$ is close to being within a $(1+\gamma)$ multiplicative factor of the expected revenue of the optimal auction within the family with respect to the real distribution $D$. We prove a similar bound for an additive factor approximation as well.
4
Conclusion
In this paper, we proved strong bounds on the sample complexity of uniform convergence for the
well-studied and standard auction families that constitute the hierarchy of deterministic combinatorial
auctions. We thereby answered a crucial question in the study of (automated) mechanism design:
how to relate the performance of the mechanisms in the search space over the input samples to
their expectation over the underlying (unknown) distribution. Specifically, for a fixed class of
auctions, we determine the sample complexity necessary to ensure that with high probability, for
any auction in that class, the average revenue over the sample is close to the expected revenue with
respect to the underlying, unknown distribution over bidders' valuations. Our bounds apply to any
algorithm that finds an optimal or approximately optimal auction over an input sample, and therefore
to any automated mechanism design algorithm. Moreover, our results and analyses are of interest
from a learning theoretic perspective because the function classes which make up the hierarchy of
deterministic combinatorial auctions diverge significantly from well-understood hypothesis classes
typically found in machine learning.
Acknowledgments. This work was supported in part by NSF grants CCF-1535967, CCF-1451177,
CCF-1422910, IIS-1618714, IIS-1617590, IIS-1320620, IIS-1546752, ARO award W911NF-161-0061, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, an NSF Graduate
Research Fellowship, and a Microsoft Research Women's Fellowship.
Learning brain regions via large-scale online
structured sparse dictionary-learning
Elvis Dohmatob, Arthur Mensch, Gael Varoquaux, Bertrand Thirion
[email protected]
Parietal Team, INRIA / CEA, Neurospin, Université Paris-Saclay, France
Abstract
We propose a multivariate online dictionary-learning method for obtaining decompositions of brain images with structured and sparse components (aka atoms).
Sparsity is to be understood in the usual sense: the dictionary atoms are constrained
to contain mostly zeros. This is imposed via an $\ell_1$-norm constraint. By "structured", we mean that the atoms are piece-wise smooth and compact, thus making up
blobs, as opposed to scattered patterns of activation. We propose to use a Sobolev
(Laplacian) penalty to impose this type of structure. Combining the two penalties,
we obtain decompositions that properly delineate brain structures from functional
images. This non-trivially extends the online dictionary-learning work of Mairal et
al. (2010), at the price of only a factor of 2 or 3 on the overall running time. Just
like the Mairal et al. (2010) reference method, the online nature of our proposed
algorithm allows it to scale to arbitrarily sized datasets. Preliminary experiments
on brain data show that our proposed method extracts structured and denoised
dictionaries that are more interpretable and better capture inter-subject variability in
small, medium, and large-scale regimes alike, compared to state-of-the-art models.
1 Introduction
In neuro-imaging, inter-subject variability is often handled as a statistical residual and discarded. Yet
there is evidence that it displays structure and contains important information. Univariate models are
ineffective both computationally and statistically due to the large number of voxels compared to the
number of subjects. Likewise, statistical analysis of weak effects on medical images often relies on
defining regions of interest (ROIs). For instance, pharmacology with Positron Emission Tomography
(PET) often studies metabolic processes in specific organ sub-parts that are defined from anatomy.
Population-level tests of tissue properties, such as diffusion, or simply their density, are performed on
ROIs adapted to the spatial impact of the pathology of interest. Also, in functional brain imaging,
e.g. functional magnetic resonance imaging (fMRI), ROIs must be adapted to the cognitive process
under study, and are often defined by the very activation elicited by a closely related process [18].
ROIs can boost statistical power by reducing multiple comparisons that plague image-based statistical
testing. If they are defined to match spatially the differences to detect, they can also improve the
signal-to-noise ratio by averaging related signals. However, the crux of the problem is how to define
these ROIs in a principled way. Indeed, standard approaches to region definition imply a segmentation
step. Segmenting structures in individual statistical maps, as in fMRI, typically yields meaningful
units, but is limited by the noise inherent to these maps. Relying on a different imaging modality hits
cross-modality correspondence problems.
Sketch of our contributions. In this manuscript, we propose to use the variability of the statistical
maps across the population to define regions. This idea is reminiscent of clustering approaches, that
have been employed to define spatial units for quantitative analysis of information as diverse as brain
fiber tracking, brain activity, brain structure, or even imaging-genetics. See [21, 14] and references
therein. The key idea is to group together features (voxels of an image, vertices on a mesh, fiber tracts)
based on the quantity of interest, to create regions (or fiber bundles) for statistical analysis. However,
unlike clustering that models each observation as an instance of a cluster, we use a model closer to
the signal, where each observation is a linear mixture of several signals. The model is closer to mode
finding, as in a principal component analysis (PCA), or an independent component analysis (ICA),
often used in brain imaging to extract functional units [5]. Yet, an important constraint is that the
modes should be sparse and spatially-localized. For this purpose, the problem can be reformulated as
a linear decomposition problem like ICA/PCA, with appropriate spatial and sparse penalties [25, 1].
We propose a multivariate online dictionary-learning method for obtaining decompositions with
structured and sparse components (aka atoms). Sparsity is to be understood in the usual sense: the
atoms contain mostly zeros. This is imposed via an $\ell_1$ penalty on the atoms. By "structured", we mean
that the atoms are piece-wise smooth and compact, thus making up blobs, as opposed to scattered
patterns of activation. We impose this type of structure via a Laplacian penalty on the dictionary atoms.
Combining the two penalties, we therefore obtain decompositions that are closer to known functional
organization of the brain. This non-trivially extends the online dictionary-learning work [16], with
only a factor of 2 or 3 on the running time. By means of experiments on a large public dataset, we
show the improvements brought by the spatial regularization with respect to traditional `1 -regularized
dictionary learning. We also provide a concise study of the impact of hyper-parameter selection on
this problem and describe the optimality regime, based on relevant criteria (reproducibility, captured
variability, explanatory power in prediction problems).
2 Smooth Sparse Online Dictionary-Learning (Smooth-SODL)
Consider a stack $X \in \mathbb{R}^{n \times p}$ of $n$ subject-level brain images $X_1, X_2, \ldots, X_n$, each of shape $n_1 \times n_2 \times n_3$, seen as $p$-dimensional row vectors, with $p = n_1 \times n_2 \times n_3$ the number of voxels. These
could be images of fMRI activity patterns like statistical parametric maps of brain activation, raw
pre-registered (into a common coordinate space) fMRI time-series, PET images, etc. We would
like to decompose these images as a mixture of $k \ll \min(n, p)$ component maps (aka latent factors
or dictionary atoms) $V_1, \ldots, V_k \in \mathbb{R}^{p \times 1}$ and modulation coefficients $U_1, \ldots, U_n \in \mathbb{R}^{k \times 1}$ called
codes (one $k$-dimensional code per sample point), i.e.
$$X_i \approx V U_i, \quad \text{for } i = 1, 2, \ldots, n, \quad (1)$$
where $V := [V_1 | \ldots | V_k] \in \mathbb{R}^{p \times k}$ is an unknown dictionary to be estimated. Typically, $p \sim 10^5$-$10^6$
(in full-brain high-resolution fMRI) and $n \sim 10^2$-$10^5$ (for example, when considering all the 500
subjects and all the functional tasks of the Human Connectome Project dataset [20]). Our
work handles the extreme case where both $n$ and $p$ are large (massive-data setting). It is reasonable
then to only consider under-complete dictionaries: $k \ll \min(n, p)$. Typically, we use $k \sim 50$ or $100$
components. It should be noted that online optimization is not only crucial in the case where $n/p$ is
big; it is relevant whenever $n$ is large, leading to prohibitive memory issues irrespective of how big or
small $p$ is.
As explained in Section 1, we want the component maps (aka dictionary atoms) $V_j$ to be sparse and
spatially smooth. A principled way to achieve such a goal is to impose a boundedness constraint on
$\ell_1$-like norms of these maps to achieve sparsity, and simultaneously to impose smoothness by penalizing
their Laplacian. Thus, we propose the following penalized dictionary-learning model:
$$\min_{V \in \mathbb{R}^{p \times k}} \ \lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \left( \min_{U_i \in \mathbb{R}^{k}} \frac{1}{2} \| X_i - V U_i \|_2^2 + \frac{1}{2} \alpha \| U_i \|_2^2 \right) + \gamma \sum_{j=1}^{k} \Omega_{\mathrm{Lap}}(V_j), \quad (2)$$
$$\text{subject to } V_1, \ldots, V_k \in \mathcal{C}.$$
The ingredients in the model can be broken down as follows:
• Each of the terms $\min_{U_i \in \mathbb{R}^k} \frac{1}{2} \| X_i - V U_i \|_2^2$ measures how well the current dictionary $V$
explains data $X_i$ from subject $i$. The Ridge penalty term $\Omega(U_i) \equiv \frac{1}{2} \alpha \| U_i \|_2^2$ on the codes
amounts to assuming that the energy of the decomposition is spread across the different
samples. In the context of a specific neuro-imaging problem, if there are good grounds to
assume that each sample / subject should be sparsely encoded across only a few atoms of
the dictionary, then we can use the $\ell_1$ penalty $\Omega(U_i) := \alpha \| U_i \|_1$ as in [16]. We note that in
contrast to the $\ell_1$ penalty, the Ridge leads to stable codes. The parameter $\alpha > 0$ controls the
amount of penalization on the codes.
• The constraint set $\mathcal{C}$ is a sparsity-inducing, compact, simple (mainly in the sense that the
Euclidean projection onto $\mathcal{C}$ should be easy to compute) convex subset of $\mathbb{R}^p$, like an $\ell_1$-ball
$B_{p,\ell_1}(\tau)$ or a simplex $S_p(\tau)$, defined respectively as
$$B_{p,\ell_1}(\tau) := \{ v \in \mathbb{R}^p \ \text{s.t.} \ |v_1| + \ldots + |v_p| \le \tau \}, \ \text{ and } \ S_p(\tau) := B_{p,\ell_1}(\tau) \cap \mathbb{R}_+^p. \quad (3)$$
Other choices (e.g. ElasticNet ball) are of course possible. The radius parameter $\tau > 0$
controls the amount of sparsity: smaller values lead to sparser atoms.
• Finally, $\Omega_{\mathrm{Lap}}$ is the 3D Laplacian regularization functional defined by
$$\Omega_{\mathrm{Lap}}(v) := \frac{1}{2} \sum_{k=1}^{p} \left( (\nabla_x v)_k^2 + (\nabla_y v)_k^2 + (\nabla_z v)_k^2 \right) = \frac{1}{2} v^T \Delta v \ \ge 0, \ \forall v \in \mathbb{R}^p, \quad (4)$$
$\nabla_x$ being the discrete spatial gradient operator along the x-axis (a $p$-by-$p$ matrix), $\nabla_y$ along
the y-axis, etc., and $\Delta := \nabla^T \nabla$ is the $p$-by-$p$ matrix representing the discrete Laplacian
operator. This penalty is meant to impose blobs; a small computational sketch follows this list. The regularization parameter $\gamma \ge 0$ controls
how much regularization we impose on the atoms, compared to the reconstruction error.
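As a small illustration of eq. (4), here is a minimal sketch computing the penalty for one atom via forward differences; this is one possible discretization of the gradient operators (boundary-handling conventions vary), assumed here for simplicity.

```python
import numpy as np

def omega_lap(v, shape):
    """Laplacian penalty of eq. (4) for an atom v, computed by reshaping v
    to its 3D image grid and summing squared forward differences along
    each spatial axis."""
    img = v.reshape(shape)
    total = 0.0
    for axis in range(img.ndim):
        total += (np.diff(img, axis=axis) ** 2).sum()
    return 0.5 * total
```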
The above formulation, which we dub Smooth Sparse Online Dictionary-Learning (Smooth-SODL),
is inspired by, and generalizes, the standard online dictionary-learning framework of [16] (henceforth
referred to as Sparse Online Dictionary-Learning, or SODL), which corresponds to the special case
$\gamma = 0$.
3 Estimating the model
3.1 Algorithms
The objective function in problem (2) is separately convex and block-separable w.r.t. each of $U$ and $V$,
but is not jointly convex in $(U, V)$. Also, it is continuously differentiable on the constraint set, which
is compact and convex. Thus, by classical results (e.g. Bertsekas [6]), the problem can be solved via
Block-Coordinate Descent (BCD) [16]. Reasoning along the lines of [15], we derive that the BCD
iterates are as given in Alg. 1, in which, for each incoming sample point $X_t$, the loading vector $U_t$ is
computed by solving a ridge-regression problem (5) with the current dictionary $V_t$ held fixed, and
the dictionary atoms are then updated sequentially via Alg. 2. A crucial advantage of using a BCD
scheme is that it is parameter-free: there is no step size to tune. The resulting algorithm, Alg. 1, is
adapted from [16]. It relies on Alg. 2 for performing the structured dictionary updates, the details of
which are discussed below.
Algorithm 1 Online algorithm for the dictionary-learning problem (2)
Require: Regularization parameters $\alpha, \gamma > 0$; initial dictionary $V \in \mathbb{R}^{p \times k}$; number of passes / iterations $T$ on the data.
1: $A_0 \leftarrow 0 \in \mathbb{R}^{k \times k}$, $B_0 \leftarrow 0 \in \mathbb{R}^{p \times k}$ (historical "sufficient statistics")
2: for $t = 1$ to $T$ do
3:   Empirically draw a sample point $X_t$ at random.
4:   Code update: ridge regression (via SVD of current dictionary $V$)
     $$U_t \leftarrow \operatorname*{argmin}_{u \in \mathbb{R}^k} \frac{1}{2} \| X_t - V u \|_2^2 + \frac{1}{2} \alpha \| u \|_2^2. \quad (5)$$
5:   Rank-1 updates: $A_t \leftarrow A_{t-1} + U_t U_t^T$, $B_t \leftarrow B_{t-1} + X_t U_t^T$
6:   BCD dictionary update: compute the update for dictionary $V$ using Alg. 2.
7: end for
Update of the codes: ridge coding. The ridge sub-problem for updating the codes,
$$U_t = (V^T V + \alpha I)^{-1} V^T X_t, \quad (6)$$
is computed via an SVD of the current dictionary $V$. For $\alpha \to 0$, $U_t$ reduces to the orthogonal
projection of $X_t$ onto the image of the current dictionary $V$. As in [16], we speed up the overall
algorithm by sampling mini-batches of $\eta$ samples $X_t, \ldots, X_{t+\eta-1}$ and computing the corresponding codes
$U_1, U_2, \ldots, U_\eta$ at once. We typically use mini-batches of size $\eta = 20$. A small sketch follows.
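The following is a minimal sketch of the code update (6) for a mini-batch, using an SVD of $V$ as Alg. 1 suggests; the matrix orientation (one sample per column of `X_batch`) is our convention, not the paper's.

```python
import numpy as np

def ridge_codes(X_batch, V, alpha):
    """Solve U = (V^T V + alpha I)^{-1} V^T X for a mini-batch X_batch
    (shape p x batch) via the SVD V = A diag(s) B^T, so that
    (V^T V + alpha I)^{-1} V^T = B diag(s / (s^2 + alpha)) A^T."""
    A, s, Bt = np.linalg.svd(V, full_matrices=False)   # A: p x k, Bt: k x k
    filt = s / (s ** 2 + alpha)                        # spectral ridge filter
    return Bt.T @ (filt[:, None] * (A.T @ X_batch))    # k x batch: one code per column
```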
BCD dictionary update for the dictionary atoms. Let us define the time-varying matrices $A_t := \sum_{i=1}^{t} U_i U_i^T \in \mathbb{R}^{k \times k}$ and $B_t := \sum_{i=1}^{t} X_i U_i^T \in \mathbb{R}^{p \times k}$, where $t = 1, 2, \ldots$ denotes time. We fix
the matrix of codes $U$, and for each $j$, consider the update of the $j$th dictionary atom, with all the
other atoms $V_{k \ne j}$ kept fixed. The update for the atom $V_j$ can then be written as
$$V^j = \operatorname*{argmin}_{v \in \mathcal{C},\, V = [V_1|\ldots|v|\ldots|V_k]} \ \sum_{i=1}^{t} \frac{1}{2} \| X_i - V U_i \|_2^2 + \gamma t \, \Omega_{\mathrm{Lap}}(v) \quad (7)$$
$$= \operatorname*{argmin}_{v \in \mathcal{C}} \ F_{\gamma (A_t[j,j]/t)^{-1}}\!\left(v, \ V_j + A_t[j,j]^{-1} \left(B_t^j - V A_t^j\right)\right)$$
(refer to [16] for the details), where $F_{\tilde{\gamma}}(v, a) \equiv \frac{1}{2} \| v - a \|_2^2 + \tilde{\gamma} \, \Omega_{\mathrm{Lap}}(v) = \frac{1}{2} \| v - a \|_2^2 + \frac{1}{2} \tilde{\gamma} \, v^T \Delta v$.
Algorithm 2 BCD dictionary update with Laplacian prior
Require: $V = [V_1 | \ldots | V_k] \in \mathbb{R}^{p \times k}$ (input dictionary); $A_t = [A_t^1 | \ldots | A_t^k] \in \mathbb{R}^{k \times k}$, $B_t = [B_t^1 | \ldots | B_t^k] \in \mathbb{R}^{p \times k}$ (history)
1: while stopping criteria not met, do
2:   for $j = 1$ to $k$ do
3:     Fix the codes $U$ and all atoms $V_l$, $l \ne j$, of the dictionary $V$, and then update $V_j$ as follows:
       $$V_j \leftarrow \operatorname*{argmin}_{v \in \mathcal{C}} \ F_{\gamma (A_t[j,j]/t)^{-1}}\!\left(v, \ V_j + A_t[j,j]^{-1} \left(B_t^j - V A_t^j\right)\right) \quad (8)$$
       (see below for details on the derivation and the resolution of this problem)
4:   end for
5: end while
Problem (7) is the compactly-constrained minimization of the 1-strongly-convex quadratic function
$F_{\tilde{\gamma}}(\cdot, a) : \mathbb{R}^p \to \mathbb{R}$ defined above. This problem can further be identified with a denoising instance
(i.e. one in which the design matrix / deconvolution operator is the identity operator) of the GraphNet
model [11, 13]. Fast first-order methods like FISTA [4], with optimal iteration complexity $O(\sqrt{L/\epsilon})$, are available
(for example, see [8, 24], implemented as part of the Nilearn open-source Python library [2])
for solving such problems to arbitrary precision $\epsilon > 0$. One computes the Lipschitz constant to be
$L_{F_{\tilde{\gamma}}(\cdot, a)} \le 1 + \tilde{\gamma} L_{\Omega_{\mathrm{Lap}}} = 1 + 4 D \tilde{\gamma}$, where, as before, $D$ is the number of spatial dimensions ($D = 3$
for volumic images). One should also mention that under certain circumstances, it is possible to
perform the dictionary updates in the Fourier domain, via FFT. This alternative approach is detailed
in the supplementary materials. A minimal FISTA sketch for the atom update follows.
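Here is a minimal projected-FISTA sketch for problem (7), assuming $\Delta$ is available as a (sparse) matrix and a Euclidean projection onto $\mathcal{C}$ is supplied, e.g. the $\ell_1$-ball projection sketched further below; the identity default for `proj` is only a placeholder, and the iteration count is arbitrary.

```python
import numpy as np

def fista_atom_update(a, laplacian, gamma_t, proj=None, n_iter=10, n_dims=3):
    """Minimize F(v) = 0.5*||v - a||^2 + 0.5*gamma_t * v^T Delta v over
    v in C by projected FISTA; `laplacian` is the p-by-p matrix Delta
    (e.g. a scipy.sparse matrix), `proj` the projection onto C."""
    if proj is None:
        proj = lambda v: v                 # placeholder: substitute e.g. the l1-ball projection
    L = 1.0 + 4.0 * n_dims * gamma_t       # Lipschitz bound from the text: 1 + 4*D*gamma
    v = a.copy()
    y, t = v.copy(), 1.0
    for _ in range(n_iter):
        grad = (y - a) + gamma_t * (laplacian @ y)
        v_next = proj(y - grad / L)        # projected gradient step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = v_next + ((t - 1.0) / t_next) * (v_next - v)   # momentum step
        v, t = v_next, t_next
    return v
```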
Finally, one notes that, since the constraints in problem (2) are separable in the dictionary atoms $V_j$,
the BCD dictionary-update algorithm Alg. 2 is guaranteed to converge to a global optimum at each
iteration [6, 16].
How difficult is the dictionary update for our proposed model? A favorable property of
vanilla dictionary-learning [16] is that the BCD dictionary updates amount to Euclidean projections
onto the constraint set $\mathcal{C}$, which can be easily computed for a variety of choices (simplexes, closed
convex balls, etc.). One may then ask: do we retain a comparable algorithmic simplicity even with the
additional Laplacian terms $\Omega_{\mathrm{Lap}}(V_j)$? Yes: empirically, we found that 1 or 2 iterations of FISTA
[4] are sufficient to reach an accuracy of $10^{-6}$ in problem (7), which is sufficient to obtain a good
decomposition in the overall algorithm.
However, choosing $\gamma$ "too large" will provably cause the dictionary updates to eventually take forever
to run. Indeed, the Lipschitz constant in problem (7) is $L_t = 1 + 4 D \gamma (A_t[j,j]/t)^{-1}$, which will
blow up (leading to arbitrarily small step-sizes) unless $\gamma$ is chosen so that
$$\gamma = \gamma_t = O\!\left(\max_{1 \le j \le k} A_t[j,j]/t\right) = O\!\left(\max_{1 \le j \le k} \sum_{i=1}^{t} U_i[j]^2 / t\right) = O\!\left(\| A_t \|_{\infty,\infty} / t\right). \quad (9)$$
Finally, the Euclidean projection onto the $\ell_1$ ball $\mathcal{C}$ can be computed exactly in linear time $O(p)$ (see
for example [7, 9]); a sketch of the classical sort-based variant follows. The dictionary atoms $j$ are repeatedly cycled through and problem (7) solved. All in all,
in practice we observe that a single iteration is sufficient for the dictionary update sub-routine in Alg.
2 to converge to a qualitatively good dictionary.
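For completeness, here is the classical sort-based $O(p \log p)$ projection onto the $\ell_1$-ball; the exact linear-time $O(p)$ algorithms of [7, 9] refine the same soft-thresholding idea with a smarter pivot search.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto {x : ||x||_1 <= tau}."""
    if np.abs(v).sum() <= tau:
        return v.copy()                            # already feasible
    u = np.sort(np.abs(v))[::-1]                   # sorted magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u[rho] > (css[rho] - tau) / (rho + 1)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - tau))[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)         # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```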
Convergence of the overall algorithm. Convergence of our algorithm (to a local optimum) is
guaranteed since all the hypotheses of [16] are satisfied. For example, assumption (A) is satisfied because
fMRI data are naturally compactly supported. Assumption (C) is satisfied since the ridge-regression
problem (5) has a unique solution. More details are provided in the supplementary materials.
3.2 Practical considerations
Hyper-parameter tuning. Parameter selection in dictionary-learning is known to
be a difficult, unsolved problem [16, 15], and our proposed model (2) is not an exception
to this rule. We did an extensive study of how the quality of the estimated dictionary varies with the
model hyper-parameters $(\alpha, \gamma, \tau)$. The experimental setup is described in Section 5.
The results are presented in Fig. 1. We make the following observations. Taking the sparsity
parameter $\tau$ in (2) too large leads to dense atoms that perfectly explain the data but are not
very interpretable. Taking it too small leads to overly sparse maps that barely explain the data.
The normalized sparsity metric (small is better, ceteris paribus) is defined as the mean ratio
$\| V_j \|_1 / \| V_j \|_2$ over the dictionary atoms; a small sketch of this metric follows.
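A small sketch of this metric, assuming atoms are the columns of $V$; skipping zero atoms to avoid division by zero is our convention.

```python
import numpy as np

def normalized_sparsity(V):
    """Mean ||V_j||_1 / ||V_j||_2 over dictionary atoms (columns of V);
    smaller values indicate sparser atoms."""
    norms1 = np.abs(V).sum(axis=0)
    norms2 = np.linalg.norm(V, axis=0)
    keep = norms2 > 0                        # skip all-zero atoms
    return float((norms1[keep] / norms2[keep]).mean())
```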
Figure 1: Influence of model parameters. In the experiments, $\alpha$ was chosen according to (10). Left: percentage explained variance of the decomposition, measured on a left-out data split. Right: average normalized sparsity of the dictionary atoms.
Concerning the $\alpha$ parameter, inspired by [26], we have found the following time-varying, data-adaptive
choice to work very well in practice:
$$\alpha = \alpha_t \propto t^{-1/2}. \quad (10)$$
Likewise, care must be taken in selecting the Laplacian regularization parameter $\gamma$. Indeed, taking it
too small amounts to running the vanilla dictionary-learning model [16]. Taking it too large can lead to
degenerate maps, as the spatial regularization then dominates the reconstruction error (data fidelity)
term. We find that there is a safe range of the parameter pair $(\gamma, \tau)$ in which a good compromise
between the sparsity of the dictionary (and thus its interpretability) and its power to explain the data
can be reached. See Fig. 1. K-fold cross-validation with the explained-variance metric was retained as a
good strategy for setting the Laplacian regularization parameter $\gamma$ and the sparsity parameter $\tau$.
Initialization of the dictionary. Problem (2) is non-convex jointly in (U, V), and so initialization
might be a crucial issue. However, in our experiments, we have observed that even randomly initialized
dictionaries eventually produce sensible results that do not jitter much across different runs of the
same experiment.
4 Related works
While there exist algorithms for online sparse dictionary-learning that are very efficient in large-scale
settings (for example [16], or more recently [17]) imposing spatial structure introduces couplings
in the corresponding optimization problem [8]. So far, spatially-structured decompositions have
been solved by very slow alternated optimization [25, 1]. Notably, structured priors such as TV-$\ell_1$
minimization [3] were used by [1] to extract data-driven state-of-the-art atlases of brain function.
However, alternated minimization is very slow, and large-scale medical imaging has shifted to online
solvers for dictionary-learning like [16] and [17]. These do not readily integrate structured penalties.
As a result, the use of structured decompositions has been limited so far, by the computational cost of
the resulting algorithms. Our approach instead uses a Laplacian penalty to impose spatial structure at
5
a very minor cost and adapts the online-learning dictionary-learning framework [16], resulting in a
fast and scalable structured decomposition. Second, the approach in [1] though very novel, is mostly
heuristic. In contrast, our method enjoys the same convergence guarantees and comparable numerical
complexity as the basic unstructured online dictionary-learning [16].
Finally, one should also mention [23], which introduced an online group-level functional brain mapping
strategy for differentiating regions reflecting the variety of brain network configurations observed in
the population, by learning a sparse representation of these in the spirit of [16].
5 Experiments
Setup. Our experiments were done on task fMRI data from 500 subjects of the HCP (Human
Connectome Project) dataset [20]. These task fMRI data were acquired in an attempt to assess
major domains that are thought to sample the diversity of neural systems of interest in functional
connectomics. We studied the activation maps related to a task that involves language (story understanding) and mathematics (mental computation). This particular task is expected to outline number,
attentional, and language networks, but the variability modes observed in the population cover even
wider cognitive systems. For the experiments, mass-univariate General Linear Models (GLMs) [10]
for $n = 500$ subjects were estimated for the Math vs. Story contrast (language protocol), and the
corresponding full-brain Z-score maps, each containing $p = 2.6 \times 10^5$ voxels, were used as the input
data $X \in \mathbb{R}^{n \times p}$; we sought a decomposition into a dictionary of $k = 40$ atoms (components).
The input data $X$ were shuffled and then split into two groups of the same size.
Models compared and metrics. We compared our proposed Smooth-SODL model (2) against
both Canonical ICA (CanICA) [22], a single-batch multi-subject PCA/ICA-based method, and
the standard SODL (sparse online dictionary-learning) [16]. While the CanICA model accounts for
subject-to-subject differences, one of its major limitations is that it does not model spatial variability
across subjects. Thus we estimated the CanICA components on smoothed data (isotropic FWHM of
6 mm), a necessary preprocessing step for such methods. In contrast, we did not perform pre-smoothing
for the SODL or Smooth-SODL models. The different models were compared across a variety of
qualitative and quantitative metrics: visual quality of the dictionaries obtained, explained variance,
stability of the dictionary atoms, their reproducibility, and performance of the dictionaries in predicting
behavioral scores (IQ, picture vocabulary, reading proficiency, etc.) shipped with the HCP data [20].
For both SODL [16] and our proposed Smooth-SODL model, the constraint set for the dictionary
atoms was taken to be a simplex $\mathcal{C} := S_p(\tau)$ (see Section 2 for the definition). The results of these
experiments are presented in Fig. 2 and Tab. 1.
6 Results
Running time. On the computational side, the vanilla dictionary-learning SODL algorithm [16]
with a batch size of $\eta = 20$ took about 110 s ($\approx 1.7$ minutes) to run, whilst with the same batch size,
our proposed Smooth-SODL model (2), implemented in Alg. 1, took 340 s ($\approx 5.6$ minutes), which
is slightly less than 3 times slower than SODL. Finally, CanICA [22] took 530 s ($\approx 8.8$ minutes) for this experiment, which is about 5 times slower
than the SODL model and 1.6 times slower than our proposed Smooth-SODL model (2). All experiments were run on a single CPU of a laptop.
Qualitative assessment of dictionaries. As can be seen in Fig. 2(a), all methods recover dictionary
atoms that represent known functional brain organization; notably, the dictionaries all contain the
well-known executive control and attention networks, at least in part. Vanilla dictionary-learning
leverages the denoising properties of the $\ell_1$ sparsity constraint, but the voxel clusters are not very
structured. For example, most blobs are surrounded with a thick ring of very small nonzero values. In
contrast, our proposed regularization model leverages both sparse and structured dictionary atoms,
which are more spatially structured and less noisy.
In contrast to both SODL and Smooth-SODL, CanICA [22] is an ICA-based method that enforces no
notion of sparsity whatsoever. The results are therefore dense and noisy dictionary atoms that explain
the data very well (Fig. 2(b)) but which are completely uninterpretable. In a futile attempt to remedy
the situation, in practice such PCA/ICA-based methods (including FSL's MELODIC tool [19]) are
hard-thresholded in order to see information. For CanICA, the hard-thresholded version has been
(a) Qualitative comparison of the estimated dictionaries. Each column represents an atom of the estimated
dictionary, where atoms from the different models (the rows of the plots) have been matched via a Hungarian
algorithm. Here, we only show a limited number of the most "interpretable" atoms. Notice how the major
structures in each atom are reproducible across the different models. Maps corresponding to hard-thresholded
CanICA [22] components have also been included, and have been called tCanICA. In contrast, the maps from
SODL [16] and our proposed Smooth-SODL (2) have not been thresholded.
(b) Mean explained variance of the different models on both training data and test (left-out) data. N.B.: Bold bars represent performance on the test set while faint bars in the background represent performance on the train set.
(c) Predicting behavioral variables of the HCP [20] dataset using subject-level Z-maps. N.B.: Bold bars represent performance on the test set while faint bars in the background represent performance on the train set.
Figure 2: Main results. Benchmarking our proposed Smooth-SODL (2) model against competing
state-of-the-art methods like SODL (sparse online dictionary-learning) [16] and CanICA [22].
named tCanICA in Fig. 2. That notwithstanding, notice how the major structures (parietal lobes,
sulci, etc.) in each atom are reproducible across the different models.
Stability-fidelity trade-offs. PCA/ICA-based methods like CanICA [22] and MELODIC [19] are
the optimal linear decomposition methods for maximizing explained variance on a dataset. On the training
set, CanICA [22] outperforms all other algorithms, with about 66% (resp. 50% for SODL [16]
and 58% for Smooth-SODL) of explained variance, and 60% (resp. 49% for
SODL and 55% for Smooth-SODL) on left-out (test) data. See Fig. 2(b). However, as noted in the
paragraph above, such methods lead to dictionaries that are hardly interpretable, and thus the user
must resort to some kind of post-processing hard-thresholding step, which destroys the estimated
model. Moreover, assessing the stability of the dictionaries, measured by the mean correlation between
corresponding atoms across different splits of the data, CanICA [22] scores a meager 0.1, whilst its
hard-thresholded version tCanICA obtains 0.2, compared to 0.4 for Smooth-SODL and 0.1 for SODL.
Is spatial regularization really needed? As rightly pointed out by one of the reviewers, one does
not need spatial regularization if data are abundant (as in the HCP). So we computed learning curves
of mean explained variance (EV) on test data, as a function of the amount of training data seen by
both Smooth-SODL and SODL [16] (Table 1). At the beginning of the curve, our proposed spatially
regularized Smooth-SODL model starts off with more than 31% explained variance (computed on
241 subjects) after having pooled only 17 subjects. In contrast, the vanilla SODL model [16] scores
a meager 2% explained variance; this corresponds to a 14-fold gain of Smooth-SODL over SODL. As
more and more data are pooled, both models explain more variance, the gap between Smooth-SODL
and SODL shrinks, and both models perform comparably asymptotically.
Nb. subjects pooled          17      92      167     241
mean EV for vanilla SODL     2%      37%     47%     49%
Smooth-SODL (2)              31%     50%     54%     55%
gain factor                  13.8    1.35    1.15    1.11
Table 1: Learning curve for the boost in explained variance of our proposed Smooth-SODL model over
the reference SODL model. Note the reduction in the explained-variance gain as more data are pooled.
Thus our proposed Smooth-SODL method extracts structured, denoised dictionaries that better capture
inter-subject variability in small, medium, and large-scale regimes alike.
Prediction of behavioral variables. If Smooth-SODL captures the patterns of inter-subject variability, then it should be possible to predict cognitive scores $y$ like picture vocabulary, reading proficiency,
math aptitude, etc. (the behavioral variables are explained in the HCP wiki [12]) by projecting new
subjects' data into this learned low-dimensional space (via solving the ridge problem (5) for each
sample $X_t$), without loss of performance compared with using the raw Z-values $X$. Let RAW
refer to the direct prediction of targets $y$ from $X$, using the 2000 voxels most correlated with
the target variable. Results for the comparison are shown in Fig. 2(c); only variables predicted
with a positive mean $R$-score (across the different methods and across subjects) are reported. We
see that the RAW model, as expected, over-fits drastically, scoring an $R^2$ of 0.3 on training data and
only 0.14 on test data. Overall, for this metric, CanICA performs better than all the other models in
predicting the different behavioral variables on test data. However, our proposed Smooth-SODL
model outperforms both SODL [16] and tCanICA, the thresholded version of CanICA. A bare-bones sketch of this prediction pipeline follows.
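The sketch below omits intercepts and standardization, and the `ridge` strength for the score regression is an assumed extra hyper-parameter; it only illustrates the two stages (project via problem (5), then regress the codes onto the score).

```python
import numpy as np

def predict_scores(V, alpha, X_train, y_train, X_test, ridge=1.0):
    """Project subjects' Z-maps (rows of X) onto the learned dictionary V
    via the ridge problem (5), then fit a linear ridge model from the
    k-dimensional codes to a behavioral score y."""
    def codes(X):
        G = V.T @ V + alpha * np.eye(V.shape[1])
        return np.linalg.solve(G, V.T @ X.T).T        # one k-dim code per subject
    U_tr, U_te = codes(X_train), codes(X_test)
    k = U_tr.shape[1]
    w = np.linalg.solve(U_tr.T @ U_tr + ridge * np.eye(k), U_tr.T @ y_train)
    return U_te @ w                                   # predicted scores on test subjects
```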
7 Concluding remarks
To extract structured, functionally discriminating patterns from massive brain data (i.e. data-driven
atlases), we have extended the online dictionary-learning framework first developed in [16] to learn
structured regions representative of brain organization. To this end, we have successfully augmented
[16] with a Laplacian penalty on the component maps, while conserving the low numerical complexity
of the latter. Through experiments, we have shown that the resulting model, Smooth-SODL (2),
extracts structured, denoised dictionaries that are more interpretable and better capture inter-subject
variability in small, medium, and large-scale regimes alike, compared to state-of-the-art models. We
believe such online multivariate methods shall become the de facto way to do dimensionality
reduction and ROI extraction in the future.
Implementation. The authors' implementation of the proposed Smooth-SODL model (2) will soon
be made available as part of the Nilearn package [2].
Acknowledgment. This work has been funded by EU FP7/2007-2013 under grant agreement no.
604102, Human Brain Project (HBP) and the iConnectome Digiteo project. We would also like to thank the
Human Connectome Project for making their wonderful data publicly available.
References
[1] A. Abraham et al. "Extracting brain regions from rest fMRI with Total-Variation constrained dictionary learning". In: MICCAI. 2013.
[2] A. Abraham et al. "Machine learning for neuroimaging with scikit-learn". In: Frontiers in Neuroinformatics (2014).
[3] L. Baldassarre, J. Mourao-Miranda, and M. Pontil. "Structured sparsity models for brain decoding from fMRI data". In: PRNI. 2012.
[4] A. Beck and M. Teboulle. "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems". In: SIAM J. Imaging Sci. 2 (2009).
[5] C. F. Beckmann and S. M. Smith. "Probabilistic independent component analysis for functional magnetic resonance imaging". In: Trans Med. Im. 23 (2004).
[6] D. P. Bertsekas. Nonlinear programming. Athena Scientific, 1999.
[7] L. Condat. "Fast projection onto the simplex and the l1 ball". In: Math. Program. (2014).
[8] E. Dohmatob et al. "Benchmarking solvers for TV-l1 least-squares and logistic regression in brain imaging". In: PRNI. IEEE. 2014.
[9] J. Duchi et al. "Efficient projections onto the l1-ball for learning in high dimensions". In: ICML. ACM. 2008.
[10] K. J. Friston et al. "Statistical Parametric Maps in Functional Imaging: A General Linear Approach". In: Hum Brain Mapp (1995).
[11] L. Grosenick et al. "Interpretable whole-brain prediction analysis with GraphNet". In: NeuroImage 72 (2013).
[12] HCP wiki. https://wiki.humanconnectome.org/display/PublicData/HCP+Data+Dictionary+Public-+500+Subject+Release. Accessed: 2010-09-30.
[13] M. Hebiri and S. van de Geer. "The Smooth-Lasso and other l1 + l2 -penalized methods". In: Electron. J. Stat. 5 (2011).
[14] D. P. Hibar et al. "Genetic clustering on the hippocampal surface for genome-wide association studies". In: MICCAI. 2013.
[15] R. Jenatton, G. Obozinski, and F. Bach. "Structured sparse principal component analysis". In: AISTATS. 2010.
[16] J. Mairal et al. "Online learning for matrix factorization and sparse coding". In: Journal of Machine Learning Research 11 (2010).
[17] A. Mensch et al. "Dictionary Learning for Massive Matrix Factorization". In: ICML. ACM. 2016.
[18] R. Saxe, M. Brett, and N. Kanwisher. "Divide and conquer: a defense of functional localizers". In: Neuroimage 30 (2006).
[19] S. M. Smith et al. "Advances in functional and structural MR image analysis and implementation as FSL". In: Neuroimage 23 (2004).
[20] D. van Essen et al. "The Human Connectome Project: A data acquisition perspective". In: NeuroImage 62 (2012).
[21] E. Varol and C. Davatzikos. "Supervised block sparse dictionary learning for simultaneous clustering and classification in computational anatomy". In: Med Image Comput Comput Assist Interv 17 (2014).
[22] G. Varoquaux et al. "A group model for stable multi-subject ICA on fMRI datasets". In: Neuroimage 51 (2010).
[23] G. Varoquaux et al. "Cohort-level brain mapping: learning cognitive atoms to single out specialized regions". In: IPMI. 2013.
[24] G. Varoquaux et al. "FAASTA: A fast solver for total-variation regularization of ill-conditioned problems with application to brain imaging". In: arXiv:1512.06999 (2015).
[25] G. Varoquaux et al. "Multi-subject dictionary learning to segment an atlas of brain spontaneous activity". In: Inf Proc Med Imag. 2011.
[26] Y. Ying and D.-X. Zhou. "Online regularized classification algorithms". In: IEEE Trans. Inf. Theory 52 (2006).
Neurally-Guided Procedural Models:
Amortized Inference for Procedural Graphics
Programs using Neural Networks
Daniel Ritchie
Stanford University
Anna Thomas
Stanford University
Pat Hanrahan
Stanford University
Noah D. Goodman
Stanford University
Abstract
Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide
powerful tools for constraining procedural models in computer graphics, but they
require many samples to produce desirable results. In this paper, we show how
to create procedural models which learn how to satisfy constraints. We augment
procedural models with neural networks which control how the model makes
random choices based on the output it has generated thus far. We call such models
neurally-guided procedural models. As a pre-computation, we train these models
to maximize the likelihood of example outputs generated via SMC. They are then
used as efficient SMC importance samplers, generating high-quality results with
very few samples. We evaluate our method on L-system-like models with imagebased constraints. Given a desired quality threshold, neurally-guided models can
generate satisfactory results up to 10x faster than unguided models.
1 Introduction
Procedural modeling, or the use of randomized procedures to generate computer graphics, is a
powerful technique for creating visual content. It facilitates efficient content creation at massive scale,
such as procedural cities [13]. It can generate fine detail that would require painstaking effort to
create by hand, such as decorative floral patterns [24]. It can even generate surprising or unexpected
results, helping users to explore large or unintuitive design spaces [19].
Many applications demand control over procedural models: making their outputs resemble examples [22, 2], fit a target shape [17, 21, 20], or respect functional constraints such as physical
stability [19]. Bayesian inference provides a general-purpose control framework: the procedural
model specifies a generative prior, and the constraints are encoded as a likelihood function. Posterior
samples can then be drawn via Markov Chain Monte Carlo (MCMC) or Sequential Monte Carlo
(SMC). Unfortunately, these algorithms often require many samples to converge to high-quality
results, limiting their usability for interactive applications. Sampling is challenging because the
constraint likelihood implicitly defines complex (often non-local) dependencies not present in the
prior. Can we instead make these dependencies explicit by encoding them in a model's generative
logic? Such an explicit model could simply be run forward to generate high-scoring results.
In this paper, we propose an amortized inference method for learning an approximation to this perfect
explicit model. Taking inspiration from recent work in amortized variational inference, we augment
the procedural model with neural networks that control how the model makes random choices based
on the partial output it has generated. We call such a model a neurally-guided procedural model.
We train these models by maximizing the likelihood of example outputs generated via SMC using a
large number of samples, as an offline pre-process. Once trained, they can be used as efficient SMC
importance samplers. By investing time up-front generating and training on many examples, our
system effectively ?pre-compiles? an efficient sampler that can generate further results much faster.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
For a given likelihood threshold, neurally-guided models can generate results which reliably achieve
that threshold using 10-20x fewer particles and up to 10x less compute time than an unguided model.
In this paper, we focus on accumulative procedural models that repeatedly add new geometry to a
structure. For our purposes, a procedural model is accumulative if, while executing, it provides a
"current position" p from which geometry generation will continue. Many popular growth models,
such as L-systems, are accumulative [16]. We focus on 2D models ($p \in \mathbb{R}^2$) which generate images,
though the techniques we present extend naturally to 3D.
2 Related Work
Guided Procedural Modeling Procedural models can be guided using non-probabilistic methods.
Open L-systems can query their spatial position and orientation, allowing them to prune their growth
to an implicit surface [17, 14]. Recent follow-up work supports larger models by decomposing them
into separate regions with limited interaction [1]. These methods were specifically designed to fit
procedural models to shapes. In contrast, our method learns how to guide procedural models and is
generally applicable to constraints expressible as likelihood functions.
Generatively Capturing Dependencies in Procedural Models A recent system by Dang et al.
modifies a procedural grammar so that its output distribution reflects user preference scores given
to example outputs [2]. Like us, they use generative logic to capture dependencies induced by a
likelihood function (in their case, a Gaussian process regression over user-provided examples). Their
method splits non-terminal symbols in the original grammar, giving it more combinatorial degrees
of freedom. This works well for discrete dependencies, whereas our method is better suited for
continuous constraint functions, such as shape-fitting.
Neural Amortized Inference Our method is also inspired by recent work in amortized variational
inference using neural variational families [11, 18, 8], but it uses a different learning objective.
Prior work has also aimed to train efficient neural SMC importance samplers [5, 15]. These efforts
focused on time series models and Bayesian networks, respectively; we focus on a class of structured
procedural models, the characteristics of which permit different design decisions:
• The likelihood of a partially-generated output can be evaluated at any time and is a good
heuristic for the likelihood of the completed output. This is different from e.g. time series
models, where the likelihood at each step considers a previously-unseen data point.
• They make many local random choices but have no global/top-level parameters.
• They generate images, which naturally support coarse-to-fine feature extraction.
These properties informed the design of our neurally-guided model architecture.
3 Approach
Consider a simple procedural modeling program chain that recursively generates a random sequence
of linear segments, constrained to match a target image. Figure 1a shows the text of this program,
along with samples generated from it (drawn in black) against several target images (drawn in gray).
Chains generated by running the program forward do not match the targets, since forward sampling
is oblivious to the constraint. Instead, we can generate constrained samples using Sequential Monte
Carlo (SMC) [20]. This results in final chains that more closely match the target images. However,
the algorithm requires many particles?and therefore significant computation?to produce acceptable
results. Figure 1a shows that N = 10 particles is not sufficient.
In an ideal world, we would not need costly inference algorithms to generate constraint-satisfying
results. Instead, we would have access to an "oracle" program, chain_perfect, that perfectly fills in the
target image when run forward. While such an oracle can be difficult or impossible to write by hand,
it is possible to learn a program chain_neural that comes close. Figure 1b shows our approach. For
each random choice in the program text (e.g. gaussian, flip), we replace the parameters of that choice
with the output of a neural network. This neural network's inputs (abstracted as "...") include the
target image as well the partial output image the program has generated thus far. The network thus
function chain(pos, ang) {
    var newang = ang + gaussian(0, PI/8);
    var newpos = pos + polarToRect(LENGTH, newang);
    genSegment(pos, newpos);
    if (flip(0.5)) chain(newpos, newang);
}

function chain_neural(pos, ang) {
    var newang = ang + gaussMixture(nn1(...));
    var newpos = pos + polarToRect(LENGTH, newang);
    genSegment(pos, newpos);
    if (flip(nn2(...))) chain_neural(newpos, newang);
}
[Figure 1 image panels: for (a) and (b) respectively, Forward Samples and SMC Samples (N = 10).]
Figure 1: Turning a linear chain model into a neurally-guided model. (a) The original program. When
outputs (shown in black) are constrained to match a target image (shown in gray), SMC requires many
particles to achieve good results. (b) The neurally-guided model, where random choice parameters
are computed via neural networks. Once trained, forward sampling from this model adheres closely
to the target image, and SMC with only 10 particles consistently produces good results.
shapes the distribution over possible choices, guiding the program's future output based on the target
image and its past output. These neural nets affect both continuous choices (e.g. angles) as well as
control flow decisions (e.g. recursion): they dictate where the chain goes next, as well as whether it
keeps going at all. For continuous choices such as gaussian, we also modify the program to sample
from a mixture distribution. This helps the program handle situations where the constraints permit
multiple distinct choices (e.g. in which direction to start the chain for the circle-shaped target image
in Figure 1).
Once trained, chain_neural generates constraint-satisfying results more efficiently than its unguided
counterpart. Figure 1b shows example outputs: forward samples adhere closely to the target images,
and SMC with 10 particles is sufficient to produce chains that fully fill the target shape. The next
section describes the process of building and training such neurally-guided procedural models.
4 Method
For our purposes, a procedural model is a generative probabilistic model of the following form:
$$P_M(x) = \prod_{i=1}^{|x|} p_i(x_i \,;\, \phi_i(x_1, \ldots, x_{i-1}))$$
Here, x is the vector of random choices the procedural modeling program makes as it executes. The
$p_i$'s are local probability distributions from which each successive random choice is drawn. Each $p_i$
is parameterized by a set of parameters (e.g. mean and variance for a Gaussian distribution), which
are determined by some function $\phi_i$ of the previous random choices $x_1, \ldots, x_{i-1}$.
A constrained procedural model also includes an unnormalized likelihood function $\ell(x, c)$ that
measures how well an output of the model satisfies some constraint $c$:

$$P_{CM}(x \mid c) = \frac{1}{Z} \cdot P_M(x) \cdot \ell(x, c)$$
In the chain example, $c$ is the target image, with $\ell(\cdot, c)$ measuring similarity to that image.
A neurally-guided procedural model modifies a procedural model by replacing each parameter
function $\phi_i$ with a neural network:

$$P_{GM}(x \mid c; \theta) = \prod_{i=1}^{|x|} \tilde{p}_i(x_i \,;\, \mathrm{NN}_i(I(x_1, \ldots, x_{i-1}), c \,;\, \theta))$$

where $I(x_1, \ldots, x_{i-1})$ renders the model output after the first $i - 1$ random choices, and $\theta$ are the
network parameters. $\tilde{p}_i$ is a mixture distribution if random choice $i$ is continuous; otherwise, $\tilde{p}_i = p_i$.
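To make this concrete, here is a minimal sketch (in Python rather than the paper's WebPPL, so the function names and the packing of the network output into logits, means, and log-sigmas are illustrative assumptions) of sampling one guided continuous choice from a neural mixture:

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sample_guided_choice(nn_output, n_components=4, rng=np.random):
    # nn_output is the flat vector produced by NN_i for this choice,
    # interpreted as (logits, means, log-sigmas) of a Gaussian mixture.
    logits = nn_output[:n_components]
    means = nn_output[n_components:2 * n_components]
    sigmas = np.exp(nn_output[2 * n_components:])    # e^x keeps sigma positive
    k = rng.choice(n_components, p=softmax(logits))  # pick a mixture component
    return rng.normal(means[k], sigmas[k])           # sample within it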
[Figure 2 diagram: the (optional) target image and the current partial output, each 50x50, are downsampled into multi-resolution windows giving 36c Target Image Features and 36c Partial Output Features; together with na Local State Features (the scalar arguments of the enclosing function, e.g. branch(pos, ang, width)) they form nf inputs to a fully-connected (FC) layer of size nf/2 with tanh, followed by an FC layer whose np outputs are mapped through bounds to the np output parameters.]
Figure 2: Network architecture for neurally-guided procedural models. The outputs are the parameters
for a random choice probability distribution. The inputs come from three sources: Local State
Features are the arguments to the function in which the random choice occurs; Partial Output
Features come from 3x3 pixel windows of the partial image the model has generated, extracted at
multiple resolutions, around the procedural model's current position; Target Image Features are
analogous windows extracted from the target image, if the constraint requires one.
To train a neurally-guided procedural model, we seek parameters $\theta$ such that $P_{GM}$ is as close
as possible to $P_{CM}$. This goal can be formalized as minimizing the conditional KL divergence
$D_{KL}(P_{CM} \,\|\, P_{GM})$ (see the supplemental materials for derivation):

$$\min_\theta D_{KL}(P_{CM} \,\|\, P_{GM}) \;\Longleftrightarrow\; \max_\theta \frac{1}{N} \sum_{s=1}^{N} \log P_{GM}(x_s \mid c_s; \theta), \qquad x_s \sim P_{CM}(x),\; c_s \sim P(c) \quad (1)$$
where the $x_s$ are example outputs generated using SMC, given a $c_s$ drawn from some distribution
$P(c)$ over constraints, e.g. uniform over a set of training images. This is simply maximizing the
likelihood of the $x_s$ under the neurally-guided model. Training then proceeds via stochastic gradient
ascent using the gradient

$$\nabla_\theta \log P_{GM}(x \mid c; \theta) = \sum_{i=1}^{|x|} \nabla_\theta \log \tilde{p}_i(x_i \,;\, \mathrm{NN}_i(I(x_1, \ldots, x_{i-1}), c \,;\, \theta)) \quad (2)$$
The trained $P_{GM}(x \mid c; \theta)$ can then be used as an importance distribution for SMC.
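To show how the trained guide is used, here is a sketch of generic SMC with guided proposals; the `model`, `guide`, and `likelihood` interfaces are assumptions for illustration (not the paper's WebPPL machinery), and the weight update is the standard prior-over-proposal ratio times the incremental likelihood:

import numpy as np

def smc_with_guide(model, guide, likelihood, c, n_particles, n_steps, rng=np.random):
    traces = [[] for _ in range(n_particles)]
    log_w = np.zeros(n_particles)    # log importance weights
    prev_ll = np.zeros(n_particles)  # last evaluated log-likelihood per particle
    for _ in range(n_steps):
        for i in range(n_particles):
            x, q_lp = guide.propose(traces[i], c)          # draw from P_GM
            p_lp = model.prior_logpdf(traces[i], x)        # density under P_M
            traces[i].append(x)
            ll = likelihood(traces[i], c)                  # log l(x, c) so far
            log_w[i] += (p_lp - q_lp) + (ll - prev_ll[i])  # incremental weight
            prev_ll[i] = ll
        w = np.exp(log_w - log_w.max())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        traces = [list(traces[j]) for j in idx]            # multinomial resampling
        prev_ll = prev_ll[idx]
        log_w[:] = 0.0
    return traces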
It is worth noting that using the other direction of KL divergence, DKL (PGM ||PCM ), leads to
the marginal likelihood lower bound objective used in many black-box variational inference algorithms [23, 6, 11]. This objective requires training samples from PGM , which are much less
expensive to generate than samples from PCM . When used for procedural modeling, however, it
leads to models whose outputs lack diversity, making them unsuitable for generating visually-varied
content. This behavior is due to a well-known property of the objective: minimizing it produces
approximating distributions that are overly-compact, i.e. concentrating their probability mass in a
smaller volume of the state space than the true distribution being approximated [10]. Our objective
is better suited for training proposal distributions for importance sampling methods (such as SMC),
where the target density must be absolutely continuous with respect to the proposal density [3].
4.1 Neural Network Architecture
Each network NNi should predict a distribution over choice i that is as close as possible to its true
posterior distribution. More complex networks capture more dependencies and increase accuracy but
require more computation time to execute. We can also increase accuracy at the cost of computation
time by running SMC with more particles. If our networks are too complex (i.e. accuracy provided
per unit computation time is too low), then the neurally-guided model PGM will be outperformed by
simply using more particles with the original model PM . For neural guidance to provide speedups,
we require networks that pack as much predictive power into as simple an architecture as possible.
Figure 3: Example images from our datasets (left to right: Scribbles, Glyphs, PhyloPic).
Figure 2 shows our network architecture: a multilayer perceptron with nf inputs, one hidden layer of
size nf /2 with a tanh nonlinearity, and np outputs, where np is the number of parameters the random
choice expects. We found that a simpler linear model did not perform as well per unit time. Since
some parameters are bounded (e.g. Gaussian variance must be positive), each output is remapped
via an appropriate bounding transform (e.g. ex for non-negative parameters). The inputs come from
several sources, each providing the network with decision-critical information:
Local State Features The model's current position $p$, the current orientation of any local reference
frame, etc. We access this data via the arguments of the function call in which the random choice
occurs, extracting all $n_a$ scalar arguments and normalizing them to $[-1, 1]$.
Partial Output Features Next, the network needs information about the output the model has
already generated. The raw pixels of the current partial output image $I(\cdot)$ provide too much data; we
need to summarize the relevant image contents. We extract 3x3 windows of pixels around the model's
current position $p$ at four different resolution levels, with each level computed by downsampling the
previous level via a 2x2 box filter. This results in $36c$ features for a $c$-channel image. This architecture
is similar to the foveated "glimpses" used in visual attention models [12]. Convolutional networks
might also be used here, but this approach provided better performance per unit of computation time.
Target Image Features Finally, if the constraint being enforced involves a target image, as in
the chain example of Section 3, we also extract multi-resolution windows from this image. These
additional 36c features allow the network to make appropriate decisions for matching the image.
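A sketch of this window extraction in Python; the array layout, rounding, and zero-padding at the borders are our assumptions:

import numpy as np

def multires_windows(img, pos, n_levels=4):
    # img: (H, W, c) array; pos: (row, col). Each level halves the resolution
    # with a 2x2 box filter, yielding n_levels * 9 * c = 36c features.
    feats = []
    for _ in range(n_levels):
        r = min(max(int(round(pos[0])), 0), img.shape[0] - 1)
        s = min(max(int(round(pos[1])), 0), img.shape[1] - 1)
        pad = np.pad(img, ((1, 1), (1, 1), (0, 0)))  # zero-pad so 3x3 fits
        feats.append(pad[r:r + 3, s:s + 3, :].ravel())
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = 0.25 * (img[0:h:2, 0:w:2] + img[1:h:2, 0:w:2] +
                      img[0:h:2, 1:w:2] + img[1:h:2, 1:w:2])  # 2x2 box filter
        pos = (pos[0] / 2.0, pos[1] / 2.0)  # position tracks the downsampling
    return np.concatenate(feats)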
4.2 Training
We train with stochastic gradient ascent (see Equation 2). We use the Adam algorithm [7] with
$\beta_1 = \beta_2 = 0.75$, step size 0.01, and minibatch size one. We terminate training after 20000 iterations.
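A minimal PyTorch rendering of this loop, assuming a `guided_model.log_prob(x, c)` interface that returns log P_GM(x | c; theta) for a recorded trace; the paper's implementation uses WebPPL/adnn, so treat this as a sketch with the hyperparameters as read above:

import torch

def train_guide(guided_model, examples, n_iters=20000):
    opt = torch.optim.Adam(guided_model.parameters(), lr=0.01, betas=(0.75, 0.75))
    for t in range(n_iters):
        x, c = examples[t % len(examples)]   # minibatch of size one
        opt.zero_grad()
        loss = -guided_model.log_prob(x, c)  # ascend the log-likelihood (Eq. 1)
        loss.backward()
        opt.step()
    return guided_model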
5 Experiments
In this section, we evaluate how well neurally-guided procedural models capture image-based
constraints. We implemented our prototype system in the WebPPL probabilistic programming
language [4] using the adnn neural network library.1 All timing data was collected on an Intel Core
i7-3840QM machine with 16GB RAM running OSX 10.10.5.
5.1 Image Datasets
In experiments which require target images, we use the following image collections:
• Scribbles: 49 binary mask images drawn by hand with the brush tool in Photoshop. Includes
shapes with thick and thin regions, high and low curvature, and self-intersections.
• Glyphs: 197 glyphs from the FF Tartine Script Bold typeface (all glyphs with only one
foreground component and at least 500 foreground pixels when rendered at 129x97).
• PhyloPic: 35 images from the PhyloPic silhouette database.²
We augment the dataset with a horizontally-mirrored copy of each image, and we annotate each
image with a starting point and direction from which to initialize the execution of a procedural model.
Figure 3 shows some representative images from each collection.
¹ https://github.com/dritchie/adnn
² http://phylopic.org
[Figure 4 image panels: Target; Reference (N = 600, 30.26 s); Guided (N = 10, 1.5 s); Unguided, Equal N (N = 10, 0.1 s); Unguided, Equal Time (N = 45, 1.58 s).]
Figure 4: Constraining a vine-growth procedural model to match a target image. N is the number of
SMC particles used. Reference shows an example result after running SMC on the unguided model
with a large number of particles. Neurally-guided models generate results of this quality in a couple
seconds; the unguided model struggles given the same amount of particles or computation time.
5.2 Shape Matching
We first train neurally-guided procedural models to match 2D shapes specified as binary mask images.
If D is the spatial domain of the image, then the likelihood function for this constraint is
$$\ell_{\text{shape}}(x, c) = \mathcal{N}\!\left(\frac{\mathrm{sim}(I(x), c) - \mathrm{sim}(\mathbf{0}, c)}{1 - \mathrm{sim}(\mathbf{0}, c)},\; 1,\; \sigma_{\text{shape}}\right) \quad (3)$$

$$\mathrm{sim}(I_1, I_2) = \frac{\sum_{p \in D} w(p) \cdot \mathbf{1}\{I_1(p) = I_2(p)\}}{\sum_{p \in D} w(p)}, \qquad w(p) = \begin{cases} 1 & \text{if } I_2(p) = 0 \\ 1 & \text{if } \|\nabla I_2(p)\| = 1 \\ w_{\text{filled}} & \text{if } \|\nabla I_2(p)\| = 0 \end{cases}$$
where $\nabla I(p)$ is a binary edge mask computed using the Sobel operator. This function encourages
the output image $I(x)$ to be similar to the target mask $c$, where similarity is normalized against
$c$'s similarity to an empty image $\mathbf{0}$. Each pixel $p$'s contribution is weighted by $w(p)$, determined
by whether the target mask is empty, filled, or has an edge at that pixel. We use $w_{\text{filled}} = 0.\overline{6}$, so
empty and edge pixels are worth 1.5 times more than filled pixels. This encourages matching of
perceptually-important contours and negative space. $\sigma_{\text{shape}} = 0.02$ in all experiments.
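A Python sketch of this likelihood; the binarization of the Sobel response and the unnormalized log form are our assumptions:

import numpy as np
from scipy import ndimage

def edge_mask(mask):
    # binary edge mask of a binary image via the Sobel operator
    gx = ndimage.sobel(mask.astype(float), axis=0)
    gy = ndimage.sobel(mask.astype(float), axis=1)
    return (np.hypot(gx, gy) > 0).astype(float)

def weighted_sim(i1, i2, w_filled=2.0 / 3.0):
    # sim(I1, I2): weighted pixel agreement, weights from the target mask i2
    e2 = edge_mask(i2)
    w = np.where(i2 == 0, 1.0, np.where(e2 == 1, 1.0, w_filled))
    return np.sum(w * (i1 == i2)) / np.sum(w)

def shape_loglik(i_out, target, sigma=0.02):
    # unnormalized Gaussian log-likelihood of Eq. (3)
    base = weighted_sim(np.zeros_like(target), target)
    s = (weighted_sim(i_out, target) - base) / (1.0 - base)
    return -0.5 * ((s - 1.0) / sigma) ** 2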
We wrote a WebPPL program which recursively generates vines with leaves and flowers and then
trained a neurally-guided version of this program to capture the above likelihood. The model was
trained on 10000 example outputs, each generated using SMC with 600 particles. Target images were
drawn uniformly at random from the Scribbles dataset. Each example took on average 17 seconds to
generate; parallelized across four CPU cores, the entire set of examples took approximately 12 hours
to generate. Training took 55 minutes in our single-threaded implementation.
Figure 4 shows some outputs from this program. 10-particle SMC produces recognizable results with
the neurally-guided model (Guided) but not with the unguided model (Unguided (Equal N )). A more
equitable comparison is to give the unguided model the same amount of wall-clock time as the guided
model. While the resulting outputs fare better, the target shape is still obscured (Unguided (Equal
Time)). We find that the unguided model needs ≈200 particles to reliably match the performance of
the guided model. Additional results are shown in the supplemental materials.
Figure 5 shows a quantitative comparison between five different models on the shape matching task:
• Unguided: The original, unguided procedural model.
• Constant Params: The neural network for each random choice is a vector of constant
parameters (i.e. a partial mean field approximation [23]).
• + Local State Features: Adding the local state features described in Section 4.1.
• + Target Image Features: Adding the target image features described in Section 4.1.
• All Features: Adding the partial output features described in Section 4.1.
We test each model on the Glyph dataset and report the median normalized similarity-to-target
achieved (i.e. argument one to the Gaussian in Equation 3), plotted in Figure 5a. The performance
[Figure 5 plots: (a) Similarity vs. Number of Particles; (b) Computation Time (seconds) vs. Similarity; legend: Unguided, Constant Params, + Local State Features, + Target Image Features, All Features.]
Figure 5: Shape matching performance comparison. "Similarity" is median normalized similarity to
target, averaged over all targets in the test dataset. Bracketing lines show 95% confidence bounds.
(a) Performance as number of SMC particles increases. The neurally-guided model achieves higher
average similarity as more features are added. (b) Computation time required as desired similarity
increases. The vertical gap between the two curves indicates speedup (which can be up to 10x).
[Figure 6 plots: (a) Similarity vs. Number of Particles, with and without mixture distributions; (b) Similarity at 10 Particles vs. Number of Training Traces (10 to 2000, logarithmic scale).]
Figure 6: (a) Using four-component mixtures for continuous random choices boosts performance. (b)
The effect of training set size on performance (at 10 SMC particles), plotted on a logarithmic scale.
Average similarity-to-target levels off at ≈1000 examples.
of the guided model improves with the addition of more features; at 10 particles, the full model
is already approaching an asymptote. Figure 5b shows the wall-clock time required to achieve
increasing similarity thresholds. The vertical gap between the two curves shows the speedup given
by neural guidance, which can be as high as 10x. For example, the + Local State Features model
reaches similarity 0.35 about 5.5 times faster than the Unguided model, the + Target Image Features
model is about 1.5 times faster still, and the All Features Model is about 1.25 times faster than that.
Note that we trained on the Scribbles dataset but tested on the Glyphs dataset; these results suggest
that our models can generalize to qualitatively-different previously-unseen images.
Figure 6a shows the benefit of using mixture guides for continuous random choices. The experimental
setup is the same as in Figure 5. We compare a model which uses four-component mixtures with a
no-mixture model. Using mixtures boosts performance, which we alluded to in Section 3: at shape
intersections, such as the crossing of the letter "t," the model benefits from multimodal uncertainty.
Using more than four mixture components did not improve performance on this test dataset.
We also investigate how the number of training examples affects performance. Figure 6b plots the
median similarity at 10 particles as training set size increases. Performance increases rapidly for the
first few hundred examples before leveling off, suggesting that ≈1000 sample traces is sufficient
(for our particular choice of training set, at least). This may seem surprising, as many published
neurally-based learning systems require many thousands to millions of training examples. In our case,
[Figure 7 image panels: Reference (N = 600); Guided (N = 15); Unguided (Equal N); Unguided (Equal Time).]
Figure 7: Constraining the vine-growth program to generate circuit-like patterns. Reference outputs
took ≈ 70 seconds to generate; outputs from the guided model took ≈ 3.5 seconds.
each training example contains hundreds to thousands of random choices, each of which provides
a learning signal; in this way, the training data is "bigger" than it appears. Our implementation
generates 1000 samples in just over an hour using four CPU cores.
5.3 Stylized "Circuit" Design
We next train neurally-guided procedural models to capture a likelihood that does not use a target
image: constraining the vines program to resemble a stylized circuit design. To achieve the dense
packing of long wire traces that is one of the most striking visual characteristics of circuit boards,
we encourage a percentage $\tau$ of the image to be filled ($\tau = 0.5$ in our results) and to have a dense,
high-magnitude gradient field, as this tends to create many long rectilinear or diagonal edges:
$$\ell_{circ}(x) = \mathcal{N}\big(\mathrm{edge}(I(x)) \cdot (1 - \delta(\mathrm{fill}(I(x)), \tau)),\; 1,\; \sigma_{circ}\big) \quad (4)$$

$$\mathrm{edge}(I) = \frac{1}{|D|} \sum_{p \in D} \|\nabla I(p)\|, \qquad \mathrm{fill}(I) = \frac{1}{|D|} \sum_{p \in D} I(p)$$
where $\delta(x, \hat{x})$ is the relative error of $x$ from $\hat{x}$ and $\sigma_{circ} = 0.01$. We also penalize geometry outside
the bounds of the image, encouraging the program to fill in a rectangular "die"-like region. We train
on 2000 examples generated using SMC with 600 particles. Example generation took 10 hours and
training took under two hours. Figure 7 shows outputs from this program. As with shape matching,
the neurally-guided model generates high-scoring results significantly faster than the unguided model.
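A sketch of the two image statistics behind Eq. (4); finite differences stand in for the gradient operator, and the unnormalized log form is our assumption:

import numpy as np

def circuit_loglik(img, tau=0.5, sigma=0.01):
    gy, gx = np.gradient(img.astype(float))     # per-axis finite differences
    edge = np.mean(np.hypot(gx, gy))            # mean gradient magnitude
    fill = np.mean(img)                         # fraction of filled pixels
    delta = abs(fill - tau) / tau               # relative error of fill from tau
    score = edge * (1.0 - delta)
    return -0.5 * ((score - 1.0) / sigma) ** 2  # unnormalized Gaussian log-density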
6 Conclusion and Future Work
This paper introduced neurally-guided procedural models: constrained procedural models that use
neural networks to capture constraint-induced dependencies. We showed how to train guides for
accumulative models with image-based constraints using a simple-yet-powerful network architecture.
Experiments demonstrated that neurally-guided models can generate high-quality results significantly
faster than unguided models.
Accumulative procedural models provide a current position p, which is not true of other generative
paradigms (e.g. texture generation, which generates content across its entire spatial domain). In such
settings, the guide might instead learn what parts of the current partial output are relevant to each
random choice using an attention process [12].
Using neural networks to predict random choice parameters is just one possible program transformation for generatively capturing constraints. Other transformations, such as control flow changes, may
be necessary to capture more types of constraints. A first step in this direction would be to combine
our approach with the grammar-splitting technique of Dang et al. [2].
Methods like ours could also accelerate inference for other applications of procedural models, e.g. as
priors in analysis-by-synthesis vision systems [9]. A robot perceiving a room through an onboard
camera, detecting chairs, then fitting a procedural model to the detected chairs could learn importance
distributions for each step of the chair-generating process (e.g. the number of parts, their size,
arrangement, etc.) Future work is needed to determine appropriate neural guides for such domains.
References
[1] Bedřich Beneš, Ondřej Šťava, Radomír Měch, and Gavin Miller. Guided Procedural Modeling. In
Eurographics 2011.
[2] Minh Dang, Stefan Lienhard, Duygu Ceylan, Boris Neubert, Peter Wonka, and Mark Pauly. Interactive
Design of Probability Density Functions for Shape Grammars. In SIGGRAPH Asia 2015.
[3] Charles Geyer. Importance Sampling, Simulated Tempering, and Umbrella Sampling. In S. Brooks,
A. Gelman, G. Jones, and X.L. Meng, editors, Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[4] Noah D Goodman and Andreas Stuhlmüller. The Design and Implementation of Probabilistic Programming
Languages. http://dippl.org, 2014. Accessed: 2015-12-23.
[5] Shixiang Gu, Zoubin Ghahramani, and Richard E. Turner. Neural Adaptive Sequential Monte Carlo. In
NIPS 2015.
[6] R. Ranganath, S. Gerrish, and D. Blei. Black Box Variational Inference. In AISTATS 2014.
[7] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR 2015.
[8] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In ICLR 2014.
[9] T. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. Mansinghka. Picture: An Imperative Probabilistic
Programming Language for Scene Perception. In CVPR 2015.
[10] David J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press,
2002.
[11] Andriy Mnih and Karol Gregor. Neural Variational Inference and Learning in Belief Networks. In ICML
2014.
[12] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual
Attention. In NIPS 2014.
[13] Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer, and Luc Van Gool. Procedural Modeling of
Buildings. In SIGGRAPH 2006.
[14] Radomír Měch and Przemyslaw Prusinkiewicz. Visual Models of Plants Interacting with Their Environment.
In SIGGRAPH 1996.
[15] B. Paige and F. Wood. Inference Networks for Sequential Monte Carlo in Graphical Models. In ICML
2016.
[16] P. Prusinkiewicz and Aristid Lindenmayer. The Algorithmic Beauty of Plants. Springer-Verlag New York,
Inc., 1990.
[17] Przemyslaw Prusinkiewicz, Mark James, and Radomír Měch. Synthetic Topiary. In SIGGRAPH 1994.
[18] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In ICML 2014.
[19] Daniel Ritchie, Sharon Lin, Noah D. Goodman, and Pat Hanrahan. Generating Design Suggestions under
Tight Constraints with Gradient-based Probabilistic Programming. In Eurographics 2015.
[20] Daniel Ritchie, Ben Mildenhall, Noah D. Goodman, and Pat Hanrahan. Controlling Procedural Modeling
Programs with Stochastically-Ordered Sequential Monte Carlo. In SIGGRAPH 2015.
[21] Jerry O. Talton, Yu Lou, Steve Lesser, Jared Duke, Radomír Měch, and Vladlen Koltun. Metropolis
Procedural Modeling. ACM Trans. Graph., 30(2), 2011.
[22] O. Šťava, S. Pirk, J. Kratt, B. Chen, R. Měch, O. Deussen, and B. Beneš. Inverse Procedural Modelling of
Trees. Computer Graphics Forum, 33(6), 2014.
[23] David Wingate and Theophane Weber. Automated Variational Inference in Probabilistic Programming. In
NIPS 2012 Workshop on Probabilistic Programming.
[24] Michael T. Wong, Douglas E. Zongker, and David H. Salesin. Computer-generated Floral Ornament. In
SIGGRAPH 1998.
5,918 | 6,354 | Theoretical Comparisons of Positive-Unlabeled
Learning against Positive-Negative Learning
Gang Niu1 Marthinus C. du Plessis1 Tomoya Sakai1 Yao Ma3 Masashi Sugiyama2,1
1
The University of Tokyo, Japan 2 RIKEN, Japan 3 Boston University, USA
{ gang@ms., christo@ms., sakai@ms., yao@ms., sugi@ }k.u-tokyo.ac.jp
Abstract
In PU learning, a binary classifier is trained from positive (P) and unlabeled (U) data
without negative (N) data. Although N data is missing, it sometimes outperforms
PN learning (i.e., ordinary supervised learning). Hitherto, neither theoretical nor
experimental analysis has been given to explain this phenomenon. In this paper,
we theoretically compare PU (and NU) learning against PN learning based on the
upper bounds on estimation errors. We find simple conditions when PU and NU
learning are likely to outperform PN learning, and we prove that, in terms of the
upper bounds, either PU or NU learning (depending on the class-prior probability
and the sizes of P and N data) given infinite U data will improve on PN learning.
Our theoretical findings well agree with the experimental results on artificial and
benchmark data even when the experimental setup does not match the theoretical
assumptions exactly.
1 Introduction
Positive-unlabeled (PU) learning, where a binary classifier is trained from P and U data, has drawn
considerable attention recently [1, 2, 3, 4, 5, 6, 7, 8]. It is appealing not only to academia but also
to industry, since for example the click-through data automatically collected in search engines are
highly PU due to position biases [9, 10, 11]. Although PU learning uses no negative (N) data, it is
sometimes even better than PN learning (i.e., ordinary supervised learning, perhaps with class-prior
change [12]) in practice. Nevertheless, there is neither theoretical nor experimental analysis for this
phenomenon, and it is still an open problem when PU learning is likely to outperform PN learning.
We clarify this question in this paper.
Problem settings For PU learning, there are two problem settings based on one sample (OS) and
two samples (TS) of data respectively. More specifically, let $X \in \mathbb{R}^d$ and $Y \in \{\pm 1\}$ ($d \in \mathbb{N}$) be the
input and output random variables, equipped with an underlying joint density $p(x, y)$. In OS [3],
a set of U data is sampled from the marginal density $p(x)$. Then if a data point $x$ is P, this P label is
observed with probability $c$, and $x$ remains U with probability $1 - c$; if $x$ is N, this N label is never
observed, and $x$ remains U with probability 1. In TS [4], a set of P data is drawn from the positive
marginal density $p(x \mid Y = +1)$ and a set of U data is drawn from $p(x)$. Denote by $n_+$ and $n_u$ the
sizes of P and U data. As two random variables, they are fully independent in TS, and they satisfy
$n_+/(n_+ + n_u) \approx c\pi$ in OS where $\pi = p(Y = +1)$ is the class-prior probability. Therefore, TS is
slightly more general than OS, and we will focus on TS problem settings.
Similarly, consider TS problem settings of PN and NU learning, where a set of N data (of size $n_-$) is
sampled from $p(x \mid Y = -1)$ independently of the P/U data. For PN learning, if we enforce that
$n_+/(n_+ + n_-) \approx \pi$ when sampling the data, it will be ordinary supervised learning; otherwise, it is
supervised learning with class-prior change, a.k.a. prior probability shift [12].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In [7], a cost-sensitive formulation for PU learning was proposed, and its risk estimator was proven
unbiased if the surrogate loss is non-convex and satisfies a symmetric condition. Therefore, we can
naturally compare empirical risk minimizers in PU and NU learning against that in PN learning.
Contributions We establish risk bounds of three risk minimizers in PN, PU and NU learning for
comparisons in a flavor of statistical learning theory [13, 14]. For each minimizer, we firstly derive
a uniform deviation bound from the risk estimator to the risk using Rademacher complexities (see,
e.g., [15, 16, 17, 18]), and secondly obtain an estimation error bound. Thirdly, if the surrogate loss
is classification-calibrated [19], an excess risk bound is an immediate corollary. In [7], there was a
generalization error bound similar to our uniform deviation bound for PU learning. However, it is
based on a tricky decomposition of the risk, where surrogate losses for risk minimization and risk
analysis are different and labels of U data are needed for risk evaluation, so that no further bound is
implied. On the other hand, ours utilizes the same surrogate loss for risk minimization and analysis
and requires no label of U data for risk evaluation, so that an estimation error bound is possible.
Our main results can be summarized as follows. Denote by $\hat{g}_{pn}$, $\hat{g}_{pu}$ and $\hat{g}_{nu}$ the risk minimizers in
PN, PU and NU learning. Under a mild assumption on the function class and data distributions,

• Finite-sample case: The estimation error bound of $\hat{g}_{pu}$ is tighter than that of $\hat{g}_{pn}$ whenever
$\pi/\sqrt{n_+} + 1/\sqrt{n_u} < (1-\pi)/\sqrt{n_-}$, and so is the bound of $\hat{g}_{nu}$ tighter than that of $\hat{g}_{pn}$ if
$(1-\pi)/\sqrt{n_-} + 1/\sqrt{n_u} < \pi/\sqrt{n_+}$.

• Asymptotic case: Either the limit of bounds of $\hat{g}_{pu}$ or that of $\hat{g}_{nu}$ (depending on $\pi$, $n_+$ and
$n_-$) will improve on that of $\hat{g}_{pn}$, if $n_+, n_- \to \infty$ in the same order and $n_u \to \infty$ faster in
order than $n_+$ and $n_-$.
Notice that both results rely on only the constant $\pi$ and variables $n_+$, $n_-$ and $n_u$; they are simple and
independent of the specific forms of the function class and/or the data distributions. The asymptotic
case is from the finite-sample case that is based on theoretical comparisons of the aforementioned
upper bounds on the estimation errors of $\hat{g}_{pn}$, $\hat{g}_{pu}$ and $\hat{g}_{nu}$. To the best of our knowledge, this is the
first work that compares PU learning against PN learning.
Throughout the paper, we assume that the class-prior probability $\pi$ is known. In practice, it can be
effectively estimated from P, N and U data [20, 21, 22] or only P and U data [23, 24].
Organization The rest of this paper is organized as follows. Unbiased estimators are reviewed in
Section 2. Then in Section 3 we present our theoretical comparisons based on risk bounds. Finally
experiments are discussed in Section 4.
2 Unbiased estimators to the risk
For convenience, denote by $p_+(x) = p(x \mid Y = +1)$ and $p_-(x) = p(x \mid Y = -1)$ the partial marginal
densities. Recall that instead of data sampled from $p(x, y)$, we consider three sets of data $X_+$, $X_-$
and $X_u$ which are drawn from the three marginal densities $p_+(x)$, $p_-(x)$ and $p(x)$ independently.
Let $g : \mathbb{R}^d \to \mathbb{R}$ be a real-valued decision function for binary classification and $\ell : \mathbb{R} \times \{\pm 1\} \to \mathbb{R}$
be a Lipschitz-continuous loss function. Denote by

$$R_+(g) = \mathbb{E}_+[\ell(g(X), +1)], \qquad R_-(g) = \mathbb{E}_-[\ell(g(X), -1)]$$

the partial risks, where $\mathbb{E}_\pm[\cdot] = \mathbb{E}_{X \sim p_\pm}[\cdot]$. Then the risk of $g$ w.r.t. $\ell$ under $p(x, y)$ is given by

$$R(g) = \mathbb{E}_{(X,Y)}[\ell(g(X), Y)] = \pi R_+(g) + (1 - \pi) R_-(g). \quad (1)$$
In PN learning, by approximating R(g) based on Eq. (1), we can get an empirical risk estimator as
$$\hat{R}_{pn}(g) = \frac{\pi}{n_+} \sum_{x_i \in X_+} \ell(g(x_i), +1) + \frac{1-\pi}{n_-} \sum_{x_j \in X_-} \ell(g(x_j), -1).$$

For any fixed $g$, $\hat{R}_{pn}(g)$ is an unbiased and consistent estimator to $R(g)$ and its convergence rate is
of order $O_p(1/\sqrt{n_+} + 1/\sqrt{n_-})$ according to the central limit theorem [25], where $O_p$ denotes the
order in probability.
In PU learning, $X_-$ is not available and then $R_-(g)$ cannot be directly estimated. However, [7] has
shown that we can estimate $R(g)$ without any bias if $\ell$ satisfies the following symmetric condition:

$$\ell(t, +1) + \ell(t, -1) = 1. \quad (2)$$
Specifically, let $R_{u,-}(g) = \mathbb{E}_X[\ell(g(X), -1)] = \pi \mathbb{E}_+[\ell(g(X), -1)] + (1-\pi) R_-(g)$ be the risk when
U data are regarded as N data. Given Eq. (2), we have $\mathbb{E}_+[\ell(g(X), -1)] = 1 - R_+(g)$, and hence

$$R(g) = 2\pi R_+(g) + R_{u,-}(g) - \pi. \quad (3)$$

By approximating $R(g)$ based on (3) using $X_+$ and $X_u$, we can obtain

$$\hat{R}_{pu}(g) = -\pi + \frac{2\pi}{n_+} \sum_{x_i \in X_+} \ell(g(x_i), +1) + \frac{1}{n_u} \sum_{x_j \in X_u} \ell(g(x_j), -1).$$

Although $\hat{R}_{pu}(g)$ regards $X_u$ as N data and aims at separating $X_+$ and $X_u$ if being minimized, it is
an unbiased and consistent estimator to $R(g)$ with a convergence rate $O_p(1/\sqrt{n_+} + 1/\sqrt{n_u})$ [25].
Similarly, in NU learning $R_+(g)$ cannot be directly estimated. Let $R_{u,+}(g) = \mathbb{E}_X[\ell(g(X), +1)] =
\pi R_+(g) + (1-\pi)\mathbb{E}_-[\ell(g(X), +1)]$. Given Eq. (2), $\mathbb{E}_-[\ell(g(X), +1)] = 1 - R_-(g)$, and

$$R(g) = R_{u,+}(g) + 2(1-\pi) R_-(g) - (1-\pi). \quad (4)$$

By approximating $R(g)$ based on (4) using $X_u$ and $X_-$, we can obtain

$$\hat{R}_{nu}(g) = -(1-\pi) + \frac{1}{n_u} \sum_{x_i \in X_u} \ell(g(x_i), +1) + \frac{2(1-\pi)}{n_-} \sum_{x_j \in X_-} \ell(g(x_j), -1).$$
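For concreteness, a small Python sketch of the three empirical estimators; the zero-one loss used here satisfies the symmetric condition (2), and `g` is any vectorized score function (this mirrors the formulas above, not a particular library):

import numpy as np

def zero_one(t, y):
    # l01(t, y) = (1 - sign(t*y)) / 2; satisfies l(t,+1) + l(t,-1) = 1
    return (1.0 - np.sign(t * y)) / 2.0

def risk_pn(g, x_pos, x_neg, pi, loss=zero_one):
    return (pi * np.mean(loss(g(x_pos), +1))
            + (1.0 - pi) * np.mean(loss(g(x_neg), -1)))

def risk_pu(g, x_pos, x_unl, pi, loss=zero_one):
    # U data are treated as N; the -pi and 2*pi terms remove the bias, cf. Eq. (3)
    return (-pi + 2.0 * pi * np.mean(loss(g(x_pos), +1))
            + np.mean(loss(g(x_unl), -1)))

def risk_nu(g, x_neg, x_unl, pi, loss=zero_one):
    # mirror image of risk_pu, cf. Eq. (4)
    return (-(1.0 - pi) + np.mean(loss(g(x_unl), +1))
            + 2.0 * (1.0 - pi) * np.mean(loss(g(x_neg), -1)))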
On the loss function In order to train $g$ by minimizing these estimators, it remains to specify the
loss $\ell$. The zero-one loss $\ell_{01}(t, y) = (1 - \mathrm{sign}(ty))/2$ satisfies (2) but is non-smooth. [7] proposed
to use a scaled ramp loss as the surrogate loss for $\ell_{01}$ in PU learning:

$$\ell_{sr}(t, y) = \max\{0, \min\{1, (1 - ty)/2\}\},$$

instead of the popular hinge loss that does not satisfy (2). Let $I(g) = \mathbb{E}_{(X,Y)}[\ell_{01}(g(X), Y)]$ be the
risk of $g$ w.r.t. $\ell_{01}$ under $p(x, y)$. Then, $\ell_{sr}$ is neither an upper bound of $\ell_{01}$ so that $I(g) \le R(g)$ is
not guaranteed, nor a convex loss so that it gets more difficult to know whether $\ell_{sr}$ is classification-calibrated or not [19].¹ If it is, we are able to control the excess risk w.r.t. $\ell_{01}$ by that w.r.t. $\ell$. Here
we prove the classification calibration of $\ell_{sr}$, and consequently it is a safe surrogate loss for $\ell_{01}$.

Theorem 1. The scaled ramp loss $\ell_{sr}$ is classification-calibrated (see Appendix A for the proof).
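The symmetric condition (2) is easy to verify numerically for this loss; a quick Python check:

import numpy as np

def scaled_ramp(t, y):
    # l_sr(t, y) = max(0, min(1, (1 - t*y)/2)); Lipschitz in t with L = 1/2
    return np.maximum(0.0, np.minimum(1.0, (1.0 - t * y) / 2.0))

t = np.linspace(-5.0, 5.0, 1001)
assert np.allclose(scaled_ramp(t, +1) + scaled_ramp(t, -1), 1.0)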
3 Theoretical comparisons based on risk bounds
When learning is involved, suppose we are given a function class $\mathcal{G}$, and let $g^* = \arg\min_{g \in \mathcal{G}} R(g)$
be the optimal decision function in $\mathcal{G}$, and let $\hat{g}_{pn} = \arg\min_{g \in \mathcal{G}} \hat{R}_{pn}(g)$, $\hat{g}_{pu} = \arg\min_{g \in \mathcal{G}} \hat{R}_{pu}(g)$, and
$\hat{g}_{nu} = \arg\min_{g \in \mathcal{G}} \hat{R}_{nu}(g)$ be arbitrary global minimizers of the three risk estimators. Furthermore, let
$R^* = \inf_g R(g)$ and $I^* = \inf_g I(g)$ denote the Bayes risks w.r.t. $\ell$ and $\ell_{01}$, where the infimum of $g$
is over all measurable functions.
In this section, we derive and compare risk bounds of the three risk minimizers $\hat{g}_{pn}$, $\hat{g}_{pu}$ and $\hat{g}_{nu}$ under
the following mild assumption on $\mathcal{G}$, $p(x)$, $p_+(x)$ and $p_-(x)$: there is a constant $C_{\mathcal{G}} > 0$ such that

$$\mathfrak{R}_{n,q}(\mathcal{G}) \le C_{\mathcal{G}} / \sqrt{n} \quad (5)$$

for any marginal density $q(x) \in \{p(x), p_+(x), p_-(x)\}$, where

$$\mathfrak{R}_{n,q}(\mathcal{G}) = \mathbb{E}_{X \sim q^n} \mathbb{E}_\sigma \left[ \sup_{g \in \mathcal{G}} \frac{1}{n} \sum_{x_i \in X} \sigma_i g(x_i) \right]$$

is the Rademacher complexity of $\mathcal{G}$ for the sampling of size $n$ from $q(x)$ (that is, $X = \{x_1, \ldots, x_n\}$
and $\sigma = \{\sigma_1, \ldots, \sigma_n\}$, with each $x_i$ drawn from $q(x)$ and each $\sigma_i$ a Rademacher variable) [18].

A special case is covered, namely, sets of hyperplanes with bounded normals and feature maps:

$$\mathcal{G} = \{g(x) = \langle w, \phi(x) \rangle_{\mathcal{H}} \mid \|w\|_{\mathcal{H}} \le C_w, \|\phi(x)\|_{\mathcal{H}} \le C_\phi\}, \quad (6)$$

where $\mathcal{H}$ is a Hilbert space with an inner product $\langle \cdot, \cdot \rangle_{\mathcal{H}}$, $w \in \mathcal{H}$ is a normal vector, $\phi : \mathbb{R}^d \to \mathcal{H}$ is a
feature map, and $C_w > 0$ and $C_\phi > 0$ are constants [26].
¹A loss function $\ell$ is classification-calibrated if and only if there is a convex, invertible and nondecreasing
transformation $\psi_\ell$ with $\psi_\ell(0) = 0$, such that $\psi_\ell(I(g) - \inf_g I(g)) \le R(g) - \inf_g R(g)$ [19].
3.1 Risk bounds
Let $L_\ell$ be the Lipschitz constant of $\ell$ in its first parameter. To begin with, we establish the learning
guarantee of $\hat{g}_{pu}$ (the proof can be found in Appendix A).

Theorem 2. Assume (2). For any $\delta > 0$, with probability at least $1 - \delta$,²

$$R(\hat{g}_{pu}) - R(g^*) \le 8\pi L_\ell \mathfrak{R}_{n_+,p_+}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + 2\pi\sqrt{\frac{2\ln(4/\delta)}{n_+}} + \sqrt{\frac{2\ln(4/\delta)}{n_u}}, \quad (7)$$

where $\mathfrak{R}_{n_+,p_+}(\mathcal{G})$ and $\mathfrak{R}_{n_u,p}(\mathcal{G})$ are the Rademacher complexities of $\mathcal{G}$ for the sampling of size $n_+$
from $p_+(x)$ and the sampling of size $n_u$ from $p(x)$. Moreover, if $\ell$ is a classification-calibrated loss,
there exists a nondecreasing $\psi$ with $\psi(0) = 0$, such that with probability at least $1 - \delta$,

$$I(\hat{g}_{pu}) - I^* \le \psi\!\left( R(g^*) - R^* + 8\pi L_\ell \mathfrak{R}_{n_+,p_+}(\mathcal{G}) + 4 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + 2\pi\sqrt{\frac{2\ln(4/\delta)}{n_+}} + \sqrt{\frac{2\ln(4/\delta)}{n_u}} \right). \quad (8)$$
In Theorem 2, $R(\hat{g}_{pu})$ and $I(\hat{g}_{pu})$ are w.r.t. $p(x, y)$, though $\hat{g}_{pu}$ is trained from two samples following
$p_+(x)$ and $p(x)$. We can see that (7) is an upper bound of the estimation error of $\hat{g}_{pu}$ w.r.t. $\ell$, whose
right-hand side (RHS) is small if $\mathcal{G}$ is small; (8) is an upper bound of the excess risk of $\hat{g}_{pu}$ w.r.t. $\ell_{01}$,
whose RHS also involves the approximation error of $\mathcal{G}$ (i.e., $R(g^*) - R^*$) that is small if $\mathcal{G}$ is large.
When $\mathcal{G}$ is fixed and satisfies (5), we have $\mathfrak{R}_{n_+,p_+}(\mathcal{G}) = O(1/\sqrt{n_+})$ and $\mathfrak{R}_{n_u,p}(\mathcal{G}) = O(1/\sqrt{n_u})$,
and then

$$R(\hat{g}_{pu}) - R(g^*) \to 0, \qquad I(\hat{g}_{pu}) - I^* \to \psi(R(g^*) - R^*)$$

in $O_p(1/\sqrt{n_+} + 1/\sqrt{n_u})$. On the other hand, when the size of $\mathcal{G}$ grows with $n_+$ and $n_u$ properly,
those complexities of $\mathcal{G}$ vanish slower in order than $O(1/\sqrt{n_+})$ and $O(1/\sqrt{n_u})$ but we may have

$$R(\hat{g}_{pu}) - R(g^*) \to 0, \qquad I(\hat{g}_{pu}) - I^* \to 0,$$

which means $\hat{g}_{pu}$ approaches the Bayes classifier if $\ell$ is a classification-calibrated loss, in an order
slower than $O_p(1/\sqrt{n_+} + 1/\sqrt{n_u})$ due to the growth of $\mathcal{G}$.
Similarly, we can derive the learning guarantees of $\hat{g}_{pn}$ and $\hat{g}_{nu}$ for comparisons. We will just focus
on estimation error bounds, because excess risk bounds are their immediate corollaries.
Theorem 3. Assume (2). For any $\delta > 0$, with probability at least $1 - \delta$,

$$R(\hat{g}_{pn}) - R(g^*) \le 4\pi L_\ell \mathfrak{R}_{n_+,p_+}(\mathcal{G}) + 4(1-\pi) L_\ell \mathfrak{R}_{n_-,p_-}(\mathcal{G}) + \pi\sqrt{\frac{2\ln(4/\delta)}{n_+}} + (1-\pi)\sqrt{\frac{2\ln(4/\delta)}{n_-}}, \quad (9)$$

where $\mathfrak{R}_{n_-,p_-}(\mathcal{G})$ is the Rademacher complexity of $\mathcal{G}$ for the sampling of size $n_-$ from $p_-(x)$.

Theorem 4. Assume (2). For any $\delta > 0$, with probability at least $1 - \delta$,

$$R(\hat{g}_{nu}) - R(g^*) \le 4 L_\ell \mathfrak{R}_{n_u,p}(\mathcal{G}) + 8(1-\pi) L_\ell \mathfrak{R}_{n_-,p_-}(\mathcal{G}) + \sqrt{\frac{2\ln(4/\delta)}{n_u}} + 2(1-\pi)\sqrt{\frac{2\ln(4/\delta)}{n_-}}. \quad (10)$$
In order to compare the bounds, we simplify (9), (7) and (10) using Eq. (5). To this end, we define
$f(\delta) = 4 L_\ell C_{\mathcal{G}} + \sqrt{2\ln(4/\delta)}$. For the special case of $\mathcal{G}$ defined in (6), define $f(\delta)$ accordingly as
$f(\delta) = 4 L_\ell C_w C_\phi + \sqrt{2\ln(4/\delta)}$.
Corollary 5. The estimation error bounds below hold separately with probability at least $1 - \delta$:

$$R(\hat{g}_{pn}) - R(g^*) \le f(\delta) \cdot \{\pi/\sqrt{n_+} + (1-\pi)/\sqrt{n_-}\}, \quad (11)$$
$$R(\hat{g}_{pu}) - R(g^*) \le f(\delta) \cdot \{2\pi/\sqrt{n_+} + 1/\sqrt{n_u}\}, \quad (12)$$
$$R(\hat{g}_{nu}) - R(g^*) \le f(\delta) \cdot \{1/\sqrt{n_u} + 2(1-\pi)/\sqrt{n_-}\}. \quad (13)$$
3.2 Finite-sample comparisons
Note that the three risk minimizers $\hat{g}_{pn}$, $\hat{g}_{pu}$ and $\hat{g}_{nu}$ work in similar problem settings and their bounds
in Corollary 5 are proven using exactly the same proof technique. Then, the differences in bounds
reflect the intrinsic differences between risk minimizers. Let us compare those bounds. Define

$$\alpha_{pu,pn} = \left(\pi/\sqrt{n_+} + 1/\sqrt{n_u}\right) \big/ \left((1-\pi)/\sqrt{n_-}\right), \quad (14)$$
$$\alpha_{nu,pn} = \left((1-\pi)/\sqrt{n_-} + 1/\sqrt{n_u}\right) \big/ \left(\pi/\sqrt{n_+}\right). \quad (15)$$

Eqs. (14) and (15) constitute our first main result.
²Here, the probability is over repeated sampling of data for training $\hat{g}_{pu}$, while in Lemma 8, it will be for
evaluating $\hat{R}_{pu}(g)$.
Table 1: Properties of $\alpha_{pu,pn}$ and $\alpha_{nu,pn}$.

|                    | no specification |               | sizes are proportional |            | $\theta_{pn} = \pi/(1-\pi)$ |
|                    | mono. inc.       | mono. dec.    | mono. inc.             | mono. dec. | minimum                     |
| $\alpha_{pu,pn}$   | $\pi$, $n_-$     | $n_+$, $n_u$  | $\pi$, $\theta_{pu}$   | $\theta_{pn}$ | $2\sqrt{\sqrt{\theta_{pu}} + \theta_{pu}}$ |
| $\alpha_{nu,pn}$   | $n_+$            | $\pi$, $n_-$, $n_u$ | $\theta_{pn}$, $\theta_{nu}$ | $\pi$ | $2\sqrt{\sqrt{\theta_{nu}} + \theta_{nu}}$ |
Theorem 6 (Finite-sample comparisons). Assume (5) is satisfied. Then the estimation error bound
of $\hat{g}_{pu}$ in (12) is tighter than that of $\hat{g}_{pn}$ in (11) if and only if $\alpha_{pu,pn} < 1$; also, the estimation error
bound of $\hat{g}_{nu}$ in (13) is tighter than that of $\hat{g}_{pn}$ if and only if $\alpha_{nu,pn} < 1$.
Proof. Fix $\pi$, $n_+$, $n_-$ and $n_u$, and then denote by $V_{pn}$, $V_{pu}$ and $V_{nu}$ the values of the RHSs of (11),
(12) and (13). In fact, the definitions of $\alpha_{pu,pn}$ and $\alpha_{nu,pn}$ in (14) and (15) came from

$$\alpha_{pu,pn} = \frac{V_{pu} - \pi f(\delta)/\sqrt{n_+}}{V_{pn} - \pi f(\delta)/\sqrt{n_+}}, \qquad \alpha_{nu,pn} = \frac{V_{nu} - (1-\pi) f(\delta)/\sqrt{n_-}}{V_{pn} - (1-\pi) f(\delta)/\sqrt{n_-}}.$$

As a consequence, compared with $V_{pn}$, $V_{pu}$ is smaller and (12) is tighter if and only if $\alpha_{pu,pn} < 1$,
and $V_{nu}$ is smaller and (13) is tighter if and only if $\alpha_{nu,pn} < 1$.
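The finite-sample criterion is directly computable from $\pi$ and the sample sizes alone; a Python sketch:

import math

def alpha_pu_pn(pi, n_pos, n_neg, n_unl):
    # Eq. (14): the bound of g_pu beats the bound of g_pn iff this is < 1
    return ((pi / math.sqrt(n_pos) + 1.0 / math.sqrt(n_unl))
            / ((1.0 - pi) / math.sqrt(n_neg)))

def alpha_nu_pn(pi, n_pos, n_neg, n_unl):
    # Eq. (15): the bound of g_nu beats the bound of g_pn iff this is < 1
    return (((1.0 - pi) / math.sqrt(n_neg) + 1.0 / math.sqrt(n_unl))
            / (pi / math.sqrt(n_pos)))

print(alpha_pu_pn(0.2, 100, 100, 10000))  # 0.375 < 1: the PU bound is tighter
print(alpha_nu_pn(0.2, 100, 100, 10000))  # 4.5   > 1: the NU bound is looser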
We analyze some properties of $\alpha_{pu,pn}$ before going to our second main result. The most important
property is that it relies on $\pi$, $n_+$, $n_-$ and $n_u$ only; it is independent of $\mathcal{G}$, $p(x, y)$, $p(x)$, $p_+(x)$ and
$p_-(x)$ as long as (5) is satisfied. Next, $\alpha_{pu,pn}$ is obviously a monotonic function of $\pi$, $n_+$, $n_-$ and
$n_u$. Furthermore, it is unbounded no matter if $\pi$ is fixed or not. Properties of $\alpha_{nu,pn}$ are similar, as
summarized in Table 1.

Implications of the monotonicity of $\alpha_{pu,pn}$ are given as follows. Intuitively, when other factors are
fixed, larger $n_u$ or $n_-$ improves $\hat{g}_{pu}$ or $\hat{g}_{pn}$ respectively. However, it is complicated why $\alpha_{pu,pn}$ is
monotonically decreasing with $n_+$ and increasing with $\pi$. The weight of the empirical average of
$X_+$ is $2\pi$ in $\hat{R}_{pu}(g)$ and $\pi$ in $\hat{R}_{pn}(g)$, as in $\hat{R}_{pu}(g)$ it also joins the estimation of $(1-\pi)R_-(g)$. It
makes $X_+$ more important for $\hat{R}_{pu}(g)$, and thus larger $n_+$ improves $\hat{g}_{pu}$ more than $\hat{g}_{pn}$. Moreover,
$(1-\pi)R_-(g)$ is directly estimated in $\hat{R}_{pn}(g)$ and the concentration $O_p((1-\pi)/\sqrt{n_-})$ is better if
$\pi$ is larger, whereas it is indirectly estimated through $R_{u,-}(g) - \pi(1 - R_+(g))$ in $\hat{R}_{pu}(g)$ and the
concentration $O_p(\pi/\sqrt{n_+} + 1/\sqrt{n_u})$ is worse if $\pi$ is larger. As a result, when the sample sizes are
fixed $\hat{g}_{pu}$ is more (or less) favorable as $\pi$ decreases (or increases).
A natural question is what the monotonicity of $\alpha_{pu,pn}$ would be if we enforce $n_+$, $n_-$ and $n_u$ to be
proportional. To answer this question, we assume $n_+/n_- = \theta_{pn}$, $n_+/n_u = \theta_{pu}$ and $n_-/n_u = \theta_{nu}$
where $\theta_{pn}$, $\theta_{pu}$ and $\theta_{nu}$ are certain constants; then (14) and (15) can be rewritten as

$$\alpha_{pu,pn} = \frac{\pi + \sqrt{\theta_{pu}}}{(1-\pi)\sqrt{\theta_{pn}}}, \qquad \alpha_{nu,pn} = \frac{1 - \pi + \sqrt{\theta_{nu}}}{\pi/\sqrt{\theta_{pn}}}.$$

As shown in Table 1, $\alpha_{pu,pn}$ is now increasing with $\theta_{pu}$ and decreasing with $\theta_{pn}$. It is because, for
instance, when $\theta_{pn}$ is fixed and $\theta_{pu}$ increases, $n_u$ is meant to decrease relatively to $n_+$ and $n_-$.
Finally, the properties will dramatically change if we enforce $\theta_{pn} = \pi/(1-\pi)$, which approximately
holds in ordinary supervised learning. Under this constraint, we have

$$\alpha_{pu,pn} = \frac{\pi + \sqrt{\theta_{pu}}}{\sqrt{\pi(1-\pi)}} \ge 2\sqrt{\sqrt{\theta_{pu}} + \theta_{pu}},$$

where the equality is achieved at $\tilde{\pi} = \sqrt{\theta_{pu}}/(2\sqrt{\theta_{pu}} + 1)$. Here, $\alpha_{pu,pn}$ decreases with $\pi$ if $\pi < \tilde{\pi}$
and increases with $\pi$ if $\pi > \tilde{\pi}$, though it is not convex in $\pi$. Only if $n_u$ is sufficiently larger than $n_+$
(e.g., $\theta_{pu} < 0.04$) could $\alpha_{pu,pn} < 1$ be possible and $\hat{g}_{pu}$ have a tighter estimation error bound.
3.3 Asymptotic comparisons
In practice, we may find that $\hat{g}_{pu}$ is worse than $\hat{g}_{pn}$ and $\alpha_{pu,pn} > 1$ given $X_+$, $X_-$ and $X_u$. This is
probably the consequence especially when $n_u$ is not sufficiently larger than $n_+$ and $n_-$. Should we
then try to collect much more U data or just give up PU learning? Moreover, if we are able to have as
many U data as possible, is there any solution that would be provably better than PN learning?
We answer these questions by asymptotic comparisons. Notice that each pair of $(n_+, n_u)$ yields a
value of the RHS of (12), each $(n_+, n_-)$ yields a value of the RHS of (11), and consequently each
triple of $(n_+, n_-, n_u)$ determines a value of $\alpha_{pu,pn}$. Define the limits of $\alpha_{pu,pn}$ and $\alpha_{nu,pn}$ as

$$\alpha^\infty_{pu,pn} = \lim_{n_+, n_-, n_u \to \infty} \alpha_{pu,pn}, \qquad \alpha^\infty_{nu,pn} = \lim_{n_+, n_-, n_u \to \infty} \alpha_{nu,pn}.$$

Recall that $n_+$, $n_-$ and $n_u$ are independent, and we need two conditions for the existence of $\alpha^\infty_{pu,pn}$
and $\alpha^\infty_{nu,pn}$: $n_+ \to \infty$ and $n_- \to \infty$ in the same order and $n_u \to \infty$ faster in order than them. It is
a bit stricter than what is necessary, but is consistent with a practical assumption: P and N data are
roughly equally expensive, whereas U data are much cheaper than P and N data. Intuitively, since
$\alpha_{pu,pn}$ and $\alpha_{nu,pn}$ measure relative qualities of the estimation error bounds of $\hat{g}_{pu}$ and $\hat{g}_{nu}$ against
that of $\hat{g}_{pn}$, $\alpha^\infty_{pu,pn}$ and $\alpha^\infty_{nu,pn}$ measure relative qualities of the limits of those bounds accordingly.
In order to illustrate properties of $\alpha^\infty_{pu,pn}$ and $\alpha^\infty_{nu,pn}$, assume only $n_u$ approaches infinity while $n_+$
and $n_-$ stay finite, so that $\alpha^\infty_{pu,pn} = \pi\sqrt{n_-}/((1-\pi)\sqrt{n_+})$ and $\alpha^\infty_{nu,pn} = (1-\pi)\sqrt{n_+}/(\pi\sqrt{n_-})$.
Thus, $\alpha^\infty_{pu,pn} \cdot \alpha^\infty_{nu,pn} = 1$, which implies $\alpha^\infty_{pu,pn} < 1$ or $\alpha^\infty_{nu,pn} < 1$ unless $n_+/n_- = \pi^2/(1-\pi)^2$.
In principle, this exception should be exceptionally rare since $n_+/n_-$ is a rational number whereas
$\pi^2/(1-\pi)^2$ is a real number. This argument constitutes our second main result.
Theorem 7 (Asymptotic comparisons). Assume (5) and one set of conditions below are satisfied:

(a) n_+ < ∞, n_− < ∞ and n_u → ∞. In this case, let α∗ = π√n_− / ((1 − π)√n_+);

(b) 0 < lim_{n_+,n_−→∞} n_+/n_− < ∞ and lim_{n_+,n_−,n_u→∞} (n_+ + n_−)/n_u = 0. In this case, let α∗ = π / ((1 − π)√γ̄_pn), where γ̄_pn = lim_{n_+,n_−→∞} n_+/n_−.

Then, either the limit of the estimation error bounds of ĝ_pu will improve on that of ĝ_pn (i.e., α∞_pu,pn < 1) if α∗ < 1, or the limit of the bounds of ĝ_nu will improve on that of ĝ_pn (i.e., α∞_nu,pn < 1) if α∗ > 1. The only exception is n_+/n_− = π²/(1 − π)² in (a) or γ̄_pn = π²/(1 − π)² in (b).

Proof. Note that α∗ = α∞_pu,pn in both cases. The proof of case (a) has been given as an illustration of the properties of α∞_pu,pn and α∞_nu,pn. The proof of case (b) is analogous.
As a result, when we find that ĝ_pu is worse than ĝ_pn and α_pu,pn > 1, we should look at α∗ defined in Theorem 7. If α∗ < 1, ĝ_pu is promising and we should collect more U data; if instead α∗ > 1, we should give up on ĝ_pu, but ĝ_nu is promising and we should collect more U data as well. In addition, the gap between α∗ and one indicates how many U data would be sufficient: if the gap is significant, slightly more U data may be enough; if the gap is slight, significantly more U data may be necessary. In practice, however, U data are cheaper but not free, and we cannot have as many U data as possible. Therefore, ĝ_pn is still of practical importance given limited budgets.
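As a worked illustration of how Theorem 7 might be used in practice, the sketch below computes α∗ for case (a) and applies the decision rule just described. This is a literal transcription of that rule under our reconstruction of the theorem; the function names are ours.

```python
import math

def alpha_star_case_a(pi, n_pos, n_neg):
    # alpha* from Theorem 7(a): (pi * sqrt(n-)) / ((1 - pi) * sqrt(n+)).
    return pi * math.sqrt(n_neg) / ((1 - pi) * math.sqrt(n_pos))

def advice(pi, n_pos, n_neg):
    a = alpha_star_case_a(pi, n_pos, n_neg)
    if a < 1:
        return "g_pu is promising: collect more U data"
    if a > 1:
        return "g_nu is promising: collect more U data"
    return "exceptional case n+/n- = pi^2/(1-pi)^2: neither limit improves on PN"

print(advice(0.5, 45, 5))   # alpha* = sqrt(5)/sqrt(45) = 1/3 < 1, so pursue PU
```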
3.4 Remarks

Theorem 2 relies on a fundamental lemma of the uniform deviation from the risk estimator R̂_pu(g) to the risk R(g):

Lemma 8. For any δ > 0, with probability at least 1 − δ,

    sup_{g∈G} |R̂_pu(g) − R(g)| ≤ 4πL_ℓ R_{n_+,p_+}(G) + 2L_ℓ R_{n_u,p}(G) + 2π√(ln(4/δ)/(2n_+)) + √(ln(4/δ)/(2n_u)).

In Lemma 8, R(g) is w.r.t. p(x, y), though R̂_pu(g) is w.r.t. p_+(x) and p(x). The Rademacher complexities R_{n_+,p_+}(G) and R_{n_u,p}(G) are also w.r.t. p_+(x) and p(x), and they can be bounded easily for G defined in Eq. (6).
Theorems 6 and 7 rely on (5). Thanks to it, we can simplify Theorems 2, 3 and 4. In fact, (5) holds not only for the special case of G defined in (6), but also for the vast majority of discriminative models in machine learning that are nonlinear in parameters, such as decision trees (cf. Theorem 17 in [16]) and feedforward neural networks (cf. Theorem 18 in [16]).
Theorem 2 in [7] is a similar bound of the same order as our Lemma 8. That theorem is based on a tricky decomposition of the risk

    E_{(X,Y)}[ℓ(g(X), Y)] = πE_+[ℓ̃(g(X), +1)] + E_{(X,Y)}[ℓ̃(g(X), Y)],

where the surrogate loss ℓ̃(t, y) = (2/(y + 3))ℓ(t, y) is not the ℓ used for risk minimization, and labels of X_u are needed for risk evaluation, so that no further bound is implied. Lemma 8 uses the same ℓ as risk minimization and requires no label of X_u for evaluating R̂_pu(g), so that it can serve as the stepping stone to our estimation error bound in Theorem 2.
[Figure 1: Theoretical and experimental results based on artificial data. Panels: (a) theoretical α_pu,pn and α_nu,pn (and the reference line α = 1) as n_u varies; (b) experimental misclassification rates (%) of ĝ_pu, ĝ_nu and ĝ_pn as n_u varies; (c) the theoretical curves as π varies; (d) the experimental misclassification rates as π varies.]
4 Experiments

In this section, we experimentally validate our theoretical findings.

Artificial data  Here, X_+, X_− and X_u are in R² and drawn from three marginal densities

    p_+(x) = N(+1₂/√2, I₂),    p_−(x) = N(−1₂/√2, I₂),    p(x) = πp_+(x) + (1 − π)p_−(x),

where N(μ, Σ) is the normal distribution with mean μ and covariance Σ, and 1₂ and I₂ are the all-one vector and identity matrix of size 2. The test set contains one million data points drawn from p(x, y).

The model g(x) = ⟨w, x⟩ + b, with w ∈ R² and b ∈ R, and the scaled ramp loss ℓ_sr are employed. In addition, an ℓ₂-regularization is added with the regularization parameter fixed to 10⁻³, and there is no hard constraint on ‖w‖₂ or ‖x‖₂ as in Eq. (6). The solver for minimizing the three regularized risk estimators comes from [7] (refer also to [27, 28] for the optimization technique).
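For reference, here is a minimal sketch of the artificial data generation (the seed and variable names are ours; the solver and the scaled ramp loss from [7] are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
pi, n_pos, n_neg, n_unl = 0.5, 45, 5, 100
mu = np.ones(2) / np.sqrt(2.0)                 # +1_2 / sqrt(2)

X_pos = rng.normal(loc=+mu, size=(n_pos, 2))   # draws from p+ = N(+mu, I_2)
X_neg = rng.normal(loc=-mu, size=(n_neg, 2))   # draws from p- = N(-mu, I_2)

# p = pi * p+ + (1 - pi) * p-: choose a mixture component per unlabeled point.
signs = np.where(rng.random(n_unl) < pi, 1.0, -1.0)
X_unl = rng.normal(loc=signs[:, None] * mu, size=(n_unl, 2))
```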
The results are reported in Figure 1. In (a)(b), n_+ = 45, n_− = 5, π = 0.5, and n_u varies from 5 to 200; in (c)(d), n_+ = 45, n_− = 5, n_u = 100, and π varies from 0.05 to 0.95. Specifically, (a) shows α_pu,pn and α_nu,pn as functions of n_u, and (c) shows them as functions of π. For the experimental results, ĝ_pn, ĝ_pu and ĝ_nu were trained based on 100 random samplings for every n_u in (b) and every π in (d), and means with standard errors of the misclassification rates are shown, as ℓ_sr is classification-calibrated. Note that the empirical misclassification rates are essentially the risks w.r.t. ℓ_01, as there were one million test data; the fluctuations are attributed to the non-convex nature of ℓ_sr. Also, the curve of ĝ_pn is not a flat line in (b), since its training data at every n_u were exactly the same as the training data of ĝ_pu and ĝ_nu, for fair experimental comparison.

In Figure 1, the theoretical and experimental results are highly consistent. The red and blue curves intersect at nearly the same positions in (a)(b) and in (c)(d), even though the risk minimizers in the experiments were locally optimal and regularized, making our estimation error bounds inexact.
Benchmark data  Table 2 summarizes the specification of the benchmarks, which were downloaded from several sources including the IDA benchmark repository [29], the UCI machine learning repository, the semi-supervised learning book [30], and the European ESPRIT 5516 project.³ In Table 2, three rows describe the number of features, the number of data, and the ratio of P data according to the true class labels. Given a random sampling of X_+, X_− and X_u, the test set contains all the remaining data if they number fewer than 10⁴, and otherwise is drawn uniformly from the remaining data with size 10⁴.

For benchmark data, the linear model used for the artificial data is not enough, and its kernel version is employed. Consider training ĝ_pu for example. Given a random sampling, g(x) = ⟨w, φ(x)⟩ + b is used, where w ∈ R^{n_+ + n_u}, b ∈ R, and φ : R^d → R^{n_+ + n_u} is the empirical kernel map [26] based on X_+ and X_u for the Gaussian kernel. The kernel width and the regularization parameter are selected by five-fold cross-validation for each risk minimizer and each random sampling.

³ See http://www.raetschlab.org/Members/raetsch/benchmark/ for IDA, http://archive.ics.uci.edu/ml/ for UCI, http://olivier.chapelle.cc/ssl-book/ for the SSL book and https://www.elen.ucl.ac.be/neural-nets/Research/Projects/ELENA/ for the ELENA project.
Table 2: Specification of benchmark datasets.

             banana  phoneme  magic  image  german  twonorm  waveform  spambase  coil2
  dim             2        5     10     18      20       20        21        57    241
  size         5300     5404  19020   2086    1000     7400      5000      4597   1500
  P ratio      .448     .293   .648   .570    .300     .500      .329      .394   .500
[Figure 2: Experimental results based on benchmark data by varying n_u. Panel (a) shows the theoretical α_pu,pn and α_nu,pn (with the reference line α = 1); panels (b) banana, (c) phoneme, (d) magic, (e) image, (f) german, (g) twonorm, (h) waveform, (i) spambase and (j) coil2 show misclassification rates (%) of ĝ_pu, ĝ_nu and ĝ_pn as n_u varies from 0 to 300.]

[Figure 3: Experimental results based on benchmark data by varying π. Panels as in Figure 2, with π varying from 0 to 1.]
The results obtained by varying n_u and π are reported in Figures 2 and 3, respectively. Similarly to Figure 1, in Figure 2, n_+ = 25, n_− = 5, π = 0.5, and n_u varies from 10 to 300, while in Figure 3, n_+ = 25, n_− = 5, n_u = 200, and π varies from 0.05 to 0.95. Figures 2(a) and 3(a) depict α_pu,pn and α_nu,pn as functions of n_u and π, and all the remaining subfigures depict means with standard errors of the misclassification rates based on 100 random samplings for every n_u and π.

The theoretical and experimental results based on benchmarks are still highly consistent. However, unlike in Figure 1(b), in Figure 2 only the errors of ĝ_pu decrease with n_u, and the errors of ĝ_nu just fluctuate randomly. This may be because benchmark data are more difficult than artificial data, so that n_− = 5 is not sufficiently informative for ĝ_nu even when n_u = 300. On the other hand, we can see that Figures 3(a) and 1(c) look alike, and so do all the remaining subfigures in Figure 3 and Figure 1(d). Nevertheless, the three intersections in Figure 3(a) are closer together than those in Figure 1(c), as n_u = 200 in Figure 3(a) and n_u = 100 in Figure 1(c); the three intersections would become a single one if n_u = ∞. In the experimental results, the three curves in Figure 3 are likewise closer than those in Figure 1(d) when π ≥ 0.6, which demonstrates the validity of our theoretical findings.
5 Conclusions

In this paper, we studied a fundamental problem in PU learning, namely, when PU learning is likely to outperform PN learning. Estimation error bounds of the risk minimizers were established in PN, PU and NU learning. We found that, under the very mild assumption (5), the PU (or NU) bound is tighter than the PN bound if α_pu,pn in (14) (or α_nu,pn in (15)) is smaller than one (cf. Theorem 6); and either the limit of α_pu,pn or that of α_nu,pn will be smaller than one if the size of U data increases faster in order than the sizes of P and N data (cf. Theorem 7). We validated our theoretical findings experimentally using one artificial data set and nine benchmark data sets.
Acknowledgments
GN was supported by the JST CREST program and Microsoft Research Asia. MCdP, YM, and MS
were supported by the JST CREST program. TS was supported by JSPS KAKENHI 15J09111.
References

[1] F. Denis. PAC learning from positive statistical queries. In ALT, 1998.
[2] F. Letouzey, F. Denis, and R. Gilleron. Learning from positive and unlabeled examples. In ALT, 2000.
[3] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In KDD, 2008.
[4] G. Ward, T. Hastie, S. Barry, J. Elith, and J. Leathwick. Presence-only data and the EM algorithm. Biometrics, 65(2):554–563, 2009.
[5] C. Scott and G. Blanchard. Novelty detection: Unlabeled data definitely help. In AISTATS, 2009.
[6] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. Journal of Machine Learning Research, 11:2973–3009, 2010.
[7] M. C. du Plessis, G. Niu, and M. Sugiyama. Analysis of learning from positive and unlabeled data. In NIPS, 2014.
[8] M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. In ICML, 2015.
[9] G. Dupret and B. Piwowarski. A user browsing model to predict search engine click data from past observations. In SIGIR, 2008.
[10] N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In WSDM, 2008.
[11] O. Chapelle and Y. Zhang. A dynamic Bayesian network click model for web search ranking. In WWW, 2009.
[12] J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, and N. D. Lawrence. Dataset Shift in Machine Learning. MIT Press, 2009.
[13] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
[14] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures on Machine Learning, pages 169–207. Springer, 2004.
[15] V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
[16] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[17] R. Meir and T. Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839–860, 2003.
[18] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[19] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[20] M. Saerens, P. Latinne, and C. Decaestecker. Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure. Neural Computation, 14(1):21–41, 2002.
[21] M. C. du Plessis and M. Sugiyama. Semi-supervised learning of class balance under class-prior change by distribution matching. In ICML, 2012.
[22] A. Iyer, S. Nath, and S. Sarawagi. Maximum mean discrepancy for class ratio estimation: Convergence bounds and kernel selection. In ICML, 2014.
[23] M. C. du Plessis, G. Niu, and M. Sugiyama. Class-prior estimation for learning from positive and unlabeled data. In ACML, 2015.
[24] H. G. Ramaswamy, C. Scott, and A. Tewari. Mixture proportion estimation via kernel embedding of distributions. In ICML, 2016.
[25] K.-L. Chung. A Course in Probability Theory. Academic Press, 1968.
[26] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, 2001.
[27] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In ICML, 2006.
[28] A. L. Yuille and A. Rangarajan. The concave-convex procedure (CCCP). In NIPS, 2001.
[29] G. Rätsch, T. Onoda, and K. R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001.
[30] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
[31] C. McDiarmid. On the method of bounded differences. In J. Siemons, editor, Surveys in Combinatorics, pages 148–188. Cambridge University Press, 1989.
[32] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes. Springer, 1991.
Fairness in Learning: Classic and Contextual Bandits*
Matthew Joseph
Michael Kearns
Jamie Morgenstern
Aaron Roth
University of Pennsylvania, Department of Computer and Information Science
{majos, mkearns, jamiemor, aaroth}@cis.upenn.edu
Abstract
We introduce the study of fairness in multi-armed bandit problems. Our fairness
definition demands that, given a pool of applicants, a worse applicant is never
favored over a better one, despite a learning algorithm's uncertainty over the true
payoffs. In the classic stochastic bandits problem we provide a provably fair
algorithm based on "chained" confidence intervals, and prove a cumulative regret
bound with a cubic dependence on the number of arms. We further show that
any fair algorithm must have such a dependence, providing a strong separation
between fair and unfair learning that extends to the general contextual case. In
the general contextual case, we prove a tight connection between fairness and the
KWIK (Knows What It Knows) learning model: a KWIK algorithm for a class of
functions can be transformed into a provably fair contextual bandit algorithm and
vice versa. This tight connection allows us to provide a provably fair algorithm
for the linear contextual bandit problem with a polynomial dependence on the
dimension, and to show (for a different class of functions) a worst-case exponential
gap in regret between fair and non-fair learning algorithms.
1 Introduction
Automated techniques from statistics and machine learning are increasingly being used to make
decisions that have important consequences on people?s lives, including hiring [24], lending [10],
policing [25], and even criminal sentencing [7]. These high stakes uses of machine learning have led
to increasing concern in law and policy circles about the potential for (often opaque) machine learning
techniques to be discriminatory or unfair [13, 6]. At the same time, despite the recognized importance
of this problem, very little is known about technical solutions to the problem of ?unfairness?, or the
extent to which ?fairness? is in conflict with the goals of learning.
In this paper, we consider the extent to which a natural fairness notion is compatible with learning in
a bandit setting, which models many of the applications of machine learning mentioned above. In
this setting, the learner is a sequential decision maker, which must choose at each time step t which
decision to make from a finite set of k "arms". The learner then observes a stochastic reward from
(only) the arm chosen, and is tasked with maximizing total earned reward (equivalently, minimizing
total regret) by learning the relationships between arms and rewards over time. This models, for
example, the problem of learning the association between loan applicants and repayment rates over
time by repeatedly granting loans and observing repayment.
We analyze two variants of the setting: in the classic case, the learner's only source of information
comes from choices made in previous rounds. In the contextual case, before each round the learner
additionally observes some potentially informative context for each arm (for example representing
the content of an individual's loan application), and the expected reward is some unknown function of the context. The difficulty in this task stems from the unknown relationships between arms, rewards, and (in the contextual case) contexts: these relationships must be learned.

* A full technical version of this paper is available on arXiv [17].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We introduce fairness into the bandit learning framework by saying that it is unfair to preferentially
choose one arm over another if the chosen arm has lower expected quality than the unchosen arm.
In the loan application example, this means that it is unfair to preferentially choose a less-qualified
applicant (in terms of repayment probability) over a more-qualified applicant.
It is worth noting that this definition of fairness (formalized in the preliminaries) is entirely consistent
with the optimal policy, which can simply choose at each round to play uniformly at random from
the arms maximizing the expected reward. This is because, it seems, this definition of fairness
is entirely consistent with the goal of maximizing expected reward. Indeed, the fairness constraint
exactly states that the algorithm cannot favor low reward arms!
Our main conceptual result is that this intuition is incorrect in the face of unknown reward functions.
Although fairness is consistent with implementing the optimal policy, it may not be consistent with
learning the optimal policy. We show that fairness always has a cost, in terms of the achievable
learning rate of the algorithm. For some problems, the cost is mild, but for others, the cost is large.
1.1 Our Results
We divide our results into two parts. First, in Section 3 we study the classic stochastic multi-armed bandit problem [20, 19]. In this case, there are no contexts, and each arm i has a fixed but unknown average reward μ_i. In Section 3.1 we give a fair algorithm, FAIRBANDITS, and show that it guarantees nontrivial regret after T = O(k³) rounds. We then show in Section 3.2 that it is not possible to do better: any fair learning algorithm can be forced to endure constant per-round regret for T = Ω(k³) rounds, thus tightly characterizing the optimal regret attainable by fair algorithms in this setting, and formally separating it from the regret attainable by algorithms absent a fairness constraint.
We then move on to the general contextual bandit setting in Section 4 and prove a broad characterization result, relating fair contextual bandit learning to KWIK ("Knows What It Knows") learning [22]. Informally, a KWIK learning algorithm receives a series of unlabeled examples and must either predict a label or announce "I Don't Know". The KWIK requirement then stipulates that any predicted label must be close to the true label. The quality of a KWIK learning algorithm is characterized by its "KWIK bound", which provides an upper bound on the maximum number of times the algorithm can be forced to announce "I Don't Know". For any contextual bandit problem (defined by the
set of functions C from which the payoff functions may be selected), we show that the optimal
learning rate of any fair algorithm is determined by the best KWIK bound for the class C. We prove
this constructively via a reduction showing how to convert a KWIK learning algorithm into a fair
contextual bandit algorithm in Section 4.1, and vice versa in Section 4.2.
This general connection immediately allows us to import known results for KWIK learning [22]. It
implies that some fair contextual bandit problems are easy and achieve non-trivial regret guarantees
in only polynomially many rounds. Conversely, it also implies that some contextual bandit problems
which are easy without the fairness constraint become hard once we impose the fairness constraint, in
that any fair algorithm must suffer constant per-round regret for exponentially many rounds. By way
of example, we will show in Section 4.1 that real contexts with linear reward functions are easy, and
we will show in Section 4.3 that boolean context vectors and conjunction reward functions are hard.
1.2 Other Related Work
Many papers study the problem of fairness in machine learning. One line of work studies algorithms for batch classification which achieve group fairness (otherwise known as equality of outcomes or statistical parity), or algorithms that avoid disparate impact (see e.g. [11, 23, 18, 15, 16], and [2] for a study of auditing existing algorithms for disparate impact). While statistical parity is sometimes
a desirable or legally required goal, as observed by Dwork et al. [14] and others, it suffers from a
number of drawbacks. First, if different populations indeed have different statistical properties, then it
can be at odds with accurate classification. Second, even in cases when statistical parity is attainable
with an optimal classifier, it does not prevent discrimination at an individual level. This led Dwork
et al. [14] to encourage the study of individual fairness, which we focus on here.
Dwork et al. [14] also proposed and explored a technical definition of individual fairness, formalizing the idea that "similar individuals should be treated similarly" by presupposing a task-specific quality metric on individuals and proposing that fair algorithms should satisfy a Lipschitz condition on this metric. Our definition of fairness is similar, in that the expected reward of each arm is a natural metric through which we define fairness. However, where Dwork et al. [14] presuppose the existence of a "fair" metric on individuals (thus encoding much of the relevant challenge, as studied by Zemel et al. [27]), our notion of fairness is entirely aligned with the goal of the algorithm designer and is satisfied
[27] ? our notion of fairness is entirely aligned with the goal of the algorithm designer and is satisfied
by the optimal policy. Nevertheless, it affects the space of feasible learning algorithms, because it
interferes with learning an optimal policy, which depends on the unknown reward functions.
At a technical level, our work is related to Amin et al. [4] and Abernethy et al. [1], which also relate
KWIK learning to bandit learning in a different context unrelated to fairness.
2 Preliminaries
We study the contextual bandit setting, defined by a domain X, a set of "arms" [k] := {1, . . . , k}, and a class C of functions of the form f : X → [0, 1]. For each arm j there is some function f_j ∈ C, unknown to the learner. In rounds t = 1, . . . , T, an adversary reveals to the algorithm a context x^t_j for each arm². An algorithm A then chooses an arm i_t and observes a stochastic reward r^t_{i_t} for the arm it chose. We assume r^t_j ∼ D^t_j with E[r^t_j] = f_j(x^t_j), for some distribution D^t_j over [0, 1].
Let Π be the set of policies mapping contexts to distributions over arms, X^k → Δ^k, and π* the optimal policy, which selects a distribution over arms as a function of contexts to maximize the expected reward of those arms. The pseudo-regret of an algorithm A on contexts x¹, . . . , x^T is defined as follows, where πᵗ represents A's distribution on arms at round t:

    Σ_t E_{i*_t ∼ π*(xᵗ)}[f_{i*_t}(x^t_{i*_t})] − E_{i_t ∼ πᵗ}[Σ_t f_{i_t}(x^t_{i_t})] = Regret(x¹, . . . , x^T),

shorthanded as A's regret. The optimal policy π* pulls arms with highest expectation at each round, so

    Regret(x¹, . . . , x^T) = Σ_t max_j f_j(x^t_j) − E_{i_t ∼ πᵗ}[Σ_t f_{i_t}(x^t_{i_t})].

We say that A satisfies regret bound R(T) if max_{x¹,...,x^T} Regret(x¹, . . . , x^T) ≤ R(T).
Let the history hᵗ ∈ (X^k × [k] × [0, 1])^{t−1} be a record of the t − 1 rounds experienced by A: t − 1 3-tuples encoding the realization of the contexts, the arm chosen, and the reward observed. π^t_{j|hᵗ} denotes the probability that A chooses arm j after observing contexts xᵗ, given hᵗ. For simplicity, we will often drop the superscript t on the history when referring to the distribution over arms: π^t_{j|h} := π^t_{j|hᵗ}.
We now define what it means for a contextual bandit algorithm to be δ-fair with respect to its arms. Informally, this will mean that A plays arm i with higher probability than arm j in round t only if i has higher mean than j in round t, for all i, j ∈ [k] and all rounds t.

Definition 1 (δ-fair). A is δ-fair if, for all sequences of contexts x¹, . . . , x^T and all payoff distributions D^t_1, . . . , D^t_k, with probability at least 1 − δ over the realization of the history h, for all rounds t ∈ [T] and all pairs of arms j, j′ ∈ [k],

    π^t_{j|h} > π^t_{j′|h} only if f_j(x^t_j) > f_{j′}(x^t_{j′}).
KWIK learning  Let B be an algorithm that takes as input a sequence of examples x¹, . . . , x^T and, when given some xᵗ ∈ X, outputs either a prediction ŷᵗ ∈ [0, 1] or else ŷᵗ = ⊥, representing "I don't know". When ŷᵗ = ⊥, B receives feedback yᵗ such that E[yᵗ] = f(xᵗ). B is an (ε, δ)-KWIK learning algorithm for C : X → [0, 1], with KWIK bound m(ε, δ), if for any sequence of examples x¹, x², . . . and any target f ∈ C, with probability at least 1 − δ, both:

1. Its numerical predictions are accurate: for all t, ŷᵗ ∈ {⊥} ∪ [f(xᵗ) − ε, f(xᵗ) + ε], and
2. B rarely outputs "I don't know": Σ_{t=1}^T 𝕀[ŷᵗ = ⊥] ≤ m(ε, δ).

² Often, the contextual bandit problem is defined such that there is a single context xᵗ in each round. Our model is equivalent: we could take x^t_j := xᵗ for each j.
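To fix intuition for this protocol, here is a toy KWIK learner for a deterministic target over a finite domain: it answers ⊥ exactly once per distinct context and is exact afterwards, so its KWIK bound is the domain size. This is only an illustration of the interface (the class and names are ours, it assumes noiseless feedback, and it is not one of the algorithms of [22]):

```python
BOTTOM = None  # stands for the "I don't know" symbol, written ⊥ above

class MemorizingKWIK:
    """Toy (0, 0)-KWIK learner for a deterministic target on a finite domain."""

    def __init__(self):
        self.values = {}  # contexts seen so far, mapped to their labels

    def predict(self, x):
        # Numeric prediction if x was seen before; otherwise announce ⊥.
        return self.values.get(x, BOTTOM)

    def observe(self, x, y):
        # Feedback arrives only after a ⊥ round, per the KWIK protocol.
        self.values[x] = y
```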
2.1 Specializing to Classic Stochastic Bandits

In Sections 3.1 and 3.2, we study the classic stochastic bandit problem, an important special case of the contextual bandit setting described above. Here we specialize our notation to this setting, in which there are no contexts. For each arm j ∈ [k], there is an unknown distribution D_j over [0, 1] with unknown mean μ_j. A learning algorithm A chooses an arm i_t in round t, and observes the reward r^t_{i_t} ∼ D_{i_t} for the arm that it chose. Let i* ∈ [k] be the arm with highest expected reward: i* ∈ arg max_{i∈[k]} μ_i. The pseudo-regret of an algorithm A on D_1, . . . , D_k is now just

    T · μ_{i*} − E_{i_t ∼ πᵗ}[Σ_{0≤t≤T} μ_{i_t}] = Regret(T, D_1, . . . , D_k).

Let hᵗ ∈ ([k] × [0, 1])^{t−1} denote a record of the t − 1 rounds experienced by the algorithm so far, represented by t − 1 2-tuples encoding the previous arms chosen and rewards observed. We write π^t_{j|hᵗ} to denote the probability that A chooses arm j given history hᵗ. Again, we will often drop the superscript t on the history when referring to the distribution over arms: π^t_{j|h} := π^t_{j|hᵗ}. δ-fairness in the classic bandit setting then specializes as follows:

Definition 2 (δ-fairness in the classic bandits setting). A is δ-fair if, for all distributions D_1, . . . , D_k, with probability at least 1 − δ over the history h, for all t ∈ [T] and all j, j′ ∈ [k]:

    π^t_{j|h} > π^t_{j′|h} only if μ_j > μ_{j′}.
3 Classic Bandits Setting

3.1 Fair Classic Stochastic Bandits: An Algorithm

In this section, we describe a simple and intuitive modification of the standard UCB algorithm [5], called FAIRBANDITS, prove that it is fair, and analyze its regret bound. The algorithm and its analysis highlight a key idea that is important to the design of fair algorithms in this setting: that of chaining confidence intervals. Intuitively, as a δ-fair algorithm explores different arms it must play two arms j_1 and j_2 with equal probability until it has sufficient data to deduce, with confidence 1 − δ, either that μ_{j_1} > μ_{j_2} or vice versa. FAIRBANDITS does this by maintaining empirical estimates of the means of both arms, together with confidence intervals around those means. To be safe, the algorithm must play the arms with equal probability while their confidence intervals overlap. The same reasoning applies simultaneously to every pair of arms. Thus, if the confidence intervals of each pair of arms j_i and j_{i+1} overlap for each i ∈ [k], the algorithm is forced to play all arms j with equal probability. This is the case even if the confidence intervals around arm j_k and arm j_1 are far from overlapping, i.e. even when the algorithm can be confident that μ_{j_1} > μ_{j_k}.

This chaining approach initially seems overly conservative when ruling out arms, as reflected in its regret bound, which is only non-trivial once T ≫ k³. In contrast, the UCB algorithm [5] achieves non-trivial regret after T = O(k) rounds. However, our lower bound in Section 3.2 shows that on some instances any fair algorithm must suffer constant per-round regret for the first Ω(k³) rounds.

We now give an overview of the behavior of FAIRBANDITS. At every round t, FAIRBANDITS identifies the arm i*_t = arg max_i u^t_i that has the largest upper confidence bound amongst the active arms. At each round t, we say i is linked to j if [ℓ^t_i, u^t_i] ∩ [ℓ^t_j, u^t_j] ≠ ∅, and i is chained to j if i and j are in the same component of the transitive closure of the linked relation. FAIRBANDITS plays uniformly at random among all active arms chained to arm i*_t.

Initially, the active set contains all arms. The active set of arms at each subsequent round is defined to be the set of arms that are chained to the arm with highest upper confidence bound at the previous round. The algorithm can be confident that arms that have become unchained from the arm with the highest upper confidence bound have means lower than the means of all chained arms, and hence such arms can be safely removed from the active set, never to be played again. This has the useful property that the active set of arms can only shrink: at any round t, Sᵗ ⊆ Sᵗ⁻¹.
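The chained set can be computed directly from the intervals. A small sketch (our own helper, reused in the sketches below) performs a graph search over the "linked" relation:

```python
def chained_arms(intervals, start):
    """Arms in the same component as `start` under the transitive closure of
    the overlapping-interval ("linked") relation. intervals: {arm: (lo, hi)}."""
    component, frontier = {start}, [start]
    while frontier:
        i = frontier.pop()
        lo_i, hi_i = intervals[i]
        for j, (lo_j, hi_j) in intervals.items():
            # i and j are linked iff their closed intervals intersect.
            if j not in component and max(lo_i, lo_j) <= min(hi_i, hi_j):
                component.add(j)
                frontier.append(j)
    return component
```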
We first observe that with probability 1 − δ, all of the confidence intervals maintained by FAIRBANDITS(δ) contain the true means of their respective arms over all rounds. We prove this claim, along with all other claims in this paper, in the full technical version of this paper [17].
1: procedure FAIRBANDITS(δ)
2:   S⁰ ← {1, . . . , k}  ▷ Initialize the active set
3:   for i = 1, . . . , k do  ▷ Initialize each arm
4:     μ̂⁰_i ← 1/2, u⁰_i ← 1, ℓ⁰_i ← 0, n⁰_i ← 0
5:   for t = 1 to T do
6:     i*_t ← arg max_{i ∈ Sᵗ⁻¹} u^t_i  ▷ Find arm with highest ucb
7:     Sᵗ ← {j | j chains to i*_t, j ∈ Sᵗ⁻¹}  ▷ Update active set
8:     j* ← (x ∼_R Sᵗ)  ▷ Select active arm at random
9:     n^{t+1}_{j*} ← n^t_{j*} + 1
10:    μ̂^{t+1}_{j*} ← (μ̂^t_{j*} · n^t_{j*} + r^t_{j*}) / n^{t+1}_{j*}  ▷ Pull arm j*, update its mean estimate
11:    B ← √( ln((π(t+1))²/3δ) / (2n^{t+1}_{j*}) )
12:    ℓ^{t+1}_{j*}, u^{t+1}_{j*} ← μ̂^{t+1}_{j*} − B, μ̂^{t+1}_{j*} + B  ▷ Update interval for pulled arm
13:    for j ∈ Sᵗ, j ≠ j* do
14:      μ̂^{t+1}_j ← μ̂^t_j, n^{t+1}_j ← n^t_j, u^{t+1}_j ← u^t_j, ℓ^{t+1}_j ← ℓ^t_j
Lemma 1. With probability at least 1 − δ, for every arm i and round t, ℓ^t_i ≤ μ_i ≤ u^t_i.

The fairness of FAIRBANDITS follows: with high probability the algorithm constructs good confidence intervals, so it can confidently choose between arms without violating fairness.

Theorem 1. FAIRBANDITS(δ) is δ-fair.
Having proven that FAIRBANDITS is indeed fair, it remains to upper-bound its regret. We proceed by a series of lemmas, first bounding the probability that any arm active in round t has been pulled substantially fewer times than its expectation.

Lemma 2. With probability at least 1 − 2t⁻²,

    n^t_i ≥ t/k − √( t · ln(2kt²/δ) / 2 )   for all i ∈ Sᵗ (i.e., for all active arms in round t).

We now use this lower bound on the number of pulls of any active arm i in round t to upper-bound β(t), an upper bound on the confidence interval width FAIRBANDITS uses for any active arm i in round t.

Lemma 3. Consider any round t and any arm i ∈ Sᵗ. Condition on n^t_i ≥ t/k − √( t · ln(2kt²/δ) / 2 ). Then

    u^t_i − ℓ^t_i ≤ 2 √( ln((πt)²/3δ) / ( 2( t/k − √( t · ln(2kt²/δ) / 2 ) ) ) ) = β(t).
We stitch together these lemmas as follows: Lemma 2 bounds the probability that any arm i active in round t has been pulled substantially fewer times than its expectation, and Lemma 3 upper-bounds the width of any confidence interval used by FAIRBANDITS in round t by β(t). Together, these enable us to determine how both the number of arms in the active set, and the spread of their confidence intervals, evolve over time. This translates into the following regret bound.

Theorem 2. If δ < 1/√T, then FAIRBANDITS has regret R(T) = O( √( k³ T ln(Tk/δ) ) ).

Two points are worth highlighting in Theorem 2. First, this bound becomes non-trivial (i.e., the average per-round regret is ≪ 1) for T = Ω(k³). As we show in the next section, it is not possible to improve on this. Second, the bound may appear to have suboptimal dependence on T when compared to unconstrained regret bounds (where the dependence on T is often described as logarithmic). However, it is known that Ω(√(kT)) regret is necessary even in the unrestricted setting (without fairness) if one does not make data-specific assumptions on an instance [9]. It would be possible to state a logarithmic dependence on T in our setting as well while making assumptions on the gaps between arms, but since our fairness constraint manifests itself as a cost that depends on k, we choose for clarity to avoid such assumptions; without them, our dependence on T is also optimal.
3.2 Fair Classic Stochastic Bandits: A Lower Bound

We now show that the regret bound for FAIRBANDITS has an optimal dependence on k: no fair algorithm has diminishing regret before T = Ω(k³) rounds. At a high level, we construct our lower bound example to embody the "worst of both worlds" for fair algorithms: the arm payoff means are just close enough together that the chain takes a long time to break, and just far enough apart that the algorithm incurs high regret while the chain remains unbroken. This lets us prove the formal statement below. The full proof, which proceeds via Bayesian reasoning using priors for the arm means, may be found in our technical companion paper [17].

Theorem 3. There is a distribution P over k-arm instances of the stochastic multi-armed bandit problem such that any fair algorithm run on P experiences constant per-round regret for at least T = Ω(k³ ln(1/δ)) rounds.

Thus, we tightly characterize the optimal regret attainable by fair algorithms in the classic bandits setting, and formally separate it from the regret attainable by algorithms absent a fairness constraint. Note that this already shows a separation between the best possible learning rates for contextual bandit learning with and without the fairness constraint: the classic multi-armed bandit problem is a special case of every contextual bandit problem, and for general contextual bandit problems it is known how to obtain non-trivial regret after only T = O(k) rounds [3, 8, 12].
4 Contextual Bandits Setting

4.1 KWIK Learnability Implies Fair Bandit Learnability

In this section, we show that if a class of functions is KWIK learnable, then there is a fair algorithm for learning the same class of functions in the contextual bandit setting, with a regret bound polynomially related to the function class' KWIK bound. Intuitively, KWIK-learnability of a class of functions guarantees we can learn the function's behavior to a high degree of accuracy with a high degree of confidence. As fairness constrains an algorithm most before the algorithm has determined the payoff functions' behavior accurately, this guarantee enables us to learn fairly without incurring much additional regret. Formally, we prove the following polynomial relationship.

Theorem 4. For an instance of the contextual multi-armed bandit problem where f_j ∈ C for all j ∈ [k], if C is (ε, δ)-KWIK learnable with bound m(ε, δ), then KWIKTOFAIR(δ, T) is δ-fair and, for δ ≤ 1/√T, achieves regret bound

    R(T) = O( max( k² · m(ε*, min(δ, 1/T)/(kT²)), k³ ln(2k/δ) ) ),

where ε* = arg min_ε max( ε · T, k · m(ε, min(δ, 1/T)/(kT²)) ).
First, we construct an algorithm KWIKTOFAIR(δ, T) that uses the KWIK learning algorithm as a subroutine, and prove that it is δ-fair. A call to KWIKTOFAIR(δ, T) initializes a KWIK learner for each arm, and in each of the T rounds implicitly constructs a confidence interval around the prediction of each learner. If a learner makes a numeric-valued prediction, we interpret this as a confidence interval of width 2ε* centered at the prediction. If a learner outputs ⊥, we interpret this as a trivial confidence interval (covering all of [0, 1]). We then use the same chaining technique used in the classic setting to choose an arm from the set of arms chained to the predicted top arm. Whenever all learners output predictions, they need no feedback. When the learner for j outputs ⊥, if j is selected then we have feedback r^t_j to give it; on the other hand, if j isn't selected, we "roll back" the learning algorithm for j to before this round by not updating the algorithm's state.
1: procedure KWIKTOFAIR(δ, T)
2:   δ* ← min(δ, 1/T)/(kT²), ε* ← arg min_ε max( ε · T, k · m(ε, δ*) )
3:   Initialize a KWIK(ε*, δ*)-learner L_i and history h_i ← [ ] for each i ∈ [k]
4:   for 1 ≤ t ≤ T do
5:     S ← ∅  ▷ Initialize set of predictions S
6:     for i = 1, . . . , k do
7:       s^t_i ← L_i(x^t_i, h_i)
8:       S ← S ∪ {s^t_i}  ▷ Store prediction s^t_i
9:     if ⊥ ∈ S then
10:      Pull j* ← (x ∼_R [k]), receive reward r^t_{j*}  ▷ Pick arm at random from all arms
11:    else
12:      i*_t ← arg max_i s^t_i
13:      Sᵗ ← {j | (s^t_j − ε*, s^t_j + ε*) chains to (s^t_{i*_t} − ε*, s^t_{i*_t} + ε*)}
14:      Pull j* ← (x ∼_R Sᵗ), receive reward r^t_{j*}  ▷ Pick arm at random from active set
15:    h_{j*} ← h_{j*} :: (x^t_{j*}, r^t_{j*})  ▷ Update the history for L_{j*}
We begin by bounding the probability of certain failures of KWIKTOFAIR in Lemma 4.

Lemma 4. With probability at least 1 − min(δ, 1/T), for all rounds t and all arms i: (a) if s^t_i ∈ R then |s^t_i − f_i(x^t_i)| ≤ ε*, and (b) Σ_t 𝕀[s^t_i = ⊥ and i is pulled] ≤ m(ε*, δ*).

This in turn lets us prove the fairness of KWIKTOFAIR in Theorem 5. Intuitively, the KWIK algorithm's confidence about predictions translates into confidence about expected rewards, which lets us choose between arms without violating fairness.

Theorem 5. KWIKTOFAIR(δ, T) is δ-fair.

We now use the KWIK bounds of the KWIK learners to upper-bound the regret of KWIKTOFAIR(δ, T). We proceed by bounding the regret incurred in those rounds when all KWIK algorithms make a prediction (i.e., when we have a nontrivial confidence interval for each arm's expected reward) and then bounding the number of rounds for which some learner outputs ⊥ (i.e., when we choose randomly from all arms and thus may incur constant regret). These results combine to produce Lemma 5.

Lemma 5. KWIKTOFAIR(δ, T) achieves regret O( max( k² · m(ε*, δ*), k³ ln(Tk/δ) ) ).
Our presentation of KWIKTOFAIR(δ, T) assumes a known time horizon T. Its guarantees extend to the case in which T is unknown via the standard "doubling trick", which yields Theorem 4.

An important instance of the contextual bandit problem is the linear case, where C consists of the set of all linear functions of bounded norm in d dimensions, i.e. when the rewards of each arm are governed by an underlying linear regression model on contexts. Known KWIK algorithms [26] for the set of linear functions C then allow us, via our reduction, to give a fair contextual bandit algorithm for this setting with a polynomial regret bound.

Lemma 6 ([26]). Let C = {f_θ | f_θ(x) = ⟨θ, x⟩, θ ∈ R^d, ‖θ‖ ≤ 1} and X = {x ∈ R^d : ‖x‖ ≤ 1}. Then C is KWIK learnable with KWIK bound m(ε, δ) = Õ(d³/ε⁴).

An application of Theorem 4 then implies that KWIKTOFAIR has a polynomial regret guarantee for the class of linear functions.

Corollary 1. Let C and X be as in Lemma 6, and f_j ∈ C for each j ∈ [k]. Then KWIKTOFAIR(T, δ) using the learner from [26] has regret R(T) = Õ( max( T^{4/5} k^{6/5} d^{3/5}, k³ ln(k/δ) ) ).
4.2 Fair Bandit Learnability Implies KWIK Learnability

In this section, we show how to use a fair, no-regret contextual bandit algorithm to construct a KWIK learning algorithm whose KWIK bound has logarithmic dependence on the number of rounds T. Intuitively, any fair algorithm which achieves low regret must both be able to find and exploit an optimal arm (since the algorithm is no-regret) and can only exploit that arm once it has a tight understanding of the qualities of all arms (since the algorithm is fair). Thus, any fair no-regret algorithm will ultimately have tight (1 − δ)-confidence about each arm's reward function.

Theorem 6. Suppose A is a δ-fair algorithm for the contextual bandit problem over the class of functions C, with regret bound R(T, δ). Suppose also that there exist f ∈ C and x(ℓ) ∈ X such that for every ℓ ∈ [⌈1/ε⌉], f(x(ℓ)) = ℓ · ε. Then FAIRTOKWIK is an (ε, δ)-KWIK algorithm for C with KWIK bound m(ε, δ), where m(ε, δ) is the solution to

    ε · m(ε, δ) / 4 = R( m(ε, δ), δ/(2T) ).

Remark 1. The condition that C should contain a function that can take on values that are multiples of ε is for technical convenience; C can always be augmented by adding a single such function.
Our aim is to construct a KWIK algorithm B to predict labels for a sequence of examples labeled by some unknown function f* ∈ C. We provide a sketch of the algorithm, FAIRTOKWIK, below, and refer interested readers to our full technical paper [17] for a complete and formal description.

We use our fair algorithm to construct a KWIK algorithm as follows: we run our fair contextual bandit algorithm A on an instance that we construct online as examples xᵗ arrive for B. The idea is to simulate a two-arm instance, in which one arm's rewards are governed by f* (the function to be KWIK learned), and the other arm's rewards are governed by a function f that we can set to take any value in {0, ε, 2ε, . . . , 1}. For each input xᵗ, we perform a thought experiment and consider A's probability distribution over arms when facing a context which forces arm 2's payoff to take each of the values 0, ε, 2ε, . . . , 1. Since A is fair, A will play arm 1 with weakly higher probability than arm 2 for those ℓ with ℓε ≤ f*(xᵗ); analogously, A will play arm 1 with weakly lower probability than arm 2 for those ℓ with ℓε ≥ f*(xᵗ). If there are at least 2 values of ℓ for which arm 1 and arm 2 are played with equal probability, one of those contexts will force A to suffer ε regret, so we continue the simulation of A on one of those instances selected at random, forcing at least ε/2 regret in expectation, and at the same time have B return ⊥. B receives f*(xᵗ) on such a round, which is used to construct feedback for A. Otherwise, A must transition from playing arm 1 with strictly higher probability to playing arm 2 with strictly higher probability as ℓ increases: the point at which that occurs "sandwiches" the value of f*(xᵗ), since A's fairness implies this transition must occur when the expected payoff of arm 2 exceeds that of arm 1. B uses this value to output a numeric prediction.

An important fact we exploit is that we can query A's behavior on (xᵗ, x(ℓ)), for any xᵗ and ℓ ∈ [⌈1/ε⌉], without providing it feedback (we instead "roll back" its history to hᵗ, not including the query (xᵗ, x(ℓ))). We update A's history by providing it feedback only in rounds where B outputs ⊥. Finally, we note that, as in KWIKTOFAIR, the claims of FAIRTOKWIK extend to the infinite horizon case via the doubling trick.
4.3 An Exponential Separation Between Fair and Unfair Learning

In this section, we use this fair-KWIK equivalence to give a simple contextual bandit problem for which fairness imposes an exponential cost in its regret bound, unlike the polynomial cost proven in the linear case in Section 4.1. In this problem, the context domain is the d-dimensional boolean hypercube, X = {0, 1}^d; i.e., the context each round for each individual consists of d boolean attributes. Our class of functions C is the class of boolean conjunctions: C = {f | f(x) = x_{i_1} ∧ x_{i_2} ∧ · · · ∧ x_{i_k}, where 0 ≤ k ≤ d and i_1, . . . , i_k ∈ [d]}.

We first note that there exists a simple but unfair algorithm for this problem which obtains regret R(T) = O(k²d). A full description of this algorithm, called CONJUNCTIONBANDIT, may be found in our technical companion paper [17]. We now show that, in contrast, fair algorithms cannot guarantee subexponential regret in d. This relies upon a known lower bound for KWIK learning conjunctions [21]:

Lemma 7. There exists a sequence of examples (x¹, . . . , x^{2^d − 1}) such that for ε, δ ≤ 1/2, every (ε, δ)-KWIK learning algorithm B for the class C of conjunctions on d variables must output ⊥ for xᵗ for each t ∈ [2^d − 1]. Thus, B has a KWIK bound of at least m(ε, δ) = Ω(2^d).

We then use the equivalence between fair algorithms and KWIK learning to translate this lower bound on m(ε, δ) into a minimum worst-case regret bound for fair algorithms on conjunctions. We modify Theorem 6 to yield the following lemma.

Lemma 8. Suppose A is a δ-fair algorithm for the contextual bandit problem over the class C of conjunctions on d variables. If A has regret bound R(T, δ) then, for δ′ = 2Tδ, FAIRTOKWIK is a (0, δ′)-KWIK algorithm for C with KWIK bound m(0, δ′) = 4R(m(0, δ′), δ).

Lemma 7 then lets us lower-bound the worst-case regret of fair learning algorithms on conjunctions.

Corollary 2. For δ < 1/(2T), any δ-fair algorithm for the contextual bandit problem over the class C of conjunctions on d boolean variables has a worst-case regret bound of R(T) = Ω(2^d).

Together with the analysis of CONJUNCTIONBANDIT, this demonstrates a strong separation between fair and unfair contextual bandit algorithms: when the underlying functions mapping contexts to payoffs are conjunctions on d variables, there exists a sequence of contexts on which fair algorithms must incur regret exponential in d while unfair algorithms can achieve regret linear in d.
References
[1] Jacob D. Abernethy, Kareem Amin, Moez Draief, and Michael Kearns. Large-scale bandit problems and KWIK learning. In Proceedings of (ICML-13), pages 588–596, 2013.
[2] Philip Adler, Casey Falk, Sorelle A. Friedler, Gabriel Rybeck, Carlos Scheidegger, Brandon Smith, and
Suresh Venkatasubramanian. Auditing black-box models by obscuring features. CoRR, abs/1602.07043,
2016. URL http://arxiv.org/abs/1602.07043.
[3] Alekh Agarwal, Daniel J. Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. Taming
the monster: A fast and simple algorithm for contextual bandits. In Proceedings of ICML 2014, Beijing,
China, 21-26 June 2014, pages 1638?1646, 2014.
[4] Kareem Amin, Michael Kearns, and Umar Syed. Graphical models for bandit problems. arXiv preprint
arXiv:1202.3782, 2012.
[5] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem.
Machine learning, 47(2-3):235?256, 2002.
[6] Solon Barocas and Andrew D. Selbst. Big data?s disparate impact. California Law Review, 104, 2016.
Available at SSRN: http://ssrn.com/abstract=2477899.
[7] Anna Maria Barry-Jester, Ben Casselman, and Dana Goldstein. The new science of sentencing. The
Marshall Project, August 8 2015. URL https://www.themarshallproject.org/2015/08/04/
the-new-science-of-sentencing. Retrieved 4/28/2016.
[8] Alina Beygelzimer, John Langford, Lihong Li, Lev Reyzin, and Robert E. Schapire. Contextual bandit
algorithms with supervised learning guarantees. In Proceedings of AISTATS 2011, Fort Lauderdale, USA,
April 11-13, 2011, pages 19?26, 2011.
[9] Sébastien Bubeck and Nicolo Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Machine Learning, 5(1):1–122, 2012.
[10] Nanette Byrnes. Artificial intolerance. MIT Technology Review, March 28 2016.
[11] Toon Calders and Sicco Verwer. Three naive bayes approaches for discrimination-free classification. Data
Mining and Knowledge Discovery, 21(2):277?292, 2010.
[12] Wei Chu, Lihong Li, Lev Reyzin, and Robert E. Schapire. Contextual bandits with linear payoff functions.
In Proceedings of AISTATS 2011, Fort Lauderdale, USA, April 11-13, 2011, pages 208?214, 2011.
[13] Cary Coglianese and David Lehr. Regulating by robot: Administrative decision-making in the machinelearning era. Georgetown Law Journal, 2016. Forthcoming.
[14] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through
awareness. In Proceedings of ITCS 2012, pages 214?226. ACM, 2012.
[15] Michael Feldman, Sorelle A. Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian.
Certifying and removing disparate impact. In Proceedings of ACM SIGKDD 2015, Sydney, NSW, Australia,
August 10-13, 2015, pages 259?268, 2015.
[16] Benjamin Fish, Jeremy Kun, and Ádám D. Lelkes. A confidence-based approach for balancing fairness and accuracy. SIAM International Symposium on Data Mining, 2016.
[17] Matthew Joseph, Michael Kearns, Jamie Morgenstern, and Aaron Roth. Fairness in learning: Classic and
contextual bandits. CoRR, abs/1605.07139, 2016. URL http://arxiv.org/abs/1605.07139.
[18] Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization
approach. In Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on, pages
643?650. IEEE, 2011.
[19] Michael N Katehakis and Herbert Robbins. Sequential choice from several populations. PROCEEDINGSNATIONAL ACADEMY OF SCIENCES USA, 92:8584?8584, 1995.
[20] Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in
applied mathematics, 6(1):4?22, 1985.
[21] Lihong Li. A unifying framework for computational reinforcement learning theory. PhD thesis, Rutgers,
The State University of New Jersey, 2009.
[22] Lihong Li, Michael L Littman, Thomas J Walsh, and Alexander L Strehl. Knows what it knows: a
framework for self-aware learning. Machine learning, 82(3):399?443, 2011.
[23] Binh Thanh Luong, Salvatore Ruggieri, and Franco Turini. k-nn as an implementation of situation testing
for discrimination discovery and prevention. In Proceedings of ACM SIGKDD 2011, pages 502?510. ACM,
2011.
[24] Clair C Miller. Can an algorithm hire better than a human? The New York Times, June 25 2015.
[25] Cynthia Rudin. Predictive policing using machine learning to detect patterns of crime. Wired Magazine,
August 2013.
[26] Alexander L Strehl and Michael L Littman. Online linear regression and its application to model-based
reinforcement learning. In Advances in Neural Information Processing Systems, pages 1417?1424, 2008.
[27] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representations. In
Proceedings of (ICML-13), pages 325?333, 2013.
9
| 6355 |@word mild:1 version:2 achievable:1 polynomial:6 seems:2 norm:1 closure:1 simulation:1 jacob:1 attainable:5 pick:2 incurs:1 nsw:1 reduction:2 venkatasubramanian:2 mkearns:1 series:2 contains:1 daniel:1 existing:1 contextual:36 nt:4 com:1 beygelzimer:1 chu:1 must:14 import:1 applicant:5 john:3 numerical:1 subsequent:1 informative:1 j1:4 enables:1 drop:2 update:5 discrimination:3 selected:4 fewer:2 rudin:1 smith:1 granting:1 record:2 characterization:1 provides:1 lending:1 org:3 along:1 become:2 symposium:1 ik:1 katehakis:1 incorrect:1 prove:11 specialize:1 consists:2 combine:1 introduce:2 upenn:1 indeed:3 expected:10 embody:1 behavior:4 multi:6 little:1 armed:6 xti:2 increasing:1 becomes:1 spain:1 begin:1 unrelated:1 unfairness:1 formalizing:1 project:1 notation:1 bounded:1 what:4 underlying:2 substantially:2 morgenstern:2 proposing:1 nj:1 guarantee:8 pseudo:2 safely:1 every:7 ti:3 exactly:1 draief:1 classifier:1 demonstrates:1 appear:1 before:4 t1:1 modify:1 consequence:1 despite:2 encoding:3 era:1 lev:2 black:1 chose:2 studied:1 china:1 equivalence:2 conversely:1 unbroken:1 walsh:1 discriminatory:1 testing:1 regret:61 procedure:2 suresh:2 moez:1 empirical:1 thought:1 confidence:24 d1t:1 get:1 cannot:2 unlabeled:1 close:2 convenience:1 stj:2 context:23 www:1 equivalent:1 roth:2 maximizing:3 thanh:1 kale:1 formalized:1 simplicity:1 immediately:1 rule:1 machinelearning:1 pull:5 classic:15 population:2 notion:2 target:1 play:8 suppose:3 magazine:1 us:4 trick:2 jk:2 updating:1 legally:1 labeled:1 observed:3 monster:1 preprint:1 worst:5 verwer:1 earned:1 highest:5 removed:1 observes:4 mentioned:1 intuition:1 benjamin:1 constrains:1 reward:28 littman:2 chained:6 ultimately:1 sicco:1 weakly:2 tight:4 policing:2 incur:2 predictive:1 upon:1 aaroth:1 learner:15 eit:4 represented:1 jersey:1 forced:3 fast:1 dkt:1 describe:1 query:2 zemel:3 artificial:1 kevin:1 sentencing:3 outcome:1 abernethy:2 whose:1 presupposes:1 valued:1 say:2 otherwise:2 favor:1 statistic:1 satyen:1 fischer:1 itself:1 superscript:2 online:2 sequence:6 interferes:1 jamie:2 hire:1 j2:2 relevant:1 aligned:1 realization:2 moeller:1 reyzin:2 translate:1 omer:1 achieve:3 academy:1 amin:3 intuitive:1 description:2 requirement:1 produce:1 wired:1 ben:1 andrew:1 strong:2 sydney:1 predicted:2 come:1 implies:6 safe:1 drawback:1 repayment:3 attribute:1 stochastic:10 centered:1 human:1 australia:1 enable:1 implementing:1 preliminary:2 strictly:2 brandon:1 around:3 onjunction:2 mapping:2 predict:2 claim:3 matthew:2 achieves:4 friedler:2 label:4 maker:1 robbins:2 largest:1 vice:3 cary:1 mit:1 always:2 aim:1 avoid:2 hj:2 conjunction:9 corollary:2 focus:1 casey:1 june:2 maria:1 contrast:2 sigkdd:2 detect:1 leung:1 nn:1 lj:1 initially:2 diminishing:1 bandit:70 relation:1 transformed:1 subroutine:1 i1:1 selects:1 provably:3 endure:1 arg:6 classification:3 subexponential:1 among:1 interested:1 favored:1 jester:1 prevention:1 special:2 initialize:5 fairly:1 equal:4 construct:9 once:2 having:1 never:2 u0i:1 aware:2 represents:1 broad:1 yu:1 icml:3 fairness:40 others:2 richard:1 barocas:1 randomly:1 simultaneously:1 tightly:2 falk:1 individual:8 xtj:7 sandwich:1 ab:4 regulating:1 dwork:6 mining:3 lehr:1 tj:4 chain:4 accurate:2 kt:8 auditing:2 encourage:1 necessary:1 experience:1 respective:1 divide:1 circle:1 instance:8 boolean:5 marshall:1 cost:6 learnability:5 characterize:1 chooses:4 referring:2 confident:2 st:2 explores:1 adler:1 siam:1 international:2 xi1:1 lauderdale:2 pool:1 michael:8 together:5 analogously:1 again:2 thesis:1 satisfied:1 
cesa:2 choose:9 worse:1 luong:1 return:1 li:7 potential:1 jeremy:1 satisfy:1 depends:2 break:1 observing:2 analyze:2 linked:2 bayes:1 carlos:2 accuracy:2 roll:2 miller:1 yield:1 bayesian:1 itcs:1 accurately:1 rjt:7 worth:2 history:9 suffers:1 whenever:1 definition:7 failure:1 proof:1 hsu:1 ruggieri:1 hardt:1 manifest:1 knowledge:1 ut:1 auer:1 back:2 goldstein:1 higher:5 day:1 violating:2 reflected:1 supervised:1 wei:1 april:2 shrink:1 box:1 just:3 until:1 langford:2 ntj:2 receives:3 hand:1 sketch:1 overlapping:1 quality:4 usa:3 contain:2 true:3 equality:1 hence:1 regularization:1 moritz:1 round:51 hiring:1 width:3 self:1 maintained:1 covering:1 chaining:3 djt:2 complete:1 fj:7 reasoning:2 fi:1 ji:2 overview:1 exponentially:1 association:1 extend:2 relating:1 interpret:2 refer:1 multiarmed:1 versa:3 sorelle:2 feldman:1 rd:2 unconstrained:1 mathematics:1 similarly:1 akaho:1 dj:1 lihong:5 robot:1 alekh:1 deduce:1 nicolo:2 kwik:49 pitassi:2 retrieved:1 apart:1 forcing:1 store:1 certain:1 continue:1 life:1 herbert:2 minimum:1 unrestricted:1 additional:1 impose:1 recognized:1 determine:1 maximize:1 barry:1 full:5 desirable:1 multiple:1 stem:1 exceeds:1 technical:9 characterized:1 long:1 lai:1 n0i:1 specializing:1 impact:4 prediction:11 variant:1 regression:2 metric:4 tasked:1 expectation:4 arxiv:5 rutgers:1 sometimes:1 agarwal:1 receive:2 scheidegger:2 interval:17 else:2 source:1 unlike:1 reingold:1 odds:1 call:1 noting:1 easy:3 enough:2 automated:1 affect:1 fit:3 forthcoming:1 pennsylvania:1 nonstochastic:1 suboptimal:1 idea:3 presupposing:1 translates:2 absent:2 url:3 suffer:3 peter:1 proceed:2 york:1 repeatedly:1 remark:1 ritt:2 gabriel:1 useful:1 informally:2 dit:2 http:4 schapire:3 exist:1 fish:1 designer:1 overly:1 per:5 stipulates:1 write:1 group:1 key:1 nevertheless:1 alina:1 clarity:1 prevent:1 ht:3 asymptotically:1 convert:1 beijing:1 run:2 sti:8 uncertainty:1 selbst:1 opaque:1 swersky:1 extends:1 saying:1 announce:2 ruling:1 uti:5 reader:1 separation:4 arrive:1 wu:1 decision:4 entirely:3 bound:44 hi:2 played:2 nontrivial:2 occur:1 constraint:7 x2:2 certifying:1 simulate:1 franco:1 min:6 utj:2 ssrn:2 department:1 march:1 increasingly:1 joseph:2 modification:1 making:2 intuitively:4 ln:10 calder:1 remains:2 turn:1 xi2:1 know:10 available:2 obscuring:1 incurring:1 observe:1 salvatore:1 batch:1 shotaro:1 existence:1 thomas:1 denotes:1 top:1 graphical:1 maintaining:1 unifying:1 toon:1 umar:1 exploit:3 uj:1 hypercube:1 move:1 already:1 occurs:1 dependence:9 amongst:1 separate:1 separating:1 philip:1 extent:2 trivial:6 relationship:4 providing:3 minimizing:1 preferentially:2 equivalently:1 kun:1 robert:3 potentially:1 relate:1 statement:1 xik:1 disparate:4 constructively:1 design:1 implementation:1 policy:9 unknown:10 perform:1 bianchi:2 upper:10 finite:2 payoff:10 situation:1 august:3 criminal:1 david:1 pair:3 required:1 fort:2 connection:3 crime:1 conflict:1 california:1 learned:2 nti:2 barcelona:1 nip:1 able:1 adversary:1 proceeds:1 below:2 pattern:1 challenge:1 confidently:1 including:2 max:6 overlap:2 syed:1 natural:2 difficulty:1 treated:1 force:2 arm:109 representing:2 improve:1 technology:1 identifies:1 specializes:1 transitive:1 naive:1 jun:1 isn:1 taming:1 prior:1 understanding:1 review:2 discovery:2 georgetown:1 evolve:1 law:3 toniann:1 highlight:1 allocation:1 proven:2 facing:1 dana:1 incurred:1 degree:2 awareness:1 solon:1 sufficient:1 consistent:4 imposes:1 playing:2 balancing:1 strehl:2 compatible:1 ban:1 parity:3 free:1 qualified:2 formal:2 allow:1 pulled:4 face:1 
characterizing:1 kareem:2 feedback:6 dimension:2 world:1 cumulative:1 numeric:2 transition:2 rich:1 made:1 icdmw:1 adaptive:1 reinforcement:2 turini:1 far:3 polynomially:1 obtains:1 implicitly:1 active:16 reveals:1 conceptual:1 tuples:2 xi:1 don:4 additionally:1 toshihiro:1 learn:2 domain:2 anna:1 aistats:2 main:1 spread:1 bounding:4 big:1 paul:1 fair:87 toni:1 x1:7 augmented:1 cubic:1 binh:1 experienced:2 exponential:4 governed:3 unfair:8 administrative:1 theorem:11 companion:2 removing:1 specific:2 xt:22 jt:2 showing:1 bastien:1 cynthia:3 maxi:4 explored:1 dk:3 learnable:3 concern:1 exists:3 workshop:1 sequential:2 adding:1 importance:1 ci:1 corr:2 phd:1 sakuma:1 demand:1 horizon:2 gap:2 led:2 logarithmic:3 simply:1 tze:1 bubeck:1 highlighting:1 stitch:1 doubling:2 applies:1 satisfies:1 relies:1 acm:4 kamishima:1 goal:4 presentation:1 lipschitz:1 content:1 hard:2 feasible:1 loan:4 determined:2 infinite:1 uniformly:2 kearns:4 conservative:1 total:2 called:2 lemma:17 stake:1 ucb:3 aaron:2 formally:3 rarely:1 select:1 people:1 alexander:2 d1:3 |
5,920 | 6,356 | Probabilistic Linear Multistep Methods
Onur Teymur
Department of Mathematics
Imperial College London
[email protected]
Konstantinos Zygalakis
School of Mathematics
University of Edinburgh
[email protected]
Ben Calderhead
Department of Mathematics
Imperial College London
[email protected]
Abstract
We present a derivation and theoretical investigation of the Adams-Bashforth and
Adams-Moulton family of linear multistep methods for solving ordinary differential
equations, starting from a Gaussian process (GP) framework. In the limit, this
formulation coincides with the classical deterministic methods, which have been
used as higher-order initial value problem solvers for over a century. Furthermore,
the natural probabilistic framework provided by the GP formulation allows us to
derive probabilistic versions of these methods, in the spirit of a number of other
probabilistic ODE solvers presented in the recent literature [1, 2, 3, 4]. In contrast
to higher-order Runge-Kutta methods, which require multiple intermediate function
evaluations per step, Adams family methods make use of previous function evaluations, so that increased accuracy arising from a higher-order multistep approach
comes at very little additional computational cost. We show that through a careful
choice of covariance function for the GP, the posterior mean and standard deviation
over the numerical solution can be made to exactly coincide with the value given
by the deterministic method and its local truncation error respectively. We provide
a rigorous proof of the convergence of these new methods, as well as an empirical
investigation (up to fifth order) demonstrating their convergence rates in practice.
1 Introduction
Numerical solvers for differential equations are essential tools in almost all disciplines of applied
mathematics, due to the ubiquity of real-world phenomena described by such equations, and the lack
of exact solutions to all but the most trivial examples. The performance ? speed, accuracy, stability,
robustness ? of the numerical solver is of great relevance to the practitioner. This is particularly
the case if the computational cost of accurate solutions is significant, either because of high model
complexity or because a high number of repeated evaluations are required (which is typical if an
ODE model is used as part of a statistical inference procedure, for example). A field of work has
emerged which seeks to quantify this performance ? or indeed lack of it ? by modelling the numerical
errors probabilistically, and thence trace the effect of the chosen numerical solver through the entire
computational pipeline [5]. The aim is to be able to make meaningful quantitative statements about
the uncertainty present in the resulting scientific or statistical conclusions.
Recent work in this area has resulted in the development of probabilistic numerical methods, first
conceived in a very general way in [6]. A recent summary of the state of the field is given in [7].
The particular case of ODE solvers was first addressed in [8], formalised and extended in [1, 2, 3]
with a number of theoretical results recently given in [4]. The present paper modifies and extends the
constructions in [1, 4] to the multistep case, improving the order of convergence of the method but
avoiding the simplifying linearisation of the model required by the approaches of [2, 3]. Furthermore
we offer extensions to the convergence results in [4] to our proposed method and give empirical
results confirming convergence rates which point to the practical usefulness of our higher-order
approach without significantly increasing computational cost.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
1.1 Mathematical setup
We consider an Initial Value Problem (IVP) defined by an ODE
$$\frac{d}{dt}\, y(t, \theta) = f(y(t, \theta), t), \qquad y(t_0, \theta) = y_0 \qquad (1)$$
Here $y(\cdot, \theta) : \mathbb{R}_+ \to \mathbb{R}^d$ is the solution function, $f : \mathbb{R}^d \times \mathbb{R}_+ \to \mathbb{R}^d$ is the vector-valued function
that defines the ODE, and $y_0 \in \mathbb{R}^d$ is a given vector called the initial value. The dependence of y on
an m-dimensional parameter $\theta \in \mathbb{R}^m$ will be relevant if the aim is to incorporate the ODE into an
inverse problem framework, and this parameter is of scientific interest. Bayesian inference under this
setup (see [9]) is covered in most of the other treatments of this topic but is not the main focus of this
paper; we therefore suppress $\theta$ for the sake of clarity.
Some technical conditions are required in order to justify the existence and uniqueness of solutions to
(1). We assume that f is evaluable point-wise given y and t and also that it satisfies the Lipschitz
condition in y, namely ||f (y1 , t) f (y2 , t)|| ? Lf ||y1 y2 || for some Lf 2 R+ and all t, y1 and y2 ;
and also is continuous in t. These conditions imply the existence of a unique solution, by a classic
result usually known as the Picard-Lindel?f Theorem [10].
We consider a finite-dimensional discretisation of the problem, with our aim being to numerically
generate an N -dimensional vector1 y1:N approximating the true solution y(t1:N ) in an appropriate
sense. Following [1], we consider the joint distribution of y1:N and the auxiliary variables f0:N
(obtained by evaluating the function f ), with each yi obtained by sequentially conditioning on
previous evaluations of f . A basic requirement is that the marginal mean of y1:N should correspond
to some deterministic iterative numerical method operating on the grid t1:N . In our case this will be a
linear multistep method (LMM) of specified type. 2
Firstly we telescopically factorise the joint distribution as follows:
$$p(y_{1:N}, f_{0:N} \mid y_0) = p(f_0 \mid y_0) \prod_{i=0}^{N-1} p(y_{i+1} \mid y_{0:i}, f_{0:i})\; p(f_{i+1} \mid y_{0:i+1}, f_{0:i}) \qquad (2)$$
We can now make simplifying assumptions about the constituent distributions. Firstly since we have
assumed that f is evaluable point-wise given y and t,
$$p(f_i \mid y_i, \dots) = p(f_i \mid y_i) = \delta\big(f_i - f(y_i, t_i)\big), \qquad (3)$$
which is a Dirac-delta measure equivalent to simply performing this evaluation deterministically.
Secondly, we assume a finite moving window of dependence for each new state; in other words
$y_{i+1}$ is only allowed to depend on $y_i$ and $f_i, f_{i-1}, \dots, f_{i-(s-1)}$ for some $s \in \mathbb{N}$. This corresponds to
the inputs used at each iteration of the s-step Adams-Bashforth method. For i < s we will assume
dependence on only those derivative evaluations up to i; this initialisation detail is discussed briefly
in Section 5. Strictly speaking, $f_N$ is superfluous to our requirements (since we already have $y_N$) and
thus we can rewrite (2) as
$$p(y_{1:N}, f_{0:N-1} \mid y_0) = \prod_{i=0}^{N-1} p(f_i \mid y_i)\; p(y_{i+1} \mid y_i, f_{\max(0,\, i-s+1):i}) \qquad (4)$$
$$\phantom{p(y_{1:N}, f_{0:N-1} \mid y_0)} = \prod_{i=0}^{N-1} \delta\big(f_i - f(y_i, t_i)\big)\; \underbrace{p(y_{i+1} \mid y_i, f_{\max(0,\, i-s+1):i})}_{\star} \qquad (5)$$
The conditional distributions $\star$ are the primary objects of our study; we will define them by
constructing a particular Gaussian process prior over all variables, then identifying the appropriate
(Gaussian) conditional distribution. Note that a simple modification to the decomposition (2) allows
the same set-up to generate an (s + 1)-step Adams-Moulton iterator3, the implicit multistep method
where yi+1 depends in addition on fi+1 . At various stages of this paper this extension is noted but
omitted for reasons of space ? the collected results are given in Appendix C.
1. The notation $y_{0:N}$ denotes the vector $(y_0, \dots, y_N)$, and analogously $t_{0:N}$, $f_{0:N}$ etc.
2. We argue that the connection to some specific deterministic method is a desirable feature, since it aids interpretability and allows much of the well-developed theory of IVP solvers to be inherited by the probabilistic solver. This is a particular strength of the formulation in [4] which was lacking in all previous works.
3. The convention is that the number of steps is equal to the total number of derivative evaluations used in each iteration, hence the s-step AB and (s + 1)-step AM methods both go 'equally far back'.
2 Linear multistep methods
We give a very short summary of Adams family LMMs and their conventional derivation via
interpolating polynomials. For a fuller treatment of this well-studied topic we refer the reader to
the comprehensive references [10, 11, 12]. Using the usual notation we write yi for the numerical
estimate of the true solution $y(t_i)$, and $f_i$ for the estimate of $f(t_i) \equiv y'(t_i)$.
The classic s-step Adams-Bashforth method calculates $y_{i+1}$ by constructing the unique polynomial
$P_i(\omega) \in \mathcal{P}_{s-1}$ interpolating the points $\{f_{i-j}\}_{j=0}^{s-1}$. This is given by Lagrange's method as
$$P_i(\omega) = \sum_{j=0}^{s-1} \ell_j^{0:s-1}(\omega)\, f_{i-j}, \qquad \ell_j^{0:s-1}(\omega) = \prod_{\substack{k=0 \\ k \ne j}}^{s-1} \frac{\omega - t_{i-k}}{t_{i-j} - t_{i-k}} \qquad (6)$$
The $\ell_j^{0:s-1}(\omega)$ are known as Lagrange polynomials, have the property that $\ell_p^{0:s-1}(t_{i-q}) = \delta_{pq}$, and
form a basis for the space $\mathcal{P}_{s-1}$ known as the Lagrange basis. The Adams-Bashforth iteration then
proceeds by writing the integral version of (1) as $y(t_{i+1}) - y(t_i) = \int_{t_i}^{t_{i+1}} f(y, t)\, dt$ and approximating
the function under the integral by the extrapolated interpolating polynomial to give
$$y_{i+1} - y_i \approx \int_{t_i}^{t_{i+1}} P_i(\omega)\, d\omega = h \sum_{j=0}^{s-1} \beta_{j,s}^{AB}\, f_{i-j} \qquad (7)$$
where $h = t_{i+1} - t_i$ and the $\beta_{j,s}^{AB} \equiv h^{-1} \int_0^h \ell_j^{0:s-1}(\omega)\, d\omega$ are the Adams-Bashforth coefficients for
order s, all independent of h and summing to 1. Note that if f is a polynomial of degree s - 1 (so y(t)
is a polynomial of degree s) this procedure will give the next solution value exactly. Otherwise the
extrapolation error in $f_{i+1}$ is of order $O(h^s)$ and in $y_{i+1}$ (after an integration) is of order $O(h^{s+1})$.
So the local truncation error is $O(h^{s+1})$ and the global error $O(h^s)$ [10].
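As a concrete check of (6)-(7), the Adams-Bashforth coefficients can be generated directly by integrating the Lagrange basis polynomials on an equispaced grid. The following minimal Python sketch (our own illustration using sympy; the function name is ours, not from the paper) reproduces, for s = 3, the coefficients 23/12, -4/3, 5/12 that reappear in the example of Section 3:

import sympy as sp

def adams_bashforth_coefficients(s):
    # Equispaced nodes t_{i-j} = -j relative to t_i = 0, with h = 1;
    # the beta_{j,s} are independent of h, so this loses no generality.
    w = sp.symbols('w')
    nodes = [-j for j in range(s)]
    betas = []
    for j in range(s):
        ell = sp.Integer(1)            # Lagrange basis ell_j^{0:s-1}(w), eq. (6)
        for k in range(s):
            if k != j:
                ell *= (w - nodes[k]) / (nodes[j] - nodes[k])
        # beta_{j,s} = h^{-1} * integral of ell_j over [t_i, t_{i+1}] = [0, 1]
        betas.append(sp.integrate(sp.expand(ell), (w, 0, 1)))
    return betas

print(adams_bashforth_coefficients(3))   # [23/12, -4/3, 5/12]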
Adams-Moulton methods are similar except that the polynomial $Q_i(\omega) \in \mathcal{P}_s$ interpolates the s + 1
points $\{f_{i-j}\}_{j=-1}^{s-1}$. The resulting equation analogous to (7) is thus an implicit one, with the unknown
$y_{i+1}$ appearing on both sides. Typically AM methods are used in conjunction with an AB method of
one order lower, in a 'predictor-corrector' arrangement. Here, a predictor value $y_{i+1}^\star$ is calculated
using an AB step; this is then used to estimate $f_{i+1}^\star = f(y_{i+1}^\star)$; and finally an AM step uses this
value to calculate $y_{i+1}$. We again refer the reader to Appendix C for details of the AM construction.
3 Derivation of Adams family LMMs via Gaussian processes
We now consider a formulation of the Adams-Bashforth family starting from a Gaussian process
framework and then present a probabilistic extension. We fix a joint Gaussian process prior over
$y_{i+1}, y_i, f_i, f_{i-1}, \dots, f_{i-s+1}$ as follows. We define two vectors of functions $\phi(\omega)$ and $\Phi(\omega)$ in terms
of the Lagrange polynomials $\ell_j^{0:s-1}(\omega)$ defined in (6) as
$$\phi(\omega) = \begin{pmatrix} 0 & \ell_0^{0:s-1}(\omega) & \ell_1^{0:s-1}(\omega) & \cdots & \ell_{s-1}^{0:s-1}(\omega) \end{pmatrix}^{\!\top} \qquad (8)$$
$$\Phi(\omega) = \int \phi(\omega)\, d\omega = \begin{pmatrix} 1 & \int \ell_0^{0:s-1}(\omega)\, d\omega & \cdots & \int \ell_{s-1}^{0:s-1}(\omega)\, d\omega \end{pmatrix}^{\!\top} \qquad (9)$$
The elements (excluding the first) of $\phi(\omega)$ form a basis for $\mathcal{P}_{s-1}$ and the elements of $\Phi(\omega)$ form a
basis for $\mathcal{P}_s$. The initial 0 in $\phi(\omega)$ is necessary to make the dimensions of the two vectors equal, so we
can correctly define products such as $\Phi(\omega)^\top \Phi(\omega)$ which will be required later. The first element of
$\Phi(\omega)$ can be any non-zero constant C; the analysis later is unchanged and we therefore take C = 1.
Since we will solely be interested in values of the argument $\omega$ corresponding to discrete equispaced
time-steps $t_j - t_{j-1} = h$ indexed relative to the current time-point $t_i = 0$, we will make our notation
more concise by writing $\phi_{i+k}$ for $\phi(t_{i+k})$, and similarly $\Phi_{i+k}$ for $\Phi(t_{i+k})$. We now use these vectors
of basis functions to define a joint Gaussian process prior as follows:
$$\begin{pmatrix} y_{i+1} \\ y_i \\ f_i \\ f_{i-1} \\ \vdots \\ f_{i-s+1} \end{pmatrix} \sim \mathcal{N}\!\left[\begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}\!,\;\begin{pmatrix}
\Phi_{i+1}^\top \Phi_{i+1} & \Phi_{i+1}^\top \Phi_i & \Phi_{i+1}^\top \phi_i & \Phi_{i+1}^\top \phi_{i-1} & \cdots & \Phi_{i+1}^\top \phi_{i-s+1} \\
\Phi_i^\top \Phi_{i+1} & \Phi_i^\top \Phi_i & \Phi_i^\top \phi_i & \Phi_i^\top \phi_{i-1} & \cdots & \Phi_i^\top \phi_{i-s+1} \\
\phi_i^\top \Phi_{i+1} & \phi_i^\top \Phi_i & \phi_i^\top \phi_i & \phi_i^\top \phi_{i-1} & \cdots & \phi_i^\top \phi_{i-s+1} \\
\phi_{i-1}^\top \Phi_{i+1} & \phi_{i-1}^\top \Phi_i & \phi_{i-1}^\top \phi_i & \phi_{i-1}^\top \phi_{i-1} & \cdots & \phi_{i-1}^\top \phi_{i-s+1} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\phi_{i-s+1}^\top \Phi_{i+1} & \phi_{i-s+1}^\top \Phi_i & \phi_{i-s+1}^\top \phi_i & \phi_{i-s+1}^\top \phi_{i-1} & \cdots & \phi_{i-s+1}^\top \phi_{i-s+1}
\end{pmatrix}\right] \qquad (10)$$

This construction works because $y' = f$ and differentiation is a linear operator; the rules for the
transformation of the covariance elements are given in Section 9.4 of [13] and can easily be seen to
correspond to the defined relationship between $\phi(\omega)$ and $\Phi(\omega)$.
Recalling the decomposition in (5), we are interested in the conditional distribution
$p(y_{i+1} \mid y_i, f_{i-s+1:i})$. This is also Gaussian, with mean and covariance given by the standard formulae
for Gaussian conditioning. This construction now allows us to state the following result:
Proposition 1. The conditional distribution $p(y_{i+1} \mid y_i, f_{i-s+1:i})$ under the Gaussian process prior
given in (10), with covariance kernel basis functions as in (8) and (9), is a $\delta$-measure concentrated
on the s-step Adams-Bashforth predictor $y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j}$.
The proof of this proposition is given in Appendix A.
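Proposition 1 is also easy to verify numerically. In the following minimal sketch (our own code, for s = 3) we build the basis vectors of (8)-(9) at the relevant time points, form the covariance of (10), and apply the standard Gaussian conditioning formulae; the conditional-mean weights come out as exactly 1 on $y_i$ and $h \cdot (23/12, -4/3, 5/12)$ on $(f_i, f_{i-1}, f_{i-2})$, with conditional variance zero:

import numpy as np

s, h = 3, 0.1                    # h is arbitrary; the result is exact for any h
nodes = [0.0, -h, -2 * h]        # t_i, t_{i-1}, t_{i-2}, with t_i = 0

def lagrange_poly(j):
    # ell_j^{0:s-1} as a numpy poly1d on the nodes above (eq. 6)
    p = np.poly1d([1.0])
    for k in range(s):
        if k != j:
            p = p * np.poly1d([1.0, -nodes[k]]) / (nodes[j] - nodes[k])
    return p

phi = lambda w: np.array([0.0] + [lagrange_poly(j)(w) for j in range(s)])
Phi = lambda w: np.array([1.0] + [np.polyint(lagrange_poly(j))(w) for j in range(s)])

# Feature columns of (y_{i+1}; y_i, f_i, f_{i-1}, f_{i-2}) as in eq. (10)
B = np.column_stack([Phi(h), Phi(0.0), phi(0.0), phi(-h), phi(-2 * h)])
Sigma = B.T @ B
w_cond = np.linalg.solve(Sigma[1:, 1:], Sigma[1:, 0])   # conditional-mean weights
var = Sigma[0, 0] - Sigma[0, 1:] @ w_cond               # conditional variance
print(w_cond)    # ~ [1, 23h/12, -4h/3, 5h/12]
print(var)       # ~ 0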
Because of the natural probabilistic structure provided by the Gaussian process framework, we can
augment the basis function vectors $\phi(\omega)$ and $\Phi(\omega)$ to generate a conditional distribution for $y_{i+1}$
that has non-zero variance. By choosing a particular form for this augmented basis we can obtain an
expression for the standard deviation of yi+1 that is exactly equal to the leading-order local truncation
error of the corresponding deterministic method.
We will expand the vectors $\phi(\omega)$ and $\Phi(\omega)$ by one component, chosen so that the new vector
comprises elements that span a polynomial space of order one greater than before. Define the
augmented bases $\phi^+(\omega)$ and $\Phi^+(\omega)$ as
$$\phi^+(\omega) = \begin{pmatrix} 0 & \ell_0^{0:s-1}(\omega) & \ell_1^{0:s-1}(\omega) & \cdots & \ell_{s-1}^{0:s-1}(\omega) & \alpha h^s\, \ell_{-1}^{-1:s-1}(\omega) \end{pmatrix}^{\!\top} \qquad (11)$$
$$\Phi^+(\omega) = \begin{pmatrix} 1 & \int \ell_0^{0:s-1}(\omega)\, d\omega & \cdots & \int \ell_{s-1}^{0:s-1}(\omega)\, d\omega & \alpha h^s \int \ell_{-1}^{-1:s-1}(\omega)\, d\omega \end{pmatrix}^{\!\top} \qquad (12)$$
The additional term at the end of $\phi^+(\omega)$ is the polynomial of order s which arises from interpolating
f at s + 1 points (with the additional point at $t_{i+1}$) and choosing the basis function corresponding to
the root at $t_{i+1}$, scaled by $\alpha h^s$ with $\alpha$ a positive constant whose role will be explained in the next
section. The elements of these vectors span $\mathcal{P}_s$ and $\mathcal{P}_{s+1}$ respectively. With this new basis we can
give the following result:
Proposition 2. The conditional distribution $p(y_{i+1} \mid y_i, f_{i-s+1:i})$ under the Gaussian process prior
given in (10), with covariance kernel basis functions as in (11) and (12), is Gaussian with mean
equal to the s-step Adams-Bashforth predictor $y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j}$ and, setting $\alpha = y^{(s+1)}(\xi)$
for some $\xi \in (t_{i-s+1}, t_{i+1})$, standard deviation equal to its local truncation error.
The proof is given in Appendix B. In order to de-mystify the construction, we now exhibit a concrete
example for the case s = 3. The conditional distribution of interest is $p(y_{i+1} \mid y_i, f_i, f_{i-1}, f_{i-2}) \equiv p(y_{i+1} \mid y_i, f_{i:i-2})$. In the deterministic case, the vectors of basis functions become
$$\phi(\omega)_{s=3} = \begin{pmatrix} 0 & \dfrac{(\omega+h)(\omega+2h)}{2h^2} & -\dfrac{\omega(\omega+2h)}{h^2} & \dfrac{\omega(\omega+h)}{2h^2} \end{pmatrix}$$
$$\Phi(\omega)_{s=3} = \begin{pmatrix} 1 & \dfrac{\omega(2\omega^2+9h\omega+12h^2)}{12h^2} & -\dfrac{\omega^2(\omega+3h)}{3h^2} & \dfrac{\omega^2(2\omega+3h)}{12h^2} \end{pmatrix}$$
and simple calculations give that
$$\mathbb{E}(y_{i+1} \mid y_i, f_{i:i-2}) = y_i + h\left( \frac{23}{12}\, f_i - \frac{4}{3}\, f_{i-1} + \frac{5}{12}\, f_{i-2} \right), \qquad \mathrm{Var}(y_{i+1} \mid y_i, f_{i:i-2}) = 0.$$
The probabilistic version follows by setting
$$\phi^+(\omega)_{s=3} = \begin{pmatrix} 0 & \dfrac{(\omega+h)(\omega+2h)}{2h^2} & -\dfrac{\omega(\omega+2h)}{h^2} & \dfrac{\omega(\omega+h)}{2h^2} & \alpha\,\dfrac{\omega(\omega+h)(\omega+2h)}{6} \end{pmatrix}$$
$$\Phi^+(\omega)_{s=3} = \begin{pmatrix} 1 & \dfrac{\omega(2\omega^2+9h\omega+12h^2)}{12h^2} & -\dfrac{\omega^2(\omega+3h)}{3h^2} & \dfrac{\omega^2(2\omega+3h)}{12h^2} & \alpha\,\dfrac{\omega^2(\omega+2h)^2}{24} \end{pmatrix}$$
and further calculation shows that
$$\mathbb{E}(y_{i+1} \mid y_i, f_{i:i-2}) = y_i + h\left( \frac{23}{12}\, f_i - \frac{4}{3}\, f_{i-1} + \frac{5}{12}\, f_{i-2} \right), \qquad \mathrm{Var}(y_{i+1} \mid y_i, f_{i:i-2}) = \left( \frac{3h^4 \alpha}{8} \right)^{\!2}.$$
An entirely analogous argument can be shown to reproduce and probabilistically extend the implicit
Adams-Moulton scheme. The Gaussian process prior now includes fi+1 as an additional variable
and the correlation structure and vectors of basis functions are modified accordingly. The required
modifications are given in Appendix C and an explicit derivation for the 4-step AM method is given in
Appendix D.
3.1 The role of α
Replacing $\alpha$ in (11) by $y^{(s+1)}(\xi)$, with $\xi \in (t_{i-s+1}, t_{i+1})$, makes the variance of the integrator
coincide exactly with the local truncation error of the underlying deterministic method.4
This is of course of limited utility unless higher derivatives of y(t) are available, and even if they
are, $\xi$ is itself unknowable in general. However it is possible to estimate the integrator variance in a
systematic way by using backward difference approximations [14] to the required derivative at ti+1 .
We show this by expanding the s-step Adams-Bashforth iterator as
$$\begin{aligned}
y_{i+1} &= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, y^{(s+1)}(\xi), \qquad \xi \in [t_{i-s+1}, t_{i+1}] \\
&= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, y^{(s+1)}(t_{i+1}) + O(h^{s+2}) \\
&= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB}\, f^{(s)}(t_{i+1}) + O(h^{s+2}) \qquad \text{since } y' = f \\
&= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h^{s+1} C_s^{AB} \Big[ h^{-s} \textstyle\sum_{k=0}^{s-1+p} \delta_{k,s-1+p}\, f_{i-k} + O(h^p) \Big] + O(h^{s+2}) \\
&= y_i + h \sum_{j=0}^{s-1} \beta_{j,s}^{AB} f_{i-j} + h\, C_s^{AB} \textstyle\sum_{k=0}^{s} \delta_{k,s}\, f_{i-k} + O(h^{s+2}) \qquad \text{if we set } p = 1 \qquad (13)
\end{aligned}$$
where $\beta_{\cdot,s}^{AB}$ are the set of coefficients and $C_s^{AB}$ the local truncation error constant for the s-step
Adams-Bashforth method, and $\delta_{\cdot,s-1+p}$ are the set of backward difference coefficients for estimating
the s-th derivative of f to order $O(h^p)$ [14].
In other words, the constant $\alpha$ can be substituted with $h^{-s} \sum_{k=0}^{s} \delta_{k,s}\, f_{i-k}$, using already available
function values and to adequate order. It is worth noting that collecting the coefficients $\beta_{\cdot,s}^{AB}$ and $\delta_{\cdot,s}$
results in an expression equivalent to the Adams-Bashforth method of order s + 1 and therefore, this
procedure is in effect employing two integrators of different orders and estimating the truncation error
from the difference of the two.5 This principle is similar to the classical Milne Device [12], which
pairs an AB and an AM iterator to achieve the same thing. Using the Milne Device to generate a
value for the error variance is also straightforward within our framework, but requires two evaluations
of f at each iteration (one of which immediately goes to waste) instead of the approach presented
here, which only requires one.
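Numerically, this amounts to nothing more than comparing AB predictors of successive orders. A minimal illustration (our own code, reusing adams_bashforth_coefficients from the sketch in Section 2; f_hist must hold at least s + 1 past derivative values, most recent first):

def ab_predict(y_i, f_hist, h, betas):
    # f_hist = [f_i, f_{i-1}, ...]; betas as in eq. (7)
    return y_i + h * sum(float(b) * f for b, f in zip(betas, f_hist))

def local_error_estimate(y_i, f_hist, h, s):
    # hi - lo estimates the order-s local truncation error h^{s+1} C_s y^{(s+1)};
    # this carries the same information as the backward-difference substitution.
    lo = ab_predict(y_i, f_hist, h, adams_bashforth_coefficients(s))
    hi = ab_predict(y_i, f_hist, h, adams_bashforth_coefficients(s + 1))
    return hi - lo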
4. We do not claim that this is the only possible way of modelling the numerical error in the solver. The question of how to do this accurately is an open problem in general, and is particularly challenging in the multi-dimensional case. In many real world problems different noise scales will be appropriate for different dimensions and, especially in 'hierarchical' models arising from higher-order ODEs, non-Gaussian noise is to be expected. That said, the Gaussian assumption as a first order approximation for numerical error is present in virtually all work on this subject and goes all the way back to [8]. We adopt this premise throughout, whilst noting this interesting unresolved issue.
5. An explicit derivation of this for s = 3 is given in Appendix E.
4 Convergence of the probabilistic Adams-Bashforth integrator
We now give the main result of our paper, which demonstrates that the convergence properties of the
probabilistic Adams-Bashforth integrator match those of its deterministic counterpart.
Theorem 3. Consider the s-step deterministic Adams-Bashforth integrator given in Proposition 1,
which is of order s. Then the probabilistic integrator constructed in Proposition 2 has the same mean
square error as its deterministic counterpart. In particular
$$\max_{0 \le kh \le T} \mathbb{E}\,|Y_k - y_k|^2 \le K h^{2s}$$
where $Y_k \equiv y(t_k)$ denotes the true solution, $y_k$ the numerical solution, and K is a positive real
number depending on T but independent of h.
The proof of this theorem is given in Appendix F, and follows a similar line of reasoning to that given
for a one-step probabilistic Euler integrator in [4]. In particular, we deduce the convergence of the
algorithm by extrapolating from the local error. The additional complexity arises due to the presence
of the stochastic part, which means we cannot rely directly on the theory of difference equations
and the representations of their solutions. Instead, following [15], we rewrite the defining s-step
recurrence equation as a one-step recurrence equation in a higher dimensional space.
5 Implementation
We now have an implementable algorithm for an s-step probabilistic Adams-Bashforth integrator.
Firstly, an accurate initialisation is required for the first s iterations; this can be achieved with, for
example, a Runge-Kutta method of sufficiently high order.6 Secondly, at iteration i, the preceding s
stored function evaluations are used to find the posterior mean and variance of yi+1 . The integrator
then advances by generating a realisation of the posterior measure derived in Proposition 2. Following
[1], a Monte Carlo repetition of this procedure with different random seeds can then be used as an
effective way of generating propagated uncertainty estimates at any time $0 < T < \infty$.
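A minimal sketch of the resulting sampler follows (our own illustrative code; the AB coefficients are those of the earlier sketch converted to floats, and the posterior standard deviation uses the order-comparison estimate of Section 3.1; for brevity the first s steps are initialised with plain Euler, whereas an accurate Runge-Kutta starter should be used in practice):

import numpy as np

def prob_ab_solve(f, y0, t0, h, n_steps, betas_s, betas_s1, rng):
    # betas_s: AB coefficients of order s; betas_s1: order s + 1 (error estimate)
    s = len(betas_s)
    t = t0
    ys = [np.atleast_1d(np.asarray(y0, dtype=float))]
    fs = []
    for _ in range(s):                            # crude initialisation block
        fs.append(f(ys[-1], t))
        ys.append(ys[-1] + h * fs[-1])
        t += h
    for _ in range(n_steps - s):
        fs.append(f(ys[-1], t))
        hist = fs[::-1][:s + 1]                   # [f_i, f_{i-1}, ..., f_{i-s}]
        mean = ys[-1] + h * sum(b * fk for b, fk in zip(betas_s, hist))
        high = ys[-1] + h * sum(b * fk for b, fk in zip(betas_s1, hist))
        sd = np.abs(high - mean)                  # local-error estimate
        ys.append(mean + sd * rng.standard_normal(mean.shape))
        t += h
    return np.array(ys)

The Monte Carlo repetition described above then amounts to calling prob_ab_solve with differently seeded rng objects.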
5.1 Example: Chua circuit
The Chua circuit [16] is the simplest electronic circuit that exhibits chaotic behaviour, and has been
the subject of extensive study, in both the mathematics and electronics communities, for over
30 years. Readers interested in this rich topic are directed to [17] and the references therein. The
defining characteristic of chaotic systems is their unpredictable long-term sensitivity to tiny changes
in initial conditions, which also manifests itself in the sudden amplification of error introduced by
any numerical scheme. It is therefore of interest to understand the limitations of a given numerical
method applied to such a problem, namely the point at which the solution can no longer be taken to
be a meaningful approximation of the ground truth. Probabilistic integrators allow us to do this in a
natural way [1].
The Chua system is given by $x' = \alpha\,(y - (1 + h_1)x - h_3 x^3)$, $y' = x - y + z$, $z' = -\beta y - \gamma z$. We
use parameter values $\alpha = 1.4157$, $\beta = 0.02944201$, $\gamma = 0.322673579$, $h_1 = 0.0197557699$,
$h_3 = 0.0609273571$ and initial conditions $x_0 = 0$, $y_0 = 0.003$, $z_0 = 0.005$. This particular choice
is taken from 'Attractor CE96' in [18]. Using the probabilistic version of the Adams-Bashforth
integrator with s > 1, it is possible to delay the point at which the numerical path diverges from the
truth, with effectively no additional evaluations of f required compared to the one-step method. This
is demonstrated in Figure 1. Our approach is therefore able to combine the benefits of classical
higher-order methods with the additional insight into solution uncertainty provided by a probabilistic
method.
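For reference, the corresponding right-hand side in the same style as the integrator sketch above (our own code; the minus signs follow our reconstruction of the equations, which should be checked against [18]):

import numpy as np

def chua_rhs(u, t, alpha=1.4157, beta=0.02944201, gamma=0.322673579,
             h1=0.0197557699, h3=0.0609273571):
    x, y, z = u
    return np.array([alpha * (y - (1 + h1) * x - h3 * x**3),
                     x - y + z,
                     -beta * y - gamma * z])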
5.2 Example: Lotka-Volterra model
We now apply the probabilistic integrator to a simple periodic predator-prey model given by the
system $x' = \alpha x - \beta xy$, $y' = \gamma xy - \delta y$ for parameters $\alpha = 1$, $\beta = 0.3$, $\gamma = 1$ and $\delta = 0.7$. We
demonstrate the convergence behaviour stated in Theorem 3 empirically.
6. We use a (packaged) adaptive Runge-Kutta-Fehlberg solver of 7th order with 8th order error control.
Figure 1: Time series for the x-component in the Chua circuit model described in Section
5.1, solved 20 times for $0 \le t \le 1000$ using an s-step probabilistic AB integrator with
s = 1 (top), s = 3 (middle), s = 5 (bottom). Step-size remains h = 0.01 throughout.
Wall-clock time for each simulation was close to constant (within about 10 per cent; the
difference primarily accounted for by the RKF initialisation procedure).
The left-hand plot in Figure 2 shows the sample mean of the absolute error of 200 realisations of
the probabilistic integrator plotted against step-size, on a log-log scale. The differing orders of
convergence of the probabilistic integrators are easily deduced from the slopes of the lines shown.
The right-hand plot shows the actual error value (no logarithm or absolute value taken) of the same
200 realisations, plotted individually against step-size. This plot shows that the error in the one-step
integrator is consistently positive, whereas for two- and three-step integrators it is approximately centred
around 0. (This is also visible with the same data if the plot is zoomed to more closely examine the
range with small h.) Though this phenomenon can be expected to be somewhat problem-dependent,
it is certainly an interesting observation which may have implications for bias reduction in a Bayesian
inverse problem setting.
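The orders read off from the left-hand plot can be estimated programmatically: the slope of log error against log step-size gives the empirical convergence order (an illustrative snippet; hs and mean_abs_errs stand for arrays of step-sizes and mean absolute errors collected from such runs):

import numpy as np

def empirical_order(hs, mean_abs_errs):
    # Least-squares slope of log10(error) vs log10(h); ~s for an order-s method
    slope, _ = np.polyfit(np.log10(hs), np.log10(mean_abs_errs), 1)
    return slope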
[Figure 2: two panels. Left: $\log_{10} \mathbb{E}|\mathrm{Error}|$ against $\log_{10} h$ for numbers of steps s = 1, ..., 5. Right: Error against $\log_{10} h$.]
Figure 2: Empirical error analysis for the x-component of 200 realisations of the
probabilistic AB integrator as applied to the Lotka-Volterra model described in Section 5.2.
The left-hand plot shows the convergence rates for AB integrators of orders 1-5, while the
right-hand plot shows the distribution of error around zero for integrators of orders 1-3.
6 Conclusion
We have given a derivation of the Adams-Bashforth and Adams-Moulton families of linear multistep
ODE integrators, making use of a Gaussian process framework, which we then extend to develop
their probabilistic counterparts.
We have shown that the derived family of probabilistic integrators results in a posterior mean at each
step that exactly coincides with the corresponding deterministic integrator, with the posterior standard
deviation equal to the deterministic method?s local truncation error. We have given the general forms
of the construction of these new integrators to arbitrary order. Furthermore, we have investigated
their theoretical properties and provided a rigorous proof of their rates of convergence. Finally we
have demonstrated the use and computational efficiency of probabilistic Adams-Bashforth methods
by implementing the solvers up to fifth order and providing example solutions of a chaotic system,
as well as empirically verifying the convergence rates in a Lotka-Volterra model.
We hope the ideas presented here will add to the arsenal of any practitioner who uses numerical
methods in their scientific analyses, and contributes a further tool in the emerging field of probabilistic
numerical methods.
References
[1] O. A. Chkrebtii, D. A. Campbell, B. Calderhead, and M. A. Girolami. Bayesian solution uncertainty quantification for differential equations. Bayesian Analysis, 2016.
[2] P. Hennig and S. Hauberg. Probabilistic solutions to differential equations and their application to Riemannian statistics. In Proc. of the 17th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), vol. 33. JMLR, W&CP, 2014.
[3] M. Schober, D. K. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge-Kutta means. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pp. 739-747. Curran Associates, Inc., 2014.
[4] P. R. Conrad, M. Girolami, S. Särkkä, A. Stuart, and K. Zygalakis. Statistical analysis of differential equations: introducing probability measures on numerical solutions. Statistics and Computing, 2016.
[5] M. C. Kennedy and A. O'Hagan. Bayesian calibration of computer models. Journal of the Royal Statistical Society: Series B, 63(3):425-464, 2001.
[6] P. Diaconis. Bayesian numerical analysis. In J. Berger and S. Gupta, editors, Statistical Decision Theory and Related Topics IV, vol. 1, pp. 163-175. Springer, 1988.
[7] P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic numerics and uncertainty in computations. Proc. R. Soc. A, 471(2179):20150142, 2015.
[8] J. Skilling. Bayesian numerical analysis. In J. W. T. Grandy and P. W. Milonni, editors, Physics and Probability, pp. 207-222. Cambridge University Press, 1993.
[9] M. Girolami. Bayesian inference for differential equations. Theor. Comp. Sci., 408(1):4-16, 2008.
[10] A. Iserles. A First Course in the Numerical Analysis of Differential Equations. Cambridge University Press, 2nd ed., 2008.
[11] E. Hairer, S. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer Series in Computational Mathematics. Springer, 2008.
[12] J. Butcher. Numerical Methods for Ordinary Differential Equations, 2nd edition. Wiley, 2008.
[13] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. University Press Group Limited, 2006.
[14] B. Fornberg. Generation of finite difference formulas on arbitrarily spaced grids. Mathematics of Computation, 51(184):699-706, 1988.
[15] E. Buckwar and R. Winkler. Multistep methods for SDEs and their application to problems with small noise. SIAM J. Numer. Anal., 44(2):779-803, 2006.
[16] L. O. Chua. The genesis of Chua's circuit. Archiv für Elektronik und Übertragungstechnik, 46(4):250-257, 1992.
[17] L. O. Chua. Chua circuit. Scholarpedia, 2(10):1488, 2007.
[18] E. Bilotta and P. Pantano. A Gallery of Chua Attractors. World Scientific, 2008.
KZ was partially supported by a grant from the Simons Foundation and by the Alan Turing Institute under
the EPSRC grant EP/N510129/1. Part of this work was done during the author?s stay at the Newton Institute for
the programme Stochastic Dynamical Systems in Biology: Numerical Methods and Applications.
5,921 | 6,357 | High-Rank Matrix Completion and Clustering
under Self-Expressive Models
E. Elhamifar*
College of Computer and Information Science
Northeastern University
Boston, MA 02115
[email protected]
Abstract
We propose efficient algorithms for simultaneous clustering and completion of
incomplete high-dimensional data that lie in a union of low-dimensional subspaces.
We cast the problem as finding a completion of the data matrix so that each point
can be reconstructed as a linear or affine combination of a few data points. Since the
problem is NP-hard, we propose a lifting framework and reformulate the problem
as a group-sparse recovery of each incomplete data point in a dictionary built using
incomplete data, subject to rank-one constraints. To solve the problem efficiently,
we propose a rank pursuit algorithm and a convex relaxation. The solutions of our
algorithms recover missing entries and provide a similarity matrix for clustering.
Our algorithms can deal with both low-rank and high-rank matrices, do not suffer
from initialization, do not need to know dimensions of subspaces and can work
with a small number of data points. By extensive experiments on synthetic data
and real problems of video motion segmentation and completion of motion capture
data, we show that when the data matrix is low-rank, our algorithm performs on
par with or better than low-rank matrix completion methods, while for high-rank
data matrices, our method significantly outperforms existing algorithms.
1 Introduction
High-dimensional data, which are ubiquitous in computer vision, image processing, bioinformatics
and social networks, often lie in low-dimensional subspaces corresponding to different categories
they belong to [1, 2, 3, 4, 5, 6]. Clustering and finding low-dimensional representations of data are
important unsupervised learning problems with numerous applications, including data compression
and visualization, image/video/costumer segmentation, collaborative filtering and more.
A major challenge in real problems is dealing with missing entries in data, due to sensor failure,
ad-hoc data collection, or partial knowledge of relationships in a dataset. For instance, in estimating
object motions in videos, the tracking algorithm may lose track of features in some video frames
[7]; in the image inpainting problem, intensity values of some pixels are missing due to sensor failure
[8]; or in recommender systems, each user provides ratings for a limited number of products [9].
Prior Work. Existing algorithms that deal with missing entries in high-dimensional data can be
divided into two main categories. The first group of algorithms assume that data lie in a single
low-dimensional subspace. Probabilistic PCA (PPCA) [10] and Factor Analysis (FA) [11] optimize
a non-convex function using Expectation Maximization (EM), estimating low-dimensional model
parameters and missing entries of data in an iterative framework. However, their performance depends
* E. Elhamifar is an Assistant Professor in the College of Computer and Information Science, Northeastern University.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
on initialization and degrades as the dimension of the subspace or the percentage of missing entries
increases. Low-rank matrix completion algorithms, such as [12, 13, 14, 15, 16, 17] recover missing
entries by minimizing the convex surrogate of the rank, i.e., nuclear norm, of the complete data
matrix. When the underlying subspace is incoherent with standard basis vectors and missing entries
locations are spread uniformly at random, they are guaranteed to recover missing entries.
The second group of algorithms addresses the more general and challenging scenario where data
lie in a union of low-dimensional subspaces. The goals in this case are to recover missing entries
and cluster data according to subspaces. Since the union of low-dimensional subspaces is often
high/full-rank, methods in the first category are not effective. Mixture of Probabilistic PCA (MPPCA)
[18, 19], Mixture of Factor Analyzers (MFA) [20] and K-GROUSE [21] address clustering and
completion of multi-subspace data, yet suffer from dependence on initialization and perform poorly
as the dimension/number of subspaces or the percentage of missing entries increases. On the other
hand, [22] requires a polynomial number of data points in the ambient space dimension, which often
cannot be met in high-dimensional datasets. Building on the unpublished abstract in [23], a clustering
algorithm using expectation completion on the data kernel matrix was proposed in [24]. However, the
algorithm only addresses clustering and the resulting non-convex optimization is dealt with using the
heuristic approach of shifting eigenvalues of the Hessian to nonnegative values. [25] assumes that the
observed matrix corresponds to applying a Lipschitz, monotonic function to a low-rank matrix. While
an important generalization of the low-rank regime, [25] cannot cover the case of multiple subspaces.
Paper Contributions. In this paper, we propose an efficient algorithm for the problem of simultaneous completion and clustering of incomplete data lying in a union of low-dimensional subspaces.
Building on the Sparse Subspace Clustering (SSC) algorithm [26], we cast the problem as finding a
completion of the data so that each complete point can be efficiently reconstructed using a few complete points from the same subspace. Since the formulation is non-convex and, in general, NP-hard,
we propose a lifting scheme, where we cast the problem as finding a group-sparse representation of
each incomplete data point in a modified dictionary, subject to a set of rank-one constraints. In our
formulation, coefficients in groups correspond to pairwise similarities and missing entries of data.
More specifically, our group-sparse recovery formulation finds a few incomplete data points that
well reconstruct a given point and, at the same time, completes the selected data points in a globally
consistent fashion. Our framework has several advantages over the state of the art:
• Unlike algorithms such as [22] that require a polynomial number of points in the ambient-space
dimension, our framework needs about as many points as the subspace dimension, not the ambient
space. In addition, we do not need to know dimensions of subspaces a priori.
• While two-stage methods such as [24], which first obtain a similarity graph for clustering and then
apply low-rank matrix completion to each cluster, fail when subspaces intersect or clustering fails,
our method simultaneously recovers missing entries and builds a similarity matrix for clustering,
hence, each goal benefits from the other. Moreover, in scenarios where a hard clustering does not
exist, we can still recover missing entries.
? While we motivate and present our algorithm in the context of clustering and completion of multisubspace data, our framework can address any task that relies on the self-expressiveness property of
the data, e.g., column subset selection in the presence of missing data.
• By experiments on synthetic and real data, we show that our algorithm performs on par with or better
than low-rank matrix completion methods when the data matrix is low-rank, while it significantly
outperforms state-of-the-art clustering and completion algorithms when the data matrix is high-rank.
2 Problem Statement
Assume we have L subspaces $\{S_\ell\}_{\ell=1}^L$ of dimensions $\{d_\ell\}_{\ell=1}^L$ in an n-dimensional ambient space,
$\mathbb{R}^n$. Let $\{y_j\}_{j=1}^N$ denote a set of N data points lying in the union of subspaces, where we observe only
some entries of each $y_j \triangleq [y_{1j}\; y_{2j}\; \dots\; y_{nj}]^\top$. Assume that we do not know a priori the bases for
subspaces nor do we know which data points belong to which subspace. Given the incomplete data
points, our goal is to recover missing entries and cluster the data into their underlying subspaces.
To set the notation, let $\Omega_j \subseteq \{1, \dots, n\}$ and $\Omega_j^c$ denote, respectively, indices of observed and missing
entries of $y_j$. Let $U_{\Omega_j} \in \mathbb{R}^{n \times |\Omega_j|}$ be the submatrix of the standard basis whose columns are indexed
by $\Omega_j$. We denote by $P_{\Omega_j} \in \mathbb{R}^{n \times n}$ the projection matrix onto the subspace spanned by $U_{\Omega_j}$, i.e.,
$P_{\Omega_j} \triangleq U_{\Omega_j} U_{\Omega_j}^\top$. Hence, $x_j \triangleq U_{\Omega_j^c}^\top y_j \in \mathbb{R}^{|\Omega_j^c|}$ corresponds to the vector of missing entries of $y_j$.
We denote by $\bar{y}_j$ an n-dimensional vector whose i-th coordinate is $y_{ij}$ for $i \in \Omega_j$ and is zero for
$i \in \Omega_j^c$, i.e., $\bar{y}_j \triangleq P_{\Omega_j} y_j \in \mathbb{R}^n$. We can write each $y_j$ as the summation of two orthogonal vectors
with observed and unobserved entries, i.e.,
$$y_j = P_{\Omega_j} y_j + P_{\Omega_j^c} y_j = \bar{y}_j + U_{\Omega_j^c} U_{\Omega_j^c}^\top y_j = \bar{y}_j + U_{\Omega_j^c} x_j. \qquad (1)$$
Finally, we denote by $Y \in \mathbb{R}^{n \times N}$ and $\bar{Y} \in \mathbb{R}^{n \times N}$ matrices whose columns are complete data points
$\{y_j\}_{j=1}^N$ and zero-filled data $\{\bar{y}_j\}_{j=1}^N$, respectively.
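In code, these projections reduce to simple indexing (a minimal numpy sketch with our own names):

import numpy as np

def split_observed(y, omega):
    # omega: indices of observed entries (the set Omega_j); y: full vector
    n = y.shape[0]
    omega_c = np.setdiff1d(np.arange(n), omega)   # Omega_j^c
    y_bar = np.zeros(n)
    y_bar[omega] = y[omega]                       # y_bar = P_Omega y
    x = y[omega_c]                                # x = U_{Omega^c}^T y
    return y_bar, x, omega_c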
To address completion and clustering of multi-subspace data, we propose a unified framework to
simultaneously recover missing entries and learn a similarity graph for clustering. To do so, we build
on the SSC algorithm [26, 4], which we review next.
3 Sparse Subspace Clustering Review
The sparse subspace clustering (SSC) algorithm [26, 4] addresses the problem of clustering complete
multi-subspace data. It relies on the observation that in a high-dimensional ambient space, while
there are many ways that each data point y j can be reconstructed using the entire dataset, a sparse
representation selects a few data points from the underlying subspace of $y_j$, since each point in $S_\ell$
can be represented using $d_\ell$ data points, in general directions, from $S_\ell$. This motivates solving2
$$\min_{\{c_{1j}, \dots, c_{Nj}\}} \; \sum_{i=1}^{N} |c_{ij}| \quad \text{s.t.} \quad \sum_{i=1}^{N} c_{ij}\, y_i = 0, \;\; c_{jj} = -1, \qquad (2)$$
where the constraints express that each y j should be written as a combination of other points. To
infer clustering, one builds a similarity graph using sparse coefficients, by connecting nodes i and j
of the graph, representing, respectively, y i and y j , with an edge with the weight wij = |cij | + |cji |.
Clustering of data is obtained then by applying spectral clustering [27] to the similarity graph.
While [4, 26, 28] show that, under appropriate conditions on subspace angles and data distribution,
(2) is guaranteed to recover desired representations, the algorithm requires complete data points.
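As a quick illustration of this two-step pipeline on complete data, the sketch below uses the Lasso relaxation of (2) that is common in practice, then clusters the induced graph spectrally (our own illustrative code with scikit-learn; solver settings are simplified relative to [26]):

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc_cluster(Y, n_clusters, lam=0.01):
    # Y: n x N matrix whose columns are complete data points
    n, N = Y.shape
    C = np.zeros((N, N))
    for j in range(N):
        mask = np.arange(N) != j
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(Y[:, mask], Y[:, j])        # sparse self-expression of y_j
        C[mask, j] = model.coef_
    W = np.abs(C) + np.abs(C).T               # w_ij = |c_ij| + |c_ji|
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)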
3.1 Naive Extensions of SSC to Deal with Missing Entries
In the presence of missing entries, the $\ell_1$-minimization in (2) becomes non-convex, since coefficients
and a subset of data entries are both unknown. A naive approach is to solve (2) using zero-filled
data points, $\{\bar{y}_i\}_{i=1}^N$, to perform clustering and then apply standard matrix completion on each
cluster. However, the drawback of this approach is that not only does it not take advantage of the
known locations of missing entries, but also zero-filled data will no longer lie in original subspaces,
and deviate more from subspaces as the percentage of missing entries increases. Hence, a sparse
representation does not necessarily find points from the same subspace and spectral clustering fails.
An alternative approach to deal with incomplete data is to use standard low-rank matrix completion algorithms to recover the missing values and then apply SSC to cluster the data into subspaces. While this approach works when the union of subspaces is low-rank, its effectiveness diminishes as the number of subspaces or their dimensions increase and the data matrix becomes high/full-rank.
4 Sparse Subspace Clustering and Completion via Lifting

In this section, we propose an algorithm to recover missing entries and build a similarity graph for clustering, given observations $\{y_{ij};\, i \in \omega_j\}_{j=1}^N$ for $N$ data points lying in a union of subspaces.
² $\ell_1$ is the convex surrogate of the cardinality function $\sum_{i=1}^N I(|c_{ij}|)$, where $I(\cdot)$ is the indicator function.
4.1 SSC-Lifting Formulation
To address the problem, we start from the SSC observation that, given complete data $\{y_j\}_{j=1}^N$, the solution of

$$\min_{\{c_{ij}\}} \sum_{j=1}^N \sum_{i=1}^N I(|c_{ij}|) \quad \text{s.t.} \quad \sum_{i=1}^N c_{ij} y_i = 0, \; c_{jj} = -1, \; \forall j \quad (3)$$
ideally finds a representation of each $y_j$ as a linear combination of a few data points that lie in the same subspace as $y_j$. Here, $I(\cdot)$ denotes the indicator function, which is zero when its argument is zero and one otherwise. Notice that, using (1), we can write each $y_i$ as

$$y_i = \tilde{y}_i + U_{\omega_i^c} x_i = \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} 1 \\ x_i \end{bmatrix}, \quad (4)$$
where $\tilde{y}_i$ is the $i$-th data point whose missing entries are filled with zeros and $x_i$ is the vector containing the missing entries of $y_i$. Thus, substituting (4) in the optimization (3), we would like to solve

$$\min_{\{c_{ij}\}, \{x_i\}} \sum_{j=1}^N \sum_{i=1}^N I(|c_{ij}|) \quad \text{s.t.} \quad \sum_{i=1}^N \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} c_{ij} \\ c_{ij} x_i \end{bmatrix} = 0, \; c_{jj} = -1, \; \forall j. \quad (5)$$
Notice that the matrices $[\tilde{y}_i \;\; U_{\omega_i^c}] \in \mathbb{R}^{n \times (|\omega_i^c| + 1)}$ are given and known, while the vectors $[c_{ij} \;\; c_{ij} x_i^\top]^\top \in \mathbb{R}^{|\omega_i^c| + 1}$ are unknown. In fact, the optimization (5) has two sources of non-convexity: the $\ell_0$-norm in the objective function and the product of the unknown variables $\{c_{ij}\}$ and $\{x_i\}$ in the constraint.

To pave the way for an efficient algorithm, we first use the fact that the number of nonzero coefficients $c_{ij}$ is the same as the number of nonzero blocks $[c_{ij} \;\; c_{ij} x_i^\top]^\top$, since $c_{ij}$ is nonzero if and only if $[c_{ij} \;\; c_{ij} x_i^\top]^\top$ is nonzero. Thus, we can write (5) as the equivalent group-sparse optimization
$$\min_{\{c_{ij}\}, \{x_i\}} \sum_{j=1}^N \sum_{i=1}^N I\left( \left\| \begin{bmatrix} c_{ij} \\ c_{ij} x_i \end{bmatrix} \right\|_p \right) \quad \text{s.t.} \quad \sum_{i=1}^N \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} c_{ij} \\ c_{ij} x_i \end{bmatrix} = 0, \; c_{jj} = -1, \; \forall j, \quad (6)$$
where $\|\cdot\|_p$ denotes the $\ell_p$-norm for $p > 0$. Next, to deal with the non-convexity of the product of $c_{ij}$ and $x_i$, we use the fact that, for each $i \in \{1, \dots, N\}$, the matrix

$$A_i \triangleq \begin{bmatrix} c_{i1} & \cdots & c_{iN} \\ c_{i1} x_i & \cdots & c_{iN} x_i \end{bmatrix} = \begin{bmatrix} 1 \\ x_i \end{bmatrix} \begin{bmatrix} c_{i1} & \cdots & c_{iN} \end{bmatrix} \quad (7)$$

is of rank one, since it can be written as the outer product of two vectors. This motivates a lifting scheme, where we define the new optimization variables

$$\gamma_{ij} \triangleq c_{ij} x_i \in \mathbb{R}^{|\omega_i^c|}, \quad (8)$$
and consider the group-sparse optimization program

$$\min_{\substack{\{c_{ij}\}, \{\gamma_{ij}\} \\ c_{jj} = -1, \forall j}} \sum_{j=1}^N \sum_{i=1}^N I\left( \left\| \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} \right\|_p \right) \quad \text{s.t.} \quad \sum_{i=1}^N \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} = 0 \;\; \forall j, \quad \mathrm{rk}\begin{bmatrix} c_{i1} & \cdots & c_{iN} \\ \gamma_{i1} & \cdots & \gamma_{iN} \end{bmatrix} = 1 \;\; \forall i, \quad (9)$$
where we have replaced $c_{ij} x_i$ with $\gamma_{ij}$ and have introduced rank-one constraints. In fact, we show that one can recover the solution of (5) from (9), and vice versa.

Proposition 1 Given a solution $\{c_{ij}\}$ and $\{\gamma_{ij}\}$ of (9), by computing the $x_i$'s via the factorization in (7), $\{c_{ij}\}$ and $\{x_i\}$ is a solution of (5). Also, given a solution $\{c_{ij}\}$ and $\{x_i\}$ of (5), $\{c_{ij}\}$ and $\{\gamma_{ij} \triangleq c_{ij} x_i\}$ is a solution of (9).

Notice that we have transferred the non-convexity of the product $c_{ij} x_i$ in (5) into a set of non-convex rank-one constraints in (9). However, as we will see next, (9) admits an efficient convex relaxation.
4.2 Relaxations and Extensions
The optimization program in (9) is, in general, NP-hard, due to the mixed $\ell_0/\ell_p$-norm in the objective function. It is non-convex due to both the mixed $\ell_0/\ell_p$-norm and the rank-one constraints. To solve (9), we first take the convex surrogate of the objective function, which corresponds to an $\ell_1/\ell_p$-norm [29, 30], where we drop the indicator function and, for $p \in \{2, \infty\}$, solve

$$\min_{\substack{\{c_{ij}, \gamma_{ij}\} \\ \{c_{jj} = -1\}}} \lambda \sum_{j=1}^N \sum_{i=1}^N \left\| \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} \right\|_p + \sum_{j=1}^N \phi\left( \sum_{i=1}^N \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} \right) \quad \text{s.t.} \quad \mathrm{rk}\begin{bmatrix} c_{i1} & \cdots & c_{iN} \\ \gamma_{i1} & \cdots & \gamma_{iN} \end{bmatrix} = 1, \; \forall i. \quad (10)$$
The nonnegative parameter $\lambda$ is a regularization parameter, and the function $\phi(\cdot) \in \{\phi_e(\cdot), \phi_a(\cdot)\}$ determines whether the reconstruction of each point should be exact or approximate, where

$$\phi_e(u) \triangleq \begin{cases} +\infty & \text{if } u \neq 0 \\ 0 & \text{if } u = 0 \end{cases}, \qquad \phi_a(u) \triangleq \tfrac{1}{2} \|u\|_2^2. \quad (11)$$
More specifically, when dealing with missing entries from noise-free data, which lie perfectly in multiple subspaces, we enforce exact reconstruction by selecting $\phi(\cdot) = \phi_e(\cdot)$. On the other hand, when dealing with real data, where the observed entries are corrupted by noise, exact reconstruction is infeasible or comes at the price of losing the sparsity of the solution, which is undesired. Thus, to deal with noisy incomplete data, we consider approximate reconstruction by selecting $\phi(\cdot) = \phi_a(\cdot)$. Notice that the objective function of (10) is convex for $p \geq 1$, while the rank-one constraints are non-convex. We can obtain a local solution by solving (10) with an Alternating Direction Method of Multipliers (ADMM) framework, using projection onto the set of rank-one matrices.
To obtain a convex algorithm, we use a nuclear-norm³ relaxation [12, 14, 15] for the rank-one constraints, where we replace $\mathrm{rank}(A_i) = 1$ with $\|A_i\|_* \leq \tau$, for $\tau > 0$. In addition, to reduce the number of constraints and the complexity of the problem, we choose to bring the nuclear norm constraints into the objective function using a Lagrange multiplier $\mu > 0$. Hence, we propose to solve

$$\min_{\substack{\{c_{ij}, \gamma_{ij}\} \\ \{c_{jj} = -1\}}} \lambda \sum_{j=1}^N \sum_{i=1}^N \left\| \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} \right\|_p + \mu \sum_{i=1}^N \left\| \begin{bmatrix} c_{i1} & \cdots & c_{iN} \\ \gamma_{i1} & \cdots & \gamma_{iN} \end{bmatrix} \right\|_* + \sum_{j=1}^N \phi\left( \sum_{i=1}^N \begin{bmatrix} \tilde{y}_i & U_{\omega_i^c} \end{bmatrix} \begin{bmatrix} c_{ij} \\ \gamma_{ij} \end{bmatrix} \right), \quad (12)$$

which is convex for $p \geq 1$ and can be solved efficiently using convex solvers. Finally, using the solution of (10), we recover the missing entries by finding the best rank-one factorization of each block $A_i$ as in (7), which results in⁴

$$\hat{x}_i = \frac{\sum_{j=1}^N c_{ij} \gamma_{ij}}{\sum_{j=1}^N c_{ij}^2}. \quad (13)$$
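For concreteness, the following is a minimal sketch of the convex program (12) with $p = 2$ and approximate reconstruction $\phi_a$, followed by the recovery step (13). It uses the CVXPY modeling library rather than the ADMM solver employed in the paper; the function name, data layout, and solver defaults are our own choices, the explicit double loop is only practical for small $N$, and the sketch assumes every column has at least one missing entry.

```python
import numpy as np
import cvxpy as cp

def ssc_lifting(Y0, Omega, lam=0.01, mu=0.1):
    """Sketch of (12) with p = 2 and phi = phi_a, plus the recovery (13).

    Y0    : n x N zero-filled data matrix (columns are the tilde-y_i).
    Omega : list of N index arrays; Omega[i] holds the observed rows of column i.
    """
    n, N = Y0.shape
    C = cp.Variable((N, N))                                    # coefficients c_ij
    miss = [np.setdiff1d(np.arange(n), Omega[i]) for i in range(N)]
    Gam = [cp.Variable((len(m), N)) for m in miss]             # columns gamma_ij
    U = [np.eye(n)[:, m] for m in miss]                        # bases U_{omega_i^c}

    group, nuc = 0, 0
    for i in range(N):
        Ai = cp.vstack([cp.reshape(C[i, :], (1, N)), Gam[i]])  # lifted block A_i
        group += cp.sum(cp.norm(Ai, 2, axis=0))    # sum_j ||[c_ij; gamma_ij]||_2
        nuc += cp.normNuc(Ai)                      # convex surrogate of rank one
    recon = 0
    for j in range(N):                             # phi_a on each residual
        rj = sum(C[i, j] * Y0[:, i] + U[i] @ Gam[i][:, j] for i in range(N))
        recon += 0.5 * cp.sum_squares(rj)

    cp.Problem(cp.Minimize(lam * group + mu * nuc + recon),
               [cp.diag(C) == -1]).solve()

    # Recover the missing entries of each point via the factorization (13);
    # the denominator is nonzero since c_ii = -1.
    X_hat = [Gam[i].value @ C.value[i, :] / np.sum(C.value[i, :] ** 2)
             for i in range(N)]
    return C.value, X_hat
```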
In addition, we use the coefficients $\{c_{ij}\}$ to build a similarity graph with weights $w_{ij} = |c_{ij}| + |c_{ji}|$ and obtain the clustering of the data using graph partitioning. It is important to note that we do not need to know the dimensions of the subspaces a priori, since (10) automatically selects the appropriate number of data points from each subspace. Also, it is worth mentioning that we can use $\sum_{j=1}^N \sum_{i=1}^N |c_{ij}|$ instead of the group-sparsity term in (10) and (12).
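The final clustering step is standard spectral clustering on the symmetrized coefficient graph, as in SSC. Below is a minimal sketch using scikit-learn, which is our choice of library; the paper does not prescribe an implementation.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(C, n_clusters):
    """Build the similarity graph w_ij = |c_ij| + |c_ji| and partition it
    with spectral clustering [27]; returns one label per data point."""
    W = np.abs(C) + np.abs(C).T
    np.fill_diagonal(W, 0.0)          # drop the c_jj = -1 self-loops
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed")
    return model.fit_predict(W)
```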
Remark 1 Notice that when all entries of all data points are observed, i.e., $\omega_i^c = \emptyset$, the rank-one constraints in (9) are trivially satisfied. Hence, (10) and (12) with $\mu = 0$ reduce to the $\ell_1$-minimization of SSC. In other words, our framework is a generalization of SSC, which simultaneously finds similarities and missing entries for incomplete data.

Table 1 shows the stable rank⁵ [31] of the blocks $A_i$ of the solution for the synthetic dataset described in the experiments in Section 5. As the results show, the penalized optimization successfully recovers close-to-rank-one solutions for practical values of $\lambda$ and $\mu$.
³ The nuclear norm of $A$, denoted by $\|A\|_*$, is the sum of its singular values, i.e., $\|A\|_* = \sum_i \sigma_i(A)$.
⁴ The denominator is always nonzero, since $c_{ii} = -1$ for all $i$.
⁵ The stable rank of $B$ is defined as $\sum_i \sigma_i^2 / \max_i \sigma_i^2$, where the $\sigma_i$'s are the singular values of $B$.
Table 1: Average stable rank of the matrices $A_i$ for high-rank data, $n = 100$, $L = 12$, $d = 10$, $N = 600$, with $\rho = 0.4$, as explained in Section 5. Notice that the rank of $A_i$ is close to one and, as $\mu$ increases, it gets closer to one.

                 mu = 0.001       mu = 0.01        mu = 0.1
lambda = 0.01    1.015 ± 0.005    1.009 ± 0.005    1.004 ± 0.002
lambda = 0.1     1.021 ± 0.007    1.011 ± 0.006    1.006 ± 0.003
Figure 1: Subset selection and completion via lifting on the Olivetti face dataset. Top: faces from the dataset with missing entries. Bottom: solution of our method on the dataset. We successfully recover missing entries and, at the same time, select a subset of faces as representatives.
Notice that the mixed $\ell_1/\ell_p$-norm in the objective function of (10) and (12) promotes selecting a few nonzero coefficient blocks $[c_{ij} \;\; \gamma_{ij}^\top]^\top$. In other words, we find a representation of each incomplete data point using a few other incomplete data points while, at the same time, finding the missing entries of the selected data points. On the other hand, the rank constraints on the sub-blocks of the solution ensure that the recovered missing entries are globally consistent, i.e., if a data point takes part in the reconstruction of multiple points, the associated missing entries in each representation are the same.
Remark 2 Our lifting framework can also deal with missing entries in other tasks that rely on the self-expressiveness property, i.e., $y_j = \sum_{i=1}^N c_{ij} y_i$. Figure 1 shows the results of extending our method to column subset selection [32, 33] with missing entries. In fact, simultaneously selecting a few data points that reconstruct the entire dataset well and recovering missing entries can be cast as a modification of (10) or (12), where we modify the first term in the objective function so as to select a few nonzero blocks $A_i$.
5 Experiments

We study the performance of our algorithm for completion and clustering of synthetic and real data. We implement (10) and (12) with $\sum_{j=1}^N \sum_{i=1}^N |c_{ij}|$ instead of the group-sparsity term, using the ADMM framework [34, 35]. Unless stated otherwise, we set $\lambda = 0.01$ and $\mu = 0.1$. However, the results are stable for $\lambda \in [0.005, 0.05]$ and $\mu \in [0.01, 0.5]$.
We compare our algorithm, SSC-Lifting, with MFA [20], K-Subspaces with Missing Entries
(KSub-M) [21], Low-Rank Matrix Completion [13] followed by SSC (LRMC+SSC) or LSA [36]
(LRMC+LSA), and SSC using Column-wise Expectation Completion (SSC-CEC) [24]. It is worth
mentioning that in all experiments, we found that the performance of SSC-CEC is slightly better
than SSC using zero-filled data. In addition, as reported in [21], KSub-M generally outperforms
the high-rank matrix completion algorithm in [22], since the latter requires a very large number of
samples, which becomes impractical in high-dimensional problems. We compute
$$\text{Clustering Error} = \frac{\#\,\text{Misclassified points}}{\#\,\text{All points}}, \qquad \text{Completion Error} = \frac{\|\hat{Y} - Y\|_F}{\|Y\|_F}, \quad (14)$$

where $Y$ and $\hat{Y}$ denote, respectively, the true and recovered matrices, and $\|\cdot\|_F$ is the Frobenius norm.
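Both metrics in (14) are straightforward to compute. The sketch below is ours; it matches predicted cluster labels to the ground truth with the Hungarian algorithm (via SciPy), a convention the error definition leaves implicit.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(labels_true, labels_pred):
    """Fraction of misclassified points from Eq. (14), after matching
    predicted labels to true labels with the Hungarian algorithm."""
    K = int(max(labels_true.max(), labels_pred.max())) + 1
    cost = np.zeros((K, K))
    for t, p in zip(labels_true, labels_pred):
        cost[p, t] -= 1                   # reward correctly matched pairs
    rows, cols = linear_sum_assignment(cost)
    mapping = dict(zip(rows, cols))
    mistakes = sum(mapping[p] != t for t, p in zip(labels_true, labels_pred))
    return mistakes / len(labels_true)

def completion_error(Y_true, Y_hat):
    """Relative Frobenius-norm error from Eq. (14)."""
    return np.linalg.norm(Y_hat - Y_true, "fro") / np.linalg.norm(Y_true, "fro")
```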
5.1 Synthetic Experiments

In this section, we evaluate the performance of the different algorithms on synthetic data. We generate $L$ random $d$-dimensional subspaces in $\mathbb{R}^n$ and draw $N_g$ data points, at random, from each subspace. We consider two scenarios: 1) a low-rank data matrix whose columns lie in a union of low-dimensional subspaces; 2) a high-rank data matrix whose columns lie in a union of low-dimensional subspaces. Unless stated otherwise, for low-rank matrices we set $L = 3$ and $d = 5$, hence $Ld = 15 < n = 100$, while for high-rank matrices we set $L = 12$ and $d = 10$, hence $Ld = 120 > n = 100$.
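A minimal sketch of this data-generation protocol follows; the function name, RNG seeding, and return layout are our own choices.

```python
import numpy as np

def union_of_subspaces(n, L, d, Ng, rho, seed=0):
    """Draw Ng points from each of L random d-dimensional subspaces of R^n,
    then drop a fraction rho of all entries uniformly at random."""
    rng = np.random.default_rng(seed)
    cols, labels = [], []
    for ell in range(L):
        basis, _ = np.linalg.qr(rng.standard_normal((n, d)))  # random subspace
        cols.append(basis @ rng.standard_normal((d, Ng)))
        labels += [ell] * Ng
    Y = np.hstack(cols)
    observed = rng.random(Y.shape) >= rho                     # keep 1 - rho
    Y0 = np.where(observed, Y, 0.0)                           # zero-filled data
    return Y, Y0, observed, np.array(labels)
```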
Completion Performance. We generate missing entries by selecting a fraction $\rho$ of the entries of the data matrix uniformly at random and dropping their values.

Figure 2: Completion errors of the different algorithms as a function of $\rho$. Left: low-rank matrices. Middle left: high-rank matrices. Middle right: effect of the ambient space dimension, $n$. Right: effect of the number of data points in each subspace, $N_g$, for low-rank (solid lines) and high-rank (dashed lines) matrices.

The left and middle left plots in Figure 2
show completion errors of the different algorithms for low-rank and high-rank matrices, respectively, as a function of the fraction of missing entries, $\rho$. Notice that in both cases MFA and KSub-M have high errors, which rapidly increase as $\rho$ increases, due to their dependence on initialization and getting trapped in local optima. In both cases, SSC-Lifting outperforms all methods across all values of $\rho$. Specifically, in the low-rank regime, while LRMC and SSC-Lifting have almost zero error for $\rho \leq 0.35$, the performance of LRMC quickly degrades for larger $\rho$'s, while SSC-Lifting performs well for $\rho \leq 0.6$. On the other hand, the performance of LRMC significantly degrades in the high-rank case, with a large gap to SSC-Lifting, which performs well for $\rho < 0.45$. The middle right plot in Figure 2 demonstrates the effect of the ambient space dimension, $n$, for $L = 7$, $d = 5$, $N_g = 100$ and $\rho = 0.3$. Notice that the errors of MFA and KSub-M increase as $n$ increases, due to the larger number of local optima. LRMC has a large error for small values of $n$, where $n$ is smaller than or close to $Ld$, i.e., the high-rank regime. As $n$ increases and the matrices become low-rank, the error decreases. Notice that SSC-Lifting has a low error for $n \geq 40$, demonstrating its effectiveness in handling both low-rank and high-rank matrices. Finally, the right plot in Figure 2 demonstrates the effect of the number of points, $N_g$, for low- and high-rank matrices with $\rho = 0.5$. We do not show the results of MFA and KSub-M, since they have large errors for all $N_g$. Notice that, for all values of $N_g$, SSC-Lifting obtains smaller errors than LRMC, verifying the effectiveness of the sparsity principle for completing the data.
Clustering Performance. Next, we compare the clustering performance. To better study the effect of missing entries, we generate them by selecting a fraction $\delta$ of the data points and, for each selected data point, dropping the values of a fraction $\rho$ of its entries, both uniformly at random. We change $\delta$ in $[0.1, 1.0]$ and $\rho$ in $[0.1, 0.9]$ and, for each pair $(\delta, \rho)$, record the average clustering and completion errors over 20 trials, each with different random subspaces and data points. Figure 3 shows the clustering errors of the different algorithms for low-rank (top row) and high-rank (bottom row) data matrices (completion errors are provided in the supplementary materials). In both cases, MFA performs poorly, due to local optima. While LRMC+SSC, SSC-CEC and SSC-Lifting perform similarly for low-rank matrices, SSC-Lifting performs best among all methods for high-rank matrices. In particular, when the percentage of missing entries, $\rho$, is more than 70%, SSC-Lifting performs significantly better than the other algorithms. It is important to notice that for small values of $(\delta, \rho)$, since the completion errors of SSC-Lifting and LRMC are sufficiently small, the recovered matrices are noisy versions of the original matrices. As a result, the Lasso-type optimizations of SSC and SSC-Lifting succeed in recovering subspace-sparse representations, leading to zero clustering errors. In the high-rank case, SSC-CEC has a higher clustering error than LRMC and SSC-Lifting, which is due to the fact that it relies on a heuristic of shifting the eigenvalues of the kernel matrix to non-negative values.
5.2 Real Experiments on Motion Segmentation

We consider the problem of motion segmentation [37, 38] with missing entries on the Hopkins 155 dataset, which contains 155 sequences of 2 and 3 motions. Since the dataset consists of complete feature trajectories (incomplete trajectories were removed manually to form the dataset), we select a fraction $\rho$ of the feature points across all frames uniformly at random and remove their $x$-$y$ coordinate values. The left plot in Figure 4 shows the clustering error bars of the different algorithms on the dataset as a function of $\rho$. Notice that in all cases MFA and SSC-CEC have large errors, due to, respectively, dependence on initialization and the heuristic convex reformulation. On the other hand, LRMC+SSC and SSC-Lifting perform well, achieving less than 5% error for all values of $\rho$. This comes from the fact that the sequences have at most $L = 3$ motions and the dimension of each motion subspace is at most $d = 4$, hence $Ld \leq 12 \ll 2F$, where $F$ is the number of video frames. Since the data matrix is low-rank and LRMC succeeds, SSC and our method achieve roughly the same errors for different values of $\rho$.
Figure 3: Clustering errors for low-rank matrices (top row) with $L = 3$, $d = 5$, $n = 100$ and high-rank matrices (bottom row) with $L = 12$, $d = 10$, $n = 100$ as a function of $(\delta, \rho)$, where $\delta$ is the fraction of data points with missing entries (vertical axis) and $\rho$ is the fraction of missing entries in each affected point (horizontal axis). Left to right: MFA, SSC-CEC, LRMC+SSC and SSC-Lifting.
Figure 4: Left: Clustering error bars of MFA, LRMC+LSA, LRMC+SSC, SSC-CEC and SSC-Lifting as a function of the fraction of missing entries, $\rho$. Middle: Singular values of the CMU Mocap data reveal that each activity (squat, run, stand, arm-up, jump, drink, punch) lies in a low-dimensional subspace. Right: Average completion errors of MFA, LRMC and SSC-Lifting on the CMU Mocap dataset as a function of $\rho$. Solid lines correspond to $\delta = 0.5$, i.e., 50% of the data have missing entries, while dashed lines correspond to $\delta = 1$, i.e., all data have missing entries.
5.3 Real Experiments on Motion Capture Data

We consider the completion of time-series trajectories from motion capture sensors, where a trajectory consists of different human activities, such as running, jumping, squatting, etc. We use the CMU Mocap dataset, where each data point corresponds to measurements from $n$ sensors at a particular time instant. Since the transition from one activity to another happens gradually, we do not consider clustering. However, as the middle plot in Figure 4 shows, excluding the transition time periods, the data from each activity lie in a low-rank subspace. Since there are typically $L \approx 7$ activities, each having a dimension of $d \approx 8$, and there are $n = 42$ sensors, the data matrix is full-rank, as $Ld \approx 56 > n = 42$. To evaluate the performance of the different algorithms, we select a fraction $\delta \in \{0.5, 1.0\}$ of the data points and remove a fraction $\rho \in \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7\}$ of the entries of each selected point, both uniformly at random. The right plot in Figure 4 shows the completion errors of the different algorithms as a function of $\rho$ for $\delta \in \{0.5, 1.0\}$. Notice that, unlike the previous experiment, since the data matrix is high-rank, LRMC has a large completion error, similar to the synthetic experiments. On the other hand, the SSC-Lifting error is less than 0.1 for $\rho = 0.1$ and less than 0.55 for $\rho = 0.7$. In all cases, for $\delta = 1$, the performance degrades with respect to $\delta = 0.5$. Lastly, it is important to notice that MFA performs slightly better than LRMC, demonstrating the importance of the union of low-dimensional subspaces model for this problem. However, getting trapped in local optima does not allow MFA to take full advantage of such a model, as opposed to SSC-Lifting.
6 Conclusions

We proposed efficient algorithms, based on lifting, for the simultaneous clustering and completion of incomplete multi-subspace data. Through extensive experiments on synthetic and real data, we showed that for low-rank data matrices our algorithm performs on par with or better than low-rank matrix completion methods, while for high-rank data matrices it significantly outperforms existing algorithms. Theoretical guarantees for the proposed method and scaling the algorithm to large data are the subject of our ongoing research.
References
[1] R. Basri and D. Jacobs, "Lambertian reflectance and linear subspaces," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, 2003.
[2] T. Hastie and P. Simard, "Metrics and models for handwritten character recognition," Statistical Science, 1998.
[3] C. Tomasi and T. Kanade, "Shape and motion from image streams under orthography," International Journal of Computer Vision, vol. 9, 1992.
[4] E. Elhamifar and R. Vidal, "Sparse subspace clustering: Algorithm, theory, and applications," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013.
[5] G. Chen and G. Lerman, "Spectral curvature clustering (SCC)," International Journal of Computer Vision, vol. 81, 2009.
[6] A. Zhang, N. Fawaz, S. Ioannidis, and A. Montanari, "Guess who rated this movie: Identifying users through subspace clustering," Uncertainty in Artificial Intelligence (UAI), 2012.
[7] R. Vidal, R. Tron, and R. Hartley, "Multiframe motion segmentation with missing data using PowerFactorization and GPCA," International Journal of Computer Vision, vol. 79, 2008.
[8] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online dictionary learning for sparse coding," in International Conference on Machine Learning, 2009.
[9] D. Park, J. Neeman, J. Zhang, S. Sanghavi, and I. S. Dhillon, "Preference completion: Large-scale collaborative ranking from pairwise comparisons," International Conference on Machine Learning (ICML), 2015.
[10] M. Tipping and C. Bishop, "Probabilistic principal component analysis," Journal of the Royal Statistical Society, vol. 61, 1999.
[11] M. Knott and D. Bartholomew, Latent Variable Models and Factor Analysis. London: Edward Arnold, 1999.
[12] E. J. Candès and B. Recht, "Exact matrix completion via convex optimization," Foundations of Computational Mathematics, vol. 9, 2008.
[13] E. J. Candès and Y. Plan, "Matrix completion with noise," Proceedings of the IEEE, 2009.
[14] R. Keshavan, A. Montanari, and S. Oh, "Matrix completion from noisy entries," IEEE Transactions on Information Theory, 2010.
[15] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi, "Robust matrix completion with corrupted columns," in International Conference on Machine Learning (ICML), 2011.
[16] S. Bhojanapalli and P. Jain, "Universal matrix completion," International Conference on Machine Learning (ICML), 2013.
[17] K. Y. Chiang, C. J. Hsieh, and I. S. Dhillon, "Matrix completion with noisy side information," Neural Information Processing Systems (NIPS), 2015.
[18] M. Tipping and C. Bishop, "Mixtures of probabilistic principal component analyzers," Neural Computation, vol. 11, 1999.
[19] A. Gruber and Y. Weiss, "Multibody factorization with uncertainty and missing data using the EM algorithm," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2004.
[20] Z. Ghahramani and G. E. Hinton, "The EM algorithm for mixtures of factor analyzers," Technical Report CRG-TR-96-1, Dept. of Computer Science, Univ. of Toronto, 1996.
[21] L. Balzano, A. Szlam, B. Recht, and R. Nowak, "K-subspaces with missing data," IEEE Statistical Signal Processing Workshop, 2012.
[22] B. Eriksson, L. Balzano, and R. Nowak, "High rank matrix completion," International Conference on Artificial Intelligence and Statistics, 2012.
[23] E. J. Candès, L. Mackey, and M. Soltanolkotabi, "From robust subspace clustering to full-rank matrix completion," Unpublished abstract, 2014.
[24] C. Yang, D. Robinson, and R. Vidal, "Sparse subspace clustering with missing entries," International Conference on Machine Learning (ICML), 2015.
[25] R. Ganti, L. Balzano, and R. Willett, "Matrix completion under monotonic single index models," Neural Information Processing Systems (NIPS), 2015.
[26] E. Elhamifar and R. Vidal, "Sparse subspace clustering," in IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[27] A. Ng, Y. Weiss, and M. Jordan, "On spectral clustering: analysis and an algorithm," in Neural Information Processing Systems, 2001.
[28] M. Soltanolkotabi, E. Elhamifar, and E. J. Candès, "Robust subspace clustering," Annals of Statistics, 2014.
[29] B. Zhao, G. Rocha, and B. Yu, "The composite absolute penalties family for grouped and hierarchical selection," The Annals of Statistics, vol. 37, 2009.
[30] R. Jenatton, J. Y. Audibert, and F. Bach, "Structured variable selection with sparsity-inducing norms," Journal of Machine Learning Research, vol. 12, 2011.
[31] J. Tropp, "Column subset selection, matrix factorization, and eigenvalue optimization," in ACM-SIAM Symp. on Discrete Algorithms (SODA), 2009.
[32] E. Elhamifar, G. Sapiro, and S. S. Sastry, "Dissimilarity-based sparse subset selection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.
[33] E. Elhamifar, G. Sapiro, and R. Vidal, "See all by looking at a few: Sparse modeling for finding representative objects," in IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[34] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3, 2010.
[35] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximations," Comp. Math. Appl., vol. 2, 1976.
[36] J. Yan and M. Pollefeys, "A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate," in European Conf. on Computer Vision, 2006.
[37] J. Costeira and T. Kanade, "A multibody factorization method for independently moving objects," Int. Journal of Computer Vision, vol. 29, 1998.
[38] K. Kanatani, "Motion segmentation by subspace separation and model selection," in IEEE Int. Conf. on Computer Vision, vol. 2, 2001.
Safe Exploration in Finite Markov Decision Processes
with Gaussian Processes
Matteo Turchetta
ETH Zurich
[email protected]
Felix Berkenkamp
ETH Zurich
[email protected]
Andreas Krause
ETH Zurich
[email protected]
Abstract
In classical reinforcement learning agents accept arbitrary short term loss for long
term gain when exploring their environment. This is infeasible for safety critical
applications such as robotics, where even a single unsafe action may cause system
failure or harm the environment. In this paper, we address the problem of safely
exploring finite Markov decision processes (MDP). We define safety in terms of an
a priori unknown safety constraint that depends on states and actions and satisfies
certain regularity conditions expressed via a Gaussian process prior. We develop a
novel algorithm, SafeMDP, for this task and prove that it completely explores
the safely reachable part of the MDP without violating the safety constraint. To
achieve this, it cautiously explores safe states and actions in order to gain statistical
confidence about the safety of unvisited state-action pairs from noisy observations
collected while navigating the environment. Moreover, the algorithm explicitly
considers reachability when exploring the MDP, ensuring that it does not get stuck
in any state with no safe way out. We demonstrate our method on digital terrain
models for the task of exploring an unknown map with a rover.
1 Introduction
Today?s robots are required to operate in variable and often unknown environments. The traditional
solution is to specify all potential scenarios that a robot may encounter during operation a priori. This
is time consuming or even infeasible. As a consequence, robots need to be able to learn and adapt to
unknown environments autonomously [10, 2]. While exploration algorithms are known, safety is still
an open problem in the development of such systems [18]. In fact, most learning algorithms allow
robots to make unsafe decisions during exploration. This can damage the platform or its environment.
In this paper, we provide a solution to this problem and develop an algorithm that enables agents to
safely and autonomously explore unknown environments. Specifically, we consider the problem of
exploring a Markov decision process (MDP), where it is a priori unknown which state-action pairs
are safe. Our algorithm cautiously explores this environment without taking actions that are unsafe or
may render the exploring agent stuck.
Related Work. Safe exploration is an open problem in the reinforcement learning community and
several definitions of safety have been proposed [16]. In risk-sensitive reinforcement learning, the
goal is to maximize the expected return for the worst case scenario [5]. However, these approaches
only minimize risk and do not treat safety as a hard constraint. For example, Geibel and Wysotzki [7]
define risk as the probability of driving the system to a previously known set of undesirable states.
The main difference to our approach is that we do not assume the undesirable states to be known
a priori. García and Fernández [6] propose to ensure safety by means of a backup policy; that is,
a policy that is known to be safe in advance. Our approach is different, since it does not require a
backup policy but only a set of initially safe states from which the agent starts to explore. Another
approach that makes use of a backup policy is shown by Hans et al. [9], where safety is defined in
terms of a minimum reward, which is learned from data.
29th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Moldovan and Abbeel [14] provide probabilistic safety guarantees at every time step by optimizing
over ergodic policies; that is, policies that let the agent recover from any visited state. This approach
needs to solve a large linear program at every time step, which is computationally demanding even
for small state spaces. Nevertheless, the idea of ergodicity also plays an important role in our method.
In the control community, safety is mostly considered in terms of stability or constraint satisfaction
of controlled systems. Akametalu et al. [1] use reachability analysis to ensure stability under the
assumption of bounded disturbances. The work in [3] uses robust control techniques in order to
ensure robust stability for model uncertainties, while the uncertain model is improved.
Another field that has recently considered safety is Bayesian optimization [13]. There, in order to
find the global optimum of an a priori unknown function [21], regularity assumptions in the form of
a Gaussian process (GP) [17] prior are made. The corresponding GP posterior distribution over
the unknown function is used to guide evaluations to informative locations. In this setting, safety
centered approaches include the work of Sui et al. [22] and Schreiter et al. [20], where the goal is
to find the safely reachable optimum without violating an a priori unknown safety constraint at any
evaluation. To achieve this, the function is cautiously explored, starting from a set of points that is
known to be safe initially. The method in [22] was applied to the field of robotics to safely optimize
the controller parameters of a quadrotor vehicle [4]. However, they considered a bandit setting, where
at each iteration any arm can be played. In contrast, we consider exploring an MDP, which introduces
restrictions in terms of reachability that have not been considered in Bayesian optimization before.
Contribution. We introduce SafeMDP, a novel algorithm for safe exploration in MDPs. We model
safety via an a priori unknown constraint that depends on state-action pairs. Starting from an initial
set of states and actions that are known to satisfy the safety constraint, the algorithm exploits the
regularity assumptions on the constraint function in order to determine if nearby, unvisited states are
safe. This leads to safe exploration, where only state-actions pairs that are known to fulfil the safety
constraint are evaluated. The main contribution consists of extending the work on safe Bayesian
optimization in [22] from the bandit setting to deterministic, finite MDPs. In order to achieve this, we
explicitly consider not only the safety constraint, but also the reachability properties induced by the
MDP dynamics. We provide a full theoretical analysis of the algorithm. It provably enjoys similar
safety guarantees in terms of ergodicity as discussed in [14], but at a reduced computational cost.
The reason for this is that our method separates safety from the reachability properties of the MDP.
Beyond this, we prove that SafeMDP is able to fully explore the safely reachable region of the MDP, without getting stuck or violating the safety constraint, with high probability. To the best of our knowledge, this is the first full exploration result in MDPs subject to a safety constraint. We
validate our method on an exploration task, where a rover has to explore an a priori unknown map.
2 Problem Statement
In this section, we define our problem and assumptions. The unknown environment is modeled as a finite, deterministic MDP [23]. Such an MDP is a tuple $\langle \mathcal{S}, \mathcal{A}(\cdot), f(s, a), r(s, a) \rangle$, with a finite set of states $\mathcal{S}$, a set of state-dependent actions $\mathcal{A}(\cdot)$, a known, deterministic transition model $f(s, a)$, and a reward function $r(s, a)$. In the typical reinforcement learning framework, the goal is to maximize
the cumulative reward. In this paper, we consider the problem of safely exploring the MDP. Thus,
instead of aiming to maximize the cumulative rewards, we define r(s, a) as an a priori unknown
safety feature. Although r(s, a) is unknown, we make regularity assumptions about it to make the
problem tractable. When traversing the MDP, at each discrete time step, k, the agent has to decide
which action and thereby state to visit next. We assume that the underlying system is safety-critical
and that for any visited state-action pair, (sk , ak ), the unknown, associated safety feature, r(sk , ak ),
must be above a safety threshold, h. While the assumption of deterministic dynamics does not hold
for general MDPs, in our framework, uncertainty about the environment is captured by the safety
feature. If requested, the agent can obtain noisy measurements of the safety feature, r(sk , ak ), by
taking action $a_k$ in state $s_k$. The index $t$ is used to index measurements, while $k$ denotes movement steps. Typically $k \geq t$, since the agent takes one or more movement steps between measurements.
It is hopeless to achieve the goal of safe exploration unless the agent starts in a safe location. Hence,
we assume that the agent starts in an initial set of state-action pairs, $S_0$, that is known to be safe a
priori. The goal is to identify the maximum safely reachable region starting from S0 , without visiting
any unsafe states. For clarity of exposition, we assume that safety depends on states only; that
is, r(s, a) = r(s). We provide an extension to safety features that also depend on actions in Sec. 3.
Figure 1: Illustration of the set operators with $\bar{S} = \{\bar{s}_1, \bar{s}_2\}$. The set $S = \{s\}$ can be reached from $\bar{s}_2$ in one step and from $\bar{s}_1$ in two steps, while only the state $\bar{s}_1$ can be reached from $s$. Visiting $\bar{s}_1$ is safe; that is, it is above the safety threshold, is reachable, and there exists a safe return path through $\bar{s}_2$.
Assumptions on the reward function. Ensuring that all visited states are safe without any prior knowledge about the safety feature is an impossible task (e.g., if the safety feature is discontinuous). However, many practical safety features exhibit some regularity, where similar states lead to similar values of $r$.

In the following, we assume that $\mathcal{S}$ is endowed with a positive definite kernel function $k(\cdot, \cdot)$ and that the function $r(\cdot)$ has bounded norm in the associated Reproducing Kernel Hilbert Space (RKHS) [19]. The norm induced by the inner product of the RKHS indicates the smoothness of functions with respect to the kernel. This assumption allows us to model $r$ as a GP [21], $r(s) \sim \mathcal{GP}(\mu(s), k(s, s'))$. A GP is a probability distribution over functions that is fully specified by its mean function $\mu(s)$ and its covariance function $k(s, s')$. The randomness expressed by this distribution captures our uncertainty about the environment. We assume $\mu(s) = 0$ for all $s \in \mathcal{S}$, without loss of generality. The posterior distribution over $r(\cdot)$ can be computed analytically, based on $t$ measurements at states $D_t = \{s_1, \dots, s_t\} \subseteq \mathcal{S}$ with measurements $y_t = [r(s_1) + \omega_1, \dots, r(s_t) + \omega_t]^\top$ that are corrupted by zero-mean Gaussian noise, $\omega_t \sim \mathcal{N}(0, \sigma^2)$. The posterior is a GP distribution with mean $\mu_t(s) = k_t(s)^\top (K_t + \sigma^2 I)^{-1} y_t$, variance $\sigma_t^2(s) = k_t(s, s)$, and covariance $k_t(s, s') = k(s, s') - k_t(s)^\top (K_t + \sigma^2 I)^{-1} k_t(s')$, where $k_t(s) = [k(s_1, s), \dots, k(s_t, s)]^\top$ and $K_t$ is the positive definite kernel matrix $[k(s, s')]_{s, s' \in D_t}$. The identity matrix is denoted by $I \in \mathbb{R}^{t \times t}$.
We also assume $L$-Lipschitz continuity of the safety function with respect to some metric $d(\cdot, \cdot)$ on $\mathcal{S}$. This is guaranteed by many commonly used kernels with high probability [21, 8].
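For reference, the posterior above amounts to a few lines of linear algebra. The sketch below is ours (using a Cholesky factorization for numerical stability) and assumes that `kernel(A, B)` returns the Gram matrix between two sets of states.

```python
import numpy as np

def gp_posterior(S_obs, y, S_query, kernel, noise_var):
    """Zero-mean GP posterior mean and variance at the query states,
    following mu_t and sigma_t^2 above."""
    K = kernel(S_obs, S_obs) + noise_var * np.eye(len(S_obs))
    Ks = kernel(S_query, S_obs)                  # one row k_t(s)^T per query
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha
    V = np.linalg.solve(L, Ks.T)
    var = np.diag(kernel(S_query, S_query)) - np.sum(V**2, axis=0)
    return mu, var
```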
Goal. In this section, we define the goal of safe exploration. In particular, we ask what is the best that any algorithm may hope to achieve. Since we only observe noisy measurements, it is impossible to know the underlying safety function $r(\cdot)$ exactly after a finite number of measurements. Instead, we consider algorithms that only have knowledge of $r(\cdot)$ up to some statistical confidence $\epsilon$. Based on this confidence within some safe set $S$, states with small distance to $S$ can be classified as satisfying the safety constraint using the Lipschitz continuity of $r(\cdot)$. The resulting set of safe states is

$$R^{\text{safe}}_\epsilon(S) = S \cup \{ s \in \mathcal{S} \mid \exists s' \in S : r(s') - \epsilon - L d(s, s') \geq h \}, \quad (1)$$
which contains the states that can be classified as safe given the information about the states in $S$. While (1) considers the safety constraint, it does not consider any restrictions imposed by the structure of the MDP. In particular, we may not be able to visit every state in $R^{\text{safe}}_\epsilon(S)$ without visiting an unsafe state first. As a result, the agent is further restricted to

$$R^{\text{reach}}(S) = S \cup \{ s \in \mathcal{S} \mid \exists s' \in S,\, a \in \mathcal{A}(s') : s = f(s', a) \}, \quad (2)$$

the set of all states that can be reached from the safe set in one step. These states are called the one-step safely reachable states. However, even restricted to this set, the agent may still get stuck in a state without any safe actions. We define

$$R^{\text{ret}}(\bar{S}, S) = S \cup \{ s \in \bar{S} \mid \exists a \in \mathcal{A}(s) : f(s, a) \in S \} \quad (3)$$

as the set of states that are able to return to a set $S$ through some other set of states, $\bar{S}$, in one step. In particular, we care about the ability to return to a certain set through a set of safe states $\bar{S}$. Therefore, these are called the one-step safely returnable states. In general, the return routes may require taking more than one action, see Figure 1. The $n$-step returnability operator $R^{\text{ret}}_n(\bar{S}, S) = R^{\text{ret}}(\bar{S}, R^{\text{ret}}_{n-1}(\bar{S}, S))$, with $R^{\text{ret}}_1(\bar{S}, S) = R^{\text{ret}}(\bar{S}, S)$, considers these longer return routes by repeatedly applying the return operator $R^{\text{ret}}$ in (3) $n$ times. The limit $\bar{R}^{\text{ret}}(\bar{S}, S) = \lim_{n \to \infty} R^{\text{ret}}_n(\bar{S}, S)$ contains all the states that can reach the set $S$ through an arbitrarily long path in $\bar{S}$.
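Since the transition model is known, these set operators are simple fixed-point computations on the transition graph. The sketch below is ours; `actions(s)` is assumed to enumerate $\mathcal{A}(s)$.

```python
def one_step_reach(S, actions, f):
    """R_reach(S) from Eq. (2): states reachable from S in one step."""
    reach = set(S)
    for s in S:
        for a in actions(s):
            reach.add(f(s, a))
    return reach

def returnable(S_bar, S, actions, f):
    """Fixed point of the n-step return operator, i.e. bar-R_ret(S_bar, S):
    all states in S_bar that can reach S along a path inside S_bar."""
    ret = set(S)
    changed = True
    while changed:
        changed = False
        for s in set(S_bar) - ret:
            if any(f(s, a) in ret for a in actions(s)):
                ret.add(s)
                changed = True
    return ret
```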
Algorithm 1 Safe exploration in MDPs (SafeMDP)
Inputs: states $\mathcal{S}$, actions $\mathcal{A}(\cdot)$, transition function $f(s, a)$, kernel $k(s, s')$, safety threshold $h$, Lipschitz constant $L$, safe seed $S_0$.
  $C_0(s) \leftarrow [h, \infty)$ for all $s \in S_0$
  for $t = 1, 2, \dots$ do
    $S_t \leftarrow \{ s \in \mathcal{S} \mid \exists s' \in \bar{S}_{t-1} : l_t(s') - L d(s, s') \geq h \}$
    $\bar{S}_t \leftarrow \{ s \in S_t \mid s \in R^{\text{reach}}(\bar{S}_{t-1}),\; s \in \bar{R}^{\text{ret}}(S_t, \bar{S}_{t-1}) \}$
    $G_t \leftarrow \{ s \in \bar{S}_t \mid g_t(s) > 0 \}$
    $s_t \leftarrow \operatorname{argmax}_{s \in G_t} w_t(s)$
    Safe Dijkstra in $\bar{S}_t$ from $s_{t-1}$ to $s_t$
    Update GP with $s_t$ and $y_t \leftarrow r(s_t) + \omega_t$
    if $G_t = \emptyset$ or $\max_{s \in G_t} w_t(s) \leq \epsilon$ then break
For safe exploration of MDPs, all of the above are requirements; that is, any state that we may want to visit needs to be safe (satisfy the safety constraint), reachable, and we must be able to return to safe states from this new state. Thus, any algorithm that aims to safely explore an MDP is only allowed to visit states in

$$R_\epsilon(S) = R^{\text{safe}}_\epsilon(S) \cap R^{\text{reach}}(S) \cap \bar{R}^{\text{ret}}\big(R^{\text{safe}}_\epsilon(S), S\big), \quad (4)$$

which is the intersection of the three safety-relevant sets. Given a safe set $S$ that fulfills the safety requirements, $\bar{R}^{\text{ret}}(R^{\text{safe}}_\epsilon(S), S)$ is the set of states from which we can return to $S$ by only visiting states that can be classified as above the safety threshold. By including it in the definition of $R_\epsilon(S)$, we avoid the agent getting stuck in a state without an action that leads to another safe state.

Knowledge about the safety feature in $S$ up to $\epsilon$ accuracy thus allows us to expand the set of safe ergodic states to $R_\epsilon(S)$. Any algorithm that has the goal of exploring the state space should consequently explore these newly available safe states and gain new knowledge about the safety feature to potentially enlarge the safe set further. The safe set after $n$ such expansions can be found by repeatedly applying the operator in (4): $R^n_\epsilon(S) = R_\epsilon(R^{n-1}_\epsilon(S))$, with $R^1_\epsilon(S) = R_\epsilon(S)$. Ultimately, the size of the safe set is bounded by surrounding unsafe states or the number of states in $\mathcal{S}$. As a result, the biggest set that any algorithm may classify as safe without visiting unsafe states is given by taking the limit, $\bar{R}_\epsilon(S) = \lim_{n \to \infty} R^n_\epsilon(S)$.

Thus, given a tolerance level $\epsilon$ and an initial safe seed set $S_0$, $\bar{R}_\epsilon(S_0)$ is the set of states that any algorithm may hope to classify as safe. Let $S_t$ denote the set of states that an algorithm determines to be safe at iteration $t$. In the following, we will refer to complete, safe exploration whenever an algorithm fulfills $\bar{R}_\epsilon(S_0) \subseteq \lim_{t \to \infty} S_t \subseteq \bar{R}_0(S_0)$; that is, the algorithm classifies every safely reachable state up to $\epsilon$ accuracy as safe, without misclassification or visiting unsafe states.
3 SafeMDP Algorithm
We start by giving a high-level overview of the method. The SafeMDP algorithm relies on a GP model of $r$ to make predictions about the safety feature and uses the predictive uncertainty to guide the safe exploration. In order to guarantee safety, it maintains two sets. The first set, $S_t$, contains all states that can be classified as satisfying the safety constraint using the GP posterior, while the second one, $\bar{S}_t$, additionally considers the ability to reach points in $S_t$ and the ability to safely return to the previous safe set, $\bar{S}_{t-1}$. The algorithm ensures safety and ergodicity by only visiting states in $\bar{S}_t$. In order to expand the safe region, the algorithm visits states in $G_t \subseteq \bar{S}_t$, a set of candidate states that, if visited, could expand the safe set. Specifically, the algorithm selects the most uncertain state in $G_t$, which is the safe state that we can gain the most information about. We move to this state via the shortest safe path, which is guaranteed to exist (Lemma 2). The algorithm is summarized in Algorithm 1.

Initialization. The algorithm relies on an initial safe set $S_0$ as a starting point to explore the MDP. These states must be safe; that is, $r(s) \geq h$ for all $s \in S_0$. They must also fulfill the reachability and returnability requirements from Sec. 2. Consequently, for any two states $s, s' \in S_0$, there must exist a path in $S_0$ that connects them: $s' \in \bar{R}^{\text{ret}}(S_0, \{s\})$. While this may seem restrictive, the requirement is, for example, fulfilled by a single state with an action that leads back to the same state.
Figure 2: (a) States are classified as safe (above the safety constraint, dashed line) according to the confidence intervals of the GP model (red bar). States in the green bar can expand the safe set if sampled, $G_t$. (b) Modified MDP model that is used to encode safety features that depend on actions. In this model, actions lead to abstract action-states $s_a$, which only have one available action that leads to $f(s, a)$.
Classification. In order to safely explore the MDP, the algorithm must determine which states are safe without visiting them. The regularity assumptions introduced in Sec. 2 allow us to model the safety feature as a GP, so that we can use the uncertainty estimate of the GP model to determine a confidence interval within which the true safety function lies with high probability. For every state $s$, this confidence interval has the form $Q_t(s) = \left[ \mu_{t-1}(s) \pm \beta_t^{1/2} \sigma_{t-1}(s) \right]$, where $\beta_t$ is a positive scalar that determines the amplitude of the interval. We discuss how to select $\beta_t$ in Sec. 4.

Rather than defining high-probability bounds on the values of $r(s)$ directly in terms of $Q_t$, we consider the intersection of the sets $Q_t$ up to iteration $t$, $C_t(s) = Q_t(s) \cap C_{t-1}(s)$, with $C_0(s) = [h, \infty)$ for safe states $s \in S_0$ and $C_0(s) = \mathbb{R}$ otherwise. This choice ensures that the set of states that we classify as safe does not shrink over iterations and is justified by the selection of $\beta_t$ in Sec. 4. Based on these confidence intervals, we define a lower bound, $l_t(s) = \min C_t(s)$, and an upper bound, $u_t(s) = \max C_t(s)$, on the values that the safety feature $r(s)$ is likely to take based on the data obtained up to iteration $t$. Based on these lower bounds, we define

$$S_t = \{ s \in \mathcal{S} \mid \exists s' \in \bar{S}_{t-1} : l_t(s') - L d(s, s') \geq h \} \quad (5)$$

as the set of states that fulfill the safety constraint on $r$ with high probability, using the Lipschitz constant to generalize beyond the current safe set. Based on this classification, the set of ergodic safe states is the set of states that achieve the safety threshold and, additionally, fulfill the reachability and returnability properties discussed in Sec. 2:

$$\bar{S}_t = \left\{ s \in S_t \mid s \in R^{\text{reach}}(\bar{S}_{t-1}) \cap \bar{R}^{\text{ret}}(S_t, \bar{S}_{t-1}) \right\}. \quad (6)$$
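Over a finite state space, (5) and (6) reduce to a few vectorized set operations. The sketch below is ours and reuses the `one_step_reach` and `returnable` helpers from the sketch after Sec. 2.

```python
import numpy as np

def safe_sets(l, prev_safe, D, h, L, actions, f, states):
    """Compute S_t (Eq. (5)) and bar-S_t (Eq. (6)) as boolean masks.

    l         : (n,) lower confidence bounds l_t(s)
    prev_safe : (n,) boolean mask of the previous ergodic safe set bar-S_{t-1}
    D         : (n, n) pairwise distances d(s, s')
    states    : list of the n states, in the same order as l, prev_safe, D
    """
    idx = np.where(prev_safe)[0]
    # l_t(s') - L d(s, s') >= h for at least one s' in the previous safe set
    S_t = (l[idx][None, :] - L * D[:, idx] >= h).any(axis=1)

    reach = one_step_reach({states[i] for i in idx}, actions, f)
    ret = returnable({states[i] for i in np.where(S_t)[0]},
                     {states[i] for i in idx}, actions, f)
    S_bar_t = S_t & np.array([s in reach and s in ret for s in states])
    return S_t, S_bar_t
```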
Expanders. With the set of safe states defined, the task of the algorithm is to identify and explore states that might expand the set of states that can be classified as safe. We use the uncertainty estimate of the GP in order to define an optimistic set of expanders,

$$G_t = \{ s \in \bar{S}_t \mid g_t(s) > 0 \}, \quad (7)$$

where $g_t(s) = \left| \{ s' \in \mathcal{S} \setminus S_t \mid u_t(s) - L d(s, s') \geq h \} \right|$. The function $g_t(s)$ is positive whenever an optimistic measurement at $s$, equal to the upper confidence bound $u_t(s)$, would allow us to determine that a previously unsafe state indeed has a value $r(s')$ above the safety threshold. Intuitively, sampling $s$ might lead to the expansion of $S_t$ and thereby $\bar{S}_t$. The set $G_t$ explicitly considers the expansion of the safe set as the exploration goal; see Figure 2a for a graphical illustration.
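The counting function $g_t$ vectorizes in the same way as the classifier above; the sketch below is ours.

```python
import numpy as np

def expanders(u, S_t, S_bar_t, D, h, L):
    """G_t from Eq. (7): ergodic safe states whose optimistic value u_t(s)
    could certify at least one state currently outside S_t."""
    outside = np.where(~S_t)[0]
    g = (u[:, None] - L * D[:, outside] >= h).sum(axis=1)
    return S_bar_t & (g > 0)
```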
Sampling and shortest safe path. The remaining part of the algorithm is concerned with selecting safe states to evaluate and finding a safe path in the MDP that leads towards them. The goal is to visit states that allow the safe set to expand as quickly as possible, so that we do not waste resources when exploring the MDP. We use the GP posterior uncertainty about the states in $G_t$ in order to make this choice. At each iteration $t$, we select as the next target sample the state with the highest variance in $G_t$, $s_t = \operatorname{argmax}_{s \in G_t} w_t(s)$, where $w_t(s) = u_t(s) - l_t(s)$. This choice is justified because, while all points in $G_t$ are safe and can potentially enlarge the safe set, based on one noisy sample we can gain the most information from the state that we are most uncertain about. This design choice maximizes the knowledge acquired with every sample, but can lead to long paths between measurements within the safe region. Given $s_t$, we use Dijkstra's algorithm within the set $\bar{S}_t$ in order to find the shortest safe path to the target from the current state, $s_{t-1}$. Since we require reachability and returnability for all safe states, such a path is guaranteed to exist. We terminate the algorithm when we reach the desired accuracy; that is, when $\max_{s \in G_t} w_t(s) \leq \epsilon$.
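Since the dynamics are known and deterministic, the path search is plain Dijkstra restricted to the ergodic safe set (with unit step costs it reduces to breadth-first search). The sketch below is ours; by Lemma 2, a path is guaranteed to exist whenever start and goal are both in the safe set.

```python
import heapq

def safe_shortest_path(start, goal, is_safe, actions, f):
    """Dijkstra over the known dynamics, expanding only safe states;
    returns the list of states from start to goal."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, s = heapq.heappop(queue)
        if s == goal:
            break
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for a in actions(s):
            s2 = f(s, a)
            if not is_safe(s2):
                continue
            if d + 1.0 < dist.get(s2, float("inf")):
                dist[s2], prev[s2] = d + 1.0, s
                heapq.heappush(queue, (d + 1.0, s2))
    path, s = [goal], goal
    while s != start:
        s = prev[s]
        path.append(s)
    return path[::-1]
```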
Action-dependent safety. So far, we have considered safety features that only depend on the states, $r(s)$. In general, safety can also depend on the actions, $r(s, a)$. In this section, we introduce a modified MDP that captures these dependencies without modifying the algorithm. The modified MDP is equivalent to the original one in terms of dynamics, $f(s, a)$. However, we introduce an additional action-state $s_a$ for each action in the original MDP. When we start in a state $s$ and take action $a$, we first transition to the corresponding action-state and from there transition to $f(s, a)$ deterministically. This model is illustrated in Figure 2b. Safety features that depend on action-states, $s_a$, are equivalent to action-dependent safety features. The SafeMDP algorithm can be used on this modified MDP without modification. See the experiments in Sec. 5 for an example.
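The construction only rewires the transition function. Below is a sketch of the lifting, with action-states represented as `(state, action)` tuples; this representation is our own choice and assumes the original states are not themselves tuples.

```python
def lift_action_states(states, actions, f):
    """Build the modified MDP of Fig. 2b: taking action a in state s first
    visits the abstract action-state (s, a), which has a single action
    that deterministically leads to f(s, a)."""
    action_states = [(s, a) for s in states for a in actions(s)]
    new_states = list(states) + action_states

    def new_actions(x):
        if isinstance(x, tuple):          # action-state: one continuation
            return ["continue"]
        return actions(x)

    def new_f(x, a):
        if isinstance(x, tuple):          # (s, a) -> f(s, a)
            return f(x[0], x[1])
        return (x, a)                     # s --a--> action-state (s, a)
    return new_states, new_actions, new_f
```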
4 Theoretical Results
The safety and exploration aspects of the algorithm presented in the previous section rely on the correctness of the confidence intervals $C_t(s)$. In particular, they require that the true value of the safety feature, $r(s)$, lies within $C_t(s)$ with high probability for all $s \in \mathcal{S}$ and all iterations $t > 0$. Furthermore, these confidence intervals have to shrink sufficiently fast over time. The probability of $r$ taking values within the confidence intervals depends on the scaling factor $\beta_t$. This scaling factor trades off conservativeness in the exploration against the probability of unsafe states being visited. Appropriate selection of $\beta_t$ has been studied by Srinivas et al. [21] in the multi-armed bandit setting. Even though our framework is different, their setting can be applied to our case. We choose

$$\beta_t = 2B + 300\, \gamma_t \log^3(t/\delta), \quad (8)$$

where $B$ is the bound on the RKHS norm of the function $r(\cdot)$, $\delta$ is the probability of visiting unsafe states, and $\gamma_t$ is the maximum mutual information that can be gained about $r(\cdot)$ from $t$ noisy observations; that is, $\gamma_t = \max_{|A| \leq t} I(r, y_A)$. The information capacity $\gamma_t$ has a sublinear dependence on $t$ for many commonly used kernels [21]. The choice of $\beta_t$ in (8) is justified by the following lemma, which follows from [21, Theorem 6]:
following Lemma, which follows from [21, Theorem 6]:
Lemma 1. Assume that ‖r‖_k² ≤ B, and that the noise ω_t is zero-mean conditioned on the history,
as well as uniformly bounded by σ for all t > 0. If β_t is chosen as in (8), then, for all t > 0 and
all s ∈ S, it holds with probability at least 1 − δ that r(s) ∈ C_t(s).
This lemma states that, for β_t as in (8), the safety function r(s) takes values within the confidence
intervals C_t(s) with high probability. Now we show that the safe shortest path problem always has
a solution:

Lemma 2. Assume that S_0 ≠ ∅ and that, for all states s, s' ∈ S_0, s ∈ R^ret(S_0, {s'}). Then, when
using Algorithm 1 under the assumptions in Theorem 1, for all t > 0 and for all states s, s' ∈ S̄_t,
we have s ∈ R^ret(S_t, {s'}).
This lemma states that, given an initial safe set that fulfills the initialization requirements, we can
always find a policy that drives us from any state in S̄_t to any other state in S̄_t without leaving the set
of safe states, St . Lemmas 1 and 2 have a key role in ensuring safety during exploration and, thus, in
our main theoretical result:
Theorem 1. Assume that r(·) is L-Lipschitz continuous and that the assumptions of Lemma 1
hold. Also, assume that S_0 ≠ ∅, r(s) ≥ h for all s ∈ S_0, and that, for any two states s, s' ∈ S_0,
s' ∈ R^ret(S_0, {s}). Choose β_t as in (8). Then, with probability at least 1 − δ, we have r(s) ≥ h for
any s along any state trajectory induced by Algorithm 1 on an MDP with transition function f(s, a).
Moreover, let t* be the smallest integer such that t* / (β_{t*} γ_{t*}) ≥ C |R̄_0(S_0)| / ε², with
C = 8 / log(1 + σ⁻²). Then there exists a t_0 ≤ t* such that, with probability at least 1 − δ,
R̄_ε(S_0) ⊆ S̄_{t_0} ⊆ R̄_0(S_0).
Theorem 1 states that Algorithm 1 performs safe and complete exploration of the state space; that
is, it explores the maximum reachable safe set without visiting unsafe states. Moreover, for any
desired accuracy ? and probability of failure , the safely reachable region can be found within a
finite number of observations. This bound depends on the information capacity t , which in turn
depends on the kernel. If the safety feature is allowed to change rapidly across states, the information
capacity will be larger than if the safety feature was smooth. Intuitively, the less prior knowledge
the kernel encodes, the more careful we have to be when exploring the MDP, which requires more
measurements.
5 Experiments
In this section, we demonstrate Algorithm 1 on an exploration task. We consider the setting in [14],
the exploration of the surface of Mars with a rover. The code for the experiments is available
at http://github.com/befelix/SafeMDP.
For space exploration, communication delays between the rover and the operator on Earth can be
prohibitive. Thus, it is important that the robot can act autonomously and explore the environment
without risking unsafe behavior. For the experiment, we consider the Mars Science Laboratory
(MSL) [11], a rover deployed on Mars. Due to communication delays, the MSL can travel 20 meters
before it can obtain new instructions from an operator. It can climb a maximum slope of 30° [15,
Sec. 2.1.3]. In our experiments we use digital terrain models of the surface of Mars from the High
Resolution Imaging Science Experiment (HiRISE), which have a resolution of one meter [12].
As opposed to the experiments considered in [14], we do not have to subsample or smoothen the data
in order to achieve good exploration results. This is due to the flexibility of the GP framework that
considers noisy measurements. Therefore, every state in the MDP represents a d ? d square area
with d = 1 m, as opposed to d = 20 m in [14].
At every state, the agent can take one of four actions: up, down, left, and right. If the rover
attempts to climb a slope that is steeper than 30°, it fails and may be damaged. Otherwise it moves
deterministically to the desired neighboring state. In this setting, we define safety over state transitions
by using the extension introduced in Sec. 3. The safety feature over the transition from s to s' is defined
in terms of the height difference between the two states, H(s) − H(s'). Given the maximum slope of
30° that the rover can climb, the safety threshold is set at a conservative h = −d tan(25°). This
encodes that it is unsafe for the robot to climb hills that are too steep. In particular, while the MDP
dynamics assume that Mars is flat and every state can be reached, the safety constraint depends on the
a priori unknown heights. Therefore, under the prior belief, it is unknown which transitions are safe.
We model the height distribution, H(s), as a GP with a Matérn kernel with ν = 5/2. Due to the
limitation on the grid resolution, tuning of the hyperparameters is necessary to achieve both safety
and satisfactory exploration results. With a finer resolution, more cautious hyperparameters would
also be able to generalize to neighbouring states. The lengthscales are set to 14.5 m and the prior
standard deviation of heights is 10 m. We assume a noise standard deviation of 0.075 m. Since the
safety feature of each state transition is a linear combination of heights, the GP model of the heights
induces a GP model over the differences of heights, which we use to classify whether state transitions
fulfill the safety constraint. In particular, the safety depends on the direction of travel, that is, going
downhill is possible, while going uphill might be unsafe.
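Since each transition feature H(s) − H(s') is a linear functional of the heights, the induced covariance follows from the closure of GPs under linear maps. A minimal sketch of the induced kernel (our own illustration, with scikit-learn's Matérn kernel standing in for the paper's GP implementation, and scalar state locations for brevity):

import numpy as np
from sklearn.gaussian_process.kernels import Matern

k = Matern(length_scale=14.5, nu=2.5)  # prior kernel on heights H

def transition_cov(s1, s1p, s2, s2p):
    # Cov(H(s1)-H(s1p), H(s2)-H(s2p))
    #   = k(s1,s2) - k(s1,s2p) - k(s1p,s2) + k(s1p,s2p)
    kk = lambda a, b: k(np.atleast_2d(a), np.atleast_2d(b))[0, 0]
    return kk(s1, s2) - kk(s1, s2p) - kk(s1p, s2) + kk(s1p, s2p)

d = 1.0                                # grid resolution in meters
h = -d * np.tan(np.deg2rad(25.0))      # safety threshold used above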
Following the recommendations in [22], in our experiments we use the GP confidence intervals
Qt (s) directly to determine the safe set St . As a result, the Lipschitz constant is only used to
determine expanders in G. Guaranteeing safe exploration with high probability over multiple steps
leads to conservative behavior, as every step beyond the set that is known to be safe decreases the
?probability budget? for failure. In order to demonstrate that safety can be achieved empirically
using less conservative parameters than those suggested by Theorem 1, we fix β_t to a constant
value, β_t = 2 for all t ≥ 0. This choice aims to guarantee safety per iteration rather than jointly over all
the iterations. The same assumption is used in [14].
We compare our algorithm to several baselines. The first one considers both the safety threshold and
the ergodicity requirements but neglects the expanders. In this setting, the agent samples the most
uncertain safe state transition, which corresponds to the safe Bayesian optimization framework
in [20]. We expect the exploration to be safe, but less efficient than our approach. The second baseline
considers the safety threshold, but does not consider ergodicity requirements. In this setting, we
expect the rover's behavior to fulfill the safety constraint and to never attempt to climb steep slopes,
but it may get stuck in states without safe actions. The third method uses the unconstrained Bayesian
optimization framework in order to explore new states, without safety requirements. In this setting,
the agent tries to obtain measurements from the most uncertain state transition over the entire space,
rather than restricting itself to the safe set. In this case, the rover can easily get stuck and may also
incur failures by attempting to climb steep slopes. Last, we consider a random exploration strategy,
which is similar to the ε-greedy exploration strategies that are widely used in reinforcement learning.
[Figure 2: exploration behavior on the Mars terrain; each panel plots the explored region over distance [m], with background color showing altitude [m]. Panels: (a) Non-ergodic, (b) Unsafe, (c) Random, (d) No Expanders, (e) SafeMDP. Panel (f) lists performance metrics: the fraction of R̄_0.15(S_0) explored is 80.28% (SafeMDP), 30.44% (No Expanders), 0.86% (Non-ergodic), 0.23% (Unsafe) and 0.98% (Random), together with the iteration k at failure (values 2, 1 and 219) for the strategies that attempted an unsafe action.]
Figure 2: Comparison of different exploration schemes. The background color shows the real altitude
of the terrain. All algorithms are run for 525 iterations, or until the first unsafe action is attempted.
The saturated color indicates the region that each strategy is able to explore. The baselines get stuck
in the crater in the bottom-right corner or fail to explore, while Algorithm 1 manages to safely explore
the unknown environment. See the statistics in Fig. 2f.
We compare these baselines over a 120 by 70 meter area at 30.6° latitude and 202.2° longitude.
We set the accuracy ε to the noise standard deviation σ_n. The resulting exploration behaviors can be seen in Fig. 2. The rover
starts in the center-top part of the plot, a relatively planar area. In the top-right corner there is a hill
that the rover cannot climb, while in the bottom-right corner there is a crater that, once entered, the
rover cannot leave. The safe behavior that we expect is to explore the planar area, without moving
into the crater or attempting to climb the hill. We run all algorithms for 525 iterations or until the
first unsafe action is attempted. It can be seen in Fig. 2e that our method explores the safe area
that surrounds the crater, without attempting to move inside. While some state-action pairs closer
to the crater are also safe, the GP model would require more data to classify them as safe with the
necessary confidence. In contrast, the baselines perform significantly worse. The baseline that does
not ensure the ability to return to the safe set (non-ergodic) can be seen in Fig. 2a. It does not explore
the area, because it quickly reaches a state without a safe path to the next target sample. Our approach
avoids these situations explicitly. The unsafe exploration baseline in Fig. 2b considers ergodicity,
but concludes that every state is reachable according to the MDP model. Consequently, it follows
a path that crosses the boundary of the crater and eventually evaluates an unsafe action. Overall,
it is not enough to consider only ergodicity or only safety, in order to solve the safe exploration
problem. The random exploration in Fig. 2c attempts an unsafe action after some exploration. In
contrast, Algorithm 1 manages to safely explore a large part of the unknown environment. Running
the algorithm without considering expanders leads to the behavior in Fig. 2d, which is safe, but only
manages to explore a small subset of the safely reachable area within the same number of iterations
in which Algorithm 1 explores over 80% of it. The results are summarized in Fig. 2f.
6 Conclusion
We presented SafeMDP, an algorithm to safely explore a priori unknown environments. We used
a Gaussian process to model the safety constraints, which allows the algorithm to reason about the
safety of state-action pairs before visiting them. An important aspect of the algorithm is that it
considers the transition dynamics of the MDP in order to ensure that there is a safe return route before
visiting states. We proved that the algorithm is capable of exploring the full safely reachable region
with few measurements, and demonstrated its practicality and performance in experiments.
Acknowledgement. This research was partially supported by the Max Planck ETH Center for
Learning Systems and SNSF grant 200020_159557.
References
[1] Anayo K. Akametalu, Shahab Kaynama, Jaime F. Fisac, Melanie N. Zeilinger, Jeremy H. Gillula, and
Claire J. Tomlin. Reachability-based safe learning with Gaussian processes. In Proc. of the IEEE
Conference on Decision and Control (CDC), pages 1424?1431, 2014.
[2] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from
demonstration. Robotics and Autonomous Systems, 57(5):469?483, 2009.
[3] Felix Berkenkamp and Angela P. Schoellig. Safe and robust learning control with Gaussian processes. In
Proc. of the European Control Conference (ECC), pages 2501?2506, 2015.
[4] Felix Berkenkamp, Angela P. Schoellig, and Andreas Krause. Safe controller optimization for quadrotors
with Gaussian processes. In Proc. of the IEEE International Conference on Robotics and Automation
(ICRA), 2016.
[5] Stefano P. Coraluppi and Steven I. Marcus. Risk-sensitive and minimax control of discrete-time, finite-state
Markov decision processes. Automatica, 35(2):301?309, 1999.
[6] Javier Garcia and Fernando Fernández. Safe exploration of state and action spaces in reinforcement
learning. Journal of Artificial Intelligence Research, pages 515?564, 2012.
[7] Peter Geibel and Fritz Wysotzki. Risk-sensitive reinforcement learning applied to control under constraints.
Journal of Artificial Intelligence Research (JAIR), 24:81?108, 2005.
[8] Subhashis Ghosal and Anindya Roy. Posterior consistency of Gaussian process prior for nonparametric
binary regression. The Annals of Statistics, 34(5):2413?2429, 2006.
[9] Alexander Hans, Daniel Schneegaß, Anton Maximilian Schäfer, and Steffen Udluft. Safe exploration for
reinforcement learning. In Proc. of the European Symposium on Artificial Neural Networks (ESANN),
pages 143?148, 2008.
[10] Jens Kober, J. Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: a survey. The
International Journal of Robotics Research, 32(11):1238?1274, 2013.
[11] Mary Kae Lockwood. Introduction: Mars Science Laboratory: The Next Generation of Mars Landers.
Journal of Spacecraft and Rockets, 43(2):257?257, 2006.
[12] Alfred S. McEwen, Eric M. Eliason, James W. Bergstrom, Nathan T. Bridges, Candice J. Hansen, W. Alan
Delamere, John A. Grant, Virginia C. Gulick, Kenneth E. Herkenhoff, Laszlo Keszthelyi, Randolph L. Kirk,
Michael T. Mellon, Steven W. Squyres, Nicolas Thomas, and Catherine M. Weitz. Mars Reconnaissance
Orbiter?s High Resolution Imaging Science Experiment (HiRISE). Journal of Geophysical Research:
Planets, 112(E5):E05S02, 2007.
[13] Jonas Mockus. Bayesian Approach to Global Optimization, volume 37 of Mathematics and Its Applications.
Springer Netherlands, 1989.
[14] Teodor Mihai Moldovan and Pieter Abbeel. Safe exploration in Markov decision processes. In Proc. of the
International Conference on Machine Learning (ICML), pages 1711?1718, 2012.
[15] MSL. MSL Landing Site Selection User?s Guide to Engineering Constraints, 2007. URL http://marsoweb.
nas.nasa.gov/landingsites/msl/memoranda/MSL_Eng_User_Guide_v4.5.1.pdf.
[16] Martin Pecka and Tomas Svoboda. Safe exploration techniques for reinforcement learning ? an overview.
In Modelling and Simulation for Autonomous Systems, pages 357?375. Springer, 2014.
[17] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning.
Adaptive computation and machine learning. MIT Press, 2006.
[18] Stefan Schaal and Christopher Atkeson. Learning Control in Robotics. IEEE Robotics & Automation
Magazine, 17(2):20?29, 2010.
[19] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[20] Jens Schreiter, Duy Nguyen-Tuong, Mona Eberts, Bastian Bischoff, Heiner Markert, and Marc Toussaint.
Safe exploration for active learning with Gaussian processes. In Proc. of the European Conference on
Machine Learning (ECML), volume 9284, pages 133?149, 2015.
[21] Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias Seeger. Gaussian process optimization
in the bandit setting: no regret and experimental design. In Proc. of the International Conference on
Machine Learning (ICML), 2010.
[22] Yanan Sui, Alkis Gotovos, Joel Burdick, and Andreas Krause. Safe exploration for optimization with
Gaussian processes. In Proc. of the International Conference on Machine Learning (ICML), pages
997?1005, 2015.
[23] Richard S. Sutton and Andrew G. Barto. Reinforcement learning: an introduction. Adaptive computation
and machine learning. MIT Press, 1998.
Stochastic Gradient MCMC with Stale Gradients
Changyou Chen†
Nan Ding‡
Chunyuan Li†
Yizhe Zhang†
Lawrence Carin†
† Dept. of Electrical and Computer Engineering, Duke University, Durham, NC, USA
‡ Google Inc., Venice, USA
† {cc448,cl319,yz196,lcarin}@duke.edu; ‡ dingnan@google.com
Abstract
Stochastic gradient MCMC (SG-MCMC) has played an important role in large-scale Bayesian learning, with well-developed theoretical convergence properties. In
such applications of SG-MCMC, it is becoming increasingly popular to employ distributed systems, where stochastic gradients are computed based on some outdated
parameters, yielding what are termed stale gradients. While stale gradients could
be directly used in SG-MCMC, their impact on convergence properties has not
been well studied. In this paper we develop theory to show that while the bias and
MSE of an SG-MCMC algorithm depend on the staleness of stochastic gradients,
its estimation variance (relative to the expected estimate, based on a prescribed
number of samples) is independent of it. In a simple Bayesian distributed system
with SG-MCMC, where stale gradients are computed asynchronously by a set
of workers, our theory indicates a linear speedup on the decrease of estimation
variance w.r.t. the number of workers. Experiments on synthetic data and deep
neural networks validate our theory, demonstrating the effectiveness and scalability
of SG-MCMC with stale gradients.
1 Introduction
The pervasiveness of big data has made scalable machine learning increasingly important, especially
for deep models. A basic technique is to adopt stochastic optimization algorithms [1], e.g., stochastic
gradient descent and its extensions [2]. In each iteration of stochastic optimization, a minibatch of
data is used to evaluate the gradients of the objective function and update model parameters (errors
are introduced in the gradients, because they are computed based on minibatches rather than the
entire dataset; since the minibatches are typically selected at random, this yields the term ?stochastic?
gradient). This is highly scalable because processing a minibatch of data in each iteration is relatively
cheap compared to analyzing the entire (large) dataset at once. Under certain conditions, stochastic
optimization is guaranteed to converge to a (local) optima [1]. Because of its scalability, the minibatch
strategy has recently been extended to Markov Chain Monte Carlo (MCMC) Bayesian sampling
methods, yielding SG-MCMC [3, 4, 5].
In order to handle large-scale data, distributed stochastic optimization algorithms have been developed,
for example [6], to further improve scalability. In a distributed setting, a cluster of machines with
multiple cores cooperate with each other, typically through an asynchronous scheme, for scalability
[7, 8, 9]. A downside of an asynchronous implementation is that stale gradients must be used in
parameter updates (?stale gradients? are stochastic gradients computed based on outdated parameters,
instead of the latest parameters; they are easier to compute in a distributed system, but introduce
additional errors relative to traditional stochastic gradients). While some theory has been developed to
guarantee the convergence of stochastic optimization with stale gradients [10, 11, 12], little analysis
has been done in a Bayesian setting, where SG-MCMC is applied. Distributed SG-MCMC algorithms
share characteristics with distributed stochastic optimization, and thus are highly scalable and suitable
for large-scale Bayesian learning. Existing Bayesian distributed systems with traditional MCMC
methods, such as [13], usually employ stale statistics instead of stale gradients, where stale statistics
are summarized based on outdated parameters, e.g., outdated topic distributions in distributed Gibbs
sampling [13]. Little theory exists to guarantee the convergence of such methods. For existing
distributed SG-MCMC methods, typically only standard stochastic gradients are used, for limited
problems such as matrix factorization, without rigorous convergence theory [14, 15, 16].
In this paper, by extending techniques from standard SG-MCMC [17], we develop theory to study the
convergence behavior of SG-MCMC with Stale gradients (S²G-MCMC). Our goal is to evaluate the
posterior average of a test function φ(x), defined as φ̄ ≜ ∫_X φ(x) ρ(x) dx, where ρ(x) is the desired
posterior distribution with x the possibly augmented model parameters (see Section 2). In practice,
S²G-MCMC generates L samples {x_l}_{l=1}^L and uses the sample average φ̂_L ≜ (1/L) Σ_{l=1}^L φ(x_l) to
approximate φ̄. We measure how φ̂_L approximates φ̄ in terms of bias, MSE and estimation variance,
defined as |E φ̂_L − φ̄|, E(φ̂_L − φ̄)², and E(φ̂_L − E φ̂_L)², respectively. From the definitions, the
bias and MSE characterize how accurately φ̂_L approximates φ̄, and the variance characterizes how
fast φ̂_L converges to its own expectation (for a prescribed number of samples L). Our theoretical
results show that while the bias and MSE depend on the staleness of stochastic gradients, the variance
is independent of it. In a simple asynchronous Bayesian distributed system with S2 G-MCMC, our
theory indicates a linear speedup on the decrease of the variance w.r.t. the number of workers
used to calculate the stale gradients, while maintaining the same optimal bias level as standard
SG-MCMC. We validate our theory on several synthetic experiments and deep neural network models,
demonstrating the effectiveness and scalability of the proposed S2 G-MCMC framework.
Related Work Using stale gradients is a standard setup in distributed stochastic optimization
systems. Representative algorithms include, but are not limited to, the ASYSG-CON [6] and HOGWILD! algorithms [18], and some more recent developments [19, 20]. Furthermore, recent research
on stochastic optimization has been extended to non-convex problems with provable convergence
rates [12]. In Bayesian learning with MCMC, existing work has focused on running parallel chains on
subsets of data [21, 22, 23, 24], and little if any effort has been made to use stale stochastic gradients,
the setting considered in this paper.
2 Stochastic Gradient MCMC
Throughout this paper, we denote vectors as bold lower-case letters, and matrices as bold upper-case letters. For example, N(m, Σ) means a multivariate Gaussian distribution with mean m and
covariance Σ. In the analysis we consider algorithms with fixed stepsizes for simplicity; decreasing-stepsize variants can be addressed similarly as in [17].
The goal of SG-MCMC is to generate random samples from a posterior distribution p(θ|D) ∝
p(θ) ∏_{i=1}^N p(d_i|θ), which are used to evaluate a test function. Here θ ∈ R^n represents the parameter
vector and D = {d_1, ..., d_N} represents the data, p(θ) is the prior distribution, and p(d_i|θ) the
likelihood for d_i. SG-MCMC algorithms are based on a class of stochastic differential equations,
called Itô diffusions, defined as

d x_t = F(x_t) dt + g(x_t) dw_t,   (1)

where x ∈ R^m represents the model state; typically x augments θ such that θ ⊆ x and n ≤ m;
t is the time index, w_t ∈ R^m is m-dimensional Brownian motion, and the functions F : R^m → R^m and
g : R^m → R^{m×m} are assumed to satisfy the usual Lipschitz continuity condition [25].
For appropriate functions F and g, the stationary distribution, ρ(x), of the Itô diffusion (1) has a
marginal distribution equal to the posterior distribution p(θ|D) [26]. For example, denoting the
unnormalized negative log-posterior as U(θ) ≜ −log p(θ) − Σ_{i=1}^N log p(d_i|θ), the stochastic
gradient Langevin dynamics (SGLD) method [3] is based on 1st-order Langevin dynamics, with
x = θ, F(x_t) = −∇_θ U(θ), and g(x_t) = √2 I_n, where I_n is the n × n identity matrix. The
stochastic gradient Hamiltonian Monte Carlo (SGHMC) method [4] is based on 2nd-order Langevin
dynamics, with x = (θ, q), and

F(x_t) = ( q ; −B q − ∇_θ U(θ) ),   g(x_t) = √(2B) ( 0  0 ; 0  I_n )

for a scalar B > 0; q is an auxiliary variable known as the momentum [4, 5]. Diffusion forms for other
SG-MCMC algorithms, such as the stochastic gradient thermostat [5] and variants with Riemannian
information geometry [27, 26, 28], are defined similarly.
In order to efficiently draw samples from the continuous-time diffusion (1), SG-MCMC algorithms
typically apply two approximations: i) Instead of analytically integrating infinitesimal increments
dt, numerical integration over a small step size h is used to approximate the integration of the true
dynamics. ii) Instead of working with the full gradient ∇_θ U(θ_lh), a stochastic gradient ∇_θ Ũ_l(θ_lh),
defined as

∇_θ Ũ_l(θ) ≜ −∇_θ log p(θ) − (N/J) Σ_{i=1}^J ∇_θ log p(d_{π_i} | θ),   (2)

is calculated from a minibatch of size J, where {π_1, ..., π_J} is a random subset of {1, ..., N}.
Note that to match the time index t in (1), parameters have been and will be indexed by "lh" in the
l-th iteration.
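As a concrete illustration of both approximations (a minimal sketch, not tied to any SG-MCMC package), one SGLD step with the Euler integrator and the stochastic gradient (2) can be written as:

import numpy as np

def sgld_step(theta, data, grad_log_prior, grad_log_lik, h, J, rng):
    # theta <- theta - grad_U_tilde(theta) * h + sqrt(2h) * noise;
    # the minibatch gradient is rescaled by N / J to remain unbiased.
    N = len(data)
    idx = rng.choice(N, size=J, replace=False)
    grad_U = -grad_log_prior(theta) - (N / J) * sum(
        grad_log_lik(data[i], theta) for i in idx)
    return theta - grad_U * h + np.sqrt(2 * h) * rng.standard_normal(theta.shape)

# Hypothetical Gaussian model d_i ~ N(theta, 1) with prior theta ~ N(0, 1):
rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=1000)
theta = np.zeros(1)
for _ in range(1000):
    theta = sgld_step(theta, data,
                      grad_log_prior=lambda t: -t,       # from log N(0, 1)
                      grad_log_lik=lambda d, t: d - t,   # from log N(t, 1)
                      h=1e-4, J=10, rng=rng)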
3 Stochastic Gradient MCMC with Stale Gradients
In this section, we extend SG-MCMC to the stale-gradient setting, commonly met in asynchronous
distributed systems [7, 8, 9], and develop theory to analyze convergence properties.
3.1 Stale stochastic gradient MCMC (S²G-MCMC)
The setting for S2 G-MCMC is the same as the standard SG-MCMC described above, except that
the stochastic gradient (2) is replaced with a stochastic gradient evaluated with outdated parameter
θ_{(l−τ_l)h} instead of the latest version θ_lh (see Appendix A for an example):

∇_θ Ũ_{τ_l}(θ) ≜ −∇_θ log p(θ_{(l−τ_l)h}) − (N/J) Σ_{i=1}^J ∇_θ log p(d_{π_i} | θ_{(l−τ_l)h}),   (3)

where τ_l ∈ Z⁺ denotes the staleness of the parameter used to calculate the stochastic gradient in the
l-th iteration. A distinctive difference between S2 G-MCMC and SG-MCMC is that stale stochastic
gradients are no longer unbiased estimations of the true gradients. This leads to additional challenges
in developing convergence bounds, one of the main contributions of this paper.
We assume a bounded staleness for all τ_l's, i.e., max_l τ_l ≤ τ for some constant τ. As an example,
Algorithm 1 describes the update rule of the stale-SGHMC in each iteration with the Euler integrator,
where the stale gradient ∇_θ Ũ_{τ_l}(θ) with staleness τ_l is used.

Algorithm 1 State update of SGHMC with the stale stochastic gradient ∇_θ Ũ_{τ_l}(θ)
Input: x_lh = (θ_lh, q_lh), ∇_θ Ũ_{τ_l}(θ), τ_l, τ, h, B
Output: x_{(l+1)h} = (θ_{(l+1)h}, q_{(l+1)h})
if τ_l ≤ τ then
    Draw ζ_l ~ N(0, I);
    q_{(l+1)h} = (1 − Bh) q_lh − ∇_θ Ũ_{τ_l}(θ) h + √(2Bh) ζ_l;
    θ_{(l+1)h} = θ_lh + q_{(l+1)h} h;
end if
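A direct Python transcription of Algorithm 1 (a minimal sketch; the stale gradient is assumed to have been computed elsewhere, e.g., by a worker holding θ_{(l−τ_l)h}):

import numpy as np

def stale_sghmc_step(theta, q, stale_grad_U, tau_l, tau, h, B, rng):
    # Skip the update if the staleness bound tau_l <= tau is violated.
    if tau_l <= tau:
        zeta = rng.standard_normal(q.shape)
        q = (1 - B * h) * q - stale_grad_U * h + np.sqrt(2 * B * h) * zeta
        theta = theta + q * h
    return theta, q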
3.2 Convergence analysis
This section analyzes the convergence properties of the basic S2 G-MCMC; an extension with multiple
chains is discussed in Section 3.3. It is shown that the bias and MSE depend on the staleness parameter
τ, while the variance is independent of it, yielding significant speedup in Bayesian distributed systems.
Bias and MSE In [17], the bias and MSE of the standard SG-MCMC algorithms with a Kth order
integrator were analyzed, where the order of an integrator reflects how accurately an SG-MCMC
algorithm approximates the corresponding continuous diffusion. Specifically, if evolving xt with
a numerical integrator using discrete time increment h induces an error bounded by O(hK ), the
integrator is called a Kth order integrator, e.g., the popular Euler method used in SGLD [3] is a
1st-order integrator. In particular, [17] proved the bounds stated in Lemma 1.
Lemma 1 ([17]). Under standard assumptions (see Appendix B), the bias and MSE of SG-MCMC
with a Kth-order integrator at time T = hL are bounded as:

Bias: |E φ̂_L − φ̄| = O( 1/(Lh) + Σ_l ‖E ΔV_l‖ / L + h^K )

MSE: E( φ̂_L − φ̄ )² = O( (1/L) Σ_l E ‖ΔV_l‖² / L + 1/(Lh) + h^{2K} )
Here ΔV_l ≜ L − L̃_l, where L is the generator of the Itô diffusion (1), defined for any compactly
supported twice differentiable function f : R^n → R as

L f(x_t) ≜ lim_{h → 0⁺} ( E[f(x_{t+h})] − f(x_t) ) / h = ( F(x_t) · ∇_x + (1/2) g(x_t) g(x_t)ᵀ : ∇_x ∇_xᵀ ) f(x_t),   (4)

where h → 0⁺ means h approaches zero along the positive real axis. L̃_l is the same as L except using
the stochastic gradient ∇Ũ_l instead of the full gradient.
We show that the bounds of the bias and MSE of S2 G-MCMC share similar forms as SG-MCMC, but
with additional dependence on the staleness parameter. In addition to the assumptions in SG-MCMC
[17] (see details in Appendix B), the following additional assumption is imposed.
Assumption 1. The noise in the stochastic gradients is well-behaved, such that: 1) the stochastic
gradient is unbiased, i.e., ∇_θ U(θ) = E_π ∇_θ Ũ(θ), where π denotes the random permutation over
{1, ..., N}; 2) the variance of the stochastic gradient is bounded, i.e., E_π ( U(θ) − Ũ(θ) )² ≤ σ²; 3) the
gradient function ∇_θ U is Lipschitz (so is ∇_θ Ũ), i.e., ‖∇_θ U(x) − ∇_θ U(y)‖ ≤ C ‖x − y‖, ∀ x, y.
In the following theorems, we omit the assumption statement for conciseness. Due to the staleness
of the stochastic gradients, the term ΔV_l in S²G-MCMC is equal to L − L̃_{l−τ_l}, where L̃_{l−τ_l} arises
from ∇Ũ_{τ_l}. The challenge arises to bound these terms involving ΔV_l. To this end, define
f_lh ≜ x_lh − x_{(l−1)h}, and ψ to be a functional satisfying the Poisson equation†:

(1/L) Σ_{l=1}^L L ψ(x_lh) = φ̂_L − φ̄.   (5)

† The existence of a nice ψ is guaranteed in the elliptic/hypoelliptic SDE settings when x is on a torus [25].
Theorem 2. After L iterations, the bias of S²G-MCMC with a Kth-order integrator is bounded, for
some constant D₁ independent of {L, h, τ}, as:

|E φ̂_L − φ̄| ≤ D₁( 1/(Lh) + M₁ τ h + M₂ h^K ),

where M₁ ≜ max_l |L f_lh| max_l ‖E ∇ψ(x_lh)‖ C and M₂ ≜ Σ_{k=1}^K Σ_l E L̃_l^{k+1} ψ(x_{(l−1)h}) / ((k+1)! L)
are constants.
Theorem 3. After L iterations, the MSE of S²G-MCMC with a Kth-order integrator is bounded, for
some constant D₂ independent of {L, h, τ}, as:

E( φ̂_L − φ̄ )² ≤ D₂( 1/(Lh) + M̃₁ τ² h² + M̃₂ h^{2K} ),

where the constants M̃₁ ≜ max_l ‖E ∇ψ(x_lh)‖² max_l (L f_lh)² C² and M̃₂ ≜ E( Σ_l L̃_l^{K+1} ψ(x_{(l−1)h}) / (L (K+1)!) )².
The theorems indicate that both the bias and MSE depend on the staleness parameter τ. For a fixed
computational time, this could possibly lead to unimproved bounds, compared to standard SG-MCMC,
when τ is too large, i.e., the terms with τ would dominate, as is the case in the distributed system
discussed in Section 4. Nevertheless, better bounds than standard SG-MCMC could be obtained if
the decrease of 1/(Lh) is faster than the increase of the staleness in a distributed system.
Variance Next we investigate the convergence behavior of the variance, Var(φ̂_L) ≜
E( φ̂_L − E φ̂_L )². Theorem 4 indicates the variance is independent of τ, hence a linear speedup in the
decrease of variance is always achievable when stale gradients are computed in parallel. An example
is discussed in the Bayesian distributed system in Section 4.
Theorem 4. After L iterations, the variance of S2 G-MCMC with a Kth-order integrator is bounded,
for some constant D, as:
Var( φ̂_L ) ≤ D( 1/(Lh) + h^{2K} ).
The variance bound is the same as for standard SG-MCMC, whereas L could increase linearly
w.r.t. the number of workers in a distributed setting, yielding significant variance reduction. When
optimizing the variance bound w.r.t. h, we get an optimal variance bound stated in Corollary 5.
Corollary 5. In terms of estimation variance, the optimal convergence rate of S²G-MCMC with a
Kth-order integrator is bounded as: Var( φ̂_L ) ≤ O( L^{−2K/(2K+1)} ).
In real distributed systems, the decrease of 1/(Lh) and the increase of τ in the bias and MSE bounds
would typically cancel, leading to the same bias and MSE level compared to standard SG-MCMC,
whereas a linear speedup on the decrease of variance w.r.t. the number of workers is always achievable.
More details are discussed in Section 4.
3.3 Extension to multiple parallel chains
This section extends the theory to the setting with S parallel chains, each independently running an
S2 G-MCMC algorithm. After generating samples from the S chains, an aggregation step is needed to
combine the sample averages from each chain, i.e., {φ̂_{L_s}}_{s=1}^S, where L_s is the number of iterations on
chain s. For generality, we allow each chain to have a different step size, e.g., (h_s)_{s=1}^S. We aggregate
the sample averages as φ̂_L^S ≜ Σ_{s=1}^S (T_s/T) φ̂_{L_s}, where T_s ≜ L_s h_s and T ≜ Σ_{s=1}^S T_s.
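The aggregation step is just a time-weighted average of the per-chain estimates; a one-function sketch (array arguments hypothetical):

import numpy as np

def aggregate(chain_avgs, L, h):
    # phi_hat^S_L = sum_s (T_s / T) * phi_hat_{L_s}, with T_s = L_s * h_s.
    T_s = np.asarray(L, float) * np.asarray(h, float)
    return float(np.dot(T_s / T_s.sum(), np.asarray(chain_avgs, float)))

# e.g., three chains with different lengths but a common step size:
est = aggregate([1.00, 1.10, 0.95], L=[100, 200, 150], h=[1e-2] * 3)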
Interestingly, with increasing S, using multiple chains does not seem to directly improve the convergence rate for the bias, but improves the MSE bound, as stated in Theorem 6.
Theorem 6. Let T_m ≜ max_s T_s, h_m ≜ max_s h_s, and T̄ = T/S. The bias and MSE of S parallel
S²G-MCMC chains with a Kth-order integrator are bounded, for some constants D₁ and D₂ independent
of {L, h, τ}, as:

Bias: |E φ̂_L^S − φ̄| ≤ D₁( 1/T̄ + (T_m/T)( M₁ τ h_s + M₂ h_s^K ) )

MSE: E( φ̂_L^S − φ̄ )² ≤ D₂( (1 − 1/T̄)/T + (T_m²/T̄²)( M₁² τ² h_s² + M₂² h_s^{2K} ) ).
Assume that T̄ = T/S is independent of the number of chains. As a result, using multiple chains
does not directly improve the bound for the bias‡. However, for the MSE bound, although the last
two terms are independent of S, the first term decreases linearly with respect to S because T = T̄S.
This indicates a decreased estimation variance with more chains. This matches the intuition, because
more samples can be obtained with more chains in a given amount of time.
‡ It means the bound does not directly relate to low-order terms of S, though constants might be improved.
The decrease of the MSE for multiple chains is due to the decrease of the variance, as stated in Theorem 7.
Theorem 7. The variance of S parallel S²G-MCMC chains with a Kth-order integrator is bounded,
for some constant D independent of {L, h, τ}, as:

E( φ̂_L^S − E φ̂_L^S )² ≤ D( 1/T + Σ_{s=1}^S (T_s²/T²) h_s^{2K} ).
When using the same step size for all chains, Theorem 7 gives an optimal variance bound of
O( (Σ_s L_s)^{−2K/(2K+1)} ), i.e., a linear speedup with respect to S is achieved.
In addition, Theorem 6 with τ = 0 and K = 1 provides convergence rates for the distributed SGLD
algorithm in [14], i.e., improved MSE and variance bounds compared to the single-server SGLD.
4 Applications to Distributed SG-MCMC Systems
Our theory for S2 G-MCMC is general, serving as a basic analytic tool for distributed SG-MCMC
systems. We propose two simple Bayesian distributed systems with S2 G-MCMC in the following.
Single-chain distributed SG-MCMC Perhaps the simplest architecture is an asynchronous distributed SG-MCMC system, where a server runs an S2 G-MCMC algorithm, with stale gradients
computed asynchronously from W workers. The detailed operations of the server and workers are
described in Appendix A.
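A minimal single-process simulation of this architecture (our own sketch; a real deployment would use MPI or a parameter server, and grad_U stands for any stochastic gradient routine):

from collections import deque
import numpy as np

def simulate_async_sgld(theta0, grad_U, W, num_iters, h, rng):
    # The server applies SGLD updates with gradients that each of the W
    # workers computed from an outdated copy of theta, so the gradient
    # consumed at iteration l has staleness tau_l of roughly W.
    theta = theta0.copy()
    pending = deque(grad_U(theta) for _ in range(W))  # stale gradients
    samples = []
    for _ in range(num_iters):
        g = pending.popleft()
        theta = theta - g * h + np.sqrt(2 * h) * rng.standard_normal(theta.shape)
        pending.append(grad_U(theta))  # worker re-reads the latest theta
        samples.append(theta.copy())
    return samples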
With our theory, now we explain the convergence property of this simple distributed system with
SG-MCMC, i.e., a linear speedup w.r.t. the number of workers on the decrease of variance, while
maintaining the same bias level. To this end, rewrite L = W L̄ in Theorems 2 and 3, where L̄ is the
average number of iterations on each worker. We can observe from the theorems that when M₁ τ h >
M₂ h^K in the bias and M̃₁ τ² h² > M̃₂ h^{2K} in the MSE, the terms with τ dominate. Optimizing the
bounds with respect to h yields a bound of O((τ/(W L̄))^{1/2}) for the bias, and O((τ/(W L̄))^{2/3}) for the
MSE. In practice, we usually observe τ ≈ W, making W in the optimal bounds cancel, i.e., the
same optimal bias and MSE bounds as standard SG-MCMC are obtained; no theoretical speedup is
achieved when increasing W. However, from Corollary 5, the variance is independent of τ, thus a
linear speedup on the variance bound can always be obtained when increasing the number of workers,
i.e., the distributed SG-MCMC system converges a factor of W faster than standard SG-MCMC
with a single machine. We are not aware of similar conclusions from optimization, because most of
the research focuses on the convex setting, thus only variance (equivalent to MSE) is studied.
Multiple-chain distributed SG-MCMC We can also adopt multiple servers based on the multiple-chain setup in Section 3.3, where each chain corresponds to one server. The detailed architecture
is described in Appendix A. This architecture trades off communication cost with convergence
rates. As indicated by Theorems 6 and 7, the MSE and variance bounds can be improved with more
servers. Note that when only one worker is associated with one server, we recover the setting of S
independent servers. Compared to the single-server architecture described above with S workers,
from Theorems 2–7, while the variance bound is the same, the single-server architecture improves the
bias and MSE bounds by a factor of S.
More advanced architectures More complex architectures could also be designed to reduce
communication cost, for example, by extending the downpour [7] and elastic SGD [29] architectures
to the SG-MCMC setting. Their convergence properties can also be analyzed with our theory since
they are essentially using stale gradients. We leave the detailed analysis for future work.
5 Experiments
Our primary goal is to validate the theory; comparing with different distributed architectures and
algorithms, such as [30, 31], is beyond the scope of this paper. We first use two synthetic experiments
to validate the theory, then apply the distributed architecture described in Section 4 for Bayesian
deep learning. To quantitatively describe the speedup property, we adopt the iteration speedup [12],
defined as: iteration speedup ≜ (#iterations with a single worker) / (average #iterations on a worker),
where # is the iteration count when the same level of precision is achieved. This speedup best matches
the theory. We also consider the time speedup, defined as: time speedup ≜ (running time for a single
worker) / (running time for W workers), where the running time is recorded at the same accuracy.
It is affected significantly by hardware, and thus is not accurately consistent with the theory.
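Both speedups are simple ratios at a fixed precision level; in code (a sketch with hypothetical loss curves):

import numpy as np

def iters_to_reach(loss_curve, target):
    # First iteration index (1-based) at which the loss drops to target.
    idx = np.flatnonzero(np.asarray(loss_curve) <= target)
    return int(idx[0]) + 1 if idx.size else None

def iteration_speedup(curve_1worker, curve_Wworkers, target):
    # curve_Wworkers is indexed by the average #iterations per worker.
    return iters_to_reach(curve_1worker, target) / \
           iters_to_reach(curve_Wworkers, target)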
5.1 Synthetic experiments

Impact of stale gradients A simple Gaussian model is used to verify the impact of stale gradients
on the convergence accuracy, with d_i ~ N(θ, 1), θ ~ N(0, 1). 1000 data samples {d_i} are generated,
with minibatches of size 10 to calculate stochastic gradients. The test function is φ(θ) ≜ θ². The
distributed SGLD algorithm is adopted in this experiment. We aim to verify the optimal MSE bound
∝ τ^{2/3} L^{−2/3}, derived from Theorem 3 and discussed in Section 4 (with W = 1). The optimal
stepsize is h = C τ^{−2/3} L^{−1/3} for some constant C. Based on the optimal bound, setting L = L₀ τ
for some fixed L₀ and varying τ would result in the same MSE, which is ∝ L₀^{−2/3}. In the experiments
we set C = 1/30, L₀ = 500, τ ∈ {1, 2, 5, 10, 15, 20}, and average over 200 runs to approximate the
expectations in the MSE formula. As indicated in Figure 1, approximately the same MSEs are obtained
after L₀ τ iterations for different τ values, consistent with the theory. Note that since the stepsizes
are set to make the end points of the curves reach the optimal MSEs, the curves would not match the
optimal MSE curves of τ^{2/3} L^{−2/3} in general, except for the end points, i.e., they are lower bounded
by τ^{2/3} L^{−2/3}.

[Figure 1: MSE vs. #iterations (L = 500 τ) with increasing staleness τ ∈ {1, 2, 5, 10, 15, 20}; all runs achieve roughly the same MSE level at their end points.]
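The shape of this experiment can be reproduced with a few lines on top of the SGLD step above (a sketch; a fixed-lag buffer emulates staleness τ, and the constants follow the setup in the text):

import numpy as np

def stale_sgld_estimates(tau, L0=500, C=1.0 / 30, n_runs=200, seed=0):
    rng = np.random.default_rng(seed)
    L = L0 * tau
    h = C * tau ** (-2.0 / 3) * L ** (-1.0 / 3)      # optimal stepsize
    data = rng.normal(1.0, 1.0, 1000)
    estimates = []
    for _ in range(n_runs):
        theta, buf, phi_sum = 0.0, [0.0] * tau, 0.0
        for l in range(L):
            th_old = buf[l % tau]                    # parameter lagged by ~tau
            d = data[rng.choice(1000, 10, replace=False)]
            grad_U = th_old + 100.0 * np.sum(th_old - d)  # N/J = 1000/10
            theta += -grad_U * h + np.sqrt(2 * h) * rng.standard_normal()
            buf[l % tau] = theta
            phi_sum += theta ** 2                    # test function theta^2
        estimates.append(phi_sum / L)
    return estimates  # MSE: np.mean((np.array(estimates) - truth) ** 2)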
Convergence speedup of the variance A Bayesian logistic regression model (BLR) is adopted
to verify the variance convergence properties. We use the Adult dataset§, a9a, with 32,561 training
samples and 16,281 test samples. The test function is defined as the standard logistic loss. We
average over 10 runs to estimate the expectation E φ̂_L in the variance. We use the single-server
distributed architecture in Section 4, with multiple workers computing stale gradients in parallel. We
plot the variance versus the average number of iterations on the workers (L̄) and the running time in
Figure 2 (a) and (b), respectively. We can see that the variance drops faster with an increasing number
of workers.
§ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
[Figure 2: (a) variance vs. average #iterations per worker L̄ and (b) variance vs. time in seconds, for 1–9 workers; (c) iteration speedup and time speedup vs. #workers, compared against the linear-speedup reference.]
Figure 2: Variance with increasing number of workers.
To quantitatively relate these results to the theory, Corollary 5 indicates that L̄₁/L̄₂ = W₂/W₁,
where (W_i, L̄_i) for i = 1, 2 denote the number of workers and the average per-worker iterations at
the same variance level, i.e., a linear speedup is achieved. The iteration speedup and time speedup are
plotted in Figure 2 (c), showing that the iteration speedup scales approximately linearly with the
number of workers, consistent with Corollary 5;
whereas the time speedup deteriorates when the worker number is large due to high system latency.
5.2 Applications to deep learning
We further test S2 G-MCMC on Bayesian learning of deep neural networks. The distributed system is
developed based on an MPI (message passing interface) extension of the popular Caffe package for
deep learning [32]. We implement the SGHMC algorithm, with the point-to-point communications
between servers and workers handled by the MPICH library. The algorithm is run on a cluster of five
machines. Each machine is equipped with eight 3.60GHz Intel(R) Core(TM) i7-4790 CPU cores.
We evaluate S2 G-MCMC on the above BLR model and two deep convolutional neural networks
(CNN). In all these models, zero mean and unit variance Gaussian priors are employed for the weights
to capture weight uncertainties, an effective way to deal with overfitting [33]. We vary the number of
servers S among {1, 3, 5, 7}, and the number of workers for each server from 1 to 9.
LeNet for MNIST We modify the standard LeNet to a Bayesian setting for the MNIST dataset.
LeNet consists of 2 convolutional layers, 2 max pooling layers and 2 ReLU nonlinear layers, followed by 2 fully connected layers [34]. The detailed specification can be found in Caffe. For
simplicity, we use the default parameter setting specified in Caffe, with the additional parameter B in
SGHMC (Algorithm 1) set to (1 − m), where m is the momentum variable defined in the SGD algorithm
in Caffe.
Cifar10-Quick net for CIFAR10 The Cifar10-Quick net consists of 3 convolutional layers, 3 max
pool layers and 3 ReLU nonlinear layers, followed by 2 fully connected layers. The CIFAR-10
dataset consists of 60,000 color images of size 32×32 in 10 classes, with 50,000 for training and
10,000 for testing. Similar to LeNet, the default parameter setting specified in Caffe is used.
In these models, the test function is defined as the cross entropy of the softmax outputs {o₁, ..., o_N}
for test data {(d₁, y₁), ..., (d_N, y_N)} with C classes, i.e., loss = −Σ_{i=1}^N o_{y_i} + N log Σ_{c=1}^C e^{o_c}.
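For concreteness, this test function can be evaluated from the softmax logits as follows (a sketch; logits[i] holds the C-dimensional output o for test point i, and the log-sum-exp term is applied per test point, as in the standard cross entropy):

import numpy as np

def test_loss(logits, labels):
    # loss = -sum_i o_{y_i} + sum_i log sum_c exp(o_c)
    logits = np.asarray(logits, float)
    picked = logits[np.arange(len(labels)), np.asarray(labels)]
    logsumexp = np.log(np.exp(logits).sum(axis=1))
    return float(-picked.sum() + logsumexp.sum())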
Since the theory indicates a linear speedup on the decrease of variance w.r.t. the number of workers,
this means for a single run of the models, the loss would converge faster to its expectation with
increasing number of workers. The following experiments verify this intuition.
5.2.1 Single-server experiments
We first test the single-server architecture in Section 4 on the three models. Because the expectations
in the bias, MSE or variance are not analytically available in these complex models, we instead plot
the loss versus the average number of iterations on each worker (L̄, defined in Section 4) and the running
time in Figure 3. As mentioned above, faster decrease of the loss with more workers is expected.
For the ease of visualization, we only plot the results with {1, 2, 4, 6, 9} workers; more detailed
results are provided in Appendix I. We can see that generally the loss decreases faster with increasing
number of workers. In the CIFAR-10 dataset, the final losses of 6 and 9 workers are worse than the one
with 4 workers. It shows that the accuracy of the sample average suffers from the increased staleness
due to the increased number of workers. Therefore a smaller step size h should be considered to
maintain high accuracy when using a large number of workers. Note the 1-worker curves correspond
to the standard SG-MCMC, whose loss decreases much more slowly due to high estimation variance,
though in theory it has the same level of bias as the single-server architecture for a given number of
iterations (they will converge to the same accuracy).

[Figure 3: testing loss vs. average #iterations per worker (top row) and vs. running time in seconds (bottom row), for 1, 2, 4, 6 and 9 workers. From left to right, each column corresponds to the a9a, MNIST and CIFAR dataset, respectively. The loss is defined in the text.]
5.2.2 Multiple-server experiments
Finally, we test the multiple-server architecture on the same models. We use the same criterion as
the single-server setting to measure the convergence behavior. The loss versus average number of
iterations on each worker (L̄, defined in Section 4) for the three datasets are plotted in Figure 4, where
we vary the number of servers among {1, 3, 5, 7}, and use 2 workers for each server. The plots of
loss versus time and using different number of workers for each server are provided in the Appendix.
We can see that in the simple BLR model, multiple servers do not seem to show significant speedup,
probably due to the simplicity of the posterior, where the sample variance is too small for multiple
servers to take effect; while in the more complicated deep neural networks, using more servers results
in a faster decrease of the loss, especially in the MNIST dataset.
[Figure 4: testing loss vs. average #iterations per worker for 1, 3, 5 and 7 servers.]
Figure 4: Testing loss vs. #servers. From left to right, each column corresponds to the a9a, MNIST
and CIFAR dataset, respectively. The loss is defined in the text.
6 Conclusion
We extend theory from standard SG-MCMC to the stale stochastic gradient setting, and analyze the
impact of the staleness on the convergence behavior of an S²G-MCMC algorithm. Our theory reveals
that the estimation variance is independent of the staleness, leading to a linear speedup w.r.t. the
number of workers, although in practice little speedup in terms of optimal bias and MSE might be
achieved due to their dependence on the staleness. We test our theory on a simple asynchronous
distributed SG-MCMC system with two simulated examples and several deep neural network models.
Experimental results verify the effectiveness and scalability of the proposed S2 G-MCMC framework.
Acknowledgements Supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.
References
[1] L. Bottou, editor. Online Algorithms and Stochastic Approximations. Cambridge University Press, 1998.
[2] L. Bottou. Stochastic gradient descent tricks. Technical report, Microsoft Research, Redmond, WA, 2012.
[3] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In ICML, 2011.
[4] T. Chen, E. B. Fox, and C. Guestrin. Stochastic gradient Hamiltonian Monte Carlo. In ICML, 2014.
[5] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven. Bayesian sampling using stochastic gradient thermostats. In NIPS, 2014.
[6] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In NIPS, 2011.
[7] J. Dean et al. Large scale distributed deep networks. In NIPS, 2012.
[8] T. Chen et al. MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems. arXiv:1512.01274, Dec. 2015.
[9] M. Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[10] Q. Ho, J. Cipar, H. Cui, J. K. Kim, S. Lee, P. B. Gibbons, G. A. Gibson, G. R. Ganger, and E. P. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In NIPS, 2013.
[11] M. Li, D. Andersen, A. Smola, and K. Yu. Communication efficient distributed machine learning with the parameter server. In NIPS, 2014.
[12] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous parallel stochastic gradient for nonconvex optimization. In NIPS, 2015.
[13] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM, 2012.
[14] S. Ahn, B. Shahbaba, and M. Welling. Distributed stochastic gradient MCMC. In ICML, 2014.
[15] S. Ahn, A. Korattikara, N. Liu, S. Rajan, and M. Welling. Large-scale distributed Bayesian matrix factorization using stochastic gradient MCMC. In KDD, 2015.
[16] U. Simsekli, H. Koptagel, H. Guldas, A. Y. Cemgil, F. Oztoprak, and S. Birbil. Parallel stochastic gradient Markov chain Monte Carlo for matrix factorisation models. Technical report, 2015.
[17] C. Chen, N. Ding, and L. Carin. On the convergence of stochastic gradient MCMC algorithms with high-order integrators. In NIPS, 2015.
[18] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[19] H. R. Feyzmahdavian, A. Aytekin, and M. Johansson. An asynchronous mini-batch algorithm for regularised stochastic optimization. Technical Report arXiv:1505.04824, May 2015.
[20] S. Chaturapruek, J. C. Duchi, and C. Ré. Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care. In NIPS, 2015.
[21] S. L. Scott, A. W. Blocker, and F. V. Bonassi. Bayes and big data: The consensus Monte Carlo algorithm. Bayes 250, 2013.
[22] M. Rabinovich, E. Angelino, and M. I. Jordan. Variational consensus Monte Carlo. In NIPS, 2015.
[23] W. Neiswanger, C. Wang, and E. P. Xing. Asymptotically exact, embarrassingly parallel MCMC. In UAI, 2014.
[24] X. Wang, F. Guo, K. Heller, and D. Dunson. Parallelizing MCMC with random partition trees. In NIPS, 2015.
[25] J. C. Mattingly, A. M. Stuart, and M. V. Tretyakov. Construction of numerical time-average and stationary measures via Poisson equations. SIAM Journal on Numerical Analysis, 48(2):552–577, 2010.
[26] Y. A. Ma, T. Chen, and E. B. Fox. A complete recipe for stochastic gradient MCMC. In NIPS, 2015.
[27] S. Patterson and Y. W. Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In NIPS, 2013.
[28] C. Li, C. Chen, D. Carlson, and L. Carin. Preconditioned stochastic gradient Langevin dynamics for deep neural networks. In AAAI, 2016.
[29] S. Zhang, A. E. Choromanska, and Y. LeCun. Deep learning with elastic averaging SGD. In NIPS, 2015.
[30] Y. W. Teh, L. Hasenclever, T. Lienart, S. Vollmer, S. Webb, B. Lakshminarayanan, and C. Blundell. Distributed Bayesian learning with stochastic natural-gradient expectation propagation and the posterior server. Technical Report arXiv:1512.09327v1, December 2015.
[31] R. Bardenet, A. Doucet, and C. Holmes. On Markov chain Monte Carlo methods for tall data. Technical Report arXiv:1505.02827, May 2015.
[32] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[33] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra. Weight uncertainty in neural networks. In ICML, 2015.
[34] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[35] S. J. Vollmer, K. C. Zygalakis, and Y. W. Teh. (Non-)asymptotic properties of stochastic gradient Langevin dynamics. Technical Report arXiv:1501.00438, University of Oxford, UK, January 2015.
Remote Sensing Image Analysis via a Texture
Classification Neural Network
Hayit K. Greenspan and Rodney Goodman
Department of Electrical Engineering
California Institute of Technology, 116-81
Pasadena, CA 91125
[email protected]
Abstract
In this work we apply a texture classification network to remote sensing image analysis. The goal is to extract the characteristics of the area depicted
in the input image, thus achieving a segmented map of the region. We have
recently proposed a combined neural network and rule-based framework
for texture recognition. The framework uses unsupervised and supervised
learning, and provides probability estimates for the output classes. We
describe the texture classification network and extend it to demonstrate
its application to the Landsat and Aerial image analysis domain.
1 INTRODUCTION
In this work we apply a texture classification network to remote sensing image
analysis. The goal is to segment the input image into homogeneous textured regions
and identify each region as one of a prelearned library of textures, e.g. tree area and
urban area distinction. Classification of remote sensing imagery is of importance in
many applications, such as navigation, surveillance and exploration. It has become
a very complex task spanning a growing number of sensors and application domains.
The applications include: landcover identification (with systems such as the AVIRIS
and SPOT), atmospheric analysis via cloud-coverage mapping (using the AVHRR
sensor), oceanographic exploration for sea/ice type classification (SAR input) and
more.
Much attention has been given to the use of the spectral signature for the identification
of region types (Wharton, 1987; Lee and Philpot, 1991). Only recently has the
idea of adding on spatial information been presented (Ton et al., 1991). In this work
we investigate the possibility of gaining information from textural analysis. We
have recently developed a texture recognition system (Greenspan et al., 1992) which
achieves state-of-the-art results on natural textures. In this paper we apply the
system to remote sensing imagery and check the system's robustness in this noisy
environment. Texture can playa major role in segmenting the images into homogeneous areas and enhancing other sensors capabilities, such as multispectra analysis,
by indicating areas of interest in which further analysis can be pursued. Fusion of
the spatial information with the spectral signature will enhance the classification
and the overall automated analysis capabilities.
Most of the work in the literature focuses on human expert-based rules with specific
sensor data calibration. Some of the existing problems with this classic approach
are the following (Ton et al., 1991):
- Experienced photointerpreters are required to spend a considerable amount of
time generating rules.
- The rules need to be updated for different geographical regions.
- No spatial rules exist for the complex Landsat imagery.
An interesting question is if one can automate the rule generation. In this paper we
present a learning framework in which spatial rules are learned by the system from
a given database of examples.
The learning framework and its contribution in a texture-recognition system is the
topic of section 2. Experimental results of the system's application to remote sensing
imagery are presented in section 3.
2 The texture-classification network
We have previously presented a texture classification network which combines a
neural network and rule-based framework (Greenspan et al., 1992) and enables both
unsupervised and supervised learning. The system consists of three major stages,
as shown in Fig. 1. The first stage performs feature extraction and transforms the
image space into an array of 15-dimensional feature vectors, each vector corresponding to a local window in the original image. There is much evidence in animal visual
systems supporting the use of multi-channel orientation selective band-pass filters
in the feature-extraction phase. An open issue is the decision regarding the appropriate number of frequencies and orientations required for the representation of the
input domain. We define an initial set of 15 filters and achieve a computationally
efficient filtering scheme via the multi-resolution pyramidal approach.
The learning mechanism shown next derives a minimal subset of the above filters
which conveys sufficient information about the visual input for its differentiation
and labeling. In an unsupervised stage a machine-learning clustering algorithm is
used to quantize the continuous input features. A supervised learning stage follows
in which labeling of the input domain is achieved using a rule-based network. Here
an information theoretic measure is utilized to find the most informative correlations
between the attributes and the pattern class specification, while providing probability estimates for the output classes. Ultimately, a minimal representation for a
library of patterns is learned in a training mode, following which the classification
of new patterns is achieved.
[Figure 1 diagram: a window of the input image passes through orientation-selective band-pass filters (the feature-extraction phase), yielding an N-dimensional continuous feature vector; unsupervised clustering quantizes it into an N-dimensional discrete feature vector, from which supervised learning produces the texture classes (the learning phase).]
Figure 1: System block diagram
2.1 The system in more detail
The initial stage for a classification system is the feature extraction phase. In the
texture-analysis task there is both biological and computational evidence supporting the use of Gabor-like filters for the feature-extraction. In this work, we use
the Log Gabor pyramid, or the Gabor wavelet decomposition to define an initial
finite set of filters. A computationally efficient scheme involves using a pyramidal
representation of the image which is convolved with fixed-spatial-support oriented
Gabor filters (Greenspan et al., 1993). Three scales are used with 4 orientations per
scale (0,90,45,135 degrees), together with a non-oriented component, to produce a
15-dimensional feature vector as the output of the feature extraction stage. Using
the pyramid representation is computationally efficient as the image is subsampled
in the filtering process. Two such size reduction stages take place in the three scale
pyramid. The feature values thus generated correspond to the average power of the
response, to specific orientation and frequency ranges, in an 8 * 8 window of the
input image. Each such window gets mapped to a 15-dimensional attribute vector
as the output of the feature extraction stage.
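A minimal NumPy/SciPy sketch of this feature-extraction stage is given below. It is our own illustrative stand-in, not the authors' code: real Gabor kernels replace the Log Gabor filters, an isotropic difference-of-Gaussians stands in for the non-oriented component, and the kernel size, frequency and bandwidth are assumed values.

import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, freq=0.25, size=9, sigma=2.0):
    # Real oriented Gabor kernel; stands in for the paper's Log Gabor filters.
    h = size // 2
    y, x = np.mgrid[-h:h + 1, -h:h + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def dog_kernel(size=9, s1=1.0, s2=2.0):
    # Isotropic difference-of-Gaussians as the non-oriented component.
    h = size // 2
    y, x = np.mgrid[-h:h + 1, -h:h + 1]
    g = lambda s: np.exp(-(x**2 + y**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(s1) - g(s2)

def avg_power(resp, win=8):
    # Average power of the filter response over non-overlapping win x win windows.
    H, W = resp.shape
    r = resp[:H - H % win, :W - W % win] ** 2
    return r.reshape(H // win, win, W // win, win).mean(axis=(1, 3))

def texture_features(img, n_scales=3, win=8):
    # One 15-D vector per 8x8 window: (4 orientations + 1 non-oriented) x 3 scales.
    level, maps = img.astype(float), []
    for s in range(n_scales):
        kernels = [gabor_kernel(t) for t in (0, np.pi/4, np.pi/2, 3*np.pi/4)]
        kernels.append(dog_kernel())
        for k in kernels:
            p = avg_power(convolve2d(level, k, mode='same'), win)
            maps.append(np.kron(p, np.ones((2**s, 2**s))))  # replicate to the base grid
        level = level[::2, ::2]   # crude pyramid subsampling (no anti-alias smoothing)
    h = min(m.shape[0] for m in maps); w = min(m.shape[1] for m in maps)
    return np.stack([m[:h, :w] for m in maps], axis=-1)     # (h, w, 15)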
The goal of the learning system is to use the feature representation described above
to discriminate between the input patterns, or textures. Both unsupervised and
supervised learning stages are utilized. A minimal set of features are extracted from
the 15-dimensional attribute vector, which convey sufficient information about the
visual input for its differentiation and labeling.
The unsupervised learning stage can be viewed as a preprocessing stage for achieving a more compact representation of the filtered input. The goal is to quantize the
continuous valued features which are the result of the initial filtering, thus shifting
to a more symbolic representation of the input domain. This clustering stage was
found experimentally to be of importance as an initial learning phase in a classification system. The need for discretization becomes evident when trying to learn
associations between attributes in a symbolic representation, such as rules.
The output of the filtering stage consists of N (= 15) continuous-valued feature
maps, each representing a filtered version of the original input. Thus, each local
area of the input image is represented via an N-dimensional feature vector. An
array of such N-dimensional vectors, viewed across the input image, is the input
to the learning stage. We wish to detect characteristic behavior across the N-dimensional feature space, for the family of textures to be learned. In this work, each
dimension, out of the 15-dimensional attribute vector, is individually clustered. All
training samples are thus projected onto each axis of the space and one-dimensional
clusters are found using the K-means clustering algorithm (Duda and Hart, 1973).
This statistical clustering technique consists of an iterative procedure of finding
K means in the training sample space, following which each new input sample is
associated with the closest mean in Euclidean distance. The means, labeled 0 thru K
minus 1 arbitrarily, correspond to discrete codewords. Each continuous-valued input
sample gets mapped to the discrete codeword representing its associated mean. The
output of this preprocessing stage is a 15-dimensional quantized vector of attributes
which is the result of concatenating the discrete-valued codewords of the individual
dimensions.
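The per-dimension quantization can be sketched as follows; scikit-learn's KMeans stands in for the clustering step, and the codebook size K = 8 is an assumed value, since the number of means per dimension is a design choice not fixed here.

import numpy as np
from sklearn.cluster import KMeans

def fit_quantizer(train_feats, K=8):
    # One 1-D K-means quantizer per feature dimension.
    models = [KMeans(n_clusters=K, n_init=10).fit(train_feats[:, [d]])
              for d in range(train_feats.shape[1])]
    def encode(feats):
        # Concatenate per-dimension codewords into a discrete attribute vector.
        codes = [m.predict(feats[:, [d]]) for d, m in enumerate(models)]
        return np.stack(codes, axis=1)   # (N, 15) integers in {0, ..., K-1}
    return encode

encode = fit_quantizer(np.random.rand(1000, 15))
symbols = encode(np.random.rand(5, 15))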
In the final, supervised stage, we utilize the existing information in the feature
maps for higher-level analysis, such as input labeling and classification. A rule-based information theoretic approach is used which is an extension of a first-order
Bayesian classifier, because of its ability to output probability estimates for the output classes (Goodman et al., 1992). The classifier defines correlations between input
features and output classes as probabilistic rules. A data driven supervised learning
approach utilizes an information theoretic measure to learn the most informative
links or rules between features and class labels. The classifier then uses these links
to provide an estimate of the probability of a given output class being true. When
presented with a new input evidence vector, a set of rules R can be considered to
"fire". The classifier estimates the posterior probability of each class given the rules
that fire in the form $\log(p(x)|R)$, and the largest estimate is chosen as the initial
class label decision. The probability estimates for the output classes can now be
used for feedback purposes and further higher level processing.
The rule-based classification system can be mapped into a 3-layer feed-forward
architecture as shown in Fig. 2 (Greenspan et al., 1993). The input layer contains
a node for each attribute. The hidden layer contains a node for each rule and the
output layer contains a node for each class. Each rule (second layer node j) is
connected to a class via a multiplicative weight of evidence Wj.
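A toy sketch of this inference pass follows: each rule that fires on the discrete evidence vector adds its weight of evidence w_j to the class it points to. The rules, classes and weights below are invented for illustration, and learning them via the information-theoretic measure is omitted.

import numpy as np

rules = [
    (lambda a: a[0] == 2 and a[3] == 1, 0, 1.3),   # (condition, class, w_j)
    (lambda a: a[7] == 0,               1, 0.8),
    (lambda a: a[2] == 3,               0, 0.4),
]

def class_log_scores(a, n_classes=2):
    log_p = np.zeros(n_classes)          # flat prior for simplicity
    for cond, c, w in rules:
        if cond(a):                      # the rule "fires" on this evidence
            log_p[c] += w
    return log_p                         # log-probability estimates up to a constant

print(class_log_scores(np.array([2, 0, 3, 1, 0, 0, 0, 0])).argmax())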
[Figure 2 diagram: input attribute nodes feed rule nodes, which feed class probability estimates.]
Figure 2: Rule-Based Network
3 Results
The above-described system has achieved state-of-the-art results on both structured
and unstructured natural texture classification [5]. In this work we present initial
results of applying the network to the noisy environment of satellite and air-borne
imagery.
Fig. 3 presents two such examples. The first example (top) is an image of Pasadena,
California, taken via the AVIRIS system (Airborne Visible/Infrared Imaging Spectrometer). The AVIRIS system covers 224 contiguous spectral bands simultaneously, at 20 meters per pixel resolution. The presented example is taken as an
average of several bands in the visual range. In this input image we can see that
a major distinguishing characteristic is urban area vs. hilly surround. These are
the two categories we set forth to learn. The training consists of a 128*128 image
sample for each category. The test input is a 512*512 image which is very noisy
and because of its low resolution, very difficult to segment into the two categories,
even to our own visual perception. In the presented output (top right), the urban area is labeled in white, the hillside in gray and unknown, undetermined areas
are in darker gray. We see that a rough segmentation into the desired regions has
been achieved. The probabilistic network's output allows for the identification of
unknown or unspecified regions, in which more elaborate analysis can be pursued
(Greenspan et al., 1992). The dark gray areas correspond to such regions; one example is the hill and urban contact (bottom right) in which some urban suburbs
on the hill slopes form a mixture of the classes. Note that in the initial results
presented the blockiness perceived is the result of the analysis resolution chosen.
Fusing into the system additional spectral bands as our input, would enable pixel
resolution as well as enable detecting additional classes (not visually detectable),
such as concrete material, a variety of vegetation etc.
A higher resolution Airborne image is presented at the bottom of Fig. 3. The
classes learned are bush (output label dark gray), ground (output label gray) and a
structured area, such as a field present or the man-made structures (white). Here,
the training was done on 128*128 image examples (1 example per class). The input
image is 800*800. In the result presented (right) we see that the three classes have
been found and a rough segmentation into the three regions is achieved. Note in
particular the detection of the bush areas and the three main structured areas in
the image, including the man-made field, indicated in white.
Our final example relates to an autonomous navigation scenario. Autonomous vehicles require an automated scene analysis system to avoid obstacles and navigate
through rough terrain. Fusion of several visual modalities, such as intensity-based
segmentation, texture, stereo, and color, together with other domain inputs, such
as soil spectral decomposition analysis, will be required for this challenging task. In
Fig. 4 we present preliminary results on outdoor photographed scenes taken by an
autonomous vehicle at JPL (Jet Propulsion Laboratory, Pasadena). The presented
scenes (left) are segmented into bush and gravel regions (right). The training set
consists of four 64 * 64 image samples from each category. In the top example (a
256*256 pixel image), light gray indicates gravel while black represents bushy regions. We can see that intensity alone can not suffice in this task (for example, top
right corner). The system has learned some textural characteristics which guided
Figure 3: Remote sensing image analysis results. The input test image is shown
(left) followed by the system output classification map (right). In the AVIRIS (top)
input, white indicates urban regions, gray is a hilly area and dark gray reflects
undetermined or different region types. In the Airborne output (bottom), dark
gray indicates a bush area, light gray is a ground cover region and white indicates
man-made structures. Both robustness to noise and generalization are demonstrated
in these two challenging real-world problems.
the segmentation in otherwise similar-intensity regions. Note that this is also probably the cause for identifying the track-like region (e.g., center bottom) as bush
regions. We could learn track-like regions as a third category, or specifically include
such examples as gravel in our training set.
In the second example (a 400*400 input image, bottom) light gray indicates gravel,
dark gray represents a bush-like region, and black represents the unknown category.
Here, the top right region of the sky is labeled correctly as an unknown, or new
category. Note that intensity alone would have confused that region as being gravel.
Overall, the texture classification neural-network succeeds in achieving a correct,
yet rough, segmentation of the scene based on textural characteristics alone. These
are encouraging results indicating that the learning system has learned informative
characteristics of the domain.
Fig 4: Image Analysis for Autonomous Navigation
4 Summary and Discussion
The presented results demonstrate the network's capability for generalization and
robustness to noise in very challenging real-world problems. In the presented framework a learning mechanism automates the rule generation. This framework can address some of the current difficulties in using the human expert's knowledge. Furthermore, the automation of the rule generation can enhance the expert's knowledge
regarding the task at hand. We have demonstrated that the use of textural spatial information can segment complex scenery into homogeneous regions. Some of
the system's strengths include generalization to new scenes, invariance to intensity,
and the ability to enlarge the feature vector representation to include additional
inputs (such as additional spectral bands) and learn rules characterizing the integrated modalities. Future work includes fusing several modalities within the learning framework for enhanced performance and testing the performance on a large
database.
Acknowledgements
This work is supported in part by Pacific Bell, and in part by DARPA and ONR
under grant no. N00014-92-J-1860. H. Greenspan is supported in part by an Intel
fellowship. The research described in this paper was carried out in part by the
Jet Propulsion Laboratories, California Institute of Technology. We would like to
thank Dr. C. Anderson for his pyramid software support and Dr. L. Matthies for
the autonomous vehicle images.
References
S. Wharton. (1987) A Spectral-Knowledge-Based Approach for Urban Land-Cover
Discrimination. IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-25(3):272-282.
J. Lee and W. Philpot. (1991) Spectral Texture Pattern Matching: A Classifier
For Digital Imagery. IEEE Transactions on Geoscience and Remote Sensing, Vol.
29(4):545-554.
J. Ton, J. Sticklen and A. Jain. (1991) Knowledge-Based Segmentation of Landsat
Images. IEEE Transactions on Geoscience and Remote Sensing, Vol. 29(2):222-232.
H. Greenspan, R. Goodman and R. Chellappa. (1992) Combined Neural Network
and Rule-Based Framework for Probabilistic Pattern Recognition and Discovery. In
J. E. Moody, S. J. Hanson, and R. P. Lippman (eds.), Advances in Neural Information Processing Systems 4, 444-452, San Mateo, CA: Morgan Kaufmann Publishers.
H. Greenspan, R. Goodman, R. Chellappa and C. Anderson. (1993) Learning
Texture Discrimination Rules in a Multiresolution System. Submitted to IEEE
Transactions on Pattern Analysis and Machine Intelligence.
R. O. Duda and P. E. Hart. (1973) Pattern Classification and Scene Analysis. John
Wiley and Sons, Inc.
R. Goodman, C. Higgins, J. Miller and P. Smyth. (1992) Rule-Based Networks for
Classification and Probability Estimation. Neural Computation, 4:781-804.
Learning Transferrable Representations for
Unsupervised Domain Adaptation
Ozan Sener¹, Hyun Oh Song¹, Ashutosh Saxena², Silvio Savarese¹
¹Stanford University, ²Brain of Things
{ozan,hsong,asaxena,ssilvio}@cs.stanford.edu
Abstract
Supervised learning with large scale labelled datasets and deep layered models has
caused a paradigm shift in diverse areas in learning and recognition. However, this
approach still suffers from generalization issues in the presence of a domain
shift between the training and the test data distribution. Since unsupervised domain
adaptation algorithms directly address this domain shift problem between a labelled
source dataset and an unlabelled target dataset, recent papers [11, 33] have shown
promising results by fine-tuning the networks with domain adaptation loss functions
which try to align the mismatch between the training and testing data distributions.
Nevertheless, these recent deep learning based domain adaptation approaches still
suffer from issues such as high sensitivity to the gradient reversal hyperparameters
[11] and overfitting during the fine-tuning stage. In this paper, we propose a unified
deep learning framework where the representation, cross domain transformation,
and target label inference are all jointly optimized in an end-to-end fashion for
unsupervised domain adaptation. Our experiments show that the proposed method
significantly outperforms state-of-the-art algorithms in both object recognition and
digit classification experiments by a large margin.
1 Introduction
Recently, deep convolutional neural networks [17, 26, 30] have propelled unprecedented advances
in artificial intelligence including object recognition, speech recognition, and image captioning.
Although these networks are very good at learning state of the art feature representations and
recognizing discriminative patterns, one major drawback is that the network requires huge amounts
of labelled training data to fit millions of parameters in the complex network. However, creating such
datasets with complete annotations is not only tedious and error prone, but also extremely costly. In
this regard, the research community has proposed different mechanisms such as semi-supervised
learning [27, 37], transfer learning [23, 31], weakly labelled learning, and domain adaptation. Among
these approaches, domain adaptation is one of the most appealing techniques when a fully annotated
dataset (e.g. ImageNet [7], Sports1M [14]) is already available as a reference.
The goal of unsupervised domain adaptation, in particular, is as follows. Given a fully labeled
source dataset and an unlabeled target dataset, to learn a model which can generalize to the target
domain while taking the domain shift across the datasets into account. The majority of the literature
[13, 29, 9, 28, 32] in unsupervised domain adaptation formulates a learning problem where the task
is to find a transformation matrix to align the labelled source data distribution to the unlabelled target
data distribution. Although these approaches have shown promising results, they show accuracy
degradation because of the discrepancy between the learning procedure and the actual target inference
procedure. In this paper, we aim to address this issue by incorporating the unknown target labels into
the training procedure.
In this regard, we formulate a unified deep learning framework where the feature representation,
domain transformation, and target labels are all jointly optimized in an end-to-end fashion. The
proposed framework first takes as input a batch of labelled source and unlabelled target examples, and
maps this batch of raw input examples into a deep representation. Then, the framework computes the
loss of the input batch based on a two stage optimization in which it alternates between inferring the
labels of the target examples transductively and optimizing the domain transformation parameters.
Concretely, in the transduction stage, given the fixed domain transform parameter, we jointly infer
all target labels by solving a discrete multi-label energy minimization problem. In the adaptation
stage, given a fixed target label assignment, we seek to find the optimal asymmetric metric between
the source and the target data. The advantage of our method is that we can jointly learn the optimal
feature representation and the optimal domain transformation parameter, which are aware of the
subsequent transductive inference procedure.
Following the standard evaluation protocol in the unsupervised domain adaptation community,
we evaluate our method on the digit classification task using MNIST [19] and SVHN[21]
as well as the object recognition task using the Office [25] dataset, and demonstrate state
of the art performance in comparison to all existing unsupervised domain adaptation methods. Learned models and the source code can be reached from the project webpage
http://cvgl.stanford.edu/transductive_adaptation.
2 Related Work
This paper is closely related to two active research areas: (1) Unsupervised domain adaptation, and
(2) Transductive learning.
Unsupervised domain adaptation: [16] casts the zero-shot learning [22] problem as an unsupervised
domain adaptation problem in the dictionary learning and sparse coding framework, assuming access
to additional attribute information. Recently, [3] proposed the active nearest neighbor algorithm,
which incorporates an active learning component into the domain adaptation problem and makes
a bounded number of active queries to users. Also, [13, 9, 28] proposed subspace alignment based
approaches to unsupervised domain adaptation where the task is to learn a joint transformation
and projection in which the difference between the source and the target covariance is minimized.
However, these methods learn the transformation matrices on the whole source and target dataset
without utilizing the source labels.
[32] utilizes a local max margin metric learning objective [35] to first assign the target labels with the
nearest neighbor scheme and then learn a distance metric to enforce the negative pairwise distances
to be larger than the positive pairwise distances. However, this method learns a symmetric distance
matrix shared by both the source and the target domains so the method is susceptible to the discrepancies between the source and the target distributions. Recently, [11, 33] proposed a deep learning
based method to learn domain invariant features by providing the reversed gradient signal from the
binary domain classifiers. Although this method performs better than the aforementioned approaches,
its accuracy is limited since domain invariance does not necessarily imply discriminative features
in the target domain.
Transductive learning: In transductive learning [10], the model has access to unlabelled test
samples during training. [24] applies a semi-supervised label propagation algorithm to the semi-supervised transfer learning problem, assuming access to a few labeled examples and additional human-
specified semantic knowledge. [15] tackled a classification problem where predictions are made
jointly across all test examples in a transductive [10] setting. The method essentially enforces the
notion that the true labels vary smoothly with respect to the input data. We extend this notion to
jointly infer the labels of unsupervised target data points in a k-NN graph.
To summarize, our main contribution is to formulate an end-to-end deep learning framework where
we learn the optimal feature representation, infer target labels via discrete energy minimization
(transduction), and learn the transformation (adaptation) between source and target examples all
jointly. Our experiments on digit classification using MNIST [19] and SVHN[21] as well as the
object recognition experiments on Office [25] datasets show state of the art results, outperforming all
existing methods by a substantial margin.
3 Method
3.1 Problem Definition and Notation
In unsupervised domain adaptation, one of the domains (source) is supervised, $\{\tilde{x}_i, \tilde{y}_i\}_{i \in [N^s]}$, with
$N^s$ data points $\tilde{x}_i$ and the corresponding labels $\tilde{y}_i$ from a discrete set $\tilde{y}_i \in \mathcal{Y} = \{1, \ldots, Y\}$. The
other domain (target), on the other hand, is unsupervised and has $N^u$ data points $\{x_i\}_{i \in [N^u]}$.
We further assume that the two domains have different distributions, $\tilde{x}_i \sim p_s$ and $x_i \sim p_t$, defined on the
same space $\tilde{x}_i, x_i \in \mathcal{X}$. We consider a case in which there are two feature functions $\Phi_s, \Phi_t : \mathcal{X} \rightarrow \mathbb{R}^d$
applicable to source and target separately. These feature functions extract the information both shared
among domains and explicit to the individual ones. The way we model common features is by
sharing a subset of parameters between the feature functions, as $\Phi_s = \Phi_{\theta_c, \theta_s}$ and $\Phi_t = \Phi_{\theta_c, \theta_t}$. We use
deep neural networks to implement these functions. In our implementation, $\theta_c$ corresponds to the
parameters in the first few layers of the networks and $\theta_s, \theta_t$ correspond to the respective final layers.
In general, our model is applicable to any hierarchical and differentiable feature function which can
be expressed as a composite function $\Phi_s = f_{\theta_s}(g_{\theta_c}(\cdot))$ for both source and target.
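A linear toy version of this parameter sharing, with g standing in for the convolutional trunk, might look as follows; all dimensions (256, 64, 128) are assumed for illustration.

import numpy as np

rng = np.random.default_rng(0)

theta_c = rng.normal(scale=0.1, size=(256, 64))   # shared parameters
theta_s = rng.normal(scale=0.1, size=(64, 128))   # source-specific head
theta_t = rng.normal(scale=0.1, size=(64, 128))   # target-specific head

def g(x):          # common sub-network g_{theta_c}, tied across domains
    return np.maximum(x @ theta_c, 0.0)

def phi_s(x):      # Phi_s = f_{theta_s}(g_{theta_c}(x))
    return g(x) @ theta_s

def phi_t(x):      # Phi_t = f_{theta_t}(g_{theta_c}(x))
    return g(x) @ theta_t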
3.2 Consistent Structured Transduction
Our method is based on jointly learning the transferable domain specific representations for source
and target as well as estimating the labels of the unsupervised data-points. We denote these two main
components of our method as transduction and adaptation. The transduction is the sub-problem of
labelling unsupervised data points and the adaptation is the sub-problem of solving for the domain
shift. In order to solve this joint problem tractably, we exploit two heuristics: cyclic consistency for
adaptation and structured consistency for transduction.
Cyclic consistency: One desired property of $\Phi_s$ and $\Phi_t$ is consistency. If we estimate the labels of
the unsupervised data points and then use these points with their estimated labels to estimate the
labels of the supervised data points, we want the predicted labels of the supervised data points to be
consistent with the ground-truth labels. Using the inner product as an asymmetric similarity metric,
$s(\tilde{x}_i, x_j) = \Phi_s(\tilde{x}_i)^\top \Phi_t(x_j)$, this consistency can be represented with the following diagram:

$(\tilde{x}_i, \tilde{y}_i) \xrightarrow{\text{Transduction}} (x_j, y_j) \xrightarrow{\text{Transduction}} (\tilde{x}_i, \tilde{y}_i^{pred})$, with cyclic consistency requiring $\tilde{y}_i = \tilde{y}_i^{pred}$.
It can be shown that if the transduction from target to source follows a nearest neighbor rule, cyclic
consistency can be enforced without explicitly computing $\tilde{y}_i^{pred}$ by using the large-margin nearest
neighbor (LMNN) [35] rule. For each source point, we enforce a margin such that the similarity
between the source point and the nearest neighbor from the target with the same label is greater than
the similarity between the source point and the nearest neighbor from the target with a different label.
Formally, $\Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^+}) > \Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^-}) + \alpha$, where $x_{i^+}$ is the nearest target having the same
class label as $\tilde{x}_i$ and $x_{i^-}$ is the nearest target having a different class label.
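In code, the resulting large-margin term can be sketched as below (NumPy, batch embeddings as inputs; an illustrative sketch rather than the paper's implementation):

import numpy as np

def cyclic_consistency_loss(S, T, y_src, y_tgt, alpha=1.0):
    # Hinge enforcing Phi_s(x~_i)^T Phi_t(x_{i+}) > Phi_s(x~_i)^T Phi_t(x_{i-}) + alpha.
    # S: (Ns, d) source embeddings, T: (Nu, d) target embeddings.
    sims = S @ T.T
    loss = 0.0
    for i, y in enumerate(y_src):
        same, diff = (y_tgt == y), (y_tgt != y)
        if same.any() and diff.any():
            s_pos = sims[i, same].max()     # nearest same-label target
            s_neg = sims[i, diff].max()     # nearest different-label target
            loss += max(0.0, s_neg - s_pos + alpha)
    return loss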
Structured consistency: We enforce a structured consistency when we label the target points during
the transduction. The structure we enforce is: if two target points are similar to each other, they are
more likely to have the same label. To do so, we create a k-NN graph of target points using the similarity
metric $\Phi_t(x_i)^\top \Phi_t(x_j)$. We denote the neighbors of the point $x_i$ as $N(x_i)$. We enforce structured
consistency by penalizing neighboring points of different labels proportionally to their similarity score.
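The corresponding k-NN disagreement penalty can be sketched as:

import numpy as np

def structured_consistency_penalty(T, y, k=5):
    # Sum of Phi_t(x_i)^T Phi_t(x_j) over k-NN pairs whose labels disagree.
    sims = T @ T.T
    np.fill_diagonal(sims, -np.inf)                 # exclude self-matches
    nbrs = np.argsort(-sims, axis=1)[:, :k]         # k most similar targets
    penalty = 0.0
    for i in range(T.shape[0]):
        for j in nbrs[i]:
            if y[i] != y[j]:
                penalty += sims[i, j]
    return penalty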
Our model leads to the following optimization problem over the target labels $y_i$ and the feature
function parameters $\theta_c, \theta_s, \theta_t$, jointly solving transduction and adaptation:

$$\min_{\theta_c,\theta_s,\theta_t,\, y_1,\ldots,y_{N^u}} \; \sum_{i\in[N^s]} \underbrace{\big[\Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^-}) - \Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^+}) + \alpha\big]_+}_{\text{Cyclic Consistency}} \;+\; \lambda \sum_{i\in[N^u]} \sum_{x_j \in N(x_i)} \underbrace{\Phi_t(x_i)^\top \Phi_t(x_j)\,\mathbb{1}(y_i \neq y_j)}_{\text{Structured Consistency}} \tag{1}$$

$$\text{s.t.}\quad i^+ = \arg\max_{j \,|\, y_j = \tilde{y}_i} \Phi_s(\tilde{x}_i)^\top \Phi_t(x_j) \quad\text{and}\quad i^- = \arg\max_{j \,|\, y_j \neq \tilde{y}_i} \Phi_s(\tilde{x}_i)^\top \Phi_t(x_j)$$

where $\mathbb{1}(a)$ is an indicator function which is 1 if $a$ is true and 0 otherwise, and $[a]_+ = \max(0, a)$ is a rectifier function.
We solve this optimization problem via alternating minimization, iterating between solving for the
unsupervised labels $y_i$ (transduction) and learning the similarity metric $\theta_c, \theta_s, \theta_t$ (adaptation). We
explain these two steps in detail in the following sections.
3.3 Transduction: Labeling Target Domain
In order to label the unsupervised points, we base our model on the k-nearest-neighbor rule. We
simply compute the k-NN supervised data points for each unsupervised data point using the learned
metric and transfer the corresponding majority label. Formally, given a similarity metric defined by $\theta_c, \theta_s, \theta_t$,
the k-NN rule is $y_i^{pred} = \arg\max_y \frac{k_y(x_i)}{k}$, where $k_y(x_i)$ is the number of samples having label $y$
among the k nearest neighbors of $x_i$ from the source domain. One major issue with this approach is the
inaccuracy of transduction during the initial stage of the algorithm. Since the learned metric will not
be accurate, we expect to see some noisy k-NN sets. Hence, we propose two solutions to this
problem.
Structured Consistency: Similar to existing graph transduction algorithms [4, 36], we create a
k-nearest neighbor (k-NN) graph over the unsupervised data points and penalize disagreements of
labels between neighbors.
Reject option: In the initial stage of the algorithm, we let the transduction step use the reject R as
an additional label (besides the class labels) to label the unsupervised target points. In other words,
our transduction algorithm can decide to not label (reject) some of the points so that they will not be
used for adaptation. As the learned metric gets more accurate in later iterations, the transduction
algorithm can change the label from R to one of the class labels.
Using the aforementioned heuristics, we define our transduction sub-problem as$^1$:

$$\min_{y_1,\ldots,y_{N^u} \in \mathcal{Y} \cup \{R\}} \; \sum_{i\in[N^u]} l(x_i, y_i) \;+\; \lambda \sum_{i\in[N^u]} \sum_{x_j \in N(x_i)} \Phi_t(x_i)^\top \Phi_t(x_j)\,\mathbb{1}(y_i \neq y_j) \tag{2}$$

$$\text{where}\quad l(x_i, y) = \begin{cases} 1 - \frac{k_y(x_i)}{k}, & y \in \mathcal{Y} \\ \gamma \max_{y' \in \mathcal{Y}} \frac{k_{y'}(x_i)}{k}, & y = R \end{cases}$$

and $\gamma$ is the relative cost of the reject option.
The cost $l(x_i, R)$ is smaller if none of the classes has a majority, promoting the reject option for undecided
cases. We also modulate $\gamma$ during learning to decrease the number of rejections in the later stages
of the adaptation. This problem can be solved approximately using many existing methods. We use
the $\alpha$-$\beta$ swapping algorithm from [5] since it has experimentally been shown to be efficient and accurate.
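The sketch below illustrates this transduction step: the unary costs l(x_i, y) including the reject column, and a simple iterated-conditional-modes pass that deliberately stands in for the alpha-beta swapping solver of [5].

import numpy as np

def unary_costs(S, T, y_src, n_classes, k=5, gamma=0.1):
    # l(x_i, y): 1 - k_y(x_i)/k for class labels; gamma * max_y k_y(x_i)/k for
    # the reject label, stored as the last column.
    sims = T @ S.T
    nn = np.argsort(-sims, axis=1)[:, :k]           # k nearest source points
    frac = np.zeros((T.shape[0], n_classes))
    for i in range(T.shape[0]):
        labels, counts = np.unique(y_src[nn[i]], return_counts=True)
        frac[i, labels] = counts / k
    return np.hstack([1.0 - frac, gamma * frac.max(axis=1, keepdims=True)])

def transduce_icm(U, T, y0, k=5, lam=1e-3, n_iters=5):
    # Greedy iterated-conditional-modes pass over the labels (including reject).
    sims = T @ T.T
    np.fill_diagonal(sims, -np.inf)
    nbrs = np.argsort(-sims, axis=1)[:, :k]
    y = y0.copy()
    for _ in range(n_iters):
        for i in range(len(y)):
            pair = np.array([lam * sum(sims[i, j] for j in nbrs[i] if y[j] != c)
                             for c in range(U.shape[1])])
            y[i] = np.argmin(U[i] + pair)
    return y                                        # label index n_classes = reject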
3.4 Adaptation: Learning the Metric
Given the predicted labels $y_i$ for the unsupervised data points $x_i$, we can then learn a metric in order
to minimize the loss function defined in (1). Following the cyclic consistency construction, the
LMNN rule can be represented using a triplet loss defined between the supervised source data
points and their nearest positive and negative neighbors among the unsupervised target points. We do
not include target data points with reject labels during this construction. Formally, we can define
the adaptation problem given the unsupervised labels as:

$$\min_{\theta_c,\theta_s,\theta_t} \; \sum_{i\in[N^s]} \big[\Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^-}) - \Phi_s(\tilde{x}_i)^\top \Phi_t(x_{i^+}) + \alpha\big]_+ \;+\; \lambda \sum_{i\in[N^u]} \sum_{x_j \in N(x_i)} \Phi_t(x_i)^\top \Phi_t(x_j)\,\mathbb{1}(y_i \neq y_j) \tag{3}$$

$$\text{where}\quad i^+ = \arg\max_{j\,|\,y_j = \tilde{y}_i} \Phi_s(\tilde{x}_i)^\top \Phi_t(x_j) \quad\text{and}\quad i^- = \arg\max_{j\,|\,y_j \neq \tilde{y}_i,\, y_j \neq R} \Phi_s(\tilde{x}_i)^\top \Phi_t(x_j) \tag{4}$$

We optimize this function via stochastic gradient descent using the sub-gradients $\frac{\partial \text{loss}}{\partial \theta_s}$, $\frac{\partial \text{loss}}{\partial \theta_t}$ and
$\frac{\partial \text{loss}}{\partial \theta_c}$. These sub-gradients can be efficiently computed with back-propagation (see [1] for details).
$^1$The subproblem we define here does not directly correspond to optimization of (1) with respect to
$y_1, \ldots, y_{N^u}$. It is an extension of the exact sub-problem, replacing the 1-NN rule with the k-NN rule and introducing
the reject option.
3.5 Implementation Details
We use AlexNet [17] and LeNet [18] architectures with small modifications. We remove their final
softmax layer and change the size of the final fully connected layer according to the desired feature
dimension. We consider the last fully connected layer as domain specific ($\theta_s$, $\theta_t$) and the rest as
the common network $\theta_c$. Common network weights are tied between domains, and the final layers are
learned separately. In order to have a fair comparison, we use the same architectures as [11], only
modifying the embedding size (see the supplementary material [1] for details).

Since the Office dataset is quite small, we do not learn the network from scratch for the Office
experiments and instead initialize with the weights pre-trained on ImageNet. In all of our experiments,
we set the feature dimension to 128. We use stochastic gradient descent with AdaGrad [8] to learn the
feature functions. We initialize convolutional weights with truncated normals having std-dev 0.1,
biases with the constant value 0.1, and use a learning rate of $2.5 \times 10^{-4}$ with batch size 512. We start
the rejection penalty at $\gamma = 0.1$ and linearly increase it with each epoch as $\gamma = \frac{\#\text{epoch} - 1}{M} + 0.1$. In
our experiments, we use $M = 20$, $\lambda = 0.001$ and $\alpha = 1$.

Algorithm 1 Transduction with Domain Shift
Input: Source $\tilde{x}_{1 \cdots N^s}$, $\tilde{y}_{1 \cdots N^s}$, Target $x_{1 \cdots N^u}$, Batch Size $2 \cdot B$
for $t = 0$ to max_iter do
    Sample $\{\tilde{x}_{1,\ldots,B}, \tilde{y}_{1,\ldots,B}\}$, $\{x_{1,\ldots,B}\}$
    Solve (2) for $\{y_{1 \cdots B}\}$
    for $i = 1$ to $B$ do
        if $\tilde{y}_i \in y_{1 \cdots B}$ and $\exists k,\ y_k \in \mathcal{Y} \setminus \tilde{y}_i$ then
            Compute $(i^+, i^-)$ using $\{y_{1 \cdots B}\}$ in (4)
            Update $\frac{\partial \text{loss}}{\partial \theta_c}$, $\frac{\partial \text{loss}}{\partial \theta_s}$, $\frac{\partial \text{loss}}{\partial \theta_t}$
        end if
    end for
    $\eta(t) \leftarrow$ AdaGrad rule [8]
    $\theta_c \leftarrow \theta_c + \eta(t)\frac{\partial \text{loss}}{\partial \theta_c}$, $\theta_s \leftarrow \theta_s + \eta(t)\frac{\partial \text{loss}}{\partial \theta_s}$, $\theta_t \leftarrow \theta_t + \eta(t)\frac{\partial \text{loss}}{\partial \theta_t}$
end for
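Putting the pieces together, a skeleton of Algorithm 1 that reuses the sketches above (phi_s, phi_t, unary_costs, transduce_icm, and the two loss terms) might read as follows; the parameter update itself is abstracted away, since in the paper it is done by backpropagating the loss with AdaGrad.

import numpy as np

rng = np.random.default_rng(0)

def train(src_x, src_y, tgt_x, n_classes=10, B=512, max_iter=1000,
          alpha=1.0, lam=1e-3, M=20, iters_per_epoch=100):
    for t in range(max_iter):
        gamma = (t // iters_per_epoch) / M + 0.1      # rejection-penalty schedule
        i_s = rng.choice(len(src_x), B)               # sample a 2B mixed batch
        i_t = rng.choice(len(tgt_x), B)
        S, T = phi_s(src_x[i_s]), phi_t(tgt_x[i_t])
        U = unary_costs(S, T, src_y[i_s], n_classes, gamma=gamma)
        y = transduce_icm(U, T, U.argmin(axis=1))     # transduction step
        keep = y < n_classes                          # drop rejected targets
        loss = (cyclic_consistency_loss(S, T[keep], src_y[i_s], y[keep], alpha)
                + lam * structured_consistency_penalty(T[keep], y[keep]))
        # adaptation step: update theta_c, theta_s, theta_t with the
        # (sub-)gradients of `loss`, e.g. via autodiff + AdaGrad (omitted here).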
4 Experimental Results
We evaluate our algorithm on various unsupervised domain adaptation tasks while focusing on two
different problems: hand-written digit classification and object recognition.
Datasets: We use MNIST [19], Street View House Numbers [21] and an artificially generated version
of MNIST (MNIST-M) [11] to evaluate our algorithm on the digit classification task. MNIST-M is
simply a blend of the digit images of the original MNIST dataset and the color images of BSDS500 [2]
following the method explained in [11]. Since the dataset is not distributed directly by the authors,
we generated the dataset using the same procedure and further confirmed that the performance is
the same as the one reported in [11]. Street View House Numbers is a collection of house numbers
collected from Google Street View images. Each of these three domains is quite different from the
others. Among many important differences, the most significant ones are MNIST being grayscale
while the others are colored, and SVHN images having extra confusing digits around the centered
digit of interest. Moreover, all domains are large-scale, having at least 60k examples over 10 classes.
In addition, we use the Office [25] dataset to evaluate our algorithm on the object recognition task.
The office dataset includes images of the objects taken from Amazon, captured with a webcam and
captured with a D-SLR. Differences between domains include the white background of Amazon
images versus realistic webcam images, and the resolution differences. The Office dataset has fewer
images, with a maximum of 2478 per domain over 31 classes.
Baselines: We compare our method against a variety of methods with and without feature learning.
SA* [9] is the dominant state-of-the-art approach not employing any feature learning, and Backprop (BP) [11] is the dominant state-of-the-art approach employing feature learning. We use the available source
code of [11] and [9] and following the evaluation procedure in [11], we choose the hyper-parameter
of [9] as the highest performing one among various alternatives. We also compare our method with
the source only baseline which is a convolutional neural network trained only using the source data.
This classifier is clearly different from our nearest neighbor classifier; however, we experimentally
validated that the CNN always outperformed the nearest neighbor based classifier. Hence, we report
the highest performing source only method.
Evaluation: We evaluate all algorithms in a fully transductive setup [12]. We feed training images
and labels of first domain as the source and training images of the second domain as the target. We
evaluate the accuracy on the target domain as the ratio of correctly labeled images to all target images.
4.1 Results
Following the fully transductive evaluation, we summarize the results in Table 1 and Table 2. Table 1
summarizes the results on the object recognition task using the Office dataset, whereas Table 2 summarizes
the digit classification task on MNIST and SVHN.
Table 1: Accuracy of our method and the state-of-the-art algorithms on the Office dataset.

Source                  Amazon   D-SLR    Webcam   Webcam   Amazon   D-SLR
Target                  Webcam   Webcam   D-SLR    Amazon   D-SLR    Amazon
GFK [12]                .398     .791     .746     .371     .379     .379
SA* [9]                 .450     .648     .699     .393     .388     .420
DLID [6]                .519     .782     .899     -        -        -
DDC [33]                .618     .950     .985     .522     .644     .521
DAN [20]                .685     .960     .990     .531     .670     .540
Backprop [11]           .730     .964     .992     .536     .728     .544
Source only             .642     .961     .978     .452     .668     .476
Our method (k-NN only)  .727     .952     .915     .575     .791     .521
Our method (no reject)  .804     .962     .989     .625     .839     .567
Our method (full)       .811     .964     .992     .638     .841     .583
Tables 1 & 2 show results on the object recognition and digit classification tasks, covering
all adaptation scenarios. Our experiments show that our proposed method outperforms
all state-of-the-art algorithms. Moreover, the increase in accuracy is rather significant
when there is a large domain difference, such as MNIST↔MNIST-M, MNIST↔SVHN,
Amazon↔Webcam and Amazon↔D-SLR. Our hypothesis is that state-of-the-art algorithms
such as [11] seek features invariant to the domains, whereas we seek an explicit similarity
metric explaining both the differences and the similarities of the domains. In other words, instead of seeking an
invariance, we seek an equivariance.

Table 2: Accuracy on the digit classification task.

Source                  M-M      MNIST    SVHN     MNIST
Target                  MNIST    M-M      MNIST    SVHN
SA* [9]                 .523     .569     .593     .211
BP [11]                 .732     .766     .738     .289
Source only             .483     .522     .549     .158
Our method (k-NN only)  .805     .795     .713     .162
Our method (no reject)  .835     .855     .774     .323
Our method (full)       .839     .867     .788     .403
Table 2 further suggests that our algorithm is the only one that can successfully perform adaptation
from MNIST to SVHN. Clearly, features learned from MNIST cannot generalize to SVHN, since
SVHN has concepts, like color and occlusion, that are not present in MNIST. Hence, our algorithm
learns SVHN-specific features by enforcing accurate transduction during the adaptation.
Another interesting observation is the asymmetry of the results. For example, adapting webcam to Amazon
and adapting Amazon to webcam yield very different accuracies. A similar asymmetry exists between
MNIST and SVHN as well. This observation validates the importance of asymmetric modeling.
To evaluate the importance of joint labelling and the reject option, we compare our method with self-baselines. Our self-baselines are versions of our algorithm not using the reject option (no reject) and
using neither the reject option nor joint labelling (k-NN only). Results of both experiments
suggest that joint labelling and the reject option are both crucial for successful transduction. Moreover,
the reject option is more important when the domain shift is large (e.g., MNIST→SVHN). This is
expected, since transduction under a large shift is more likely to fail, a situation that can be prevented
with the reject option.
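As a minimal illustrative sketch of such a rejection rule (not the paper's exact procedure, which also couples target labels through the joint labelling objective), consider k-NN voting over a learned cross-domain similarity, rejecting targets whose neighbors disagree:

```python
import numpy as np

def knn_label_with_reject(sim, src_labels, k=5, agree=0.8):
    """Label each target point by k-NN voting over a cross-domain
    similarity matrix sim (n_target x n_source); reject (label -1) when
    the neighbors disagree too much. A hypothetical sketch only."""
    out = np.full(sim.shape[0], -1)
    for t in range(sim.shape[0]):
        nbrs = np.argsort(-sim[t])[:k]          # k most similar sources
        vals, counts = np.unique(src_labels[nbrs], return_counts=True)
        best = np.argmax(counts)
        if counts[best] / k >= agree:           # confident: accept label
            out[t] = vals[best]
    return out
```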
4.1.1 Qualitative Analysis
To further study the learned representations and the similarity metric, we performed a series of
qualitative analyses in the form of nearest neighbor and t-SNE [34] plots.
Figure 1 visualizes example target images from MNIST and their corresponding source images. First
of all, our experimental analysis suggests that MNIST and SVHN are the two domains with the largest
difference. Hence, we believe MNIST→SVHN is a very challenging set-up, and despite the huge
visual differences, our algorithm results in accurate nearest neighbors. On the other hand, Figure 2
visualizes example target images from webcam and their corresponding nearest source images
from Amazon.
The difference between invariance and equivariance is clearer in the t-SNE plots of the Office dataset
in Figure 3 and the digit classification task in Figure 4. In Figure 3, we plot the distribution of features
before and after adaptation for source and target while color coding the class labels. We use the learned
embeddings, the outputs of Φ_s and Φ_t, as input to the t-SNE algorithm [34]. As Figure 3 suggests, the
source domain is well clustered according to the object classes both with and without adaptation. This
is expected since the features are specifically fine-tuned to the source domain before the adaptation
starts. However, the target domain features have no structure before adaptation. This is also expected
since the algorithm did not see any image from the target domain. After the adaptation, target images
also get clustered according to the object classes.

Figure 1: Nearest neighbors for the SVHN→MNIST exp. We show an example MNIST image and its 5-NNs.
In Figure 4, we show the digit images of the source and target after the adaptation. In order to see the
effect of common features and domain-specific features separately, we compute the low-dimensional
embeddings of the output of the shared network (the output of the first fully connected layer). We
further compute the NN points between the source and target using Φ_s and Φ_t, and draw an edge
between NNs. Clearly, the target is well clustered according to the classes, while the source is not very
well clustered although it has some structure. Since we learn the entire network for digit classification,
our networks learn discriminative features in the target domain, as our loss depends directly on
classification scores in the target domain. Moreover, discriminative features in the target arise because
of the transductive modeling. In comparison, state-of-the-art domain invariance based algorithms
only try to be invariant to the domains without explicit modeling of discriminative behavior on the
target. Hence, our method explicitly models the relationship between the domains and results in an
equivariant model while enforcing discriminative behavior in the target.
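A rough sketch of how such plots and NN edges can be produced (assuming scikit-learn; the embeddings below are random placeholders standing in for the outputs of Φ_s and Φ_t):

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
phi_s = rng.normal(size=(100, 64))   # placeholder source embeddings
phi_t = rng.normal(size=(80, 64))    # placeholder target embeddings

# joint 2-D t-SNE embedding of source and target features
emb = TSNE(n_components=2).fit_transform(np.vstack([phi_s, phi_t]))
emb_s, emb_t = emb[:len(phi_s)], emb[len(phi_s):]

# nearest source neighbor of each target point, to draw NN edges
nn = NearestNeighbors(n_neighbors=1).fit(phi_s)
_, idx = nn.kneighbors(phi_t)        # idx[t, 0] = index of NN in source
edges = [(emb_t[t], emb_s[idx[t, 0]]) for t in range(len(phi_t))]
```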
Figure 2: Nearest neighbors for the Amazon→Webcam exp. We show an example Amazon image and its 3-NNs.

5 Conclusion
We described an end-to-end deep learning framework for jointly optimizing the deep feature
representation, the cross-domain transformation, and the target label inference for state-of-the-art
unsupervised domain adaptation.
Experimental results on digit classification using MNIST [19] and SVHN [21], as well as on object
recognition using the Office [25] dataset, show state-of-the-art performance by a significant margin.
Acknowledgments
We acknowledge the support of ONR-N00014-13-1-0761, MURI - WF911NF-15-1-0479 and Toyota
Center grant 1191689-1-UDAWF.
(a) S. w/o Adaptation   (b) S. with Adaptation   (c) T. w/o Adaptation   (d) T. with Adaptation
Figure 3: t-SNE plots for the Office dataset, Webcam (S) → Amazon (T). Source features were discriminative and stayed
discriminative, as expected. On the other hand, target features became quite discriminative after the adaptation.

Figure 4: t-SNE plot for the SVHN→MNIST experiment. Please note that the discriminative behavior only emerges
in the unsupervised target instead of the source domain. This explains the motivation behind modeling the
problem as transduction. In other words, our algorithm is designed to be accurate and discriminative in the target
domain, which is the domain we are interested in.
References
[1] Supplementary details for the paper. http://cvgl.stanford.edu/transductive_adaptation.
[2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation.
T-PAMI, 33:898–916, 2011.
[3] C. Berlind and R. Urner. Active nearest neighbors in changing environments. In ICML, 2015.
[4] A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, 2001.
[5] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy
minimization in vision. T-PAMI, 26:1124–1137, 2004.
[6] S. Chopra, S. Balakrishnan, and R. Gopalan. Dlid: Deep learning for domain adaptation by interpolating
between domains. In ICML W, 2013.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009.
[8] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic
optimization. JMLR, pages 2121–2159, 2011.
[9] B. Fernando, A. Habrard, M. Sebban, and T. Tuytelaars. Unsupervised visual domain adaptation using
subspace alignment. In ICCV, 2013.
[10] A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In UAI, 1998.
[11] Y. Ganin and V. S. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, 2015.
[12] B. Gong, K. Grauman, and F. Sha. Connecting the dots with landmarks: Discriminatively learning
domain-invariant features for unsupervised domain adaptation. In ICML, 2013.
[13] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In
CVPR, 2012.
[14] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification
with convolutional neural networks. In CVPR, 2014.
[15] S. Khamis and C. Lampert. Coconut: Co-classification with output space regularization. In BMVC, 2014.
[16] E. Kodirov, T. Xiang, Z. Fu, and S. Gong. Domain adaptation for zero-shot learning. In ICCV, 2015.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural
networks. In NIPS, 2012.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86:2278–2324, 1998.
[19] Y. LeCun, C. Cortes, and C. J. Burges. The mnist database of handwritten digits, 1998.
[20] M. Long, Y. Cao, J. Wang, and M. I. Jordan. Learning transferable features with deep adaptation networks.
In ICML, 2015.
[21] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with
unsupervised feature learning. In NIPS W, 2011.
[22] M. Palatucci, D. Pomerleau, G. E. Hinton, and T. M. Mitchell. Zero-shot learning with semantic output
codes. In NIPS, 2009.
[23] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: transfer learning from unlabeled
data. In ICML. ACM, 2007.
[24] M. Rohrbach, S. Ebert, and B. Schiele. Transfer learning in a transductive setting. In NIPS, 2013.
[25] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In ECCV,
pages 213–226. Springer, 2010.
[26] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition.
CoRR, abs/1409.1556, 2014.
[27] V. Sindhwani, P. Niyogi, and M. Belkin. Beyond the point cloud: from transductive to semi-supervised
learning. In ICML, 2005.
[28] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[29] B. Sun and K. Saenko. Subspace alignment for unsupervised domain adaptation. In BMVC, 2015.
[30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich.
Going deeper with convolutions. arXiv:1409.4842, 2014.
[31] S. Thrun and L. Pratt. Learning to learn. Springer Science & Business, 2012.
[32] T. Tommasi and B. Caputo. Frustratingly easy NBNN domain adaptation. In ICCV, 2013.
[33] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for
domain invariance. arXiv:1412.3474, 2014.
[34] L. van der Maaten. Accelerating t-SNE using tree-based algorithms. JMLR, 2014.
[35] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor
classification. In NIPS, 2006.
[36] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.
[37] X. Zhu, Z. Ghahramani, J. Lafferty, et al. Semi-supervised learning using gaussian fields and harmonic
functions. In ICML, 2003.
Deep Submodular Functions: Definitions & Learning
Brian Dolhansky <[email protected]>
Dept. of Computer Science and Engineering
University of Washington
Seattle, WA 98105

Jeff Bilmes <[email protected]>
Dept. of Electrical Engineering
University of Washington
Seattle, WA 98105
Abstract
We propose and study a new class of submodular functions called deep submodular
functions (DSFs). We define DSFs and situate them within the broader context of
classes of submodular functions in relationship both to various matroid ranks and
sums of concave composed with modular functions (SCMs). Notably, we find that
DSFs constitute a strictly broader class than SCMs, thus motivating their use, but
that they do not comprise all submodular functions. Interestingly, some DSFs can
be seen as special cases of certain deep neural networks (DNNs), hence the name.
Finally, we provide a method to learn DSFs in a max-margin framework, and offer
preliminary results applying this both to synthetic and real-world data instances.
1 Introduction
Submodular functions are attractive models of many physical processes primarily because they
possess an inherent naturalness to a wide variety of problems (e.g., they are good models of diversity,
information, and cooperative costs) while at the same time they enjoy properties sufficient for efficient
optimization. For example, submodular functions can be minimized without constraints in polynomial
time [12] even though they lie within a 2^n-dimensional cone in R^(2^n). Moreover, while submodular
function maximization is NP-hard, submodular maximization is one of the easiest of the NP-hard
problems, since constant-factor approximation algorithms are often available; e.g., in the cardinality-constrained case, the classic 1 − 1/e result of Nemhauser [21] via the greedy algorithm. Other
problems also have guarantees, such as submodular maximization subject to knapsack or multiple
matroid constraints [8, 7, 18, 15, 16].
One of the critical problems associated with utilizing submodular functions in machine learning
contexts is selecting which submodular function to use, and given that submodular functions lie
in such a vast space with 2^n degrees of freedom, it is a non-trivial task to find one that works
well, if not optimally. One approach is to attempt to learn the submodular function based on either
queries of some form or based on data. This has led to results, mostly in the theory community,
showing how learning submodularity can be harder or easier depending on how we judge what is
being learnt. For example, it was shown that learning submodularity in the PMAC setting is fairly
hard [2] although in some cases things are a bit easier [11]. In both of these cases, learning is over
all points in the hypercube. Learning can be made easier if we restrict ourselves to learn within
only a subfamily of submodular functions. For example, in [24, 19], it is shown that one can learn
mixtures of submodular functions using a max-margin learning framework ? here the components
of the mixture are fixed and it is only the mixture parameters that are learnt, leading often to a convex
optimization problem. In some cases, computing gradients of the convex problem can be done using
submodular maximization [19], while in other cases, even a gradient requires minimizing a difference
of two submodular functions [27].
Learning over restricted families rather than over the entire cone is desirable for the same reasons that
any form of regularization in machine learning is useful. By restricting the family over which learning
occurs, it decreases the complexity of the learning problem, thereby increasing the chance that one
finds a good model within that family. This can be seen as a classic bias-variance tradeoff, where
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: two layered network diagrams. Left (a): the ground set V = V^(0) feeds features V^(1), meta features V^(2), and a final feature V^(3) through weight matrices w^(1), w^(2), w^(3). Right (b): the same partitioning, but with edges allowed to skip layers.]
Figure 1: Left: A layered DSF with K = 3 layers. Right: a 3-block DSF allowing layer skipping.
increasing bias can reduce variance. Up to now, learning over restricted families has apparently (to
the authors' knowledge) been limited to learning mixtures over fixed components. This can be quite
limited if the components are restricted, and if not might require a very large number of components.
Therefore, there is a need for a richer and more flexible family of submodular functions over which
learning is still possible.
In this paper (and in [5]), we introduce a new family of submodular functions that we term "deep
submodular functions," or DSFs. DSFs strictly generalize, as we show below, many of the kinds
of submodular functions that are useful in machine learning contexts. These include the so-called
"decomposable" submodular functions, namely those that can be represented as a sum of concave
composed with modular functions [25].
We describe the family of DSFs and place them in the context of the general submodular family. In
particular, we show that DSFs strictly generalize standard decomposable functions, thus theoretically
motivating the use of deeper networks as a family over which to learn. Moreover, DSFs can represent
a variety of complex submodular functions such as laminar matroid rank functions. These matroid
rank functions include the truncated matroid rank function [13] that is often used to show theoretical
worst-case performance for many constrained submodular minimization problems. We also show,
somewhat surprisingly, that like decomposable functions, DSFs are unable to represent all possible
cycle matroid rank functions. This is interesting in and of itself since there are laminar matroids
that are not cycle matroids. On the other hand, we show that the more general DSFs share a variety
of useful properties with decomposable functions. Namely, that they: (1) can leverage the vast
amount of practical work on feature engineering that occurs in the machine learning community
and its applications; (2) can operate on multi-modal data if the data can be featurized in the same
space; (3) allow for training and testing on distinct sets since we can learn a function from the
feature representation level on up, similar to the work in [19]; and (4) are useful for streaming
applications since functions can be evaluated without requiring knowledge of the entire ground set.
These advantages are made apparent in Section 2.
Interestingly, DSFs also share certain properties with deep neural networks (DNNs), which have
become widely popular in the machine learning community. For example, DNNs with weights
that are strictly non-negative correspond to a DSF. This suggests, as we show in Section 5, that
it is possible to develop a learning framework over DSFs leveraging DNN learning frameworks.
Unlike standard deep neural networks, which typically are trained either in classification or regression
frameworks, however, learning submodularity often takes the form of trying to adjust the parameters
so that a set of "summary" data sets are offered a high value. We therefore extend the max-margin
learning framework of [24, 19] to apply to DSFs. Our approach can be seen as a max-margin learning
approach for DNNs but restricted to DSFs. We show that DSFs can be learnt effectively in a variety
of contexts (Section 6). In the below, we discuss basic definitions and an initial implementation of
learning DSFs, while in [5] we provide complete definitions, properties, relationships to concavity,
proofs, and a set of applications.
2 Background
Submodular functions are discrete set functions that have the property of diminishing returns. Assume
a given finite size-n set of objects V (the large ground set of data items), where each v ∈ V is a
distinct data sample (e.g., a sentence, training pair, image, video, or even a highly structured object
such as a tree or a graph). A valuation set function f : 2^V → R that returns a real value for any
subset X ⊆ V is said to be submodular if for all X ⊆ Y and v ∉ Y the following inequality holds:
f(X ∪ {v}) − f(X) ≥ f(Y ∪ {v}) − f(Y). This means that the incremental value (or gain) of
adding another sample v to a subset decreases when the context in which v is considered grows
from X to Y. We can define the gain of v in the context of X as f(v|X) := f(X ∪ {v}) − f(X).
Thus, f is submodular if f(v|X) ≥ f(v|Y). If the gain of v is identical for all different contexts,
i.e., f(v|X) = f(v|Y), ∀X, Y ⊆ V, then the function is said to be modular. A function might also
have the property of being normalized (f(∅) = 0) and monotone non-decreasing (f(X) ≤ f(Y)
whenever X ⊆ Y). If the negation of f, −f, is submodular, then f is called supermodular.
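As a purely illustrative sketch of these definitions (brute force, so only sensible for tiny ground sets):

```python
from itertools import combinations

def gain(f, v, X):
    """Marginal gain f(v|X) = f(X ∪ {v}) − f(X)."""
    return f(X | {v}) - f(X)

def is_submodular(f, V):
    """Brute-force diminishing-returns check; exponential in |V|."""
    subs = [set(c) for r in range(len(V) + 1) for c in combinations(V, r)]
    return all(gain(f, v, X) >= gain(f, v, Y) - 1e-9
               for X in subs for Y in subs if X <= Y
               for v in V - Y)

print(is_submodular(lambda A: len(A) ** 0.5, {1, 2, 3, 4}))  # True
```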
A useful class of submodular functions in machine learning are decomposable functions [25], and one
example of useful instances of these for applications are called feature-based functions. Given a set
of non-negative monotone non-decreasing normalized (φ(0) = 0) concave functions φ_i : R+ → R+
and a corresponding set of non-negative modular functions m_i : V → R+, the function f : 2^V → R+
defined as f(X) = Σ_i φ_i(m_i(X)) = Σ_i φ_i(Σ_{x∈X} m_i(x)) is known to be submodular. Such functions have been called "decomposable" in the past, but in this work we will refer to them as the family
of sums of concave over modular functions (SCMs). SCMs have been shown to be quite flexible [25],
being able to represent a diverse set of functions such as graph cuts, set cover functions, and multiclass
queuing system functions, and they yield efficient algorithms for minimization [25, 22]. Such functions are
useful also for applications involving maximization. Suppose that each element v ∈ V is associated
with a set of "features" U in the sense of, say, how TF-IDF is used in natural language processing (NLP).
Feature-based submodular functions are those defined via the set of features. Feature-based functions take the form f(X) = Σ_{u∈U} w_u φ_u(m_u(X)), where φ_u is a non-decreasing non-negative
univariate normalized concave function, m_u(X) is a feature-specific non-negative modular function,
and w_u is a non-negative feature weight. The result is the class of feature-based submodular functions
(instances of SCMs). Such functions have been successfully used for data summarization [29].
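A minimal sketch of such a feature-based function using square-root concave functions (the feature names are illustrative, not from any dataset):

```python
import math

def feature_based(X, feats, w):
    """f(X) = Σ_u w[u]·sqrt(m_u(X)): an SCM with square-root concave
    functions. feats[u] maps elements to non-negative feature counts."""
    return sum(w[u] * math.sqrt(sum(mu.get(x, 0.0) for x in X))
               for u, mu in feats.items())

feats = {'bigram': {'a': 1.0, 'b': 2.0}, 'unigram': {'b': 1.0, 'c': 3.0}}
w = {'bigram': 1.0, 'unigram': 0.5}
print(feature_based({'a', 'b'}, feats, w))
```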
Another advantage of such functions is that they do not require the construction of a pairwise
graph and therefore do not have quadratic cost as would, say, a facility location function (e.g.,
f(X) = Σ_{v∈V} max_{x∈X} w_{xv}), or any function based on pairwise distances, all of which have cost
O(n²) to evaluate. Feature functions have an evaluation cost of O(n|U|), linear in the ground set V
size and therefore are more scalable to large data set sizes. Finally, unlike the facility location and
other graph-based functions, feature-based functions do not require the use of the entire ground set
for each evaluation and hence are appropriate for streaming algorithms [1, 9] where future ground
elements are unavailable. Defining φ : R^n → R as φ(x) = Σ_i φ_i(⟨m_i, x⟩), we get a monotone
non-decreasing concave function, which we refer to as a univariate sum of concaves (USC).
3 Deep Submodular Functions
While feature-based submodular functions are indisputably useful, their weakness lies in that features
themselves may not interact, although one feature u′ might be partially redundant with another
feature u″. For example, when describing a sentence via its component n-gram features, higher-order n-grams always include lower-order n-grams, so n-gram features are partially redundant. We
may address this problem by utilizing an additional "layer" of nested concave functions, as in
f(X) = Σ_{s∈S} ω_s ψ_s(Σ_{u∈U} w_{s,u} φ_u(m_u(X))), where S is a set of meta-features, ω_s is a meta-feature
weight, ψ_s is a non-decreasing concave function associated with meta-feature s, and w_{s,u}
is now a meta-feature specific feature weight. With this construct, ψ_s assigns a discounted value to
the set of features in U, which can be used to represent feature redundancy. Interactions between
the meta-features might be needed as well, and this can be done via meta-meta-features, and so on,
resulting in a hierarchy of increasingly higher-level features.
We hence propose a new class of submodular functions that we call deep submodular functions
(DSFs). They may make use of a series of disjoint sets (see Figure 1-(a)): V = V^(0), which is
the function's ground set, and additional sets V^(1), V^(2), ..., V^(K). U = V^(1) can be seen as a set
of "features", V^(2) as a set of meta-features, V^(3) as a set of meta-meta-features, etc., up to V^(K).
The size of V^(i) is d_i = |V^(i)|. Two successive sets (or "layers") i−1 and i are connected by a
matrix w^(i) ∈ R+^{d_i × d_{i−1}}, for i ∈ {1, ..., K}. Given v^i ∈ V^(i), define w^(i)_{v^i} to be the row of w^(i)
corresponding to element v^i, and w^(i)_{v^i}(v^{i−1}) is the element of matrix w^(i) at row v^i and column v^{i−1}.
We may think of w^(i)_{v^i} : V^(i−1) → R+ as a modular function defined on set V^(i−1). Thus, this matrix
contains d_i such modular functions. Further, let φ_{v^k} : R+ → R+ be a non-negative non-decreasing
concave function. Then, a K-layer DSF f : 2^V → R+ can be expressed as follows, for any A ⊆ V:
    f(A) = φ_{v^K}( Σ_{v^{K−1} ∈ V^{(K−1)}} w^{(K)}_{v^K}(v^{K−1}) φ_{v^{K−1}}( ... Σ_{v^2 ∈ V^{(2)}} w^{(3)}_{v^3}(v^2) φ_{v^2}( Σ_{v^1 ∈ V^{(1)}} w^{(2)}_{v^2}(v^1) φ_{v^1}( Σ_{a∈A} w^{(1)}_{v^1}(a) ) ) ... ) )      (1)
Submodularity follows since the composition g(·) = φ(f(·)) of a monotone non-decreasing submodular function f with a monotone
non-decreasing concave function φ is submodular (Theorem 1 in [20]); a DSF is then
submodular via recursive application, since submodularity is closed under conic combinations.
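As an illustrative evaluation of Eq. (1) (assuming NumPy; the weights and activations below are toy choices, not the paper's):

```python
import numpy as np

def dsf_value(A, V, weights, phis):
    """Evaluate a K-layer DSF (Eq. (1)) on the indicator 1_A.
    weights[i] is the non-negative matrix w^(i+1); phis[i] is the
    concave activation for layer i+1, applied elementwise."""
    x = np.array([1.0 if v in A else 0.0 for v in V])   # indicator 1_A
    for W, phi in zip(weights, phis):
        x = phi(W @ x)
    return float(x[0])                                  # single root unit

V = ['a', 'b', 'c']
weights = [np.ones((2, 3)), np.ones((1, 2))]            # all non-negative
phis = [np.sqrt, np.sqrt]
print(dsf_value({'a', 'b'}, V, weights, phis))
```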
A more general way to define a DSF (useful for the theorems below) uses recursion directly. We
are given a directed acyclic graph (DAG) G = (𝒱, E) where for any given node v ∈ 𝒱, we say
pa(v) ⊆ 𝒱 are the parents of (vertices pointing towards) v. A given size-n subset of nodes V ⊂ 𝒱
corresponds to the ground set, and for any v ∈ V, pa(v) = ∅. A particular "root" node r ∈ 𝒱 \ V
has the distinction that r ∉ pa(q) for any q ∈ 𝒱. Given a non-ground node v ∈ 𝒱 \ V, we define
the concave function ψ_v : R^V → R+ as

    ψ_v(x) = φ_v( Σ_{u∈pa(v)\V} w_{uv} ψ_u(x) + ⟨m_v, x⟩ )      (2)

where φ_v : R+ → R+ is a non-decreasing univariate concave function, w_{uv} ∈ R+, and m_v : pa(v) ∩ V → R+
is a non-negative linear function that evaluates as ⟨m_v, x⟩ = Σ_{u∈pa(v)∩V} m_v(u) x(u) (i.e.,
⟨m_v, x⟩ is a sparse dot-product over the elements pa(v) ∩ V ⊆ V). The base case, where pa(v) ⊆ V, therefore has ψ_v(x) = φ_v(⟨m_v, x⟩), so ψ_v(1_A) is an SCM function with only one term in the sum (1_A is the
characteristic vector of set A). A general DSF is defined as follows: for all A ⊆ V, f(A) = ψ_r(1_A) +
m±(A), where m± : V → R is an arbitrary modular function (i.e., it may include positive and negative elements). From the perspective of defining a submodular function, there is no loss of generality
in adding the final modular function m± to a monotone non-decreasing function; this is because any
submodular function can be expressed as a sum of a monotone non-decreasing submodular function
and a modular function [10]. This form of DSF is more general than the layered approach mentioned
above which, in the current form, would partition 𝒱 = {V^(0), V^(1), ..., V^(K)} into layers, and
where for any v ∈ V^(i), pa(v) ⊆ V^(i−1). Figure 1-(a) corresponds to a layered graph G = (𝒱, E)
where r = v_1^3 and V = {v_1^0, v_2^0, ..., v_6^0}. Figure 1-(b) uses the same partitioning but where units are
allowed to skip by more than one layer at a time. More generally, we can order the vertices in 𝒱 with
an order σ so that {σ_1, σ_2, ..., σ_n} = V where n = |V|, σ_m = r = v^K where m = |𝒱|, and where
σ_i ∈ pa(σ_j) only if i < j. This allows an arbitrary pattern of skipping while maintaining submodularity.
The layered definition in Equation (1) is reminiscent of feed-forward deep neural networks (DNNs)
owing to its multi-layered architecture. Interestingly, if one restricts the weights of a DNN at every
layer to be non-negative, then for many standard hidden-unit activation functions the DNN constitutes
a submodular function when given Boolean input vectors. The result follows for any activation
function that is monotone non-decreasing concave for non-negative reals, such as the sigmoid, the
hyperbolic tangent, and the rectified linear functions. This suggests that DSFs can be trained in a
fashion similar to DNNs, as is further developed in Section 5. The recursive definition of DSFs, in
Equation (2) is more general and, moreover, useful for the analysis in Section 4.
DSFs should be useful for many applications in machine learning. First, they retain the advantages
of feature-based functions (i.e., they require neither O(n²) cost nor access to the entire ground set for
evaluation). Hence, DSFs can be both fast, and useful for streaming applications. Second, they allow
for a nested hierarchy of features, similar to advantages a deep model has over a shallow model. For
example, a one-layer DSF must construct a valuation over a set of objects from a large number of
low-level features which can lead to fewer opportunities for feature sharing while a deeper network
fosters distributed representations, analogous to DNNs [3, 4]. Below, we show that DSFs constitute
a strictly larger family of submodular functions than SCMs.
4 Analysis of the DSF family
DSFs represent a family that, at the very least, contains the family of SCMs. We argued intuitively that
DSFs might extend SCMs as they allow components themselves to interact, and the interactions may
propagate up a many-layered hierarchy. In this section, we formally place DSF within the context of
more general submodular functions. We show that DSFs strictly generalize SCMs while preserving
many of their attractive attributes. We summarize the results of this section in Figure 2, and that
includes familial relationships amongst other classes of submodular functions (e.g., various matroid
rank functions), useful for our main theorems.
Thanks to concave composition closure rules [6], the root function ψ_r(x) : R^n → R in Eqn. (2)
is a monotone non-decreasing multivariate concave function that, by the concave-submodular composition
rule [20], yields a submodular function ψ_r(1_A). It is widely known that any univariate concave
function composed with non-negative modular functions yields a submodular function. However, given an
arbitrary multivariate concave function this is not the case. Consider, for example, any concave function φ
over R² that offers the following evaluations: φ(0, 0) = φ(1, 1) = 1, φ(0, 1) = φ(1, 0) = 0. Then
f(A) = φ(1_A) is not submodular. Given a multivariate concave function φ : R^n → R, the superdifferential
∂φ(x) at x is defined as ∂φ(x) = {h ∈ R^n : φ(y) − φ(x) ≤ ⟨h, y⟩ − ⟨h, x⟩, ∀y ∈ R^n},
and a particular supergradient h_x is a member h_x ∈ ∂φ(x). If φ(x) is differentiable at x then
∂φ(x) = {∇φ(x)}. A concave function is said to have an antitone superdifferential if for all x ≤ y
we have that h_x ≥ h_y for all h_x ∈ ∂φ(x) and h_y ∈ ∂φ(y) (here, x ≤ y ⟺ x(v) ≤ y(v) ∀v).

[Figure 2: a containment (Venn) diagram over the classes studied here: All Submodular Functions, DSFs, SCMs, Laminar Matroid Rank, Partition Matroid Rank, and Cycle Matroid Rank.]
Figure 2: Containment properties of the set of functions studied in this paper.
Apparently, the following result has not been previously reported.
Theorem 4.1. Let φ : R^n → R be a concave function. If φ has an antitone superdifferential,
then the set function f : 2^V → R defined as f(A) = φ(1_A) for all A ⊆ V is submodular.
A DSF's associated concave function has an antitone superdifferential. Concavity is not necessary
in general (e.g., multilinear extensions of submodular functions are not concave but have properties
analogous to an antitone superdifferential; see [5]).
Lemma 4.3. Composition of monotone non-decreasing scalar concave and antitone-superdifferential
concave functions, and conic combinations thereof, preserves superdifferential antitonicity.
Corollary 4.3.1. The concave function ψ_r associated with a DSF has an antitone superdifferential.
A matroid M [12] is a set system M = (V, I) where I = {I_1, I_2, ...} is a set of subsets I_i ⊆ V
that are called independent. A matroid has the property that ∅ ∈ I, that I is subclusive (i.e., given
I ∈ I and I′ ⊆ I then I′ ∈ I), and that all maximally independent sets have the same size (i.e., given
A, B ∈ I with |A| < |B|, there exists a b ∈ B \ A such that A + b ∈ I). The rank of a matroid, a
set function r : 2^V → Z+ defined as r(A) = max_{I∈I} |I ∩ A|, is a powerful class of submodular
functions. All matroids are uniquely defined by their rank function. All monotone non-decreasing
non-negative rational submodular functions can be represented by grouping and then evaluating
grouped ground elements in a matroid [12].
A particularly useful matroid is the partition matroid, where a partition (V_1, V_2, ..., V_ℓ) of V is
formed, along with a set of capacities k_1, k_2, ..., k_ℓ ∈ Z+. Its rank function is defined as r(X) =
Σ_{i=1}^{ℓ} min(|X ∩ V_i|, k_i) and, therefore, is an SCM, owing to the fact that φ(x) = min(⟨x, 1_{V_i}⟩, k_i)
is USC. A cycle matroid is a different type of matroid based on a graph G = (V, E) where the rank
function r(A) for A ⊆ E is defined as the size of the maximum spanning forest (i.e., a spanning tree
for each connected component) in the edge-induced subgraph G_A = (V, A). From the perspective of
matroids, we can consider classes of submodular functions via their rank. If a given type of matroid
cannot represent another kind, their ranks lie in distinct families. To study where DSFs are situated
in the space of all submodular functions, it is useful first to study results regarding matroid rank
functions.
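As a brief illustration, the cycle matroid rank (the size of a maximum spanning forest) can be computed with union-find; the partition matroid rank above is even simpler, a sum of truncations:

```python
def cycle_matroid_rank(A, n_vertices):
    """Rank of an edge set A in the cycle matroid of a graph: the size
    of a maximum spanning forest. Edges are (u, v) pairs over vertices
    0..n_vertices-1. Illustrative sketch."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    rank = 0
    for u, v in A:
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            rank += 1
    return rank

# K4 example: all six edges of the complete graph on 4 vertices
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(cycle_matroid_rank(K4, 4))            # 3, a spanning tree
```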
Lemma 4.4. There are partition matroids that are not cycle matroids.
In a laminar matroid, a generalization of a partition matroid, we start with a set V and a family
F = {F_1, F_2, ...} of subsets F_i ⊆ V that is laminar, namely that for all i ≠ j either F_i ∩ F_j = ∅,
or F_i ⊆ F_j, or F_j ⊆ F_i (i.e., sets in F are either non-intersecting or comparable). In a laminar
matroid, we also have for every F ∈ F an associated capacity k_F ∈ Z+. A set I is independent
if |I ∩ F| ≤ k_F for all F ∈ F. A laminar family of sets can be organized in a tree, where
there is one root R ∈ F in the tree that, w.l.o.g., can be V itself. Then the immediate parents
pa(F) ⊆ F of a set F ∈ F in the tree are the set of maximal subsets of F in F, i.e., pa(F) =
{F′ ∈ F : F′ ⊂ F and ∄F″ ∈ F s.t. F′ ⊂ F″ ⊂ F}. We then define the following for all F ∈ F:
    r_F(A) = min( Σ_{F′∈pa(F)} r_{F′}(A ∩ F′) + |A \ ∪_{F′∈pa(F)} F′|, k_F ).      (3)
A laminar matroid rank has a recursive definition r(A) = rR (A) = rV (A). Hence, if the family F
forms a partition of V , we have a partition matroid. More interestingly, when compared to Eqn. (2),
we see that a laminar matroid rank function is an instance of a DSF with a tree-structured DAG rather
5
than the non-tree DAGs in Figure 1. Thus, within the family of DSFs lie the truncated matroid rank
functions used to show information theoretic hardness for many constrained submodular optimization
problems [13]. Moreover, laminar matroids strictly generalize partition matroids.
Lemma 4.5. Laminar matroids strictly generalize partition matroids
Since a laminar matroid generalizes a partition matroid, this portends well for DSFs generalizing
SCMs. Before considering that, we already are up against some limits of laminar matroids, i.e.:
Lemma 4.6 (peeling proof). Laminar matroids cannot represent all cycle matroids.
We call this proof a "peeling proof" since it recursively peels off each layer (in the sense of a DSF) of
a laminar matroid rank until it boils down to a partition matroid rank function, where the base case is
clear. The proof is elucidating, moreover, since it motivates the proof of Theorem 4.14 showing that
DSFs extend SCMs. We also have the immediate corollary.
Corollary 4.6.1. Partition matroids cannot represent all cycle matroids.
We see that SCMs generalize partition matroid rank functions and DSFs generalize laminar matroid
rank functions. We might expect, from the above results, that DSFs might generalize SCMs ? this
is not immediately obvious since SCMs are significantly more flexible than partition matroid rank
functions because: (1) the concave functions need not be simple truncations at integers, (2) each term
can have its own non-negative modular function, (3) there is no requirement to partition the ground
elements over terms in an SCM, and (4) we may with relative impunity extend the family of SCMs
to ones where we add an additional arbitrary modular function (what we will call SCMMs below).
We see, however, that SCMMs are also unable to represent the cycle matroid rank function over K4 ,
very much like the partition matroid rank function. Hence the above flexibility does not help in this
case. We then show that DSFs strictly generalize SCMMs, which means that DSFs indeed provide
a richer family of submodular functions to utilize, ones that as discussed above, retain many of the
advantages of SCMMs. We end the section by showing that DSFs, even with an additional arbitrary
modular function, are still unable to represent matroid rank over K4 , implying that although DSFs
extend SCMMs, they cannot express all monotone non-decreasing submodular functions.
We define a family of sums of concave over modular functions with an additional modular term
(SCMMs), taking the form f(A) = Σ_i φ_i(m_i(A)) + m±(A), where each φ_i and m_i is as in an SCM,
but where m± : 2^V → R is an arbitrary modular function, so if m± ≡ 0 the SCMM is an SCM.
Before showing that DSFs extend SCMMs, we include a result showing that SCMMs are strictly
smaller than the set of all submodular functions. We include an unpublished result [28] showing that
SCMMs cannot represent the cycle matroid rank function, as described above, over the graph K4.
Theorem 4.11 (Vondrák [28]). SCMMs ⊊ Space of Submodular Functions.
We next show that DSFs strictly generalize SCMMs, thus providing justification for using DSFs over
SCMMs and, moreover, generalizing Lemma 4.5. The DSF we choose is, again, a laminar matroid,
so SCMMs are unable to represent laminar matroid rank functions. Since DSFs generalize laminar
matroid rank functions, the result follows.
Theorem 4.14. The DSF family is strictly larger than that of SCMs.
The proof shows that it is not possible to represent a function of the form f(X) = min(Σ_i min(|X ∩ B_i|, k_i), k) using an SCMM. Theorem 4.14 also has an immediate consequence for concave functions.
Corollary 4.14.1. There exists a non-USC concave function with an antitone superdifferential.
The corollary follows since, as mentioned above, DSF functions have at their core a multivariate
concave function with an antitone superdifferential, and thanks to Theorem 4.14 it is not always
possible to represent this as a sum of concave over linear functions. It is currently an open problem
whether DSFs with ℓ layers extend the family of DSFs with ℓ′ < ℓ layers, for ℓ′ ≥ 2. Our final result shows
that, while DSFs are richer than SCMMs, they still do not encompass all polymatroid functions. We
show this by proving that the cycle matroid rank function on K4 is not achievable with DSFs.
Theorem 4.15. DSFs ⊊ Polymatroids.
Proofs of these theorems and more may be found in [5].
5 Learning DSFs
As mentioned above, learning submodular functions is generally difficult [11, 13]. Learning mixtures
of fixed submodular component functions [24, 19], however, can give good empirical results on
several tasks, including image [27] and document [19] summarization. In these examples, rather than
attempting to learn a function at all 2^n points, a max-margin approach is used only to approximate
a submodular function on its large values. Typically when training a summarizer, one is given a
ground set of items, and a set of representative sets of excerpts (usually human generated), each of
which summarizes the ground set. Within this setting, access to an oracle function h(A) (which, if
available, could be used in a regression-style learning approach) might not be available. Even if
available, such learning is often overkill. Thus, instead of trying to learn h everywhere, we only
seek to learn the parameters w of a function f_w that lives within some family, parameterized by
w, so that if B ∈ argmax_{A⊆V:|A|≤k} f_w(A), then h(B) ≥ αh(A*) for some α ∈ [0, 1], where
A* ∈ argmax_{A⊆V:|A|≤k} h(A). In practice, this corresponds to selecting the best summary for a
given document based on the learnt function, in the hope that it mirrors what a human believes to be
best. Fortunately, the max-margin training approach directly addresses the above and is immediately
applicable to learning DSFs. Also, given the ongoing research on learning DNNs, which have
achieved state-of-the-art results on a plethora of machine learning tasks [17], and given the similarity
between DSFs and DNNs, we may leverage the DNN learning techniques (such as dropout, AdaGrad,
learning rate scheduling, etc.) to our benefit.
5.1 Using max-margin learning for DSFs
Given an unknown but desired function h : 2^V → R+ and a set of representative sets 𝒮 =
{S_1, S_2, ...}, with S_i ⊆ V and where for each S ∈ 𝒮, h(S) is highly scored, a max-margin
learning approach may be used to train a DSF f so that if A ∈ argmax_{A⊆V} f(A), then h(A) is also
highly scored by h. Under the large-margin approach [24, 19, 27], we learn the parameters w of
f_w such that for all S ∈ 𝒮, f_w(S) is high, while for A ∈ 2^V, f_w(A) is lower by some given
loss. This may be performed by maximizing the loss-dependent margin so that for all S ∈ 𝒮 and
A ∈ 2^V, f_w(S) ≥ f_w(A) + ℓ_S(A). For a given loss function ℓ_S(A), optimization reduces to finding
parameters so that f_w(S) ≥ max_{A∈2^V} [f_w(A) + ℓ_S(A)] is satisfied for S ∈ 𝒮. The task of finding the
maximizing set is known as loss-augmented inference (LAI) [26], which for general ℓ(A) is NP-hard.
With regularization, and defining the hinge operator (x)+ = max(0, x), the optimization becomes:

    min_{w≥0} Σ_{S∈𝒮} ( max_{A∈2^V} [f(A) + ℓ_S(A)] − f(S) )+ + (λ/2) ||w||₂²      (4)

Given an LAI procedure, the subgradient of weight w_i is (∂/∂w_i) f(A) − (∂/∂w_i) f(S) + λw_i, and in the case
of a DSF, each subgradient can be computed efficiently with backpropagation, similar to the approach
of [23]; but to retain polymatroidality of f, projected gradient descent is used to ensure w ≥ 0.
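Assuming the DSF is implemented with differentiable weights (e.g., a PyTorch tensor w created with requires_grad=True), one projected-subgradient step on Eq. (4) for a single summary S might look like the following sketch; `f`, `lai`, and `loss` are illustrative callables, not the authors' code:

```python
import torch

def max_margin_step(f, w, S, lai, loss, lam=1e-3, lr=1e-2):
    """One projected-subgradient step on Eq. (4) for one summary S.
    f(A, w): DSF value (scalar tensor); lai(w): argmax_A [f(A)+ℓ_S(A)]
    (e.g., greedy when ℓ_S is submodular); loss(A): ℓ_S(A) as a float."""
    A = lai(w)
    hinge = torch.clamp(f(A, w) + loss(A) - f(S, w), min=0.0)
    obj = hinge + 0.5 * lam * (w ** 2).sum()
    obj.backward()
    with torch.no_grad():
        w -= lr * w.grad              # subgradient step
        w.clamp_(min=0.0)             # projection: keep w ≥ 0
        w.grad.zero_()
```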
For arbitrary set functions f(A) and ℓ_S(A), LAI is generally intractable. Even if f(A) is submodular,
the choice of loss can affect the computational feasibility of LAI. For submodular ℓ(A), the greedy
algorithm can find an approximately maximizing set [21]. For supermodular ℓ(A), the task of solving
max_{A∈2^V\𝒮} [f(A) + ℓ(A)] involves maximizing the difference of two submodular functions, and the
submodular-supermodular procedure [14] can be used.
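For reference, a minimal greedy sketch for (loss-augmented) maximization under a cardinality constraint, with the classic 1 − 1/e guarantee for monotone submodular objectives [21]:

```python
def greedy_max(f, V, k):
    """Greedy maximization of a monotone submodular f under |A| ≤ k.
    For LAI with a submodular loss, pass g(A) = f(A) + ℓ_S(A) as f.
    V must be a set; f maps sets to reals."""
    A = set()
    for _ in range(k):
        best = max(V - A, key=lambda v: f(A | {v}) - f(A), default=None)
        if best is None or f(A | {best}) <= f(A):
            break                     # no element adds positive gain
        A.add(best)
    return A
```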
Once a DSF is learnt, we may wish to find max_{A⊆V:|A|≤k} f(A), and this can be done, e.g., using
the greedy algorithm (as sketched above) when m± ≥ 0. The task of summarization, however, might involve learning
based on one set of ground sets and testing via a different (set of) ground set(s) [19, 27]. To do
this, any particular element v ∈ V may be represented by a vector of non-negative feature weights
(m_1(v), m_2(v), ...) (e.g., m_i(v) counts the number of times a unigram i appears in sentence v),
and the feature weight for any set A ⊆ V is represented by the i-specific modular evaluation
m_i(A) = Σ_{a∈A} m_i(a). We can treat the set of modular functions {m_i : V → R+}_i as a matrix
to be used as the first layer of a DSF (e.g., w^(1) in Figure 1 (left)) that is fixed during the training of
subsequent layers. This preserves submodularity, and allows all later layers (i.e., w^(2), w^(3), ...) to
be learnt generically over any set of objects that can be represented in the same feature space; this
also allows training over one set of ground sets, and testing on a totally separate set of ground sets.
Max-margin learning, in fact, remains ignorant that this is happening since it sees the data only post
feature representation. In fact, learning can be cross-modal; e.g., images and sentences, represented
in the same feature space, can be learnt simultaneously. This is analogous to the "shells" of [19]. In
that case, however, mixtures were learnt over fixed components, some of which required an O(n²)
calculation for element-pair similarity scores. Via featurization in the first layer of a DSF, however,
we may learn a DSF over a training set, preserving submodularity, avoid any O(n²) cost, and test on
any new data represented in the same feature space.

[Figure 3 panels: (a) Cycle matroid and (b) Laminar matroid: greedy set value vs. training iteration, with curves for the laminar rank and the cycle matroid rank; (c) Image summarization: average greedy VROUGE on train and test vs. epoch, for DSF, SCMM, human avg., and random avg.]
Figure 3: (a), (b) show matroid learning via a DSF is possible in a max-margin setting; (c) shows that
learning and generalization of a DSF can happen, via featurization, on real image data.
6 Empirical Experiments on Learning DSFs
We offer preliminary feasibility results showing it is possible to train a DSF on synthetic datasets and,
via featurization, on a real image summarization dataset.
The first synthetic experiment trains a DSF to learn a cycle matroid rank function on K4. Although
Theorem 4.15 shows a DSF cannot represent such a rank function everywhere, we show that the
max-margin framework can learn a DSF that, when maximized via max_{A⊆V:|A|≤3} f(A), does not
return a 3-cycle (as is desirable). We used a simple two-layer DSF, where the first hidden layer
consisted of four hidden units with square-root activation functions, and a normalized sigmoid
σ̄(x) = 2(σ(x) − 0.5) at the output. Figure 3a shows that after sufficient learning iterations, greedy
applied to the DSF returns independent size-three sets. Further analysis shows that the function is
not learnt everywhere, as predicted by Theorem 4.15.
We next tested a scaled laminar matroid rank r(A) = (1/8) min(Σ_{i=1}^{10} min(|A ∩ B_i|, 1), 8), where
the B_i are each of size 10 and form a partition of V, with |V| = 100. Thus maximally independent
sets I have r(I) = 1 with |I| = 8. A DSF is trained with a hidden layer of 10 units with
activation g(x) = min(x, 1), and a normalized sigmoid σ̄ at the output. We randomly generated 200
matroid bases and trained the network. The greedy solution to max_{A⊆V:|A|≤8} f(A) on the learnt
DSF produces sets that are maximally independent (Figure 3b).
For our real-world instance of learning DSFs, we use the dataset of [27], which consists of 14 distinct
image sets, 100 images each. The task is to select the best 10-image summary in terms of a visual
ROUGE-like function that is defined over a bag of visual features. For each of the 14 ground sets,
we trained on the other 13 sets and evaluated the performance of the trained DSF on the test set.
We use a simple DSF of the form f(A) = σ̄(Σ_{u∈U} w_u √(m_u(A))), where m_u(A) is modular for
feature u, and σ̄ is a sigmoid. We used (diagonalized) AdaGrad, a decaying learning rate, weight
decay, and dropout (which was critical for test-set performance). We compared to an SCMM of
comparable complexity and number of parameters (i.e., the same form and features but a linear
output), and the performance of the SCMM is much worse (Figure 3c), perhaps because of a DSF's
"depth." Notably, we only require |U| = 628 visual-word features (as covered in Section 5 of [27]),
while the approach in [27] required 594 components of O(n²) graph values, or roughly 5.94 million
precomputed values. The loss function is ℓ(A) = 1 − R(A), where R(A) is a ROUGE-like function
defined over visual words. During training, we achieve numbers comparable to [27]. We do not yet
match the generalization results in [27], but we do not use strong O(n²) graph components, and we
expect better results perhaps with a deeper network and/or better base features.
Acknowledgments: Thanks to Reza Eghbali and Kai Wei for useful discussions. This material is based upon
work supported by the National Science Foundation under Grant No. IIS-1162606, the National Institutes of
Health under award R01GM103544, and by a Google, a Microsoft, a Facebook, and an Intel research award.
This work was supported in part by TerraSwarm, one of six centers of STARnet, a Semiconductor Research
Corporation program sponsored by MARCO and DARPA.
References
[1] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization:
Massive data summarization on the fly. In Proceedings of the 20th ACM SIGKDD international conference
on Knowledge discovery and data mining, pages 671–680. ACM, 2014.
[2] M. Balcan and N. Harvey. Learning submodular functions. Technical report, arXiv:1008.2159, 2010.
[3] Y. Bengio. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning,
2(1):1–127, 2009.
[4] Y. Bengio, A. Courville, and P. Vincent. Representation Learning: A Review and New Perspectives. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
[5] Jeffrey Bilmes and Wenruo Bai. Deep Submodular Functions. Arxiv, abs/1701.08939, Jan 2017. http:
//arxiv.org/abs/1701.08939.
[6] S. P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] Niv Buchbinder, Moran Feldman, Joseph Seffi Naor, and Roy Schwartz. Submodular maximization with
cardinality constraints. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete
Algorithms, pages 1433–1452. Society for Industrial and Applied Mathematics, 2014.
[8] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular
function subject to a matroid constraint. SIAM Journal on Computing, 40(6):1740–1766, 2011.
[9] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming algorithms for submodular function
maximization. In International Colloquium on Automata, Languages, and Programming, pages 318–330.
Springer, 2015.
[10] W. H. Cunningham. Testing membership in matroid polyhedra. J. Combinatorial Theory B, 36:161–188,
1984.
[11] V. Feldman and J. Vondrák. Optimal bounds on approximation of submodular and XOS functions by
juntas. CoRR, abs/1307.3301, 2013.
[12] S. Fujishige. Submodular Functions and Optimization. Number 58 in Annals of Discrete Mathematics.
Elsevier Science, 2nd edition, 2005.
[13] M.X. Goemans, N.J.A. Harvey, S. Iwata, and V. Mirrokni. Approximating submodular functions everywhere. In SODA, pages 535?544, 2009.
[14] R. Iyer and J. Bilmes. Algorithms for approximate minimization of the difference between submodular
functions, with applications. Uncertainty in Artificial Intelligence (UAI), 2012.
[15] Rishabh Iyer and Jeff Bilmes. Submodular optimization with submodular cover and submodular knapsack
constraints. In Neural Information Processing Society (NIPS), Lake Tahoe, CA, December 2013.
[16] Rishabh Iyer, Stefanie Jegelka, and Jeff A. Bilmes. Fast semidifferential-based submodular function
optimization. In International Conference on Machine Learning (ICML), Atlanta, Georgia, 2013.
[17] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436?444, may 2015.
[18] Jon Lee, Vahab S Mirrokni, Viswanath Nagarajan, and Maxim Sviridenko. Non-monotone submodular
maximization under matroid and knapsack constraints. In Proceedings of the forty-first annual ACM
symposium on Theory of computing, pages 323?332. ACM, 2009.
[19] H. Lin and J. Bilmes. Learning mixtures of submodular shells with application to document summarization.
In Uncertainty in Artificial Intelligence (UAI), Catalina Island, USA, July 2012. AUAI.
[20] Hui Lin and Jeff Bilmes. A Class of Submodular Functions for Document Summarization, 2011.
[21] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing
submodular functions-I. Math. Program., 14:265?294, 1978.
[22] R. Nishihara, S Jegelka, and M. I. Jordan. On the convergence rate of decomposable submodular function
minimization. In Advances in Neural Information Processing Systems, pages 640?648, 2014.
[23] W. Pei. Max-Margin Tensor Neural Network for Chinese Word Segmentation. Transactions of the
Association of Computational Linguistics, pages 293?303, 2014.
[24] R. Sipos, P. Shivaswamy, and T. Joachims. Large-margin learning of submodular summarization models.
In Proceedings of the 13th Conference of the European Chapter of the Association for Computational
Linguistics, pages 224?233. Association for Computational Linguistics, 2012.
[25] P. Stobbe and A. Krause. Efficient minimization of decomposable submodular functions. In NIPS, 2010.
[26] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large
margin approach. Proceedings of the 22nd international conference on Machine learning, (December):896?
903, 2005.
[27] S. Tschiatschek, R. Iyer, H. Wei, and J. Bilmes. Learning mixtures of submodular functions for image
collection summarization. In Neural Information Processing Society (NIPS), Montreal, Canada, December
2014.
[28] J. Vondrak. Personal Communication, 2011.
[29] K. Wei, Y. Liu, K. Kirchhoff, and J. Bilmes. Unsupervised submodular subset selection for speech data. In
Proc. IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing, Florence, Italy, 2014.
Beyond Exchangeability: The Chinese Voting Process
Moontae Lee
Dept. of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
Seok Hyun Jin
Dept. of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
David Mimno
Dept. of Information Science
Cornell University
Ithaca, NY 14853
[email protected]
Abstract
Many online communities present user-contributed responses such as reviews of
products and answers to questions. User-provided helpfulness votes can highlight
the most useful responses, but voting is a social process that can gain momentum
based on the popularity of responses and the polarity of existing votes. We propose
the Chinese Voting Process (CVP) which models the evolution of helpfulness votes
as a self-reinforcing process dependent on position and presentation biases. We
evaluate this model on Amazon product reviews and more than 80 StackExchange
forums, measuring the intrinsic quality of individual responses and behavioral
coefficients of different communities.
1 Introduction
With the expansion of online social platforms, user-generated content has become increasingly
influential. Customer reviews in e-commerce like Amazon are often more helpful than editorial
reviews [14], and question answers in Q&A forums such as StackOverflow and MathOverflow are
highly useful for coders and researchers [9, 18]. Due to the diversity and abundance of user content,
promoting better access to more useful information is critical for both users and service providers.
Helpfulness voting is a powerful means to evaluate the quality of user responses (i.e., reviews/answers)
by the wisdom of crowds. While these votes are generally valuable in aggregate, estimating the
true quality of the responses is difficult because users are heavily influenced by previous votes. We
propose a new model that is capable of learning the intrinsic quality of responses by considering their
social contexts and momentum.
Previous work in self-reinforcing social behaviors shows that although inherent quality is an important
factor in overall ranking, users are susceptible to position bias [12, 13]. Displaying items in an order
affects users: top-ranked items get more popularity, while low-ranked items remain in obscurity.
We find that sensitivity to orders also differs across communities: some value a range of opinions,
while others prefer a single authoritative answer. Summary information displayed together can
lead to presentation bias [19]. As the current voting scores are visibly presented with responses,
users inevitably perceive the score before reading the contents of responses. Such exposure could
immediately nudge user evaluations toward the majority opinion, making high-scored responses more
attractive. We also find that the relative length of each response affects the polarity of future votes.
Table 1: Quality interpretation for each sequence of six votes.

Res | Votes       | Diff | Ratio | Relative Quality
1   | + + + − − − | 0    | 0.5   | quite negative
2   | + − + − + − | 0    | 0.5   | moderately negative
3   | − + − + − + | 0    | 0.5   | moderately positive
4   | − − − + + + | 0    | 0.5   | quite positive

Standard discrete models for self-reinforcing processes include the Chinese
Restaurant Process and the Pólya urn model. Since these models are exchangeable,
the order of events does not affect the probability of a sequence. However,
Table 1 suggests how different contexts
of votes cause different impacts. While the four sequences have equal numbers of positive and
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
negative votes in aggregate, the fourth votes in the first and last responses are given against a clear
majority opinion. Our model treats objection as a more challenging decision, thereby deserving
higher weight. In contrast, the middle two sequences receive alternating votes. As each vote is
a relatively weaker disagreement, the underlying quality is moderate compared to the other two
responses. Furthermore, if these are responses to one item, the order between them also matters. If
the initial three votes on the fourth response pushed its display position to the next page, for example,
it might not have a chance to get future votes, which recover its reputation.
The Chinese Voting Process (CVP) models generation of responses and votes, formalizing the evolution of helpfulness under positional and presentational reinforcement. Whereas most previous work
on helpfulness prediction [7, 5, 8, 4, 11, 10, 15] has involved a single snapshot, the CVP estimates
intrinsic quality of responses solely from selection and voting trajectories over multiple snapshots.
The resulting model shows significant improvements in predictive probability for helpfulness votes,
especially in the critical early stages of a trajectory. We find that the CVP estimated intrinsic quality
ranks responses better than existing system rank, correlating orderly with the sentiment of comments
associated with each response. Finally, we qualitatively compare different characteristics of self-reinforcing behavior between communities using two learned coefficients: Trendiness and Conformity.
The two-dimensional embedding in Figure 1 characterizes different opinion dynamics from Judaism
to Javascript (in StackOverflow).
Figure 1: 2D Community embedding. Each of 83 communities is represented by two behavioral coefficients
(Trendiness, Conformity). Eleven clusters are grouped based on their common focus. The MEAN community is
synthesized by sampling 20 questions from every community (except Amazon due to the different user interface).
Related work. There is strong evidence that helpfulness voting is socially influenced. Helpfulness
ratings on Amazon product reviews differ significantly from independent human annotators [8]. Votes
are generally more positive, and the number of votes decreases exponentially based on displayed
page position. Review polarity is biased towards matching the consensus opinion [4]: when two
reviews contain essentially the same text but differ in star rating, the review closer to the consensus
star rating is considered more helpful. There is also evidence that users vote strategically to correct
perceived mismatches in review rank [16]. Many studies have attempted to predict helpfulness given
review-content features [7, 5, 11, 10, 15]. Each of these examples predicts helpfulness based on text,
star-ratings, sales, and badges, but only at a single snapshot. Our work differs in two ways. First, we
combine data on Amazon helpfulness votes from [16] with a much larger collection of helpfulness
votes from 82 StackExchange forums. Second, instead of considering text-based features (which we
hold out for evaluation) within a single snapshot, we attempt to predict the next vote at each stage
based on the previous voting trajectory over multiple snapshots without considering textual contents.
2 The Chinese Voting Process
Our goal is to model helpfulness voting as a two-phase self-reinforcing stochastic process. In the
selection phase, each user either selects an existing response based on their positions or writes a new
response. The positional reinforcement is inspired by the Chinese Restaurant Process (CRP) and
Distance Dependent Chinese Restaurant Process (ddCRP). In the voting phase, when one response is
selected, the user chooses one of the two feedback options: a positive or negative vote based on the
intrinsic quality and the presentational factors. The presentational reinforcement is modeled by a
log-linear model with time-varying features based on the Pólya urn model. The CVP implements
rich-get-richer dynamics as an interplay of these two preferential reinforcements, learning latent
qualities of individual responses in the spirit of Table 1. Specifically, each user at time t interested in
the item i follows the generative story in Table 2.
Generative process (for a user arriving at item i at time t):
1. Evaluate the j-th response: p(z_i^{(t)} = j | z_i^{(1:t-1)}; τ) ∝ f_i^{(t-1)}(j)
   (a) "Yes": p(v_i^{(t)} = 1 | θ) = logit^{-1}( q_{ij} + g_i^{(t-1)}(j) )
   (b) "No":  p(v_i^{(t)} = 0 | θ) = 1 − p(v_i^{(t)} = 1 | θ)
2. Or write a new response: p(z_i^{(t)} = J_i + 1 | z_i^{(1:t-1)}; τ) ∝ α, with J_i = J_i^{(t-1)} (abbreviated notation)
   (a) Sample q_{i(J_i+1)} from N(0, σ²).

Sample parametrization (Amazon):
   f_i^{(t)}(j) = 1 / (1 + display-rank_i^{(t)}(j))^τ
   g_i^{(t)}(j) = β r_{ij}^{(t)} + γ s_{ij}^{(t)} + δ_i u_{ij}^{(t)}
   θ = {{q_{ij}}, β, γ, {δ_i}}

Table 2: The generative story and the parametrization of the Chinese Voting Process (CVP).
2.1 Selection phase
The CRP [1, 2] is a self-reinforcing decision process over an infinite discrete set. For each item
(product/question) i, the first user writes a new response (review/answer). The t-th subsequent user
can choose an existing response j out of J_i^{(t-1)} possible responses with probability proportional to
the number of votes n_j^{(t-1)} given to the response j by time t−1, whereas the probability of writing a
new response J_i^{(t-1)} + 1 is proportional to a constant α. While the CRP models self-reinforcement
(each vote for a response makes that response more likely to be selected later), there is evidence
that the actual selection rate in an ordered list decays with display rank [6]. Since such rankings are
mechanism-specific and not always clearly known in advance, we need a more flexible model that
can specify various degrees of positional preference. The ddCRP [3] introduces a function f that
decays with respect to some distance measure. In our formulation, the distance function varies over
time and is further configurable with respect to the specific interface of service providers.
Specifically, the function f_i^{(t)}(j) in the CVP evaluates the popularity of the j-th response in the item i
at time t. Since we assume that the popularity of responses is decided by their positional accessibility,
we can parametrize f to be inversely proportional to their display ranks. The exponent τ determines
sensitivity to popularity in the selection phase by controlling the degree of harmonic penalization
over ranks. Larger τ > 0 indicates that users are more sensitive to trendy responses displayed near
the top. If τ < 0, users often select low-ranked responses over high-ranked ones for some reason.1
Note that even if the user at time t does not vote on the j-th response, f_i^{(t)}(j) could differ from
f_i^{(t-1)}(j) in the CVP,2 whereas n_{ij}^{(t)} = n_{ij}^{(t-1)} in the CRP. Thus one can view the selection phase of the
CVP as a non-exchangeable extension of the CRP via the time-varying function f.
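As a concrete illustration, the following is a minimal sketch of the selection-phase distribution under the Amazon parametrization of Table 2; the function and variable names are ours, and the display ranks are assumed to be given by the site.

```python
import numpy as np

# A sketch of the CVP selection phase under the Table 2 parametrization:
# f_i^(t-1)(j) = (1 + display_rank_j)^(-tau), with mass alpha for writing
# a new response.

def selection_probs(display_ranks, alpha, tau):
    """Probabilities over [existing responses..., write-new-response]."""
    f = (1.0 + np.asarray(display_ranks, dtype=float)) ** (-tau)
    weights = np.append(f, alpha)  # last slot = write a new response
    return weights / weights.sum()

# Example: three responses at display ranks 1, 2, 3 with tau = 2, alpha = 0.5.
# probs = selection_probs([1, 2, 3], alpha=0.5, tau=2.0)
```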
2.2 Voting phase
We next construct a self-reinforcing process for the inner voting phase. The Pólya urn model is
a self-reinforcing decision process over a finite discrete set, but because it is exchangeable, it is
unable to capture contextual information encoded in a sequence of votes. We instead use
a log-linear formulation with the urn-based features, allowing other presentational features to be
flexibly incorporated based on the modeler's observations.
Each response initially has x = x^{(0)} positive and y = y^{(0)} negative votes, which could be fractional
pseudo-votes. For each draw of a vote, we return w + 1 votes with the same polarity, thus self-reinforcing
when w > 0. Table 3 shows the time-evolving positive/negative ratios
r_j^{(t)} = x_j^{(t)} / (x_j^{(t)} + y_j^{(t)}) and s_j^{(t)} = y_j^{(t)} / (x_j^{(t)} + y_j^{(t)}) of the first two responses
j ∈ {1, 2} in Table 1, with the corresponding ratio gain Δ_j^{(t)} = r_j^{(t)} − r_j^{(t-1)} (if v_j^{(t)} = 1, i.e., +)
or s_j^{(t)} − s_j^{(t-1)} (if v_j^{(t)} = 0, i.e., −).
1 This sometimes happens especially in the early stage when only a few responses exist.
2 Say the rank of another response j′ was lower than j's at time t−1. If the t-th vote given to the response j′
raises its rank higher than the rank of the response j, then f_i^{(t)}(j) < f_i^{(t-1)}(j), assuming τ > 0.
t or T | v1 | r1  | s1  | Δ1    | q1^T   || v2 | r2  | s2  | Δ2    | q2^T
0      |    | 1/2 | 1/2 |       |        ||    | 1/2 | 1/2 |       |
1      | +  | 2/3 | 1/3 | 0.167 |        || +  | 2/3 | 1/3 | 0.167 |
2      | +  | 3/4 | 1/4 | 0.083 |  0.363 || −  | 2/4 | 2/4 | 0.167 | −0.363
3      | +  | 4/5 | 1/5 | 0.050 |  0.574 || +  | 3/5 | 2/5 | 0.100 |  0.004
4      | −  | 4/6 | 2/6 | 0.133 |  0.237 || −  | 3/6 | 3/6 | 0.100 | −0.230
5      | −  | 4/7 | 3/7 | 0.095 |  0.004 || +  | 4/7 | 3/7 | 0.071 |  0.007
6      | −  | 4/8 | 4/8 | 0.071 | −0.175 || −  | 4/8 | 4/8 | 0.071 | −0.166

Table 3: Change of the quality estimate q_j over time for the first two example responses in Table 1, with the initial
pseudo-votes (x, y, w) = (1, 1, 1). The estimated quality of the first response drops sharply when it receives
its first majority-against vote at t = 4. The first response ends up being more negative than the second, even though
they receive the same number of votes in aggregate. These non-exchangeable behaviors cannot be modeled with
a simple exchangeable process.
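The ratio columns of Table 3 can be reproduced in a few lines; the sketch below assumes the net effect of each vote is to add w matching pseudo-counts to the urn, which is consistent with the (x, y, w) = (1, 1, 1) numbers above.

```python
from fractions import Fraction

# A sketch reproducing the ratio columns of Table 3 (assumption: each vote
# adds w pseudo-counts of its own polarity to the urn).

def urn_ratios(votes, x0=1, y0=1, w=1):
    """(r^(t), s^(t)) after each vote in `votes` (True = positive)."""
    x, y, rows = Fraction(x0), Fraction(y0), []
    for positive in votes:
        if positive:
            x += w
        else:
            y += w
        rows.append((x / (x + y), y / (x + y)))
    return rows

# Response 1 in Table 1 (+ + + - - -) gives r: 2/3, 3/4, 4/5, 4/6, 4/7, 4/8.
```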
In this toy setting, the polarity of a vote to a response is an outcome of its intrinsic quality as well
as presentational factors: positive and negative votes. Thus we model each sequence of votes by
ℓ2-regularized logistic regression with the latent intrinsic quality and the Pólya urn ratios.3
max_θ  log ∏_{t=2}^{T} logit^{-1}( q_j^T + β r_j^{(t-1)} + γ s_j^{(t-1)} ) − (1/2) ‖θ‖_2^2,   where θ = (q_j^T, β, γ)    (1)
The {q_j^T} in Table 3 show the result from solving (1) up to the T-th vote for each j ∈ {1, 2}. The
initial vote given at t = 1 is disregarded in training due to its arbitrariness under the uniform
prior (x^{(0)} = y^{(0)}). Since it is quite possible to have only positive or only negative votes, Gaussian
regularization is necessary. Note that using the urn-based ratio features is essential to encode
contextual information. If we instead use raw count features (only the numerators of r_j and s_j), for
example in the first response, the estimated quality q_1^T keeps increasing even after getting negative
votes from time 4 to 6. Log raw count features are similarly unable to infer the negative quality.
In the first response, Δ_1^{(t)} shows the decreasing gain in positive ratios from t = 1 to 3 and in negative
ratios from t = 4 to 6, whereas it gains a relatively large momentum at the first negative vote at
t = 4. Δ_2^{(t)} converges to 0 in the second response, implying that future votes have less effect than
earlier votes under alternating +/− votes. q_2^T also converges to 0, as we expect neutral quality in the limit.
Overall the model is capable of learning intrinsic quality as desired in Table 1, where relative gains
can be further controlled by tuning the initial pseudo-votes (x, y).
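For concreteness, the toy estimator in (1) can be written as a short script; this is a sketch with our own variable names, assuming votes are coded 0/1, the t = 1 vote is dropped as described above, and the ratios come from a routine like urn_ratios.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid, i.e., logit^{-1}

# A sketch of Eq. (1) for one response: fit theta = (q, beta, gamma) by
# l2-regularized logistic regression on the lagged urn ratios (r, s).

def fit_quality(votes, ratios):
    v = np.asarray(votes[1:], dtype=float)        # t = 1 is disregarded
    feats = np.asarray(ratios[:-1], dtype=float)  # lagged (r^(t-1), s^(t-1))

    def neg_objective(theta):
        q, beta, gamma = theta
        p = expit(q + beta * feats[:, 0] + gamma * feats[:, 1])
        loglik = np.sum(v * np.log(p) + (1 - v) * np.log(1 - p))
        return -(loglik - 0.5 * theta @ theta)

    return minimize(neg_objective, np.zeros(3)).x  # (q, beta, gamma)
```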
In the real setting, the polarity score function g_i^{(t)}(j) in the CVP evaluates presentational factors of
the j-th response in the item i at time t. Because we adopt a log-linear formulation, one can easily
add additional information about responses. In addition to the positive ratio r_{ij}^{(t)} and the negative
ratio s_{ij}^{(t)}, g also contains a length feature u_{ij}^{(t)} (as given in Table 2), which is the relative length of the
response j against the average length of responses in the item i at the particular time t. Users in some
items may prefer shorter responses than longer ones for brevity, whereas users in other items may
blindly believe that longer responses are more credible before reading their contents. The parameter
δ_i explains length-wise preferential idiosyncrasy as a per-item bias: δ_i < 0 means a preference toward
shorter responses. Note that g_i^{(t)}(j) could differ from g_i^{(t-1)}(j) even if the user at time t does
not choose to vote.4 All together, the voting phase of the CVP generates non-exchangeable votes.
3 Inference
Each phase of the CVP depends on the result of all previous stages, so decoupling these related
problems is crucial for efficient inference. We need to estimate community-level parameters, item-level length preferences, and response-level intrinsic qualities. The graphical model of the CVP and
corresponding parameters to estimate are illustrated in Table 4. We further compute two community-level behavioral coefficients: Trendiness and Conformity, which are useful summary statistics for
exploring different voting patterns and explaining macro characteristics across different communities.
3 One might think (1) can be equivalently achieved with only two parameters because r_j + s_j = 1 for
all t. However, such reparametrization adds inconsistent translations to q_j^T and makes it difficult to interpret
different inclinations between positive and negative votes for various communities.
4 If a new response is written at time t, u_{ij}^{(t)} ≠ u_{ij}^{(t-1)}, as the new response changes the average length.
α: hyper-parameter for response growth
σ²: hyper-parameter for quality variance
τ: community-level sensitivity to popularity
β: community-level preference for positive ratio
γ: community-level preference for negative ratio
δ_i: item-level preference for response length
q_ij: response-level hidden intrinsic quality
m: # of items (e.g., products/questions)
J_i: # of responses of item i (e.g., reviews/answers)
Table 4: Graphical model and parameters for the CVP. Only three time steps are unrolled for visualization.
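For readers implementing the model, the parameters of Table 4 can be organized as follows; this container and its field names are purely illustrative, not the paper's code.

```python
from dataclasses import dataclass, field

# An illustrative container for the parameters listed in Table 4.

@dataclass
class CvpParams:
    tau: float    # community-level sensitivity to popularity (Trendiness)
    beta: float   # community-level preference for the positive ratio r
    gamma: float  # community-level preference for the negative ratio s
    delta: dict = field(default_factory=dict)  # item-level length bias delta_i
    q: dict = field(default_factory=dict)      # intrinsic quality q_ij, keyed (i, j)
```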
Parameter inference. The goal is to infer parameters θ = {{q_ij}, β, γ, {δ_i}}. We sometimes use f
and g instead to compactly indicate the parameters associated with each function. The likelihood of one
CVP step in the item i at time t is

L_i^{(t)}(θ, τ; α, σ) =
   { α / (α + Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)) · N(q_{i,z_i^{(t)}}; 0, σ²) }^{[z_i^{(t)} = J_i^{(t-1)} + 1]}
 · { f_i^{(t-1)}(z_i^{(t)}) / (α + Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)) · p(v_i^{(t)} | q_{i,z_i^{(t)}}, g_i^{(t-1)}(z_i^{(t)})) }^{[z_i^{(t)} ≤ J_i^{(t-1)}]}
where the two terms correspond to writing a new response and selecting an existing response to
vote. The fractions in each term respectively indicate the probability of writing a new response and
choosing existing responses in the selection phase. The other two probability expressions in each term
describe quality sampling from a normal distribution and the logistic regression in the voting phase.
It is important to note that our setting differs from many CRP-based models. The CRP is typically
used to represent a non-parametric prior over the choice of latent cluster assignments that must
themselves be inferred from noisy observations. In our case, the result of each choice is directly
observable because we have the complete trajectory of helpfulness votes. As a result, we only
need to infer the continuous parameters of the process, and not combinatorial configurations of
discrete variables. Since we know the complete trajectory, where the rank inside the function f is a
part of the true observations, we can view each vote as an independent sample. Denoting the last
timestamp of the item i by T_i, the log-likelihood becomes ℓ(θ, τ; α, σ) = Σ_{i=1}^{m} Σ_{t=1}^{T_i} log L_i^{(t)} and is
further separated into two pieces:
ℓ_v(θ; σ) = Σ_{i=1}^{m} Σ_{t=1}^{T_i} { [write] · log N(q_{i,z_i^{(t)}}; 0, σ²) + [choose] · log p(v_i^{(t)} | q_{i,z_i^{(t)}}, g_i^{(t-1)}(z_i^{(t)})) },    (2)

ℓ_s(τ; α) = Σ_{i=1}^{m} Σ_{t=1}^{T_i} { [write] · log( α / (α + Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)) ) + [choose] · log( f_i^{(t-1)}(z_i^{(t)}) / (α + Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)) ) }.
Inferring a whole trajectory based only on the final snapshots would likely be intractable for a
non-exchangeable model. Due to the continuous interaction between f and g for every time step,
small mis-predictions in the earlier stages will cause entirely different configurations. Moreover, the
rank function inside f is in many cases site-specific.5 It is therefore vital to observe all trajectories
of the random variables {z_i^{(t)}, v_i^{(t)}}: decoupling f and g reduces the inference problem into estimating
parameters separately for the selection phase and the voting phase. Maximizing ℓ_v can be efficiently
solved by ℓ2-regularized logistic regression, as demonstrated for (1). If the hyper-parameter α is fixed,
maximizing ℓ_s becomes a convex optimization because α appears in both the numerator and the
denominator. Since the gradient for each parameter in θ is obvious, we only include the gradient of
ℓ_{s,i}^{(t)} for the particular item i at time t with respect to τ. Then ∂ℓ_s/∂τ = Σ_{i=1}^{m} Σ_{t=1}^{T_i} ∂ℓ_{s,i}^{(t)}/∂τ.
∂ℓ_{s,i}^{(t)}/∂τ = (1/τ) { [z_i^{(t)} ≤ J_i^{(t-1)}] · ( f_i^{(t-1)}(z_i^{(t)}) log f_i^{(t-1)}(z_i^{(t)}) ) / f_i^{(t-1)}(z_i^{(t)}) − ( Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j) log f_i^{(t-1)}(j) ) / ( α + Σ_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j) ) }    (3)
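A minimal sketch of the per-step selection objective and its τ-gradient (Eqs. 2-3) under f(j) = (1 + rank_j)^(-τ); the names are ours, and τ ≠ 0 is assumed.

```python
import numpy as np

# One step of l_s and its tau-gradient for f(j) = (1 + rank_j)^(-tau).
# `chosen` is the index of the selected response, or None for a write event.

def step_loglik_and_grad(display_ranks, chosen, alpha, tau):
    f = (1.0 + np.asarray(display_ranks, dtype=float)) ** (-tau)
    denom = alpha + f.sum()
    # df/dtau = f * log(f) / tau, which yields the 1/tau factor of Eq. (3).
    shared = (f * np.log(f)).sum() / (tau * denom)
    if chosen is None:  # the user writes a new response
        return np.log(alpha / denom), -shared
    return np.log(f[chosen] / denom), np.log(f[chosen]) / tau - shared

# Summing these over all observed steps gives l_s and its gradient, which
# can be handed to any first-order optimizer to fit tau.
```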
5 We generally know that Amazon decides the display order by the portion of positive votes and the total
number of votes on each response, but the relative weights between them are not known. We do not know how
StackExchange forums break ties, which matters greatly in the early stages of voting.
Community         | Selection: CRP, CVP | Voting: partial models over subsets of {q_ij, δ_i, β, γ} (Full last) | Residual: Rank, Qual | Bumpiness: Rank, Qual
SOF(22925)        | 2.152 1.989 | .107 .103 .108 .100 .106 .100 .096 | .005 .003 | .080 .038
math(6245)        | 1.841 1.876 | .071 .064 .067 .062 .066 .060 .059 | .014 .008 | .280 .139
english(5242)     | 1.969 1.924 | .160 .146 .152 .141 .147 .137 .135 | .018 .007 | .285 .149
mathOF(2255)      | 1.992 1.910 | .049 .046 .049 .045 .047 .046 .045 | .009 .007 | .185 .119
physics(1288)     | 1.824 1.801 | .174 .155 .166 .150 .156 .146 .142 | .032 .014 | .497 .273
stats(598)        | 1.889 1.822 | .051 .044 .048 .043 .046 .042 .042 | .030 .019 | .613 .347
judaism(504)      | 2.039 1.859 | .135 .124 .132 .121 .125 .118 .116 | .046 .018 | .875 .403
amazon(363)       | 2.597 2.261 | .266 .270 .262 .254 .243 .253 .240 | .023 .016 | .392 .345
meta.SOF(294)     | 1.411 1.575 | .261 .241 .270 .229 .243 .232 .225 | .018 .013 | .281 .255
cstheory(279)     | 1.893 1.795 | .052 .040 .053 .039 .049 .039 .038 | .032 .029 | .485 .553
cs(123)           | 1.825 1.780 | .128 .100 .118 .099 .113 .097 .096 | .069 .040 | .725 .673
linguistics(107)  | 1.993 1.789 | .133 .127 .130 .122 .123 .120 .116 | .074 .038 | .778 .656
AVERAGE           | 2.050 1.945 | .109 .103 .108 .099 .105 .098 .095 | .011 .006 | .186 .101

Table 5: Predictive analysis on the first 50 votes: In the selection phase, the CVP shows better negative
log-likelihood in almost all forums. In the voting phase, the full model shows better negative log-likelihood
than all subsets of features. Quality analysis at the final snapshot: smaller residuals and bumpiness show that
the order based on the estimated quality q_ij correlates more coherently with the average sentiments of the
associated comments than the order by display rank. (SOF=StackOverflow, OF=Overflow, rest=Exchange; Blue:
p ≤ 0.001, Green: p ≤ 0.01, Red: p ≤ 0.05)
Behavioral coefficients. To succinctly measure overall voting behaviors across different communities, we propose two community-level coefficients. Trendiness indicates the sensitivity to positional
popularity in the selection phase. While the community-level τ parameter serves directly as Trendiness
to avoid overly-complicated models, one can easily extend the CVP to have per-item τ_i to better
fit the data. In that case, Trendiness would be a summary statistic for {τ_i}. Conformity captures
users' receptiveness to prevailing polarity in the voting phase. To count every single vote, we define
Conformity to be a geometric mean of odds ratios between majority-following votes and majority-disagreeing votes. Let V_i be the set of time steps when users vote rather than writing responses in the
item i. Say n is the total number of votes across all items in the target community. Then Conformity
is defined as
ρ = { ∏_{i=1}^{m} ∏_{t∈V_i} ( P(v_i^{(t+1)} = 1 | q^t_{i,z_i^{(t+1)}}, β^t, γ^t, δ_i^t) / P(v_i^{(t+1)} = 0 | q^t_{i,z_i^{(t+1)}}, β^t, γ^t, δ_i^t) )^{h_i^{(t)}} }^{1/n},

where h_i^{(t)} = +1 if n_{ij}^{+(t)} ≥ n_{ij}^{-(t)}, and h_i^{(t)} = −1 if n_{ij}^{+(t)} < n_{ij}^{-(t)}.
To compute Conformity ρ, we need to learn θ^t = {q_ij^t, β^t, γ^t, δ_i^t} for each t, which is a set of
parameters learned on the data only up to the time t. This is because the user at time t cannot see
any future votes given later than time t. Note that θ^{t+1} can be efficiently learned by
warm-starting at θ^t. In addition, while positive votes are mostly dominant in the end, the dominant
mood up to time t could be negative exactly when the user at time t + 1 tries to vote. In this case,
h_i^{(t)} becomes −1, inverting the fraction to be the ratio of following the majority against the minority.
By summarizing learned parameters in terms of the two coefficients (τ, ρ), we can compare different
selection/voting behaviors for various communities.
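A sketch of computing ρ once the per-time-step models θ^t have been fit; the event tuples below are our own packaging of the quantities in the definition.

```python
import numpy as np

# Conformity rho as a geometric mean of odds ratios: exponent +1 when the
# pre-vote majority is positive, -1 when it is negative. Each event packs
# (p_pos, n_plus, n_minus): the model's probability of a positive vote
# under theta^t and the vote counts observed so far.

def conformity(events):
    log_sum = 0.0
    for p_pos, n_plus, n_minus in events:
        h = 1.0 if n_plus >= n_minus else -1.0
        log_sum += h * np.log(p_pos / (1.0 - p_pos))
    return float(np.exp(log_sum / len(events)))
```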
4 Experiments
We evaluate the CVP on product reviews from Amazon and 82 issue-specific forums from the
StackExchange network. The Amazon dataset [16] originally consisted of 595 products with daily
snapshots of writing/voting trajectories from Oct 2012 to Mar 2013. After eliminating duplicate
products6 and products with fewer than five reviews or fragmented trajectories,7 363 products are left.
For the StackExchange dataset8 , we filter out questions from each community with fewer than five
answers besides the answer chosen by the question owner.9 We drop communities with fewer than
100 questions after pre-processing. Many of these are ?Meta? forums where users discuss policies
and logistics for their original forums.
6 Different seasons of the same TV shows have different ASIN codes but share the same reviews.
7 If the number of total votes between the last snapshot of the early fragment and the first snapshot of the later
fragment is less than 3, we fill in the missing information simply with the last snapshot of the earlier fragment.
8 Dataset and statistics are available at https://archive.org/details/stackexchange.
9 The answer selected by the question owner is displayed first regardless of voting scores.
Figure 2: Comment and likelihood analysis on the StackOverflow forum. The left panels show that responses
with higher ranks tend to have more comments (top) and more positive sentiments (bottom). The middle
panels show responses have more comments at both high and low intrinsic quality qij (top). The corresponding
sentiment correlates more cohesively with the quality score (bottom). Each blue dot is approximately an average
over 1k responses, and we parse 337k comments given on 104k responses in total. The right panels show
predictive power for the selection phase (top) and the voting phase (bottom) up to t < 50 (lower is better).
Predictive analysis. In each community, our prediction task is to learn the model up to time t and
predict the action at t + 1. We align all items at their initial time steps and compute the average
negative log-likelihood of the next actions based on the current model. Since the complete trajectory
enables us to separate the selection and voting phases in inference, we also measure the predictive
power of these two tasks separately against their own baselines. For the selection phase, the baseline
is the CRP, which selects responses proportional to the number of accumulated votes or writes a
new response with probability proportional to α.10 When t < 50, as shown in the first column of
Table 5, the CVP significantly outperforms the CRP based on paired t-tests (two-tailed). Using the
function f based on display rank and the Trendiness parameter τ is indeed a more precise representation
of positional accessibility. Especially in the early stages, users often select responses displayed at
lower ranks with fewer votes. While the CRP has no ability to give high scores in these cases, the
CVP properly models it by decreasing τ. The comparative advantage of the CVP declines as more
votes become available and the correlation between display rank and the number of votes increases.
For items with t ≥ 50, there is no significant difference between the two models, as exemplified in the
third column of Figure 2. These results are coherent across other communities (p > 0.07).
Improving predictive power on the voting phase is difficult because positive votes dominate in every
community. We compare the fully parametrized model to simpler partial models in which certain
parameters are set to zero. For example, a model with all parameters but β and γ knocked out is comparable
to a plain Pólya urn. As illustrated in the second column of Table 5, we verify that every sub-model
is significantly different from the full model in all major communities based on one-way ANOVA
test, implying that each feature adds distinctive and meaningful information. Having the item-specific
length bias δ_i provides significant improvements, as does having intrinsic quality q_ij and current
opinion counts (β, γ). While we omit the log-likelihood results with t ≥ 50, all models better predict true
polarity when t ≥ 50, because the log-linear model obtains a more robust estimate of community-level
parameters as it acquires more training samples.
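The rolling evaluation protocol can be sketched as follows; `fit`, `prob_of_next`, and `item.events` are hypothetical placeholders for the CVP fitting routine, the one-step scoring routine, and the observed write/vote trajectory, not the paper's code.

```python
import numpy as np

# A sketch of the rolling evaluation: refit (or warm-start) the model on the
# trajectory up to time t, then score the observed action at t + 1 by its
# negative log-likelihood, averaging across items aligned at their start.

def rolling_nll(items, fit, prob_of_next, t_max=50):
    nll, counts = np.zeros(t_max), np.zeros(t_max)
    for item in items:
        params = None
        for t in range(min(t_max, len(item.events) - 1)):
            params = fit(item.events[: t + 1], warm_start=params)
            nll[t] -= np.log(prob_of_next(params, item.events[t + 1]))
            counts[t] += 1
    return nll / np.maximum(counts, 1)
```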
Quality analysis. The primary advantage of the CVP is its ability to learn "intrinsic quality" for
each response that filters out noise from self-reinforcing voting processes. We validate these scores
by comparing them to another source of user feedback: both StackExchange and Amazon allow
users to attach comments to responses along with votes. For each response, we record the number
of comments and the average sentiment of those comments as estimated by [17]. As a baseline, we
10 We fix α to 0.5 after searching over a wide range of values.
also calculate the final display rank of each response, which we convert to a z-score to make it more
comparable to the quality scores qij . After sorting responses based on display rank and quality rank,
we measure the association between the two rankings and comment sentiment with linear regression.
Results are shown for StackOverflow in Figure 2. As expected, highly-ranked responses have more
comments, but we also find that there are more comments for both high and low values of intrinsic
quality. Both better display rank and higher quality score q_ij are clearly associated with more positive
comments (slope ∈ [0.47, 0.64]), but the residuals of quality rank (0.012) are on average less than
half the residuals of display rank (0.028). In addition, we also calculate the "bumpiness" of these
plots by computing the mean variation of two consecutive slopes between each adjacent pair of data
points. Quality rank reduces the bumpiness of display rank from 0.391 to 0.226 on average, implying the
estimated intrinsic quality yields a locally as well as globally consistent ranking.11
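One way to compute the bumpiness statistic, assuming it is the mean absolute change between consecutive slopes (our reading of "mean variation of two consecutive slopes"):

```python
import numpy as np

# Bumpiness under our reading: the average absolute change between the
# slopes of adjacent segments of the plotted curve.

def bumpiness(x, y):
    slopes = np.diff(y) / np.diff(x)  # slope of each consecutive segment
    return float(np.mean(np.abs(np.diff(slopes))))
```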
Community analysis. The 2D embedding
in Figure 1 shows that we can compare and
contrast the different evaluation cultures of
communities using two inferred behavioral
coefficients: Trendiness τ and Conformity ρ.
Communities are sized according to the number of items and colored based on a manual
clustering. Related communities collocate
in the same neighborhood. Religion, scholarship, and meta-discussions cluster towards
the bottom left, where users are interested
in many different opinions, and are happy
to disagree with each other. Going from left
to right, communities become more trendy:
users in trendier communities tend to select
and vote mostly on already highly-ranked
responses. Going from bottom to top, users
become increasingly likely to conform to the
majority opinion on any given response.
Figure 3: Sub-community embedding for StackOverflow.
By comparing related communities we can observe that
characteristics of user communities determine voting behavior more than technical similarity. Highly
theoretical and abstract communities (cstheory) have low Trendiness but high Conformity. More
applied, but still graduate-level, communities in similar fields (cs, mathoverflow, stats) show less
Conformity but greater Trendiness. Finally, more practical homework-oriented forums (physics,
math) are even more trendy. In contrast, users in english are trendy and debatable. Users in Amazon
are most sensitive to trendy reviews and least afraid of voicing minority opinion.
StackOverflow is by far the largest community, and it is reasonable to wonder whether the Trendiness
parameter is simply a proxy for size. When we subdivide StackOverflow by programming languages
however (see Figure 3), individual community averages can be distinguished, but they all remain in
the same region. Javascript programmers are more satisfied with trendy responses than those using
c/c++. Mobile developers tend to be more conformist, while Perl hackers are more likely to argue.
5 Conclusions
Helpfulness voting is a powerful tool to evaluate user-generated responses such as product reviews
and question answers. However such votes can be socially reinforced by positional accessibility and
existing evaluations by other users. In contrast to many exchangeable random processes, the CVP
takes into account sequences of votes, assigning different weights based on the context that each vote
was cast. Instead of trying to model the response ordering function f , which is mechanism-specific
and often changes based on service providers? strategies, we leverage the fully observed trajectories of
votes, estimating the hidden intrinsic quality of each response and inferring two behavioral coefficients
for community-level exploration. The proposed log-linear urn model is capable of generating nonexchangeable votes with great scalability to incorporate other factors such as length bias or other
textual features. As we are more able to observe social interactions as they are occurring and not just
summarized after the fact, we will increasingly be able to use models beyond exchangeability.
11 All numbers and p-values in paragraphs are weighted averages over all 83 communities, whereas Table 5 only
includes results for the major communities and their own weighted averages due to space limits.
References
[1] D. J. Aldous. Exchangeability and related topics. In École d'Été St Flour 1983, pages 1–198. Springer-Verlag, 1985.
[2] D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested Chinese
restaurant process. In Advances in Neural Information Processing Systems, NIPS '03, 2003.
[3] D. M. Blei and P. I. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning
Research, pages 2461–2488, 2011.
[4] C. Danescu-Niculescu-Mizil, G. Kossinets, J. Kleinberg, and L. Lee. How opinions are received by online
communities: A case study on Amazon.com helpfulness votes. In Proceedings of World Wide Web, WWW
'09, pages 141–150, 2009.
[5] A. Ghose and P. G. Ipeirotis. Designing novel review ranking systems: Predicting the usefulness and
impact of reviews. In Proceedings of the Ninth International Conference on Electronic Commerce, ICEC
'07, pages 303–310, 2007.
[6] T. Joachims, L. Granka, B. Pan, H. Hembrooke, F. Radlinski, and G. Gay. Evaluating the accuracy of
implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information
Systems, 25(2), 2007.
[7] S.-M. Kim, P. Pantel, T. Chklovski, and M. Pennacchiotti. Automatically assessing review helpfulness. In
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP '06,
2006.
[8] J. Liu, Y. Cao, C.-Y. Lin, Y. Huang, and M. Zhou. Low-quality product review detection in opinion
summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language
Processing and Computational Natural Language Learning, EMNLP-CoNLL '07, pages 334–342, 2007.
[9] L. Mamykina, B. Manoim, M. Mittal, G. Hripcsak, and B. Hartmann. Design lessons from the fastest Q&A
site in the west. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI
'11, 2011.
[10] L. Martin and P. Pu. Prediction of helpful reviews using emotion extraction. In Proceedings of the
Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI '14, pages 1551–1557, 2014.
[11] J. Otterbacher. "Helpfulness" in online communities: A measure of message quality. In Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems, CHI '09, pages 955–964, 2009.
[12] M. J. Salganik, P. S. Dodds, and D. J. Watts. Experimental study of inequality and unpredictability in an
artificial cultural market. Science, 311:854–856, 2006.
[13] M. J. Salganik and D. J. Watts. Leading the herd astray: An experimental study of self-fulfilling prophecies
in an artificial cultural market. Social Psychology Quarterly, 71:338–355, 2008.
[14] W. Shandwick. Buy it, try it, rate it: Study of consumer electronics purchase decisions in the engagement
era. KRC Research, 2012.
[15] S. Siersdorfer, S. Chelaru, J. S. Pedro, I. S. Altingovde, and W. Nejdl. Analyzing and mining comments
and comment ratings on the social web. ACM Transactions on the Web, pages 17:1–17:39, 2014.
[16] R. Sipos, A. Ghosh, and T. Joachims. Was this review helpful to you?: It depends! Context and voting
patterns in online content. In International Conference on World Wide Web, WWW '14, pages 337–348,
2014.
[17] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep
models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference
on Empirical Methods in Natural Language Processing, EMNLP, pages 1631–1642. Association for
Computational Linguistics, 2013.
[18] Y. R. Tausczik, A. Kittur, and R. E. Kraut. Collaborative problem solving: A study of MathOverflow. In
Computer-Supported Cooperative Work and Social Computing, CSCW '14, 2014.
[19] Y. Yue, R. Patel, and H. Roehrig. Beyond position bias: Examining result attractiveness as a source of
presentation bias in clickthrough data. In Proceedings of the 19th International Conference on World Wide
Web, WWW '10, 2010.
5,928 | 6,363 | Tractable Operations for
Arithmetic Circuits of Probabilistic Models
Yujia Shen and Arthur Choi and Adnan Darwiche
Computer Science Department
University of California
Los Angeles, CA 90095
{yujias,aychoi,darwiche}@cs.ucla.edu
Abstract
We consider tractable representations of probability distributions and the polytime
operations they support. In particular, we consider a recently proposed arithmetic
circuit representation, the Probabilistic Sentential Decision Diagram (PSDD). We
show that PSDDs support a polytime multiplication operator, while they do not
support a polytime operator for summing-out variables. A polytime multiplication
operator makes PSDDs suitable for a broader class of applications compared to
classes of arithmetic circuits that do not support multiplication. As one example,
we show that PSDD multiplication leads to a very simple but effective compilation
algorithm for probabilistic graphical models: represent each model factor as a
PSDD, and then multiply them.
1
Introduction
Arithmetic circuits (ACs) have been a central representation for probabilistic graphical models,
such as Bayesian networks and Markov networks. On the reasoning side, some state-of-the-art
approaches for exact inference are based on compiling probabilistic graphical models into arithmetic
circuits [Darwiche, 2003]; see also Darwiche [2009, chapter 12]. Such approaches can exploit
parametric structure (such as determinism and context-specific independence), allowing inference to
scale sometimes to models with very high treewidth, which are beyond the scope of classical inference
algorithms such as variable elimination and jointree. For example, the ace system for compiling
ACs [Chavira and Darwiche, 2008] was the only system in the UAI'08 evaluation of probabilistic
reasoning systems to exactly solve all 250 networks in a challenging (very high-treewidth) suite of
relational models [Darwiche et al., 2008].
On the learning side, arithmetic circuits have become a popular representation for learning from
data, as they are tractable for certain probabilistic queries. For example, there are algorithms for
learning ACs of Bayesian networks [Lowd and Domingos, 2008], ACs of Markov networks [Lowd
and Rooshenas, 2013, Bekker et al., 2015] and Sum-Product Networks (SPNs) [Poon and Domingos,
2011], among other related representations.1
Depending on their properties, different classes of ACs are tractable for different queries and operations. Among these queries are maximum a posteriori (MAP) inference,2 which is an NP-complete
problem, and evaluating the partition function, which is a PP-complete problem (more intractable).
Among operations, the multiplication of two ACs stands out as particularly important, being a primitive operation in some approaches to incremental or adaptive inference [Delcher et al., 1995, Acar
et al., 2008], bottom-up compilation of probabilistic graphical models [Choi et al., 2013], and some
search-based approaches to structure learning [Bekker et al., 2015].
1 SPNs can be converted into ACs (and vice-versa) with linear size and time [Rooshenas and Lowd, 2014].
2 This is also known as most probable explanation (MPE) inference [Pearl, 1988].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this paper, we investigate the tractability of two fundamental operations on arithmetic circuits:
multiplying two ACs and summing out a variable from an AC. We show that both operations are
intractable for some influential ACs that have been employed in the probabilistic reasoning and
learning literatures. We then consider a recently proposed sub-class of ACs, called the Probabilistic
Sentential Decision Diagram (PSDD) [Kisa et al., 2014]. We show that PSDDs support a polytime
multiplication operation, which makes them suitable for a broader class of applications. We also show
that PSDDs do not support a polytime summing-out operation (a primitive operation for messagepassing inference algorithms). We empirically illustrate the advantages of PSDDs compared to
other AC representations, for compiling probabilistic graphical models. Previous approaches for
compiling probabilistic models into ACs are based on encoding these models into auxiliary logical
representations, such as a Sentential Decision Diagram (SDD) or a deterministic DNNF circuits,
which are then converted to an AC [Chavira and Darwiche, 2008, Choi et al., 2013]. PSDDs are
a direct representation of probability distributions, bypassing the overhead of intermediate logical
representations, and leading to more efficient compilations in some cases. Most importantly though,
this approach lends itself to a significantly simpler compilation algorithm: represent each factor of a
given model as a PSDD, and then multiply the factors using PSDD multiplication.
This paper is organized as follows. In Section 2, we review arithmetic circuits (ACs) as a representation of probability distributions, including PSDDs in particular. In Section 3, we introduce a polytime
multiplication operator for PSDDs, and in Section 4, we show that there is no polytime sum-out
operator for PSDDs. In Section 5, we propose a simple compilation algorithm for PSDDs based
on the multiply operator, which we evaluate empirically. We discuss related work in Section 6 and
finally conclude in Section 7. Proofs of theorems are available in the Appendix.
2
Representing Distributions Using Arithmetic Circuits
We start with the definition of factors, which include distributions as a special case.
Definition 1 (Factor) A factor f(X) over variables X maps each instantiation x of the variables X into a non-negative number f(x). The factor represents a distribution when Σ_x f(x) = 1.
We define the value of a factor at a partial instantiation y, where Y ⊆ X, as f(y) = Σ_z f(yz), where Z = X \ Y. When the factor is a distribution, f(y) corresponds to the probability of evidence y. We also define the MAP instantiation of a factor as argmax_x f(x), which corresponds to the most likely instantiation when the factor is a distribution.
The classical, tabular representation of a factor f (X) is exponential in the number of variables X.
However, one can represent such factors more compactly using arithmetic circuits.
Definition 2 (Arithmetic Circuit) An arithmetic circuit AC(X) over variables X is a rooted DAG whose internal nodes are labeled with + or × and whose leaf nodes are labeled with either indicator variables x or non-negative parameters θ. The value of the circuit at instantiation x, denoted AC(x), is obtained by assigning indicator x the value 1 if x is compatible with instantiation x and 0 otherwise, then evaluating the circuit in the standard way. The circuit AC(X) represents factor f(X) iff AC(x) = f(x) for each instantiation x.
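To make Definition 2 concrete, here is a minimal Python sketch of bottom-up circuit evaluation; the node representation and the dict-based instantiation are illustrative assumptions, not code from the paper.

    # A sketch of evaluating an arithmetic circuit AC(X) at a complete
    # instantiation x, per Definition 2. Node kinds: '+', '*', 'indicator',
    # and 'param'. The cache makes evaluation linear in circuit size,
    # since ACs are DAGs with shared sub-circuits.
    class ACNode:
        def __init__(self, kind, children=(), var=None, value=None, param=None):
            self.kind = kind                    # '+', '*', 'indicator', or 'param'
            self.children = list(children)
            self.var, self.value = var, value   # for an indicator leaf [X = value]
            self.param = param                  # for a parameter leaf (non-negative theta)

    def evaluate(node, x, cache=None):
        """Value of the circuit rooted at `node` under instantiation x (a dict)."""
        cache = {} if cache is None else cache
        if id(node) in cache:
            return cache[id(node)]
        if node.kind == 'indicator':
            val = 1.0 if x[node.var] == node.value else 0.0
        elif node.kind == 'param':
            val = node.param
        elif node.kind == '+':
            val = sum(evaluate(c, x, cache) for c in node.children)
        else:                                   # '*'
            val = 1.0
            for c in node.children:
                val *= evaluate(c, x, cache)
        cache[id(node)] = val
        return val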
A tractable arithmetic circuit allows one to efficiently answer certain queries about the factor it
represents. We next discuss two properties that lead to tractable arithmetic circuits. The first is
decomposability [Darwiche, 2001b], which was used for probabilistic reasoning in [Darwiche, 2003].
Definition 3 (Decomposability) Let n be a node in an arithmetic circuit AC(X). The variables of n, denoted vars(n), are the variables X ∈ X with some indicator x appearing at or under node n. An arithmetic circuit is decomposable iff every pair of children c1 and c2 of a ×-node satisfies vars(c1) ∩ vars(c2) = ∅.
The second property is determinism [Darwiche, 2001a], which was also employed for probabilistic
reasoning in Darwiche [2003].
Definition 4 (Determinism) An arithmetic circuit AC(X) is deterministic iff each +-node has at
most one non-zero input when the circuit is evaluated under any instantiation x of the variables X.
A third property called smoothness is also desirable as it simplifies the statement of certain AC
algorithms, but is less important for tractability as it can be enforced in polytime [Darwiche, 2001a].
Definition 5 (Smoothness) An arithmetic circuit AC(X) is smooth iff it contains at least one indicator for each variable in X, and for each child c of +-node n, we have vars(n) = vars(c).
Decomposability and determinism lead to tractability in the following sense. Let Pr (X) be a
distribution represented by a decomposable, deterministic and smooth arithmetic circuit AC(X).
Then one can compute the following queries in time that is linear in the size of circuit AC(X): the
probability of any partial instantiation, Pr(y), where Y ⊆ X [Darwiche, 2003], and the most likely instantiation, argmax_x Pr(x) [Chan and Darwiche, 2006]. The decision problems of these queries
are known to be PP-complete and NP-complete for Bayesian networks [Roth, 1996, Shimony, 1994].
A number of methods have been proposed for compiling a Bayesian network into a decomposable, deterministic and smooth AC that represents its distribution [Darwiche, 2003]. Figure 1 depicts such a circuit that represents the distribution of Bayesian network A → B. One method ensures that the size of the AC is proportional to the size of a jointree for the network. Another method yields circuits that can sometimes be exponentially smaller, and is implemented in the publicly available ace system [Chavira and Darwiche, 2008]; see also Darwiche et al. [2008]. Additional methods are discussed in Darwiche [2009, chapter 12].

Figure 1: An AC for a Bayesian network A → B. [Circuit diagram omitted: +- and ×-nodes over the network's indicators and parameters.]
This work is motivated by the following limitation of these tractable circuits, which may narrow their
applicability in probabilistic reasoning and learning.
Definition 6 (Multiplication) The product of two arithmetic circuits AC 1 (X) and AC 2 (X) is an
arithmetic circuit AC(X) such that AC(x) = AC 1 (x)AC 2 (x) for every instantiation x.
Theorem 1 Computing the product of two decomposable ACs is NP-hard if the product is also
decomposable. Computing the product of two decomposable and deterministic ACs is NP-hard if the
product is also decomposable and deterministic.
We now investigate a newly introduced class of tractable ACs, called the Probabilistic Sentential
Decision Diagram (PSDD) [Kisa et al., 2014]. In particular, we show that this class of circuits admits
a tractable product operation and then explore an application of this operation to exact inference in
probabilistic graphical models.
PSDDs were motivated by the need to represent probability distributions Pr (X) with many instantiations x attaining zero probability, Pr (x) = 0. Consider the distribution Pr (X) in Figure 2(a) for an
example. The first step in constructing a PSDD for this distribution is to construct a special Boolean
circuit that captures its zero entries; see Figure 2(b). The Boolean circuit captures zero entries in the
following sense. For each instantiation x, the circuit evaluates to 0 at instantiation x iff Pr (x) = 0.
The second and final step of constructing a PSDD amounts to parameterizing this Boolean circuit
(e.g., by learning them from data), by including a local distribution on the inputs of each or-gate; see
Figure 2(c).
The Boolean circuit underlying a PSDD is known as a Sentential Decision Diagram (SDD) [Darwiche,
2011]. These circuits satisfy specific syntactic and semantic properties based on a binary tree, called
a vtree, whose leaves correspond to variables; see Figure 2(d). The following definition of SDD
circuits is based on the one given by Darwiche [2011] and uses a different notation.
Definition 7 (SDD) An SDD normalized for a vtree v is a Boolean circuit defined as follows. If v is a leaf node labeled with variable X, then the SDD is either X, ¬X, ⊥, or an or-gate with inputs X and ¬X. If v is an internal vtree node, then the SDD has the structure in Figure 3, where p1, . . . , pn are SDDs normalized for the left child v^l and s1, . . . , sn are SDDs normalized for the right child v^r. Moreover, the circuits p1, . . . , pn are consistent, mutually exclusive and exhaustive.
Figure 2: A probability distribution and its SDD/PSDD representation. Note that the numbers annotating or-gates in (b) & (c) correspond to vtree node IDs in (d). Further, note that while the circuit appears to be a tree, the input variables are shared and hence the circuit is not a tree. [The diagrams in panels (b) SDD, (c) PSDD, and (d) vtree are omitted here; panel (a) is the distribution below.]

(a) Distribution:
    A B C | Pr
    0 0 0 | 0.2
    0 0 1 | 0.2
    0 1 0 | 0.0
    0 1 1 | 0.1
    1 0 0 | 0.0
    1 0 1 | 0.3
    1 1 0 | 0.1
    1 1 1 | 0.1
SDD circuits alternate between or-gates and and-gates. Their and-gates have two inputs each. The or-gates of these circuits are such that at most one input will be high under any circuit input. An SDD circuit may produce a 1-output for every possible input (i.e., the circuit represents the function true). These circuits arise when representing strictly positive distributions (with no zero entries).

A PSDD is obtained by including a distribution θ1, . . . , θn on the inputs of each or-gate; see Figure 3. The semantics of PSDDs are given in [Kisa et al., 2014].3 We next provide an alternative semantics, which is based on converting a PSDD into an arithmetic circuit.

Figure 3: [Diagram omitted: an or-gate whose inputs θ1, . . . , θn feed and-gates over the pairs (p1, s1), . . . , (pn, sn).] Each (p_i, s_i, θ_i) is called an element of the or-gate, where the p_i's are called primes and the s_i's are called subs. Moreover, Σ_i θ_i = 1 and exactly one p_i evaluates to 1 under any circuit input.

Definition 8 (ACs of PSDDs) The arithmetic circuit of a PSDD is obtained as follows. Leaf nodes x and ⊥ are converted into x and 0, respectively. Each and-gate is converted into a ×-node. Each or-node with children c1, . . . , cn and corresponding parameters θ1, . . . , θn is converted into a +-node with children θ1 × c1, . . . , θn × cn.
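A sketch of the conversion in Definition 8, reusing the illustrative ACNode class from the evaluation sketch above; the PSDD node interface (with `elements` triples and the leaf tests) is likewise an assumption, not the paper's code.

    # Convert a PSDD into its arithmetic circuit per Definition 8.
    # (For brevity this sketch omits caching of shared PSDD nodes and the
    # or-gate-over-a-literal leaf case from Definition 7.)
    def psdd_to_ac(n):
        if n.is_false():                    # leaf "bottom" becomes the constant 0
            return ACNode('param', param=0.0)
        if n.is_literal():                  # leaf X or not-X becomes an indicator
            return ACNode('indicator', var=n.var, value=n.value)
        children = []
        for (p, s, theta) in n.elements:    # or-node with elements (p_i, s_i, theta_i)
            and_gate = ACNode('*', children=[psdd_to_ac(p), psdd_to_ac(s)])
            children.append(ACNode('*', children=[ACNode('param', param=theta), and_gate]))
        return ACNode('+', children=children)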
Theorem 2 The arithmetic circuit of a PSDD represents the distribution induced by the PSDD. Moreover, the arithmetic circuit is decomposable and deterministic.4
The PSDD is a complete and canonical representation of probability
distributions. That is, PSDDs can represent any distribution, and there is a unique PSDD for that distribution (under some conditions). A variety of probabilistic queries are tractable on PSDDs, including
that of computing the probability of a partial variable instantiation and the most likely instantiation.
Moreover, the maximum likelihood parameter estimates of a PSDD are unique given complete data,
and these parameters can be computed efficiently using closed-form estimates; see [Kisa et al.,
2014] for details. Finally, PSDDs have been used to learn distributions over combinatorial objects,
including rankings and permutations [Choi et al., 2015], paths and games [Choi et al., 2016]. In these
applications, the Boolean circuit underlying a PSDD captures variable instantiations that correspond
to combinatorial objects, while its parameterization induces a distribution over these objects.
As a concrete example, PSDDs were used to induce distributions over the permutations of n items as
follows. We have a variable Xij for each i, j ∈ {1, . . . , n} denoting that item i is at position j in the
permutation. Clearly, not all instantiations of these variables correspond to (valid) permutations. An
SDD circuit is then constructed, which outputs 1 iff the corresponding input corresponds to a valid
permutation. Each parameterization of this SDD circuit leads to a distribution on permutations and
these parameterizations can be learned from data; see Choi et al. [2015].
3 Let x be an instantiation of the PSDD variables. If the SDD circuit outputs 0 at input x, then Pr(x) = 0. Otherwise, traverse the circuit top-down, visiting the (unique) high input of each visited or-node, and all inputs of each visited and-node. Then Pr(x) is the product of the parameters visited during the traversal process.
4 The arithmetic circuit also satisfies a minor weakening of smoothness with the same effect as smoothness.
3
Multiplying Two PSDDs
Factors and their operations are fundamental to probabilistic inference, whether exact or approximate
[Darwiche, 2009, Koller and Friedman, 2009]. Consider two of the most basic operations on factors:
(1) computing the product of two factors and (2) summing out a variable from a factor. With these
operations, one can directly implement various inference algorithms, including variable elimination,
the jointree algorithm, and message-passing algorithms such as loopy belief propagation. Typically,
tabular representations (and their sparse variations) are used to represent factors and implement
the above algorithms; see Larkin and Dechter [2003], Sanner and McAllester [2005], Chavira and
Darwiche [2007] for some alternatives.
More generally, factor multiplication is useful for online or incremental reasoning with probabilistic
models. In some applications, we may not have access to all factors of a model beforehand, to
compile as a jointree or an arithmetic circuit. For example, when learning the structure of a Markov
network from data [Bekker et al., 2015], we may want to introduce and remove candidate factors from
a model, while evaluating the changes to the log likelihood. Certain realizations of generalized belief
propagation also require the multiplication of factors [Yedidia et al., 2005, Choi and Darwiche, 2011].
In these realizations, one can use factor multiplication to enforce dependencies between factors that
have been relaxed to make inference more tractable, albeit less accurate.
We next discuss PSDD multiplication, while deferring summing out to the following section.
Algorithm 1 Multiply(n1, n2, v)
input: PSDDs n1, n2 normalized for vtree v
output: PSDD n and constant κ
main:
 1: n, k ← cache_m(n1, n2), cache_c(n1, n2)             ▷ check if previously computed
 2: if n ≠ null then return (n, k)                      ▷ return previously cached result
 3: else if v is a leaf then (n, κ) ← BaseCase(n1, n2)  ▷ n1, n2 are literals, ⊥ or simple or-gates
 4: else                                                ▷ n1 and n2 have the structure in Figure 3
 5:     γ, κ ← {}, 0                                    ▷ initialization
 6:     for all elements (p, s, α) of n1 do             ▷ see Figure 3
 7:         for all elements (q, r, β) of n2 do         ▷ see Figure 3
 8:             (m1, k1) ← Multiply(p, q, v^l)          ▷ recursively multiply primes p and q
 9:             if k1 ≠ 0 then                          ▷ if (m1, k1) is not a trivial factor
10:                 (m2, k2) ← Multiply(s, r, v^r)      ▷ recursively multiply subs s and r
11:                 η ← k1 · k2 · α · β                 ▷ compute weight of element (m1, m2)
12:                 κ ← κ + η                           ▷ aggregate weights of elements
13:                 add (m1, m2, η) to γ
14:     γ ← {(m1, m2, η/κ) | (m1, m2, η) ∈ γ}           ▷ normalize parameters of γ
15:     n ← unique PSDD node with elements γ            ▷ cache lookup for unique nodes
16: cache_m(n1, n2) ← n                                 ▷ store results in cache
17: cache_c(n1, n2) ← κ
18: return (n, κ)
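The following Python sketch mirrors Algorithm 1; the PSDD/vtree interfaces, `base_case`, and `unique_node` are assumed helpers rather than the authors' implementation. The two caches are what give the O(s1·s2) bound of Theorem 3, since each pair of nodes is multiplied at most once.

    # A sketch of Algorithm 1 (Multiply). Assumes the product is not
    # identically zero, so kappa > 0 when normalizing.
    cache_m, cache_c = {}, {}

    def multiply(n1, n2, v):
        key = (id(n1), id(n2))
        if key in cache_m:                          # previously computed?
            return cache_m[key], cache_c[key]
        if v.is_leaf():
            n, kappa = base_case(n1, n2)            # literals, bottom, simple or-gates
        else:
            gamma, kappa = [], 0.0
            for (p, s, alpha) in n1.elements:
                for (q, r, beta) in n2.elements:
                    m1, k1 = multiply(p, q, v.left)       # multiply primes
                    if k1 != 0:                           # skip trivial factors
                        m2, k2 = multiply(s, r, v.right)  # multiply subs
                        eta = k1 * k2 * alpha * beta
                        kappa += eta
                        gamma.append((m1, m2, eta))
            gamma = [(m1, m2, eta / kappa) for (m1, m2, eta) in gamma]
            n = unique_node(gamma)                  # cache lookup for unique nodes
        cache_m[key], cache_c[key] = n, kappa
        return n, kappa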
Our first observation is that the product of two distributions is generally not a distribution, but a factor.
Moreover, a factor f(X) can always be represented by a distribution Pr(X) and a constant κ such that f(x) = κ · Pr(x). Hence, our proposed multiplication method will output a PSDD together with a constant, as given in Algorithm 1. This algorithm uses three caches: one for storing constants (cache_c), another for storing circuits (cache_m), and a third used to implement Line 15.5 This line
ensures that the PSDD has no duplicate structures of the form given in Figure 3. The description
of function BaseCase() on Line 3 is available in the Appendix. It appears inside the proof of the
following theorem, which establishes the soundness and complexity of the given algorithm.
Theorem 3 Algorithm 1 outputs a PSDD n normalized for vtree v. Moreover, if Pr1(X) and Pr2(X) are the distributions of input PSDDs n1 and n2, and Pr(X) is the distribution of output PSDD n, then Pr1(x)·Pr2(x) = κ · Pr(x) for every instantiation x. Finally, Algorithm 1 takes time O(s1·s2), where s1 and s2 are the sizes of the input PSDDs.
5 The cache key of a PSDD node in Figure 3 is based on the (unique) IDs of nodes p_i/s_i and parameters θ_i.
We will later discuss an application of PSDD multiplication to probabilistic inference, in which we cascade these multiplication operations. In particular, we end up multiplying two factors f1 and f2, represented by PSDDs n1 and n2 and the corresponding constants κ1 and κ2. We use Algorithm 1 for this purpose, multiplying PSDDs n1 and n2 (distributions), to yield a PSDD n (distribution) and a constant κ. The factor f1·f2 will then correspond to PSDD n and constant κ · κ1 · κ2.
Figure 4: A vtree and two of its projections. [The three tree diagrams are omitted here.]
Another observation is that Algorithm 1 assumes
that the input PSDDs are over the same vtree and,
hence, same set of variables. A more detailed version of this algorithm can multiply two PSDDs
over different sets of variables as long as the PSDDs have compatible vtrees. We omit this version
here to simplify the presentation, but mention that
it has the same complexity as Algorithm 1.
Two vtrees over variables X and Y are compatible
iff they can be obtained by projecting some vtree
on variables X and Y, respectively.
Definition 9 (Vtree Projection) Let v be a vtree over variables Z. The projection of v on variables X ⊆ Z is obtained as follows. Successively remove every maximal subtree v′ whose variables are outside X, while replacing the parent of v′ with its sibling.
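A recursive sketch of this definition, assuming an illustrative binary Vtree class (leaf test, `var`, `left`/`right`, and a two-child constructor); this is not the paper's code.

    # Project vtree v onto variable set X, per Definition 9. Removing a
    # maximal subtree whose variables all lie outside X and splicing in
    # its sibling is exactly what the None-propagation below performs.
    def project(v, X):
        if v.is_leaf():
            return v if v.var in X else None
        left, right = project(v.left, X), project(v.right, X)
        if left is None:
            return right          # parent replaced by its (right) sibling
        if right is None:
            return left           # parent replaced by its (left) sibling
        return Vtree(left, right)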
Figure 4 depicts a vtree and two of its projections. When compiling a probabilistic graphical model
into a PSDD, we first construct a vtree v over all variables in the model. We then compile each factor
f (X) into a PSDD, using the projection of v on variables X. We finally multiply the PSDDs of these
factors. We will revisit these steps later.
4
Summing-Out a Variable in a PSDD
We now discuss the summing out of variables from distributions represented by arithmetic circuits.
Definition 10 (Sum Out) Summing-out a variable X ∈ X from a factor f(X) results in another factor over variables Y = X \ {X}, denoted by Σ_X f and defined as: (Σ_X f)(y) := Σ_x f(x, y).
When the factor is a distribution (i.e., normalized), the sum out operation corresponds to marginalization. Together with multiplication, summing out provides a direct implementation of algorithms such
as variable elimination and those based on message passing.
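The following is a small sketch of Definition 10 on a tabular factor; the dict-based representation (tuples keyed in the order of a variable list) is an illustrative assumption, not the paper's.

    # Sum variable X out of factor f over `variables`, per Definition 10.
    def sum_out(f, variables, X):
        i = variables.index(X)
        g = {}
        for assignment, value in f.items():
            y = assignment[:i] + assignment[i + 1:]   # drop X's coordinate
            g[y] = g.get(y, 0.0) + value              # sum over values of X
        return g, [v for v in variables if v != X]

    # Example: summing B out of a factor over (A, B).
    f = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}
    g, rest = sum_out(f, ['A', 'B'], 'B')   # g == {(0,): 0.5, (1,): 0.5}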
Just like multiplication, summing out is also intractable for a common class of arithmetic circuits.
Theorem 4 The sum-out operation on decomposable and deterministic ACs is NP-hard, assuming
the output is also decomposable and deterministic.
This theorem does not preclude the possibility that the resulting AC is of polynomial size with respect
to the size of the input AC; it just says that the computation is intractable. Summing out is also
intractable on PSDDs, but the result is stronger here as the size of the output can be exponential.
Theorem 5 There exists a class of factors f(X) and a variable X ∈ X, such that n = |X| can be arbitrarily large, f(X) has a PSDD whose size is linear in n, while the PSDD of Σ_X f has size exponential in n for every vtree.
Only the multiplication operation is needed to compile probabilistic graphical models into arithmetic
circuits. Even for inference algorithms that require summing out variables, such as variable elimination, summing out can still be useful, even if intractable, since the size of resulting arithmetic circuit
will not be larger than a tabular representation.
5
Compiling Probabilistic Graphical Models into PSDDs
Even though PSDDs form a strict subclass of decomposable and deterministic ACs (and satisfy
stronger properties), one can still provide the following classical guarantee on PSDD size.
Theorem 6 The interaction graph of factors f1(X1), . . . , fn(Xn) has nodes corresponding to variables X1 ∪ . . . ∪ Xn and an edge between two variables iff they appear in the same factor. There is a PSDD for the product f1 · · · fn whose size is O(m · exp(w)), where m is the number of variables and w is its treewidth.
This theorem provides an upper bound on the size of PSDD compilations for both Bayesian and
Markov networks. An analogous guarantee is available for SDD circuits of propositional models,
using a special type of vtree known as a decision vtree [Oztok and Darwiche, 2014]. We next discuss
our experiments, which focused on the compilation of Markov networks using decision vtrees.
To compile a Markov network, we first construct a decision vtree using a known technique.6 For each
factor of the network, we project the vtree on the factor variables, and then compile the factor into
a PSDD. This can be done in time linear in the factor size, but we omit the details here. We finally
multiply the obtained PSDDs. The order of multiplication is important to the overall efficiency of the
compilation approach. The order we used is as follows. We assign each PSDD to the lowest vtree
node containing the PSDD variables, and then multiply PSDDs in the order that we encounter them
as we traverse the vtree bottom-up (this is analogous to compiling CNFs in Choi et al. [2013]).
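A sketch of this compilation loop, built from the illustrative primitives above; for simplicity it multiplies factors in list order, whereas the experiments use the bottom-up vtree order just described, and it assumes the compatible-vtree variant of Multiply from Section 3 (`compile_factor` and `project` are the assumed helpers).

    # Compile a Markov network's factors into a single PSDD by repeated
    # multiplication. Returns (n, const): the factor product equals
    # const * Pr_n, where Pr_n is the distribution of the PSDD n.
    def compile_network(factors, vtree):
        compiled = [compile_factor(f, project(vtree, f.vars)) for f in factors]
        n, const = compiled[0], 1.0
        for m in compiled[1:]:
            n, kappa = multiply(n, m, vtree)   # each multiply returns a constant
            const *= kappa                     # accumulate the constants
        return n, const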
Table 1 summarizes our results. We compiled Markov networks into three types of arithmetic circuits.
The first compilation (AC1) is to decomposable and deterministic ACs using ace [Chavira and
Darwiche, 2008].7 The second compilation (AC2) is also to decomposable and deterministic ACs, but
using the approach proposed in Choi et al. [2013]. The third compilation is to PSDDs as discussed
above. The first two approaches are based on reducing the inference problem into a weighted model
counting problem. In particular, these approaches encode the network using Boolean expressions,
which are compiled to logical representations (d-DNNF or SDD), from which an arithmetic circuit is
induced. The systems underlying these approaches are quite complex and are the result of many years
of engineering. In contrast, the proposed compilation to PSDDs does not rely on an intermediate
representation or additional boxes, such as d-DNNF or SDD compilers.
The benchmarks in Table 1 are from the UAI-14 Inference Competition.8 We selected all networks
over binary variables in the MAR track, and report a network only if at least one approach successfully
compiled it (given time and space limits of 30 minutes and 16GB). We report the size (the number
of edges) and time spent for each compilation. First, we note that for all benchmarks that compiled
to both PSDD and AC2 (based on SDDs), the PSDD size is always smaller. This can be attributed
in part to the fact that reductions to weighted model counting represent parameters explicitly as
variables, which are retained throughout the compilation process. In contrast, PSDD parameters are
annotated on its edges. More interestingly, when we multiply two PSDD factors, the parameters of
the inputs may not persist in the output PSDD. That is, the PSDD only maintains enough parameters
to represent the resulting distribution, which further explains the size differences.
In the Promedus benchmarks, we also see that in all but 5 cases, the compiled PSDD is smaller than
AC1. However, several Grids benchmarks were compilable to AC1, but failed to compile to AC2
or PSDD, given the time and space limits. On the other hand, we were able to compile some of the
relational benchmarks to PSDD, which did not compile to AC1 and compiled partially to AC2.
6
Related Work
Tabular representations and their sparse variations (e.g., Larkin and Dechter [2003]) are typically
used to represent factors for probabilistic inference and learning. Rules and decision trees are more
succinct representations for modeling context-specific independence, although they are not much more
amenable to exact inference compared to tabular representations [Boutilier et al., 1996, Friedman and
Goldszmidt, 1998]. Domain specific representations have been proposed, e.g., in computer vision
6 We used the minic2d package, which is available at reasoning.cs.ucla.edu/minic2d/.
7 The ace system is publicly available at http://reasoning.cs.ucla.edu/ace/.
8 http://www.hlt.utdallas.edu/~vgogate/uai14-competition/index.html
Table 1: AC compilation size (number of edges) and time (in seconds)
network         | size AC1   | size AC2   | size psdd  | time AC1 | time AC2 | time psdd
Alchemy_11      | 12,705,213 | -          | 13,715,906 | 130.83   | -        | 300.80
Grids_11        | 81,074,816 | -          | -          | 271.97   | -        | -
Grids_12        | 232,496    | 457,529    | 201,250    | 0.93     | 1.12     | 1.68
Grids_13        | 81,090,432 | -          | -          | 273.88   | -        | -
Grids_14        | 83,186,560 | -          | -          | 279.12   | -        | -
Segmentation_11 | 20,895,884 | 41,603,129 | 30,951,708 | 72.39    | 204.54   | 223.60
Segmentation_12 | 15,840,404 | 41,005,721 | 34,368,060 | 51.27    | 209.03   | 283.79
Segmentation_13 | 33,746,511 | 78,028,443 | 33,726,812 | 117.46   | 388.97   | 255.29
Segmentation_14 | 16,965,928 | 48,333,027 | 46,363,820 | 62.31    | 279.19   | 639.07
Segmentation_15 | 29,888,972 | -          | 33,866,332 | 107.87   | -        | 273.67
Segmentation_16 | 18,799,112 | 54,557,867 | 19,935,308 | 65.64    | 265.07   | 163.38
relational_3    | -          | 183,064    | 41,070     | -        | 1.21     | 10.43
relational_5    | -          | -          | 217,696    | -        | -        | 594.68
Promedus_11     | 67,036     | 174,592    | 30,542     | 6.80     | 1.88     | 2.28
Promedus_12     | 45,119     | 349,916    | 48,814     | 0.91     | 5.81     | 2.46
Promedus_13     | 42,065     | 83,701     | 26,100     | 0.80     | 0.23     | 3.94
Promedus_14     | 2,354,180  | 3,667,740  | 749,528    | 21.64    | 33.27    | 24.90
Promedus_15     | 14,363     | 31,176     | 9,520      | 0.95     | 0.10     | 1.52
Promedus_16     | 45,935     | 154,467    | 29,150     | 1.35     | 0.40     | 2.06
Promedus_17     | 3,336,316  | 9,849,598  | 1,549,170  | 68.08    | 48.47    | 50.22
network         | size AC1   | size AC2   | size psdd  | time AC1 | time AC2 | time psdd
Promedus_18     | 3,006,654  | 762,247    | 539,478    | 20.46    | 18.38    | 21.20
Promedus_19     | 796,928    | 1,171,288  | 977,510    | 6.80     | 25.01    | 68.62
Promedus_20     | 70,422     | 188,322    | 70,492     | 0.96     | 3.24     | 3.46
Promedus_21     | 17,528     | 31,911     | 10,944     | 0.62     | 0.18     | 1.78
Promedus_22     | 26,010     | 39,016     | 33,064     | 0.63     | 0.10     | 1.58
Promedus_23     | 329,669    | 1,473,628  | 317,514    | 3.29     | 17.77    | 10.88
Promedus_24     | 4,774      | 9,085      | 1,960      | 0.45     | 0.05     | 0.80
Promedus_25     | 556,179    | 3,614,581  | 407,974    | 7.66     | 32.90    | 6.78
Promedus_26     | 57,190     | 24,578     | 5,146      | 0.71     | 198.74   | 2.72
Promedus_27     | 33,611     | 52,698     | 19,434     | 0.73     | 0.55     | 1.16
Promedus_28     | 24,049     | 46,364     | 17,084     | 1.04     | 0.30     | 1.59
Promedus_29     | 10,403     | 20,600     | 4,828      | 0.54     | 0.08     | 1.88
Promedus_30     | 9,884      | 21,230     | 6,734      | 0.50     | 0.07     | 1.23
Promedus_31     | 17,977     | 31,754     | 10,842     | 0.57     | 0.12     | 1.96
Promedus_32     | 15,215     | 33,064     | 8,682      | 0.59     | 0.11     | 1.77
Promedus_33     | 10,734     | 18,535     | 4,006      | 0.59     | 0.07     | 1.57
Promedus_34     | 38,113     | 54,214     | 21,398     | 0.87     | 0.78     | 1.78
Promedus_35     | 18,765     | 31,792     | 11,120     | 0.68     | 0.13     | 1.79
Promedus_36     | 19,175     | 31,792     | 11,004     | 1.22     | 0.12     | 1.91
Promedus_37     | 77,088     | 144,664    | 79,210     | 1.49     | 3.50     | 6.15
Promedus_38     | 177,560    | 593,675    | 123,552    | 1.67     | 17.19    | 8.09
[Felzenszwalb and Huttenlocher, 2006], to allow for more efficient factor operations. Algebraic
Decision Diagrams (ADDs) and Algebraic Sentential Decision Diagrams (ASDDs) can also be used
to multiply two factors in polytime [Bahar et al., 1993, Herrmann and de Barros, 2013], but their sizes
can grow quickly with repeated multiplications: ADDs have a distinct node for each possible value
of a factor/distribution. Since ADDs also support a polytime summing-out operation, ADDs are more
commonly used in the context of variable elimination [Sanner and McAllester, 2005, Chavira and
Darwiche, 2007], and in message passing algorithms [Gogate and Domingos, 2013]. Probabilistic
Decision Graphs (PDGs) and AND/OR Multi-Valued Decision Diagrams (AOMDD) support a
polytime multiply operator, and also have treewidth upper bounds when compiling probabilistic
graphical models [Jaeger, 2004, Mateescu et al., 2008]. Both PDGs and AOMDDs can be viewed as
sub-classes of PSDDs that branch on variables instead of sentences as is the case with PSDDs; this
distinction can lead to exponential reductions in size [Xue et al., 2012, Bova, 2016].
7
Conclusion
We considered the tractability of multiplication and summing-out operators for arithmetic circuits
(ACs), as tractable representations of factors and distributions. We showed that both operations are
intractable for deterministic and decomposable ACs (under standard complexity theoretic assumptions). We also showed that for a sub-class of ACs, known as PSDDs, a polytime multiplication
operator is supported. Moreover, we showed that PSDDs do not support summing-out in polytime
(unconditionally). Finally, we illustrated the utility of PSDD multiplication, providing a relatively
simple but effective algorithm for compiling probabilistic graphical models into PSDDs.
Acknowledgments
This work was partially supported by NSF grant #IIS-1514253 and ONR grant #N00014-15-1-2339.
References
U. A. Acar, A. T. Ihler, R. R. Mettu, and Ü. Sümer. Adaptive inference on general graphical models. In UAI, pages 1–8, 2008.
R. I. Bahar, E. A. Frohm, C. M. Gaona, G. D. Hachtel, E. Macii, A. Pardo, and F. Somenzi. Algebraic decision diagrams and their applications. In ICCAD, pages 188–191, 1993.
J. Bekker, J. Davis, A. Choi, A. Darwiche, and G. Van den Broeck. Tractable learning for complex probability queries. In NIPS, 2015.
C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In UAI, pages 115–123, 1996.
S. Bova. SDDs are exponentially more succinct than OBDDs. In AAAI, pages 929–935, 2016.
H. Chan and A. Darwiche. On the robustness of most probable explanations. In UAI, 2006.
M. Chavira and A. Darwiche. Compiling Bayesian networks using variable elimination. In IJCAI, 2007.
M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. AIJ, 172(6–7):772–799, April 2008.
A. Choi and A. Darwiche. Relax, compensate and then recover. In T. Onada, D. Bekki, and E. McCready, editors, NFAI, volume 6797 of LNCS, pages 167–180. Springer, 2011.
A. Choi, D. Kisa, and A. Darwiche. Compiling probabilistic graphical models using sentential decision diagrams. In ECSQARU, pages 121–132, 2013.
A. Choi, G. Van den Broeck, and A. Darwiche. Tractable learning for structured probability spaces: A case study in learning preference distributions. In IJCAI, 2015.
A. Choi, N. Tavabi, and A. Darwiche. Structured features in naive Bayes classification. In AAAI, 2016.
A. Darwiche. On the tractable counting of theory models and its application to truth maintenance and belief revision. Journal of Applied Non-Classical Logics, 11(1–2):11–34, 2001a.
A. Darwiche. Decomposable negation normal form. J. ACM, 48(4):608–647, 2001b.
A. Darwiche. A differential approach to inference in Bayesian networks. J. ACM, 50(3):280–305, 2003.
A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
A. Darwiche. SDD: A new canonical representation of propositional knowledge bases. In IJCAI, pages 819–826, 2011.
A. Darwiche and P. Marquis. A knowledge compilation map. JAIR, 17:229–264, 2002.
A. Darwiche, R. Dechter, A. Choi, V. Gogate, and L. Otten. Results from the probabilistic inference evaluation of UAI-08. 2008.
A. L. Delcher, A. J. Grove, S. Kasif, and J. Pearl. Logarithmic-time updates and queries in probabilistic networks. In UAI, pages 116–124, 1995.
P. F. Felzenszwalb and D. P. Huttenlocher. Efficient belief propagation for early vision. IJCV, 70(1):41–54, 2006.
N. Friedman and M. Goldszmidt. Learning Bayesian networks with local structure. In Learning in Graphical Models, pages 421–459. Springer, 1998.
V. Gogate and P. M. Domingos. Structured message passing. In UAI, 2013.
R. G. Herrmann and L. N. de Barros. Algebraic sentential decision diagrams in symbolic probabilistic planning. In Proceedings of the Brazilian Conference on Intelligent Systems (BRACIS), pages 175–181, 2013.
M. Jaeger. Probabilistic decision graphs – combining verification and AI techniques for probabilistic inference. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 12:19–42, 2004.
D. Kisa, G. Van den Broeck, A. Choi, and A. Darwiche. Probabilistic sentential decision diagrams. In KR, 2014.
D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
D. Larkin and R. Dechter. Bayesian inference in the presence of determinism. In AISTATS, 2003.
D. Lowd and P. M. Domingos. Learning arithmetic circuits. In UAI, pages 383–392, 2008.
D. Lowd and A. Rooshenas. Learning Markov networks with arithmetic circuits. In AISTATS, pages 406–414, 2013.
R. Mateescu, R. Dechter, and R. Marinescu. AND/OR multi-valued decision diagrams (AOMDDs) for graphical models. J. Artif. Intell. Res. (JAIR), 33:465–519, 2008.
U. Oztok and A. Darwiche. On compiling CNF into decision-DNNF. In CP, pages 42–57, 2014.
J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. MK, 1988.
H. Poon and P. M. Domingos. Sum-product networks: A new deep architecture. In UAI, pages 337–346, 2011.
A. Rooshenas and D. Lowd. Learning sum-product networks with direct and indirect variable interactions. In ICML, pages 710–718, 2014.
D. Roth. On the hardness of approximate reasoning. Artif. Intell., 82(1–2):273–302, 1996.
S. Sanner and D. A. McAllester. Affine algebraic decision diagrams (AADDs) and their application to structured probabilistic inference. In IJCAI, pages 1384–1390, 2005.
S. E. Shimony. Finding MAPs for belief networks is NP-hard. Artif. Intell., 68(2):399–410, 1994.
D. Sieling and I. Wegener. NC-algorithms for operations on binary decision diagrams. Parallel Processing Letters, 3:3–12, 1993.
Y. Xue, A. Choi, and A. Darwiche. Basing decisions on sentences in decision diagrams. In AAAI, pages 842–849, 2012.
J. Yedidia, W. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51(7):2282–2312, 2005.
Optimal Black-Box Reductions
Between Optimization Objectives*
Zeyuan Allen-Zhu
[email protected]
Institute for Advanced Study
& Princeton University
Elad Hazan
[email protected]
Princeton University
Abstract
The diverse world of machine learning applications has given rise to a plethora
of algorithms and optimization methods, finely tuned to the specific regression
or classification task at hand. We reduce the complexity of algorithm design
for machine learning by reductions: we develop reductions that take a method
developed for one setting and apply it to the entire spectrum of smoothness and
strong-convexity in applications.
Furthermore, unlike existing results, our new reductions are optimal and more
practical. We show how these new reductions give rise to new and faster running
times on training linear classifiers for various families of loss functions, and
conclude with experiments showing their successes also in practice.
1
Introduction
The basic machine learning problem of minimizing a regularizer plus a loss function comes in
numerous different variations and names. Examples include Ridge Regression, Lasso, Support Vector
Machine (SVM), Logistic Regression and many others. A multitude of optimization methods were
introduced for these problems, but in most cases specialized to very particular problem settings.
Such specializations appear necessary since objective functions for different classification and
regularization tasks admit different convexity and smoothness parameters. We list below a few recent
algorithms along with their applicable settings.
• Variance-reduction methods such as SAGA and SVRG [9, 14] intrinsically require the objective to be smooth, and do not work for non-smooth problems like SVM. This is because for loss functions such as hinge loss, no unbiased gradient estimator can achieve a variance that approaches zero.
• Dual methods such as SDCA or APCG [20, 30] intrinsically require the objective to be strongly convex (SC), and do not directly apply to non-SC problems. This is because for a non-SC objective such as Lasso, its dual is not well-behaved due to the ℓ1 regularizer.
• Primal-dual methods such as SPDC [35] require the objective to be both smooth and SC. Many other algorithms are only analyzed for objectives that are both smooth and SC [7, 16, 17].
In this paper we investigate whether such specializations are inherent. Is it possible to take a convex
optimization algorithm designed for one problem, and apply it to different classification or regression
settings in a black-box manner? Such a reduction should ideally take full and optimal advantage of
the objective properties, namely strong-convexity and smoothness, for each setting.
Unfortunately, existing reductions are still very limited, for at least two reasons. First, they incur at least a logarithmic factor log(1/ε) in the running time, leading only to suboptimal convergence
* The full version of this paper can be found at https://arxiv.org/abs/1603.05642. This paper is partially supported by an NSF Grant, no. 1523815, and a Microsoft Research Grant, no. 0518584.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
rates.2 Second, after applying existing reductions, algorithms become biased, so the objective value does not converge to the global minimum. These theoretical concerns also translate into running-time losses and parameter-tuning difficulties in practice.
In this paper, we develop new and optimal regularization and smoothing reductions that can
• shave off a non-optimal log(1/ε) factor, and
• produce unbiased algorithms.
Besides such technical advantages, our new reductions also enable researchers to focus on designing
algorithms for only one setting but infer optimal results more broadly. This is opposed to results such
as [4, 25] where the authors develop ad hoc techniques to tweak specific algorithms, rather than all
algorithms, and apply them to other settings without losing extra factors and without introducing bias.
Our new reductions also enable researchers to prove lower bounds more broadly [32].
1.1
Formal Setting and Classical Approaches
Consider minimizing a composite objective function
    min_{x ∈ ℝ^d} F(x) := f(x) + ψ(x),    (1.1)
where f(x) is a differentiable convex function and ψ(x) is a relatively simple (but possibly nondifferentiable) convex function, sometimes referred to as the proximal function. Our goal is to find a point x ∈ ℝ^d satisfying F(x) ≤ F(x*) + ε, where x* is a minimizer of F.
In most classification and regression problems, f(x) can be written as f(x) = (1/n) Σ_{i=1}^n f_i(⟨x, a_i⟩), where each a_i ∈ ℝ^d is a feature vector. We refer to this as the finite-sum case of (1.1).
• CLASSICAL REGULARIZATION REDUCTION.
Given a non-SC F(x), one can define a new objective F′(x) := F(x) + (σ/2)‖x0 − x‖², in which σ is on the order of ε. In order to minimize F(x), the classical regularization reduction calls an oracle algorithm to minimize F′(x) instead, and this oracle only needs to work with SC functions.
EXAMPLE. If F is L-smooth, one can apply accelerated gradient descent to minimize F′ and obtain an algorithm that converges in O(√(L/ε) · log(1/ε)) iterations in terms of minimizing the original F. This complexity has a suboptimal dependence on ε and shall be improved using our new regularization reduction.
• CLASSICAL SMOOTHING REDUCTION (FINITE-SUM CASE).3
Given a non-smooth F(x) of a finite-sum form, one can define a smoothed variant f̂_i(·) for each f_i(·) and let F′(x) = (1/n) Σ_{i=1}^n f̂_i(⟨a_i, x⟩) + ψ(x).4 In order to minimize F(x), the classical smoothing reduction calls an oracle algorithm to minimize F′(x) instead, and this oracle only needs to work with smooth functions (a concrete smoothed hinge-loss instance is sketched right after this list).
EXAMPLE. If F(x) is σ-SC and one applies accelerated gradient descent to minimize F′, this yields an algorithm that converges in O(1/√(σε) · log(1/ε)) iterations for minimizing the original F(x). Again, the additional factor log(1/ε) can be removed using our new smoothing reduction.
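As a concrete instance of the dual-based smoothing construction from footnote 4, the hinge loss f_i(α) = max{0, 1 − α} admits a standard closed-form smoothed variant; the Python rendering below is an illustrative sketch (the derivation is classical, not taken from this paper).

    # Smoothed hinge loss via the Fenchel dual: for smoothing parameter lam,
    #   f_hat(a) = max_{b in [-1, 0]} { a*b - f*(b) - (lam/2) * b^2 },
    # which evaluates piecewise as below (quadratic near the hinge corner).
    def smoothed_hinge(a, lam):
        if a >= 1.0:
            return 0.0
        if a <= 1.0 - lam:
            return 1.0 - a - lam / 2.0
        return (1.0 - a) ** 2 / (2.0 * lam)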
Besides the non-optimality, applying the above two reductions gives only biased algorithms. One has
to tune the regularization or smoothing parameter, and the algorithm only converges to the minimum
of the regularized or smoothed problem F 0 (x), which can be away from the true minimizer of F (x)
by a distance proportional to the parameter. This makes the reduction hard to use in practice.
2 Recall that obtaining the optimal convergence rate is one of the main goals in operations research and machine learning. For instance, obtaining the optimal 1/ε rate for online learning was a major breakthrough since the log(1/ε)/ε rate was discovered [13, 15, 26].
3 The smoothing reduction is typically applied to the finite-sum form only. This is because, for a general high dimensional function f(x), its smoothed variant f̂(x) may not be efficiently computable.
4 More formally, one needs this variant to satisfy |f̂_i(α) − f_i(α)| ≤ λ for all α and be smooth at the same time. This can be done in at least two classical ways if f_i(·) is Lipschitz continuous. One is to define f̂_i(α) = E_{v∼[−1,1]}[f_i(α + λv)] as an integral of f_i over the scaled unit interval, see for instance Chapter 2.3 of [12]; the other is to define f̂_i(α) = max_β { α·β − f_i*(β) − (λ/2)β² } using the Fenchel dual f_i*(β) of f_i(·), see for instance [24].
1.2
Our New Results
To introduce our new reductions, we first define a property on the oracle algorithm.
Our Black-Box Oracle. Consider an algorithm A that minimizes (1.1) when the objective F is L-smooth and σ-SC. We say that A satisfies the homogenous objective decrease (HOOD) property in time Time(L, σ) if, for every starting vector x0, A produces an output x′ satisfying F(x′) − F(x*) ≤ (F(x0) − F(x*))/4 in time Time(L, σ). In other words, A decreases the objective value's distance to the minimum by a constant factor in time Time(L, σ), regardless of how large or small F(x0) − F(x*) is. We give a few example algorithms that satisfy HOOD:
• Gradient descent and accelerated gradient descent satisfy HOOD with Time(L, σ) = O(L/σ) · C and Time(L, σ) = O(√(L/σ)) · C respectively, where C is the time needed to compute a gradient ∇f(x) and perform a proximal gradient update [23]. Many subsequent works in this line of research also satisfy HOOD, including [3, 7, 16, 17].
• SVRG and SAGA [14, 34] solve the finite-sum form of (1.1) and satisfy HOOD with Time(L, σ) = O(n + L/σ) · C1, where C1 is the time needed to compute a stochastic gradient ∇f_i(x) and perform a proximal gradient update.
• Katyusha [1] solves the finite-sum form of (1.1) and satisfies HOOD with Time(L, σ) = O(n + √(nL/σ)) · C1.
AdaptReg. For objectives F(x) that are non-SC and L-smooth, our AdaptReg reduction calls an oracle satisfying HOOD a logarithmic number of times, each time with a SC objective F(x) + (σ/2)‖x − x0‖² for an exponentially decreasing value σ. In the end, AdaptReg produces an output x̂ satisfying F(x̂) − F(x*) ≤ ε with a total running time Σ_{t≥0} Time(L, σ0 · 2^{−t}).
Since most algorithms have an inverse polynomial dependence on σ in Time(L, σ), when summing up Time(L, σ0 · 2^{−t}) for positive values t, we do not incur the additional factor log(1/ε), as opposed to the old reduction. In addition, AdaptReg is an unbiased and anytime algorithm: F(x̂) converges to F(x*) as time goes on without the necessity of changing parameters, so the algorithm can be interrupted at any time. We mention some theoretical applications of AdaptReg:
• Applying AdaptReg to SVRG, we obtain a running time O((n log(1/ε) + L/ε) · C1) for minimizing finite-sum, non-SC, and smooth objectives (such as Lasso and Logistic Regression). This improves on the known theoretical running times obtained by non-accelerated methods, including O((n log(1/ε) + (L/ε) log(1/ε)) · C1) through the old reduction, as well as O(((n + L)/ε) · C1) through direct methods such as SAGA [9] and SAG [27].
• Applying AdaptReg to Katyusha, we obtain a running time O((n log(1/ε) + √(nL/ε)) · C1) for minimizing finite-sum, non-SC, and smooth objectives (such as Lasso and Logistic Regression). This is the first and only known stochastic method that converges with the optimal 1/√ε rate (as opposed to log(1/ε)/√ε) for this class of objectives [1].
• Applying AdaptReg to methods that do not originally work for non-SC objectives, such as [7, 16, 17], we improve their running times by a factor of log(1/ε) for working with non-SC objectives.
AdaptSmooth and JointAdaptRegSmooth. For objectives F(x) that are finite-sum, σ-SC, but non-smooth, our AdaptSmooth reduction calls an oracle satisfying HOOD a logarithmic number of times, each time with a smoothed variant F^(λ)(x) of the objective and an exponentially decreasing smoothing parameter λ. In the end, AdaptSmooth produces an output x̂ satisfying F(x̂) − F(x*) ≤ ε with a total running time Σ_{t≥0} Time(1/(λ0 · 2^{−t}), σ).
Since most algorithms have a polynomial dependence on L in Time(L, σ), when summing up Time(1/(λ0 · 2^{−t}), σ) for positive values t, we do not incur an additional factor of log(1/ε), as opposed to the old reduction. AdaptSmooth is also an unbiased and anytime algorithm, for the same reason as AdaptReg.
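A minimal sketch of this reduction, assuming a HOOD oracle A and an assumed helper smooth(F, lam) that returns the λ-smoothed finite-sum objective F^(λ) (both names are illustrative, not this paper's API):

    # AdaptSmooth sketch: each epoch, the oracle solves a problem that is
    # (1/lam)-smooth and sigma-SC, i.e., Case 1; halving lam doubles the
    # smoothness constant the oracle must handle.
    def adapt_smooth(F, A, x0, lam0, T):
        x_hat, lam = x0, lam0
        for _ in range(T):
            x_hat = A(smooth(F, lam), x_hat)   # one HOOD oracle call
            lam = lam / 2.0                    # halve the smoothing parameter
        return x_hat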
In addition, AdaptReg and AdaptSmooth can effectively work together to solve the finite-sum, non-SC, and non-smooth case of (1.1); we call this joint reduction JointAdaptRegSmooth.
We mention some theoretical applications of AdaptSmooth and JointAdaptRegSmooth:
• Applying AdaptSmooth to Katyusha, we obtain a running time O((n log(1/ε) + √n/√(σε)) · C1) for minimizing finite-sum, SC, and non-smooth objectives (such as SVM). Therefore, Katyusha combined with AdaptSmooth is the first and only known stochastic method that converges with the optimal 1/√ε rate (as opposed to log(1/ε)/√ε) for this class of objectives [1].
• Applying JointAdaptRegSmooth to Katyusha, we obtain a running time O((n log(1/ε) + √n/ε) · C1) for minimizing finite-sum, non-SC, and non-smooth objectives (such as ℓ1-SVM). Therefore, Katyusha combined with JointAdaptRegSmooth gives the first and only known stochastic method that converges with the optimal 1/ε rate (as opposed to log(1/ε)/ε) for this class of objectives [1].
Roadmap. We provide notations in Section 2 and discuss related works in Section 3. We propose
AdaptReg in Section 4 and AdaptSmooth in Section 5. We leave proofs as well as the description
and analysis of JointAdaptRegSmooth to the full version of this paper. We include experimental
results in Section 7.
2
Preliminaries
In this paper we denote by ∇f(x) the full gradient of f if it is differentiable, or the subgradient if f is only Lipschitz continuous. Recall some classical definitions on strong convexity and smoothness.
Definition 2.1 (smoothness and strong convexity). For a convex function f : ℝ^n → ℝ,
• f is σ-strongly convex if ∀x, y ∈ ℝ^n, it satisfies f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (σ/2)‖x − y‖².
• f is L-smooth if ∀x, y ∈ ℝ^n, it satisfies ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖.
Characterization of SC and Smooth Regimes. In this paper we give numbers to the following 4 categories of objectives F(x) in (1.1). Each of them corresponds to some well-known training problems in machine learning. (Letting (aᵢ, bᵢ) ∈ ℝᵈ × ℝ be the i-th feature vector and label.)
Case 1: ψ(x) is σ-SC and f(x) is L-smooth. Examples:
• ridge regression: f(x) = (1/2n) Σ_{i=1}^n (⟨aᵢ, x⟩ − bᵢ)² and ψ(x) = (σ/2)‖x‖₂².
• elastic net: f(x) = (1/2n) Σ_{i=1}^n (⟨aᵢ, x⟩ − bᵢ)² and ψ(x) = (σ/2)‖x‖₂² + λ‖x‖₁.
Case 2: ψ(x) is non-SC and f(x) is L-smooth. Examples:
• Lasso: f(x) = (1/2n) Σ_{i=1}^n (⟨aᵢ, x⟩ − bᵢ)² and ψ(x) = λ‖x‖₁.
• logistic regression: f(x) = (1/n) Σ_{i=1}^n log(1 + exp(−bᵢ⟨aᵢ, x⟩)) and ψ(x) = λ‖x‖₁.
Case 3: ψ(x) is σ-SC and f(x) is non-smooth (but Lipschitz continuous). Examples:
• SVM: f(x) = (1/n) Σ_{i=1}^n max{0, 1 − bᵢ⟨aᵢ, x⟩} and ψ(x) = λ‖x‖₂².
Case 4: ψ(x) is non-SC and f(x) is non-smooth (but Lipschitz continuous). Examples:
• ℓ1-SVM: f(x) = (1/n) Σ_{i=1}^n max{0, 1 − bᵢ⟨aᵢ, x⟩} and ψ(x) = λ‖x‖₁.
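For concreteness, the following NumPy sketch (ours, not from the paper) evaluates the example losses and regularizers above; A is the n × d matrix whose rows are the aᵢ, and b is the label vector.

```python
import numpy as np

def f_least_squares(x, A, b):              # ridge / elastic net / Lasso
    return 0.5 * np.mean((A @ x - b) ** 2)

def f_logistic(x, A, b):                   # logistic regression (L-smooth)
    return np.mean(np.log1p(np.exp(-b * (A @ x))))

def f_hinge(x, A, b):                      # SVM / l1-SVM (non-smooth)
    return np.mean(np.maximum(0.0, 1.0 - b * (A @ x)))

def psi_l2(x, sigma):                      # sigma-strongly convex regularizer
    return 0.5 * sigma * np.dot(x, x)

def psi_l1(x, lam):                        # non-strongly convex regularizer
    return lam * np.abs(x).sum()
```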
Definition 2.2 (HOOD property). We say an algorithm A(F, x₀) solving Case 1 of problem (1.1) satisfies the homogeneous objective decrease (HOOD) property with time Time(L, σ) if, for every starting point x₀, it produces an output x′ ← A(F, x₀) such that F(x′) − min_x F(x) ≤ (F(x₀) − min_x F(x))/4 in time Time(L, σ).⁵
In this paper, we denote by C the time needed for computing a full gradient ∇f(x) and performing a proximal gradient update of the form x′ ← arg min_x { (1/2)‖x − x₀‖² + η(⟨∇f(x₀), x − x₀⟩ + ψ(x)) }. For the finite-sum case of problem (1.1), we denote by C₁ the time needed for computing a stochastic (sub-)gradient ∇fᵢ(⟨aᵢ, x⟩) and performing a proximal gradient update of the form x′ ← arg min_x { (1/2)‖x − x₀‖² + η(⟨∇fᵢ(⟨aᵢ, x₀⟩)aᵢ, x − x₀⟩ + ψ(x)) }. For finite-sum forms of (1.1), C is usually on the magnitude of n·C₁.
⁵ Although our definition is only for deterministic algorithms, if the guarantee is probabilistic, i.e., E[F(x′)] − min_x F(x) ≤ (F(x₀) − min_x F(x))/4, all the results of this paper remain true. Also, the constant 4 is very arbitrary and can be replaced with any other constant bigger than 1.
Algorithm 1 The AdaptReg Reduction
Input: an objective F(·) in Case 2 (smooth and not necessarily strongly convex); x₀ a starting vector; σ₀ an initial regularization parameter; T the number of epochs; an algorithm A that solves Case 1 of problem (1.1).
Output: x̂_T.
1: x̂₀ ← x₀.
2: for t ← 0 to T − 1 do
3:   Define F^(σ_t)(x) := (σ_t/2)‖x − x₀‖² + F(x).
4:   x̂_{t+1} ← A(F^(σ_t), x̂_t).
5:   σ_{t+1} ← σ_t/2.
6: end for
7: return x̂_T.
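To make the reduction concrete, here is a minimal Python sketch of Algorithm 1, assuming a user-supplied oracle solve_case1 that (approximately) satisfies the HOOD property; the interface and names are ours, not from any released code.

```python
def adapt_reg(grad_f, solve_case1, x0, sigma0, T):
    """Sketch of Algorithm 1 (AdaptReg).

    solve_case1(grad, x_start, sigma) is assumed to solve the sigma-strongly
    convex problem built from `grad` to HOOD accuracy, warm-started at x_start.
    """
    x_hat, sigma = x0.copy(), sigma0
    for t in range(T):
        # Gradient of the smooth part of
        #   F^{(sigma_t)}(x) = (sigma_t/2) * ||x - x0||^2 + F(x).
        grad_reg = lambda x, s=sigma: grad_f(x) + s * (x - x0)
        x_hat = solve_case1(grad_reg, x_hat, sigma)   # warm start from x_hat_t
        sigma /= 2.0                                  # sigma_{t+1} = sigma_t / 2
    return x_hat
```

Because the regularization weight is halved each epoch rather than fixed in advance, the returned iterate improves toward the unregularized minimizer, which is what makes the reduction unbiased and interruptible.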
3 Related Works
Catalyst/APPA [11, 19] turn non-accelerated methods into accelerated ones, which is different from the purpose of this paper. They can be used as regularization reductions from Case 2 to Case 1 (but not from Cases 3 or 4); however, they suffer from two log-factor losses in the running time, and perform poorly in practice [1]. In particular, Catalyst/APPA fix the regularization parameter throughout the algorithm, but our AdaptReg decreases it exponentially. Their results cannot imply ours.
Beck and Teboulle [5] give a reduction from Case 4 to Case 2. Such a reduction does not suffer from a log-factor loss for an almost trivial reason: by setting the smoothing parameter μ = ε and applying any O(1/√ε)-convergence method for Case 2, we immediately obtain an O(1/ε) method for Case 4 without an extra log factor. Our main challenge in this paper is to provide log-free reductions to Case 1;⁶ simple ideas fail to produce log-free reductions in this case because all efficient algorithms solving Case 1 (due to linear convergence) have a log factor. In addition, the Beck-Teboulle reduction is biased but ours is unbiased.
The so-called homotopy methods (e.g., methods with geometrically decreasing regularizer/smoothing weights) appear a lot in the literature [6, 25, 31, 33]. However, to the best of our knowledge, all existing homotopy analyses either only work for specific algorithms [6, 25, 31] or solve only special problems [33]. In other words, none of them provides all-purpose black-box reductions like we do.
4 AdaptReg: Reduction from Case 2 to Case 1
We now focus on solving Case 2 of problem (1.1): that is, f(·) is L-smooth, but ψ(·) is not necessarily SC. We achieve this by reducing the problem to an algorithm A solving Case 1 that satisfies HOOD.
AdaptReg works as follows (see Algorithm 1). At the beginning of AdaptReg, we set x̂₀ to equal x₀, an arbitrary given starting vector. AdaptReg consists of T epochs. At each epoch t = 0, 1, ..., T − 1, we define a σ_t-strongly convex objective F^(σ_t)(x) := (σ_t/2)‖x − x₀‖² + F(x). Here, the parameter σ_{t+1} = σ_t/2 for each t ≥ 0, and σ₀ is an input parameter to AdaptReg that will be specified later. We run A on F^(σ_t)(x) with starting vector x̂_t in each epoch, and let the output be x̂_{t+1}. After all T epochs are finished, AdaptReg simply outputs x̂_T.
We state our main theorem for AdaptReg below and prove it in the full version of this paper.
Theorem 4.1 (AdaptReg). Suppose that in problem (1.1) f(·) is L-smooth. Let x₀ be a starting vector such that F(x₀) − F(x*) ≤ Δ and ‖x₀ − x*‖² ≤ Θ. Then, AdaptReg with σ₀ = Δ/Θ and T = log₂(Δ/ε) produces an output x̂_T satisfying F(x̂_T) − min_x F(x) ≤ O(ε) in a total running time of Σ_{t=0}^{T−1} Time(L, σ₀·2^{−t}).⁷
Remark 4.2. We compare the parameter tuning effort needed for AdaptReg against the classical regularization reduction. In the classical reduction, there are two parameters: T, the number of iterations, which does not need tuning; and σ, which had better equal ε/Θ, an unknown quantity, so it requires tuning. In AdaptReg, we also need to tune only one parameter, namely σ₀. Our T need not be tuned because AdaptReg can be interrupted at any moment and the x̂_t of the current epoch can be outputted. In our experiments later, we spent the same effort tuning σ in the classical reduction and σ₀ in AdaptReg. As can easily be seen from the plots, tuning σ₀ is much easier than tuning σ.
⁶ Designing reductions to Case 1 (rather than, for instance, Case 2) is crucial for various reasons. First, algorithm design for Case 1 is usually easier (esp. in stochastic settings). Second, Case 3 can only be reduced to Case 1 but not Case 2. Third, lower bound results [32] require reductions to Case 1.
⁷ If the HOOD property is only satisfied probabilistically as per Footnote 5, our error guarantee becomes probabilistic, i.e., E[F(x̂_T)] − min_x F(x) ≤ O(ε). This is also true for the other reduction theorems of this paper.
Corollary 4.3. When AdaptReg is applied to SVRG, we solve the finite-sum case of Case 2 with running time Σ_{t=0}^{T−1} Time(L, σ₀·2^{−t}) = Σ_{t=0}^{T−1} O(n + L·2^t/σ₀)·C₁ = O(n log(1/ε) + LΘ/ε)·C₁. This is faster than the O((n + LΘ/ε)·log(1/ε))·C₁ obtained through the old reduction, and faster than the O((n + LΘ)/ε)·C₁ obtained by SAGA [9] and SAG [27].
When AdaptReg is applied to Katyusha, we solve the finite-sum case of Case 2 with running time Σ_{t=0}^{T−1} Time(L, σ₀·2^{−t}) = Σ_{t=0}^{T−1} O(n + √(nL·2^t/σ₀))·C₁ = O(n log(1/ε) + √(nLΘ/ε))·C₁. This is faster than the O((n + √(nL/ε))·log(1/ε))·C₁ obtained through the old reduction on Katyusha [1].⁸
5 AdaptSmooth: Reduction from Case 3 to Case 1
We now focus on solving the finite-sum form of Case 3 of problem (1.1). Without loss of generality, we assume ‖aᵢ‖ = 1 for each i ∈ [n], because otherwise one can scale fᵢ accordingly. We solve this problem by reducing it to an oracle A which solves the finite-sum form of Case 1 and satisfies HOOD. Recall the following definition using the Fenchel conjugate:⁹
Definition 5.1. For each function fᵢ: ℝ → ℝ, let fᵢ*(β) := max_α {αβ − fᵢ(α)} be its Fenchel conjugate. Then, we define the following smoothed variant of fᵢ parameterized by μ > 0: fᵢ^(μ)(α) := max_β { αβ − fᵢ*(β) − (μ/2)β² }. Accordingly, we define F^(μ)(x) := (1/n) Σ_{i=1}^n fᵢ^(μ)(⟨aᵢ, x⟩) + ψ(x).
From the properties of the Fenchel conjugate (see for instance the textbook [28]), we know that fᵢ^(μ)(·) is a (1/μ)-smooth function, and therefore the objective F^(μ)(x) falls into the finite-sum form of Case 1 for problem (1.1) with smoothness parameter L = 1/μ.
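For intuition, consider the hinge loss f(α) = max{0, 1 − α} used in SVM: its Fenchel conjugate is f*(β) = β on β ∈ [−1, 0] (and +∞ elsewhere), so the maximization in Definition 5.1 can be carried out in closed form. The following sketch (ours, not from the paper) evaluates the resulting (1/μ)-smooth variant.

```python
def smoothed_hinge(a, mu):
    """f^{(mu)}(a) = max_b { a*b - f*(b) - (mu/2) b^2 } for the hinge loss
    f(a) = max{0, 1 - a}, whose conjugate is f*(b) = b on [-1, 0]."""
    if a >= 1.0:
        return 0.0                       # optimal b* = 0: flat region
    if a <= 1.0 - mu:
        return 1.0 - a - mu / 2.0        # optimal b* = -1: shifted linear part
    return (1.0 - a) ** 2 / (2.0 * mu)   # interior b* = (a-1)/mu: quadratic
```

One can check that the three branches agree at the breakpoints a = 1 − μ and a = 1, and that the function approaches the hinge loss pointwise as μ → 0.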
Our AdaptSmooth works as follows (see the pseudocode in the full version). At the beginning of AdaptSmooth, we set x̂₀ to equal x₀, an arbitrary given starting vector. AdaptSmooth consists of T epochs. At each epoch t = 0, 1, ..., T − 1, we define a (1/μ_t)-smooth objective F^(μ_t)(x) using Definition 5.1. Here, the parameter μ_{t+1} = μ_t/2 for each t ≥ 0, and μ₀ is an input parameter to AdaptSmooth that will be specified later. We run A on F^(μ_t)(x) with starting vector x̂_t in each epoch, and let the output be x̂_{t+1}. After all T epochs are finished, AdaptSmooth outputs x̂_T.
We state our main theorem for AdaptSmooth below and prove it in the full version of this paper.
Theorem 5.2. Suppose that in problem (1.1), ψ(·) is σ-strongly convex and each fᵢ(·) is G-Lipschitz continuous. Let x₀ be a starting vector such that F(x₀) − F(x*) ≤ Δ. Then, AdaptSmooth with μ₀ = Δ/G² and T = log₂(Δ/ε) produces an output x̂_T satisfying F(x̂_T) − min_x F(x) ≤ O(ε) in a total running time of Σ_{t=0}^{T−1} Time(2^t/μ₀, σ).
Remark 5.3. We emphasize that AdaptSmooth requires less parameter tuning effort than the old reduction, for the same reason as in Remark 4.2. Also, AdaptSmooth, when applied to Katyusha, provides the fastest running time for solving the Case 3 finite-sum form of (1.1), similar to Corollary 4.3.
6 JointAdaptRegSmooth: From Case 4 to Case 1
We show in the full version that AdaptReg and AdaptSmooth can work together to reduce the finite-sum form of Case 4 to Case 1. We call this reduction JointAdaptRegSmooth, and it relies on a jointly exponentially decreasing sequence of (σ_t, μ_t), where σ_t is the weight of the convexity parameter that we add on top of F(x), and μ_t is the smoothing parameter that determines how we change each fᵢ(·).
⁸ If the old reduction is applied to APCG, SPDC, or AccSDCA rather than Katyusha, then two log factors will be lost.
⁹ For every explicitly given fᵢ(·), this Fenchel conjugate can be symbolically computed and fed into the algorithm. This pre-processing is needed for nearly all known algorithms in order for them to apply to non-smooth settings (such as SVRG, SAGA, SPDC, APCG, SDCA, etc.).
Figure 1: Comparing AdaptReg and the classical reduction on Lasso (with ℓ1 regularizer weight λ); panels: (a) covtype, λ = 10⁻⁶; (b) mnist, λ = 10⁻⁵; (c) rcv1, λ = 10⁻⁵. The y-axis is the objective distance to the minimum, and the x-axis is the number of passes over the dataset. The blue solid curves represent APCG under the old regularization reduction, and the red dashed curves represent APCG under AdaptReg. For other values of λ, or the results on SDCA, please refer to the full version of this paper.
The analysis is analogous to a careful combination of the proofs for AdaptReg and AdaptSmooth.
7 Experiments
We perform experiments to confirm the theoretical speed-ups obtained for AdaptSmooth and AdaptReg. We work on minimizing Lasso and SVM objectives for the following three well-known datasets, which can be found on the LibSVM website [10]: covtype, mnist, and rcv1. We defer some dataset and implementation details to the full version of this paper.
7.1 Experiments on AdaptReg
To test the performance of AdaptReg, consider the Lasso objective, which is in Case 2 (i.e., non-SC but smooth). We apply AdaptReg to reduce it to Case 1 and apply either APCG [20], an accelerated method, or (Prox-)SDCA [29, 30], a non-accelerated method. Let us make a few remarks:
• APCG and SDCA are both indirect solvers for non-strongly convex objectives, and therefore regularization is intrinsically required in order to run them for Lasso or, more generally, Case 2.
• APCG and SDCA do not satisfy HOOD in theory. However, they still benefit from AdaptReg, as we shall see, demonstrating the practical value of AdaptReg.
A Practical Implementation. In principle, one can implement AdaptReg by setting the termination criteria of the oracle in the inner loop exactly as suggested by the theory, such as setting the number of iterations for SDCA to be T = O(n + L/σ_t) in the t-th epoch. However, in practice, it is more desirable to automatically terminate the oracle whenever the objective distance to the minimum has been sufficiently decreased. In all of our experiments, we simply compute the duality gap and terminate the oracle whenever the duality gap is below 1/4 times the last recorded duality gap of the previous epoch. For details see the full version of this paper.
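A minimal sketch of this termination rule, with hypothetical oracle_pass and duality_gap helpers (not the authors' code), could look as follows.

```python
def run_epoch(oracle_pass, duality_gap, x, last_gap):
    """Run the inner solver on F^{(sigma_t)} until the duality gap falls
    below a quarter of the last gap recorded in the previous epoch."""
    gap = duality_gap(x)
    while gap > last_gap / 4.0:
        x = oracle_pass(x)        # e.g., one pass of SDCA or APCG
        gap = duality_gap(x)
    return x, gap                 # `gap` seeds `last_gap` for epoch t + 1
```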
Experimental Results. For each dataset, we consider three different magnitudes of regularization weights for the ℓ1 regularizer in the Lasso objective. This totals 9 analysis tasks for each algorithm. For each such task, we first implement the old reduction by adding an additional (σ/2)‖x‖² term to the Lasso objective and then apply APCG or SDCA. We consider values of σ in the set {10^k, 3·10^k : k ∈ ℤ} and show the most representative six of them in the plots (blue solid curves in Figures 3 and 4). Naturally, for a larger value of σ the old reduction converges faster, but to a point that is farther from the exact minimizer because of the bias. We implement AdaptReg, where we choose the initial parameter σ₀ also from the set {10^k, 3·10^k : k ∈ ℤ}, and present the best one in each of the 18 plots (red dashed curves in Figures 3 and 4). Due to space limitations, we provide only 3 of the 18 plots, for medium-sized σ, in the main body of this paper (see Figure 1), and include Figures 3 and 4 only in the full version of this paper.
It is clear from our experiments that:
• AdaptReg is more efficient than the old regularization reduction;
• AdaptReg requires no more parameter tuning than the classical reduction;
• AdaptReg is unbiased and thus simplifies the parameter selection procedure.¹⁰
¹⁰ It is easy to determine the best σ₀ in AdaptReg; in contrast, in the old reduction, if the desired error is somehow changed for the application, one has to select a different σ and restart the algorithm.
Figure 2: Comparing AdaptSmooth and the classical reduction on SVM (with ℓ2 regularizer weight λ); panels: (a) covtype, λ = 10⁻⁵; (b) mnist, λ = 10⁻⁴; (c) rcv1, λ = 10⁻⁴. The y-axis is the objective distance to the minimum, and the x-axis is the number of passes over the dataset. The blue solid curves represent SVRG under the old smoothing reduction, and the red dashed curves represent SVRG under AdaptSmooth. For other values of λ, please refer to the full version.
7.2 Experiments on AdaptSmooth
To test the performance of AdaptSmooth, consider the SVM objective, which is non-smooth but SC. We apply AdaptSmooth to reduce it to Case 1 and apply SVRG [14]. We emphasize that SVRG is an indirect solver for non-smooth objectives, and therefore smoothing is intrinsically required in order to run SVRG for SVM or, more generally, for Case 3.
A Practical Implementation. In principle, one can implement AdaptSmooth by setting the termination criteria of the oracle in the inner loop exactly as suggested by the theory, such as setting the number of iterations for SVRG to be T = O(n + 1/(σμ_t)) in the t-th epoch. In practice, however, it is more desirable to automatically terminate the oracle whenever the objective distance to the minimum has been sufficiently decreased. In all of our experiments, we simply compute the Euclidean norm of the full gradient of the objective, and terminate the oracle whenever this norm is below 1/3 times the last recorded Euclidean norm of the previous epoch. For details see the full version.
Experimental Results. For each dataset, we consider three different magnitudes of regularization weights for the ℓ2 regularizer in the SVM objective. This totals 9 analysis tasks. For each such task, we first implement the old reduction by smoothing the hinge loss functions (using Definition 5.1) with a fixed parameter μ > 0 and then apply SVRG. We consider different values of μ in the set {10^k, 3·10^k : k ∈ ℤ} and show the most representative six of them in the plots (blue solid curves in Figure 5). Naturally, for a larger μ, the old reduction converges faster, but to a point that is farther from the exact minimizer due to its bias. We then implement AdaptSmooth, where we choose the initial smoothing parameter μ₀ also from the set {10^k, 3·10^k : k ∈ ℤ}, and present the best one in each of the 9 plots (red dashed curves in Figure 5). Due to space limitations, we provide only 3 of the 9 plots, for small-sized μ, in the main body of this paper (see Figure 2), and include Figure 5 only in the full version.
It is clear from our experiments that:
• AdaptSmooth is more efficient than the old smoothing reduction, especially when the desired training error is small;
• AdaptSmooth requires no more parameter tuning than the classical reduction;
• AdaptSmooth is unbiased and simplifies the parameter selection for the same reason as in Footnote 10.
References
[1] Zeyuan Allen-Zhu. Katyusha: The First Direct Acceleration of Stochastic Gradient Methods. ArXiv e-prints, abs/1603.05953, March 2016.
[2] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. ArXiv e-prints, abs/1407.1537, July 2014.
[3] Zeyuan Allen-Zhu, Peter Richtárik, Zheng Qu, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In ICML, 2016.
[4] Zeyuan Allen-Zhu and Yang Yuan. Improved SVRG for Non-Strongly-Convex or Sum-of-Non-Convex Objectives. In ICML, 2016.
[5] Amir Beck and Marc Teboulle. Smoothing and first order methods: A unified framework. SIAM Journal on Optimization, 22(2):557–580, 2012.
[6] Radu Ioan Boţ and Christopher Hendrich. A variable smoothing algorithm for solving convex optimization problems. TOP, 23(1):124–150, 2015.
[7] Sébastien Bubeck, Yin Tat Lee, and Mohit Singh. A geometric alternative to Nesterov's accelerated gradient descent. ArXiv e-prints, abs/1506.08187, June 2015.
[8] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011.
[9] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. In NIPS, 2014.
[10] Rong-En Fan and Chih-Jen Lin. LIBSVM Data: Classification, Regression and Multi-label. Accessed: 2015-06.
[11] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization. In ICML, volume 37, pages 1–28, 2015.
[12] Elad Hazan. DRAFT: Introduction to online convex optimization. Foundations and Trends in Machine Learning, XX(XX):1–168, 2015.
[13] Elad Hazan and Satyen Kale. Beyond the regret minimization barrier: Optimal algorithms for stochastic strongly-convex optimization. The Journal of Machine Learning Research, 15(1):2489–2512, 2014.
[14] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, NIPS 2013, pages 315–323, 2013.
[15] Simon Lacoste-Julien, Mark W. Schmidt, and Francis R. Bach. A simpler approach to obtaining an o(1/t) convergence rate for the projected stochastic subgradient method. ArXiv e-prints, abs/1212.2002, 2012.
[16] Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In FOCS, pages 147–156. IEEE, 2013.
[17] Laurent Lessard, Benjamin Recht, and Andrew Packard. Analysis and design of optimization algorithms via integral quadratic constraints. CoRR, abs/1408.3595, 2014.
[18] Hongzhou Lin. Private communication, 2016.
[19] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A Universal Catalyst for First-Order Optimization. In NIPS, 2015.
[20] Qihang Lin, Zhaosong Lu, and Lin Xiao. An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization. In NIPS, pages 3059–3067, 2014.
[21] Arkadi Nemirovski. Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems. SIAM Journal on Optimization, 15(1):229–251, January 2004.
[22] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Doklady AN SSSR (translated as Soviet Mathematics Doklady), volume 269, pages 543–547, 1983.
[23] Yurii Nesterov. Introductory Lectures on Convex Programming Volume: A Basic course, volume I. Kluwer Academic Publishers, 2004.
[24] Yurii Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, December 2005.
[25] Francesco Orabona, Andreas Argyriou, and Nathan Srebro. PRISMA: Proximal iterative smoothing algorithm. arXiv preprint arXiv:1206.2372, 2012.
[26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[27] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, pages 1–45, 2013. Preliminary version appeared in NIPS 2012.
[28] Shai Shalev-Shwartz. Online Learning and Online Convex Optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012.
[29] Shai Shalev-Shwartz and Tong Zhang. Proximal Stochastic Dual Coordinate Ascent. arXiv preprint arXiv:1211.2717, pages 1–18, 2012.
[30] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[31] Quoc Tran-Dinh. Adaptive smoothing algorithms for nonsmooth composite convex minimization. arXiv preprint arXiv:1509.00106, 2015.
[32] Blake Woodworth and Nati Srebro. Tight Complexity Bounds for Optimizing Composite Objectives. In NIPS, 2016.
[33] Lin Xiao and Tong Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM Journal on Optimization, 23(2):1062–1091, 2013.
[34] Lin Xiao and Tong Zhang. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[35] Yuchen Zhang and Lin Xiao. Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. In ICML, 2015.
Achieving the KS threshold in the general stochastic block model with linearized acyclic belief propagation
Emmanuel Abbe
Applied and Computational Mathematics and EE Dept.
Princeton University
[email protected]
Colin Sandon
Department of Mathematics
Princeton University
[email protected]
Abstract
The stochastic block model (SBM) has long been studied in machine learning and
network science as a canonical model for clustering and community detection. In
the recent years, new developments have demonstrated the presence of threshold
phenomena for this model, which have set new challenges for algorithms. For
the detection problem in symmetric SBMs, Decelle et al. conjectured that the
so-called Kesten-Stigum (KS) threshold can be achieved efficiently. This was
proved for two communities, but remained open for three and more communities.
We prove this conjecture here, obtaining a general result that applies to arbitrary
SBMs with linear size communities. The developed algorithm is a linearized
acyclic belief propagation (ABP) algorithm, which mitigates the effects of cycles
while provably achieving the KS threshold in O(n ln n) time. This extends prior
methods by achieving universally the KS threshold while reducing or preserving the
computational complexity. ABP is also connected to a power iteration method on a
generalized nonbacktracking operator, formalizing the spectral-message passing
interplay described in Krzakala et al., and extending results from Bordenave et al.
1 Introduction
The stochastic block model (SBM) is widely used as a model for community detection and as a
benchmark for clustering algorithms. The model emerged in multiple scientific communities, in
machine learning and statistics under the SBM [1, 2, 3, 4], in computer science as the planted partition
model [5, 6, 7], and in mathematics as the inhomogeneous random graph model [8]. Although the
model was defined as far back as the 80s, mainly studied for the exact recovery problem, it resurged
in the recent years due in part to fascinating conjectures on the detection problem, established in
[9] (and backed in [10]) from deep but non-rigorous statistical physics arguments. For efficient
algorithms, the following was conjectured:
Conjecture 1. (See formal definitions below) In the stochastic block model with n vertices, k balanced communities, edge probability a/n inside the communities and b/n across, it is possible to detect communities in polynomial time if and only if
(a − b)² / (k(a + (k − 1)b)) > 1.    (1)
In other words, the problem of efficiently detecting communities is conjectured to have a sharp
threshold at the above, which is called the Kesten-Stigum (KS) threshold. Establishing such thresholds
is of primary importance for the development of algorithms. A prominent example is Shannon's
coding theorem, that gives a sharp threshold for coding algorithms at the channel capacity, and
which has led the development of coding algorithms used in communication standards. In the area of
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
clustering, where establishing rigorous benchmarks is a challenge, the quest for sharp thresholds is likely to also have fruitful outcomes.
Interestingly, classical clustering algorithms do not seem to suffice for achieving the threshold in (1).
This includes spectral methods based on the adjacency matrix or Laplacians, as well as SDPs. For
standard spectral methods, a first issue is that the fluctuations in the node degrees produce high-degree
nodes that disrupt the eigenvectors from concentrating on the clusters. This issue is further enhanced
on real networks where degree variations are important. A classical trick is to trim such high-degree
nodes [11, 12], throwing away some information, but this does not seem to suffice. SDPs are a natural
alternative, but they also stumble before the threshold [13, 14], focusing on the most likely rather
than typical clusterings.
Significant progress has already been achieved on Conjecture 1. In particular, the conjecture is set
for k = 2, with the achievability part proved in [15, 16] and [17], and the impossibility part in [10].
Achievability results were also obtained in [17] for SBMs with multiple communities that satisfy a
certain asymmetry condition (see Theorem 5 in [17]). Conjecture 1 remained open for k ? 3.
In their original paper [9], Decelle et al. conjectured that belief propagation (BP) achieves the KS
threshold. The main issue when applying BP to the SBM is the classical one: the presence of cycles
in the graph makes the behavior of the algorithm difficult to understand, and BP is susceptible to
settle down in the wrong fixed points. While empirical studies of BP on loopy graph have shown
that convergence still takes place in some cases [18], obtaining rigorous results in the context of
loopy graphs remains a long standing challenge for message passing algorithms, and achieving the
KS threshold requires precisely running BP to an extent where the graph is not even tree-like. We
address this challenge in the present paper, with a linearized version of BP that mitigates cycles.
Note that establishing formally the converse of Conjecture 1 (i.e., that efficient detection is impossible
below the threshold) for arbitrary k seems out of reach at the moment, as the problem behaves very
differently for small rather than arbitrary k. Indeed, except for a few low values of k, it is proven in
[19, 20] that the threshold in (1) does not coincide with the information-theoretic threshold. Since it
is possible to detect below the threshold with non-efficient algorithms, proving formally the converse
of Conjecture 1 would require major headway in complexity theory. On the other hand, [9] already provides non-rigorous arguments that the converse holds.
1.1 This paper
This paper proves the achievability part of Conjecture 1. Our main result applies to a more general
context, with a generalized notion of detection that applies to arbitrary SBMs. In particular,
• we show that an approximate belief propagation (ABP) algorithm that mitigates cycles achieves the KS threshold universally. The simplest linearized¹ version of BP is to repeatedly update beliefs about a vertex's community based on its neighbors' suspected communities while avoiding backtracking. However, this only works ideally if the graph is a tree. The correct response to a
cycle would be to discount information reaching the vertex along either branch of the cycle to
compensate for the redundancy of the two branches. Due to computational issues we simply
prevent information from cycling around constant size cycles.
• we show how ABP can be interpreted as a power iteration method on a generalized r-nonbacktracking operator, i.e., a spectral algorithm that uses a matrix counting the number of r-nonbacktracking walks rather than the adjacency matrix. The random initialization of the beliefs in ABP corresponds to the random vector to which the power iteration is applied, formalizing the connection described in [22]. While using r = 2 backtracks may suffice to achieve the threshold, larger backtracks are likely to help mitigate the presence of small loops in networks.
Our results are closest to [16, 17], while diverging in several key parts. A few technical expansions in
the paper are similar to those carried in [16], such as the weighted sums over nonbacktracking walks
and the SAW decomposition in [16], similar to our compensated nonbacktracking walk counts and
Shard decomposition. Our modifications are developed to cope with the general SBM, in particular
to compensate for the dominant eigenvalues in the latter setting. Our algorithm complexity is also
slightly reduced by a logarithmic factor.
¹ Other forms of approximate message passing algorithms have been studied for dense graphs, in particular [21] for compressed sensing.
Our algorithm is also closely related to [17], which focuses on extracting the eigenvectors of the
standard nonbacktracking operator. Our proof technique is different from the one in [17], so that we
can cope with the setting of Conjecture 1. We also implement the eigenvector extractions in a belief
propagation fashion. Another difference with [17] is that we rely on nonbacktracking operators of
higher orders r. While r = 2 is arguably the simplest implementation and may suffice for the sole
purpose of achieving the KS threshold, a larger r is likely to be beneficial in practice. For example,
an adversary may add triangles for which ABP with r = 2 would fail while larger r would succeed.
Finally, the approach of ABP can be extended beyond the linearized setting to move from detection
to an optimal accuracy as discussed in Section 5.
2 Results
2.1 A general notion of detection
The stochastic block model (SBM) is a random graph model with clusters defined as follows.
Definition 1. For k ? Z+ , a probability distribution p ? (0, 1)k , a k ? k symmetric matrix Q with
nonnegative entries, and n ? Z+ , we define SBM(n, p, Q/n) as the probability distribution over
ordered pairs (?, G) of an assignment of vertices to one of k communities and an n-vertex graph
generated by the following procedure. First, each vertex v ? V (G) is independently assigned a
community ?v under the probability distribution p. Then, for every v 6= v 0 , an edge is drawn in G
between v and v 0 with probability Q?v ,?v0 /n, independently of other edges. We sometimes say that
G is drawn under SBM(n, p, Q/n) without specifying ? and define ?i = {v : ?v = i}.
Definition 2. The SBM is called symmetric if p is uniform and if Q takes the same value on the
diagonal and the same value off the diagonal.
Our goal is to find an algorithm that can distinguish between vertices from one community and
vertices from another community in a non trivial way.
Definition 3. Let A be an algorithm that takes a graph as input and outputs a partition of its vertices
into two sets. A solves detection (or weak recovery) in graphs drawn from SBM(n, p, Q/n) if
there exists > 0 such that the following holds. When (?, G) is drawn from SBM(n, p, Q/n) and
A(G) divides its vertices into S and S c , with probability 1 ? o(1), there exist i, j ? [k] such that
|?i ? S|/|?i | ? |?j ? S|/|?j | > .
In other words, an algorithm solves detection if it divides the graph's vertices into two sets such that vertices from different communities have different probabilities of being assigned to one of the sets. An alternate definition (see for example Decelle et al. [9]) requires the algorithm to divide the vertices into k sets such that there exists ε > 0 for which there exists an identification of the sets with the communities labelling at least max_i p_i + ε of the vertices correctly with high probability. In the 2-community symmetric case, the two definitions are equivalent. In a two-community asymmetric case where p = (.2, .8), an algorithm that could find a set containing 2/3 of the vertices from the large community and 1/3 of the vertices from the small community would satisfy Definition 3; however, it would not satisfy the previous definition due to the biased prior. If all communities have the same size, this distinction is meaningless and we have the following equivalence:
Lemma 1. Let k > 0, Q be a k × k symmetric matrix with nonnegative entries, p be the uniform distribution over k sets, and A be an algorithm that solves detection in graphs drawn from SBM(n, p, Q/n). Then A also solves detection according to Decelle et al.'s criterion [9], provided that we consider it as returning k − 2 empty sets in addition to its actual output.
Proof. Let (σ, G) ∼ SBM(n, p, Q/n) and let A(G) return S and S′. There exists ε > 0 such that with high probability (whp) there exist i, j such that |Ω_i ∩ S|/|Ω_i| − |Ω_j ∩ S|/|Ω_j| > ε. So, if we map S to community i and S′ to community j, the algorithm classifies at least |Ω_i ∩ S|/n + |Ω_j ∩ S′|/n = |Ω_j|/n + |Ω_i ∩ S|/n − |Ω_j ∩ S|/n ≥ 1/k + ε/k − o(1) of the vertices correctly whp.
2.2 Achieving efficiently and universally the KS threshold
Given parameters p and Q for the SBM, let P be the diagonal matrix such that P_{i,i} = p_i for each i ∈ [k]. Also, let λ₁, ..., λ_h be the distinct eigenvalues of PQ in order of nonincreasing magnitude.
Definition 4. The signal-to-noise ratio of SBM(n, p, Q/n) is defined by SNR := λ₂²/λ₁.
Theorem 1. Let k ∈ ℤ₊, p ∈ (0,1)^k be a probability distribution, Q be a k × k symmetric matrix with nonnegative entries, and G be drawn under SBM(n, p, Q/n). If SNR > 1, then there exist r ∈ ℤ₊, c > 0, and m: ℤ₊ → ℤ₊ such that ABP(G, m(n), r, c, (λ₁, ..., λ_h)), described in the next section, solves detection and runs in O(n log n) time.
For the symmetric SBM, this settles the achievability part of Conjecture 1, as the condition SNR > 1 reads in this case SNR = ((a − b)/k)² / ((a + (k − 1)b)/k) = (a − b)²/(k(a + (k − 1)b)) > 1.
3 The linearized acyclic belief propagation algorithm (ABP)
3.1 Vanilla version
We first present a simplified version of our algorithm that captures its essence while avoiding the technicalities required for the proof, described in Section 3.3.
ABP*(G, m, r, λ₁):
1. For each vertex v, randomly draw x_v from a Normal distribution. For all adjacent v, v′ in G, set y^(1)_{v,v′} = x_{v′}, and set y^(t)_{v,v′} = 0 whenever t < 1.
2. For each 1 < t ≤ m, set
   z^(t−1)_{v,v′} = y^(t−1)_{v,v′} − (1/(2|E(G)|)) Σ_{(v″,v‴)∈E(G)} y^(t−1)_{v″,v‴}    (2)
for all adjacent v, v′. For each adjacent v, v′ that are not part of a cycle of length r or less, set
   y^(t)_{v,v′} = Σ_{v″: (v′,v″)∈E(G), v″≠v} z^(t−1)_{v′,v″},
and for the other adjacent v, v′ in G, let v‴ be the other vertex in the cycle that is adjacent to v, let r′ be the length of the cycle, and set
   y^(t)_{v,v′} = Σ_{v″: (v′,v″)∈E(G), v″≠v} z^(t−1)_{v′,v″} − Σ_{v″: (v,v″)∈E(G), v″≠v′, v″≠v‴} z^(t−r′)_{v,v″},
unless t = r′, in which case set y^(t)_{v,v′} = Σ_{v″: (v′,v″)∈E(G), v″≠v} z^(t−1)_{v′,v″} − z^(1)_{v‴,v}.
3. Set y′_v = Σ_{v′: (v′,v)∈E(G)} y^(m)_{v,v′} for every v ∈ G, and return ({v : y′_v > 0}, {v : y′_v ≤ 0}).
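The following NumPy sketch (ours, with our own variable names) implements the r = 2 case of ABP* on a graph given as an adjacency dictionary; the short-cycle compensation needed for r > 2 is omitted for brevity.

```python
import numpy as np

def abp_star_r2(adj, m, seed=0):
    """adj: dict mapping each vertex to the list of its neighbors
    (both directions present). Returns the two-set partition of step 3."""
    rng = np.random.default_rng(seed)
    edges = [(v, u) for v in adj for u in adj[v]]      # 2|E| directed edges
    x = {v: rng.normal() for v in adj}                 # step 1: random init
    y = {(v, u): x[u] for (v, u) in edges}             # y^{(1)}_{v,v'} = x_{v'}
    for _ in range(2, m + 1):
        mean_y = sum(y.values()) / len(edges)          # (1/2|E|) sum of y^{(t-1)}
        z = {e: y[e] - mean_y for e in edges}          # centered messages, eq. (2)
        y = {(v, u): sum(z[(u, w)] for w in adj[u] if w != v)
             for (v, u) in edges}                      # nonbacktracking update
    y_out = {v: sum(y[(v, u)] for u in adj[v]) for v in adj}   # step 3
    pos = {v for v in adj if y_out[v] > 0}
    return pos, set(adj) - pos
```

Centering the messages by their global mean at every iteration is what prevents the bias toward one community from building up, as discussed in the remarks below.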
Remarks. (1) In the r = 2 case, one can exit step 2 after the second line. As mentioned above, we rely on a less compact version of the algorithm to prove the theorem, but expect that the above also succeeds at detection as long as m > 2 ln(n)/ln(SNR).
(2) What the algorithm does if (v, v′) is in multiple cycles of length r or less is unspecified, as there is no such edge with probability 1 − o(1) in the sparse SBM. This can be modified for more general settings, applying the adjustment independently for each such cycle, i.e., setting
   y^(t)_{v,v′} = Σ_{v″: (v′,v″)∈E(G), v″≠v} z^(t−1)_{v′,v″} − Σ_{r′=1}^{r} Σ_{v‴: (v,v‴)∈E(G)} C^(r′)_{v‴,v,v′} Σ_{v″: (v,v″)∈E(G), v″≠v′, v″≠v‴} z^(t−r′)_{v,v″},
where C^(r′)_{v‴,v,v′} denotes the number of length-r′ cycles that contain v‴, v, v′ as consecutive vertices.
(3) The purpose of setting z^(t−1)_{v,v′} as in step 2 is to ensure that the average value of the y^(t) is approximately 0, and thus that the eventual division of the vertices into two sets is roughly even. An alternate way of doing this is to simply let z^(t−1)_{v,v′} = y^(t−1)_{v,v′} and then compensate for any bias of y^(t) towards positive or negative values at the end. More specifically, define Y to be the n × m matrix such that for all t and v, Y_{v,t} = Σ_{v′: (v′,v)∈E(G)} y^(t)_{v,v′}, and M to be the m × m matrix such that M_{i,i} = 1 and M_{i,i+1} = −λ₁ for all i, with all other entries of M equal to 0. Then set y′ = Y M^{m′} e_m, where e_m ∈ ℝ^m denotes the unit vector with 1 in the m-th entry, and m′ is a suitable integer.
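A small NumPy sketch (ours) of this alternate debiasing, which builds M and forms y′ = Y M^{m′} e_m:

```python
import numpy as np

def debias(Y, lam1, m_prime):
    """Y: n x m matrix of walk sums; lam1: dominant eigenvalue of PQ.
    Returns y' = Y M^{m'} e_m, where M has 1 on the diagonal and
    -lam1 on the first superdiagonal."""
    n, m = Y.shape
    M = np.eye(m) + np.diag(np.full(m - 1, -lam1), k=1)
    e_m = np.zeros(m)
    e_m[-1] = 1.0
    return Y @ np.linalg.matrix_power(M, m_prime) @ e_m
```

Each multiplication by M replaces column t by column t minus λ₁ times column t + 1, so applying M^{m′} suppresses the component of Y growing at rate λ₁.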
3.2 Spectral implementation
One way of looking at this algorithm for r = 2 is the following. Given a vertex v in community i, the expected number of vertices v′ in community j that are adjacent to v is approximately e_j · PQe_i. For any such v′, the expected number of vertices in community j′ that are adjacent to v′, not counting v, is approximately e_{j′} · PQe_j, and so on. In order to explore this connection, define the graph's nonbacktracking walk matrix W as the 2|E(G)| × 2|E(G)| matrix such that for all v ∈ V(G) and all distinct v′ and v″ adjacent to v, W_{(v,v″),(v′,v)} = 1, and all other entries in W are 0.
Now, let w be an eigenvector of PQ with eigenvalue λ_i, and let w̃ ∈ ℝ^{2|E(G)|} be the vector such that w̃_{(v,v′)} = w_{σ_{v′}}/p_{σ_{v′}} for all (v,v′) ∈ E(G). For any small t, we would expect that w̃ · W^t w̃ ≈ λ_i^t ‖w̃‖₂², which strongly suggests that w̃ is correlated with an eigenvector of W with eigenvalue λ_i. For any such w with i > 1, dividing G's vertices into those v with w_{σ_v} > 0 and those with w_{σ_v} < 0 would put all vertices from some communities in the first set, and all vertices from the other communities in the second. So, we suspect that an eigenvector of W with its eigenvalue of second greatest magnitude would have entries that are correlated with the corresponding vertices' communities.
We could simply extract this eigenvector, but a faster approach would be to take a random vector y and then compute W^m y for some suitably large m. That will be approximately equal to a linear combination of W's dominant eigenvectors. Its dominant eigenvector is expected to have an eigenvalue of approximately λ₁ and to have all of its entries approximately equal, so if instead we compute (W − λ₁/(2|E(G)|)·J)^m y, where J is the matrix with all entries equal to 1, the component of y proportional to W's dominant eigenvector will be reduced to negligible magnitude, leaving a vector that is approximately proportional to W's eigenvector of second largest eigenvalue. This is essentially what the ABP algorithm does for r = 2.
This vanilla approach does, however, not extend obviously to the case with multiple eigenvalues. In such cases, we will have to subtract multiples of the identity matrix instead of J, because we will not know enough about W's eigenvectors to find a matrix that cancels out one of them in particular. These are significant challenges to overcome to prove the general result and Conjecture 1.
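A dense-matrix sketch (ours; practical only for small graphs) of this compensated power iteration on W for r = 2:

```python
import numpy as np

def nb_power_iteration(adj, m, lam1, seed=0):
    """Power iteration on the nonbacktracking matrix W (r = 2), subtracting
    lam1/(2|E|) * J at each step to suppress the dominant eigendirection."""
    rng = np.random.default_rng(seed)
    edges = [(v, u) for v in adj for u in adj[v]]   # 2|E| directed edges
    idx = {e: i for i, e in enumerate(edges)}
    W = np.zeros((len(edges), len(edges)))
    for (vp, v) in edges:                           # column index (v', v)
        for w in adj[v]:
            if w != vp:                             # row index (v, v''), v'' != v'
                W[idx[(v, w)], idx[(vp, v)]] = 1.0
    y = rng.normal(size=len(edges))
    for _ in range(m):
        y = W @ y - (lam1 / len(edges)) * y.sum()   # (W - lam1 * J / 2|E|) y
    return edges, y   # entries correlated with W's second eigenvector
```

Since J y equals the all-ones vector times the sum of the entries of y, the compensation step costs only O(|E|) per iteration, which is what keeps the message-passing form of the algorithm nearly linear time.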
For higher values of r, the spectral view of ABP can be understood as described above, but introducing the following generalized nonbacktracking operator as a replacement for W:
Definition 5. Given a graph, define the r-nonbacktracking matrix W^(r), of dimension equal to the number of directed paths of r − 1 edges in the graph, with entry W^(r)_{(v₁,v₂,...,v_r),(v′₁,v′₂,...,v′_r)} equal to 1 if v′_{i+1} = v_i for each 1 ≤ i < r and v′₁ ≠ v_r, and equal to 0 otherwise.
Figure 1: Two paths of length 3 that contribute to an entry of 1 in W (4) .
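To make the definition concrete, here is a short sketch (ours) that materializes W^(r) from the list of directed paths on r vertices:

```python
from itertools import product

def r_nonbacktracking_matrix(paths):
    """paths: all directed paths of r - 1 edges, given as tuples of r vertices.
    Entry (p, q) is 1 iff q[1:] == p[:-1] (p extends q by one step) and
    q[0] != p[-1] (the step does not backtrack to the vertex left r steps ago).
    """
    index = {p: i for i, p in enumerate(paths)}
    W = [[0] * len(paths) for _ in paths]
    for p, q in product(paths, repeat=2):
        if q[1:] == p[:-1] and q[0] != p[-1]:
            W[index[p]][index[q]] = 1
    return W
```

For r = 2 this reduces to the standard nonbacktracking matrix W used above.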
3.3 Full version
The main modifications in the proof are as follows. First, at the end we assign vertices to sets with probabilities that scale linearly with their entries in y′, instead of simply assigning them based on the signs of their entries. This allows us to convert the fact that the average values of y′_v for v in different communities are different into a detection result. Second, we remove a small fraction of the edges from the graph at random at the beginning of the algorithm (the graph-splitting step), defining y″_v to be the sum of y′_{v′} over all v′ connected to v by paths of a suitable length with removed edges at their ends, in order to eliminate some dependency issues. Also, instead of just compensating for PQ's dominant eigenvalue, we also compensate for some of its smaller eigenvalues, and we subtract multiples of y^(t−1) from y^(t) for some t, instead of subtracting the average value of y^(t) from all of its entries for all t. We refer to [19] for the full description of the algorithm. Note that while it is easier to prove that the ABP algorithm works, the ABP* algorithm should work at least as well in practice.
4 Proof technique
For simplicity, consider first the two-community symmetric case. Consider determining the community of v using belief propagation, assuming some preliminary guesses about the vertices t edges away from it, and assuming that the subgraph of G induced by the vertices within t edges of v is a tree. For any vertex v′ such that d(v, v′) < t, let C_{v′} be the set of the children of v′. If we believe, based on either our prior knowledge or propagation of beliefs up to these vertices, that v″ is in community 1 with probability (1/2) + (1/2)ε_{v″} for each v″ ∈ C_{v′}, then the algorithm will conclude that v′ is in community 1 with a probability of
   Π_{v″∈C_{v′}} ((a+b)/2 + ((a−b)/2)ε_{v″}) / [ Π_{v″∈C_{v′}} ((a+b)/2 + ((a−b)/2)ε_{v″}) + Π_{v″∈C_{v′}} ((a+b)/2 − ((a−b)/2)ε_{v″}) ].
If all of the ε_{v″} are close to 0, then this is approximately equal to (see also [9, 22])
   (1 + Σ_{v″∈C_{v′}} ((a−b)/(a+b))ε_{v″}) / (2 + Σ_{v″∈C_{v′}} ((a−b)/(a+b))ε_{v″} + Σ_{v″∈C_{v′}} (−1)((a−b)/(a+b))ε_{v″}) = 1/2 + ((a−b)/(a+b)) Σ_{v″∈C_{v′}} ε_{v″}/2.
That means that the belief propagation algorithm will ultimately assign an average probability of approximately 1/2 + (1/2)((a−b)/(a+b))^t Σ_{v″: d(v,v″)=t} ε_{v″} to the possibility that v is in community 1. If there exists ε such that E_{v″∈Ω₁}[ε_{v″}] = ε and E_{v″∈Ω₂}[ε_{v″}] = −ε (recall that Ω_i = {v : σ_v = i}), then on average we would expect to assign a probability of approximately 1/2 + (1/2)((a−b)²/(2(a+b)))^t ε to v being in its actual community, which is enhanced as t increases when SNR > 1. Note that since the variance in the probability assigned to the possibility that v is in its actual community will also grow as ((a−b)²/(2(a+b)))^t, the chance that this will assign a probability of greater than 1/2 to v being in its actual community will be 1/2 + Θ(((a−b)²/(2(a+b)))^{t/2} ε).
One idea for the initial estimate is to simply guess the vertices' communities at random, in the expectation that the fractions of the vertices from the two communities assigned to a community will differ by Θ(1/√n) by the Central Limit Theorem. Unfortunately, for any t large enough that ((a−b)²/(2(a+b)))^{t/2} > √n, we have that ((a+b)/2)^t > n, which means that our approximation breaks down before t gets large enough to detect communities. In fact, t would have to be so large that not only would neighborhoods not be tree-like, but vertices would have to be exhausted.
One way to handle this would be to stop counting vertices that are t edges away from v, and instead
count each vertex a number of times equal to the number of length-t paths from v to it.² Unfortunately,
finding all length t paths starting at v can be done efficiently enough only for values of t that are
smaller than what is needed to amplify a random guess to the extent needed here. We could instead
calculate the number of length t walks from v to each vertex more quickly, but this count would
probably be dominated by walks that go to a high degree vertex and then leave and return to it
repeatedly, which would throw the calculations off. On the other hand, most reasonably short
nonbacktracking walks are likely to be paths, so counting each vertex a number of times equal to the
number of nonbacktracking walks of length t from v to it seems like a reasonable modification. That
said, it is still possible that there is a vertex that is in cycles such that most nonbacktracking walks
simply leave and return to it many times. In order to mitigate this, we use r-nonbacktracking walks,
walks in which no vertex reoccurs within r steps of a previous occurrence, such that walks cannot
return to any vertex more than t/r times.
Unfortunately, this algorithm would not work because the original guesses will inevitably be biased
towards one community or the other. So, most of the vertices will have more r-nonbacktracking
walks of length t from them to vertices that were suspected of being in that community than the other.
One way to deal with this bias would be to subtract the average number of r-nonbacktracking walks
to vertices in each set from each vertex?s counts. Unfortunately, that will tend to undercompensate
for the bias when applied to high degree vertices and overcompensate for it when applied to low
² This type of approach is considered in [23].
degree vertices. So, we modify the algorithm that counts the difference between the number of
r-nonbacktracking walks leading to vertices in the two sets to subtract off the average at every step in
order to prevent a major bias from building up.
One of the features of our approach is that it extends fairly naturally to the general SBM. Despite the
potential presence of more than 2 communities, we still only assign one value to each vertex, and
output a partition of the graph?s vertices into two sets in the expectation that different communities
will have different fractions of their vertices in the second set. One complication is that the method of
preventing the results from being biased towards one comunity does not work as well in the general
case. The problem is, by only assigning one value to each vertex, we compress our beliefs onto one
dimension. That means that the algorithm cannot detect biases orthogonal to that dimension, and
thus cannot subtract them off. So, we cancel out the bias by subtracting multiples of the counts of the
numbers of r-nonbacktracking walks of some shorter length that will also have been affected by it.
More concretely, we assign each vertex an initial value x_v at random. Then, we compute a matrix Y such that for each v ∈ G and 0 ≤ t ≤ m, Y_{v,t} is the sum, over all r-nonbacktracking walks of length t ending at v, of the initial values associated with their starting vertices. Next, for each v we compute a weighted sum of Y_{v,1}, Y_{v,2}, ..., Y_{v,m}, where the weighting is such that any biases in the entries of Y resulting from the initial values should mostly cancel out. We then use these to classify the vertices.
Proof outline for Theorem 1. If we were going to prove that ABP? worked, we would probably define Wr[S] ((v0 , ..., vm )) to be 1 if for every consecutive subsequence (i1 , . . . , im0 ) ?
S, we have that vi1 ?1 , ..., vim0 is a r-nonbacktracking walk, and 0 otherwise. Next, deP
?|S|
fine Wr ((v0 , ..., vm )) =
Wr[S] ((v0 , ..., vm )) and Wm (x, v) =
S?(1,...,m) (?2|E(G)|)
P
0
0
v0 ,...,vm ?G:vm =v xv0 Wr ((v0 , ..., vm )), and we would have that yv = Wm (x, v) for x and y
?
as in ABP . As explained above, we rely on a different approach to cope with the general SBM.
In order to prove that the algorithm works, we make the following definitions.
Definition 6. For any r ? 1 and series of vertices v0 , ..., vm , let Wr ((v0 , ..., vm )) be 1 if v0 , ..., vm
is an r-nonbacktracking walk and 0 otherwise. Also, for any r ? 1, series of vertices v0 , ..., vm and
c0 , ..., cm ? Rm+1 , let
?
?
X
Y
?
W(c0 ,...,cm )[r] ((v0 , ..., vm )) =
(?ci /n)? Wr ((vi0 , vi1 , ..., vim0 )).
(i0 ,...,im0 )?(0,...,m)
i6?(i0 ,...,im0 )
In other words, W(c0 ,...,cm )[r] ((v0 , ..., vm )) is the sum over all subsequences of (v0 , ..., vm ) that form
r-nonbacktracking walks of the products of the negatives of the ci /n corresponding to the elements
of (v0 , ..., vm ) that are not in the walks. Finally, let
X
Wm/{ci } (x, v) =
xv0 W(c0 ,...,cm )[r] ((v0 , ..., vm )).
v0 ,...,vm ?G:vm =v
The reason these definitions are important is that for each $v$ and $t$, we have that
$$Y_{v,t} = \sum_{v_0, \ldots, v_t \in G \,:\, v_t = v} x_{v_0} \, W_r((v_0, \ldots, v_t))$$
and $y_v^{(m)}$ is equal to $W_{m/\{c_i\}}(x, v)$ for suitable $(c_0, \ldots, c_m)$. For the full ABP algorithm, both terms
in the above equality refer to $G$ as it is after some of its edges are removed at random in the "graph
splitting" step (which explains the presence of $1 - \gamma$ factors in [19]). One can easily prove that if
$v_0, \ldots, v_t$ are distinct, $\sigma_{v_0} = i$ and $\sigma_{v_t} = j$, then
$$E[W_{(c_0, \ldots, c_t)[r]}((v_0, \ldots, v_t))] = e_i \cdot P^{-1} (PQ)^t e_j / n^t,$$
and most of the rest of the proof centers around showing that the $W_{(c_0, \ldots, c_m)[r]}((v_0, \ldots, v_m))$ such that
$v_0, \ldots, v_m$ are not all distinct do not contribute enough to the sums to matter. That starts with a bound
on $|E[W_{(c_0, \ldots, c_m)[r]}((v_0, \ldots, v_m))]|$ whenever there is no $i$ and $j \ne j'$ such that $v_j = v_{j'}$, $|i - j| \le r$, and
$c_i \ne 0$; and continues with an explanation of how to re-express any $W_{(c_0, \ldots, c_m)[r]}((v_0, \ldots, v_m))$ as a
linear combination of expressions of the form $W_{(c'_0, \ldots, c'_{m'})[r]}((v'_0, \ldots, v'_{m'}))$ which have this property.
Then we use these to prove that for suitable $(c_0, \ldots, c_m)$, the sum of $|E[W_{(c_0, \ldots, c_m)[r]}((v_0, \ldots, v_m))]|$
over all sufficiently repetitive $(v_0, \ldots, v_m)$ is sufficiently small. Next, we observe that
$$W_{(c_0, \ldots, c_m)[r]}((v_0, \ldots, v_m)) \, W_{(c''_0, \ldots, c''_m)[r]}((v''_0, \ldots, v''_m)) = W_{(c_0, \ldots, c_m, 0, \ldots, 0, c''_m, \ldots, c''_0)[r]}((v_0, \ldots, v_m, u_1, \ldots, u_r, v''_m, \ldots, v''_0))$$
if $u_1, \ldots, u_r$ are new vertices that are connected to all other vertices, and use that fact to translate
bounds on expected values to bounds on variances.
That allows us to show that if $m$ and $(c_0, \ldots, c_m)$ have the appropriate properties and $w$ is an
eigenvector of $PQ$ with eigenvalue $\lambda_j$ and magnitude 1, then with high probability
$$\Big| \sum_{v \in V(G)} w_{\sigma_v}/p_{\sigma_v} \, W_{m/\{c_i\}}(x, v) \Big| = O\Big( \sqrt{n} \prod_{0 \le i \le m} |\lambda_j - c_i| + \frac{n}{\log(n)} \prod_{0 \le i \le m} |\lambda_s - c_i| \Big)$$
and
$$\Big| \sum_{v \in V(G)} w_{\sigma_v}/p_{\sigma_v} \, W_{m/\{c_i\}}(x, v) \Big| = \omega\Big( \sqrt{n} \prod_{0 \le i \le m} |\lambda_j - c_i| \Big).$$
We also show that under appropriate conditions $\mathrm{Var}[W_{m/\{c_i\}}(x, v)] = O\big( (1/n) \prod_{0 \le i \le m} (\lambda_s - c_i)^2 \big)$.
Together, these facts would allow us to prove that the differences between the average values of
$W_{m/\{c_i\}}(x, v)$ in different communities are large enough relative to the variance of $W_{m/\{c_i\}}(x, v)$
to let us detect communities, except for one complication. Namely, these bounds are not quite
good enough to rule out the possibility that there is a constant probability scenario in which the
empirical variance of $\{W_{m/\{c_i\}}(x, v)\}$ is large enough to disrupt our efforts at using $W_{m/\{c_i\}}(x, v)$
for detection. Although we do not expect this to actually happen, we rely on the graph splitting step
described in Section 3.3 to discard this potential scenario.
5 Conclusions and extensions
This algorithm is intended to classify vertices with an accuracy nontrivially better than that attained
by guessing randomly, but it is not hard to convert this to an algorithm that classifies vertices with
optimal accuracy. Once one has reasonable initial guesses of which communities the vertices are in,
one can simply run full belief propagation on these guesses. This requires bridging the gap from
dividing the vertices into two sets that are correlated with their communities in an unknown way to
assigning each vertex a nontrivial probability distribution for how likely it is to be in each community.
One way to do this is to divide $G$'s vertices into those that have positive and negative values of $y'$,
and divide its directed edges into those that have positive and negative values of $y^{(m)}$. We would
generally expect that edges from vertices in different communities will have different probabilities of
corresponding to positive values of $y^{(m)}$. Now, let $d_0$ be the largest integer such that at least $\sqrt{n}$ of the
vertices have degree at least $d_0$, let $S$ be the set of vertices with degree exactly $d_0$, and for each $v \in S$,
let $\rho_v = |\{v' : (v, v') \in E(G),\, y'_{(v,v')} > 0\}|$. We would expect that for any given community $i$, the
probability distribution of $\rho_v$ for $v \in \Omega_i$ would be essentially a binomial distribution with parameters
$d_0$ and some unknown probability. So, compute probabilities such that the observed distribution of
values of $\rho_v$ approximately matches the appropriate weighted sum of $k$ binomial distributions.
Next, go through all identifications of the communities with these binomial distributions that are
consistent with the community sizes and determine which one most accurately predicts the connectivity
rates between vertices that have each possible value of $\rho$ when the edge in question is ignored, and
treat this as the mapping of communities to binomial distributions. Then, for each adjacent $v$ and $v'$,
determine the probability distribution of what community $v$ is in based on the signs of $y'_{(v'', v)}$ for all
$v'' \ne v'$. Finally, use these as the starting probabilities for BP with a depth of $\ln(n)/3\ln(\lambda_1)$.
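A rough numpy/scipy sketch of the binomial mixture fit used in this initialization follows; the simple EM loop and all names are our own illustrative choices, and the matching of components to communities and the BP run itself are omitted:

```python
import numpy as np
from scipy.stats import binom

def fit_binomial_mixture(counts, d0, k, iters=200):
    # counts: the per-vertex statistics for vertices of degree exactly d0;
    # fit a k-component mixture of Binomial(d0, p_j) distributions by EM
    counts = np.asarray(counts)
    p = np.linspace(0.25, 0.75, k)          # crude initialization
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        like = np.stack([w[j] * binom.pmf(counts, d0, p[j])
                         for j in range(k)])           # shape (k, |S|)
        resp = like / like.sum(axis=0, keepdims=True)  # E-step
        w = resp.mean(axis=1)                          # M-step
        p = (resp * counts).sum(axis=1) / (d0 * resp.sum(axis=1))
    return w, p   # mixture weights and per-community probabilities
```

The fitted components would then be matched to communities as described above and used as the starting probabilities for the BP run.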
Acknowledgments
This research was supported by NSF CAREER Award CCF-1552131 and ARO grant W911NF-16-1-0051.
References
[1] P. W. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[2] Peter J. Bickel and Aiyou Chen. A nonparametric view of network models and Newman-Girvan and other modularities. Proceedings of the National Academy of Sciences, 106(50):21068–21073, 2009.
[3] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, Jan 2011.
[4] C. Gao, Z. Ma, A. Y. Zhang, and H. H. Zhou. Achieving optimal misclassification proportion in stochastic block model. ArXiv e-prints, May 2015.
[5] T. N. Bui, S. Chaudhuri, F. T. Leighton, and M. Sipser. Graph bisection algorithms with good average case behavior. Combinatorica, 7(2):171–191, 1987.
[6] R. B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In 28th Annual Symposium on Foundations of Computer Science, pages 280–285, 1987.
[7] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529–537, 2001.
[8] Béla Bollobás, Svante Janson, and Oliver Riordan. The phase transition in inhomogeneous random graphs. Random Struct. Algorithms, 31(1):3–122, August 2007.
[9] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84:066106, December 2011.
[10] Elchanan Mossel, Joe Neeman, and Allan Sly. Reconstruction and estimation in the planted partition model. Probability Theory and Related Fields, 162(3):431–461, 2015.
[11] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Comb. Probab. Comput., 19(2):227–284, March 2010.
[12] V. Vu. A simple SVD algorithm for finding hidden partitions. ArXiv:1404.3918, April 2014.
[13] Olivier Guédon and Roman Vershynin. Community detection in sparse networks via Grothendieck's inequality. Probability Theory and Related Fields, 165(3):1025–1049, 2016.
[14] Andrea Montanari and Subhabrata Sen. Semidefinite programs on sparse random graphs and their application to community detection. In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, pages 814–827, New York, NY, USA, 2016. ACM.
[15] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In STOC 2014: 46th Annual Symposium on the Theory of Computing, pages 1–10, New York, United States, June 2014.
[16] E. Mossel, J. Neeman, and A. Sly. A proof of the block model threshold conjecture. Available online at arXiv:1311.4115 [math.PR], January 2014.
[17] Charles Bordenave, Marc Lelarge, and Laurent Massoulié. Non-backtracking spectrum of random graphs: Community detection and non-regular Ramanujan graphs. In FOCS '15, pages 1347–1357, Washington, DC, USA, 2015. IEEE Computer Society.
[18] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99, pages 467–475, San Francisco, CA, USA, 1999. Morgan Kaufmann Publishers Inc.
[19] E. Abbe and C. Sandon. Detection in the stochastic block model with multiple clusters: proof of the achievability conjectures, acyclic BP, and the information-computation gap. ArXiv:1512.09080, Dec. 2015.
[20] J. Banks and C. Moore. Information-theoretic thresholds for community detection in sparse networks. ArXiv:1601.02658, January 2016.
[21] David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914–18919, 2009.
[22] Florent Krzakala, Cristopher Moore, Elchanan Mossel, Joe Neeman, Allan Sly, Lenka Zdeborová, and Pan Zhang. Spectral redemption in clustering sparse networks. Proceedings of the National Academy of Sciences, 110(52):20935–20940, 2013.
[23] S. Bhattacharyya and P. J. Bickel. Community detection in networks using graph distance. ArXiv:1401.3915, January 2014.
5,931 | 6,366 | Yggdrasil: An Optimized System for Training Deep
Decision Trees at Scale
Firas Abuzaid1 , Joseph Bradley2 , Feynman Liang3 , Andrew Feng4 , Lee Yang4 ,
Matei Zaharia1 , Ameet Talwalkar5
1
2
MIT CSAIL, Databricks, 3 University of Cambridge, 4 Yahoo, 5 UCLA
Abstract
Deep distributed decision trees and tree ensembles have grown in importance due
to the need to model increasingly large datasets. However, P LANET, the standard
distributed tree learning algorithm implemented in systems such as XGB OOST
and Spark ML LIB, scales poorly as data dimensionality and tree depths grow. We
present Y GGDRASIL, a new distributed tree learning method that outperforms
existing methods by up to 24?. Unlike P LANET, Y GGDRASIL is based on vertical partitioning of the data (i.e., partitioning by feature), along with a set of
optimized data structures to reduce the CPU and communication costs of training. Y GGDRASIL (1) trains directly on compressed data for compressible features
and labels; (2) introduces efficient data structures for training on uncompressed
data; and (3) minimizes communication between nodes by using sparse bitvectors.
Moreover, while P LANET approximates split points through feature binning, Y G GDRASIL does not require binning, and we analytically characterize the impact of
this approximation. We evaluate Y GGDRASIL against the MNIST 8M dataset and
a high-dimensional dataset at Yahoo; for both, Y GGDRASIL is faster by up to an
order of magnitude.
1 Introduction
Decision tree-based methods, such as random forests and gradient-boosted trees, have a rich and
successful history in the machine learning literature. They remain some of the most widely-used
models for both regression and classification tasks, and have proven to be practically advantageous
for several reasons: they are arbitrarily expressive, can naturally handle categorical features, and are
robust to a wide range of hyperparameter settings [4].
As datasets have grown in scale, there is an increasing need for distributed algorithms to train decision
trees. Google's PLANET framework [12] has been the de facto approach for distributed tree learning,
with several popular open source implementations, including Apache Mahout, Spark MLlib, and
XGBoost [1, 11, 7]. PLANET partitions the training instances across machines and parallelizes the
computation of split points and stopping criteria over them, thus effectively leveraging a large cluster.
While PLANET works well for shallow trees and small numbers of features, it has high communication
costs when tree depths and data dimensionality grow. PLANET's communication cost is linear in the
number of features $p$, and is linear in $2^D$, where $D$ is the tree depth. As demonstrated by several
studies [13, 3, 8], datasets have become increasingly high-dimensional (large $p$) and complex, often
requiring high-capacity models (e.g., deep trees with large $D$) to achieve good predictive accuracy.
We present Yggdrasil, a new distributed tree learning system that scales well to high-dimensional
data and deep trees. Unlike PLANET, Yggdrasil is based on vertical partitioning of the data [5]: it
assigns a subset of the features to each worker machine, and asks it to compute an optimal split for
each of its features. These candidate splits are then sent to a master, which selects the best one. On
top of the basic idea of vertical partitioning, Yggdrasil introduces three novel optimizations:
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
• Training on compressed data without decompression: Yggdrasil compresses features via
run-length encoding and encodes labels using dictionary compression. We design a novel
split-finding scheme that trains directly on compressed data for compressible features, which reduces
runtime by up to 20%.
• Efficient training on uncompressed data: Yggdrasil's data structures let each worker implicitly
store the split history of the tree without introducing any memory overheads. Each worker
requires only a sequential scan over the data to perform greedy split-finding across all leaf nodes
in the tree, and only one set of sufficient statistics is kept in memory at a time.
• Minimal communication between nodes: Yggdrasil uses sparse bit vectors to reduce
inter-machine communication costs during training.
Together, these optimizations yield an algorithm that is asymptotically less expensive than PLANET
on high-dimensional data and deep trees: Yggdrasil's communication cost is $O(2^D + Dn)$, in
contrast to $O(2^D p)$ for PLANET-based methods, and its data structure optimizations yield up to 2×
savings in memory and 40% savings in time over a naive implementation of vertical partitioning.
These optimizations enable Yggdrasil to scale up to thousands of features and tree depths up to 20.
On tree depths greater than 10, Yggdrasil outperforms MLlib and XGBoost by up to 6× on the
MNIST 8M dataset, and up to 24× on a dataset with 2 million training examples and 3500 features
modeled after the production workload at Yahoo.
Notation. We define $n$ and $p$ as the number of instances and features in the training set, $D$ as the
maximum depth of the tree, $B$ as the number of histogram buckets to use in PLANET, $k$ as the number
of workers in the cluster, and $W_j$ as the $j$th worker.
2 PLANET: Horizontal Partitioning
We now describe the standard algorithm for training a decision tree in a distributed fashion via
horizontal partitioning, inspired by PLANET [12]. We assume that $B$ potential thresholds for each
of the $p$ features are considered; thus, we define $S$ as the set of cardinality $pB$ containing all split
candidates. For a given node $i$, define the set $\mathcal{I}$ as the instances belonging to this node. We can then
express the splitting criterion $\mathrm{Split}(\cdot)$ for node $i$ as $\mathrm{Split}(i) = \arg\max_{s \in S} f(\sum_{x \in \mathcal{I}} g(x, s))$ for
functions $f$ and $g$ where $f : \mathbb{R}^c \to \mathbb{R}$, $g : \mathbb{R}^p \times \mathbb{N} \to \mathbb{R}^c$, and $c \in O(1)$. Intuitively, for each split
candidate $s \in S$, $g(x, s)$ computes the $c$ sufficient statistics for each point $x$; $f(\cdot)$ aggregates these
sufficient statistics to compute the node purity for candidate $s$; and $\mathrm{Split}(i)$ returns the split candidate
that maximizes the node purity. Hence, if worker $j$ contains instances in the set $\mathcal{J}$, we have:
$$\mathrm{Split}(i) = \arg\max_{s \in S} f\left( \sum_{j=1}^{k} g_j(s) \right) \quad \text{where} \quad g_j(s) = \sum_{x \in \mathcal{I} \cap \mathcal{J}} g(x, s) \qquad (1)$$
This observation suggests a natural distributed algorithm. We assume a star-shaped distributed
architecture with a master node and k worker nodes. Data are distributed using horizontal partitioning;
i.e., each worker node stores a subset of the n instances. For simplicity, we assume that we train our
tree to a fixed depth D. On the tth iteration of the algorithm, we compute the optimal splits for all
nodes on the tth level of the tree via a single round trip of communication between the master and the
workers. Each tree node i is split as follows:
1. The $j$th worker locally computes sufficient statistics $g_j(s)$ from Equation 1 for all $s \in S$.
2. Each worker communicates all statistics $g_j(s)$ to the master ($Bp$ in total).
3. The master computes the best split $s^* = \mathrm{Split}(i)$ from Equation 1.
4. The master broadcasts $s^*$ to the workers, who update their local states to keep track of which
instances are assigned to which child nodes. (A single-process sketch of these four steps is given below.)
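The following is a minimal, single-process sketch of steps 1-4; the function names and the use of plain Python lists in place of a real cluster are our own illustrative choices:

```python
import numpy as np

def planet_best_split(worker_shards, split_candidates, g, f):
    # worker_shards: per-worker lists of instances (horizontal partition)
    # g(x, s): length-c sufficient-statistics vector; f(stats): node purity
    # Steps 1-2: each worker aggregates g over its shard; the k dicts below
    # stand in for the B*p tuples each worker would ship to the master
    per_worker = [{s: sum(g(x, s) for x in shard) for s in split_candidates}
                  for shard in worker_shards]
    # Step 3: the master sums the k partial statistics and scores each split
    best = max(split_candidates,
               key=lambda s: f(sum(stats[s] for stats in per_worker)))
    return best  # Step 4: broadcast `best` back to the workers
```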
Overall, the computation is linear in $n$, $p$, and $D$, and is trivially parallelizable. Yet the algorithm
is communication-intensive. For each tree node, step 2 above requires communicating $kBp$ tuples
of size $c$. With $2^D$ total nodes, the total communication is $2^D kpBc$ floating point values, which is
exponential in tree depth $D$, and linear in $B$, the number of thresholds considered for each feature.
Moreover, using $B < n$ thresholds results in an approximation of the tree trained on a single machine,
and can result in adverse statistical consequences, as noted empirically by [7]. We present a theoretical
analysis of the impact of this approximation in Section 4.
3 Yggdrasil: Vertical Partitioning
We propose an alternative algorithm to address the aforementioned shortcomings. Rather than
partition the data by instance, we partition by feature: each of the $k$ worker machines stores all feature
values for $p/k$ of the features, as well as the labels for all instances. This organizational strategy has
two crucial benefits: (1) each worker can locally compute the node purity for a subset of the split
candidates, which significantly reduces the communication bottleneck; and (2) we can efficiently
consider all possible $B = n - 1$ splits.
We can derive an expression for $\mathrm{Split}(i)$ with vertical partitioning analogous to Equation 1.
Redefining $\mathcal{J}$ to be the set of features stored on the $j$th worker, we have
$$\mathrm{Split}(i) = \arg\max_{j} f_j \quad \text{where} \quad f_j = \arg\max_{s \in \mathcal{J}} f\left( \sum_{x \in \mathcal{I}} g(x, s) \right) \qquad (2)$$
Intuitively, each worker identifies its top split candidate among its set of features, and the master then
chooses the best split candidate among these top k candidates.
As with horizontal partitioning, computation is linear in $n$, $p$, and $D$, and is easily parallelizable.
However, the communication profile is quite different, with two major sources of communication.
For each node, each worker communicates one tuple of size $c$, resulting in $2^D kc$ communication
for all nodes. When training each level of the tree, $n$ bits are communicated to indicate the split
direction (left/right) for each training point. Hence, the overall communication is $O(2^D k + Dnk)$.
In contrast to the $O(2^D kpB)$ communication cost of horizontal partitioning, vertical partitioning has
no dependence on $p$, and, for large $n$, the $O(Dnk)$ term will likely be the bottleneck.
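As a back-of-the-envelope illustration of these two cost models (constants such as the tuple size c and 8-byte words are our own assumptions):

```python
def planet_cost(D, k, p, B, c=3, word=8):
    # O(2^D k p B) tuples of c floats shipped to the master
    return (2 ** D) * k * p * B * c * word           # bytes

def yggdrasil_cost(D, k, n, c=3, word=8):
    # O(2^D k) tuples, plus n bits broadcast to k workers per level
    return (2 ** D) * k * c * word + D * n * k // 8  # bytes

# For D = 15, k = 16, p = 2500, B = 32, n = 2_000_000, the two differ by
# roughly four orders of magnitude (~10^12 vs. ~10^8 bytes).
```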
[Figure 1: two panels. (a) x-axis: Num. Instances, Log Scale (n to 8n); y-axis: Num. Features (p to 8p);
regions labeled "Vertical Partitioning Better" (upper) and "Horizontal Partitioning Better" (lower).
(b) x-axis: Num. Instances, Log Scale (n to 8n); y-axis: Tree Depth (D to 5D); same two regions.]
(a) Regimes of (n, p) where each partitioning strategy dominates for D = 15, k = 16, B = 32.
(b) Regimes of (n, D) where each partitioning strategy dominates for p = 2500, k = 16, B = 32.
Figure 1: Communication cost tradeoffs between vertical and horizontal partitioning
Thus, there exists a set of tradeoffs between horizontal and vertical partitioning across different
regimes of n, p, and D, as illustrated in Figure 1. The overall trend is clear: for large p and D, vertical
partitioning can drastically reduce communication.
3.1 Algorithm
The Yggdrasil algorithm works as follows: at iteration $t$, we compute the optimal splits for all
nodes on the $t$th level of the tree via two round trips of communication between the master and the
workers. Like PLANET, all splits for a single depth $t$ are computed at once. For each node $i$ at depth
$t$, the following steps are performed:
ComputeBestSplit(i):
• The $j$th worker locally computes $f_j$ from Equation 2 and sends this to the master.
• The master selects $s^* = \mathrm{Split}(i)$. Let $f_{j^*}$ denote the optimal feature selected for $s^*$, and let $W_{j^*}$ be
the worker containing this optimal feature: $f_{j^*} \in W_{j^*}$.
bitVector = CollectBitVector($W_{j^*}$):
• The master requests a bitvector from $W_{j^*}$ in order to determine which child node (either left or
right) each training point $x \in \mathcal{I}$ should be assigned to.
BroadcastSplitInfo(bitVector):
• The master then broadcasts the bitvector to all $k$ workers. Each worker then updates its internal
state to prepare for the next iteration of training.
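A compact sketch of one such training iteration is given below; the worker methods (`best_local_split`, `bit_vector`, `apply_split`) and the `purity` field are hypothetical stand-ins for the per-worker logic, not Yggdrasil's actual API:

```python
def train_level(master_leaves, workers):
    # One iteration: compute the optimal split for every leaf at this depth.
    for leaf in master_leaves:
        # ComputeBestSplit: each worker proposes its best local feature split
        proposals = [w.best_local_split(leaf) for w in workers]   # k tuples
        j_star = max(range(len(workers)), key=lambda j: proposals[j].purity)
        s_star = proposals[j_star]
        # CollectBitVector: only worker j* holds the winning feature, so the
        # master fetches the left/right assignment of each instance from it
        bits = workers[j_star].bit_vector(leaf, s_star)
        # BroadcastSplitInfo: every worker updates its node assignments
        for w in workers:
            w.apply_split(leaf, bits)
```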
3.2 Optimizations
As we previously showed, vertical partitioning leads to asymptotically lower communication costs as
$p$ and $D$ increase. However, this asymptotic behavior does not necessarily translate to more efficient
tree learning; on the contrary, a naive implementation may easily lead to high CPU and memory
overheads, communication overhead, and poor utilization of the CPU cache. In Yggdrasil, we
introduce three novel optimizations for vertically partitioned tree learning that significantly improve
its scalability, memory usage, and performance.
3.2.1 Sparse Bitvectors for Reduced Communication Overhead
Once the master has found the optimal split $s^*$ for each leaf node $i$ in the tree, each worker must
then update its local features to reflect that the instances have been divided into new child nodes.
To accomplish this while minimizing communication, the workers and master communicate using
bitvectors. Specifically, after finding the optimal split, the master requests from worker $W_{j^*}$ a
corresponding bitvector for $s^*$; this bitvector encodes the partitioning of instances between the two
children of $i$. Once the master has collected all optimal splits for all leaf nodes, it broadcasts the
bitvectors out to all workers. This means that (assuming a fully balanced tree), for every depth $t$
during training, $2^t$ bitvectors, totaling $n$ bits, are sent from the $k$ workers.
Additionally, the $n$ bits are encoded in a sparse format [6], which offers much better compression
via packed arrays than a naive bitvector. This sparse encoding is particularly useful for imbalanced
trees: rather than allocate memory to encode a potential split for all nodes at depth $t$, we only allocate
memory for the nodes in which an optimal split was found. By taking advantage of sparsity, we can
send the $n$ bits between the master and the workers at only a fraction of the cost.
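A toy sketch of the idea follows; it is a stand-in for the packed-array format of [6], with our own field layout:

```python
def encode_split_directions(assignments):
    # assignments: {leaf_id: list of 0/1 left/right bits}, populated only for
    # leaves that were actually split this round (cheap for imbalanced trees)
    packed = {}
    for leaf, bits in assignments.items():
        ones = [i for i, b in enumerate(bits) if b]
        if 2 * len(ones) <= len(bits):          # store the sparser side
            packed[leaf] = ('ones', len(bits), ones)
        else:
            packed[leaf] = ('zeros', len(bits),
                            [i for i, b in enumerate(bits) if not b])
    return packed
```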
3.2.2 Training on Compressed Data without Decompression
In addition to its more favorable communication cost for large $p$ and $D$, Yggdrasil's vertical
partitioning strategy presents a unique optimization opportunity: the ability to efficiently compress
data by feature. Furthermore, because the feature values must be in sorted order to perform greedy
split-finding, we can use this to our advantage to perform lossless compression without sacrificing
recoverability. This leads to a clear optimization: feature compression via run-length encoding (RLE),
an idea that has been explored extensively in column-store databases [10, 14]. In addition to the
obvious in-memory savings, this technique also impacts the runtime performance of split-finding,
since the vast majority of feature values are now able to reside in the L3 cache. To the best of our
knowledge, Yggdrasil is the first system to apply this optimization to decision tree learning.
Many features compress well using RLE: sparse features, continuous features with few distinct
values, and categorical features with low arity. However, to train directly on compressed data without
decompressing, we must maintain the feature in sorted order throughout the duration of training,
a prerequisite for RLE. Therefore, to compute all splits for a given depth $t$, we introduce a data
structure to record the most recent splits at depth $t - 1$. Specifically, we create a mapping between
each feature value and the node $i$ at depth $t$ that it is currently assigned to.
At the end of an iteration of training, each worker updates this data structure by applying the bitvector
it receives from the master, which requires a single sequential scan over the data. All random accesses
are confined to the labels, which we also encode (when feasible) using dictionary compression. This
gives us much better cache density during split-finding: all random accesses no longer touch DRAM
and instead read from the last-level cache.
To minimize the number of additional passes, we compute the optimal split across all leaf nodes as
we iterate over a given feature. This means that each feature requires only two sequential scans over
the data for each iteration of training: one to update the value-node mapping, and one to compute
the entire set of optimal splits for iteration $t + 1$. However, as a tradeoff, we must maintain the
sufficient statistics for all splits in memory as we scan over the feature. For categorical features
(especially those with high arity), this cost in memory overhead proves to be too exorbitant, and the
runtime performance suffers despite obtaining excellent compression. For sparse continuous features,
however, the improvements are significant: on MNIST 8M, we achieve 2× compression (including
the auxiliary data structure) and obtain a 20% reduction in runtime.
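A rough sketch of the two pieces involved, run-length encoding a sorted column and scanning it without decompression, is shown below (simplified relative to Yggdrasil's actual implementation):

```python
def rle_encode(sorted_values):
    # compress a feature column that is kept in sorted order
    runs = []                                  # list of (value, run_length)
    for v in sorted_values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)
        else:
            runs.append((v, 1))
    return runs

def candidate_thresholds(runs):
    # split-finding touches each run exactly once: thresholds only fall
    # between runs, so statistics advance `count` instances at a time
    n_left = 0
    for value, count in runs:
        n_left += count
        yield value, n_left    # (threshold, #instances to its left)
```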
3.2.3 Efficient Training on Uncompressed Data
For features that aren't highly compressible, Yggdrasil uses a different scheme that, in contrast,
does not use any auxiliary data structures to keep track of the split history. Since features no longer
need to stay sorted in perpetuity, Yggdrasil implicitly encodes the split partitions by recursively
dividing its features into sub-arrays: each feature value is assigned to a sub-array based on the
bit assigned to it and its previous sub-array assignment. Because the feature is initially sorted,
a sequential scan over the sub-arrays maintains the sorted-order invariant, and we construct the
sub-arrays for the next iteration of training in $O(n)$ time, requiring only a single pass over the feature.
By using this implicit representation of the split history, we're left only with the feature values and
label indices stored in memory. Therefore, the memory load does not increase during training for
uncompressed features; it remains constant at 2×.
This scheme yields another additional benefit: when computing the next iteration of splits for depth
$t + 1$, Yggdrasil only maintains the sufficient statistics for one node at a time, rather than for
all leaf nodes. Furthermore, Yggdrasil still only requires a single sequential scan through the
entire feature to compute all splits. This means that, as was the case for compressed features, every
iteration of training requires only two sequential scans over each feature, and all random accesses are
again confined to the dictionary-compressed labels. Finally, for imbalanced trees, we can skip entire
sub-arrays that no longer need to be split, which saves additional time as trees grow deeper.
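A minimal sketch of this repartitioning step (the field names are ours):

```python
def repartition(sub_array, bits):
    # sub_array: (feature_value, instance_id) pairs sorted by value, all
    # belonging to one node; bits[instance_id] is that node's split direction.
    # One stable sequential pass yields both children, each still sorted,
    # so no re-sort is ever needed (cf. Figure 2).
    left  = [(v, i) for (v, i) in sub_array if bits[i]]
    right = [(v, i) for (v, i) in sub_array if not bits[i]]
    return left, right
```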
[Figure 2: diagram. Left panel ("Entire Cluster"): root node i0 with children i1 and i2, the split encoded
as bitVector = 100101. Right panel ("Single Worker"): a feature column ci shown (1) in its original
order, (2) sorted by value before training, and (3) re-sorted by the bitvector into two sub-arrays after
the first iteration of training.]
Figure 2: Overview of one iteration of uncompressed training in Yggdrasil. Left side: Root node
i0 is split into nodes i1 and i2; the split is encoded by a bitvector. Right side: Prior to training, the
feature ci is sorted to optimize split-finding. Once a split has been found, ci is re-sorted into two
sub-arrays: the 1st, 4th, and last values (the "on" bits) are sorted into i1's sub-array, and the "off" bits
are sorted into i2's sub-array. Each sub-array is in sorted order for the next iteration of training.
4 Discretization Error Analysis
Horizontal partitioning requires each worker to communicate the impurity on its subset of data for
all candidate splits associated with each of the $p$ features, and to do so for all $2^D$ tree nodes. For
continuous features where each training instance could have a distinct value, up to $n$ candidate splits
are possible, so the communication cost is $O(2^D kpn)$. To improve efficiency, continuous features
are instead commonly discretized to $B$ discrete bins such that only $B$ rather than $n - 1$ candidate
splits are considered at each tree node [9]. In contrast, discretization of continuous features is not
required in Yggdrasil, since all $n$ values for a particular feature are stored on a single worker (due
to vertical partitioning). Hence, the impurity for the best split rather than all splits is communicated.
This discretization heuristic results in the approximation of continuous values by discrete
representatives, and can adversely impact the statistical performance of the resulting decision tree, as
demonstrated empirically by [7]. Prior work has shown that the number of bins can be chosen such
that the decrease in information gain at any internal tree node between the continuous and discretized
feature can be made arbitrarily small [2]. However, their proof does not quantify the effects of
discretization on a decision tree's performance.
To understand the impact of discretization on accuracy, we analyze the simplified setting of training
a decision stump classifier on a single continuous feature $x \in [0, 1]$. Suppose the feature data is
drawn i.i.d. from a uniform distribution, i.e., $x^{(i)} \overset{iid}{\sim} U[0, 1]$ for $i = 1, 2, \ldots, n$, and that labels are
generated according to some threshold $t_{truth} \sim U[0, 1]$, i.e., $y^{(i)} = \mathrm{sgn}(x^{(i)} - t_{truth})$. The decision
stump training criterion chooses a splitting threshold at one of the training instances $x^{(i)}$, namely
$\hat{t}_n = \arg\max_{t \in \{x^{(i)}\}} f(t)$, where $f(t)$ is some purity measure. In our analysis,
we will define the purity measure to be information gain. In our simplified setting, we show that there
is a natural relationship between the misclassification probability $P_{err}(t)$ and the approximation
error of our decision stump, i.e., $|t - t_{truth}|$. All proofs are deferred to the appendix.
Observation 1. For an undiscretized decision stump, as $n \to \infty$, $P_{err}(\hat{t}_n) \xrightarrow{a.s.} 0$.
Observation 2. Maximizing information gain is equivalent to minimizing absolute distance, i.e.,
$$\hat{t}_n = \arg\max_{t \in \{x^{(i)}\}_{i=1}^{n}} f(t) = \arg\min_{t \in \{x^{(i)}\}_{i=1}^{n}} |t - t_{truth}|.$$
Moreover, $P_{err}(t) = |t - t_{truth}|$.
We now present our main result. This intuitive result shows that increasing the number of discretization
bins $B$ leads to a reduction in the expected probability of error.
Theorem 1. Let $\hat{t}_{N,B}$ denote the threshold learned by a decision stump on $n$ training instances
discretized to $B + 1$ levels. Then $\mathbb{E}\, P_{err}(\hat{t}_{N,B}) \xrightarrow{a.s.} \frac{1}{4B}$.
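A quick Monte Carlo check of this statement (our own simulation, not from the paper; it uses Observation 2 to read off the error probability directly):

```python
import numpy as np

def mean_error(n=10_000, B=8, trials=500, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        t_truth = rng.uniform()
        x = np.round(rng.uniform(size=n) * B) / B   # B + 1 distinct levels
        # By Observation 2, the trained stump picks the instance closest to
        # t_truth, and its error probability equals |t_hat - t_truth|
        t_hat = x[np.argmin(np.abs(x - t_truth))]
        errs.append(abs(t_hat - t_truth))
    return np.mean(errs), 1 / (4 * B)   # empirical vs. predicted 1/(4B)
```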
5 Evaluation
We developed Yggdrasil on top of Spark 1.6.0 with an API compatible with MLlib. Our
implementation is 1385 lines of code, excluding comments and whitespace. Our implementation
is open-source and publicly available.¹ Our experimental results show that, for large $p$ and $D$,
Yggdrasil outperforms PLANET by an order of magnitude, corroborating our analysis in Section 3.
5.1 Experimental Setup
We benchmarked Yggdrasil against two implementations of PLANET: Spark MLlib v1.6.0, and
XGBoost4J-Spark v0.47. These two implementations are slightly different from the algorithm
from Panda et al. In particular, the original PLANET algorithm has separate subroutines for distributed
vs. "local" training. By default, PLANET executes the horizontally partitioned algorithm in Section 2
using on-disk data; however, if the instances assigned to a given tree node fit in memory on a single
worker, then PLANET moves all the data for that node to one worker and switches to in-memory
training on that worker. In contrast, MLlib loads all the data into distributed memory across the
cluster at the beginning and executes all training passes in memory. XGBoost extends PLANET
with several additional optimizations; see [7] for details.
We ran all experiments on 16 Amazon EC2 r3.2xlarge machines. Each machine has an Intel Xeon
E5-2670 v2 CPU, 61 GB of memory, and 1 Gigabit Ethernet connectivity. Prior to our experiments,
we tuned Spark's memory configuration (heap memory used for storage, number of partitions, etc.)
for optimal performance. All results are averaged over five trials.
5.2 Large-scale experiments
To examine the performance of Yggdrasil and PLANET, we trained a decision tree on two
large-scale datasets: the MNIST 8 million dataset, and another modeled after a private Yahoo dataset
that is used for search ranking. Table 1 summarizes the parameters of these datasets.
¹ Yggdrasil has been published as a Spark package at the following URL:
https://spark-packages.org/package/fabuzaid21/yggdrasil
[Figure 3: two line plots, "Yggdrasil vs. PLANET and XGBoost: MNIST 8M" and "Yggdrasil vs.
PLANET and XGBoost: Yahoo 2M"; x-axis: Tree Depth; y-axis: Training Time (s); series: MLlib,
Yggdrasil, XGBoost.]
Figure 3: Training time vs. tree depth for MNIST 8M and Yahoo 2M.
Dataset  | # instances | # features | Size     | Task
---------|-------------|------------|----------|---------------
MNIST 8M | 8.1 × 10^6  | 784        | 18.2 GiB | classification
Yahoo 2M | 2 × 10^6    | 3500       | 52.2 GiB | regression

Table 1: Parameters of the datasets for our experiments
Figure 3 shows the training time across various tree depths for MNIST 8M and Yahoo 2M. For both
datasets, we carefully tuned XGBoost to run on the maximum number of threads and the optimal
number of partitions. Despite this, XGBoost was unable to train trees deeper than $D = 13$ without
crashing due to OutOfMemory exceptions. While Spark MLlib's implementation of PLANET is
marginally faster for shallow trees, its runtime increases exponentially as $D$ increases. Yggdrasil,
on the other hand, scales well up to $D = 20$, for which it runs up to 6× faster. For the Yahoo dataset,
Yggdrasil's speed-up is even greater because of the higher number of features $p$; recall that the
communication cost for PLANET is proportional to $2^D$ and $p$. Thus, for $D = 18$, Yggdrasil is up
to 24× faster than Spark MLlib.
5.3 Study of Individual Optimizations
To understand the impacts of the optimizations in Section 3.2, we measure each optimization's effect
on Yggdrasil runtime. To fully evaluate our optimizations, including feature compression, we
chose MNIST 8M, whose features are all sparse, for this study. The results are in Figure 6: we see
that the total improvement from the naive baseline to the fully optimized algorithm is a 40% reduction
in runtime. Using sparse bitvectors reduces the communication overhead between the master and the
workers, giving a modest speedup. Encoding the labels and compressing the features via run-length
encoding each yield 20% improvements. As discussed, these speedups are due to improved cache
utilization: encoding the labels via dictionary compression reduces their size in memory by 8×; as a
result, the labels entirely fit in the last-level cache. The feature values also fit in cache after applying
RLE, and we gain 2× in memory overhead once we factor in needed auxiliary data structures.
5.4 Scalability experiments
To further demonstrate the scalability of Yggdrasil vs. PLANET for high-dimensional datasets,
we measured the training time on a series of synthetic datasets parameterized by $p$. For each dataset,
approximately $p/2$ features were categorical, while the remaining features were continuous. From
Figure 4, we see that Yggdrasil scales much more effectively as $p$ increases, especially for larger
$D$. In particular, for $D = 15$, Yggdrasil is initially 3× faster than PLANET for $p = 500$, but is
more than 8× faster for $p = 4000$. This confirms our asymptotic analysis in Section 3.
6 Related Work
Vertical Partitioning. Several authors have proposed partitioning data by feature for training
decision trees; to our knowledge, none of these systems perform the communication and data
structure optimizations in Yggdrasil, and none report results at the same scale.
[Figure 4: line plot "Yggdrasil vs. PLANET: Number of Features"; x-axis: Num. Features (500-4000);
y-axis: Training Time (s); series: MLlib, D = 15; MLlib, D = 13; Yggdrasil, D = 15; Yggdrasil, D = 13.]
Figure 4: Training time vs. number of features for n = 2 × 10^6, k = 16, B = 32. Because the
communication cost of PLANET scales linearly with p, the total runtime increases at a much faster rate.
[Figure 5: line plot "Yggdrasil vs. PLANET: Communication Cost"; x-axis: Tree Depth (1-10); y-axis:
Number of Bytes Sent, Log Scale (10^4-10^11); series: MLlib, p = 1K; MLlib, p = 2K; MLlib, p = 4K;
Yggdrasil, p = {1K, 2K, 4K}.]
Figure 5: Number of bytes sent vs. tree depth for n = 2 × 10^6, k = 16, B = 32. For Yggdrasil, the
communication cost is the same for all p; each worker sends its best local feature to the master.
Svore and Burges [15] treat data as vertical columns, but place a full copy of the dataset on every
worker node, an approach that is not scalable for large datasets. Caragea et al. [5] analyze the costs of
horizontal and vertical partitioning but do not include an implementation. Ye et al. [16] implement
vertical partitioning using MapReduce or MPI and benchmark data sizes up to 1.2 million rows and 520
features. None of these systems compress columnar data on each node, communicate using sparse
bitvectors, or optimize for cache locality as Yggdrasil does (Section 3.2). These optimizations
yield significant speedups over a basic implementation of vertical partitioning (Section 5.3).
Distributed Tree Learning. The most widely used distributed tree learning method is PLANET ([12]),
which is also implemented in open-source libraries such as Apache Mahout ([1]) and MLlib ([11]). As
shown in Figure 1, PLANET works well for shallow trees and small numbers of features, but its cost
grows quickly with tree depth and is proportional to the number of features and the number of bins
used for discretization. This makes it suboptimal for some large-scale tree learning problems.
XGBoost ([7]) uses a partitioning scheme similar to PLANET, but uses a compressed, sorted columnar
format inside each "block" of data. Its communication cost is therefore similar to PLANET, but its
memory consumption is smaller. XGBoost is optimized for gradient-boosted trees, in which case each
tree is relatively shallow. It does not perform as well as Yggdrasil on deeper trees, such as those needed
for random forests, as shown in our evaluation. XGBoost also lacks some of the processing
optimizations in Yggdrasil, such as label encoding to maximize cache density and training directly on
run-length encoded features without decompressing.
[Figure 6: bar chart "Yggdrasil: Impact of Individual Optimizations"; y-axis: Training Time (s); bars:
uncompressed training (134 s), uncompressed + sparse bitvectors (125 s), uncompressed + sparse
bitvectors + label encoding (101 s), RLE + sparse bitvectors + label encoding (81 s).]
Figure 6: Yggdrasil runtime improvements from specific optimizations, on MNIST 8M at D = 10.
7 Conclusion
Decision trees and tree ensembles are an important class of models, but previous distributed training
algorithms were optimized for small numbers of features and shallow trees. We have presented
Yggdrasil, a new distributed tree learning system optimized for deep trees and thousands of
features. Through vertical partitioning of the data and a set of data structure and algorithmic
optimizations, Yggdrasil outperforms existing tree learning systems by up to 24×, while
simultaneously eliminating the need to approximate data through binning. Yggdrasil is easily
implementable on parallel engines like MapReduce and Spark.
References
[1] Apache Mahout. https://mahout.apache.org/, 2015.
[2] Y. Ben-Haim and E. Tom-Tov. A streaming parallel decision tree algorithm. The Journal of Machine Learning Research, 11:849–872, 2010.
[3] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and Regression Trees. CRC Press, 1984.
[5] D. Caragea, A. Silvescu, and V. Honavar. A framework for learning from distributed data using sufficient statistics and its application to learning decision trees. International Journal of Hybrid Intelligent Systems, 1(1-2):80–89, 2004.
[6] S. Chambi, D. Lemire, O. Kaser, and R. Godin. Better bitmap performance with Roaring bitmaps. Software: Practice and Experience, 2015.
[7] T. Chen and C. Guestrin. XGBoost: A scalable tree boosting system. arXiv preprint arXiv:1603.02754, 2016.
[8] C. Cortes, M. Mohri, and U. Syed. Deep boosting. In ICML, 2014.
[9] U. M. Fayyad and K. B. Irani. On the handling of continuous-valued attributes in decision tree generation. Mach. Learn., 8(1):87–102, Jan. 1992. ISSN 0885-6125. doi: 10.1023/A:1022638503176. URL http://dx.doi.org/10.1023/A:1022638503176.
[10] A. Lamb, M. Fuller, R. Varadarajan, N. Tran, B. Vandiver, L. Doshi, and C. Bear. The Vertica analytic database: C-store 7 years later. Proceedings of the VLDB Endowment, 5(12):1790–1801, 2012.
[11] X. Meng, J. K. Bradley, B. Yavuz, E. R. Sparks, S. Venkataraman, D. Liu, J. Freeman, D. B. Tsai, M. Amde, S. Owen, D. Xin, R. Xin, M. J. Franklin, R. Zadeh, M. Zaharia, and A. Talwalkar. MLlib: Machine learning in Apache Spark. arXiv:1505.06807, 2015.
[12] B. Panda, J. S. Herbach, S. Basu, and R. J. Bayardo. PLANET: Massively parallel learning of tree ensembles with MapReduce. International Conference on Very Large Data Bases, 2009.
[13] S. R. Safavian and D. Landgrebe. A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 21(3):660–674, 1991.
[14] M. Stonebraker, D. J. Abadi, A. Batkin, X. Chen, M. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden, E. O'Neil, et al. C-store: a column-oriented DBMS. In Proceedings of the 31st International Conference on Very Large Data Bases, pages 553–564. VLDB Endowment, 2005.
[15] K. M. Svore and C. Burges. Large-scale learning to rank using boosted decision trees. Scaling Up Machine Learning: Parallel and Distributed Approaches, 2, 2011.
[16] J. Ye, J.-H. Chow, J. Chen, and Z. Zheng. Stochastic gradient boosted distributed decision trees. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, CIKM '09, pages 2061–2064, New York, NY, USA, 2009. ACM. ISBN 978-1-60558-512-3. doi: 10.1145/1645953.1646301. URL http://doi.acm.org/10.1145/1645953.1646301.
Deep Neural Networks with Inexact Matching for
Person Re-Identification
Arulkumar Subramaniam
Indian Institute of Technology Madras
Chennai, India 600036
[email protected]
Moitreya Chatterjee
Indian Institute of Technology Madras
Chennai, India 600036
[email protected]
Anurag Mittal
Indian Institute of Technology Madras
Chennai, India 600036
[email protected]
Abstract
Person Re-Identification is the task of matching images of a person across multiple
camera views. Almost all prior approaches address this challenge by attempting to
learn the possible transformations that relate the different views of a person from a
training corpora. Then, they utilize these transformation patterns for matching a
query image to those in a gallery image bank at test time. This necessitates learning
good feature representations of the images and having a robust feature matching
technique. Deep learning approaches, such as Convolutional Neural Networks
(CNN), simultaneously do both and have shown great promise recently.
In this work, we propose two CNN-based architectures for Person Re-Identification.
In the first, given a pair of images, we extract feature maps from these images via
multiple stages of convolution and pooling. A novel inexact matching technique
then matches pixels in the first representation with those of the second. Furthermore, we search across a wider region in the second representation for matching.
Our novel matching technique allows us to tackle the challenges posed by large
viewpoint variations, illumination changes or partial occlusions. Our approach
shows promising performance and requires only about half as many parameters as a
current state-of-the-art technique. Nonetheless, it also suffers from false matches at
times. In order to mitigate this issue, we propose a fused architecture that combines
our inexact matching pipeline with a state-of-the-art exact matching technique. We
observe substantial gains with the fused model over the current state-of-the-art on
multiple challenging datasets of varying sizes, with gains of up to about 21%.
1 Introduction
Successful object recognition systems, such as Convolutional Neural Networks (CNN), extract
"distinctive patterns" that describe an object (e.g. a human) in an image, when "shown" several images
known to contain that object, exploiting Machine Learning techniques [1]. Through successive stages
of convolutions and a host of non-linear operations such as pooling, non-linear activation, etc., CNNs
extract complex yet discriminative representation of objects that are then classified into categories
using a classifier, such as softmax.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: Some common challenges in Person Re-Identification.
1.1 Problem Definition
One of the key subproblems of the generic object recognition task is recognizing people. Of
special interest to the surveillance and Human-Computer interaction (HCI) community is the task
of identifying a particular person across multiple images captured from the same/different cameras,
from different viewpoints, at the same/different points in time. This task is also known as Person
Re-Identification. Given a pair of images, such systems should be able to decipher if both of them
contain the same person or not. This is a challenging task, since appearances of a person across images
can be very different due to large viewpoint changes, partial occlusions, illumination variations, etc.
Figure 1 highlights some of these challenges. This also leads to a fundamental research question:
Can CNN-based approaches effectively handle the challenges related to Person Re-Identification?
CNN-based approaches have recently been applied to this task, with reasonable success [2, 3] and are
amongst the most competitive approaches for this task. Inspired by such approaches, we explore a set
of novel CNN-based architectures for this task. We treat the problem as a classification task. During
training, for every pair of images, the model is told whether they are from the same person or not.
At test time, the posterior classification probabilities obtained from the models are used to rank the
images in a gallery image set in terms of their similarity to a query image (probe).
In this work, we propose two novel CNN-based schemes for Person Re-Identification. Our first
model hinges on the key observation that due to a wide viewpoint variation, the task of finding a
match between the pixels of a pair of images needs to be carried out over a larger region, since
?matching pixels? on the object may have been displaced significantly. Secondly, illumination
variations might cause the absolute intensities of image regions to vary, rendering exact matching
approaches ineffective. Finally, coupling these two solutions might provide a recipe for taking care
of partial occlusions as well. We call this first model of ours, Normalized X-Corr. However, the
flexibility of inexact (soft) matching over a wider search space comes at the cost of occasional false
matches. To remedy this, we propose a second CNN-based model which fuses a state-of-the-art exact
matching technique [2] with Normalized X-Corr. We hypothesize that proper training allows the two
components of the fused network to learn complementary patterns from the data, thus aiding the final
classification. Empirical results show that Normalized X-Corr holds promise and that the Fused network
outperforms all baseline approaches on multiple challenging datasets, with gains of up to 21% over
the baselines.
In the next section, we touch upon relevant prior work. We present our methodology in Section
3. Sections 4 and 5 deal with the Experiments and the discussion of the obtained Results thereof.
Finally, we conclude in Section 6, outlining some avenues worthy of exploration in the future.
2 Related Work
In broad terms, we categorize the prior work in this field into Non-Deep and Deep Learning approaches.
2.1 Non-Deep Learning Approaches
Person Re-Identification systems have two main components: Feature Extraction and Similarity
Metric for matching. Traditional approaches for Person Re-Identification either proposed useful
features, or discriminative similarity metrics for comparison, or both.
Typical features that have proven useful include color histograms and dense SIFT [4] features
computed over local patches in the image [5, 6]. Farenzena et al. represent image patches by
exploiting features that model appearance, chromatic content, etc [7]. Another interesting line of
work on feature representation attempts to learn a bag-of-features (a.k.a. a dictionary)-based approach
for image representation [8, 9]. Further, Prosser et al. show the effectiveness of learning a subspace
for representing the data, modeled using a set of standard features [10]. While these approaches show
promise, their performance is bounded by the ability to engineer good features. Our models, based
on deep learning, overcome this handicap by learning a representation from the data.
There is also a substantial body of work that attempts to learn an effective similarity metric for
comparing images [11, 12, 13, 14, 15, 16]. Here, the objective is to learn a distance measure that
is indicative of the similarity of the images. The Mahalanobis distance has been the most common
metric that has been adopted for matching in person re-identification [5, 17, 18, 19]. Some other
metric learning approaches attempt to learn transformations, which when applied to the feature space,
aligns similar images closer together [20]. Yet other successful metric learning approaches are an
ensemble of multiple metrics [21]. In contrast to these approaches, we jointly learn both features and
discriminative metric (using a classifier) in a deep learning framework.
Another interesting line of non-deep approaches for person re-identification have claimed novelty
both in terms of features as well as matching metrics [22, 23]. Many of them rely on weighing the
hand-engineered image features first, based on some measure such as saliency and then performing
matching [6, 24, 25]. However, this is done in a non-deep framework unlike ours.
2.2 Deep Learning based Approaches
There has been relatively fewer prior work based on deep learning for addressing the challenge of
Person Re-Identification. Most deep learning approaches exploit the CNN-framework for the task, i.e.
they first extract highly non-linear representations from the images, then they compute some measure
of similarity. Yi et al. propose a Siamese Network that takes as input the image pair that is to be
compared, performs 3 stages of convolution on them (with the kernels sharing weights), and finally
uses cosine similarity to judge their extent of match [26]. Both of our models differ by performing a
novel inexact matching of the images after two stages of convolution and then processing the output
of the matching layer to arrive at a decision.
Li et al. also adopt a two-input network architecture [3]. They take the product of the responses
obtained right after the first set of convolutions corresponding to the two inputs and process its output
to obtain a measure of similarity. Our models, on the other hand, are significantly deeper. Besides,
Normalized X-Corr stands out by retaining the matching outcome corresponding to every candidate in
the search space of a pixel rather than choosing only the maximum match. Ahmed et al. too propose
a very promising architecture for Person Re-Identification [2]. Our models are inspired from their
work and does incorporate some of their features, but we substantially differ from their approach by
performing an inexact matching over a wider search space after two stages of convolution. Further,
Normalized X-Corr has fewer parameters than Ahmed et al. [2].
Finally, our Fused model is a one of a kind deep learning architecture for Person Re-Identification.
This is because a combination (fusion) of multiple deep frameworks has hitherto remained unexplored
for this task.
3 Proposed Approach
3.1 Our Architecture
In this work, we propose two architectures for Person Re-Identification. Both of our architectures
are a type of ?Siamese?-CNN model which take as input two images for matching and outputs the
likelihood that the two images contain the same person.
3.1.1 Normalized X-Corr
The following are the principal components of the Normalized X-Corr model, as shown in Fig 2.
Figure 2: Architecture of the Normalized X-Corr Model.
Figure 3: Architecture of the Fused Network Model.
Tied Convolution Layers: Convolutional features have been shown to be effective representation of
images [1, 2]. In order to measure similarity between the input images, the applied transformations
must be similar. Therefore, along the lines of Ahmed et al., we perform two stages of convolutions
and pooling on both the input images by passing them through two input pipelines of a "Siamese"
Network that share weights [2]. The first convolution layer takes as input images of size 60×160×3
and applies 20 learned filters of size 5×5×3, while the second one applies 25 learned filters of size
5×5×20. Both convolutions are followed by pooling layers, which reduce the output dimension by a
factor of 2, and ReLU (Rectified Linear Unit) clipping. This gives us 25 maps of dimension 12×37
as output from each branch which are fed to the next layer.
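As a quick sanity check on these spatial sizes, the arithmetic can be traced layer by layer. The sketch below (plain Python, our own illustration rather than the authors' Torch code) reproduces the 12×37 figure quoted above, assuming "valid" convolutions and non-overlapping 2×2 pooling:

```python
def conv_valid(h, w, k):
    # a 'valid' convolution with a k x k kernel shrinks each side by k - 1
    return h - k + 1, w - k + 1

def pool(h, w, s=2):
    # non-overlapping s x s max-pooling halves each side (integer division)
    return h // s, w // s

h, w = 160, 60                        # 60 x 160 x 3 input (width 60, height 160)
h, w = pool(*conv_valid(h, w, 5))     # conv 5x5x3 -> 20 maps, then 2x2 max-pool
h, w = pool(*conv_valid(h, w, 5))     # conv 5x5x20 -> 25 maps, then 2x2 max-pool
print(h, w)                           # -> 37 12: 25 feature maps of size 12 x 37
```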
Normalized Correlation Layer: This is the first layer that captures the similarity of the two input
images; subsequent layers build on the output of this layer to finally arrive at a decision as to whether
the two images are of the same person or not. Different from [2, 3], we incorporate both inexact
matching and wider search. Given two corresponding input feature maps X and Y , we compute the
normalized correlation as follows. We start with every pixel of X located at (x, y), where x is along
the width and y along the height (denoted as X(x, y)). We then create two matrices. The first is a
5×5 matrix representing the 5×5 neighborhood of X(x, y), while the second is the corresponding
5×5 neighborhood of Y centered at (a, b), where 1 ≤ a ≤ 12 and y − 2 ≤ b ≤ y + 2. Now,
markedly different from Ahmed et al. [2], we perform inexact matching over a wider search space, by
computing a Normalized Correlation between the two patch matrices. Given two matrices, E and F ,
whose elements are arranged as two N-dimensional vectors, the Normalized Correlation is given by:
$$\mathrm{normxcorr}(E, F) = \frac{\sum_{i=1}^{N} (E_i - \mu_E)(F_i - \mu_F)}{(N - 1)\,\sigma_E\,\sigma_F},$$
where μ_E, μ_F denote the means of the elements of the two matrices E and F respectively, while
σ_E, σ_F denote their respective standard deviations (a small constant ε = 0.01 is added to σ_E and σ_F to
avoid division by 0). Interestingly, Normalized Correlation being symmetric, we need to model the
computation only in one way, thereby cutting down the number of parameters in subsequent layers.
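The statistic itself is a one-liner in NumPy. The following is a minimal transcription of the formula above (not the authors' implementation), with the 0.01 stabilizer added to the standard deviations as stated:

```python
import numpy as np

def normxcorr(E, F, eps=0.01):
    """Normalized correlation between two equally sized patches E and F."""
    E, F = E.ravel().astype(float), F.ravel().astype(float)
    n = E.size
    sd_e = E.std(ddof=1) + eps        # sample standard deviation, stabilized
    sd_f = F.std(ddof=1) + eps
    return np.dot(E - E.mean(), F - F.mean()) / ((n - 1) * sd_e * sd_f)

# two 5x5 patches with the same pattern but different brightness and contrast
E = np.arange(25).reshape(5, 5)
print(normxcorr(E, 2 * E + 10))       # close to 1: illumination-invariant match
```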
For every pair of 5×5 matrices corresponding to a given pixel in image X, we arrange the normalized
correlation values in different feature maps. These feature maps preserve the spatial ordering of
pixels in X and are also 12×37 each, but their pixel values represent normalized correlation. This
gives us 60 feature maps of dimension 12×37 each. Now similarly, we perform the same operation
for all 25 pairs of maps that are input to the Normalized Correlation layer, to obtain an output of
1500, 12×37 maps. One subtle but important difference between our approach and that of Li et al.
[3] is that we preserve every correlation output corresponding to the search space of a pixel, X(x, y),
while they only keep the maximum response. We then pass this set of feature maps through a ReLU,
to discard probable noisy matches.
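To make the indexing concrete, the sketch below enumerates this search window for a single pair of 12×37 maps, reusing the normxcorr helper from the previous sketch. Border handling is not specified in the text, so the zero padding and the clipped window centers here are our assumptions:

```python
import numpy as np

def normcorr_layer(X, Y, patch=5, eps=0.01):
    """One map pair of the Normalized Correlation layer.

    X, Y: 12 x 37 maps (first axis is the width x, second the height y).
    Returns 60 maps of size 12 x 37: for each pixel (x, y) of X, every
    candidate center (a, b) with a over the full width and b in [y-2, y+2].
    """
    W, H, r = X.shape[0], X.shape[1], patch // 2
    Xp = np.pad(X.astype(float), r)          # assumption: zero padding
    Yp = np.pad(Y.astype(float), r)
    out = np.zeros((W * 5, W, H))            # 12 * 5 = 60 output maps
    for x in range(W):
        for y in range(H):
            E = Xp[x:x + patch, y:y + patch]
            for a in range(W):               # search the full width of Y
                for db in range(-2, 3):      # height offsets y-2 .. y+2
                    b = min(max(y + db, 0), H - 1)   # assumption: clip centers
                    F = Yp[a:a + patch, b:b + patch]
                    out[a * 5 + db + 2, x, y] = normxcorr(E, F, eps)
    return out   # stacking over all 25 map pairs yields the 1500 maps
```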
The mean subtraction and standard deviation normalization step incorporates illumination invariance,
a step unaccounted for in Li et al. [3]. Thus two patches which differ only in absolute intensity values
but are similar in the intensity variation pattern would be treated as similar by our models. The wider
search space, compared to Ahmed et al. [2], gives invariance to large viewpoint variation. Further,
performing inexact matching (correlation measure) over a wider search space gives us robustness to
partial occlusions. Due to partial occlusions, a part(s) (P) of a person/object visible in one view may
not be visible in others. Using a wider search space, our model looks for a part which is similar to
the missing P within a wider neighborhood of P?s original location. This is justified since typically
adjacent regions of objects in an image have regularity/similarity in appearance. e.g. Bottom and
upper half of torso. Now, since we are comparing two different parts due to the occlusion of P, we
need to perform flexible matching. Thus, inexact matching is used.
Cross Patch Feature Aggregation Layers: The Normalized Correlation layer incorporates information from the local neighborhood of a pixel. We now seek to incorporate greater context information,
to obtain a summarization effect. In order to do so, we perform 2 successive layers of convolution
(with a ReLU filtered output) followed by pooling (by a factor of 2) of the output feature maps from
the previous layer. We use 1×1×1500 convolution kernels for the first layer and 3×3×25 convolution
kernels for the second convolution layer. Finally, we get 25 maps of dimension 5×17 each.
Fully Connected Layers: The fully connected layers collate information from pixels that are very
far apart. The feature map outputs obtained from the previous layer are reshaped into one long
2125-dimensional vector. This vector is fed as input to a 500-node fully connected layer, which
connects to another fully connected layer containing 2 softmax units. The first unit outputs the
probability that the two images are same and the latter, the probability that the two are different.
One key advantage of the Normalized X-Corr model is that it has about half the number of parameters
(about 1.121 million) as the model proposed by Ahmed et al. [2] (refer to the supplementary section for more details).
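The parameter budget behind this claim can be reproduced from the layer sizes listed above (biases included; the 2125 = 25 × 5 × 17 input to the first fully connected layer dominates the count):

```python
layers = {
    "conv1 (5x5x3 -> 20)":    5 * 5 * 3 * 20 + 20,
    "conv2 (5x5x20 -> 25)":   5 * 5 * 20 * 25 + 25,
    "conv3 (1x1x1500 -> 25)": 1 * 1 * 1500 * 25 + 25,
    "conv4 (3x3x25 -> 25)":   3 * 3 * 25 * 25 + 25,
    "fc1 (2125 -> 500)":      2125 * 500 + 500,
    "fc2 (500 -> 2)":         500 * 2 + 2,
}
print(sum(layers.values()))   # 1121222 -- about 1.121 million parameters
```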
3.1.2 Fused Model
While the Normalized X-Corr model incorporates inexact matching over a wider search space to
handle important challenges such as illumination variations, partial occlusions, or wide viewpoint
changes, however it also suffers from occasional false matches. Upon investigation, we found that
these false matches tended to recur, especially when the background of the false matches had a similar
appearance to the person being matched (see supplementary). For such cases, an exact matching,
such as taking a difference and constraining the search window might be beneficial. We therefore
fuse the model proposed by Ahmed et al. [2] with Normalized X-Corr to obtain a Fused model, in
anticipation that it incorporates the benefits of both models. Figure 3 shows a representative diagram.
We keep the tied convolution layers unchanged like before, then we fork off two separate pipelines:
one for Normalized X-Corr and the other for Ahmed et al.'s model [2]. The two separate pipelines
output a 2125-dimensional vector each and then they are fused in a 1000-node fully connected layer.
The outputs from the fully connected layer are then fed into a 2 unit softmax layer as before.
3.2 Training Algorithm
All the proposed architectures are trained using the Stochastic Gradient Descent (SGD) algorithm, as
in Ahmed et al. [2]. The gradient computation is fairly simple except for the Normalized Correlation
layer. Given two matrices, E (from the first branch of the Siamese network) and F (from the second
branch of the Siamese network), represented by a N-dimensional vector each, the gradient pushed
from the Normalized Correlation layer back to the convolution layers on the top branch is given by:
$$\frac{\partial\, \mathrm{normxcorr}(E, F)}{\partial E_i} = \frac{1}{(N - 1)\,\sigma_E}\left(\frac{F_i - \mu_F}{\sigma_F} - \frac{\mathrm{normxcorr}(E, F)\,(E_i - \mu_E)}{\sigma_E}\right),$$
where E_i is the i-th element of the vector representing E and other symbols have their usual meaning. Similar
notation is used for the subnetwork at the bottom. The full derivation is available in the supplementary section.
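The analytic gradient can also be checked numerically. The sketch below is our own verification, reusing the normxcorr helper from the Section 3.1.1 sketch with the stabilizer dropped so that the comparison is exact:

```python
import numpy as np

def grad_normxcorr_E(E, F):
    """Analytic gradient d normxcorr(E, F) / dE, following the formula above."""
    E, F = E.astype(float), F.astype(float)
    n, sd_e, sd_f = E.size, E.std(ddof=1), F.std(ddof=1)
    r = normxcorr(E, F, eps=0.0)
    return ((F - F.mean()) / sd_f - r * (E - E.mean()) / sd_e) / ((n - 1) * sd_e)

rng = np.random.default_rng(0)
E, F, h = rng.normal(size=25), rng.normal(size=25), 1e-6
num = np.zeros_like(E)
for i in range(E.size):               # central finite differences
    d = np.zeros_like(E); d[i] = h
    num[i] = (normxcorr(E + d, F, 0.0) - normxcorr(E - d, F, 0.0)) / (2 * h)
print(np.max(np.abs(num - grad_normxcorr_E(E, F))))   # tiny: the two agree
```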
4 Experiments
Table 1: Performance of different algorithms at ranks 1, 10, and 20 on CUHK03 Labeled (left) and CUHK03 Detected (right) Datasets.

CUHK03 Labeled
Method               r=1     r=10    r=20
Fused Model (ours)   72.43   95.51   98.40
Norm X-Corr (ours)   64.73   92.77   96.78
Ensembles [21]       62.1    92.30   97.20
LOMO+MLAPG [5]       57.96   94.74   98.00
Ahmed et al. [2]     54.74   93.88   98.10
LOMO+XQDA [22]       52.20   92.14   96.25
Li et al. [3]        20.65   68.74   83.06
KISSME [18]          14.17   52.57   70.03
LDML [14]            13.51   52.13   70.81
eSDC [25]            8.76    38.28   53.44

CUHK03 Detected
Method               r=1     r=10    r=20
Fused Model (ours)   72.04   96.00   98.26
Norm X-Corr (ours)   67.13   94.49   97.66
LOMO+MLAPG [5]       51.15   92.05   96.90
Ahmed et al. [2]     44.96   83.47   93.15
LOMO+XQDA [22]       46.25   88.55   94.25
Li et al. [3]        19.89   64.79   81.14
KISSME [18]          11.70   48.08   64.86
LDML [14]            10.92   47.01   65.00
eSDC [25]            7.68    33.38   50.58

4.1 Datasets, Evaluation Protocol, Baseline Methods
We conducted experiments on the large CUHK03 dataset [3], the mid-sized CUHK01 Dataset [23], and the small
QMUL GRID dataset [27]. The datasets are divided into training and test sets for our experiments. The goal of
every algorithm is to rank images in the gallery image bank of the test set by their similarity to a probe image
(which is also from the test set). To do so, they can exploit the training set, consisting of matched and unmatched
image pairs. An oracle would always rank the ground truth match (from the gallery) in the first position. All
our experiments are conducted in the single shot setting, i.e. there is exactly one image of every person in the
gallery image bank and the results averaged over 10 test trials are reported using tables and Cumulative Matching
Characteristics (CMC) Curves (see supplementary). For all our experiments, we use a momentum of 0.9, starting
learning rate of 0.05, learning rate decay of 1 × 10⁻⁴, weight decay of 5 × 10⁻⁴. The implementation was done
in a machine with NVIDIA Titan GPUs and the code was implemented using Torch and is available online.¹ We
also conducted an ablation study, to further analyze the contribution of the individual components of our model.
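For reference, single-shot CMC accuracy at rank k reduces to a few lines given a probe-to-gallery similarity matrix. This is a generic sketch rather than our evaluation code; sim[i, j] is the score between probe i and gallery image j, and gt[i] is the gallery index of probe i's true match:

```python
import numpy as np

def cmc(sim, gt, ranks=(1, 10, 20)):
    """Cumulative Matching Characteristics for single-shot re-identification."""
    order = np.argsort(-sim, axis=1)    # gallery sorted by decreasing score
    # 0-based position at which each probe's ground-truth match appears
    pos = np.array([np.where(order[i] == gt[i])[0][0] for i in range(len(gt))])
    return {k: 100.0 * np.mean(pos < k) for k in ranks}   # percentages
```

Averaging such rank-k percentages over the 10 random test trials gives numbers in the format of the tables below.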
CUHK03 Dataset: The CUHK03 dataset is a large collection of 13,164 images of 1360 people captured from 6
different surveillance cameras, with each person observed by 2 cameras with disjoint views [3]. The dataset
comes with manual and algorithmically labeled pedestrian bounding boxes. In this work, we conduct experiments
on both these sets. For our experiments, we follow the protocol used by Ahmed et al. [2] and randomly pick a
set of 1260 identities for training and 100 for testing. We use 100 identities from the training set for validation.
We compare the performance of both Normalized X-Corr and the Fused model with several baselines for both
labeled [2, 3, 5, 14, 18, 21, 22, 25] and detected [2, 3, 5, 14, 18, 22, 25] sets. Of these, the comparison with
Ahmed et al. [2] and with Li et al. [3] is of special interest to us since these are deep learning approaches as well.
For our models, we use mini-batch sizes of 128 and train our models for about 200,000 iterations.
CUHK01 Dataset: The CUHK01 dataset is a mid-sized collection of 3,884 images of 971 people, with each
person observed by 2 cameras with disjoint views [23]. There are 4 images of every identity. For our experiments,
we follow the protocol used by Ahmed et al. [2] and conduct 2 sets of experiments with varying training set
sizes. In the first, we randomly pick a set of 871 identities for training and 100 for testing, while in the second,
486 identities are used for testing and the rest for training. We compare the performance of both of our models
with several baselines for both 100 test identities [2, 3, 14, 18, 25] and 486 test identities [2, 8, 9, 20, 21]. For
our models, we use mini-batch sizes of 128 and train our models for about 50,000 iterations.
QMUL GRID Dataset: The QMUL underGround Re-Identification (GRID) dataset is a small and a very
challenging dataset [27]. It is a collection of only 250 people captured from 2 views. Besides the 2 images
of every identity, there are 775 unmatched images, i.e., for these identities only 1 view is available. For our
experiments, we follow the protocol used by Liao and Li [5]. We randomly pick a set of 125 identities (who
have 2 views each) for training and leave the remaining 125 for testing. Additionally, the gallery image bank of
the test is enriched with the 775 unmatched images. This makes the ranking task even more challenging. We
compare the performance of both of our models with several baselines [11, 12, 15, 19, 22, 24]. For our models,
we use mini-batch sizes of 128 and train our models for about 20,000 iterations.
¹ https://github.com/InnovArul/personreid_normxcorr
Table 2: Performance of different algorithms at ranks 1, 10, and 20 on CUHK01 100 Test Ids (left) and 486 Test Ids (right) Datasets.

CUHK01, 100 test IDs
Method               r=1     r=10    r=20
Fused Model (ours)   81.23   97.39   98.60
Norm X-Corr (ours)   77.43   96.67   98.40
Ahmed et al. [2]     65.00   93.12   97.20
Li et al. [3]        27.87   73.46   86.31
KISSME [18]          29.40   72.43   86.07
LDML [14]            26.45   72.04   84.69
eSDC [25]            22.84   57.67   69.84

CUHK01, 486 test IDs
Method                  r=1     r=10    r=20
Fused Model (ours)      65.04   89.76   94.49
Norm X-Corr (ours)      60.17   86.26   91.47
CPDL [8]                59.5    89.70   93.10
Ensembles [21]          51.9    83.00   89.40
Ahmed et al. [2]        47.50   80.00   87.44
Mirror-KFMA [20]        40.40   75.3    84.10
Mid-Level Filters [9]   34.30   65.00   74.90

4.2 Training Strategies for the Neural Network
The large number of parameters of a deep neural network necessitates special training strategies [2]. In this work,
we adopt 3 main strategies to train our model.
Data Augmentation: For almost all the datasets, the number of negative pairs far outnumbers the number of
positive pairs in the training set. This poses a serious challenge to deep neural nets, which can overfit and get
biased in the process. Further, the positive samples may not have all the variations likely to be encountered in a
real scenario. We therefore, hallucinate positive pairs and enrich the training corpus, along the lines of Ahmed
et al. [2]. For every image in the training set of size W×H, we sample 2 images for CUHK03 (5 images for
CUHK01 & QMUL) around the original image center and apply 2D translations chosen from a uniform random
distribution in the range of [−0.05W, 0.05W] × [−0.05H, 0.05H]. We also augment the data with images
reflected on a vertical mirror.
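A hedged sketch of this augmentation step (NumPy only; the wrap-around translation standing in for recropping, and any sampling details beyond what is stated above, are our assumptions):

```python
import numpy as np

def augment(img, n_samples=2, rng=np.random.default_rng()):
    """Translated samples plus a mirrored copy, as described in the text."""
    H, W = img.shape[:2]
    out = [img[:, ::-1]]                           # reflection on a vertical mirror
    for _ in range(n_samples):                     # 2 for CUHK03, 5 for CUHK01/QMUL
        dy = int(rng.uniform(-0.05, 0.05) * H)     # uniform in [-0.05H, 0.05H]
        dx = int(rng.uniform(-0.05, 0.05) * W)     # uniform in [-0.05W, 0.05W]
        out.append(np.roll(img, (dy, dx), axis=(0, 1)))   # assumed border handling
    return out
```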
Fine-Tuning: For small datasets such as QMUL, training parameter-intensive models such as deep neural
networks can be a significant challenge. One way to mitigate this issue is to fine-tune the model while training.
We start with a model pre-trained on a large dataset such as CUHK01 with 871 training identities rather than
an untrained model and then refine this pre-trained model by training on the small dataset, QMUL in our case.
During fine-tuning, we use a learning rate of 0.001.
Others: Training deep neural networks is time taking. Therefore, to speed up the training, we implemented our
code such that it spawns threads across multiple GPUs.
5 Results and Discussion
CUHK03 Dataset: Table 1 summarizes the results of the experiments on the CUHK03 Labeled dataset. Our
Fused model outperforms the existing state-of-the-art models by a wide margin, of about 10% (about 72% vs.
62%) on rank-1 accuracy, while the Normalized X-Corr model gives a 3% gain. This serves as a promising
response to our key research endeavor for an effective deep learning model for person re-identification. Further,
both models are significantly better than Ahmed et al.'s model [2]. We surmise that this is because our models
are more adept at handling variations in illumination, partial occlusion and viewpoint change.
Interestingly, we also note that the existing best performing system is a non-deep approach. This shows that
designing an effective deep learning architecture is a fairly non-trivial task. Our models' performance vis-à-vis
non-deep methods, once again underscores the benefits of learning representations from the data rather than
using hand-engineered ones. A visualization of some filter responses of our model, some of the result plots, and
some ranked matching results may be found in the supplementary material.
Table 1 also presents the results on the CUHK03 Detected dataset. Here too, we see the superior performance of
our models over the existing state-of-the-art baselines. Interestingly, here our models take a wider lead over the
existing baselines (about 21%), and our models' performance rivals their own performance on the Labeled dataset.
We hypothesize that incorporating a wider search space makes our models more robust to the challenges posed
by images in which the person is not centered, such as the CUHK03 Detected dataset.
CUHK01 Dataset: Table 2 summarizes the results of the experiments on the CUHK01 dataset with 100 and
486 test identities. For the 486 test identity setting, our models were pre-trained on the training set of the larger
CUHK03 Labeled dataset and then fine-tuned on the CUHK01-486 training set, owing to the paucity of training
data. As the tables show, our models give us a gain of up to 16% over the existing state-of-the-art on the rank-1
accuracies.
QMUL GRID Dataset: QMUL GRID is a challenging dataset for person re-identification due to its small size
and the additional 775 unmatched gallery images in the test set. This is evident from the low performances of existing state-of-the-art algorithms.

Table 3: Performance of different algorithms at ranks 1, 5, 10, and 20 on the QMUL GRID Dataset.

Method               r=1     r=5     r=10    r=20    Deep Learning Model
Fused Model (ours)   19.20   38.40   53.6    66.4    Yes
Norm X-Corr (ours)   16.00   32.00   40.00   55.2    Yes
KEPLER [24]          18.40   39.12   50.24   61.44   No
LOMO+XQDA [22]       16.56   33.84   41.84   52.40   No
PolyMap [11]         16.30   35.80   46.00   57.60   No
MtMCML [19]          14.08   34.64   45.84   59.84   No
MRank-RankSVM [15]   12.24   27.84   36.32   46.56   No
MRank-PRDC [15]      11.12   26.08   35.76   46.56   No
LCRML [12]           10.68   25.76   35.04   46.48   No
XQDA [22]            10.48   28.08   38.64   52.56   No

In order to train our models on this small dataset, we start with a model
trained on CUHK01 dataset with 100 test identities, then we fine-tune the models on the QMUL GRID training
set. Table 3 summarizes the results of the experiments on the QMUL GRID dataset. Here too, our Fused model
performs the best. Even though, our gain in rank-1 accuracy is a modest 1% but we believe this is significant for
a challenging dataset like QMUL.
The ablation study across multiple datasets reveals that a wider search and inexact match each buy us at least
6% individually, in terms of performance. The supplementary presents these results in more detail and also
compares the number of parameters across different models. Multi-GPU training, on the other hand, gives us a
3x boost to training speed.
6 Conclusions and Future Work
In this work, we address the central research question of proposing simple yet effective deep-learning models for
Person Re-Identification by proposing two new models. Our models are capable of handling the key challenges
of illumination variations, partial occlusions and viewpoint invariance by incorporating inexact matching over a
wider search space. Additionally, the proposed Normalized X-Corr model benefits from having fewer parameters
than the state-of-the-art deep learning model. The Fused model, on the other hand, allows us to cut down on
false matches resulting from a wide matching search space, yielding superior performance.
For future work, we intend to use the proposed Siamese architectures for other matching tasks in Vision such
as content-based image retrieval. We also intend to explore the effect of incorporating more feature maps on
performance.
References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Ejaz Ahmed, Michael Jones, and Tim K Marks. An improved deep learning architecture for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3908–3916, 2015.
[3] Wei Li, Rui Zhao, Tong Xiao, and Xiaogang Wang. Deepreid: Deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 152–159, 2014.
[4] David G Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[5] Shengcai Liao and Stan Z Li. Efficient PSD constrained asymmetric metric learning for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pages 3685–3693, 2015.
[6] Rui Zhao, Wanli Ouyang, and Xiaogang Wang. Person re-identification by salience matching. In Proceedings of the IEEE International Conference on Computer Vision, pages 2528–2535, 2013.
[7] Michela Farenzena, Loris Bazzani, Alessandro Perina, Vittorio Murino, and Marco Cristani. Person re-identification by symmetry-driven accumulation of local features. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2360–2367. IEEE, 2010.
[8] Sheng Li, Ming Shao, and Yun Fu. Cross-view projective dictionary learning for person re-identification. In Proceedings of the 24th International Conference on Artificial Intelligence, AAAI Press, pages 2155–2161, 2015.
[9] Rui Zhao, Wanli Ouyang, and Xiaogang Wang. Learning mid-level filters for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 144–151, 2014.
[10] Bryan Prosser, Wei-Shi Zheng, Shaogang Gong, Tao Xiang, and Q Mary. Person re-identification by support vector ranking. In BMVC, volume 2, page 6, 2010.
[11] Dapeng Chen, Zejian Yuan, Gang Hua, Nanning Zheng, and Jingdong Wang. Similarity learning on an explicit polynomial kernel feature map for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1565–1573, 2015.
[12] Jiaxin Chen, Zhaoxiang Zhang, and Yunhong Wang. Relevance metric learning for person re-identification by exploiting global similarities. In Pattern Recognition (ICPR), 2014 22nd International Conference on, pages 1657–1662. IEEE, 2014.
[13] Jason V Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, pages 209–216. ACM, 2007.
[14] Matthieu Guillaumin, Jakob Verbeek, and Cordelia Schmid. Is that you? Metric learning approaches for face identification. In 2009 IEEE 12th International Conference on Computer Vision, pages 498–505. IEEE, 2009.
[15] Chen Change Loy, Chunxiao Liu, and Shaogang Gong. Person re-identification by manifold ranking. In 2013 IEEE International Conference on Image Processing, pages 3567–3571. IEEE, 2013.
[16] Wei-Shi Zheng, Shaogang Gong, and Tao Xiang. Re-identification by relative distance comparison. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(3):653–668, 2013.
[17] Martin Hirzer, Peter M Roth, and Horst Bischof. Person re-identification by efficient impostor-based metric learning. In Advanced Video and Signal-Based Surveillance (AVSS), 2012 IEEE Ninth International Conference on, pages 203–208. IEEE, 2012.
[18] Martin Köstinger, Martin Hirzer, Paul Wohlhart, Peter M Roth, and Horst Bischof. Large scale metric learning from equivalence constraints. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2288–2295. IEEE, 2012.
[19] Lianyang Ma, Xiaokang Yang, and Dacheng Tao. Person re-identification over camera networks using multi-task distance metric learning. IEEE Transactions on Image Processing, 23(8):3656–3670, 2014.
[20] Ying-Cong Chen, Wei-Shi Zheng, and Jianhuang Lai. Mirror representation for modeling view-specific transform in person re-identification. In Proc. IJCAI, pages 3402–3408. Citeseer, 2015.
[21] Sakrapee Paisitkriangkrai, Chunhua Shen, and Anton van den Hengel. Learning to rank in person re-identification with metric ensembles. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1846–1855, 2015.
[22] Shengcai Liao, Yang Hu, Xiangyu Zhu, and Stan Z Li. Person re-identification by local maximal occurrence representation and metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2197–2206, 2015.
[23] Wei Li, Rui Zhao, and Xiaogang Wang. Human reidentification with transferred metric learning. In Asian Conference on Computer Vision, pages 31–44. Springer, 2012.
[24] Niki Martinel, Christian Micheloni, and Gian Luca Foresti. Kernelized saliency-based person re-identification through multiple metric learning. IEEE Transactions on Image Processing, 24(12):5645–5658, 2015.
[25] Rui Zhao, Wanli Ouyang, and Xiaogang Wang. Unsupervised salience learning for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3586–3593, 2013.
[26] Dong Yi, Zhen Lei, Shengcai Liao, Stan Z Li, et al. Deep metric learning for person re-identification. In ICPR, volume 2014, pages 34–39, 2014.
[27] Chen Change Loy, Tao Xiang, and Shaogang Gong. Multi-camera activity correlation analysis. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 1988–1995. IEEE, 2009.
Local Similarity-Aware Deep Feature Embedding
Chen Huang
Chen Change Loy
Xiaoou Tang
Department of Information Engineering, The Chinese University of Hong Kong
{chuang,ccloy,xtang}@ie.cuhk.edu.hk
Abstract
Existing deep embedding methods in vision tasks are capable of learning a compact
Euclidean space from images, where Euclidean distances correspond to a similarity
metric. To make learning more effective and efficient, hard sample mining is
usually employed, with samples identified through computing the Euclidean feature
distance. However, the global Euclidean distance cannot faithfully characterize
the true feature similarity in a complex visual feature space, where the intraclass
distance in a high-density region may be larger than the interclass distance in
low-density regions. In this paper, we introduce a Position-Dependent Deep Metric
(PDDM) unit, which is capable of learning a similarity metric adaptive to local
feature structure. The metric can be used to select genuinely hard samples in a
local neighborhood to guide the deep embedding learning in an online and robust
manner. The new layer is appealing in that it is pluggable to any convolutional
networks and is trained end-to-end. Our local similarity-aware feature embedding
not only demonstrates faster convergence and boosted performance on two complex
image retrieval datasets, its large margin nature also leads to superior generalization
results under the large and open set scenarios of transfer learning and zero-shot
learning on ImageNet 2010 and ImageNet-10K datasets.
1 Introduction
Deep embedding methods aim at learning a compact feature embedding f(x) ∈ R^d from image x
using a deep convolutional neural network (CNN). They have been increasingly adopted in a variety of
vision tasks such as product visual search [1, 14, 29, 33] and face verification [13, 27]. The embedding
objective is usually in a Euclidean sense: the Euclidean distance D_{i,j} = ‖f(x_i) − f(x_j)‖_2 between
two feature vectors should preserve their semantic relationship encoded pairwise (by contrastive
loss [1]), in triplets [27, 33] or even higher order relationships (e.g., by lifted structured loss [29]).
It is widely observed that an effective data sampling strategy is crucial to ensure the quality and
learning efficiency of deep embedding, as there are often many more easy examples than those
meaningful hard examples. Selecting overly easy samples can in practice lead to slow convergence
and poor performance since many of them satisfy the constraint well and give nearly zero loss,
without exerting any effect on parameter update during the back-propagation [3]. Hence hard
example mining [7] becomes an indispensable step in state-of-the-art deep embedding methods.
These methods usually choose hard samples by computing the convenient Euclidean distance in the
embedding space. For instance, in [27, 29], hard negatives with small Euclidean distances are found
online in a mini-batch. An exception is [33] where an online reservoir importance sampling scheme
is proposed to sample discriminative triplets by relevance scores. Nevertheless, these scores are
computed offline with different hand-crafted features and distance metrics, which is suboptimal.
We question the effectiveness of using a single and global Euclidean distance metric for finding hard
samples, especially for real-world vision tasks that exhibit complex feature variations due to pose,
lighting, and appearance. As shown in a fine-grained bird image retrieval example in Figure 1(a),
the diversity of feature patterns learned for each class throughout the feature space can easily lead
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: (a) 2-D feature embedding (by t-SNE [19]) of the CUB-200-2011 [32] test set. The
intraclass distance can be larger than the interclass distance under the global Euclidean metric, which
can mislead the hard sample mining and consequently deep embedding learning. We propose a PDDM
unit that incorporates the absolute position (i.e., feature mean denoted by the red triangle) to adapt
metric to the local feature structure. (b) Overlapped similarity distribution by the Euclidean metric (the
similarity scores are transformed from distances by a Sigmoid-like function) vs. the well-separated
distribution by PDDM. (c) PDDM-guided hard sample mining and embedding learning.
to a larger intraclass Euclidean distance than the interclass distance. Such a heterogeneous feature
distribution yields a highly overlapped similarity score distribution for the positive and negative
pairs, as shown in the left chart of Figure 1(b). We observed similar phenomenon for the global
Mahalanobis metric [11, 12, 21, 35, 36] in our experiments. It is not difficult to see that using a single
and global metric would easily mislead the hard sample mining. To circumvent this issue, Cui et
al. [3] resorted to human intervention for harvesting genuinely hard samples.
Mitigating the aforementioned issue demands an improved metric that is adaptive to the local feature
structure. In this study we wish to learn a local-adaptive similarity metric online, which will be
exploited to search for high-quality hard samples in local neighborhood to facilitate a more effective
deep embedding learning. Our key challenges lie in the formulation of a new layer and loss function
that jointly consider the similarity metric learning, hard samples selection, and deep embedding
learning. Existing studies [13, 14, 27, 29, 33] only consider the two latter objectives but not together
with the first. To this end, we propose a new Position-Dependent Deep Metric (PDDM) unit for
similarity metric learning. It is readily pluggable to train end-to-end with an existing deep embedding
learning CNN. We formulate the PDDM such that it learns locally adaptive metric (unlike the
global Euclidean metric), through a non-linear regression on both the absolute feature difference
and feature mean (which encodes absolute position) of a data pair. As depicted in the right chart of
Figure 1(b), the proposed metric yields a similarity score distribution that is more distinguishable
than the conventional Euclidean metric. As shown in Figure 1(c), hard samples are mined from
the resulting similarity score space and used to optimize the feature embedding space in a seamless
manner. The similarity metric learning in PDDM and embedding learning in the associated CNN are
jointly optimized using a novel large-margin double-header hinge loss.
Image retrieval experiments on two challenging real-world vision datasets, CUB-200-2011 [32] and
CARS196 [15], show that our local similarity-aware feature embedding significantly outperforms
state-of-the-art deep embedding methods that come without the online metric learning and associated
hard sample mining scheme. Moreover, the proposed approach incurs a far lower computational
cost and encourages faster convergence than those structured embedding methods (e.g., [29]), which
need to compute a fully connected dense matrix of pairwise distances in a mini-batch. We further
demonstrate our learned embedding is generalizable to new classes in large open set scenarios. This
is validated in the transfer learning and zero-shot learning (using the ImageNet hierarchy as auxiliary
knowledge) tasks on ImageNet 2010 and ImageNet-10K [5] datasets.
[Figure 2(a) pipeline: mini-batch images pass through CNNs with shared parameters; PDDM scores over the batch are used to mine one hard quadruplet, $(i^\star, j^\star) = \arg\min_{(i,j)\in\hat{P}} S_{i,j}$, $k^\star = \arg\max_{(i^\star,k)\in\hat{N}} S_{i^\star,k}$, $l^\star = \arg\max_{(j^\star,l)\in\hat{N}} S_{j^\star,l}$; the double-header hinge loss is applied to both the similarity scores and the feature embeddings.]
Figure 2: (a) The overall network architecture. All CNNs have shared architectures and parameters. (b) The PDDM unit.
2 Related work
Hard sample mining in deep learning: Hard sample mining is a popular technique used in computer
vision for training robust classifier. The method aims at augmenting a training set progressively with
false positive examples with the model learned so far. It is the core of many successful vision solutions,
e.g. pedestrian detection [4, 7]. In a similar spirit, contemporary deep embedding methods [27, 29]
choose hard samples in a mini-batch by computing the Euclidean distance in the embedding space.
For instance, Schroff et al. [27] selected online the semi-hard negative samples with relatively small
Euclidean distances. Wang et al. [33] proposed an online reservoir importance sampling algorithm
to sample triplets by relevance scores, which are computed offline with different distance metrics.
Similar studies on image descriptor learning [28] and unsupervised feature learning [34] also select
hard samples according to the Euclidean distance-based losses in their respective CNNs. We argue in
this paper that the global Euclidean distance is a suboptimal similarity metric for hard sample mining,
and propose a locally adaptive metric for better mining.
Metric learning: An effective similarity metric is at the core of hard sample mining. Euclidean
distance is the simplest similarity metric, and it is widely used by current deep embedding methods
where Euclidean feature distances directly correspond the similarity. Similarities can be encoded
pairwise with a contrastive loss [1] or in more flexible triplets [27, 33]. Song et al. [29] extended to
even higher order similarity constraints by lifting the pairwise distances within a mini-batch to the
dense matrix of pairwise distances. Beyond Euclidean metric, one can actually turn to the parametric
Mahalanobis metric instead. Representative works [12, 36] minimize the Mahalanobis distance
between positive sample pairs while maximizing the distance between negative pairs. Alternatives
directly optimize the Mahalanobis metric for nearest neighbor classification via the method of
Neighbourhood Component Analysis (NCA) [11], Large Margin Nearest Neighbor (LMNN) [35] or
Nearest Class Mean (NCM) [21]. However, the common drawback of the Mahalanobis and Euclidean
metrics is that they are both global and are far from being ideal in the presence of heterogeneous
feature distribution (see Figure 1(a)). An intuitive remedy would be to learn multiple metrics [9],
which would be computationally expensive though. Xiong et al. [37] proposed a single adaptive
metric using the absolute position information in random forest classifiers. Our approach shares the
similar intuition, but incorporates the position information by a deep CNN in a more principled way,
and can jointly learn similarity-aware deep features instead of using hand-crafted ones as in [37].
3 Local similarity-aware deep embedding
Let X = {(x_i, y_i)} be an imagery dataset, where y_i is the class label of image x_i. Our goal is to jointly
learn a deep feature embedding f(x) from image x into a feature space R^d, and a similarity metric
S_{i,j} = S(f(x_i), f(x_j)) ∈ R^1, such that the metric can robustly select hard samples online to learn a
discriminative, similarity-aware feature embedding. Ideally, the learned features (f(x_i), f(x_j)) from
the set of positive pairs P = {(i, j) | y_i = y_j} should be close to each other with a large similarity
score S_{i,j}, while the learned features from the set of negative pairs N = {(i, j) | y_i ≠ y_j} should be
pushed far away with a small similarity score. Importantly, this relationship should hold independent
of the (heterogeneous) feature distribution in R^d, where a global metric S_{i,j} can fail. To adapt S_{i,j} to
the latent structure of feature embeddings, we propose a Position-Dependent Deep Metric (PDDM)
unit that can be trained end-to-end; see Figure 2(b).
The overall network architecture is shown in Figure 2(a). First, we use PDDM to compute similarity
scores for the mini-batch samples during a particular forward pass. The scores are used to select
one hard quadruplet from the local sets of positive pairs P̂ ⊆ P and negative pairs N̂ ⊆ N in the
batch. Then each sample in the hard quadruplet is separately fed into four identical CNNs with shared
parameters W to extract d-dimensional features. Finally, a discriminative double-header hinge loss is
applied to both the similarity score and feature embeddings. This enables us to jointly optimize the
two that benefit each other. We will provide the details in the following.
3.1 PDDM learning and hard sample mining
Given a feature pair (f_W(x_i), f_W(x_j)) extracted from images x_i and x_j by an embedding function
f_W(·) parameterized by W, we wish to obtain an ideal similarity score y_{i,j} = 1 if (i, j) ∈ P, and
y_{i,j} = 0 if (i, j) ∈ N. Hence, we seek the optimal similarity metric S*(·,·) from an appropriate
function space H, and also seek the optimal feature embedding parameters W*:
$$ (S^*(\cdot,\cdot),\, W^*) \;=\; \operatorname*{argmin}_{S(\cdot,\cdot)\in\mathcal{H},\; W} \; \frac{1}{|P \cup N|} \sum_{(i,j)\in P \cup N} l\big(S(f_W(x_i), f_W(x_j)),\; y_{i,j}\big), \qquad (1) $$
where l(·) is some loss function. We will omit the parameters W of f(·) in the following for brevity.
Adapting to local feature structure. The standard Euclidean or Mahalanobis metric can be seen as a
special form of function S(·,·) that is based solely on the feature difference vector u = |f(x_i) − f(x_j)|
or its linearly transformed version. These metrics are suboptimal in a heterogeneous embedding
space, and thus could easily fail the search for genuinely hard samples. On the contrary, the proposed
PDDM leverages the absolute feature position to adapt the metric throughout the embedding space.
Specifically, inspired by [37], apart from the feature difference vector u, we additionally incorporate
the feature mean vector v = (f(x_i) + f(x_j))/2 to encode the absolute position. Unlike [37], we
formulate a principled learnable similarity metric from u and v in our CNN.
Formally, as shown in Figure 2(b), we first normalize the features f(x_i) and f(x_j) onto the unit
hypersphere, i.e., ‖f(x)‖₂ = 1, in order to maintain feature comparability. Such normalized features
are used to compute their relative and absolute positions encoded in u and v, each followed by
a fully connected layer, an elementwise ReLU nonlinear function σ(x) = max(0, x), and again,
ℓ2-normalization r(x) = x/‖x‖₂. To treat u and v differently, the fully connected layers applied to
them are not shared, parameterized by W_u ∈ R^{d×d}, b_u ∈ R^d and W_v ∈ R^{d×d}, b_v ∈ R^d, respectively.
The nonlinearities ensure the model is not trivially equivalent to the mapping from f(x_i) and
f(x_j) themselves. Then we concatenate the mapped u′ and v′ vectors and pass them through another
fully connected layer parameterized by W_c ∈ R^{2d×d}, b_c ∈ R^d and the ReLU function, and finally
map to a score S_{i,j} = S(f(x_i), f(x_j)) ∈ R via W_s ∈ R^{d×1}, b_s ∈ R. To summarize:
$$ u = |f(x_i) - f(x_j)|, \qquad v = \big(f(x_i) + f(x_j)\big)/2, $$
$$ u' = r\big(\sigma(W_u u + b_u)\big), \qquad v' = r\big(\sigma(W_v v + b_v)\big), $$
$$ c = \sigma\!\left(W_c \begin{bmatrix} u' \\ v' \end{bmatrix} + b_c\right), \qquad S_{i,j} = W_s\, c + b_s. \qquad (2) $$
In this way, we transform the seeking of the similarity metric function S(·,·) into the joint learning of
CNN parameters (W_u, W_v, W_c, W_s, b_u, b_v, b_c, b_s for the PDDM unit, and W for feature embeddings).
The parameters collectively define a flexible nonlinear regression function for the similarity score.
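To make the construction concrete, below is a minimal PyTorch-style sketch of the PDDM unit in Eq. (2). The module and method names are ours, and the sketch assumes the d-dimensional features have already been extracted by the shared CNN; it is an illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PDDM(nn.Module):
    """Position-Dependent Deep Metric unit (a sketch of Eq. (2))."""
    def __init__(self, d):
        super().__init__()
        self.fc_u = nn.Linear(d, d)      # W_u, b_u
        self.fc_v = nn.Linear(d, d)      # W_v, b_v
        self.fc_c = nn.Linear(2 * d, d)  # W_c, b_c
        self.fc_s = nn.Linear(d, 1)      # W_s, b_s

    def forward(self, fi, fj):
        # Normalize features onto the unit hypersphere.
        fi = F.normalize(fi, p=2, dim=-1)
        fj = F.normalize(fj, p=2, dim=-1)
        u = torch.abs(fi - fj)           # relative position
        v = (fi + fj) / 2                # absolute position
        u = F.normalize(F.relu(self.fc_u(u)), p=2, dim=-1)
        v = F.normalize(F.relu(self.fc_v(v)), p=2, dim=-1)
        c = F.relu(self.fc_c(torch.cat([u, v], dim=-1)))
        return self.fc_s(c).squeeze(-1)  # similarity score S_ij
```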
Double-header hinge loss. To optimize all these CNN parameters, we can choose a standard
regression loss function l(·), e.g., the logistic regression loss. Alternatively, we can cast the problem as
a binary classification one as in [37]. However, in both cases the CNN is prone to overfitting, because
the supervisory binary similarity labels y_{i,j} ∈ {0, 1} tend to independently push the scores towards
two single points. While in practice, the similarity scores of positive and negative pairs live on a 1-D
manifold following some distribution patterns on heterogeneous data, as illustrated in Figure 1(b).
This motivates us to design a loss function l(·) to separate the similarity distributions, instead of in
an independent pointwise way that is noise-sensitive. One intuitive option is to impose the Fisher
criterion [20] on the similarity scores, i.e., maximizing the ratio between the interclass and intraclass
scatters of scores. Similarly, it can be reduced to maximizing (μ_P − μ_N)² / (Var_P + Var_N) in our
1-D case, where μ and Var are the mean and variance of each score distribution. Unfortunately, the
optimality of Fisher-like criteria relies on the assumption that the data of each class is of a Gaussian
distribution, which is obviously not satisfied in our case. Also, a high cost O(m²) is entailed to
compute the Fisher loss in a mini-batch with m samples by computing all the pairwise distances.
Consequently, we propose a faster-to-compute loss function that approximately maximizes the
margin between the positive and negative similarity distributions without making any assumption
about the distribution's shape or pattern. Specifically, we retrieve one hard quadruplet from a
random batch during each forward pass. Please see the illustration in Figure 1(c). The quadruplet
consists of the most dissimilar positive sample pair in the batch, (î, ĵ) = argmin_{(i,j)∈P̂} S_{i,j}, which
means their similarity score is most likely to cross the "safe margin" towards the negative similarity
distribution in this local range. Next, we build a similarity neighborhood graph that links the chosen
positive pair with their respective negative neighbors in the batch, and choose the hard negatives
as the other two quadruplet members k̂ = argmax_{(î,k)∈N̂} S_{î,k} and l̂ = argmax_{(ĵ,l)∈N̂} S_{ĵ,l}. Using
this hard quadruplet (î, ĵ, k̂, l̂), we can now locally approximate the inter-distribution margin as
min(S_{î,ĵ} − S_{î,k̂}, S_{î,ĵ} − S_{ĵ,l̂}) in a robust manner. This makes us immediately come to a double-header
hinge loss Em to discriminate the target similarity distributions under the large margin criterion:
$$ \min\; E_m = \sum_{(\hat\imath,\hat\jmath)} \big(\xi_{\hat\imath,\hat\jmath} + \tau_{\hat\imath,\hat\jmath}\big), \qquad (3) $$
$$ \text{s.t.:}\;\; \forall(\hat\imath,\hat\jmath),\quad \max\big(0,\; \alpha + S_{\hat\imath,\hat k} - S_{\hat\imath,\hat\jmath}\big) \le \xi_{\hat\imath,\hat\jmath}, \qquad \max\big(0,\; \alpha + S_{\hat\jmath,\hat l} - S_{\hat\imath,\hat\jmath}\big) \le \tau_{\hat\imath,\hat\jmath}, $$
$$ (\hat\imath,\hat\jmath) = \operatorname*{argmin}_{(i,j)\in\hat P} S_{i,j}, \qquad \hat k = \operatorname*{argmax}_{(\hat\imath,k)\in\hat N} S_{\hat\imath,k}, \qquad \hat l = \operatorname*{argmax}_{(\hat\jmath,l)\in\hat N} S_{\hat\jmath,l}, \qquad \xi_{\hat\imath,\hat\jmath} \ge 0,\; \tau_{\hat\imath,\hat\jmath} \ge 0, $$
where ξ_{î,ĵ}, τ_{î,ĵ} are the slack variables, and α is the enforced margin.
The discriminative loss has four main benefits: 1) The discrimination of similarity distributions is
assumption-free. 2) Hard samples are simultaneously found during the loss minimization. 3) The
loss function incurs a low computational cost and encourages faster convergence. Specifically, the
searching cost of the hard positive pair (î, ĵ) is very small since the positive pair set P̂ is usually much
smaller than the negative pair set N̂ in an m-sized mini-batch, while the hard negative mining only
incurs an O(m) complexity. 4) Eqs. (2, 3) can be easily optimized through standard stochastic
gradient descent to adjust the CNN parameters.
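The selection rule above can be sketched in a few lines; the function below operates on a precomputed batch score matrix, and the tensor layout, function name, and tie-breaking behavior are our own assumptions:

```python
import torch

def mine_hard_quadruplet(S, labels):
    """Select one hard quadruplet (i, j, k, l) from a mini-batch.

    S:      (m, m) tensor of PDDM similarity scores for the batch.
    labels: (m,) tensor of class labels.
    A sketch of the selection rule around Eq. (3); assumes the batch
    contains at least one positive pair and negatives for i and j.
    """
    m = len(labels)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(m, dtype=torch.bool, device=S.device)
    pos = same & ~eye                      # positive pairs P^
    neg = ~same                            # negative pairs N^

    # Hardest positive pair: the lowest-scoring same-class pair.
    i, j = divmod(S.masked_fill(~pos, float('inf')).argmin().item(), m)

    # Hardest negatives linked to i and j: highest-scoring negatives.
    k = S[i].masked_fill(~neg[i], float('-inf')).argmax().item()
    l = S[j].masked_fill(~neg[j], float('-inf')).argmax().item()
    return i, j, k, l
```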
3.2 Joint metric and embedding optimization
Given the learned PDDM and mined hard samples in a mini-batch, we can use them to solve for
a better, local similarity-aware feature embedding at the same time. For computational efficiency,
we reuse the hard quadruplet's features for metric optimization (Eq. (3)) in the same forward pass.
What follows is to use the double-header hinge loss again, but to constrain the deep features this time,
see Figure 2. The objective is to ensure the Euclidean distance between hard negative features (D_{î,k̂}
or D_{ĵ,l̂}) is larger than that between hard positive features D_{î,ĵ} by a large margin. Combining the
embedding loss E_e and metric loss E_m (Eq. (3)) gives our final joint loss function:
$$ \min\; E_m + \lambda E_e + \mu\,\|\tilde W\|^2, \qquad \text{where } E_e = \sum_{(\hat\imath,\hat\jmath)} \big(o_{\hat\imath,\hat\jmath} + \rho_{\hat\imath,\hat\jmath}\big), \qquad (4) $$
$$ \text{s.t.:}\;\; \forall(\hat\imath,\hat\jmath),\quad \max\big(0,\; \beta + D_{\hat\imath,\hat\jmath} - D_{\hat\imath,\hat k}\big) \le o_{\hat\imath,\hat\jmath}, \qquad \max\big(0,\; \beta + D_{\hat\imath,\hat\jmath} - D_{\hat\jmath,\hat l}\big) \le \rho_{\hat\imath,\hat\jmath}, $$
$$ D_{\hat\imath,\hat\jmath} = \|f(x_{\hat\imath}) - f(x_{\hat\jmath})\|_2^2, \qquad o_{\hat\imath,\hat\jmath} \ge 0,\; \rho_{\hat\imath,\hat\jmath} \ge 0, $$
where W̃ are the CNN parameters for both the PDDM and feature embedding, o_{î,ĵ}, ρ_{î,ĵ} and β are
the slack variables and enforced margin for E_e, and λ, μ are the regularization parameters. Since all
features are ℓ2-normalized (see Figure 2), we have β + D_{î,ĵ} − D_{î,k̂} = β − 2f(x_î)⊤f(x_ĵ) + 2f(x_î)⊤f(x_k̂),
and can conveniently derive the gradients as those in triplet-based methods [27, 33].
This joint objective provides effective supervision in two domains, respectively at the score level and
feature level that are mutually informed. Although the score level supervision by Em alone is already
capable of optimizing both our metric and feature embedding, we will show the benefits of adding the
feature level supervision by Ee in experiments. Note we can still enforce the large margin relations
of quadruple features in Ee using the simple Euclidean metric. This is because the quadruple features
are selected by our PDDM that is learned in the local Euclidean space as well.
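As a rough sketch of how the two double-header hinge losses of Eqs. (3) and (4) combine for a single mined quadruplet (the function name and reduction are ours, the margin defaults follow the grid-searched values in Section 3.3, and the weight-decay term of Eq. (4) is omitted):

```python
import torch
import torch.nn.functional as F

def quadruplet_losses(S, feats, i, j, k, l, alpha=0.5, beta=1.0):
    """Double-header hinge losses for one mined quadruplet.

    A sketch of Eqs. (3) and (4): E_m acts on similarity scores,
    E_e on squared distances between L2-normalized features; the
    total objective is E_m + lambda * E_e (+ weight decay).
    """
    # Metric loss E_m: negative scores should trail S[i, j] by alpha.
    E_m = (F.relu(alpha + S[i, k] - S[i, j])
           + F.relu(alpha + S[j, l] - S[i, j]))

    # Embedding loss E_e: squared feature distances, margin beta.
    f = F.normalize(feats, p=2, dim=-1)
    d2 = lambda a, b: (f[a] - f[b]).pow(2).sum()
    E_e = (F.relu(beta + d2(i, j) - d2(i, k))
           + F.relu(beta + d2(i, j) - d2(j, l)))
    return E_m, E_e
```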
[Figure 3 panels, left to right: contrastive embedding; triplet embedding; lifted structured embedding; local similarity-aware embedding, each illustrated over classes 1-4.]
Figure 3: Illustrative comparison of different feature embeddings. Pairwise similarities in classes 1-3
are effortlessly distinguishable in a heterogeneous feature space because there is always a relative
safe margin between any two involved classes w.r.t. their class bounds. However, it is not the case for
class 4. The contrastive [1], triplet [27, 33] and lifted structured [29] embeddings select hard samples
by the Euclidean distance that is not adaptive to the local feature structure. They may thus select
inappropriate hard samples and the negative pairs get misled towards the wrong gradient direction
(red arrow). In contrast, our local similarity-aware embedding is correctly updated by the genuinely
hard examples in class 4.
Figure 3 compares our local similarity-aware feature embedding with existing works. Contrastive [1]
embedding is trained on pairwise data {(x_i, x_j, y_{i,j})}, and tries to minimize the distance between the
positive feature pair and penalize the distance between the negative feature pair for being smaller than a
margin. Triplet embedding [27, 33] samples the triplet data {(x_a, x_p, x_n)}, where x_a is an anchor
point and x_p, x_n are from the same and different class, respectively. The objective is to separate
the intraclass distance between (f(x_a), f(x_p)) and the interclass distance between (f(x_a), f(x_n)) by
a margin. Lifted structured embedding [29] considers all the positive feature pairs (e.g.,
(f(x_i), f(x_j)) in Figure 3) and all their linked negative pairs (e.g., (f(x_i), f(x_k)), (f(x_j), f(x_l))
and so on), and enforces a margin between positive and negative distances.
The common drawback of the above-mentioned embedding methods is that they sample pairwise
or triplet (i.e., anchor) data randomly and rely on a simplistic Euclidean metric. They are thus very
likely to update from inappropriate hard samples and push the negative pairs towards the already
well-separated embedding space (see the red arrow in Figure 3). Our method, in contrast, uses PDDM
to find the genuinely hard feature quadruplet (f(x_î), f(x_ĵ), f(x_k̂), f(x_l̂)), and thus can update the feature
embedding in the correct direction. Also, our method is more efficient than the lifted structured
embedding [29] that requires computing dense pairwise distances within a mini-batch.
3.3 Implementation details
We use GoogLeNet [31] (feature dimension d = 128) and CaffeNet [16] (d = 4096) as our base
network architectures for retrieval and transfer learning tasks respectively. They are initialized
with their pretrained parameters on ImageNet classification. The fully-connected layers of our
PDDM unit are initialized with random weights and followed by dropout [30] with p = 0.5. For all
experiments, we choose by grid search the mini-batch size m = 64, initial learning rate 1 × 10⁻⁴,
momentum 0.9, margin parameters α = 0.5, β = 1 in Eqs. (3, 4), and regularization parameters
λ = 0.5, μ = 5 × 10⁻⁴ (λ balances the metric loss E_m against the embedding loss E_e). To find
meaningful hard positives in our hard quadruplets, we ensure that any one class in a mini-batch has at
least 4 samples. And, we always scale S_{î,ĵ} into the range [0, 1] by the similarity graph in the batch.
The entire network is trained for a maximum of 400 epochs until convergence.
4 Results
Image retrieval. The task of image retrieval is a perfect testbed for our method, where both the
learned PDDM and feature embedding (under the Euclidean feature distance) can be used to find
similar images for a query. Ideally, a good similarity metric should be query-adapted (i.e., position-dependent), and both the metric and features should be able to generalize. We test these properties of
our method on two popular fine-grained datasets with complex feature distribution. We deliberately
make the evaluation more challenging by preparing training and testing sets that are disjoint in terms
of class labels. Specifically, we use the CUB-200-2011 [32] dataset with 200 bird classes and 11,788
[Figure 4 graphics: training-loss curves vs. epoch for Quadruplet+Euclidean and Quadruplet+PDDM on CUB-200-2011 and CARS196, and top-8 retrieval examples annotated with (similarity score, feature distance) pairs.]
Figure 4: Top: a comparison of the training convergence curves of our method with Euclidean- and
PDDM-based hard quadruplet mining on the test sets of CUB-200-2011 [32] and CARS196 [15]
datasets. Bottom: top 8 images retrieved by PDDM (similarity score and feature distance are shown
underneath) and the corresponding feature embeddings (black dots) on CUB-200-2011.
Table 1: Recall@K (%) on the test sets of CUB-200-2011 [32] and CARS196 [15] datasets.

                     CUB-200-2011                         CARS196
K                    1     2     4     8     16    32     1     2     4     8     16    32
Contrastive [1]      26.4  37.7  49.8  62.3  76.4  85.3   21.7  32.3  46.1  58.9  72.2  83.4
Triplet [27, 33]     36.1  48.6  59.3  70.0  80.2  88.4   39.1  50.4  63.3  74.5  84.1  89.8
LiftedStruct [29]    47.2  58.9  70.2  80.2  89.3  93.2   49.0  60.3  72.1  81.5  89.2  92.8
LMDM score           49.5  61.1  72.1  81.8  90.5  94.1   50.9  61.9  73.5  82.5  89.8  93.1
PDDM score           55.0  67.1  77.4  86.9  92.2  95.0   55.2  66.5  78.0  88.2  91.5  94.3
PDDM+Triplet         50.9  62.1  73.2  82.5  91.1  94.4   46.4  58.2  70.3  80.1  88.6  92.6
PDDM+Quadruplet      58.3  69.2  79.0  88.4  93.1  95.7   57.4  68.6  80.1  89.4  92.3  94.9
images. We employ the first 100 classes (5,864 images) for training, and the remaining 100 classes
(5,924 images) for testing. Another used dataset is CARS196 [15] with 196 car classes and 16,185
images. The first 98 classes (8,054 images) are used for training, and the other 98 classes are retained
for testing (8,131 images). We use the standard Recall@K as the retrieval evaluation metric.
Figure 4-(top) shows that the proposed PDDM leads to 2× faster convergence in 200 epochs (28 hours
on a Titan X GPU) and lower converged loss than the regular Euclidean metric, when both are used
to mine hard quadruplets for embedding learning. Note the two resulting approaches both incur lower
computational costs than [29], with a near linear rather than quadratic [29] complexity in mini-batches.
As observed from the retrieval results and their feature distributions in Figure 4-(bottom), our PDDM
copes comfortably with large intraclass variations, and generates stable similarity scores for those
differently scattered features positioned around a particular query. These results also demonstrate the
successful generalization of PDDM on a test set with disjoint class labels.
Table 1 quantifies the advantages of both of our similarity metric (PDDM) and similarity-aware feature
embedding (dubbed "PDDM+Quadruplet" for short). In the middle rows, we compare the results
from using the metrics of Large Margin Deep Metric (LMDM) and our PDDM, both jointly trained
with our quadruplet embedding. The LMDM is implemented by deeply regressing the similarity
score from the feature difference only, without using the absolute feature position. Although it is also
optimized under the large margin rule, it performs worse than our PDDM due to the lack of position
information for metric adaptation. In the bottom rows, we test using the learned features under the
Euclidean distance. We observed PDDM significantly improves the performance of both triplet and
quadruplet embeddings. In particular, our full "PDDM+Quadruplet" method yields large gains (8%+
Recall@K=1) over previous works [1, 27, 29, 33] all using the Euclidean distance for hard sample
mining. Indeed, as visualized in Figure 4, our learned features are typically well-clustered, with sharp
boundaries and large margins between many classes.
Table 2: The flat top-1 accuracy (%) of transfer learning on ImageNet-10K [5] and flat top-5 accuracy (%) of zero-shot learning on ImageNet 2010.

Transfer learning on ImageNet-10K:
  [5]: 6.4    [26]: 16.7    [23]: 18.1    [18]: 19.2    [21]: 21.9    Ours: 28.4

Zero-shot learning on ImageNet 2010:
  ConSE [22]: 28.5    DeViSE [8]: 31.8    PST [24]: 34.0    [25]: 34.8    [21]: 35.7    AMP [10]: 41.0    Ours: 48.2
Discussion. We previously mentioned that our PDDM and feature embedding can be learned by
only optimizing the metric loss Em in Eq. (3). Here we experimentally prove the necessity of extra
supervision from the embedding loss Ee in Eq. (4). Without it, the Recall@K=1 of image retrieval
by our "PDDM score" and "PDDM+Quadruplet" methods drop by 3.4%+ and 6.5%+, respectively.
Another important parameter is the batch size m. When we set it to be smaller than 64, say 32,
Recall@K=1 on CUB-200-2011 drops to 55.7% and worse with even smaller m. This is because the
chosen hard quadruplet from a small batch makes little sense for learning. When we use large m=132,
we have marginal gains but need many more epochs (than 400) to use enough training quadruplets.
Transfer learning. Considering the good performance of our fully learned features, here we evaluate
their generalization ability under the scenarios of transfer learning and zero-shot learning. Transfer
learning aims to transfer knowledge from the source classes to new ones. Existing methods explored
the knowledge of part detectors [6] or attribute classifiers [17] across classes. Zero-shot learning is
an extreme case of transfer learning, but differs in that for a new class only a description rather than
labeled training samples is provided. The description can be in terms of attributes [17], WordNet
hierarchy [21, 25], semantic class label graph [10, 24], or text data [8, 22]. These learning scenarios
are also related to the open set one [2] where new classes grow continuously.
For transfer learning, we follow [21] to train our feature embeddings and a Nearest Class Mean
(NCM) classifier [21] on the large-scale ImageNet 2010¹ dataset, which contains 1,000 classes and
more than 1.2 million images. Then we apply the NCM classifier to the larger ImageNet-10K [5]
dataset with 10,000 classes, thus do not use any auxiliary knowledge such as parts and attributes.
We use the standard flat top-1 accuracy as the classification evaluation metric. Table 2 shows that
our features outperform state-of-the-art methods by a large margin, including the deep feature-based
ones [18, 21]. We attribute this advantage to our end-to-end feature embedding learning and its large
margin nature, which directly translates to good generalization ability.
For zero-shot learning, we follow the standard settings in [21, 25] on ImageNet 2010: we learn our
feature embeddings on 800 classes, and test on the remaining 200 classes. For simplicity, we also
use the ImageNet hierarchy to estimate the mean of new testing classes from the means of related
training classes. The flat top-5 accuracy is used as the classification evaluation metric. As can be
seen from Table 2, our features achieve top results again among many competing deep CNN-based
methods. Considering our PDDM and local similarity-aware feature embedding are both well learned
with safe margins between classes, in this zero-shot task, they would be naturally resistant to class
boundary confusion between known and unseen classes.
5 Conclusion
In this paper, we developed a method of learning local similarity-aware deep feature embeddings
in an end-to-end manner. The PDDM is proposed to adaptively measure the local feature similarity
in a heterogeneous space, thus it is valuable for high quality online hard sample mining that can
better guide the embedding learning. The double-header hinge loss on both the similarity metric
and feature embedding is optimized under the large margin criterion. Experiments show the efficacy
of our learned feature embedding in challenging image retrieval tasks, and point to its potential of
generalizing to new classes in the large and open set scenarios such as transfer learning and zero-shot
learning. In the future, it is interesting to study the generalization performance when using the shared
attributes or visual-semantic embedding instead of ImageNet hierarchy for zero-shot learning.
Acknowledgments. This work is supported by SenseTime Group Limited and the Hong Kong
Innovation and Technology Support Programme.
¹ See http://www.image-net.org/challenges/LSVRC/2010/index.
References
[1] S. Bell and K. Bala. Learning visual similarity for product design with convolutional neural networks. TOG, 34(4):98:1–98:10, 2015.
[2] A. Bendale and T. Boult. Towards open world recognition. In CVPR, 2015.
[3] Y. Cui, F. Zhou, Y. Lin, and S. Belongie. Fine-grained categorization and dataset bootstrapping using deep metric learning with humans in the loop. In CVPR, 2016.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] J. Deng, A. C. Berg, K. Li, and L. Fei-Fei. What does classifying more than 10,000 image categories tell us? In ECCV, 2010.
[6] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. TPAMI, 28(4):594–611, 2006.
[7] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9):1627–1645, 2010.
[8] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, M. A. Ranzato, and T. Mikolov. DeViSE: A deep visual-semantic embedding model. In NIPS, 2013.
[9] A. Frome, Y. Singer, F. Sha, and J. Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In ICCV, 2007.
[10] Z. Fu, T. A. Xiang, E. Kodirov, and S. Gong. Zero-shot object recognition by semantic manifold distance. In CVPR, 2015.
[11] J. Goldberger, G. E. Hinton, S. T. Roweis, and R. R. Salakhutdinov. Neighbourhood components analysis. In NIPS, 2005.
[12] S. C. H. Hoi, W. Liu, M. R. Lyu, and W.-Y. Ma. Learning distance metrics with contextual constraints for image retrieval. In CVPR, 2006.
[13] C. Huang, Y. Li, C. C. Loy, and X. Tang. Learning deep representation for imbalanced classification. In CVPR, 2016.
[14] C. Huang, C. C. Loy, and X. Tang. Unsupervised learning of discriminative attributes and visual representations. In CVPR, 2016.
[15] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3D object representations for fine-grained categorization. In ICCVW, 2013.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[17] C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. TPAMI, 36(3):453–465, 2014.
[18] Q. Le, M. Ranzato, R. Monga, M. Devin, K. Chen, G. Corrado, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
[19] L. Maaten and G. E. Hinton. Visualizing high-dimensional data using t-SNE. JMLR, 9:2579–2605, 2008.
[20] A. M. Martinez and A. C. Kak. PCA versus LDA. TPAMI, 23(2):228–233, 2001.
[21] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. TPAMI, 35(11):2624–2637, 2013.
[22] M. Norouzi, T. Mikolov, S. Bengio, Y. Singer, J. Shlens, A. Frome, G. Corrado, and J. Dean. Zero-shot learning by convex combination of semantic embeddings. In ICLR, 2014.
[23] F. Perronnin, Z. Akata, Z. Harchaoui, and C. Schmid. Towards good practice in large-scale learning for image classification. In CVPR, 2012.
[24] M. Rohrbach, S. Ebert, and B. Schiele. Transfer learning in a transductive setting. In NIPS, 2013.
[25] M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011.
[26] J. Sanchez and F. Perronnin. High-dimensional signature compression for large-scale image classification. In CVPR, 2011.
[27] F. Schroff, D. Kalenichenko, and J. Philbin. FaceNet: A unified embedding for face recognition and clustering. In CVPR, 2015.
[28] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, and F. Moreno-Noguer. Fracking deep convolutional image descriptors. arXiv preprint, arXiv:1412.6537v2, 2015.
[29] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. In CVPR, 2016.
[30] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
[31] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In CVPR, 2015.
[32] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
[33] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen, and Y. Wu. Learning fine-grained image similarity with deep ranking. In CVPR, 2014.
[34] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[35] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207–244, 2009.
[36] E. P. Xing, M. I. Jordan, S. J. Russell, and A. Y. Ng. Distance metric learning with application to clustering with side-information. In NIPS, 2003.
[37] C. Xiong, D. Johnson, R. Xu, and J. J. Corso. Random forests for metric learning with implicit pairwise position dependence. In SIGKDD, 2012.
Efficient Nonparametric Smoothness Estimation
Shashank Singh
Carnegie Mellon University
[email protected]
Simon S. Du
Carnegie Mellon University
[email protected]
Barnabás Póczos
Carnegie Mellon University
[email protected]
Abstract
Sobolev quantities (norms, inner products, and distances) of probability density
functions are important in the theory of nonparametric statistics, but have rarely
been used in practice, due to a lack of practical estimators. They also include,
as special cases, L2 quantities which are used in many applications. We propose
and analyze a family of estimators for Sobolev quantities of unknown probability
density functions. We bound the finite-sample bias and variance of our estimators,
finding that they are generally minimax rate-optimal. Our estimators are significantly more computationally tractable than previous estimators, and exhibit a
statistical/computational trade-off allowing them to adapt to computational constraints. We also draw theoretical connections to recent work on fast two-sample
testing and empirically validate our estimators on synthetic data.
1 Introduction
L2 quantities (i.e., inner products, norms, and distances) of continuous probability density functions
are important information theoretic quantities with many applications in machine learning and signal
processing. For example, L2 norm estimates can be used for goodness-of-fit testing [7], image
registration and texture classification [12], and parameter estimation in semi-parametric models [36].
L2 inner products estimates can generalize linear or polynomial kernel methods to inputs which are
distributions rather than numerical vectors. [28] L2 distance estimators are used for two-sample
testing [1, 25], transduction learning [30], and machine learning on distributional inputs [27]. [29]
gives applications of L2 quantities to adaptive information filtering, classification, and clustering.
L2 quantities are a special case of less-well-known Sobolev quantities. Sobolev norms measure
global smoothness of a function in terms of integrals of squared derivatives. For example, for a
non-negative integer s and a function f : ℝ → ℝ with an s-th derivative f^{(s)}, the s-order Sobolev
norm ‖·‖_{H^s} is given by ‖f‖²_{H^s} = ∫_ℝ (f^{(s)}(x))² dx (when this quantity is finite). See Section 2 for
more general definitions, and see [21] for an introduction to Sobolev spaces.
Estimation of general Sobolev norms has a long history in nonparametric statistics (e.g., [32, 13, 10, 2]).
This line of work was motivated by the role of Sobolev norms in many semi- and non-parametric
problems, including density estimation, density functional estimation, and regression (see [35],
Section 1.7.1), where they dictate the convergence rates of estimators. Despite this, to our knowledge,
these quantities have never been studied in real data, leaving an important gap between the theory
and practice of nonparametric statistics. We suggest this is in part due to a lack of practical estimators
for these quantities. For example, the only one of the above estimators that is statistically minimax-optimal [2] is extremely difficult to compute in practice, requiring numerical integration over each of
O(n²) different kernel density estimates, where n denotes the sample size. We know of no estimators
previously proposed for Sobolev inner products and distances.
The main goal of this paper is to propose and analyze a family of computationally and statistically
efficient estimators for Sobolev inner products, norms, and distances. Our specific contributions are:
1. We propose families of nonparametric estimators for all Sobolev quantities (Section 4).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
2. We analyze the estimators' bias and variance. Assuming the underlying density functions lie in
a Sobolev class of smoothness parametrized by s′, we show the estimator for Sobolev quantities
of order s < s′ converges to the true value at the "parametric" rate of O(n⁻¹) in mean squared
error when s′ ≥ 2s + D/4, and at a slower rate of O(n^{8(s−s′)/(4s′+D)}) otherwise (Section 5).
3. We validate our theoretical results on simulated data. (Section 8).
4. We derive asymptotic distributions for our estimators, and we use these to derive tests for the
general statistical problem of two-sample testing. We also draw theoretical connections between
our test and the recent work on nonparametric two-sample testing. (Section 9).
In terms of mean squared error, minimax lower bounds matching our convergence rates over Sobolev
or Hölder smoothness classes have been shown by [16] for s = 0 (i.e., L2 quantities), and [3] for
Sobolev norms with integer s. Since these lower bounds intuitively "span" the space of relevant
quantities, it is a small step to conjecture that our estimators are minimax rate-optimal for all Sobolev
quantities and s ∈ [0, ∞).
As described in Section 7, our estimators are computable in O(n^{1+θ}) time using only basic matrix
operations, where n is the sample size and θ ∈ (0, 1) is a tunable parameter trading statistical and
computational efficiency; the smallest value of θ at which the estimator continues to be minimax
rate-optimal approaches 0 as we assume more smoothness of the true density.
2 Problem setup and notation
Let X = [−π, π]^D and let μ denote the Lebesgue measure on X. For D-tuples z ∈ ℤ^D of integers,
let φ_z ∈ L2 = L2(X)¹ defined by φ_z(x) = e^{−i⟨z,x⟩} for all x ∈ X denote the z-th element of the
L2-orthonormal Fourier basis, and, for f ∈ L2, let f̃(z) := ⟨φ_z, f⟩_{L2} = ∫_X φ_z(x) f(x) dμ(x) denote
the z-th Fourier coefficient of f.² For any s ∈ [0, ∞), define the Sobolev space H^s = H^s(X) ⊆ L2
of order s on X by³
$$ H^s = \Big\{ f \in L^2 : \sum_{z \in \mathbb{Z}^D} z^{2s} \big|\tilde{f}(z)\big|^2 < \infty \Big\}. \qquad (1) $$
Fix a known s ∈ [0, ∞) and unknown probability density functions p, q ∈ H^s, and suppose we
have n IID samples X₁, ..., X_n ∼ p and Y₁, ..., Y_n ∼ q from each of p and q. We are interested in
estimating the inner product
$$ \langle p, q \rangle_{H^s} := \sum_{z \in \mathbb{Z}^D} z^{2s}\, \tilde{p}(z)\, \overline{\tilde{q}(z)}, \quad \text{defined for all } p, q \in H^s. \qquad (2) $$
Estimating the inner product gives an estimate for the (squared) induced norm and distance, since⁴
$$ \|p\|_{H^s}^2 := \sum_{z \in \mathbb{Z}^D} z^{2s} |\tilde{p}(z)|^2 = \langle p, p \rangle_{H^s} \quad \text{and} \quad \|p - q\|_{H^s}^2 = \|p\|_{H^s}^2 - 2\langle p, q \rangle_{H^s} + \|q\|_{H^s}^2. \qquad (3) $$
Since our theoretical results assume the samples from p and q are independent, when estimating
‖p‖²_{H^s}, we split the sample from p in half to compute two independent estimates of p̃, although this
may not be optimal in practice.
Table 1: Some related functional forms for which nonparametric estimators have been developed and
analyzed. p, p1, ..., pk are unknown probability densities, from each of which we draw n IID samples,
φ is a known real-valued measurable function, and k is a non-negative integer.

Functional Name            Functional Form                                    References
L2 norms                   ‖p‖²_{L2} = ∫ (p(x))² dx                           [32, 6]
(Integer) Sobolev norms    ‖p‖²_{H^k} = ∫ (p^{(k)}(x))² dx                    [2]
Density functionals        ∫ φ(x, p(x)) dx                                    [18, 19]
Derivative functionals     ∫ φ(x, p(x), p′(x), ..., p^{(k)}(x)) dx            [3]
L2 inner products          ⟨p1, p2⟩_{L2} = ∫ p1(x) p2(x) dx                   [16, 17]
Multivariate functionals   ∫ φ(x, p1(x), ..., pk(x)) dx                       [34, 14]

¹ We suppress dependence on X; all function spaces are over X except as discussed in Section 2.1.
² Here, ⟨·,·⟩ denotes the dot product on ℝ^D. For a complex number c = a + bi, c̄ = a − bi denotes the complex conjugate of c, and |c| = √(c c̄) = √(a² + b²) denotes the modulus of c.
³ When D > 1, z^{2s} = ∏_{j=1}^D z_j^{2s}. For z < 0, z^{2s} should be read as (z²)^s, so that z^{2s} ∈ ℝ even when 2s ∉ ℤ. In the L2 case, we use the convention that 0⁰ = 1.
⁴ ‖p‖_{H^s} is a pseudonorm on H^s because it fails to distinguish functions identical almost everywhere up to additive constants; a combination of ‖p‖_{L2} and ‖p‖_{H^s} is used when a proper norm is needed. However, since probability densities integrate to 1, ‖· − ·‖_{H^s} is a proper metric on the subset of (almost-everywhere equivalence classes of) probability density functions in H^s, which is important for two-sample testing (see Section 9). For simplicity, we use the terms "norm", "inner product", and "distance" for the remainder of the paper.

For a more classical intuition, we note that, in the case D = 1 and s ∈ {0, 1, 2, ...} (via Parseval's
identity and the identity f̃^{(s)}(z) = (iz)^s f̃(z)), one can show the following: H^s includes the
subspace of L2 functions with at least s derivatives in L2 and, if f^{(s)} denotes the s-th derivative of f,
$$ \|f\|_{H^s}^2 = 2\pi \int_{\mathcal{X}} \big(f^{(s)}(x)\big)^2\, dx = 2\pi \big\|f^{(s)}\big\|_{L^2}^2, \qquad \forall f \in H^s. \qquad (4) $$
In particular, when s = 0, H^s = L2, ‖·‖_{H^s} = ‖·‖_{L2}, and ⟨·,·⟩_{H^s} = ⟨·,·⟩_{L2}. As we describe in
the supplement, equation (4) and our results generalize trivially to weak derivatives, as well as to
non-integer s ∈ [0, ∞) via a notion of fractional derivative.
2.1 Unbounded domains
A notable restriction above is that p and q are supported in X := [−π, π]^D. In fact, our estimators
and tests are well-defined and valid for densities supported on arbitrary subsets of ℝ^D. In this
case, they act on the 2π-periodic summation p_{2π} : [−π, π]^D → [0, ∞] defined for x ∈ X by
p_{2π}(x) := Σ_{z∈ℤ^D} p(x + 2πz), which is itself a probability density function on X. For example, the
estimator for ‖p‖_{H^s} will instead estimate ‖p_{2π}‖_{H^s}, and the two-sample test for distributions p and q
will attempt to distinguish p_{2π} from q_{2π}. Typically, this is not problematic; for example, for most
realistic probability densities, p and p_{2π} have similar orders of smoothness, and p_{2π} = q_{2π} if and
only if p = q. However, there are (meagre) sets of exceptions; for example, if q is a translation of p
by exactly 2π, then p_{2π} = q_{2π}, and one can craft a highly discontinuous function p such that p_{2π} is
uniform on X. [39] These exceptions make it difficult to extend theoretical results to densities with
arbitrary support, but they are fixed, in practice, by randomly rescaling the data (as in [4]). If the
densities have (known) bounded support, they can simply be shifted and scaled to be supported on X.
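In practice, mapping data onto X is a one-line preprocessing step; below is a sketch under the assumptions above (affine rescaling when bounds are known, otherwise coordinatewise reduction modulo 2π; the function and argument names are ours):

```python
import numpy as np

def to_periodic_box(X, lo=None, hi=None):
    """Map samples into [-pi, pi]^D.

    If (lo, hi) bounds are known, rescale affinely; otherwise reduce
    each coordinate modulo 2*pi, which realizes the 2*pi-periodic
    summation view described above.
    """
    X = np.asarray(X, dtype=float)
    if lo is not None and hi is not None:
        return (X - lo) / (hi - lo) * 2 * np.pi - np.pi
    return np.mod(X + np.pi, 2 * np.pi) - np.pi
```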
3 Related work
There is a large body of work on estimating nonlinear functionals of probability densities, with
various generalizations in terms of the class of functionals considered. Table 1 gives a subset of such
work, for functionals related to Sobolev quantities. As shown in Section 2, the functional form we
consider is a strict generalization of L2 norms, Sobolev norms, and L2 inner products. It overlaps
with, but is neither a special case nor a generalization of the remaining functional forms in the table.
Nearly all of the above approaches compute an optimally smoothed kernel density estimate and
then perform bias corrections based on Taylor series expansions of the functional of interest. They
typically consider distributions with densities that are α-Hölder continuous and satisfy periodicity
assumptions of order α on the boundary of their support, for some constant α > 0 (see, for example,
Section 4 of [16] for details of these assumptions). The Sobolev class we consider is a strict superset
of this Hölder class, permitting, for example, certain "small" discontinuities. In this regard, our
results are slightly more general than most of these prior works.
Finally, there is much recent work on estimating entropies, divergences, and mutual informations,
using methods based on kernel density estimates [14, 17, 16, 24, 33, 34] or k-nearest neighbor
statistics [20, 23, 22, 26]. In contrast, our estimators are more similar to orthogonal series density
estimators, which are computationally attractive because they require no pairwise operations between
samples. However, they require quite different theoretical analysis; unlike prior work, our estimator
3
is constructed and analyzed entirely in the frequency domain, and then related to the data domain via
Parseval's identity. We hope our analysis can be adapted to analyze new, computationally efficient
information theoretic estimators.
4 Motivation and construction of our estimator
For a non-negative integer parameter Z_n (to be specified later), let
$$ p_n := \sum_{\|z\|_\infty \le Z_n} \tilde{p}(z)\, \phi_z \quad \text{and} \quad q_n := \sum_{\|z\|_\infty \le Z_n} \tilde{q}(z)\, \phi_z, \quad \text{where} \quad \|z\|_\infty := \max_{j \in \{1,\dots,D\}} |z_j| \qquad (5) $$
denote the L2 projections of p and q, respectively, onto the linear subspace spanned by the
L2-orthonormal family F_n := {φ_z : z ∈ ℤ^D, ‖z‖_∞ ≤ Z_n}. Note that, since φ̃_z(y) = 0 whenever y ≠ z,
the Fourier basis has the special property that it is orthogonal in ⟨·,·⟩_{H^s} as well. Hence, since
p_n and q_n lie in the span of F_n while p − p_n and q − q_n lie in the span of {φ_z : z ∈ ℤ^D}\F_n,
⟨p − p_n, q_n⟩_{H^s} = ⟨p_n, q − q_n⟩_{H^s} = 0. Therefore,
$$ \langle p, q \rangle_{H^s} = \langle p_n, q_n \rangle_{H^s} + \langle p - p_n, q_n \rangle_{H^s} + \langle p_n, q - q_n \rangle_{H^s} + \langle p - p_n, q - q_n \rangle_{H^s} = \langle p_n, q_n \rangle_{H^s} + \langle p - p_n, q - q_n \rangle_{H^s}. \qquad (6) $$
We propose an unbiased estimate of S_n := ⟨p_n, q_n⟩_{H^s} = Σ_{‖z‖_∞ ≤ Z_n} z^{2s} p̃(z) \overline{q̃(z)}. Notice that
the Fourier coefficients of p are the expectations p̃(z) = E_{X∼p}[φ_z(X)]. Thus, p̂(z) := (1/n) Σ_{j=1}^n φ_z(X_j)
and q̂(z) := (1/n) Σ_{j=1}^n φ_z(Y_j) are independent unbiased estimates of p̃ and q̃, respectively. Since S_n
is bilinear in p̃ and q̃, the plug-in estimator for S_n is unbiased. That is, our estimator for ⟨p, q⟩_{H^s} is
$$ \hat{S}_n := \sum_{\|z\|_\infty \le Z_n} z^{2s}\, \hat{p}(z)\, \overline{\hat{q}(z)}. \qquad (7) $$
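A direct NumPy sketch of the estimator in Eq. (7) follows. It enumerates all (2Z_n + 1)^D frequencies explicitly, so it is meant for illustration in low dimension; a production version would vectorize over frequencies. The function name is ours.

```python
import itertools
import numpy as np

def sobolev_inner_product(X, Y, s, Zn):
    """Plug-in estimator S^_n of <p, q>_{H^s} from Eq. (7).

    X, Y: (n, D) arrays of samples from p and q, supported on
    [-pi, pi]^D.
    """
    n, D = X.shape
    est = 0.0 + 0.0j
    for z in itertools.product(range(-Zn, Zn + 1), repeat=D):
        z = np.asarray(z, dtype=float)
        w = np.prod((z ** 2) ** s)     # z^{2s}, read as (z^2)^s; 0^0 = 1
        if w == 0.0:
            continue                   # zero weight contributes nothing
        p_hat = np.mean(np.exp(-1j * X.dot(z)))   # empirical p~(z)
        q_hat = np.mean(np.exp(-1j * Y.dot(z)))   # empirical q~(z)
        est += w * p_hat * np.conj(q_hat)
    return est.real
```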
5 Finite sample bounds
Here, we present our main theoretical results, bounding the bias, variance, and mean squared error of
our estimator for finite n.
By construction, our estimator satisfies
$$ \mathbb{E}\big[\hat{S}_n\big] = \sum_{\|z\|_\infty \le Z_n} z^{2s}\, \mathbb{E}[\hat{p}(z)]\, \overline{\mathbb{E}[\hat{q}(z)]} = \sum_{\|z\|_\infty \le Z_n} z^{2s}\, \tilde{p}(z)\, \overline{\tilde{q}(z)} = S_n. $$
Thus, via (6) and Cauchy-Schwarz, the bias of the estimator Ŝ_n satisfies
$$ \Big| \mathbb{E}\big[\hat{S}_n\big] - \langle p, q \rangle_{H^s} \Big| = \big| \langle p - p_n, q - q_n \rangle_{H^s} \big| \le \|p - p_n\|_{H^s}\, \|q - q_n\|_{H^s}. \qquad (8) $$
‖p − p_n‖_{H^s} is the error of approximating p by an order-Z_n trigonometric polynomial, a classic
problem in approximation theory, for which Theorem 2.2 of [15] shows: if p ∈ H^{s′} for some s′ > s, then
$$ \|p - p_n\|_{H^s} \le \|p\|_{H^{s'}}\, Z_n^{s - s'}. \qquad (9) $$
In combination with (8), this implies the following bound on the bias of our estimator:
Theorem 1. (Bias bound) If p, q ∈ H^{s′} for some s′ > s, then, for C_B := ‖p‖_{H^{s′}} ‖q‖_{H^{s′}},
$$ \Big| \mathbb{E}\big[\hat{S}_n\big] - \langle p, q \rangle_{H^s} \Big| \le C_B\, Z_n^{2(s - s')}. \qquad (10) $$
Hence, the bias of Ŝ_n decays polynomially in Z_n, with a power depending on the "extra" s′ − s
orders of smoothness available. On the other hand, as we increase Z_n, the number of frequencies at
which we estimate p̃ increases, suggesting that the variance of the estimator will increase with Z_n.
Indeed, this is expressed in the following bound on the variance of the estimator.
Theorem 2. (Variance bound) If p, q ∈ H^{s′} for some s′ ≥ s, then
$$ \mathbb{V}\big[\hat{S}_n\big] \le 2 C_1 \frac{Z_n^{4s+D}}{n^2} + \frac{C_2}{n}, \quad \text{where} \quad C_1 := \frac{2^D\, \Gamma(4s+1)}{\Gamma(4s+D+1)}\, \|p\|_{L^2}\, \|q\|_{L^2} \qquad (11) $$
and C₂ := (‖p‖_{H^s} + ‖q‖_{H^s}) ‖p‖_{W^{2s,4}} ‖q‖_{W^{2s,4}} + ‖p‖⁴_{H^s} ‖q‖⁴_{H^s} are constants in n.
The proof of Theorem 2 is perhaps the most significant theoretical contribution of this work. Due to
space constraints, the proof is given in the supplement. Combining Theorems 1 and 2 gives a bound
on the mean squared error (MSE) of Ŝ_n via the usual decomposition into squared bias and variance:
Corollary 3. (Mean squared error bound) If p, q ∈ H^{s′} for some s′ > s, then
$$ \mathbb{E}\Big[ \big( \hat{S}_n - \langle p, q \rangle_{H^s} \big)^2 \Big] \le C_B^2\, Z_n^{4(s - s')} + 2 C_1 \frac{Z_n^{4s+D}}{n^2} + \frac{C_2}{n}. \qquad (12) $$
If, furthermore, we choose Z_n ≍ n^{2/(4s′+D)} (optimizing the rate in inequality (12)), then
$$ \mathbb{E}\Big[ \big( \hat{S}_n - \langle p, q \rangle_{H^s} \big)^2 \Big] \lesssim n^{\max\left\{ \frac{8(s - s')}{4s' + D},\; -1 \right\}}. \qquad (13) $$
Corollary 3 recovers the phenomenon discovered by [2]: when s′ ≥ 2s + D/4, the minimax optimal
MSE decays at the "semi-parametric" n⁻¹ rate, whereas, when s′ ∈ (s, 2s + D/4), the MSE decays at
a slower rate. Also, the estimator is L2-consistent if Z_n → ∞ and Z_n n^{−2/(4s+D)} → 0 as n → ∞. This
is useful in practice, since s is known but s′ is not.
Finally, it is worth reiterating that, by (3), these finite sample rates extend, with additional constant
factors, to estimating Sobolev norms and distances.
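Concretely, the norm and distance estimators are thin wrappers around the inner-product estimator; here is a sketch reusing the sobolev_inner_product routine from the sketch in Section 4, with the sample-splitting trick from Section 2:

```python
import numpy as np

def sobolev_norm_sq(X, s, Zn):
    """||p||_{H^s}^2 via Eq. (3): split the sample from p in half so the
    two plug-in Fourier estimates are independent."""
    X = np.asarray(X)
    half = len(X) // 2
    return sobolev_inner_product(X[:half], X[half:], s, Zn)

def sobolev_dist_sq(X, Y, s, Zn):
    """||p - q||_{H^s}^2 via Eq. (3)."""
    return (sobolev_norm_sq(X, s, Zn)
            - 2 * sobolev_inner_product(X, Y, s, Zn)
            + sobolev_norm_sq(Y, s, Zn))
```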
6 Asymptotic distributions
In this section, we derive the asymptotic distributions of our estimator in two cases: (1) the inner
product estimator and (2) the distance estimator in the case p = q. These results provide confidence
intervals and two-sample tests without computationally intensive resampling. While (1) is more
general in that it can be used with (3) to bound the asymptotic distributions of the norm and distance
estimators, (2) provides a more precise result leading to a more computationally and statistically
efficient two-sample test. Proofs are given in the supplementary material.
Theorem 4 shows that our estimator has a normal asymptotic distribution, assuming Z_n → ∞ slowly
enough as n → ∞, and also gives a consistent estimator for its asymptotic variance. From this, one
can easily estimate asymptotic confidence intervals for inner products, and hence also for norms.
Theorem 4. (Asymptotic normality) Suppose that, for some s′ > 2s + D/4, p, q ∈ H^{s′}, and
suppose Z_n n^{1/(4(s−s′))} → ∞ and Z_n n^{−1/(4s+D)} → 0 as n → ∞. Then, Ŝ_n is asymptotically normal
with mean ⟨p, q⟩_{H^s}. In particular, for j ∈ {1, ..., n} and z ∈ ℤ^D with ‖z‖_∞ ≤ Z_n, define
W_{j,z} := z^s e^{izX_j} and V_{j,z} := z^s e^{izY_j}, so that W_j and V_j are column vectors in ℝ^{(2Z_n)^D}. Let
$$ \bar{W} := \frac{1}{n} \sum_{j=1}^n W_j, \qquad \bar{V} := \frac{1}{n} \sum_{j=1}^n V_j \;\in\; \mathbb{R}^{(2Z_n)^D}, $$
$$ \Sigma_W := \frac{1}{n} \sum_{j=1}^n (W_j - \bar{W})(W_j - \bar{W})^T \quad \text{and} \quad \Sigma_V := \frac{1}{n} \sum_{j=1}^n (V_j - \bar{V})(V_j - \bar{V})^T \;\in\; \mathbb{R}^{(2Z_n)^D \times (2Z_n)^D} $$
denote the empirical means and covariances of W and V, respectively. Then, for
$$ \hat{\sigma}_n^2 := \begin{pmatrix} \bar{V} \\ \bar{W} \end{pmatrix}^T \begin{pmatrix} \Sigma_W & 0 \\ 0 & \Sigma_V \end{pmatrix} \begin{pmatrix} \bar{V} \\ \bar{W} \end{pmatrix}, \quad \text{we have} \quad \sqrt{n}\, \frac{\hat{S}_n - \langle p, q \rangle_{H^s}}{\hat{\sigma}_n} \xrightarrow{D} \mathcal{N}(0, 1), $$
where →^D denotes convergence in distribution.
Since distances can be written as a sum of three inner products (Eq. (3)), Theorem 4 might suggest
an asymptotic normal distribution for Sobolev distances. However, extending asymptotic normality
from inner products to their sum requires that the three estimates be independent, and hence that we
split data between the three estimates. This is inefficient in practice and somewhat unnatural, as we
know, for example, that distances should be non-negative. For the particular case p = q (as in the
null hypothesis of two-sample testing), the following theorem⁵ provides a more precise asymptotic
(χ²) distribution of our Sobolev distance estimator, after an extra decorrelation step. This gives, for
example, a more powerful two-sample test statistic (see Section 9 for details).
Theorem 5. (Asymptotic null distribution) Suppose that, for some s′ > 2s + D/4, p, q ∈ H^{s′}, and
suppose Z_n n^{1/(4(s−s′))} → ∞ and Z_n n^{−1/(4s+D)} → 0 as n → ∞. For j ∈ {1, ..., n} and z ∈ ℤ^D with
‖z‖_∞ ≤ Z_n, define W_{j,z} := z^s (e^{−izX_j} − e^{−izY_j}), so that W_j is a column vector in ℝ^{(2Z_n)^D}. Let
$$ \bar{W} := \frac{1}{n} \sum_{j=1}^n W_j \;\in\; \mathbb{R}^{(2Z_n)^D} \quad \text{and} \quad \Sigma := \frac{1}{n} \sum_{j=1}^n \big(W_j - \bar{W}\big)\big(W_j - \bar{W}\big)^T \;\in\; \mathbb{R}^{(2Z_n)^D \times (2Z_n)^D} $$
denote the empirical mean and covariance of W, and define T := n \bar{W}^T \Sigma^{-1} \bar{W}. Then, if p = q, then
$$ Q_{\chi^2((2Z_n)^D)}(T) \xrightarrow{D} \text{Uniform}([0, 1]) \quad \text{as} \quad n \to \infty, $$
where Q_{χ²(d)} : [0, ∞) → [0, 1] denotes the quantile function (inverse CDF) of the χ² distribution
χ²(d) with d degrees of freedom.
Let M̂ denote our estimator for ‖p − q‖²_{H^s} (i.e., plugging Ŝ_n into (3)). While Theorem 5 immediately
provides a valid two-sample test of desired level, it is not immediately clear how this relates to
M̂, nor is there any suggestion of why the test statistic ought to be a good (i.e., consistent) one.
Some intuition is as follows. Notice that M̂ = W̄^T W̄. Since, by the central limit theorem, W̄
has a normal asymptotic distribution, if the components of W̄ were uncorrelated (and Z_n were
fixed), we would expect nM̂ to have an asymptotic χ² distribution with (2Z_n)^D degrees of freedom.
However, because we use the same data to compute each component of M̂, they are not typically
uncorrelated, and so the asymptotic distribution of M̂ is difficult to derive. This motivates the statistic
T = n (√(Σ⁻¹) W̄)^T (√(Σ⁻¹) W̄), since the components of √(Σ⁻¹) W̄ are (asymptotically) uncorrelated.
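The decorrelated statistic can be sketched as follows. To keep the empirical covariance real and invertible, we stack real and imaginary parts and keep one representative of each conjugate frequency pair; these are implementation choices of ours, not the paper's exact construction, so the degrees of freedom below equal the stacked feature dimension rather than exactly (2Z_n)^D.

```python
import itertools
import numpy as np
from scipy.stats import chi2

def _half_space(z):
    # Keep one representative of each conjugate pair {z, -z}; drop z = 0.
    for zi in z:
        if zi != 0:
            return zi > 0
    return False

def sobolev_two_sample_test(X, Y, s, Zn):
    """Chi-squared two-sample test in the spirit of Theorem 5 (a sketch)."""
    n, D = X.shape
    feats = []
    for z in itertools.product(range(-Zn, Zn + 1), repeat=D):
        if not _half_space(z):
            continue
        z = np.asarray(z, dtype=float)
        w = np.prod((z ** 2) ** (s / 2.0))  # |z|^s weight; sign flips are
        # absorbed by the covariance, so T is unchanged.
        Wz = w * (np.exp(-1j * X.dot(z)) - np.exp(-1j * Y.dot(z)))
        feats.append(Wz.real)
        feats.append(Wz.imag)
    W = np.stack(feats, axis=1)            # shape (n, d_feat)
    Wbar = W.mean(axis=0)
    # np.cov's 1/(n-1) normalization is asymptotically equivalent to 1/n.
    Sigma = np.cov(W, rowvar=False)
    T = n * Wbar @ np.linalg.pinv(Sigma) @ Wbar
    return T, chi2.sf(T, df=W.shape[1])    # small p-value: reject p = q
```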
7 Parameter selection and statistical/computational trade-off
Here, we give statistical and computational considerations for choosing the smoothing parameter Z_n.
Statistical perspective: In practice, of course, we do not typically know s′, so we cannot simply
set Z_n ≍ n^{2/(4s′+D)}, as suggested by the mean squared error bound (13). Fortunately (at least for ease
of parameter selection), when s′ ≥ 2s + D/4, the dominant term of (13) is C₂/n for Z_n ≍ n^{1/(4s+D)}.
Hence if we are willing to assume that the density has at least 2s + D/4 orders of smoothness (which
may be a mild assumption in practice), then we achieve statistical optimality (in rate) by setting
Z_n ≍ n^{1/(4s+D)}, which depends only on known parameters. On the other hand, the estimator can
continue to benefit from additional smoothness computationally.
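In code, this rule of thumb is a one-liner (the unit proportionality constant is a tuning choice of ours):

```python
import numpy as np

def choose_Zn(n, s, D):
    """Rate-optimal smoothing Z_n ~ n^{1/(4s + D)}, valid under the
    assumption of at least 2s + D/4 orders of smoothness."""
    return max(1, int(np.floor(n ** (1.0 / (4 * s + D)))))
```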
Computational perspective: An attractive property of the estimator discussed is its computational simplicity and efficiency in low dimensions. Most competing nonparametric estimators, such as kernel-based or nearest-neighbor methods, either take O(n²) time or rely on complex data structures such as k-d trees or cover trees [31] for O(2^D n log n) time performance. Since computing the estimator takes O(n Z_n^D) time and O(Z_n^D) memory (that is, the cost of estimating each of (2Z_n)^D Fourier coefficients by an average), a statistically optimal choice of Z_n gives a runtime of O(n^{(4s'+2D)/(4s'+D)}). Since the estimate requires only a vector outer product, exponentiation, and averaging, the constants involved are small and the computations parallelize trivially over frequencies and data.
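To illustrate the cost structure just described, here is a minimal NumPy sketch of the plug-in form of the estimator for D = 1: estimate each of the 2Z_n + 1 Fourier coefficients by an average, then combine them with the weights |z|^{2s}. The bias corrections of the paper's actual estimator are omitted, and the fully vectorized form below trades the O(Z_n) memory bound for speed (a loop over data points recovers it).

import numpy as np

def sobolev_inner_product(X, Y, Zn, s=0):
    """Plug-in estimate of <p, q>_{H^s} from 1-D samples X ~ p and Y ~ q."""
    z = np.arange(-Zn, Zn + 1)
    p_hat = np.exp(-1j * np.outer(X, z)).mean(axis=0)  # empirical CF of p
    q_hat = np.exp(-1j * np.outer(Y, z)).mean(axis=0)  # empirical CF of q
    w = np.abs(z).astype(float) ** (2 * s)             # |z|^{2s}; 0**0 == 1
    return float(np.real(np.sum(w * p_hat * np.conj(q_hat))))

def sobolev_distance_sq(X, Y, Zn, s=0):
    """||p - q||_{H^s}^2 via the sum of three inner products in Eq. (3)."""
    return (sobolev_inner_product(X, X, Zn, s)
            - 2.0 * sobolev_inner_product(X, Y, Zn, s)
            + sobolev_inner_product(Y, Y, Zn, s))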
Under severe computational constraints, for very large data sets, or if D is large relative to s', we can reduce Z_n to trade off statistical for computational efficiency. For example, if we want an estimator with runtime O(n^{1+θ}) and space requirement O(n^θ) for some θ ∈ (0, 2D/(4s'+D)], setting Z_n ≍ n^{θ/D} still gives a consistent estimator, with mean squared error of the order O(n^{max{4θ(s−s')/D, −1}}).

Footnote 5: This result is closely related to Proposition 4 of [4]. However, in their situation, s = 0 and the set of test frequencies is fixed as n → ∞, whereas our set is increasing.

[Figure 1: Estimated versus true L2² distance against the number of samples (10 to 10^5). Panels: (a) 1D Gaussians with different means; (b) 1D Gaussians with different variance; (c) uniform distributions with different range; (d) one uniform and one triangular distribution.]

[Figure 2: Estimated versus true values against the number of samples. Panels: (a) 3D Gaussians with different means; (b) 3D Gaussians with different variance; (c) estimation of the H^0 norm of N(0, 1); (d) estimation of the H^1 norm of N(0, 1).]
Kernel- or nearest-neighbor-based methods, including nearly all of the methods described in Section
3, tend to require storing previously observed data, resulting in O(n) space requirements. In
contrast, orthogonal basis estimation requires storing only O(ZnD ) estimated Fourier coefficients.
The estimated coefficients can be incrementally updated with each new data point, which may make
the estimator or close approximations feasible in streaming settings.
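A sketch of the streaming variant alluded to above: since each coefficient is an average, it can be maintained as a running mean, so one new data point costs O(Z_n^D) time and no raw data is stored. The class and its interface are our own illustration.

import numpy as np

class StreamingFourierCoefficients:
    """Running empirical characteristic function of one distribution at the
    integer frequencies |z| <= Zn (D = 1), updated in O(Zn) per data point."""

    def __init__(self, Zn):
        self.z = np.arange(-Zn, Zn + 1)
        self.coeffs = np.zeros(2 * Zn + 1, dtype=complex)
        self.count = 0

    def update(self, x):
        self.count += 1
        # incremental mean: c_n = c_{n-1} + (e^{-i z x} - c_{n-1}) / n
        self.coeffs += (np.exp(-1j * self.z * x) - self.coeffs) / self.count

def inner_product_from_streams(cf_p, cf_q, s=0):
    """Combine two running coefficient vectors into a <p, q>_{H^s} estimate."""
    w = np.abs(cf_p.z).astype(float) ** (2 * s)
    return float(np.real(np.sum(w * cf_p.coeffs * np.conj(cf_q.coeffs))))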
8 Experimental results

In this section, we use synthetic data to demonstrate the effectiveness of our methods. All experiments⁶ use 10, 10^2, . . . , 10^5 samples for estimation.
We first test our estimators on 1D L2 distances. Figure 1a shows the estimated distance between N(0, 1) and N(1, 1); Figure 1b shows the estimated distance between N(0, 1) and N(0, 4); Figure 1c shows the estimated distance between Unif[0, 1] and Unif[0.5, 1.5]; Figure 1d shows the estimated distance between Unif[0, 1] and a triangular distribution whose density is highest at x = 0.5. Error bars indicate asymptotic 95% confidence intervals based on Theorem 4. These experiments suggest that 10^5 samples are sufficient to estimate L2 distances with high confidence. Note that we need fewer samples to estimate Sobolev quantities of Gaussians than, say, of uniform distributions, consistent with our theory, since (infinitely differentiable) Gaussians are smoother than (discontinuous) uniform distributions.
Next, we test our estimators on L2 distances of multivariate distributions. Figure 2a shows estimated
distance between N ([0, 0, 0] , I) and N ([1, 1, 1] , I); Figure 2b shows estimated distance between
N ([0, 0, 0] , I) and N ([0, 0, 0] , 4I). These experiments show that our estimators can also handle
multivariate distributions. Lastly, we test our estimators for H s norms. Figure 2c shows estimated
H 0 norm of N (0, 1) and Figure 2d shows H 1 norm of N (0, 1). Additional experiments with other
distributions and larger values of s are given in the supplement.
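As a usage illustration, the scaling behavior of Figure 1a can be probed with the plug-in sketch given earlier (the function names are ours, and the smoothing schedule is illustrative). Strictly, the Fourier-series expressions treat densities as periodic, so this snippet implicitly works with 2π-periodized Gaussians; wraparound effects are small here but not zero, and the paper's experiments handle the support rescaling properly.

import numpy as np

rng = np.random.default_rng(0)
for n in (10, 10**2, 10**3, 10**4, 10**5):
    X = rng.normal(0.0, 1.0, size=n)   # samples from N(0, 1)
    Y = rng.normal(1.0, 1.0, size=n)   # samples from N(1, 1)
    Zn = max(2, int(round(n ** 0.2)))  # illustrative smoothing schedule
    print(n, sobolev_distance_sq(X, Y, Zn, s=0))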
9 Connections to two-sample testing
We now discuss the use of our estimator in two-sample testing. From the large literature on nonparametric two-sample testing, we discuss only some recent approaches closely related to ours.
Let M̂ denote our estimate of the Sobolev distance, consisting of plugging Ŝ into equation (3). Since ‖· − ·‖_{H^s} is a metric on the space of probability density functions in H^s, computing M̂ leads naturally to a two-sample test on this space. Theorem 5 suggests an asymptotic test, which is computationally preferable to a permutation test. In particular, for a desired Type I error rate α ∈ (0, 1), our test rejects the null hypothesis p = q if and only if Q_{χ²((2Z_n)^D)}(T) < α.
Footnote 6: MATLAB code for these experiments is available at https://github.com/sss1/SobolevEstimation.
When s = 0, this approach is closely related to several two-sample tests in the literature based on comparing empirical characteristic functions (CFs). Originally, these tests [11, 5] computed the same statistic T with a fixed number of random R^D-valued frequencies instead of deterministic Z^D-valued frequencies. This test runs in linear time, but is not generally consistent, since the two CFs need not differ almost everywhere. Recently, [4] suggested using smoothed CFs, i.e., the convolution of the CF with a universal smoothing kernel k. This is computationally easy (due to the convolution theorem) and, when p ≠ q, (p̃ ∗ k)(z) ≠ (q̃ ∗ k)(z) for almost all z ∈ R^D, reducing the need for carefully choosing test frequencies. Furthermore, this test is almost-surely consistent under very general alternatives. However, it is not clear what sort of assumptions would allow finite-sample analysis of the power of their test. Indeed, the convergence as n → ∞ can be arbitrarily slow, depending on the random test frequencies used. Our analysis instead uses the assumption p, q ∈ H^{s'} to ensure that small, Z^D-valued frequencies contain most of the power of p̃.⁷
These fixed-frequency approaches can be thought of as the extreme point θ = 0 of the computational/statistical trade-off described in Section 7: they are computable in linear time and (with smoothing) are strongly consistent, but do not satisfy finite-sample bounds under general conditions. At the other extreme (θ = 1) are the MMD-based tests of [8, 9], which use the entire spectrum p̃. These tests are statistically powerful and have strong guarantees for densities in an RKHS, but have O(n²) computational complexity.⁸ The computational/statistical trade-off discussed in Section 7 can be thought of as an interpolation (controlled by θ) between these approaches, with runtime in the case θ = 1 approaching quadratic for large D and small s'.
10 Conclusions and future work
In this paper, we proposed nonparametric estimators for Sobolev inner products, norms and distances
of probability densities, for which we derived finite-sample bounds and asymptotic distributions.
A natural follow-up question to our work is whether estimating smoothness of a density can guide the
choice of smoothing parameters in nonparametric estimation. When analyzing many nonparametric
estimators, Sobolev norms appear as the key unknown term in error bounds. While theoretically
optimal smoothing parameter values are often suggested based on optimizing these error bounds,
our work may suggest a practical way of mimicking this procedure by plugging estimated Sobolev
norms into these bounds. For some problems, such as estimating functionals of a density, this may
be especially useful, since no error metric is typically available for cross-validation. Even when
cross-validation is an option, as in density estimation or regression, estimating smoothness may be
faster, or may suggest an appropriate range of parameter values.
Acknowledgments
This material is based upon work supported by a National Science Foundation Graduate Research
Fellowship to the first author under Grant No. DGE-1252522.
References
[1] N. H. Anderson, P. Hall, and D. M. Titterington. Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates. Journal of Multivariate Analysis, 50(1):41–54, 1994.
[2] P. J. Bickel and Y. Ritov. Estimating integrated squared density derivatives: sharp best order of convergence estimates. Sankhyā: The Indian Journal of Statistics, Series A, pages 381–393, 1988.
[3] L. Birgé and P. Massart. Estimation of integral functionals of a density. The Annals of Statistics, pages 11–29, 1995.
[4] K. P. Chwialkowski, A. Ramdas, D. Sejdinovic, and A. Gretton. Fast two-sample testing with analytic representations of probability measures. In NIPS, pages 1972–1980, 2015.
[5] T. Epps and K. J. Singleton. An omnibus test for the two-sample problem using the empirical characteristic function. Journal of Statistical Computation and Simulation, 26(3-4):177–203, 1986.
[6] E. Giné and R. Nickl. A simple adaptive estimator of the integrated square of a density. Bernoulli, pages 47–61, 2008.
Footnote 7: Note that smooth CFs can be used in our test by replacing p̃(z) with (1/n) Σ_{j=1}^n e^{−izX_j} k(X_j), where k is the inverse Fourier transform of a characteristic kernel. However, smoothing seems less desirable under Sobolev assumptions, as it spreads the power of the CF away from the small Z^D-valued frequencies where our test focuses.
Footnote 8: Fast MMD approximations have been proposed, including the Block MMD [37], FastMMD [38], and √n-sub-sampled MMD, but these lack the statistical guarantees of MMD.
[7] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. N. Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. J. Nonparametric Stat., 17:277–297, 2005.
[8] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems, pages 513–520, 2006.
[9] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.
[10] P. Hall and J. S. Marron. Estimation of integrated squared density derivatives. Statistics & Probability Letters, 6(2):109–115, 1987.
[11] C. Heathcote. A test of goodness of fit for symmetric random variables. Australian Journal of Statistics, 14(2):172–181, 1972.
[12] A. O. Hero, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85–95, 2002.
[13] I. Ibragimov and R. Khasminskii. On the nonparametric estimation of functionals. In Symposium in Asymptotic Statistics, pages 41–52, 1978.
[14] K. Kandasamy, A. Krishnamurthy, B. Poczos, L. Wasserman, et al. Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, pages 397–405, 2015.
[15] H.-O. Kreiss and J. Oliger. Stability of the Fourier method. SIAM Journal on Numerical Analysis, 16(3):421–433, 1979.
[16] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. A. Wasserman. Nonparametric estimation of Rényi divergence and friends. In ICML, pages 919–927, 2014.
[17] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. A. Wasserman. On estimating L2² divergence. In AISTATS, 2015.
[18] B. Laurent. Efficient estimation of integral functionals of a density. Université de Paris-Sud, Département de mathématiques, 1992.
[19] B. Laurent et al. Efficient estimation of integral functionals of a density. The Annals of Statistics, 24(2):659–681, 1996.
[20] N. Leonenko, L. Pronzato, V. Savani, et al. A class of Rényi information estimators for multidimensional densities. The Annals of Statistics, 36(5):2153–2182, 2008.
[21] G. Leoni. A first course in Sobolev spaces, volume 105. American Math. Society, Providence, RI, 2009.
[22] K. Moon and A. Hero. Multivariate f-divergence estimation with confidence. In Advances in Neural Information Processing Systems, pages 2420–2428, 2014.
[23] K. R. Moon and A. O. Hero. Ensemble estimation of multivariate f-divergence. In Information Theory (ISIT), 2014 IEEE International Symposium on, pages 356–360. IEEE, 2014.
[24] K. R. Moon, K. Sricharan, K. Greenewald, and A. O. Hero III. Improving convergence of divergence functional ensemble estimators. arXiv preprint arXiv:1601.06884, 2016.
[25] L. Pardo. Statistical inference based on divergence measures. CRC Press, 2005.
[26] B. Póczos and J. G. Schneider. On the estimation of alpha-divergences. In International Conference on Artificial Intelligence and Statistics, pages 609–617, 2011.
[27] B. Póczos, L. Xiong, and J. Schneider. Nonparametric divergence estimation with applications to machine learning on distributions. arXiv preprint arXiv:1202.3758, 2012.
[28] B. Póczos, L. Xiong, D. J. Sutherland, and J. Schneider. Nonparametric kernel estimators for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2989–2996. IEEE, 2012.
[29] J. C. Principe. Information theoretic learning: Rényi's entropy and kernel perspectives. Springer Science & Business Media, 2010.
[30] N. Quadrianto, J. Petterson, and A. J. Smola. Distribution matching for transduction. In Advances in Neural Information Processing Systems, pages 1500–1508, 2009.
[31] P. Ram, D. Lee, W. March, and A. G. Gray. Linear-time algorithms for pairwise statistical problems. In Advances in Neural Information Processing Systems, pages 1527–1535, 2009.
[32] T. Schweder. Window estimation of the asymptotic variance of rank estimators of location. Scandinavian Journal of Statistics, pages 113–126, 1975.
[33] S. Singh and B. Póczos. Generalized exponential concentration inequality for Rényi divergence estimation. In Proceedings of The 31st International Conference on Machine Learning, pages 333–341, 2014.
[34] S. Singh and B. Póczos. Exponential concentration of a density functional estimator. In Advances in Neural Information Processing Systems, pages 3032–3040, 2014.
[35] A. Tsybakov. Introduction to Nonparametric Estimation. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519, 9780387790510.
[36] E. Wolsztynski, E. Thierry, and L. Pronzato. Minimum-entropy estimation in semi-parametric models. Signal Processing, 85(5):937–949, 2005.
[37] W. Zaremba, A. Gretton, and M. Blaschko. B-test: A non-parametric, low variance kernel two-sample test. In Advances in Neural Information Processing Systems, pages 755–763, 2013.
[38] J. Zhao and D. Meng. FastMMD: Ensemble of circular discrepancy for efficient two-sample test. Neural Computation, 27(6):1345–1372, 2015.
[39] A. Zygmund. Trigonometric series, volume 1. Cambridge University Press, 2002.
5,935 | 637 | Weight Space Probability Densities
in Stochastic Learning:
II. Transients and Basin Hopping Times
Genevieve B. Orr and Todd K. Leen
Department of Computer Science and Engineering
Oregon Graduate Institute of Science & Technology
19600 N.W. von Neumann Drive
Beaverton, OR 97006-1999
Abstract
In stochastic learning, weights are random variables whose time
evolution is governed by a Markov process. At each time-step,
n, the weights can be described by a probability density function
P(w, n). We summarize the theory of the time evolution of P, and
give graphical examples of the time evolution that contrast the
behavior of stochastic learning with true gradient descent (batch
learning). Finally, we use the formalism to obtain predictions of the
time required for noise-induced hopping between basins of different
optima. We compare the theoretical predictions with simulations
of large ensembles of networks for simple problems in supervised
and unsupervised learning.
1 Weight-Space Probability Densities
Despite the recent application of convergence theorems from stochastic approximation theory to neural network learning (Oja 1982, White 1989) there remain outstanding questions about the search dynamics in stochastic learning. For example,
the convergence theorems do not tell us to which of several optima the algorithm
is likely to converge?¹ Also, while it is widely recognized that the intrinsic noise
in the weight update can move the system out of sub-optimal local minima (for a
graphical example, see Darken and Moody 1991), there have been no theoretical
predictions of the time required to escape from local optima, or of its dependence
on learning rates.
In order to more fully understand the dynamics of stochastic search, we study the
weight-space probability density and its time evolution. In this paper we summarize
a theoretical framework that describes this time evolution. We graphically portray
the motion of the density for examples that contrast stochastic and batch learning.
Finally we use the theory to predict the statistical distribution of times required for
escape from local optima. We compare the theoretical results with simulations for
simple examples in supervised and unsupervised learning.
2 Stochastic Learning and Noisy Maps

2.1 Motion of the Probability Density
We consider stochastic learning algorithms of the form

w(n + 1) = w(n) + μ H[w(n), x(n)]    (1)

where w(n) ∈ R^m is the weight, x(n) is the data exemplar input to the algorithm at time-step n, μ is the learning rate, and H[·, ·] ∈ R^m is the weight update function. The exemplars x(n) can be either inputs or, in the case of supervised learning, input/target pairs. We assume that the x(n) are i.i.d. with density p(x). Angled brackets ⟨...⟩_x denote averaging over this density. In what follows, the learning rate will be held constant.
The learning algorithm (1) is a noisy map on w. The weights are thus random variables described by the probability density function P(w, n). The time evolution of this density is given by the Kolmogorov equation

P(w, n + 1) = ∫ dw' P(w', n) W(w' → w)    (2)

where the single time-step transition probability is given by (Leen and Orr 1992, Leen and Moody 1993)

W(w' → w) = ⟨δ(w − w' − μH[w', x])⟩_x    (3)

and δ(·) is the Dirac delta function.
The Kolmogorov equation can be recast as a differential-difference equation by expanding the transition probability (3) as a power series in μ. This gives a Kramers-Moyal expansion (Leen and Orr 1992, Leen and Moody 1993)
Footnote 1: However, Kushner (1987) has proved convergence to global optima for stochastic approximation algorithms with added Gaussian noise subject to logarithmic annealing schedules.
P(w, n + 1) − P(w, n) = Σ_{i=1}^∞ [(−μ)^i / i!] Σ_{j_1,...,j_i=1}^m (∂^i / ∂w_{j_1} ··· ∂w_{j_i}) [ ⟨H_{j_1} ··· H_{j_i}⟩_x P(w, n) ]    (4)

where w_{j_ν} and H_{j_ν} are the j_ν-th components of the weight and the weight update, respectively. Truncating (4) to second order in μ leaves a Fokker-Planck equation² that is valid for small |μH|. The drift coefficient ⟨H⟩_x is simply the average update. It is important to note that the diffusion coefficients ⟨H_{j_1} H_{j_2}⟩_x can be strongly dependent on location in the weight-space. This spatial dependence influences both equilibria and transient phenomena. In Section 3.1 we will use both the Kolmogorov equation (2) and the Fokker-Planck equation to track the time evolution of network ensemble densities.
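The density evolution in equations (2)-(4) is easy to visualize by propagating an ensemble, exactly as done for Figure 2c below. The following sketch uses a toy one-dimensional update H of our own choosing (noisy descent on a double-well cost), not the XOR network of Section 3.1; the histogram of the ensemble approximates P(w, n).

import numpy as np

rng = np.random.default_rng(0)

def H(w, x):
    # illustrative update: gradient descent on w^4 - w^2, with exemplar
    # noise x entering additively (our toy choice, not the paper's network)
    return -(4.0 * w**3 - 2.0 * w) + x

mu = 0.05                     # constant learning rate, as in the text
n_nets, n_steps = 10_000, 300
w = np.full(n_nets, 1.2)      # ensemble initialized at the same weight

for n in range(n_steps):
    x = rng.normal(0.0, 1.0, size=n_nets)   # i.i.d. exemplars with density p(x)
    w = w + mu * H(w, x)                    # Eq. (1) applied to every network

# the normalized histogram of w approximates P(w, n_steps), cf. Figure 2c
hist, edges = np.histogram(w, bins=100, range=(-2.0, 2.0), density=True)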
2.2 First Passage Times

Our discussion of basin hopping will use the notion of the first passage time (Gardiner, 1990): the time required for a network initialized at w_0 to first pass into an ε-neighborhood D of a global or local optimum w* (see Figure 1). The first passage time is a random variable. Its distribution function P(n; w_0) is the probability that a network initialized at w_0 makes its first passage into D at the n-th iteration of the learning rule.
Figure 1: Sample search path.
To arrive at an expression for P(n; w_0), we first examine the probability of passing from the initial weight w_0 to the weight w after n iterations. This probability can be expressed as

P(w, n | w_0, 0) = ∫ dw' P(w, n | w', 1) W(w_0 → w').    (5)

Substituting the single time-step transition probability (3) into the above expression, integrating over w', and making use of the time-shift invariance of the system³ we find

P(w, n | w_0, 0) = ⟨P(w, n − 1 | w_0 + μH(w_0, x), 0)⟩_x.    (6)
Next, let G(n; w_0) denote the probability that a network initialized at w_0 has not passed into the region D by the n-th iteration. We obtain G(n; w_0) by integrating P(w, n | w_0, 0) over weights w not in D:

G(n; w_0) = ∫_{D^c} dw P(w, n | w_0, 0)    (7)
Footnote 2: See (Ritter and Schulten 1988) and (Radons et al. 1990) for independent derivations.
Footnote 3: With our assumptions of a constant learning rate μ and stationary sample density p(x), the system is time-shift invariant. Mathematically stated, P(w, n | w', m) = P(w, n − 1 | w', m − 1).
where D^c is the complement of D. Substituting equation (6) into (7) and integrating over w we obtain the recursion

G(n; w_0) = ⟨G(n − 1; w_0 + μH[w_0, x])⟩_x.    (8)
Before any learning takes place, none of the networks in the ensemble have entered D. Thus the initial condition for G is

G(0; w_0) = 1,   w_0 ∈ D^c.    (9)

Networks that have entered D are removed from the ensemble (i.e., ∂D is an absorbing boundary). Thus G satisfies the boundary condition

G(n; w_0) = 0,   w_0 ∈ D.    (10)
Finally, the probability that the network has not passed into the region D on or before iteration n − 1, minus the probability that the network has not passed into D on or before iteration n, is simply the probability that the network passed into D exactly at iteration n. This is just the probability of first passage into D at time-step n. Thus

P(n; w_0) = G(n − 1; w_0) − G(n; w_0).    (11)
Finally, the recursion (8) for G can be expanded in a power series in μ to obtain the backward Kramers-Moyal equation

G(n − 1; w) = G(n; w) + Σ_{i=1}^∞ (μ^i / i!) Σ_{j_1,...,j_i=1}^m ⟨H_{j_1} ··· H_{j_i}⟩_x ∂^i G(n; w) / (∂w_{j_1} ··· ∂w_{j_i})    (12)

Truncation to second order in μ results in the backward Fokker-Planck equation. In Section 3.2 we will use both the full recursion (8) and the Fokker-Planck approximation to (12) to predict basin hopping times in stochastic learning.
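For a scalar weight and a discrete exemplar distribution, the full recursion (8) with conditions (9)-(11) can be iterated directly on a grid. The sketch below reuses the toy update H from the earlier sketch together with linear interpolation; the grid, absorbing region, and exemplar distribution are illustrative choices of ours.

import numpy as np

w_grid = np.linspace(-2.0, 2.0, 801)
x_vals, x_probs = np.array([-1.0, 1.0]), np.array([0.5, 0.5])  # density p(x)
mu, eps = 0.05, 0.1
w_star = -1.0 / np.sqrt(2.0)          # optimum of the toy double-well cost

def H(w, x):
    return -(4.0 * w**3 - 2.0 * w) + x

in_D = np.abs(w_grid - w_star) < eps  # eps-neighborhood D of w_star
G = np.where(in_D, 0.0, 1.0)          # initial condition, Eq. (9)

passage = []                          # P(n; w0) for every grid point w0
for n in range(1, 400):
    G_prev = G
    # Eq. (8): G(n; w0) = sum_x p(x) G(n-1; w0 + mu H(w0, x));
    # np.interp clamps points leaving the grid to the boundary values
    G = sum(px * np.interp(w_grid + mu * H(w_grid, x), w_grid, G_prev)
            for x, px in zip(x_vals, x_probs))
    G[in_D] = 0.0                     # absorbing boundary, Eq. (10)
    passage.append(G_prev - G)        # Eq. (11): first-passage probability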
3 Backpropagation and Competitive Nets
We apply the above formalism to study the time evolution of the probability density
for simple backpropagation and competitive learning problems. We give graphical
examples of the time evolution of the weight space density, and calculate times for
passage from local to global optima.
3.1 Densities for the XOR Problem
Feed-forward networks trained to solve the XOR problem provide an example of
supervised learning with well-characterized local optima (Lisboa and Perantonis,
1991). We use a 2-input, 2-hidden, 1-output network (9 weights) trained by stochastic gradient descent on the cross-entropy error function in Lisboa and Perantonis (1991). For computational tractability, we reduce the state space dimension by constraining the search to one- or two-dimensional subspaces of the weight space. To provide global optima at finite weight values, the output targets are set to δ and 1 − δ, with δ << 1.
Figure 2a shows the cost function evaluated along a line in the weight space. This line, parameterized by v, is chosen to pass through a global optimum at v = 0, and a local optimum at v = 1.0. In this one-dimensional slice, another local optimum occurs at v = 1.24. Figure 2b shows the evolution of P(v, n) obtained by numerical integration of the Fokker-Planck equation. Figure 2c shows the evolution of P(v, n) estimated by simulation of 10,000 networks, each receiving a different random sequence of the four input/target patterns. Initially the density is peaked up about the local optimum at v = 1.24. At intermediate times, there is a spike of density at the local optimum at v = 1.0. This spike is narrow since the diffusion coefficient is small there. At late times the density collects at the global optimum. We note that for the learning rate used here, the local optimum at v = 1.24 is asymptotically stable under true gradient descent, and no escape would occur.
Figure 2: a) XOR cost function. b) Predicted density. c) Simulated density.
Figure 3 shows a series of snapshots of the density superimposed on the cost function
for a 2-D slice through the XOR weight space. The first frame shows the weight
evolution under true gradient descent. The weights are initialized at the upper
right-hand corner of the frame, travel down the gradient and settle into a local
optimum. The remaining frames show the evolution of the density calculated by
direct integration of the Kolmogorov equation (2). Here one sees an early spreading
of the initial density and the ultimate concentration at the global optimum.
3.2 Basin Hopping Times

The above examples graphically illustrate the intuitive notion that the noise inherent in stochastic learning can move the system out of local optima.⁴ In this section we calculate the statistical distribution of times required to pass between basins.
Footnote 4: The reader should not infer from these examples that stochastic update necessarily converges to global optima. It is straightforward to construct examples for which stochastic learning converges to local optima with probability one.
[Figure 3 panels: true gradient descent; stochastic descent at time-steps n = 0, 10, 28, 34, 100.]
Figure 3: Weight evolution for 2-D XOR. The density is superimposed on top of the cost function. The first frame shows the density using true gradient descent for all 100 timesteps. The remaining frames show the density for selected timesteps using stochastic descent.
3.2.1 Basin Hopping in Back-propagation

For the search direction used in the example of Figure 2, we calculated the distribution of times required for networks initialized at v = 1.2 to first pass within ε = 0.1 of the global optimum at v = 0.0. For this example we numerically integrated the backward Fokker-Planck equation. We verified the theoretical predictions by obtaining first passage times from an ensemble of 10,000 networks initialized at v = 1.2. See Figure 4. For this example the agreement is good at the small learning rate (μ = 0.025) used, but degrades for larger μ as higher order terms in the expansion (12) become significant.
Figure 4: XOR problem. Simulated (histogram) and theoretical (solid line) distributions of first passage times for the cost function of Figure 2a.
When the Fokker-Planck approximation fails, results obtained from the exact expression (8) are in excellent agreement with experimental results. One such example is shown in Figure 5. Similar to Figure 2a, we have chosen a one-dimensional subspace of the XOR weight space (but in a different direction). Here, the Fokker-Planck solution is quite poor because the steepness of the cost function results in large contributions from higher order terms in (12). As one would expect, the exact solution obtained using (8) agrees well with the simulations.
Figure 5: Second 1-D XOR example. a) Cost function. b) Simulated (histogram) and theoretical (lines: exact expression (8) and Fokker-Planck approximation) distributions of first passage times.
3.2.2 Basin Hopping in Competitive Learning

As a final example, we consider competitive learning with two 2-D weight vectors symmetrically placed about the center of a rectangle. Inputs are uniformly distributed in a rectangle of width 1.1 and height 1. This configuration has both global and local optima.

Figure 6a shows a sample path with weights started near the local optimum (crosses) and switching to hover around the global optimum. The measured and predicted (from numerical integration of (8)) distributions of times required to first pass within a distance ε = 0.1 of the global optimum are shown in Figure 6b.
Figure 6: Competitive learning. a) Data (small dots) and sample weight path (large dots). b) First passage times.
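This experiment is straightforward to reproduce by direct simulation: run an ensemble of two-unit competitive learners on uniform inputs in the 1.1 x 1 rectangle, start them at the symmetric local-optimum configuration, and record when the weight pair first comes within ε = 0.1 of the global configuration. The learning rate and the exact optimum coordinates below are our assumptions (the global centroids split the rectangle along its longer side).

import numpy as np

rng = np.random.default_rng(1)
mu, eps, n_nets, horizon = 0.05, 0.1, 10_000, 2_000
w_glob = np.array([[0.275, 0.5], [0.825, 0.5]])   # assumed global centroids
first_passage = np.full(n_nets, -1)

# all networks start at the local optimum (split along the short side)
W = np.tile(np.array([[0.55, 0.25], [0.55, 0.75]]), (n_nets, 1, 1))

for n in range(1, horizon + 1):
    x = rng.uniform([0.0, 0.0], [1.1, 1.0], size=(n_nets, 2))
    d = np.linalg.norm(W - x[:, None, :], axis=2)   # distance to both units
    k = d.argmin(axis=1)                            # winner-take-all
    idx = np.arange(n_nets)
    W[idx, k] += mu * (x - W[idx, k])               # move winner toward input
    # passage: both units within eps of the global centroids (either pairing)
    d1 = np.maximum(np.linalg.norm(W[:, 0] - w_glob[0], axis=1),
                    np.linalg.norm(W[:, 1] - w_glob[1], axis=1))
    d2 = np.maximum(np.linalg.norm(W[:, 0] - w_glob[1], axis=1),
                    np.linalg.norm(W[:, 1] - w_glob[0], axis=1))
    hit = (np.minimum(d1, d2) < eps) & (first_passage < 0)
    first_passage[hit] = n

# the histogram of first_passage[first_passage > 0] estimates Figure 6b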
4 Discussion
The dynamics of the time evolution of the weight space probability density provides
a direct handle on the performance of learning algorithms. This paper has focused
on transient phenomena in stochastic learning with constant learning rate. The
same theoretical framework can be used to analyze the asymptotic properties of
stochastic search with decreasing learning rates, and to analyze equilibrium densities. For a discussion of the latter, see the companion paper in this volume (Leen
and Moody 1993).
Acknowledgements
This work was supported under grants N00014-90-J-1349 and N00014-91-J-1482 from
the Office of Naval Research.
References
E. Oja (1982), A simplified neuron model as a principal component analyzer. J.
Math. Biology, 15:267-273.
Halbert White (1989), Learning in artificial neural networks: A statistical perspective. Neural Computation, 1:425-464.
H.J. Kushner (1987), Asymptotic global behavior for stochastic approximation and
diffusions with slowly decreasing noise effects: Global minimization via Monte Carlo.
SIAM J. Appl. Math., 47:169-185.
Christian Darken and John Moody (1991), Note on learning rate schedules for
stochastic optimization. In Advances in Neural Information Processing Systems 3,
San Mateo, CA, Morgan Kaufmann.
Todd K. Leen and Genevieve B. Orr. (1992), Weight-space probability densities
and convergence times for stochastic learning. In International Joint Conference
on Neural Networks, pages IV 158-164. IEEE.
Todd K. Leen and John Moody (1993), Probability Densities in Stochastic Learning:
Dynamics and Equilibria. In Giles, C.L., Hanson, S.J., and Cowan, J.D. (eds.),
Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan
Kaufmann Publishers.
H. Ritter and K. Schulten (1988), Convergence properties of Kohonen's topology
conserving maps: Fluctuations, stability and dimension selection, Biol. Cybern.,
60, 59-71.
G. Radons, H.G. Schuster and D. Werner (1990), Fokker-Planck description of learning in backpropagation networks, International Neural Network Conference, Paris,
II 993-996, Kluwer Academic Publishers.
C.W. Gardiner (1990), Handbook of Stochastic Methods, 2nd Ed. Springer-Verlag,
Berlin.
P. Lisboa and S. Perantonis (1991), Complete solution of the local minima in the
XOR problem. Network: Computation in Neural Systems, 2:119.
5,936 | 6,370 | Regret of Queueing Bandits
Subhashini Krishnasamy
University of Texas at Austin
Rajat Sen
University of Texas at Austin
Ramesh Johari
Stanford University
Sanjay Shakkottai
University of Texas at Austin
Abstract
We consider a variant of the multiarmed bandit problem where jobs queue for service, and service rates of different servers may be unknown. We study algorithms that minimize queue-regret: the (expected) difference between the queue-lengths obtained by the algorithm, and those obtained by a "genie"-aided matching algorithm that knows exact service rates. A naive view of this problem would suggest that queue-regret should grow logarithmically: since queue-regret cannot be larger than classical regret, results for the standard MAB problem give algorithms that ensure queue-regret increases no more than logarithmically in time. Our paper shows surprisingly more complex behavior. In particular, the naive intuition is correct as long as the bandit algorithm's queues have relatively long regenerative cycles: in this case queue-regret is similar to cumulative regret, and scales (essentially) logarithmically. However, we show that this "early stage" of the queueing bandit eventually gives way to a "late stage", where the optimal queue-regret scaling is O(1/t). We demonstrate an algorithm that (order-wise) achieves this asymptotic queue-regret, and also exhibits close to optimal switching time from the early stage to the late stage.
Introduction
Stochastic multi-armed bandits (MAB) have a rich history in sequential decision making [1, 2, 3].
In its simplest form, a collection of K arms are present, each having a binary reward (Bernoulli
random variable over {0, 1}) with an unknown success probability¹ (and different across arms). At
each (discrete) time, a single arm is chosen by the bandit algorithm, and a (binary-valued) reward is
accrued. The MAB problem is to determine which arm to choose at each time in order to minimize
the cumulative expected regret, namely, the cumulative loss of reward when compared to a genie that
has knowledge of the arm success probabilities.
In this paper, we consider the variant of this problem motivated by queueing applications. Formally,
suppose that arms are pulled upon arrivals of jobs; each arm is now a server that can serve the arriving
job. In this model, the stochastic reward described above is equivalent to service. In other words,
if the arm (server) that is chosen results in positive reward, the job is successfully completed and
departs the system. However, this basic model fails to capture an essential feature of service in many
settings: in a queueing system, jobs wait until they complete service. Such systems are stateful: when
the chosen arm results in zero reward, the job being served remains in the queue, and over time the
model must track the remaining jobs waiting to be served. The difference between the cumulative
number of arrivals and departures, or the queue length, is the most common measure of the quality of
the service strategy being employed.
Footnote 1: Here, the success probability of an arm is the probability that the reward equals "1".
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Queueing is employed in modeling a vast range of service systems, including supply and demand
in online platforms (e.g., Uber, Lyft, Airbnb, Upwork, etc.); order flow in financial markets (e.g.,
limit order books); packet flow in communication networks; and supply chains. In all of these
systems, queueing is an essential part of the model: e.g., in online platforms, the available supply
(e.g. available drivers in Uber or Lyft, or available rentals in Airbnb) queues until it is ?served? by
arriving demand (ride requests in Uber or Lyft, booking requests in Airbnb). Since MAB models are
a natural way to capture learning in this entire range of systems, incorporating queueing behavior
into the MAB model is an essential challenge.
This problem clearly has the explore-exploit tradeoff inherent in the standard MAB problem: since
the success probabilities across different servers are unknown, there is a tradeoff between learning
(exploring) the different servers and (exploiting) the most promising server from past observations.
We refer to this problem as the queueing bandit. Since the queue length is simply the difference
between the cumulative number arrivals and departures (cumulative actual reward; here reward is
1 if job is served), the natural notion of regret here is to compare the expected queue length under
a bandit algorithm with the corresponding one under a genie policy (with identical arrivals) that
however always chooses the arm with the highest expected reward.
Queueing System: To capture this trade-off, we consider a discrete-time queueing system with a
single queue and K servers. Arrivals to the queue and service offered by the links are according to
product Bernoulli distribution and i.i.d. across time slots. Statistical parameters corresponding to the
service distributions are considered unknown. In any time slot, the queue can be served by at most
one server and the problem is to schedule a server in every time slot. The service is pre-emptive and a
job returns to the queue if not served. There is at least one server that has a service rate higher than
the arrival rate, which ensures that the "genie" policy is stable.
Let Q(t) be the queue length at time t under a given bandit algorithm, and let Q*(t) be the corresponding queue length under the "genie" policy that always schedules the optimal server (i.e., always plays the arm with the highest mean). We define the queue-regret as the difference in expected queue lengths for the two policies. That is, the regret is given by:

Ψ(t) := E[Q(t) − Q*(t)].    (1)

Here Ψ(t) has the interpretation of the traditional MAB regret, with the caveat that rewards are accumulated only if there is a job that can benefit from this reward. We refer to Ψ(t) as the queue-regret; formally, our goal is to develop bandit algorithms that minimize the queue-regret at a finite time t.
To develop some intuition, we compare this to the standard stochastic MAB problem. For the standard problem, well-known algorithms such as UCB, KL-UCB, and Thompson sampling achieve a cumulative regret of O((K − 1) log t) at time t [4, 5, 6], and this result is essentially tight [7]. In the queueing bandit, we can obtain a simple bound on the queue-regret by noting that it cannot be any higher than the traditional regret (where a reward is accrued at each time whether a job is present or not). This leads to an upper bound of O((K − 1) log t) for the queue regret.
However, this upper bound does not tell the whole story for the queueing bandit: we show that there are two "stages" to the queueing bandit. In the early stage, the bandit algorithm is unable to even stabilize the queue, i.e., on average, the queue length increases over time and is continuously backlogged; therefore the queue-regret grows with time, similar to the cumulative regret. Once the algorithm is able to stabilize the queue (the late stage), a dramatic shift occurs in the behavior of the queue regret. A stochastically stable queue goes through regenerative cycles: a random cyclical behavior where queues build up over time, then empty, and the cycle repeats. The associated recurring "zero-queue-length" epochs mean that sample-path queue-regret essentially "resets" at (stochastically) regular intervals; i.e., the sample-path queue-regret becomes non-positive at these time instants. Thus the queue-regret should fall over time, as the algorithm learns.

Our main results provide lower bounds on queue-regret for both the early and late stages, as well as algorithms that essentially match these lower bounds. We first describe the late stage, and then describe the early stage for a heavily loaded system.
1. The late stage. We first consider what happens to the queue regret as t → ∞. As noted above, a reasonable intuition for this regime comes from considering a standard bandit algorithm, but where the sample-path queue-regret "resets" at time points of regeneration.² In this case, the queue-regret is approximately a (discrete) derivative of the cumulative regret. Since the optimal cumulative regret scales like log t, asymptotically the optimal queue-regret should scale like 1/t. Indeed, we show that the queue-regret for α-consistent policies is at least C/t infinitely often, where C is a constant independent of t. Further, we introduce an algorithm called Q-ThS for the queueing bandit (a variant of Thompson sampling with explicit structured exploration), and show an asymptotic regret upper bound of O(poly(log t)/t) for Q-ThS, thus matching the lower bound up to poly-logarithmic factors in t. Q-ThS exploits structured exploration: we exploit the fact that the queue regenerates regularly to explore more systematically and aggressively.

Footnote 2: This is inexact, since the optimal queueing system and the bandit queueing system may not regenerate at the same time point; but the intuition holds.
2. The early stage. The preceding discussion might suggest that an algorithm that explores aggressively would dominate any algorithm that balances exploration and exploitation. However, an overly aggressive exploration policy will preclude the queueing system from ever stabilizing, which is necessary to induce the regenerative cycles that lead the system to the late stage. To even enter the late stage, therefore, we need an algorithm that exploits enough to actually stabilize the queue (i.e., choose good arms sufficiently often so that the mean service rate exceeds the expected arrival rate).

We refer to the early stage of the system, as noted above, as the period before the algorithm has learned to stabilize the queues. For a heavily loaded system, where the arrival rate approaches the service rate of the optimal server, we show a lower bound of Ω(log t / log log t) on the queue-regret in the early stage. Thus, up to a log log t factor, the early stage regret behaves similarly to the cumulative regret (which scales like log t). The heavily loaded regime is a natural asymptotic regime in which to study queueing systems, and has been extensively employed in the literature; see, e.g., [9, 10] for surveys.

Perhaps more importantly, our analysis shows that the time to switch from the early stage to the late stage scales at least as t = Ω(K/ε), where ε is the gap between the arrival rate and the service rate of the optimal server; thus ε → 0 in the heavy-load setting. In particular, we show that the early stage lower bound of Ω(log t / log log t) is valid up to t = O(K/ε); on the other hand, we also show that, in the heavy-load limit, depending on the relative scaling between K and ε, the regret of Q-ThS scales like O(poly(log t)/(ε²t)) for times that are arbitrarily close to Ω(K/ε). In other words, Q-ThS is nearly optimal in the time it takes to "switch" from the early stage to the late stage.
Our results constitute the first insight into the behavior of regret in this queueing setting; as emphasized, it is quite different than that seen for minimization of cumulative regret in the standard
MAB problem. The preceding discussion highlights why minimization of queue-regret presents a
subtle learning problem. On one hand, if the queue has been stabilized, the presence of regenerative
cycles allows us to establish that queue regret must eventually decay to zero at rate 1/t under an
optimal algorithm (the late stage). On the other hand, to actually have regenerative cycles in the first
place, a learning algorithm needs to exploit enough to actually stabilize the queue (the early stage).
Our analysis not only characterizes regret in both regimes, but also essentially exactly characterizes
the transition point between the two regimes. In this way the queueing bandit is a remarkable new
example of the tradeoff between exploration and exploitation.
2 Related work
MAB algorithms. Stochastic MAB models have been widely used in the past as a paradigm for
various sequential decision making problems in industrial manufacturing, communication networks,
clinical trials, online advertising and webpage optimization, and other domains requiring resource
allocation and scheduling; see, e.g., [1, 2, 3]. The MAB problem has been studied in two variants,
based on different notions of optimality. One considers mean accumulated loss of rewards, often
called regret, as compared to a genie policy that always chooses the best arm. Most effort in this
direction is focused on getting the best regret bounds possible at any finite time in addition to designing
computationally feasible algorithms [3]. The other line of research models the bandit problem as a
Markov decision process (MDP), with the goal of optimizing infinite horizon discounted or average
reward. The aim is to characterize the structure of the optimal policy [2]. Since these policies deal
with optimality with respect to infinite horizon costs, unlike the former body of research, they give
steady-state and not finite-time guarantees. Our work uses the regret minimization framework to
study the queueing bandit problem.
Bandits for queues. There is body of literature on the application of bandit models to queueing and
scheduling systems [2, 11, 12, 13, 14, 15, 16, 17]. These queueing studies focus on infinite-horizon
3
costs (i.e., statistically steady-state behavior, where the focus typically is on conditions for optimality
of index policies); further, the models do not typically consider user-dependent server statistics. Our
focus here is different: algorithms and analysis to optimize finite time regret.
3 Problem Setting
We consider a discrete-time queueing system with a single queue and K servers. The servers are indexed by k = 1, . . . , K. Arrivals to the queue and service offered by the links are according to product Bernoulli distribution and i.i.d. across time slots. The mean arrival rate is given by λ and the mean service rates by the vector µ = [µ_k]_{k∈[K]}, with λ < max_{k∈[K]} µ_k. In any time slot, the queue can be served by at most one server and the problem is to schedule a server in every time slot. The scheduling decision at any time t is based on past observations corresponding to the services obtained from the scheduled servers until time t − 1. Statistical parameters corresponding to the service distributions are considered unknown. The queueing system evolution can be described as follows. Let κ(t) denote the server that is scheduled at time t. Also, let R_k(t) ∈ {0, 1} be the service offered by server k and S(t) denote the service offered by server κ(t) at time t, i.e., S(t) = R_{κ(t)}(t). If A(t) is the number of arrivals at time t, then the queue-length at time t is given by: Q(t) = (Q(t−1) + A(t) − S(t))⁺.
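To make the dynamics concrete, here is a minimal simulation of this recursion (Python; the function name and interface are our own illustration, not from the paper):

import numpy as np

def simulate_queue(schedule, lam, mu, T, rng):
    """Simulate Q(t) = (Q(t-1) + A(t) - S(t))^+ under a scheduling policy.

    schedule(t) returns the chosen server kappa(t) in {0, ..., K-1};
    arrivals are Bernoulli(lam) and server k offers Bernoulli(mu[k]) service,
    all i.i.d. across time slots, matching the model above."""
    Q = 0
    trajectory = []
    for t in range(1, T + 1):
        k = schedule(t)
        A = int(rng.random() < lam)      # A(t)
        S = int(rng.random() < mu[k])    # S(t) = R_{kappa(t)}(t)
        Q = max(Q + A - S, 0)            # (Q(t-1) + A(t) - S(t))^+
        trajectory.append(Q)
    return np.array(trajectory)

For example, simulate_queue(lambda t: 0, 0.5, [0.6, 0.4], 1000, np.random.default_rng(0)) runs the policy that always schedules server 0.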
Our goal in this paper is to focus attention on how queueing behavior impacts regret minimization in bandit algorithms. We evaluate the performance of scheduling policies against the policy that schedules the (unique) optimal server in every time slot, i.e., the server k* := arg max_{k∈[K]} µ_k with the maximum mean rate µ* := max_{k∈[K]} µ_k. Let Q(t) be the queue-length vector at time t under our specified algorithm, and let Q*(t) be the corresponding vector under the optimal policy. We define regret as the difference in mean queue-lengths for the two policies. That is, the regret is given by: Ψ(t) := E[Q(t) − Q*(t)]. We use the terms queue-regret or simply regret to refer to Ψ(t). Throughout, when we evaluate queue-regret, we do so under the assumption that the queueing system starts in the steady state distribution of the system induced by the optimal policy, as follows.
Assumption 1 (Initial State). Both Q(0) and Q*(0) have the same initial state distribution, and this is chosen to be the stationary distribution of Q*(t); this distribution is denoted π_{(λ,µ*)}.
4 The Late Stage
We analyze the performance of a scheduling algorithm with respect to queue-regret as a function of time and system parameters like: (a) the load on the system ε := (µ* − λ), and (b) the minimum difference between the rates of the best and the next best servers Δ := µ* − max_{k≠k*} µ_k.
As a preview of the theoretical results, Figure 1 shows the evolution of queue-regret with time in a system with 5 servers under a scheduling policy inspired by Thompson Sampling. Exact details of the scheduling algorithm can be found in Section 4.2. It is observed that the regret goes through a phase transition. In the initial stage, when the algorithm has not estimated the service rates well enough to stabilize the queue, the regret grows poly-logarithmically, similar to the classical MAB setting. After a critical point when the algorithm has learned the system parameters well enough to stabilize the queue, the queue-length goes through regenerative cycles as the queue becomes empty. In other words, instead of the queue length being continuously backlogged, the queuing system has a stochastic cyclical behavior where the queue builds up, becomes empty, and this cycle recurs. Thus at the beginning of every regenerative cycle, there is no accumulation of past errors and the sample-path queue-regret is at most zero. As the algorithm estimates the parameters better with time, the length of the regenerative cycles decreases and the queue-regret decays to zero.
[Figure 1, plot omitted: queue-regret Ψ(t) under Q-ThS in a system with K = 5, ε = 0.1 and Δ = 0.17; the early stage is annotated with O(log t / log log t) growth and the late stage with O(log³ t / t) decay.]
Notation: For the results in Section 4, the notation f(t) = O(g(K, ε, t)) for all t ∈ h(K, ε) (here, h(K, ε) is an interval that depends on K, ε) implies that there exist constants C and t₀ independent of K and ε such that f(t) ≤ C g(K, ε, t) for all t ∈ (t₀, ∞) ∩ h(K, ε).
4.1 An Asymptotic Lower Bound
We establish an asymptotic lower bound on regret for the class of α-consistent policies; this class for the queueing bandit is a generalization of the α-consistent class used in the literature for the traditional stochastic MAB problem [7, 18, 19]. The precise definition is given below (1{·} below is the indicator function).
Definition 1. A scheduling policy is said to be α-consistent (for some α ∈ (0, 1)) if, given any problem instance specified by (λ, µ), E[Σ_{s=1}^t 1{κ(s) = k}] = O(t^α) for all k ≠ k*.
Theorem 1 below gives an asymptotic lower bound on the average queue-regret and per-queue regret for an arbitrary α-consistent policy.
Theorem 1. For any problem instance (λ, µ) and any α-consistent policy, the regret Ψ(t) satisfies
Ψ(t) ≥ D(µ)(1 − α)(K − 1) / (4t)
for infinitely many t, where
D(µ) = Δ / KL(µ_min, (µ* + 1)/2).   (2)
Outline for theorem 1. The proof of the lower bound consists of three main steps. First, in lemma 21,
we show that the regret at any time-slot is lower bounded by the probability of a sub-optimal schedule
in that time-slot (up to a constant factor that is dependent on the problem instance). The key idea in
this lemma is to show the equivalence of any two systems with the same marginal service distributions
under bandit feedback. This is achieved through a carefully constructed coupling argument that maps
the original system with independent service across links to another system with service process that
is dependent across links but with the same marginal distribution.
As a second step, the lower bound on the regret in terms of the probability of a sub-optimal schedule
enables us to obtain a lower bound on the cumulative queue-regret in terms of the number of
sub-optimal schedules. We then use a lower bound on the number of sub-optimal schedules for
?-consistent policies (lemma 19 and corollary 20) to obtain a lower bound on the cumulative regret.
In the final step, we use the lower bound on the cumulative queue-regret to obtain an infinitely often
lower bound on the queue-regret.
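For concreteness, the instance-dependent constant D(µ) can be computed directly from the service rates. The sketch below uses the Bernoulli KL divergence and our reading of the garbled equation (2), so the exact form of D(µ) should be treated as an assumption:

import numpy as np

def bernoulli_kl(p, q):
    # KL(Ber(p) || Ber(q)), clipped to avoid log(0)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def lower_bound_constant(mu):
    """D(mu) = Delta / KL(mu_min, (mu* + 1)/2), our reading of equation (2).
    Assumes a unique optimal server, as in the problem setting."""
    mu = np.asarray(mu, dtype=float)
    mu_star = mu.max()
    delta = mu_star - mu[mu < mu_star].max()  # gap Delta to the next best server
    return delta / bernoulli_kl(mu.min(), (mu_star + 1) / 2)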
4.2 Achieving the Asymptotic Bound
We next focus on algorithms that can (up to a poly-log factor) achieve a scaling of O(1/t). A key challenge in showing this is that we will need high-probability bounds on the number of times the correct arm is scheduled, and we need these bounds to hold over the late-stage regenerative cycles of the queue. Recall that these regenerative cycles are random time intervals with Θ(1) expected length for the optimal policy, and whose lengths are correlated with the bandit algorithm decisions (the queue length evolution is dependent on the past history of bandit arm schedules). To address this, we propose a slightly modified version of the Thompson Sampling algorithm. The algorithm, which we call Q-ThS, has an explicit structured exploration component similar to ε-greedy algorithms. This structured exploration provides sufficiently good estimates for all arms (including sub-optimal ones) in the late stage.
We describe the algorithm we employ in detail. Let T_k(t) be the number of times server k is assigned in the first t time-slots and µ̂(t) be the empirical mean of the service rates at time-slot t from past observations (until t − 1). At time-slot t, Q-ThS decides to explore with probability min{1, 3K log² t/t}, otherwise it exploits. When exploring, it chooses a server uniformly at random. The chosen exploration rate ensures that we are able to obtain concentration results for the number of times any link is sampled.³ When exploiting, for each k ∈ [K], we pick a sample θ̂_k(t) of distribution Beta(µ̂_k(t)T_k(t−1) + 1, (1 − µ̂_k(t))T_k(t−1) + 1), and schedule the arm with the largest sample (the standard Thompson sampling for Bernoulli arms [20]). Details of the algorithm are given in Algorithm 1 in the Appendix.
³The exploration rate could scale like log t/t if we knew Δ in advance; however, without this knowledge, additional exploration is needed.
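A minimal sketch of one Q-ThS scheduling decision, combining the structured-exploration step and the Beta-posterior exploitation step described above (Python; names and interface are ours):

import numpy as np

def qths_choose_server(t, counts, mu_hat, rng):
    """One Q-ThS decision at time-slot t (t >= 1).

    counts[k] = T_k(t-1), the times server k was scheduled so far;
    mu_hat[k] = empirical mean service rate of server k."""
    K = len(mu_hat)
    if rng.random() < min(1.0, 3 * K * np.log(t) ** 2 / t):
        return int(rng.integers(K))          # structured exploration: uniform
    # exploit: Thompson sampling with Beta posteriors for Bernoulli service
    a = mu_hat * counts + 1.0
    b = (1.0 - mu_hat) * counts + 1.0
    return int(np.argmax(rng.beta(a, b)))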
We now show that, for a given problem instance (λ, µ) (and therefore fixed ε), the regret under Q-ThS scales as O(poly(log t)/t). We state the most general form of the asymptotic upper bound in Theorem 2. A slightly weaker version of the result is given in Corollary 3. This corollary is useful to understand the dependence of the upper bound on the load ε and the number of servers K.
Notation: For the following results, the notation f(t) = O(g(K, ε, t)) for all t ∈ h(K, ε) (here, h(K, ε) is an interval that depends on K, ε) implies that there exist constants C and t₀ independent of K and ε such that f(t) ≤ C g(K, ε, t) for all t ∈ (t₀, ∞) ∩ h(K, ε).
Theorem 2. Consider any problem instance (λ, µ). Let w(t) = exp(2 log^{2/3} t), v′(t) = (6K/Δ²) w(t) and v(t) = (24/ε²) w(t) log t. Then, under Q-ThS, the regret Ψ(t) satisfies
Ψ(t) = O( K v(t) log² t / t )
for all t such that w(t)/log t ≥ 2/ε, t ≥ 60K² v′(t) log t, t ≥ exp(6/Δ²), and v(t) + v′(t) ≤ t/2.
Corollary 3. Let w(t) be as defined in Theorem 2. Then,
Ψ(t) = O( K² log³ t / (ε² t) )
for all t such that w(t)/log t ≥ 2/ε, w(t) ≥ max{24K/ε², 15K} log t, t ≥ exp(6/Δ²) and t/log t ≥ 198/ε².
Outline for Theorem 2. As mentioned earlier, the central idea in the proof is that the sample-path
queue-regret is at most zero at the beginning of regenerative cycles, i.e., instants at which the queue
becomes empty. The proof consists of two main parts ? one which gives a high probability result on
the number of sub-optimal schedules in the exploit phase in the late stage, and the other which shows
that at any time, the beginning of the current regenerative cycle is not very far in time.
The former part is proved in lemma 9, where we make use of the structured exploration component
of Q-ThS to show that all the links, including the sub-optimal ones, are sampled a sufficiently large
number of times to give a good estimate of the link rates. This in turn ensures that the algorithm
schedules the correct link in the exploit phase in the late stages with high probability.
For the latter part, we prove a high probability bound on the last time instant when the queue
was zero (which is the beginning of the current regenerative cycle) in lemma 15. Here, we make
use of a recursive argument to obtain a tight bound. More specifically, we first use a coarse high
probability upper bound on the queue-length (lemma 11) to get a first cut bound on the beginning of
the regenerative cycle (lemma 12). This bound on the regenerative cycle-length is then recursively
used to obtain tighter bounds on the queue-length, and in turn, the start of the current regenerative
cycle (lemmas 14 and 15 respectively).
The proof of the theorem proceeds by combining the two parts above to show that the main contribution to the queue-regret comes from the structured exploration component in the current regenerative
cycle, which gives the stated result.
5 The Early Stage in the Heavily Loaded Regime
In order to study the performance of α-consistent policies in the early stage, we consider the heavily loaded system, where the arrival rate is close to the optimal service rate µ*, i.e., ε = µ* − λ → 0. This is a well studied asymptotic in which to study queueing systems, as this regime leads to
fundamental insight into the structure of queueing systems. See, e.g., [9, 10] for extensive surveys.
Analyzing queue-regret in the early stage in the heavily loaded regime has the effect that the
optimal server is the only one that stabilizes the queue. As a result, in the heavily loaded regime,
effective learning and scheduling of the optimal server play a crucial role in determining the transition
point from the early stage to the late stage. For this reason the heavily loaded regime reveals the
behavior of regret in the early stage.
Notation: For all the results in this section, the notation f(t) = O(g(K, ε, t)) for all t ∈ h(K, ε) (h(K, ε) is an interval that depends on K, ε) implies that there exist numbers C and ε₀ that depend on Δ such that for all ε ≤ ε₀, f(t) ≤ C g(K, ε, t) for all t ∈ h(K, ε).
Theorem 4 gives a lower bound on the regret in the heavily loaded regime, roughly in the time interval [K^{1/(1−α)}, O(K/ε)], for any α-consistent policy.
Theorem 4. Given any problem instance (λ, µ), and for any α-consistent policy and γ > 1/(1−α), the regret Ψ(t) satisfies
Ψ(t) ≥ (D(µ)(K − 1)/2) · (log t / log log t)
for t ∈ [max{C₁K^γ, τ}, (K − 1)D(µ)/(2ε)], where D(µ) is given by equation (2), and τ and C₁ are constants that depend on γ, Δ, and the policy.
Outline for Theorem 4. The crucial idea in the proof is to show a lower bound on the queue-regret in
terms of the number of sub-optimal schedules (Lemma 22). As in Theorem 1, we then use a lower
bound on the number of sub-optimal schedules for ?-consistent policies (given by Corollary 20) to
obtain a lower bound on the queue-regret.
Theorem 4 shows that, for any α-consistent policy, it takes at least Ω(K/ε) time for the queue-regret to transition from the early stage to the late stage. In this region, the scaling O(log t/log log t) reflects the fact that queue-regret is dominated by the cumulative regret growing like O(log t). A reasonable question then arises: after time Ω(K/ε), should we expect the regret to transition into the late stage regime analyzed in the preceding section?
We answer this question by studying when Q-ThS achieves its late-stage regret scaling of O(poly(log t)/(ε²t)); as we will see, in an appropriate sense, Q-ThS is close to optimal in its transition from early stage to late stage, when compared to the bound discovered in Theorem 4.
Formally, we have Corollary 5, which is an analog to Corollary 3 under the heavily loaded regime.
Corollary 5. For any problem instance (λ, µ), any β ∈ (0, 1) and γ ∈ (0, min(β, 1 − β)), the regret under Q-ThS satisfies
Ψ(t) = O( K log³ t / (ε² t) )
for all t ≥ C₂ max{ (1/ε)^{1/(1−β)}, K^{1/γ}, (K²)^{1/β}, (1/ε²)^{1/(1−β)} }, where C₂ is a constant independent of ε (but depends on β and γ).
By combining the result in Corollary 5 with Theorem 4, we can infer that in the heavily loaded regime, the time taken by Q-ThS to achieve O(poly(log t)/(ε²t)) scaling is, in some sense, order-wise close to the optimal in the α-consistent class. Specifically, for any δ ∈ (0, 1), there exists a scaling of K with ε such that the queue-regret under Q-ThS scales as O(poly(log t)/(ε²t)) for all t > (K/ε)^{1+δ}, while the regret under any α-consistent policy scales as Ω(K log t/log log t) for t < K/ε.
We conclude by noting that while the transition point from the early stage to the late stage for Q-ThS is near optimal in the heavily loaded regime, it does not yield optimal regret performance in the early stage in general. In particular, recall that at any time t, the structured exploration component in Q-ThS is invoked with probability 3K log² t/t. As a result, we see that, in the early stage, queue-regret under Q-ThS could be a log² t-factor worse than the Ω(log t/log log t) lower bound shown in Theorem 4 for the α-consistent class. This intuition can be formalized: it is straightforward to show an upper bound of 2K log³ t for any t > max{C₃, U}, where C₃ is a constant that depends on Δ but is independent of K and ε; we omit the details.
[Plots omitted.] (a) Queue-regret under Q-ThS for a system with 5 servers with ε ∈ {0.05, 0.1, 0.15}. (b) Queue-regret under Q-ThS for a system with 7 servers with ε ∈ {0.05, 0.1, 0.15}.
Figure 2: Variation of queue-regret Ψ(t) with K and ε under Q-ThS. The phase-transition point shifts towards the right as ε decreases. The efficiency of learning decreases with increase in the size of the system.
6 Simulation Results
In this section we present simulation results of various queueing bandit systems with K servers.
These results corroborate our theoretical analysis in Sections 4 and 5. In particular a phase transition
from unstable to stable behavior can be observed in all our simulations, as predicted by our analysis.
In the remainder of the section we demonstrate the performance of Algorithm 1 under variations of
system parameters like the traffic (ε), the gap between the optimal and the suboptimal servers (Δ),
and the size of the system (K). We also compare the performance of our algorithm with versions of
UCB-1 [4] and Thompson Sampling [20] without structured exploration (Figure 3 in the appendix).
Variation with ε and K. In Figure 2 we see the evolution of Ψ(t) in systems of size 5 and 7. It can be observed that the regret decays faster in the smaller system, which is predicted by Theorem 2 in the late stage and Corollary 5 in the early stage. The performance of the system under different traffic settings can be observed in Figure 2. It is evident that the regret of the queueing system grows with decreasing ε. This is in agreement with our analytical results (Corollaries 3 and 5). In Figure 2 we can observe that the time at which the phase transition occurs shifts towards the right with decreasing ε, which is predicted by Corollaries 3 and 5.
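In outline, these experiments can be reproduced as follows. This sketch couples runs of the learning policy with runs of the genie policy and reuses simulate_queue from the earlier sketch; as a simplification relative to Assumption 1, it starts both queues empty rather than in the stationary distribution:

import numpy as np

def estimate_queue_regret(policy, lam, mu, T, n_runs=200, seed=0):
    """Monte Carlo estimate of Psi(t) = E[Q(t) - Q*(t)] for t = 1..T.

    `policy` is a callable t -> server index (a stateful learning policy such
    as Q-ThS can be wrapped as a callable that updates its own statistics)."""
    rng = np.random.default_rng(seed)
    k_star = int(np.argmax(mu))
    total = np.zeros(T)
    for _ in range(n_runs):
        total += simulate_queue(policy, lam, mu, T, rng)
        total -= simulate_queue(lambda t: k_star, lam, mu, T, rng)
    return total / n_runs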
7 Discussion and Conclusion
This paper provides the first regret analysis of the queueing bandit problem, including a characterization of regret in both early and late stages, together with analysis of the switching time; and
an algorithm (Q-ThS) that is asymptotically optimal (to within poly-logarithmic factors) and also
essentially exhibits the correct switching behavior between early and late stages. There remain
substantial open directions for future work.
First, is there a single algorithm that gives optimal performance in both early and late stages, as well
as the optimal switching time between early and late stages? The price paid for structured exploration
by Q-ThS is an inflation of regret in the early stage. An important open question is to find a single,
adaptive algorithm that gives good performance over all time. As we note in the appendix, classic
(unstructured) Thompson sampling is an intriguing candidate from this perspective.
Second, the most significant technical hurdle in finding a single optimal algorithm is the difficulty
of establishing concentration results for the number of suboptimal arm pulls within a regenerative
cycle whose length is dependent on the bandit strategy. Such concentration results would be needed
in two different limits: first, as the start time of the regenerative cycle approaches infinity (for the
asymptotic analysis of late stage regret); and second, as the load of the system increases (for the
analysis of early stage regret in the heavily loaded regime). Any progress on the open directions
described above would likely require substantial progress on these technical questions as well.
Acknowledgement: This work is partially supported by NSF Grants CNS-1161868, CNS-1343383, CNS-1320175, ARO grants W911NF-16-1-0377, W911NF-15-1-0227, W911NF-14-1-0387 and the US DoT supported D-STOP Tier 1 University Transportation Center.
References
[1] J. C. Gittins, "Bandit processes and dynamic allocation indices," Journal of the Royal Statistical Society, Series B (Methodological), pp. 148–177, 1979.
[2] A. Mahajan and D. Teneketzis, "Multi-armed bandit problems," in Foundations and Applications of Sensor Management. Springer, 2008, pp. 121–151.
[3] S. Bubeck and N. Cesa-Bianchi, "Regret analysis of stochastic and nonstochastic multi-armed bandit problems," Machine Learning, vol. 5, no. 1, pp. 1–122, 2012.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer, "Finite-time analysis of the multiarmed bandit problem," Machine Learning, vol. 47, no. 2-3, pp. 235–256, 2002.
[5] A. Garivier and O. Cappé, "The KL-UCB algorithm for bounded stochastic bandits and beyond," arXiv preprint arXiv:1102.2490, 2011.
[6] S. Agrawal and N. Goyal, "Analysis of Thompson sampling for the multi-armed bandit problem," arXiv preprint arXiv:1111.1797, 2011.
[7] T. L. Lai and H. Robbins, "Asymptotically efficient adaptive allocation rules," Advances in Applied Mathematics, vol. 6, no. 1, pp. 4–22, 1985.
[8] J.-Y. Audibert and S. Bubeck, "Best arm identification in multi-armed bandits," in COLT-23th Conference on Learning Theory-2010, 2010, pp. 13–p.
[9] W. Whitt, "Heavy traffic limit theorems for queues: a survey," in Mathematical Methods in Queueing Theory. Springer, 1974, pp. 307–350.
[10] H. Kushner, Heavy Traffic Analysis of Controlled Queueing and Communication Networks. Springer Science & Business Media, 2013, vol. 47.
[11] J. Niño-Mora, "Dynamic priority allocation via restless bandit marginal productivity indices," Top, vol. 15, no. 2, pp. 161–198, 2007.
[12] P. Jacko, "Restless bandits approach to the job scheduling problem and its extensions," Modern Trends in Controlled Stochastic Processes: Theory and Applications, pp. 248–267, 2010.
[13] D. Cox and W. Smith, Queues. Wiley, 1961.
[14] C. Buyukkoc, P. Varaiya, and J. Walrand, "The cµ rule revisited," Advances in Applied Probability, vol. 17, no. 1, pp. 237–238, 1985.
[15] J. A. Van Mieghem, "Dynamic scheduling with convex delay costs: The generalized cµ rule," The Annals of Applied Probability, pp. 809–833, 1995.
[16] J. Niño-Mora, "Marginal productivity index policies for scheduling a multiclass delay-/loss-sensitive queue," Queueing Systems, vol. 54, no. 4, pp. 281–312, 2006.
[17] C. Lott and D. Teneketzis, "On the optimality of an index rule in multichannel allocation for single-hop mobile networks with multiple service classes," Probability in the Engineering and Informational Sciences, vol. 14, pp. 259–297, 2000.
[18] A. Salomon, J.-Y. Audibert, and I. El Alaoui, "Lower bounds and selectivity of weak-consistent policies in stochastic multi-armed bandit problem," The Journal of Machine Learning Research, vol. 14, no. 1, pp. 187–207, 2013.
[19] R. Combes, C. Jiang, and R. Srikant, "Bandits with budgets: Regret lower bounds and optimal algorithms," in Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. ACM, 2015, pp. 245–257.
[20] W. R. Thompson, "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples," Biometrika, pp. 285–294, 1933.
[21] S. Bubeck, V. Perchet, and P. Rigollet, "Bounded regret in stochastic multi-armed bandits," arXiv preprint arXiv:1302.1611, 2013.
[22] V. Perchet, P. Rigollet, S. Chassang, and E. Snowberg, "Batched bandit problems," arXiv preprint arXiv:1505.00369, 2015.
[23] A. B. Tsybakov, Introduction to Nonparametric Estimation. Springer Science & Business Media, 2008.
[24] O. Chapelle and L. Li, "An empirical evaluation of Thompson sampling," in Advances in Neural Information Processing Systems, 2011, pp. 2249–2257.
[25] S. L. Scott, "A modern Bayesian look at the multi-armed bandit," Applied Stochastic Models in Business and Industry, vol. 26, no. 6, pp. 639–658, 2010.
[26] E. Kaufmann, N. Korda, and R. Munos, "Thompson sampling: An asymptotically optimal finite-time analysis," in Algorithmic Learning Theory. Springer, 2012, pp. 199–213.
[27] D. Russo and B. Van Roy, "Learning to optimize via posterior sampling," Mathematics of Operations Research, vol. 39, no. 4, pp. 1221–1243, 2014.
Online Convex Optimization with Unconstrained
Domains and Losses
Kwabena Boahen
Department of Bioengineering
Stanford University
[email protected]
Ashok Cutkosky
Department of Computer Science
Stanford University
[email protected]
Abstract
We propose an online convex optimization algorithm (RESCALEDEXP) that achieves
optimal regret in the unconstrained setting without prior knowledge of any bounds
on the loss functions. We prove a lower bound showing an exponential separation between the regret of existing algorithms that require a known bound
on the loss functions and any algorithm that does not require such knowledge.
RESCALEDEXP matches this lower bound asymptotically in the number of iterations. RESCALEDEXP is naturally hyperparameter-free and we demonstrate empirically that it matches prior optimization algorithms that require hyperparameter
optimization.
1 Online Convex Optimization
Online Convex Optimization (OCO) [1, 2] provides an elegant framework for modeling noisy,
antagonistic or changing environments. The problem can be stated formally with the help of the
following definitions:
Convex Set: A set W is convex if W is contained in some real vector space and tw + (1 − t)w′ ∈ W for all w, w′ ∈ W and t ∈ [0, 1].
Convex Function: f : W → ℝ is a convex function if f(tw + (1 − t)w′) ≤ tf(w) + (1 − t)f(w′) for all w, w′ ∈ W and t ∈ [0, 1].
An OCO problem is a game of repeated rounds in which on round t a learner first chooses an element w_t in some convex space W, then receives a convex loss function ℓ_t, and suffers loss ℓ_t(w_t). The regret of the learner with respect to some other u ∈ W is defined by
R_T(u) = Σ_{t=1}^T [ℓ_t(w_t) − ℓ_t(u)]
The objective is to design an algorithm that can achieve low regret with respect to any u, even in the face of adversarially chosen ℓ_t.
Many practical problems can be formulated as OCO problems. For example, the stochastic optimization problems found widely throughout machine learning have exactly the same form, but with i.i.d.
loss functions, a subset of the OCO problems. In this setting the goal is to identify a vector w* with low generalization error (E[ℓ(w*) − ℓ(u)]). We can solve this by running an OCO algorithm for T rounds and setting w* to be the average value of w_t. By online-to-batch conversion results [3, 4], the generalization error is bounded by the expectation of the regret over the ℓ_t divided by T. Thus,
OCO algorithms can be used to solve stochastic optimization problems while also performing well in
non-i.i.d. settings.
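As an illustration of this conversion, a minimal online-to-batch wrapper might look like the following (a sketch; we assume the learner exposes a predict/update interface, which is not prescribed by the paper):

import numpy as np

def online_to_batch(learner, subgrad_oracles):
    """Return the averaged iterate (1/T) * sum_t w_t after one online pass.

    Each element of subgrad_oracles maps a point w to a subgradient g_t(w);
    by [3, 4], the expected excess risk of the average is at most E[regret]/T."""
    iterates = []
    for oracle in subgrad_oracles:
        w = learner.predict()
        iterates.append(np.asarray(w, dtype=float))
        learner.update(oracle(w))
    return np.mean(iterates, axis=0)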
The regret of an OCO problem is upper-bounded by the regret on a corresponding Online Linear Optimization (OLO) problem, in which each ℓ_t is further constrained to be a linear function: ℓ_t(w) = g_t · w for some g_t. The reduction follows, with the help of one more definition:
Subgradient: g ∈ W is a subgradient of f at w, denoted g ∈ ∂f(w), if and only if f(w) + g · (w′ − w) ≤ f(w′) for all w′. Note that ∂f(w) ≠ ∅ if f is convex.¹
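For example, the hinge loss ℓ(w) = max(0, 1 − y⟨w, x⟩) used in the experiments of Section 4 admits the following subgradient; at the kink we return zero, which is one of several valid choices:

import numpy as np

def hinge_subgradient(w, x, y):
    """A subgradient of w -> max(0, 1 - y * <w, x>) at w."""
    if y * np.dot(w, x) < 1:
        return -y * np.asarray(x, dtype=float)
    return np.zeros_like(w, dtype=float)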
To reduce OCO to OLO, suppose g_t ∈ ∂ℓ_t(w_t), and consider replacing ℓ_t(w) with the linear approximation g_t · w. Then using the definition of subgradient,
R_T(u) = Σ_{t=1}^T [ℓ_t(w_t) − ℓ_t(u)] ≤ Σ_{t=1}^T g_t · (w_t − u) = Σ_{t=1}^T (g_t · w_t − g_t · u)
so that replacing ℓ_t(w) with g_t · w can only make the problem more difficult. All of the analysis in this paper therefore addresses OLO, accessing convex loss functions only through subgradients.
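Concretely, the reduction means that any OLO learner can drive an OCO problem while seeing only subgradients; a sketch of such a wrapper (interface names are ours):

def run_oco_via_olo(olo_learner, losses):
    """Each element of `losses` maps w to (loss value, subgradient at w).

    The OLO learner only ever sees g_t, yet its linear regret upper-bounds
    the convex regret by the displayed inequality."""
    total_loss = 0.0
    for loss in losses:
        w = olo_learner.predict()
        value, g = loss(w)
        total_loss += value
        olo_learner.update(g)
    return total_loss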
There are two major factors that influence the regret of OLO algorithms: the size of the space W and the size of the subgradients g_t. When W is a bounded set (the "constrained" case), then given B = max_{w∈W} ‖w‖, there exist OLO algorithms [5, 6] that can achieve R_T(u) ≤ O(B L_max √T) without knowing L_max = max_t ‖g_t‖. When W is unbounded (the "unconstrained" case), then given L_max, there exist algorithms [7, 8, 9] that achieve R_T(u) ≤ O(‖u‖ log(‖u‖) L_max √T) or R_T(u) ≤ Õ(‖u‖ √(log(‖u‖)) L_max √T), where Õ hides factors that depend logarithmically on L_max
and T. These algorithms are known to be optimal (up to constants) for their respective regimes [10, 7]. All algorithms for the unconstrained setting to date require knowledge of L_max to achieve these optimal bounds.² Thus a natural question is: can we achieve O(‖u‖ log(‖u‖)) regret in the unconstrained, unknown-L_max setting? This problem has been posed as a COLT 2016 open problem [12], and is solved in this paper.
A simple approach is to maintain an estimate of L_max and double it whenever we see a new g_t that violates the assumed bound (the so-called "doubling trick"), thereby turning a known-L_max algorithm into an unknown-L_max algorithm. This strategy fails for previous known-L_max algorithms because their analysis makes strong use of the assumption that each and every ‖g_t‖ is bounded by L_max. The existence of even a small number of bound-violating g_t can throw off the entire analysis.
In this paper, we prove that it is actually impossible to achieve regret O(‖u‖ log(‖u‖) L_max √T + L_max exp((max_t ‖g_t‖/L(t))^{1/2−ε})) for any ε > 0, where L_max and L(t) = max_{t′<t} ‖g_{t′}‖ are unknown in advance (Section 2). This immediately rules out the "ideal" bound of O(‖u‖ log(‖u‖) L_max √T) which is possible in the known-L_max case. Secondly, we provide an algorithm, RESCALEDEXP, that matches our lower bound without prior knowledge of L_max, leading to a naturally hyperparameter-free algorithm (Section 3). To our knowledge, this is the first algorithm to address the unknown-L_max issue while maintaining O(‖u‖ log ‖u‖) dependence on u. Finally, we present empirical results showing that RESCALEDEXP performs well in practice (Section 4).
2 Lower Bound with Unknown L_max
The following theorem rules out algorithms that achieve regret O(‖u‖ log(‖u‖) L_max √T) without prior knowledge of L_max. In fact, any such algorithm must pay an up-front penalty that is exponential in T. This lower bound resolves a COLT 2016 open problem (Parameter-Free and Scale-Free Online Algorithms) [12] in the negative.
¹In full generality, a subgradient is an element of the dual space W*. However, we will only consider cases where the subgradient is naturally identified with an element in the original space W (e.g. W is finite dimensional) so that the definition in terms of dot-products suffices.
²There are algorithms that do not require L_max, but achieve only regret O(‖u‖²) [11].
Theorem 1. For any constants c, k, ε > 0, there exists a T and an adversarial strategy picking g_t ∈ ℝ in response to w_t ∈ ℝ such that regret is:
R_T(u) = Σ_{t=1}^T (g_t w_t − g_t u)
 ≥ (k + c‖u‖ log ‖u‖) L_max √(T log(L_max + 1)) + k L_max exp((2T)^{1/2−ε})
 ≥ (k + c‖u‖ log ‖u‖) L_max √(T log(L_max + 1)) + k L_max exp((max_t ‖g_t‖/L(t))^{1/2−ε})
for some u ∈ ℝ, where L_max = max_{t≤T} ‖g_t‖ and L(t) = max_{t′<t} ‖g_{t′}‖.
Proof. We prove the theorem by showing that for sufficiently large T, the adversary can "checkmate" the learner by presenting it only with the subgradient g_t = −1. If the learner fails to have w_t increase quickly, then there is a u ≫ 1 against which the learner has high regret. On the other hand, if the learner ever does make w_t higher than a particular threshold, the adversary immediately punishes the learner with a subgradient g_t = 2T, again resulting in high regret.
Let T be large enough such that both of the following hold:
(T/4) exp(T^{1/2}/(4 log(2)c)) > k √(T log(2)) + k exp((2T)^{1/2−ε})   (1)
(T/2) exp(T^{1/2}/(4 log(2)c)) > 2kT exp((2T)^{1/2−ε}) + 2kT √(T log(2T + 1))   (2)
The adversary plays the following strategy: for all t ≤ T, so long as w_t < (1/2) exp(T^{1/2}/(4 log(2)c)), give g_t = −1. As soon as w_t ≥ (1/2) exp(T^{1/2}/(4 log(2)c)), give g_t = 2T and g_t = 0 for all subsequent t. Let's analyze the regret at time T in these two cases.
Case 1: w_t < (1/2) exp(T^{1/2}/(4 log(2)c)) for all t:
In this case, let u = exp(T^{1/2}/(4 log(2)c)). Then L_max = 1, max_t ‖g_t‖/L(t) = 1, and using (1) the learner's regret is at least
R_T(u) ≥ Tu − (T/2) exp(T^{1/2}/(4 log(2)c))
 = (1/2) Tu
 = cu log(u) √(T log(2)) + (T/4) exp(T^{1/2}/(4 log(2)c))
 > cu log(u) L_max √(T log(L_max + 1)) + k L_max √(T log(L_max + 1)) + k L_max exp((2T)^{1/2−ε})
 = (k + cu log u) L_max √(T log(L_max + 1)) + k L_max exp((2T)^{1/2−ε})
Case 2: w_t ≥ (1/2) exp(T^{1/2}/(4 log(2)c)) for some t:
In this case, L_max = 2T and max_t ‖g_t‖/L(t) = 2T. For u = 0, using (2), the regret is at least
R_T(u) ≥ (T/2) exp(T^{1/2}/(4 log(2)c))
 ≥ 2kT exp((2T)^{1/2−ε}) + 2kT √(T log(2T + 1))
 = k L_max exp((2T)^{1/2−ε}) + k L_max √(T log(L_max + 1))
 = (k + cu log u) L_max √(T log(L_max + 1)) + k L_max exp((2T)^{1/2−ε})
The exponential lower-bound arises because the learner has to move exponentially fast in order to deal with exponentially far away u, but then experiences exponential regret if the adversary provides a gradient of unprecedented magnitude in the opposite direction. However, if we play against an adversary that is constrained to give loss vectors ‖g_t‖ ≤ L_max for some L_max that does not grow with time, or if the losses do not grow too quickly, then we can still achieve R_T(u) = O(‖u‖ log(‖u‖) L_max √T) asymptotically without knowing L_max. In the following sections we describe an algorithm that accomplishes this.
3 RESCALEDEXP
Our algorithm, RESCALEDEXP, adapts to the unknown L_max using a guess-and-double strategy that is robust to a small number of bound-violating g_t's. We initialize a guess L for L_max to ‖g₁‖. Then we run a novel known-L_max algorithm that can achieve good regret in the unconstrained-u setting. As soon as we see a g_t with ‖g_t‖ > 2L, we update our guess to ‖g_t‖ and restart the known-L_max algorithm. To prove that this scheme is effective, we show (Lemma 3) that our known-L_max algorithm does not suffer too much regret when it sees a g_t that violates its assumed bound.
Our known-L_max algorithm uses the Follow-the-Regularized-Leader (FTRL) framework. FTRL is an intuitive way to design OCO algorithms [13]: Given functions ψ_t : W → ℝ, at time T we play w_T = argmin_w [ψ_{T−1}(w) + Σ_{t=1}^{T−1} ℓ_t(w)]. The functions ψ_t are called regularizers. A large number of OCO algorithms (e.g. gradient descent) can be cleanly formulated as instances of this framework.
Our known-L_max algorithm is FTRL with regularizers ψ_t(w) = ψ(w)/η_t, where ψ(w) = (‖w‖ + 1) log(‖w‖ + 1) − ‖w‖ and η_t is a scale-factor that we adapt over time. Specifically, we set η_t^{−1} = k√(2(M_t + ‖g‖²_{1:t})), where we use the compressed sum notations g_{1:T} = Σ_{t=1}^T g_t and ‖g‖²_{1:T} = Σ_{t=1}^T ‖g_t‖². M_t is defined recursively by M₀ = 0 and M_t = max(M_{t−1}, ‖g_{1:t}‖/p − ‖g‖²_{1:t}), so that M_t ≥ M_{t−1}, and M_t + ‖g‖²_{1:t} ≥ ‖g_{1:t}‖/p. k and p are constants: k = √2 and p = L_max^{−1}.
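For intuition, the FTRL iterate with this regularizer has a closed form, which is exactly the update used in Algorithm 1 below. A short derivation, assuming g_{1:t} ≠ 0 and using that ψ is differentiable away from the origin:

\nabla\psi(w) = \log(\lVert w\rVert + 1)\,\frac{w}{\lVert w\rVert},
\qquad
\nabla\left[\frac{\psi(w)}{\eta_t} + g_{1:t}\cdot w\right] = 0
\;\Longrightarrow\;
\frac{\log(\lVert w\rVert + 1)}{\eta_t}\,\frac{w}{\lVert w\rVert} = -\,g_{1:t}.

Hence the minimizer points along −g_{1:t} with log(‖w‖ + 1) = η_t ‖g_{1:t}‖, i.e. w = −(g_{1:t}/‖g_{1:t}‖)(exp(η_t ‖g_{1:t}‖) − 1).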
RESCALEDEXP's strategy is to maintain an estimate L_t of L_max at all time steps. Whenever it observes ‖g_t‖ > 2L_t, it updates L_{t+1} = ‖g_t‖. We call periods during which L_t is constant epochs. Every time it updates L_t, it restarts our known-L_max algorithm with p = 1/L_t, beginning a new epoch. Notice that since L_t at least doubles every epoch, there will be at most log₂(L_max/L₁) + 1 total epochs. To address edge cases, we set w_t = 0 until we suffer a non-constant loss function, and we set the initial value of L_t to be the first non-zero g_t. Pseudo-code is given in Algorithm 1, and Theorem 2 states our regret bound. For simplicity, we re-index so that g₁ is the first non-zero gradient received. No regret is suffered when g_t = 0 so this does not affect our analysis.
Algorithm 1 RESCALEDEXP
Initialize: k ← √2, M₀ ← 0, w₁ ← 0, t* ← 1 // t* is the start-time of the current epoch.
for t = 1 to T do
  Play w_t, receive subgradient g_t ∈ ∂ℓ_t(w_t).
  if t = 1 then
    L₁ ← ‖g₁‖
    p ← 1/L₁
  end if
  M_t ← max(M_{t−1}, ‖g_{t*:t}‖/p − ‖g‖²_{t*:t})
  η_t ← 1 / (k√(2(M_t + ‖g‖²_{t*:t})))
  // Set w_{t+1} using FTRL update
  w_{t+1} ← −(g_{t*:t}/‖g_{t*:t}‖) [exp(η_t ‖g_{t*:t}‖) − 1]   // = argmin_w ψ(w)/η_t + g_{t*:t} · w
  if ‖g_t‖ > 2L_t then
    // Begin a new epoch: update L and restart FTRL
    L_{t+1} ← ‖g_t‖
    p ← 1/L_{t+1}
    t* ← t + 1
    M_t ← 0
    w_{t+1} ← 0
  else
    L_{t+1} ← L_t
  end if
end for
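A direct NumPy transcription of Algorithm 1 (a sketch under our reading of the pseudo-code above; the class and variable names are ours):

import numpy as np

class RescaledExp:
    """Follows Algorithm 1: FTRL epochs that restart when ||g_t|| > 2 L_t."""

    def __init__(self, dim):
        self.k = np.sqrt(2.0)
        self.w = np.zeros(dim)        # w_1 = 0
        self.L = None                 # scale estimate L_t (set by first g != 0)
        self.p = None
        self.M = 0.0                  # M_t within the current epoch
        self.gsum = np.zeros(dim)     # g_{t*:t}
        self.gsq = 0.0                # ||g||^2_{t*:t}

    def predict(self):
        return self.w

    def update(self, g):
        g = np.asarray(g, dtype=float)
        gnorm = float(np.linalg.norm(g))
        if self.L is None:
            if gnorm == 0.0:          # re-indexing: ignore zero gradients
                return
            self.L, self.p = gnorm, 1.0 / gnorm
        self.gsum += g
        self.gsq += gnorm ** 2
        self.M = max(self.M, np.linalg.norm(self.gsum) / self.p - self.gsq)
        eta = 1.0 / (self.k * np.sqrt(2.0 * (self.M + self.gsq)))
        s = float(np.linalg.norm(self.gsum))
        if s > 0.0:                   # FTRL closed form (see derivation above)
            self.w = -(self.gsum / s) * (np.exp(eta * s) - 1.0)
        if gnorm > 2.0 * self.L:      # begin a new epoch and restart FTRL
            self.L, self.p = gnorm, 1.0 / gnorm
            self.M, self.gsq = 0.0, 0.0
            self.gsum = np.zeros_like(self.gsum)
            self.w = np.zeros_like(self.w)

For example, feeding this learner hinge-loss subgradients reproduces the hyperparameter-free setup used in the experiments of Section 4.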
Theorem 2. Let W be a separable real inner-product space with corresponding norm ‖·‖ and suppose (with mild abuse of notation) every loss function ℓ_t : W → ℝ has some subgradient g_t ∈ W* at w_t such that g_t(w) = g_t · w for some g_t ∈ W. Let M_max = max_t M_t. Then if L_max = max_t ‖g_t‖ and L(t) = max_{t′<t} ‖g_{t′}‖, RESCALEDEXP achieves regret:
R_T(u) ≤ (2ψ(u) + 96) (log₂(L_max/L₁) + 1) √(M_max + ‖g‖²_{1:T})
 + 8 L_max (log₂(L_max/L₁) + 1) min{ exp(8 max_t ‖g_t‖²/L(t)²), exp(√(T/2)) }
 = O( L_max log(L_max/L₁) [ (‖u‖ log(‖u‖) + 2) √T + exp(8 max_t ‖g_t‖²/L(t)²) ] )
The conditions on W in Theorem 2 are fairly mild. In particular they are satisfied whenever W is finite-dimensional and in most kernel method settings [14]. In the kernel method setting, W is an RKHS of functions X → ℝ and our losses take the form ℓ_t(w) = ℓ_t(⟨w, k_{x_t}⟩) where k_{x_t} is the representing element in W of some x_t ∈ X, so that g_t = ḡ_t k_{x_t} where ḡ_t ∈ ∂ℓ_t(⟨w, k_{x_t}⟩).
Although we nearly match our lower-bound exponential term of exp((2T)^{1/2−ε}), in order to have a practical algorithm we need to do much better. Fortunately, the max_t ‖g_t‖²/L(t)² term may be significantly smaller when the losses are not fully adversarial. For example, if the loss vectors g_t satisfy ‖g_t‖ = t², then the exponential term in our bound reduces to a manageable constant even though ‖g_t‖ is growing quickly without bound.
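A quick numeric check of this example (our own illustration): with ‖g_t‖ = t² we have L(t) = (t − 1)², the ratio ‖g_t‖²/L(t)² is maximized at t = 2, and the exponent is therefore fixed independent of T:

import numpy as np

t = np.arange(2, 100000)
ratios = (t**2 / (t - 1)**2) ** 2        # (||g_t|| / L(t))^2
print(ratios.max())                      # 16.0, attained at t = 2
print(np.exp(8 * ratios.max()))          # exp(128): large, but constant in T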
To prove Theorem 2, we bound the regret of RESCALEDEXP during each epoch. Recall that during
an epoch, RESCALEDEXP is running FTRL with ψ_t(w) = ψ(w)/η_t. Therefore our first order of
business is to analyze the regret of FTRL across one of these epochs, which we do in Lemma 3
(proved in appendix):
Lemma 3. Set k = √2. Suppose ‖g_t‖ ≤ L for t < T, 1/L ≤ p ≤ 2/L, ‖g_T‖ ≤ L_max and L_max ≥ L. Let W_max = max_{t∈[1,T]} ‖w_t‖. Then the regret of FTRL with regularizers ψ_t(w) = ψ(w)/η_t is:
R_T(u) ≤ ψ(u)/η_T + 96√(M_T + ‖g‖²_{1:T}) + 2L_max min{ W_max, 4 exp(4L²_max/L²), exp(√(T/2)) }
 ≤ (2ψ(u) + 96)√(Σ_{t=1}^{T−1} L‖g_t‖ + L²_max) + 8L_max min{ exp(4L²_max/L²), exp(√(T/2)) }
 ≤ L_max (2((‖u‖ + 1) log(‖u‖ + 1) − ‖u‖) + 96)√T + 8L_max min{ e^{4L²_max/L²}, e^{√(T/2)} }
Lemma 3 requires us to know the value of L in order to set p. However, the crucial point is that it
encompasses the case in which L is misspecified on the last loss vector. This allows us to show that
RESCALEDEXP does not suffer too much by updating p on-the-fly.
Proof of Theorem 2. The theorem follows by applying Lemma 3 to each epoch in which L_t is constant.
Let 1 = t₁, t₂, t₃, ..., t_n be the various increasing values of t* (as defined in Algorithm 1), and we define t_{n+1} = T + 1. Then define
R_{a:b}(u) = Σ_{t=a}^{b−1} g_t(w_t − u)
so that R_T(u) ≤ Σ_{j=1}^n R_{t_j:t_{j+1}}(u). We will bound R_{t_j:t_{j+1}}(u) for each j.
Fix a particular j < n. Then R_{t_j:t_{j+1}}(u) is simply the regret of FTRL with k = √2, p = 1/L_{t_j}, η_t = 1/(k√(2(M_t + ‖g‖²_{t_j:t}))), and regularizers ψ(w)/η_t. By definition of L_t, for t ∈ [1, t_{j+1} − 2] we have ‖g_t‖ ≤ 2L_{t_j}. Further, if L = max_{t∈[1,t_{j+1}−2]} ‖g_t‖ we have L ≥ L_{t_j}. Therefore, L_{t_j} ≤ L ≤ 2L_{t_j}, so that 1/L ≤ p ≤ 2/L. Further, we have ‖g_{t_{j+1}−1}‖/L_{t_j} ≤ 2 max_t ‖g_t‖/L(t). Thus by Lemma 3 we have
R_{t_j:t_{j+1}}(u) ≤ ψ(u)/η_{t_{j+1}−1} + 96√(M_{t_{j+1}−1} + ‖g‖²_{t_j:t_{j+1}−1})
  + 2L_max min{ W_max, 4 exp(4‖g_{t_{j+1}−1}‖²/L²_{t_j}), exp(√((t_{j+1} − t_j)/2)) }
 ≤ ψ(u)/η_{t_{j+1}−1} + 96√(M_max + ‖g‖²_{t_j:t_{j+1}−1}) + 8L_max min{ e^{8 max_t ‖g_t‖²/L(t)²}, e^{√(T/2)} }
 ≤ (2ψ(u) + 96)√(M_max + ‖g‖²_{1:T}) + 8L_max min{ exp(8 max_t ‖g_t‖²/L(t)²), exp(√(T/2)) }
Summing across epochs, we have
R_T(u) = Σ_{j=1}^n R_{t_j:t_{j+1}}(u)
 ≤ n [ (2ψ(u) + 96)√(M_max + ‖g‖²_{1:T}) + 8L_max min{ exp(8 max_t ‖g_t‖²/L(t)²), exp(√(T/2)) } ]
Observe that n ≤ log₂(L_max/L₁) + 1 to prove the first line of the theorem. The big-O expression follows from the inequality: M_{t_{j+1}−1} ≤ L_{t_j} Σ_{t=t_j}^{t_{j+1}−1} ‖g_t‖ ≤ L_max Σ_{t=1}^T ‖g_t‖.
Our specific choices for k and p are somewhat arbitrary. We suspect (although we do not prove)
that the preceding theorems are true for larger values of k and any p inversely proportional to L_t, albeit with differing constants. In Section 4 we perform experiments using the values for k, p and L_t
described in Algorithm 1. In keeping with the spirit of designing a hyperparameter-free algorithm, no
attempt was made to empirically optimize these values at any time.
4 Experiments
4.1 Linear Classification
To validate our theoretical results in practice, we evaluated RESCALEDEXP on 8 classification datasets.
The data for each task was pulled from the libsvm website [15], and can be found individually
in a variety of sources [16, 17, 18, 19, 20, 21, 22]. We use linear classifiers with hinge-loss for
each task and we compare RESCALEDEXP to five other optimization algorithms: ADAGRAD [5], SCALEINVARIANT [23], PISTOL [24], ADAM [25], and ADADELTA [26]. Each of these algorithms requires tuning of some hyperparameter for unconstrained problems with unknown L_max (usually a
scale-factor on a learning rate). In contrast, our RESCALEDEXP requires no such tuning.
We evaluate each algorithm with the average loss after one pass through the data, computing a
prediction, an error, and an update to model parameters for each example in the dataset. Note that
this is not the same as a cross-validated error, but is closer to the notion of regret addressed in our
theorems. We plot this average loss versus hyperparameter setting for each dataset in Figures 1 and
2. These data bear out the effectiveness of RESCALEDEXP: while it is not unilaterally the highest
performer on all datasets, it shows remarkable robustness across datasets with zero manual tuning.
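The evaluation protocol above amounts to a single online pass. A sketch, assuming the common predict/update interface and the hinge loss used here:

import numpy as np

def one_pass_average_loss(learner, X, Y):
    """Average hinge loss over one pass: predict, incur loss, then update."""
    total = 0.0
    for x, y in zip(X, Y):
        w = learner.predict()
        margin = y * np.dot(w, x)
        total += max(0.0, 1.0 - margin)
        g = -y * x if margin < 1 else np.zeros_like(w)
        learner.update(g)
    return total / len(X)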
4.2 Convolutional Neural Networks
We also evaluated RESCALEDEXP on two convolutional neural network models. These models have
demonstrated remarkable success in computer vision tasks and are becoming increasingly more
popular in a variety of areas, but can require significant hyperparameter tuning to train. We consider
the MNIST [18] and CIFAR-10 [27] image classification tasks.
Our MNIST architecture consisted of two consecutive 5 ? 5 convolution and 2 ? 2 max-pooling layers
followed by a 512-neuron fully-connected layer. Our CIFAR-10 architecture was two consecutive
5 ? 5 convolution and 3 ? 3 max-pooling layers followed by a 384-neuron fully-connected layer and
a 192-neuron fully-connected layer.
[Figure 1 plots omitted: four panels (covtype, gisette_scale, madelon, mnist), each showing average loss vs. hyperparameter setting for PiSTOL, Scale Invariant, ADAM, AdaDelta, AdaGrad, and RescaledExp.]
Figure 1: Average loss vs hyperparameter setting for each algorithm across each dataset. RESCALEDEXP has no hyperparameters and so is represented by a flat yellow line. Many of the other algorithms display large sensitivity to hyperparameter setting.
These models are highly non-convex, so that none of our theoretical analysis applies. Our use of
RESCALEDEXP is motivated by the fact that in practice convex methods are used to train these models.
We found that RESCALEDEXP can match the performance of other popular algorithms (see Figure 3).
In order to achieve this performance, we made a slight modification to RESCALEDEXP: when we
update Lt , instead of resetting wt to zero, we re-center the algorithm about the previous prediction
point. We provide no theoretical justification for this modification, but only note that it makes
intuitive sense in stochastic optimization problems, where one can reasonably expect that the previous
prediction vector is closer to the optimal value than zero.
5
Conclusions
We have presented RESCALEDEXP
, an Online Convex Optimization algorithm that achieves regret
?
?
O(kuk
log(kuk)Lmax T + exp(8 maxt kgt k2 /L(t)2 )) where Lmax = maxt kgt k is unknown in
advance. Since RESCALEDEXP does not use any prior-knowledge about the losses or comparison
vector u, it is hyperparameter free and so does not require any tuning of learning rates. We also prove
a lower-bound showing that any algorithm that addresses the unknown-Lmax scenario must suffer
an exponential penalty in the regret. We compare RESCALEDEXP to prior optimization algorithms
empirically and show that it matches their performance.
While our lower-bound matches our regret bound for RESCALEDEXP in terms of T , clearly there is
much work to be done. For example, when RESCALEDEXP is run on the adversarial loss sequence
presented in Theorem 1, its regret matches the lower-bound, suggesting that the optimality gap could
be improved with superior analysis. We also hope that our lower-bound inspires work in algorithms
that adapt to non-adversarial properties of the losses to avoid the exponential penalty.
[Figure 2 plots omitted: four panels (ijcnn1, epsilon_normalized, rcv1_train.multiclass, SenseIT Vehicle Combined), each showing average loss vs. hyperparameter setting for PiSTOL, Scale Invariant, ADAM, AdaDelta, AdaGrad, and RescaledExp.]
Figure 2: Average loss vs hyperparameter setting, continued from Figure 1.
Figure 3: We compare RESCALEDEXP to ADAM, ADAGRAD, and stochastic gradient descent (SGD), with learning-rate hyperparameter optimization for the latter three algorithms. All algorithms achieve a final validation accuracy of 99% on MNIST and 84%, 84%, 83% and 85% respectively on CIFAR-10 (after 40000 iterations).
References
[1] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pages 928–936, 2003.
[2] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[3] Nick Littlestone. From on-line to batch learning. In Proceedings of the second annual workshop on Computational learning theory, pages 269–284, 2014.
[4] Nicolo Cesa-Bianchi, Alex Conconi, and Claudio Gentile. On the generalization ability of on-line learning algorithms. Information Theory, IEEE Transactions on, 50(9):2050–2057, 2004.
[5] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT), 2010.
[6] H. Brendan McMahan and Matthew Streeter. Adaptive bound optimization for online convex optimization. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), 2010.
[7] Brendan McMahan and Matthew Streeter. No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems, pages 2402–2410, 2012.
[8] Francesco Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems, pages 1806–1814, 2013.
[9] Brendan McMahan and Jacob Abernethy. Minimax optimal algorithms for unconstrained linear optimization. In Advances in Neural Information Processing Systems, pages 2724–2732, 2013.
[10] Jacob Abernethy, Peter L Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the nineteenth annual conference on computational learning theory, 2008.
[11] Francesco Orabona and Dávid Pál. Scale-free online learning. arXiv preprint arXiv:1601.01974, 2016.
[12] Francesco Orabona and Dávid Pál. Open problem: Parameter-free and scale-free online algorithms. In Conference on Learning Theory, 2016.
[13] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University of Jerusalem, 2007.
[14] Thomas Hofmann, Bernhard Schölkopf, and Alexander J Smola. Kernel methods in machine learning. The Annals of Statistics, pages 1171–1220, 2008.
[15] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.
[16] Isabelle Guyon, Steve Gunn, Asa Ben-Hur, and Gideon Dror. Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural Information Processing Systems, pages 545–552, 2004.
[17] Chih-Chung Chang and Chih-Jen Lin. IJCNN 2001 challenge: Generalization ability and text decoding. In Proceedings of IJCNN. IEEE, 2001.
[18] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[19] David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004.
[20] Marco F Duarte and Yu Hen Hu. Vehicle classification in distributed sensor networks. Journal of Parallel and Distributed Computing, 64(7):826–838, 2004.
[21] M. Lichman. UCI machine learning repository, 2013.
[22] Shimon Kogan, Dimitry Levin, Bryan R Routledge, Jacob S Sagi, and Noah A Smith. Predicting risk from financial reports with regression. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 272–280. Association for Computational Linguistics, 2009.
[23] Francesco Orabona, Koby Crammer, and Nicolo Cesa-Bianchi. A generalized online mirror descent with applications to classification and regression. Machine Learning, 99(3):411–435, 2014.
[24] Francesco Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems, pages 1116–1124, 2014.
[25] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] Matthew D Zeiler. ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
[27] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[28] H. Brendan McMahan. A survey of algorithms and analysis for adaptive online learning. arXiv preprint arXiv:1403.3465, 2014.
Learning the Number of Neurons in Deep Networks
Jose M. Alvarez*
Data61 @ CSIRO
Canberra, ACT 2601, Australia
[email protected]
Mathieu Salzmann
CVLab, EPFL
CH-1015 Lausanne, Switzerland
[email protected]
Abstract
Nowadays, the number of layers and of neurons in each layer of a deep network
are typically set manually. While very deep and wide networks have proven
effective in general, they come at a high memory and computation cost, thus
making them impractical for constrained platforms. These networks, however,
are known to have many redundant parameters, and could thus, in principle, be
replaced by more compact architectures. In this paper, we introduce an approach to
automatically determining the number of neurons in each layer of a deep network
during learning. To this end, we propose to make use of a group sparsity regularizer
on the parameters of the network, where each group is defined to act on a single
neuron. Starting from an overcomplete network, we show that our approach can
reduce the number of parameters by up to 80% while retaining or even improving
the network accuracy.
1 Introduction
Thanks to the growing availability of large-scale datasets and computation power, Deep Learning
has recently generated a quasi-revolution in many fields, such as Computer Vision and Natural
Language Processing. Despite this progress, designing a deep architecture for a new task essentially
remains a dark art. It involves defining the number of layers and of neurons in each layer, which,
together, determine the number of parameters, or complexity, of the model, and which are typically
set manually by trial and error.
A recent trend to avoid this issue consists of building very deep [Simonyan and Zisserman, 2014] or
ultra deep [He et al., 2015] networks, which have proven more expressive. This, however, comes at
a significant cost in terms of memory requirement and speed, which may prevent the deployment
of such networks on constrained platforms at test time and complicate the learning process due to
exploding or vanishing gradients.
Automatic model selection has nonetheless been studied in the past, using both constructive and
destructive approaches. Starting from a shallow architecture, constructive methods work by incrementally incorporating additional parameters [Bello, 1992] or, more recently, layers to the network [Simonyan and Zisserman, 2014]. The main drawback of this approach stems from the fact
that shallow networks are less expressive than deep ones, and may thus provide poor initialization
when adding new layers. By contrast, destructive techniques exploit the fact that very deep models
include a significant number of redundant parameters [Denil et al., 2013, Cheng et al., 2015], and thus,
given an initial deep network, aim at reducing it while keeping its representation power. Originally,
this has been achieved by removing the parameters [LeCun et al., 1990, Hassibi et al., 1993] or
the neurons [Mozer and Smolensky, 1988, Ji et al., 1990, Reed, 1993] that have little influence on
the output. While effective, this requires analyzing every parameter/neuron independently, e.g., via
the network Hessian, and thus does not scale well to large architectures. Therefore, recent trends
* http://www.josemalvarez.net
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
to performing network reduction have focused on training shallow or thin networks to mimic the
behavior of large, deep ones [Hinton et al., 2014, Romero et al., 2015]. This approach, however, acts
as a post-processing step, and thus requires being able to successfully train an initial deep network.
In this paper, we introduce an approach to automatically selecting the number of neurons in each
layer of a deep architecture simultaneously as we learn the network. Specifically, our method does
not require training an initial network as a pre-processing step. Instead, we introduce a group sparsity
regularizer on the parameters of the network, where each group is defined to act on the parameters
of one neuron. Setting these parameters to zero therefore amounts to canceling the influence of a
particular neuron and thus removing it entirely. As a consequence, our approach does not depend on
the success of learning a redundant network to later reduce its parameters, but instead jointly learns
the number of relevant neurons in each layer and the parameters of these neurons.
We demonstrate the effectiveness of our approach on several network architectures and using several
image recognition datasets. Our experiments demonstrate that our method can reduce the number of
parameters by up to 80% compared to the complete network. Furthermore, this reduction comes at no
loss in recognition accuracy; it even typically yields an improvement over the complete network. In
short, our approach not only lets us automatically perform model selection, but it also yields networks
that, at test time, are more effective, faster and require less memory.
2 Related work
Model selection for deep architectures, or more precisely determining the best number of parameters,
such as the number of layers and of neurons in each layer, has not yet been widely studied. Currently,
this is mostly achieved by manually tuning these hyper-parameters using validation data, or by relying
on very deep networks [Simonyan and Zisserman, 2014, He et al., 2015], which have proven effective
in many scenarios. These large networks, however, come at the cost of high memory footprint and
low speed at test time. Furthermore, it is well-known that most of the parameters in such networks
are redundant [Denil et al., 2013, Cheng et al., 2015], and thus more compact architectures could do
as good a job as the very deep ones.
While sparse, some literature on model selection for deep learning nonetheless exists. In particular, a
forerunner approach was presented in [Ash, 1989] to dynamically add nodes to an existing architecture.
Similarly, [Bello, 1992] introduced a constructive method that incrementally grows a network by
adding new neurons. More recently, a similar constructive strategy was successfully employed
by [Simonyan and Zisserman, 2014], where their final very deep network was built by adding new
layers to an initial shallower architecture. The constructive approach, however, has a drawback:
Shallow networks are known not to handle non-linearities as effectively as deeper ones [Montufar
et al., 2014]. Therefore, the initial, shallow architectures may easily get trapped in bad optima, and
thus provide poor initialization for the constructive steps.
In contrast with constructive methods, destructive approaches to model selection start with an initial
deep network, and aim at reducing it while keeping its behavior unchanged. This trend was started
by [LeCun et al., 1990, Hassibi et al., 1993] to cancel out individual parameters, and by [Mozer
and Smolensky, 1988, Ji et al., 1990, Reed, 1993], and more recently [Liu et al., 2015], when it
comes to removing entire neurons. The core idea of these methods consists of studying the saliency
of individual parameters or neurons and remove those that have little influence on the output of
the network. Analyzing individual parameters/neurons, however, quickly becomes computationally
expensive for large networks, particularly when the procedure involves computing the network
Hessian and is repeated multiple times over the learning process. As a consequence, these techniques
have no longer been pursued in the current large-scale era. Instead, the more recent take on the
destructive approach consists of learning a shallower or thinner network that mimics the behavior
of an initial deep one [Hinton et al., 2014, Romero et al., 2015], which ultimately also reduces the
number of parameters of the initial network. The main motivation of these works, however, was not
truly model selection, but rather building a more compact network.
As a matter of fact, designing compact models is also an active research focus in deep learning. In
particular, in the context of Convolutional Neural Networks (CNNs), several works have proposed
to decompose the filters of a pre-trained network into low-rank filters, thus reducing the number
of parameters [Jaderberg et al., 2014b, Denton et al., 2014, Gong et al., 2014]. However, this
approach, similarly to some destructive methods mentioned above, acts as a post-processing step, and
thus requires being able to successfully train an initial deep network. Note that, in a more general
context, it has been shown that a two-step procedure is typically outperformed by one-step, direct
training [Srivastava et al., 2015]. Such a direct approach has been employed by [Weigend et al.,
1991] and [Collins and Kohli, 2014] who have developed regularizers that favor eliminating some
of the parameters of the network, thus leading to lower memory requirement. The regularizers are
minimized simultaneously as the network is learned, and thus no pre-training is required. However,
they act on individual parameters. Therefore, similarly to [Jaderberg et al., 2014b, Denton et al.,
2014] and to other parameter regularization techniques [Krogh and Hertz, 1992, Bartlett, 1996], these
methods do not perform model selection; the number of layers and neurons per layer is determined
manually and won?t be affected by learning.
By contrast, in this paper, we introduce an approach to automatically determine the number of neurons
in each layer of a deep network. To this end, we design a regularizer-based formulation and therefore
do not rely on any pre-training. In other words, our approach performs model selection and produces
a compact network in a single, coherent learning framework. To the best of our knowledge, only
three works have studied similar group sparsity regularizers for deep networks. However, [Zhou
et al., 2016] focuses on the last fully-connected layer to obtain a compact model, and [S. Tu, 2014]
and [Murray and Chiang, 2015] only considered small networks. Our approach scales to datasets
and architectures two orders of magnitude larger than in these last two works with minimum (and
tractable) training overhead. Furthermore, these three methods define a single global regularizer. By
contrast, we work in a per-layer fashion, which we found more effective to reduce the number of
neurons by large factors without accuracy drop.
3 Deep Model Selection
We now introduce our approach to automatically determining the number of neurons in each layer of
a deep network while learning the network parameters. To this end, we describe our framework for a
general deep network, and discuss specific architectures in the experiments section.
A general deep network can be described as a succession of L layers performing linear operations on
their input, intertwined with non-linearities, such as Rectified Linear Units (ReLU) or sigmoids, and,
potentially, pooling operations. Each layer l consists of N_l neurons, each of which is encoded by parameters θ_l^n = [w_l^n, b_l^n], where w_l^n is a linear operator acting on the layer's input and b_l^n is a bias. Altogether, these parameters form the parameter set Θ = {θ_l}_{1≤l≤L}, with θ_l = {θ_l^n}_{1≤n≤N_l}. Given an input signal x, such as an image, the output of the network can be written as ŷ = f(x, Θ), where f(·) encodes the succession of linear, non-linear and pooling operations.
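To make the grouping concrete, the following sketch (our own illustration, not the authors' code; all names are assumptions of ours) shows how the parameters of one neuron can be collected into a single vector θ_l^n for a fully-connected layer; for a convolutional layer, a group would instead contain one filter and its bias.

```python
# Illustrative sketch: forming per-neuron parameter groups for a
# fully-connected layer with weights W of shape (N_l, P_in) and bias b of
# shape (N_l,). Neuron n owns the group [W[n], b[n]], so P_l = P_in + 1.
import numpy as np

def neuron_groups(W, b):
    """Return one flat parameter vector (group) per neuron."""
    return [np.concatenate([W[n].ravel(), [b[n]]]) for n in range(W.shape[0])]

W = np.random.randn(4, 3)   # toy layer: N_l = 4 neurons, 3 inputs each
b = np.random.randn(4)
groups = neuron_groups(W, b)
print(len(groups), groups[0].size)  # 4 groups, each of size P_l = 4
```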
Given a training set consisting of N input-output pairs {(x_i, y_i)}_{1≤i≤N}, learning the parameters of
the network can be expressed as solving an optimization of the form
$$\min_{\Theta} \; \frac{1}{N}\sum_{i=1}^{N} \ell\big(y_i, f(x_i, \Theta)\big) + r(\Theta), \qquad (1)$$
where ℓ(·) is a loss that compares the network prediction with the ground-truth output, such as the logistic loss for classification or the square loss for regression, and r(·) is a regularizer acting on the network parameters. Popular choices for such a regularizer include weight decay, i.e., r(·) is the (squared) ℓ2-norm, or sparsity-inducing norms, e.g., the ℓ1-norm.
Recall that our goal here is to automatically determine the number of neurons in each layer of the
network. We propose to do this by starting from an overcomplete network and canceling the influence
of some of its neurons. Note that none of the standard regularizers mentioned above achieve this goal:
The former favors small parameter values, and the latter tends to cancel out individual parameters, but
not complete neurons. In fact, a neuron is encoded by a group of parameters, and our goal therefore
translates to making entire groups go to zero. To achieve this, we make use of the notion of group
sparsity [Yuan and Lin., 2007]. In particular, we write our regularizer as
$$r(\Theta) = \sum_{l=1}^{L} \lambda_l \sqrt{P_l} \sum_{n=1}^{N_l} \big\|\theta_l^n\big\|_2, \qquad (2)$$
where, without loss of generality, we assume that the parameters of each neuron in layer l are grouped
in a vector of size P_l, and where λ_l sets the influence of the penalty. Note that, in the general case,
this weight can be different for each layer l. In practice, however, we found most effective to have
two different weights: a relatively small one for the first few layers, and a larger weight for the
remaining ones. This effectively prevents killing too many neurons in the first few layers, and thus
retains enough information for the remaining ones.
While group sparsity lets us effectively remove some of the neurons, exploiting standard regularizers
on the individual parameters has proven effective in the past for generalization purpose [Bartlett,
1996, Krogh and Hertz, 1992, Theodoridis, 2015, Collins and Kohli, 2014]. To further leverage this
idea within our automatic model selection approach, we propose to exploit the sparse group Lasso
idea of [Simon et al., 2013]. This lets us write our regularizer as
$$r(\Theta) = \sum_{l=1}^{L} \left( (1-\alpha)\,\lambda_l \sqrt{P_l} \sum_{n=1}^{N_l} \big\|\theta_l^n\big\|_2 + \alpha\,\lambda_l \big\|\theta_l\big\|_1 \right), \qquad (3)$$
where α ∈ [0, 1] sets the relative influence of both terms. Note that α = 0 brings us back to the regularizer of Eq. 2. In practice, we experimented with both α = 0 and α = 0.5.
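As a concrete illustration, the regularizer of Eq. 3 (with Eq. 2 recovered at α = 0) can be evaluated on the per-neuron groups illustrated earlier; this is a sketch of ours, and lam stands for the layer weight λ_l.

```python
# Sketch (ours) of the sparse group Lasso regularizer of Eq. 3 for one layer;
# summing this over layers gives r(Theta). alpha = 0 recovers Eq. 2.
import numpy as np

def layer_regularizer(groups, lam, alpha=0.5):
    P_l = groups[0].size
    group_term = np.sqrt(P_l) * sum(np.linalg.norm(g) for g in groups)
    l1_term = sum(np.abs(g).sum() for g in groups)
    return (1.0 - alpha) * lam * group_term + alpha * lam * l1_term
```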
To solve Problem (1) with the regularizer defined by either Eq. 2 or Eq. 3, we follow a proximal
gradient descent approach [Parikh and Boyd, 2014]. In our context, proximal gradient descent can be
thought of as iteratively taking a gradient step of size t with respect to the loss Σ_{i=1}^N ℓ(y_i, f(x_i, Θ))
only, and, from the resulting solution, applying the proximal operator of the regularizer. In our case,
since the groups are non-overlapping, we can apply the proximal operator to each group independently.
Specifically, for a single group, this translates to updating the parameters as
$$\bar{\theta}_l^n = \operatorname*{argmin}_{\theta_l^n} \; \frac{1}{2t}\big\|\theta_l^n - \hat{\theta}_l^n\big\|_2^2 + r(\Theta), \qquad (4)$$
where θ̂_l^n is the solution obtained from the loss-based gradient step. Following the derivations
of [Simon et al., 2013], and focusing on the regularizer of Eq. 3 of which Eq. 2 is a special case, this
problem has a closed-form solution given by
$$\bar{\theta}_l^n = \left(1 - \frac{t\,(1-\alpha)\,\lambda_l \sqrt{P_l}}{\big\|S(\hat{\theta}_l^n, t\alpha\lambda_l)\big\|_2}\right)_{\!+} S\big(\hat{\theta}_l^n, t\alpha\lambda_l\big), \qquad (5)$$
where (·)_+ corresponds to taking the maximum between the argument and 0, and S(·) is the soft-thresholding operator defined elementwise as
$$\big(S(z, \tau)\big)_j = \operatorname{sign}(z_j)\,\big(|z_j| - \tau\big)_+ \,. \qquad (6)$$
The learning algorithm therefore proceeds by iteratively taking a gradient step based on the loss only,
and updating the variables of all the groups according to Eq. 5. In practice, we follow a stochastic
gradient descent approach and work with mini-batches. In this setting, we apply the proximal operator
at the end of each epoch and run the algorithm for a fixed number of epochs.
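For concreteness, here is a minimal sketch (ours, not the authors' code) of the per-group proximal update of Eqs. 5 and 6: theta_hat is a neuron's group after the loss-only gradient step, t the step size, lam the layer weight λ_l, and alpha as in Eq. 3.

```python
# Sketch (ours) of the closed-form proximal update of Eqs. 5-6.
import numpy as np

def soft_threshold(z, tau):
    # Eq. 6: elementwise soft-thresholding.
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_group(theta_hat, t, lam, alpha):
    s = soft_threshold(theta_hat, t * alpha * lam)   # inner l1 step
    norm_s = np.linalg.norm(s)
    if norm_s == 0.0:                                # neuron fully zeroed out
        return np.zeros_like(theta_hat)
    scale = 1.0 - t * (1.0 - alpha) * lam * np.sqrt(theta_hat.size) / norm_s
    return max(scale, 0.0) * s                       # group shrinkage of Eq. 5
```

With alpha = 0 this reduces to pure group shrinkage: a neuron is either rescaled as a whole or set exactly to zero, which is what drives the model selection.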
When learning terminates, the parameters of some of the neurons will have gone to zero. We can
thus remove these neurons entirely, since they have no effect on the output. Furthermore, when
considering fully-connected layers, the neurons acting on the output of zeroed-out neurons of the
previous layer also become useless, and can thus be removed. Ultimately, removing all these neurons
yields a more compact architecture than the original, overcomplete one.
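The removal step itself can be sketched as follows (ours; tensor shapes and names are illustrative assumptions): neurons whose group is zero are dropped, together with the corresponding, now-useless input weights of the next layer.

```python
# Sketch (ours) of pruning two consecutive fully-connected layers after
# training: W1 (N1, P_in), b1 (N1,), and W2 (N2, N1) consuming layer 1's output.
import numpy as np

def prune_pair(W1, b1, W2, tol=0.0):
    group_norms = np.sqrt((W1 ** 2).sum(axis=1) + b1 ** 2)
    alive = group_norms > tol                 # neurons that survived training
    return W1[alive], b1[alive], W2[:, alive]
```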
4 Experiments
In this section, we demonstrate the ability of our method to automatically determine the number of
neurons on the task of large-scale classification. To this end, we study three different architectures and
analyze the behavior of our method on three different datasets, with a particular focus on parameter
reduction. Below, we first describe our experimental setup and then discuss our results.
4.1 Experimental setup
Datasets: For our experiments, we used two large-scale image classification datasets, ImageNet [Russakovsky et al., 2015] and Places2-401 [Zhou et al., 2015]. Furthermore, we conducted
additional experiments on the character recognition dataset of [Jaderberg et al., 2014a]. ImageNet
contains over 15 million labeled images split into 22,000 categories. We used the ILSVRC-2012 [Russakovsky et al., 2015] subset consisting of 1000 categories, with 1.2 million training images and 50,000 validation images. Places2-401 [Zhou et al., 2015] is a large-scale dataset specifically created
for high-level visual understanding tasks. It consists of more than 10 million images with 401 unique
scene categories. The training set comprises between 5,000 and 30,000 images per category. Finally,
the ICDAR character recognition dataset of [Jaderberg et al., 2014a] consists of 185,639 training and
5,198 test samples split into 36 categories. The training samples depict characters collected from
text observed in a number of scenes and from synthesized datasets, while the test set comes from the
ICDAR2003 training set after removing all non-alphanumeric characters.
Architectures: For ImageNet and Places2-401, our architectures are based on the VGG-B network
(BNet) [Simonyan and Zisserman, 2014] and on DecomposeMe8 (Dec8 ) [Alvarez and Petersson,
2016]. BNet consists of 10 convolutional layers followed by three fully-connected layers. In our
experiments, we removed the first two fully-connected layers. As will be shown in our results, while
this reduces the number of parameters, it maintains the accuracy of the original network. Below, we
refer to this modified architecture as BNetC . Following the idea of low-rank filters, Dec8 consists of
16 convolutional layers with 1D kernels, effectively modeling 8 2D convolutional layers. For ICDAR,
we used an architecture similar to the one of [Jaderberg et al., 2014b]. The original architecture
consists of three convolutional layers with a maxout layer [Goodfellow et al., 2013] after each
convolution, followed by one fully-connected layer. [Jaderberg et al., 2014b] first trained this network
and then decomposed each 2D convolution into 2 1D kernels. Here, instead, we directly start with 6
1D convolutional layers. Furthermore, we replaced the maxout layers with max-pooling. As shown
below, this architecture, referred to as Dec3 , yields similar results as the original one, referred to as
MaxOut.
Implementation details: For the comparison to be fair, all models including the baselines were
trained from scratch on the same computer using the same random seed and the same framework.
More specifically, for ImageNet and Places2-401, we used the torch-7 multi-gpu framework [Collobert
et al., 2011] on a Dual Xeon 8-core E5-2650 with 128GB of RAM using three Kepler Tesla K20m
GPUs in parallel. All models were trained for a total of 55 epochs with 12, 000 batches per epoch
and a batch size of 48 and 180 for BNet and Dec8 , respectively. These variations in batch size were
mainly due to the memory and runtime limitations of BNet. The learning rate was set to an initial
value of 0.01 and then multiplied by 0.1. Data augmentation was done through random crops and
random horizontal flips with probability 0.5. For ICDAR, we trained each network on a single Tesla
K20m GPU for a total 45 epochs with a batch size of 256 and 1,000 iterations per epoch. In this
case, the learning rate was set to an initial value of 0.1 and multiplied by 0.1 in the second, seventh
and fifteenth epochs. We used a momentum of 0.9. In terms of hyper-parameters, for large-scale
classification, we used λ_l = 0.102 for the first three layers and λ_l = 0.255 for the remaining ones. For ICDAR, we used λ_l = 5.1 for the first layer and λ_l = 10.2 for the remaining ones.
Evaluation: We measure classification performance as the top-1 accuracy using the center crop,
referred to as Top-1. We compare the results of our approach with those obtained by training
the same architectures, but without our model selection technique. We also provide the results of
additional, standard architectures. Furthermore, since our approach can determine the number of
neurons per layer, we also computed results with our method starting for different number of neurons,
referred to as M below, in the overcomplete network. In addition to accuracy, we also report, for the
convolutional layers, the percentage of neurons set to 0 by our approach (neurons), the corresponding
percentage of zero-valued parameters (group param), the total percentage of 0 parameters (total
param), which additionally includes the parameters set to 0 in non-completely zeroed-out neurons,
and the total percentage of zero-valued parameters induced by the zeroed-out neurons (total induced),
which additionally includes the neurons in each layer, including the last fully-connected layer, that
have been rendered useless by the zeroed-out neurons of the previous layer.
4.2 Results
Below, we report our results on ImageNet and ICDAR. The results on Places-2 are provided as
supplementary material.
ImageNet: We first start by discussing our results on ImageNet. For this experiment, we used
BNetC and Dec8 , both with the group sparsity (GS) regularizer of Eq. 2. Furthermore, in the case of
Table 1: Top-1 accuracy results for several state-of-the-art architectures and our method on ImageNet.

    Model                             Top-1 acc. (%)    Model                Top-1 acc. (%)
    BNet                              62.5              Ours-BNetC-GS        62.7
    BNetC                             61.1              Ours-Dec8-GS         64.8
    ResNet50 (a) [He et al., 2015]    67.3              Ours-Dec8-640-SGL    67.5
    Dec8                              64.8              Ours-Dec8-640-GS     68.6
    Dec8-640                          66.9              Ours-Dec8-768-GS     68.0
    Dec8-768                          68.1

(a) Trained over 55 epochs using a batch size of 128 on two TitanX with code publicly available.
[Figure 1, left panel: bar plot comparing the number of neurons per layer of the original BNetC with that obtained using our approach.]

BNetC on ImageNet (in %), GS:

    neurons        12.70
    group param    13.59
    total param    13.59
    total induced  27.38
    accuracy gap    1.6

Figure 1: Parameter reduction on ImageNet using BNetC. (Left) Comparison of the number of neurons per layer of the original network with that obtained using our approach. (Right) Percentage of zeroed-out neurons and parameters, and accuracy gap between our network and the original one. Note that we outperform the original network while requiring much fewer parameters.
Dec8 , we evaluated two additional versions that, instead of the 512 neurons per layer of the original
architecture, have M = 640 and M = 768 neurons per layer, respectively. Finally, in the case of
M = 640, we further evaluated both the group sparsity regularizer of Eq. 2 and the sparse group Lasso
(SGL) regularizer of Eq. 3 with α = 0.5. Table 1 compares the top-1 accuracy of our approach with
that of the original architectures and of other baselines. Note that, with the exception of Dec8 -768, all
our methods yield an improvement over the original network, with up to 1.6% difference for BNetC
and 2.45% for Dec8 -640. As an additional baseline, we also evaluated the naive approach consisting
of reducing each layer in the model by a constant factor of 25%. The corresponding two instances,
Dec8-25% and Dec8-25%-640, yield 64.5% and 65.8% accuracy, respectively.
More importantly, in Figure 1 and Figure 2, we report the relative saving obtained with our approach
in terms of percentage of zeroed-out neurons/parameters for BNetC and Dec8 , respectively. For
BNetC , in Figure 1, our approach reduces the number of neurons by over 12%, while improving its
generalization ability, as indicated by the accuracy gap in the bottom row of the table. As can be seen
from the bar-plot, the reduction in the number of neurons is spread all over the layers with the largest
difference in the last layer. As a direct consequence, the number of neurons in the subsequent fully
connected layer is significantly reduced, leading to 27% reduction in the total number of parameters.
For Dec8 , in Figure 2, we can see that, when considering the original architecture with 512 neurons
per layer, our approach only yields a small reduction in parameter numbers with minimal gain in
performance. However, when we increase the initial number of neurons in each layer, the benefits
of our approach become more significant. For M = 640, when using the group sparsity regularizer,
we see a reduction of the number of parameters of more than 19%, with improved generalization
ability. The reduction is even larger, 23%, when using the sparse group Lasso regularizer. In the case
of M = 768, we managed to remove 26% of the neurons, which translates to 48% of the parameters.
While, here, the accuracy is slightly lower than that of the initial network, it is in fact higher than that
of the original Dec8 network, as can be seen in Table 1.
Interestingly, during learning, we also noticed a significant reduction in the training-validation
accuracy gap when applying our regularization technique. For instance, for Dec8 -768, which zeroes
out 48.2% of the parameters, we found the training-validation gap to be 28.5% smaller than in the
original network (from 14% to 10%). We believe that this indicates that networks trained using our
approach have better generalization ability, even if they have fewer parameters. A similar phenomenon
was also observed for the other architectures used in our experiments.
We now analyze the sensitivity of our method with respect to ?l (see Eq. (2)). To this end, we
considered Dec8-768-GS and varied the value of the parameter in the range λ_l ∈ [0.051, 0.51]. More specifically, we considered 20 different pairs of values, (λ_1, λ_2), with the former applied to the
Dec8 on ImageNet (in %):

                     Dec8     Dec8-640         Dec8-768
                     GS       SGL      GS      GS
    neurons          3.39     12.42    10.08   26.83
    group param      2.46     13.69    12.48   31.53
    total param      2.46     22.72    12.48   31.63
    total induced    2.82     23.33    19.11   48.28
    accuracy gap     0.01     0.94     2.45    -0.02

Figure 2: Parameter reduction using Dec8 on ImageNet. Note that we significantly reduce the number of parameters and, in almost all cases, improve recognition accuracy over the original network.
Dec3 on ICDAR (in %):

                     SGL      GS
    neurons          38.64    55.11
    group param      32.57    66.48
    total param      72.41    66.48
    total induced    72.08    80.45
    accuracy gap     1.24     1.38

Top-1 acc. on ICDAR:

    MaxOutDec (a)        91.3%
    MaxOut (b)           89.8%
    MaxPool2Dneurons     83.8%
    Dec3 (baseline)      89.3%
    Ours-Dec3-SGL        89.9%
    Ours-Dec3-GS         90.1%

(a) Results from Jaderberg et al. [2014a] using a MaxOut layer instead of MaxPooling and decompositions as a post-processing step.
(b) Results from Jaderberg et al. [2014a].

Figure 3: Experimental results on ICDAR using Dec3. Note that our approach reduces the number of parameters by 72% while improving the accuracy of the original network.
first three layers and the latter to the remaining ones. The details of this experiment are reported in
supplementary material. Altogether, we only observed small variations in validation accuracy (std of
0.33%) and in number of zeroed-out neurons (std of 1.1%).
ICDAR: Finally, we evaluate our approach on a smaller dataset where architectures have not yet
been heavily tuned. For this dataset, we used the Dec3 architecture, where the last two layers initially
contain 512 neurons. Our goal here is to obtain an optimal architecture for this dataset. Figure 3
summarizes our results using GS and SGL regularization and compares them to state-of-the-art
baselines. From the comparison between MaxPool2Dneurons and Dec3 , we can see that learning 1D
filters leads to better performance than an equivalent network with 2D kernels. More importantly, our
algorithm reduces by up to 80% the number of parameters, while further improving the performance
of the original network. We believe that these results evidence that our algorithm effectively performs
automatic model selection for a given (classification) task.
4.3 Benefits at test time
We now discuss the benefits of our algorithm at test time. For simplicity, our implementation does not
remove neurons during training. However, these neurons can be effectively removed after training,
thus yielding a smaller network to deploy at test time. Not only does this entail benefits in terms of
memory requirement, as illustrated above when looking at the reduction in number of parameters,
but it also leads to speedups compared to the complete network. To demonstrate this, in Table 2,
we report the relative runtime speedups obtained by removing the zeroed-out neurons. For BNet
and Dec8 , these speedups were obtained using ImageNet, while Dec3 was tested on ICDAR. Note
that significant speedups can be achieved, depending on the architecture. For instance, using BNetC ,
we achieve a speedup of up to 13% on ImageNet, while with Dec3 on ICDAR the speedup reaches
almost 50%. The right-hand side of Table 2 shows the relative memory saving of our networks. These
numbers were computed from the actual memory requirements in MB of the networks. In terms of
parameters, for ImageNet, Dec8 -768 yields a 46% reduction, while Dec3 increases this saving to
more than 80%. When looking at the actual features computed in each layer of the network, we reach
a 10% memory saving for Dec8 -768 and a 25% saving for Dec3 . We believe that these numbers
clearly evidence the benefits of our approach in terms of speed and memory footprint at test time.
Note also that, once the models are trained, additional parameters can be pruned using, at the level of
individual parameters, `1 regularization and a threshold [Liu et al., 2015]. On ImageNet, with our
Table 2: Gain in runtime (actual clock time) and memory requirement of our reduced networks.
Note that, for some configurations, our final networks achieve a speedup of close to 50%. Similarly,
we achieve memory savings of up to 82% in terms of parameters, and up to 25% in terms of the
features computed by the network. The runtimes were obtained using a single Tesla K20m and
memory estimations using RGB images of size 224 × 224 for Ours-BNetC-GS, Ours-Dec8-640-GS and Ours-Dec8-768-GS, and gray-level images of size 32 × 32 for Ours-Dec3-GS.

                          Relative speed-up (%)               Relative memory savings,
                          by batch size                       batch size 1 (%)
    Model                 1       2       8       16          Params    Features
    Ours-BNetC-GS         10.04   8.97    13.01   13.69       12.06     18.54
    Ours-Dec8-640-GS      -0.1    5.44    3.91    4.37        26.51     2.13
    Ours-Dec8-768-GS      15.29   17.11   15.99   15.62       46.73     10.00
    Ours-Dec3-GS          35.62   43.07   44.40   49.63       82.35     25.00
Dec8-768-GS model and the ℓ1 weight set to 0.0001 as in [Liu et al., 2015], this method yields 1.34M zero-valued parameters, compared to 7.74M for our approach, i.e., an 82% relative reduction in the
number of individual parameters for our approach.
5 Conclusions
We have introduced an approach to automatically determining the number of neurons in each layer
of a deep network. To this end, we have proposed to rely on a group sparsity regularizer, which
has allowed us to jointly learn the number of neurons and the parameter values in a single, coherent
framework. Not only does our approach estimate the number of neurons, it also yields a more compact
architecture than the initial overcomplete network, thus saving both memory and computation at test
time. Our experiments have demonstrated the benefits of our method, as well as its generalizability
to different architectures. One current limitation of our approach is that the number of layers in the
network remains fixed. To address this, in the future, we intend to study architectures where each
layer can potentially be bypassed entirely, thus ultimately canceling out its influence. Furthermore,
we plan to evaluate the behavior of our approach on other types of problems, such as regression
networks and autoencoders.
Acknowledgments
The authors thank John Taylor and Tim Ho for helpful discussions and their continuous support through using
the CSIRO high-performance computing facilities. The authors also thank NVIDIA for generous hardware
donations.
References
J.M. Alvarez and L. Petersson. Decomposeme: Simplifying convnets for end-to-end learning. CoRR,
abs/1606.05426, 2016.
T. Ash. Dynamic node creation in backpropagation networks. Connection Science, 1(4):365-375, 1989.
P. L. Bartlett. For valid generalization the size of the weights is more important than the size of the network. In
NIPS, 1996.
M. G. Bello. Enhanced training algorithms, and integrated training/architecture selection for multilayer perceptron networks. IEEE Transactions on Neural Networks, 3(6):864-875, Nov 1992.
Yu Cheng, Felix X. Yu, Rog?rio Schmidt Feris, Sanjiv Kumar, Alok N. Choudhary, and Shih-Fu Chang. An
exploration of parameter redundancy in deep networks with circulant projections. In ICCV, 2015.
M. D. Collins and P. Kohli. Memory Bounded Deep Convolutional Networks. CoRR, abs/1412.1442, 2014.
R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In
BigLearn, NIPS Workshop, 2011.
M. Denil, B. Shakibi, L. Dinh, M.A. Ranzato, and N. de Freitas. Predicting parameters in deep learning. CoRR,
abs/1306.0543, 2013.
E. L. Denton, W. Zaremba, J. Bruna, Y. LeCun, and R. Fergus. Exploiting linear structure within convolutional
networks for efficient evaluation. In NIPS. 2014.
Y. Gong, L. Liu, M. Yang, and L. D. Bourdev. Compressing deep convolutional networks using vector
quantization. In CoRR, volume abs/1412.6115, 2014.
I. J. Goodfellow, D. Warde-farley, M. Mirza, A. Courville, and Y. Bengio. Maxout networks. In ICML, 2013.
B. Hassibi, D. G. Stork, and G. J. Wolff. Optimal brain surgeon and general network pruning. In ICNN, 1993.
K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In CoRR, volume
abs/1512.03385, 2015.
G. E. Hinton, O. Vinyals, and J. Dean. Distilling the knowledge in a neural network. In arXiv, 2014.
M. Jaderberg, A. Vedaldi, and A. Zisserman. Deep features for text spotting. In ECCV, 2014a.
M. Jaderberg, A. Vedaldi, and A. Zisserman. Speeding up convolutional neural networks with low rank
expansions. In BMVC, 2014b.
C. Ji, R. R. Snapp, and D. Psaltis. Generalizing smoothness constraints from discrete samples. Neural
Computation, 2(2):188-197, June 1990. ISSN 0899-7667.
A. Krogh and J. A. Hertz. A simple weight decay can improve generalization. In NIPS, 1992.
Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In NIPS, 1990.
B. Liu, M. Wang, H. Foroosh, M. Tappen, and M. Penksy. Sparse convolutional neural networks. In CVPR,
2015.
G. F Montufar, R. Pascanu, K. Cho, and Y. Bengio. On the number of linear regions of deep neural networks. In
NIPS. 2014.
M. Mozer and P. Smolensky. Skeletonization: A technique for trimming the fat from a network via relevance
assessment. In NIPS, 1988.
K. Murray and D. Chiang. Auto-sizing neural networks: With applications to n-gram language models. CoRR,
abs/1508.05051, 2015.
N. Parikh and S. Boyd. Proximal algorithms. Found. Trends Optim., 1(3):127-239, January 2014.
R. Reed. Pruning algorithms - a survey. IEEE Transactions on Neural Networks, 4(5):740-747, Sep 1993.
A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio. Fitnets: Hints for thin deep nets. In
ICLR, 2015.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein,
A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
S. Tu, Y. Xue, X. Zhang, X. Huang, and J. Wang. Learning block group sparse representation combined with convolutional neural networks for RGB-D object recognition. Journal of Fiber Bioengineering and Informatics,
7(4):603, 2014.
N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and
Graphical Statistics, 2013.
K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR,
abs/1409.1556, 2014.
R. K Srivastava, K. Greff, and J. Schmidhuber. Training very deep networks. In NIPS, 2015.
S. Theodoridis. Machine Learning, A Bayesian and Optimization Perspective, volume 8. Elsevier, 2015.
A. S. Weigend, D. Rumelhart, and B. A. Huberman. Generalization by weight-elimination with application to
forecasting. In NIPS, 1991.
M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal
Statistical Society, Series B, 68(1):49-67, 2007.
B. Zhou, A. Khosla, A. Lapedriza, A. Torralba, and A. Oliva. Places: An image database for deep scene
understanding. 2015.
H. Zhou, J. M. Alvarez, and F. Porikli. Less is more: Towards compact CNNs. In ECCV, 2016.
k*-Nearest Neighbors: From Global to Local
Oren Anava
The Voleon Group
[email protected]
Kfir Y. Levy
ETH Zurich
[email protected]
Abstract
The weighted k-nearest neighbors algorithm is one of the most fundamental nonparametric methods in pattern recognition and machine learning. The question of
setting the optimal number of neighbors as well as the optimal weights has received
much attention throughout the years; nevertheless, this problem seems to have
remained unsettled. In this paper we offer a simple approach to locally weighted
regression/classification, where we make the bias-variance tradeoff explicit. Our
formulation enables us to phrase a notion of optimal weights, and to efficiently find
these weights as well as the optimal number of neighbors efficiently and adaptively,
for each data point whose value we wish to estimate. The applicability of our
approach is demonstrated on several datasets, showing superior performance over
standard locally weighted methods.
1 Introduction
The k-nearest neighbors (k-NN) algorithm [1, 2] and Nadaraya-Watson estimation [3, 4] are the cornerstones of non-parametric learning. Owing to their simplicity and flexibility, these procedures have become the methods of choice in many scenarios [5], especially in settings where the underlying
model is complex. Modern applications of the k-NN algorithm include recommendation systems [6],
text categorization [7], heart disease classification [8], and financial market prediction [9], amongst
others.
A successful application of the weighted k-NN algorithm requires a careful choice of three ingredients:
the number of nearest neighbors k, the weight vector α, and the distance metric. The latter requires
domain knowledge and is thus henceforth assumed to be set and known in advance to the learner.
Surprisingly, even under this assumption, the problem of choosing the optimal k and α is not fully
understood and has been studied extensively since the 1950?s under many different regimes. Most
of the theoretic work focuses on the asymptotic regime in which the number of samples n goes to
infinity [10, 11, 12], and ignores the practical regime in which n is finite. More importantly, the vast
majority of k-NN studies aim at finding an optimal value of k per dataset, which seems to overlook
the specific structure of the dataset and the properties of the data points whose labels we wish to
estimate. While kernel based methods such as Nadaraya-Watson enable an adaptive choice of the
weight vector α, there still remains the question of how to choose the kernel's bandwidth,
could be thought of as the parallel of the number of neighbors k in k-NN. Moreover, there is no
principled approach towards choosing the kernel function in practice.
In this paper we offer a coherent and principled approach to adaptively choosing the number of
neighbors k and the corresponding weight vector α ∈ ℝ^k per decision point. Given a new decision
point, we aim to find the best locally weighted predictor, in the sense of minimizing the distance
between our prediction and the ground truth. In addition to yielding predictions, our approach
enables us to provide a per-decision-point guarantee for the confidence of our predictions. Fig. 1
illustrates the importance of choosing k adaptively. In contrast to previous works on non-parametric
regression/classification, we do not assume that the data {(x_i, y_i)}_{i=1}^n arrives from some (unknown) underlying distribution, but rather make the weaker assumption that the labels {y_i}_{i=1}^n are independent
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: three panels, (a) First scenario, (b) Second scenario, (c) Third scenario.]

Figure 1: Three different scenarios. In all three scenarios, the same data points x_1, …, x_n ∈ ℝ^2 are
given (represented by black dots). The red dot in each of the scenarios represents the new data point
whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider
only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit
by considering more neighbors.
given the data points {x_i}_{i=1}^n, allowing the latter to be chosen arbitrarily. Alongside providing a
theoretical basis for our approach, we conduct an empirical study that demonstrates its superiority
with respect to the state-of-the-art.
This paper is organized as follows. In Section 2 we introduce our setting and assumptions, and derive
the locally optimal prediction problem. In Section 3 we analyze the solution of the above prediction
problem, and introduce a greedy algorithm designed to efficiently find the exact solution. Section 4
presents our experimental study, and Section 5 concludes.
1.1 Related Work
Asymptotic universal consistency is the most widely known theoretical guarantee for k-NN. This
powerful guarantee implies that as the number of samples n goes to infinity, with k → ∞ and k/n → 0, the risk of the k-NN rule converges to the risk of the Bayes classifier for any underlying data distribution. Similar guarantees hold for weighted k-NN rules, with the additional assumptions that Σ_{i=1}^k α_i = 1 and max_{i≤n} α_i → 0 [12, 10]. In the regime of practical interest where the number of samples n is finite, using k = ⌊√n⌋ neighbors is a widely mentioned rule of
thumb [10]. Nevertheless, this rule often yields poor results, and in the regime of finite samples it is
usually advised to choose k using cross-validation. Similar consistency results apply to kernel based
local methods [13, 14].
A novel study of k-NN by Samworth [11] derives a closed-form expression for the optimal weight
vector, and extracts the optimal number of neighbors. However, this result is only optimal under
several restrictive assumptions, and only holds for the asymptotic regime where n → ∞. Furthermore,
the above optimal number of neighbors/weights do not adapt, but are rather fixed over all decision
points given the dataset. In the context of kernel based methods, it is possible to extract an expression
for the optimal kernel's bandwidth [14, 15]. Nevertheless, this bandwidth is fixed over all decision
points, and is only optimal under several restrictive assumptions.
There exist several heuristics to adaptively choosing the number of neighbors and weights separately
for each decision point. In [16, 17] it is suggested to use local cross-validation in order to adapt the
value of k to different decision points. Conversely, Ghosh [18] takes a Bayesian approach towards
choosing k adaptively. Focusing on the multiclass classification setup, it is suggested in [19] to
consider different values of k for each class, choosing k proportionally to the class populations.
Similarly, there exist several approaches towards adaptively choosing the kernel's bandwidth for kernel-based methods [20, 21, 22, 23].
Learning the distance metric for k-NN was extensively studied throughout the last decade. There
are several approaches towards metric learning, which roughly divide into linear/non-linear learning
methods. It was found that metric learning may significantly affect the performance of k-NN in
numerous applications, including computer vision, text analysis, program analysis and more. A
comprehensive survey by Kulis [24] provides a review of the metric learning literature. Throughout
this work we assume that the distance metric is fixed, and thus the focus is on finding the best (in a
sense) values of k and α for each new data point.
Two comprehensive monographs, [10] and [25], provide an extensive survey of the existing literature
regarding k-NN rules, including theoretical guarantees, useful practices, limitations and more.
2 Problem Definition
In this section we present our setting and assumptions, and formulate the locally weighted optimal
estimation problem. Recall we seek to find the best local prediction in the sense of minimizing the
distance between this prediction and the ground truth. The problem at hand is thus defined as follows:
We are given n data points x_1, …, x_n ∈ ℝ^d, and n corresponding labels¹ y_1, …, y_n ∈ ℝ. Assume that for any i ∈ {1, …, n} = [n] it holds that y_i = f(x_i) + ε_i, where f(·) and ε_i are such that:
(1) f(·) is a Lipschitz continuous function: For any x, y ∈ ℝ^d it holds that |f(x) − f(y)| ≤ L · d(x, y), where the distance function d(·, ·) is set and known in advance. This assumption is rather standard when considering nearest neighbors-based algorithms, and is required in our analysis to bound the so-called bias term (to be later defined). In the binary classification setup we assume that f : ℝ^d → [0, 1], and that given x its label y ∈ {0, 1} is distributed Bernoulli(f(x)).
(2) the ε_i's are noise terms: For any i ∈ [n] it holds that E[ε_i | x_i] = 0 and |ε_i| ≤ b for some given b > 0. In addition, it is assumed that given the data points {x_i}_{i=1}^n the noise terms {ε_i}_{i=1}^n are independent. This assumption is later used in our analysis to apply Hoeffding's inequality and bound the so-called variance term (to be later defined). Alternatively, we could assume that E[ε_i² | x_i] ≤ b (instead of |ε_i| ≤ b), and apply Bernstein inequalities. The results and analysis remain qualitatively similar.
Given a new data point x_0, our task is to estimate f(x_0), where we restrict the estimator f̂(x_0) to
be of the form f̂(x_0) = Σ_{i=1}^n α_i y_i. That is, the estimator is a weighted average of the given noisy
labels. Formally, we aim at minimizing the absolute distance between our prediction and the ground
truth f(x_0), which translates into

    min_{α ∈ Δ_n}  | Σ_{i=1}^n α_i y_i − f(x_0) |    (P1),

where we minimize over the simplex Δ_n = {α ∈ ℝ^n : Σ_{i=1}^n α_i = 1 and α_i ≥ 0, ∀i}. Decomposing
the objective of (P1) into a sum of bias and variance terms, we arrive at the following relaxed
objective:

    | Σ_{i=1}^n α_i y_i − f(x_0) |  =  | Σ_{i=1}^n α_i (y_i − f(x_i) + f(x_i)) − f(x_0) |
                                    ≤  | Σ_{i=1}^n α_i ε_i | + | Σ_{i=1}^n α_i (f(x_i) − f(x_0)) |
                                    ≤  | Σ_{i=1}^n α_i ε_i | + L · Σ_{i=1}^n α_i d(x_i, x_0).
By Hoeffding's inequality (see supplementary material) it follows that |Σ_{i=1}^n α_i ε_i| ≤ C·‖α‖₂
for C = b·√(2 log(2/δ)), w.p. at least 1 − δ. We thus arrive at a new optimization problem (P2), such that
solving it would yield a guarantee for (P1) with high probability:

    min_{α ∈ Δ_n}  C·‖α‖₂ + L · Σ_{i=1}^n α_i d(x_i, x_0)    (P2).
¹ Note that our analysis holds for both setups of classification/regression. For brevity we use
classification-task terminology, relating to the y_i's as labels. Our analysis extends directly to the
regression setup.
The first term in (P2) corresponds to the noise in the labels and is therefore denoted as the variance
term, whereas the second term corresponds to the distance between f(x_0) and {f(x_i)}_{i=1}^n and is
thus denoted as the bias term.
3 Algorithm and Analysis
In this section we discuss the properties of the optimal solution of (P2), and present a greedy
algorithm designed to efficiently find the exact solution of the latter objective (see Section 3.1).
Given a decision point x_0, Theorem 3.1 demonstrates that the optimal weight α_i of the data point x_i
decreases linearly with d(x_i, x_0) (closer points are given more weight). Interestingly, this
weight decay is quite slow compared to popular weight kernels, which utilize sharper decay schemes,
e.g., exponential or inversely-proportional. Theorem 3.1 also implies a cutoff effect, meaning that there
exists k* ∈ [n] such that only the k* nearest neighbors of x_0 contribute to the prediction of its label.
Note that both the weights and k* may adapt from one x_0 to another. Also notice that the optimal weights
depend on a single parameter L/C, namely the Lipschitz-to-noise ratio. As L/C grows, k* tends to
be smaller, which is quite intuitive.
Without loss of generality, assume that the points are ordered in ascending order according to their
distance from x_0, i.e., d(x_1, x_0) ≤ d(x_2, x_0) ≤ … ≤ d(x_n, x_0). Also, let β ∈ ℝ^n be such that
β_i = L·d(x_i, x_0)/C. Then, the following is our main theorem:

Theorem 3.1. There exists λ > 0 such that the optimal solution of (P2) is of the form

    α_i* = (λ − β_i) · 1{β_i < λ} / Σ_{i=1}^n (λ − β_i) · 1{β_i < λ}.    (1)

Furthermore, the value of (P2) at the optimum is C·λ.
Following is a direct corollary of the above theorem:

Corollary 3.2. There exists 1 ≤ k* ≤ n such that for the optimal solution of (P2) the following applies:

    α_i* > 0, ∀i ≤ k*,   and   α_i* = 0, ∀i > k*.
Proof of Theorem 3.1. Notice that (P2) may be written as follows:

    min_{α ∈ Δ_n}  C·(‖α‖₂ + αᵀβ)    (P2).

We henceforth ignore the parameter C. In order to find the solution of (P2), let us first consider its
Lagrangian:

    L(α, λ, θ) = ‖α‖₂ + αᵀβ + λ·(1 − Σ_{i=1}^n α_i) − Σ_{i=1}^n θ_i α_i,

where λ ∈ ℝ is the multiplier of the equality constraint Σ_i α_i = 1, and θ_1, …, θ_n ≥ 0 are the
multipliers of the inequality constraints α_i ≥ 0, ∀i ∈ [n]. Since (P2) is convex, any solution
satisfying the KKT conditions is a global minimum. Deriving the Lagrangian with respect to α, we
get that for any i ∈ [n]:

    α_i / ‖α‖₂ = λ − β_i + θ_i.

Denote by α* the optimal solution of (P2). By the KKT conditions, for any α_i* > 0 it follows that
θ_i = 0. Otherwise, for any i such that α_i* = 0 it follows that θ_i ≥ 0, which implies λ ≤ β_i. Thus, for
any nonzero weight α_i* > 0 the following holds:

    α_i* / ‖α*‖₂ = λ − β_i.    (2)

Squaring and summing Equation (2) over all the nonzero entries of α*, we arrive at the following
equation for λ:

    1 = Σ_{α_i*>0} (α_i*)² / ‖α*‖₂²  =  Σ_{α_i*>0} (λ − β_i)².    (3)
Algorithm 1 k*-NN
Input: vector of ordered distances β ∈ ℝ^n, noisy labels y_1, …, y_n ∈ ℝ
Set: λ_0 = β_1 + 1, k = 0
while λ_k > β_{k+1} and k ≤ n − 1 do
    Update: k ← k + 1
    Calculate: λ_k = (1/k) · ( Σ_{i=1}^k β_i + √( k + (Σ_{i=1}^k β_i)² − k·Σ_{i=1}^k β_i² ) )
end while
Return: estimation f̂(x_0) = Σ_i α_i y_i, where α ∈ Δ_n is a weight vector such that
α_i = (λ_k − β_i)·1{β_i < λ_k} / Σ_{i=1}^n (λ_k − β_i)·1{β_i < λ_k}
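To make the procedure concrete, here is a minimal Python sketch of Algorithm 1. The function name,
the NumPy interface, and the assumption that β is passed in pre-sorted are our own illustration, not
part of the paper:

```python
import numpy as np

def k_star_nn(beta, y):
    """Sketch of Algorithm 1: beta[i] = L * d(x_i, x_0) / C, sorted ascending.
    Returns the estimate f_hat(x_0) and the weight vector alpha."""
    n = len(beta)
    lam = beta[0] + 1.0          # lambda_0 = beta_1 + 1 guarantees one iteration
    k, s1, s2 = 0, 0.0, 0.0      # running sums of beta_i and beta_i^2 (O(1) updates)
    while k <= n - 1 and lam > beta[k]:   # while lambda_k > beta_{k+1}
        k += 1
        s1 += beta[k - 1]
        s2 += beta[k - 1] ** 2
        # positive root of k*lam^2 - 2*lam*s1 + (s2 - 1) = 0, i.e. Equation (5);
        # the discriminant is nonnegative for k <= k* (Lemma 3.4)
        lam = (s1 + np.sqrt(k + s1 ** 2 - k * s2)) / k
    w = np.maximum(lam - beta, 0.0)       # (lambda - beta_i) * 1{beta_i < lambda}
    alpha = w / w.sum()
    return float(alpha @ y), alpha
```

Note how updating the two running sums incrementally is what yields the O(k*) running time
discussed later in this section.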
Next, we show that the value of the objective at the optimum is C·λ. Indeed, note that by Equation (2)
and the equality constraint Σ_i α_i* = 1, any α_i* > 0 satisfies

    α_i* = (λ − β_i)/A,   where   A = Σ_{α_i*>0} (λ − β_i).    (4)

Plugging the above into the objective of (P2) yields

    C·‖α*‖₂ + C·α*ᵀβ = (C/A)·√( Σ_{α_i*>0} (λ − β_i)² ) + (C/A)·Σ_{α_i*>0} (λ − β_i)·β_i
                     = (C/A)·( 1 + λ·Σ_{α_i*>0} (λ − β_i) − Σ_{α_i*>0} (λ − β_i)² )
                     = (C/A)·(1 + λ·A − 1) = C·λ,

where in the second equality we wrote β_i = λ − (λ − β_i) and used Equation (3) for the square-root
term, and in the last equality we used Equation (3) again and substituted A = Σ_{α_i*>0} (λ − β_i).

3.1 Solving (P2) Efficiently
Note that (P2) is a convex optimization problem, and it can therefore be (approximately) solved
efficiently, e.g., via any first-order algorithm. Concretely, given an accuracy ε > 0, any off-the-shelf
convex optimization method would require a running time which is poly(n, 1/ε) in order to find an
ε-optimal solution to (P2)². Note that the calculation of (the unsorted) β requires an additional
computational cost of O(nd).

Here we present an efficient method that computes the exact solution of (P2). In addition to the
O(nd) cost for calculating β, our algorithm requires an O(n log n) cost for sorting the entries of β,
as well as an additional running time of O(k*), where k* is the number of non-zero elements at
the optimum. Thus, the running time of our method is independent of any accuracy ε, and may be
significantly better compared to any off-the-shelf optimization method. Note that in some cases [26],
using advanced data structures may decrease the cost of finding the nearest neighbors (i.e., the sorted
β), yielding a running time substantially smaller than O(nd + n log n).
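For comparison, the off-the-shelf route mentioned above can be sketched as follows (the constant C
is factored out of the objective). This is only a sanity-check baseline under our own naming, and is
exactly the kind of poly(n, 1/ε) approach the exact method avoids:

```python
import numpy as np
from scipy.optimize import minimize

def solve_p2_generic(beta):
    """Approximately solve min_{alpha in simplex} ||alpha||_2 + alpha . beta
    with a generic constrained solver (SLSQP); a baseline, not the exact method."""
    n = len(beta)
    objective = lambda a: np.linalg.norm(a) + a @ beta
    constraints = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    res = minimize(objective, np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x
```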
Our method is depicted in Algorithm 1. Quite intuitively, the core idea is to greedily add neighbors
according to their distance from x_0 until a stopping condition is fulfilled (indicating that we have
found the optimal solution). Letting C_sortNN be the computational cost of calculating the sorted
vector β, the following theorem presents our guarantees.

Theorem 3.3. Algorithm 1 finds the exact solution of (P2) within k* iterations, with an
O(k* + C_sortNN) running time.
² Note that (P2) is not strongly convex, hence the polynomial dependence on 1/ε rather than log(1/ε)
for first-order methods. Other methods, such as the Ellipsoid method, depend logarithmically on 1/ε,
but suffer a worse dependence on n compared to first-order methods.
Proof of Theorem 3.3. Denote by α* the optimal solution of (P2), and by k* the corresponding
number of nonzero weights. By Corollary 3.2, these k* nonzero weights correspond to the k* smallest
values of β. Thus, we are left to show that (1) the optimal λ is of the form calculated by the algorithm;
and (2) the algorithm halts after exactly k* iterations and outputs the optimal solution.

Let us first find the optimal λ. Since the non-zero elements of the optimal solution correspond to the
k* smallest values of β, Equation (3) is equivalent to the following quadratic equation in λ:

    k*·λ² − 2λ·Σ_{i=1}^{k*} β_i + ( Σ_{i=1}^{k*} β_i² − 1 ) = 0.

Solving for λ and neglecting the solution that does not agree with α_i ≥ 0, ∀i ∈ [n], we get

    λ = (1/k*) · ( Σ_{i=1}^{k*} β_i + √( k* + (Σ_{i=1}^{k*} β_i)² − k*·Σ_{i=1}^{k*} β_i² ) ).    (5)
The above implies that given k*, the optimal solution (satisfying KKT) can be directly derived by a
calculation of λ according to Equation (5) and computing the α_i's according to Equation (1). Since
Algorithm 1 calculates λ and α in the form appearing in Equations (5) and (1) respectively, it is
therefore sufficient to show that it halts after exactly k* iterations in order to prove its optimality. The
latter is a direct consequence of the following conditions:

(1) Upon reaching iteration k*, Algorithm 1 necessarily halts.
(2) For any k ≤ k* it holds that λ_k ∈ ℝ.
(3) For any k < k*, Algorithm 1 does not halt.

Note that the first condition together with the second condition imply that λ_k is well defined until the
algorithm halts (in the sense that the ">" operation in the while condition is meaningful). The first
condition together with the third condition imply that the algorithm halts after exactly k* iterations,
which concludes the proof. We are now left to show that the above three conditions hold:

Condition (1): Note that upon reaching k*, Algorithm 1 necessarily calculates the optimal λ = λ_{k*}.
Moreover, the entries of α* whose indices are greater than k* are necessarily zero, and in particular,
α*_{k*+1} = 0. By Equation (1), this implies that λ_{k*} ≤ β_{k*+1}, and therefore the algorithm halts
upon reaching k*.
In order to establish conditions (2) and (3) we require the following lemma:

Lemma 3.4. Let λ_k be as calculated by Algorithm 1 at iteration k. Then, for any k ≤ k* the following
holds:

    λ_k = min_{α ∈ Δ_n^(k)}  ‖α‖₂ + αᵀβ,   where   Δ_n^(k) = {α ∈ Δ_n : α_i = 0, ∀i > k}.
We are now ready to prove the remaining conditions.

Condition (2): Lemma 3.4 states that λ_k is the solution of a convex program over the nonempty set
Δ_n^(k), and therefore λ_k ∈ ℝ.
Condition (3): By definition Δ_n^(k) ⊆ Δ_n^(k+1) for any k < n. Therefore, Lemma 3.4 implies
that λ_k ≥ λ_{k+1} for any k < k* (minimizing the same objective with stricter constraints yields
a higher optimal value). Now assume by contradiction that Algorithm 1 halts at some k_0 < k*;
then the stopping condition of the algorithm implies that λ_{k_0} ≤ β_{k_0+1}. Combining the latter
with λ_k ≥ λ_{k+1}, ∀k ≤ k*, and using β_k ≤ β_{k+1}, ∀k ≤ n, we conclude that:

    λ_{k*} ≤ λ_{k_0+1} ≤ λ_{k_0} ≤ β_{k_0+1} ≤ β_{k*}.

The above implies that α*_{k*} = 0 (see Equation (1)), which contradicts Corollary 3.2 and the
definition of k*.
Running time: Note that the main running-time burden of Algorithm 1 is the calculation of λ_k for
any k ≤ k*. A naive calculation of λ_k requires an O(k) running time. However, note that λ_k depends
only on Σ_{i=1}^k β_i and Σ_{i=1}^k β_i². Updating these sums incrementally implies that we require only
O(1) running time per iteration, yielding a total running time of O(k*). The remaining O(C_sortNN)
running time is required in order to calculate the (sorted) β.
3.2 Special Cases
The aim of this section is to discuss two special cases in which the bound of our algorithm coincides
with familiar bounds in the literature, thus justifying the relaxed objective of (P2). We present here
only a high-level description of both cases, and defer the formal details to the full version of the
paper.
The solution of (P2) is a high-probability upper bound on the true prediction error
|Σ_{i=1}^n α_i y_i − f(x_0)|. Two interesting cases to consider in this context are β_1 = … = β_n = 0,
and β_1 = … = β_n = β > 0. In the first case, our algorithm includes all labels in the computation of
f̂(x_0), thus yielding a confidence bound of 2C·λ = 2b·√((2/n) log(2/δ)) for the prediction error (with
probability 1 − δ). Not surprisingly, this bound coincides with the standard Hoeffding bound for the task of
estimating the mean value of a given distribution based on noisy observations drawn from this distribution.
Since the latter is known to be tight (in general), so is the confidence bound obtained by our
algorithm. In the second case as well, our algorithm will use all data points to arrive at the confidence
bound 2C·λ = 2L·d̄ + 2b·√((2/n) log(2/δ)), where we denote d(x_1, x_0) = … = d(x_n, x_0) = d̄. The
second term is again tight by concentration arguments, whereas the first term cannot be improved due
to the Lipschitz property of f(·), thus yielding an overall tight confidence bound for our prediction in
this case.
4 Experimental Results
The following experiments demonstrate the effectiveness of the proposed algorithm on several
datasets. We start by presenting the baselines used for the comparison.
4.1 Baselines
The standard k-NN: Given k, the standard k-NN finds the k nearest data points to x_0 (assume without
loss of generality that these data points are x_1, …, x_k), and then estimates f̂(x_0) = (1/k)·Σ_{i=1}^k y_i.
The Nadaraya–Watson estimator: This estimator assigns the data points weights that are
proportional to some given similarity kernel K : ℝ^d × ℝ^d → ℝ₊. That is,

    f̂(x_0) = Σ_{i=1}^n K(x_i, x_0)·y_i / Σ_{i=1}^n K(x_i, x_0).

Popular choices of kernel functions include the Gaussian kernel
K(x_i, x_j) = (1/σ)·exp(−‖x_i − x_j‖²/(2σ²)); the Epanechnikov kernel
K(x_i, x_j) = (3/4)·(1 − ‖x_i − x_j‖²/σ²)·1{‖x_i − x_j‖ ≤ σ}; and the triangular kernel
K(x_i, x_j) = (1 − ‖x_i − x_j‖/σ)·1{‖x_i − x_j‖ ≤ σ}. Due to lack of space, we present here only the
best performing kernel function among the three listed above (on the tested datasets), which is the
Gaussian kernel.
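As an illustration, the Gaussian-kernel Nadaraya–Watson baseline amounts to only a few lines; this
is a sketch under our own naming (note that any constant prefactor of the kernel cancels in the
normalization):

```python
import numpy as np

def nadaraya_watson(X, y, x0, sigma):
    """Gaussian-kernel Nadaraya-Watson estimate of f(x0);
    sigma is the bandwidth chosen by cross-validation."""
    d2 = np.sum((X - x0) ** 2, axis=1)       # squared Euclidean distances to x0
    w = np.exp(-d2 / (2.0 * sigma ** 2))     # kernel weights
    return np.dot(w, y) / np.sum(w)
```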
4.2 Datasets
In our experiments we use 8 real-world datasets, all available on the UCI repository website
(https://archive.ics.uci.edu/ml/). In each of the datasets, the feature vector consists of
real values only, whereas the labels take different forms: in the first 6 datasets (QSAR, Diabetes,
PopFailures, Sonar, Ionosphere, and Fertility), the labels are binary, y_i ∈ {0, 1}. In the last two
datasets (Slump and Yacht), the labels are real-valued. Note that our algorithm (as well as the other
two baselines) applies to all datasets without requiring any adjustment. The number of samples n and
the dimension of each sample d are given in Table 1 for each dataset.
| Dataset (n, d) | Standard k-NN Error (STD) | Value of k | Nadaraya–Watson Error (STD) | Value of σ | k*-NN (ours) Error (STD) | Range of k |
|---|---|---|---|---|---|---|
| QSAR (1055, 41) | 0.2467 (0.3445) | 2 | 0.2303 (0.3500) | 0.1 | 0.2105* (0.3935) | 1–4 |
| Diabetes (1151, 19) | 0.3809 (0.2939) | 4 | 0.3675 (0.3983) | 0.1 | 0.3666 (0.3897) | 1–9 |
| PopFailures (360, 18) | 0.1333 (0.2924) | 2 | 0.1155 (0.2900) | 0.01 | 0.1218 (0.2302) | 2–24 |
| Sonar (208, 60) | 0.1731 (0.3801) | 1 | 0.1711 (0.3747) | 0.1 | 0.1636 (0.3661) | 1–2 |
| Ionosphere (351, 34) | 0.1257 (0.3055) | 2 | 0.1191 (0.2937) | 0.5 | 0.1113* (0.3008) | 1–4 |
| Fertility (100, 9) | 0.1900 (0.3881) | 1 | 0.1884 (0.3787) | 0.1 | 0.1760 (0.3094) | 1–5 |
| Slump (103, 9) | 3.4944 (3.3042) | 4 | 2.9154 (2.8930) | 0.05 | 2.8057 (2.7886) | 1–4 |
| Yacht (308, 6) | 6.4643 (10.2463) | 2 | 5.2577 (8.7051) | 0.05 | 5.0418* (8.6502) | 1–3 |
Table 1: Experimental results. The values of k, σ, and L/C are determined via 5-fold cross-validation
on the validation set. These values are then used on the test set to generate the (absolute) error rates
presented in the table. In each line, the lowest error is the best result; an asterisk indicates a
significance level of 0.05 over the second best result.
4.3 Experimental Setup
We randomly divide each dataset into two halves (one used for validation and the other for test). On
the first half (the validation set), we run the two baselines and our algorithm with different values
of k, σ, and L/C (respectively), using 5-fold cross-validation. Specifically, we consider values of k
in {1, 2, …, 10} and values of σ and L/C in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10}. The best
values of k, σ, and L/C are then used on the second half of the dataset (the test set) to obtain the
results presented in Table 1. For our algorithm, the range of k that corresponds to the selection of
L/C is also given. Notice that we present here the average absolute error of our prediction, as a
consequence of our theoretical guarantees.
4.4 Results and Discussion
As evidenced by Table 1, our algorithm outperforms the baselines on 7 (out of 8) datasets, where
on 3 datasets the outperformance is significant. It can also be seen that whereas the standard k-NN
is restricted to choose one value of k per dataset, our algorithm fully utilizes the ability to choose
k adaptively per data point. This validates our theoretical findings, and highlights the advantage of
adaptive selection of k.
5 Conclusions and Future Directions
We have introduced a principled approach to locally weighted optimal estimation. By explicitly
phrasing the bias-variance tradeoff, we defined the notion of optimal weights and optimal number of
neighbors per decision point, and consequently devised an efficient method to extract them. Note
that our approach could be extended to handle multiclass classification, as well as scenarios in which
predictions of different data points correlate (and we have an estimate of their correlations). Due to
lack of space we leave these extensions to the full version of the paper.
A shortcoming of current non-parametric methods, including our k*-NN algorithm, is their limited
geometrical perspective. Concretely, all of these methods consider only the distances between the
decision point and the dataset points, i.e., {d(x_0, x_i)}_{i=1}^n, and ignore the geometrical relations
between the dataset points, i.e., {d(x_i, x_j)}_{i,j=1}^n. We believe that our approach opens an avenue
for taking advantage of this additional geometrical information, which may have a great effect on the
quality of our predictions.
References
[1] Thomas M. Cover and Peter E. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[2] Evelyn Fix and Joseph L. Hodges Jr. Discriminatory analysis, nonparametric discrimination: consistency properties. Technical report, DTIC Document, 1951.
[3] Elizbar A. Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9(1):141–142, 1964.
[4] Geoffrey S. Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, pages 359–372, 1964.
[5] Xindong Wu, Vipin Kumar, J. Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J. McLachlan, Angus Ng, Bing Liu, S. Yu Philip, et al. Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1–37, 2008.
[6] D. A. Adeniyi, Z. Wei, and Y. Yongquan. Automated web usage data mining and recommendation system using k-nearest neighbor (KNN) classification method. Applied Computing and Informatics, 12(1):90–108, 2016.
[7] Bruno Trstenjak, Sasa Mikac, and Dzenana Donko. KNN with TF-IDF based framework for text categorization. Procedia Engineering, 69:1356–1364, 2014.
[8] B. L. Deekshatulu, Priti Chandra, et al. Classification of heart disease using k-nearest neighbor and genetic algorithm. Procedia Technology, 10:85–94, 2013.
[9] Sadegh Bafandeh Imandoust and Mohammad Bolandraftar. Application of k-nearest neighbor (KNN) approach for predicting economic events: Theoretical background. International Journal of Engineering Research and Applications, 3(5):605–610, 2013.
[10] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
[11] Richard J. Samworth et al. Optimal weighted nearest neighbour classifiers. The Annals of Statistics, 40(5):2733–2763, 2012.
[12] Charles J. Stone. Consistent nonparametric regression. The Annals of Statistics, pages 595–620, 1977.
[13] Luc P. Devroye, T. J. Wagner, et al. Distribution-free consistency results in nonparametric discrimination and regression function estimation. The Annals of Statistics, 8(2):231–239, 1980.
[14] László Györfi, Michael Kohler, Adam Krzyzak, and Harro Walk. A Distribution-Free Theory of Nonparametric Regression. Springer Science & Business Media, 2006.
[15] Jianqing Fan and Irene Gijbels. Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability 66, volume 66. CRC Press, 1996.
[16] Dietrich Wettschereck and Thomas G. Dietterich. Locally adaptive nearest neighbor algorithms. Advances in Neural Information Processing Systems, pages 184–184, 1994.
[17] Shiliang Sun and Rongqing Huang. An adaptive k-nearest neighbor algorithm. In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery.
[18] Anil K. Ghosh. On nearest neighbor classification using adaptive choice of k. Journal of Computational and Graphical Statistics, 16(2):482–502, 2007.
[19] Li Baoli, Lu Qin, and Yu Shiwen. An adaptive k-nearest neighbor text categorization strategy. ACM Transactions on Asian Language Information Processing (TALIP), 3(4):215–226, 2004.
[20] Ian S. Abramson. On bandwidth variation in kernel estimates - a square root law. The Annals of Statistics, pages 1217–1223, 1982.
[21] Bernard W. Silverman. Density Estimation for Statistics and Data Analysis, volume 26. CRC Press, 1986.
[22] Serdar Demir and Öniz Toktamış. On the adaptive Nadaraya–Watson kernel regression estimators. Hacettepe Journal of Mathematics and Statistics, 39(3), 2010.
[23] Khulood Hamed Aljuhani et al. Modification of the adaptive Nadaraya–Watson kernel regression estimator. Scientific Research and Essays, 9(22):966–971, 2014.
[24] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012.
[25] Gérard Biau and Luc Devroye. Lectures on the Nearest Neighbor Method, volume 1. Springer, 2015.
[26] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
Equality of Opportunity in Supervised Learning
Moritz Hardt
Google
[email protected]
Eric Price*
UT Austin
[email protected]
Nathan Srebro
TTI-Chicago
[email protected]
Abstract
We propose a criterion for discrimination against a specified sensitive attribute in
supervised learning, where the goal is to predict some target based on available features. Assuming data about the predictor, target, and membership in the protected
group are available, we show how to optimally adjust any learned predictor so as to
remove discrimination according to our definition. Our framework also improves
incentives by shifting the cost of poor classification from disadvantaged groups to
the decision maker, who can respond by improving the classification accuracy.
We encourage readers to consult the more complete manuscript on the arXiv.
1 Introduction
As machine learning increasingly affects decisions in domains protected by anti-discrimination law,
there is much interest in algorithmically measuring and ensuring fairness in machine learning. In
domains such as advertising, credit, employment, education, and criminal justice, machine learning
could help obtain more accurate predictions, but its effect on existing biases is not well understood.
Although reliance on data and quantitative measures can help quantify and eliminate existing biases,
some scholars caution that algorithms can also introduce new biases or perpetuate existing ones [1].
In May 2014, the Obama Administration's Big Data Working Group released a report [2] arguing
that discrimination can sometimes "be the inadvertent outcome of the way big data technologies
are structured and used" and pointing toward "the potential of encoding discrimination in automated
decisions". A subsequent White House report [3] calls for "equal opportunity by design" as a guiding
principle in domains such as credit scoring.

Despite the demand, a vetted methodology for avoiding discrimination against protected attributes
in machine learning is lacking. A naïve approach might require that the algorithm should ignore all
protected attributes such as race, color, religion, gender, disability, or family status. However, this
idea of "fairness through unawareness" is ineffective due to the existence of redundant encodings,
ways of predicting protected attributes from other features [4].

Another common conception of non-discrimination is demographic parity [e.g., 5, 6, 7]. Demographic
parity requires that a decision (such as accepting or denying a loan application) be independent of
the protected attribute. Through its various equivalent formalizations this idea appears in numerous
papers. Unfortunately, the notion is seriously flawed on two counts [8]. First, it doesn't ensure
fairness. The notion permits that we accept the qualified applicants in one demographic, but random
individuals in another, so long as the percentages of acceptance match. This behavior can arise
naturally when there is little or no training data available for one of the demographics. Second,
demographic parity often cripples the utility that we might hope to achieve, especially in the common
scenario in which an outcome to be predicted, e.g., whether the loan will be defaulted, is correlated
with the protected attribute. Demographic parity would not allow the ideal prediction, namely giving
loans exactly to those who won't default. As a result, the loss in utility of introducing demographic
parity can be substantial.
* Work partially performed while at OpenAI.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this paper, we consider non-discrimination from the perspective of supervised learning, where the
goal is to predict a true outcome Y from features X based on labeled training data, while ensuring
the prediction is ?non-discriminatory? with respect to a specified protected attribute A. The main
question here, for which we suggest an answer, is what does it mean for such a prediction to be
non-discriminatory. As in the usual supervised learning setting, we assume that we have access to
labeled training data, in our case indicating also the protected attribute A. That is, to samples from
the joint distribution of (X, A, Y ). This data is used to construct a (possibly randomized) predictor
Ŷ(X) or Ŷ(X, A), and we also use such labeled data to test for non-discrimination.
The notion we propose is "oblivious", in that it is based only on the joint distribution, or joint statistics,
of the true target Y, the predictions Ŷ, and the protected attribute A. In particular, it does not evaluate
the features in X, nor the functional form of the predictor Ŷ(X), nor how it was derived. This matches
other tests recently proposed and conducted, including demographic parity and different analyses of
common risk scores. In many cases, only oblivious analysis is possible as the functional form of the
score and underlying training data are not public. The only information about the score is the score
itself, which can then be correlated with the target and protected attribute. Furthermore, even if the
features or the functional form are available, going beyond oblivious analysis essentially requires
subjective interpretation or causal assumptions about specific features, which we aim to avoid.
In a recent concurrent work, Kleinberg, Mullainathan and Raghavan [9] showed that the only way
for a meaningful score that is calibrated within each group to satisfy a criterion equivalent to
equalized odds is for the score to be a perfectly accurate predictor. This result highlights a contrast
between equalized odds and other desirable properties of a score, as well as the relationship between
non-discrimination and accuracy, which we also discuss.
Contributions We propose a simple, interpretable, and easily checkable notion of non-discrimination
with respect to a specified protected attribute. We argue that, unlike demographic parity, our notion
provides a meaningful measure of discrimination, while allowing for higher utility. Unlike demographic
parity, our notion always allows for the perfectly accurate solution Ŷ = Y. More broadly, our criterion
is easier to achieve the more accurate the predictor Ŷ is, aligning fairness with the central goal in
supervised learning of building more accurate predictors. Our notion is actionable, in that we give a
simple and effective framework for constructing classifiers satisfying our criterion from an arbitrary
learned predictor.
Our notion can also be viewed as shifting the burden of uncertainty in classification from the protected
class to the decision maker. In doing so, our notion helps to incentivize the collection of better features,
that depend more directly on the target rather then the protected attribute, and of data that allows
better prediction for all protected classes.
In an updated and expanded paper, arXiv:1610.02413, we also capture the inherent limitations of
our approach, as well as any other oblivious approach, through a non-identifiability result showing that
different dependency structures with possibly different intuitive notions of fairness cannot be separated
based on any oblivious notion or test. We strongly encourage readers to consult arXiv:1610.02413
instead of this shortened presentation.
2 Equalized odds and equal opportunity
We now formally introduce our first criterion.
Definition 2.1 (Equalized odds). We say that a predictor Ŷ satisfies equalized odds with respect to
protected attribute A and outcome Y, if Ŷ and A are independent conditional on Y.

Unlike demographic parity, equalized odds allows Ŷ to depend on A, but only through the target
variable Y. This encourages the use of features that relate to Y directly, not through A.

As stated, equalized odds applies to targets and protected attributes taking values in any space,
including discrete and continuous spaces. But in much of our presentation we focus on binary targets
Y, Ŷ and protected attributes A, in which case equalized odds is equivalent to:
    Pr{Ŷ = 1 | A = 0, Y = y} = Pr{Ŷ = 1 | A = 1, Y = y},   y ∈ {0, 1}.    (2.1)
For the outcome y = 1, the constraint requires that Ŷ has equal true positive rates across the two
demographics A = 0 and A = 1. For y = 0, the constraint equalizes false positive rates. Equalized
odds thus enforces both equal bias and equal accuracy in all demographics, punishing models that
perform well only on the majority.
Equal opportunity In the binary case, we often think of the outcome Y = 1 as the "advantaged"
outcome, such as "not defaulting on a loan", "admission to a college", or "receiving a promotion". A
possible relaxation of equalized odds is to require non-discrimination only within the "advantaged"
outcome group. That is, to require that people who pay back their loan have an equal opportunity of
getting the loan in the first place (without specifying any requirement for those that will ultimately
default). This leads to a relaxation of our notion that we call "equal opportunity".
Definition 2.2 (Equal opportunity). We say that a binary predictor Ŷ satisfies equal opportunity with
respect to A and Y if Pr{Ŷ = 1 | A = 0, Y = 1} = Pr{Ŷ = 1 | A = 1, Y = 1}.
Equal opportunity is a weaker, though still interesting, notion of non-discrimination, and can thus
allow for better utility.
Real-valued scores Even if the target is binary, a real-valued predictive score R = f(X, A) is
often used (e.g., FICO scores for predicting loan default), with the interpretation that higher values
of R correspond to greater likelihood of Y = 1 and thus a bias toward predicting Ŷ = 1. A binary
classifier Ŷ can be obtained by thresholding the score, i.e., setting Ŷ = I{R > t} for some threshold
t. Varying this threshold changes the trade-off between sensitivity and specificity.

Our definition of equalized odds can also be applied to score functions: a score R satisfies equalized
odds if R is independent of A given Y. If a score obeys equalized odds, then any thresholding
Ŷ = I{R > t} of it also obeys equalized odds. In Section 3, we will consider scores that might not
satisfy equalized odds, and see how equalized odds predictors can be derived from them by using
different (possibly randomized) thresholds depending on the value of A.
Oblivious measures Our notions of non-discrimination are oblivious in the following formal sense:

Definition 2.3. A property of a predictor Ŷ or score R is said to be oblivious if it only depends on
the joint distribution of (Y, A, Ŷ) or (Y, A, R), respectively.

As a consequence of being oblivious, all the information we need to verify our definitions is contained
in the joint distribution of predictor, protected group, and outcome, (Ŷ, A, Y). In the binary case,
when A and Y are reasonably well balanced, the joint distribution of (Ŷ, A, Y) is determined by 8
parameters that can be estimated to very high accuracy from samples. We will therefore ignore the
effect of finite sampling and instead assume that we know the joint distribution of (Ŷ, A, Y).
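Because the definitions are oblivious, checking them empirically only requires estimating a handful
of conditional probabilities from samples of (Ŷ, A, Y). A minimal sketch, using our own helper name
and assuming binary 0/1 NumPy arrays:

```python
import numpy as np

def group_rates(y_hat, a, y):
    """Empirical (false positive rate, true positive rate) per protected group.
    Equalized odds asks these pairs to match across groups; equal opportunity
    only asks the true positive rates to match."""
    rates = {}
    for g in np.unique(a):
        in_g = (a == g)
        fpr = y_hat[in_g & (y == 0)].mean()  # Pr[Y_hat = 1 | A = g, Y = 0]
        tpr = y_hat[in_g & (y == 1)].mean()  # Pr[Y_hat = 1 | A = g, Y = 1]
        rates[g] = (fpr, tpr)
    return rates
```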
3 Achieving non-discrimination
We now explain how to obtain an equalized odds or equal opportunity predictor Ỹ from a possibly
discriminatory, learned binary predictor Ŷ or score R. We envision that Ŷ or R is whatever comes
out of the existing training pipeline for the problem at hand. Importantly, we do not require changing
the training process, as this might introduce additional complexity, but rather only a post-learning
step. Instead, we will construct a non-discriminating predictor which is derived from Ŷ or R:
Definition 3.1 (Derived predictor). A predictor Ỹ is derived from a random variable R and the
protected attribute A if it is a possibly randomized function of the random variables (R, A) alone. In
particular, Ỹ is independent of X conditional on (R, A).

The definition asks that the value of a derived predictor should only depend on R and the protected
attribute, though it may introduce additional randomness. But the formulation of Ỹ (that is, the
function applied to the values of R and A) depends on information about the joint distribution of
(R, A, Y). In other words, this joint distribution (or an empirical estimate of it) is required at training
time in order to construct the predictor Ỹ, but at prediction time we only have access to values of
(R, A). No further data about the underlying features X, nor their distribution, is required.
[Figure 1: two panels plotting Pr[Ỹ = 1 | A, Y = 1] against Pr[Ỹ = 1 | A, Y = 0]. The left panel shows the achievable regions for A = 0 and A = 1, their overlap, the points for Ỹ = Ŷ and Ỹ = 1 − Ŷ, and the equal-odds optimum, which lies below all ROC curves. The right panel shows that equal-opportunity solutions for A = 0 and A = 1 lie on the same horizontal line.]
Figure 1: Finding the optimal equalized odds predictor (left), and equal opportunity predictor (right).
Loss minimization. It is always easy to construct a trivial predictor satisfying equalized odds, by
making decisions independent of X, A and R. For example, using the constant predictor Ŷ = 0 or
Ŷ = 1. The goal, of course, is to obtain a good predictor satisfying the condition. To quantify the
notion of "good", we consider a loss function ℓ : {0, 1}² → ℝ that takes a pair of labels and returns a
real number ℓ(ŷ, y) ∈ ℝ which indicates the loss (or cost, or undesirability) of predicting ŷ when
the correct label is y. Our goal is then to design derived predictors Ỹ that minimize the expected
loss Eℓ(Ỹ, Y) subject to one of our definitions.
3.1 Deriving from a binary predictor

In designing a derived predictor from binary Ŷ and A we can only set four parameters: the conditional
probabilities p_{ya} = Pr{Ỹ = 1 | Ŷ = y, A = a}. These four parameters, p = (p_00, p_01, p_10, p_11),
together specify the derived predictor Ỹ_p. To check whether Ỹ_p satisfies equalized odds we need to
verify the two equalities specified by (2.1), for both values of y. To this end, we denote

    γ_a(Ỹ) := ( Pr{Ỹ = 1 | A = a, Y = 0}, Pr{Ỹ = 1 | A = a, Y = 1} ).    (3.1)

The components of γ_a(Ỹ) are the false positive rate and the true positive rate within the demographic
A = a. Following (2.1), Ỹ satisfies equalized odds iff γ_0(Ỹ) = γ_1(Ỹ). But γ_a(Ỹ_p) is just a linear
function of p, with coefficients determined by the joint distribution of (Y, Ŷ, A). Since the expected
loss Eℓ(Ỹ_p, Y) is also linear in p, we have that the optimal derived predictor can be obtained as a
solution to the following linear program with four variables and two equality constraints:

    min_p   Eℓ(Ỹ_p, Y)    (3.2)
    s.t.    γ_0(Ỹ_p) = γ_1(Ỹ_p)    (3.3)
            0 ≤ p_{ya} ≤ 1, ∀y, a    (3.4)

To better understand this linear program, let us understand the range of values γ_a(Ỹ_p) can take:

Claim 3.2. {γ_a(Ỹ_p) | 0 ≤ p ≤ 1} = P_a(Ŷ) := convhull{ (0, 0), γ_a(Ŷ), γ_a(1 − Ŷ), (1, 1) }

These polytopes are visualized in Figure 1. Since each γ_a(Ỹ_p), for each demographic A = a, depends
on two different coordinates of p, the choice of γ_0 ∈ P_0 and γ_1 ∈ P_1 is independent. Requiring
γ_0(Ỹ_p) = γ_1(Ỹ_p) then restricts us exactly to the intersection P_0 ∩ P_1, and this intersection exactly
specifies the range of possible tradeoffs between the false-positive rate and true-positive rate for
derived predictors Ỹ (see Figure 1). Solving the linear program (3.2) amounts to finding the tradeoff
in P_0 ∩ P_1 that optimizes the expected loss.

For equalized opportunity, we only require the second components of γ (the true positive rates) to agree,
removing one of the equality constraints from the linear program. Now, any γ_0 ∈ P_0 and γ_1 ∈ P_1 that
are on the same horizontal line are feasible.
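The four-variable linear program (3.2)–(3.4) can be written down directly. Here is a sketch using
scipy.optimize.linprog; the function name and its arguments are our own assumptions, where
P[ŷ, a, y] is an (estimated) 2×2×2 joint probability table of (Ŷ, A, Y) and c_fp, c_fn stand for
ℓ(1, 0), ℓ(0, 1):

```python
import numpy as np
from scipy.optimize import linprog

def derived_equalized_odds(P, c_fp=1.0, c_fn=1.0):
    """Solve the LP (3.2)-(3.4) for p_{ya} = Pr[Y_tilde = 1 | Y_hat = y, A = a],
    ordered (p00, p01, p10, p11). Assumes every (A = a, Y = j) cell has mass."""
    idx = [(0, 0), (0, 1), (1, 0), (1, 1)]            # (y, a) per variable
    # expected loss, dropping the constant term c_fn * Pr[Y = 1]:
    c = np.array([c_fp * P[y, a, 0] - c_fn * P[y, a, 1] for (y, a) in idx])
    # equalized odds: gamma_0(Y_tilde_p) = gamma_1(Y_tilde_p), one row per label j
    A_eq = np.zeros((2, 4))
    for j in (0, 1):
        for v, (y, a) in enumerate(idx):
            cond = P[y, a, j] / P[:, a, j].sum()      # Pr[Y_hat = y | A = a, Y = j]
            A_eq[j, v] = cond if a == 0 else -cond
    res = linprog(c, A_eq=A_eq, b_eq=np.zeros(2), bounds=[(0, 1)] * 4)
    return res.x
```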
3.2 Deriving from a score function
A "protected attribute blind" way of deriving a binary predictor from a score R would be to threshold
it, i.e., using Ŷ = I{R > t}. If R satisfied equalized odds, then so will such a predictor, and the
optimal threshold should be chosen to balance false and true positive rates so as to minimize the
expected loss. When R does not already satisfy equalized odds, we might need to use different
thresholds for different values of A (different protected groups), i.e., Ŷ = I{R > t_A}. As we will
see, even this might not be sufficient, and we might need to introduce randomness also here.
Central to our study is the ROC (Receiver Operating Characteristic) curve of the score, which captures
the false positive and true positive (equivalently, false negative) rates at different thresholds. These are
curves in a two-dimensional plane, where the horizontal axis is the false positive rate of a predictor
and the vertical axis is the true positive rate. As discussed in the previous section, equalized odds can
be stated as requiring that the true positive and false positive rates, (Pr{Ŷ = 1 | Y = 0, A = a},
Pr{Ŷ = 1 | Y = 1, A = a}), agree between different values a of the protected attribute. That is, for all
values of the protected attribute, the conditional behavior of the predictor is at exactly the same point
in this space. We will therefore consider the A-conditional ROC curves

    C_a(t) := ( Pr{R > t | A = a, Y = 0}, Pr{R > t | A = a, Y = 1} ).

Since the ROC curves exactly specify the conditional distributions R | A, Y, a score function obeys
equalized odds if and only if the ROC curves for all values of the protected attribute agree, that is,
C_a(t) = C_{a′}(t) for all values a, a′ and all t. In this case, any thresholding of R yields an equalized
odds predictor (all protected groups are at the same point on the curve, and the same point in the
false/true-positive plane).
When the ROC curves do not agree, we might choose different thresholds t_a for the different protected
groups. This yields different points on each A-conditional ROC curve. For the resulting predictor
to satisfy equalized odds, these must be at the same point in the false/true-positive plane. This is
possible only at points where all A-conditional ROC curves intersect. But the ROC curves might
not all intersect except at the trivial endpoints, and even if they do, their point of intersection might
represent a poor tradeoff between false positives and false negatives.

As with the case of correcting a binary predictor, we can use randomization to fill the span of possible
derived predictors and allow for significant intersection in the false/true-positive plane. In particular,
for every protected group a, consider the convex hull of the image of the conditional ROC curve:

    D_a := convhull{ C_a(t) : t ∈ [0, 1] }.    (3.5)

The definition of D_a is analogous to the polytope P_a in the previous section, except that here we
do not consider points below the main diagonal (the line from (0, 0) to (1, 1)), which are worse than
"random guessing" and hence never desirable for any reasonable loss function.
Deriving an optimal equalized odds threshold predictor. Any point in the convex hull D_a represents
the false/true positive rates, conditioned on A = a, of a randomized derived predictor based on R. In
particular, since the space is only two-dimensional, such a predictor Ỹ can always be taken to be a
mixture of two threshold predictors (corresponding to the convex hull of two points on the ROC curve).
Conditional on A = a, the predictor Ỹ behaves as

    Ỹ = I{R > T_a},

where T_a is a randomized threshold assuming the value t_a with probability p_a and the value t̄_a
with probability 1 − p_a. In other words, to construct an equalized odds predictor, we should choose a
point in the intersection of these convex hulls, γ = (γ[0], γ[1]) ∈ ∩_a D_a, and then for each protected
group realize the true/false-positive rates γ with a (possibly randomized) predictor
Ỹ | (A = a) = I{R > T_a}, resulting in the predictor Ỹ = I{R > T_A}. For each group a, we either use a
fixed threshold (t_a = t̄_a) or a mixture of two thresholds t_a < t̄_a. In the latter case, if A = a and
R ≤ t_a we always set Ỹ = 0; if R > t̄_a we always set Ỹ = 1; but if t_a < R ≤ t̄_a, we flip a coin and
set Ỹ = 1 with probability p_a (the mixture weight of the lower threshold t_a).
The feasible set of false/true positive rates of possible equalized odds predictors is thus the intersection
of the areas under the A-conditional ROC curves, and above the main diagonal (see Figure 2).
Since for any loss function the optimal false/true-positive rate will always be on the upper-left
boundary of this feasible set, this is effectively the ROC curve of the equalized odds predictors.
This ROC curve is the pointwise minimum of all A-conditional ROC curves.

[Figure 2: three panels. Left: within each group, max profit is a tangent of the ROC curve. Middle: equal odds makes the average vector tangent to the interior; curves for A = 0, A = 1, their average, and the optimum. Right: the equal opportunity cost is a convex function of the true positive rate, showing the cost of the best solution for a given true positive rate.]
Figure 2: Finding the optimal equalized odds threshold predictor (middle), and equal opportunity threshold predictor (right). For the equal opportunity predictor, within each group the cost for a given true positive rate is proportional to the horizontal gap between the ROC curve and the profit-maximizing tangent line (i.e., the two curves on the left plot), so it is a convex function of the true positive rate (right). This lets us optimize it efficiently with ternary search.

The performance of
an equalized odds predictor is thus determined by the minimum performance among all protected
groups. Said differently, requiring equalized odds incentivizes the learner to build good predictors for
all classes. For a given loss function, finding the optimal tradeoff amounts to optimizing (assuming
w.l.o.g. ℓ(0, 0) = ℓ(1, 1) = 0):

    min_{γ ∈ ∩_a D_a}   γ[0]·ℓ(1, 0) + (1 − γ[1])·ℓ(0, 1).    (3.6)

This is no longer a linear program, since the D_a are not polytopes, or at least are not specified as such.
Nevertheless, (3.6) can be efficiently optimized numerically using ternary search.
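For completeness, the ternary-search step can be sketched as follows; here g(τ) stands for the best
achievable loss in (3.6) at true positive rate γ[1] = τ, which is convex in τ. The helper below is our
own generic illustration, not code from the paper:

```python
def ternary_min(g, lo=0.0, hi=1.0, iters=100):
    """Minimize a one-dimensional convex function g on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if g(m1) < g(m2):
            hi = m2          # by convexity, the minimum cannot lie in (m2, hi]
        else:
            lo = m1          # by convexity, the minimum cannot lie in [lo, m1)
    return 0.5 * (lo + hi)
```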
For an optimal equal opportunity derived predictor the construction is similar, except that it suffices
to find points in the D_a that are on the same horizontal line. Assuming continuity of the conditional
ROC curves, we can always take points on the ROC curve C_a itself, and no randomization is
necessary.
4 Bayes optimal predictors
In this section, we develop a theory of non-discriminating Bayes optimal classification. We will first
show that a Bayes optimal equalized odds predictor can be obtained as a derived threshold predictor
of the Bayes optimal regressor. Second, we quantify the loss of deriving an equalized odds predictor
based on a regressor that deviates from the Bayes optimal regressor. This can be used to justify the
approach of first training classifiers without any fairness constraint, and then deriving an equalized
odds predictor in a second step.
Definition 4.1 (Bayes optimal regressor). Given random variables (X, A) and a target variable Y,
the Bayes optimal regressor is R = argmin_{r(x,a)} E[(Y − r(X, A))²] = r*(X, A), with
r*(x, a) = E[Y | X = x, A = a].
The Bayes optimal classifier, for any proper loss, is then a threshold predictor of R, where the
threshold depends on the loss function (see, e.g., [10]). We will extend this result to the case where
we additionally ask the classifier to satisfy an oblivious property:
Proposition 4.2. For any source distribution over (Y, X, A) with Bayes optimal regressor R(X, A),
any loss function, and any oblivious property C, there exists a predictor Y*(R, A) such that:

1. Y* is an optimal predictor satisfying C. That is, Eℓ(Y*, Y) ≤ Eℓ(Ŷ, Y) for any predictor
Ŷ(X, A) which satisfies C.
2. Y* is derived from (R, A).
Corollary 4.3 (Optimality characterization). An optimal equalized odds predictor can be derived
from the Bayes optimal regressor R and the protected attribute A. The same is true for an optimal
equal opportunity predictor.
We can furthermore show that if we can approximate the (unconstrained) Bayes optimal regressor
well enough, then we can also construct a nearly optimal non-discriminating classifier:
Definition 4.4. We define the conditional Kolmogorov distance between two random variables
R, R′ ∈ [0, 1] in the same probability space as A and Y as:

    d_K(R, R′) := max_{a,y ∈ {0,1}} sup_{t ∈ [0,1]} | Pr{R > t | A = a, Y = y} − Pr{R′ > t | A = a, Y = y} |.    (4.1)
Theorem 4.5 (Near optimality). Assume that ℓ is a bounded loss function, and let R̂ ∈ [0, 1] be an
arbitrary random variable. Then, there is an optimal equalized odds predictor Y* and an equalized
odds predictor Ŷ derived from (R̂, A) such that

    Eℓ(Ŷ, Y) ≤ Eℓ(Y*, Y) + 2√2 · d_K(R̂, R*),

where R* is the Bayes optimal regressor. The same claim is true for equal opportunity.
5 Case study: FICO scores
FICO scores are a proprietary classifier widely used in the United States to predict credit worthiness [11]. These scores, ranging from 300 to 850, try to predict credit risk; they form our score
R. People were labeled as in default if they failed to pay a debt for at least 90 days on at least one
account in the ensuing 18-24 month period; this gives an outcome Y . Our protected attribute A
is race, which is restricted to four values: Asian, white non-Hispanic (labeled "white" in figures),
Hispanic, and black. FICO scores are complicated proprietary classifiers based on features, like
number of bank accounts kept, that could interact with culture and race.
[Figure 3: non-default rate as a function of FICO score (left) and the CDF of FICO scores per group (right), for the Asian, white, Hispanic, and black groups.]
Figure 3: These two marginals, and the number of people per group, constitute our input data.
To illustrate the effect of non-discrimination on utility, we used a loss in which false positives (giving
loans to people that default on any account) are 82/18 times as expensive as false negatives (not giving
a loan to people that don't default). Given the marginal distributions for each group (Figure 3), we can
then study the optimal profit-maximizing classifier under five different constraints on allowed
predictors:
• Max profit has no fairness constraints, and will pick for each group the threshold that
maximizes profit. This is the score at which 82% of people in that group do not default (see the
sketch after this list).
• Race blind requires the threshold to be the same for each group. Hence it will pick the
single threshold at which 82% of people do not default overall.
• Demographic parity picks for each group a threshold such that the fraction of group
members that qualify for loans is the same.
• Equal opportunity picks for each group a threshold such that the fraction of non-defaulting
group members that qualify for loans is the same.
• Equalized odds requires both the fraction of non-defaulters that qualify for loans and the
fraction of defaulters that qualify for loans to be constant across groups. This might require
randomizing between two thresholds for each group.
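For instance, the max-profit threshold for a group can be read off that group's non-default-rate curve
(Figure 3, left). The following sketch is our own illustration and assumes the curve is monotone in
the score:

```python
import numpy as np

def profit_threshold(score_grid, nondefault_rate, target=0.82):
    """Smallest score whose estimated non-default rate reaches the
    break-even target (82% under the 82/18 cost ratio)."""
    above = np.where(nondefault_rate >= target)[0]
    return score_grid[above[0]] if len(above) else None
```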
Our proposed fairness definitions give thresholds between the max-profit/race-blind thresholds and
those of demographic parity. Figure 4 shows the thresholds used by each predictor, and Figure 5 plots
the ROC curves for each group, and the per-group false and true positive rates for each resulting
predictor. Differences in the ROC curves indicate differences in predictive accuracy between groups
(not differences in default rates), demonstrating that the majority (white) group is classified more
accurately than others.
FICO score thresholds (raw)
FICO score thresholds (within-group)
Asian
White
Hispanic
Black
Max profit
Single threshold
Opportunity
Max profit
Single threshold
Opportunity
Equal odds
Equal odds
Demography
Demography
300
400
500
600
FICO score
700
800
0
20
40
60
80
Within-group FICO score percentile
100
[Figure 5, left panel: per-group ROC curves for classifying non-defaulters using the FICO score; the axes are the fraction of defaulters and the fraction of non-defaulters getting a loan.]
Figure 4: FICO thresholds for various definitions of fairness. The equal odds method does not give a single
threshold, but instead Pr[Ŷ = 1 | R, A] increases over some not uniquely defined range; we pick the one
containing the fewest people.
[Figure 5, right panel: zoomed-in view of the per-group ROC curves, marking the operating points of max profit, single threshold, opportunity, and equal odds.]
Figure 5: The ROC curve for using FICO score to identify non-defaulters. Within a group, we can achieve any
convex combination of these outcomes. Equality of opportunity picks points along the same horizontal line.
Equal odds picks a point below all lines.
We can compute the profit achieved by each method, as a fraction of the max profit achievable. A
race blind threshold gets 99.3% of the maximal profit, equal opportunity gets 92.8%, equalized odds
gets 80.2%, and demographic parity only 69.8%.
6 Conclusions
We proposed a fairness measure that accomplishes two important desiderata. First, it remedies the
main conceptual shortcomings of demographic parity as a fairness notion. Second, it is fully aligned
with the central goal of supervised machine learning, that is, to build higher accuracy classifiers.
Our notion requires access to observed outcomes such as default rates in the loan setting. This
is precisely the same requirement that supervised learning generally has. The broad success of
supervised learning demonstrates that this requirement is met in many important applications. That
said, having access to reliable "labeled data" is not always possible. Moreover, the measurement of
the target variable might in itself be unreliable or biased. Domain-specific scrutiny is required in
defining and collecting a reliable target variable.
Requiring equalized odds creates an incentive structure for the entity building the predictor that aligns
well with achieving fairness. Achieving better prediction with equalized odds requires collecting
features that more directly capture the target, unrelated to its correlation with the protected attribute.
An equalized odds predictor derived from a score depends on the pointwise minimum ROC curve
among different protected groups, encouraging the construction of predictors that are accurate in all
groups, e.g., by collecting data appropriately or basing prediction on features predictive in all groups.
An important feature of our notion is that it can be achieved via a simple and efficient post-processing
step. In fact, this step requires only aggregate information about the data and therefore could even be
carried out in a privacy-preserving manner (formally, via Differential Privacy).
References
[1] Solon Barocas and Andrew Selbst. Big data's disparate impact. California Law Review, 104,
2016.
[2] John Podesta, Penny Pritzker, Ernest J. Moniz, John Holdren, and Jefrey Zients. Big data:
Seizing opportunities and preserving values. Executive Office of the President, May 2014.
[3] Big data: A report on algorithmic systems, opportunity, and civil rights. Executive Office of the
President, May 2016.
[4] Dino Pedreshi, Salvatore Ruggieri, and Franco Turini. Discrimination-aware data mining. In
Proc. 14th ACM SIGKDD, 2008.
[5] T. Calders, F. Kamiran, and M. Pechenizkiy. Building classifiers with independency constraints.
In Proc. IEEE International Conference on Data Mining Workshops, pages 13-18, 2009.
[6] Indre Zliobaite. On the relation between accuracy and fairness in binary classification. CoRR,
abs/1505.05723, 2015.
[7] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi.
Learning fair classifiers. CoRR, abs/1507.05259, 2015.
[8] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard S. Zemel. Fairness
through awareness. In Proc. ACM ITCS, pages 214-226, 2012.
[9] Jon M. Kleinberg, Sendhil Mullainathan, and Manish Raghavan. Inherent trade-offs in the fair
determination of risk scores. CoRR, abs/1609.05807, 2016.
[10] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2010.
[11] US Federal Reserve. Report to the congress on credit scoring and its effects on the availability
and affordability of credit, 2007.
Interaction Screening: Efficient and Sample-Optimal
Learning of Ising Models
Marc Vuffray^1, Sidhant Misra^2, Andrey Y. Lokhov^{1,3}, and Michael Chertkov^{1,3,4}
^1 Theoretical Division T-4, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
^2 Theoretical Division T-5, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
^3 Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
^4 Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
{vuffray, sidhant, lokhov, chertkov}@lanl.gov
Abstract
We consider the problem of learning the underlying graph of an unknown Ising
model on p spins from a collection of i.i.d. samples generated from the model. We
suggest a new estimator that is computationally efficient and requires a number of
samples that is near-optimal with respect to a previously established information-theoretic lower bound. Our statistical estimator has a physical interpretation in
terms of "interaction screening". The estimator is consistent and is efficiently
implemented using convex optimization. We prove that with appropriate regularization, the estimator recovers the underlying graph using a number of samples that is
logarithmic in the system size p and exponential in the maximum coupling-intensity
and maximum node-degree.
1 Introduction
A Graphical Model (GM) describes a probability distribution over a set of random variables which
factorizes over the edges of a graph. It is of interest to recover the structure of GMs from random
samples. The graphical structure contains valuable information on the dependencies between the
random variables. In fact, the neighborhood of a random variable is the minimal set that provides us
maximum information about this variable. Unsurprisingly, GM reconstruction plays an important
role in various fields such as the study of gene expression [1], protein interactions [2], neuroscience
[3], image processing [4], sociology [5] and even grid science [6, 7].
The origin of the GM reconstruction problem is traced back to the seminal 1968 paper by Chow and
Liu [8], where the problem was posed and resolved for the special case of tree-structured GMs. In
this special tree case the maximum likelihood estimator is tractable and is tantamount to finding a
maximum weighted spanning-tree. However, it is also known that in the case of general graphs with
cycles, maximum likelihood estimators are intractable as they require computation of the partition
function of the underlying GM, with notable exceptions of the Gaussian GM, see for instance [9],
and some other special cases, like planar Ising models without magnetic field [10].
A lot of effort in this field has focused on learning Ising models, which are the most general
GMs over binary variables with pairwise interaction/factorization. Early attempts to learn the Ising
model structure efficiently were heuristic, based on various mean-field approximations, e.g. utilizing
empirical correlation matrices [11, 12, 13, 14]. These methods were satisfactory in cases when
correlations decrease with graph distance. However it was also noticed that the mean-field methods
perform poorly for the Ising models with long-range correlations. This observation is not surprising
in light of recent results stating that learning the structure of Ising models using only their correlation
matrix is, in general, computationally intractable [15, 16].
Among methods that do not rely solely on correlation matrices but take advantage of higher-order
correlations that can be estimated from samples, we mention the approach based on sparsistency of
the so-called regularized pseudo-likelihood estimator [17]. This estimator, like the one we propose
in this paper, is from the class of M-estimators, i.e., estimators that minimize a sum of
functions over the sampled data [22]. The regularized pseudo-likelihood estimator is regarded as
a surrogate for the intractable likelihood estimator, with an additive ℓ1-norm penalty to encourage
sparsity of the reconstructed graph. The sparsistency-based estimator offers guarantees for the
structure reconstruction, but the result only applies to GMs that satisfy a certain condition that is
rather restrictive and hard to verify. It was also proven that the sparsity pattern of the regularized
pseudo-likelihood estimator fails to reconstruct the structure of graphs with long-range correlations,
even for simple test cases [18].
Principal tractability of structure reconstruction of an arbitrary Ising model from samples was proven
only very recently. Bresler, Mossel and Sly in [19] suggested an algorithm which reconstructs the
graph without errors in polynomial time. They showed that the algorithm requires number of samples
that is logarithmic in the number of variables. Although this algorithm is of a polynomial complexity,
it relies on an exhaustive neighborhood search, and the degree of the polynomial is equal to the
maximal node degree.
Prior to the work reported in this manuscript the best known procedure for perfect reconstruction
of an Ising model was through a greedy algorithm proposed by Bresler in [20]. Bresler's algorithm
is based on the observation that the mutual information between neighboring nodes in an Ising
model is lower bounded. This observation allows one to reconstruct the Ising graph perfectly with only
a logarithmic number of samples and in time quasi-quadratic in the number of variables. On the
other hand, Bresler's algorithm suffers from two major practical limitations. First, the number of
samples, and hence the running time as well, scales double-exponentially with respect to the largest node
degree and with respect to the largest coupling intensity between pairs of variables. This scaling is
rather far from the information-theoretic lower bound reported in [21], which instead predicts a
single-exponential dependency on the two aforementioned quantities. Second, Bresler's algorithm requires
prior information on the maximum and minimum coupling intensities as well as on the maximum
node degree, guarantees which, in reality, are not necessarily available.
In this paper we propose a novel estimator for the graph structure of an arbitrary Ising model which
achieves perfect reconstruction in quasi-quartic time (although we believe it can be provably reduced
to quasi-quadratic time) and with a number of samples logarithmic in the system size. The algorithm
is near-optimal in the sense that the number of samples required to achieve perfect reconstruction,
and the run time, scale exponentially with respect to the maximum node-degree and the maximum
coupling intensity, thus matching parametrically the information-theoretic lower bound of [21]. Our
statistical estimator has the structure of a consistent M-estimator implemented via convex optimization
with an additional thresholding procedure. Moreover it allows intuitive interpretation in terms of
what we coin the "interaction screening". We show that with a proper ℓ1-regularization our estimator
reconstructs couplings of an Ising model from a number of samples that is near-optimal. In addition,
our estimator does not rely on prior information on the model characteristics, such as maximum
coupling intensity and maximum degree.
The rest of the paper is organized as follows. In Section 2 we give a precise definition of the
structure estimation problem for the Ising models and we describe in detail our method for structure
reconstruction within the family of Ising models. The main results related to the reconstruction
guarantees are provided by Theorem 1 and Theorem 2. In Section 3 we explain the strategy and the
sequence of steps that we use to prove our main theorems. Proofs of Theorem 1 and Theorem 2
are summarized at the end of this Section. Section 4 illustrates performance of our reconstruction
algorithm via simulations. Here we show on a number of test cases that the sample complexity of the
suggested method scales logarithmically with the number of variables and exponentially with the
maximum coupling intensity. In Section 5 we discuss possible generalizations of the algorithm and
future work.
2 Main Results
Consider a graph G = (V, E) with p vertexes, where V = {1, . . . , p} is the vertex set and E ⊆ V × V
is the undirected edge set. Vertexes i ∈ V are associated with binary random variables σᵢ ∈ {−1, +1}
that are called spins. Edges (i, j) ∈ E are associated with non-zero real parameters θ*ᵢⱼ ≠ 0
that are called couplings. An Ising model is a probability distribution μ over spin configurations
σ = {σ₁, . . . , σₚ} that reads as follows:

    \mu(\sigma) = \frac{1}{Z} \exp\bigg( \sum_{(i,j) \in E} \theta^*_{ij} \sigma_i \sigma_j \bigg),    (1)
where Z is a normalization factor called the partition function,

    Z = \sum_{\sigma} \exp\bigg( \sum_{(i,j) \in E} \theta^*_{ij} \sigma_i \sigma_j \bigg).    (2)
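For intuition, the distribution (1)-(2) can be evaluated exactly for small p by enumerating all 2^p spin configurations (the same exhaustive weighted list used for exact sampling in Section 4). The sketch below is our illustration, not code from the paper; the function name and the edge-dictionary input format are assumptions made here.

```python
import itertools
import numpy as np

def ising_distribution(p, couplings):
    """Exact probabilities of all 2**p configurations under Eqs. (1)-(2).

    couplings: dict mapping an edge (i, j) to the coupling theta*_ij.
    Returns (configs, probs).
    """
    configs = np.array(list(itertools.product([-1, 1], repeat=p)))
    energy = np.zeros(len(configs))
    for (i, j), theta in couplings.items():
        energy += theta * configs[:, i] * configs[:, j]
    weights = np.exp(energy)        # unnormalized weights of Eq. (1)
    Z = weights.sum()               # partition function, Eq. (2)
    return configs, weights / Z

# Example: a ferromagnetic 3-spin chain.
configs, probs = ising_distribution(3, {(0, 1): 0.7, (1, 2): 0.7})
```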
Notice that even though the main innovation of this paper, the efficient "interaction screening"
estimator, can be constructed for the most general Ising models, we restrict our attention in this
paper to the special case of the Ising models with zero local magnetic field. This simplification is not
necessary and is done solely to simplify (generally rather bulky) algebra. Later in the text we will
thus refer to the zero magnetic-field model (1) simply as the Ising model.
2.1 Structure-Learning of Ising Models
Suppose that n sequences/samples of p spins {σ⁽ᵏ⁾}_{k=1,...,n} are observed. Let us assume that each
observed spin configuration σ⁽ᵏ⁾ = {σ₁⁽ᵏ⁾, . . . , σₚ⁽ᵏ⁾} is i.i.d. from (1). Based on these measurements/samples we aim to construct an estimator Ê of the edge set that reconstructs the structure
exactly with high probability, i.e.

    \mathbb{P}\big[ \hat{E} = E \big] \ge 1 - \varepsilon,    (3)

where ε ∈ (0, 1/2) is a prescribed reconstruction error.
We are interested in learning structures of Ising models in the high-dimensional regime where the
number of observations/samples is of the order n = O(ln p). A necessary condition on the number
of samples is given in [21, Thm. 1]. This condition depends explicitly on the smallest and largest
coupling intensity

    \alpha := \min_{(i,j) \in E} |\theta^*_{ij}|, \qquad \beta := \max_{(i,j) \in E} |\theta^*_{ij}|,    (4)

and on the maximal node degree

    d := \max_{i \in V} |\partial i|,    (5)

where the set of neighbors of a node i ∈ V is denoted by ∂i := {j | (i, j) ∈ E}.
According to [21], in order to reconstruct the structure of the Ising model with minimum coupling
intensity α, maximum coupling intensity β, and maximum degree d, the required number of samples
should be at least

    n \ge \max\left( \frac{e^{\beta d} \ln(pd/4 - 1)}{4 d \alpha\, e^{\beta}},\; \frac{\ln p}{2 \alpha \tanh\alpha} \right).    (6)

We see from Eq. (6) that the exponential dependence on the degree and the maximum coupling
intensity are both unavoidable. Moreover, when the minimal coupling is small, the number of samples
should scale at least as α⁻².
should scale at least as ??2 .
It remains unknown if the inequality (6) is achievable. It is shown in [21, Thm. 3] that there exists a
reconstruction algorithm with error probability ? 0, 12 if the number of samples is greater than
!2
?d 3e2?d + 1
n?
(16 log p + 4 ln (2/)) .
(7)
sinh2 (?/4)
3
Unfortunately, the existence proof presented in [21] is based on an exhaustive search with the
intractable maximum likelihood estimator and thus it does not guarantee actual existence of an
algorithm with low computational complexity. Notice also that the number of samples in (7) scales as
exp (4?d) when d and ? are asymptotically large and as ??4 when ? is asymptotically small.
2.2
Regularized Interaction Screening Estimator
The main contribution of this paper consists in presenting explicitly a structure-learning algorithm
that is of low complexity and which is near-optimal with respect to bounds (6) and (7). Our algorithm
reconstructs
the structure of the Ising model exactly, as stated in Eq. (3), with an error probability
? 0, 12 , and with a number of samples which is at most proportional to exp (6?d) and ??2 . (See
Theorem 1 and Theorem 2 below for mathematically accurate statements.) Our algorithm consists of
two steps. First, we estimate couplings in the vicinity of every node. Then, on the second step, we
threshold the estimated couplings that are sufficiently small to zero. Resulting zero coupling means
that the corresponding edge is not present.
Denote the set of couplings around node u ? V by the vector ??u ? Rp?1 . In this, slightly abusive
notation, we use the convention that if a coupling is equal to zero it reads as absence of the edge, i.e.
?
?ui
= 0 if and only if (u, i) ?
/ E. Note that if the node degree is bounded by d, it implies that the
vector of couplings ??u is non-zero in at most d entries.
Our estimator for couplings around node u ? V is based on the following loss function coined the
Interaction Screening Objective (ISO):
?
?
n
X
1X
(k)
(8)
exp ??
?ui ?u(k) ?i ? .
Sn (?u ) =
n
i?V \u
k=1
The ISO is an empirical weighted-average and its gradient is the vector of weighted pair-correlations
involving ?u . At ?u = ??u the exponential weight cancels exactly with the corresponding factor in the
distribution (1). As a result, weighted pair-correlations involving ?u vanish as if ?u was uncorrelated
with any other spins or completely ?screened? from them, which explains our choice for the name of
the loss function. This remarkable ?screening? feature of the ISO suggests the following choice of
the Regularized Interaction Screening Estimator (RISE) for the interaction vector around node u:
?b (?) = argmin Sn (? ) + ? k? k ,
(9)
u
u
? u ?Rp?1
u 1
where ? > 0 is a tunable parameter promoting sparsity through the additive `1 -penalty. Notice
that the ISO is the empirical average of an exponential function of ?u which implies it is convex.
Moreover, addition of the `1 -penalty preserves the convexity of the minimization objective in Eq. (9).
As expected, the performance of RISE does depend on the choice of the penalty parameter ?. If ? is
too small ?bu (?) is too sensitive to statistical fluctuations. On the other hand, if ? is too large ?bu (?)
has too much of a bias towards zero. In general, the optimal value of ? is hard to guess. Luckily, the
following theorem provides strong guarantees on the square error for the case when ? is chosen to be
sufficiently large.
Theorem 1 (Square Error of RISE). Let ? (k) k=1,...,n be n realizations of p spins drawn i.i.d. from
an Ising model with maximum degree d and maximum coupling intensity ?. Then for any node u ? V
and for any 1 > 0, the square
error of the Regularized Interaction Screening Estimator (9) with
q
ln(3p/1 )
penalty parameter ? = 4
is bounded with probability at least 1 ? 1 by
n
s
?
ln 3p
b
1
,
(10)
?u (?) ? ??u
? 28 d (d + 1) e3?d
n
2
2
2
whenever n ? 214 d2 (d + 1) e6?d ln 3p
1 .
Our structure estimator (for the second step of the algorithm), Structure-RISE, takes RISE output and
thresholds couplings whose absolute value is less than ?/2 to zero:
n
o
b (?, ?) = (i, j) ? V ? V | ?bij (?) + ?bji (?) ? ? .
E
(11)
Performance of the Structure-RISE is fully quantified by the following Theorem.
4
Theorem 2 (Structure Learning of Ising Models). Let ? (k) k=1,...,n be n realizations of p spins
drawn i.i.d. from an Ising model with maximum degree d, maximum coupling intensity ? and
q minimal
coupling intensity ?. Then for any 2 > 0, Structure-RISE with penalty parameter ? = 4
reconstructs the edge-set perfectly with probability
b (?, ?) = E ? 1 ? 2 ,
P E
whenever n ? max d/16, ?
?2
ln(3p2 /2 )
n
(12)
3
2
218 d (d + 1) e6?d ln 3p
2 .
Proofs of Theorem 1 and Theorem 2 are given in Subsection 3.3.
Theorem 1 states that RISE recovers not only the structure but also the correct value of the couplings
up to an error based on the available samples. It is possible to improve the square-error bound (10)
even further by first, running Structure-RISE to recover edges, and then re-running RISE with ? = 0
for the remaining non-zero couplings.
The computational complexity of RISE
is equal to the complexity of minimizing the convex ISO and,
as such, it scales at most as O np3 . Therefore, computational complexity of Structure-RISE scales
at most as O np4 simply because one has to call RISE at every node. We believe that this runningtime estimate can be proven to be quasi-quadratic when using first-order minimization-techniques, in
the spirit of [23]. We have observed through
numerical experiments that such techniques implement
Structure-RISE with running-time O np2 .
Notice that in order to implement RISE there is no need for prior knowledge on the graph parameters.
This is a considerable advantage in practical applications where the maximum degree or bounds on
couplings are often unknown.
3
Analysis
The Regularized Interaction Screening Estimator (9) is from the class of the so-called regularized
M-estimators. Negahban et al. proposed in [22] a framework to analyze the square error of such
estimators. As per [22], enforcing only two conditions on the loss function is sufficient to get a handle
on the square error of an `1 -regularized M-estimator.
The first condition links the choice of the penalty parameter to the gradient of the objective function.
Condition 1. The `1 -penalty parameter strongly enforces regularization if it is greater than any
partial derivatives of the objective function at ?u = ?u? , i.e.
? ? 2 k?Sn (?u? )k? .
(13)
Condition 1 guarantees that if the vector of couplings ?u? has at most d non-zero entries, then the
estimation difference ?bu (?) ? ??u lies within the set
n
o
?
K := ? ? Rp?1 | k?k1 ? 4 d k?k2 .
(14)
The second condition ensure that the objective function is strongly convex in a restricted subset of
Rp?1 . Denote the reminder of the first-order Taylor expansion of the objective function by
?Sn (?u , ?u? ) := Sn (?u? + ?u ) ? Sn (?u? ) ? h?Sn (?u? ) , ?u i ,
(15)
p?1
where ?u ? R
is an arbitrary vector. Then the second condition reads as follows.
Condition 2. The objective function is restricted strongly convex with respect to K on a ball of
radius R centered at ?u = ?u? , if for all ?u ? K such that k?u k2 ? R, there exists a constant ? > 0
such that
2
?Sn (?u , ?u? ) ? ? k?u k2 .
(16)
Strong regularization and restricted strong convexity enables us to control that the minimizer ?bu of
the full objective (9) lies in the vicinity of the sparse vector of parameters ?u? . The precise formulation
is given in the proposition following from [22, Thm. 1].
Proposition 1.
? If the `1 -regularized M-estimator of the form (9) satisfies Condition 1 and Condition
2 with R > 3 d ?? then the square-error is bounded by
? ?
b
(17)
?u ? ??u
? 3 d .
?
2
5
3.1
Gradient Concentration
Like the ISO (8), its gradient in any component l ? V \ u is an empirical average
n
1 X (k)
?
Sn (?u ) =
Xul (?u ) ,
??ul
n
(18)
k=1
(k)
where the random variables Xul (?u ) are i.i.d and they are related to the spin configurations according
to
?
?
X
Xul (?u ) = ??u ?l exp ??
?ui ?u ?i ? .
(19)
i?V \u
In order to prove that the ISO gradient concentrates we have to state few properties of the support,
the mean and the variance of the random variables (19), expressed in the following three Lemmas.
The first of the Lemmas states that at ?u = ??u , the random variable Xul (??u ) has zero mean.
Lemma 1. For any Ising model with p spins and for all l 6= u ? V
E [Xul (??u )] = 0.
(20)
As a direct corollary of the Lemma 1, ?u = ??u is always a minimum of the averaged ISO (8).
The second Lemma proves that at ?u = ??u , the random variable Xul (??u ) has a variance equal to one.
Lemma 2. For any Ising model with p spins and for all l 6= u ? V
h
i
2
E Xul (??u ) = 1.
(21)
The next lemma states that at ?u = ??u , the random variable Xul (??u ) has a bounded support.
Lemma 3. For any Ising model with p spins, with maximum degree d and maximum coupling
intensity ?, it is guaranteed that for all l 6= u ? V
|Xul (??u )| ? exp (?d) .
(22)
With Lemma 1, 2 and 3, and using Berstein?s inequality we are now in position to prove that every
partial derivative of the ISO concentrates uniformly around zero as the number of samples grows.
Lemma 4. For any Ising model with p spins, with maximum degree d and maximum coupling
intensity ?. For any 3 > 0, if the number of observation satisfies n ? exp (2?d) ln 2p
3 , then the
following bound holds with probability at least 1 ? 3 :
s
ln 2p
3
k?Sn (??u )k? ? 2
.
(23)
n
3.2
Restricted Strong-Convexity
The remainder of the first-order Taylor-expansion of the ISO, defined in Eq. (15) is explicitly
computed
?
! ?
n
X
X
X
1
(k)
(k)
? (k)
?Sn (?u , ?? ) =
exp ?
?ui
?u ?i
f?
?ui ?u(k) ?i ? ,
(24)
n
k=1
where f (z) := e
?z
i?V \u
i??u
? 1 + z.
In the following lemma we prove that Eq. (24) is controlled by a much simpler expression using a
lower-bound on f (z).
Lemma 5. For all ?u ? Rp?1 , the remainder of the first-order Taylor expansion admits the following
lower-bound
e??d
?Sn (?u , ?? ) ?
?> H n ?u
(25)
2 + k?u k1 u
Pn
(k) (k)
n
where H n is an empirical covariance matrix with elements Hij
= n1 k=1 ?i ?j for i, j ? V \ u.
6
Lemma 5 enables us to control the randomness in δSₙ(Δᵤ, θ*ᵤ) through the simpler matrix Hⁿ that
is independent of Δᵤ. This last point is crucial, as we show in the next lemma that Hⁿ concentrates,
independently of Δᵤ, towards its mean.

Lemma 6. Consider an Ising model with p spins, with maximum degree d and maximum coupling
intensity β. Let δ > 0, ε₄ > 0 and n ≥ (2/δ²) ln(p²/ε₄). Then with probability greater than 1 − ε₄, we have
for all i, j ∈ V \ u

    \big| H^n_{ij} - H_{ij} \big| \le \delta,    (26)

where the matrix H is the covariance matrix with elements Hᵢⱼ = E[σᵢ σⱼ], for i, j ∈ V \ u.
The last ingredient that we need is a proof that the smallest eigenvalue of the covariance matrix H is
bounded away from zero independently of the dimension p. Equivalently, the next lemma shows that
the quadratic form associated with H is non-degenerate regardless of the value of p.

Lemma 7. Consider an Ising model with p spins, with maximum degree d and maximum coupling
intensity β. For all Δᵤ ∈ ℝ^{p−1} the following bound holds:

    \Delta_u^{\top} H \Delta_u \ge \frac{e^{-2\beta d}}{d + 1}\, \|\Delta_u\|_2^2.    (27)
We stress that Lemma 7 is a deterministic result valid for all Δᵤ ∈ ℝ^{p−1}. We are now in position to
prove the restricted strong convexity of the ISO.

Lemma 8. Consider an Ising model with p spins, with maximum degree d and maximum coupling
intensity β. For all ε₄ > 0 and R > 0, when n ≥ 2¹¹ d² (d + 1)² e^{4βd} ln(p²/ε₄) the ISO (8) satisfies, with
probability at least 1 − ε₄, the restricted strong convexity condition

    \delta S_n(\Delta_u, \theta^*_u) \ge \frac{e^{-3\beta d}}{4 (d + 1)\big( 1 + 2\sqrt{d}\, R \big)}\, \|\Delta_u\|_2^2,    (28)

for all Δᵤ ∈ ℝ^{p−1} such that ‖Δᵤ‖₁ ≤ 4√d ‖Δᵤ‖₂ and ‖Δᵤ‖₂ ≤ R.
3.3 Proof of the main Theorems
Proof of Theorem 1 (Square Error of RISE). We seek to apply Proposition 1 to the Regularized Interaction
Screening Estimator (9). Using ε₃ = 2ε₁/3 in Lemma 4 and letting λ = 4√(ln(3p/ε₁)/n), it follows
that Condition 1 is satisfied with probability greater than 1 − 2ε₁/3 whenever n ≥ e^{2βd} ln(3p/ε₁). Using
ε₄ = ε₁/3 in Lemma 8, and observing that 12√d λ e^{3βd}(d + 1)(1 + 2√d R) < R for R = 2/√d
and n ≥ 2¹⁴ d² (d + 1)² e^{6βd} ln(3p/ε₁), we conclude that Condition 2 is satisfied with probability greater
than 1 − ε₁/3. Theorem 1 then follows by using a union bound and then applying Proposition 1.

The proof of Theorem 2 becomes an immediate application of Theorem 1 for achieving an estimation
of couplings at each node with squared error of α/2 and with probability 1 − ε₁ = 1 − ε₂/p.
4 Numerical Results
We test the performance of Struct-RISE, with the strength of the ℓ1-regularization parametrized by
λ = 4√((1/n) ln(3p²/ε)), on Ising models over two-dimensional grids with periodic boundary conditions
(thus the degree of every node in the graph is 4). We have observed that this topology is one of the
hardest for the reconstruction problem. We are interested in finding the minimal number of samples,
n_min, such that the graph is perfectly reconstructed with probability 1 − ε ≥ 0.95. In our numerical
experiments, we recover the value of n_min as the minimal n for which Struct-RISE outputs the perfect
structure 45 times from 45 different trials with n samples, thus guaranteeing that the probability of
perfect reconstruction is greater than 0.95 with a statistical confidence of at least 90%.
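The search for n_min can be organized as a simple doubling loop around the 45-out-of-45 success rule. The sketch below is our illustration; it assumes a `structure_rise` routine as in the earlier sketch and a user-supplied i.i.d. sampler, and it only resolves n_min up to a factor of two (a bisection between n/2 and n would refine it).

```python
import numpy as np

def estimate_n_min(sample_fn, true_edges, p, alpha, eps=0.05, trials=45, n0=100):
    """Doubling search for n_min under the 45-out-of-45 success rule.

    sample_fn(n) must return an (n, p) array of i.i.d. spin samples;
    `structure_rise` is assumed to follow the earlier sketch.
    """
    n = n0
    while True:
        lam = 4 * np.sqrt(np.log(3 * p ** 2 / eps) / n)
        if all(structure_rise(sample_fn(n), lam, alpha) == true_edges
               for _ in range(trials)):
            return n
        n *= 2
```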
We first verify the logarithmic scaling of n_min with respect to the number of spins p. The couplings
are chosen uniform and positive, θ*ᵢⱼ = 0.7. This choice ensures that samples generated by Glauber
dynamics are i.i.d. according to (1). Values of n_min for p ∈ {9, 16, 25, 36, 49, 64} are shown on the
left in Figure 1. The empirical scaling, ≈ 1.1 × 10⁵ ln p, is orders of magnitude better than the
rather conservative prediction of the theory for this model, 3.2 × 10¹⁵ ln p.

Figure 1: Left: Linear-exponential plot showing the observed relation between n_min and p. The graph
is a √p × √p two-dimensional grid with uniform and positive couplings θ* = 0.7. Right: Linear-exponential
plot showing the observed relation between n_min and β. The graph is the two-dimensional
4 × 4 grid. In red the couplings are uniform and positive and in blue the couplings have uniform
intensity but random sign.
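For larger grids, exact enumeration is out of reach and one resorts to Glauber dynamics as mentioned above. A standard single-site Glauber sampler looks as follows; this is our sketch, and the number of sweeps is an assumption that must exceed the mixing time for the returned samples to be effectively i.i.d.

```python
import numpy as np
rng = np.random.default_rng(1)

def glauber_sample(theta, sweeps=1000):
    """One approximate sample from (1) by single-site Glauber dynamics.

    theta: (p, p) symmetric coupling matrix with zero diagonal.
    """
    p = theta.shape[0]
    s = rng.choice([-1, 1], size=p)
    for _ in range(sweeps):
        for u in rng.permutation(p):
            h = theta[u] @ s                     # local field sum_j theta_uj s_j
            # Conditional law: P(s_u = 1 | rest) = 1 / (1 + exp(-2 h)).
            s[u] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1
    return s
```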
We also test the exponential scaling of n_min with respect to the maximum coupling intensity β. The
test is conducted over two different settings, both with p = 16 spins: the ferromagnetic case where all
couplings are uniform and positive, and the spin-glass case where the sign of each coupling is assigned
uniformly at random. In both cases the absolute value of the couplings, |θ*ᵢⱼ|, is uniform and equal
to β. To ensure that the samples are i.i.d., we sample directly from the exhaustive weighted list of
the 2¹⁶ possible spin configurations. The structure is recovered by thresholding the reconstructed
couplings at the value α/2 = β/2.
Experimental values of n_min for different values of the maximum coupling intensity β are shown
on the right in Fig. 1. The empirically observed exponential dependence on β is matched best by
exp(12.8β) in the ferromagnetic case and by exp(5.6β) in the case of the spin glass. The theoretical
bound for d = 4 predicts exp(24β). We observe that the difference in sample complexity depends
significantly on the type of interaction. An interesting observation one can make based on these
experiments is that the case which is harder from the sample-generating perspective is easier for
learning, and vice versa.
5 Conclusions and Path Forward
In this paper we construct and analyze the Regularized Interaction Screening Estimator (9). We show
that the estimator is computationally efficient and needs an optimal number of samples for learning
Ising models. The RISE estimator does not require any prior knowledge about the model parameters
for implementation and it is based on the minimization of the loss function (8), that we call the
Interaction Screening Objective. The ISO is an empirical average (over samples) of an objective
designed to screen an individual spin/variable from its factor-graph neighbors.
Even though we focus in this paper solely on learning pair-wise binary models, the "interaction
screening" approach we introduce here is generic. The approach extends to learning other Graphical
Models, including those over higher (discrete, continuous or mixed) alphabets and involving high-order
(beyond pair-wise) interactions. These generalizations are built around the same basic idea
pioneered in this paper: the interaction screening objective is (a) minimized over candidate GM
parameters at the actual values of the parameters we aim to learn; and (b) it is an empirical average over
samples. In the future, we plan to explore further theoretical and experimental power, characteristics
and performance of the generalized screening estimator.
Acknowledgment: We are thankful to Guy Bresler and Andrea Montanari for valuable discussions,
comments and insights. The work was supported by funding from the U.S. Department of Energy's
Office of Electricity as part of the DOE Grid Modernization Initiative.
References
[1] D. Marbach, J. C. Costello, R. Kuffner, N. M. Vega, R. J. Prill, D. M. Camacho, K. R. Allison, M. Kellis, J. J. Collins, and G. Stolovitzky, "Wisdom of crowds for robust gene network inference," Nat Meth, vol. 9, pp. 796-804, Aug 2012.
[2] F. Morcos, A. Pagnani, B. Lunt, A. Bertolino, D. S. Marks, C. Sander, R. Zecchina, J. N. Onuchic, T. Hwa, and M. Weigt, "Direct-coupling analysis of residue coevolution captures native contacts across many protein families," Proceedings of the National Academy of Sciences, vol. 108, no. 49, pp. E1293-E1301, 2011.
[3] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek, "Weak pairwise correlations imply strongly correlated network states in a neural population," Nature, vol. 440, pp. 1007-1012, Apr 2006.
[4] S. Roth and M. J. Black, "Fields of experts: a framework for learning image priors," in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 2, pp. 860-867, June 2005.
[5] N. Eagle, A. S. Pentland, and D. Lazer, "Inferring friendship network structure by using mobile phone data," Proceedings of the National Academy of Sciences, vol. 106, no. 36, pp. 15274-15278, 2009.
[6] M. He and J. Zhang, "A dependency graph approach for fault detection and localization towards secure smart grid," IEEE Transactions on Smart Grid, vol. 2, pp. 342-351, June 2011.
[7] D. Deka, S. Backhaus, and M. Chertkov, "Structure learning and statistical estimation in distribution networks," submitted to IEEE Control of Networks; arXiv:1501.04131; arXiv:1502.07820, 2015.
[8] C. Chow and C. Liu, "Approximating discrete probability distributions with dependence trees," IEEE Transactions on Information Theory, vol. 14, pp. 462-467, May 1968.
[9] A. d'Aspremont, O. Banerjee, and L. E. Ghaoui, "First-order methods for sparse covariance selection," SIAM Journal on Matrix Analysis and Applications, vol. 30, no. 1, pp. 56-66, 2008.
[10] J. K. Johnson, D. Oyen, M. Chertkov, and P. Netrapalli, "Learning planar Ising models," Journal of Machine Learning, in press; arXiv:1502.00916, 2015.
[11] T. Tanaka, "Mean-field theory of Boltzmann machine learning," Phys. Rev. E, vol. 58, pp. 2302-2310, Aug 1998.
[12] H. J. Kappen and F. d. B. Rodríguez, "Efficient learning in Boltzmann machines using linear response theory," Neural Computation, vol. 10, no. 5, pp. 1137-1156, 1998.
[13] Y. Roudi, J. Tyrcha, and J. Hertz, "Ising model for neural data: Model quality and approximate methods for extracting functional connectivity," Phys. Rev. E, vol. 79, p. 051915, May 2009.
[14] F. Ricci-Tersenghi, "The Bethe approximation for solving the inverse Ising problem: a comparison with other inference methods," Journal of Statistical Mechanics: Theory and Experiment, vol. 2012, no. 08, p. P08015, 2012.
[15] G. Bresler, D. Gamarnik, and D. Shah, "Hardness of parameter estimation in graphical models," in Advances in Neural Information Processing Systems 27 (Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, eds.), pp. 1062-1070, Curran Associates, Inc., 2014.
[16] A. Montanari, "Computational implications of reducing data to sufficient statistics," Electron. J. Statist., vol. 9, no. 2, pp. 2370-2390, 2015.
[17] P. Ravikumar, M. J. Wainwright, and J. D. Lafferty, "High-dimensional Ising model selection using ℓ1-regularized logistic regression," Ann. Statist., vol. 38, pp. 1287-1319, 2010.
[18] A. Montanari and J. A. Pereira, "Which graphical models are difficult to learn?," in Advances in Neural Information Processing Systems 22 (Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, eds.), pp. 1303-1311, Curran Associates, Inc., 2009.
[19] G. Bresler, E. Mossel, and A. Sly, "Reconstruction of Markov random fields from samples: Some observations and algorithms," SIAM Journal on Computing, vol. 42, no. 2, pp. 563-578, 2013.
[20] G. Bresler, "Efficiently learning Ising models on arbitrary graphs," in Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pp. 771-782, ACM, 2015.
[21] N. P. Santhanam and M. J. Wainwright, "Information-theoretic limits of selecting binary graphical models in high dimensions," IEEE Transactions on Information Theory, vol. 58, pp. 4117-4134, July 2012.
[22] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu, "A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers," Statist. Sci., vol. 27, pp. 538-557, 2012.
[23] A. Agarwal, S. Negahban, and M. J. Wainwright, "Fast global convergence of gradient methods for high-dimensional statistical recovery," Ann. Statist., vol. 40, pp. 2452-2482, 2012.
General Tensor Spectral Co-clustering
for Higher-Order Data
Tao Wu
Purdue University
[email protected]
Austin R. Benson
Stanford University
[email protected]
David F. Gleich
Purdue University
[email protected]
Abstract
Spectral clustering and co-clustering are well-known techniques in data analysis,
and recent work has extended spectral clustering to square, symmetric tensors
and hypermatrices derived from a network. We develop a new tensor spectral
co-clustering method that simultaneously clusters the rows, columns, and slices
of a nonnegative three-mode tensor and generalizes to tensors with any number of
modes. The algorithm is based on a new random walk model which we call the
super-spacey random surfer. We show that our method out-performs state-of-the-art
co-clustering methods on several synthetic datasets with ground truth clusters and
then use the algorithm to analyze several real-world datasets.
1 Introduction
Clustering is a fundamental task in machine learning that aims to assign closely related entities to
the same group. Traditional methods optimize some aggregate measure of the strength of pairwise
relationships (e.g., similarities) between items. Spectral clustering is a particularly powerful technique
for computing the clusters when the pairwise similarities are encoded into the adjacency matrix of a
graph. However, many graph-like datasets are more naturally described by higher-order connections
among several entities. For instance, multilayer or multiplex networks describe the interactions
between several graphs simultaneously with node-node-layer relationships [17]. Nonnegative tensors
are a common representation for many of these higher-order datasets. For instance the i, j, k entry in
a third-order tensor might represent the similarity between items i and j in layer k.
Here we develop the General Tensor Spectral Co-clustering (GTSC) framework for clustering tensor
data. The algorithm takes as input a nonnegative tensor, which may be sparse, non-square, and
asymmetric, and outputs subsets of indices from each dimension (co-clusters). Underlying our
method is a new stochastic process that models higher-order Markov chains, which we call a super-spacey random walk. This is used to generalize ideas from spectral clustering based on random walks.
We introduce a variant on the well-known conductance measure from spectral graph partitioning [24]
that we call biased conductance and describe how this provides a tensor partition quality metric; this
is akin to Chung's use of circulations to spectrally-partition directed graphs [7]. Essentially, biased
conductance is the exit probability from a set following our new super-spacey random walk model.
We use experiments on both synthetic and real-world problems to validate the effectiveness of our
method.¹ For the synthetic experiments, we devise a "planted cluster" model for tensors and show that
GTSC has superior performance compared to other state-of-the-art clustering methods in recovering
the planted clusters. In real-world tensor data experiments, we find that our GTSC framework
identifies stop-words and semantically independent sets in n-gram tensors as well as worldwide and
regional airlines and airports in a flight multiplex network.
¹ Code and data for this paper are available at: https://github.com/wutao27/GtensorSC
1.1 Related work
The Tensor Spectral Clustering (TSC) algorithm [4], another generalization of spectral methods to
higher-order graph data [4], is closely related. Both the perspective and high-level view are similar,
but the details differ in important ways. For instance, TSC was designed for the case when the
higher-order tensor recorded the occurrences of small subgraph patterns within the network. This
imposes limitations: because the tensor arose from some underlying graph, the partitioning metric
was designed explicitly for a graph. Thus, the applications are limited in
scope and cannot model, for example, the airplane-airplane-airport multiplex network we analyze
in Section 3.2. Second, for sparse data, the model used by TSC required a correction term with
magnitude proportional to the sparsity in the tensor. In sparse tensors, this makes it difficult to
accurately identify clusters, which we show in Section 3.1.
Most other approaches to tensor clustering proceed by using low-rank factorizations [15, 21] or
a k-means objective [16]. In contrast, our work is based on a stochastic interpretation (escape
probabilities from a set), in the spirit of random walks in spectral clustering for graphs. There are
also several methods specific to clustering multiplex networks [27, 20] and clustering graphs with
multiple entities [11, 2]. Our method handles general tensor data, which includes these types of
datasets as a special case. Hypergraphs clustering [14] can also model the higher-order structures of
the data, and in the case of tensor data, it is approximated by a standard weighted graph.
1.2 Background on spectral clustering of graphs from the perspective of random walks
We first review graph clustering methods from the view of graph cuts and random walks, and then
review the standard spectral clustering method using sweep cuts. In Section 2, we generalize these
notions to higher-order data in order to develop our GTSC framework.
Let $A \in \mathbb{R}_+^{n \times n}$ be the adjacency matrix of an undirected graph $G = (V, E)$ and let $n = |V|$ be the number of nodes in the graph. Define the diagonal matrix of degrees of vertices in $V$ as $D = \mathrm{diag}(Ae)$, where $e$ is the vector with all ones. The graph Laplacian is $L = D - A$ and the transition matrix is $P = AD^{-1} = A^T D^{-1}$. The transition matrix represents the transition probabilities of a random walk on the graph. If a walker is at node $j$, it transitions to node $i$ with probability $P_{ij} = A_{ji}/D_{jj}$.
Conductance. One of the most widely-used quality metrics for partitioning a graph's vertices into two sets $S$ and $\bar{S} = V \setminus S$ is conductance [24]. Intuitively, conductance measures the ratio of the number of edges in the graph that go between $S$ and $\bar{S}$ to the number of edges in $S$ or $\bar{S}$. Formally, we define conductance as:
$$\phi(S) = \mathrm{cut}(S) \big/ \min\left(\mathrm{vol}(S), \mathrm{vol}(\bar{S})\right), \qquad (1)$$
where
$$\mathrm{cut}(S) = \sum_{i \in S,\, j \in \bar{S}} A_{ij} \quad \text{and} \quad \mathrm{vol}(S) = \sum_{i \in S,\, j \in V} A_{ij}. \qquad (2)$$
A set $S$ with small conductance is a good partition $(S, \bar{S})$. The following well-known observation relates conductance to random walks on the graph.

Observation 1 ([18]) Let $G$ be undirected, connected, and not bipartite. Start a random walk $(Z_t)_{t \in \mathbb{N}}$ where the initial state $Z_0$ is randomly chosen following the stationary distribution of the random walk. Then for any set $S \subset V$,
$$\phi(S) = \max\left(\Pr(Z_1 \in \bar{S} \mid Z_0 \in S),\ \Pr(Z_1 \in S \mid Z_0 \in \bar{S})\right).$$

This provides an alternative view of conductance: it measures the probability that one step of a random walk will traverse between $S$ and $\bar{S}$. This random walk view, in concert with the super-spacey random walk, will serve as the basis for our biased conductance idea to partition tensors in Section 2.4.
Partitioning with a sweep cut. Finding the set of minimum conductance is an NP-hard combinatorial optimization problem [26]. However, there are real-valued relaxations of the problem that are tractable to solve and provide a guaranteed approximation [19, 9]. The most well known computes an eigenvector called the Fiedler vector and then uses a sweep cut to identify a partition based on this eigenvector.

The Fiedler eigenvector $z$ solves $Lz = \lambda D z$, where $\lambda$ is the second smallest generalized eigenvalue. This can be equivalently formulated in terms of the random walk transition matrix $P$. Specifically,
$$Lz = \lambda Dz \;\Leftrightarrow\; (I - D^{-1}A)z = \lambda z \;\Leftrightarrow\; z^T P = (1 - \lambda) z^T.$$

The sweep cut procedure to identify a low-conductance set $S$ from $z$ is as follows:
1. Sort the vertices by $z$ as $z_{\sigma_1} \le z_{\sigma_2} \le \cdots \le z_{\sigma_n}$.
2. Consider the $n - 1$ candidate sets $S_k = \{\sigma_1, \sigma_2, \ldots, \sigma_k\}$ for $1 \le k \le n - 1$.
3. Choose $S = \arg\min_{S_k} \phi(S_k)$ as the solution set.

The solution set $S$ from this algorithm satisfies the celebrated Cheeger inequality [19, 8]: $\phi(S) \le 2\sqrt{\phi_{\mathrm{opt}}}$, where $\phi_{\mathrm{opt}} = \min_{S \subset V} \phi(S)$ is the minimum conductance over any set of nodes. Computing $\phi(S_k)$ for all $k$ only takes time linear in the number of edges in the graph because $S_{k+1}$ and $S_k$ differ only in the vertex $\sigma_{k+1}$.
To summarize, the spectral method requires two components: the second left eigenvector of P and
the conductance criterion. We generalize these ideas to tensors in the following section.
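To make the two components concrete, the following sketch runs the sweep cut on a small dense adjacency matrix with NumPy. It is our own illustration under simplifying assumptions (dense matrices, every node has at least one edge, and a naive conductance loop rather than the linear-time incremental update), not the implementation used for the experiments.

```python
import numpy as np

def conductance(A, mask):
    # phi(S) = cut(S) / min(vol(S), vol(S_bar)); S given as a boolean mask
    cut = A[mask][:, ~mask].sum()
    return cut / min(A[mask].sum(), A[~mask].sum())

def sweep_cut(A):
    n = A.shape[0]
    P = A / A.sum(axis=0, keepdims=True)          # column-stochastic P = A D^{-1}
    vals, vecs = np.linalg.eig(P.T)               # left eigenvectors of P
    z = vecs[:, np.argsort(-vals.real)[1]].real   # second-largest eigenvalue
    order = np.argsort(z)
    best_mask, best_phi = None, np.inf
    for k in range(1, n):                         # sweep the n-1 candidate sets
        mask = np.zeros(n, dtype=bool)
        mask[order[:k]] = True
        phi = conductance(A, mask)
        if phi < best_phi:
            best_mask, best_phi = mask, phi
    return best_mask, best_phi
```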
2 A higher-order spectral method for tensor co-clustering
We now generalize the ideas from spectral graph partitioning to nonnegative tensor data. We first
review our notation for tensors and then show how tensor data can be interpreted as a higher-order
Markov chain. We briefly review Tensor Spectral Clustering [4] before introducing the new super-spacey random walk that we use here. This super-spacey random walk will allow us to compute a
vector akin to the Fiedler vector for a tensor and to generalize conductance to tensors. Furthermore,
we generalize the ideas from co-clustering in bipartite graph data [10] to rectangular tensors.
2.1 Preliminaries and tensor notation
We use $T$ to denote a tensor. As a generalization of a matrix, $T$ has $m$ indices (making $T$ an $m$th-order or $m$-mode tensor), with the $(i_1, i_2, \ldots, i_m)$ entry denoted $T_{i_1, i_2, \ldots, i_m}$. We will work with non-negative tensors where $T_{i_1, i_2, \ldots, i_m} \ge 0$. We call a subset of the tensor entries with all but the first element fixed a column of the tensor. For instance, the $j, k$ column of a three-mode tensor $T$ is $T_{:,j,k}$. A tensor is square if the dimension of all the modes is equal and rectangular if not, and a square tensor is symmetric if it is equal for any permutation of the indices. For simplicity in the remainder of our exposition, we will focus on three-mode tensors. However, all of our ideas generalize to an arbitrary number of modes. (See, e.g., the work of Gleich et al. [13] and Benson et al. [5] for representative examples of how these generalizations work.) Finally, we use two operations between a tensor and a vector. First, a tensor-vector product with a three-mode tensor can output a vector, which we denote by:
$$y = T x^2 \;\Leftrightarrow\; y_i = \sum_{j,k} T_{i,j,k}\, x_j x_k.$$
Second, a tensor-vector product can also produce a matrix, which we denote by:
$$A = T[x] \;\Leftrightarrow\; A_{i,j} = \sum_k T_{i,j,k}\, x_k.$$
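As a concrete reference point, here is a small sketch of the two products above. It is our own illustration, assuming the sparse tensor is stored as COO triples: integer NumPy arrays I, J, K and a value array V.

```python
import numpy as np

def tensor_apply(I, J, K, V, x, n):
    """y = T x^2, i.e. y_i = sum_{j,k} T_{i,j,k} x_j x_k."""
    y = np.zeros(n)
    np.add.at(y, I, V * x[J] * x[K])   # accumulate over repeated i indices
    return y

def tensor_collapse(I, J, K, V, x, n):
    """A = T[x], i.e. A_{i,j} = sum_k T_{i,j,k} x_k."""
    A = np.zeros((n, n))
    np.add.at(A, (I, J), V * x[K])
    return A
```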
2.2 Forming higher-order Markov chains from nonnegative tensor data
Recall from Section 1.2 that we can form the transition matrix for a Markov chain from a square non-negative matrix $A$ by normalizing the columns of the matrix $A^T$. We can generalize this idea to define a higher-order Markov chain by normalizing a square tensor. This leads to a probability transition tensor $P$:
$$P_{i,j,k} = T_{i,j,k} \Big/ \sum_i T_{i,j,k}, \qquad (3)$$
where we assume $\sum_i T_{i,j,k} > 0$. In Section 2.3, we will discuss the sparse case where the column $T_{:,j,k}$ may be entirely zero. When that case does not arise, entries of $P$ can be interpreted as the transition probabilities of a second-order Markov chain $(Z_t)_{t \in \mathbb{N}}$:
$$P_{i,j,k} = \Pr(Z_{t+1} = i \mid Z_t = j, Z_{t-1} = k).$$
In other words, if the last two states were $j$ and $k$, then the next state is $i$ with probability $P_{i,j,k}$.
It is possible to turn any higher-order Markov chain into a first-order Markov chain on the product state space of all ordered pairs $(i, j)$. The new Markov chain moves to the state-pair $(i, j)$ from $(j, k)$ with probability $P_{i,j,k}$. Computing the Fiedler vector associated with this chain would be one approach to tensor clustering. However, there are two immediate problems. First, the eigenvector is of size $n^2$, which quickly becomes infeasible to store. Second, the eigenvector gives information about the product space, not the original state space. (In future work we plan to explore insights from marginals of this distribution.)
Recent work uses the spacey random walk and spacey random surfer stochastic processes to circumvent these issues [5]. The process is non-Markovian and generates a sequence of states $X_t$ as follows. After arriving at state $X_t$, the walker promptly "spaces out" and forgets the state $X_{t-1}$, yet it still wants to transition according to the higher-order transitions $P$. Thus, it invents a state $Y_t$ by drawing a random state from its history and then transitions to state $X_{t+1}$ with probability $P_{X_{t+1}, X_t, Y_t}$. We denote $\mathrm{Ind}\{\cdot\}$ as the indicator event and $H_t$ as the history of the process up to time $t$,² then
$$\Pr(Y_t = j \mid H_t) = \frac{1}{t + n}\left(1 + \sum_{r=1}^{t} \mathrm{Ind}\{X_r = j\}\right). \qquad (4)$$
In this case, we assume that the process has a non-zero probability of picking any state by inflating its history count by 1 visit. The spacey random surfer is a generalization where the walk follows the above process with probability $\alpha$ and teleports at random following a stochastic vector $v$ with probability $1 - \alpha$. This is akin to how the PageRank random walk includes teleportation.
Limiting stationary distributions are solutions to the multilinear PageRank problem [13]:
$$\alpha P x^2 + (1 - \alpha) v = x, \qquad (5)$$
and the limiting distribution $x$ represents the stationary distribution of the transition matrix $P[x]$ [5]. The transition matrix $P[x]$ asymptotically approximates the spacey walk or spacey random surfer. Thus, it is feasible to compute an eigenvector of the $P[x]$ matrix and use it with the sweep cut procedure on a generalized notion of conductance. However, this derivation assumes that all $n^2$ columns of $T$ were non-zero, which does not occur in real-world datasets. The TSC method adjusted the tensor $T$ and replaced any columns of all zeros with the uniform distribution vector [4]. Because the number of zero-columns may be large, this strategy dilutes the information in the eigenvector (see Appendix D.1). We deal with this issue more generally in the following section, and note that our new solution outperforms TSC in our experiments (Section 3).
2.3 A stochastic process for sparse tensors
Here we consider another model of the random surfer that avoids the issue of undefined transitions (which correspond to columns of $T$ that are all zero) entirely. If the surfer attempts to use an undefined transition, then the surfer moves to a random state drawn from history. Formally, define the set of feasible states by
$$F = \Big\{(j, k) \;\Big|\; \sum_i T_{i,j,k} > 0\Big\}. \qquad (6)$$
Here, the set $F$ denotes all the columns in $T$ that are non-zero. The transition probabilities of our proposed stochastic process are given by
$$\Pr(X_{t+1} = i \mid X_t = j, H_t) = (1 - \alpha) v_i + \alpha \sum_k \Pr(X_{t+1} = i \mid X_t = j, Y_t = k, H_t)\,\Pr(Y_t = k \mid H_t) \qquad (7)$$
$$\Pr(X_{t+1} = i \mid X_t = j, Y_t = k, H_t) = \begin{cases} T_{i,j,k} \Big/ \sum_i T_{i,j,k} & (j, k) \in F \\[4pt] \frac{1}{n + t}\left(1 + \sum_{r=1}^{t} \mathrm{Ind}\{X_r = i\}\right) & (j, k) \notin F, \end{cases} \qquad (8)$$
where $v_i$ is the teleportation probability. Again $Y_t$ is chosen according to Equation (4). We call this process the super-spacey random surfer because when the transitions are not defined it picks a random state from history.
This process is a (generalized) vertex-reinforced random walk [3]. Let $P$ be the normalized tensor $P_{i,j,k} = T_{i,j,k} / \sum_i T_{i,j,k}$ only for the columns in $F$ and where all other entries are zero. Stationary distributions of the stochastic process must satisfy the following equation:
$$\alpha P x^2 + \alpha\left(1 - \|P x^2\|_1\right) x + (1 - \alpha) v = x, \qquad (9)$$

² Formally, this is the $\sigma$-algebra generated by the states $X_1, \ldots, X_t$.
where $x$ is a probability distribution vector (see Appendix A.1 for a proof). At least one solution vector $x$ must exist, which follows directly from Brouwer's fixed-point theorem. Here we give a sufficient condition for it to be unique and easily computable.

Theorem 2.1 If $\alpha < 1/(2m - 1)$ then there is a unique solution $x$ to (9) for the general $m$-mode tensor. Furthermore, the iterative fixed point algorithm
$$x_{k+1} = \alpha P x_k^2 + \alpha\left(1 - \|P x_k^2\|_1\right) x_k + (1 - \alpha) v \qquad (10)$$
will converge at least linearly to this solution.

This is a nonlinear setting and tighter convergence results are currently unknown, but these are unlikely to be tight on real-world data. For our experiments, we found that high values (e.g., 0.95) of $\alpha$ do not impede convergence. We use $\alpha = 0.8$ for all our experiments.
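A minimal sketch of the iteration (10), assuming the normalized tensor $P$ is stored as COO triples (integer arrays I, J, K with values V holding the entries $P_{i,j,k}$; columns outside $F$ simply contribute no triples):

```python
import numpy as np

def super_spacey_stationary(I, J, K, V, n, alpha=0.8, v=None,
                            tol=1e-10, max_iter=10000):
    """Fixed-point iteration (10) for the super-spacey stationary vector x."""
    v = np.full(n, 1.0 / n) if v is None else v
    x = v.copy()
    for _ in range(max_iter):
        Px2 = np.zeros(n)
        np.add.at(Px2, I, V * x[J] * x[K])          # (P x^2)_i
        x_next = alpha * Px2 + alpha * (1.0 - Px2.sum()) * x + (1 - alpha) * v
        if np.abs(x_next - x).sum() < tol:
            return x_next
        x = x_next
    return x
```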
In the following section, we show how to form a Markov chain from x and then develop our spectral
clustering technique by operating on the corresponding transition matrix.
2.4 First-order Markov approximations and biased conductance for tensor partitions
From Observation 1 in Section 1.2, we know that conductance may be interpreted as the exit
probability between two sets that form a partition of the nodes in the graph. In this section, we derive
an equivalent first-order Markov chain from the stationary distribution of the super-spacey random
surfer. If this Markov chain was guaranteed to be reversible, then we could apply the standard
definitions of conductance and the Fiedler vector. This will not generally be the case, and so we
introduce a biased conductance measure to partition this non-reversible Markov chain with respect to
starting in the stationary distribution of the super-spacey random walk. We use the second largest,
real-valued eigenvector of the Markov chain as an approximate Fiedler vector. Thus, we can use the
sweep cut procedure described in Section 1.2 to identify the partition.
Forming a first-order Markov chain approximation. In the following derivation, we use the property of the two tensor-vector products that $P[x]x = P x^2$. The stationary distribution $x$ for the super-spacey random surfer is equivalently the stationary distribution of the Markov chain with transition matrix
$$\alpha\left( P[x] + x\left(e^T - e^T P[x]\right) \right) + (1 - \alpha)\, v e^T.$$
(Here we have used the fact that $x \ge 0$ and $e^T x = 1$.) The above transition matrix denotes transitioning based on a first-order Markov chain with probability $\alpha$, and based on a fixed vector $v$ with probability $1 - \alpha$. We introduce this following first-order Markov chain
$$\hat{P} = P[x] + x\left(e^T - e^T P[x]\right),$$
which represents a useful (but crude) approximation of the higher-order structure in the data. First, we determine how often we visit states using the super-spacey random surfer to get a vector $x$. Then the Markov chain $\hat{P}$ will tend to have a large probability of spending time in states where the higher-order information concentrates. This matrix represents a first-order Markov chain on which we can compute an eigenvector and run a sweep cut.
Biased conductance. Consider a random walk $(Z_t)_{t \in \mathbb{N}}$. We define the biased conductance $\phi_p(S)$ of a set $S \subset \{1, \ldots, n\}$ to be
$$\phi_p(S) = \max\left(\Pr(Z_1 \in \bar{S} \mid Z_0 \in S),\ \Pr(Z_1 \in S \mid Z_0 \in \bar{S})\right),$$
where $Z_0$ is chosen according to a fixed distribution $p$. Just as with the standard definition of conductance, we can interpret biased conductance as an escape probability. However, the initial state $Z_0$ is not chosen following the stationary distribution (as in the standard definition with a reversible chain) but following $p$ instead. This is why we call it biased conductance. We apply this measure to $\hat{P}$ using $p = x$ (the stationary distribution of the super-spacey walk). This choice emphasizes the higher-order information. Our idea of biased conductance is equivalent to how Chung defines a conductance score for a directed graph [7].
We use the eigenvector of $\hat{P}$ with the second-largest real eigenvalue as an analogue of the Fiedler vector. If the chain were reversible, this would be exactly the Fiedler vector. When it is not, then the vector coordinates still encode indications of state clustering [25]; hence, this vector serves as a principled heuristic. It is important to note that although $\hat{P}$ is a dense matrix, we can implement the two operations we need with $\hat{P}$ in time and space that depend only on the number of non-zeros of the sparse tensor $P$, using standard iterative methods for eigenvalues of matrices (see Appendix B.1).
2.5 Handling rectangular tensor data
So far, we have only considered square, symmetric tensor data. However, tensor data are often rectangular. This is usually the case when the different modes represent different types of data. For example, in Section 3.2, we examine a tensor $T \in \mathbb{R}^{p \times n \times n}$ of airline flight data, where $T_{i,j,k}$ represents that there is a flight from airport $j$ to airport $k$ on airline $i$. Our approach is to embed the rectangular tensor into a larger square tensor and then symmetrize this tensor, using approaches developed by Ragnarsson and Van Loan [23]. After the embedding, we can run our algorithm to simultaneously cluster rows, columns, and slices of the tensor. This approach is similar in style to the symmetrization of bipartite graphs for co-clustering proposed by Dhillon [10].

Let $U$ be an $n$-by-$m$-by-$\ell$ rectangular tensor. Then we embed $U$ into a square three-mode tensor $T$ with $n + m + \ell$ dimensions and where $U_{i,j,k} = T_{i,\, j+n,\, k+n+m}$. This is illustrated in Figure 1 (left). Then we symmetrize the tensor by using all permutations of the indices, Figure 1 (right).

Figure 1: The tensor is first embedded into a larger square tensor (left) and then this square tensor is symmetrized (right).
When viewed as a 3-by-3-by-3 block tensor, the tensor is
$$T = \left[\; \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & U^{(1,3,2)} \\ 0 & U^{(1,2,3)} & 0 \end{bmatrix} \;\middle|\; \begin{bmatrix} 0 & 0 & U^{(2,3,1)} \\ 0 & 0 & 0 \\ U^{(2,1,3)} & 0 & 0 \end{bmatrix} \;\middle|\; \begin{bmatrix} 0 & U^{(3,2,1)} & 0 \\ U^{(3,1,2)} & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} \;\right],$$
written as its three frontal block slices, where $U^{(1,3,2)}$ is a generalized transpose of $U$ with the dimensions permuted.
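The embedding plus symmetrization is mechanical to implement. Below is a sketch under the assumption that the rectangular tensor is stored as COO triples; it offsets the indices as in $U_{i,j,k} = T_{i,\, j+n,\, k+n+m}$ and then copies each entry to all six permutations of its indices:

```python
from itertools import permutations

def embed_and_symmetrize(triples, n, m):
    """triples: iterable of (i, j, k, value) for an n-by-m-by-l tensor U.
    Returns triples of the symmetric square tensor T of dimension n + m + l."""
    out = {}
    for i, j, k, v in triples:
        idx = (i, j + n, k + n + m)        # embed into the larger square tensor
        for p in permutations(idx):        # all 6 permutations of the indices
            out[p] = v
    return [(a, b, c, v) for (a, b, c), v in out.items()]
```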
2.6 Summary of the algorithm
Our GTSC algorithm works by recursively applying the sweep cut procedure, similar to the recursive bisection procedures for clustering matrix-based data [6]. Formally, for each cut, we:
1. Compute the super-spacey stationary vector $x$ (Equation (9)) and form $P[x]$.
2. Compute the second largest left, real-valued eigenvector $z$ of $\hat{P} = P[x] + x(e^T - e^T P[x])$.
3. Sort the vertices by the eigenvector $z$ as $z_{\sigma_1} \le z_{\sigma_2} \le \cdots \le z_{\sigma_n}$.
4. Find the set $S_k = \{\sigma_1, \ldots, \sigma_k\}$ for which the biased conductance $\phi_x(S_k)$ on the transition matrix $\hat{P}$ is minimized.

We continue partitioning as long as the clusters are large enough or we can get good enough splits. Specifically, if a cluster has dimension less than a specified minimum size, we do not consider it for splitting. Otherwise, the algorithm recursively splits the cluster if either (1) its dimension is above some threshold or (2) the biased conductance of a new split is less than a target value $\phi^*$.³ The overall algorithm, as well as its complexity, is summarized in Appendix B. Essentially, the algorithm scales linearly in the number of non-zeros of the tensor for each cluster that is produced. A sketch of step 2 appears below.
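For step 2, the dense matrix $\hat{P}$ never needs to be formed. The sketch below (our own illustration of the matrix-free idea referenced in Appendix B.1) exposes $\hat{P}^T$ as a linear operator whose matvec costs time proportional to the non-zeros of the tensor, and hands it to an iterative eigensolver:

```python
import numpy as np
import scipy.sparse.linalg as spla

def second_left_eigvec(I, J, K, V, x, n):
    """Second-largest (real part) left eigenvector of
    P_hat = P[x] + x (e^T - e^T P[x]).
    (I, J, K, V) are COO triples of the normalized tensor P."""
    col_sums = np.zeros(n)                     # c_j = sum_i P[x]_{i,j}
    np.add.at(col_sums, J, V * x[K])

    def matvec(z):                             # computes P_hat^T z
        Pz = np.zeros(n)
        np.add.at(Pz, J, V * x[K] * z[I])      # (P[x]^T z)_j
        return Pz + (1.0 - col_sums) * (x @ z)

    op = spla.LinearOperator((n, n), matvec=matvec)
    vals, vecs = spla.eigs(op, k=2, which='LR')
    return vecs[:, np.argsort(-vals.real)[1]].real
```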
3 Experiments
We now demonstrate the efficacy of our method by clustering synthetic and real-world data. We find
that our method is better at recovering planted cluster structure in synthetically generated tensor data
compared to other state-of-the-art methods. Please refer to Appendix C for the parameter details.
3.1 Synthetic data
We generate tensors with planted clusters and try to recover the clusters. For each dataset, we generate 20 groups of nodes that will serve as our planted clusters, where the number of nodes in each group is drawn from a truncated normal distribution with mean 20 and variance 5 so that each group has at least 4 nodes. For each group $g$ we also assign a weight $w_g$ where the weight depends on the group number. For group $i$, the weight is $(\sigma\sqrt{2\pi})^{-1} \exp\left(-(i - 10.5)^2/(2\sigma^2)\right)$, where $\sigma$ varies by experiment. Non-zeros correspond to interactions between three indices (triples). We generate $t_w$ triples whose indices are within a group and $t_a$ triples whose indices span across more than one group.

³ We tested $\phi^*$ from 0.3 to 0.4, and found that the experimental results are not very sensitive to the value of $\phi^*$.
Table 1: Adjusted Rand Index (ARI), Normalized Mutual Information (NMI), and F1 scores of various clustering methods for recovering synthetically generated tensor data with planted cluster structure. The ± entries are the standard deviation over 5 trials.

Square tensor with σ = 4
            ARI          NMI          F1
GTSC        0.99±0.01    0.99±0.00    0.99±0.01
TSC         0.42±0.05    0.60±0.04    0.45±0.04
PARAFAC     0.82±0.05    0.94±0.02    0.83±0.04
SC          0.99±0.01    0.99±0.01    0.99±0.01
MulDec      0.48±0.05    0.66±0.03    0.51±0.05

Rectangular tensor with σ = 4
            ARI          NMI          F1
GTSC        0.78±0.13    0.89±0.06    0.79±0.12
TSC         0.41±0.11    0.60±0.09    0.44±0.10
PARAFAC     0.48±0.08    0.67±0.04    0.50±0.07
SC          0.43±0.07    0.66±0.04    0.47±0.06
MulDec      0.19±0.01    0.37±0.01    0.24±0.01

Square tensor with σ = 2
            ARI          NMI          F1
GTSC        0.97±0.06    0.98±0.03    0.97±0.05
TSC         0.38±0.17    0.53±0.15    0.41±0.16
PARAFAC     0.81±0.04    0.90±0.02    0.82±0.04
SC          0.91±0.06    0.94±0.04    0.91±0.06
MulDec      0.27±0.06    0.39±0.05    0.32±0.05

Rectangular tensor with σ = 2
            ARI          NMI          F1
GTSC        0.96±0.06    0.97±0.04    0.96±0.06
TSC         0.28±0.08    0.44±0.10    0.32±0.08
PARAFAC     0.10±0.04    0.24±0.05    0.15±0.04
SC          0.38±0.07    0.52±0.05    0.41±0.07
MulDec      0.08±0.01    0.19±0.02    0.14±0.01
The $t_w$ triples are chosen by first uniformly selecting a group $g$ and then uniformly selecting three indices $i$, $j$, and $k$ from group $g$ and finally assigning a weight of $w_g$. For the $t_a$ triples, the sampling procedure first selects an index $i$ from group $g_i$ with a probability proportional to the weights of the group. In other words, indices in group $g$ are chosen proportional to $w_g$. Two indices $j$ and $k$ are then selected uniformly at random from groups $g_j$ and $g_k$ other than $g_i$. Finally, the weight in the tensor is assigned to be the average of the three group weights. For rectangular data, we follow a similar procedure where we distinguish between the indices for each mode of the tensor.

For our experiments, $t_w = 10{,}000$ and $t_a = 1{,}000$, and the variance $\sigma$ that controls the group weights is 2 or 4. For each value of $\sigma$, we create 5 sample datasets. The value of $\sigma$ affects the concentration of the weights and how certain groups of nodes interact with others. This skew reflects properties of the real-world networks we examine in the next section.
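A compact sketch of this planted-cluster generator; the RNG, seeding, and data structures are our own illustrative choices, and clipping stands in for the truncated normal:

```python
import numpy as np

def planted_tensor(sigma=4.0, n_groups=20, t_w=10000, t_a=1000, seed=0):
    rng = np.random.default_rng(seed)
    sizes = np.maximum(4, rng.normal(20, np.sqrt(5.0), n_groups).round()).astype(int)
    groups = np.split(np.arange(sizes.sum()), np.cumsum(sizes)[:-1])
    g_ids = np.arange(1, n_groups + 1)
    w = np.exp(-(g_ids - 10.5) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
    triples = {}
    for _ in range(t_w):                        # within-group triples
        g = rng.integers(n_groups)
        i, j, k = rng.choice(groups[g], 3)      # uniform, with replacement
        triples[(i, j, k)] = w[g]
    for _ in range(t_a):                        # across-group triples
        g1 = rng.choice(n_groups, p=w / w.sum())
        others = np.delete(np.arange(n_groups), g1)
        g2, g3 = rng.choice(others, 2)
        i = rng.choice(groups[g1])
        j = rng.choice(groups[g2])
        k = rng.choice(groups[g3])
        triples[(i, j, k)] = (w[g1] + w[g2] + w[g3]) / 3.0
    return triples, groups
```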
Our GTSC method is compared with Tensor Spectral Clustering (TSC) [4], the Tensor Decomposition PARAFAC [1], Spectral Clustering (SC) via Multilinear SVD [12], and Multilinear Decomposition (MulDec) [21]. Table 1 reports the performance of these algorithms in recovering the planted clusters. In all cases, GTSC has the best performance. We note that the running time is a few seconds for GTSC, TSC, and SC, and nearly 30 minutes per trial for PARAFAC and MulDec. Note that the tensors have roughly 50,000 non-zeros. The poor scalability prohibits the latter two methods from being applied to the real-world tensors in the following section.
3.2 Case study in airline flight networks
We now turn to studying real-world tensor datasets. We first cluster an airline-airport multimodal network which consists of global air flight routes from 539 airlines and 2,939 airports.⁴ In this application, the entry $T_{i,j,k}$ of the three-mode tensor $T$ is 1 if airline $i$ flies between airports $j$ and $k$, and 0 otherwise. Figure 2 illustrates the connectivity of the tensor with a random ordering of the indices (left) and the ordering given by the popularity of co-clusters (right). We can see that after the co-clustering, there is clear structure in the data tensor.

Figure 2: Visualization of the airline-airport data tensor. The x and y axes index airports and the z axis indexes airlines. A dot represents that an airline flies between those two airports. On the left, indices are sorted randomly. On the right, indices are sorted by the co-clusters found by our GTSC framework, which reveals structure in the tensor.

One prominent cluster found by the method corresponds to large international airports in cities such as Beijing and New York City. This group only accounts for 8.5% of the total number of airports, but it is responsible for 59% of the total routes. Figure 2 illustrates this result: the airports with the highest indices are connected to almost every other airport. This cluster is analogous to the "stop word" group we will see in the n-gram experiments. Most other clusters are organized geographically. Our GTSC framework finds large clusters for Europe, the United States, China/Taiwan, Oceania/Southeast Asia, and Mexico/Americas. Interestingly, Cancún International Airport is included with the United States cluster, likely due to large amounts of tourism.

⁴ Data were collected from http://openflights.org/data.html#route.
3.3 Case study on n-grams
Next, we study data from n-grams (consecutive sequences of words in texts). We construct a square n-mode tensor where indices correspond to words. An entry in the tensor is the number of occurrences of this n-gram. We form tensors from both English and Chinese corpora for n = 3, 4.⁵ The non-zeros in the tensor consist of the frequencies of the one million most frequent n-grams.
English n-grams. We find several conclusions that hold for both tensor datasets. Two large groups in both datasets consist of stop words, i.e., frequently occurring connector words. In fact, 48% (3-gram) and 64% (4-gram) of words in one cluster are prepositions (e.g., in, of, as, to) and link verbs (e.g., is, get, does). In another cluster, 64% (3-gram) and 57% (4-gram) of the words are pronouns (e.g., we, you, them) and link verbs. This result matches the structure of the English language, where link
verbs can connect both prepositions and pronouns whereas prepositions and pronouns are unlikely to
appear in close vicinity. Other groups consist of mostly semantically related English words, e.g.,
{cheese, cream, sour, low-fat, frosting, nonfat, fat-free} and
{bag, plastic, garbage, grocery, trash, freezer}.
The clustering of the 4-gram tensor contains some groups that the 3-gram tensor fails to find, e.g.,
{german, chancellor, angela, merkel, gerhard, schroeder, helmut, kohl}.
In this case, Angela Merkel, Gerhard Schroeder, and Helmut Kohl have all been German chancellors,
but it requires a 4-gram to make this connection strong. Likewise, some clusters only appear from
clustering the 3-gram tensor. One such cluster is
{church, bishop, catholic, priest, greek, orthodox, methodist, roman, episcopal}.
In 3-grams, we may see phrases such as "catholic church bishop", but 4-grams containing these words likely also contain stop words, e.g., "bishop of the church". However, since stop words already form their own cluster, this connection is destroyed.
Chinese n-grams. We find that many of the conclusions from the English n-gram datasets also hold
for the Chinese n-gram datasets. This includes groups of stop words and semantically related words.
For example, there are two clusters consisting of mostly stop words (200 most frequently occurring
words) from the 3-gram and 4-gram tensors. In the 4-gram data, one cluster of 31 words consists
entirely of stop words and another cluster contains 36 total words, of which 23 are stop words.
There are some words from the two groups that are not typically considered as stop words, e.g.,
society, economy, develop, -ism, nation, government (English glosses of the Chinese words)
These words are also among the top 200 most common words according to the corpus. This is a
consequence of the dataset coming from scanned Chinese-language books and is a known issue with
the Google Books corpus [22]. In this case, it is a feature as we are illustrating the efficacy of our
tensor clustering framework rather than making any linguistic claims.
4 CONCLUSION
In this paper we developed the General Tensor Spectral Co-clustering (GTSC) method for co-clustering the modes of nonnegative tensor data. Our method models higher-order data with a new
stochastic process, the super-spacey random walk, which is a variant of a higher-order Markov
chain. With the stationary distribution of this process, we can form a first-order Markov chain which
captures properties of the higher-order data and then use tools from spectral graph partitioning to
find co-clusters. In future work, we plan to create tensors that bridge information from multiple
modes. For instance, clusters in the n-gram data depended on n, e.g., the names of various German
chancellors only appeared as a 4-gram cluster. It would be useful to have a holistic tensor to jointly
partition both 3- and 4-gram information.
Acknowledgements. TW and DFG are supported by NSF IIS-1422918 and DARPA SIMPLEX.
ARB is supported by a Stanford Graduate Fellowship.
⁵ English n-gram data were collected from http://www.ngrams.info/intro.asp and Chinese n-gram data were collected from https://books.google.com/ngrams.
References
[1] B. W. Bader, T. G. Kolda, et al. Matlab tensor toolbox version 2.6. Available online, February 2015.
[2] B.-K. Bao, W. Min, K. Lu, and C. Xu. Social event detection with robust high-order co-clustering. In ICMR, pages 135–142, 2013.
[3] M. Benaïm. Vertex-reinforced random walks and a conjecture of Pemantle. Ann. Prob., 25(1):361–392, 1997.
[4] A. R. Benson, D. F. Gleich, and J. Leskovec. Tensor spectral clustering for partitioning higher-order network structures. In SDM, pages 118–126, 2015.
[5] A. R. Benson, D. F. Gleich, and L.-H. Lim. The spacey random walk: a stochastic process for higher-order data. arXiv, cs.NA:1602.02102, 2016.
[6] D. Boley. Principal direction divisive partitioning. Data Mining and Knowledge Discovery, 2(4):325–344, 1998.
[7] F. Chung. Laplacians and the Cheeger inequality for directed graphs. Annals of Combinatorics, 9(1):1–19, 2005. doi:10.1007/s00026-005-0237-z.
[8] F. Chung. Four proofs for the Cheeger inequality and graph partition algorithms. In ICCM, 2007.
[9] F. R. L. Chung. Spectral Graph Theory. AMS, 1992.
[10] I. S. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In KDD, pages 269–274, 2001.
[11] B. Gao, T.-Y. Liu, X. Zheng, Q.-S. Cheng, and W.-Y. Ma. Consistent bipartite graph co-partitioning for star-structured high-order heterogeneous data co-clustering. In KDD, pages 41–50, 2005.
[12] D. Ghoshdastidar and A. Dukkipati. Spectral clustering using multilinear SVD: analysis, approximations and applications. In AAAI, pages 2610–2616, 2015.
[13] D. F. Gleich, L.-H. Lim, and Y. Yu. Multilinear PageRank. SIAM J. Matrix Anal. Appl., 36(4):1507–1541, 2015.
[14] M. Hein, S. Setzer, L. Jost, and S. S. Rangapuram. The total variation on hypergraphs: learning on hypergraphs revisited. In Advances in Neural Information Processing Systems, pages 2427–2435, 2013.
[15] H. Huang, C. Ding, D. Luo, and T. Li. Simultaneous tensor subspace selection and clustering: the equivalence of high order SVD and k-means clustering. In KDD, pages 327–335, 2008.
[16] S. Jegelka, S. Sra, and A. Banerjee. Approximation algorithms for tensor clustering. In Algorithmic Learning Theory, pages 368–383. Springer, 2009.
[17] M. Kivelä, A. Arenas, M. Barthelemy, J. P. Gleeson, Y. Moreno, and M. A. Porter. Multilayer networks. Journal of Complex Networks, 2(3):203–271, 2014.
[18] M. Meilă and J. Shi. A random walks view of spectral segmentation. In AISTATS, 2001.
[19] M. Mihail. Conductance and convergence of Markov chains: a combinatorial treatment of expanders. In FOCS, pages 526–531, 1989.
[20] J. Ni, H. Tong, W. Fan, and X. Zhang. Flexible and robust multi-network clustering. In KDD, pages 835–844, 2015.
[21] E. E. Papalexakis and N. D. Sidiropoulos. Co-clustering as multilinear decomposition with sparse latent factors. In ICASSP, pages 2064–2067. IEEE, 2011.
[22] E. A. Pechenick, C. M. Danforth, and P. S. Dodds. Characterizing the Google Books corpus: strong limits to inferences of socio-cultural and linguistic evolution. PLoS ONE, 10(10):e0137041, 2015.
[23] S. Ragnarsson and C. F. Van Loan. Block tensors and symmetric embeddings. Linear Algebra Appl., 438(2):853–874, 2013.
[24] S. E. Schaeffer. Graph clustering. Computer Science Review, 1(1):27–64, 2007.
[25] W. J. Stewart. Introduction to the Numerical Solution of Markov Chains. Princeton Univ. Press, 1994.
[26] D. Wagner and F. Wagner. Between min cut and graph bisection. In MFCS, pages 744–750, 1993.
[27] D. Zhou and C. J. Burges. Spectral clustering and transductive learning with multiple views. In ICML, pages 1159–1166, 2007.
Fast and Flexible Monotonic Functions with Ensembles of Lattices
K. Canini, A. Cotter, M. R. Gupta, M. Milani Fard, J. Pfeifer
Google Inc.
1600 Amphitheatre Parkway, Mountain View, CA 94043
{canini,acotter,mayagupta,janpf,mmilanifard}@google.com
Abstract
For many machine learning problems, there are some inputs that are known to
be positively (or negatively) related to the output, and in such cases training the
model to respect that monotonic relationship can provide regularization, and makes
the model more interpretable. However, flexible monotonic functions are computationally challenging to learn beyond a few features. We break through this
barrier by learning ensembles of monotonic calibrated interpolated look-up tables (lattices). A key contribution is an automated algorithm for selecting feature
subsets for the ensemble base models. We demonstrate that compared to random
forests, these ensembles produce similar or better accuracy, while providing guaranteed monotonicity consistent with prior knowledge, smaller model size and faster
evaluation.
1 Introduction
A long-standing challenge in machine learning is to learn flexible monotonic functions [1] for
classification, regression, and ranking problems. For example, all other features held constant, one
would expect the prediction of a house's cost to be an increasing function of the size of the property.
A regression trained on noisy examples and many features might not respect this simple monotonic
relationship everywhere, due to overfitting. Failing to capture such simple relationships is confusing
for users. Guaranteeing monotonicity enables users to trust that the model will behave reasonably
and predictably in all cases, and enables them to understand at a high-level how the model responds
to monotonic inputs [2]. Prior knowledge about monotonic relationships can also be an effective
regularizer [3].
Figure 1: Contour plots for an ensemble of three lattices, each of which is a linearly-interpolated look-up table. The first lattice $f_1$ acts on features 1 and 2, the second lattice $f_2$ acts on features 2 and 3, and $f_3$ acts on features 3 and 4. Each 2×2 look-up table has four parameters: for example, for $f_1(x)$ the parameters are $\theta_1 = [0, 0.9, 0.8, 1]$. Each look-up table parameter is the function value for an extreme input, for example, $f_1([0, 1]) = \theta_1[2] = 0.9$. The ensemble function $f_1 + f_2 + f_3$ is monotonic with respect to features 1, 2 and 3, but not feature 4.
Table 1: Key notation used in the paper.

Symbol                                              Definition
$D$                                                 number of features
$\mathcal{D}$                                       set of features $1, 2, \ldots, D$
$S$                                                 number of features in each lattice
$L$                                                 number of lattices in the ensemble
$s_\ell \subset \mathcal{D}$                        $\ell$th lattice's set of $S$ feature indices
$(x_i, y_i) \in [0,1]^D \times \mathbb{R}$          $i$th training example
$x[s_\ell] \in [0,1]^S$                             $x$ for feature subset $s_\ell$
$\phi(x) : [0,1]^D \to [0,1]^{2^D}$                 linear interpolation weights for $x$
$v \in \{0,1\}^D$                                   a vertex of a $2^D$ lattice
$\theta \in \mathbb{R}^{2^D}$                       parameter values for a $2^D$ lattice
Simple monotonic functions can be learned with a linear function forced to have positive coefficients,
but learning flexible monotonic functions is challenging. For example, multi-dimensional isotonic
regression has complexity $O(n^4)$ for $n$ training examples [4]. Other prior work learned monotonic
functions on small datasets and very low-dimensional problems; see [2] for a survey. The largest
datasets used in recent papers on training monotonic neural nets included two features and 3434
examples [3], one feature and 30 examples [5], and six features and 174 examples [6]. Another
approach that uses an ensemble of rules on a modified training set, scales poorly with the dataset size
and does not provide monotonicity guarantees on test samples [7].
Recently, monotonic lattice regression was proposed [2], which extended lattice regression [8] to
learn a monotonic interpolated look-up table by adding linear inequality constraints to constrain
adjacent look-up table parameters to be monotonic. See Figure 1 for an illustration of three such
lattice functions. Experiments on real-world problems with millions of training examples showed
similar accuracy to random forests [9], which generally perform well [10], but with the benefit of
guaranteed monotonicity. Monotonic functions on up to sixteen features were demonstrated, but that
approach is fundamentally limited in its ability to scale to higher input dimensions, as the number of
parameters for a lattice model scales as $O(2^D)$.
In this paper, we break through previous barriers on D by proposing strategies to learn a monotonic
ensemble of lattices. The main contributions of this paper are: (i) proposing different architectures
for ensembles of lattices with different trade-offs for flexibility, regularization and speed, (ii) theorem
showing lattices can be merged, (iii) an algorithm to automatically learn feature subsets for the base
models, (iv) extensive experimental analysis on real data showing results that are similar to or better
than random forests, while respecting prior knowledge about monotonic features.
2 Ensemble of Lattices
Consider the usual supervised machine learning set-up of training sample pairs $\{(x_i, y_i)\}$ for $i = 1, \ldots, n$. The label is either real-valued, $y_i \in \mathbb{R}$, or a binary classification label, $y_i \in \{-1, 1\}$. We assume that sensible upper and lower bounds can be set for each feature, and without loss of generality, the feature is then scaled so that $x_i \in [0, 1]^D$. Key notation is summarized in Table 1. We propose learning a weighted ensemble of $L$ lattices:
$$F(x) = \alpha_0 + \sum_{\ell=1}^{L} \alpha_\ell f(x[s_\ell]; \theta_\ell), \qquad (1)$$
where each lattice $f(x[s_\ell]; \theta_\ell)$ is a lattice function defined on a subset of features $s_\ell \subset \mathcal{D}$, and $x[s_\ell]$ denotes the $S \times 1$ vector with the components of $x$ corresponding to the feature set $s_\ell$. We require each lattice to have $S$ features, i.e. $|s_l| = S$ and $\theta_l \in \mathbb{R}^{2^S}$ for all $l$. The $\ell$th lattice is a linearly interpolated look-up table defined on the vertices of the $S$-dimensional unit hypercube:
$$f(x[s_\ell]; \theta_\ell) = \theta_\ell^T \phi(x[s_\ell]),$$
where $\theta_\ell \in \mathbb{R}^{2^S}$ are the look-up table parameters, and $\phi(x[s]) : [0,1]^S \to [0,1]^{2^S}$ are the linear interpolation weights for $x$ in the $\ell$th lattice. See Appendix A for a review of multilinear and simplex interpolations.
2
2.1
Monotonic Ensemble of Lattices
Define a function f to be monotonic with respect to feature d if f (x) ? f (z) for any two feature
vectors x, z ? RD that satisfy x[d] ? z[d] and x[m] = z[m] for m 6= d. A lattice function is
monotonic with respect to a feature if the look-up table parameters are nondecreasing as that feature
increases, and a lattice can be constrained to be monotonic with an appropriate set of sparse linear
inequality constraints [2].
To guarantee that an ensemble of lattices is monotonic with respect to feature d, it is sufficient that
each lattice be monotonic in feature d and that we have positive weights ?` ? 0 for all `, by linearity
of the sum over lattices in (1). Constraining each base model to be monotonic provides only a subset
of the possible monotonic ensemble functions, as there could exist monotonic functions of the form
(1) that are not monotonic in each f` (x).
In this paper, for notational simplicity and because we find it the most empirically useful case for
machine learning, we only consider lattices that have two vertices along each dimension, so a look-up
table on S features has 2S parameters. Generalization to larger look-up tables is trivial [11, 2].
2.2 Ensemble of Lattices Can Be Merged into One Lattice
We show that an ensemble of lattices as expressed by (1) is equivalent to a single lattice defined on all
D features. This result is important for two practical reasons. First, it shows that training an ensemble
over subsets of features is a regularized form of training one lattice with all D features. Second, this
result can be used to merge small lattices that share many features into larger lattices on the superset
of their features, which can be useful to reduce evaluation time and memory use of the ensemble
model.
Theorem 1 Let $F(x) := \sum_{l=1}^{L} \alpha_l \theta_l^T \phi(x[s_l])$ as described in (1), where $\phi$ are either multilinear or simplex interpolation weights. Then there exists a $\theta \in \mathbb{R}^{2^D}$ such that $F(x) = \theta^T \phi(x)$ for all $x \in [0, 1]^D$.

The proof and an illustration are given in the supplementary material.
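One practical consequence: because multilinear interpolation reproduces its parameters at the hypercube corners, the merged lattice of Theorem 1 can be read off by evaluating the ensemble at every vertex of $\{0, 1\}^D$. A sketch of our own (feasible only for small $D$, and assuming the same vertex indexing as the `lattice_eval` example above):

```python
import itertools
import numpy as np

def merge_ensemble(ensemble_fn, D):
    """Return theta of the single 2^D lattice equal to the given ensemble.
    ensemble_fn: callable mapping a point in [0,1]^D to F(x)."""
    theta = np.empty(2 ** D)
    for idx, bits in enumerate(itertools.product([0, 1], repeat=D)):
        vertex = np.array(bits[::-1], dtype=float)
        theta[idx] = ensemble_fn(vertex)   # interpolation reproduces corner values
    return theta
```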
3 Feature Subset Selection for the Base Models
A key problem is which features should be combined in each lattice. The random subspace method
[12] and many variants of random forests randomly sample subsets of features for the base models.
Sampling the features to increase the likelihood of selecting combinations of features that better
correlate with the label can produce better random forests [13]. Ye et al. first estimated the informativeness of each feature by fitting a linear model and dividing the features up into two groups based
on the linear coefficients, then randomly sampled the features considered for each tree from both
groups, to ensure strongly informative features occur in each tree [14]. Others take the random subset
of features for each tree as a starting point, but then create more diverse trees. For example, one can
increase the diversity of the features in a tree ensemble by changing the splitting criterion [15], or by
maximizing the entropy of the joint distribution over the leaves [16].
We also consider randomly selecting the subset of $S$ features for each lattice uniformly and independently without replacement from the set of $D$ features; we refer to this as a random tiny lattice (RTL).
To improve the accuracy of ensemble selection, we can draw K independent RTL ensembles using
K different random seeds, train each ensemble independently, and select the one with the lowest
training or validation or cross-validation error. This approach treats the random seed that generates
the RTL as a hyper-parameter to optimize, and in the extreme of sufficiently large K, this strategy
can be used to minimize the empirical risk. The computation scales linearly in K, but the K training
jobs can be parallelized.
3.1 Crystals: An Algorithm for Selecting Feature Subsets
As a more efficient alternative to randomly selecting feature subsets, we propose an algorithm we term Crystals to jointly optimize the selection of the $L$ feature subsets. Note that linear interactions between features can be captured by the ensemble's linear combination of base models, but nonlinear interactions must be captured by the features occurring together in a base model. This motivates us to propose choosing the feature subsets for each base model based on the importance of pairwise nonlinear interactions between the features.

To measure pairwise feature interactions, we first separately (and in parallel) train lattices on all possible pairs of features, that is $L = \binom{D}{2}$ lattices, each with $S = 2$ features. Then, we measure the nonlinearity of the interaction of any two features $d$ and $\tilde{d}$ by the torsion of their lattice, which is the squared difference between the slopes of the lattice's parallel edges [2]. Let $\theta_{d,\tilde{d}}$ denote the parameters of the 2×2 lattice for features $d$ and $\tilde{d}$. The torsion of the pair $(d, \tilde{d})$ is defined as:
$$\tau_{d,\tilde{d}} \stackrel{\mathrm{def}}{=} \left( \left(\theta_{d,\tilde{d}}[1] - \theta_{d,\tilde{d}}[0]\right) - \left(\theta_{d,\tilde{d}}[3] - \theta_{d,\tilde{d}}[2]\right) \right)^2. \qquad (2)$$
If the lattice trained on features $d$ and $\tilde{d}$ is just a linear function, its torsion is $\tau_{d,\tilde{d}} = 0$, whereas a large torsion value implies a nonlinear interaction between the features. Given the torsions of all pairs of features, we propose using the $L$ feature subsets $\{s_\ell\}$ that maximize the weighted total pairwise torsion of the ensemble:
$$H(\{s_\ell\}) \stackrel{\mathrm{def}}{=} \sum_{\ell=1}^{L} \sum_{\substack{d, \tilde{d} \in s_\ell \\ d \ne \tilde{d}}} \tau_{d,\tilde{d}} \cdot \gamma^{\sum_{\ell'=1}^{\ell-1} I(d, \tilde{d} \in s_{\ell'})}, \qquad (3)$$
where $I$ is an indicator. The discount value $\gamma$ is a hyperparameter that controls how much the value of a pair of features diminishes when repeated in multiple lattices. The extreme case $\gamma = 1$ makes the objective (3) optimized by the degenerate case that all $L$ lattices include the same feature pairs that have the highest torsion. The other extreme of $\gamma = 0$ only counts the first inclusion of a pair of features in the ensemble towards the objective, and results in unnecessarily diverse lattices. We found a default of $\gamma = \frac{1}{2}$ generally produces good, diverse lattices, but $\gamma$ can also be optimized as a hyperparameter.
In order to select the subsets $\{s_\ell\}$, we first choose the number of times each feature is going to be used in the ensemble. We make sure each feature is used at least once and then assign the rest of the feature counts proportional to the median of each feature's torsion with other features. We initialize a random ensemble that satisfies the selected feature counts and then try to maximize the objective in (3) using a greedy swapping method: we loop over all pairs of lattices, and swap any features between them that increase $H(\{s_\ell\})$, until no swaps improve the objective. This optimization takes a small fraction of the total training time for the ensemble. One can potentially improve the objective by using a stochastic annealing procedure, but we find our deterministic method already yields good solutions in practice.
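The following sketch shows the core of this selection step, given a precomputed $D \times D$ symmetric torsion array `tau`. The random initialization and the brute-force re-evaluation of $H$ after each candidate swap are simplifications of our own, and the feature-count bookkeeping described above is omitted:

```python
import itertools
import numpy as np

def H(subsets, tau, gamma=0.5):
    """Discounted total pairwise torsion, Equation (3)."""
    seen = np.zeros_like(tau)
    total = 0.0
    for s in subsets:
        for d, e in itertools.combinations(sorted(s), 2):
            total += tau[d, e] * gamma ** seen[d, e]
            seen[d, e] += 1
    return total

def crystals(tau, L, S, gamma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    D = tau.shape[0]
    subsets = [list(rng.choice(D, S, replace=False)) for _ in range(L)]
    improved = True
    while improved:                      # terminates: H strictly increases
        improved = False
        for a, b in itertools.combinations(range(L), 2):
            for i, j in itertools.product(range(S), range(S)):
                base = H(subsets, tau, gamma)
                subsets[a][i], subsets[b][j] = subsets[b][j], subsets[a][i]
                ok = len(set(subsets[a])) == S and len(set(subsets[b])) == S
                if ok and H(subsets, tau, gamma) > base:
                    improved = True      # keep the swap
                else:                    # undo the swap
                    subsets[a][i], subsets[b][j] = subsets[b][j], subsets[a][i]
    return subsets
```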
4 Calibrating the Features
Accuracy can be increased by calibrating each feature with a one-dimensional monotonic piecewise
linear function (PLF) before it is combined with other features [17, 18, 2]. These calibration functions
can approximate log, sigmoidal, and other useful feature pre-processing transformations, and can be
trained as part of the model.
For an ensemble, we consider two options. Either a set of D calibrators shared across the base models
(one PLF per feature), or L sets of calibrators, one set of S calibrators for each base model, for a total
of LS calibrators. Use of separate calibrators provides more flexibility, but increases the potential for
overfitting, increases evaluation time, and removes the ability to merge lattices.
Let $c(x[s_\ell]; \psi_\ell) : [0,1]^S \to [0,1]^S$ denote the vector-valued calibration function on the feature subset $s_\ell$ with calibration parameters $\psi_\ell$. The ensemble function will thus be:
$$\text{Separate Calibration:}\quad F(x) = \alpha_0 + \sum_{\ell=1}^{L} \alpha_\ell f\left(c(x[s_\ell]; \psi_\ell); \theta_\ell\right) \qquad (4)$$
$$\text{Shared Calibration:}\quad F(x) = \alpha_0 + \sum_{\ell=1}^{L} \alpha_\ell f\left(c(x[s_\ell]; \psi[s_\ell]); \theta_\ell\right), \qquad (5)$$
where $\psi$ is the set of all shared calibration parameters and $\psi[s_\ell]$ is the subset corresponding to the feature set $s_\ell$. Note that Theorem 1 holds for shared calibrators, but does not hold for separate calibrators.
We implement these PLFs as in [2]. The PLFs are monotonic if the adjacent parameters in each PLF are monotonic, which can be enforced with additional linear inequality constraints [2]. By composition, if the calibration functions for monotonic features are monotonic, and the lattices are monotonic with respect to those features, then the ensemble with positive weights on the lattices is monotonic.
5 Training the Lattice Ensemble
The key question in training the ensemble is whether to train the base models independently, as is
usually done in random forests, or to train the base models jointly, akin to generalized linear models.
5.1 Joint Training of the Lattices
In this setting, we optimize (1) jointly over all calibration and lattice parameters, in which case each $\alpha_\ell$ is subsumed by the corresponding base model parameters $\theta_\ell$ and can therefore be ignored. Joint training allows the base models to specialize, and increases the flexibility, but can slow down training and is more prone to overfitting for the same choice of $S$.
5.2 Parallelized, Independent Training of the Lattices
We can train the lattices independently in parallel much faster, and then fit the weights $\alpha_\ell$ in (1) in a second post-fitting stage as described in Step 5 below.

Step 1: Initialize the calibration function parameters $\psi$ and each tiny lattice's parameters $\theta_\ell$.

Step 2: Train the $L$ lattices in parallel; for the $\ell$th lattice, solve the monotonic lattice regression problem [2]:
$$\arg\min_{\theta_\ell} \sum_{i=1}^{n} \mathcal{L}\left(f(c(x_i[s_\ell]; \psi_\ell); \theta_\ell), y_i\right) + \lambda R(\theta_\ell), \quad \text{such that } A\theta_\ell \ge 0, \qquad (6)$$
where $A\theta_\ell \ge 0$ captures the linear inequality constraints needed to enforce monotonicity for whichever features are required, $\mathcal{L}$ is a convex loss function, and $R$ denotes a regularizer on the lattice parameters.

Step 3: If separate calibrators are used as per (4), their parameters can be optimized jointly with the lattice parameters in (6). If shared calibrators are used, we hold all lattice parameters fixed and optimize the shared calibration parameters $\psi$:
$$\arg\min_{\psi} \sum_{i=1}^{n} \mathcal{L}\left(F(x_i; \alpha, \psi, \theta), y_i\right), \quad \text{such that } B\psi \ge 0,$$
where $B\psi \ge 0$ specifies the linear inequality constraints needed to enforce monotonicity of each of the piecewise linear calibration functions.

Step 4: Loop on Steps 2 and 3 until convergence.

Step 5: Post-fit the weights $\alpha \in \mathbb{R}^L$ over the ensemble:
$$\arg\min_{\alpha} \sum_{i=1}^{n} \mathcal{L}\left(F(x_i; \alpha, \psi, \theta), y_i\right) + \tilde{\lambda} \tilde{R}(\alpha), \quad \text{such that } \alpha_\ell \ge 0 \text{ for all } \ell, \qquad (7)$$
where $\tilde{R}(\alpha)$ is a regularizer on $\alpha$. For example, regularizing $\alpha$ with the $\ell_1$ norm encourages a sparser ensemble, which can be useful for improving evaluation speed. Regularizing $\alpha$ with a ridge regularizer makes the postfit more similar to averaging the base models, reducing variance.
6 Fast Evaluation
The proposed lattice ensembles are fast to evaluate. The evaluation complexity of simplex interpolation of a lattice ensemble with L lattices each of size S is O(LS log S), but in practice one encounters
fixed costs, and caching efficiency that depends on the model size. For example, for Experiment 3
with D = 50 features, with C++ implementations of both Crystals and random forests both evaluating
the base models sequentially, the random forests take roughly 10× as long to evaluate, and take
roughly 10× as much memory as the Crystals. See Appendix C.5 for further discussions and more
timing results.

Table 2: Details for the datasets used in the experiments.

Dataset   Problem          Features   Monotonic   Train     Validation   Test 1    Test 2
1         Classification   12         4           29 307    9769         9769      --
2         Regression       12         10          115 977   --           31 980    --
3         Classification   54         9           500 000   100 000      200 000   --
4         Classification   29         21          88 715    11 071       11 150    65 372
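For reference, the simplex interpolation whose O(LS log S) ensemble cost is quoted above can be sketched as follows (an illustrative re-implementation, not the paper's C++ code): each lookup sorts the S calibrated inputs once and then visits only S + 1 lattice vertices.

```python
import numpy as np

def simplex_interp(x, theta):
    # Simplex (Lovasz-extension) interpolation on a 2^S lattice:
    # O(S log S) per lattice, versus O(2^S) for multilinear interpolation.
    x = np.asarray(x, dtype=float)
    S = len(x)
    order = np.argsort(-x)               # coordinates sorted descending
    xs = x[order]
    v = np.zeros(S, dtype=int)           # start at the all-zeros vertex
    out = (1.0 - xs[0]) * theta[tuple(v)]
    for k in range(S):
        v[order[k]] = 1                  # walk to the next simplex vertex
        w = xs[k] - (xs[k + 1] if k + 1 < S else 0.0)
        out += w * theta[tuple(v)]
    return out
```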
Evaluating calibration functions can add notably to the overall evaluation time. If evaluating the
average calibration function takes time c, with shared calibrators the eval time is O(cD + LS log S)
because the D calibrators can be evaluated just once, but with separate calibrators, the eval time
is generally worse at O(LS(c + log S)). For practical problems, evaluating separate calibrators
may be one-third of the total evaluation time, even when implemented with an efficient binary
search. However, if evaluation can be efficiently parallelized with multi-threading, then there is little
difference in evaluation time between shared and separate calibrators.
If shared (or no) calibrators are used, merging the ensemble's lattices into fewer larger lattices (see
Sec. 2.2) can greatly reduce evaluation time. For example, for a problem with D = 14 features, we found
the accuracy was best if we trained an ensemble of 300 lattices with S = 9 features each. The resulting
ensemble had 300 × 2^9 = 153 600 parameters. We then merged all 300 lattices into one equivalent 2^14
lattice with only 16 384 parameters, which reduced both the memory and the evaluation time by a
factor of 10.
7 Experiments
We demonstrate the proposals on four datasets. Dataset 1 is the ADULT dataset from the UCI Machine
Learning Repository [19], and the other datasets are provided by product groups from Google, with
monotonicity constraints for certain features given by the corresponding product group. See Table
2 for details. To efficiently handle the large number of linear inequality constraints (≈100 000
constraints for some of these problems) when training a lattice or ensemble of lattices, we used
LightTouch [20].
We compared to random forests (RF) [9], an ensemble method that consistently performs well for
datasets this size [10]. However, RF makes no attempt to respect the monotonicity constraints, nor is
it easy to check if an RF is monotonic. We used a C++ package implementation for RF.
All the hyper parameters were optimized on validation sets. Please see the supplemental for further
experimental details.
7.1 Experiment 1 - Monotonicity as a Regularizer
In the first experiment, we compare the accuracy of different models in predicting whether income is
greater than $50k for the ADULT dataset (Dataset 1). We compare four models on this dataset: (I)
random forest (RF), (II) single unconstrained lattice, (III) single lattice constrained to be monotonic
in 4 features, (IV) an ensemble of 50 lattices with 5 features in each lattice and separate calibrators
for each lattice, jointly trained. For the constrained models, we set the function to be monotonically
increasing in capital-gain, weekly hours of work, and education level, and monotonic with respect to the gender wage gap [21].
Results over 5 runs of each algorithm, shown in Table 3, demonstrate how monotonicity can act as
a regularizer to improve the testing accuracy: the monotonic models have lower training accuracy,
but higher test accuracy. The ensemble of lattices also improves accuracy over the single lattice; we
hypothesize this is because the small lattices provide useful regularization, while the separate calibrators
and ensemble provide helpful flexibility. See the appendix for a more detailed analysis of the results.
Table 3: Accuracy of different models on the ADULT dataset from UCI.

                        Training Accuracy   Testing Accuracy
Random Forest           90.56 ± 0.03        85.21 ± 0.01
Unconstrained Lattice   86.34 ± 0.00        84.96 ± 0.00
Monotonic Lattice       86.29 ± 0.02        85.36 ± 0.03
Monotonic Crystals      86.25 ± 0.02        85.53 ± 0.04

7.2 Experiment 2 - Crystals vs. Random Search for Feature Subsets Selection
The second experiment on Dataset 2 is a regression to score the quality of a candidate for a matching
problem on a scale of [0, 4]. The training set consists of 115 977 past examples, and the testing set
consists of 31 980 more recent examples (thus the samples are not IID). There are D = 12 features,
of which 10 are constrained to be monotonic on all the models compared for this experiment. We
use this problem with its small number of features to illustrate the effect of the feature subset choice,
comparing to RTLs optimized over K = 10 000 different trained ensembles, each with different
random feature subsets.
All ensembles were restricted to S = 2 features per base model, so there were only 66 distinct feature
subsets possible, and thus for an L = 8 lattice ensemble, there were $\binom{66}{8} \approx 5.7 \times 10^9$ possible feature
subsets. Ten-fold cross-validation was used to select an RTL out of 1, 10, 100, 1000, or 10 000 RTLs
whose feature subsets were randomized with different random seeds. We compared to a calibrated
linear model and a calibrated single lattice on all D = 12 features. See the supplemental for further
experimental details.
Figure 2 shows the normalized mean squared error (MSE divided by label variance, which is 0 for
the oracle model, and 1 for the best constant model). Results show that Crystals (orange line) is
substantially better than a random draw of the feature subsets (light blue line), and for mid-sized
ensembles (e.g., L = 32 lattices), Crystals can provide very large computational savings (1000×) over
the RTL strategy of randomly considering different feature subsets.
7.3 Experiment 3 - Larger-Scale Classification: Crystals vs. Random Forest
The third experiment on Dataset 3 is to classify whether a candidate result is a good match to a user.
There are D = 54 features, of which 9 were constrained to be monotonic. We split 800k labelled
samples based on time, using the 500k oldest samples for a training set, the next 100k samples for a
validation set, and the most recent 200k samples for a testing set (so the three datasets are not IID).
Results over 5 runs of each algorithm in Figure 3 show that the Crystals ensemble is about 0.25%-0.30% more accurate on the testing set over a broad range of ensemble sizes. The best RF on the
validation set used 350 trees with a leaf size of 1 and the best Crystals model used 350 lattices with 6
features per lattice. Because the RF hyperparameter validation chose to use a minimum leaf size of 1,
[Figure 2 plots Mean Squared Error / Variance against the Number of 2D Lattices, with curves for Linear, Full lattice, Crystals, and RTL × 1, 10, 100, 1,000, and 10,000. Figure 3 plots Average Test Accuracy against the Number of Base Models for Crystals and Random Forests.]
Figure 2: Comparison of normalized mean squared test error on Dataset 2. Average standard error is less than 10^-4.
Figure 3: Test accuracy on Dataset 3 over the number of base models (trees or lattices). Error bars are standard errors.
Table 4: Results on Dataset 4. Test Set 1 has the same distribution as the training set. Test Set 2 is
a more realistic test of the task. Left: Crystals vs. random forests. Right: Comparison of different
optimization algorithms.
Left:
                Test Set 1 Accuracy   Test Set 2 Accuracy
Random Forest   75.23 ± 0.06          90.51 ± 0.01
Crystals        75.18 ± 0.05          91.15 ± 0.05

Right:
Lattices Training   Calibrators Per Lattice   Test Set 1 Accuracy
Joint               Separate                  74.78 ± 0.04
Independent         Separate                  72.80 ± 0.04
Joint               Shared                    74.48 ± 0.04
the size of the RF model for this dataset scaled linearly with the dataset size, to about 1GB. The large
model size severely affects memory caching, and combined with the deep trees in the RF ensemble,
makes both training and evaluation an order of magnitude slower than for Crystals.
7.4 Experiment 4 - Comparison of Optimization Algorithms
The fourth experiment on Dataset 4 is to classify whether a specific visual element should be shown
on a webpage. There are D = 29 features, of which 21 were constrained to be monotonic. The Train
Set, Validation Set, and Test Set 1 were randomly split from one set of examples, whose sampling
distribution was skewed to sample mostly difficult examples. In contrast, Test Set 2 was uniformly
randomly sampled, with samples from a larger set of countries, making it a more accurate measure of
the expected accuracy in practice.
After optimizing the hyperparameters on the validation set, we independently trained 10 RF and 10
Crystal models, and report the mean test accuracy and standard error on the two test sets in Table
4. On Test Set 1, which was split off from the same set of difficult examples as the Train Set and
Validation Set, the random forests and Crystals perform statistically similar. On Test Set 2, which
is six times larger and was sampled uniformly and from a broader set of countries, the Crystals
are statistically significantly better. We believe this demonstrates that the imposed monotonicity
constraints effectively act as regularizers and help the lattice ensemble generalize better to parts of
the feature space that were sparser in the training data.
In a second experiment with Dataset 4, we used RTLs to illustrate the effects of shared vs. separate
calibrators, and training the lattices jointly vs. independently.
We first constructed an RTL model with 500 lattices of 5 features each and a separate set of calibrators
for each lattice, and trained the lattices jointly as a single model. We then separately modified this
model in two different ways: (1) training the lattices independently of each other and then learning an
optimal linear combination of their predictions, and (2) using a single, shared set of calibrators for all
lattices. All models were trained using logistic loss, mini-batch size of 100, and 200 loops. For each
model, we chose the optimization algorithms? step sizes by finding the power of 2 that maximized
accuracy on the validation set.
Table 4 (right) shows that joint training and separate calibrators for the different lattices can provide a
notable and statistically significant increase in accuracy, due to the greater flexibility.
8 Conclusions
The use of machine learning has become increasingly popular in practice. That has come with a
greater demand for machine learning that matches the intuitions of domain experts. Complex models,
even when highly accurate, may not be accepted by users who worry the model may not generalize
well to new samples.
Monotonicity guarantees can provide an important sense of control and understanding on learned
functions. In this paper, we showed how ensembles can be used to learn the largest and most
complicated monotonic functions to date. We proposed a measure of pairwise feature interactions
that can identify good feature subsets in a fraction of the computation needed for random feature
selection. On real-world problems, we showed these monotonic ensembles provide similar or better
accuracy, and faster evaluation time compared to random forests, which do not provide monotonicity
guarantees.
References
[1] Y. S. Abu-Mostafa. A method for learning from hints. In Advances in Neural Information Processing Systems, pages 73–80, 1993.
[2] M. R. Gupta, A. Cotter, J. Pfeifer, K. Voevodski, K. Canini, A. Mangylov, W. Moczydlowski, and A. Van Esbroeck. Monotonic calibrated interpolated look-up tables. Journal of Machine Learning Research, 17(109):1–47, 2016.
[3] C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia. Incorporating functional knowledge in neural networks. Journal of Machine Learning Research, 2009.
[4] J. Spouge, H. Wan, and W. J. Wilbur. Least squares isotonic regression in two dimensions. Journal of Optimization Theory and Applications, 117(3):585–605, 2003.
[5] Y.-J. Qu and B.-G. Hu. Generalized constraint neural network regression model subject to linear priors. IEEE Trans. on Neural Networks, 22(11):2447–2459, 2011.
[6] H. Daniels and M. Velikova. Monotone and partially monotone neural networks. IEEE Trans. Neural Networks, 21(6):906–917, 2010.
[7] W. Kotlowski and R. Slowinski. Rule learning with monotonicity constraints. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 537–544. ACM, 2009.
[8] E. K. Garcia and M. R. Gupta. Lattice regression. In Advances in Neural Information Processing Systems (NIPS), 2009.
[9] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[10] M. Fernandez-Delgado, E. Cernadas, S. Barro, and D. Amorim. Do we need hundreds of classifiers to solve real world classification problems? Journal of Machine Learning Research, 2014.
[11] E. K. Garcia, R. Arora, and M. R. Gupta. Optimized regression for efficient function evaluation. IEEE Trans. Image Processing, 21(9):4128–4140, Sept. 2012.
[12] T. Ho. Random subspace method for constructing decision forests. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(8):832–844, 1998.
[13] D. Amaratunga, J. Cabrera, and Y. S. Lee. Enriched random forests. Bioinformatics, 24(18), 2008.
[14] Y. Ye, Q. Wu, J. Z. Huang, M. K. Ng, and X. Li. Stratified sampling for feature subspace selection in random forests for high-dimensional data. Pattern Recognition, 2013.
[15] L. Zhang and P. N. Suganthan. Random forests with ensemble of feature spaces. Pattern Recognition, 2014.
[16] H. Xu, G. Chen, D. Povey, and S. Khudanpur. Modeling phonetic context with non-random forests for speech recognition. Proc. Interspeech, 2015.
[17] G. Sharma and R. Bala. Digital Color Imaging Handbook. CRC Press, New York, 2002.
[18] A. Howard and T. Jebara. Learning monotonic transformations for classification. In Advances in Neural Information Processing Systems, 2007.
[19] C. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[20] A. Cotter, M. R. Gupta, and J. Pfeifer. A Light Touch for heavily constrained SGD. In 29th Annual Conference on Learning Theory, pages 729–771, 2016.
[21] D. Weichselbaumer and R. Winter-Ebmer. A meta-analysis of the international gender wage gap. Journal of Economic Surveys, 19(3):479–511, 2005.
5,944 | 6,378 | Learning Parametric Sparse Models for Image
Super-Resolution
Yongbo Li, Weisheng Dong*, Xuemei Xie, Guangming Shi1, Xin Li2, Donglai Xu3
State Key Lab. of ISN, School of Electronic Engineering, Xidian University, China
1
Key Lab. of IPIU (Chinese Ministry of Education), Xidian University, China
2
Lane Dep. of CSEE, West Virginia University, USA
3
Sch. of Sci. and Eng., Teesside University, UK
[email protected], {wsdong, xmxie}@mail.xidian.edu.cn
[email protected], [email protected]
Abstract
Learning accurate prior knowledge of natural images is of great importance for
single image super-resolution (SR). Existing SR methods either learn the prior
from the low/high-resolution patch pairs or estimate the prior models from the
input low-resolution (LR) image. Specifically, high-frequency details are learned
in the former methods. Though effective, they are heuristic and have limitations
in dealing with blurred LR images; while the latter suffers from the limitations
of frequency aliasing. In this paper, we propose to combine those two lines of
ideas for image super-resolution. More specifically, the parametric sparse prior
of the desirable high-resolution (HR) image patches are learned from both the
input low-resolution (LR) image and a training image dataset. With the learned
sparse priors, the sparse codes and thus the HR image patches can be accurately
recovered by solving a sparse coding problem. Experimental results show that the
proposed SR method outperforms existing state-of-the-art methods in terms of both
subjective and objective image qualities.
1 Introduction
Image super-resolution (SR), aiming to recover a high-resolution (HR) image from a single low-resolution (LR) image, has important applications in image processing and computer vision, ranging
from high-definition (HD) televisions and surveillance to medical imaging. Due to the information
loss in the LR image formation, image SR is a classic ill-posed inverse problem, for which strong
prior knowledge of the underlying HR image is required. Generally, image SR methods can be
categorized into two types, i.e., model-based and learning-based methods.
In model-based image SR, the selection of image prior is of great importance. The image priors,
ranging from smoothness assumptions to sparsity and structured sparsity priors, have been exploited
for image SR [1][3][4][13][14][15][19]. The smoothness prior models, e.g., Tikhonov and total
variation (TV) regularizers[1], are effective in suppressing the noise but tend to over smooth image
details. The sparsity-based SR methods, assuming that the HR patches have sparse representation with
respect to a learned dictionary, have led to promising performances. Due to the ill-posed nature of the
SR problem, designing an appropriate sparse regularizer is critical for the success of these methods.
Generally, parametric sparse distributions, e.g., Laplacian and Generalized Gaussian models, which
correspond to the $\ell_1$ and $\ell_p$ ($0 \le p \le 1$) regularizers, are widely used. It has been shown that the
SR performance can be much boosted by exploiting the structural self-similarity of natural images
* Corresponding author.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[3][4][15]. Though promising SR performance can be achieved by the sparsity-based methods, it
is rather challenging to recover high-quality HR images for large scaling factors, as there is no
sufficient information for accurate estimation of the sparse models from the input LR image.
Instead of adopting a specific prior model, learning-based SR methods learn the priors directly
from a large set of LR and HR image patch pairs [2][5][6][8][18]. Specifically, mapping functions
between the LR and the high-frequency details of the HR patches are learned. Popular learning-based
SR methods include the sparse coding approaches[2] and the more efficient anchored neighborhood
regression methods (i.e., ANR and A+)[5][6]. More recently, inspired by the great success of the
deep neural network (DNN)[16] for image recognition, the DNN based SR methods have also been
proposed[8], where the DNN models is used to learn the mapping functions between the LR and
the high-frequency details of the HR patches. Despite the state-of-the-art performances achieved,
these patch-based methods [6][8] have limitations in dealing with the blurred LR images (as shown
in Sec. 5). Instead of learning high-frequency details, in [12] Li et al. proposed to learn parametric
sparse distributions (i.e., non-zero mean Laplacian distributions) of the sparse codes from retrieved
HR images that are similar to the LR image. State-of-the-art SR results have been achieved for the
landmark LR images, for which similar HR images can be retrieved from a large image set. However,
it has limitations for general LR images (i.e., it reduces to be the conventional sparsity-based SR
method), for which correlated HR images cannot be found in the image database.
In this paper, we propose a novel image SR approach combining the ideas of sparsity-based and
learning-based approaches for SR. The sparse prior, i.e., the parametric sparse distributions (e.g.,
Laplace distribution) are learned from general HR image patches. Specifically, a set of mapping
functions between the LR image patches and the sparse codes of the HR patches are learned. In
addition to the learned sparse prior, the learned sparse distributions are also combined with those
estimated from the input LR image. Experimental results show that the proposed method performs
much better than the current state-of-the-art SR approaches.
2 Related works
In model-based SR, it is often assumed that the desirable HR image/patches have sparse expansions
with respect to a certain dictionary. For a given LR image $y = Hx + n$, where $H \in \mathbb{R}^{M \times N}$ specifies
the degradation model, $x \in \mathbb{R}^N$ and $n \in \mathbb{R}^M$ denote the original image and additive Gaussian noise,
respectively. Sparsity-based SR image reconstruction can be formulated as [3][4]
$$(x, \alpha) = \arg\min_{x,\alpha} \|y - Hx\|_2^2 + \eta \sum_i \big\{ \|R_i x - D\alpha_i\|_2^2 + \lambda\,\psi(\alpha_i) \big\}, \qquad (1)$$
where $R_i \in \mathbb{R}^{n \times N}$ denotes the matrix extracting the image patch of size $\sqrt{n} \times \sqrt{n}$ at position $i$ from $x$,
$D \in \mathbb{R}^{n \times K}$ denotes the dictionary that is an off-the-shelf basis or learned from a training dataset,
and $\psi(\cdot)$ denotes the sparsity regularizer. As recovering $x$ from $y$ is an ill-posed inverse problem,
the selection of $\psi(\cdot)$ is critical for the SR performance. A common selection of $\psi(\cdot)$ is the $\ell_p$-norm
($0 \le p \le 1$) regularizer, where zero-mean sparse distributions of the sparse coefficients are assumed.
In [12], nonzero-mean Laplacian distributions are used, leading to the following sparsity-based SR
method,
$$(x, \alpha) = \arg\min_{x,\alpha} \|y - Hx\|_2^2 + \eta \sum_i \big\{ \|R_i x - D\alpha_i\|_2^2 + \|\lambda_i \odot (\alpha_i - \mu_i)\|_1 \big\}, \qquad (2)$$
where $\lambda_i = \mathrm{diag}\big(2\sqrt{2}\,\sigma_n^2 / \sigma_{i,j}\big)$, and $\sigma_i$ and $\mu_i$ denote the standard deviation and expectation of $\alpha_i$, respectively.
It has been shown in [3] that by estimating $\{\mu_i, \sigma_i\}$ from the nonlocal similar image patches of
the input image, promising SR performance can be achieved. However, for large scaling factors,
it is rather challenging to accurately estimate $\{\mu_i, \sigma_i\}$ from the input LR image, due to the lack
of sufficient information. To overcome these limitations, Li et al. propose to learn the parametric
distributions from retrieved similar HR images [12] via block matching, and obtain state-of-the-art
SR performance for landmark images. However, for general LR images, for which similar HR images
cannot be found, the sparse prior $(\mu_i, \sigma_i)$ cannot be learned.
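The operator induced by the nonzero-mean $\ell_1$ penalty in Eq. (2) is ordinary soft-thresholding shifted to the learned mean; a one-function sketch with elementwise numpy semantics (illustrative, not the authors' code):

```python
import numpy as np

def shrink(beta, lam, mu):
    # MAP update under the nonzero-mean Laplacian prior of Eq. (2):
    # soft-threshold the coefficients around mu instead of around zero.
    d = beta - mu
    return mu + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
```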
Learning-based SR methods resolve the SR problem by learning mapping functions between LR and
HR image patches [2][6][8]. Popular methods include the sparse coding methods [2], where LR/HR
dictionary pair is jointly learned from a training set. The sparse codes of the LR patches with respect
to the LR dictionary are inferred via sparse coding and then used to reconstruct the HR patches with
the HR dictionary. To reduce the computational complexity, the anchored neighborhood regression (ANR)
method and its advanced version (i.e., A+) [6] have been proposed. These methods first divide the
patch spaces into many clusters, then LR/HR dictionary pairs are learned for each cluster. Mapping
functions between the LR/HR patches are learned for each cluster via ridge regression. Recently,
deep neural network (DNN) model has also been developed to learn the mapping functions between
the LR and HR patches [8]. The advantages of the DNN model is that the entire SR pipeline is
jointly optimized via end-to-end learning, leading to state-of-the-art SR performance. Despite the
excellent performances, these learning-based methods focusing on learning the mapping functions
between LR and HR patches have limitations in recovering a HR image from a blurry LR image
generated by first applying a low-pass filtering followed by downsampling (as shown in Sec. 4). In
this paper, we propose a novel image SR method by taking advantages of both the sparse-based and
the example-based SR approaches. Specifically, mapping functions between the LR patches and
the sparse codes of the desirable HR patches are learned. Hence, sparse prior can be learned from
both the training patches and the input LR image. With the learned sparse prior, state-of-the-art SR
performance can be achieved.
3 Learning Parametric Sparse Models
In this section, we first propose a novel method to learn the sparse codes of the desirable HR patches
and then present the method to estimate the parametric distributions from both the predicted sparse
codes and those of the LR images.
3.1 Learning the sparse codes from LR/HR patch pairs
For a given LR image patch $y_i \in \mathbb{R}^m$, we aim to learn the expectation of the sparse code $\alpha_i$ of the
desirable HR patch $x_i$ with respect to dictionary $D$. Without loss of generality, we define the
learning function as
$$\hat{\alpha}_i = f(z_i; W, b) = g(W z_i + b), \qquad (3)$$
where $z_i$ denotes the feature vector extracted from the LR patch $y_i$, $W \in \mathbb{R}^{K \times m}$ is the weighting
matrix, $b \in \mathbb{R}^K$ is the bias, and $g(\cdot)$ denotes an activation function. Now, the remaining task
is to learn the parameters of the learning function of Eq. (3). To learn the parameters, we first
construct a large set of LR feature vector and HR image patch pairs $\{(z_i, x_i)\}$, $i = 1, 2, \cdots, N$.
For a given dictionary, the sparse codes $\alpha_i$ of $x_i$ can be obtained by a sparse coding algorithm. Then,
the parameters $\mathcal{W} = \{W, b\}$ can be learned by minimizing the following objective function
$$(W, b) = \arg\min_{W,b} \sum_{i=1}^{N} \|\alpha_i - f(z_i; W, b)\|_2^2. \qquad (4)$$
The above optimization problem can be iteratively solved by using a stochastic gradient descent
approach.
Considering the high complexity of the mapping function between the LR feature vectors and the
desirable sparse codes, we propose to learn a set of mapping functions for each possible local image
structures. Specifically, the K-means clustering algorithm is used to cluster the LR/HR patches into
K clusters. Then, a mapping function is learned for each cluster. After clustering, the LR/HR patches
in each cluster generally contain similar image structures, and a linear mapping function would be
sufficient to characterize the correlations between the LR feature vectors and the sparse codes of
the desirable HR patches. Therefore, for each cluster $S_k$, the mapping function can be learned via
minimizing
$$(W_k, b_k) = \arg\min_{W_k, b_k} \sum_{i \in S_k} \|\alpha_i - (W_k z_i + b_k)\|_2^2. \qquad (5)$$
For simplicity, the bias term $b_k$ in the above equation can be absorbed into $W_k$ by rewriting $W_k$ and
$z_i$ as $W_k = [W_k, b_k]$ and $z_i = [z_i^\top; 1]^\top$, respectively. Then, the parameters $W_k$ can be easily solved
via a least-squares method.
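A minimal sketch of this per-cluster fit (illustrative; the small ridge term eps is an added numerical-stability assumption, not specified by the paper). Each row of Z is a feature vector with a trailing 1 absorbing the bias, as described above:

```python
import numpy as np

def fit_cluster_maps(Z, A, labels, K, eps=1e-6):
    # Z: (N, m+1) LR feature vectors (with appended 1); A: (N, n) sparse
    # codes of the HR patches; labels: cluster index of each training pair.
    # Solves Eq. (5) in closed form, one regularized least squares per cluster.
    W = []
    for k in range(K):
        Zk, Ak = Z[labels == k], A[labels == k]
        G = Zk.T @ Zk + eps * np.eye(Zk.shape[1])
        W.append(np.linalg.solve(G, Zk.T @ Ak).T)   # shape (n, m+1)
    return W
```

Prediction for a patch in cluster k is then simply W[k] @ z_i, the learned estimate of Eq. (3).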
As the HR patches in each cluster generally have similar image structures, a compact dictionary
should be sufficient to represent the various HR patches. Hence, instead of learning an overcomplete
dictionary for all HR patches, an orthogonal basis is learned for each cluster Sk . Specifically, a PCA
Algorithm 1 Sparse codes learning algorithm
Initialization:
(a) Construct a set of LR and HR image pairs $\{y, x\}$ and recover the HR images $\{\tilde{x}\}$ with a conventional SR method;
(b) Extract the feature patches $z_i$ and the LR and HR patches $y_i$ and $x_i$ from $\{\tilde{x}, y, x\}$, respectively;
(c) Cluster $\{z_i, y_i, x_i\}$ into K clusters using the K-means algorithm.
Outer loop: Iteration on $k = 1, 2, \cdots, K$
(a) Calculate the PCA basis $D_k$ for each cluster using the HR patches belonging to the k-th cluster;
(b) Compute the sparse codes as $\alpha_i = S_\tau(D_{k_i}^\top x_i)$ for each $x_i$, $i \in S_k$;
(c) Learn the parameters $W$ of the mapping function via solving Eq. (5).
End for
Output: $\{D_k, W_k\}$.
basis, denoted as $D_k \in \mathbb{R}^{n \times n}$, is learned for each $S_k$, $k = 1, 2, \cdots, K$. Then, the sparse codes $\alpha_i$ can
be easily obtained as $\alpha_i = S_\tau(D_{k_i}^\top x_i)$, where $D_{k_i}$ denotes the PCA basis of the $k_i$-th cluster. Regarding
the feature vectors zi , we extract feature vectors from an initially recovered HR image, which can be
obtained with a conventional sparsity-based method. Similar to [5][6], the first- and second-order
gradients are extracted from the initially recovered HR image as the features. However, other more
effective features can also be used. The sparse distribution learning algorithm is summarized in
Algorithm 1.
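Because each cluster basis is orthogonal, the dictionary and code computations in Algorithm 1 are a few lines; a sketch assuming the HR patches of a cluster are stacked as (mean-subtracted) rows of X:

```python
import numpy as np

def pca_basis(X):
    # Orthogonal PCA basis D_k for one cluster's vectorized HR patches.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T                                    # columns form the basis

def sparse_codes(X, D, tau):
    # alpha_i = S_tau(D^T x_i); exact because D is orthogonal.
    B = X @ D                                      # analysis coefficients
    return np.sign(B) * np.maximum(np.abs(B) - tau, 0.0)
```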
3.2 Parametric sparse models estimation
After learning the linearized mapping functions, the estimates of $\alpha_i$, denoted as $\hat{\alpha}_i$, can be computed
from the LR patch via Eq. (3). Based on the observation that natural images contain abundant self-repeating
structures, a collection of similar patches can often be found for an exemplar patch. Then, the mean
of $\alpha_i$ can be estimated as a weighted average of the sparse codes of the similar patches. As the
original image is unknown, an initial estimate of the desirable HR image, denoted as $\tilde{x}$, is obtained
using a conventional SR method, e.g., solving Eq. (2). Then, the search for similar patches can be
conducted based on $\tilde{x}$. Let $\tilde{x}_i$ denote the patch extracted from $\tilde{x}$ at position $i$ and $\tilde{x}_{i,l}$ denote the
patches similar to $\tilde{x}_i$ that are within the first $L$ closest matches, $l = 1, 2, \cdots, L$. Denote by $z_{i,l}$
the corresponding feature vectors extracted from $\tilde{x}$. Therefore, the mean of $\alpha_i$ can be estimated by
$$\hat{\mu}_i = \sum_{l=1}^{L} w_{i,l}\, \hat{\alpha}_{i,l}, \qquad (6)$$
where $w_{i,l} = \frac{1}{c}\exp(-\|\tilde{x}_{i,l} - \tilde{x}_i\|/h)$, $c$ is the normalization constant, and $h$ is a predefined
parameter.
Additionally, we can also estimate the mean of the sparse codes $\alpha_i$ directly from the intermediate estimate
of the target HR image. For each initially recovered HR patch $\tilde{x}_i$, the sparse codes can be obtained
via a sparse coding algorithm. As the patch space has been clustered into K sub-spaces and a
compact PCA basis is computed for each cluster, the sparse code of $\tilde{x}_i$ can be easily computed as
$\tilde{\alpha}_i = S_\tau(D_{k_i}^\top \tilde{x}_i)$, where $S_\tau(\cdot)$ is the soft-thresholding function with threshold $\tau$, and $k_i$ denotes the
cluster that $\tilde{x}_i$ falls into. The sparse codes of the set of similar patches $\tilde{x}_{i,l}$ can also be computed.
Then, the expectation of $\alpha_i$ can be estimated as
$$\tilde{\mu}_i = \sum_{l=1}^{L} w_{i,l}\, \tilde{\alpha}_{i,l}. \qquad (7)$$
Then, an improved estimate of $\mu_i$ can be obtained by combining the above two estimates, i.e.,
$$\mu_i = \Lambda \hat{\mu}_i + (I - \Lambda)\tilde{\mu}_i, \qquad (8)$$
where $\Lambda = \delta\,\mathrm{diag}(\lambda_j) \in \mathbb{R}^{K \times K}$. Similar to [12], $\lambda_j$ is set according to the energy ratio of $\hat{\mu}_i(j)$ and
$\tilde{\mu}_i(j)$ as
$$\lambda_j = \frac{r_j^2}{r_j^2 + 1/r_j^2}, \quad r_j = \hat{\mu}_i(j)/\tilde{\mu}_i(j). \qquad (9)$$
And $\delta$ is a predefined constant. After estimating $\mu_i$, the variance of the sparse codes is estimated as
$$\sigma_i^2 = \frac{1}{L}\sum_{l=1}^{L} \big(\tilde{\alpha}_{i,l} - \mu_i\big)^2. \qquad (10)$$
The learned parametric Laplacian distributions with $\{\mu_i, \sigma_i\}$ for image patches $x_i$ are then used with
the MAP estimator for image SR in the next section.
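Putting Eqs. (8)-(10) together, the two mean estimates are blended coefficient-wise; a sketch in which the eps guard against division by zero is an added assumption, not part of the paper:

```python
import numpy as np

def combine_estimates(mu_hat, mu_tilde, alpha_tilde, delta=1.0, eps=1e-8):
    # mu_hat: mean predicted by the learned mappings (Eq. 6); mu_tilde: mean
    # from the intermediate HR estimate (Eq. 7); alpha_tilde: (L, n) codes
    # of the similar patches, used for the variance of Eq. (10).
    r = mu_hat / (mu_tilde + eps)
    lam = delta * r**2 / (r**2 + 1.0 / (r**2 + eps) + eps)      # Eq. (9)
    mu = lam * mu_hat + (1.0 - lam) * mu_tilde                  # Eq. (8)
    sigma = np.sqrt(np.mean((alpha_tilde - mu) ** 2, axis=0))   # Eq. (10)
    return mu, sigma
```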
4 Image Super-Resolution with Learned Parametric Sparsity Models
With the learned parametric sparse distributions $\{(\mu_i, \sigma_i)\}$, the image SR problem can be formulated as
$$(\hat{x}, \hat{A}_i) = \arg\min_{x, A_i} \|y - Hx\|_2^2 + \eta \sum_i \Big\{ \|\tilde{R}_i x - D_{k_i} A_i\|_F^2 + \lambda \sum_{l=1}^{L} \|\lambda_i \odot (\alpha_{i,l} - \mu_i)\|_1 \Big\}, \qquad (11)$$
where $\tilde{R}_i x = [R_{i,1}x, R_{i,2}x, \cdots, R_{i,L}x] \in \mathbb{R}^{n \times L}$ denotes the matrix formed by the similar patches,
$A_i = [\alpha_{i,1}, \cdots, \alpha_{i,L}]$, $D_{k_i}$ denotes the selected PCA basis of the $k_i$-th cluster, and $\lambda_i = \mathrm{diag}(1/\sigma_{i,j})$.
In Eq. (11), the group of similar patches is assumed to follow the same estimated parametric
distribution $\{\mu_i, \sigma_i\}$. Eq. (11) can be approximately solved via alternating optimization. For fixed
$x$, the sets of sparse codes $A_i$ can be solved by minimizing
$$\hat{A}_i = \arg\min_{A_i} \|\tilde{R}_i x - D_{k_i} A_i\|_F^2 + \lambda \sum_{l=1}^{L} \|\lambda_i \odot (\alpha_{i,l} - \mu_i)\|_1. \qquad (12)$$
As the orthogonal PCA basis is used, the above equation can be solved in closed form, i.e.,
$$\hat{\alpha}_{i,l} = S_{\tau_i}\big(D_{k_i}^\top R_{i,l}\, x - \mu_i\big) + \mu_i, \qquad (13)$$
where $\tau_i = \lambda/\sigma_i$. With the estimated $\hat{A}_i$, the whole image can be estimated by solving
$$\hat{x} = \arg\min_{x} \|y - Hx\|_2^2 + \eta \sum_i \|\tilde{R}_i x - D_{k_i} \hat{A}_i\|_F^2, \qquad (14)$$
which is a quadratic optimization problem and admits a closed-form solution, as
$$\hat{x} = \Big(H^\top H + \eta \sum_i \tilde{R}_i^\top \tilde{R}_i\Big)^{-1}\Big(H^\top y + \eta \sum_i \tilde{R}_i^\top D_{k_i} \hat{A}_i\Big), \qquad (15)$$
where $\tilde{R}_i^\top \tilde{R}_i = \sum_{l=1}^{L} R_l^\top R_l$ and $\tilde{R}_i^\top D_{k_i} \hat{A}_i = \sum_{l=1}^{L} R_l^\top D_{k_i} \hat{\alpha}_{i,l}$. As the matrix to be inverted in Eq.
(15) is very large, the conjugate gradient algorithm is used to compute Eq. (15). The proposed image
SR algorithm is summarized in Algorithm 2. In Algorithm 2, we iteratively extract the feature
patches from $\tilde{x}^{(t)}$ and learn $\hat{\mu}_i$ from the training set, leading to further improvements in predicting
the sparse codes with the learned mapping functions.
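A schematic sketch of one inner-loop pass (illustrative only: the callables H/Ht and, per patch group, extract/put_back stand in for H, H-transpose, the patch operator R-tilde and its adjoint, and are assumed to be supplied by the caller):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sr_step(x, y, H, Ht, groups, eta):
    # groups: per patch group i, a tuple (extract, put_back, D, mu, tau) with
    # the patch-matrix operator, its adjoint, the cluster PCA basis D_{k_i},
    # and the learned Laplacian parameters mu_i and tau_i = lambda / sigma_i.
    b = Ht(y)
    for extract, put_back, D, mu, tau in groups:
        A = D.T @ extract(x)                       # codes of similar patches
        d = A - mu[:, None]
        A = mu[:, None] + np.sign(d) * np.maximum(np.abs(d) - tau[:, None], 0.0)  # Eq. (13)
        b += eta * put_back(D @ A)
    def matvec(v):
        z = v.reshape(x.shape)
        out = Ht(H(z))
        for extract, put_back, D, mu, tau in groups:
            out += eta * put_back(extract(z))      # the R_i^T R_i z terms
        return out.ravel()
    op = LinearOperator((x.size, x.size), matvec=matvec, dtype=np.float64)
    sol, _ = cg(op, b.ravel(), x0=x.ravel())       # Eq. (15) via CG
    return sol.reshape(x.shape)
```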
5 Experimental results
In this section, we verify the performance of the proposed SR method. For fair comparisons, we
use the relatively small training set of images used in [2][6]. The training images are used to simulate
the LR images, which are recovered by a sparsity-based method (e.g., the NCSR method [3]). Total
100,000 feature and HR patch pairs are extracted from the reconstructed HR images and the
original HR images. Patches of size 7 × 7 are extracted from the feature images and HR images.
Similar to [5][6], the PCA technique is used to reduce the dimensions of the feature vectors. The
training patches are clustered into 1000 clusters. The other major parameters of the proposed SR
Algorithm 2 Image SR with Learned Sparse Representation
Initialization:
(a) Initialize $\tilde{x}^{(0)}$ with a conventional SR method;
(b) Set parameters $\eta$ and $\lambda$;
Outer loop: Iteration over $t = 0, 1, \cdots, T$
(a) Extract feature vectors $z_i$ from $\tilde{x}^{(t)}$ and cluster the patches into clusters;
(b) Learn $\hat{\mu}_i$ for each local patch using Eq. (6);
(c) Update the estimate of $\mu_i$ using Eq. (8) and estimate $\sigma_i$ with Eq. (10);
(d) Inner loop (solve Eq. (11)): iteration over $j = 1, 2, \cdots, J$;
   (I) Compute $A_i^{(j+1)}$ by solving Eq. (13);
   (II) Update the whole image $\tilde{x}^{(j+1)}$ via Eq. (15);
   (III) Set $\tilde{x}^{(t+1)} = \tilde{x}^{(j+1)}$ if $j = J$.
End for
Output: $x^{(t+1)}$.
method are set as: L = 12, T = 8, and J = 10. The proposed SR method is compared with several
current state-of-the-art image SR methods, i.e., the sparse coding based SR method (denoted as
SCSR)[2], the SR method based on sparse regression and natural image prior (denoted as KK) [7],
the A+ method [6], the recent SRCNN method [8], and the NCSR method [3]. Note that the NCSR is
the current state-of-the-art sparsity-based SR method. Three image sets, i.e., Set5 [9], Set14 [10] and BSD100 [11],
which consists of 5, 14 and 100 images respectively, are used as the test images.
In this paper, we consider two types of degradation when generating the LR images, i.e., the bicubic
image resizing function implemented with imresize in matlab and Gaussian blurring followed by
downsampling with a scaling factor, both of which are commonly used in the literature of image SR.
5.1 Image SR for LR images generated with bicubic interpolation function
In [2][6][7][8], the LR images are generated with the bicubic interpolation function (i.e., the imresize
function in Matlab), i.e., $y = B(x) + n$, where $B(\cdot)$ denotes the bicubic downsampling function. To
deal with this type of degradation, we implement the degradation matrix $H$ as an operator that resizes
a HR image using the bicubic function with scaling factor $1/s$, and implement $H^\top$ as an operator that
upscales a LR image using the bicubic function with scaling factor $s$, where $s = 2, 3, 4$.
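A sketch of these two operators (illustrative: scipy's zoom stands in for Matlab's imresize, whose anti-aliasing and boundary handling differ, so this only approximates the H and H-transpose used in the experiments):

```python
from scipy.ndimage import zoom

def make_bicubic_ops(s):
    # H: bicubic resize by a factor of 1/s; Ht: bicubic resize by s.
    H = lambda x: zoom(x, 1.0 / s, order=3)
    Ht = lambda y: zoom(y, float(s), order=3)
    return H, Ht
```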
The average PSNR and SSIM results of the reconstructed HR images are reported in Table 1. It can be seen that
the SRCNN method performs better than the A+ and the SCSR methods. It is surprising to see that
the NCSR method, which only exploits the internal similar samples, performs comparably to the
SRCNN method. By exploiting both the external image patches and the internal similar patches, the
proposed method outperforms the NCSR. The average PSNR gain over SRCNN can be up to 0.64
dB. Parts of some reconstructed HR images by the test methods are shown in Fig. 1, from which
we can see that the proposed method reproduces more visually pleasant HR images than the other
competing methods. Please refer to the supplementary file for more visual comparison results.
5.2 Image SR for LR images generated with Gaussian blur followed by downsampling
Another commonly used degradation process is to first apply a Gaussian kernel followed by downsampling. In this experimental setting, the 7 × 7 Gaussian kernel of standard deviation 1.6 is used,
followed by downsampling with scaling factor s = 2, 3, 4. For these SCSR, KK, A+ and SRCNN
methods, which cannot deal with the Gaussian blur kernel, the iterative back-projection [17] method
is applied to the reconstructed HR images by those methods as a post processing to remove the
blur. The average PSNR and SSIM results on the three test image sets are reported in Table 2. It
can be seen that the performance of the example-based methods, i.e., SCSR[2], KK[7], A+[6] and
SRCNN[8] methods are much worse than the NCSR [3] method. Compared with the NCSR method,
the average PSNR gain of the proposed method can be up to 0.46 dB, showing the effectiveness of
the proposed sparse codes learning method. Parts of the reconstructed HR images are shown in Fig. 2
Table 1: Average PSNR and SSIM results of the test methods (LR images generated with the bicubic resizing function). Each cell reports PSNR (dB) / SSIM.

          Scale   SCSR[2]        KK[7]          A+[6]          SRCNN[8]       NCSR[3]        Proposed
Set5      x2      --             36.22/0.9514   36.55/0.9544   36.66/0.9542   36.68/0.9550   36.99/0.9551
          x3      31.42/0.8821   32.29/0.9037   32.59/0.9088   32.75/0.9090   33.05/0.9149   33.39/0.9173
          x4      --             30.03/0.8544   30.29/0.8603   30.49/0.8628   30.77/0.8720   31.04/0.8779
Set14     x2      --             32.12/0.9029   32.28/0.9056   32.45/0.9067   32.26/0.9058   32.61/0.9072
          x3      28.31/0.7954   28.39/0.8135   29.13/0.8188   29.30/0.8215   29.30/0.8239   29.59/0.8264
          x4      --             27.15/0.7422   27.33/0.7491   27.50/0.7513   27.52/0.7563   27.77/0.7620
BSD100    x2      --             31.08/0.8834   31.21/0.8863   31.36/0.8879   31.14/0.8863   31.42/0.8879
          x3      26.54/0.7729   28.15/0.7780   28.29/0.7835   28.41/0.7863   28.37/0.7872   28.56/0.7899
          x4      --             26.69/0.7017   26.82/0.7087   26.90/0.7103   26.91/0.7143   27.08/0.7187
Figure 1: SR results on image "86000" of BSD100 with scaling factor 3 (LR image generated with the bicubic interpolation function). (a) Original; (b) Bicubic; (c) SCSR / 26.01 dB; (d) KK / 26.49 dB; (e) A+ / 26.55 dB; (f) SRCNN / 26.71 dB; (g) NCSR / 27.11 dB; (h) Proposed / 27.35 dB.
and Fig. 3. Obviously, the proposed method can recover sharper edges and finer details than other
competing methods.
6 Conclusion
In this paper, we propose a novel approach for learning parametric sparse models for image super-resolution. Specifically, mapping functions between the LR patches and the sparse codes of the desirable
HR patches are learned from a training set. Then, parametric sparse distributions are estimated from
the learned sparse codes and those estimated from the input LR image. With the learned sparse
models, the sparse codes and thus the HR image patches can be accurately recovered by solving a
sparse coding problem. Experimental results show that the proposed SR method outperforms existing
state-of-the-art methods in terms of both subjective and objective image qualities.
Acknowledgments
This work was supported in part by the Natural Science Foundation (NSF) of China under Grants
No. 61622210, 61471281, 61632019, 61472301, and 61390512, and in part by the Specialized Research
Fund for the Doctoral Program of Higher Education (No. 20130203130001).
Table 2: Average PSNR and SSIM results of the test methods for scaling factor 3 (LR images generated
with a Gaussian kernel followed by downsampling). Each cell reports PSNR (dB) / SSIM.

          SCSR[2]        KK[7]          A+[6]          SRCNN[8]       NCSR[3]        Proposed
Set5      30.22/0.8484   30.28/0.8536   29.39/0.8502   30.20/0.8514   33.03/0.9106   33.49/0.9165
Set14     27.51/0.7619   27.46/0.7640   26.96/0.7627   27.48/0.7638   29.28/0.8203   29.63/0.8255
BSD100    27.10/0.7338   27.10/0.7342   26.59/0.7331   27.11/0.7338   28.35/0.7841   28.60/0.7887
Figure 2: SR results on "Monarch" from Set14 with scaling factor 3 (LR image generated with Gaussian blur followed by downsampling). (a) Original; (b) Bicubic; (c) SCSR / 29.85 dB; (d) KK / 29.94 dB; (e) A+ / 29.48 dB; (f) SRCNN / 29.88 dB; (g) NCSR / 32.97 dB; (h) Proposed / 33.84 dB.
Figure 3: SR results on "Pepper" from Set14 with scaling factor 3 (LR image generated with Gaussian blur followed by downsampling). (a) Original; (b) Bicubic; (c) SCSR / 32.22 dB; (d) KK / 32.12 dB; (e) A+ / 30.81 dB; (f) SRCNN / 32.16 dB; (g) NCSR / 34.59 dB; (h) Proposed / 35.15 dB.
References
[1] A. Marquina and S. J. Osher. Image super-resolution by TV-regularization and Bregman iteration. Journal of Scientific Computing, 37(3):367–382, 2008.
[2] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Transactions on Image Processing, 19(11):2861–2873, 2010.
[3] W. Dong, L. Zhang, G. Shi, and X. Li. Nonlocally centralized sparse representation for image restoration. IEEE Transactions on Image Processing, 22(4):1620–1630, 2013.
[4] W. Dong, G. Shi, Y. Ma, and X. Li. Image restoration via simultaneous sparse coding: Where structured sparsity meets Gaussian scale mixture. International Journal of Computer Vision, 114(2-3):217–232, 2015.
[5] R. Timofte, V. De Smet, and L. Van Gool. Anchored neighborhood regression for fast example-based super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 1920–1927, 2013.
[6] R. Timofte, V. De Smet, and L. Van Gool. A+: Adjusted anchored neighborhood regression for fast super-resolution. In Asian Conference on Computer Vision, pages 111–126. Springer, 2014.
[7] K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1127–1133, 2010.
[8] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016.
[9] M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. 2012.
[10] R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. In International Conference on Curves and Surfaces, pages 711–730. Springer, 2010.
[11] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 416–423. IEEE, 2001.
[12] Y. Li, W. Dong, G. Shi, and X. Xie. Learning parametric distributions for image super-resolution: Where patch matching meets sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, pages 450–458, 2015.
[13] W. Dong, L. Zhang, G. Shi, and X. Wu. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Transactions on Image Processing, 20(7):1838–1857, 2011.
[14] W. Dong, L. Zhang, and G. Shi. Centralized sparse representation for image restoration. In 2011 International Conference on Computer Vision, pages 1259–1266. IEEE, 2011.
[15] G. Yu, G. Sapiro, and S. Mallat. Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. IEEE Transactions on Image Processing, 21(5):2481–2499, 2012.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[17] M. Irani and S. Peleg. Motion analysis for image enhancement: Resolution, occlusion, and transparency. Journal of Visual Communication and Image Representation, 4(4):324–335, 1993.
[18] D. Dai, R. Timofte, and L. Van Gool. Jointly optimized regressors for image super-resolution. In Computer Graphics Forum, volume 34, pages 95–104. Wiley Online Library, 2015.
[19] K. Egiazarian and V. Katkovnik. Single image super-resolution via BM3D sparse coding. In Signal Processing Conference (EUSIPCO), 2015 23rd European, pages 2849–2853. IEEE, 2015.
5,945 | 6,379 | Composing graphical models with neural networks
for structured representations and fast inference
Matthew James Johnson
Harvard University
[email protected]
Alexander B. Wiltschko
Harvard University, Twitter
[email protected]
David Duvenaud
Harvard University
[email protected]
Sandeep R. Datta
Harvard Medical School
[email protected]
Ryan P. Adams
Harvard University, Twitter
[email protected]
Abstract
We propose a general modeling and inference framework that combines the complementary strengths of probabilistic graphical models and deep learning methods.
Our model family composes latent graphical models with neural network observation likelihoods. For inference, we use recognition networks to produce local
evidence potentials, then combine them with the model distribution using efficient
message-passing algorithms. All components are trained simultaneously with a
single stochastic variational inference objective. We illustrate this framework by
automatically segmenting and categorizing mouse behavior from raw depth video,
and demonstrate several other example models.
1 Introduction
Modeling often has two goals: first, to learn a flexible representation of complex high-dimensional
data, such as images or speech recordings, and second, to find structure that is interpretable and
generalizes to new tasks. Probabilistic graphical models [1, 2] provide many tools to build structured
representations, but often make rigid assumptions and may require significant feature engineering.
Alternatively, deep learning methods allow flexible data representations to be learned automatically,
but may not directly encode interpretable or tractable probabilistic structure. Here we develop a
general modeling and inference framework that combines these complementary strengths.
Consider learning a generative model for video of a mouse. Learning interpretable representations
for such data, and comparing them as the animal's genes are edited or its brain chemistry altered,
gives useful behavioral phenotyping tools for neuroscience and for high-throughput drug discovery
[3]. Even though each image is encoded by hundreds of pixels, the data lie near a low-dimensional
nonlinear manifold. A useful generative model must not only learn this manifold but also provide
an interpretable representation of the mouse?s behavioral dynamics. A natural representation from
ethology [3] is that the mouse's behavior is divided into brief, reused actions, such as darts, rears,
and grooming bouts. Therefore an appropriate model might switch between discrete states, with
each state representing the dynamics of a particular action. These two learning tasks ? identifying
an image manifold and a structured dynamics model ? are complementary: we want to learn the
image manifold in terms of coordinates in which the structured dynamics fit well. A similar challenge
arises in speech [4], where high-dimensional spectrographic data lie near a low-dimensional manifold
because they are generated by a physical system with relatively few degrees of freedom [5] but also
include the discrete latent dynamical structure of phonemes, words, and grammar [6].
To address these challenges, we propose a new framework to design and learn models that couple
nonlinear likelihoods with structured latent variable representations. Our approach uses graphical
models for representing structured probability distributions while enabling fast exact inference
subroutines, and uses ideas from variational autoencoders [7, 8] for learning not only the nonlinear
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: Comparison of generative models fit to spiral cluster data: (a) data, (b) GMM, (c) density net (VAE), (d) GMM SVAE. See Section 2.1.]
feature manifold but also bottom-up recognition networks to improve inference. Thus our method
enables the combination of flexible deep learning feature models with structured Bayesian (and
even nonparametric [9]) priors. Our approach yields a single variational inference objective in
which all components of the model are learned simultaneously. Furthermore, we develop a scalable
fitting algorithm that combines several advances in efficient inference, including stochastic variational
inference [10], graphical model message passing [1], and backpropagation with the reparameterization
trick [7]. Thus our algorithm can leverage conjugate exponential family structure where it exists to
efficiently compute natural gradients with respect to some variational parameters, enabling effective
second-order optimization [11], while using backpropagation to compute gradients with respect to all
other parameters. We refer to our general approach as the structured variational autoencoder (SVAE).
2
Latent graphical models with neural net observations
In this paper we propose a broad family of models. Here we develop three specific examples.
2.1
Warped mixtures for arbitrary cluster shapes
One particularly natural structure used frequently in graphical models is the discrete mixture model.
By fitting a discrete mixture model to data, we can discover natural clusters or units. These discrete
structures are difficult to represent directly in neural network models.
Consider the problem of modeling the data $y = \{y_n\}_{n=1}^N$ shown in Fig. 1a. A standard approach to
finding the clusters in data is to fit a Gaussian mixture model (GMM) with a conjugate prior:
$$\pi \sim \mathrm{Dir}(\alpha), \quad (\mu_k, \Sigma_k) \overset{\text{iid}}{\sim} \mathrm{NIW}(\lambda), \quad z_n \mid \pi \overset{\text{iid}}{\sim} \pi, \quad y_n \mid z_n, \{(\mu_k, \Sigma_k)\}_{k=1}^K \sim \mathcal{N}(\mu_{z_n}, \Sigma_{z_n}).$$
However, the fit GMM does not represent the natural clustering of the data (Fig. 1b). Its inflexible
Gaussian observation model limits its ability to parsimoniously fit the data and their natural semantics.
Instead of using a GMM, a more flexible alternative would be a neural network density model:
$$\gamma \sim p(\gamma), \qquad x_n \overset{\text{iid}}{\sim} \mathcal{N}(0, I), \qquad y_n \mid x_n, \gamma \overset{\text{iid}}{\sim} \mathcal{N}(\mu(x_n; \gamma), \Sigma(x_n; \gamma)), \qquad (1)$$
where $\mu(x_n; \gamma)$ and $\Sigma(x_n; \gamma)$ depend on $x_n$ through some smooth parametric function, such as a
multilayer perceptron (MLP), and where $p(\gamma)$ is a Gaussian prior [12]. This model fits the data
density well (Fig. 1c) but does not explicitly represent discrete mixture components, which might
provide insights into the data or natural units for generalization. See Fig. 2a for a graphical model.
By composing a latent GMM with nonlinear observations, we can combine the modeling strengths of
both [13], learning both discrete clusters along with non-Gaussian cluster shapes:
$$\pi \sim \mathrm{Dir}(\alpha), \quad (\mu_k, \Sigma_k) \overset{\text{iid}}{\sim} \mathrm{NIW}(\lambda), \quad \gamma \sim p(\gamma),$$
$$z_n \mid \pi \overset{\text{iid}}{\sim} \pi, \quad x_n \sim \mathcal{N}(\mu^{(z_n)}, \Sigma^{(z_n)}), \quad y_n \mid x_n, \gamma \sim \mathcal{N}(\mu(x_n; \gamma), \Sigma(x_n; \gamma)).$$
This combination of flexibility and structure is shown in Fig. 1d. See Fig. 2b for a graphical model.
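To make the generative process concrete, here is a minimal numpy sketch of ancestral sampling from this latent GMM with neural net observations. The two-layer tanh decoder and the Gaussian stand-ins for the NIW draws are illustrative assumptions, not the parameterization used in the paper.

```python
import numpy as np

def sample_latent_gmm_svae(n, K, m, p, rng=np.random.default_rng(0)):
    """Ancestral sampling: pi -> (mu_k, Sigma_k) -> z_n -> x_n -> y_n."""
    pi = rng.dirichlet(np.ones(K))                     # pi ~ Dir(alpha)
    mus = rng.normal(0.0, 2.0, size=(K, m))            # stand-in for NIW cluster means
    scales = 0.5 * np.ones((K, m))                     # diagonal cluster scales (assumed)
    W1 = rng.normal(0.0, 1.0, size=(8, m))             # decoder weights, i.e. gamma
    W2 = rng.normal(0.0, 1.0, size=(p, 8))
    z = rng.choice(K, size=n, p=pi)                    # z_n | pi ~ pi
    x = mus[z] + scales[z] * rng.normal(size=(n, m))   # x_n ~ N(mu^(z_n), Sigma^(z_n))
    mu_y = np.tanh(x @ W1.T) @ W2.T                    # mu(x_n; gamma) via a small MLP
    y = mu_y + 0.1 * rng.normal(size=(n, p))           # fixed observation noise for simplicity
    return z, x, y

z, x, y = sample_latent_gmm_svae(n=500, K=3, m=2, p=10)
```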
2.2
Latent linear dynamical systems for modeling video
Now we consider a harder problem: generatively modeling video. Since a video is a sequence of
image frames, a natural place to start is with a model for images. Kingma et al. [7] shows that the
[Figure 2: Generative graphical models discussed in Section 2: (a) latent Gaussian, (b) latent GMM, (c) latent LDS, (d) latent SLDS.]
density network of Eq. (1) can accurately represent a dataset of high-dimensional images $\{y_n\}_{n=1}^N$ in
terms of the low-dimensional latent variables $\{x_n\}_{n=1}^N$, each with independent Gaussian distributions.
To extend this image model into a model for videos, we can introduce dependence through time
between the latent Gaussian samples $\{x_n\}_{n=1}^N$. For instance, we can make each latent variable $x_{n+1}$
depend on the previous latent variable xn through a Gaussian linear dynamical system, writing
$$x_{n+1} = A x_n + B u_n, \qquad u_n \overset{\text{iid}}{\sim} \mathcal{N}(0, I), \qquad A, B \in \mathbb{R}^{m \times m},$$
where the matrices A and B have a conjugate prior. This model has low-dimensional latent states and
dynamics as well as a rich nonlinear generative model of images. In addition, the timescales of the
dynamics are represented directly in the eigenvalue spectrum of A, providing both interpretability
and a natural way to encode prior information. See Fig. 2c for a graphical model.
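As a concrete illustration, here is a minimal numpy rollout of these latent linear dynamics; the particular A and B below are hypothetical, chosen so the eigenvalues of A lie inside the unit circle (stable, slowly decaying dynamics).

```python
import numpy as np

def sample_lds(A, B, N, rng=np.random.default_rng(0)):
    """Roll out x_{n+1} = A x_n + B u_n with u_n ~ N(0, I)."""
    m = A.shape[0]
    x = np.zeros((N, m))
    for n in range(N - 1):
        x[n + 1] = A @ x[n] + B @ rng.normal(size=m)
    return x

A = np.array([[0.99, 0.10],
              [0.00, 0.99]])        # timescales live in the eigenvalues of A
x = sample_lds(A, B=0.1 * np.eye(2), N=100)
```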
2.3
Latent switching linear dynamical systems for parsing behavior from video
As a final example that combines both time series structure and discrete latent units, consider again
the behavioral phenotyping problem described in Section 1. Drawing on graphical modeling tools,
we can construct a latent switching linear dynamical system (SLDS) [14] to represent the data in
terms of continuous latent states that evolve according to a discrete library of linear dynamics, and
drawing on deep learning methods we can generate video frames with a neural network image model.
At each time $n \in \{1, 2, \ldots, N\}$ there is a discrete-valued latent state $z_n \in \{1, 2, \ldots, K\}$ that evolves
according to Markovian dynamics. The discrete state indexes a set of linear dynamical parameters,
and the continuous-valued latent state $x_n \in \mathbb{R}^m$ evolves according to the corresponding dynamics,
$$z_{n+1} \mid z_n, \pi \sim \pi_{z_n}, \qquad x_{n+1} = A_{z_n} x_n + B_{z_n} u_n, \qquad u_n \overset{\text{iid}}{\sim} \mathcal{N}(0, I),$$
where $\pi = \{\pi_k\}_{k=1}^K$ denotes the Markov transition matrix and $\pi_k \in \mathbb{R}_+^K$ is its $k$th row. We use the
same neural net observation model as in Section 2.2. This SLDS model combines both continuous
and discrete latent variables with rich nonlinear observations. See Fig. 2d for a graphical model.
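A sketch of the corresponding SLDS sampler, extending the LDS rollout above with a Markov switch over the library of linear dynamics; the transition matrix and the per-state (A_k, B_k) are left to the caller.

```python
import numpy as np

def sample_slds(trans, As, Bs, N, rng=np.random.default_rng(0)):
    """Sample z_{n+1} | z_n ~ pi_{z_n} and x_{n+1} = A_{z_n} x_n + B_{z_n} u_n."""
    K, m = len(As), As[0].shape[0]
    z = np.zeros(N, dtype=int)
    x = np.zeros((N, m))
    for n in range(N - 1):
        z[n + 1] = rng.choice(K, p=trans[z[n]])        # row z_n of the transition matrix
        x[n + 1] = As[z[n]] @ x[n] + Bs[z[n]] @ rng.normal(size=m)
    return z, x
```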
3
Structured mean field inference and recognition networks
Why aren't such rich hybrid models used more frequently? The main difficulty with combining rich
latent variable structure and flexible likelihoods is inference. The most efficient inference algorithms
used in graphical models, like structured mean field and message passing, depend on conjugate
exponential family likelihoods to preserve tractable structure. When the observations are more
general, like neural network models, inference must either fall back to general algorithms that do not
exploit the model structure or else rely on bespoke algorithms developed for one model at a time.
In this section, we review inference ideas from conjugate exponential family probabilistic graphical
models and variational autoencoders, which we combine and generalize in the next section.
3.1
Inference in graphical models with conjugacy structure
Graphical models and exponential families provide many algorithmic tools for efficient inference [15].
Given an exponential family latent variable model, when the observation model is a conjugate
exponential family, the conditional distributions stay in the same exponential families as in the prior
and hence allow for the same efficient inference algorithms.
[Figure 3: Variational families and recognition networks for the VAE [7] and three SVAE examples: (a) VAE, (b) GMM SVAE, (c) LDS SVAE, (d) SLDS SVAE.]
For example, consider learning a Gaussian linear dynamical system model with linear Gaussian
observations. The generative model for latent states $x = \{x_n\}_{n=1}^N$ and observations $y = \{y_n\}_{n=1}^N$ is
$$x_n = A x_{n-1} + B u_{n-1}, \quad u_n \overset{\text{iid}}{\sim} \mathcal{N}(0, I), \qquad y_n = C x_n + D v_n, \quad v_n \overset{\text{iid}}{\sim} \mathcal{N}(0, I),$$
given parameters $\theta = (A, B, C, D)$ with a conjugate prior $p(\theta)$. To approximate the posterior $p(\theta, x \mid y)$, consider the mean field family $q(\theta)q(x)$ and the variational inference objective
$$\mathcal{L}[\, q(\theta)q(x) \,] = \mathbb{E}_{q(\theta)q(x)} \log \frac{p(\theta)\, p(x \mid \theta)\, p(y \mid x, \theta)}{q(\theta)q(x)}, \qquad (2)$$
where we can optimize the variational family $q(\theta)q(x)$ to approximate the posterior $p(\theta, x \mid y)$ by
maximizing Eq. (2). Because the observation model $p(y \mid x, \theta)$ is conjugate to the latent variable
model $p(x \mid \theta)$, for any fixed $q(\theta)$ the optimal factor $q^*(x) \triangleq \arg\max_{q(x)} \mathcal{L}[\, q(\theta)q(x) \,]$ is itself a
Gaussian linear dynamical system with parameters that are simple functions of the expected statistics
of $q(\theta)$ and the data $y$. As a result, for fixed $q(\theta)$ we can easily compute $q^*(x)$ and use message
passing algorithms to perform exact inference in it. However, when the observation model is not
conjugate to the latent variable model, these algorithmically exploitable structures break down.
3.2
Recognition networks in variational autoencoders
The variational autoencoder (VAE) [7] handles general non-conjugate observation models by introducing recognition networks. For example, when a Gaussian latent variable model $p(x)$ is paired with
a general nonlinear observation model $p(y \mid x, \gamma)$, the posterior $p(x \mid y, \gamma)$ is non-Gaussian, and it is
difficult to compute an optimal Gaussian approximation. The VAE instead learns to directly output a
suboptimal Gaussian factor $q(x \mid y)$ by fitting a parametric map from data $y$ to a mean and covariance,
$\mu(y; \phi)$ and $\Sigma(y; \phi)$, such as an MLP with parameters $\phi$. By optimizing over $\phi$, the VAE effectively
learns how to condition on non-conjugate observations $y$ and produce a good approximating factor.
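A minimal sketch of such a recognition network: one hidden layer mapping y to the mean and log-variance of a diagonal Gaussian q(x | y). The flat weight list phi is a placeholder for whatever parameterization is actually used.

```python
import numpy as np

def recognition_net(y, phi):
    """Map data y to the Gaussian factor q(x|y) = N(mu(y; phi), diag(sigma2(y; phi)))."""
    W1, b1, W2, b2, W3, b3 = phi
    h = np.tanh(y @ W1 + b1)
    mu = h @ W2 + b2                 # mean mu(y; phi)
    sigma2 = np.exp(h @ W3 + b3)     # parameterize log-variance so sigma2 stays positive
    return mu, sigma2
```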
4
Structured variational autoencoders
We can combine the tractability of conjugate graphical model inference with the flexibility of
variational autoencoders. The main idea is to use a conditional random field (CRF) variational family.
We learn recognition networks that output conjugate graphical model potentials instead of outputting
the complete variational distribution's parameters directly. These potentials are then used in graphical
model inference algorithms in place of the non-conjugate observation likelihoods.
The SVAE algorithm computes stochastic gradients of a mean field variational inference objective.
It can be viewed as a generalization both of the natural gradient SVI algorithm for conditionally
conjugate models [10] and of the AEVB algorithm for variational autoencoders [7]. Intuitively,
it proceeds by sampling a data minibatch, applying the recognition model to compute graphical
model potentials, and using graphical model inference algorithms to compute the variational factor,
combining the evidence from the potentials with the prior structure in the model. This variational
factor is then used to compute gradients of the mean field objective. See Fig. 3 for graphical models
of the variational families with recognition networks for the models developed in Section 2.
In this section, we outline the SVAE model class more formally, write the mean field variational
inference objective, and show how to efficiently compute unbiased stochastic estimates of its gradients.
The resulting algorithm for computing gradients of the mean field objective, shown in Algorithm 1, is
Algorithm 1 Estimate SVAE lower bound and its gradients
Input: variational parameters $(\eta_\theta, \eta_\gamma, \phi)$, data sample $y$
function SVAEGRADIENTS($\eta_\theta, \eta_\gamma, \phi, y$)
    $\psi \leftarrow r(y; \phi)$  ▷ Get evidence potentials
    $(\hat{x}, \bar{t}_x, \mathrm{KL}^{\mathrm{local}}) \leftarrow$ PGMINFERENCE($\eta_\theta, \psi$)  ▷ Combine evidence with prior
    $\hat{\gamma} \sim q(\gamma)$  ▷ Sample observation parameters
    $\hat{\mathcal{L}} \leftarrow N \log p(y \mid \hat{x}, \hat{\gamma}) - N\,\mathrm{KL}^{\mathrm{local}} - \mathrm{KL}(q(\theta)q(\gamma) \,\|\, p(\theta)p(\gamma))$  ▷ Estimate variational bound
    $\tilde{\nabla}_{\eta_\theta} \mathcal{L} \leftarrow \eta_\theta^0 - \eta_\theta + N(\bar{t}_x, 1) + N(\nabla_{\bar{t}_x} \log p(y \mid \hat{x}, \hat{\gamma}), 0)$  ▷ Compute natural gradient
    return lower bound $\hat{\mathcal{L}}$, natural gradient $\tilde{\nabla}_{\eta_\theta} \mathcal{L}$, gradients $\nabla_{\eta_\gamma, \phi} \hat{\mathcal{L}}$

function PGMINFERENCE($\eta_\theta, \psi$)
    $q^*(x) \leftarrow$ OPTIMIZELOCALFACTORS($\eta_\theta, \psi$)  ▷ Fast message-passing inference
    return sample $\hat{x} \sim q^*(x)$, statistics $\mathbb{E}_{q^*(x)} t_x(x)$, divergence $\mathbb{E}_{q(\theta)} \mathrm{KL}(q^*(x) \,\|\, p(x \mid \theta))$
simple and efficient and can be readily applied to a variety of learning problems and graphical model
structures. See the supplementals for details and proofs.
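The conjugate combination inside PGMINFERENCE is easiest to see in its simplest instance: a scalar Gaussian latent whose prior is multiplied by a Gaussian evidence potential from the recognition network. The optimal factor is then a precision-weighted average, available in closed form. This is a minimal sketch of that one step, not the general message-passing routine.

```python
import numpy as np

def combine_gaussian_potentials(prior_mu, prior_var, r_mu, r_var):
    """q*(x) proportional to N(x; prior_mu, prior_var) * exp{psi(x)}, where psi is a
    Gaussian evidence potential (here given in mean/variance form): precisions add."""
    prec = 1.0 / prior_var + 1.0 / r_var
    mu = (prior_mu / prior_var + r_mu / r_var) / prec
    return mu, 1.0 / prec

mu_q, var_q = combine_gaussian_potentials(0.0, 1.0, r_mu=0.8, r_var=0.25)
```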
4.1
SVAE model class
To set up notation for a general SVAE, we first define a conjugate pair of exponential family densities
on global latent variables $\theta$ and local latent variables $x = \{x_n\}_{n=1}^N$. Let $p(x \mid \theta)$ be an exponential
family and let $p(\theta)$ be its corresponding natural exponential family conjugate prior, writing
$$p(\theta) = \exp\left\{ \langle \eta_\theta^0, t_\theta(\theta) \rangle - \log Z_\theta(\eta_\theta^0) \right\},$$
$$p(x \mid \theta) = \exp\left\{ \langle \eta_x^0(\theta), t_x(x) \rangle - \log Z_x(\eta_x^0(\theta)) \right\} = \exp\left\{ \langle t_\theta(\theta), (t_x(x), 1) \rangle \right\},$$
where we used exponential family conjugacy to write $t_\theta(\theta) = \left( \eta_x^0(\theta), -\log Z_x(\eta_x^0(\theta)) \right)$. The local
latent variables x could have additional structure, like including both discrete and continuous latent
variables or tractable graph structure, but here we keep the notation simple.
Next, we define a general likelihood function. Let $p(y \mid x, \gamma)$ be a general family of densities and
let $p(\gamma)$ be an exponential family prior on its parameters. For example, each observation $y_n$ may
depend on the latent value $x_n$ through an MLP, as in the density network model of Section 2.
This generic non-conjugate observation model provides modeling flexibility, yet the SVAE can still
leverage conjugate exponential family structure in inference, as we show next.
4.2
Stochastic variational inference algorithm
Though the general observation model $p(y \mid x, \gamma)$ means that conjugate updates and natural gradient
SVI [10] cannot be directly applied, we show that by generalizing the recognition network idea we
can still approximately optimize out the local variational factors leveraging conjugacy structure.
For fixed $y$, consider the mean field family $q(\theta)q(\gamma)q(x)$ and the variational inference objective
$$\mathcal{L}[\, q(\theta)q(\gamma)q(x) \,] \triangleq \mathbb{E}_{q(\theta)q(\gamma)q(x)} \log \frac{p(\theta)\,p(\gamma)\,p(x \mid \theta)\,p(y \mid x, \gamma)}{q(\theta)q(\gamma)q(x)}. \qquad (3)$$
Without loss of generality we can take the global factor $q(\theta)$ to be in the same exponential family
as the prior $p(\theta)$, and we denote its natural parameters by $\eta_\theta$. We restrict $q(\gamma)$ to be in the same
exponential family as $p(\gamma)$ with natural parameters $\eta_\gamma$. Finally, we restrict $q(x)$ to be in the same
exponential family as $p(x \mid \theta)$, writing its natural parameter as $\eta_x$. Using these explicit variational
parameters, we write the mean field variational inference objective in Eq. (3) as $\mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x)$.
To perform efficient optimization of the objective $\mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x)$, we consider choosing the variational
parameter $\eta_x$ as a function of the other parameters $\eta_\theta$ and $\eta_\gamma$. One natural choice is to set $\eta_x$ to be a
local partial optimizer of $\mathcal{L}$. However, without conjugacy structure finding a local partial optimizer
may be computationally expensive for general densities $p(y \mid x, \gamma)$, and in the large data setting this
expensive optimization would have to be performed for each stochastic gradient update. Instead, we
choose $\eta_x$ by optimizing over a surrogate objective $\widehat{\mathcal{L}}$ with conjugacy structure, given by
$$\widehat{\mathcal{L}}(\eta_\theta, \eta_x, \phi) \triangleq \mathbb{E}_{q(\theta)q(x)} \log \frac{p(\theta)\,p(x \mid \theta)\,\exp\{\psi(x;\, y, \phi)\}}{q(\theta)q(x)}, \qquad \psi(x;\, y, \phi) \triangleq \langle r(y; \phi),\, t_x(x) \rangle,$$
where $\{r(y; \phi)\}_{\phi \in \mathbb{R}^m}$ is some parameterized class of functions that serves as the recognition model.
Note that the potentials $\psi(x; y, \phi)$ have a form conjugate to the exponential family $p(x \mid \theta)$. We
define $\eta_x^*(\eta_\theta, \phi)$ to be a local partial optimizer of $\widehat{\mathcal{L}}$ along with the corresponding factor $q^*(x)$,
$$\eta_x^*(\eta_\theta, \phi) \triangleq \arg\min_{\eta_x} \widehat{\mathcal{L}}(\eta_\theta, \eta_x, \phi), \qquad q^*(x) = \exp\left\{ \langle \eta_x^*(\eta_\theta, \phi), t_x(x) \rangle - \log Z_x(\eta_x^*(\eta_\theta, \phi)) \right\}.$$
As with the variational autoencoder of Section 3.2, the resulting variational factor $q^*(x)$ is suboptimal
for the variational objective $\mathcal{L}$. However, because the surrogate objective has the same form as a
variational inference objective for a conjugate observation model, the factor $q^*(x)$ not only is easy to
compute but also inherits exponential family and graphical model structure for tractable inference.
Given this choice of $\eta_x^*(\eta_\theta, \phi)$, the SVAE objective is $\mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi) \triangleq \mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x^*(\eta_\theta, \phi))$. This
objective is a lower bound for the variational inference objective Eq. (3) in the following sense.
Proposition 4.1 (The SVAE objective lower-bounds the mean field objective)
The SVAE objective function $\mathcal{L}_{\mathrm{SVAE}}$ lower-bounds the mean field objective $\mathcal{L}$ in the sense that
$$\max_{q(x)} \mathcal{L}[\, q(\theta)q(\gamma)q(x) \,] \;\geq\; \max_{\eta_x} \mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x) \;\geq\; \mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi) \qquad \forall \phi \in \mathbb{R}^m,$$
for any parameterized function class $\{r(y; \phi)\}_{\phi \in \mathbb{R}^m}$. Furthermore, if there is some $\phi^* \in \mathbb{R}^m$ such
that $\psi(x; y, \phi^*) = \mathbb{E}_{q(\gamma)} \log p(y \mid x, \gamma)$, then the bound can be made tight in the sense that
$$\max_{q(x)} \mathcal{L}[\, q(\theta)q(\gamma)q(x) \,] = \max_{\eta_x} \mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x) = \max_{\phi} \mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi).$$
Thus by using gradient-based optimization to maximize $\mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi)$ we are maximizing a lower
bound on the model log evidence $\log p(y)$. In particular, by optimizing over $\phi$ we are effectively
learning how to condition on observations so as to best approximate the posterior while maintaining
conjugacy structure. Furthermore, to provide the best lower bound we may choose the recognition
model function class $\{r(y; \phi)\}_{\phi \in \mathbb{R}^m}$ to be as rich as possible.
Choosing $\eta_x^*(\eta_\theta, \phi)$ to be a local partial optimizer of $\widehat{\mathcal{L}}$ provides two computational advantages. First,
it allows $\eta_x^*(\eta_\theta, \phi)$ and expectations with respect to $q^*(x)$ to be computed efficiently by exploiting
exponential family graphical model structure. Second, it provides a simple expression for an unbiased
estimate of the natural gradient with respect to the latent model parameters, as we summarize next.
Proposition 4.2 (Natural gradient of the SVAE objective)
The natural gradient of the SVAE objective $\mathcal{L}_{\mathrm{SVAE}}$ with respect to $\eta_\theta$ is
$$\tilde{\nabla}_{\eta_\theta} \mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi) = \eta_\theta^0 + \mathbb{E}_{q^*(x)}[(t_x(x), 1)] - \eta_\theta + \left( \nabla_{\eta_x} \mathcal{L}(\eta_\theta, \eta_\gamma, \eta_x^*(\eta_\theta, \phi)),\, 0 \right). \qquad (4)$$
Note that the first term in Eq. (4) is the same as the expression for the natural gradient in SVI for
conjugate models [10], while a stochastic estimate of the second term is computed automatically
as part of the backward pass for computing the gradients with respect to the other parameters, as
described next. Thus we have an expression for the natural gradient with respect to the latent model's
parameters that is almost as simple as the one for conjugate models and just as easy to compute.
Natural gradients are invariant to smooth invertible reparameterizations of the variational family [16,
17] and provide effective second-order optimization updates [18, 11].
The gradients of the objective with respect to the other variational parameters, namely
$\nabla_{\eta_\gamma} \mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi)$ and $\nabla_{\phi} \mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi)$, can be computed using the reparameterization trick.
To isolate the terms that require the reparameterization trick, we rearrange the objective as
$$\mathcal{L}_{\mathrm{SVAE}}(\eta_\theta, \eta_\gamma, \phi) = \mathbb{E}_{q(\gamma)q^*(x)} \log p(y \mid x, \gamma) - \mathrm{KL}(q(\theta)q^*(x) \,\|\, p(\theta, x)) - \mathrm{KL}(q(\gamma) \,\|\, p(\gamma)).$$
The KL divergence terms are between members of the same tractable exponential families. An
unbiased estimate of the first term can be computed by sampling $\hat{x} \sim q^*(x)$ and $\hat{\gamma} \sim q(\gamma)$ and
computing $\nabla_{\eta_\gamma, \phi} \log p(y \mid \hat{x}, \hat{\gamma})$ with automatic differentiation. Note that the second term in Eq. (4)
is automatically computed as part of the chain rule in computing $\nabla_{\phi} \log p(y \mid \hat{x}, \hat{\gamma})$.
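The reparameterization step itself is one line of code; a minimal numpy sketch, with an autodiff framework assumed to supply the actual gradients through mu and sigma.

```python
import numpy as np

def reparam_sample(mu, sigma, rng=np.random.default_rng(0)):
    """Draw x ~ N(mu, sigma^2) as a deterministic, differentiable function of (mu, sigma)."""
    eps = rng.normal(size=np.shape(mu))
    return mu + sigma * eps          # gradients w.r.t. mu and sigma flow through eps
```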
5
Related work
In addition to the papers already referenced, there are several recent papers to which this work is
related. The two papers closest to this work are Krishnan et al. [19] and Archer et al. [20].
[Figure 4: Predictions from an LDS SVAE fit to 1D dot image data at two stages of training: (a) predictions after 200 training steps; (b) predictions after 1100 training steps. The top panel shows an example sequence with time on the horizontal axis. The middle panel shows the noiseless predictions given data up to the vertical line, while the bottom panel shows the latent states.]
[Figure 5: Experimental results from LDS SVAE models on synthetic data and real mouse data: (a) natural (blue) and standard (orange) gradient updates, plotted as the bound versus iteration; (b) subspace of learned observation model.]
In Krishnan et al. [19] the authors consider combining variational autoencoders with continuous
state-space models, emphasizing the relationship to linear dynamical systems (also called Kalman
filter models). They primarily focus on nonlinear dynamics and an RNN-based variational family, as
well as allowing control inputs. However, the approach does not extend to general graphical models
or discrete latent variables. It also does not leverage natural gradients or exact inference subroutines.
In Archer et al. [20] the authors also consider the problem of variational inference in general
continuous state space models but focus on using a structured Gaussian variational family without
considering parameter learning. As with Krishnan et al. [19], this approach does not include discrete
latent variables (or any latent variables other than the continuous states). However, the method they
develop could be used with an SVAE to handle inference with nonlinear dynamics.
In addition, both Gregor et al. [21] and Chung et al. [22] extend the variational autoencoder framework
to sequential models, though they focus on RNNs rather than probabilistic graphical models.
6
Experiments
We apply the SVAE to both synthetic and real data and demonstrate its ability to learn feature
representations and latent structure. Code is available at github.com/mattjj/svae.
6.1
LDS SVAE for modeling synthetic data
Consider a sequence of 1D images representing a dot bouncing from one side of the image to the
other, as shown at the top of Fig. 4. We use an LDS SVAE to find a low-dimensional latent state
space representation along with a nonlinear image model. The model is able to represent the image
accurately and to make long-term predictions with uncertainty. See supplementals for details.
This experiment also demonstrates the optimization advantages that can be provided by the natural
gradient updates. In Fig. 5a we compare natural gradient updates with standard gradient updates at
three different learning rates. The natural gradient algorithm not only learns much faster but also
is less dependent on parameterization details: while the natural gradient update used an untuned
stepsize of 0.1, the standard gradient dynamics at step sizes of both 0.1 and 0.05 resulted in some
matrix parameters being updated to indefinite values.
[Figure 6: Predictions from an LDS SVAE fit to depth video. In each panel, the top is a sampled prediction and the bottom is real data. The model is conditioned on observations to the left of the line.]
[Figure 7: Examples of behavior states inferred from depth video: (a) extension into running; (b) fall from rear. Each frame sequence is padded on both sides, with a square in the lower-right of a frame depicting when the state is the most probable.]
6.2
LDS SVAE for modeling video
We also apply an LDS SVAE to model depth video recordings of mouse behavior. We use the dataset
from Wiltschko et al. [3] in which a mouse is recorded from above using a Microsoft Kinect. We
used a subset consisting of 8 recordings, each of a distinct mouse, 20 minutes long at 30 frames per
second, for a total of 288000 video frames downsampled to 30 × 30 pixels.
We use MLP observation and recognition models with two hidden layers of 200 units each and a 10D
latent space. Fig. 5b shows images corresponding to a regular grid on a random 2D subspace of the
latent space, illustrating that the learned image manifold accurately captures smooth variation in the
mouse's body pose. Fig. 6 shows predictions from the model paired with real data.
6.3
SLDS SVAE for parsing behavior
Finally, because the LDS SVAE can accurately represent the depth video over short timescales, we
apply the latent switching linear dynamical system (SLDS) model to discover the natural units of
behavior. Fig. 7 shows some of the discrete states that arise from fitting an SLDS SVAE with 30
discrete states to the depth video data. The discrete states that emerge show a natural clustering of
short-timescale patterns into behavioral units. See the supplementals for more.
7
Conclusion
Structured variational autoencoders provide a general framework that combines some of the strengths
of probabilistic graphical models and deep learning methods. In particular, they use graphical models
both to give models rich latent representations and to enable fast variational inference with CRF
structured approximating distributions. To complement these structured representations, SVAEs use
neural networks to produce not only flexible nonlinear observation models but also fast recognition
networks that map observations to conjugate graphical model potentials.
References
[1] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[2] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[3] Alexander B. Wiltschko, Matthew J. Johnson, Giuliano Iurilli, Ralph E. Peterson, Jesse M. Katon, Stan L. Pashkovski, Victoria E. Abraira, Ryan P. Adams, and Sandeep Robert Datta. "Mapping Sub-Second Structure in Mouse Behavior". In: Neuron 88.6 (2015), pp. 1121–1135.
[4] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups". In: IEEE Signal Processing Magazine 29.6 (2012), pp. 82–97.
[5] Li Deng. "Computational models for speech production". In: Computational Models of Speech Pattern Processing. Springer, 1999, pp. 199–213.
[6] Li Deng. "Switching dynamic system models for speech articulation and acoustics". In: Mathematical Foundations of Speech and Language Processing. Springer, 2004, pp. 115–133.
[7] Diederik P. Kingma and Max Welling. "Auto-Encoding Variational Bayes". In: International Conference on Learning Representations (2014).
[8] Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. "Stochastic Backpropagation and Approximate Inference in Deep Generative Models". In: Proceedings of the 31st International Conference on Machine Learning. 2014, pp. 1278–1286.
[9] Matthew J. Johnson and Alan S. Willsky. "Stochastic Variational Inference for Bayesian Time Series Models". In: International Conference on Machine Learning. 2014.
[10] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. "Stochastic variational inference". In: Journal of Machine Learning Research (2013).
[11] James Martens. "New insights and perspectives on the natural gradient method". In: arXiv preprint arXiv:1412.1193 (2015).
[12] David J. C. MacKay and Mark N. Gibbs. "Density networks". In: Statistics and Neural Networks: Advances at the Interface. Oxford University Press, Oxford (1999), pp. 129–144.
[13] Tomoharu Iwata, David Duvenaud, and Zoubin Ghahramani. "Warped Mixtures for Nonparametric Cluster Shapes". In: 29th Conference on Uncertainty in Artificial Intelligence. 2013, pp. 311–319.
[14] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. "Bayesian Nonparametric Inference of Switching Dynamic Linear Models". In: IEEE Transactions on Signal Processing 59.4 (2011).
[15] Martin J. Wainwright and Michael I. Jordan. "Graphical Models, Exponential Families, and Variational Inference". In: Foundations and Trends in Machine Learning (2008).
[16] Shun-ichi Amari. "Natural gradient works efficiently in learning". In: Neural Computation 10.2 (1998), pp. 251–276.
[17] Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry. American Mathematical Society, 2007.
[18] James Martens and Roger Grosse. "Optimizing Neural Networks with Kronecker-factored Approximate Curvature". In: arXiv preprint arXiv:1503.05671 (2015).
[19] Rahul G. Krishnan, Uri Shalit, and David Sontag. "Deep Kalman Filters". In: arXiv preprint arXiv:1511.05121 (2015).
[20] Evan Archer, Il Memming Park, Lars Buesing, John Cunningham, and Liam Paninski. "Black box variational inference for state space models". In: arXiv preprint arXiv:1511.07367 (2015).
[21] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. "DRAW: A recurrent neural network for image generation". In: arXiv preprint arXiv:1502.04623 (2015).
[22] Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. "A recurrent latent variable model for sequential data". In: Advances in Neural Information Processing Systems. 2015, pp. 2962–2970.
5,946 | 638 | Network Structuring And Training Using
Rule-based Knowledge
Volker Tresp
Siemens AG
Central Research
Otto-Hahn-Ring 6
8000 München 83, Germany
Jürgen Hollatz*
Institut für Informatik
TU München
Arcisstraße 21
8000 München 2, Germany
Subutai Ahmad
Siemens AG
Central Research
Otto-Hahn-Ring 6
8000 München 83, Germany
Abstract
We demonstrate in this paper how certain forms of rule-based
knowledge can be used to prestructure a neural network of normalized basis functions and give a probabilistic interpretation of
the network architecture. We describe several ways to assure that
rule-based knowledge is preserved during training and present a
method for complexity reduction that tries to minimize the number of rules and the number of conjuncts. After training the refined
rules are extracted and analyzed.
1
INTRODUCTION
Training a network to model a high dimensional input/output mapping with only a
small amount of training data is only possible if the underlying map is of low complexity and the network, therefore, can be of low complexity as well. With increasing
*Mail address: Siemens AG, Central Research, Otto-Hahn-Ring 6, 8000 München 83.
network complexity, parameter variance increases and the network prediction becomes less reliable. This predicament can be solved if we manage to incorporate
prior knowledge to bias the network, as was done by Roscheisen, Hofmann and
Tresp (1992). There, prior knowledge was available in the form of an algorithm
which summarized the engineering knowledge accumulated over many years. Here,
we consider the case that prior knowledge is available in the form of a set of rules
which specify knowledge about the input/output mapping that the network has to
learn. This is a very common occurrence in industrial and medical applications
where rules can be either given by experts or where rules can be extracted from the
existing solution to the problem.
The inclusion of prior knowledge has the additional advantage that if the network
is required to extrapolate into regions of the input space where it has not seen any
training data, it can rely on this prior knowledge. Furthermore, in many on-line
control applications, the network is required to make reasonable predictions right
from the beginning. Before it has seen sufficient training data it has to rely primarily
on prior knowledge.
This situation is also typical for human learning. If we learn a new skill such
as driving a car or riding a bicycle, it would be disastrous to start without prior
knowledge about the problem. Typically, we are told some basic rules, which we
try to follow in the beginning, but which are then refined and altered through
experience. The better our initial knowledge about a problem, the faster we can
achieve good performance and the less training is required (Towel, Shavlik and
Noordewier, 1990).
2
FROM KNOWLEDGE TO NETWORKS
We consider a neural network $y = NN(x)$ which makes a prediction about the state
of $y \in \mathbb{R}$ given the state of its input $x \in \mathbb{R}^n$. We assume that an expert provides
information about the same mapping in terms of a set of rules. The premise of a
rule specifies the conditions on x under which the conclusion can be applied. This
region of the input space is formally described by a basis function bi(x). Instead
of allowing only binary values for a basis function (1: premise is valid, 0: premise
is not valid), we permit continuous positive values which represent the certainty or
weight of a rule given the input.
We assume that the conclusion of the rule can be described in form of a mathematical expression, such as conclusion_i: the output is equal to $w_i(x)$, where $w_i(x)$ is a
function of the input (or a subset of the input) and can be a constant, a polynomial
or even another neural network.
Since several rules can be active for a given state of the input, we define the output
of the network to be a weighted average of the conclusions of the active rules where
the weighting factor is proportional to the activity of the basis function given the
input
(1)
( ) = NN( ) = Li Wj(x) bj(x)
y x
x
Lj bj(x)
.
This is a very general concept since we still have complete freedom to specify the
form of the basis function bi(x) and the conclusion Wj(x). If bi(x) and Wi(X) are
Network Structuring And Training Using Rule-based Knowledge
described by neural networks themselves, there is a close relationship with the
adaptive mixtures of local experts (Jacobs, Jordan, Nowlan and Hinton, 1991). On
the other hand, if we assume that the basis function can be approximated by a
multivariate Gaussian
$$b_i(x) = K_i \exp\left[ -\frac{1}{2} \sum_j \frac{(x_j - \mu_{ij})^2}{\sigma_{ij}^2} \right], \qquad (2)$$
and if the $w_i$ are constants, we obtain the network of normalized basis functions
which were previously described by Moody and Darken (1989) and Specht (1990).
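A minimal numpy sketch of Eqs. (1)-(2): Gaussian basis activations followed by the normalized, conclusion-weighted average. The two rules at the bottom are hypothetical placeholders, and the conclusions w_i are taken to be constants.

```python
import numpy as np

def rule_network(x, mus, sigmas, Ks, ws):
    """y(x) = sum_i w_i b_i(x) / sum_j b_j(x) with Gaussian basis functions."""
    d2 = ((x[None, :] - mus) / sigmas) ** 2         # (num_rules, num_inputs)
    b = Ks * np.exp(-0.5 * d2.sum(axis=1))          # rule activations b_i(x), Eq. (2)
    return float(ws @ b / b.sum())                  # normalized weighted average, Eq. (1)

mus = np.array([[0.0, 0.0], [1.0, 1.0]])            # two hypothetical rule centers
sigmas = np.ones((2, 2))
Ks = np.ones(2)
ws = np.array([-1.0, 1.0])                          # constant conclusions
y = rule_network(np.array([0.2, 0.1]), mus, sigmas, Ks, ws)
```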
In some cases the expert might want to formulate the premise as simple logical
expressions. As an example, the rule
IF [($x_1 \approx a$) AND ($x_4 \approx b$)] OR ($x_2 \approx c$) THEN $y = d \times x_2$
is encoded as
$$\text{premise}_i: \quad b_i(x) = \exp\left[ -\frac{1}{2\sigma^2} \left( (x_1 - a)^2 + (x_4 - b)^2 \right) \right] + \exp\left[ -\frac{1}{2\sigma^2} (x_2 - c)^2 \right],$$
$$\text{conclusion}_i: \quad w_i(x) = d \times x_2.$$
This formulation is related to the fuzzy logic approach of Takagi and Sugeno (1985).
3
PRESERVING THE RULE-BASED KNOWLEDGE
Equation 1 can be implemented as a network of normalized basis functions $NN_{init}$
which describes the rule-based knowledge and which can be used for prediction.
Actual training data can be used to improve network performance. We consider
four different ways to ensure that the expert knowledge is preserved during training.
Forget. We use the data to adapt $NN_{init}$ with gradient descent (we typically adapt
all parameters in the network). The sooner we stop training, the more of the initial
expert knowledge is preserved.
Freeze. We freeze the parameters in the initial network and introduce a new basis
function whenever prediction and data show a large deviation. In this way the
network learns an additive correction to the initial network.
Correct. Whereas normal weight decay penalizes the deviation of a parameter from
zero, we penalize a parameter if it deviates from its initial value $q_j^{init}$:
$$E_p = \frac{\alpha}{2} \sum_j \left( q_j - q_j^{init} \right)^2,$$
where $q_j$ is a generic network parameter.
Internal teacher. We formulate a penalty in terms of the mapping rather than in
terms of the parameters
$$E_p = \frac{\alpha}{2} \int \left( NN_{init}(x) - NN(x) \right)^2 dx. \qquad (3)$$
This has the advantage that we do not have to specify priors on relatively unintuitive network parameters. Instead, the prior directly reflects the certainty that we
associate with the mapping of the initialized network which can often be estimated.
Roscheisen, Hofmann and Tresp (1992) estimated this certainty from problem specific knowledge. We can approximate the integral in Equation 3 numerically by
Monte-Carlo integration which leads to a training procedure where we adapt the
network with a mixture of measured training data and training data artificially generated by $NN_{init}(x)$ at randomly chosen inputs. The mixing proportion directly
relates to the weight of the penalty, $\alpha$ (Roscheisen, Hofmann and Tresp, 1992).
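A sketch of the resulting data-mixing step, assuming a caller-supplied input_sampler that draws the random Monte-Carlo inputs, and treating the fraction of artificial data as the penalty weight alpha.

```python
import numpy as np

def internal_teacher_batch(x_data, y_data, nn_init, frac_artificial, input_sampler,
                           rng=np.random.default_rng(0)):
    """Mix measured training data with targets generated by the initial network
    NN_init at random inputs, approximating the penalty integral by Monte Carlo."""
    n_art = int(frac_artificial * len(x_data))
    x_art = input_sampler(n_art, rng)               # random inputs for the integral
    y_art = np.array([nn_init(xi) for xi in x_art]) # targets from NN_init
    x = np.vstack([x_data, x_art])
    y = np.concatenate([y_data, y_art])
    perm = rng.permutation(len(y))
    return x[perm], y[perm]
```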
4
COMPLEXITY REDUCTION
After training the rules can be extracted again from the network but we have to
ensure that the set of rules is as concise as possible, otherwise the value of the
extracted rules is limited. We would like to find the smallest number of rules that
can still describe the knowledge sufficiently. Also, the network should be encouraged
to find rules with the smallest number of conjuncts, which in this case means that
a basis function is only dependent on a small number of input dimensions.
We suggest the following pruning strategy for Gaussian basis functions.
1. Prune basis functions. Evaluate the relative weight of each basis function at its
center, $w_i = b_i(\mu_i) / \sum_j b_j(\mu_i)$, which is a measure of its importance in the network.
Remove the unit with the smallest $w_i$. Figure 1 illustrates the pruning of basis
functions.
2. Prune conjuncts. Successively, set the largest $\sigma_{ij}$ equal to infinity, effectively
removing input $j$ from basis function $i$.
Sequentially remove basis functions and conjuncts until the error increases above a
threshold. Retrain after a unit or a conjunct is removed.
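A sketch of this greedy pruning loop for the basis-function step, assuming a caller-supplied val_error callable that scores a candidate network; the retraining after each removal is elided.

```python
import numpy as np

def normalized_basis(x, mus, sigmas, Ks):
    b = Ks * np.exp(-0.5 * (((x[None, :] - mus) / sigmas) ** 2).sum(axis=1))
    return b / b.sum()

def prune_basis_functions(mus, sigmas, Ks, ws, val_error, threshold):
    """Repeatedly drop the unit with the smallest relative weight
    w_i = b_i(mu_i) / sum_j b_j(mu_i) until validation error exceeds threshold."""
    while len(Ks) > 1:
        rel = np.array([normalized_basis(mus[i], mus, sigmas, Ks)[i]
                        for i in range(len(Ks))])
        i = int(rel.argmin())
        cand = tuple(np.delete(a, i, axis=0) for a in (mus, sigmas, Ks, ws))
        if val_error(*cand) > threshold:
            break
        mus, sigmas, Ks, ws = cand                  # retraining would happen here
    return mus, sigmas, Ks, ws
```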
5
A PROBABILISTIC INTERPRETATION
One of the advantages of our approach is that there is a probabilistic interpretation
of the system. In addition, if the expert formulates his or her knowledge in terms
of probability distributions then a number of useful properties can be derived (it is
natural here to interpret probability as a subjective degree of belief in an event.).
We assume that the system can be in a number of states $s_i$ which are unobservable.
Formally, each of those hidden states corresponds to a rule. The prior probability
that the system is in state $s_i$ is equal to $P(s_i)$. Assuming that the system is in state
$s_i$ there is a probability distribution $P(x, y \mid s_i)$ that we measure an input vector $x$
and an output $y$, and
$$P(x, y) = \sum_i P(x, y \mid s_i)\, P(s_i). \qquad (4)$$
For every rule the expert specifies the probability distributions in the last sum. Let's
consider the case that $P(x, y \mid s_i) = P(x \mid s_i)\, P(y \mid s_i)$ and that $P(x \mid s_i)$ and $P(y \mid s_i)$
can be approximated by Gaussians. In this case Equation 4 describes a Gaussian
mixture model. For every rule, the expert has to specify
[Figure 1: 80 values of a noisy sinusoid (A) are presented as training data to a
network of 20 (Cauchy) basis functions, $b_i(x) = K_i \left[ 1 + \sum_j (x_j - \mu_{ij})^2 / \sigma_{ij}^2 \right]^{-2}$. (B)
shows how this network also tries to approximate the noise in the data. (D) shows
the basis functions $b_i(x)$ and (F) the normalized basis functions $b_i(x) / \sum_j b_j(x)$.
Pruning reduces the network architecture to 5 units placed at the extrema of the
sinusoid (basis functions: E, normalized basis functions: G). The network output
is shown in (C).]
[Figure 2: Left: the two rectangles indicate centers and standard deviations of two Gaussians that approximate a density. Right: the expected values $E(y \mid x)$ (continuous line) and $E(x \mid y)$ (dotted line).]
• $P(s_i)$, the probability of the occurrence of state $s_i$ (the overall weight of
the rule),
• $P(x \mid s_i) = N_n(x;\, \mu_i, \Sigma_i)$, the probability that an input vector $x$ occurs, given
that the system is in state $s_i$, and
• $P(y \mid s_i) = N_1(y;\, w_i, \sigma_i^2)$, the probability of output $y$ given state $s_i$.
The evidence for a state given an input $x$ becomes
$$P(s_i \mid x) = \frac{P(x \mid s_i)\, P(s_i)}{\sum_j P(x \mid s_j)\, P(s_j)}$$
and the expected value of the output
$$E(y \mid x) = \frac{\sum_i \left[ \int y\, P(y \mid x, s_i)\, dy \right] P(x \mid s_i)\, P(s_i)}{\sum_j P(x \mid s_j)\, P(s_j)}, \qquad (5)$$
where $P(x \mid s_i) = \int P(x, y \mid s_i)\, dy$. If we substitute $b_i(x) = P(x \mid s_i)\, P(s_i)$ and
$w_i(x) = \int y\, P(y \mid x, s_i)\, dy$, we can calculate the expected value of $y$ using the same
architecture as described in Equation 1.
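A minimal numpy sketch of Eq. (5) for diagonal Gaussians with constant per-state output means w_i: the expected output is just the responsibility-weighted average of the w_i.

```python
import numpy as np

def expected_y_given_x(x, priors, mu_x, var_x, w_y):
    """E(y|x) = sum_i w_i b_i(x) / sum_j b_j(x), with b_i(x) = P(x|s_i) P(s_i)."""
    logb = (np.log(priors)
            - 0.5 * (((x[None, :] - mu_x) ** 2 / var_x)
                     + np.log(2 * np.pi * var_x)).sum(axis=1))
    b = np.exp(logb - logb.max())                   # stable up to a common constant
    return float(w_y @ b / b.sum())
```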
Subsequent training data can be employed to improve the model. The likelihood of
the data $\{x^k, y^k\}$ becomes
$$L = \prod_k \sum_i P(x^k, y^k \mid s_i)\, P(s_i),$$
which can be maximized using gradient descent or EM. These adaptation rules are
more complicated than supervised learning since, according to our model, the data
generating process also makes assumptions about the distributions of the data in
the input space.
Equation 4 gives an approximation of the joint probability density of input and
output. Input and output are formally equivalent (Figure 2) and, in the case of
Gaussian mixtures, we can easily calculate the optimal output given just a subset
of inputs (Ahmad and Tresp, 1993).
A number of authors used clustering and Gaussian approximation on the input
space alone and resources were distributed according to the complexity of the input
space. In this method, resources are distributed according to the complexity of both
input and output space.¹
6
CLASSIFICATION
A conclusion now specifies the correct class. Let $\{b_{ik} \mid i = 1 \ldots N_k\}$ denote the set of
basis functions whose conclusion specifies class$_k$. We set $w_{ij}^k = \delta_{kj}$, where $w_{ij}^k$ is the
weight from basis function $b_{ij}$ to the $k$th output and $\delta_{kj}$ is the Kronecker symbol.
The $k$th output of the network
$$y_k(x) = NN_k(x) = \frac{\sum_{ij} w_{ij}^k\, b_{ij}(x)}{\sum_{lm} b_{lm}(x)} = \frac{\sum_i b_{ik}(x)}{\sum_{lm} b_{lm}(x)} \qquad (6)$$
¹Note that a probabilistic interpretation is only possible if the integral over a basis
function is finite, i.e., all variances are finite.
specifies the certainty of class$_k$, given the input. During training, we do not adapt
the output weights $w_{ij}^k$. Therefore, the outputs of the network are always positive
and sum to one.
A probabilistic interpretation can be found if we assume that $P(x \mid \text{class}_k)\, P(\text{class}_k) \approx \sum_i b_{ik}(x)$. We obtain
$$P(\text{class}_k \mid x) = \frac{P(x \mid \text{class}_k)\, P(\text{class}_k)}{\sum_l P(x \mid \text{class}_l)\, P(\text{class}_l)}$$
and recover Equation 6. If the basis functions are Gaussians, again we obtain a
Gaussian mixture learning problem and, as a special case (one unit per class), a
Gaussian classifier.
7
APPLICATIONS
We have validated our approach on a number of applications including a network
that learned how to control a bicycle and an application in the legal sciences (Hollatz
and Tresp, 1992). Here we present results for a well known data set, the Boston
housing data (Breiman et al., 1981), and demonstrate pruning and rule extraction.
The task is to predict the housing price in a Boston neighborhood as a function of
13 potentially relevant input features. We started with 20 Gaussian basis functions
which were adapted using gradient descent. We achieved a generalization error of
0.074. We then pruned units and conjuncts according to the procedure described
in Section 4. We achieved the best generalization error (0.058) using 4 units (this
is approximately 10% better than the result reported for CART in Breiman et al.,
1981). With only two basis functions and 3 conjuncts, we still achieved reasonable
prediction accuracy (generalization error of 0.12; simply predicting the mean results
in a generalization error of 0.28). Table 1 describes the final network. Interestingly,
our network was left with the input features which CART also considered the most
relevant.
The network was trained with normalized inputs. If we translate them back into
real world values, we obtain the rules:
Rule14: IF the number of rooms (RM) is approximately 5.4 (0.62 corresponds to
5.4 rooms which is smaller than the average of 6.3) AND the pupil/teacher value is
approximately 20.2 (0.85 corresponds to 20.2 pupils/teacher which is higher than
the average of 18.4) THEN the value of the home is approximately $14000 (0.528
corresponds to $14000 which is lower than the average of $22500).
Table 1: Network structure after pruning.

Unit i = 14: $K_i = 0.17$, conclusion $w_i = 0.528$
  feature j   center $\mu_{ij}$   width $\sigma_{ij}$   CART rating
  RM          0.62                0.21                  second
  P/T         0.85                0.35                  third

Unit i = 20: $K_i = 0.83$, conclusion $w_i = 1.6$
  feature j   center $\mu_{ij}$   width $\sigma_{ij}$   CART rating
  LSTAT       0.06                0.24                  most important
Rule20: IF the percentage of lower-status population (LSTAT) is approximately
2.5% (0.06 corresponds to 2.5% which is lower than the average of 12.65%), THEN
the value of the home is approximately $34000 (1.6 corresponds to $34000 which is
higher than the average of $22500).
8
CONCLUSION
We demonstrated how rule-based knowledge can be incorporated into the structuring and training of a neural network. Training with experimental data allows for
rule refinement. Rule extraction provides a quantitative interpretation of what is
"going on" in the network, although, in general, it is difficult to define the domain
where a given rule "dominates" the network response and along which boundaries
the rules partition the input space.
Acknowledgements
We acknowledge valuable discussions with Ralph Neuneier and his support in the
Boston housing data application. V.T. was supported in part by a grant from
the Bundesminister für Forschung und Technologie and J. H. by a fellowship from
Siemens AG.
References
S. Ahmad and V. Tresp. Some solutions to the missing feature problem in vision.
This volume, 1993.
L. Breiman et al. Classification and Regression Trees. Wadsworth and Brooks, 1981.
R. A. Jacobs, M. I. Jordan, S. J. Nowlan and G. E. Hinton. Adaptive mixtures of
local experts. Neural Computation, Vol. 3, pp. 79-87, 1991.
J. Hollatz and V. Tresp. A Rule-based network architecture. Artificial Neural
Networks II, I. Aleksander, J. Taylor, eds., Elsevier, Amsterdam, 1992.
J. Moody and C. Darken. Fast learning in networks of locally-tuned processing
units. Neural Computation, Vol. 1, pp. 281-294, 1989.
M. Roscheisen, R. Hofmann and V. Tresp. Neural control for rolling mills: incorporating domain theories to overcome data deficiency. In: Advances in Neural
Information Processing Systems 4, 1992.
D. F. Specht. Probabilistic neural networks. Neural Networks, Vol. 3, pp. 109-117,
1990.
T. Takagi and M. Sugeno. Fuzzy identification of systems and its applications to
modeling and control. IEEE Transactions on Systems, Man and Cybernetics, Vol.
15, No.1, pp. 116-132, 1985.
G. G. Towell, J. W. Shavlik and M. O. Noordewier. Refinement of approximately
correct domain theories by knowledge-based neural networks. In Proceedings of the
Eights National Conference on Artificial Intelligence, pp. 861-866, MA, 1990.
5,947 | 6,380 | Mutual information for symmetric rank-one matrix
estimation: A proof of the replica formula
Jean Barbier, Mohamad Dia and Nicolas Macris
Laboratoire de Théorie des Communications, Faculté Informatique et Communications,
Ecole Polytechnique Fédérale de Lausanne, 1015, Suisse.
[email protected]
Florent Krzakala
Laboratoire de Physique Statistique, CNRS, PSL Universités et Ecole Normale Supérieure,
Sorbonne Universités et Université Pierre & Marie Curie, 75005, Paris, France.
[email protected]
Thibault Lesieur and Lenka Zdeborová
Institut de Physique Théorique, CNRS, CEA, Université Paris-Saclay, F-91191, Gif-sur-Yvette, France.
[email protected], [email protected]
Abstract
Factorizing low-rank matrices has many applications in machine learning and statistics. For probabilistic models in the Bayes optimal setting, a general expression
for the mutual information has been proposed using heuristic statistical physics
computations, and proven in few specific cases. Here, we show how to rigorously
prove the conjectured formula for the symmetric rank-one case. This allows to
express the minimal mean-square-error and to characterize the detectability phase
transitions in a large set of estimation problems ranging from community detection to sparse PCA. We also show that for a large set of parameters, an iterative
algorithm called approximate message-passing is Bayes optimal. There exists,
however, a gap between what currently known polynomial algorithms can do and
what is expected information theoretically. Additionally, the proof technique has
an interest of its own and exploits three essential ingredients: the interpolation
method introduced in statistical physics by Guerra, the analysis of the approximate
message-passing algorithm and the theory of spatial coupling and threshold saturation in coding. Our approach is generic and applicable to other open problems in
statistical estimation where heuristic statistical physics predictions are available.
Consider the following probabilistic rank-one matrix estimation problem: one has access to noisy observations $w = (w_{ij})_{i,j=1}^n$ of the pair-wise product of the components of a vector $s = (s_1, \dots, s_n)^\intercal \in \mathbb{R}^n$ with i.i.d components distributed as $S_i \sim P_0$, $i = 1, \dots, n$. The entries of $w$ are observed through a noisy element-wise (possibly non-linear) probabilistic output channel $P_{\rm out}(w_{ij} \,|\, s_i s_j/\sqrt{n})$. The goal is to estimate the vector $s$ from $w$, assuming that both $P_0$ and $P_{\rm out}$ are known and independent of $n$ (the noise is symmetric so that $w_{ij} = w_{ji}$). Many important problems in statistics and machine learning can be expressed in this way, such as sparse PCA [1], the Wigner spiked model [2, 3], community detection [4] or matrix completion [5].
Proving a result initially derived by a heuristic method from statistical physics, we give an explicit expression for the mutual information (MI) and the information theoretic minimal mean-square-error (MMSE) in the asymptotic limit $n \to \infty$. Our results imply that for a large region of parameters, the posterior marginal expectations of the underlying signal components (often assumed intractable to compute) can be obtained in the leading order in $n$ using a polynomial-time algorithm called approximate message-passing (AMP) [6, 3, 4, 7]. We also demonstrate the existence of a region where both AMP and spectral methods [8] fail to provide a good answer to the estimation problem, while it is nevertheless information theoretically possible to do so. We illustrate our theorems with examples and also briefly discuss the implications in terms of computational complexity.
1 Setting and main results
The additive white Gaussian noise setting: A standard and natural setting is the case of additive white Gaussian noise (AWGN) of known variance $\Delta$, $w_{ij} = s_i s_j/\sqrt{n} + z_{ij}\sqrt{\Delta}$, where $z = (z_{ij})_{i,j=1}^n$ is a symmetric matrix with i.i.d entries $Z_{ij} \sim \mathcal{N}(0,1)$, $1 \le i \le j \le n$. Perhaps surprisingly, it turns out that this Gaussian setting is sufficient to completely characterize all the problems discussed in the introduction, even if these have more complicated output channels. This is made possible by a theorem of channel universality [9] (already proven for community detection in [4] and conjectured in [10]). This theorem states that given an output channel $P_{\rm out}(w|y)$, such that (s.t) $\log P_{\rm out}(w|y=0)$ is three times differentiable with bounded second and third derivatives, then the MI satisfies $I(S;W) = I(S;\, SS^\intercal/\sqrt{n} + Z\sqrt{\Delta}) + O(\sqrt{n})$, where $\Delta$ is the inverse Fisher information (evaluated at $y=0$) of the output channel: $\Delta^{-1} := \mathbb{E}_{P_{\rm out}(w|0)}[(\partial_y \log P_{\rm out}(W|y)|_{y=0})^2]$. Informally, this means that we only have to compute the MI for an AWGN channel to take care of a wide range of problems, which can be expressed in terms of their Fisher information. In this paper we derive rigorously, for a large class of signal distributions $P_0$, an explicit one-letter formula for the MI per variable $I(S;W)/n$ in the asymptotic limit $n \to \infty$.
Main result: Our central result is a proof of the expression for the asymptotic $n \to \infty$ MI per variable via the so-called replica symmetric (RS) potential $i_{RS}(E;\Delta)$ defined as
$$i_{RS}(E;\Delta) := \frac{(v-E)^2 + v^2}{4\Delta} - \mathbb{E}_{S,Z}\Big[\ln \int dx\, P_0(x)\, e^{-\frac{x^2}{2\Sigma(E;\Delta)^2} + x\big(\frac{S}{\Sigma(E;\Delta)^2} + \frac{Z}{\Sigma(E;\Delta)}\big)}\Big], \qquad (1)$$
with $Z \sim \mathcal{N}(0,1)$, $S \sim P_0$, $\mathbb{E}[S^2] = v$ and $\Sigma(E;\Delta)^2 := \Delta/(v-E)$, $E \in [0, v]$. Here we will assume that $P_0$ is a discrete distribution over a finite bounded real alphabet, $P_0(s) = \sum_{\alpha} p_\alpha \delta(s - a_\alpha)$. Thus the only continuous integral in (1) is the Gaussian over $z$. Our results can be extended to mixtures of discrete and continuous signal distributions at the expense of technical complications in some proofs.
It turns out that both the information theoretic and algorithmic AMP thresholds are determined by the set of stationary points of (1) (w.r.t $E$). It is possible to show that for all $\Delta > 0$ there always exists at least one stationary minimum. Note $E = 0$ is never a stationary point (except for $P_0$ a single Dirac mass) and $E = v$ is stationary only if $\mathbb{E}[S] = 0$. In this contribution we suppose that at most three stationary points exist, corresponding to situations with at most one phase transition. We believe that situations with multiple transitions can also be covered by our techniques.
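To make the potential (1) concrete, here is a minimal numerical sketch, entirely our own and not from the paper: it evaluates $i_{RS}(E;\Delta)$ for a Bernoulli($\rho$) prior (so the $x$-integral reduces to a two-term sum) on a grid of $E$ and reads off the minimizer, which by the theorem and corollary stated next gives the asymptotic MI and matrix-MMSE. The parameter values are illustrative, chosen to match Figure 1.

# Sketch (our assumptions): P0 = (1-rho)*delta_0 + rho*delta_1,
# Gauss-Hermite quadrature for the expectation over Z ~ N(0,1).
import numpy as np

rho, Delta = 0.02, 0.0012      # illustrative values, cf. Figure 1
v = rho                        # v = E[S^2] for this prior
z, wz = np.polynomial.hermite_e.hermegauss(101)
wz = wz / np.sqrt(2 * np.pi)   # now sum(wz * f(z)) ~ E_Z[f(Z)]

def i_RS(E):
    s2 = Delta / (v - E)       # Sigma(E; Delta)^2
    def ln_Z(S):               # E_Z ln int dx P0(x) exp(-x^2/(2 s2) + x(S/s2 + Z/sqrt(s2)))
        expo = -1.0 / (2 * s2) + S / s2 + z / np.sqrt(s2)   # exponent for x = 1
        return np.sum(wz * np.log((1 - rho) + rho * np.exp(expo)))  # x = 0 contributes 1
    avg = rho * ln_Z(1.0) + (1 - rho) * ln_Z(0.0)
    return ((v - E) ** 2 + v ** 2) / (4 * Delta) - avg

Es = np.linspace(1e-6, v * (1 - 1e-6), 400)
E_star = Es[np.argmin([i_RS(E) for E in Es])]
print("argmin_E i_RS =", E_star, "; asymptotic matrix-MMSE ~", v**2 - (v - E_star)**2)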
Theorem 1.1 (RS formula for the mutual information) Fix $\Delta > 0$ and let $P_0$ be a discrete distribution s.t (1) has at most three stationary points. Then $\lim_{n\to\infty} I(S;W)/n = \min_{E\in[0,v]} i_{RS}(E;\Delta)$.
The proof of the existence of the limit does not require the above hypothesis on $P_0$. Also, it was first shown in [9] that for all $n$, $I(S;W)/n \le \min_{E\in[0,v]} i_{RS}(E;\Delta)$, an inequality that we will use in the proof section. It is conceptually useful to define the following threshold:
Definition 1.2 (Information theoretic threshold) Define $\Delta_{\rm Opt}$ as the first non-analyticity point of the MI as $\Delta$ increases: $\Delta_{\rm Opt} := \sup\{\Delta \,|\, \lim_{n\to\infty} I(S;W)/n \text{ is analytic in } ]0,\Delta[\}$.
When $P_0$ is s.t (1) has at most three stationary points, as discussed below, then $\min_{E\in[0,v]} i_{RS}(E;\Delta)$ has at most one non-analyticity point denoted $\Delta_{RS}$ (if $\min_{E\in[0,v]} i_{RS}(E;\Delta)$ is analytic over all $\mathbb{R}_+$ we set $\Delta_{RS} = \infty$). Theorem 1.1 gives us a means to compute the information theoretic threshold $\Delta_{\rm Opt} = \Delta_{RS}$. A basic application of theorem 1.1 is the expression of the MMSE:
Corollary 1.3 (Exact formula for the MMSE) For all $\Delta \ne \Delta_{RS}$, the matrix-MMSE ${\rm Mmmse}_n := \mathbb{E}_{S,W}[\|SS^\intercal - \mathbb{E}[XX^\intercal|W]\|_F^2]/n^2$ ($\|\cdot\|_F$ being the Frobenius norm) is asymptotically $\lim_{n\to\infty} {\rm Mmmse}_n(\Delta^{-1}) = v^2 - (v - \operatorname{argmin}_{E\in[0,v]} i_{RS}(E;\Delta))^2$. Moreover, if $\Delta < \Delta_{\rm AMP}$ (where $\Delta_{\rm AMP}$ is the algorithmic threshold, see definition 1.4) or $\Delta > \Delta_{RS}$, then the usual vector-MMSE ${\rm Vmmse}_n := \mathbb{E}_{S,W}[\|S - \mathbb{E}[X|W]\|_2^2]/n$ satisfies $\lim_{n\to\infty} {\rm Vmmse}_n = \operatorname{argmin}_{E\in[0,v]} i_{RS}(E;\Delta)$.
It is natural to conjecture that the vector-MMSE is given by $\operatorname{argmin}_{E\in[0,v]} i_{RS}(E;\Delta)$ for all $\Delta \ne \Delta_{RS}$, but our proof does not quite yield the full statement.
A fundamental consequence concerns the performance of the AMP algorithm [6] for estimating $s$. AMP has been analysed rigorously in [11, 12, 4] where it is shown that its asymptotic performance is tracked by state evolution (SE). Let $E^t := \lim_{n\to\infty} \mathbb{E}_{S,Z}[\|S - \hat{s}^t\|_2^2]/n$ be the asymptotic average vector-MSE of the AMP estimate $\hat{s}^t$ at time $t$. Define ${\rm mmse}(\Sigma^{-2}) := \mathbb{E}_{S,Z}[(S - \mathbb{E}[X \,|\, S + \Sigma Z])^2]$ as the usual scalar mmse function associated to a scalar AWGN channel of noise variance $\Sigma^2$, with $S \sim P_0$ and $Z \sim \mathcal{N}(0,1)$. Then
$$E^{t+1} = {\rm mmse}(\Sigma(E^t;\Delta)^{-2}), \qquad E^0 = v, \qquad (2)$$
is the SE recursion. Monotonicity properties of the mmse function imply that $E^t$ is a decreasing sequence s.t $\lim_{t\to\infty} E^t = E^\infty$ exists. Note that when $\mathbb{E}[S] = 0$ and $v$ is an unstable fixed point, as such, SE "does not start". While this is not really a problem when one runs AMP in practice, for analysis purposes one can slightly bias $P_0$ and remove the bias at the end of the proofs.
Definition 1.4 (AMP algorithmic threshold) For $\Delta > 0$ small enough, the fixed point equation corresponding to (2) has a unique solution for all noise values in $]0, \Delta[$. We define $\Delta_{\rm AMP}$ as the supremum of all such $\Delta$.
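The recursion (2) is easy to iterate numerically. The sketch below, again our own illustration rather than the authors' code, runs SE for the same Bernoulli($\rho$) prior, with the scalar mmse function evaluated by quadrature. At these parameter values (between $\Delta_{\rm AMP}$ and $\Delta_{RS}$ according to Figure 1) the iteration started from $E^0 = v$ is expected to get stuck at the "bad" fixed point, while the global minimizer found in the previous sketch is the "good" one.

# Sketch (our assumptions): state evolution (2) for S ~ Ber(rho); E^0 = v is
# replaced by v*(1 - 1e-9) to avoid the division by zero in Sigma(v; Delta)^2.
import numpy as np

rho, Delta = 0.02, 0.0012
v = rho
z, wz = np.polynomial.hermite_e.hermegauss(101)
wz = wz / np.sqrt(2 * np.pi)

def mmse(s2):
    # E_{S,Z}[(S - E[X | S + sqrt(s2) Z])^2] for the scalar channel Y = S + sqrt(s2) Z
    def post_mean(y):
        l1 = rho * np.exp(-(y - 1.0) ** 2 / (2 * s2))
        l0 = (1 - rho) * np.exp(-y ** 2 / (2 * s2))
        return l1 / (l1 + l0)
    y1 = 1.0 + np.sqrt(s2) * z                 # channel outputs when S = 1
    y0 = np.sqrt(s2) * z                       # channel outputs when S = 0
    return rho * np.sum(wz * (1.0 - post_mean(y1)) ** 2) \
         + (1 - rho) * np.sum(wz * post_mean(y0) ** 2)

E = v * (1 - 1e-9)
for t in range(5000):
    E_new = mmse(Delta / (v - E))              # Sigma(E; Delta)^2 = Delta/(v - E)
    if abs(E_new - E) < 1e-12:
        break
    E = E_new
print("SE fixed point E^infty =", E)           # expected: the large-MSE branch E_bad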
Corollary 1.5 (Performance of AMP) In the limit $n \to \infty$, AMP initialized without any knowledge other than $P_0$ yields upon convergence the asymptotic matrix-MMSE as well as the asymptotic vector-MMSE iff $\Delta < \Delta_{\rm AMP}$ or $\Delta > \Delta_{RS}$, namely $E^\infty = \operatorname{argmin}_{E\in[0,v]} i_{RS}(E;\Delta)$.
$\Delta_{\rm AMP}$ can be read off the replica potential (1): by differentiation of (1) one finds a fixed point equation that corresponds to (2). Thus $\Delta_{\rm AMP}$ is the smallest solution of $\partial i_{RS}/\partial E = \partial^2 i_{RS}/\partial E^2 = 0$; in other words it is the "first" horizontal inflexion point appearing in $i_{RS}(E;\Delta)$ when $\Delta$ increases.
Discussion: With our hypothesis on $P_0$ there are only three possible scenarios: $\Delta_{\rm AMP} < \Delta_{RS}$ (one "first order" phase transition); $\Delta_{\rm AMP} = \Delta_{RS} < \infty$ (one "higher order" phase transition); $\Delta_{\rm AMP} = \Delta_{RS} = \infty$ (no phase transition). In the sequel we will have in mind the most interesting case, namely one first order phase transition, where we determine the gap between the algorithmic AMP and information theoretic performance. The cases of no phase transition or higher order phase transition, which present no algorithmic gap, are basically covered by the analysis of [3] and follow as a special case from our proof. The only cases that would require more work are those where $P_0$ is s.t (1) develops more than three stationary points and more than one phase transition is present.
For $\Delta_{\rm AMP} < \Delta_{RS}$ the structure of stationary points of (1) is as follows¹ (figure 1). There exist three branches $E_{\rm good}(\Delta)$, $E_{\rm unstable}(\Delta)$ and $E_{\rm bad}(\Delta)$ s.t: 1) For $0 < \Delta < \Delta_{\rm AMP}$ there is a single stationary point $E_{\rm good}(\Delta)$ which is a global minimum; 2) At $\Delta_{\rm AMP}$ a horizontal inflexion point appears; for $\Delta \in [\Delta_{\rm AMP}, \Delta_{RS}]$ there are three stationary points satisfying $E_{\rm good}(\Delta_{\rm AMP}) < E_{\rm unstable}(\Delta_{\rm AMP}) = E_{\rm bad}(\Delta_{\rm AMP})$, $E_{\rm good}(\Delta) < E_{\rm unstable}(\Delta) < E_{\rm bad}(\Delta)$ otherwise, and moreover $i_{RS}(E_{\rm good};\Delta) \le i_{RS}(E_{\rm bad};\Delta)$ with equality only at $\Delta_{RS}$; 3) for $\Delta > \Delta_{RS}$ there is at least the stationary point $E_{\rm bad}(\Delta)$ which is always the global minimum, i.e. $i_{RS}(E_{\rm bad};\Delta) < i_{RS}(E_{\rm good};\Delta)$ (for higher $\Delta$ the $E_{\rm good}(\Delta)$ and $E_{\rm unstable}(\Delta)$ branches may merge and disappear); 4) $E_{\rm good}(\Delta)$ is analytic for $\Delta \in\, ]0, \Delta_0[$, $\Delta_0 > \Delta_{RS}$, and $E_{\rm bad}(\Delta)$ is analytic for $\Delta > \Delta_{\rm AMP}$.
We note for further use in the proof section that $E^\infty = E_{\rm good}(\Delta)$ for $\Delta < \Delta_{\rm AMP}$ and $E^\infty = E_{\rm bad}(\Delta)$ for $\Delta > \Delta_{\rm AMP}$. Definition 1.4 is equivalent to $\Delta_{\rm AMP} = \sup\{\Delta \,|\, E^\infty = E_{\rm good}(\Delta)\}$. Moreover we will also use that $i_{RS}(E_{\rm good};\Delta)$ is analytic on $]0, \Delta_0[$, $i_{RS}(E_{\rm bad};\Delta)$ is analytic on $]\Delta_{\rm AMP}, \infty[$, and the only non-analyticity point of $\min_{E\in[0,v]} i_{RS}(E;\Delta)$ is at $\Delta_{RS}$.
Relation to other works: Explicit single-letter characterization of the MI in the rank-one problem has attracted a lot of attention recently. Particular cases of theorem 1.1 have been shown rigorously in a number of situations. A special case when $s_i = \pm 1 \sim {\rm Ber}(1/2)$ already appeared in [13] where an equivalent spin glass model is analysed. Very recently, [9] has generalized the results of [13] and, notably, obtained a generic matching upper bound. The same formula has been also rigorously computed following the study of AMP in [3] for spiked models (provided, however, that the signal was not too sparse) and in [4] for strictly symmetric community detection.
¹ We take $\mathbb{E}[S] \ne 0$. Once theorem 1.1 is proven for this case a limiting argument allows to extend it to $\mathbb{E}[S] = 0$.
[Figure 1 appears here: four panels plotting $i_{RS}(E)$ against $E \in [0, 0.02]$ for $\Delta = 0.0008$, $0.0012$, $0.00125$ and $0.0015$.]
Figure 1: The replica symmetric potential $i_{RS}(E)$ for four values of $\Delta$ in the Wigner spiked model. The MI is $\min i_{RS}(E)$ (the black dot, while the black cross corresponds to the local minimum) and the asymptotic matrix-MMSE is $v^2 - (v - \operatorname{argmin} i_{RS}(E))^2$, where $v = \rho$ in this case with $\rho = 0.02$ as in the inset of figure 2. From top left to bottom right: (1) For low noise values, here $\Delta = 0.0008 < \Delta_{\rm AMP}$, there exists a unique "good" minimum corresponding to the MMSE and AMP is Bayes optimal. (2) As the noise increases, a second local "bad" minimum appears: this is the situation at $\Delta_{\rm AMP} < \Delta = 0.0012 < \Delta_{RS}$. (3) For $\Delta = 0.00125 > \Delta_{RS}$, the "bad" minimum becomes the global one and the MMSE suddenly deteriorates. (4) For larger values of $\Delta$, only the "bad" minimum exists. AMP can be seen as a naive minimizer of this curve starting from $E = v = 0.02$. It reaches the global minimum in situations (1), (3) and (4), but in (2), when $\Delta_{\rm AMP} < \Delta < \Delta_{RS}$, it is trapped by the local minimum with large MSE instead of reaching the global one corresponding to the MMSE.
For rank-one symmetric matrix estimation problems, AMP has been introduced by [6], who also computed the SE formula to analyse its performance, generalizing techniques developed by [11] and [12]. SE was further studied by [3] and [4]. In [7, 10], the generalization to larger rank was also considered. The general formula proposed by [10] for the conditional entropy and the MMSE on the basis of the heuristic cavity method from statistical physics was not demonstrated in full generality. Worse, all existing proofs could not reach the more interesting regime where a gap between the algorithmic and information theoretic performances appears, leaving a gap with the statistical physics conjectured formula (and rigorous upper bound from [9]). Our result settles this conjecture and has interesting non-trivial implications on the computational complexity of these tasks.
Our proof technique combines recent rigorous results in coding theory, along with the study of capacity-achieving spatially coupled codes [14, 15, 16, 17], with other progress coming from developments in mathematical physics putting on a rigorous basis predictions of spin glass theory [18]. From this point of view, the theorem proved in this paper is relevant in a broader context going beyond low-rank matrix estimation. Hundreds of papers have been published in statistics, machine learning or information theory using the non-rigorous statistical physics approach. We believe that our result helps setting a rigorous foundation for a broad line of work. While we focus on rank-one symmetric matrix estimation, our proof technique is readily extendable to more generic low-rank symmetric matrix or low-rank symmetric tensor estimation. We also believe that it can be extended to other problems of interest in machine learning and signal processing, such as generalized linear regression, features/dictionary learning, compressed sensing or multi-layer neural networks.
2 Two examples: Wigner spiked model and community detection
In order to illustrate the consequences of our results we shall present two examples.
Wigner spiked model: In this model, the vector $s$ is a Bernoulli random vector, $S_i \sim {\rm Ber}(\rho)$. For large enough densities (i.e. $\rho > 0.041(1)$), [3] computed the matrix-MMSE and proved that AMP is a computationally efficient algorithm that asymptotically achieves the matrix-MMSE for any value of the noise $\Delta$. Our results allow to close the gap left open by [3]: on one hand we now obtain rigorously the MMSE for $\rho \le 0.041(1)$, and on the other one we observe that for such values of $\rho$, and as $\Delta$ decreases, there is a small region where two local minima coexist in $i_{RS}(E;\Delta)$. In particular for $\Delta_{\rm AMP} < \Delta < \Delta_{\rm Opt} = \Delta_{RS}$ the global minimum corresponding to the MMSE differs from the local one that traps AMP, and a computational gap appears (see figure 1). While the region where AMP is Bayes optimal is quite large, the region where it is not, however, is perhaps the most interesting one. While this is by no means evident, statistical physics analogies with physical phase transitions in nature suggest that this region should be hard for a very broad class of algorithms.
[Figure 2 appears here: left panel "Wigner Spike model", a phase diagram in the $(\rho, \Delta)$ plane with curves $\Delta_{\rm AMP}$, $\Delta_{\rm Opt}$ and $\Delta_{\rm spectral}$ and an inset showing the matrix-MSE($\Delta$) at $\rho = 0.02$ for the MMSE and AMP; right panel "Asymmetric Community Detection", the same curves with an inset showing the matrix-MSE($\Delta$) at $\rho = 0.05$.]
Figure 2: Phase diagram in the noise variance $\Delta$ versus density $\rho$ plane for the rank-one spiked Wigner model (left) and the asymmetric community detection (right). Left: [3] proved that AMP achieves the matrix-MMSE for all $\Delta$ as long as $\rho > 0.041(1)$. Here we show that AMP is actually achieving the optimal reconstruction in the whole phase diagram except in the small region between the blue and red lines. Notice the large gap with spectral methods (dashed black line). Inset: matrix-MMSE (blue) at $\rho = 0.02$ as a function of $\Delta$. AMP (dashed red) provably achieves the matrix-MMSE except in the region $\Delta_{\rm AMP} < \Delta < \Delta_{\rm Opt} = \Delta_{RS}$. We conjecture that no polynomial-time algorithm will do better than AMP in this region. Right: Asymmetric community detection problem with two communities. For $\rho > 1/2 - \sqrt{1/12}$ (black point) and when $\Delta > 1$, it is information theoretically impossible to find any overlap with the true communities and the matrix-MMSE is 1, while it becomes possible for $\Delta < 1$. In this region, AMP is always achieving the matrix-MMSE and spectral methods can find a non-trivial overlap with the truth as well, starting from $\Delta < 1$. For $\rho < 1/2 - \sqrt{1/12}$, however, it is information theoretically possible to find an overlap with the hidden communities for $\Delta > 1$ (below the blue line) but both AMP and spectral methods miss this information. Inset: matrix-MMSE (blue) at $\rho = 0.05$ as a function of $\Delta$. AMP (dashed red) again provably achieves the matrix-MMSE except in the region $\Delta_{\rm AMP} < \Delta < \Delta_{\rm Opt}$.
For small $\rho$ our results are consistent with the known optimal and algorithmic thresholds predicted in sparse PCA [19, 20], which treat the case of sub-extensive values of $\rho$. Another interesting line of work for such probabilistic models appeared in the context of random matrix theory (see [8] and references therein) and predicts that a sharp phase transition occurs at a critical value of the noise $\Delta_{\rm spectral} = \rho^2$ below which an outlier eigenvalue (and its principal eigenvector) has a positive correlation with the hidden signal. For larger noise values the spectral distribution of the observation is indistinguishable from that of the pure random noise.
Asymmetric balanced community detection: We now consider the problem of detecting two communities (groups) with different sizes $\rho n$ and $(1-\rho)n$, which generalizes the one considered in [4]. One is given a graph where the probability to have a link between nodes in the first group is $p + \lambda(1-\rho)/(\rho\sqrt{n})$, between those in the second group is $p + \lambda\rho/(\sqrt{n}(1-\rho))$, while inter-connections appear with probability $p - \lambda/\sqrt{n}$. With this peculiar "balanced" setting, the nodes in each group have the same degree distribution with mean $pn$, making them harder to distinguish. According to the universality property described in the first section, this is equivalent to a model with AWGN of variance $\Delta = p(1-p)/\lambda^2$ where each variable $s_i$ is chosen according to $P_0(s) = \rho\,\delta(s - \sqrt{(1-\rho)/\rho}) + (1-\rho)\,\delta(s + \sqrt{\rho/(1-\rho)})$.² Our results for this problem are summarized on the right hand side of figure 2. For $\rho > \rho_c = 1/2 - \sqrt{1/12}$ (black point), it is asymptotically information theoretically possible to get an estimation better than chance if and only if $\Delta < 1$. When $\rho < \rho_c$, however, it becomes possible for much larger values of the noise. Interestingly, AMP and spectral methods have the same transition and can find a positive correlation with the hidden communities for $\Delta < 1$, regardless of the value of $\rho$. Again, a region $[\Delta_{\rm AMP}, \Delta_{\rm Opt} = \Delta_{RS}]$ exists where a computational gap appears when $\rho < \rho_c$. One can investigate the very low $\rho$ regime where we find that the information theoretic transition goes as $\Delta_{\rm Opt}(\rho \to 0) = 1/(4\rho|\log \rho|)$. Now if we assume that this result stays true even when $\lambda$ grows with $n$ (which is a speculation at this point), we can choose $\lambda \sim (1-p)\rho\sqrt{n}$ such that the small group is a clique. Then the problem corresponds to a "balanced" version of the famous planted clique problem [21]. We find that the AMP/spectral approach finds the
² Note that here since $E = v = 1$ is an extremum of $i_{RS}(E;\Delta)$, one must introduce a small bias in $P_0$ and let it then tend to zero at the end of the proofs.
hidden clique when it is larger than $\sqrt{np/(1-p)}$, while the information theoretic transition translates into a clique size of $4p\log(n)/(1-p)$. This is indeed reminiscent of the more classical planted clique problem at $p = 1/2$, with its gap between $\log(n)$ (information theoretic), $\sqrt{n/e}$ (AMP [22]) and $\sqrt{n}$ (spectral [21]). Since in our balanced case the spectral and AMP limits match, this suggests that the small gain of AMP in the standard clique problem is simply due to the information provided by the distribution of local degrees in the two groups (which is absent in our balanced case). We believe this correspondence strengthens the claim that the AMP gap is actually a fundamental one.
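As a quick numerical check of the mapping above (a sketch with illustrative values of $p$, $\lambda$, $\rho$ that are our own, not from the paper), the effective prior indeed has zero mean and unit second moment, so for $\rho > \rho_c$ the detectability condition reduces to $\Delta = p(1-p)/\lambda^2 < 1$:

# Sketch (our assumptions): universality mapping for balanced community detection.
import numpy as np

p, lam, rho = 0.5, 1.5, 0.3                    # illustrative; rho > rho_c ~ 0.211
Delta = p * (1 - p) / lam ** 2                 # effective AWGN variance
a_small = np.sqrt((1 - rho) / rho)             # s_i in the group of size rho*n
a_large = -np.sqrt(rho / (1 - rho))            # s_i in the group of size (1-rho)*n
mean = rho * a_small + (1 - rho) * a_large     # = 0
v = rho * a_small**2 + (1 - rho) * a_large**2  # = E[S^2] = 1
print(Delta, mean, v)                          # detection possible iff Delta < 1 here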
3 Proofs
The crux of our proof rests on an auxiliary "spatially coupled system". The hallmark of spatially coupled models is that one can tune them so that the gap between the algorithmic and information theoretic limits is eliminated, while at the same time the MI is maintained unchanged for the coupled and original models. Roughly speaking, this means that it is possible to algorithmically compute the information theoretic limit of the original model because a suitable algorithm is optimal on the coupled system. The present spatially coupled construction is similar to the one used for the coupled Curie-Weiss model [14]. Consider a ring of length $L+1$ ($L$ even) with blocks positioned at $\mu \in \{0, \dots, L\}$ and coupled to neighboring blocks $\{\mu-w, \dots, \mu+w\}$. Positions $\mu$ are taken modulo $L+1$ and the integer $w \in \{0, \dots, L/2\}$ equals the size of the coupling window. The coupled model is
$$w_{i_\mu j_\nu} = s_{i_\mu} s_{j_\nu} \sqrt{\frac{\Lambda_{\mu\nu}}{n}} + z_{i_\mu j_\nu}\sqrt{\Delta}, \qquad (3)$$
where the index $i_\mu \in \{1, \dots, n\}$ (resp. $j_\nu$) belongs to the block $\mu$ (resp. $\nu$) along the ring, $\Lambda$ is an $(L+1)\times(L+1)$ matrix which describes the strength of the coupling between blocks, and $Z_{i_\mu j_\nu} \sim \mathcal{N}(0,1)$ are i.i.d. For the proof to work, the matrix elements have to be chosen appropriately. We assume that: i) $\Lambda$ is a doubly stochastic matrix; ii) $\Lambda_{\mu\nu}$ depends on $|\mu-\nu|$; iii) $\Lambda_{\mu\nu}$ is not vanishing for $|\mu-\nu| \le w$ and vanishes for $|\mu-\nu| > w$; iv) $\Lambda$ is smooth in the sense $|\Lambda_{\mu\nu} - \Lambda_{\mu+1\,\nu}| = O(w^{-2})$; v) $\Lambda$ has a non-negative Fourier transform. All these conditions can easily be met, the simplest example being a triangle of base $2w+1$ and height $1/(w+1)$. The construction of the coupled system is completed by introducing a seed in the ring: we assume perfect knowledge of the signal components $\{s_{i_\mu}\}$ for $\mu \in B := \{-w-1, \dots, w-1\}$ mod $L+1$. This seed is what allows to close the gap between the algorithmic and information theoretic limits and therefore plays a crucial role. Note it can also be viewed as an "opening" of the chain with fixed boundary conditions. Our first crucial result states that the MI $I_{w,L}(S;W)$ of the coupled and original systems are the same in a suitable limit.
Lemma 3.1 (Equality of mutual informations) For any fixed $w$ the following limits exist and are equal: $\lim_{L\to\infty} \lim_{n\to\infty} I_{w,L}(S;W)/(n(L+1)) = \lim_{n\to\infty} I(S;W)/n$.
An immediate corollary is that non-analyticity points (w.r.t $\Delta$) of the MIs are the same in the coupled and original models. In particular, defining $\Delta_{\rm Opt,coup} := \sup\{\Delta \,|\, \lim_{L\to\infty}\lim_{n\to\infty} I_{w,L}(S;W)/(n(L+1)) \text{ is analytic in } ]0,\Delta[\}$, we have $\Delta_{\rm Opt,coup} = \Delta_{\rm Opt}$.
The second crucial result states that the AMP threshold of the spatially coupled system is at least as good as $\Delta_{RS}$. The analysis of AMP applies to the coupled system as well [11, 12] and it can be shown that the performance of AMP is assessed by SE. Let $E_\mu^t := \lim_{n\to\infty} \mathbb{E}_{S,Z}[\|S_\mu - \hat{s}_\mu^t\|_2^2]/n$ be the asymptotic average vector-MSE of the AMP estimate $\hat{s}_\mu^t$ at time $t$ for the $\mu$-th "block" of $S$. We associate to each position $\mu \in \{0, \dots, L\}$ an independent scalar system with AWGN of the form $Y = S + \Sigma_\mu(E;\Delta)Z$, with $\Sigma_\mu(E;\Delta)^2 := \Delta/(v - \sum_{\nu=0}^L \Lambda_{\mu\nu} E_\nu)$ and $S \sim P_0$, $Z \sim \mathcal{N}(0,1)$. Taking into account knowledge of the signal components in $B$, SE reads:
$$E_\mu^{t+1} = {\rm mmse}(\Sigma_\mu(E^t;\Delta)^{-2}),\quad E_\mu^0 = v \text{ for } \mu \in \{0,\dots,L\} \setminus B,\quad E_\mu^t = 0 \text{ for } \mu \in B,\ t \ge 0, \qquad (4)$$
where the mmse function is defined as in section 1. From the monotonicity of the mmse function we have $E_\mu^{t+1} \le E_\mu^t$ for all $\mu \in \{0, \dots, L\}$, a partial order which implies that $\lim_{t\to\infty} E^t = E^\infty$ exists. This allows to define an algorithmic threshold for the coupled system: $\Delta_{\rm AMP,w,L} := \sup\{\Delta \,|\, E_\mu^\infty \le E_{\rm good}(\Delta)\ \forall\ \mu\}$. We show (equality holds but is not directly needed):
Lemma 3.2 (Threshold saturation) Let $\Delta_{\rm AMP,coup} := \liminf_{w\to\infty} \liminf_{L\to\infty} \Delta_{\rm AMP,w,L}$. We have $\Delta_{\rm AMP,coup} \ge \Delta_{RS}$.
Proof sketch of theorem 1.1: First we prove the RS formula for $\Delta \le \Delta_{\rm Opt}$. It is known [3] that the matrix-MSE of AMP when $n \to \infty$ is equal to $v^2 - (v-E^t)^2$. This cannot improve the matrix-MMSE, hence $(v^2 - (v-E^\infty)^2)/4 \ge \limsup_{n\to\infty} {\rm Mmmse}_n/4$. For $\Delta \le \Delta_{\rm AMP}$ we have $E^\infty = E_{\rm good}(\Delta)$ which is the global minimum of (1), so the left hand side of the last inequality equals the derivative of $\min_{E\in[0,v]} i_{RS}(E;\Delta)$ w.r.t $\Delta^{-1}$. Thus using the matrix version of the I-MMSE relation [23] we get
$$\frac{d}{d\Delta^{-1}} \min_{E\in[0,v]} i_{RS}(E;\Delta) \ \ge\ \limsup_{n\to\infty} \frac{1}{n}\frac{dI(S;W)}{d\Delta^{-1}}. \qquad (5)$$
Integrating this relation on $[0,\Delta] \subset [0,\Delta_{\rm AMP}]$ and checking that $\min_{E\in[0,v]} i_{RS}(E;0) = H(S)$ (the Shannon entropy of $P_0$) we obtain $\min_{E\in[0,v]} i_{RS}(E;\Delta) \le \liminf_{n\to\infty} I(S;W)/n$. But we know $I(S;W)/n \le \min_{E\in[0,v]} i_{RS}(E;\Delta)$ [9], thus we already get theorem 1.1 for $\Delta \le \Delta_{\rm AMP}$. We notice that $\Delta_{\rm AMP} \le \Delta_{\rm Opt}$. While this might seem intuitively clear, it follows from $\Delta_{RS} \ge \Delta_{\rm AMP}$ (by their definitions) which together with $\Delta_{\rm AMP} > \Delta_{\rm Opt}$ would imply from theorem 1.1 that $\lim_{n\to\infty} I(S;W)/n$ is analytic at $\Delta_{\rm Opt}$, a contradiction. The next step is to extend theorem 1.1 to the range $[\Delta_{\rm AMP}, \Delta_{\rm Opt}]$. Suppose for a moment $\Delta_{RS} \ge \Delta_{\rm Opt}$. Then both functions on each side of the RS formula are analytic on the whole range $]0, \Delta_{\rm Opt}[$ and since they are equal for $\Delta \le \Delta_{\rm AMP}$, they must be equal on their whole analyticity range and by continuity, they must also be equal at $\Delta_{\rm Opt}$ (that the functions are continuous follows from independent arguments on the existence of the $n \to \infty$ limit of concave functions). It remains to show that $\Delta_{RS} \in\, ]\Delta_{\rm AMP}, \Delta_{\rm Opt}[$ is impossible. We proceed by contradiction, so suppose this is true. Then both functions on each side of the RS formula are analytic on $]0, \Delta_{RS}[$ and since they are equal for $]0, \Delta_{\rm AMP}[\, \subset\, ]0, \Delta_{RS}[$ they must be equal on the whole range $]0, \Delta_{RS}[$ and also at $\Delta_{RS}$ by continuity. For $\Delta > \Delta_{RS}$ the fixed point of SE is $E^\infty = E_{\rm bad}(\Delta)$ which is also the global minimum of $i_{RS}(E;\Delta)$, hence (5) is verified. Integrating this inequality on $]\Delta_{RS}, \Delta[\, \subset\, ]\Delta_{RS}, \Delta_{\rm Opt}[$ and using $I(S;W)/n \le \min_{E\in[0,v]} i_{RS}(E;\Delta)$ again, we find that the RS formula holds for all $\Delta \in [0, \Delta_{\rm Opt}]$. But this implies that $\min_{E\in[0,v]} i_{RS}(E;\Delta)$ is analytic at $\Delta_{RS}$, a contradiction.
We now prove the RS formula for $\Delta \ge \Delta_{\rm Opt}$. Note that the previous arguments showed that necessarily $\Delta_{\rm Opt} \le \Delta_{RS}$. Thus by lemmas 3.1 and 3.2 (and the sub-optimality of AMP as shown before) we obtain $\Delta_{RS} \le \Delta_{\rm AMP,coup} \le \Delta_{\rm Opt,coup} = \Delta_{\rm Opt} \le \Delta_{RS}$. This shows that $\Delta_{\rm Opt} = \Delta_{RS}$ (this is the point where spatial coupling came into play, and we do not know of other means to prove such an equality). For $\Delta > \Delta_{RS}$ we have $E^\infty = E_{\rm bad}(\Delta)$ which is the global minimum of $i_{RS}(E;\Delta)$. Therefore we again have (5) in this range and the proof can be completed by using once more the integration argument, this time over the range $[\Delta_{RS}, \Delta] = [\Delta_{\rm Opt}, \Delta]$.
Proof sketch of corollaries 1.3 and 1.5: Let $E_*(\Delta) = \operatorname{argmin}_E i_{RS}(E;\Delta)$ for $\Delta \ne \Delta_{RS}$. By explicit calculation one checks that $d\,i_{RS}(E_*,\Delta)/d\Delta^{-1} = (v^2 - (v-E_*(\Delta))^2)/4$, so from theorem 1.1 and the matrix form of the I-MMSE relation we find ${\rm Mmmse}_n \to v^2 - (v-E_*(\Delta))^2$ as $n \to \infty$, which is the first part of the statement of corollary 1.3. Let us now turn to corollary 1.5. For $n \to \infty$ the vector-MSE of the AMP estimator at time $t$ equals $E^t$, and since the fixed point equation corresponding to SE is precisely the stationarity equation for $i_{RS}(E;\Delta)$, we conclude that for $\Delta \notin [\Delta_{\rm AMP}, \Delta_{RS}]$ we must have $E^\infty = E_*(\Delta)$. It remains to prove that $E_*(\Delta) = \lim_{n\to\infty} {\rm Vmmse}_n(\Delta)$ at least for $\Delta \notin [\Delta_{\rm AMP}, \Delta_{RS}]$ (we believe this is in fact true for all $\Delta$). This will settle the second part of corollary 1.3 as well as 1.5. Using (Nishimori) identities $\mathbb{E}_{S,W}[S_i S_j \mathbb{E}[X_i X_j|W]] = \mathbb{E}_{S,W}[\mathbb{E}[X_i X_j|W]^2]$ (see e.g. [9]) and using the law of large numbers we can show $\lim_{n\to\infty} {\rm Mmmse}_n \le \lim_{n\to\infty} (v^2 - (v - {\rm Vmmse}_n(\Delta))^2)$. Concentration techniques similar to [13] suggest that the equality in fact holds (for $\Delta \ne \Delta_{RS}$) but there are technicalities that prevent us from completing the proof of equality. However it is interesting to note that this equality would imply $E_*(\Delta) = \lim_{n\to\infty} {\rm Vmmse}_n(\Delta)$ for all $\Delta \ne \Delta_{RS}$. Nevertheless, another argument can be used when AMP is optimal. On one hand the right hand side of the inequality is necessarily smaller than $v^2 - (v-E^\infty)^2$. On the other hand the left hand side of the inequality is equal to $v^2 - (v-E_*(\Delta))^2$. Since $E_*(\Delta) = E^\infty$ when $\Delta \notin [\Delta_{\rm AMP}, \Delta_{RS}]$, we can conclude $\lim_{n\to\infty} {\rm Vmmse}_n(\Delta) = \operatorname{argmin}_E i_{RS}(E;\Delta)$ for this range of $\Delta$.
Proof sketch of lemma 3.1: Here we prove the lemma for a ring that is not seeded. An easy argument shows that a seed of size $w$ does not change the MI per variable when $L \to \infty$. The statistical physics formulation is convenient: up to the trivial additive term $n(L+1)v^2/4$, the MI $I_{w,L}(S;W)$ equals the free energy $-\mathbb{E}_{S,Z}[\ln Z_{w,L}]$, where $Z_{w,L} := \int dx\, P_0(x)\, \exp(-\mathcal{H}(x,z,\Lambda))$ and
$$\mathcal{H}(x,z,\Lambda) = \frac{1}{\Delta}\sum_{\mu=0}^{L}\Big[\sum_{\nu=\mu+1}^{\mu+w} \Lambda_{\mu\nu} \sum_{i_\mu, j_\nu} A_{i_\mu j_\nu}(x,z,\Lambda) + \Lambda_{\mu\mu} \sum_{i_\mu \le j_\mu} A_{i_\mu j_\mu}(x,z,\Lambda)\Big], \qquad (6)$$
with $A_{i_\mu j_\nu}(x,z,\Lambda) := \frac{x_{i_\mu}^2 x_{j_\nu}^2}{2n} - \frac{s_{i_\mu} s_{j_\nu} x_{i_\mu} x_{j_\nu}}{n} - \frac{x_{i_\mu} x_{j_\nu} z_{i_\mu j_\nu}\sqrt{\Delta}}{\sqrt{n\Lambda_{\mu\nu}}}$. Consider a pair of systems with coupling matrices $\Lambda$ and $\Lambda'$ and i.i.d noise realizations $z$, $z'$, an interpolated Hamiltonian $\mathcal{H}(x,z,t\Lambda) + \mathcal{H}(x,z',(1-t)\Lambda')$, $t \in [0,1]$, and the corresponding partition function $Z_t$. The main idea of the proof is to show that for suitable choices of matrices, $-\frac{d}{dt}\mathbb{E}_{S,Z,Z'}[\ln Z_t] \ge 0$ for all $t \in [0,1]$ (up to negligible terms), so that by the fundamental theorem of calculus, we get a comparison between the free energies of $\mathcal{H}(x,z,\Lambda)$ and $\mathcal{H}(x,z',\Lambda')$. Performing the $t$-derivative brings down a Gibbs average of a polynomial in all variables $s_{i_\mu}$, $x_{i_\mu}$, $z_{i_\mu j_\nu}$ and $z'_{i_\mu j_\nu}$. The expectation over $S$, $Z$, $Z'$ of this Gibbs average is simplified using integration by parts over the Gaussian noise $z_{i_\mu j_\nu}$, $z'_{i_\mu j_\nu}$ and Nishimori identities (see e.g. proof of corollary 1.3 for one of them). This algebra leads to
$$\frac{1}{n(L+1)}\frac{d}{dt}\mathbb{E}_{S,Z,Z'}[\ln Z_t] = \frac{1}{4\Delta(L+1)}\,\mathbb{E}_{S,Z,Z'}\big[\langle q^\intercal \Lambda q - q^\intercal \Lambda' q\rangle_t\big] + O(1/(nL)), \qquad (7)$$
where $\langle - \rangle_t$ is the Gibbs average w.r.t the interpolated Hamiltonian, and $q$ is the vector of overlaps $q_\mu := \sum_{i_\mu=1}^n s_{i_\mu} x_{i_\mu}/n$. If we can choose matrices s.t $\Lambda' > \Lambda$, the difference of quadratic forms in the Gibbs bracket is negative and we obtain an inequality in the large size limit. We use this scheme to interpolate between the fully decoupled system $w = 0$ and the coupled one $1 \le w < L/2$, and then between $1 \le w < L/2$ and the fully connected system $w = L/2$. The $w = 0$ system has $\Lambda_{\mu\nu} = \delta_{\mu\nu}$ with eigenvalues $(1, 1, \dots, 1)$. For the $1 \le w < L/2$ system, we take any stochastic translation invariant matrix with non-negative discrete Fourier transform (of its rows): such matrices have an eigenvalue equal to 1 and all others in $[0, 1[$ (the eigenvalues are precisely equal to the discrete Fourier transform). For $w = L/2$ we choose $\Lambda_{\mu\nu} = 1/(L+1)$ which is a projector with eigenvalues $(0, 0, \dots, 1)$. With these choices we deduce that the free energies and MIs are ordered as $I_{w=0,L} + O(1) \ge I_{w,L} + O(1) \ge I_{w=L/2,L} + O(1)$. To conclude the proof we divide by $n(L+1)$ and note that the limits of the leftmost and rightmost MIs are equal, provided the limit exists. Indeed the leftmost term equals $L+1$ times $I(S;W)$ and the rightmost term is the same MI for a system of $n(L+1)$ variables. Existence of the limit follows by subadditivity, proven by a similar interpolation [18].
Proof sketch of lemma 3.2: Fix $\Delta < \Delta_{RS}$. We show that, for $w$ large enough, the coupled SE recursion (4) must converge to a fixed point $E_\mu^\infty \le E_{\rm good}(\Delta)$ for all $\mu$. The main intuition behind the proof is to use a "potential function" whose "energy" can be lowered by small perturbation of a fixed point that would go above $E_{\rm good}(\Delta)$ [16, 17]. The relevant potential function $i_{w,L}(E,\Delta)$ is in fact the replica potential of the coupled system (a generalization of (1)). The stationarity condition for this potential is precisely (4) (without the seeding condition). Monotonicity properties of SE ensure that any fixed point has a "unimodal" shape (and recall that it vanishes for $\mu \in B = \{0, \dots, w-1\} \cup \{L-w, \dots, L\}$). Consider a position $\mu_{\max} \in \{w, \dots, L-w-1\}$ where it is maximal and suppose that $E^*_{\mu_{\max}} > E_{\rm good}(\Delta)$. We associate to the fixed point $E^*$ a so-called saturated profile $E^{\rm s}$ defined on the whole of $\mathbb{Z}$ as follows: $E^{\rm s}_\mu = E_{\rm good}(\Delta)$ for all $\mu \le \mu^*$ where $\mu^*+1$ is the smallest position s.t $E^*_\mu > E_{\rm good}(\Delta)$; $E^{\rm s}_\mu = E^*_\mu$ for $\mu \in \{\mu^*+1, \dots, \mu_{\max}-1\}$; $E^{\rm s}_\mu = E^*_{\mu_{\max}}$ for all $\mu \ge \mu_{\max}$. We show that $E^{\rm s}$ cannot exist for $w$ large enough. To this end define a shift operator by $[S(E^{\rm s})]_\mu := E^{\rm s}_{\mu-1}$. On one hand the shifted profile is a small perturbation of $E^{\rm s}$ which matches a fixed point, except where it is constant, so if we Taylor expand, the first order vanishes and the second and higher orders can be estimated as $|i_{w,L}(S(E^{\rm s});\Delta) - i_{w,L}(E^{\rm s};\Delta)| = O(1/w)$ uniformly in $L$. On the other hand, by explicit cancellation of telescopic sums, $i_{w,L}(S(E^{\rm s});\Delta) - i_{w,L}(E^{\rm s};\Delta) = i_{RS}(E_{\rm good};\Delta) - i_{RS}(E^*_{\mu_{\max}};\Delta)$. Now one can show from monotonicity properties of SE that if $E^*$ is a non trivial fixed point of the coupled SE then $E^*_{\mu_{\max}}$ cannot be in the basin of attraction of $E_{\rm good}(\Delta)$ for the uncoupled SE recursion. Consequently, as can be seen on the plot of $i_{RS}(E;\Delta)$ (e.g. figure 1), we must have $i_{RS}(E^*_{\mu_{\max}};\Delta) \ge i_{RS}(E_{\rm bad};\Delta)$. Therefore $i_{w,L}(S(E^{\rm s});\Delta) - i_{w,L}(E^{\rm s};\Delta) \le -|i_{RS}(E_{\rm bad};\Delta) - i_{RS}(E_{\rm good};\Delta)|$, which is an energy gain independent of $w$, and for large enough $w$ we get a contradiction with the previous estimate coming from the Taylor expansion.
Acknowledgments
J.B and M.D acknowledge funding from the SNSF (grant 200021-156672). Part of this research
received funding from the ERC under the EU's 7th Framework Programme (FP/2007-2013/ERC
Grant Agreement 307087-SPARCS). F.K and L.Z thank the Simons Institute for its hospitality.
References
[1] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265-286, 2006.
[2] I.M. Johnstone and A.Y. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 2012.
[3] Y. Deshpande and A. Montanari. Information-theoretically optimal sparse pca. In IEEE Int. Symp. on Inf. Theory, pages 2197-2201, 2014.
[4] Y. Deshpande, E. Abbe, and A. Montanari. Asymptotic mutual information for the two-groups stochastic block model. arXiv:1507.08685, 2015.
[5] E.J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[6] S. Rangan and A.K. Fletcher. Iterative estimation of constrained rank-one matrices in noise. In IEEE Int. Symp. on Inf. Theory, pages 1246-1250, 2012.
[7] T. Lesieur, F. Krzakala, and L. Zdeborová. Phase transitions in sparse pca. In IEEE Int. Symp. on Inf. Theory, page 1635, 2015.
[8] J. Baik, G. Ben Arous, and S. Péché. Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices. Annals of Probability, page 1643, 2005.
[9] F. Krzakala, J. Xu, and L. Zdeborová. Mutual information in rank-one matrix estimation. arXiv:1603.08447, 2016.
[10] T. Lesieur, F. Krzakala, and L. Zdeborová. Mmse of probabilistic low-rank matrix estimation: Universality with respect to the output channel. In Annual Allerton Conference, 2015.
[11] M. Bayati and A. Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. on Inf. Theory, 57(2):764-785, 2011.
[12] A. Javanmard and A. Montanari. State evolution for general approximate message passing algorithms, with applications to spatial coupling. J. Infor. & Inference, 2:115, 2013.
[13] S.B. Korada and N. Macris. Exact solution of the gauge symmetric p-spin glass model on a complete graph. Journal of Statistical Physics, 136(2):205-230, 2009.
[14] S.H. Hassani, N. Macris, and R. Urbanke. Coupled graphical models and their thresholds. In IEEE Information Theory Workshop (ITW), 2010.
[15] S. Kudekar, T.J. Richardson, and R. Urbanke. Threshold saturation via spatial coupling: Why convolutional ldpc ensembles perform so well over the bec. IEEE Trans. on Inf. Th., 57, 2011.
[16] A. Yedla, Y.Y. Jian, P.S. Nguyen, and H.D. Pfister. A simple proof of maxwell saturation for coupled scalar recursions. IEEE Trans. on Inf. Theory, 60(11):6943-6965, 2014.
[17] J. Barbier, M. Dia, and N. Macris. Threshold saturation of spatially coupled sparse superposition codes for all memoryless channels. CoRR, abs/1603.04591, 2016.
[18] F. Guerra. An introduction to mean field spin glass theory: methods and results. Mathematical Statistical Physics, pages 243-271, 2005.
[19] A.A. Amini and M.J. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. In IEEE Int. Symp. on Inf. Theory, page 2454, 2008.
[20] Q. Berthet and P. Rigollet. Computational lower bounds for sparse pca. arXiv:1304.0828, 2013.
[21] A. d'Aspremont, L. El Ghaoui, M.I. Jordan, and G.R.G. Lanckriet. A direct formulation for sparse pca using semidefinite programming. SIAM Review, 49(3):434, 2007.
[22] Y. Deshpande and A. Montanari. Finding hidden cliques of size $\sqrt{N/e}$ in nearly linear time. Foundations of Computational Mathematics, 15(4):1069-1128, 2015.
[23] D. Guo, S. Shamai, and S. Verdú. Mutual information and minimum mean-square error in gaussian channels. IEEE Trans. on Inf. Theory, 51, 2005.
5,948 | 6,381 | A Communication-Efficient Parallel Algorithm for
Decision Tree
Qi Meng1,*, Guolin Ke2,*, Taifeng Wang2, Wei Chen2, Qiwei Ye2, Zhi-Ming Ma3, Tie-Yan Liu2
1 Peking University; 2 Microsoft Research; 3 Chinese Academy of Mathematics and Systems Science
1 [email protected]; 2 {Guolin.Ke, taifengw, wche, qiwye, tie-yan.liu}@microsoft.com; 3 [email protected]
Abstract
Decision tree (and its extensions such as Gradient Boosting Decision Trees and
Random Forest) is a widely used machine learning algorithm, due to its practical
effectiveness and model interpretability. With the emergence of big data, there is
an increasing need to parallelize the training process of decision tree. However,
most existing attempts along this line suffer from high communication costs. In
this paper, we propose a new algorithm, called Parallel Voting Decision Tree
(PV-Tree), to tackle this challenge. After partitioning the training data onto a
number of (e.g., M ) machines, this algorithm performs both local voting and
global voting in each iteration. For local voting, the top-k attributes are selected
from each machine according to its local data. Then, globally top-2k attributes
are determined by a majority voting among these local candidates. Finally, the
full-grained histograms of the globally top-2k attributes are collected from local
machines in order to identify the best (most informative) attribute and its split point.
PV-Tree can achieve a very low communication cost (independent of the total
number of attributes) and thus can scale out very well. Furthermore, theoretical
analysis shows that this algorithm can learn a near optimal decision tree, since it
can find the best attribute with a large probability. Our experiments on real-world
datasets show that PV-Tree significantly outperforms the existing parallel decision
tree algorithms in the trade-off between accuracy and efficiency.
1 Introduction
Decision tree [16] is a widely used machine learning algorithm, since it is practically effective and
the rules it learns are simple and interpretable. Based on decision tree, people have developed other
algorithms such as Random Forest (RF) [3] and Gradient Boosting Decision Trees (GBDT) [7],
which have demonstrated very promising performances in various learning tasks [5].
In recent years, with the emergence of very big training data (which cannot be held in one single
machine), there has been an increasing need of parallelizing the training process of decision tree. To
this end, there have been two major categories of attempts.²
* Denotes equal contribution. This work was done when the first author was visiting Microsoft Research Asia.
² There is another category of works that parallelize the tasks of sub-tree training once a node is split [15], which require the training data to be moved from machine to machine many times and are thus inefficient. Moreover, there are also some other works accelerating decision tree construction by using pre-sorting [13] [19] [11] and binning [17] [8] [10], or employing a shared-memory-processors approach [12] [1]. However, they are out of our scope.
Attribute-parallel: Training data are vertically partitioned according to the attributes and allocated to different machines; then in each iteration, the machines work on non-overlapping sets of attributes in parallel in order to find the best attribute and its split point (suppose this best attribute is located on the i-th machine) [19] [11] [20]. This process is very efficient in terms of communication. However, after that, the re-partition of the data on machines other than the i-th machine will induce very high communication costs (proportional to the number of data samples). This is because those machines have no information about the best attribute at all, and in order to fulfill the re-partitioning, they must retrieve the partition information of every data sample from the i-th machine. Furthermore, as each worker still holds the full sample set, the partition process is not parallelized, which slows down the algorithm.
Data-parallel: Training data are horizontally partitioned according to the samples and allocated to
different machines. Then the machines communicate with each other the local histograms of all
attributes (according to their own data samples) in order to obtain the global attribute distributions and
identify the best attribute and split point [12] [14]. It is clear that the corresponding communication
cost is very high and proportional to the total number of attributes and histogram size. To reduce the
cost, in [2] and [21] [10], it was proposed to exchange quantized histograms between machines when
estimating the global attribute distributions. However, this does not really solve the problem: the communication cost is still proportional to the total number of attributes, not to mention that the quantization may hurt the accuracy.
In this paper, we proposed a new data-parallel algorithm for decision tree, called Parallel Voting
Decision Tree (PV-Tree), which can achieve much better balance between communication efficiency
and accuracy. The key difference between conventional data-parallel decision tree algorithm and
PV-Tree lies in that the former only trusts the globally aggregated histogram information, while the
latter leverages the local statistical information contained in each machine through a two-stage voting
process, thus can significantly reduce the communication cost. Specifically, PV-Tree contains the
following steps in each iteration. 1) Local voting. On each machine, we select the top-k attributes
based on its local data according to the informativeness scores (e.g., risk reduction for regression,
and information gain for classification). 2) Global voting. We determine global top-2k attributes
by a majority voting among the local candidates selected in the previous step. That is, we rank the
attributes according to the number of local machines who select them, and choose the top 2k attributes
from the ranked list. 3) Best attribute identification. We collect the full-grained histograms of the
globally top-2k attributes from local machines in order to compute their global distributions. Then
we identify the best attribute and its split point according to the informativeness scores calculated
from the global distributions.
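As a concrete illustration of these three steps, here is a minimal single-process simulation; all names are our own, and per-shard gain scores stand in for the shard-local histograms the real algorithm would build. In PV-Tree proper, stage 3 merges the full-grained histograms of the 2k surviving attributes and recomputes the informativeness score on the merged histograms; summing shard scores below is only a stand-in for that.

# Sketch (hypothetical helpers): PV-Tree's two-stage voting for one node.
import numpy as np
from collections import Counter

def local_top_k(scores, k):
    # stage 1: indices of the k most informative attributes on one machine
    return np.argsort(scores)[::-1][:k]

def pv_tree_select(shard_scores, k):
    local_votes = [local_top_k(s, k) for s in shard_scores]
    # stage 2: global majority voting -> top-2k attributes by vote count
    votes = Counter(int(a) for cand in local_votes for a in cand)
    global_cands = [a for a, _ in votes.most_common(2 * k)]
    # stage 3: aggregate full statistics of only these 2k attributes
    return max(global_cands, key=lambda a: sum(s[a] for s in shard_scores))

rng = np.random.default_rng(0)
shards = []
for _ in range(8):                     # 8 "machines", 1000 attributes
    s = rng.random(1000)
    s[42] += 1.0                       # attribute 42 is the most informative everywhere
    shards.append(s)
print(pv_tree_select(shards, k=5))     # -> 42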
It is easy to see that PV-Tree algorithm has a very low communication cost. It does not need to
communicate the information of all attributes, instead, it only communicates indices of the locally
top-k attributes per machine and the histograms of the globally top-2k attributes. In other words, its
communication cost is independent of the total number of attributes. This makes PV-Tree highly
scalable. On the other hand, it can be proven that PV-Tree can find the best attribute with a large
probability, and the probability will approach 1 regardless of k when the training data become
sufficiently large. In contrast, the data-parallel algorithm based on quantized histogram could fail in
finding the best attribute, since the bias introduced by histogram quantization cannot be reduced to
zero even if the training data are sufficiently large.
We have conducted experiments on real-world datasets to evaluate the performance of PV-Tree. The experimental results show that PV-Tree has consistently higher accuracy and faster training speed than all the baselines we implemented. We further conducted experiments to evaluate the performance of PV-Tree in different settings (e.g., with different numbers of machines and different values of k). The experimental results are in accordance with our theoretical analysis.
2 Decision Tree

Suppose the training data set $D_n = \{(x_{i,j}, y_i);\ i = 1, \cdots, n,\ j = 1, \cdots, d\}$ is independently sampled from $\prod_{j=1}^{d} \mathcal{X}_j \times \mathcal{Y}$ according to $(\prod_{j=1}^{d} P_{X_j}) P_{Y|X}$. The goal is to learn a regression or classification model $f \in \mathcal{F}: \prod_{j=1}^{d} \mathcal{X}_j \to \mathcal{Y}$ by minimizing loss functions on the training data, which hopefully achieves accurate predictions on unseen test data.
Decision trees [16, 18] are a widely used model for both regression [4] and classification [18]. A typical decision tree algorithm is described in Alg. 1. As can be seen, the tree growth procedure is recursive, and the nodes do not stop growing until they reach the stopping criteria. There are two important functions in the algorithm: FindBestSplit returns the best split point {attribute, threshold} of a node, and Split splits the training data according to the best split point. The details of FindBestSplit are given in Alg. 2: first, histograms of the attributes are constructed (for continuous attributes, one usually converts their numerical values to finite bins for ease of computation) by going over all training data on the current node; then all bins (split points) are traversed from left to right, and leftSum and rightSum are used to accumulate the sums of the left and right parts of the split point, respectively. When selecting the best split point, an informativeness measure is adopted. The widely used informativeness measures are information gain and variance gain for classification and regression, respectively.

Algorithm 1 BuildTree
Input: Node N, Dataset D
if StoppingCriteria(D) then
    N.output = Prediction(D)
else
    bestSplit = FindBestSplit(D)
    (DL, DR) = Split(D, N, bestSplit)
    BuildTree(N.leftChild, DL)
    BuildTree(N.rightChild, DR)
end if
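To make the recursion concrete, here is a minimal Python sketch of Alg. 1. The helpers find_best_split, split, and prediction are placeholders for the components described above; they are illustrative names, not the paper's actual implementation.

# Minimal sketch of Algorithm 1 (BuildTree); helper names are assumed.
class Node:
    def __init__(self):
        self.output = None      # prediction stored at a leaf
        self.best_split = None  # (attribute, threshold) at an internal node
        self.left = None
        self.right = None

def build_tree(node, data, max_depth, depth=0):
    # Stopping criteria: depth limit or too little data to split further.
    if depth >= max_depth or len(data) < 2:
        node.output = prediction(data)   # e.g., mean label for regression
        return
    node.best_split = find_best_split(data)          # see Alg. 2
    data_left, data_right = split(data, node.best_split)
    node.left, node.right = Node(), Node()
    build_tree(node.left, data_left, max_depth, depth + 1)
    build_tree(node.right, data_right, max_depth, depth + 1)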
Definition 2.1 [6][16] In classification, the information gain (IG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the entropy reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$IG_j(w; O) = H_j - (H_j^l(w) + H_j^r(w)) = P(w_1 \le X_j \le w_2) H(Y \mid w_1 \le X_j \le w_2) - P(w_1 \le X_j < w) H(Y \mid w_1 \le X_j < w) - P(w \le X_j \le w_2) H(Y \mid w \le X_j \le w_2),$$
where $H(\cdot \mid \cdot)$ denotes the conditional entropy.

In regression, the variance gain (VG) for attribute $X_j \in [w_1, w_2]$ at node $O$ is defined as the variance reduction of the output $Y$ after splitting node $O$ by attribute $X_j$ at $w$, i.e.,
$$VG_j(w; O) = \sigma_j - (\sigma_j^l(w) + \sigma_j^r(w)) = P(w_1 \le X_j \le w_2)\,\mathrm{Var}[Y \mid w_1 \le X_j \le w_2] - P(w_1 \le X_j < w)\,\mathrm{Var}[Y \mid w_1 \le X_j < w] - P(w \le X_j \le w_2)\,\mathrm{Var}[Y \mid w \le X_j \le w_2],$$
where $\mathrm{Var}[\cdot \mid \cdot]$ denotes the conditional variance.
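As an illustration of how such scores are computed from histograms in practice, the sketch below evaluates the variance-gain criterion for one attribute from the standard per-bin sufficient statistics (count, label sum, label sum of squares). This is a simplified empirical version of the definition above, not the exact routine used in the paper.

import numpy as np

def best_split_by_variance_gain(bin_counts, bin_sums, bin_sq_sums):
    """Scan bins left to right; return (best_bin, best_gain).

    Each array has one entry per histogram bin: the number of samples,
    the sum of their labels, and the sum of squared labels."""
    total_n, total_s, total_q = bin_counts.sum(), bin_sums.sum(), bin_sq_sums.sum()

    def sse(n, s, q):   # sum of squared errors around the mean
        return q - s * s / n if n > 0 else 0.0

    best_bin, best_gain = None, -np.inf
    ln = ls = lq = 0.0
    for b in range(len(bin_counts) - 1):   # split point after bin b
        ln += bin_counts[b]; ls += bin_sums[b]; lq += bin_sq_sums[b]
        rn, rs, rq = total_n - ln, total_s - ls, total_q - lq
        gain = sse(total_n, total_s, total_q) - sse(ln, ls, lq) - sse(rn, rs, rq)
        if gain > best_gain:
            best_bin, best_gain = b, gain
    return best_bin, best_gain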
3 PV-Tree

In this section, we describe our proposed PV-Tree algorithm for parallel decision tree learning, which has a very low communication cost and can achieve a good trade-off between communication efficiency and learning accuracy.
PV-Tree is a data-parallel algorithm, which also partitions the training data onto M machines, just like in [2] [21]. However, its design principle is very different. In [2][21], one does not trust the local information about the attributes on each machine, and the best attribute and split point are decided only based on the aggregated global histograms of the attributes. In contrast, in PV-Tree, we leverage the meaningful statistical information about the attributes contained in each local machine, and make decisions through a two-stage (local and then global) voting process. In this way, we can significantly reduce the communication cost, since we do not need to communicate the histogram information of all the attributes across machines; instead, we only communicate the histograms of those attributes that survive the voting process.
The flow of the PV-Tree algorithm is very similar to that of the standard decision tree, except for the function FindBestSplit. We therefore only give the new implementation of this function in Alg. 3, which contains the following three steps:
Local Voting: We select the top-k attributes for each machine based on its local data set (according to the informativeness scores, e.g., information gain for classification and variance reduction for regression), and then exchange the indices of the selected attributes among the machines. Please note that the communication cost for this step is very low, because only the indices of a small number of (i.e., $k \times M$) attributes need to be communicated.

Global Voting: We determine the globally top-2k attributes by a majority voting among all locally selected attributes from the previous step. That is, we rank the attributes according to the number of local machines that select them, and choose the top-2k attributes from the ranked list. It can be proven that when the local data are big enough to be statistically representative, there is a very high probability that the top-2k attributes obtained by this majority voting will contain the globally best attribute. Please note that this step does not induce any communication cost.

Best Attribute Identification: We collect the full-grained histograms of the globally top-2k attributes from the local machines in order to compute their global distributions. Then we identify the best attribute and its split point according to the informativeness scores calculated from the global distributions. Please note that the communication cost for this step is also low, because we only need to communicate the histograms of 2k pre-selected attributes (but not all attributes; see Footnote 3). As a result, the PV-Tree algorithm scales very well, since its communication cost is independent of both the total number of attributes and the total number of samples in the dataset.
In the next section, we provide a theoretical analysis of the accuracy guarantee of the PV-Tree algorithm.
Algorithm 2 FindBestSplit
Input: Dataset D
for all X in D.Attribute do
    ▷ Construct Histogram
    H = new Histogram()
    for all x in X do
        H.binAt(x.bin).Put(x.label)
    end for
    ▷ Find Best Split
    leftSum = new HistogramSum()
    for all bin in H do
        leftSum = leftSum + H.binAt(bin)
        rightSum = H.AllSum - leftSum
        split.gain = CalSplitGain(leftSum, rightSum)
        bestSplit = ChoiceBetterOne(split, bestSplit)
    end for
end for
return bestSplit

Algorithm 3 PV-Tree_FindBestSplit
Input: Dataset D
localHistograms = ConstructHistograms(D)
▷ Local Voting
splits = []
for all H in localHistograms do
    splits.Push(H.FindBestSplit())
end for
localTop = splits.TopKByGain(K)
▷ Gather all candidates
allCandidates = AllGather(localTop)
▷ Global Voting
globalTop = allCandidates.TopKByMajority(2*K)
▷ Merge global histograms
globalHistograms = Gather(globalTop, localHistograms)
bestSplit = globalHistograms.FindBestSplit()
return bestSplit
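A compact Python sketch of the voting logic in Alg. 3 follows. The communication layer is abstracted as an all_gather function (in the style of MPI collectives), and local_gains is assumed to hold the best local split gain per attribute; both are placeholders rather than the paper's actual API.

from collections import Counter

def pv_tree_find_best_split(local_gains, k, all_gather):
    """local_gains: dict attribute -> best local split gain on this machine.
    all_gather(x) returns the list of x values from all M machines."""
    # Local voting: top-k attributes by local informativeness score.
    local_top = sorted(local_gains, key=local_gains.get, reverse=True)[:k]

    # Global voting: rank attributes by how many machines voted for them,
    # then keep the top-2k candidates.
    votes = Counter()
    for candidates in all_gather(local_top):
        votes.update(candidates)
    global_top = [a for a, _ in votes.most_common(2 * k)]

    # Best attribute identification: merge full-grained histograms of the
    # 2k candidates across machines, then pick the best global split.
    merged = merge_histograms(global_top, all_gather)   # assumed helper
    return find_best_split_from_histograms(merged)      # assumed helper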
4 Theoretical Analysis
In this section, we conduct a theoretical analysis of the proposed PV-Tree algorithm. Specifically, we prove that PV-Tree can select the best (most informative) attribute with large probability, for both classification and regression. In order to better present the theorem, we first introduce some notation (see Footnote 4). In classification, we denote $IG_j = \max_w IG_j(w)$, and rank $\{IG_j;\ j \in [d]\}$ from large to small as $\{IG_{(1)}, \ldots, IG_{(d)}\}$. We call the attribute $j_{(1)}$ the most informative attribute. Then, we denote $l_{(j)}(k) = \frac{|IG_{(1)} - IG_{(j)}|}{2}$, $\forall j \ge k+1$, to indicate the distance between the largest and the $k$-th largest IG. In regression, $l_{(j)}(k)$ is defined in the same way, except replacing IG with VG.
Theorem 4.1 Suppose we have $M$ local machines, and each one has $n$ training data. PV-Tree at an arbitrary tree node with local voting size $k$ and global majority voting size $2k$ will select the most informative attribute with probability at least
$$\sum_{m=[M/2+1]}^{M} C_M^m \Big(1 - \sum_{j=k+1}^{d} \delta_{(j)}(n,k)\Big)^{m} \Big(\sum_{j=k+1}^{d} \delta_{(j)}(n,k)\Big)^{M-m},$$
where $\delta_{(j)}(n,k) = \alpha_{(j)}(n) + 4e^{-c_{(j)} n (l_{(j)}(k))^2}$ with $\lim_{n\to\infty} \alpha_{(j)}(n) = 0$ and $c_{(j)}$ a constant.
Due to space restrictions, we briefly illustrate the proof idea here and leave the detailed proof to the supplementary materials. Our proof contains two parts. (1) For local voting, we find a sufficient condition that guarantees a similar ranking of attributes ordered by information gain computed on local data and on the full data. Then, we derive a lower bound on the probability that the sufficient condition holds by using concentration inequalities. (2) For global voting, we select the top-2k attributes. It is easy to prove that we can select the most informative attribute as long as no less than $[M/2 + 1]$ of all machines select it (see Footnote 5). Therefore, we can calculate the probability in the theorem using the binomial distribution.

Footnote 3: As indicated by our theoretical analysis and empirical study (see the next sections), a very small k already leads to good performance in the PV-Tree algorithm.
Footnote 4: Since all analyses are for one arbitrarily fixed node O, we omit the notation O here.
Regarding Theorem 4.1, we have the following discussions on the factors that impact the lower bound on the probability of selecting the best attribute.

1. Size of local training data n: Since $\delta_{(j)}(n, k)$ decreases with $n$, with more and more local training data, the lower bound will increase. That means, if we have sufficiently large data, PV-Tree will select the best attribute with probability almost 1.

2. Input dimension d: It is clear that for fixed local voting size $k$ and global voting size $2k$, the lower bound decreases as $d$ increases. Consider the case that the number of attributes becomes 100 times larger. Then the number of terms in the summation (from $\sum_{j=k+1}^{d}$ to $\sum_{j=k+1}^{100d}$) is roughly 100 times larger for a relatively small $k$. But there must be many attributes far away from attribute $(1)$, for which $l_{(j)}(k)$ is large and $\delta_{(j)}(n, k)$ is therefore small. Thus we can say that the bound in the theorem is not sensitive to $d$.

3. Number of machines M: We assume the whole training data size $N$ is fixed and the local data size is $n = N/M$. Then on the one hand, as $M$ increases, $n$ decreases, and therefore the lower bound will decrease due to larger $\delta_{(j)}(n, k)$. On the other hand, because the function $\sum_{m=[M/2+1]}^{M} C_M^m p^m (1-p)^{M-m}$ approaches 1 as $M$ increases when $p > 0.5$ [23], the lower bound will increase. In other words, the number of machines $M$ has a dual effect on the lower bound: with more machines, the local data size becomes smaller, which reduces the accuracy of local voting; however, it also leads to more copies of local votes and thus increases the reliability of global voting. Therefore, in terms of accuracy, there should be an optimal number of machines given fixed-size training data (see Footnote 6).

4. Local/global voting sizes k/2k: The local/global voting sizes $k/2k$ influence $l_{(j)}(k)$ and the number of terms in the summation in the lower bound. As $k$ increases, $l_{(j)}(k)$ increases and the number of terms in the summation decreases, so the lower bound increases. But increasing $k$ brings more communication and computation time. Therefore, we had better select a moderate $k$. For some distributions, especially distributions over high-dimensional spaces, $l_{(j)}(k)$ is less sensitive to $k$, and then we can choose a relatively small $k$ to save communication time.
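To see the dual effect of $M$ numerically, one can evaluate the binomial tail in the bound directly. The sketch below treats the per-machine failure probability $q = \sum_j \delta_{(j)}(n, k)$ as a given number; the values assigned to q here are assumed toy values chosen only to mimic a failure rate that grows as local data shrinks.

from math import comb

def majority_vote_bound(M, q):
    """Probability that more than half of M machines vote correctly,
    when each succeeds independently with probability 1 - q."""
    return sum(comb(M, m) * (1 - q) ** m * q ** (M - m)
               for m in range(M // 2 + 1, M + 1))

# Toy illustration (assumed values): q grows as local data size n = N/M shrinks.
N = 10**6
for M in (2, 8, 32, 128):
    q = min(0.49, 5000.0 * M / N)   # hypothetical q proportional to 1/n
    print(M, round(majority_vote_bound(M, q), 4))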
As a comparison, we also prove a theorem for the data-parallel algorithm based on quantized histograms, as follows (please refer to the supplementary material for its proof). The theorem basically tells us that the bias introduced by histogram quantization cannot be reduced to zero even if the training data are sufficiently large, and as a result the corresponding algorithm could fail to find the best attribute (see Footnote 7). This could be a critical weakness of this algorithm in the big data scenario.

Theorem 4.2 We denote the quantized histogram with $b$ bins of the underlying distribution $P$ as $P^b$, that of the empirical distribution $P_n$ as $P_n^b$, and the information gains of $X_j$ calculated under the distributions $P^b$ and $P_n^b$ as $IG_j^b$ and $IG_{n,j}^b$, respectively, and let $f_j(b) \triangleq |IG_j - IG_j^b|$. Then, for $\epsilon \le \min_{j=1,\cdots,d} f_j(b)$, with probability at least $\delta_j(n, f_j(b) - \epsilon)$, we have $|IG_{n,j}^b - IG_j| > \epsilon$.
5 Experiments

In this section, we report the experimental comparisons between PV-Tree and baseline algorithms. We used two data sets, one for learning to rank (LTR) and the other for ad click prediction (CTR); see Table 1 for details (and Footnote 8). For LTR, we extracted about 1200 numerical attributes per data sample and used NDCG [5] as the evaluation measure. For CTR, we extracted about 800 numerical attributes [9] and used AUC as the evaluation measure.

Footnote 5: In fact, the global voting size can be $\alpha k$ with $\alpha > 1$. Then the sufficient condition becomes that no less than $[M/\alpha + 1]$ of all machines select the most informative attribute.
Footnote 6: Please note that using more machines will reduce local computing time, thus the optimal value of the machine number may be larger in terms of speed-up.
Footnote 7: The theorem for regression holds in the same way, with IG replaced by VG.
Footnote 8: We use private data in the LTR experiments and data from KDD Cup 2012 track 2 in the CTR experiments.
Table 1: Datasets

Task   #Train   #Test   #Attribute   Source
LTR    11M      1M      1200         Private
CTR    235M     31M     800          KDD Cup

Table 2: Convergence time (seconds)

Task   Sequential   Data-Parallel   Attribute-Parallel   PV-Tree
LTR    28690        32260           14660                5825
CTR    154112       9209            26928                5349
According to recent industrial practices, a single decision tree might not be strong enough to learn an effective model for complicated tasks like ranking and click prediction. Therefore, people usually use decision-tree-based boosting algorithms (e.g., GBDT) to perform such tasks. In this paper, we also use GBDT as a platform to examine the efficiency and effectiveness of decision tree parallelization. That is, we used PV-Tree or other baseline algorithms to parallelize the decision tree construction process in each iteration of GBDT, and compared their performance. Our experimental environment is a cluster of servers (each with 12 CPU cores and 32 GB RAM) inter-connected with 1 Gbps Ethernet. For the experiments on LTR, we used 8 machines for parallel training; for the experiments on CTR, we used 32 machines, since the dataset is much larger.

5.1 Comparison with Other Parallel Decision Trees

For comparison with PV-Tree, we implemented an attribute-parallel algorithm, in which a binary vector is used to indicate the split information and is exchanged across machines. In addition, we implemented a data-parallel algorithm according to [2, 21], which can communicate both full-grained histograms and quantized histograms. All parallel algorithms and the sequential (single-machine) version are compared together.
The experimental results can be found in Figures 1a and 1b. From these figures, we have the following observations:

For LTR, since the number of data samples is relatively small, the communication of the split information about the samples does not take too much time. As a result, the attribute-parallel algorithm appears to be efficient. Since most attributes take numerical values in this dataset, the full-grained histogram has quite a lot of bins. Therefore, the data-parallel algorithm that communicates full-grained histograms is quite slow, even slower than the sequential algorithm. When reducing the bins in the histogram to 10%, the data-parallel algorithm becomes much more efficient; however, its convergence point is not good (consistent with our theory: the bias in quantized histograms leads to an accuracy drop).

For CTR, the attribute-parallel algorithm becomes very slow, since the number of data samples is very large. In contrast, many attributes in CTR take binary or discrete values, which makes the full-grained histogram have a limited number of bins. As a result, the data-parallel algorithm with full-grained histograms is faster than the sequential algorithm. The data-parallel algorithm with quantized histograms is even faster; however, its convergence point is once again not very good.
PV-Tree reaches the best point achieved by the sequential algorithm within the shortest time in both the LTR and CTR tasks. For a more quantitative comparison of efficiency, we list in Table 2 the time for each algorithm (8 machines for LTR and 32 machines for CTR) to reach the convergent accuracy of the sequential algorithm. From the table, we can see that, for LTR, PV-Tree took 5825 seconds, while the data-parallel algorithm (with full-grained histograms; see Footnote 9) and the attribute-parallel algorithm took 32260 and 14660 seconds, respectively. Compared with the sequential algorithm (which took 28690 seconds to converge), PV-Tree achieves a 4.9x speed-up on 8 machines. For CTR, PV-Tree took 5349 seconds, while the data-parallel algorithm (with full-grained histograms) and the attribute-parallel algorithm took 9209 and 26928 seconds, respectively. Compared with the sequential algorithm (which took 154112 seconds to converge), PV-Tree achieves a 28.8x speed-up on 32 machines.
We also conducted independent experiments to get a clear comparison of the communication cost of the different parallel algorithms given a typical big-data workload setting. The result is listed in Table 3. We find that the cost of the attribute-parallel algorithm depends on the size of the training data N, and the cost of the data-parallel algorithm depends on the number of attributes d. In contrast, the cost of PV-Tree is constant.

Footnote 9: The data-parallel algorithm with 10% bins could not achieve the same accuracy as the sequential algorithm, and thus we did not put it in the table.
Table 3: Comparison of communication cost, training one tree with depth 6.

Data size        Attribute-Parallel   Data-Parallel   PV-Tree
N=1B,   d=1200   750MB                424MB           10MB
N=100M, d=1200   75MB                 424MB           10MB
N=1B,   d=200    750MB                70MB            10MB
N=100M, d=200    75MB                 70MB            10MB

Table 4: Convergence time (seconds) / accuracy w.r.t. the global voting parameter k for PV-Tree.

Setting      k=1            k=5            k=10           k=20            k=40
LTR, M=4     11256/0.7905   9906/0.7909    9065/0.7909    8323/0.7909     9529/0.7909
LTR, M=16    8211/0.7882    8131/0.7893    8496/0.7897    10320/0.7906    12529/0.7909
CTR, M=16    9131/0.7535    9947/0.7538    9912/0.7538    10309/0.7538    10877/0.7538
CTR, M=128   1806/0.7533    1745/0.7536    2077/0.7537    2133/0.7537     2564/0.7538

Figure 1: Performances of different algorithms. (a) LTR, 8 machines; (b) CTR, 32 machines.
5.2 Tradeoff between Speed-up and Accuracy in PV-Tree

In the previous subsection, we showed that PV-Tree is more efficient than the other algorithms. Here we take a deeper dive into PV-Tree to see how its key parameters affect the trade-off between efficiency and accuracy. According to Theorem 4.1, the following two parameters are critical to PV-Tree: the number of machines M and the voting size k.
5.2.1 On Different Numbers of Machines

When more machines join the distributed training process, the data throughput grows larger, but the amortized training data on each machine gets smaller. When the data size on each machine becomes too small, there is no guarantee on the accuracy of the voting procedure, according to our theorem. So it is important to set the number of machines appropriately.
To gain more insight into this, we conducted some additional experiments, whose results are shown in Figures 2a and 2b. From these figures, we can see that for LTR, when the number of machines grows from 2 to 8, the training process is significantly accelerated. However, when the number goes up to 16, the convergence speed is even lower than that of using 8 machines. Similar results can be observed for CTR. These observations are consistent with our theoretical findings. Please note that PV-Tree is designed for the big data scenario. Only when the entire training data are huge (and thus the distribution of the training data on each local machine can be similar to that of the entire training data) can the full power of PV-Tree be realized. Otherwise, we need to have a reasonable expectation of the speed-up, and should choose a smaller number of machines to parallelize the training.
5.2.2 On Different Sizes of Voting
In PV-Tree, we have a parameter k, which controls the number of top attributes selected during local and global voting. Intuitively, a larger k will increase the probability of finding the globally best attribute among the local candidates; however, it also means a higher communication cost. According to our theorem, the choice of k should depend on the size of the local training data. If the size of the local training data is large, the locally best attributes will be similar to the globally best one. In this case, one can safely choose a small value of k. Otherwise, we should choose a relatively larger k. To gain more insight into this, we conducted some experiments, whose results are shown in Table 4, where M refers to the number of machines. From the table, we have the following observations. First, for both cases, in order to achieve good accuracy, one does not need to choose a large k; already for k ≤ 40 the accuracy is very good. Second, we find that for the cases using a small number of machines, k can be set to an even smaller value, e.g., k = 5. This is because, given fixed-size training data, when using fewer machines, the size of training data per machine becomes larger, and thus a smaller k can already guarantee the approximation accuracy.

Figure 2: PV-Tree on different numbers of machines. (a) LTR; (b) CTR.
Figure 3: Comparison with parallel boosting algorithms. (a) LTR, 8 machines; (b) CTR, 32 machines.
5.3 Comparison with Other Parallel GBDT Algorithms
While we mainly focused on how to parallelize the decision tree construction process inside GBDT in the previous subsections, one could also parallelize GBDT in other ways. For example, in [22, 20], each machine learns its own decision tree separately, without communication. After that, these decision trees are aggregated by means of winner-takes-all or output ensembles. Although these works are not the focus of our paper, it is still interesting to compare with them.

For this purpose, we implemented the algorithms proposed in [22] and [20]. For ease of reference, we denote them as Svore and Yu, respectively. Their performances are shown in Figures 3a and 3b. From the figures, we can see that PV-Tree outperforms both Svore and Yu: although these two algorithms converge at a similar speed to PV-Tree, they converge to much worse points. To our understanding, these two algorithms lack solid theoretical guarantees. Since the candidate decision trees are trained separately and independently, without the necessary information exchange, they may have non-negligible bias, which leads to an accuracy drop in the end. In contrast, we can clearly characterize the theoretical properties of PV-Tree, and use it in an appropriate setting so as to avoid an observable accuracy drop.

To sum up all the experiments, we can see that with appropriately set parameters, PV-Tree achieves a very good trade-off between efficiency and accuracy, and outperforms both the other parallel decision tree algorithms and the algorithms designed specifically for GBDT parallelization.
6 Conclusions
In this paper, we proposed a novel parallel algorithm for decision trees, called Parallel Voting Decision Tree (PV-Tree), which can achieve high accuracy at a very low communication cost. Experiments on both ranking and ad click prediction indicate that PV-Tree has an advantage over a number of baseline algorithms. As for future work, we plan to generalize the idea of PV-Tree to parallelize other machine learning algorithms. Furthermore, we will open-source the PV-Tree algorithm to benefit more researchers and practitioners.
References
[1] Rakesh Agrawal, Ching-Tien Ho, and Mohammed J. Zaki. Parallel classification for data mining in a shared-memory multiprocessor system, 2001. US Patent 6,230,151.
[2] Yael Ben-Haim and Elad Tom-Tov. A streaming parallel decision tree algorithm. Journal of Machine Learning Research, 11:849–872, 2010.
[3] Leo Breiman. Random forests. Machine Learning, 45:5–32. Springer, 2001.
[4] Leo Breiman, Jerome Friedman, Charles J. Stone, and Richard A. Olshen. Classification and Regression Trees. CRC Press, 1984.
[5] Christopher J. C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Learning, volume 11, pages 23–581, 2010.
[6] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
[7] Jerome H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189–1232. JSTOR, 2001.
[8] Johannes Gehrke, Venkatesh Ganti, Raghu Ramakrishnan, and Wei-Yin Loh. BOAT: optimistic decision tree construction. In ACM SIGMOD Record, volume 28, pages 169–180. ACM, 1999.
[9] Michael Jahrer, A. Toscher, J. Y. Lee, J. Deng, H. Zhang, and J. Spoelstra. Ensemble of collaborative filtering and feature engineered models for click through rate prediction. In KDDCup Workshop, 2012.
[10] Ruoming Jin and Gagan Agrawal. Communication and memory efficient parallel decision tree construction. In SDM, pages 119–129. SIAM, 2003.
[11] Mahesh V. Joshi, George Karypis, and Vipin Kumar. ScalParC: A new scalable and efficient parallel classification algorithm for mining large datasets. In Parallel Processing Symposium, 1998 (IPPS/SPDP 1998), pages 573–579. IEEE, 1998.
[12] Richard Kufrin. Decision trees on parallel processors. In Machine Intelligence and Pattern Recognition, volume 20, pages 279–306. Elsevier, 1997.
[13] Manish Mehta, Rakesh Agrawal, and Jorma Rissanen. SLIQ: A fast scalable classifier for data mining. In Advances in Database Technology (EDBT '96), pages 18–32. Springer, 1996.
[14] Biswanath Panda, Joshua S. Herbach, Sugato Basu, and Roberto J. Bayardo. PLANET: massively parallel learning of tree ensembles with MapReduce. In Proceedings of the VLDB Endowment, volume 2, pages 1426–1437. VLDB Endowment, 2009.
[15] Robert Allan Pearson. A coarse grained parallel induction heuristic. University College, University of New South Wales, Department of Computer Science, Australian Defence Force Academy, 1993.
[16] J. Ross Quinlan. Induction of decision trees. Machine Learning, 1:81–106. Springer, 1986.
[17] Sanjay Ranka and V. Singh. CLOUDS: A decision tree classifier for large datasets. In Knowledge Discovery and Data Mining, pages 2–8, 1998.
[18] S. Rasoul Safavian and David Landgrebe. A survey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 21(3):660–674, 1991.
[19] John Shafer, Rakesh Agrawal, and Manish Mehta. SPRINT: A scalable parallel classifier for data mining. In Proc. 1996 Int. Conf. Very Large Data Bases, pages 544–555. Citeseer, 1996.
[20] Krysta M. Svore and C. J. Burges. Large-scale learning to rank using boosted decision trees. Scaling Up Machine Learning: Parallel and Distributed Approaches, 2, 2011.
[21] Stephen Tyree, Kilian Q. Weinberger, Kunal Agrawal, and Jennifer Paykin. Parallel boosted regression trees for web search ranking. In Proceedings of the 20th International Conference on World Wide Web, pages 387–396. ACM, 2011.
[22] C. Yu and D. B. Skillicorn. Parallelizing boosting and bagging. Queen's University, Kingston, Canada, Tech. Rep., 2001.
[23] Zhi-Hua Zhou. Ensemble Methods: Foundations and Algorithms. CRC Press, 2012.
Leveraging Sparsity for Efficient
Submodular Data Summarization
Erik M. Lindgren, Shanshan Wu, Alexandros G. Dimakis
The University of Texas at Austin
Department of Electrical and Computer Engineering
[email protected], [email protected], [email protected]
Abstract
The facility location problem is widely used for summarizing large datasets and
has additional applications in sensor placement, image retrieval, and clustering.
One difficulty of this problem is that submodular optimization algorithms require
the calculation of pairwise benefits for all items in the dataset. This is infeasible for
large problems, so recent work proposed to only calculate nearest neighbor benefits.
One limitation is that several strong assumptions were invoked to obtain provable
approximation guarantees. In this paper we establish that these extra assumptions
are not necessary: solving the sparsified problem will be almost optimal under
the standard assumptions of the problem. We then analyze a different method of
sparsification that is a better model for methods such as Locality Sensitive Hashing
to accelerate the nearest neighbor computations and extend the use of the problem
to a broader family of similarities. We validate our approach by demonstrating that
it rapidly generates interpretable summaries.
1 Introduction
In this paper we study the facility location problem: we are given sets $V$ of size $n$, $I$ of size $m$, and a benefit matrix of nonnegative numbers $C \in \mathbb{R}^{I \times V}$, where $C_{iv}$ describes the benefit that element $i$ receives from element $v$. Our goal is to select a small set $A$ of $k$ columns in this matrix. Once we have chosen $A$, element $i$ will get a benefit equal to the best choice out of the available columns, $\max_{v \in A} C_{iv}$. The total reward is the sum of the row rewards, so the optimal choice of columns is the solution of
$$\arg\max_{\{A \subseteq V : |A| \le k\}} \sum_{i \in I} \max_{v \in A} C_{iv}. \qquad (1)$$

A natural application of this problem is in finding a small set of representative images in a big dataset, where $C_{iv}$ represents the similarity between images $i$ and $v$. The problem is to select $k$ images that provide a good coverage of the full dataset, since each one has a close representative in the chosen set.
Throughout this paper we follow the nomenclature common to the submodular optimization for
machine learning literature. This problem is also known as the maximization version of the k-medians
problem or the submodular facility location problem. A number of recent works have used this
problem for selecting subsets of documents or images from a larger corpus [27, 39], to identify
locations to monitor in order to quickly identify important events in sensor or blog networks [24, 26],
as well as clustering applications [23, 34].
We can naturally interpret Problem 1 as the maximization of a set function $F(A)$ which takes as input the selected set of columns and returns the total reward of that set. Formally, let $F(\emptyset) = 0$ and for all other sets $A \subseteq V$ define
$$F(A) = \sum_{i \in I} \max_{v \in A} C_{iv}. \qquad (2)$$

The set function $F$ is submodular, since for all $j \in V$ and sets $A \subseteq B \subseteq V \setminus \{j\}$, we have $F(A \cup \{j\}) - F(A) \ge F(B \cup \{j\}) - F(B)$; that is, the gain of an element diminishes as we add elements. Since the entries of $C$ are nonnegative, $F$ is monotone: for all $A \subseteq B \subseteq V$, we have $F(A) \le F(B)$. $F$ is also normalized, since $F(\emptyset) = 0$.
The facility location problem is NP-Hard, so we consider approximation algorithms. Like all monotone and normalized submodular functions, the greedy algorithm guarantees a $(1 - 1/e)$-factor approximation to the optimal solution [35]. The greedy algorithm starts with the empty set, then for $k$ iterations adds the element with the largest reward. This approximation is the best possible: the maximum coverage problem is an instance of the submodular facility location problem, which was shown to be NP-Hard to optimize within a factor of $1 - 1/e + \epsilon$ for all $\epsilon > 0$ [13].
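For concreteness, here is a direct (unaccelerated) reference implementation of the objective in Equation (2) and the greedy algorithm; this is a NumPy sketch for illustration, not the optimized code used in the paper's experiments.

import numpy as np

def greedy_facility_location(C, k):
    """C: nonnegative benefit matrix of shape (m, n); returns k column indices.

    Maintains best[i] = max_{v in A} C[i, v], so each iteration costs O(mn)."""
    m, n = C.shape
    best = np.zeros(m)           # row rewards for the current set A
    chosen = []
    for _ in range(k):
        # Marginal gain of adding column v: sum_i max(C[i, v] - best[i], 0).
        gains = np.maximum(C - best[:, None], 0.0).sum(axis=0)
        gains[chosen] = -np.inf  # never pick a column twice
        v = int(np.argmax(gains))
        chosen.append(v)
        best = np.maximum(best, C[:, v])
    return chosen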
The problem is that the greedy algorithm has super-quadratic running time $\Theta(nmk)$, and in many datasets $n$ and $m$ can be in the millions. For this reason, several recent papers have focused on accelerating the greedy algorithm. In [26], the authors point out that if the benefit matrix is sparse, this can dramatically speed up the computation time. Unfortunately, in many problems of interest, data similarities or rewards are not sparse. Wei et al. [40] proposed to first sparsify the benefit matrix and then run the greedy algorithm on this new sparse matrix. In particular, [40] considers $t$-nearest neighbor sparsification, i.e., keeping for each row the $t$ largest entries and zeroing out the rest. Using this technique they demonstrated an impressive 80-fold speedup over the greedy algorithm with little loss in solution quality. One limitation of their theoretical analysis was the limited setting under which provable approximation guarantees were established.
Our Contributions: Inspired by the work of Wei et al. [40], we improve the theoretical analysis of the approximation error induced by sparsification. Specifically, the previous analysis assumes that the input came from a probability distribution where the preferences of each element $i \in I$ are independently chosen uniformly at random. For this distribution, when $k = \Theta(n)$, they establish that the sparsity can be taken to be $O(\log n)$ and that running the greedy algorithm on the sparsified problem will guarantee a constant factor approximation with high probability. We improve the analysis in the following ways:

- We prove guarantees for all values of $k$, and our guarantees do not require any assumptions on the input besides nonnegativity of the benefit matrix.
- In the case where $k = \Theta(n)$, we show that it is possible to take the sparsity of each row as low as $O(1)$ while guaranteeing a constant factor approximation.
- Unlike previous work, our analysis does not require the use of any particular algorithm and can be integrated into many algorithms for solving facility location problems.
- We establish a lower bound which shows that our approximation guarantees are tight up to log factors, for all desired approximation factors.
In addition to the above results, we propose a novel algorithm that uses a threshold-based sparsification, where we keep the matrix elements that are above a set threshold value. This type of sparsification is easier to implement efficiently using nearest neighbor methods. For this method of sparsification, we obtain worst-case guarantees and a lower bound that matches up to constant factors. We also obtain a data-dependent guarantee, which helps explain why our algorithm empirically performs better than the worst case.
Further, we propose the use of Locality Sensitive Hashing (LSH) and random walk methods to
accelerate approximate nearest neighbor computations. Specifically, we use two types of similarity
metrics: inner products and personalized PageRank (PPR). We propose the use of fast approximations
for these metrics and empirically show that they dramatically improve running times. LSH functions
are well-known but, to the best of our knowledge, this is the first time they have been used to accelerate
facility location problems. Furthermore, we utilize personalized PageRank as the similarity between
vertices on a graph. Random walks can quickly approximate this similarity and we demonstrate that
it yields highly interpretable results for real datasets.
2 Related Work
The use of a sparsified proxy function was shown by Wei et al. to also be useful for finding a subset for training nearest neighbor classifiers [41]. Further, they also show a connection between nearest neighbor classifiers and the facility location function. The facility location function was also used by Mirzasoleiman et al. as part of a summarization objective function in [32], where they present a summarization algorithm that is able to handle a variety of constraints.

The stochastic greedy algorithm was shown to get a $1 - 1/e - \epsilon$ approximation with runtime $O(nm \log \frac{1}{\epsilon})$, which has no dependence on $k$ [33]. It works by choosing a sample set from $V$ of size $\frac{n}{k} \log \frac{1}{\epsilon}$ each iteration and adding to the current set the element of the sample set with the largest gain.

Also, there are several related algorithms for the streaming setting [5] and the distributed setting [6, 25, 31, 34]. Since the objective function is defined over the entire dataset, optimizing the submodular facility location function becomes more complicated in these memory-limited settings. Often the function is estimated by considering a randomly chosen subset from the set $I$.
2.1 Benefits Functions and Nearest Neighbor Methods
For many problems, the elements $V$ and $I$ are vectors in some feature space where the benefit matrix is defined by some similarity function sim. For example, in $\mathbb{R}^d$ we may use the RBF kernel $\mathrm{sim}(x, y) = e^{-\|x-y\|_2^2}$, the dot product $\mathrm{sim}(x, y) = x^T y$, or the cosine similarity $\mathrm{sim}(x, y) = \frac{x^T y}{\|x\|\,\|y\|}$. There have been decades of research on nearest neighbor search in geometric spaces. If the vectors are low dimensional, then classical techniques such as kd-trees [7] work well and are exact. However, it has been observed that as the dimension grows, all known exact methods do little better than a linear scan over the dataset.
As a compromise, researchers have started to work on approximate nearest neighbor methods, one of
the most successful approaches being locality sensitive hashing [15, 20]. LSH uses a hash function
that hashes together items that are close. Locality sensitive hash functions exist for a variety of metrics
and similarities such as Euclidean [11], cosine similarity [3, 9], and dot product [36, 38]. Nearest
neighbor methods other than LSH that have been shown to work for machine learning problems
include [8, 10]. Additionally, see [14] for efficient and exact GPU methods.
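As one concrete example of such a hash family, the classic random-hyperplane (SimHash) construction for cosine similarity [9] can be sketched in a few lines; each vector is hashed to a b-bit signature, and two vectors agree on a bit with probability $1 - \theta/\pi$, where $\theta$ is the angle between them. This is a generic illustration, not the exact LSH scheme used in our experiments.

import numpy as np

def simhash_signatures(X, b, seed=0):
    """X: (num_points, d) array. Returns b-bit signatures as integers."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((X.shape[1], b))
    # Bit j records which side of the j-th random hyperplane the point is on.
    bits = (X @ planes) >= 0                     # (num_points, b) booleans
    weights = 1 << np.arange(b, dtype=np.uint64)
    return bits.astype(np.uint64) @ weights      # pack bits into an integer

# Candidate near-neighbor pairs are points whose signatures share a hash
# bucket; in practice several independent signatures (tables) are used.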
An alternative to vector functions is to use similarities and benefits defined from graph structures. For instance, we can use the personalized PageRank of vertices in a graph to define the benefit matrix [37]. Personalized PageRank is similar to classic PageRank, except that the random jumps, rather than going anywhere in the graph, go back to the user's "home" vertex. This can be used as a value of "reputation" or "influence" between vertices in a graph [17].

There are a variety of algorithms for finding the vertices with a large PageRank personalized to some vertex. One popular approach is the random walk method. If $\pi_i$ is the personalized PageRank vector of some vertex $i$, then $\pi_i(v)$ equals the probability that a random walk of geometric length starting from $i$ ends on a vertex $v$ (where the parameter of the geometric distribution is defined by the probability of jumping back to $i$) [4]. Using this approach, we can quickly estimate all elements in the benefit matrix greater than some value $\tau$.
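A minimal Monte Carlo sketch of this estimator follows: walks of geometric length (restart probability alpha) are started from the home vertex, and the empirical distribution of their endpoints approximates the personalized PageRank vector. The dict-of-neighbor-lists graph representation and the parameter values are illustrative assumptions.

import random
from collections import Counter

def approx_ppr(neighbors, home, alpha=0.15, num_walks=10000, seed=0):
    """neighbors: dict vertex -> list of adjacent vertices.
    Returns the estimated personalized PageRank pi_home as a dict."""
    rng = random.Random(seed)
    ends = Counter()
    for _ in range(num_walks):
        v = home
        # Walk length is geometric: stop (jump home) with probability alpha.
        while rng.random() > alpha and neighbors[v]:
            v = rng.choice(neighbors[v])
        ends[v] += 1
    return {u: c / num_walks for u, c in ends.items()}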
3 Guarantees for t-Nearest Neighbor Sparsification
We associate a bipartite support graph $G = (V, I, E)$ by having an edge between $v \in V$ and $i \in I$ whenever $C_{iv} > 0$. If the support graph is sparse, we can use the graph to calculate the gain of an element much more efficiently, since we only need to consider the neighbors of the element rather than the entire set $I$. If the average degree of a vertex $i \in I$ is $t$ (and we use a cache for the current best value of an element $i$), then we can execute greedy in time $O(mtk)$. See Algorithm 1 in the Appendix for pseudocode. If the sparsity $t$ is much smaller than the size of $V$, the runtime is greatly improved.

However, the instance we wish to optimize may not be sparse. One idea is to sparsify the original matrix by only keeping the values in the benefit matrix $C$ that are $t$-nearest neighbors, which was considered in [40]. That is, for every element $i$ in $I$, we only keep the top $t$ elements of $C_{i1}, C_{i2}, \ldots, C_{in}$ and set the rest equal to zero. This leads to a matrix with $mt$ nonzero elements. We then want the solution from optimizing the sparse problem to be close to the value of the optimal solution in the original objective function $F$.
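A minimal NumPy/SciPy sketch of this sparsification step is shown below: for each row of C we keep the t largest entries and zero out the rest, producing a sparse matrix with mt nonzeros.

import numpy as np
from scipy.sparse import csr_matrix

def sparsify_top_t(C, t):
    """Keep the t largest entries in each row of C; return a CSR matrix."""
    m, n = C.shape
    # Column indices of the top-t entries per row (order within the t is arbitrary).
    top = np.argpartition(C, n - t, axis=1)[:, n - t:]
    rows = np.repeat(np.arange(m), t)
    cols = top.ravel()
    vals = C[rows, cols]
    return csr_matrix((vals, (rows, cols)), shape=(m, n))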
Our main theorem is that we can set the sparsity parameter $t$ to be $O(\frac{n}{\alpha k} \log \frac{m}{\alpha k})$, a significant improvement for large enough $k$, while still having the solution to the sparsified problem be at most a factor of $\frac{1}{1+\alpha}$ from the value of the optimal solution.

Theorem 1. Let $O_t$ be the optimal solution to an instance of the submodular facility location problem with a benefit matrix that was sparsified with $t$-nearest neighbors. For any $t \ge t^*(\alpha) = O(\frac{n}{\alpha k} \log \frac{m}{\alpha k})$, we have $F(O_t) \ge \frac{1}{1+\alpha} \mathrm{OPT}$.
Proof Sketch. For the value of $t$ chosen, there exists a set of size $\alpha k$ such that every element of $I$ has a neighbor in this set in the $t$-nearest neighbor graph; this is proven using the probabilistic method. By appending this set to the optimal solution and using the monotonicity of $F$, we can move to the sparsified function, since no element of $I$ would prefer an element that was zeroed out in the sparsified matrix, as one of its top $t$ most beneficial elements is present in the set. The optimal solution appended with this set has size $(1 + \alpha)k$. We then bound the amount by which the optimal value of a submodular function can increase when adding $\alpha k$ elements. See the appendix for the complete proof.

Note that Theorem 1 is agnostic to the algorithm used to optimize the sparsified function, and so if we use a $\beta$-approximation algorithm, then we are at most a factor of $\frac{\beta}{1+\alpha}$ from the optimal solution. Later in this section we will utilize this to design a subquadratic algorithm for optimizing facility location problems, as long as we can quickly compute $t$-nearest neighbors and $k$ is large enough.
If $m = O(n)$ and $k = \Theta(n)$, we can achieve a constant factor approximation even when taking the sparsity parameter as low as $t = O(1)$, which means that the benefit matrix $C$ has only $O(n)$ nonzero entries. Also note that the only assumption we need is that the benefits between elements are nonnegative. When $k = \Theta(n)$, previous work was only able to take $t = O(\log n)$ and required the benefit matrix to come from a probability distribution [40].

Our guarantee has two regimes depending on the value of $\alpha$. If we want the optimal solution to the sparsified function to be a $1 - \epsilon$ factor from the optimal solution to the original function, then $t^*(\epsilon) = O(\frac{n}{\epsilon k} \log \frac{m}{\epsilon k})$ suffices. Conversely, if we want to take the sparsity $t$ much smaller than $\frac{n}{k} \log \frac{m}{k}$, then this is equivalent to taking $\alpha$ very large, and we still have some guarantee of optimality.
In the proof of Theorem 1, the only time we utilize the value of $t$ is to show that there exists a small set that covers the entire set $I$ in the $t$-nearest neighbor graph. Real datasets often contain a covering set of size $\alpha k$ for $t$ much smaller than $O(\frac{n}{\alpha k} \log \frac{m}{\alpha k})$. This observation yields the following corollary.

Corollary 2. If after sparsifying a problem instance there exists a covering set of size $\alpha k$ in the $t$-nearest neighbor graph, then the optimal solution $O_t$ of the sparsified problem satisfies $F(O_t) \ge \frac{1}{1+\alpha} \mathrm{OPT}$.

In the datasets we consider in our experiments, of roughly 7000 items, we have covering sets with only 25 elements for $t = 75$, and a covering set of size 10 for $t = 150$. The size of the covering set was upper bounded by using the greedy set cover algorithm. In Figure 2 in the appendix, we see how the size of the covering set changes with the choice of the number of neighbors $t$.
It would be desirable to take the sparsity parameter $t$ lower than the value dictated by $t^*(\alpha)$. As demonstrated by the following lower bound, it is not possible to take the sparsity significantly lower than $\frac{1}{\alpha}\frac{n}{k}$ and still have a $\frac{1}{1+\alpha}$ approximation in the worst case.

Proposition 3. Suppose we take
$$t = \max\left\{\frac{1}{2\alpha}, \frac{1}{1+\alpha}\right\} \frac{n}{k} - 1.$$
There exists a family of inputs such that we have $F(O_t) \le \frac{1}{1+\alpha-2/k} \mathrm{OPT}$.
The example we create to show this has the property that in the $t$-nearest neighbor graph, a set needs $\alpha k$ elements to cover every element of $I$. We plant a much smaller covering set that is very close in value to it but is hidden after sparsification. We then embed a modular function within the facility location objective. With knowledge of the small covering set, an optimal solver can take advantage of this modular function, while the sparsified solution would prefer to first choose the larger covering set before considering the modular function. See the appendix for full details.
Sparsification integrates well with the stochastic greedy algorithm [33]. By taking $t \ge t^*(\epsilon/2)$ and running stochastic greedy with sample sets of size $\frac{n}{k} \ln \frac{2}{\epsilon}$, we get a $1 - 1/e - \epsilon$ approximation in expectation that runs in expected time $O(\frac{nm}{\epsilon k} \log \frac{1}{\epsilon} \log \frac{m}{\epsilon k})$. If we can quickly sparsify the problem and $k$ is large enough, for example $n^{1/3}$, this is subquadratic. The following proposition shows a high probability guarantee on the runtime of this algorithm and is proven in the appendix.

Proposition 4. When $m = O(n)$, the stochastic greedy algorithm [33] with sample sets of size $\frac{n}{k} \log \frac{2}{\epsilon}$, combined with sparsification with sparsity parameter $t$, will terminate in time $O(n \log \frac{1}{\epsilon} \max\{t, \log n\})$ with high probability. When $t \ge t^*(\epsilon/2) = O(\frac{n}{\epsilon k} \log \frac{m}{\epsilon k})$, this algorithm achieves a $1 - 1/e - \epsilon$ approximation in expectation.
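A sketch of stochastic greedy on a sparsified benefit matrix follows; it samples roughly (n/k) ln(2/ε) candidate columns per iteration and evaluates marginal gains only through the nonzero entries. It builds on the earlier sparsify_top_t sketch and is an illustrative combination of the two ideas, not the paper's exact code.

import numpy as np

def stochastic_greedy_sparse(C_sparse, k, eps=0.1, seed=0):
    """C_sparse: scipy sparse benefit matrix of shape (m, n)."""
    rng = np.random.default_rng(seed)
    C = C_sparse.tocsc()                        # fast column access
    m, n = C.shape
    s = int(np.ceil(n / k * np.log(2 / eps)))   # sample-set size per step
    best = np.zeros(m)
    chosen = []
    for _ in range(k):
        remaining = np.setdiff1d(np.arange(n), chosen)
        sample = rng.choice(remaining, size=min(s, len(remaining)),
                            replace=False)
        gains = []
        for v in sample:
            col = C[:, v]
            # The gain touches only the nonzero rows of column v.
            gains.append(np.maximum(col.data - best[col.indices], 0.0).sum())
        v = int(sample[int(np.argmax(gains))])
        chosen.append(v)
        col = C[:, v]
        best[col.indices] = np.maximum(best[col.indices], col.data)
    return chosen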
4 Guarantees for Threshold-Based Sparsification
Rather than $t$-nearest neighbor sparsification, we now consider using $\tau$-threshold sparsification, where we zero out all entries that have value below a threshold $\tau$. Recall the definition of a locality sensitive hash.

Definition. $\mathcal{H}$ is a $(\tau, K\tau, p, q)$-locality sensitive hash family if for $x, y$ satisfying $\mathrm{sim}(x, y) \ge \tau$ we have $P_{h \in \mathcal{H}}(h(x) = h(y)) \ge p$, and if $x, y$ satisfy $\mathrm{sim}(x, y) \le K\tau$ we have $P_{h \in \mathcal{H}}(h(x) = h(y)) \le q$.

We see that $\tau$-threshold sparsification is a better model than $t$-nearest neighbors for LSH, as for $K = 1$ it is a noisy $\tau$-sparsification, and for non-adversarial datasets it is a reasonable approximation of a $\tau$-sparsification method. Note that due to the approximation constant $K$, we do not have an a priori guarantee on the runtime for arbitrary datasets. However, we would expect in practice to only see a few elements with similarity above the value $\tau$. See [2] for a discussion of this.
One issue is that we do not know how to choose the threshold $\tau$. We can sample elements of the benefit matrix $C$ to estimate how sparse the threshold graph will be for a given threshold $\tau$. Assuming the values of $C$ are in general position (see Footnote 1), by using the Dvoretzky-Kiefer-Wolfowitz-Massart inequality [12, 28] we can bound the number of samples needed to choose a threshold that achieves a desired sparsification level.

We establish the following data-dependent bound on the difference between the optimal solutions of the $\tau$-threshold sparsified function and the original function. We denote the set of vertices adjacent to $S$ in the $\tau$-threshold graph by $N(S)$.
Theorem 5. Let $O_\tau$ be the optimal solution to an instance of the facility location problem with a benefit matrix that was sparsified using a $\tau$-threshold. Assume there exists a set $S$ of size $k$ such that in the $\tau$-threshold graph the neighborhood of $S$ satisfies $|N(S)| \ge \beta n$. Then we have
$$F(O_\tau) \ge \left(1 + \frac{1}{\beta}\right)^{-1} \mathrm{OPT}.$$
For the datasets we consider, we see that we can keep a 0.01–0.001 fraction of the elements of $C$ while still having a small set $S$ with a neighborhood $N(S)$ satisfying $|N(S)| \ge 0.3n$. In Figure 3 in the appendix, we plot the relationship between the number of edges in the $\tau$-threshold graph and the number of elements coverable by a set of small size, as estimated by the greedy algorithm for max-cover.

Additionally, we have a worst-case dependency between the number of edges in the $\tau$-threshold graph and the approximation factor. The guarantees follow from applying Theorem 5 with the following lemma.
Lemma 6. For $k \ge \frac{c^2}{1-2c^2}$, any graph with $\frac{1}{2}\gamma^2 n^2$ edges has a set $S$ of size $k$ such that the neighborhood $N(S)$ satisfies $|N(S)| \ge c\gamma n$.

Footnote 1: By this we mean that the values of $C$ are all unique, or at least that only a few elements take any particular value. We need this to hold since otherwise a threshold-based sparsification may exclusively return either an empty graph or the complete graph.
To get a matching lower bound, consider the case where the graph has two disjoint cliques, one of size $\gamma n$ and one of size $(1 - \gamma)n$. Details are in the appendix.
5 Experiments

5.1 Summarizing Movies and Music from Ratings Data
We consider the problem of summarizing a large collection of movies. We first need to create a feature vector for each movie. Movies can be categorized by the people who like them, and so we create our feature vectors from the MovieLens ratings data [16]. The MovieLens database has 20 million ratings for 27,000 movies from 138,000 users. To do this, we perform low-rank matrix completion and factorization on the ratings matrix [21, 22] to get a matrix $X = UV^T$, where $X$ is the completed ratings matrix, $U$ is a matrix of feature vectors for each user, and $V$ is a matrix of feature vectors for each movie. For movies $i$ and $j$ with vectors $v_i$ and $v_j$, we set the benefit function $C_{ij} = v_i^T v_j$. We do not use the normalized dot product (cosine similarity) because we want our summary movies to be movies that were highly rated, and not normalizing makes highly rated movies increase the objective function more.
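The benefit matrix here is just the Gram matrix of the movie factors; a short sketch (assuming V is the movie-factor matrix produced by any off-the-shelf matrix completion routine):

import numpy as np

def movie_benefit_matrix(V):
    """V: (num_movies, r) matrix of movie factors. Returns C with
    C[i, j] = <v_i, v_j>; left unnormalized on purpose, so highly rated
    (large-norm) movies contribute larger benefits."""
    return V @ V.T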
We complete the ratings matrix using the MLlib library in Apache Spark [29], after removing all but the top seven thousand most rated movies to remove noise from the data. We use locality sensitive hashing to perform sparsification; in particular, we use the LSH for cosine similarity in the FALCONN library [3] and the reduction from a cosine similarity hash to a dot product hash [36]. As baselines we consider sparsification using a scan over the entire dataset, the stochastic greedy algorithm with lazy evaluations [33], and the greedy algorithm with lazy evaluations [30]. The number of elements chosen was set to 40, and for the LSH method and stochastic greedy we average over five trials. We then do a scan over the sparsity parameter $t$ for the sparsification methods and a scan over the number of samples drawn each iteration for the stochastic greedy algorithm. The sparsified methods use the (non-stochastic) lazy greedy algorithm as the base optimization algorithm, which we found worked best for this particular problem (see Footnote 2). In Figure 1(a) we see that the LSH method very quickly approaches the greedy solution: it is almost identical in value after just a few seconds, even though the value of $t$ is much less than $t^*(\epsilon)$. The stochastic greedy method requires much more time to reach the same function value. Lazy greedy is not plotted, since it took over 500 seconds to finish.
(b) Fraction of Greedy Set Contained vs. Runtime
1.0
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0.0
0
25
50
75 100 125 150
Runtime (s)
(a) Fraction of Greedy Set Value vs. Runtime
0.99
0.98
0.97
0.96
0.95
0.94
0.93
0.92
0.91
0.90
Exact top-t
LSH top-t
Stochastic Greedy
0
25
50
75 100
Runtime (s)
125
150
Figure 1: Results for the MovieLens dataset [16]. Figure (a) shows the function value as the runtime increases,
normalized by the value the greedy algorithm obtained. As can be seen, our algorithm is within 99.9% of greedy
in less than 5 seconds. For this experiment, the greedy algorithm had a runtime of 512 seconds, so this is a 100x
speed up for a small penalty in performance. We also compare to the stochastic greedy algorithm [33], which
needs 125 seconds to get equivalent performance, which is 25x slower.
Figure (b) shows the fraction of the set that was returned by each method that was common with the set returned
by greedy. We see that the approximate nearest neighbor method has 90% of its elements in common with the
greedy set while being 50x faster than greedy, and using exact nearest neighbors can perfectly match the greedy
set while being 4x faster than greedy.
Footnote 2: When experimenting on very large datasets, we found that runtime constraints can make it necessary to use stochastic greedy as the base optimization algorithm.
Table 1: A subset of the summarization outputted by our algorithm on the MovieLens dataset, plus the elements
that are represented by each representative with the largest dot product. Each group has a natural interpretation:
90's slapstick comedies, 80's horror, cult classics, etc. Note that this was obtained with only a similarity matrix
obtained from ratings.
Happy Gilmore: Tommy Boy; Billy Madison; Dumb & Dumber; Ace Ventura Pet Detective; Road Trip; American Pie 2; Black Sheep
Nightmare on Elm Street: Friday the 13th; Halloween II; Nightmare on Elm Street 3; Child's Play; Return of the Living Dead II; Friday the 13th 2; Puppet Master
Star Wars IV: Star Wars V; Raiders of the Lost Ark; Star Wars VI; Indiana Jones, Last Crusade; Terminator 2; The Terminator; Star Trek II
Shawshank Redemption: Schindler's List; The Usual Suspects; Life Is Beautiful; Saving Private Ryan; American History X; The Dark Knight; Good Will Hunting
Pulp Fiction: Reservoir Dogs; American Beauty; A Clockwork Orange; Trainspotting; Memento; Old Boy; No Country for Old Men
The Notebook: P.S. I Love You; The Holiday; Remember Me; A Walk to Remember; The Proposal; The Vow; Life as We Know It
Pride and Prejudice: Anne of Green Gables; Persuasion; Emma; Mostly Martha; Desk Set; The Young Victoria; Mansfield Park
The Godfather: The Godfather II; One Flew Over the Cuckoo's Nest; Goodfellas; Apocalypse Now; Chinatown; 12 Angry Men; Taxi Driver
A performance metric that can be better than the objective value is the fraction of elements returned
that are common with the greedy algorithm. We treat this as a proxy for the interpretability of
the results. We believe this metric is reasonable since we found the subset returned by the greedy
algorithm to be quite interpretable. We plot this metric against runtime in Figure 1b. We see that the
LSH method quickly gets to 90% of the elements in the greedy set while stochastic greedy takes
much longer to get to just 70% of the elements. The exact sparsification method is able to completely
match the greedy solution at this point. One interesting feature is that the LSH method does not
go much higher than 90%. This may be due to the increased inaccuracy when looking at elements
with smaller dot products. We plot this metric against the number of exact and approximate nearest
neighbors t in Figure 4 in the appendix.
In Table 1 we include a subset of the summarization and, for each representative, a few of the elements it represents with the largest dot product, to show the interpretability of our results.
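For completeness, here is a sketch of the lazy greedy base optimizer (Minoux [30]) on a possibly sparsified benefit matrix. It follows the classic priority-queue idea rather than the authors' exact code, and it assumes a nonnegative benefit matrix so the initial bounds are valid.

import heapq
import numpy as np

def lazy_greedy_facility_location(C, k):
    """Maximize f(S) = sum_i max_{j in S} C[i, j] with lazy marginal-gain updates."""
    n_items, n_cand = C.shape
    best = np.zeros(n_items)                    # current per-item max benefit
    # Max-heap (via negation) of upper bounds on marginal gains; f({j}) initially.
    heap = [(-C[:, j].sum(), j) for j in range(n_cand)]
    heapq.heapify(heap)
    selected = []
    while len(selected) < k and heap:
        _, j = heapq.heappop(heap)
        gain = np.maximum(C[:, j], best).sum() - best.sum()   # fresh marginal gain
        if not heap or gain >= -heap[0][0]:     # still beats every stale bound
            selected.append(j)
            best = np.maximum(best, C[:, j])
        else:                                   # stale: reinsert with updated gain
            heapq.heappush(heap, (-gain, j))
    return selected

Submodularity guarantees marginal gains only shrink as the set grows, which is why a recomputed gain that still tops the heap can be accepted immediately.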
5.2 Finding Influential Actors and Actresses
For our second experiment, we consider how to find a diverse subset of actors and actresses in a
collaboration network. We place an edge between two actors or actresses if they collaborated in a movie
together, weighted by the number of collaborations. Data was obtained from [19] and an actor or
actress was only included if he or she was one of the top six in the cast billing. As a measure of
influence, we use personalized PageRank [37]. To quickly calculate the people with the largest
influence relative to someone, we used the random walk method [4].
We first consider a small instance where we can see how well the sparsified approach works. We
build a graph based on the cast in the top thousand most rated movies. This graph has roughly 6000
vertices and 60,000 edges. We then calculate the entire PPR matrix using the power method. Note
that this is infeasible on larger graphs in terms of time and memory. Even on this moderately sized
graph it took six hours, and the resulting matrix takes up two gigabytes of space. We then compare the value of the greedy
algorithm using the entire PPR matrix with the sparsified algorithm using the matrix approximated
by Monte Carlo sampling using the two metrics mentioned in the previous section. We omit exact
nearest neighbor and stochastic greedy because it is not clear how they would work without having to
compute the entire PPR matrix. Instead we compare to an approach where we choose a sample from
I and calculate the PPR only on these elements using the power method. As mentioned in Section 2,
several algorithms utilize random sampling from I. We take k to be 50 for this instance. In Figure 5 in
the appendix we see that sparsification performs drastically better in both function value and percent
of the greedy set contained for a given runtime.
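A minimal sketch of the Monte Carlo personalized PageRank estimator referenced above [4]; "neighbors" is an assumed adjacency-list dict, and the restart probability alpha is illustrative rather than taken from the paper.

import random
from collections import Counter

def monte_carlo_ppr(neighbors, source, alpha=0.15, walks=10000):
    """Estimate the PPR column of `source` from restart-terminated random walks."""
    visits = Counter()
    for _ in range(walks):
        v = source
        while True:
            visits[v] += 1
            if random.random() < alpha or not neighbors[v]:
                break                          # the walk restarts; begin a new one
            v = random.choice(neighbors[v])
    total = sum(visits.values())
    return {v: c / total for v, c in visits.items()}   # sparse PPR estimate

Because each walk only touches a handful of vertices, the estimate is naturally sparse, which is exactly what the sparsified algorithm needs.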
We now scale up to a larger graph by taking the actors and actresses billed in the top six for the
twenty thousand most rated movies. This graph has 57,000 vertices and 400,000 edges. We would
not be able to compute the entire PPR matrix for this graph in a reasonable amount of time, and even
if we could, it would be a challenge to store the entire matrix in memory. However, we can run the
sparsified algorithm in three hours using only 2 GB of memory, which could be improved further by
parallelizing the Monte Carlo approximation.
We run the greedy algorithm separately on the actors and actresses. For each we take the top twenty-five and compare to the actors and actresses with the largest (non-personalized) PageRank. In Figure 2
of the appendix, we see that the PageRank output fails to capture the diversity in nationality of the
dataset, while the submodular facility location optimization returns actors and actresses from many
of the world's film industries.
Acknowledgements
This material is based upon work supported by the National Science Foundation Graduate Research
Fellowship under Grant No. DGE-1110007 as well as NSF Grants CCF 1344179, 1344364, 1407278,
1422549 and ARO YIP W911NF-14-1-0258.
References
[1] N. Alon and J. H. Spencer. The Probabilistic Method. Wiley, 3rd edition, 2008.
[2] A. Andoni. Exact algorithms from approximation algorithms? (part 2). Windows on Theory, 2012. https://windowsontheory.org/2012/04/17/exact-algorithms-from-approximation-algorithms-part-2/ (version: 2016-09-06).
[3] A. Andoni, P. Indyk, T. Laarhoven, I. Razenshteyn, and L. Schmidt. Practical and optimal LSH for angular
distance. In NIPS, 2015.
[4] K. Avrachenkov, N. Litvak, D. Nemirovsky, and N. Osipova. Monte Carlo methods in PageRank computation: When one iteration is sufficient. SIAM Journal on Numerical Analysis, 2007.
[5] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization:
Massive data summarization on the fly. In KDD, 2014.
[6] R. Barbosa, A. Ene, H. L. Nguyen, and J. Ward. The power of randomization: Distributed submodular
maximization on massive datasets. In ICML, 2015.
[7] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the
ACM, 1975.
[8] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, 2006.
[9] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[10] J. Chen, H.-R. Fang, and Y. Saad. Fast approximate k-NN graph construction for high dimensional data via
recursive Lanczos bisection. JMLR, 2009.
[11] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable
distributions. In Symposium on Computational Geometry, 2004.
[12] A. Dvoretzky, J. Kiefer, and J. Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. The Annals of Mathematical Statistics, pages 642-669, 1956.
[13] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 1998.
[14] V. Garcia, E. Debreuve, F. Nielsen, and M. Barlaud. K-nearest neighbor search: Fast GPU-based implementations and application to high-dimensional feature matching. In International Conference on Image
Processing, 2010.
[15] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, 1999.
[16] GroupLens. MovieLens 20M dataset, 2015. http://grouplens.org/datasets/movielens/20m/.
[17] P. Gupta, A. Goel, J. Lin, A. Sharma, D. Wang, and R. Zadeh. Wtf: The who to follow service at twitter. In
WWW, 2013.
[18] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American
Statistical Association, 1963.
[19] IMDb. Alternative interfaces, 2016. http://www.imdb.com/interfaces.
[20] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality.
In STOC, 1998.
[21] P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating minimization. In
STOC, 2013.
[22] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer,
2009.
[23] A. Krause and R. G. Gomes. Budgeted nonparametric learning from data streams. In ICML, 2010.
[24] A. Krause, J. Leskovec, C. Guestrin, J. VanBriesen, and C. Faloutsos. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management,
2008.
[25] R. Kumar, B. Moseley, S. Vassilvitskii, and A. Vattani. Fast greedy algorithms in MapReduce and streaming.
ACM Transactions on Parallel Computing, 2015.
[26] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak
detection in networks. In KDD, 2007.
[27] H. Lin and J. A. Bilmes. Learning mixtures of submodular shells with application to document summarization. In UAI, 2012.
[28] P. Massart. The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. The Annals of Probability, pages 1269-1283, 1990.
[29] X. Meng, J. Bradley, B. Yavuz, E. Sparks, S. Venkataraman, D. Liu, J. Freeman, D. B. Tsai, M. Amde,
S. Owen, D. Xin, R. Xin, M. J. Franklin, R. Zadeh, M. Zaharia, and A. Talwalkar. MLlib: Machine learning
in Apache Spark. JMLR, 2016.
[30] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization
Techniques. Springer, 1978.
[31] V. Mirrokni and M. Zadimoghaddam. Randomized composable core-sets for distributed submodular
maximization. STOC, 2015.
[32] B. Mirzasoleiman, A. Badanidiyuru, and A. Karbasi. Fast constrained submodular maximization: Personalized data summarization. In ICML, 2016.
[33] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondrak, and A. Krause. Lazier than lazy greedy. In
AAAI, 2015.
[34] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause. Distributed submodular maximization: Identifying
representative elements in massive data. In NIPS, 2013.
[35] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 1978.
[36] B. Neyshabur and N. Srebro. On symmetric and asymmetric LSHs for inner product search. In ICML,
2015.
[37] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web.
Stanford Digital Libraries, 1999.
[38] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search
(MIPS). In NIPS, 2014.
[39] S. Tschiatschek, R. K. Iyer, H. Wei, and J. A. Bilmes. Learning mixtures of submodular functions for
image collection summarization. In NIPS, 2014.
[40] K. Wei, R. Iyer, and J. Bilmes. Fast multi-stage submodular maximization. In ICML, 2014.
[41] K. Wei, R. Iyer, and J. Bilmes. Submodularity in data subset selection and active learning. In ICML, 2015.
5,950 | 6,383 | Unifying Count-Based Exploration and Intrinsic Motivation
Marc G. Bellemare
[email protected]
Sriram Srinivasan
[email protected]
Georg Ostrovski
[email protected]
Tom Schaul
[email protected]
David Saxton
[email protected]
Rémi Munos
[email protected]
Google DeepMind
London, United Kingdom
Abstract
We consider an agent's uncertainty about its environment and the problem of generalizing this
exploration in non-tabular reinforcement learning. Drawing inspiration from the
intrinsic motivation literature, we use density models to measure uncertainty, and
propose a novel algorithm for deriving a pseudo-count from an arbitrary density
model. This technique enables us to generalize count-based exploration algorithms to the non-tabular case. We apply our ideas to Atari 2600 games, providing
sensible pseudo-counts from raw pixels. We transform these pseudo-counts into
exploration bonuses and obtain significantly improved exploration in a number of
hard games, including the infamously difficult Montezuma's Revenge.
1 Introduction
Exploration algorithms for Markov Decision Processes (MDPs) are typically concerned with reducing the agent's uncertainty over the environment's reward and transition functions. In a tabular
setting, this uncertainty can be quantified using confidence intervals derived from Chernoff bounds,
or inferred from a posterior over the environment parameters. In fact, both confidence intervals
and posterior shrink as the inverse square root of the state-action visit count N (x, a), making this
quantity fundamental to most theoretical results on exploration.
Count-based exploration methods directly use visit counts to guide an agent's behaviour towards
reducing uncertainty. For example, Model-based Interval Estimation with Exploration Bonuses
(MBIE-EB; Strehl and Littman, 2008) solves the augmented Bellman equation
V(x) = max_{a ∈ A} [ R̂(x, a) + γ E_P̂[V(x′)] + β N(x, a)^{-1/2} ],
involving the empirical reward R̂, the empirical transition function P̂, and an exploration bonus
proportional to N(x, a)^{-1/2}. This bonus accounts for uncertainties in both transition and reward
functions and enables a finite-time bound on the agent's suboptimality.
In spite of their pleasant theoretical guarantees, count-based methods have not played a role in the
contemporary successes of reinforcement learning (e.g. Mnih et al., 2015). Instead, most practical
methods still rely on simple rules such as ε-greedy. The issue is that visit counts are not directly
useful in large domains, where states are rarely visited more than once.
Answering a different scientific question, intrinsic motivation aims to provide qualitative guidance
for exploration (Schmidhuber, 1991; Oudeyer et al., 2007; Barto, 2013). This guidance can be
summarized as "explore what surprises you". A typical approach guides the agent based on change
in prediction error, or learning progress. If e_n(A) is the error made by the agent at time n over
some event A, and e_{n+1}(A) the same error after observing a new piece of information, then learning
progress is
e_n(A) − e_{n+1}(A).
Intrinsic motivation methods are attractive as they remain applicable in the absence of the Markov
property or the lack of a tabular representation, both of which are required by count-based algorithms. Yet the theoretical foundations of intrinsic motivation remain largely absent from the literature, which may explain its slow rate of adoption as a standard approach to exploration.
In this paper we provide formal evidence that intrinsic motivation and count-based exploration are
but two sides of the same coin. Specifically, we consider a frequently used measure of learning
progress, information gain (Cover and Thomas, 1991). Defined as the Kullback-Leibler divergence
of a prior distribution from its posterior, information gain can be related to the confidence intervals
used in count-based exploration. Our contribution is to propose a new quantity, the pseudo-count,
which connects information-gain-as-learning-progress and count-based exploration.
We derive our pseudo-count from a density model over the state space. This is in departure from
more traditional approaches to intrinsic motivation that consider learning progress with respect to
a transition model. We expose the relationship between pseudo-counts, a variant of Schmidhuber?s
compression progress we call prediction gain, and information gain. Combined to Kolter and Ng?s
negative result on the frequentist suboptimality of Bayesian bonuses, our result highlights the theoretical advantages of pseudo-counts compared to many existing intrinsic motivation methods.
The pseudo-counts we introduce here are best thought of as ?function approximation for exploration?. We bring them to bear on Atari 2600 games from the Arcade Learning Environment (Bellemare et al., 2013), focusing on games where myopic exploration fails. We extract our pseudo-counts
from a simple density model and use them within a variant of MBIE-EB. We apply them to an experience replay setting and to an actor-critic setting, and find improved performance in both cases.
Our approach produces dramatic progress on the reputedly most difficult Atari 2600 game, Montezuma's Revenge: within a fraction of the training time, our agent explores a significant portion
of the first level and obtains significantly higher scores than previously published agents.
2 Notation
We consider a countable state space X. We denote a sequence of length n from X by x_{1:n} ∈ X^n,
the set of finite sequences from X by X^*, write x_{1:n}x to mean the concatenation of x_{1:n} and a state
x ∈ X, and denote the empty sequence by ε. A model over X is a mapping from X^* to probability
distributions over X. That is, for each x_{1:n} ∈ X^n the model provides a probability distribution
ρ_n(x) := ρ(x ; x_{1:n}).
Note that we do not require ρ_n(x) to be strictly positive for all x and x_{1:n}. When it is, however, we
may understand ρ_n(x) to be the usual conditional probability of X_{n+1} = x given X_1 ... X_n = x_{1:n}.
We will take particular interest in the empirical distribution μ_n derived from the sequence x_{1:n}. If
N_n(x) := N(x, x_{1:n}) is the number of occurrences of a state x in the sequence x_{1:n}, then
μ_n(x) := μ(x ; x_{1:n}) := N_n(x) / n.
We call N_n the empirical count function, or simply the empirical count. The above notation extends
to state-action spaces, and we write N_n(x, a) to explicitly refer to the number of occurrences of a
state-action pair when the argument requires it. When x_{1:n} is generated by an ergodic Markov chain,
for example if we follow a fixed policy in a finite-state MDP, then the limit point of μ_n is the chain's
stationary distribution.
In our setting, a density model is any model that assumes states are independently (but not necessarily
identically) distributed; a density model is thus a particular kind of generative model. We emphasize
that a density model differs from a forward model, which takes into account the temporal relationship
between successive states. Note that μ_n is itself a density model.
3 From Densities to Counts
In the introduction we argued that the visit count Nn (x) (and consequently, Nn (x, a)) is not directly
useful in practical settings, since states are rarely revisited. Specifically, Nn (x) is almost always
zero and cannot help answer the question "How novel is this state?" Nor is the problem solved
by a Bayesian approach: even variable-alphabet models (e.g. Hutter, 2013) must assign a small,
diminishing probability to yet-unseen states. To estimate the uncertainty of an agent's knowledge,
we must instead look for a quantity which generalizes across states. Guided by ideas from the
intrinsic motivation literature, we now derive such a quantity. We call it a pseudo-count as it extends
the familiar notion from Bayesian estimation.
3.1 Pseudo-Counts and the Recoding Probability
We are given a density model ρ over X. This density model may be approximate, biased, or even
inconsistent. We begin by introducing the recoding probability of a state x:
ρ′_n(x) := ρ(x ; x_{1:n}x).
This is the probability assigned to x by our density model after observing a new occurrence of x.
The term "recoding" is inspired from the statistical compression literature, where coding costs are
inversely related to probabilities (Cover and Thomas, 1991). When ρ admits a conditional probability distribution,
ρ′_n(x) = Pr_ρ(X_{n+2} = x | X_1 ... X_n = x_{1:n}, X_{n+1} = x).
We now postulate two unknowns: a pseudo-count function N̂_n(x), and a pseudo-count total n̂. We
relate these two unknowns through two constraints:
ρ_n(x) = N̂_n(x) / n̂,    ρ′_n(x) = (N̂_n(x) + 1) / (n̂ + 1).    (1)
In words: we require that, after observing one instance of x, the density model's increase in prediction of that same x should correspond to a unit increase in pseudo-count. The pseudo-count itself is
derived from solving the linear system (1):
N̂_n(x) = ρ_n(x)(1 − ρ′_n(x)) / (ρ′_n(x) − ρ_n(x)) = n̂ ρ_n(x).    (2)
Note that the equations (1) yield N̂_n(x) = 0 (with n̂ = ∞) when ρ_n(x) = ρ′_n(x) = 0, and are
inconsistent when ρ_n(x) < ρ′_n(x) = 1. These cases may arise from poorly behaved density models,
but are easily accounted for. From here onwards we will assume a consistent system of equations.
Definition 1 (Learning-positive density model). A density model ρ is learning-positive if for all
x_{1:n} ∈ X^n and all x ∈ X, ρ′_n(x) ≥ ρ_n(x).
By inspecting (2), we see that
1. N̂_n(x) ≥ 0 if and only if ρ is learning-positive;
2. N̂_n(x) = 0 if and only if ρ_n(x) = 0; and
3. N̂_n(x) = ∞ if and only if ρ_n(x) = ρ′_n(x).
In many cases of interest, the pseudo-count N̂_n(x) matches our intuition. If ρ_n = μ_n then N̂_n = N_n.
Similarly, if ρ_n is a Dirichlet estimator then N̂_n recovers the usual notion of pseudo-count. More
importantly, if the model generalizes across states then so do pseudo-counts.
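Equation (2) is a one-liner in code. This hedged helper (our own) assumes a learning-positive model and handles the degenerate cases discussed above.

def pseudo_count(rho, rho_prime):
    """rho, rho_prime: the model's probabilities of x before and after one more
    occurrence of x (learning-positive model assumed)."""
    if rho_prime == rho:
        return float("inf") if rho > 0 else 0.0   # the degenerate cases above
    return rho * (1.0 - rho_prime) / (rho_prime - rho)

print(pseudo_count(0.1, 0.2))   # 0.8 "effective visits" to this state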
3.2 Estimating the Frequency of a Salient Event in Freeway
As an illustrative example, we employ our method to estimate the number of occurrences of an
infrequent event in the Atari 2600 video game Freeway (Figure 1, screenshot). We use the Arcade
Learning Environment (Bellemare et al., 2013). We will demonstrate the following:
1. Pseudo-counts are roughly zero for novel events,
2. they exhibit credible magnitudes,
3. they respect the ordering of state frequency,
4. they grow linearly (on average) with real counts,
5. they are robust in the presence of nonstationary data.
[Figure 1: pseudo-count curves over Frames (1000s) for the salient event and the start position, with shaded periods without salient events.]
Figure 1: Pseudo-counts obtained from a CTS density model applied to Freeway, along with a
frame representative of the salient event (crossing the road). Shaded areas depict periods during
which the agent observes the salient event, dotted lines interpolate across periods during which the
salient event is not observed. The reported values are 10,000-frame averages.
These properties suggest that pseudo-counts provide an appropriate generalized notion of visit
counts in non-tabular settings.
In Freeway, the agent must navigate a chicken across a busy road. As our example, we consider
estimating the number of times the chicken has reached the very top of the screen. As is the case for
many Atari 2600 games, this naturally salient event is associated with an increase in score, which
ALE translates into a positive reward. We may reasonably imagine that knowing how certain we
are about this part of the environment is useful. After crossing, the chicken is teleported back to the
bottom of the screen.
To highlight the robustness of our pseudo-count, we consider a nonstationary policy which waits for
250,000 frames, then applies the UP action for 250,000 frames, then waits, then goes UP again. The
salient event only occurs during UP periods. It also occurs with the cars in different positions, thus
requiring generalization. As a point of reference, we record the pseudo-counts for both the salient
event and visits to the chicken's start position.
We use a simplified, pixel-level version of the CTS model for Atari 2600 frames proposed by Bellemare et al. (2014), ignoring temporal dependencies. While the CTS model is rather impoverished in
comparison to state-of-the-art density models for images (e.g. Van den Oord et al., 2016), its count-based nature results in extremely fast learning, making it an appealing candidate for exploration.
Further details on the model may be found in the appendix.
Examining the pseudo-counts depicted in Figure 1 confirms that they exhibit the desirable properties
listed above. In particular, the pseudo-count is almost zero on the first occurrence of the salient event;
it increases slightly during the 3rd period, since the salient and reference events share some common
structure; throughout, it remains smaller than the reference pseudo-count. The linearity on average
and robustness to nonstationarity are immediate from the graph. Note, however, that the pseudo-counts are a fraction of the real visit counts (inasmuch as we can define "real"): by the end of the
trial, the start position has been visited about 140,000 times, and the topmost part of the screen, 1285
times. Furthermore, the ratio of recorded pseudo-counts differs from the ratio of real counts. Both
effects are quantifiable, as we shall show in Section 5.
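To make the mechanics concrete, here is a deliberately simplified stand-in for the paper's model: pixels treated as independent Laplace-smoothed count estimators. It is not the CTS model (which captures spatial context), but it supports the same density-model interface, including a recoding probability obtained by simulating one extra observation.

import numpy as np

class PixelDensityModel:
    """Each pixel modeled independently with Laplace-smoothed counts.
    `shape` is a tuple such as (42, 42); frames are integer arrays."""
    def __init__(self, shape, n_values=256, eps=1.0):
        self.counts = np.full(shape + (n_values,), eps)
        self.n = eps * n_values

    def _idx(self, frame):
        return tuple(np.indices(frame.shape)) + (frame,)

    def log_prob(self, frame):
        return np.log(self.counts[self._idx(frame)] / self.n).sum()

    def update(self, frame):
        self.counts[self._idx(frame)] += 1.0
        self.n += 1.0

    def recoding_log_prob(self, frame):   # probability after one more observation
        return np.log((self.counts[self._idx(frame)] + 1.0) / (self.n + 1.0)).sum()

model = PixelDensityModel((42, 42))
frame = np.zeros((42, 42), dtype=int)
model.update(frame)
print(model.log_prob(frame) < model.recoding_log_prob(frame))  # learning-positive here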
4 The Connection to Intrinsic Motivation
Having argued that pseudo-counts appropriately generalize visit counts, we will now show that they
are closely related to information gain, which is commonly used to quantify novelty or curiosity and
consequently as an intrinsic reward. Information gain is defined in relation to a mixture model ξ over
a class of density models M. This model predicts according to a weighted combination from M:
ξ_n(x) := ξ(x ; x_{1:n}) := ∫_{ρ ∈ M} w_n(ρ) ρ(x ; x_{1:n}) dρ,
with w_n(ρ) the posterior weight of ρ. This posterior is defined recursively, starting from a prior
distribution w_0 over M:
w_{n+1}(ρ) := w_n(ρ, x_{n+1}),    w_n(ρ, x) := w_n(ρ) ρ(x ; x_{1:n}) / ξ_n(x).    (3)
Information gain is then the Kullback-Leibler divergence from prior to posterior that results from
observing x:
IG_n(x) := IG(x ; x_{1:n}) := KL( w_n(·, x) ‖ w_n ).
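For a finite model class, Equation (3) and the information gain reduce to a few lines. This sketch is ours; "w" holds the current weights and "likelihoods" holds each model's probability of the newly observed state.

import numpy as np

def posterior_update(w, likelihoods):
    """Equation (3): w_{n+1}(rho) is proportional to w_n(rho) * rho(x; x_{1:n})."""
    w_new = w * likelihoods
    return w_new / w_new.sum()

def information_gain(w, likelihoods):
    w_new = posterior_update(w, likelihoods)
    return float(np.sum(w_new * np.log(w_new / w)))   # KL(w_n(., x) || w_n)

w = np.array([0.5, 0.5])          # prior over two models
lik = np.array([0.9, 0.1])        # each model's probability of the observed x
print(information_gain(w, lik))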
Computing the information gain of a complex density model is often impractical, if not downright
intractable. However, a quantity which we call the prediction gain provides us with a good approximation of the information gain. We define the prediction gain of a density model ρ (and in particular,
ξ) as the difference between the recoding log-probability and log-probability of x:
PG_n(x) := log ρ′_n(x) − log ρ_n(x).
Prediction gain is nonnegative if and only if ρ is learning-positive. It is related to the pseudo-count:
N̂_n(x) ≤ (e^{PG_n(x)} − 1)^{-1},
with equality when ρ′_n(x) → 0. As the following theorem shows, prediction gain allows us to relate
pseudo-count and information gain.
Theorem 1. Consider a sequence x_{1:n} ∈ X^n. Let ξ be a mixture model over a class of learning-positive models M. Let N̂_n be the pseudo-count derived from ξ (Equation 2). For this model,
IG_n(x) ≤ PG_n(x) ≤ N̂_n(x)^{-1}    and    PG_n(x) ≤ N̂_n(x)^{-1/2}.
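A quick numeric sanity check of the prediction-gain relation (the numbers below are arbitrary, chosen only to illustrate the near-equality for small recoding probabilities):

import math

rho, rho_prime = 0.01, 0.012               # an arbitrary learning-positive pair
pg = math.log(rho_prime / rho)             # prediction gain
n_hat = rho * (1 - rho_prime) / (rho_prime - rho)
print(pg, n_hat, 1.0 / (math.exp(pg) - 1.0))   # n_hat = 4.94 vs. bound 5.0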
Theorem 1 suggests that using an exploration bonus proportional to N̂_n(x)^{-1/2}, similar to the
MBIE-EB bonus, leads to a behaviour at least as exploratory as one derived from an information
gain bonus. Since pseudo-counts correspond to empirical counts in the tabular setting, this approach
also preserves known theoretical guarantees. In fact, we are confident pseudo-counts may be used
to prove similar results in non-tabular settings.
On the other hand, it may be difficult to provide theoretical guarantees about existing bonus-based
intrinsic motivation approaches. Kolter and Ng (2009) showed that no algorithm based on a bonus
upper bounded by β N_n(x)^{-1} for any β > 0 can guarantee PAC-MDP optimality. Again considering
the tabular setting and combining their result with Theorem 1, we conclude that bonuses proportional
to immediate information (or prediction) gain are insufficient for theoretically near-optimal exploration: to paraphrase Kolter and Ng, these methods produce too little exploration in comparison to
pseudo-count bonuses. By inspecting (2) we come to a similar negative conclusion for bonuses
proportional to the L1 or L2 distance between ξ′_n and ξ_n.
Unlike many intrinsic motivation algorithms, pseudo-counts also do not rely on learning a forward
(transition and/or reward) model. This point is especially important because a number of powerful
density models for images exist (Van den Oord et al., 2016), and because optimality guarantees
cannot in general exist for intrinsic motivation algorithms based on forward models.
5 Asymptotic Analysis
In this section we analyze the limiting behaviour of the ratio N̂_n / N_n. We use this analysis to assert
the consistency of pseudo-counts derived from tabular density models, i.e. models which maintain
per-state visit counts. In the appendix we use the same result to bound the approximation error of
pseudo-counts derived from directed graphical models, of which our CTS model is a special case.
Consider a fixed, infinite sequence x_1, x_2, ... from X. We define the limit of a sequence of functions
(f(x ; x_{1:n}) : n ∈ N) with respect to the length n of the subsequence x_{1:n}. We additionally assume
that the empirical distribution μ_n converges pointwise to a distribution μ, and write μ′_n(x) for the
recoding probability of x under μ_n. We begin with two assumptions on our density model.
Assumption 1. The limits
(a) r(x) := lim_{n→∞} ρ_n(x) / μ_n(x)    and    (b) ṙ(x) := lim_{n→∞} (ρ′_n(x) − ρ_n(x)) / (μ′_n(x) − μ_n(x))
exist for all x; furthermore, ṙ(x) > 0.
Assumption (a) states that ρ should eventually assign a probability to x proportional to the limiting
empirical distribution μ(x). In particular there must be a state x for which r(x) < 1, unless ρ_n → μ.
Assumption (b), on the other hand, imposes a restriction on the learning rate of ρ relative to μ's. As
both r(x) and μ(x) exist, Assumption 1 also implies that ρ_n(x) and ρ′_n(x) have a common limit.
Theorem 2. Under Assumption 1, the limit of the ratio of pseudo-counts N̂_n(x) to empirical counts
N_n(x) exists for all x. This limit is
lim_{n→∞} N̂_n(x) / N_n(x) = (r(x) / ṙ(x)) · (1 − μ(x) r(x)) / (1 − μ(x)).
The model's relative rate of change, whose convergence to ṙ(x) we require, plays an essential role
in the ratio of pseudo- to empirical counts. To see this, consider a sequence (x_n : n ∈ N) generated
i.i.d. from a distribution μ over a finite state space, and a density model defined from a sequence of
nonincreasing step-sizes (α_n : n ∈ N):
ρ_n(x) = (1 − α_n) ρ_{n−1}(x) + α_n I{x_n = x},
with initial condition ρ_0(x) = |X|^{-1}. For α_n = n^{-1}, this density model is the empirical distribution. For α_n = n^{-2/3}, we may appeal to well-known results from stochastic approximation (e.g.
Bertsekas and Tsitsiklis, 1996) and find that, almost surely,
lim_{n→∞} ρ_n(x) = μ(x)    but    lim_{n→∞} (ρ′_n(x) − ρ_n(x)) / (μ′_n(x) − μ_n(x)) = ∞.
Since μ′_n(x) − μ_n(x) = n^{-1}(1 − μ′_n(x)), we may think of Assumption 1(b) as also requiring ρ
to converge at a rate of Θ(1/n) for a comparison with the empirical count N_n to be meaningful.
Note, however, that a density model that does not satisfy Assumption 1(b) may still yield useful (but
incommensurable) pseudo-counts.
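The step-size example is easy to simulate. This sketch (our own, with illustrative constants) shows the n^{-2/3} model converging to μ even though its per-step change decays too slowly for Assumption 1(b).

import numpy as np

rng = np.random.default_rng(1)
mu = np.array([0.7, 0.3])
rho = np.full(2, 0.5)                      # rho_0(x) = 1/|X|
for n in range(1, 10001):
    x = rng.choice(2, p=mu)
    alpha = n ** (-2.0 / 3.0)              # compare with alpha = 1/n
    onehot = np.zeros(2)
    onehot[x] = 1.0
    rho = (1 - alpha) * rho + alpha * onehot
print(rho)   # close to mu, yet its per-step change decays like n^(-2/3), not 1/n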
Corollary 1. Let φ(x) > 0 with Σ_{x∈X} φ(x) < ∞ and consider the count-based estimator
ρ_n(x) = (N_n(x) + φ(x)) / (n + Σ_{x′∈X} φ(x′)).
If N̂_n is the pseudo-count corresponding to ρ_n then N̂_n(x) / N_n(x) → 1 for all x with μ(x) > 0.
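Corollary 1 can likewise be checked numerically; for φ ≡ 1 the pseudo-count works out to exactly N_n(x) + 1, so the ratio tends to 1. The simulation below is our own illustration.

import numpy as np

rng = np.random.default_rng(2)
mu = np.array([0.5, 0.3, 0.2])
phi = np.ones(3)                                        # phi(x) = 1 everywhere
n = 200000
counts = np.bincount(rng.choice(3, p=mu, size=n), minlength=3).astype(float)
rho = (counts + phi) / (n + phi.sum())
rho_prime = (counts + phi + 1) / (n + phi.sum() + 1)    # after one more x, per state
n_hat = rho * (1 - rho_prime) / (rho_prime - rho)       # works out to counts + 1 here
print(n_hat / counts)                                   # every entry close to 1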
6 Empirical Evaluation
In this section we demonstrate the use of pseudo-counts to guide exploration. We return to the
Arcade Learning Environment, now using the CTS model to generate an exploration bonus.
6.1 Exploration in Hard Atari 2600 Games
From 60 games available through the Arcade Learning Environment we selected five "hard" games,
in the sense that an ε-greedy policy is inefficient at exploring them. We used a bonus of the form
R_n^+(x, a) := β (N̂_n(x) + 0.01)^{-1/2},    (4)
where β = 0.05 was selected from a coarse parameter sweep. We also compared our method to the
optimistic initialization trick proposed by Machado et al. (2015). We trained our agents' Q-functions
with Double DQN (van Hasselt et al., 2016), with one important modification: we mixed the Double
Q-Learning target with the Monte Carlo return. This modification led to improved results both with
and without exploration bonuses (details in the appendix).
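As a hedged sketch (not the paper's DQN pipeline), the bonus of Equation (4) can be wired into a generic agent loop as follows; density_model is assumed to expose the log_prob/update interface sketched in Section 3.2's example, and one model update doubles as the recoding step.

import math

beta = 0.05

def augmented_reward(reward, density_model, state):
    rho = math.exp(density_model.log_prob(state))
    density_model.update(state)
    rho_prime = math.exp(density_model.log_prob(state))
    n_hat = rho * (1.0 - rho_prime) / max(rho_prime - rho, 1e-12)  # Equation (2)
    return reward + beta * (n_hat + 0.01) ** -0.5                  # Equation (4)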
Figure 2 depicts the result of our experiment, averaged across 5 trials. Although optimistic initialization helps in Freeway, it otherwise yields performance similar to DQN. By contrast, the
count-based exploration bonus enables us to make quick progress on a number of games, most dramatically in Montezuma's Revenge and Venture.
[Figure 2: training score curves for Freeway, Venture, H.E.R.O., Private Eye, and Montezuma's Revenge; y-axis: Score, x-axis: Training frames (millions).]
Figure 2: Average training score with and without exploration bonus or optimistic initialization in 5
Atari 2600 games. Shaded areas denote inter-quartile range, dotted lines show min/max scores.
[Figure 3: explored-room maps labeled "No bonus" and "With bonus".]
Figure 3: "Known world" of a DQN agent trained for 50 million frames with (right) and without
(left) count-based exploration bonuses, in Montezuma's Revenge.
Montezuma's Revenge is perhaps the hardest Atari 2600 game available through the ALE. The
game is infamous for its hostile, unforgiving environment: the agent must navigate a number of
different rooms, each filled with traps. Due to its sparse reward function, most published agents
achieve an average score close to zero and completely fail to explore most of the 24 rooms that
constitute the first level (Figure 3, top). By contrast, within 50 million frames our agent learns a
policy which consistently navigates through 15 rooms (Figure 3, bottom). Our agent also achieves a
score higher than anything previously reported, with one run consistently achieving 6600 points by
100 million frames (half the training samples used by Mnih et al. (2015)). We believe the success of
our method in this game is a strong indicator of the usefulness of pseudo-counts for exploration.
Footnote 1: A video of our agent playing is available at https://youtu.be/0yI2wJ6F8r0.
6.2 Exploration for Actor-Critic Methods
We next used our exploration bonuses in conjunction with the A3C (Asynchronous Advantage
Actor-Critic) algorithm of Mnih et al. (2016). One appeal of actor-critic methods is their explicit
separation of policy and Q-function parameters, which leads to a richer behaviour space. This very
separation, however, often leads to deficient exploration: to produce any sensible results, the A3C
policy must be regularized with an entropy cost. We trained A3C on 60 Atari 2600 games, with and
without the exploration bonus (4). We refer to our augmented algorithm as A3C+. Full details and
additional results may be found in the appendix.
We found that A3C fails to learn in 15 games, in the sense that the agent does not achieve a score
50% better than random. In comparison, there are only 10 games for which A3C+ fails to improve on
the random agent; of these, 8 are games where DQN fails in the same sense. We normalized the two
algorithms' scores so that 0 and 1 are respectively the minimum and maximum of the random agent's
and A3C's end-of-training score on a particular game. Figure 4 depicts the in-training median score
for A3C and A3C+, along with 1st and 3rd quartile intervals. Not only does A3C+ achieve slightly
superior median performance, but it also significantly outperforms A3C on at least a quarter of the
games. This is particularly important given the large proportion of Atari 2600 games for which an
ε-greedy policy is sufficient for exploration.
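The normalization described above reads, in code (names are illustrative, not from the paper):

def normalize(score, random_score, a3c_end_score):
    """Map the random agent's score to 0 and A3C's end-of-training score to 1."""
    return (score - random_score) / (a3c_end_score - random_score)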
[Figure 4: "A3C+ Performance Across Games"; y-axis: Baseline score, x-axis: Training frames (millions).]
Figure 4: Median and interquartile performance across 60 Atari 2600 games for A3C and A3C+.
7 Related Work
Information-theoretic quantities have been repeatedly used to describe intrinsically motivated behaviour. Closely related to prediction gain is Schmidhuber (1991)'s notion of compression progress,
which equates novelty with an agent's improvement in its ability to compress its past. More recently,
Lopes et al. (2012) showed the relationship between time-averaged prediction gain and visit counts
in a tabular setting; their result is a special case of Theorem 2. Orseau et al. (2013) demonstrated
that maximizing the sum of future information gains does lead to optimal behaviour, even though
maximizing immediate information gain does not (Section 4). Finally, there may be a connection between sequential normalized maximum likelihood estimators and our pseudo-count derivation (see
e.g. Ollivier, 2015).
Intrinsic motivation has also been studied in reinforcement learning proper, in particular in the context of discovering skills (Singh et al., 2004; Barto, 2013). Recently, Stadie et al. (2015) used a
squared prediction error bonus for exploring in Atari 2600 games. Closest to our work is Houthooft
et al. (2016)'s variational approach to intrinsic motivation, which is equivalent to a second order Taylor approximation to prediction gain. Mohamed and Rezende (2015) also considered a variational
approach to the different problem of maximizing an agent's ability to influence its environment.
Aside from Orseau et al.'s above-cited work, it is only recently that theoretical guarantees for exploration have emerged for non-tabular, stateful settings. We note Pazis and Parr (2016)'s PAC-MDP
result for metric spaces and Leike et al. (2016)'s asymptotic analysis of Thompson sampling in
general environments.
8 Future Directions
The last few years have seen tremendous advances in learning representations for reinforcement
learning. Surprisingly, these advances have yet to carry over to the problem of exploration. In this
paper, we reconciled counts, the fundamental unit of uncertainty, with prediction-based heuristics
and intrinsic motivation. Combining our work with more ideas from deep learning and better density
models seems a plausible avenue for quick progress in practical, efficient exploration. We now
conclude by outlining a few research directions we believe are promising.
Induced metric. We did not address the question of where the generalization comes from. Clearly,
the choice of density model induces a particular metric over the state space. A better understanding
of this metric should allow us to tailor the density model to the problem of exploration.
Compatible value function. There may be a mismatch in the learning rates of the density model
and the value function: DQN learns much more slowly than our CTS model. As such, it should be
beneficial to design value functions compatible with density models (or vice-versa).
The continuous case. Although we focused here on countable state spaces, we can as easily define a
pseudo-count in terms of probability density functions. At present it is unclear whether this provides
us with the right notion of counts for continuous spaces.
Acknowledgments
The authors would like to thank Laurent Orseau, Alex Graves, Joel Veness, Charles Blundell, Shakir
Mohamed, Ivo Danihelka, Ian Osband, Matt Hoffman, Greg Wayne, Will Dabney, and Aäron van
den Oord for their excellent feedback early and late in the writing, and Pierre-Yves Oudeyer and
Yann Ollivier for pointing out additional connections to the literature.
References
Barto, A. G. (2013). Intrinsic motivation and reinforcement learning. In Intrinsically Motivated Learning in
Natural and Artificial Systems, pages 17-47. Springer.
Bellemare, M., Veness, J., and Talvitie, E. (2014). Skip context tree switching. In Proceedings of the 31st
International Conference on Machine Learning, pages 1458-1466.
Bellemare, M. G., Naddaf, Y., Veness, J., and Bowling, M. (2013). The Arcade Learning Environment: An
evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253-279.
Bertsekas, D. P. and Tsitsiklis, J. N. (1996). Neuro-Dynamic Programming. Athena Scientific.
Cover, T. M. and Thomas, J. A. (1991). Elements of information theory. John Wiley & Sons.
Houthooft, R., Chen, X., Duan, Y., Schulman, J., De Turck, F., and Abbeel, P. (2016). Variational information
maximizing exploration.
Hutter, M. (2013). Sparse adaptive dirichlet-multinomial-like processes. In Proceedings of the Conference on
Online Learning Theory.
Kolter, Z. J. and Ng, A. Y. (2009). Near-bayesian exploration in polynomial time. In Proceedings of the 26th
International Conference on Machine Learning.
Leike, J., Lattimore, T., Orseau, L., and Hutter, M. (2016). Thompson sampling is asymptotically optimal in
general environments. In Proceedings of the Conference on Uncertainty in Artificial Intelligence.
Lopes, M., Lang, T., Toussaint, M., and Oudeyer, P.-Y. (2012). Exploration in model-based reinforcement
learning by empirically estimating learning progress. In Advances in Neural Information Processing Systems
25.
Machado, M. C., Srinivasan, S., and Bowling, M. (2015). Domain-independent optimistic initialization for
reinforcement learning. AAAI Workshop on Learning for General Competency in Video Games.
Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T. P., Harley, T., Silver, D., and Kavukcuoglu, K.
(2016). Asynchronous methods for deep reinforcement learning. In Proceedings of the International Conference on Machine Learning.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M.,
Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning.
Nature, 518(7540):529-533.
Mohamed, S. and Rezende, D. J. (2015). Variational information maximisation for intrinsically motivated
reinforcement learning. In Advances in Neural Information Processing Systems 28.
Ollivier, Y. (2015). Laplace's rule of succession in information geometry. arXiv preprint arXiv:1503.04304.
Orseau, L., Lattimore, T., and Hutter, M. (2013). Universal knowledge-seeking agents for stochastic environments. In Proceedings of the Conference on Algorithmic Learning Theory.
Oudeyer, P., Kaplan, F., and Hafner, V. (2007). Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11(2):265-286.
Pazis, J. and Parr, R. (2016). Efficient PAC-optimal exploration in concurrent, continuous state MDPs with
delayed updates. In Proceedings of the 30th AAAI Conference on Artificial Intelligence.
Schmidhuber, J. (1991). A possibility for implementing curiosity and boredom in model-building neural controllers. In From animals to animats: proceedings of the first international conference on simulation of
adaptive behavior.
Singh, S., Barto, A. G., and Chentanez, N. (2004). Intrinsically motivated reinforcement learning. In Advances
in Neural Information Processing Systems 16.
Stadie, B. C., Levine, S., and Abbeel, P. (2015). Incentivizing exploration in reinforcement learning with deep
predictive models. arXiv preprint arXiv:1507.00814.
Strehl, A. L. and Littman, M. L. (2008). An analysis of model-based interval estimation for Markov decision
processes. Journal of Computer and System Sciences, 74(8):1309-1331.
Van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. (2016). Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning.
van Hasselt, H., Guez, A., and Silver, D. (2016). Deep reinforcement learning with double Q-learning. In
Proceedings of the 30th AAAI Conference on Artificial Intelligence.
5,951 | 6,384 | Causal meets Submodular: Subset Selection with Directed Information
Yuxun Zhou
Department of EECS
UC Berkeley
[email protected]

Costas J. Spanos
Department of EECS
UC Berkeley
[email protected]
Abstract
We study causal subset selection with Directed Information as the measure of
prediction causality. Two typical tasks, causal sensor placement and covariate
selection, are correspondingly formulated into cardinality constrained directed
information maximizations. To attack the NP-hard problems, we show that the first
problem is submodular while not necessarily monotonic. And the second one is
"nearly" submodular. To substantiate the idea of approximate submodularity, we
introduce a novel quantity, namely the submodularity index (SmI), for general set functions. Moreover, we show that, based on SmI, the greedy algorithm has a performance
guarantee for the maximization of possibly non-monotonic and non-submodular
functions, justifying its usage for a much broader class of problems. We evaluate
the theoretical results with several case studies, and also illustrate the application
of the subset selection to causal structure learning.
1 Introduction
A wide variety of research disciplines, including computer science, economics, biology and social
science, involve causality analysis of a network of interacting random processes. In particular, many
of those tasks are closely related to subset selection. For example, in sensor network applications,
with a limited budget it is necessary to place sensors at information "sources" that provide the best
observability of the system. To better predict a stock under consideration, investors need to select
causal covariates from a pool of candidate information streams. We refer to the first type of problems
as "causal sensor placement", and the second one as "causal covariate selection".
To solve the aforementioned problems we firstly need a causality measure for multiple random
processes. In the literature, there exist two types of causality definitions: one related to time
series prediction (called Granger-type causality) and the other to counter-factuals [18]. We focus
on Granger-type prediction causality substantiated with Directed Information (DI), a tool from
information theory. Recently, a large body of work has successfully employed DI in many research
fields, including influence mining in gene networks [14], causal relationship inference in neural spike
train recordings [19], and message transmission analysis in social media [23]. Compared to model-based or testing-based methods such as [2][21], DI is not limited by model assumptions and can
naturally capture non-linear and non-stationary dependence among random processes. In addition, it
has a clear information theoretic interpretation and admits well-established estimation techniques. In
this regard, we formulate causal sensor placement and covariate selection into cardinality constrained
directed information maximization problems.
We then need an efficient algorithm that makes optimal subset selection. Although subset selection, in
general, is not tractable due to its combinatorial nature, the study of greedy heuristics for submodular
objectives has shown promising results in both theory and practice. To list a few, following the
pioneering work [8] that proves the near optimal $1 - 1/e$ guarantee, [12] [1] investigate the
submodularity of mutual information under a Gaussian assumption, and then use a greedy algorithm for sensor
placement. In the context of speech and Natural Language Processing (NLP), the authors of [13]
adopt submodular objectives that encourage a small vocabulary subset and broad coverage, and then
proceed to maximization with a modified greedy method. In [3], the authors combine insights from
spectral theory and submodularity analysis of R2 score, and their result remarkably explains the near
optimal performance of forward regression and orthogonal matching pursuit.
In this work, we also attack the causal subset selection problem via submodularity analysis. We show
that the objective function of causal sensor placement, i.e., DI from selected set to its complement,
is submodular, although not monotonic. And the problem of causal covariates selection, i.e., DI
from selected set to some target process, is not submodular in general but is "nearly" submodular in
particular cases. Since classic results require strict submodularity and monotonicity, which cannot
be established for our purpose, we propose a novel measure of the degree of submodularity and show
that the performance guarantee of greedy algorithms can be obtained for possibly non-monotonic
and non-submodular functions. Our contributions are: (1) Two important causal subset selection
objectives are formulated with directed information and the corresponding submodularity analyses
are conducted. (2) The SmI-dependent performance bound implies that submodularity is better
characterized by a continuous indicator than being used as a "yes or no" property, which extends the
application of greedy algorithms to a much broader class of problems.
The rest of the paper is organized as follows. In the next section, we briefly review the notion of directed
information and submodular functions. Section 3 is devoted to problem formulation and submodularity
analysis. In Section 4, we introduce SmI and provide theoretical results on the performance guarantee of
random and deterministic greedy algorithms. Finally, in Section 5, we conduct experiments to justify
our theoretical findings and illustrate a causal structure learning application.
2 Preliminary
Directed Information Consider two random processes $X^n$ and $Y^n$; we use the convention that
$X^i = \{X_0, X_1, \ldots, X_i\}$, with $t = 0, 1, \ldots, n$ as the time index. Directed information from $X^n$ to $Y^n$
is defined in terms of mutual information:

$$I(X^n \to Y^n) = \sum_{t=1}^{n} I(X^t; Y_t \mid Y^{t-1}) \qquad (1)$$
which can be viewed as the aggregated dependence between the history of process X and the current
value of process Y, given past observations of Y. The above definition captures a natural intuition
about causal relationship, i.e., the unique information $X^t$ has on $Y_t$ when the past $Y^{t-1}$ is known.
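To make Eq. (1) concrete, the sketch below estimates a per-step directed-information rate for two binary sequences using plug-in probabilities with histories truncated to order 1. The truncation, the binary alphabet, and the toy data are all illustrative assumptions made for brevity; the experiments later in this paper use the context tree weighting estimator of [9] instead.

```python
# A minimal plug-in sketch of Eq. (1) with order-1 histories (an assumption made
# for brevity): estimates I(X_{t-1}; Y_t | Y_{t-1}), a per-step DI rate in nats.
import numpy as np
from collections import Counter

def plug_in_di_rate(x, y):
    joint = Counter()                      # counts of (y_{t-1}, x_{t-1}, y_t)
    for t in range(1, len(x)):
        joint[(y[t - 1], x[t - 1], y[t])] += 1
    total = sum(joint.values())
    rate = 0.0
    for (a, b, c), cnt in joint.items():
        p_abc = cnt / total
        p_ab = sum(v for (a2, b2, _), v in joint.items() if (a2, b2) == (a, b)) / total
        p_ac = sum(v for (a2, _, c2), v in joint.items() if (a2, c2) == (a, c)) / total
        p_a = sum(v for (a2, _, _), v in joint.items() if a2 == a) / total
        # weight p(a,b,c) times log of p(c|a,b) / p(c|a)
        rate += p_abc * np.log((p_abc / p_ab) / (p_ac / p_a))
    return rate

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) ^ (rng.random(5000) < 0.1)   # Y_t is a noisy copy of X_{t-1}
print(plug_in_di_rate(x, y))                    # clearly positive
print(plug_in_di_rate(y, x))                    # near zero: no reverse causation
```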
With causally conditioned entropy defined by $H(Y^n \| X^n) \triangleq \sum_{t=1}^{n} H(Y_t \mid Y^{t-1}, X^t)$, the directed
information from $X^n$ to $Y^n$ when causally conditioned on the series $Z^n$ can be written as

$$I(X^n \to Y^n \| Z^n) = H(Y^n \| Z^n) - H(Y^n \| X^n, Z^n) = \sum_{t=1}^{n} I(X^t; Y_t \mid Y^{t-1}, Z^t) \qquad (2)$$
Observe that causally conditioned directed information is expressed as the difference between two
causally conditioned entropies, which can be considered as "causal uncertainty reduction". With
this interpretation one is able to relate directed information to Granger causality. Denote $\bar{X}$ as the
complement of X in a universal set V; then,
Theorem 1 [20] With log loss, $I(X^n \to Y^n \| \bar{X}^t)$ is precisely the value of the side information
(expected cumulative reduction in loss) that X has, when sequentially predicting Y with the knowledge
of $\bar{X}$. The predictors are distributions with minimal expected loss.
In particular, with linear models directed information is equivalent to Granger causality for jointly
Gaussian processes.
Submodular Function There are three equivalent definitions of submodular functions, and each
of them reveals a distinct character of submodularity, a diminishing return property that universally
exists in economics, game theory and network systems.
Definition 1 A submodular function is a set function $f: 2^\Omega \to \mathbb{R}$ which satisfies one of the three
equivalent definitions:
(1) For every $S, T \subseteq \Omega$ with $S \subseteq T$, and every $x \in \Omega \setminus T$, we have that
$$f(S \cup \{x\}) - f(S) \ge f(T \cup \{x\}) - f(T) \qquad (3)$$
(2) For every $S, T \subseteq \Omega$, we have that
$$f(S) + f(T) \ge f(S \cup T) + f(S \cap T) \qquad (4)$$
(3) For every $S \subseteq \Omega$, and $x_1, x_2 \in \Omega \setminus S$, we have that
$$f(S \cup \{x_1\}) + f(S \cup \{x_2\}) \ge f(S \cup \{x_1, x_2\}) + f(S) \qquad (5)$$
A set function f is called supermodular if $-f$ is submodular. The first definition is directly related
to the diminishing return property. The second definition is better understood with the classic
max k-cover problem [4]. The third definition indicates that the contribution of two elements is
maximized when they are added individually to the base set. Throughout this paper, we will denote
$f_x(S) \triangleq f(S \cup x) - f(S)$ as the "first order derivative" of f at base set S for further analysis.
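As a quick numerical illustration of these definitions (the max-cover objective and the brute-force check are our own toy choices, not part of the paper), the sketch below computes the first order derivative $f_x(S)$ and verifies the third characterization, Eq. (5), exhaustively:

```python
# Verify Definition 1(3) (Eq. 5, rearranged) for a small max-cover objective.
from itertools import combinations

sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}, 4: {"a", "d"}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0

def f_x(x, S):
    """First order derivative f_x(S) = f(S | {x}) - f(S)."""
    return f(S | {x}) - f(S)

V = set(sets)
ok = all(
    f_x(x1, S) + f_x(x2, S) >= f(S | {x1, x2}) - f(S)   # Eq. (5), rearranged
    for r in range(len(V))
    for S in (set(c) for c in combinations(sorted(V), r))
    for x1, x2 in combinations(sorted(V - S), 2)
)
print("max-cover satisfies Eq. (5):", ok)               # True: coverage is submodular
```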
3 Problem Formulation and Submodularity Analysis
In this section, we first formulate two typical subset selection problems into cardinality constrained
directed information maximization. Then we address the issues of submodularity and monotonicity
in detail. All proofs involved in this and the other sections are given in the supplementary material.
Causal Sensor Placement and Covariates Selection by Maximizing DI To motivate the first
formulation, imagine we are interested in placing sensors to monitor pollution particles in a vast
region. Ideally, we would like to put k sensors, which is a given budget, at pollution sources to better
predict the particle dynamics for other areas of interest. As such, the placement locations can be
obtained by maximizing the directed information from the selected location set S to its complement $\bar{S}$
(in the universal set V that contains all candidate sites). Then this type of "causal sensor placement"
problems can be written as

$$\max_{S \subseteq V,\, |S| \le k} I(S^n \to \bar{S}^n) \qquad \text{(OPT1)}$$
Regarding the causal covariates selection problem, the goal is to choose a subset S from a universal
set V , such that S has maximal prediction causality to a (or several) target process Y . To leverage
sparsity, the cardinality constraint $|S| \le k$ is also imposed on the number of selected covariates.
Again with directed information, this type of subset selection problems reads
$$\max_{S \subseteq V,\, |S| \le k} I(S^n \to Y^n) \qquad \text{(OPT2)}$$
The above two optimizations are hard even in the most reduced cases: Consider a collection of
causally independent Gaussian processes, then the above problems are equivalent to the D-optimal
design problem, which has been shown to be NP-hard [11]. Unless "P = NP", it is unlikely to find
any polynomial algorithm for the maximization, and a resort to tractable approximations is necessary.
Submodularity Analysis of the Two Objectives
Fortunately, we can show that the objective
function of OPT1, the directed information from selected processes to unselected ones, is submodular.
Theorem 2 The objective $I(S^n \to \bar{S}^n)$ as a function of $S \subseteq V$ is submodular.
The problem is that OPT1 is not monotonic for all S, which can be seen since both $I(\emptyset \to V)$ and
$I(V \to \emptyset)$ are 0 by definition. On the other hand, the deterministic greedy algorithm has guaranteed
performance only when the objective function is monotonic up to 2k elements. In the literature, several
works have addressed the issue of maximizing non-monotonic submodular functions [6][7][17].
In this work we mainly analyze the random greedy technique proposed in [17], which is simpler
compared to other alternatives and achieves best-known guarantees.
Concerning the second objective OPT2, we make a slight detour and take a look at the property of its
"first derivative".
Proposition 1 $f_x(S) = I(S^n \cup x^n \to Y^n) - I(S^n \to Y^n) = I(x^n \to Y^n \| S^n)$
Thus, the derivative is the directed information from processes x to Y causally conditioned on S.
By the first definition of submodularity, if the derivative is decreasing in S, i.e. if $f_x(S) \ge f_x(T)$
for any $S \subseteq T \subseteq V$ and $x \in V \setminus T$, then the objective $I(S^n \to Y^n)$ is a submodular function.
Intuition may suggest this is true since knowing more (conditioning on a larger set) seems to reduce
the dependence (and also the causality) of two phenomena under consideration. However, in general,
this conjecture is not correct, and a counterexample could be constructed by having "explaining away"
processes. Hence the difficulty encountered in solving OPT2 is that the objective is not submodular.
Note that with some extra conditional independence assumptions we can justify its submodularity,
Proposition 2 If for any two processes $s_1, s_2 \in S$, we have the conditional independence that
$(s_{1t} \perp s_{2t} \mid Y_t)$, then $I(S^n \to Y^n)$ is a monotonic submodular function of set S.
In practice, the assumption made in the above proposition is hard to check. Yet one may wonder that
if the conditional dependence is weak or sparse, possibly a greedy selection still works to some extent
because the submodularity is not severely deteriorated. Extending this idea we propose Submodularity
Index (SmI), a novel measure of the degree of submodularity for general set functions, and we will
provide the performance guarantee of greedy algorithms as a function of SmI.
4 Submodularity Index and Performance Guarantee
For the ease of notation, we use f to denote a general set function and treat directed information
objectives as special realizations. It's worth mentioning that in the literature, several efforts have already
been made to characterize approximate submodularity, such as the $\epsilon$ relaxation of definition (3)
proposed in [5] for a dictionary selection objective, and the submodular ratio proposed in [3].
Compared to existing works, the SmI suggested in this work (1) is more generally defined for all set
functions, (2) does not presume monotonicity, and (3) is more suitable for tasks involving information,
influence, and coverage metrics in terms of computational convenience.
SmI Definition and its Properties We start by defining the local submodular index for a function
f at location A for a candidate set S:

$$\varphi_f(S, A) \triangleq \sum_{x \in S} f_x(A) - f_S(A) \qquad (6)$$
which can be considered as an extension of the third definition (5) of submodularity. In essence,
it captures the difference between the sum of individual effect and aggregated effect on the first
derivative of the function. Moreover, it has the following property:
Proposition 3 For a given submodular function f, the local submodular index $\varphi_f(S, A)$ is a supermodular function of S.
Now we define SmI by minimizing over set variables:
Definition 2 For a set function $f: 2^V \to \mathbb{R}$, the submodularity index (SmI) for a location set L and
a cardinality k, denoted by $\lambda_f(L, k)$, is defined as

$$\lambda_f(L, k) \triangleq \min_{A \subseteq L,\; S \cap A = \emptyset,\; |S| \le k} \varphi_f(S, A) \qquad (7)$$

Thus, SmI is the smallest possible value of local submodular indexes subject to $|S| \le k$. Note
that we implicitly assume $|S| \ge 2$ in the above definition, as in the cases where $|S| \in \{0, 1\}$, SmI
reduces to 0. Besides, the definition of submodularity can be alternatively posed with SmI:
Lemma 1 A set function f is submodular if and only if $\lambda_f(L, k) \ge 0$ for all $L \subseteq V$ and k.
For functions that are already submodular, SmI measures how strong the submodularity is. We call a
function super-submodular if its SmI is strictly larger than zero. On the other hand for functions that
are not submodular, SmI provides an indicator of how close the function is to submodular. We call a
function quasi-submodular if it has a negative but close to zero SmI.
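For small instances the local index of Eq. (6) can be evaluated directly, as in the sketch below, which reuses the max-cover objective f defined in the earlier sketch; the exact SmI of Eq. (7), in contrast, minimizes over exponentially many pairs (S, A), which motivates the bounds developed next.

```python
def local_smi(f, S, A):
    """Eq. (6): sum of individual gains minus the aggregated gain at base set A."""
    gain = lambda X: f(A | X) - f(A)
    return sum(gain({x}) for x in S) - gain(set(S))

# With the submodular max-cover f from the earlier sketch this is non-negative,
# consistent with Lemma 1.
print(local_smi(f, {1, 2}, {3}))   # prints 1
```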
Direct computation of SmI by solving (7) is hard. For the purpose of obtaining performance guarantee,
however, a lower bound of SmI is sufficient and is much easier to compute. Consider the objective of
(OPT1), which is already a submodular function. By using proposition 3, we conclude that its local
submodular index is a super-modular function for fixed location set. Hence computing (7) becomes
a cardinality constrained supermodular minimization problem for each location set. Besides, the
following decomposition is useful to avoid extra work on directed information estimation:
Proposition 4 The local submodular index of the function $I(\{\cdot\}^n \to \{V \setminus \cdot\}^n)$ can be decomposed
as $\varphi_{I(\{\cdot\}^n \to \{V \setminus \cdot\}^n)}(S^n, A^n) = \varphi_{H(\{V \setminus \cdot\}^n)}(S^n, A^n) + \sum_{t=1}^{n} \varphi_{H(\{\cdot\} \mid V^{t-1})}(S_t, A_t)$, where $H(\cdot)$
is the entropy function.
The lower bound of SmI for the objective of OPT2 is more involved. With some work on an alternative
representation of causally conditioned directed information, we obtain that
Lemma 2 For any location sets $L \subseteq V$, cardinality k, and target process set Y, we have

$$\lambda_{I(\{\cdot\}^n \to Y^n)}(L, k) \ge \min_{W \subseteq V,\, |W| \le |L|+k} \sum_{t=1}^{n} \left[ G_{|L|+k}(W^t, Y^{t-1}) - G_{|L|+k}(W^t, Y^t) \right] \qquad (8)$$
$$\ge -\max_{W \subseteq V,\, |W| \le |L|+k} I(W^n \to Y^n) \ge -I(V^n \to Y^n) \qquad (9)$$

where the function $G_k(W, Z) \triangleq \sum_{w \in W} H(w \mid Z) - kH(W \mid Z)$ is super-modular of W.
Since (8) is in fact minimizing (maximizing) the difference of two supermodular (submodular)
functions, one can use existing approximate or exact algorithms [10] [16] to compute the lower bound.
(9) is often a weak lower bound, although it is much easier to compute.
Random Greedy Algorithm and Performance Bound with SmI With the introduction of SmI,
in this subsection, we analyze the performance of the random greedy algorithm for maximizing
non-monotonic, quasi- or super-submodular functions in a unified framework. The results broaden the
theoretical guarantee for a much richer class of functions.
The randomized greedy algorithm was recently proposed in [17] [22] for maximizing cardinality
constrained non-monotonic submodular functions. Also in [17], a 1/e expected performance
bound was provided. The overall procedure is summarized in Algorithm 1 for reference. Note
that the random greedy algorithm only requires $O(k|V|)$ calls of the function evaluation, making
it suitable for large-scale problems.

Algorithm 1 Random Greedy for Subset Selection
  $S_0 \leftarrow \emptyset$
  for i = 1, ..., k do
    $M_i = \arg\max_{M_i \subseteq V \setminus S_{i-1},\, |M_i| = k} \sum_{u \in M_i} f_u(S_{i-1})$
    Draw $u_i$ uniformly from $M_i$
    $S_i \leftarrow S_{i-1} \cup \{u_i\}$
  end for
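A minimal Python transcription of Algorithm 1 is sketched below. The coverage objective in the usage example is an illustrative stand-in for the directed-information objectives, and when fewer than k candidates remain the sketch simply truncates $M_i$:

```python
import random

def random_greedy(f, ground, k, rng=random.Random(0)):
    """Algorithm 1: at each step, draw uniformly among the k best marginal gains."""
    S = set()
    for _ in range(k):
        candidates = [u for u in ground if u not in S]
        # rank by f_u(S_{i-1}) = f(S | {u}) - f(S); keep the top k as M_i
        ranked = sorted(candidates, key=lambda u: f(S | {u}) - f(S), reverse=True)
        S.add(rng.choice(ranked[:k]))      # u_i ~ Uniform(M_i)
    return S

# toy usage with a coverage objective (submodular and monotonic)
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f_cov = lambda S: len(set().union(*(cover[i] for i in S))) if S else 0
print(random_greedy(f_cov, set(cover), k=2))
```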
In order to analyze the performance of the algorithm, we start with two lemmas that reveal more
properties of SmI. The first lemma shows that the monotonicity of the first derivative of a general set
function f could be controlled by its SmI.
Lemma 3 Given a set function $f: 2^V \to \mathbb{R}$ and the corresponding SmI $\lambda_f(L, k)$ defined in
(7), let set $B = A \cup \{y_1, \ldots, y_M\}$ and $x \notin B$. For an ordering $\{j_1, \ldots, j_M\}$, define
$B_m = A \cup \{y_{j_1}, \ldots, y_{j_m}\}$, $B_0 = A$, $B_M = B$; we have

$$f_x(A) - f_x(B) \ge \max_{\{j_1, \ldots, j_M\}} \sum_{m=0}^{M-1} \lambda_f(B_m, 2) \ge M \lambda_f(B, 2) \qquad (10)$$
Essentially, the above result implies that as long as SmI can be lower bounded by some small negative
number, the submodularity (the decreasing derivative property (3) in Definition 1) is not severely
degraded. The second lemma provides an SmI dependent bound on the expected value of a function
with random arguments.
Lemma 4 Let the set function $f: 2^V \to \mathbb{R}$ be quasi-submodular with $\lambda_f(L, k) \le 0$. Also let S(p) be
a random subset of S, with each element appearing in S(p) with probability at most p; then we have
$E[f(S(p))] \ge (1 - p) f(\emptyset) + \eta_{S,p}$, with $\eta_{S,p} \triangleq \sum_{i=1}^{|S|} (i-1)\, p\, \lambda_f(S_i, 2)$.
Now we present the main theory and provide refined bounds for two different cases when the function
is monotonic (but not necessarily submodular) or submodular (but not necessarily monotonic).
Theorem 3 For general (possibly non-monotonic, non-submodular) functions f, let the optimal
solution of the cardinality constrained maximization be denoted as $S^\star$, and the solution of the random
greedy algorithm be $S^g$; then

$$E[f(S^g)] \ge \left( \frac{1}{e} + \frac{\Delta^f_{S^g,k}}{E[f(S^g)]} \right) f(S^\star)$$

where $\Delta^f_{S^g,k} = \lambda_f(S^g, k) + \frac{k(k-1)}{2} \min\{\lambda_f(S^g, 2),\, 0\}$.
The role of SmI in determining the performance of the random greedy algorithm is revealed: the
bound consists of $1/e \approx 0.3679$ plus a term that is a function of SmI. If SmI = 0, the 1/e bound in
previous literature is recovered. For super-submodular functions, as SmI is strictly larger than zero, the
theorem provides a stronger guarantee by incorporating SmI. For quasi-submodular functions having
negative SmI, although a degraded guarantee is produced, the bound is only slightly deteriorated
when SmI is close to zero. In short, the above theorem not only encompasses existing results as
special cases, but also suggests that we should view submodularity and monotonicity as a "continuous"
property of set functions. Besides, greedy heuristics should not be restricted to the maximization
of submodular functions, but can also be applied to "quasi-submodular" functions because a near
optimal solution is still achievable theoretically. As such, we can formally define quasi-submodular
functions as those having an SmI such that $\Delta^f_{S,k} / E[f(S)] > -1/e$.
Corollary 1 For monotonic functions in general, the random greedy algorithm achieves

$$E[f(S^g)] \ge \left( 1 - \frac{1}{e} + \frac{\lambda'_f(S^g, k)}{E[f(S^g)]} \right) f(S^\star)$$

and the deterministic greedy algorithm also achieves $f(S^g) \ge \left( 1 - \frac{1}{e} + \frac{\lambda'_f(S^g, k)}{f(S^g)} \right) f(S^\star)$, where

$$\lambda'_f(S^g, k) = \begin{cases} \lambda_f(S^g, k)/2 & \text{if } \lambda_f(S^g, k) < 0 \\ (1 - 1/e)\,\lambda_f(S^g, k) & \text{if } \lambda_f(S^g, k) \ge 0 \end{cases}$$
We see that in the monotonic case, we get a stronger bound for submodular functions compared to the
$1 - 1/e \approx 0.6321$ guarantee. Similarly, for quasi-submodular functions, the guarantee is degraded
but not too much if SmI is close to 0. Note that the objective function of OPT2 fits into this category.
For submodular but non-monotonic functions, e.g., the objective function of OPT1, we have
Corollary 2 For submodular functions that are not necessarily monotonic, the random greedy algorithm
has performance

$$E[f(S^g)] \ge \left( \frac{1}{e} + \frac{\lambda_f(S^g, k)}{E[f(S^g)]} \right) f(S^\star)$$
5 Experiment and Applications
In this section, we conduct experiments to verify the theoretical results, and provide an example that
uses subset selection for causal structure learning.
Data and Setup The synthesis data is generated with the Bayes network Toolbox (BNT) [15]
using dynamic Bayesian network models. Two sets of data, denoted by D1 and D2, are simulated,
each containing 15 and 35 processes, respectively. For simplicity, all processes are {0, 1} valued.
The processes are created with both simultaneous and historical dependence on each other. The order
(memory length) of the historical dependence is set to 3. The MCMC sampling engine is used to
draw $n = 10^4$ points for both D1 and D2. The stock market dataset, denoted by ST, contains hourly
values of 41 stocks and indexes for the years 2014-2015. Note that data imputation is performed to
amend a few missing values, and all processes are aligned in time. Moreover, we detrend each time
series with a recursive HP-filter [24] to remove long-term daily or monthly seasonalities that are not
relevant for hourly analysis.
Directed information is estimated with the procedure proposed in [9], which adopts the context tree
weighting algorithm as an intermediate step to learn universal probability assignment. Interested
readers are referred to [19][20] for other possible estimators. The maximal context tree depth is set to
5, which is sufficient for both the synthesis datasets and the real-world ST dataset.
Causal Subset Selection Results
Figure 1: Solution and Bounds for OPT1 on D1
Figure 2: Solution and Bounds for OPT2 on ST
Firstly, the causal sensor placement problem, OPT1, is solved on data set D1 with the random greedy
algorithm. Figure 1 shows the optimal solution by exhaustive search (red-star), random greedy
solution (blue-circle), the 1/e reference bound (cyan-triangle), and the bound with SmI (magenta-diamond), each for cardinality constraints imposed from k = 2 to k = 8. It is seen that the random
greedy solution is close to the true optimum. In terms of computational time, the greedy method
finishes in less than five minutes, while the exhaustive search takes about 10 hours on this small-scale
problem (|V | = 15). Comparing two bounds in Figure 1, we see that the theoretical guarantee is
greatly improved, and a much tighter bound is produced with SmI. The corresponding normalized
SmI values, defined by $\mathrm{SmI}/f(L^g)$, are shown in the first row of Table 1. As a consequence of those strictly
positive SmI values and Corollary 2, the guarantees are made greater than 1/e. This observation
justifies that the bounds with SmI are better indicators of the performance of the greedy heuristic.
Table 1: Normalized submodularity index (NSmI) for OPT1 on D1 and OPT2 on ST at locations of
greedy selections. Cardinality is imposed from k = 2 to k = 8.
k =                        2       3       4       5       6       7       8
normalized SmI for OPT1    0.382   0.284   0.175   0.082   0.141   0.078   0.074
normalized SmI for OPT2   -0.305   0.071  -0.068  -0.029   0.030   0.058   0.092
Secondly, the causal covariates selection problem, OPT2, is solved on ST dataset with the stock XOM
used as the target process Y . The results of random greedy, exhaustive search, and performance
bound (Corollary 1) are shown in Figure 2, and normalized SmIs are listed in the second row of
Table 1. Note that the $1 - 1/e$ reference line (green-triangle) in the figure is only for comparison
purposes and is NOT an established bound. We observe that although the objective is not submodular,
the random greedy algorithm is still near optimal. As we compare the reference line and the bound
calculated with SmI (magenta-diamond), we see that the performance guarantee can be either larger
or smaller than $1 - 1/e$, depending on the sign of SmI. By definition, SmI measures the submodularity
of a function at a location set. Hence, the SmI computed at each greedy selection captures the "local"
submodularity of the function. The central insight gained from this experiment is that, for a function
lacking general submodularity, such as the objective function of OPT2, it can be quasi-submodular
(SmI < 0, SmI ≈ 0) or super-submodular (SmI > 0) at different locations. Accordingly the
performance guarantee can be either larger or smaller than $1 - 1/e$, depending on the values of SmI.
Application: Causal Structure Learning The greedy method for subset selection can be used
in many situations. Here we briefly illustrate the structure learning application based on covariates
selection. As is detailed in the supplementary material and [20], one can show that the causal structure
learning problem can be reduced to solving $\max_{S \subseteq V,\, |S| \le k} I(S^n \to X_i^n)$ for each node $i \in V$,
assuming the maximal in-degree is bounded by k for all nodes. Since the above problem is exactly the
covariate selection considered in this work, we can reconstruct the causal structure for a network of
random processes by simply using the greedy heuristic for each node.
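The sketch below spells out this reduction, reusing random_greedy from the earlier sketch; di(S, i) is a hypothetical placeholder for any estimator of $I(S^n \to X_i^n)$ (the experiments here use the CTW-based estimator of [9]):

```python
def learn_structure(nodes, k, di):
    """Greedy covariate selection per node; di(S, i) estimates I(S^n -> X_i^n)."""
    parents = {}
    for i in nodes:
        objective = lambda S, i=i: di(S, i)        # hypothetical DI estimate
        parents[i] = random_greedy(objective, set(nodes) - {i}, k)
    return parents   # an edge j -> i is inferred for every j in parents[i]
```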
Figure 3 and Figure 4 illustrate the structure learning results on D1 and D2, respectively. In both
figures, the left subfigure is the ground truth structure, i.e., the dynamic Bayesian networks that are
used in the data generation. Note that each node in the figure represents a random process, and an
edge from node i to j indicates a causal (including both simultaneous and historical) influence. The
subfigure on the right shows the reconstructed causal graph. Comparing the two subfigures in Figure 3,
we observe that the simple structure learning method performs almost flawlessly. In fact, only the
edge 6 → 4 is missed. On the larger case D2 with 35 processes, the method still works
relatively well, correctly reconstructing 82.69% of the causal relations. Given that only the maximal
in-degree for all nodes is assumed a priori, these results not only justify the greedy approximation for
the subset selection, but also demonstrate its effectiveness in causal structure learning applications.
[Figures 3 and 4: two panels each, "Original" and "Reconstructed", showing the causal graphs; node labels omitted.]

Figure 3: Ground truth structure (left) versus reconstructed causal graph (right), D1 dataset
Figure 4: Ground truth structure (left) versus reconstructed causal graph (right), D2 dataset

6 Conclusion
Motivated by the problems of source detection and causal covariate selection, we start with two formulations of directed information based subset selection, and then we provide detailed submodularity
analysis for both of the objective functions. To extend the greedy heuristics to possibly non-monotonic,
approximately submodular functions, we introduce a novel notion, namely the submodularity index, to
characterize the "degree" of submodularity for general set functions. More importantly, we show that
with SmI, the theoretical performance guarantee of greedy heuristic can be naturally extended to a
much broader class of problems. We also point out several bounds and techniques that can be used to
calculate SmI efficiently for the objectives under consideration. Experimental results on the synthesis
and real data sets reaffirmed our theoretical findings, and also demonstrated the effectiveness of
solving subset selection for learning causal structures.
7 Acknowledgments
This research is funded by the Republic of Singapore's National Research Foundation through a grant
to the Berkeley Education Alliance for Research in Singapore (BEARS) for the Singapore-Berkeley
Building Efficiency and Sustainability in the Tropics (SinBerBEST) Program. BEARS has been
established by the University of California, Berkeley as a center for intellectual excellence in research
and education in Singapore. We also thank the reviewers for their helpful suggestions.
References
[1] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research (JMLR), 9:235–284, 2008.
[2] A. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe. Spatial-temporal causal modeling for climate change attribution. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD'09), 2009.
[3] A. Das and D. Kempe. Submodular meets spectral: Greedy algorithms for subset selection, sparse approximation and dictionary selection. Proc. of ICML 2011, Seattle, WA, 2011.
[4] Z. Abrams, A. Goel, and S. Plotkin. Set k-cover algorithms for energy efficient monitoring in wireless sensor networks. In Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pages 424–432. ACM, 2004.
[5] V. Cevher and A. Krause. Greedy dictionary selection for sparse representation. Selected Topics in Signal Processing, IEEE Journal of, 5(5):979–988, 2011.
[6] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions. SIAM Journal on Computing, 40(4):1133–1153, 2011.
[7] M. Feldman, J. Naor, and R. Schwartz. A unified continuous greedy algorithm for submodular maximization. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 570–579. IEEE, 2011.
[8] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[9] J. Jiao, H. H. Permuter, L. Zhao, Y.-H. Kim, and T. Weissman. Universal estimation of directed information. Information Theory, IEEE Transactions on, 59(10):6220–6242, 2013.
[10] Y. Kawahara and T. Washio. Prismatic algorithm for discrete d.c. programming problem. In Advances in Neural Information Processing Systems, pages 2106–2114, 2011.
[11] C.-W. Ko, J. Lee, and M. Queyranne. An exact algorithm for maximum entropy sampling. Operations Research, 43(4):684–691, 1995.
[12] A. Krause and C. Guestrin. Near-optimal value of information in graphical models. UAI, 2005.
[13] H. Lin and J. Bilmes. A class of submodular functions for document summarization. ACL/HLT, 2011.
[14] P. Mathai, N. C. Martins, and B. Shapiro. On the detection of gene network interconnections using directed mutual information. In Information Theory and Applications Workshop, 2007, pages 274–283. IEEE, 2007.
[15] K. Murphy et al. The Bayes net toolbox for Matlab. Computing Science and Statistics, 33(2):1024–1034, 2001.
[16] M. Narasimhan and J. A. Bilmes. A submodular-supermodular procedure with applications to discriminative structure learning. arXiv preprint arXiv:1207.1404, 2012.
[17] N. Buchbinder, M. Feldman, J. S. Naor, and R. Schwartz. Submodular maximization with cardinality constraints. ACM-SIAM Symposium on Discrete Algorithms (SODA), 2014.
[18] J. Pearl. Causality: Models, Reasoning and Inference (Second Edition). Cambridge University Press, 2009.
[19] C. J. Quinn, T. P. Coleman, N. Kiyavash, and N. G. Hatsopoulos. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. Journal of Computational Neuroscience, 30(1):17–44, 2011.
[20] C. J. Quinn, N. Kiyavash, and T. P. Coleman. Directed information graphs. Information Theory, IEEE Transactions on, 61(12):6887–6909, 2015.
[21] N. A. Sheehan, V. Didelez, P. R. Burton, and M. D. Tobin. Mendelian randomisation and causal inference in observational epidemiology. PLoS Medicine, 2008.
[22] U. Feige, V. S. Mirrokni, and J. Vondrak. Maximizing non-monotone submodular functions. SIAM Journal on Computing, 40(4):1133–1153, 2011.
[23] G. Ver Steeg and A. Galstyan. Information-theoretic measures of influence based on content dynamics. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining, pages 3–12. ACM, 2013.
[24] Y. Zhou, Z. Kang, L. Zhang, and C. Spanos. Causal analysis for non-stationary time series in sensor-rich smart buildings. In Automation Science and Engineering (CASE), 2013 IEEE International Conference on, pages 593–598. IEEE, 2013.
5,952 | 6,385 | Matching Networks for One Shot Learning
Oriol Vinyals
Google DeepMind
[email protected]
Charles Blundell
Google DeepMind
[email protected]
Koray Kavukcuoglu
Google DeepMind
[email protected]
Timothy Lillicrap
Google DeepMind
[email protected]
Daan Wierstra
Google DeepMind
[email protected]
Abstract
Learning from a few examples remains a key challenge in machine learning.
Despite recent advances in important domains such as vision and language, the
standard supervised deep learning paradigm does not offer a satisfactory solution
for learning new concepts rapidly from little data. In this work, we employ ideas
from metric learning based on deep neural features and from recent advances
that augment neural networks with external memories. Our framework learns a
network that maps a small labelled support set and an unlabelled example to its
label, obviating the need for fine-tuning to adapt to new class types. We then define
one-shot learning problems on vision (using Omniglot, ImageNet) and language
tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to
93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches.
We also demonstrate the usefulness of the same model on language modeling by
introducing a one-shot task on the Penn Treebank.
1 Introduction
Humans learn new concepts with very little supervision (e.g. a child can generalize the concept
of "giraffe" from a single picture in a book), yet our best deep learning systems need hundreds or
thousands of examples. This motivates the setting we are interested in: "one-shot" learning, which
consists of learning a class from a single labelled example.
Deep learning has made major advances in areas such as speech [7], vision [13] and language [16],
but is notorious for requiring large datasets. Data augmentation and regularization techniques alleviate
overfitting in low data regimes, but do not solve it. Furthermore, learning is still slow and based on
large datasets, requiring many weight updates using stochastic gradient descent. This, in our view, is
mostly due to the parametric aspect of the model, in which training examples need to be slowly learnt
by the model into its parameters.
In contrast, many non-parametric models allow novel examples to be rapidly assimilated, whilst not
suffering from catastrophic forgetting. Some models in this family (e.g., nearest neighbors) do not
require any training but performance depends on the chosen metric [1]. Previous work on metric
learning in non-parametric setups [18] has been influential on our model, and we aim to incorporate
the best characteristics from both parametric and non-parametric models: namely, rapid acquisition
of new examples while providing excellent generalisation from common examples.
The novelty of our work is twofold: at the modeling level, and at the training procedure. We propose
Matching Nets, a neural network which uses recent advances in attention and memory that enable
rapid learning. Secondly, our training procedure is based on a simple machine learning principle:
test and train conditions must match. Thus to train our network to do rapid learning, we train it by
showing only a few examples per class, switching the task from minibatch to minibatch, much like
how it will be tested when presented with a few examples of a new task.

Figure 1: Matching Networks architecture
Besides our contributions in defining a model and training criterion amenable for one-shot learning,
we contribute by the definition of two new tasks that can be used to benchmark other approaches on
both ImageNet and small scale language modeling. We hope that our results will encourage others to
work on this challenging problem.
We organized the paper by first defining and explaining our model whilst linking its several components to related work. Then in the following section we briefly elaborate on some of the related work
to the task and our model. In Section 4 we describe both our general setup and the experiments we
performed, demonstrating strong results on one-shot learning on a variety of tasks and setups.
2 Model
Our non-parametric approach to solving one-shot learning is based on two components which we
describe in the following subsections. First, our model architecture follows recent advances in neural
networks augmented with memory (as discussed in Section 3). Given a (small) support set S, our
model defines a function $c_S$ (or classifier) for each S, i.e. a mapping $S \to c_S(\cdot)$. Second, we employ
a training strategy which is tailored for one-shot learning from the support set S.
2.1 Model Architecture
In recent years, many groups have investigated ways to augment neural network architectures with
external memories and other components that make them more "computer-like". We draw inspiration
from models such as sequence to sequence (seq2seq) with attention [2], memory networks [29] and
pointer networks [27].
In all these models, a neural attention mechanism, often fully differentiable, is defined to access (or
read) a memory matrix which stores useful information to solve the task at hand. Typical uses of
this include machine translation, speech recognition, or question answering. More generally, these
architectures model P (B|A) where A and/or B can be a sequence (like in seq2seq models), or, more
interestingly for us, a set [26].
Our contribution is to cast the problem of one-shot learning within the set-to-set framework [26].
The key point is that when trained, Matching Networks are able to produce sensible test labels for
unobserved classes without any changes to the network. More precisely, we wish to map from a
(small) support set of k examples of input-label pairs $S = \{(x_i, y_i)\}_{i=1}^{k}$ to a classifier $c_S(\hat{x})$ which,
given a test example $\hat{x}$, defines a probability distribution over outputs $\hat{y}$. Here, $\hat{x}$ could be an image,
and $\hat{y}$ a distribution over possible visual classes. We define the mapping $S \to c_S(\hat{x})$ to be $P(\hat{y} \mid \hat{x}, S)$,
where P is parameterised by a neural network. Thus, when given a new support set of examples $S'$
from which to one-shot learn, we simply use the parametric neural network defined by P to make
predictions about the appropriate label distribution $\hat{y}$ for each test example $\hat{x}$: $P(\hat{y} \mid \hat{x}, S')$.
Our model in its simplest form computes a probability over $\hat{y}$ as follows:

$$P(\hat{y} \mid \hat{x}, S) = \sum_{i=1}^{k} a(\hat{x}, x_i)\, y_i \qquad (1)$$
where $x_i, y_i$ are the inputs and corresponding label distributions from the support set $S = \{(x_i, y_i)\}_{i=1}^{k}$, and a is an attention mechanism which we discuss below. Note that eq. 1 essentially describes the output for a new class as a linear combination of the labels in the support set.
Where the attention mechanism a is a kernel on $X \times X$, then (1) is akin to a kernel density estimator.
Where the attention mechanism is zero for the b furthest $x_i$ from $\hat{x}$ according to some distance
metric and an appropriate constant otherwise, then (1) is equivalent to "$k - b$"-nearest neighbours
(although this requires an extension to the attention mechanism that we describe in Section 2.1.2).
Thus (1) subsumes both KDE and kNN methods. Another view of (1) is where a acts as an attention
mechanism and the $y_i$ act as values bound to the corresponding keys $x_i$, much like a hash table. In
this case we can understand this as a particular kind of associative memory where, given an input,
we "point" to the corresponding example in the support set, retrieving its label. Hence the functional
form defined by the classifier $c_S(\hat{x})$ is very flexible and can adapt easily to any new support set.
2.1.1 The Attention Kernel
Equation 1 relies on choosing $a(\cdot, \cdot)$, the attention mechanism, which fully specifies the classifier. The simplest form that this takes (and which has very tight relationships with common
attention models and kernel functions) is to use the softmax over the cosine distance c, i.e.,
$a(\hat{x}, x_i) = e^{c(f(\hat{x}), g(x_i))} / \sum_{j=1}^{k} e^{c(f(\hat{x}), g(x_j))}$, with embedding functions f and g being appropriate neural networks (potentially with f = g) to embed $\hat{x}$ and $x_i$. In our experiments we shall see
examples where f and g are parameterised variously as deep convolutional networks for image
tasks (as in VGG [22] or Inception [24]) or a simple form word embedding for language tasks (see
Section 4).
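The numpy sketch below wires Eq. (1) to this cosine-softmax attention. The identity embeddings and the toy two-class support set are illustrative assumptions, standing in for the deep networks f and g described above:

```python
import numpy as np

def matching_net_predict(x_hat, X_support, Y_support):
    """P(y_hat | x_hat, S) = sum_i a(x_hat, x_i) y_i with softmax-cosine attention."""
    x_unit = x_hat / np.linalg.norm(x_hat)
    cos = (X_support @ x_unit) / np.linalg.norm(X_support, axis=1)
    a = np.exp(cos) / np.exp(cos).sum()          # attention weights a(x_hat, x_i)
    return a @ Y_support                         # mix the one-hot support labels

X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])   # 3 support examples
Y = np.array([[1, 0], [1, 0], [0, 1]], dtype=float)  # one-hot labels, 2 classes
print(matching_net_predict(np.array([0.95, 0.05]), X, Y))  # most mass on class 0
```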
We note that, though related to metric learning, the classifier defined by Equation 1 is discriminative.
For a given support set S and sample to classify $\hat{x}$, it is enough for $\hat{x}$ to be sufficiently aligned with
pairs $(x', y') \in S$ such that $y' = y$ and misaligned with the rest. This kind of loss is also related to
methods such as Neighborhood Component Analysis (NCA) [18], triplet loss [9] or large margin
nearest neighbor [28].
However, the objective that we are trying to optimize is precisely aligned with multi-way, one-shot
classification, and thus we expect it to perform better than its counterparts. Additionally, the loss is
simple and differentiable so that one can find the optimal parameters in an "end-to-end" fashion.
2.1.2 Full Context Embeddings
The main novelty of our model lies in reinterpreting a well studied framework (neural networks with
external memories) to do one-shot learning. Closely related to metric learning, the embedding functions f and g act as a lift to feature space X to achieve maximum accuracy through the classification
function described in eq. 1.
Despite the fact that the classification strategy is fully conditioned on the whole support set through
$P(\cdot \mid \hat{x}, S)$, the embeddings on which we apply the cosine similarity to "attend", "point" or simply
compute the nearest neighbor are myopic in the sense that each element $x_i$ gets embedded by $g(x_i)$
independently of other elements in the support set S. Furthermore, S should be able to modify how
we embed the test image $\hat{x}$ through f.
We propose embedding the elements of the set through a function which takes as input the full set
S in addition to $x_i$, i.e. g becomes $g(x_i, S)$. Thus, as a function of the whole support set S, g can
modify how to embed $x_i$. This could be useful when some element $x_j$ is very close to $x_i$, in which
case it may be beneficial to change the function with which we embed $x_i$; some evidence of this is
discussed in Section 4. We use a bidirectional Long-Short Term Memory (LSTM) [8] to encode $x_i$ in
the context of the support set S, considered as a sequence (see Appendix).
The second issue, to make f depend on $\hat{x}$ and S, can be fixed via an LSTM with read-attention over
the whole set S, whose inputs are equal to $f'(\hat{x})$ ($f'$ is an embedding function, e.g. a CNN). To do
so, we define the following recurrence over "processing" steps k, following work from [26]:

$$\hat{h}_k, c_k = \text{LSTM}(f'(\hat{x}), [h_{k-1}, r_{k-1}], c_{k-1}) \qquad (2)$$
$$h_k = \hat{h}_k + f'(\hat{x}) \qquad (3)$$
$$r_{k-1} = \sum_{i=1}^{|S|} a(h_{k-1}, g(x_i))\, g(x_i) \qquad (4)$$
$$a(h_{k-1}, g(x_i)) = e^{h_{k-1}^\top g(x_i)} \Big/ \sum_{j=1}^{|S|} e^{h_{k-1}^\top g(x_j)} \qquad (5)$$
noting that LSTM(x, h, c) follows the same LSTM implementation defined in [23] with x the input,
h the output (i.e., cell after the output gate), and c the cell. a is commonly referred to as "content"
based attention. We do K steps of "reads", so $f(\hat{x}, S) = h_K$ where $h_k$ is as described in eq. 3.
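To make the data flow of Eqs. (2)-(5) concrete, here is a wiring sketch in numpy. The single-matrix LSTM cell, the embedding size, and the random initialisation are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # illustrative embedding size
W = rng.normal(0.0, 0.1, (4 * d, 3 * d))     # one weight matrix; input [f'(x); h; r]
b = np.zeros(4 * d)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def lstm_cell(x_in, c):
    z = W @ x_in + b
    i, fg, o, g = (sigmoid(z[:d]), sigmoid(z[d:2 * d]),
                   sigmoid(z[2 * d:3 * d]), np.tanh(z[3 * d:]))
    c = fg * c + i * g
    return o * np.tanh(c), c

def fce_f(fx, gS, K=3):
    """K attention 'reads' over the support embeddings gS (rows are g(x_i))."""
    h, c = np.zeros(d), np.zeros(d)
    for _ in range(K):
        a = np.exp(gS @ h); a /= a.sum()     # Eq. (5): softmax over h^T g(x_i)
        r = a @ gS                           # Eq. (4): read vector r_{k-1}
        h_hat, c = lstm_cell(np.concatenate([fx, h, r]), c)   # Eq. (2)
        h = h_hat + fx                       # Eq. (3): skip connection to f'(x)
    return h                                 # f(x_hat, S) = h_K

gS = rng.normal(size=(5, d))                 # embeddings of a 5-example support set
print(fce_f(rng.normal(size=d), gS))
```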
2.2 Training Strategy
In the previous subsection we described Matching Networks which map a support set to a classification
function, $S \to c(\hat{x})$. We achieve this via a modification of the set-to-set paradigm augmented with
attention, with the resulting mapping being of the form $P_\theta(\cdot \mid \hat{x}, S)$, noting that $\theta$ are the parameters
of the model (i.e. of the embedding functions f and g described previously).
The training procedure has to be chosen carefully so as to match inference at test time. Our model
has to perform well with support sets $S'$ which contain classes never seen during training.
More specifically, let us define a task T as a distribution over possible label sets L. Typically we
consider T to uniformly weight all data sets of up to a few unique classes (e.g., 5), with a few
examples per class (e.g., up to 5). In this case, a label set L sampled from a task T, $L \sim T$, will
typically have 5 to 25 examples.
To form an "episode" to compute gradients and update our model, we first sample L from T (e.g.,
L could be the label set {cats, dogs}). We then use L to sample the support set S and a batch B
(i.e., both S and B are labelled examples of cats and dogs). The Matching Net is then trained to
minimise the error predicting the labels in the batch B conditioned on the support set S. This is a
form of meta-learning since the training procedure explicitly learns to learn from a given support set
to minimise a loss over a batch. More precisely, the Matching Nets training objective is as follows:

$$\theta = \arg\max_{\theta}\; E_{L \sim T}\left[ E_{S \sim L,\, B \sim L}\left[ \sum_{(x,y) \in B} \log P_\theta(y \mid x, S) \right] \right]. \qquad (6)$$
Training $\theta$ with eq. 6 yields a model which works well when sampling $S' \sim T'$ from a different
distribution of novel labels. Crucially, our model does not need any fine tuning on the classes it has
never seen due to its non-parametric nature. Obviously, as $T'$ diverges far from the T from which we
sampled to learn $\theta$, the model will not work; we belabor this point further in Section 4.1.2.
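A sketch of this episode construction is shown below; the dummy data, the class count, and the shot counts are illustrative choices following the 5-way, up-to-5-shot regime described above:

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, batch_per_class=2,
                   rng=random.Random(0)):
    """Sample L ~ T, then a support set S and a batch B over the labels in L."""
    L = rng.sample(sorted(data_by_class), n_way)          # label set L ~ T
    S, B = [], []
    for label in L:
        picks = rng.sample(data_by_class[label], k_shot + batch_per_class)
        S += [(x, label) for x in picks[:k_shot]]         # support examples
        B += [(x, label) for x in picks[k_shot:]]         # batch to classify
    return S, B

# toy data: 20 classes, 10 dummy examples each
data = {c: [f"img_{c}_{i}" for i in range(10)] for c in range(20)}
S, B = sample_episode(data)
print(len(S), len(B))    # 5 support examples, 10 batch examples
```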
3 Related Work
3.1 Memory Augmented Neural Networks
A recent surge of models which go beyond "static" classification of fixed vectors onto their classes
has reshaped current research and industrial applications alike. This is most notable in the massive
adoption of LSTMs [8] in a variety of tasks such as speech [7], translation [23, 2] or learning programs
[4, 27]. A key component which allowed for more expressive models was the introduction of "content"
based attention in [2], and "computer-like" architectures such as the Neural Turing Machine [4] or
Memory Networks [29]. Our work takes the metalearning paradigm of [21], where an LSTM learnt
to learn quickly from data presented sequentially, but we treat the data as a set. The one-shot learning
task we defined on the Penn Treebank [15] relates to evaluation techniques and models presented in
[6], and we discuss this in Section 4.
3.2 Metric Learning
As discussed in Section 2, there are many links between content based attention, kernel based nearest
neighbor and metric learning [1]. The most relevant work is Neighborhood Component Analysis
(NCA) [18], and the follow-up non-linear version [20]. The loss is very similar to ours, except we use the whole support set S instead of pair-wise comparisons, which is more amenable to one-shot
learning. Follow-up work in the form of deep convolutional siamese [11] networks included much
more powerful non-linear mappings. Other losses which include the notion of a set (but use less
powerful metrics) were proposed in [28].
Lastly, the work in one-shot learning in [14] was inspirational and also provided us with the invaluable Omniglot dataset, referred to as the "transpose" of MNIST. Other works used zero-shot learning on ImageNet, e.g. [17]. However, there is not much one-shot literature on ImageNet, which we hope to
amend via our benchmark and task definitions in the following section.
4 Experiments
In this section we describe the results of many experiments, comparing our Matching Networks
model against strong baselines. All of our experiments revolve around the same basic task: an N-way, k-shot learning task. Each method is provided with a set of k labelled examples from each of N
classes that have not previously been trained upon. The task is then to classify a disjoint batch of
unlabelled examples into one of these N classes. Thus random performance on this task stands at
1/N . We compared a number of alternative models, as baselines, to Matching Networks.
Let L′ denote the held-out subset of labels which we only use for one-shot. Unless otherwise specified, training is always on ≠L′ (the remaining labels), and testing is in one-shot mode on L′.
We ran one-shot experiments on three data sets: two image classification sets (Omniglot [14] and
ImageNet [19, ILSVRC-2012]) and one language modeling (Penn Treebank). The experiments on
the three data sets comprise a diverse set of qualities in terms of complexity, sizes, and modalities.
4.1
Image Classification Results
For vision problems, we considered four kinds of baselines: matching on raw pixels, matching on
discriminative features from a state-of-the-art classifier (Baseline Classifier), MANN [21], and our
reimplementation of the Convolutional Siamese Net [11]. The baseline classifier was trained to
classify an image into one of the original classes present in the training data set, but excluding the
N classes so as not to give it an unfair advantage (i.e., trained to classify classes in ≠L′). We then
took this network and used the features from the last layer (before the softmax) for nearest neighbour
matching, a strategy commonly used in computer vision [3] which has achieved excellent results
across many tasks. Following [11], the convolutional siamese nets were trained on a same-or-different
task of the original training data set and then the last layer was used for nearest neighbour matching.
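For concreteness, the nearest-neighbour matching used by these baselines amounts to the following sketch (NumPy; the function names are ours):

import numpy as np

def cosine_match(query_feat, support_feats, support_labels):
    """Return the label of the support example nearest to the query
    under cosine similarity in feature space."""
    q = query_feat / np.linalg.norm(query_feat)
    S = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    return support_labels[int(np.argmax(S @ q))]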
Model                            Matching Fn   Fine Tune   5-way Acc           20-way Acc
                                                           1-shot   5-shot     1-shot   5-shot
PIXELS                           Cosine        N           41.7%    63.2%      26.7%    42.6%
BASELINE CLASSIFIER              Cosine        N           80.0%    95.0%      69.5%    89.1%
BASELINE CLASSIFIER              Cosine        Y           82.3%    98.4%      70.6%    92.0%
BASELINE CLASSIFIER              Softmax       Y           86.0%    97.6%      72.9%    92.3%
MANN (NO CONV) [21]              Cosine        N           82.8%    94.9%      --       --
CONVOLUTIONAL SIAMESE NET [11]   Cosine        N           96.7%    98.4%      88.0%    96.5%
CONVOLUTIONAL SIAMESE NET [11]   Cosine        Y           97.3%    98.4%      88.1%    97.0%
MATCHING NETS (OURS)             Cosine        N           98.1%    98.9%      93.8%    98.5%
MATCHING NETS (OURS)             Cosine        Y           97.9%    98.7%      93.5%    98.7%

Table 1: Results on the Omniglot dataset.
We also tried further fine tuning the features using only the support set S′ sampled from L′. This yields massive overfitting, but given that our networks are highly regularized, can yield extra gains. Note that, even when fine tuning, the setup is still one-shot, as only a single example per class from L′ is used.
4.1.1 Omniglot
Omniglot [14] consists of 1623 characters from 50 different alphabets. Each of these was hand drawn
by 20 different people. The large number of classes (characters) with relatively few data per class
(20), makes this an ideal data set for testing small-scale one-shot classification. The N -way Omniglot
task setup is as follows: pick N unseen character classes, independent of alphabet, as L. Provide
the model with one drawing of each of the N characters as S ∼ L and a batch B ∼ L. Following
[21], we augmented the data set with random rotations by multiples of 90 degrees and used 1200
characters for training, and the remaining character classes for evaluation.
We used a simple yet powerful CNN as the embedding function, consisting of a stack of modules, each of which is a 3 × 3 convolution with 64 filters followed by batch normalization [10], a ReLU non-linearity and 2 × 2 max-pooling. We resized all the images to 28 × 28 so that, when we stack 4 modules, the resulting feature map is 1 × 1 × 64, resulting in our embedding function f(x). A fully
connected layer followed by a softmax non-linearity is used to define the Baseline Classifier.
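A sketch of this embedding stack, written in PyTorch purely as an illustration (the paper does not prescribe a framework), could look as follows:

import torch.nn as nn

def conv_module(in_ch, out_ch=64):
    # One module: 3x3 convolution (64 filters), batch norm, ReLU, 2x2 max-pool.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

# Four stacked modules map a 1x28x28 Omniglot image to a 64x1x1 feature map
# (28 -> 14 -> 7 -> 3 -> 1), which is flattened into the embedding f(x).
embed_f = nn.Sequential(
    conv_module(1), conv_module(64), conv_module(64), conv_module(64),
    nn.Flatten(),
)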
Results comparing the baselines to our model on Omniglot are shown in Table 1. For both 1-shot
and 5-shot, 5-way and 20-way, our model outperforms the baselines. There are no major surprises in
these results: using more examples for k-shot classification helps all models, and 5-way is easier than
20-way. We note that the Baseline Classifier improves a bit when fine tuning on S 0 , and using cosine
distance versus training a small softmax from the small training set (thus requiring fine tuning) also
performs well. Siamese nets fare well versus our Matching Nets when using 5 examples per class,
but their performance degrades rapidly in one-shot. Full Context Embeddings (FCE) did not seem to help much and were left out of the table due to space constraints.
Like the authors in [11], we also test our method trained on Omniglot on a completely disjoint task: one-shot, 10-way MNIST classification. The Baseline Classifier does about 63% accuracy whereas
(as reported in their paper) the Siamese Nets do 70%. Our model achieves 72%.
4.1.2 ImageNet
Our experiments followed the same setup as Omniglot for testing, but we considered a rand and a
dogs (harder) setup. In the rand setup, we removed 118 labels at random from the training set, then
tested only on these 118 classes (which we denote as Lrand ). For the dogs setup, we removed all
classes in ImageNet descended from dogs (totalling 118) and trained on all non-dog classes, then
tested on dog classes (Ldogs). ImageNet is a notoriously large data set, upon which running experiments can be quite a feat of engineering and infrastructure, requiring many resources. Thus, as well as using the full ImageNet data set, we devised a new data set, miniImageNet, consisting of 60,000 colour images of size 84 × 84 with 100 classes, each having 600 examples. This dataset is more
complex than CIFAR10 [12], but fits in memory on modern machines, making it very convenient for
rapid prototyping and experimentation. We used 80 classes for training and tested on the remaining
20 classes. In total, thus, we have randImageNet, dogsImageNet, and miniImageNet.
The results of the miniImageNet experiments are shown in Table 2. As with Omniglot, Matching
Networks outperform the baselines. However, miniImageNet is a much harder task than Omniglot
which allowed us to evaluate Full Context Embeddings (FCE) sensibly (on Omniglot it made no difference). As we can see, FCE improves the performance of Matching Networks, with and without fine tuning, typically improving performance by around two percentage points.
Next we turned to experiments based upon full size, full scale ImageNet. Our baseline classifier for
this data set was Inception [25] trained to classify on all classes except those in the test set of classes
(for randImageNet) or those concerning dogs (for dogsImageNet). We also compared to features from
an Inception Oracle classifier trained on all classes in ImageNet, as an upper bound. Our Baseline
Classifier is one of the strongest published ImageNet models at 79% top-1 accuracy on the standard
ImageNet validation set. Instead of training Matching Networks from scratch on these large tasks, we
initialised their feature extractors f and g with the parameters from the Inception classifier (pretrained
on the appropriate subset of the data) and then further trained the resulting network on random 5-way
1-shot tasks from the training data set, incorporating Full Context Embeddings and our Matching Networks training strategy.

[Figure 2: Example of two 5-way problem instances on ImageNet. The images in the set S′ contain classes never seen during training. Our model makes far fewer mistakes than the Inception baseline.]

Table 2: Results on miniImageNet.

Model                  Matching Fn    Fine Tune   5-way Acc
                                                  1-shot   5-shot
PIXELS                 Cosine         N           23.0%    26.6%
BASELINE CLASSIFIER    Cosine         N           36.6%    46.0%
BASELINE CLASSIFIER    Cosine         Y           36.2%    52.2%
BASELINE CLASSIFIER    Softmax        Y           38.4%    51.2%
MATCHING NETS (OURS)   Cosine         N           41.2%    56.2%
MATCHING NETS (OURS)   Cosine         Y           42.4%    58.0%
MATCHING NETS (OURS)   Cosine (FCE)   N           44.2%    57.0%
MATCHING NETS (OURS)   Cosine (FCE)   Y           46.6%    60.0%
The results of the randImageNet and dogsImageNet experiments are shown in Table 3. The Inception
Oracle (trained on all classes) performs almost perfectly when restricted to 5 classes only, which is
not too surprising given its impressive top-1 accuracy. When trained solely on ≠Lrand, Matching Nets improve upon Inception by almost 6% when tested on Lrand, halving the errors. Figure 2 shows
two instances of 5-way one-shot learning, where Inception fails. Looking at all the errors, Inception
appears to sometimes prefer an image above all others (these images tend to be cluttered like the
example in the second column, or more constant in color). Matching Nets, on the other hand, manage
to recover from these outliers that sometimes appear in the support set S′.
Matching Nets manage to improve upon Inception on the complementary subset ≠Ldogs (although this setup is not one-shot, as the feature extraction has been trained on these labels). However, on the much more challenging Ldogs subset, our model degrades by 1%. We hypothesize this is due to the fact that the sampled set during training, S, comes from a random distribution of labels (from ≠Ldogs), whereas the testing support set S′ from Ldogs contains similar classes, more akin to fine grained classification. Thus, we believe that if we adapted our training strategy to sample S from fine grained sets of labels, instead of sampling uniformly from the leaves of the ImageNet class tree, improvements could be attained. We leave this as future work.
Table 3: Results on full ImageNet on rand and dogs one-shot tasks. Note that ≠Lrand and ≠Ldogs are sets of classes which are seen during training, but are provided for completeness.

Model                  Matching Fn      Fine Tune   ImageNet 5-way 1-shot Acc
                                                    Lrand    ≠Lrand   Ldogs    ≠Ldogs
PIXELS                 Cosine           N           42.0%    42.8%    41.4%    43.0%
INCEPTION CLASSIFIER   Cosine           N           87.6%    92.6%    59.8%    90.0%
MATCHING NETS (OURS)   Cosine (FCE)     N           93.2%    97.0%    58.8%    96.4%
INCEPTION ORACLE       Softmax (Full)   Y (Full)    ≈99%     ≈99%     ≈99%     ≈99%
4.1.3 One-Shot Language Modeling
We also introduce a new one-shot language task which is analogous to those examined for images.
The task is as follows: given a query sentence with a missing word in it, and a support set of sentences
which each have a missing word and a corresponding 1-hot label, choose the label from the support
set that best matches the query sentence. Here we show a single example, though note that the target words (listed after the example) are not provided to the model, and the labels for the set are given as 1-hot-of-5 vectors.
1. an experimental vaccine can alter the immune response of people infected with the aids virus a
<blank_token> u.s. scientist said.
2. the show one of five new nbc <blank_token> is the second casualty of the three networks so far
this fall.
3. however since eastern first filed for chapter N protection march N it has consistently promised
to pay creditors N cents on the <blank_token>.
4. we had a lot of people who threw in the <blank_token> today said <unk> ellis a partner in
benjamin jacobson & sons a specialist in trading ual stock on the big board.
5. it's not easy to roll out something that <blank_token> and make it pay mr. jacob says.
Query: in late new york trading yesterday the <blank_token> was quoted at N marks down from N
marks late friday and at N yen down from N yen late friday.
(The target words, not shown to the model, are: 1. prominent, 2. series, 3. dollar, 4. towel, 5. comprehensive; the correct answer to the query is dollar.)
Sentences were taken from the Penn Treebank dataset [15]. On each trial, we make sure that the set
and batch are populated with sentences that are non-overlapping. This means that we do not use
words with very low frequency counts; e.g. if there is only a single sentence for a given word we do
not use this data since the sentence would need to be in both the set and the batch. As with the image
tasks, each trial consisted of a 5 way choice between the classes available in the set. We used a batch
size of 20 throughout the sentence matching task and varied the set size across k=1,2,3. We ensured
that the same number of sentences were available for each class in the set. We split the words into a
randomly sampled 9000 for training and 1000 for testing, and we used the standard test set to report
results. Thus, neither the words nor the sentences used during test time had been seen during training.
We compared our one-shot matching model to an oracle LSTM language model (LSTM-LM) [30]
trained on all the words. In this setup, the LSTM has an unfair advantage as it is not doing one-shot
learning but seeing all the data; thus, this should be taken as an upper bound. To do so, we examined
a similar setup wherein a sentence was presented to the model with a single word filled in with 5
different possible words (including the correct answer). For each of these 5 sentences the model gave
a log-likelihood and the max of these was taken to be the choice of the model.
As with the other 5 way choice tasks, chance performance on this task was 20%. The LSTM language
model oracle achieved an upper bound of 72.8% accuracy on the test set. Matching Networks
with a simple encoding model achieve 32.4%, 36.1%, 38.2% accuracy on the task with k = 1, 2, 3
examples in the set, respectively. Future work should explore combining parametric models such as
an LSTM-LM with non-parametric components such as the Matching Networks explored here.
Two related tasks are the CNN QA test of entity prediction from news articles [5], and the Children's
Book Test (CBT) [6]. In the CBT for example, a sequence of sentences from a book are provided
as context. In the final sentence one of the words, which has appeared in a previous sentence, is
missing. The task is to choose the correct word to fill in this blank from a small set of words given
as possible answers, all of which occur in the preceding sentences. In our sentence matching task
the sentences provided in the set are randomly drawn from the PTB corpus and are related to the
sentences in the query batch only by the fact that they share a word. In contrast to the CBT and CNN datasets, they provide only a generic, rather than specific, sequential context.
5 Conclusion
In this paper we introduced Matching Networks, a new neural architecture that, by way of its
corresponding training regime, is capable of state-of-the-art performance on a variety of one-shot
classification tasks. There are a few key insights in this work. Firstly, one-shot learning is much
easier if you train the network to do one-shot learning. Secondly, non-parametric structures in a
neural network make it easier for networks to remember and adapt to new training sets in the same
tasks. Combining these observations together yields Matching Networks. Further, we have defined
new one-shot tasks on ImageNet, a reduced version of ImageNet (for rapid experimentation), and a
language modeling task. An obvious drawback of our model is the fact that, as the support set S grows
in size, the computation for each gradient update becomes more expensive. Although there are sparse
and sampling-based methods to alleviate this, much of our future efforts will concentrate around this
limitation. Further, as exemplified in the ImageNet dogs subtask, when the label distribution has
obvious biases (such as being fine grained), our model suffers. We feel this is an area with exciting
challenges which we hope to keep improving in future work.
Acknowledgements
We would like to thank Nal Kalchbrenner for brainstorming around the design of the function g, and
Sander Dieleman and Sergio Guadarrama for their help setting up ImageNet. We would also like
thank Simon Osindero for useful discussions around the tasks discussed in this paper, and Theophane
Weber and Remi Munos for following some early developments. Karen Simonyan and David Silver
helped with the manuscript, as well as many at Google DeepMind. Thanks also to Geoff Hinton and
Alex Toshev for discussions about our results, and to the anonymous reviewers for great suggestions.
References
[1] C Atkeson, A Moore, and S Schaal. Locally weighted learning. Artificial Intelligence Review, 1997.
[2] D Bahdanau, K Cho, and Y Bengio. Neural machine translation by jointly learning to align and translate.
ICLR, 2014.
[3] J Donahue, Y Jia, O Vinyals, J Hoffman, N Zhang, E Tzeng, and T Darrell. Decaf: A deep convolutional
activation feature for generic visual recognition. In ICML, 2014.
[4] A Graves, G Wayne, and I Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014.
[5] K Hermann, T Kocisky, E Grefenstette, L Espeholt, W Kay, M Suleyman, and P Blunsom. Teaching
machines to read and comprehend. In NIPS, 2015.
[6] F Hill, A Bordes, S Chopra, and J Weston. The goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
[7] G Hinton et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of
four research groups. Signal Processing Magazine, IEEE, 2012.
[8] S Hochreiter and J Schmidhuber. Long short-term memory. Neural computation, 1997.
[9] E Hoffer and N Ailon. Deep metric learning using triplet network. Similarity-Based Pattern Recognition,
2015.
[10] S Ioffe and C Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[11] G Koch, R Zemel, and R Salakhutdinov. Siamese neural networks for one-shot image recognition. In
ICML Deep Learning workshop, 2015.
[12] A Krizhevsky and G Hinton. Convolutional deep belief networks on cifar-10. Unpublished, 2010.
[13] A Krizhevsky, I Sutskever, and G Hinton. Imagenet classification with deep convolutional neural networks.
In NIPS, 2012.
[14] BM Lake, R Salakhutdinov, J Gross, and J Tenenbaum. One shot learning of simple visual concepts. In
CogSci, 2011.
[15] MP Marcus, MA Marcinkiewicz, and B Santorini. Building a large annotated corpus of english: The penn
treebank. Computational linguistics, 1993.
[16] T Mikolov, M Karafiát, L Burget, J Černocký, and S Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
[17] M Norouzi, T Mikolov, S Bengio, Y Singer, J Shlens, A Frome, G Corrado, and J Dean. Zero-shot learning
by convex combination of semantic embeddings. arXiv preprint arXiv:1312.5650, 2013.
[18] S Roweis, G Hinton, and R Salakhutdinov. Neighbourhood component analysis. NIPS, 2004.
[19] O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein,
A Berg, and L Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[20] R Salakhutdinov and G Hinton. Learning a nonlinear embedding by preserving class neighbourhood
structure. In AISTATS, 2007.
[21] A Santoro, S Bartunov, M Botvinick, D Wierstra, and T Lillicrap. Meta-learning with memory-augmented
neural networks. In ICML, 2016.
[22] K Simonyan and A Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv
preprint arXiv:1409.1556, 2014.
[23] I Sutskever, O Vinyals, and QV Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[24] C Szegedy, W Liu, Y Jia, P Sermanet, S Reed, D Anguelov, D Erhan, V Vanhoucke, and A Rabinovich.
Going deeper with convolutions. In CVPR, 2015.
[25] C Szegedy, V Vanhoucke, S Ioffe, J Shlens, and Z Wojna. Rethinking the inception architecture for
computer vision. arXiv preprint arXiv:1512.00567, 2015.
[26] O Vinyals, S Bengio, and M Kudlur. Order matters: Sequence to sequence for sets. arXiv preprint
arXiv:1511.06391, 2015.
[27] O Vinyals, M Fortunato, and N Jaitly. Pointer networks. In NIPS, 2015.
[28] K Weinberger and L Saul. Distance metric learning for large margin nearest neighbor classification. JMLR,
2009.
[29] J Weston, S Chopra, and A Bordes. Memory networks. ICLR, 2014.
[30] W Zaremba, I Sutskever, and O Vinyals. Recurrent neural network regularization. arXiv preprint
arXiv:1409.2329, 2014.
Learning Bayesian networks
with ancestral constraints
Eunice Yuh-Jie Chen and Yujia Shen and Arthur Choi and Adnan Darwiche
Computer Science Department
University of California
Los Angeles, CA 90095
{eyjchen,yujias,aychoi,darwiche}@cs.ucla.edu
Abstract
We consider the problem of learning Bayesian networks optimally, when subject
to background knowledge in the form of ancestral constraints. Our approach is
based on a recently proposed framework for optimal structure learning based on
non-decomposable scores, which is general enough to accommodate ancestral
constraints. The proposed framework exploits oracles for learning structures using
decomposable scores, which cannot accommodate ancestral constraints since they
are non-decomposable. We show how to empower these oracles by passing them
decomposable constraints that they can handle, which are inferred from ancestral
constraints that they cannot handle. Empirically, we demonstrate that our approach
can be orders-of-magnitude more efficient than alternative frameworks, such as
those based on integer linear programming.
1 Introduction
Bayesian networks learned from data are broadly used for classification, clustering, feature selection,
and to determine associations and dependencies between random variables, in addition to discovering
causes and effects; see, e.g., [Darwiche, 2009, Koller and Friedman, 2009, Murphy, 2012].
In this paper, we consider the task of learning Bayesian networks optimally, subject to background
knowledge in the form of ancestral constraints. Such constraints are important in practice as they
allow one to assert direct or indirect cause-and-effect relationships (or lack thereof) between random
variables. Further, one expects that their presence should improve the efficiency of the learning
process as they reduce the size of the search space. However, nearly all mainstream approaches for
optimal structure learning make a fundamental assumption, that the scoring function (i.e., the prior
and likelihood) is decomposable. This in turn limits their ability to integrate ancestral constraints,
which are non-decomposable. Such approaches only support structure-modular constraints such
as the presence or absence of edges, or order-modular constraints such as pairwise constraints on
topological orderings; see, e.g., [Koivisto and Sood, 2004, Parviainen and Koivisto, 2013].
Recently, a new framework has been proposed for optimal Bayesian network structure learning [Chen
et al., 2015], but with non-decomposable priors and scores. This approach is based on navigating the
seemingly intractable search space over all network structures (i.e., all DAGs). This intractability can
be mitigated however by leveraging an omniscient oracle that can optimally learn structures with
decomposable scores. This approach led to the first system for finding optimal DAGs (i.e., model
selection) given order-modular priors (a type of non-decomposable prior) [Chen et al., 2015]. The
approach was also applied towards the enumeration of the k-best structures [Chen et al., 2015, 2016],
where it was orders-of-magnitude more efficient than the existing state-of-the-art [Tian et al., 2010,
Cussens et al., 2013, Chen and Tian, 2014].
In this paper, we show how to incorporate non-decomposable constraints into the structure learning
approach of Chen et al. [2015, 2016]. We consider learning with ancestral constraints, and inferring
decomposable constraints from ancestral constraints to empower the oracle. In principle, structure
learning approaches based on integer linear programming (ILP) and constraint programming (CP)
can also represent ancestral constraints (and other non-decomposable constraints) [Jaakkola et al.,
2010, Bartlett and Cussens, 2015, van Beek and Hoffmann, 2015].¹ We empirically evaluate the
proposed approach against those based on ILP, showing orders of magnitude improvements.
This paper is organized as follows. In Section 2, we review the problem of Bayesian network structure
learning. In Section 3, we discuss ancestral constraints and how they relate to existing structure
learning approaches. In Section 4, we introduce our approach for learning with ancestral constraints.
In Section 5, we show how to infer decomposable constraints from non-decomposable ancestral
constraints. We evaluate our approach empirically in Section 6, and conclude in Section 7.
2 Technical preliminaries
We use upper case letters X to denote variables and bold-face upper case letters X to denote sets of
variables. We use X to denote a variable in a Bayesian network and U to denote its parents.
In score-based approaches to structure learning, we are given a complete dataset D and want to learn
a DAG G that optimizes a decomposable score, which aggregates scores over the DAG families XU:
score(G | D) = Σ_{XU} score(XU | D)    (1)
The MDL and BDeu scores are examples of decomposable scores; see, e.g., Darwiche [2009], Koller
and Friedman [2009], Murphy [2012]. The seminal K2 algorithm is one of the first algorithms to exploit decomposable scores [Cooper and Herskovits, 1992]. The K2 algorithm optimizes Equation 1, but assumes that a DAG G is consistent with a given topological ordering π. This assumption decomposes the structure learning problem into independent sub-problems, where we find the optimal set of parents for each variable X, from those variables that precede X in ordering π.

We can find the DAG G that optimizes Equation 1 by running the K2 algorithm on all n! variable orderings π, and then take the DAG with the best score. Note that these n! instances share many
computational sub-problems: finding the optimal set of parents for some variable X. One can
aggregate these common sub-problems, leaving us with only n · 2^{n−1} unique sub-problems. This
technique underlies a number of modern approaches to score-based structure learning, including
some based on dynamic programming [Koivisto and Sood, 2004, Singh and Moore, 2005, Silander
and Myllym?ki, 2006], and related approaches based on heuristic search methods such as A* [Yuan
et al., 2011, Yuan and Malone, 2013]. This aggregation of K2 sub-problems also corresponds to a
search space called the order graph [Yuan et al., 2011, Yuan and Malone, 2013].
Bayesian network structure learning can also be formulated using integer linear programming (ILP),
with Equation 1 as the linear objective function of an ILP. Further, for each variable X and candidate
parent set U, we introduce an ILP variable I(X, U) ∈ {0, 1} to represent the event that X has parents U when I(X, U) = 1, and I(X, U) = 0 otherwise. We then assert constraints that each variable X has a unique set of parents, Σ_U I(X, U) = 1. Another set of constraints ensures that all variables X and their parents U must yield an acyclic graph. One approach is to use cluster constraints [Jaakkola et al., 2010], where for each cluster C ⊆ X, at least one variable X in C has no parents in C: Σ_{X∈C} Σ_{U: U∩C=∅} I(X, U) ≥ 1. Finally, we have the objective function of our ILP, Σ_{X∈X} Σ_{U⊆X\{X}} score(XU | D) · I(X, U), which corresponds to Equation 1.
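As an illustration of this ILP, the following sketch builds the objective, the parent-set constraints, and the cluster constraints with the PuLP library; PuLP is our choice here, and the exhaustive cluster enumeration is for clarity only (systems such as GOBNILP add cluster constraints lazily as cutting planes, since there are exponentially many).

from itertools import combinations
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

def build_structure_ilp(variables, scores):
    """Build the ILP of this section. `scores` maps (X, U) pairs, with U a
    frozenset of candidate parents, to score(XU | D) (lower is better)."""
    prob = LpProblem("bn_structure_learning", LpMinimize)
    I = {(X, U): LpVariable("I_%s_%s" % (X, "_".join(sorted(map(str, U)))),
                            cat="Binary")
         for (X, U) in scores}
    # Objective: sum over X, U of score(XU | D) * I(X, U).
    prob += lpSum(scores[k] * I[k] for k in I)
    # Each variable gets exactly one parent set.
    for X in variables:
        prob += lpSum(I[k] for k in I if k[0] == X) == 1
    # Cluster constraints: in every cluster C, some variable has no parent in C.
    for size in range(2, len(variables) + 1):
        for C in combinations(variables, size):
            Cset = set(C)
            prob += lpSum(I[(X, U)] for (X, U) in I
                          if X in Cset and not (U & Cset)) >= 1
    return prob, I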
3 Ancestral constraints
An ancestral constraint specifies a relation between two variables X and Y in a DAG G. If X is an
ancestor of Y , then there is a directed path connecting X to Y in G. If X is not an ancestor of Y ,
then there is no such path. Ancestral constraints can be used, for example, to express background
knowledge in the form of cause-and-effect relations between variables. When X is an ancestor of Y, we have a positive ancestral constraint, denoted X ⇝ Y. When X is not an ancestor of Y, we have a negative ancestral constraint, denoted X ⇝̸ Y. In this case, there is no directed path from X to Y, but there may still be a directed path from Y to X. Positive ancestral constraints are transitive, i.e., if X ⇝ Y and Y ⇝ Z then X ⇝ Z. Negative ancestral constraints are not transitive.

¹To our knowledge, however, the ILP and CP approaches have not been previously evaluated, in terms of their efficacy in structure learning with ancestral constraints.

[Figure 1: Bayesian network search spaces for the set of variables X = {X1, X2, X3}. Panel (a): a BN graph; panel (b): an EC tree.]
Ancestral constraints are non-decomposable since we cannot in general check whether an ancestral
constraint is satisfied or violated by independently checking the parents of each variable. For example,
consider an optimal DAG, compatible with ordering hX1 , X2 , X3 i, from the family scores:
X
X1
U
{}
score
1
X
X2
U
{}, {X1 }
score
1, 2
X
X3
U
{}, {X1 }, {X2 }, {X1 , X2 }
score
10, 10, 1, 10
X2 ? X3 . If we assert the ancestral
The optimal DAG (with minimal score) in this case is X1
constraint X1
X3 , then the optimal DAG is X1 ? X2 ? X3 . Yet, we cannot enforce this
ancestral constraints using independent, local constraints on the parents that each variable can take.
In particular, the choice of parents for variable X2 and the choice of parents for variable X3 will
jointly determine whether X1 is an ancestor of X3 . Hence, the K 2 algorithm and approaches based
on the order graph (dynamic programming and heuristic search) cannot enforce ancestral constraints.
These approaches, however, can enforce decomposable constraints, such as the presence or absence
of an edge U ? X, or a limit on the size of a family XU. Interestingly, one can infer some
decomposable constraints from non-decomposable ones. We discuss this technique extensively later,
showing how it can lead to significant impact on the efficiency of structure search.
Structure learning approaches based on ILP can in principle enforce non-decomposable constraints,
when they can be encoded as linear constraints. In fact, ancestral relations have been employed
in ILPs and other formalisms to enforce a graph's acyclicity; see, e.g., [Cussens, 2008]. However,
to our knowledge, these approaches have not been evaluated for learning structures with ancestral
constraints. We provide such an empirical evaluation in Section 6.²
4 Learning with constraints
In this section, we review two recently proposed search spaces for learning Bayesian networks: the
BN graph and the EC tree [Chen et al., 2015, 2016]. We subsequently show how we can adapt the
EC tree to facilitate the learning of Bayesian network structures under ancestral constraints.
4.1 BN graphs
The BN graph is a search space for learning structures with non-decomposable scores [Chen et al.,
2015]. Figure 1(a) shows a BN graph over 3 variables, where nodes represent DAGs over different
subsets of variables. A directed edge Gi → Gj, labeled XU, from a DAG Gi to a DAG Gj exists in the BN graph iff Gj can be obtained from Gi by adding a leaf node X with parents U. Each edge has a cost,
corresponding to the score of the family XU, as in Equation 1. Hence, a path from the root G0 to a
DAG Gn yields the score of the DAG, score(Gn | D). As a result, the shortest path in the BN graph
(the one with the lowest score) corresponds to an optimal DAG, as in Equation 1.
²We also make note of Borboudakis and Tsamardinos [2012], which uses ancestral constraints (path constraints) for constraint-based learning methods, such as the PC algorithm. Borboudakis and Tsamardinos [2013] further propose a prior based on path beliefs (soft constraints), evaluated using greedy local search.
Unlike the order graph, the BN graph explicitly represents all possible DAGs. Hence, ancestral constraints can be easily integrated by pruning the search space, i.e., by pruning away those DAGs that do not satisfy the given constraints. Consider Figure 1(a) and the ancestral constraint X1 ⇝ X2. Since the DAG X1 ← X2 violates the constraint, we can prune this node, along with all of its descendants, as the descendants must also violate the constraint (adding new leaves to a DAG will not undo a violated ancestral constraint). Finding a shortest path in this pruned search space will yield an optimal Bayesian network satisfying a given set of ancestral constraints.
We can use A* search to find a shortest path in a BN graph. A* is a best-first search algorithm
that uses an evaluation function f to guide the search. For a given DAG G, we have the evaluation
function f (G) = g(G) + h(G), where g(G) is the actual cost to reach G from the root G0 , and
h(G) is the estimated cost to reach a leaf from G. A* search is guaranteed to find a shortest path
when the heuristic function h is admissible, i.e., it does not over-estimate. Chen et al. [2015, 2016]
showed that a heuristic function can be induced by any learning algorithm that takes a (partial) DAG
as input, and returns an optimal DAG that extends it. Learning systems based on the order graph fall
in this category and can be viewed as powerful oracles that help us to navigate the DAG graph. We
employed URLearning as an oracle in our experiments [Yuan and Malone, 2013]. We will later
show how to empower this oracle by passing it decomposable constraints that we infer from a set of
non-decomposable ancestral constraints?the impact of this empowerment turns out to be dramatic.
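The following Python sketch illustrates A* over the BN graph with constraint-based pruning; `satisfies` and `oracle_h` are assumed to be supplied (e.g., by the pruning tests of Section 4.3 and by an order-graph learner such as URLearning), and all names are ours.

import heapq
from itertools import count

def astar_bn_graph(variables, scores, satisfies, oracle_h):
    """A* over the BN graph. A search node is a partial DAG, encoded as a
    frozenset of (X, U) families; `satisfies` rejects DAGs that violate an
    ancestral constraint, and `oracle_h` is the admissible heuristic h(G)."""
    tie = count()
    start = frozenset()                                # the empty DAG G_0
    frontier = [(oracle_h(start), next(tie), 0.0, start)]
    expanded = set()
    while frontier:
        _, _, g, dag = heapq.heappop(frontier)
        if dag in expanded:
            continue
        expanded.add(dag)
        placed = {X for (X, _) in dag}
        if len(placed) == len(variables):
            return dag, g                              # optimal constrained DAG
        for (X, U), s in scores.items():
            if X in placed or not U <= placed:
                continue                               # X must enter as a new leaf
            child = dag | {(X, U)}
            if satisfies(child):                       # prune violators and, with
                heapq.heappush(frontier,               # them, all their descendants
                               (g + s + oracle_h(child), next(tie), g + s, child))
    return None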
4.2 EC trees
The EC tree is a recently proposed search space that improves the BN graph along two dimensions
[Chen et al., 2016]. First, it merges Markov-equivalent nodes in the BN graph. Second, it canonizes
the resulting EC graph into a tree, where each node is reachable by a unique path from the root.
Two network structures are Markov equivalent iff they have the same undirected skeleton and the
same v-structures. A Markov equivalence class can be represented by a completed, partially directed
acyclic graph (CPDAG). The set of structures represented by a CPDAG P is denoted by class(P )
and may contain exponentially many Markov equivalent structures.
Figure 1(b) illustrates an EC tree over 3 variables, where nodes represent CPDAGs over different
subsets of variables. A directed edge Pi → Pj, labeled XU, from a CPDAG Pi to a CPDAG Pj exists in the EC tree iff there exists a DAG Gj ∈ class(Pj) that can be obtained from a DAG Gi ∈ class(Pi) by
adding a leaf node X with parents U, but where X must be the largest variable in Gj (according to
some canonical ordering). Each edge of an EC tree has a cost score(XU | D), so the shortest path in
the EC tree corresponds to an optimal equivalence class of Bayesian networks.
4.3 EC trees and ancestral constraints
A DAG G satisfies a set of ancestral constraints A (both over the same set of variables) iff the DAG G
satisfies each constraint in A. Moreover, a CPDAG P satisfies A iff there exists a DAG G ∈ class(P) that satisfies A. We enforce ancestral constraints by pruning a CPDAG node P from an EC tree when P does not satisfy the constraints A. First, consider a negative ancestral constraint X1 ⇝̸ X2. A CPDAG P containing a directed path from X1 to X2 violates the constraint, as every structure in class(P) contains a path from X1 to X2. Next, consider a positive ancestral constraint X1 ⇝ X2. A CPDAG P with no partially directed paths from X1 to X2 violates the given constraint, as no structure in class(P) contains a path from X1 to X2.³ Given a CPDAG P, we first test for these two cases, which can be done efficiently. If these tests are inconclusive, we exhaustively enumerate the structures of class(P), to check if any of them satisfies the given constraints. If not, we can prune P and its descendants from the EC tree. The soundness of this pruning step is due to the following.

³A partially directed path from X to Y consists of undirected edges and directed edges oriented towards Y.
Theorem 1 In an EC tree, a CPDAG P satisfies ancestral constraints A, both over the same set of
variables X, iff its descendants satisfy A.
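For a plain DAG, the constraint check underlying this pruning reduces to reachability queries, as in the following sketch (the CPDAG case additionally requires the directed / partially directed path tests described above; the names are ours):

def dag_satisfies(edges, positive, negative):
    """Check a complete DAG against ancestral constraints.

    edges:    iterable of (U, X) pairs, U a parent of X.
    positive: pairs (X, Y) that must satisfy X ~> Y.
    negative: pairs (X, Y) that must satisfy X not~> Y.
    (On partial DAGs only the negative checks are conclusive, since a
    missing positive path may still be created by later leaf additions.)
    """
    children = {}
    for u, x in edges:
        children.setdefault(u, set()).add(x)

    def reaches(src, dst):
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(children.get(v, ()))
        return False

    return (all(reaches(x, y) for x, y in positive) and
            not any(reaches(x, y) for x, y in negative))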
5 Projecting constraints

In this section, we show how one can project non-decomposable ancestral constraints onto decomposable edge and ordering constraints. For example, if G is a set of DAGs satisfying a set of ancestral
constraints A, we want to find the edges that appear in all DAGs of G. These projected constraints
can then be used to improve the efficiency of structure learning. Recall (from Section 4.1) that our
approach to structure learning uses a heuristic function that utilizes an optimal structure learning
algorithm for decomposable scores (the oracle). We tighten this heuristic (empower the oracle) by
passing to it projected edge and ordering constraints, leading to a more efficient search when we are
subject to non-decomposable ancestral constraints.
Given a set of ancestral constraints A, we shall show how to infer new edge and ordering constraints,
that we can utilize to empower our oracle. For the case of edge constraints, we propose a simple
algorithm that can efficiently enumerate all inferrable edge constraints. For the case of ordering
constraints, we propose a reduction to MaxSAT, that can find a maximally large set of ordering
constraints that can be jointly inferred from ancestral constraints.
5.1 Edge constraints
We now propose an algorithm for finding all edge constraints that can be inferred from a set of
ancestral constraints A. We consider (decomposable) constraints on the presence of an edge, or the
absence of an edge. We refer to edge presence constraints as positive constraints, denoted by X → Y, and refer to edge absence constraints as negative constraints, denoted by X ↛ Y.
We let E denote a set of edge constraints. We further let G(A) denote the set of DAGs G over the
variables X that satisfy all ancestral constraints in the set A, and let G(E) denote the set of DAGs G
that satisfy all edge constraints in E. Given a set of ancestral constraints A, we say that A entails
a positive edge constraint X → Y iff G(A) ⊆ G(X → Y), and that A entails a negative edge constraint X ↛ Y iff G(A) ⊆ G(X ↛ Y). For example, consider the four DAGs over the variables X, Y and Z that satisfy the ancestral constraints X ⇝̸ Z and Y ⇝ Z: each contains the edge Y → Z, together with any subset of the edges Y → X and Z → X.
First, we note that no DAG above contains the edge X → Z, since this would immediately violate the constraint X ⇝̸ Z. Next, no DAG above contains the edge X → Y. Suppose instead that this edge appeared; since Y ⇝ Z, we could infer X ⇝ Z, which contradicts the existing constraint X ⇝̸ Z. Hence, we can infer the negative edge constraint X ↛ Y. Finally, no DAG above contains the edge Z → Y, since this would lead to a directed cycle with the constraint Y ⇝ Z.
Before we present our algorithm for inferring edge constraints, we first revisit some properties of
ancestral constraints that we will need. Note that given a set of ancestral constraints, we may be
able to infer additional ancestral constraints. First, given two constraints X ⇝ Y and Y ⇝ Z, we can infer an additional ancestral constraint X ⇝ Z (by transitivity of ancestral relations). Second, if adding a path X ⇝ Y would create a directed cycle (e.g., if Y ⇝ X exists in A), or if it would violate an existing negative ancestral constraint (e.g., if X ⇝̸ Z and Y ⇝ Z exist in A), then we can infer a new negative constraint X ⇝̸ Y. By using a few rules based on the examples above, we
can efficiently enumerate all of the ancestral constraints that are entailed by a given set of ancestral
constraints (details omitted for space). Hence, we shall subsequently assume that a given set of
ancestral constraints A will already include all ancestral constraints that can be entailed from it. We
then refer to A as a maximum set of ancestral constraints.
We now consider how to infer edge constraints from a (maximum) set of ancestral constraints A. First, let An(X) be the set that consists of X and every X′ such that X′ ⇝ X ∈ A, and let De(X) be the set that consists of X and every X′ such that X ⇝ X′ ∈ A. In other words, An(X) contains X and all nodes that are constrained to be ancestors of X by A, i.e., each X′ ∈ An(X) is either X or an ancestor of X, for all DAGs G ∈ G(A). Similarly, De(X) contains X and all nodes that are constrained to be descendants of X by A.
First, we can check if a negative edge constraint X ↛ Y is entailed by A by enumerating all possible Xa ⇝̸ Yb for all Xa ∈ An(X) and all Yb ∈ De(Y). If any Xa ⇝̸ Yb is in A then we know that A entails X ↛ Y. That is, since Xa ⇝ X and Y ⇝ Yb, if there was a DAG G ∈ G(A) with the edge X → Y, then G would also have a path from Xa to Yb. Hence, we can infer X ↛ Y. This idea is summarized by the following theorem:
Theorem 2 Given a maximum set of ancestral constraints A, then A entails the negative edge constraint X ↛ Y iff Xa ⇝̸ Yb ∈ A, where Xa ∈ An(X) and Yb ∈ De(Y).
Next, suppose that both (1) A dictates that X can reach Y, and (2) A dictates that there is no path from X to Z to Y, for any other variable Z. In this case, we can infer a positive edge constraint X → Y. We can again verify if X → Y is entailed by A by enumerating all relevant candidates Z, based on the following theorem.

Theorem 3 Given a maximum set of ancestral constraints A, then A entails the positive edge constraint X → Y iff A contains X ⇝ Y and, for all Z ∉ An(X) ∪ De(Y), the set A contains a constraint Xa ⇝̸ Zb or Za ⇝̸ Yb, where Xa ∈ An(X), Zb ∈ De(Z), Za ∈ An(Z) and Yb ∈ De(Y).
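Theorems 2 and 3 suggest a direct enumeration procedure; the following sketch is one possible reading of them, with An(·) and De(·) as defined above and the representation of A as an assumption of ours.

def entailed_edge_constraints(variables, pos, neg):
    """Enumerate edge constraints entailed by a maximum set of ancestral
    constraints. `pos` and `neg` are sets of pairs (X, Y) standing for
    X ~> Y and X not~> Y respectively."""
    An = {X: {X} | {Z for (Z, W) in pos if W == X} for X in variables}
    De = {X: {X} | {W for (Z, W) in pos if Z == X} for X in variables}

    def edge_blocked(X, Y):       # Theorem 2: entails the absence of X -> Y
        return any((Xa, Yb) in neg for Xa in An[X] for Yb in De[Y])

    def edge_forced(X, Y):        # Theorem 3: entails the presence of X -> Y
        if (X, Y) not in pos:
            return False
        for Z in set(variables) - An[X] - De[Y]:
            blocked_XZ = any((Xa, Zb) in neg for Xa in An[X] for Zb in De[Z])
            blocked_ZY = any((Za, Yb) in neg for Za in An[Z] for Yb in De[Y])
            if not (blocked_XZ or blocked_ZY):
                return False
        return True

    neg_edges = {(X, Y) for X in variables for Y in variables
                 if X != Y and edge_blocked(X, Y)}
    pos_edges = {(X, Y) for (X, Y) in pos if edge_forced(X, Y)}
    return pos_edges, neg_edges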
5.2 Topological ordering constraints
We next consider constraints on the topological orderings of a DAG. An ordering satisfies a constraint
X < Y iff X appears before Y in the ordering. Further, an ordering constraint X < Y is compatible
with a DAG G iff there exists a topological ordering of DAG G that satisfies the constraint X < Y .
The negation of an ordering constraint X < Y is the ordering constraint Y < X. A given ordering
satisfies either X < Y or Y < X, but not both at the same time. A DAG G may be compatible with
both X < Y and Y < X through two different topological orderings.
We let O denote a set of ordering constraints, and let G(O) denote the set of DAGs G that are
compatible with each ordering constraint in O. The task of determining whether a set of ordering
constraints O is entailed by a set of ancestral constraints A, i.e., whether G(A) ⊆ G(O), is more subtle than the case of edge constraints. For example, consider the set of ancestral constraints A = {Z ⇝̸ Y, X ⇝̸ Z}. We can infer the ordering constraint Y < Z from the first constraint Z ⇝̸ Y, and Z < X from the second constraint X ⇝̸ Z.⁴ If we were to assume both ordering constraints, we could infer the third ordering constraint Y < X, by transitivity. However, consider the following DAG G which satisfies A: X → Y, with Z disconnected. This DAG is compatible with the constraint Y < Z as well as the constraint Z < X, but it is not compatible with the constraint Y < X. Consider the three topological orderings of the DAG G: ⟨X, Y, Z⟩, ⟨X, Z, Y⟩ and ⟨Z, X, Y⟩. We see that none of the orderings satisfy both ordering constraints at the same time. Hence, if we assume both ordering constraints at the same time, it eliminates all topological orderings of the DAG G, and hence the DAG itself. Consider another example over variables W, X, Y and Z with a set of ancestral constraints A = {W ⇝̸ Z, Y ⇝̸ X}. The following DAG G satisfies A: W → X and Y → Z. However, inferring the ordering constraints Z < W and X < Y from each ancestral constraint of A leads to a cycle in the above DAG (W < X < Y < Z < W), hence eliminating the DAG.
Hence, for a given set of ancestral constraints A, we want to infer from it a set O of ordering
constraints that is as large as possible, but without eliminating any DAGs satisfying A. Roughly, this
involves inferring ordering constraints X < Y from ancestral constraints Y ⇝̸ X, as long as the ordering constraints do not induce a cycle. We propose to encode the problem as an instance of MaxSAT
[Li and Manyà, 2009]. Given a maximum set of ancestral constraints A, we construct a MaxSAT
instance where propositional variables represent ordering constraints and ancestral constraints (true if
the constraint is present, and false otherwise). The clauses encode the ancestral constraints, as well as
constraints to ensure acyclicity. By maximizing the set of satisfied clauses, we then maximize the set
of constraints X < Y selected. In turn, the (decomposable) ordering constraints can be to empower
an oracle during structure search. Our MaxSAT problem includes hard constraints (1-3), as well as
soft constraints (4):
1. transitivity of orderings: for all X < Y, Y < Z: (X < Y) ∧ (Y < Z) ⇒ (X < Z)
2. a necessary condition for orderings: for all X < Y: (X < Y) ⇒ (Y ⇝̸ X)
3. a sufficient condition for acyclicity: for all X < Y and Z < W: (X < Y) ∧ (Z < W) ⇒ (X ⇝ Y) ∨ (Z ⇝ W) ∨ (X ⇝ Z) ∨ (Y ⇝ W) ∨ (X ⇝ W) ∨ (Y ⇝̸ Z)
4. infer orderings from ancestral constraints: for all X ⇝̸ Y in A: (X ⇝̸ Y) ⇒ (Y < X)
⁴To see this, consider any DAG G satisfying Z ⇝̸ Y. We can construct another DAG G′ from G by adding the edge Y → Z, since adding such an edge does not introduce a directed cycle. As a result, every topological ordering of G′ satisfies Y < Z, and G(Z ⇝̸ Y) ⊆ G(Y < Z).
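Since A is fixed, hard constraints 2 and 3 above reduce to unit and binary clauses over the ordering variables; the following sketch generates such a weighted partial MaxSAT instance as DIMACS-style clause lists (this encoding is our simplification of the one described above).

from itertools import permutations

def ordering_maxsat(variables, pos, neg):
    """Generate a weighted partial MaxSAT instance for inferring ordering
    constraints from a maximum set of ancestral constraints.

    pos/neg are sets of pairs (X, Y) meaning X ~> Y and X not~> Y in A.
    var(X, Y) encodes "X < Y"; the negative literal means "Y < X".
    """
    idx = {}
    def var(X, Y):
        if (Y, X) in idx:
            return -idx[(Y, X)]
        return idx.setdefault((X, Y), len(idx) + 1)

    hard, soft = [], []
    for X, Y, Z in permutations(variables, 3):       # 1. transitivity
        hard.append([-var(X, Y), -var(Y, Z), var(X, Z)])
    for X, Y in permutations(variables, 2):          # 2. necessary condition
        if (Y, X) in pos:                            #    Y ~> X forbids X < Y
            hard.append([-var(X, Y)])
    for X, Y in permutations(variables, 2):          # 3. acyclicity condition
        for Z, W in permutations(variables, 2):
            if len({X, Y, Z, W}) < 4:
                continue
            ok = ((X, Y) in pos or (Z, W) in pos or (X, Z) in pos or
                  (Y, W) in pos or (X, W) in pos or (Y, Z) in neg)
            if not ok:
                hard.append([-var(X, Y), -var(Z, W)])
    for X, Y in neg:                                 # 4. soft: X not~> Y favours Y < X
        soft.append((1, [var(Y, X)]))
    return hard, soft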
p      n=10, N=512   n=10, N=2048  n=10, N=8192  n=12, N=512   n=12, N=2048  n=12, N=8192  n=14, N=512    n=14, N=2048   n=14, N=8192
       EC    GOB     EC    GOB     EC    GOB     EC    GOB     EC    GOB     EC    GOB     EC     GOB     EC    GOB      EC    GOB
0.00   <     7.81    <     112.98  <     19.70   0.01  70.85   0.02  98.28   0.02  144.21  0.06   625.081 0.07  839.46   0.09  1349.24
0.01   <     9.61    <     15.41   <     23.58   0.01  73.39   0.01  99.46   0.01  145.75  0.05   673.003 0.06  901.50   0.08  1356.63
0.05   <     11.56   <     14.54   <     19.85   0.02  60.16   0.01  75.40   0.27  95.11   0.08   243.681 0.05  287.45   0.04  411.22
0.10   <     10.74   <     11.60   <     13.87   0.21  52.02   0.10  53.29   0.36  59.42   0.58   176.500 1.26  198.18   0.03  218.94
0.25   0.01  4.04    <     3.43    <     3.37    4.91  22.47   0.18  20.88   0.17  19.68   55.07  126.312 0.91  112.80   0.02  107.44
0.50   <     0.87    <     0.71    <     0.72    0.51  6.11    0.03  6.10    0.01  5.85    0.48   73.236  0.02  67.29    <     62.60
0.75   <     0.31    <     0.75    <     0.30    <     2.66    <     2.62    <     2.57    <      44.074  <     42.95    <     41.21
1.00   <     0.21    <     0.31    <     0.21    <     2.29    <     2.30    <     2.27    <      39.484  <     39.67    <     37.78

Table 1: Time (in sec) used by the EC tree and GOBNILP to find optimal networks. < is less than 0.01 sec. n is the variable number, N is the dataset size, p is the percentage of the ancestral constraints.
p      n=12, N=512         n=12, N=2048        n=12, N=8192        n=14, N=512          n=14, N=2048         n=14, N=8192
       EC t    s    GOB    EC t    s    GOB    EC t   s    GOB    EC t    s    GOB     EC t    s    GOB     EC t  s    GOB
0.01   0.01    1    63.53  0.01    1    83.59  0.02   1    128.23 0.01    1    634.19  0.123   1    738.25  0.12  1    1295.90
0.05   0.06    1    55.20  0.03    1    70.20  1.18   1    90.59  0.06    1    228.57  0.868   1    276.68  0.18  1    404.35
0.10   2.56    1    50.33  2.36    1    52.80  0.91   1    57.66  2.54    1    174.70  34.979  0.98 183.93  0.60  1    210.12
0.25   70.19   0.98 23.29  4.57    1    20.74  1.63   1    21.16  280.59  0.84 137.67  88.80   1    126.80  1.85  1    126.24
0.50   137.31  1    7.74   15.53   1    7.80   1.43   1    7.36   609.18  0.88 90.92   35.62   1    85.58   4.74  1    83.81
0.75   21.86   1    4.38   1.73    1    4.39   0.50   1    4.30   258.80  1    64.51   6.49    1    63.68   2.28  1    64.04
1.00   2.31    1    4.10   0.35    1    4.07   0.15   1    4.02   21.18   1    61.44   1.39    1    60.56   0.54  1    61.06

Table 2: Time t (in sec) used by the EC tree and GOBNILP to find optimal networks, without any projected constraints, using a 32GB memory and 2 hour time limit. s is the percentage of test cases that finish.
We remark that the above constraints are sufficient for finding a set of ordering constraints O that is
entailed by a set of ancestral constraints A, which is formalized in the following theorem.
Theorem 4 Given a maximum set of ancestral constraints A, let O be a closed set of ordering
constraints. The set O is entailed by A if O satisfies the following two statements:
1. for all X < Y in O, A contains $Y \not\leadsto X$
2. for all X < Y and Z < W in O, where X, Y, Z and W are distinct, A contains at least
one of $X \leadsto Y$, $Z \leadsto W$, $X \leadsto Z$, $Y \leadsto W$, $X \leadsto W$, $Y \not\leadsto Z$.
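The two conditions can also be checked directly; a small sanity-check sketch (our own naming and data layout, not code from the paper) follows:

```python
from itertools import permutations

def entailed_by(O, anc_pos, anc_neg):
    """Check the two sufficient conditions of Theorem 4 for a closed set O
    of ordering constraints; a pair (x, y) in O means x < y."""
    # Condition 1: every x < y in O requires  y !~> x  in A
    if any((y, x) not in anc_neg for (x, y) in O):
        return False
    # Condition 2: each ordered pair of orderings over four distinct
    # variables requires one of six ancestral constraints in A
    for (x, y), (z, w) in permutations(O, 2):
        if len({x, y, z, w}) < 4:
            continue
        if not ({(x, y), (z, w), (x, z), (y, w), (x, w)} & anc_pos
                or (y, z) in anc_neg):
            return False
    return True
```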
6 Experiments
We now empirically evaluate the effectiveness of our approach to learning with ancestral constraints.
We simulated different structure learning problems from standard Bayesian network benchmarks^5
(ALARM, ANDES, CHILD, CPCS54, and HEPAR2), by (1) taking a random sub-network N of a given
size,^6 (2) simulating a training dataset from N of varying sizes, and (3) simulating a set of ancestral
constraints of a given size, by randomly selecting ordered pairs whose ground-truth ancestral relations
in N were used as constraints. In our experiments, we varied the number of variables in the learning
problem (n), the size of the training dataset (N), and the percentage of the n(n - 1)/2 total ancestral
relations that were given as constraints (p). We report results that were averaged over 50 different
datasets: 5 datasets were simulated from each of 2 different sub-networks, which were taken from
each of the 5 original networks mentioned above. Our experiments were run on a 2.67GHz Intel Xeon
X5650 CPU. We assumed BDeu scores with an equivalent sample size of 1. We further pre-computed
the scores of candidate parent sets, which were fed as input into each system evaluated. Finally, we
used the EvaSolver partial MaxSAT solver, for inferring ordering constraints.^7
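Step (3) of this protocol can be sketched as follows (a minimal version of our own; the adjacency-matrix input, the Warshall closure, and sampling both directions of each picked pair are illustrative assumptions):

```python
import numpy as np

def sample_ancestral_constraints(adj, p, rng=np.random.default_rng(0)):
    """Pick a fraction p of the n(n-1)/2 unordered variable pairs and emit
    each pair's ground-truth ancestral relations (both directions) as
    constraints; adj is the boolean adjacency matrix of the DAG N."""
    n = len(adj)
    reach = np.array(adj, dtype=bool)
    for k in range(n):                               # Warshall transitive closure
        reach |= reach[:, k:k+1] & reach[k:k+1, :]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    picked = rng.choice(len(pairs), size=int(p * len(pairs)), replace=False)
    anc_pos, anc_neg = set(), set()
    for idx in picked:
        i, j = pairs[idx]
        (anc_pos if reach[i, j] else anc_neg).add((i, j))
        (anc_pos if reach[j, i] else anc_neg).add((j, i))
    return anc_pos, anc_neg
```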
In our first set of experiments, we compared our approach with the ILP-based system of GOBNILP,^8
where we encoded ancestral constraints using linear constraints, based on [Cussens, 2008]; note
again that both are exact approaches for structure learning. In Table 1, we supplied both systems
with decomposable constraints inferred via projection (which empowers the oracle for searching
the EC tree, and provides redundant constraints for the ILP). In Table 2, we withheld the projected
5. The networks used in our experiments are available at http://www.bnlearn.com/bnrepository
6. We select random sets of nodes and all their ancestors, up to a connected sub-network of a given size.
7. Available at http://www.maxsat.udl.cat/14/solvers/eva500a__
8. Available at http://www.cs.york.ac.uk/aig/sw/gobnilp
        n = 18                                                     n = 20
        N = 512            N = 2048           N = 8192             N = 512            N = 2048           N = 8192
p       t      s    Δ      t      s    Δ      t      s    Δ        t      s    Δ      t      s    Δ      t      s    Δ
0.00    2.25   1    16.74  2.78   1    8.32   3.11   1    7.06     19.40  1    23.44  20.62  1    10.60  28.22  1    7.22
0.01    2.22   1    16.58  3.46   1    8.60   3.63   1    7.38     30.38  1    23.67  30.46  1    10.53  34.34  1    7.09
0.05    41.15  0.96 15.02  2.91   0.98 6.96   2.12   1    5.56     87.74  0.96 18.44  39.25  1    8.20   17.40  1    5.00
0.10    149.40 0.94 12.72  73.03  0.96 5.81   7.35   1    3.78     492.59 0.82 14.67  185.82 0.94 7.21   24.46  0.98 3.94
0.25    251.74 0.78 6.33   338.10 0.94 3.79   30.90  0.96 1.96     507.02 0.58 6.17   572.68 0.88 4.46   153.81 0.96 2.28
0.50    95.18  0.98 5.49   13.92  0.98 2.69   116.29 0.98 1.24     163.19 0.88 6.36   46.43  0.96 2.19   70.15  1    1.07
0.75    9.07   1    3.30   5.83   1    1.66   0.72   1    0.72     1.47   1    4.49   0.28   1    1.36   0.38   1    0.60
1.00    <      1    0.72   <      1    0.48   <      1    0.26     <      1    2.02   <      1    0.47   <      1    0.18

Table 3: Time t (in sec) used by EC tree to find optimal networks, with a 32G memory, a 2 hour time limit. < is
less than 0.01 sec. n is the variable number, N is the dataset size, p is the percentage of the ancestor constraints,
s is the percentage of test cases that finish, Δ is the edge difference of the learned and true networks.
constraints. In Table 1, our approach is consistently orders-of-magnitude faster than GOBNILP, for
almost all values of n, N and p that we varied. This difference increased with the number of variables
n.^9 When we compare Table 2 to Table 1, we see that for the EC tree, the projection of constraints
has a significant impact on the efficiency of learning (often by several orders of magnitude). For ILP,
there is some mild overhead with a smaller number of variables (n = 12), but with a larger number
of variables (n = 14), there were consistent improvements when projected constraints are used.
Next, we evaluate (1) how introducing ancestral constraints affects the efficiency of search, and (2)
how scalable our approach is as we increase the number of variables in the learning problem. In
Table 3, we report results where we varied the number of variables n ? {16, 18, 20}, and asserted
a 2 hour time limit and a 32GB memory limit. First, we observe an easy-hard-easy trend as we
increase the proportion p of ancestral constraints. When p is small, the learning problem is close
to the unconstrained problem, and our oracle serves as an accurate heuristic. When p is large, the
problem is highly constrained, and the search space is significantly reduced. In contrast, the ILP
approach more consistently became easier as more constraints were provided (from Table 1). As
expected, the learning problem becomes more challenging when we increase the number of variables
n, and when less training data is available. We note that our approach scales to n = 20 variables here,
which is comparable to the scalability of modern score-based approaches reported in the literature (for
BDeu scores); e.g., Yuan and Malone [2013] reported results up to 26 variables (for BDeu scores).
Table 3 also reports the average structural Hamming distance Δ between the learned network and the
ground-truth network used to generate the data. We see that as the dataset size N and the proportion
p of available constraints increase, the learned model becomes more accurate.^10 We remark
that a relatively small number of ancestral constraints (say 10%-25%) can have a similar impact
on the quality of the observed network (relative to the ground-truth), as increasing the amount of
data available from 512 to 2048, or from 2048 to 8192. This highlights the impact that background
knowledge can have, in contrast to collecting more (potentially expensive) training data.
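For reference, one simple reading of the edge-difference metric Δ (our assumption; SHD variants differ in how they count reversed edges) is:

```python
import numpy as np

def edge_difference(adj_learned, adj_true):
    """Directed-edge disagreement between two DAG adjacency matrices;
    note that a single reversed edge counts twice under this convention."""
    return int((np.array(adj_learned, bool) != np.array(adj_true, bool)).sum())
```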
7 Conclusion
We proposed an approach for learning the structure of Bayesian networks optimally, subject to
ancestral constraints. These constraints are non-decomposable, posing a particular difficulty for
learning approaches for decomposable scores. We utilized a search space for structure learning with
non-decomposable scores, called the EC tree, and employ an oracle that optimizes decomposable
scores. We proposed a sound and complete method for pruning the EC tree, based on ancestral
constraints. We also showed how the employed oracle can be empowered by passing it decomposable
constraints inferred from the non-decomposable ancestral constraints. Empirically, we showed that
our approach is orders-of-magnitude more efficient compared to learning systems based on ILP.
Acknowledgments
This work was partially supported by NSF grant #IIS-1514253 and ONR grant #N00014-15-1-2339.
9. When no limits are placed on the sizes of families (as was done here), heuristic-search approaches (like
ours) have been observed to scale better than ILP approaches [Yuan and Malone, 2013, Malone et al., 2014].
10. Δ can be greater than 0 when p = 1, as there may be many DAGs that respect a set of ancestral constraints.
For example, DAG X → Y → Z expresses the same ancestral relations after adding edge X → Z.
References
M. Bartlett and J. Cussens. Integer linear programming for the Bayesian network structure learning problem.
Artificial Intelligence, 2015.
G. Borboudakis and I. Tsamardinos. Incorporating causal prior knowledge as path-constraints in Bayesian
networks and maximal ancestral graphs. In Proceedings of the Twenty-Ninth International Conference on
Machine Learning, 2012.
G. Borboudakis and I. Tsamardinos. Scoring and searching over Bayesian networks with causal and associative
priors. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
E. Y.-J. Chen, A. Choi, and A. Darwiche. Learning optimal Bayesian networks with DAG graphs. In Proceedings
of the 4th IJCAI Workshop on Graph Structures for Knowledge Representation and Reasoning (GKR'15),
2015.
E. Y.-J. Chen, A. Choi, and A. Darwiche. Enumerating equivalence classes of Bayesian networks using EC
graphs. In Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics,
2016.
Y. Chen and J. Tian. Finding the k-best equivalence classes of Bayesian network structures for model averaging.
In Proceedings of the Twenty-Eighth Conference on Artificial Intelligence, 2014.
G. F. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic networks from data.
Machine Learning, 9(4):309-347, 1992.
J. Cussens. Bayesian network learning by compiling to weighted MAX-SAT. In Proceedings of the 24th
Conference in Uncertainty in Artificial Intelligence (UAI), pages 105-112, 2008.
J. Cussens, M. Bartlett, E. M. Jones, and N. A. Sheehan. Maximum likelihood pedigree reconstruction using
integer linear programming. Genetic Epidemiology, 37(1):69-83, 2013.
A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian network structure using LP relaxations.
In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages
358-365, 2010.
M. Koivisto and K. Sood. Exact Bayesian structure discovery in Bayesian networks. The Journal of Machine
Learning Research, 5:549-573, 2004.
D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
C.-M. Li and F. Manyà. MaxSAT, hard and soft constraints. In Handbook of Satisfiability, pages 613-631. 2009.
B. Malone, K. Kangas, M. Järvisalo, M. Koivisto, and P. Myllymäki. Predicting the hardness of learning
Bayesian networks. In Proceedings of the Twenty-Eighth Conference on Artificial Intelligence, 2014.
K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
P. Parviainen and M. Koivisto. Finding optimal Bayesian networks using precedence constraints. Journal of
Machine Learning Research, 14(1):1387-1415, 2013.
T. Silander and P. Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In
Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, pages 445-452, 2006.
A. P. Singh and A. W. Moore. Finding optimal Bayesian networks by dynamic programming. Technical report,
CMU-CALD-05-106, 2005.
J. Tian, R. He, and L. Ram. Bayesian model averaging using the k-best Bayesian network structures. In
Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pages 589-597, 2010.
P. van Beek and H. Hoffmann. Machine learning of Bayesian networks using constraint programming. In
Proceedings of the 21st International Conference on Principles and Practice of Constraint Programming
(CP), pages 429-445, 2015.
C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective. Journal of Artificial
Intelligence Research, 48:23-65, 2013.
C. Yuan, B. Malone, and X. Wu. Learning optimal Bayesian networks using A* search. In Proceedings of the
Twenty-Second International Joint Conference on Artificial Intelligence, pages 2186-2191, 2011.
CliqueCNN: Deep Unsupervised Exemplar Learning
Miguel A. Bautista*, Artsiom Sanakoyeu*, Ekaterina Sutter, Björn Ommer
Heidelberg Collaboratory for Image Processing
IWR, Heidelberg University, Germany
firstname.lastname@iwr.uni-heidelberg.de
Abstract
Exemplar learning is a powerful paradigm for discovering visual similarities in
an unsupervised manner. In this context, however, the recent breakthrough in
deep learning could not yet unfold its full potential. With only a single positive
sample, a great imbalance between one positive and many negatives, and unreliable
relationships between most samples, training of Convolutional Neural networks is
impaired. Given weak estimates of local distance we propose a single optimization
problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches and similar samples are grouped
into compact cliques. Learning exemplar similarities is framed as a sequence of
clique categorization tasks. The CNN then consolidates transitivity relations within
and between cliques and learns a single representation for all samples without
the need for labels. The proposed unsupervised approach has shown competitive
performance on detailed posture analysis and object classification.
1 Introduction
Visual similarity learning is the foundation for numerous computer vision subtasks ranging from
low-level image processing to high-level object recognition or posture analysis. A common paradigm
has been category-level recognition, where categories and the similarities of all their instances
to other classes are jointly modeled. However, large intra-class variability has recently spurred
exemplar methods [15, 11], which split this problem into simpler sub-tasks. Therefore, separate
exemplar classifiers are trained by learning the similarities of individual exemplars against a large
set of negatives. The exemplar paradigm has been successfully employed in diverse areas such as
segmentation [11], grouping [10], instance retrieval [2, 19], and object recognition [15, 5]. Learning
similarities is also of particular importance for posture analysis [8] and video parsing [17].
Among the many approaches for similarity learning, supervised techniques have been particularly
popular in the vision community, leading to the formulation as a ranking [23], regression [6], and
classification [17] task. With the recent advances of convolutional neural networks (CNN), two-stream
architectures [25] and ranking losses [21] have shown great improvements. However, to achieve their
performance gain, CNN architectures require millions of samples of supervised training data or at
least the fine-tuning [3] on large datasets such as PASCAL VOC. Although the amount of accessible
image data is increasing at an enormous rate, supervised labeling of similarities is very costly. In
addition, not only similarities between images are important, but especially between objects and their
parts. Annotating the fine-grained similarities between all these entities is hopelessly complex, in
particular for the large datasets typically used for training CNNs.
Unsupervised deep learning of similarities that does not require any labels for pre-training or
fine-tuning is, therefore, of great interest to the vision community. This way we can utilize large
* Both authors contributed equally to this work.
Project on GitHub: https://github.com/asanakoy/cliquecnn
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
image datasets without being limited by the need for costly manual annotations. However, CNNs for
exemplar-based learning have been rare [4] due to limitations resulting from the widely used softmax
loss. The learning task suffers from only a single positive instance, it is highly unbalanced with many
more negatives, and the relationships between samples are unknown, cf. Sec. 2. Consequently,
stochastic gradient descent (SGD) gets corrupted and has a bias towards negatives, thus forfeiting
the benefits of deep learning.
Outline of the proposed approach: We overcome these limitations by updating similarities and
CNNs. Typically at the beginning only a few, local estimates of (dis-)similarity are easily available,
i.e., pairs of samples that are highly similar (near duplicates) or that are very distant. Most of the
similarities are, however, unknown or mutually contradicting, so that transitivity does not hold.
Therefore, we initially can only gather small, compact cliques of mutually similar samples around an
exemplar, but for most pairs of exemplars we do not know whether they are similar or dissimilar. To nevertheless
define balanced classification tasks suited for CNN training, we formulate an optimization problem
that builds training batches for the CNN by selecting groups of compact cliques, so that all cliques in
a batch are mutually distant. Thus for all samples of a batch (dis-)similarity is defined: they either
belong to the same compact clique or are far away and belong to different cliques. However, pairs of
samples with no reliable similarities end up in different batches so they do not yield false training
signal for SGD. Classifying if a sample belongs to a clique serves as a pretext task for learning
exemplar similarity. Training the network then implicitly reconciles the transitivity relations between
samples in different batches. Thus, the learned CNN representations impute similarities that were
initially unavailable and generalize them to unseen data.
In the experimental evaluation the proposed approach significantly improves over state-of-the-art
approaches for posture analysis and retrieval by learning a general feature representation for human
pose that can be transferred across datasets.
1.1 Exemplar Based Methods for Similarity Learning
The Exemplar Support Vector Machine (Exemplar-SVM) has been one of the driving methods for
exemplar based learning [15]. Each Exemplar-SVM classifier is defined by a single positive instance
and a large set of negatives. To improve performance, Exemplar-SVMs require several rounds of hard
negative mining, which greatly increases the computational cost of this approach. To circumvent this high
computational cost, [10] proposes to train Linear Discriminant Analysis (LDA) over Histogram of
Oriented Gradients (HOG) features [10]. Whitening the HOG features with the common covariance matrix
estimated over all exemplars removes correlations between the HOG features, which tend to amplify
the background of the image.
Recently, several CNN approaches have been proposed for supervised similarity learning using either
pairs [25], or triplets [21] of images. However, supervised formulations for learning similarities
require that the supervisory information scales quadratically for pairs of images, or cubically for
triplets. This results in very large training times.
Literature on exemplar based learning in CNNs is very scarce. In [4] the authors of Exemplar-CNN tackle the problem of unsupervised feature learning. A patch-based categorization problem is
designed by randomly extracting a patch for each image in the training set and defining it as a surrogate
class. Hence, since this approach does not take into account (dis-)similarities between exemplars, it
fails to model their transitivity relationships, resulting in poor performance (see Sect. 3.1).
Furthermore, recent works by Wang et al. [22] and Doersch et al. [3] showed that temporal information
in videos and spatial context information in images can be utilized as a convenient supervisory
signal for learning feature representations with CNNs. However, the computational cost of the
training algorithm is enormous, since the approach in [3] needs to tackle all possible pair-wise image
relationships, requiring a training set that scales quadratically with the number of samples. On the
contrary, our approach leverages the relationship information between compact cliques, defining
a multi-class classification problem. As each training batch contains mutually distinct cliques, the
computational cost of the training algorithm is greatly decreased.
2 Approach
We will now discuss how we can employ a CNN for learning similarities between all pairs of a large
number of exemplars. Exemplar learning in CNNs has been a relatively unexplored approach for
multiple reasons. First and foremost, deep learning requires large amounts of training data, thus
[Figure 1(a): ROC curves, true positive rate vs. false positive rate; legend with average AUC: 1-sample-CNN (0.62), NN-CNN (0.65), Ours (0.79). Panels (b)-(d): similarity matrix crops.]
Figure 1: (a) Average AUC for posture retrieval in the Olympic Sports dataset. Similarities learnt
by (b) 1-sample CNN, (c) using NN-CNN, and (d) for the proposed approach. The plots show a
magnified crop of the full similarity matrix. Note the more detailed fine structure in (d).
conflicting with having only a single positive exemplar in a setup that we now abbreviate as 1-sample
CNN. Such a 1-sample CNN faces several issues. (i) The within-class variance of an individual
exemplar cannot be modeled. (ii) The ratio of one exemplar and many negatives is highly imbalanced,
so that the softmax loss over SGD batches overfits against the negatives. (iii) An SGD batch for
training a CNN on multiple exemplars can contain arbitrarily similar samples with different labels (the
different exemplars may be similar or dissimilar), resulting in label inconsistencies. The proposed
method overcomes these issues as follows. In Sect. 2.2 we discuss why simply merging an exemplar
with its nearest neighbors and data augmentation (similar in spirit to the Clustered Exemplar-SVM
[20]) is not sufficient to address (i). Sect. 3.1 compares this NN-CNN approach against other methods.
Sect. 2.3 deals with (ii) and (iii) by generating batches of cliques that maximize the intra-clique
similarity while minimizing inter-clique similarity.
To show the effectiveness of the proposed method we give empirical proof by training CNNs in both
1-sample CNN and NN-CNN manners. Fig. 1(a) shows the average ROC curve for posture retrieval
in the Olympic Sports dataset [16] (refer to Sec. 3.1 for further details) for 1-sample CNN, NN-CNN
and the proposed method, which clearly outperforms both exemplar based strategies. In addition,
Fig. 1(b-d) show an excerpt of the similarity matrix learned for each method. It becomes evident
that the proposed approach captures more detailed similarity structures, e.g., the diagonal structures
correspond to repetitions of the same gait cycle within a long jump.
2.1 Initialization
Since deep learning benefits from large amounts of data and requires more than a single exemplar
to avoid biased gradients, we now reframe exemplar-based learning of similarities so that it can
be handled by a CNN. Given a single exemplar di we thus strive for related samples to enable a
CNN training that then further improves the similarities between samples. To obtain this initial
set of few, mutually similar samples for an exemplar, we now briefly discuss the reliability of
standard feature distances such as whitening HOG features using LDA [10]. HOG-LDA is a
computationally effective foundation for estimating similarities $s_{ij}$ between large numbers of samples,
$s_{ij} = s(d_i, d_j) = \phi(d_i)^\top \phi(d_j)$. Here $\phi(d_i)$ is the initial HOG-LDA representation of the exemplar
and S is the resulting kernel.
Most of these initial similarities are unreliable (cf. Fig. 4(b)) and, thus, the majority of samples
cannot be properly ranked w.r.t. their similarity to an exemplar di . However, highly similar samples
and those that are far away can be reliably identified as they stand out from the similarity distribution.
Subsequently we utilize these few reliable relationships to build groups of compact cliques.
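A minimal stand-in for this initialization (our sketch, assuming raw feature vectors in feats; real HOG descriptors and the paper's exact regularization would replace these) could look like:

```python
import numpy as np

def lda_whitened_similarities(feats, eps=1e-5):
    """Initial pairwise similarities s_ij = phi(d_i)^T phi(d_j) from
    features whitened with the common covariance, as in HOG-LDA [10]."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + eps * np.eye(feats.shape[1])
    w = np.linalg.cholesky(np.linalg.inv(cov))   # whitening transform
    phi = (feats - mu) @ w
    return phi @ phi.T                           # kernel S of the text
```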
2.2 Compact Cliques
Simply assigning the same label to all the nearest and another label to all the furthest neighbors
of an exemplar is inappropriate. The samples in these groups may be close to $d_i$ (or distant, for
the negative group) but not to one another, due to lacking transitivity. Moreover, mere augmentation of
the exemplar with synthetic data does not add transitivity relations to other samples. Therefore, to
learn within-class similarities we need to restrict the model to compact cliques of samples, so that all
samples in a clique are also mutually close to one another and deserve the same label.
Figure 2: Averaging of the 50 nearest neighbours for a given query frame using similarities obtained
by our approach, Alexnet[13] and HOG-LDA [10].
To build candidate cliques we apply complete-linkage clustering starting at each di to merge the
sample with its local neighborhood, so that all merged samples are mutually similar. Thus, cliques are
compact, differ in size, and may be mutually overlapping. To reduce redundancy, highly overlapping
cliques are subsequently merged by clustering cliques using farthest-neighbor clustering. This
agglomerative grouping is terminated if intra-clique similarity of a cluster is less than half that of its
constituents. Let K be the resulting number of clustered cliques and N the number of samples $d_i$.
Then $C \in \{0,1\}^{K \times N}$ is the resulting assignment matrix of samples to cliques.
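A compact sketch of the clique construction (our simplification: a single complete-linkage pass with cutoff t; the paper's second, farthest-neighbor merging stage is omitted here):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def build_cliques(S, t):
    """Group samples into compact cliques via complete-linkage clustering
    on distances induced by the similarity matrix S; t is the merge cutoff.
    Returns the clique-assignment matrix C in {0,1}^{K x N}."""
    D = S.max() - S                              # similarities -> distances
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method='complete')
    labels = fcluster(Z, t=t, criterion='distance')
    C = np.zeros((labels.max(), len(S)), dtype=int)
    C[labels - 1, np.arange(len(S))] = 1
    return C
```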
2.3 Selecting Batches of Mutually Consistent Cliques
We now have a set of compact cliques that comprise all training data. Thus, one may consider to train
a CNN to assign all samples of a clique with the same label. However, since only the highest/lowest
similarities are reliable, samples in different cliques are not necessarily dissimilar. Forcing them into
different classes can consequently entail incorrect similarities. Therefore, we now seek batches of
mutually distant cliques, so that all samples in a batch can be labeled consistently because they are
either similar (same compact clique) or dissimilar (different, distant clique). Samples with unreliable
similarity then end up in different batches and we train a CNN successively on these batches.
We now formulate an optimization problem that produces a set of consistent batches of cliques. Let
$X \in \{0,1\}^{B \times K}$ be an indicator matrix that assigns K cliques to B batches (the rows $x_b$ of X are the
cliques in batch b) and $S' \in \mathbb{R}^{K \times K}$ be the similarity between cliques. We enforce cliques in the same
batch to be dissimilar by minimizing $\operatorname{tr}(X S' X^\top)$, which is regularized for the diagonal elements of
the matrix $S'$ selected for each batch (see Eq. (1)). Moreover, each batch should maximize sample
coverage, i.e., the number of distinct samples in all cliques of a batch, $\|x_b C\|_p^p$, should be maximal.
Finally, the number of distinct points covered by all batches, $\|\mathbf{1} X C\|_p^p$, should be maximal, so that
the different (potentially overlapping) batches together comprise as many samples as possible. We
select p = 1/16 so that our penalty function roughly approximates the non-linear step function. The
objective of the optimization problem then becomes
$$\min_{X \in \{0,1\}^{B \times K}} \; \operatorname{tr}(X S' X^\top) - \operatorname{tr}(X \operatorname{diag}(S') X^\top) - \lambda_1 \sum_{b=1}^{B} \|x_b C\|_p^p - \lambda_2 \|\mathbf{1} X C\|_p^p \qquad (1)$$
$$\text{s.t.} \quad X \mathbf{1}_K^\top = r \mathbf{1}_B^\top \qquad (2)$$
where r is the desired number of cliques in one batch for CNN training. The number of batches, B,
can be set arbitrarily high to allow for as many rounds of SGD training as desired. If it is too low,
this can be easily spotted as only limited coverage of training data can be achieved in the last term of
Eq. (1). Since X is discrete, the optimization problem (1) is not easier than the Quadratic Assignment
Problem, which is known to be NP-hard [1]. To overcome this issue we relax the binary constraints
and force instead the continuous solution to the boundaries of the feasible range by maximizing the
additional term $\lambda_3 \|X - 0.5\|_F^2$ using the Frobenius norm.
We condition $S'$ to be positive semi-definite by thresholding its eigenvectors and projecting onto the
resulting base. Since also p < 1 the previous objective function is a difference of convex functions
Figure 3: Visual example of a resulting batch of cliques for long jump category of Olympic Sports
dataset. Each clique contains at least 20 samples and is represented as their average.
$u(X) - v(X)$, where
$$u(X) = \operatorname{tr}(X S' X^\top) - \lambda_1 \sum_{b=1}^{B} \|x_b C\|_p^p - \lambda_2 \|\mathbf{1} X C\|_p^p \qquad (3)$$
$$v(X) = \operatorname{tr}(X \operatorname{diag}(S') X^\top) + \lambda_3 \|X - 0.5\|_F^2 \qquad (4)$$
It can be solved using the CCCP Algorithm [24]. In each iteration of CCCP the following convex
optimization problem is solved,
$$\underset{X \in [0,1]^{B \times K}}{\operatorname{argmin}} \; u(X) - \operatorname{vec}(X)^\top \operatorname{vec}(\nabla v(X_t)) \qquad (5)$$
$$\text{s.t.} \quad X \mathbf{1}_K^\top = r \mathbf{1}_B^\top \qquad (6)$$
where $\nabla v(X_t) = 2 X_t \odot (\mathbf{1} \operatorname{diag}(S')^\top) + \lambda_3 (2 X_t - \mathbf{1})$ and $\odot$ denotes the Hadamard product. We solve
this constrained optimization problem by means of the interior-point method. Fig. 3 shows a visual
example of a selected batch of cliques.
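A rough rendition of the relaxed CCCP loop (our sketch: a projected-gradient step stands in for the interior-point solver, the row-sum constraint is only enforced approximately, and all step sizes are assumptions):

```python
import numpy as np

def cccp_batches(S, C, B, r, l1=1.0, l2=1.0, l3=1.0, p=1.0/16,
                 outer=10, inner=200, lr=1e-3, seed=0):
    """Relaxed CCCP for Eqs. (1)-(6): select B batches of r mutually
    distant cliques, given clique similarities S (K x K) and the
    clique-assignment matrix C (K x N)."""
    rng = np.random.default_rng(seed)
    X = rng.random((B, len(S)))
    d = np.diag(S)
    eps = 1e-8
    for _ in range(outer):
        grad_v = 2 * X * d[None, :] + l3 * (2 * X - 1)     # linearize v at X_t
        for _ in range(inner):
            xc = np.maximum(X @ C, eps)                    # per-batch coverage
            s = np.maximum(xc.sum(0, keepdims=True), eps)  # overall coverage
            grad_u = (2 * X @ S
                      - l1 * p * xc ** (p - 1) @ C.T
                      - l2 * p * s ** (p - 1) @ C.T)
            X = np.clip(X - lr * (grad_u - grad_v), 0.0, 1.0)
            X *= r / np.maximum(X.sum(1, keepdims=True), eps)  # Eq. (2), softly
    return (X > 0.5).astype(int)
```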
2.4 CNN Training
We successively train a CNN on the different batches xb obtained using Eq. (1). In each batch,
classifying samples according to the clique they are in then serves as a pretext task for learning
sample similarities. One of the key properties of CNNs is the training using SGD and backpropagation
[14]. The backpropagated gradient is estimated only over a subset (batch) of training samples, so it
depends only on the subset of cliques in xb . Following this observation, the clique categorization
problem is effectively decoupled into a set of smaller sub-tasks: the individual batches of cliques.
During training, we randomly pick a batch b in each iteration and compute the stochastic gradient,
using the softmax loss L(W),
$$L(W) \approx \frac{1}{M} \sum_{j \in x_b} f_W(d_j) + \lambda r(W) \qquad (7)$$
$$V_{t+1} = \mu V_t - \alpha \nabla L(W_t), \qquad W_{t+1} = W_t + V_{t+1} \qquad (8)$$
where M is the SGD batch size, $W_t$ denotes the CNN weights at iteration t, and $V_t$ denotes the
weight update of the previous iteration. Parameters $\alpha$ and $\mu$ denote the learning rate and momentum,
respectively. We then compute similarities between exemplars by simply measuring correlation on
the learned feature representation extracted from the CNN (see Sect. 3.1 for details).
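A compact PyTorch rendition of this per-batch clique classification (our sketch; the paper actually fine-tunes a caffe AlexNet [13], and model, optimizer, and the batch format here are assumptions):

```python
import torch
import torch.nn.functional as F

def train_on_cliques(model, optimizer, batches, images, device='cpu'):
    """One pass of Eq. (7)-(8): each batch is (sample_indices, clique_labels)
    derived from X and C; SGD with momentum lives inside `optimizer`."""
    model.train()
    for idx, labels in batches:
        x = images[idx].to(device)
        y = torch.as_tensor(labels, device=device)
        loss = F.cross_entropy(model(x), y)      # softmax loss over cliques
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model
```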
2.5 Similarity Imputation
By alternating between the different batches, which contain cliques with mutually inconsistent
similarities, the CNN learns a single representation for samples from all batches. In effect, this
consolidates similarities between cliques in different batches. It generalizes from a subset of initial
cliques to new, previously unreliable relations between samples in different batches by utilizing
transitivity relationships implied by the cliques.
After a training round over all batches we impute the similarities using the representation learned
by the CNN. The resulting similarities are more reliable and enable the grouping algorithm from
Sect. 2.2 to find larger cliques of mutually related samples. As there are fewer unreliable similarities,
[Figure 4: (a) eigenvalue-spectrum plot; (b) per-frame similarity scores plotted against frame ranking for one query exemplar.]
Figure 4: (a) Cumulative distribution of the spectrum of the similarity matrices obtained by our
method and the HOG-LDA initialization. (b) Sorted similarities with respect to one exemplar,
where only similarities at the ends of the distribution can be trusted.
more samples can be included in a batch, and overall fewer batches cover the same fraction of
data as before. Consequently, we alternately train the CNN and recompute cliques and batches using
the similarities inferred in the previous iteration of CNN training. This alternating imputation of
similarities and update of the classifier follows the idea of multiple-instance learning and has shown
to converge quickly in less than four iterations.
To evaluate the improvement of the similarities Fig. 4 analyzes the eigenvalue spectrum of S on
the Olympic Sports dataset, see Sect. 3.1. The plot shows the normalized cumulative sum of the
eigenvalues as the function of the number of eigenvectors. Compared to the initialization, transitivity
relations are learned and the approach can generalize from an exemplar to more related samples.
Therefore, the similarity matrix becomes more structured (cf. Fig. 1) and random noisy relations
disappear. As a consequence it can be represented using very few basis vectors. In a further
experiment we evaluate the number of reliable similarities and dissimilarities within and between
cliques per batch. Recall that samples can only be part of the same batch, if their similarity is
reliable. So the goal of similarity learning is to remove transitivity conflicts and reconcile relations
between samples to yield larger batches. We now observe that after the iterative update of similarities,
the average number of similarities and dissimilarities in a batch has increased by a factor of 2.34
compared to the batches at initialization.
3 Experimental Evaluation
We provide a quantitative and qualitative analysis of our exemplar-based approach for unsupervised
similarity learning. For evaluation, three different settings are considered: posture analysis on
Olympic Sports [16], pose estimation on Leeds Sports [12], and object classification on PASCAL
VOC 2007.
3.1 Olympic Sports Dataset: Posture Analysis
The Olympic Sports dataset [16] is a video compilation of different sports competitions. To evaluate
fine-scale pose similarity, for each sports category we had independent annotators manually label 20
positive (similar) and negative (dissimilar) samples for 1033 exemplars. Note that these annotations
are solely used for testing, since we follow an unsupervised approach.
We compare the proposed method with the Exemplar-CNN [4], the two-stream approach of Doersch
et al. [3], 1-sample CNN and NN-CNN models (in a very similar spirit to [20]), Alexnet [13],
Exemplar-SVMs [15], and HOG-LDA [10]. Due to its performance in object and person detection,
we use the approach of [7] to compute person bounding boxes. (i) The evaluation should investigate
the benefit of the unsupervised gathering of batches of cliques for deep learning of exemplars using
standard CNN architectures. Therefore we instantiate our approach by adopting the widely used model
of Krizhevsky et al. [13]. Batches for training the network are obtained by solving the optimization
problem in Eq. (1) with B = 100, K = 100, and r = 20, and fine-tuning the model for 10^5 iterations.
Thereafter we compute similarities using features extracted from layer fc7 in the caffe implementation
of [13]. (ii) Exemplar-CNN is trained using the best performing parameters reported in [4] and the
64c5-128c5-256c5-512f architecture. Then we use the output of fc4 and compute 4-quadrant max
pooling. (iii) Exemplar-SVM was trained on the exemplar frames using the HOG descriptor. The
samples for hard negative mining come from all categories except the one that an exemplar is from.
We performed cross-validation to find an optimal number of negative mining rounds (less than three).
The class weights of the linear SVM were set as C1 = 0.5 and C2 = 0.01. (iv) LDA whitened HOG
Method    HOG-LDA [10]  Ex-SVM [15]  Ex-CNN [4]  Alexnet [13]  1-s CNN  NN-CNN  Doersch et al. [3]  Ours
Avg. AUC  0.58          0.67         0.56        0.65          0.62     0.65    0.58                0.79

Table 1: Avg. AUC for each method on Olympic Sports dataset.
was computed as specified in [10]. (v) The 1-sample CNN was trained by defining a separate class for
each exemplar sample plus a negative category containing all other samples. (vi) In a similar fashion,
the NN-CNN was trained using the exemplar plus 10 nearest neighbours obtained using the whitened
HOG similarities. As the implementation for both CNNs we again used the model of [13], fine-tuned
for 10^5 iterations. Each image in the training set is augmented with 10 transformed versions by
performing random translation, scaling, rotation and color transformation, to improve invariance with
respect to these.
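The per-exemplar evaluation can be summarized as follows (our sketch; feats are e.g. fc7 features and pos_idx/neg_idx index the 20 annotated positives and negatives of one exemplar q):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def exemplar_auc(feats, q, pos_idx, neg_idx):
    """AUC for one exemplar q: rank its annotated positive and negative
    frames by Pearson correlation of the learned features."""
    f = feats - feats.mean(axis=1, keepdims=True)
    f /= np.linalg.norm(f, axis=1, keepdims=True) + 1e-8
    sims = f @ f[q]
    y = [1] * len(pos_idx) + [0] * len(neg_idx)
    return roc_auc_score(y, np.concatenate([sims[pos_idx], sims[neg_idx]]))
```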
Tab. 1 reports the average AUC for each method over all categories of the Olympic Sports dataset.
Our approach obtains a performance improvement of at least 10% w.r.t. the other methods. In
particular, the experiments show that the 1-sample CNN fails to model the positive distribution,
due to the high imbalance between positives and negatives and the resulting biased gradient. In
comparison, additional nearest neighbours to the exemplar (NN-CNN) yield a better model of within-class variability of the exemplar, leading to a 3% performance increase over the 1-sample CNN.
However NN-CNN also sees a large set of negatives, which are partially similar and dissimilar. Due
to this unstructuredness of the negative set, the approach fails to thoroughly capture the fine-grained
similarity structure over the negative samples. To circumvent this issue we compute sets of mutually
distant compact cliques resulting in a relative performance increase of 12% over NN-CNN.
Furthermore, Fig. 1 presents the similarity structures, which the different approaches extract when
analyzing human postures. Fig. 2 further highlights the similarities and the relations between
neighbors. For each method the top 50 nearest neighbours for a randomly chosen exemplar frame in
the Olympic Sports dataset are blended. We can see how the neighbors obtained by our approach
depict a sharper average posture, since they result from compact cliques of mutually similar samples.
Therefore they retain more details and are more similar to the original than in case of the other
methods.
3.2 Leeds Sports Dataset: Pose Estimation
The Leeds Sports Dataset [12] is the most widely used benchmark for pose estimation. For training
we employ 1000 images from the dataset combined with 4000 images from the extended version of
this dataset, where each image is annotated with 14 joint locations. We use the visual similarities
learned by our approach to find frames similar in posture to a query frame. Since our training is
unsupervised, joint labels are not available. At test time we therefore estimate the pose of a query
person by identifying the nearest neighbor from the training set. To compare against the supervised
methods, the pose of the nearest neighbor is then compared against ground-truth.
Now we evaluate our visual similarity learning and the resulting identification of nearest postures.
For comparison, similar postures are also retrieved using HOG-LDA [10] and Alexnet [13]. In
addition, we also report an upper bound on the performance that can be achieved by the nearest
neighbor using ground-truth similarities. Therefore, the nearest training pose for a query is identified
by minimizing the average distance between their ground-truth pose annotation. This is the best one
can do by finding the most similar frame, when not provided with a supervised parametric model (the
performance gap to 100% shows the difference between training and test poses). For completeness,
we compare with a fully supervised state-of-the-art approach for pose estimation [18]. We use the
same experimental settings described in Sect. 3.1.
Tab. 2 reports the Percentage of Correct Parts (PCP) for the different methods. The prediction for a
part is considered correct when its endpoints are within 50% part length of the corresponding ground
truth endpoints. Our approach significantly improves the visual similarities learned using Alexnet
and HOG-LDA. It is note-worthy that even though our approach for estimating the pose is fully
unsupervised it attains a competitive performance when compared to the upper-bound of supervised
ground truth similarities.
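A direct transcription of this criterion (our own code; the (num_joints, 2) array layout and the list of endpoint-id pairs are assumptions):

```python
import numpy as np

def pcp(pred, gt, parts):
    """Percentage of Correct Parts: a part (a, b) of joint ids is correct
    when both predicted endpoints lie within 50% of the part's ground-truth
    length of their true positions; pred and gt have shape (joints, 2)."""
    correct = 0
    for a, b in parts:
        thresh = 0.5 * np.linalg.norm(gt[a] - gt[b])
        if (np.linalg.norm(pred[a] - gt[a]) <= thresh
                and np.linalg.norm(pred[b] - gt[b]) <= thresh):
            correct += 1
    return correct / len(parts)
```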
In addition, Fig. 5 presents success (a) and failure (c) cases of our method. In Fig.5(a) we can see
that the pose is correctly transferred from the nearest neighbor (b) from the training set, resulting in a
PCP score of 0.6 for that particular image. Moreover, Fig.5(c), (d) show that the representation learnt
Method              Torso  Upper legs  Lower legs  Upper arms  Lower arms  Head  Total
Ours                80.1   50.1        45.7        27.2        12.6        45.5  43.5
HOG-LDA [10]        73.7   41.8        39.2        23.2        10.3        42.2  38.4
Alexnet [13]        76.9   47.8        41.8        26.7        11.2        42.4  41.1
Ground Truth        93.7   78.8        74.9        58.7        36.4        72.4  69.2
Pose Machines [18]  93.1   83.6        76.8        68.1        42.2        85.4  72.0

Table 2: PCP measure for each method on Leeds Sports dataset.
Figure 5: Pose prediction results. (a) and (c) are test images with the superimposed ground truth
skeleton depicted in red and the predicted skeleton in green. (b) and (d) are corresponding nearest
neighbours, which were used to transfer pose.
by our method is invariant to front-back flips (matching a person facing away from the camera to one
facing the camera). Since our approach learns pose similarity in an unsupervised manner, it becomes
invariant to changes in appearance as long as the shape is similar, thus explaining this confusion.
Adding additional training data or directly incorporating face detection-based features could resolve
this.
3.3 PASCAL VOC 2007: Object Classification
The previous sections have analyzed the learning of pose similarities. Now we evaluate the learning
of similarities over object categories. Therefore, we classify object bounding boxes of the PASCAL
VOC 2007 dataset. To initialize our model we now use the visual similarities of Wang et al. [22]
without applying any fine tuning on PASCAL and also compare against this approach. Thus, neither
ImageNet nor Pascal VOC labels are utilized. For comparison we evaluate against HOG-LDA [10],
[22], and R-CNN [9]. For our method and HOG-LDA we use the same experimental settings as
described in Sect. 3.1, initializing our method and network with the similarities obtained by [22].
For all methods, the k nearest neighbors are computed using similarities (Pearson correlation) based
on fc6. In Tab. 3 we show the classification accuracies for all approaches for k = 5. Our approach
improves upon the initial similarities of the unsupervised approach of [22] to yield a performance
gain of 3% without requiring any supervision information or fine-tuning on PASCAL.
Method    HOG-LDA  Wang et al. [22]  Wang et al. [22] + Ours  RCNN
Accuracy  0.1180   0.4501            0.4812                   0.6825

Table 3: Classification results for PASCAL VOC 2007
4 Conclusion
We have proposed an approach for unsupervised learning of similarities between large numbers
of exemplars using CNNs. CNN training is made applicable in this context by addressing crucial
problems resulting from the single positive exemplar setup, the imbalance between exemplar and
negatives, and inconsistent labels within SGD batches. Optimization of a single cost function yields
SGD batches of compact, mutually dissimilar cliques of samples. Learning exemplar similarities is
then posed as a categorization task on individual batches. In the experimental evaluation the approach
has shown competitive performance compared to the state-of-the-art, providing significantly finer
similarity structure that is particularly crucial for detailed posture analysis.
This research has been funded in part by the Ministry for Science, Baden-Württemberg and the Heidelberg
Academy of Sciences, Heidelberg, Germany. We are grateful to the NVIDIA corporation for donating a Titan X
GPU.
References
[1] R. E. Burkard, E. Çela, P. M. Pardalos, and L. Pitsoulis. The quadratic assignment problem. In P. M.
Pardalos and D.-Z. Du, editors, Handbook of Combinatorial Optimization. 1998.
[2] C. Doersch, S. Singh, A. Gupta, J. Sivic, and A. Efros. What makes Paris look like Paris? ACM TOG,
31(4), 2012.
[3] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context
prediction. In ICCV, pages 1422-1430, 2015.
[4] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative
unsupervised feature learning with convolutional neural networks. In NIPS, pages 766-774, 2014.
[5] A. Eigenstetter, M. Takami, and B. Ommer. Randomized max-margin compositions for visual recognition.
In CVPR '14.
[6] I. El-Naqa, Y. Yang, N. P. Galatsanos, R. M. Nishikawa, and M. N. Wernick. A similarity learning approach
to content-based image retrieval: application to digital mammography. TMI, 23(10):1233-1244, 2004.
[7] Pedro Felzenszwalb, David McAllester, and Deva Ramanan. A discriminatively trained, multiscale,
deformable part model. In CVPR, pages 1-8. IEEE, 2008.
[8] V. Ferrari, M. Marin-Jimenez, and A. Zisserman. Pose search: retrieving people using their pose. In CVPR,
pages 1-8. IEEE, 2009.
[9] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate
object detection and semantic segmentation. In CVPR, pages 580-587, 2014.
[10] Bharath Hariharan, Jitendra Malik, and Deva Ramanan. Discriminative decorrelation for clustering and
classification. In ECCV, pages 459-472. Springer, 2012.
[11] X. He and S. Gould. An exemplar-based CRF for multi-instance object segmentation. In CVPR. IEEE, 2014.
[12] Sam Johnson and Mark Everingham. Learning effective human pose estimation from inaccurate annotation.
In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional
neural networks. In NIPS, pages 1097-1105, 2012.
[14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541-551, 1989.
[15] Tomasz Malisiewicz, Abhinav Gupta, and Alexei A. Efros. Ensemble of exemplar-SVMs for object detection
and beyond. In ICCV, pages 89-96. IEEE, 2011.
[16] Juan Carlos Niebles, Chih-Wei Chen, and Li Fei-Fei. Modeling temporal structure of decomposable motion
segments for activity classification. In ECCV, pages 392-405. Springer, 2010.
[17] H. Pirsiavash and D. Ramanan. Parsing videos of actions with segmental grammars. In CVPR, 2014.
[18] V. Ramakrishna, D. Munoz, M. Hebert, J. A. Bagnell, and Y. Sheikh. Pose machines: Articulated pose
estimation via inference machines. In ECCV '14. Springer, 2014.
[19] J. Rubio, A. Eigenstetter, and B. Ommer. Generative regularization with latent topics for discriminative
object recognition. PR, 48:3871-3880, 2015.
[20] Nataliya Shapovalova and Greg Mori. Clustered exemplar-SVM: Discovering sub-categories for visual
recognition. In ICIP, pages 93-97. IEEE, 2015.
[21] Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and
Ying Wu. Learning fine-grained image similarity with deep ranking. In CVPR, pages 1386-1393, 2014.
[22] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[23] Hao Xia, Steven C. H. Hoi, Rong Jin, and Peilin Zhao. Online multiple kernel similarity learning for visual
search. TPAMI, 36(3):536-549, 2014.
[24] A. L. Yuille and Anand Rangarajan. The concave-convex procedure (CCCP). Neural Computation,
15(4):915-936, 2003.
[25] S. Zagoruyko and N. Komodakis. Learning to compare image patches via convolutional neural networks.
In CVPR '15.
Deep Learning Models of the Retinal Response to Natural Scenes
Lane T. McIntosh*1, Niru Maheswaranathan*1, Aran Nayebi1, Surya Ganguli2,3, Stephen A. Baccus3
1 Neurosciences PhD Program, 2 Department of Applied Physics, 3 Neurobiology Department
Stanford University
{lmcintosh, nirum, anayebi, sganguli, baccus}@stanford.edu
Abstract
A central challenge in sensory neuroscience is to understand neural computations
and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic
transmission and spiking dynamics present a significant obstacle to the creation of
accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to
natural scenes nearly to within the variability of a cell?s response, and are markedly
more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs:
they are less susceptible to overfitting than their LN counterparts when trained
on small amounts of data, and generalize better when tested on stimuli drawn
from a different distribution (e.g. between natural scenes and white noise). An
examination of the learned CNNs reveals several properties. First, a richer set
of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying
inputs originate from feedforward inhibition, similar to known retinal mechanisms.
Third, the injection of latent noise sources in intermediate layers enables our model
to capture the sub-Poisson spiking variability observed in retinal ganglion cells.
Fourth, augmenting our CNNs with recurrent lateral connections enables them to
capture contrast adaptation as an emergent property of accurately describing retinal
responses to natural scenes. These methods can be readily generalized to other
sensory modalities and stimulus ensembles. Overall, this work demonstrates that
CNNs not only accurately capture sensory circuit responses to natural scenes, but
also can yield information about the circuit's internal structure and function.
1 Introduction
A fundamental goal of sensory neuroscience involves building accurate neural encoding models that
predict the response of a sensory area to a stimulus of interest. These models have been used to shed
light on circuit computations [1, 2, 3, 4], uncover novel mechanisms [5, 6], highlight gaps in our
understanding [7], and quantify theoretical predictions [8, 9].
A commonly used model for retinal responses is a linear-nonlinear (LN) model that combines a linear
spatiotemporal filter with a single static nonlinearity. Although LN models have been used to describe
responses to artificial stimuli such as spatiotemporal white noise [10, 2], they fail to generalize to
natural stimuli [7]. Furthermore, the white noise stimuli used in previous studies are often low
resolution or spatially uniform and therefore fail to differentially activate nonlinear subunits in the
* These authors contributed equally to this work.
Figure 1: A schematic of the model architecture. The stimulus was convolved with 8 learned
spatiotemporal filters whose activations were rectified. The second convolutional layer then projected
the activity of these subunits through spatial filters onto 16 subunit types, whose activity was linearly
combined and passed through a final soft rectifying nonlinearity to yield the predicted response.
retina, potentially simplifying the retinal response to such stimuli [11, 12, 2, 10, 13]. In contrast to
the perceived linearity of the retinal response to coarse stimuli, the retina performs a wide variety of
nonlinear computations including object motion detection [6], adaptation to complex spatiotemporal
patterns [14], encoding spatial structure as spike latency [15], and anticipation of periodic stimuli
[16], to name a few. However it is unclear what role these nonlinear computational mechanisms have
in generating responses to more general natural stimuli.
To better understand the visual code for natural stimuli, we modeled retinal responses to natural image
sequences with convolutional neural networks (CNNs). CNNs have been successful at many pattern
recognition and function approximation tasks [17]. In addition, these models cascade multiple layers
of spatiotemporal filtering and rectification, exactly the elementary computational building blocks
thought to underlie complex functional responses of sensory circuits. Previous work utilized CNNs
to gain insight into the neural computations of inferotemporal cortex [18], but these models have
not been applied to early sensory areas where knowledge of neural circuitry can provide important
validation for such models.
We find that deep neural network models markedly outperform previous models in predicting retinal
responses both for white noise and natural scenes. Moreover, these models generalize better to unseen
stimulus classes, and learn internal features consistent with known retinal properties, including
sub-Poisson variability, feedforward inhibition, and contrast adaptation. Our findings indicate that
CNNs can reveal both neural computations and mechanisms within a multilayered neural circuit
under natural stimulation.
2 Methods
The spiking activity of a population of tiger salamander retinal ganglion cells was recorded in response
to both sequences of natural images jittered with the statistics of eye movements and high resolution
spatiotemporal white noise. Convolutional neural networks were trained to predict ganglion cell
responses to each stimulus class, simultaneously for all cells in the recorded population of a given
retina. For a comparison baseline, we also trained linear-nonlinear models [19] and generalized
linear models (GLMs) with spike history feedback [2]. More details on the stimuli, retinal recordings,
experimental structure, and division of data for training, validation, and testing are given in the
Supplemental Material.
2.1 Architecture and optimization
The convolutional neural network architecture is shown in Figure 1. Model parameters were
optimized to minimize a loss function corresponding to the negative log-likelihood under Poisson
spike generation. Optimization was performed using ADAM [20] via the Keras and Theano software
libraries [21]. The networks were regularized with an ℓ2 weight penalty at each layer and an ℓ1
activity penalty at the final layer, which helped maintain a baseline firing rate near 0 Hz.
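A minimal sketch of this training setup using the modern Keras API (the paper used Keras with Theano; here the 40-frame temporal window is treated as input channels, and the filter counts, filter sizes, and regularization coefficients are illustrative placeholders rather than the authors' exact values):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

n_cells = 16  # hypothetical number of simultaneously recorded ganglion cells

model = tf.keras.Sequential([
    # First convolutional layer: 8 learned spatiotemporal filters.
    layers.Conv2D(8, 17, input_shape=(50, 50, 40), activation='relu',
                  kernel_regularizer=regularizers.l2(1e-3)),
    # Second convolutional layer: 16 subunit types.
    layers.Conv2D(16, 9, activation='relu',
                  kernel_regularizer=regularizers.l2(1e-3)),
    layers.Flatten(),
    # Dense readout with an l1 activity penalty to keep baseline rates near 0 Hz.
    layers.Dense(n_cells, kernel_regularizer=regularizers.l2(1e-3),
                 activity_regularizer=regularizers.l1(1e-3)),
    layers.Activation('softplus'),  # final soft rectification
])

# Keras' built-in 'poisson' loss is the Poisson negative log-likelihood
# (up to a constant): mean(y_pred - y_true * log(y_pred)).
model.compile(optimizer='adam', loss='poisson')
```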
We explored a variety of architectures for the CNN, varying the number of layers, number of filters per
layer, the type of layer (convolutional or dense), and the size of the convolutional filters. Increasing
the number of layers increased prediction accuracy on held-out data up to three layers, after which
performance saturated. One implication of this architecture search is that LN-LN cascade models,
which are equivalent to a 2-layer CNN, would also underperform 3-layer CNN models.
Contrary to the increasingly small filter sizes used by many state-of-the-art object recognition
networks, our networks had better performance using filter sizes in excess of 15x15 checkers. Models
were trained over the course of 100 epochs, with early-stopping guided by a validation set. See
Supplementary Materials for details on the baseline models we used for comparison.
3 Results
We found that convolutional neural networks were substantially better at predicting retinal responses
than either linear-nonlinear (LN) models or generalized linear models (GLMs) on both white noise
and natural scene stimuli (Figure 2).
3.1 Performance
Figure 2: Model performance. (A,B) Correlation coefficients between the data and CNN, GLM or
LN models for white noise and natural scenes. Dotted line indicates a measure of retinal reliability
(See Methods). (C) Receiver Operating Characteristic (ROC) curve for spike events for CNN, GLM
and LN models. (D) Spike rasters of one example retinal ganglion cell responding to 6 repeated
trials of the same randomly selected segment of the natural scenes stimulus (black) compared to the
predictions of the LN (red), GLM (green), or CNN (blue) model with Poisson spike generation used
to generate model rasters. (E) Peristimulus time histogram (PSTH) of the spike rasters in (D).
LN models and GLMs failed to capture retinal responses to natural scenes (Figure 2B) consistent with
previous results [7]. In addition, we also found that LN models only captured a small fraction of the
response to high resolution spatiotemporal white noise, presumably because of the finer resolution that
were used (Figure 2A). In contrast, CNNs approach the reliability of the retina for both white noise
and natural scenes. Using other metrics, including fraction of explained variance, log-likelihood, and
mean squared error, CNNs showed a robust gain in performance over previously described sensory
encoding models.
We investigated how model performance varied as a function of training data, and found that LN
models were more susceptible to overfitting than CNNs, despite having fewer parameters (Figure 4A).
In particular, a CNN model trained using just 25 minutes of data had better held out performance
than an LN model fit using the full 60 minute recording. We expect that both depth and convolutional
filters act as implicit regularizers for CNN models, thereby increasing generalization performance.
3.2 CNN model parameters
Figure 3 shows a visualization of the model parameters learned when a convolutional network is
trained to predict responses to either white noise or natural scenes. We visualized the average feature
represented by a model unit by computing a response-weighted average for that unit. Models trained
on white noise learned first layer features with small (?200 ?m) receptive field widths (top left box
in Figure 3), whereas the natural scene model learns spatiotemporal filters with overall lower spatial
and temporal frequencies. This is likely in part due to the abundance of low spatial frequencies
present in natural images [22]. We see a greater diversity of spatiotemporal features in the second
layer receptive fields compared to the first (bottom panels in Figure 3). Additionally, we see more
diversity in models trained on natural scenes, compared to white noise.
Figure 3: Model parameters visualized by computing a response-weighted average for different
model units, computed for models trained on spatiotemporal white noise stimuli (left) or natural
image sequences (right). Top panel (purple box): visualization of units in the first layer. Each 3D
spatiotemporal receptive field is displayed via a rank-one decomposition consisting of a spatial filter
(top) and temporal kernel (black traces, bottom). Bottom panel (green box): receptive fields for the
second layer units, again visualized using a rank-one decomposition. Natural scenes models required
more active second layer units, displaying a greater diversity of spatiotemporal features. Receptive
fields are cropped to the region of space where the subunits have non-zero sensitivity.
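The response-weighted average behind these visualizations takes only a few lines of NumPy; a sketch, where `stimuli` holds the stimulus history preceding each sample and `activations` holds one unit's rectified activity (both names are hypothetical):

```python
import numpy as np

def response_weighted_average(stimuli, activations):
    """Average stimulus weighted by a unit's response.

    stimuli:     (n_samples, n_frames, height, width) stimulus history
    activations: (n_samples,) non-negative unit activity
    """
    weights = activations / (activations.sum() + 1e-8)
    # Weighted average over samples -> (n_frames, height, width)
    return np.tensordot(weights, stimuli, axes=1)
```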
3.3 Generalization across stimulus distributions
Historically, much of our understanding of the retina comes from fitting models to responses to
artificial stimuli and then generalizing this understanding to cases where the stimulus distribution is
more natural. Due to the large difference between artificial and natural stimulus distributions, it is
unknown what retinal encoding properties generalize to a new stimulus.
Figure 4: CNNs overfit less and generalize better across stimulus class as compared to simpler models.
(A) Held-out performance curves for CNN (~150,000 parameters) and GLM/LN models cropped
around the cell's receptive field (~4,000 parameters) as a function of the amount of training data. (B)
Correlation coefficients between responses to natural scenes and models trained on white noise but
tested on natural scenes. See text for discussion.
We explored what portion of CNN, GLM, and LN model performance is specific to a particular
stimulus distribution (white noise or natural scenes), versus what portion describes characteristics
of the retinal response that generalize to another stimulus class. We found that CNNs trained
on responses to one stimulus class generalized better to a stimulus distribution that the model
was not trained on (Figure 4B). Despite LN models having fewer parameters, they nonetheless
underperform larger convolutional neural network models when predicting responses to stimuli not
drawn from the training distribution. GLMs faired particularly poorly when generalizing to natural
scene responses, likely because changes in mean luminance result in pathological firing rates after
the GLM's exponential nonlinearity. Compared to standard models, CNNs provide a more accurate
description of sensory responses to natural stimuli even when trained on artificial stimuli (Figure 4B).
3.4 Capturing uncertainty of the neural response
In addition to describing the average response to a particular stimulus, an accurate model should also
capture the variability about the mean response. Typical noise models assume i.i.d. Poisson noise
drawn from a deterministic mean firing rate. However, the variability in retinal spiking is actually
sub-Poisson, that is, the variability scales with the mean but then increases sublinearly at higher mean
rates [23, 24]. By training models with injected noise [25], we provided a latent noise source in the
network that models the unobserved internal variability in the retinal population. Surprisingly, the
model learned to shape injected Gaussian noise to qualitatively match the shape of the true retinal
noise distribution, increasing with the mean response but growing sublinearly at higher mean rates
(Figure 5). Notably, this relationship only arises when noise is injected during optimization; injecting
Gaussian noise in a pre-trained network simply produced a linear scaling of the noise variance as a
function of the mean.
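A sketch of this latent-noise training in Keras, using `GaussianNoise` layers, which are active during training and disabled at test time (the layer placement and noise scale are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

sigma = 1.0   # injected noise strength; the sweep above spans 0.1 to 10
n_cells = 16  # hypothetical number of recorded cells

noisy_model = tf.keras.Sequential([
    layers.Conv2D(8, 17, input_shape=(50, 50, 40), activation='relu'),
    layers.GaussianNoise(sigma),  # latent noise source in an intermediate layer
    layers.Conv2D(16, 9, activation='relu'),
    layers.GaussianNoise(sigma),
    layers.Flatten(),
    layers.Dense(n_cells),
    layers.Activation('softplus'),
])
noisy_model.compile(optimizer='adam', loss='poisson')
```

Because `GaussianNoise` is inactive at inference, reproducing the test-time injection described above would require calling the model with `training=True`.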
3.5 Feedforward inhibition shapes temporal responses in the model
To understand how a particular model response arises, we visualized the flow of signals through the
network. One prominent aspect of the difference between CNN and LN model responses is that CNNs
but not LN models captured the precise timing and short duration of firing events. By examining
the responses to the internal units of CNNs in time and averaged over space (Figure 6 A-C), we
found that in both convolutional layers, different units had either positive or negative responses to the
same stimuli, equivalent to excitation and inhibition as found in the retina. Precise timing in CNNs
arises by a timed combination of positive and negative responses, analogous to feedforward inhibition
that is thought to generate precise timing in the retina [26, 27]. To examine network responses in
Figure 5: Training with added noise recovers retinal sub-Poisson noise scaling property. (A) Variance
versus mean spike count for CNNs with various strengths of injected noise (from 0.1 to 10 standard
deviations), as compared to retinal data (black) and a Poisson distribution (dotted red). (B) The same
plot as A but with each curve normalized by the maximum variance. (C) Variance versus mean spike
count for CNN models with noise injection at test time but not during training.
space, we selected a particular time in the experiment and visualized the activation maps in the first
(purple) and second (green) convolutional layers (Figure 6D). A given image is shown decomposed
through multiple parallel channels in this manner. Finally, Figure 6E highlights how the temporal
autocorrelation in the signals at different layers varies. There is a progressive sharpening of the
response, such that by the time it reaches the model output the predicted responses are able to mimic
the statistics of the real firing events (Figure 6C).
3.6 Feedback over long timescales
Retinal dynamics are known to exceed the duration of the filters that we used (400 ms). In particular,
changes in stimulus statistics such as luminance, contrast and spatio-temporal correlations can
generate adaptation lasting seconds to tens of seconds [5, 28, 14]. Therefore, we additionally
explored adding feedback over longer timescales to the convolutional network.
To do this, we added a recurrent neural network (RNN) layer with a history of 10s after the fully
connected layer prior to the output layer. We experimented with different recurrent architectures
(LSTMs [29], GRUs [30], and MUTs [31]) and found that they all had similar performance to the
CNN at predicting natural scene responses. Despite the similar performance, we found that the
recurrent network learned to adapt its response over the timescale of a few seconds in response to
step changes in stimulus contrast (Figure 7). This suggests that RNNs are a promising way forward
to capture dynamical processes such as adaptation over longer timescales in an unbiased, data-driven
manner.
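A sketch of this modification: the convolutional stack is applied to each time step with `TimeDistributed`, and an LSTM integrates the resulting features over the history window (the 100-step window standing in for 10 s of history, and all sizes, are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

n_steps, n_cells = 100, 16  # e.g., 100 bins covering ~10 s of history

rnn_model = tf.keras.Sequential([
    layers.TimeDistributed(layers.Conv2D(8, 17, activation='relu'),
                           input_shape=(n_steps, 50, 50, 40)),
    layers.TimeDistributed(layers.Conv2D(16, 9, activation='relu')),
    layers.TimeDistributed(layers.Flatten()),
    layers.TimeDistributed(layers.Dense(64, activation='relu')),
    layers.LSTM(64),            # recurrence over the history window
    layers.Dense(n_cells),
    layers.Activation('softplus'),
])
rnn_model.compile(optimizer='adam', loss='poisson')
```

Swapping `layers.LSTM` for `layers.GRU` gives the GRU variant; MUT cells are not built into Keras and would require a custom recurrent cell.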
4 Discussion
In the retina, simple models of retinal responses to spatiotemporal white noise have greatly influenced
our understanding of early sensory function. However, surprisingly few studies have addressed
whether or not these simple models can capture responses to natural stimuli. Our work applies
models with rich computational capacity to bear on the problem of understanding natural scene
responses. We find that convolutional neural network (CNN) models, sometimes augmented with
lateral recurrent connections, well exceed the performance of other standard retinal models including
LN and GLMs. In addition, CNNs are better at generalizing both to held-out stimuli and to entirely
different stimulus classes, indicating that they are learning general features of the retinal response.
Moreover, CNNs capture several key features about retinal responses to natural stimuli where LN
models fail. In particular, they capture: (1) the temporal precision of firing events despite employing
filters with slower temporal frequencies, (2) adaptive responses during changing stimulus statistics,
and (3) biologically realistic sub-Poisson variability in retinal responses. In this fashion, this work
provides the first application of deep learning to understanding early sensory systems under natural
conditions.
Figure 6: Visualizing the internal activity of a CNN in response to a natural scene stimulus. (A-C)
Time series of the CNN activity (averaged over space) for the first convolutional layer (8 units, A),
the second convolutional layer (16 units, B), and the final predicted response for an example cell (C,
cyan trace). The recorded (true) response is shown below the model prediction (C, gray trace) for
comparison. (D) Spatial activation of example CNN filters at a particular time point. The selected
stimulus frame (top, grayscale) is represented by parallel pathways encoding spatial information in
the first (purple) and second (green) convolutional layers (a subset of the activation maps is shown
for brevity). (E) Autocorrelation of the temporal activity in (A-C). The correlation in the recorded
firing rates is shown in gray.
Figure 7: Recurrent neural network (RNN) layers capture response features occurring over multiple
seconds. (A) A schematic of how the architecture from Figure 1 was modified to incorporate the
RNN at the last layer of the CNN. (B) Response of an RNN trained on natural scenes, showing a
slowly adapting firing rate in response to a step change in contrast.
To date, modeling efforts in sensory neuroscience have been most useful in the context of carefully
designed parametric stimuli, chosen to illuminate a computation or mechanism of interest [32]. In
part, this is due to the complexities of using generic natural stimuli. It is both difficult to describe
the distribution of natural stimuli mathematically (unlike white or pink noise), and difficult to fit
models to stimuli with non-stationary statistics when those statistics influence response properties.
We believe the approach taken in this paper provides a way forward for understanding general natural
scene responses. We leverage the computational power and flexibility of CNNs to provide us with a
tractable, accurate model that we can then dissect, probe, and analyze to understand what that model
captures about the retinal response. This strategy of casting a wide computational net to capture
neural circuit function and then constraining it to better understand that function will likely be useful
in a variety of neural systems in response to many complex stimuli.
Acknowledgments
The authors would like to thank Ben Poole and EJ Chichilnisky for helpful discussions related to this
work. Thanks also goes to the following institutions for providing funding and hardware grants, LM:
NSF, NVIDIA Titan X Award, NM: NSF, AN and SB: NEI grants, SG: Burroughs Wellcome, Sloan,
McKnight, Simons, James S. McDonnell Foundations and the ONR.
References
[1] Tim Gollisch and Markus Meister. Eye smarter than scientists believed: neural computations in circuits of the retina. Neuron, 65(2):150-164, 2010.
[2] Jonathan W Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M Litke, EJ Chichilnisky, and Eero P Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995-999, 2008.
[3] Nicole C Rust, Odelia Schwartz, J Anthony Movshon, and Eero P Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46(6):945-956, 2005.
[4] David B Kastner and Stephen A Baccus. Coordinated dynamic encoding in the retina using opposing forms of plasticity. Nature Neuroscience, 14(10):1317-1322, 2011.
[5] Stephen A Baccus and Markus Meister. Fast and slow contrast adaptation in retinal circuitry. Neuron, 36(5):909-919, 2002.
[6] Bence P Ölveczky, Stephen A Baccus, and Markus Meister. Segregation of object and background motion in the retina. Nature, 423(6938):401-408, 2003.
[7] Alexander Heitman, Nora Brackbill, Martin Greschner, Alexander Sher, Alan M Litke, and EJ Chichilnisky. Testing pseudo-linear models of responses to natural scenes in primate retina. bioRxiv, page 045336, 2016.
[8] Joseph J Atick and A Norman Redlich. Towards a theory of early visual processing. Neural Computation, 2(3):308-320, 1990.
[9] Xaq Pitkow and Markus Meister. Decorrelation and efficient coding by retinal ganglion cells. Nature Neuroscience, 15(4):628-635, 2012.
[10] Jonathan W Pillow, Liam Paninski, Valerie J Uzzell, Eero P Simoncelli, and EJ Chichilnisky. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. The Journal of Neuroscience, 25(47):11003-11013, 2005.
[11] S Hochstein and RM Shapley. Linear and nonlinear spatial subunits in Y cat retinal ganglion cells. The Journal of Physiology, 262(2):265, 1976.
[12] Tim Gollisch. Features and functions of nonlinear spatial integration by retinal ganglion cells. Journal of Physiology-Paris, 107(5):338-348, 2013.
[13] Adrienne L Fairhall, C Andrew Burlingame, Ramesh Narasimhan, Robert A Harris, Jason L Puchalla, and Michael J Berry. Selectivity for multiple stimulus features in retinal ganglion cells. Journal of Neurophysiology, 96(5):2724-2738, 2006.
[14] Toshihiko Hosoya, Stephen A Baccus, and Markus Meister. Dynamic predictive coding by the retina. Nature, 436(7047):71-77, 2005.
[15] Tim Gollisch and Markus Meister. Rapid neural coding in the retina with relative spike latencies. Science, 319(5866):1108-1111, 2008.
[16] Greg Schwartz, Rob Harris, David Shrom, and Michael J Berry. Detection and prediction of periodic patterns by the retina. Nature Neuroscience, 10(5):552-554, 2007.
[17] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[18] Daniel L Yamins, Ha Hong, Charles Cadieu, and James J DiCarlo. Hierarchical modular optimization of convolutional networks achieves representations similar to macaque IT and human ventral stream. In Advances in Neural Information Processing Systems, pages 3093-3101, 2013.
[19] EJ Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199-213, 2001.
[20] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[21] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. Theano: new features and speed improvements. arXiv preprint arXiv:1211.5590, 2012.
[22] Aapo Hyvärinen, Jarmo Hurri, and Patrick O Hoyer. Natural Image Statistics: A Probabilistic Approach to Early Computational Vision, volume 39. Springer Science & Business Media, 2009.
[23] Michael J Berry, David K Warland, and Markus Meister. The structure and precision of retinal spike trains. Proceedings of the National Academy of Sciences, 94(10):5411-5416, 1997.
[24] Rob R de Ruyter van Steveninck, Geoffrey D Lewen, Steven P Strong, Roland Koberle, and William Bialek. Reproducibility and variability in neural spike trains. Science, 275(5307):1805-1808, 1997.
[25] Ben Poole, Jascha Sohl-Dickstein, and Surya Ganguli. Analyzing noise in autoencoders and deep networks. arXiv preprint arXiv:1406.1831, 2014.
[26] Botond Roska and Frank Werblin. Vertical interactions across ten parallel, stacked representations in the mammalian retina. Nature, 410(6828):583-587, 2001.
[27] Botond Roska and Frank Werblin. Rapid global shifts in natural scenes block spiking in specific ganglion cell types. Nature Neuroscience, 6(6):600-608, 2003.
[28] Peter D Calvert, Victor I Govardovskii, Vadim Y Arshavsky, and Clint L Makino. Two temporal phases of light adaptation in retinal rods. The Journal of General Physiology, 119(2):129-146, 2002.
[29] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[30] Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation (SSST-8), pages 103-111, 2014.
[31] Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. An empirical exploration of recurrent network architectures. Proceedings of the 32nd International Conference on Machine Learning, 37:2342-2350, 2015.
[32] Nicole C Rust and J Anthony Movshon. In praise of artifice. Nature Neuroscience, 8(12):1647-1650, 2005.
On Robustness of Kernel Clustering
Bowei Yan
Department of Statistics and Data Sciences
University of Texas at Austin
Purnamrita Sarkar
Department of Statistics and Data Sciences
University of Texas at Austin
Abstract
Clustering is an important unsupervised learning problem in machine learning
and statistics. Among many existing algorithms, kernel k-means has drawn much
research attention due to its ability to find non-linear cluster boundaries and its
inherent simplicity. There are two main approaches for kernel k-means: SVD
of the kernel matrix and convex relaxations. Despite the attention kernel clustering has received both from theoretical and applied quarters, not much is known
about robustness of the methods. In this paper we first introduce a semidefinite
programming relaxation for the kernel clustering problem, then prove that under a
suitable model specification, both K-SVD and SDP approaches are consistent in
the limit, albeit SDP is strongly consistent, i.e. achieves exact recovery, whereas
K-SVD is weakly consistent, i.e. the fraction of misclassified nodes vanish. Also
the error bounds suggest that SDP is more resilient towards outliers, which we also
demonstrate with experiments.
1 Introduction
Clustering is an important problem which is prevalent in a variety of real world problems. One of the
first and widely applied clustering algorithms is k-means, which was named by James MacQueen [14],
but was proposed by Hugo Steinhaus [21] even before. Despite being half a century old, k-means has
been widely used and analyzed under various settings.
One major drawback of k-means is its inability to separate clusters that are not linearly separable.
This can be alleviated by mapping the data to a high-dimensional feature space and doing the clustering on
top of the feature space [19, 9, 12], which is generally called kernel-based methods. For instance,
the widely-used spectral clustering [20, 16] is an algorithm to calculate top eigenvectors of a kernel
matrix of affinities, followed by a k-means on the top r eigenvectors. The consistency of spectral
clustering is analyzed by [22]. [9] shows that spectral clustering is essentially equivalent to a weighted
version of kernel k-means.
The performance guarantee for clustering is often studied under distributional assumptions; usually
a mixture model with well-separated centers suffices to show consistency. [5] uses a Gaussian
mixture model, and proposes a variant of EM algorithm that provably recovers the center of each
Gaussian when the minimum distance between clusters is greater than some multiple of the square
root of dimension. [2] works with a projection based algorithm and shows the separation needs to
be greater than the operator norm and the Frobenius norm of difference between data matrix and its
corresponding center matrix, up to a constant.
Another popular technique is based on semidefinite relaxations. For example [18] proposes a SDP
relaxation for k-means typed clustering. In a very recent work, [15] shows the effectiveness of SDP
relaxation with k-means clustering for subgaussian mixtures, provided the minimum distance between
centers is greater than the variance of the sub-gaussian times the square of the number of clusters r.
On a related note, SDP relaxations have been shown to be consistent for community detection in
networks [1, 3]. In particular, [3] consider ?inlier? (these are generated from the underlying clustering
model, to be specific, a blockmodel) and "outlier" nodes. The authors show that SDP is weakly
consistent in terms of clustering the inlier nodes as long as the number of outliers m is a vanishing
fraction of the number of nodes.
In contrast, among the numerous work on clustering, not much focus has been on robustness of
different kernel k-means algorithms in presence of arbitrary outliers. [24] illustrates the robustness of
Gaussian kernel based clustering, where no explicit upper bound is given. [8] detects the influential
points in kernel PCA by looking at an influence function. In data mining community, many find
clustering can be used to detect outliers, with often heuristic but effective procedures [17, 10]. On the
other hand, kernel based methods have been shown to be robust for many machine learning tasks.
For supervised learning, [23] shows the robustness of SVM by introducing an outlier indicator and
relaxing the problem to a SDP. [6, 7, 4] develop the robustness for kernel regression. For unsupervised
learning, [13] proposes a robust kernel density estimation.
In this paper we ask the question: how robust are SVD type algorithms and SDP relaxations when
outliers are present. In the process we also present results which compare these two methods. To be
specific, we show that without outliers, SVD is weakly consistent, i.e. the fraction of misclassified
nodes vanishes with high probability, whereas SDP is strongly consistent, i.e. the number of
misclassified nodes vanishes with high probability. We also prove that both methods are robust to
arbitrary outliers as long as the number of outliers is growing at a slower rate than the number of
nodes. Surprisingly our results also indicate that SDP relaxations are more resilient to outliers than
K-SVD methods. The paper is organized as follows. In Section 2 we set up the problem and the data
generating model. We present the main results in Section 3. Proof sketch and more technical details
are introduced in Section 4. Numerical experiments in Section 5 illustrate and support our theoretical
analysis. Additional analyses are included in the extended version of this paper.¹
2 Problem Setup
We denote by Y = [Y_1, . . . , Y_n]^T the n × p data matrix. Among the n observations, m outliers are distributed arbitrarily, and the n − m inliers form r equal-sized clusters, denoted C_1, . . . , C_r. Let us denote the index set of inliers by I and the index set of outliers by O, with I ∪ O = [n]. Also denote R = {(i, j) : i ∈ O or j ∈ O}.
The problem is to recover the true and unknown data partition given by a membership matrix Z ∈ {0, 1}^{n×r}, where Z_ik = 1 if i belongs to the k-th cluster and 0 otherwise. For convenience we assume the outliers are also arbitrarily and equally assigned to the r clusters, so that each extended cluster, denoted C̃_i, i ∈ [r], has exactly n/r points. A ground-truth clustering matrix X_0 ∈ R^{n×n} is obtained as X_0 = ZZ^T; it can be seen that X_0(i, j) = 1 if i, j ∈ I and i, j belong to the same cluster, and X_0(i, j) = 0 otherwise.
For the inliers, we assume the following mixture distribution model.
Conditioned on Z_ia = 1:  Y_i = μ_a + W_i/√p,  E[W_i] = 0,  Cov[W_i] = σ_a² I_p,   (1)
where the W_i are independent sub-gaussian random vectors.
We treat Y as a low-dimensional signal hidden in high-dimensional noise. More concretely, μ_a is sparse and ‖μ_a‖_0 does not depend on n or p; as n → ∞, p → ∞. The W_i, i ∈ [n], are independent.
For simplicity, we assume the noise is isotropic and the covariance only depends on the cluster. The
sub-gaussian assumption is non-parametric and includes most of the commonly used distributions,
such as Gaussian and bounded distributions. We include some background materials on sub-gaussian
random variables in Appendix A. This general setting for inliers is common and also motivated by
many practical problems where the data lies on a low dimensional manifold, but is obscured by
high-dimensional noise [11].
We use the kernel matrix based on Euclidean distances between covariates. Our analysis can be
extended to inner product kernels as well. From now onwards, we will assume that the function
generating the kernel is bounded and Lipschitz.
¹ https://arxiv.org/abs/1606.01869
Assumption 1. For n observations Y_1, . . . , Y_n, the kernel matrix (sometimes also called the Gram matrix) K is induced by K(i, j) = f(‖Y_i − Y_j‖²), where f satisfies |f(x)| ≤ 1 for all x, and there exists C_0 > 0 such that sup_{x,y} |f(x) − f(y)| ≤ C_0 |x − y|.
A widely used example that satisfies the above condition is the Gaussian kernel. For simplicity, we will without loss of generality assume K(x, y) = f(‖x − y‖²) = exp(−η‖x − y‖²).
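A small NumPy sketch of this generative model and the induced Gaussian kernel matrix (the cluster count, dimensions, η, and the Gaussian choice of sub-gaussian noise are all illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def simulate_mixture(mu, sigma, n_per_cluster, rng):
    """Draw Y_i = mu_a + W_i / sqrt(p) with isotropic Gaussian noise W_i."""
    r, p = mu.shape
    Y, labels = [], []
    for a in range(r):
        W = rng.normal(0.0, sigma[a], size=(n_per_cluster, p))
        Y.append(mu[a] + W / np.sqrt(p))
        labels += [a] * n_per_cluster
    return np.vstack(Y), np.array(labels)

def gaussian_kernel(Y, eta):
    """K(i, j) = exp(-eta * ||Y_i - Y_j||^2); the diagonal is f(0) = 1."""
    return np.exp(-eta * squareform(pdist(Y, metric='sqeuclidean')))

rng = np.random.default_rng(0)
r, p = 3, 200
mu = np.zeros((r, p))
mu[np.arange(r), np.arange(r)] = 1.0   # sparse, fixed-support centers
Y, z = simulate_mixture(mu, sigma=np.ones(r), n_per_cluster=50, rng=rng)
K = gaussian_kernel(Y, eta=0.5)
```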
For the asymptotic analysis, we use the following standard notation for rates of convergence. T(n) is O(f(n)) iff for some constant c and n_0, T(n) ≤ c f(n) for all n ≥ n_0; T(n) is Ω(f(n)) if for some constant c and n_0, T(n) ≥ c f(n) for all n ≥ n_0; T(n) is Θ(f(n)) if T(n) is O(f(n)) and Ω(f(n)); T(n) is o(f(n)) if T(n) is O(f(n)) but not Θ(f(n)). T(n) is o_P(f(n)) (or O_P(f(n))) if it is o(f(n)) (or O(f(n))) with high probability.
Several matrix norms are considered in this manuscript. Assume M ∈ R^{n×n}; the ℓ_1 and ℓ_∞ norms are defined the same as the vector ℓ_1 and ℓ_∞ norms: ‖M‖_1 := Σ_{ij} |M_ij| and ‖M‖_∞ := max_{i,j} |M_ij|. For two matrices M, Q ∈ R^{m×n}, their inner product is ⟨M, Q⟩ = trace(M^T Q). The operator norm ‖M‖ is simply the largest singular value of M, which equals the largest eigenvalue for a symmetric matrix. Throughout the manuscript, we use 1_n to represent the all-one n × 1 vector and E_n, E_{n,k} to represent the all-one matrices of size n × n and n × k. The subscript will be dropped when it is clear from context.
2.1 Two kernel clustering algorithms
Kernel clustering algorithms can be broadly divided into two categories; one is based on semidefinite
relaxation of the k-means objective function and the other is eigen-decomposition based, like kernel
PCA, spectral clustering, etc. In this section we describe these two settings.
SDP relaxation for kernel clustering It is well known [9] that kernel k-means could be achieved
by maximizing trace(Z^T KZ), where Z is the n × r matrix of cluster memberships. However, due to the non-convexity of the constraints, the problem is NP-hard, so many convex relaxations have been proposed in the literature. In this paper, we propose the following semidefinite programming relaxation.
The same relaxation has been used in stochastic block models [1].
max_X  trace(KX)        (SDP-1)
s.t.  X ⪰ 0,  X ≥ 0,  X1 = (n/r)1,  diag(X) = 1
The clustering procedure is listed in Algorithm 1.
Algorithm 1 SDP relaxation for kernel clustering
Require: Observations Y_1, . . . , Y_n, kernel function f.
1: Compute the kernel matrix K, where K(i, j) = f(‖Y_i − Y_j‖²);
2: Solve SDP-1 and let X̂ be the optimal solution;
3: Do k-means on the r leading eigenvectors U of X̂.
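SDP-1 translates almost verbatim into CVXPY; a sketch of Algorithm 1 (the solver choice is illustrative, and generic SDP solvers only handle modest n):

```python
import cvxpy as cp
import numpy as np
from sklearn.cluster import KMeans

def kernel_sdp_cluster(K, r):
    """Algorithm 1: solve SDP-1, then k-means on the leading eigenvectors."""
    n = K.shape[0]
    X = cp.Variable((n, n), PSD=True)           # X is positive semidefinite
    constraints = [X >= 0,                      # entrywise nonnegativity
                   cp.sum(X, axis=1) == n / r,  # X 1 = (n/r) 1
                   cp.diag(X) == 1]
    cp.Problem(cp.Maximize(cp.trace(K @ X)), constraints).solve(solver=cp.SCS)
    # Round the relaxed solution: k-means on the r leading eigenvectors.
    _, vecs = np.linalg.eigh(X.value)
    U = vecs[:, -r:]                            # eigenvectors of the r largest eigenvalues
    return KMeans(n_clusters=r, n_init=10).fit_predict(U)
```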
Kernel singular value decomposition Kernel singular value decomposition (K-SVD) is a spectral clustering approach: one first performs an SVD of the kernel matrix, then runs k-means on the first r singular vectors. Variants include K-PCA, which uses singular vectors of the centered kernel matrix, and spectral clustering, which uses singular vectors of the normalized kernel matrix. The detailed
algorithm is shown in Algorithm 2.
3 Main results
In this section we summarize our main results. In this paper we analyze SDP relaxation of kernel
k-means and K-SVD type methods. Our main contribution is two-fold. First, we show that SDP
relaxation produces strongly consistent results, i.e. the number of misclustered nodes goes to zero
with high probability when there are no outliers, which means exact recovery without rounding.
Algorithm 2 K-SVD (K-PCA, spectral clustering)
Require: Observations Y_1, . . . , Y_n, kernel function f.
1: Compute the kernel matrix K, where K(i, j) = f(‖Y_i − Y_j‖²);
2: if K-PCA then
3:   K = K − K11^T/n − 11^T K/n + 11^T K 11^T/n²;
4: else if spectral clustering then
5:   K = D^{−1/2} K D^{−1/2} where D = diag(K 1_n);
6: end if
7: Do k-means on the r leading singular vectors V of K.
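A NumPy/scikit-learn sketch of Algorithm 2 and its two variants, mirroring the pseudocode above:

```python
import numpy as np
from sklearn.cluster import KMeans

def ksvd_cluster(K, r, variant='ksvd'):
    """Algorithm 2: k-means on the r leading singular vectors of K."""
    n = K.shape[0]
    if variant == 'kpca':        # center the kernel matrix
        J = np.ones((n, n)) / n
        K = K - J @ K - K @ J + J @ K @ J
    elif variant == 'spectral':  # symmetric degree normalization
        d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
        K = d_inv_sqrt[:, None] * K * d_inv_sqrt[None, :]
    U, _, _ = np.linalg.svd(K)   # columns ordered by decreasing singular value
    return KMeans(n_clusters=r, n_init=10).fit_predict(U[:, :r])
```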
On the other hand, K-SVD is weakly consistent, i.e. the fraction of misclassified nodes goes to zero when there are
no outliers. In presence of outliers, we see an interesting dichotomy in the behaviors of these two
methods. Both can be proven to be weakly consistent in terms of misclassification error. However,
SDP is more resilient to the effect of outliers than K-SVD, if the number of clusters grows or if the
separation between the cluster means decays.
Our analysis is organized as follows. First we present a result on the concentration of kernel matrices around their population counterpart. The population kernel matrix for inliers is blockwise constant with r blocks (except the diagonal, which is one). Next we prove that as n increases, the optimum X̂ of SDP-1 converges strongly to X_0 when there are no outliers, and weakly if the number of outliers grows slowly with n. Then we show that the mis-clustering error of the clustering returned by Algorithm 1 goes to zero with probability tending to one as n → ∞ when there are no outliers. Finally, when the
number of outliers is growing slowly with n, the fraction of mis-clustered nodes from algorithms 1
and 2 converges to zero.
We will start with the concentration of kernel matrices around their population counterpart. We show that under our data model (1), the empirical kernel matrix with the Gaussian kernel restricted to the inliers concentrates around a "population" matrix K̃, and the ℓ_∞ norm of K_f^{I×I} − K̃_f^{I×I} goes to zero at the rate O(√(log p/p)).
Theorem 1. Let d_kℓ = ‖μ_k − μ_ℓ‖. For i ∈ C̃_k, j ∈ C̃_ℓ, define
K̃_f(i, j) = f(d_kℓ² + σ_k² + σ_ℓ²)  if i ≠ j,  and  K̃_f(i, j) = f(0)  if i = j.
Then there exists a constant ρ > 0 such that P(‖K_f^{I×I} − K̃_f^{I×I}‖_∞ ≥ c√(log p/p)) ≤ n² p^{−ρc²}.
Remark 1. Setting c = √(3 log n/(ρ log p)), there exists a constant θ > 0 such that
P(‖K^{I×I} − K̃^{I×I}‖_∞ ≥ √(3 log n/(θp))) ≤ 1/n.
The error probability goes to zero for a suitably chosen constant as long as p grows faster than log n.
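The rate in Theorem 1 is easy to probe numerically; a sketch that reuses the hypothetical `simulate_mixture` and `gaussian_kernel` helpers from Section 2 and builds K̃ from its population formula:

```python
import numpy as np

def population_kernel(mu, sigma, labels, eta):
    """K_tilde(i, j) = f(d_kl^2 + sigma_k^2 + sigma_l^2) off-diagonal, f(0) = 1."""
    d2 = ((mu[labels][:, None, :] - mu[labels][None, :, :]) ** 2).sum(-1)
    s2 = np.add.outer(np.square(sigma)[labels], np.square(sigma)[labels])
    K_tilde = np.exp(-eta * (d2 + s2))
    np.fill_diagonal(K_tilde, 1.0)
    return K_tilde

rng = np.random.default_rng(0)
for p in [100, 400, 1600]:
    mu = np.zeros((3, p))
    mu[np.arange(3), np.arange(3)] = 1.0
    Y, z = simulate_mixture(mu, sigma=np.ones(3), n_per_cluster=50, rng=rng)
    err = np.abs(gaussian_kernel(Y, 0.5) - population_kernel(mu, np.ones(3), z, 0.5))
    np.fill_diagonal(err, 0.0)
    print(p, err.max(), np.sqrt(np.log(p) / p))  # sup-norm error vs. the rate
```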
While our analysis is inspired by [11], there are two main differences. First, we have a mixture model where the population kernel is blockwise constant. Second, we obtain √(log p/p) rates of convergence by carefully bounding the tail probabilities. In order to attain this we further assume that the noise is sub-gaussian and isotropic. From now on we will drop the subscript f and refer to the kernel matrix as K.
By definition, K̃ is blockwise constant with r unique rows (except the diagonal elements, which are ones). An important property of K̃ is that λ_r − λ_{r+1} (where λ_i is the i-th largest eigenvalue of K̃) will be Θ(nλ_min(B)/r); here B is the r × r Gaussian kernel matrix generated by the centers.
Lemma 1. If the scale parameter in Gaussian kernel is non-zero, and none of the clusters shares a
same center, let B be the r ? r matrix where Bk` = f (k?k ? ?` k), then
? ? ?r+1 (K)
? ? n ?min (B) ? min f (? 2 ) 2 ? 2 max(1 ? f (2? 2 )) = ?(n?min (B)/r)
?r (K)
k
k
k
k
r
4
Now we present our result on the consistency of SDP-1. To this end, we will upper bound ‖X̂ − X_0‖_1, where X̂ is the optimum returned by SDP-1 and X_0 is the true clustering matrix. We first present a lemma which is crucial to the proof of the theorem. Before presenting it, we define
Δ_kℓ := f(2σ_k²) − f(d_kℓ² + σ_k² + σ_ℓ²);   Δ_min := min_{ℓ≠k} Δ_kℓ.   (2)
The first quantity Δ_kℓ measures the separation between clusters k and ℓ. The second quantity measures the smallest separation possible. We will assume that Δ_min is positive. This is very similar to asymptotic network analysis, where strong assortativity is often assumed. Our results show that the consistency of clustering deteriorates as Δ_min decreases.
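For the Gaussian kernel f(x) = exp(−ηx), both quantities in (2) are straightforward to evaluate; a sketch:

```python
import numpy as np

def delta_min(mu, sigma, eta):
    """Delta_kl = f(2 sigma_k^2) - f(d_kl^2 + sigma_k^2 + sigma_l^2), minimized over k != l."""
    d2 = ((mu[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    s2 = np.add.outer(np.square(sigma), np.square(sigma))
    delta = np.exp(-2 * eta * np.square(sigma))[:, None] - np.exp(-eta * (d2 + s2))
    off_diag = ~np.eye(len(mu), dtype=bool)
    return delta[off_diag].min()
```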
Lemma 2. Let X̂ be the solution to (SDP-1); then
‖X_0 − X̂‖_1 ≤ 2⟨K − K̃, X̂ − X_0⟩ / Δ_min.   (3)
Combining the above with the concentration of K from Theorem 1 we have the following result:
Theorem 2. When d_kℓ² > |σ_k² − σ_ℓ²| for all k ≠ ℓ, and Δ_min = ω(√(log p/p)), then for some absolute constant c > 0,
‖X_0 − X̂‖_1 ≤ max{ o_P(1), o_P(mn/(rΔ_min)) }.
Remark 2. When there is no outlier in the data, i.e., m = 0, X̂ = X_0 with high probability and SDP-1 is strongly consistent without rounding. When m > 0, the right-hand side of the inequality is dominated by mn/r. Note that ‖X_0‖_1 = n²/r; therefore, after suitable normalization, the error rate goes to zero at rate O(m/(nΔ_min)) as n → ∞.
Now we will present the mis-clustering error rates of Algorithms 1 and 2. Although X̂ is strongly consistent in the absence of outliers, in practice one often wants the labeling in addition to the clustering matrix. Therefore it is usually necessary to carry out the final eigen-decomposition step in Algorithm 1. Since X_0 is the clustering matrix, its principal eigenvectors are blockwise constant. In order to show small mis-clustering error one needs to show that the eigenvectors of X̂ are converging (modulo a rotation) to those of X_0. This is achieved by a careful application of the Davis-Kahan theorem, a detailed discussion of which we defer to the analysis in Section 4.
The Davis-Kahan theorem lets one bound the deviation of the r principal eigenvectors Û of a Hermitian matrix M̂ from the r principal eigenvectors U of M as ‖Û − UO‖_F ≤ 2^{3/2} ‖M̂ − M‖_F / (λ_r − λ_{r+1}) [25], where λ_r is the r-th largest eigenvalue of M and O is the optimal rotation matrix. For a complete statement of the theorem see Appendix F.
Applying the result to X_0 and K̃ provides us with two different upper bounds on the distance between leading eigenvectors. We will see in Theorem 3 that the eigengaps arising in the two algorithms differ, which results in different upper bounds on the number of misclustered nodes. Since the Davis-Kahan bounds are tight up to a constant [25], despite being upper bounds, this indicates that Algorithm 1 is less sensitive to the separation between cluster means than Algorithm 2.
Once the eigenvector deviation is established, we present explicit bounds on mis-clustering error for
both methods in the following theorem. K-means assigns each row of Û (the input eigenvectors, of K or X̂) to one of r clusters. Define c_1, . . . , c_n ∈ R^r such that c_i is the centroid corresponding to the i-th row of Û. Similarly, for the population eigenvectors U (top r eigenvectors of K̃ or X_0), we define the population centroids as (Zν)_i for some ν ∈ R^{r×r}. Recall that we construct Z such that the outliers are equally and arbitrarily divided amongst the r clusters. We show that when the empirical centroids are close to the population centroids after a rotation, the node is correctly clustered. We give a general definition of a superset of the misclustered nodes applicable both to K-SVD and SDP:
M = { i : ‖c_i − Z_i ν O‖ ≥ 1/√(2n/r) }.   (4)
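Given empirical centroids c_i and population centroids Z_iν, the set M can be evaluated with an orthogonal Procrustes alignment; a sketch (the input arrays are hypothetical stand-ins for the quantities defined above):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def misclustered_set(centroids, pop_centroids, r):
    """Indices i with ||c_i - (Z nu O)_i|| >= 1/sqrt(2n/r) after optimal rotation O."""
    n = centroids.shape[0]
    O, _ = orthogonal_procrustes(pop_centroids, centroids)  # best rotation of Z nu
    dist = np.linalg.norm(centroids - pop_centroids @ O, axis=1)
    return np.where(dist >= 1.0 / np.sqrt(2 * n / r))[0]
```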
Theorem 3. Let M_sdp and M_ksvd be defined as in Eq. (4), where the c_i are generated from Algorithms 1 and 2 respectively. Let λ_r be the r-th largest eigenvalue of K̃. We have:
|M_sdp| ≤ max{ o_P(1), O_P(m/Δ_min) },
|M_ksvd| ≤ O_P( max{ mn²/(r(λ_r − λ_{r+1})²), n³ log p/(rp(λ_r − λ_{r+1})²) } ).
Remark 3. Getting a bound on λ_r in terms of Δ_min for general blockwise constant matrices is difficult. But as shown in Lemma 1, the eigengap is Θ((n/r)λ_min(B)). Plugging this back in, we have
|M_ksvd| ≤ max{ O_P(mr/λ_min(B)²), O_P(nr log p/(p λ_min(B)²)) }.
In some simple cases one can get explicit bounds on λ_r, and we have the following.
Corollary 1. Consider the special case when all clusters share the same variance σ² and the d_kℓ are identical for all pairs of clusters. The number of misclustered nodes for K-SVD is upper bounded by
|M_ksvd| ≤ max{ O_P(mr/Δ_min²), O_P(nr log p/(p Δ_min²)) }.   (5)
Corollary 1 is proved in Appendix H.
Remark 4. The situation may happen if cluster center for a is of the form cea where ea is a
binary vector with ea (i) = 1a=i . In this case,
algorithm is weakly
the q
consistent (fraction of
p
r log p
misclassified nodes vanish) when ?min = ? max{
mr/n} . Compared to |Msdp |,
p ,
r
|Mksvd | an additional factor of ?min
. With same m, n, the algorithm has worse upper bound of errors
and is more sensitive to ?min , which depends both on the data distribution and the scale parameter of
the kernel. The proposed SDP can be seen as a denoising procedure which enlarges the separation.
It succeeds as long as the denoising is faithful, which requires much weaker assumptions.
4
Proof of the main results
In this section, we show the proof sketch of the main theorems. The full proofs are deferred to
supplementary materials.
4.1
Proof of Theorem 1
? restricted
In Theorem 1, we show that if the data distribution
the `? norm of K ? K
q is sub-gaussian,
on the inlier nodes concentrates with rate O
log p
p
.
Proof sketch. With the Lipschitz condition, it suffices to show kYi ?Yj k22 concentrates to d2k` +?k2 +?`2 .
(W ?W )T
kW ?W k2
i
j 2
To do this, we decompose kYi ? Yj k22 = k?k ? ?` k22 + 2 i ?p j (?k ? ?` ) +
. Now
p
2
2
it suffices to show the third term concentrates to ?k + ?` and the second term concentrates around
0. Note the fact that Wi ? Wj is sub-gaussian, its square is sub-exponential. With sub-gaussian tail
bound and a Bernstein type inequality for sub-exponential random variables, we prove the result.
With the elementwise bound, the Frobenius norm of the matrix difference is just one more factor of n.
p
2
? I?I kF ? cn log p/p.
Corollary 2. With probability at least 1 ? n2 p??c , kK I?I ? K
4.2
Proof of Theorem 2
Lemma 2 is proved in Appendix D, where we make use of the optimality condition and the constraints
in SDP-1. Equipped with Lemma 2 we?re ready to prove Theorem 2.
Proof sketch. In the outlier-free ideal scenario, Lemma 2 along with the dualtiy of `1 and `? norms
? ? kX?X
?
0 k1
? ? X0 k1 ? 2kK?Kk
we get kX
. Then by Theorem 1, we get the strong consistency result.
?min
When outliers are present, we have to derive a slightly different upper bound. The main idea is to
divide the matrices into two parts, one corresponding to the rows and columns of inliers, and the other
corresponding to those of the outliers. Now by the concentration result (Theorem 1) on K along
? sums to
? are bounded by 1; and the rows of X
with the fact that both the kernel function and X0 , X
n/r because of the constraint in SDP-1, we obtain the proof. The full proof is deferred to Appendix
E.
6
4.3
Proof of Theorem 3
? is to the ground truth,
Although Theorem 2 provides insights on how close the recovered matrix X
it remains unclear how the final clustering result behaves. In this section, we bound the number of
? and X0 . We start by presenting a
misclassified points by bounding the distance in eigenvectors of X
lemma that provides a bound for k-means step.
K-means is a non-convex procedure and is usually hard to analyze directly. However, when the
centroids are well-separated, it is possible to come up with sufficient conditions for a node to be
correctly clustered. When the set of misclustered nodes is defined as Eq. 4, the cardinality of M is
directly upper bounded by the distance between eigenvectors. To be explicit, we have the following
? denotes top r eigenvectors of K for K-SVD and X
? for SDP. U denotes the top r
lemma. Here U
?
eigenvectors of K for K-SVD and X0 for SDP. O denotes the corresponding rotation that aligns the
empirical eigenvectors to their population counterpart.
? ? U Ok2 .
Lemma 3. M is defined as Eq. (4), then |M| ? 8n kU
F
r
Lemma 3 is proved in Appendix G.
Analysis of |Msdp |: In order to get the deviation in eigenvectors, note the rth eigenvalue of X0 is
? be eigenvectors of X0 . By
n/r, and r + 1th is 0, let U ? Rn?r be top r eigenvectors of X and U
applying Davis-Kahan Theorem, we have
q
r
? ? X0 k1
3/2 ?
8kX
mr
(6)
? ? U OkF ? 2 kX ? X0 kF ?
= OP
?O, kU
n/r
n/r
n?min
Applying Lemma 3,
8n
|Msdp | ?
r
? ? X0 kF
23/2 kX
n/r
!2
cn
?
r
r
mr
n?min
2
? OP
m
?min
Analysis of |Mksvd |: In the outlier-present kernel scenario, by Corollary 2,
p
?
? F ? kK I?I ? K
? I?I kF + kK R ? K
? R kF = OP (n log p/p) + OP ( mn)
kK ? Kk
? from Lemma 1, let U
Again by Davis-Kahan theorem, and the eigengap between ?r and ?r+1 of K
?
?
be the matrix with rows as the top r eigenvectors of K. Let U be its empirical counterpart.
!
p
?
3/2
? F
mn,
n
log
p/p}
max{
2
kK
?
Kk
? ? U OkF ?
?O, kU
? OP
(7)
?r ? ?r+1
?r ? ?r+1
Now we apply Lemma 3 and get the upper bound for number of misclustered nodes for K-SVD.
!2
p
?
8n 23/2 C max{ mn, n log p/p}
|Mksvd | ?
? ? ?r+1 (K)
?
r
?r (K)
( ?
)
2
Cn
mn
n2 log p
?
max
,
r
?r ? ?r+1
p(?r ? ?r+1 )
mn2
n3 log p
?OP max
,
r(?r ? ?r+1 )2 rp(?r ? ?r+1 )2
5
Experiments
In this section, we collect some numerical results. For implementation of the proposed SDP, we use
Alternating Direction Method of Multipliers that is used in [1]. In each synthetic experiment, we
generate n ? m inliers from r equal-sized clusters. The centers of the clusters are sparse and hidden
in a p-dim noise. For each generated data set, we add in m observations of outliers. To capture the
7
(a) # clusters
(b) # outliers
(c) Separation
Figure 1: Performance vs parameters: (a) Inlier accuracy vs number of cluster (n = p = 1500, m =
10, d2 = 0.125, ? = 1); (b) Inlier accuracy vs number of outliers (n = 1000, r = 5, d2 = 0.02, ? =
1, p = 500); (c) Inlier accuracy vs separation (n = 1000, r = 5, m = 50, ? = 1, p = 1000).
arbitrary nature of the outliers, we generate half the outliers by a random Gaussian with large variance
(100 times of the signal variance), and the other half by a uniform distribution that scatters across all
clusters. We compare Algorithm 1 with 1) k-means by Lloyd?s algorithms; 2) kernel SVD and 3)
kernel PCA by [19].
The evaluating metric is accuracy of inliers, i.e., number of correctly clustered nodes divided by
the total number of inliers. To avoid the identification problem, we compare all permutations of the
predicted labels to ground truth labels and record the best accuracy. Each set of parameter is run 10
replicates and the mean accuracy and standard deviation (shown as error bars) are reported. For all
k-means used in the experiments we do 10 restarts and choose the one with smallest k-means loss.
For each experiment, we change only one parameter and fix all the others. Figure 1 shows how
the performance of different clustering algorithms change when (a) number of clusters, (b) number
of outliers, (c) minimum distance between clusters, increase. The value of all parameters used are
specified in the caption of the figure. Panel (a) shows the inlier accuracy for various methods as
we increase number of clusters. It can be seen that with r growing, the performance of all methods
? which remains stable as the
deteriorate except for the SDP. We also examine the `1 norm of X0 ? X,
number of clusters increases. Panel (b) describes the trend with respect to number of outliers. The
accuracy of SDP on inliers is almost unaffected by the number of outliers while other methods suffer
with large m. Panel (c) compares the performance as the minimum distance between cluster centers
changes. Both SDP and K-SVD are consistent as the distance increases. Compared with K-SVD,
SDP achieves consistency faster and variates less across random runs, which matches the analysis
given in Section 3.
6
Conclusion
In this paper, we investigate the consistency and robustness of two kernel-based clustering algorithms.
We propose a semidefinite programming relaxation which is shown to be strongly consistent without
outliers and weakly consistent in presence of arbitrary outliers. We also show that K-SVD is also
weakly consistent in that the misclustering rate is going to zero as the observation grows and the
outliers are of a small fraction of inliers. By comparing two methods, we conclude that although both
are robust to outliers, the proposed SDP is less sensitive to the minimum separation between clusters.
The experimental result also supports the theoretical analysis.
8
References
[1] A. A. Amini and E. Levina. On semidefinite relaxations for the block model. arXiv preprint
arXiv:1406.5647, 2014.
[2] P. Awasthi and O. Sheffet. Improved spectral-norm bounds for clustering. In Approximation, Randomization,
and Combinatorial Optimization. Algorithms and Techniques, pages 37?49. Springer, 2012.
[3] T. T. Cai, X. Li, et al. Robust and computationally feasible community detection in the presence of arbitrary
outlier nodes. The Annals of Statistics, 43(3):1027?1059, 2015.
[4] A. Christmann and I. Steinwart. Consistency and robustness of kernel-based regression in convex risk
minimization. Bernoulli, pages 799?819, 2007.
[5] S. Dasgupta and L. Schulman. A probabilistic analysis of em for mixtures of separated, spherical gaussians.
The Journal of Machine Learning Research, 8:203?226, 2007.
[6] K. De Brabanter, K. Pelckmans, J. De Brabanter, M. Debruyne, J. A. Suykens, M. Hubert, and B. De Moor.
Robustness of kernel based regression: a comparison of iterative weighting schemes. In Artificial Neural
Networks?ICANN 2009, pages 100?110. Springer, 2009.
[7] M. Debruyne, M. Hubert, and J. A. Suykens. Model selection in kernel based regression using the influence
function. Journal of Machine Learning Research, 9(10), 2008.
[8] M. Debruyne, M. Hubert, and J. Van Horebeek. Detecting influential observations in kernel pca. Computational Statistics & Data Analysis, 54(12):3007?3019, 2010.
[9] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. In
Proceedings of the tenth ACM SIGKDD international conference on KDD, pages 551?556. ACM, 2004.
[10] L. Duan, L. Xu, Y. Liu, and J. Lee. Cluster-based outlier detection. Annals of Operations Research,
168(1):151?168, 2009.
[11] N. El Karoui et al. On information plus noise kernel random matrices. The Annals of Statistics, 38(5):3191?
3216, 2010.
[12] D.-W. Kim, K. Y. Lee, D. Lee, and K. H. Lee. Evaluation of the performance of clustering algorithms in
kernel-induced feature space. Pattern Recognition, 38(4):607?611, 2005.
[13] J. Kim and C. D. Scott. Robust kernel density estimation. The Journal of Machine Learning Research,
13(1):2529?2565, 2012.
[14] J. MacQueen et al. Some methods for classification and analysis of multivariate observations. In
Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages
281?297. Oakland, CA, USA., 1967.
[15] D. G. Mixon, S. Villar, and R. Ward. Clustering subgaussian mixtures by semidefinite programming. arXiv
preprint arXiv:1602.06612, 2016.
[16] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. Advances in
neural information processing systems, 2:849?856, 2002.
[17] R. Pamula, J. K. Deka, and S. Nandi. An outlier detection method based on clustering. In 2011 Second
International Conference on EAIT, pages 253?256. IEEE, 2011.
[18] J. Peng and Y. Wei. Approximating k-means-type clustering via semidefinite programming. SIAM Journal
on Optimization, 18(1):186?205, 2007.
[19] B. Sch?lkopf, A. Smola, and K.-R. M?ller. Nonlinear component analysis as a kernel eigenvalue problem.
Neural computation, 10(5):1299?1319, 1998.
[20] J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence,
IEEE Transactions on, 22(8):888?905, 2000.
[21] H. Steinhaus. Sur la division des corp materiels en parties. Bull. Acad. Polon. Sci, 1:801?804, 1956.
[22] U. Von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. The Annals of Statistics,
pages 555?586, 2008.
[23] L. Xu, K. Crammer, and D. Schuurmans. Robust support vector machine training via convex outlier
ablation. In AAAI, volume 6, pages 536?542, 2006.
[24] M.-S. Yang and K.-L. Wu. A similarity-based robust clustering method. Pattern Analysis and Machine
Intelligence, IEEE Transactions on, 26(4):434?448, 2004.
[25] Y. Yu, T. Wang, and R. Samworth. A useful variant of the davis?kahan theorem for statisticians. Biometrika,
102(2):315?323, 2015.
9
| 6389 |@word kulis:1 version:2 norm:12 c0:2 suitably:1 km:4 d2:2 covariance:1 decomposition:4 sheffet:1 carry:1 liu:1 mixon:1 existing:1 kx0:3 recovered:1 comparing:1 karoui:1 scatter:1 numerical:2 partition:1 happen:1 kdd:1 drop:1 n0:4 zik:1 v:4 half:3 intelligence:2 pelckmans:1 isotropic:2 xk:2 ith:2 vanishing:1 record:1 provides:3 detecting:1 node:21 org:1 mathematical:1 along:2 c2:1 symposium:1 prove:5 hermitian:1 introduce:1 deteriorate:1 x0:22 purnamrita:1 peng:1 behavior:1 examine:1 sdp:35 growing:4 inspired:1 detects:1 spherical:1 duan:1 equipped:1 cardinality:1 provided:1 spain:1 underlying:1 bounded:5 notation:1 panel:3 eigenvector:1 guarantee:1 berkeley:1 exactly:1 biometrika:1 rm:1 k2:6 yn:4 before:2 positive:1 dropped:1 treat:1 limit:1 acad:1 despite:3 subscript:2 plus:1 studied:1 collect:1 relaxing:1 practical:1 unique:1 faithful:1 yj:5 practice:1 block:3 procedure:4 empirical:4 yan:1 attain:1 alleviated:1 projection:1 suggest:1 get:6 convenience:1 close:2 selection:1 operator:2 context:1 influence:2 applying:3 risk:1 equivalent:1 center:9 maximizing:1 shi:1 go:6 attention:2 convex:5 simplicity:3 recovery:1 assigns:1 insight:1 century:1 population:9 annals:4 modulo:1 exact:1 programming:5 caption:1 us:3 element:1 trend:1 approximated:1 recognition:1 cut:2 distributional:1 preprint:2 wang:1 capture:1 calculate:1 wj:1 decrease:1 vanishes:2 convexity:1 covariates:1 weakly:9 depend:1 tight:1 division:1 k0:1 various:2 separated:4 mn2:2 effective:1 describe:1 artificial:1 dichotomy:1 labeling:1 heuristic:1 widely:4 solve:1 supplementary:1 bowei:1 otherwise:2 enlarges:1 ability:1 statistic:8 cov:1 ward:1 kahan:6 ip:1 final:1 brabanter:2 eigenvalue:6 rr:1 cai:1 propose:2 product:2 combining:1 ablation:1 iff:1 frobenius:2 getting:1 convergence:2 cluster:33 optimum:2 misclustered:6 produce:1 generating:2 converges:2 inlier:7 illustrate:1 develop:1 derive:1 misclustering:1 ij:1 op:15 received:1 eq:3 strong:2 predicted:1 christmann:1 indicate:1 come:1 differ:1 concentrate:5 direction:1 drawback:1 stochastic:1 centered:1 material:2 resilient:3 require:2 suffices:3 clustered:4 fix:1 decompose:1 randomization:1 around:3 considered:1 ground:3 exp:1 mapping:1 major:1 achieves:2 a2:1 smallest:2 estimation:2 samworth:1 applicable:1 label:2 combinatorial:1 villar:1 sensitive:3 largest:5 weighted:1 moor:1 minimization:1 awasthi:1 gaussian:17 avoid:1 cr:1 corollary:4 derived:1 focus:1 prevalent:1 indicates:1 bernoulli:1 hk:1 contrast:1 sigkdd:1 blockmodel:1 centroid:5 detect:1 kim:2 dim:1 el:1 membership:2 hidden:2 misclassified:6 going:1 provably:1 among:3 classification:1 denoted:2 proposes:3 special:1 equal:3 once:1 construct:1 ng:1 zz:1 identical:1 kw:1 yu:1 unsupervised:2 np:1 others:1 inherent:1 belkin:1 statistician:1 ab:1 detection:4 onwards:1 mining:1 investigate:1 evaluation:1 replicates:1 deferred:2 analyzed:2 mixture:7 semidefinite:8 inliers:12 hubert:3 old:1 euclidean:1 divide:1 re:1 obscured:1 theoretical:3 instance:1 column:1 logp:2 bull:1 introducing:1 deviation:4 uniform:1 rounding:2 reported:1 supx:1 synthetic:1 density:2 international:2 siam:1 probabilistic:1 lee:4 again:1 aaai:1 von:1 choose:1 slowly:2 d2k:4 worse:1 leading:3 li:1 de:4 lloyd:1 includes:1 depends:2 root:1 lot:1 analyze:2 start:2 recover:1 defer:1 contribution:1 square:3 accuracy:8 variance:4 lkopf:1 identification:1 none:1 unaffected:1 assortativity:1 aligns:1 definition:2 typed:1 james:1 proof:12 mi:5 recovers:1 proved:3 popular:1 ask:1 nandi:1 recall:1 organized:2 segmentation:1 carefully:1 ea:2 back:1 
manuscript:2 ok:1 supervised:1 restarts:1 improved:1 wei:2 strongly:7 generality:1 just:1 smola:1 hand:3 sketch:4 steinwart:1 nonlinear:1 grows:3 usa:1 effect:1 k22:6 normalized:3 true:2 multiplier:1 counterpart:4 assigned:1 alternating:1 symmetric:1 dhillon:1 davis:6 presenting:2 complete:1 demonstrate:1 image:1 common:1 rotation:4 tending:1 quarter:1 behaves:1 hugo:1 volume:2 belong:1 tail:2 rth:3 elementwise:1 refer:1 consistency:9 similarly:1 zia:1 specification:1 stable:1 similarity:1 yk2:2 etc:1 add:1 multivariate:1 recent:1 belongs:1 scenario:2 corp:1 inequality:2 binary:1 arbitrarily:3 yi:1 seen:3 minimum:5 greater:3 additional:2 mr:5 ller:1 signal:2 multiple:1 full:2 technical:1 faster:2 match:1 levina:1 long:4 divided:3 equally:2 plugging:1 qi:1 converging:1 variant:3 regression:4 essentially:1 metric:1 arxiv:5 kernel:60 sometimes:1 represent:2 normalization:1 achieved:3 suykens:2 c1:2 whereas:2 background:1 want:1 addition:1 else:1 singular:6 crucial:1 sch:1 induced:2 effectiveness:1 jordan:1 subgaussian:2 presence:4 ideal:1 bernstein:1 yang:1 superset:1 variety:1 variate:1 zi:1 inner:2 idea:1 cn:4 texas:2 motivated:1 pca:7 eigengap:3 suffer:1 returned:2 remark:4 generally:1 useful:1 clear:1 eigenvectors:21 listed:1 detailed:2 incapability:1 category:1 http:1 generate:2 deteriorates:1 correctly:3 broadly:1 dasgupta:1 kci:1 drawn:1 kyi:3 tenth:1 relaxation:18 fraction:7 sum:1 run:2 luxburg:1 named:1 throughout:1 almost:1 wu:1 separation:9 appendix:6 bound:19 followed:1 fold:1 constraint:3 n3:2 dominated:1 bousquet:1 min:26 optimality:1 department:2 influential:2 kd:1 across:2 slightly:1 em:2 describes:1 wi:6 outlier:45 restricted:2 computationally:1 remains:2 needed:1 oakland:1 end:2 gaussians:1 operation:1 apply:1 spectral:12 amini:1 robustness:9 slower:1 rp:3 eigen:2 top:8 clustering:50 include:2 cf:2 denotes:3 k1:6 approximating:1 objective:1 malik:1 question:1 quantity:2 parametric:1 concentration:4 diagonal:2 nr:3 unclear:1 affinity:1 amongst:1 distance:9 separate:1 sci:1 manifold:1 sur:1 index:2 kk:10 setup:1 difficult:1 statement:1 blockwise:5 trace:3 implementation:1 unknown:1 upper:10 observation:8 macqueen:2 situation:1 extended:3 looking:1 y1:4 rn:2 arbitrary:5 community:3 sarkar:1 introduced:1 bk:1 pair:1 specified:1 established:1 barcelona:1 nip:1 bar:1 usually:3 pattern:3 scott:1 summarize:1 max:12 suitable:2 misclassification:1 indicator:1 cea:1 mn:6 scheme:1 numerous:1 ready:1 hm:1 literature:1 schulman:1 kf:6 asymptotic:2 loss:2 permutation:1 interesting:1 proven:1 sufficient:1 consistent:17 share:2 austin:2 row:6 ok2:1 surprisingly:1 last:1 free:1 side:1 weaker:1 absolute:1 sparse:2 fifth:1 distributed:1 van:1 boundary:1 dimension:1 world:1 gram:1 kyj:2 kz:1 evaluating:1 author:1 concretely:1 commonly:1 party:1 transaction:2 assumed:1 conclude:1 k1n:1 iterative:1 ku:4 nature:1 robust:9 ca:1 schuurmans:1 diag:2 icann:1 main:9 linearly:1 bounding:2 noise:6 n2:4 x1:1 xu:2 en:3 sub:10 explicit:4 exponential:2 lie:1 vanish:2 guan:1 third:1 weighting:1 theorem:22 specific:2 maxi:1 decay:1 svm:1 dk:2 exists:2 albeit:1 ci:2 illustrates:1 conditioned:1 kx:9 simply:1 springer:2 mij:2 truth:3 satisfies:2 acm:2 sized:2 careful:1 towards:1 lipschitz:2 absence:1 feasible:1 hard:2 change:3 included:1 except:3 denoising:2 lemma:14 principal:3 called:2 total:1 svd:21 experimental:1 succeeds:1 la:1 support:3 crammer:1 |
5,957 | 639 | STIMULUS ENCODING BY
MULTIDIMENSIONAL RECEPTIVE FIELDS
IN SINGLE CELLS AND CELL POPULATIONS
IN VI OF AWAKE MONKEY
Edward Stern
Center for Neural Computation
and Department of Neurobiology
Life Sciences Institute
Hebrew University
Jerusalem, Israel
Eilon Vaadia
Center for Neural Computation
and Physiology Department
Hadassah Medical School
Hebrew University
Jerusalem, Israel
Ad Aertsen
Institut fur Neuroinfonnatik
Ruhr-Universitat-Bochum
Bochum, Gennany
Shaul Hochstein
Center for Neural Computation
and Department of Neurobiology,
Life Sciences Institute
Hebrew University
Jerusalem, Israel
ABSTRACT
Multiple single neuron responses were recorded
from a single electrode in VI of alert, behaving
monkeys. Drifting sinusoidal gratings were
presented in the cells' overlapping receptive
fields, and the stimulus was varied along several
visual dimensions. The degree of dimensional
separability was calculated for a large population
of neurons, and found to be a continuum. Several
cells showed different temporal response
dependencies to variation of different stimulus
dimensions, i.e. the tuning of the modulated
firing was not necessarily the same as that of the
mean firing rate. We describe a multidimensional
receptive field, and use simultaneously recorded
responses to compute a multi-neuron receptive
field, describing the information processing
capabilities of a group of cells. Using dynamic
correlation analysis, we propose several
computational schemes for multidimensional
spatiotemporal tuning for groups of cells. The
implications for neuronal coding of stimuli are
discussed.
377
378
Stern, Aensen, Vaadia, and Hochstein
INTRODUCTION
The receptive field is perhaps the most useful concept for understanding neuronal
information processing. The ideal definition of the receptive field is that set of stimuli
which cause a change in the neuron's firing properties. However, as with many such
concepts, the use of the receptive field in describing the behavior of sensory neurons falls
short of the ideal. The classical method for describing the receptive field has been to
measure the "tuning curve" i.e. the response of the neuron as a function of the value of
one dimension of the stimulus. This presents a problem because the sensory world is
multidimensional; For example, even a simple visual stimulus, such as a patch of a
sinusoidal grating, may vary in location, orientation, spatial frequency, temporal
frequency, movement direction and speed, phase, contrast, color, etc. Does the tuning to
one dimension remain constant when other dimensions are varied? i.e. are the dimensions
linearly separable? It is not unreasonable to expect inseparability: Consider an oriented,
spatially discrete receptive field. The excitation generated by passing a bar through the
receptive field will of course change with orientation. However, the shape of this tuning
curve will depend upon the bar width, related to the spatial frequency. This effect has not
been studied quantitatively, however. If interactions among dimensions exist, do they
account for a large portion of the cell's response variance? Are there discrete populations
of cells, with some cells showing interactions among dimensions and others not? These
question have clear implications for the problem of neural coding.
Related to the question of dimensional separability is that of stimulus encoding: Given
that the receptive field is multidimensional in nature, how can the cell maximize the
amount of stimulus information it encodes? Does the neuron use a single code to
represent all the stimulus dimensions? It is possible that interactions lead to greater
uncertainty in stimulus identification. Does the small number of visual cortical cells
encode all the possible combinations of stimuli using only spike rate as the dependent
variable? We present data indicating that more information is indeed present in the
neuronal response, and propose a new approach for its utilization.
The final problem that we address is the following: Clearly, many cells participate in the
stimulus encoding process. Arriving at a valid concept of a multidimensional receptive
field, can we generalize this concept to more than one cell introducing the notion of a
multi-cellular receptive field?
METHODS
Drifting sinusoidal gratings were presented for 500 msec to the central 10 degrees of the
visual field of monkeys performing a fixation task. The gratings were varied in
orientation, spatial frequency,temporal frequency, and movement direction. We recorded
from up to 3 cells simultaneously with a single electrode in the monkey's primary visual
cortex (VI). The cells described in this study were well separated, using a templatematching procedure. The responses of the neurons were plotted as Peri-Stimulus Time
Histograms (PSTHs) and their parameters quantified (Abeles, 1982), and offline Fourier
analysis and time-dependent crosscorrelation analysis (Aertsen et ai, 1989) were
performed.
Stimulus Encoding by Multidimensional Receptive Fields in Single Cells and Cell Populations
RESULTS
Recording the responses of visual cortical neurons to stimuli varied over a number of
dimensions, we found that in some cases, the tuning curve to one dimension depended on
the value of another dimension. Figure lA shows the spatial-frequency tuning curve of a
single cell measured at 2 different stimulus orientations. When the orientation of the
stimulus is 72 degrees, the peak response is at a spatial frequency of 4.5 cycles/degree
(cpd), while at an orientation of216 degrees, the spatial frequency of peak response is 2.3
cpd. If the responses to different visual dimensions were truly linearly separable, the
tuning curve to any single dimension would have the same shape and, in particular,
position of peak, despite any variations in other dimensions. If the tuning curves are not
parallel, then interactions must exist between dimensions. Clearly, this is an example of
a cell whose responses are not linearly separable. In order to quantify the inseparability
phenomenon, analyses of variance were performed, using spike rate as the dependent
variable, and the visual dimensions of the stimuli as the independent variables. We then
measured the amount of interaction as a percentage of the total between-conditions
A.
Spatial
B. Interaction effects between
Frequency
stimulus dimensions:
Tuning Dependence upon
Orientation
Percentage or total variance
30'.....-------------.
<1)
0.9
~
0.8
25
bO O.7
s::
'C 0.6
u::
"0 0 .5
~ 0.4
E0.3
a
0.2
s::
0.1
O~--~~~~n---P-~~~
0.1
1
10
!?1I11~
1I11~
II
oooooo~
_
N
f""I
\I"')
110
--- --of non-residual variance
~
,
Spatial Frequency (cpd)
I
_
%
'
N
1
,
1""'11
"'"
,
It/')
__ ORl=72 - 6 - ORl=216
nfac=34
nfac=45
Figure 1: Dimensional Inseparability or Visual Cortical Neurons. A:
An example or dimensionsional inseparability in the response or a single
cell; B: Histogram or dimensional inseparability as a percentage or
total response variance.
379
380
Stern, Aertsen, Vaadia, and Hochstein
variance divided by the residuals. The resulting histogram for 69 cells is shown in Figure
lB. Although there are several cells with non-significant interactions, i.e. linearly
separable dimensions, this is not the majority of cells. The amount of dimensional
inseparability seems to be a continuum. We suggest that separability is a significant
variable in the coding capability of the neurons, which must be taken into account when
modeling the representation of sensory information by cortical neural networks.
We found that the time course of the response was not always constant, but varied with
stimulus parameters. Cortical cell responses may have components which are sustained
(constant over time), transient (with a peak near stimulus onset and/or offset), or
modulated (varying with the stimulus period). For example, Figure 2 shows the responses
of a single neuron in VI to 50 stimuli, varying in orientation and spatial frequency. Each
response is plotted as a PSTH, and the stippled bar under the PSTH indicates the time of
Orientation
DD~DD
4.5
-
D
[:j Eaij tJ D
D~EJijjG5D
23DG!5~GjD
D~[MjE5E:5
G:J [;J CiIIJ ~ c:::J
Ej~~~E:::J
liM ! f.ll !!u ? D.I
11m.,
f Iq lip ? B.'
p.i
!
Ill.'
f 1.'1
1.5
....t
I 1.1 I Ft.2
! U .. t
u .s lI!.t !
I.t
J
I!.' !
i.4
J
O.8D~~~D
DG5~tJD
04oEJt5DCj
II ?'
!
U
I flU t f.4 I 117.1 ? f .1 I 11'" ! 1.0 I p.i
: u!
Figure 2: Spatial Frequency/Orientation Tuning of Responses
of VI Cell
Stimulus Encoding by Multidimensional Receptive Fields in Single Cells and Cell Populations
the stimulus presentation (500 msec). The numbers beneath each PSTH are the firing rate
averaged over the response time. and the standard deviations of the response over
repetitions of the stimulus (in this case 40). Clearly. the cell is orientation selective, and
the neuronal response is also tuned to spatial frequency. The stimulus eliciting the
highest firing rate is ORI=252 degrees; SF=3.2 cycles/degree (cpd). However, when
looking at the responses to lower spatial frequencies, we see a modulation in the PSTH.
The modulation, when present, has 2 peaks, corresponding to the temporal frequency of
the stimulus grating (4 cycles/second). Therefore, although the response rate of the cell is
lower at low spatial frequencies than for other stimuli, the spike train carries additional
information about another stimulus dimension.
If the visual neuron is considered as a linear system, the predicted response to a drifting
sinusoidal grating would be a (rectified) sinusoid of the same (temporal) frequency as that
of the stimulus, i.e. a modulated response (Enroth-Cugell & Robson, 1966; Hochstein &
Shapley, 1976; Spitzer & Hochstein. 1988). However, as seen in Figure 2, in some
stimulus regimes the cell's response deviates from linearity. We conclude that the
linearity or nonlinearity of the response is dependent upon the stimulus conditions
(Spitzer & Hochstein, 1985). A modulated response is one that would be expected from
simple cells, while the sustained response seen at higher spatial frequencies is that
expected from complex cells. Our data therefore suggest that the simple/complex cell
categorization is not complete.
A further example of response time-course dependence on stimulus parameters is seen in
Figure 3A. In this case, the stimulus was varied in spatial frequency and temporal
frequency, while other dimensions were held constant. Again, as spatial frequency is
raised. the modulation of the PSTH gives way to a more sustained response. Funhennore,
as temporal frequency is raised. both the sustained and the modulated responses are
replaced by a single transient response. When present, the frequency of the modulation
follows that of the temporal frequency of the stimulus. Fourier analysis of the response
histograms (Figure 3B) reveals that the DC and fundamental component (FC) are not
tuned to the same stimulus values (arrows indicating peaks). We propose that this
information may be available to the cell readout, enabling the single cell to encode
multiple stimulus dimensions simultaneously.
Thus, a complete description of the receptive field must be multidimensional in nature.
Furthermore, in light of the evidence that the spike train is not constant, one of the
dimensions which must be used to display the receptive field must be time.
Figure 4 shows one method of displaying a multidimensional response map, with time
along the abscissa (in 10 msec bins) and orientation along the ordinate. In the top two
figures, the z axis, represented in gray-scale, is the number of counts (spikes) per bin.
Therefore, each line is a PSTH, with counts (bin height) coded by shading. In this
example. cell 2 (upper picture) is tuned to orientation, with peaks at 90 and 270 degrees.
The cell is only slightly direction selective, as represented by the fact that the 2 areas of
high activity are similarly shaded. However, there is a transient peak at270 degrees which
381
382
Stern, Aertsen, Vaadia, and Hochstein
A.
Spatial Frequency (cpd)
0.6
lb
0.11
1.5
1.1
2.1
DG:J~G:J~
!
p..
III"
111.1
,.0
4.21
!'" 1
! I.e 1 p.l ! I.S 1
! 1.11
RCJ~~~~
1.51 pi., m.s
lIZ.'
Fi.S
I
~
/II.! ??.?
!
" .4 :
! n.I
! is.1
C:W~~CitJ~
!
III"! "'I ~i.I !
2
tl.S
pu ! 14.1 p.1
IU ".S ! 14.1
UJ~CWJ[MJ~
!
!
IA.'
pi ?? ? D.C [fT.'
1M FO.I
IS! tl.O ! 1 .1
! 1M
B.
nonnalized values
1
o
1
2
16
TF (cycles/second)
0.6
0.8
1.1
SF (cpd)
Figure 3: A. TF/SF Tuning of response of VI cell.
B. Tuning of DC and FC of response to stimulus parameters.
is absent at 90 degrees. The middle picture. representing a simultaneously recorded cell
shows a different pattern of activity. The orientation tuning of this cell is similar to that
of cell 2, but it has slIonger directional selectivity. (towards 90 degrees). In this case, the
lIansient is also at 90 degrees. The bottom picture shows the joint activity of these 2
cells. Rather than each line being a PSTH, each line is a Joint PSTH (JPSTH; Aertsen et
al. 1989). This histogram represents the time-dependent correlated activity of a pair of
cells. It is equivalent to sliding a window across a spike lIain of one neuron and
Stimulus Encoding by Multidimensional Receptive Fields in Single Cells and Cell Populations
324
cell 2
counts/bin
250
200
150
\00
50
216
108
o
o
cell 3
-
324
~
216
I II
t
Q,I
:s
-
.?
108
S
I:
.~
...=>
0
cell 2,3 coincidence
324
60
216
20
108
o
50
40
30
10
o
o
250
500
time (msec)
750
1000
SF=4.S cpd
Figure 4: Response Maps.
Top, Middle: Single-cell Multidimensional Receptive Fields;
Bottom: Multi-Cell Multidimensional Receptive Field
asking when a spike from another neuron falls within the window. The size of the
window can be varied; here we used 2 msec. Therefore, we are asking when these cells
fire within 2 msec of each other, and how this is connected to the stimulus. The z axis is
now coincidences per bin. We may consider this the logical AND activity of these cells;
if there is a cell receiving infonnation from both of these neurons, this is the receptive
field which would describe its input. Clearly. it is different from the each of the 2
individual cells. In our results. it is more narrowly tuned. and the tuning can not be
predicted from the individual components. We emphasize that this is the "raw" JPSTH.
which is not corrected for stimulus effects. common input. or normalized. This is because
we want a measure comparable to the PSTHs themselves, to compare a multi-unit
383
384
Stern, Aertsen, Vaadia, and Hochstein
receptive field to its single unit components. In this case, however, a significant (p<O.01;
Palm et ai, 1988) "mono-directional" interaction is present. For a more complete
description of the receptive field, this type of figure, shown here for one spatial frequency
only, can be shown for all spatial frequencies as "slices" along a fourth axis. However,
space limitations prevent us from presenting this multidimensional aspect of the
multicellular receptive field.
CONCLUSIONS
We have shown that interactions among stimulus dimensions account for a significant
proponion of the response variance of V 1 cells. The variance of the interactions itself
may be a useful parameter when considering a population response, as the amount and
location of the dimensional inseparability varies among cells. We have also shown that
different temporal characteristics of the spike trains can be tuned to different dimensions,
and add to the encoding capabilities of the cell in a neurobioiogically realistic manner.
Finally, we use these results to generate multidimensional receptive fields, for single cells
and small groups of cells. We emphasize that this can be generalized to larger populations
of cells, and to compute the population responses of cells that may be meaningful for the
cone x as a biological neuronal network.
Acknowledgements
We thank Israel Nelken, Hagai Bergman, Volodya Yakovlev, Moshe Abeles, Peter
Hillman, Roben Shapley and Valentino Braitenberg for helpful discussions. This study
was supponed by grants from the U.S.-Israel Bi-National Science Foundation (BSF) and
the Israel Academy of Sciences.
References
1. Abeles, M. Quantification, Smoothing, and Confidence Limits for Single Units'
Histograms 1. Neurosci . Melhods 5 ,317-325,1982.
2. Aertsen, A.M .H.J., Gerstein, G. L., Habib, M.K., and Palm, G. Dynamics of
Neuronal Firing Correlation: Modulation of "Effective Connectivity" 1. Neurophysio151
(5),900-917, 1989.
3. Enroth-CugeU, C. and Robson, J.G. The Contrast Sensitivity of Retinal Ganglion
Cells of the Call Physiol. Lond 187, 517-552,1966.
4. Hochstein, S. and Shapley, R. M. Linear and Nonlinear Spatial Subunits in Y Cat
Retinal Ganglion Cells 1 Physiol. Lond 262, 265-284, 1976.
5. Palm, G ., Aensen, A.M.H.J. and Gerstein, G.L. On the Significance of Correlations
Among Neuronal Spike Trains Bioi. Cybern. 59, 1-11, 1988.
6. Spitzer, H. and Hochstein , S. Simple and Complex-Cell Response Dependencies on
Stimulation Parameters 1.Neurophysiol 53,1244-1265,1985.
7. Spitzer, H. and Hochstein, S. Complex Cell Receptive Field Models Prog. in
Neurobiology, 31 ,285-309, 1988.
| 639 |@word middle:2 seems:1 ruhr:1 shading:1 carry:1 tuned:5 must:5 physiol:2 realistic:1 shape:2 short:1 location:2 psth:8 height:1 along:4 alert:1 fixation:1 sustained:4 shapley:3 manner:1 expected:2 indeed:1 behavior:1 abscissa:1 themselves:1 multi:4 window:3 considering:1 linearity:2 spitzer:4 israel:6 monkey:4 temporal:9 multidimensional:15 utilization:1 unit:3 medical:1 grant:1 depended:1 limit:1 despite:1 encoding:7 flu:1 firing:6 modulation:5 studied:1 quantified:1 shaded:1 bi:1 averaged:1 procedure:1 area:1 physiology:1 confidence:1 suggest:2 bochum:2 cybern:1 eilon:1 equivalent:1 map:2 center:3 jerusalem:3 bsf:1 valentino:1 population:9 notion:1 variation:2 bergman:1 roben:1 bottom:2 ft:2 coincidence:2 readout:1 cycle:4 connected:1 movement:2 highest:1 dynamic:2 depend:1 upon:3 neurophysiol:1 joint:2 represented:2 cat:1 train:4 separated:1 describe:2 effective:1 whose:1 larger:1 itself:1 final:1 vaadia:5 propose:3 interaction:10 beneath:1 academy:1 description:2 electrode:2 categorization:1 iq:1 measured:2 school:1 grating:6 edward:1 predicted:2 quantify:1 direction:3 transient:3 bin:5 biological:1 hagai:1 considered:1 liz:1 continuum:2 vary:1 robson:2 infonnation:1 repetition:1 tf:2 clearly:4 always:1 rather:1 ej:1 varying:2 gennany:1 encode:2 fur:1 indicates:1 contrast:2 helpful:1 dependent:5 shaul:1 selective:2 iu:1 among:5 orientation:14 ill:1 spatial:20 raised:2 smoothing:1 field:27 represents:1 braitenberg:1 others:1 cpd:7 stimulus:44 quantitatively:1 oriented:1 dg:2 simultaneously:4 national:1 individual:2 replaced:1 phase:1 fire:1 eaij:1 truly:1 light:1 tj:1 held:1 implication:2 institut:1 plotted:2 e0:1 modeling:1 asking:2 introducing:1 deviation:1 universitat:1 dependency:2 varies:1 spatiotemporal:1 abele:3 peri:1 peak:8 fundamental:1 sensitivity:1 receiving:1 connectivity:1 again:1 central:1 recorded:4 crosscorrelation:1 li:1 account:3 sinusoidal:4 retinal:2 coding:3 cugell:1 vi:6 ad:1 onset:1 performed:2 ori:1 portion:1 capability:3 parallel:1 variance:8 characteristic:1 directional:2 generalize:1 identification:1 raw:1 rectified:1 fo:1 definition:1 frequency:27 logical:1 color:1 lim:1 higher:1 response:43 furthermore:1 correlation:3 nonlinear:1 overlapping:1 gray:1 perhaps:1 effect:3 concept:4 normalized:1 sinusoid:1 spatially:1 ll:1 width:1 excitation:1 generalized:1 presenting:1 complete:3 fi:1 common:1 psths:2 stimulation:1 discussed:1 significant:4 ai:2 tuning:15 similarly:1 nonlinearity:1 cortex:1 behaving:1 etc:1 pu:1 add:1 hadassah:1 showed:1 selectivity:1 life:2 seen:3 greater:1 additional:1 maximize:1 period:1 ii:4 sliding:1 multiple:2 divided:1 coded:1 histogram:6 represent:1 cell:65 want:1 recording:1 call:1 near:1 ideal:2 iii:2 absent:1 narrowly:1 peter:1 enroth:2 passing:1 cause:1 useful:2 clear:1 amount:4 generate:1 exist:2 percentage:3 per:2 discrete:2 group:3 mono:1 prevent:1 cone:1 uncertainty:1 fourth:1 prog:1 patch:1 gerstein:2 orl:2 comparable:1 display:1 activity:5 awake:1 encodes:1 fourier:2 speed:1 aspect:1 lond:2 hochstein:11 performing:1 separable:4 department:3 palm:3 combination:1 remain:1 slightly:1 across:1 separability:3 cwj:1 taken:1 describing:3 count:3 available:1 unreasonable:1 tjd:1 drifting:3 top:2 uj:1 classical:1 eliciting:1 question:2 moshe:1 spike:9 receptive:26 primary:1 dependence:2 aertsen:7 thank:1 majority:1 participate:1 cellular:1 code:1 gjd:1 hebrew:3 supponed:1 stern:5 upper:1 neuron:16 enabling:1 subunit:1 neurobiology:3 looking:1 nonnalized:1 dc:2 varied:7 lb:2 ordinate:1 pair:1 yakovlev:1 address:1 bar:3 pattern:1 
regime:1 ia:1 quantification:1 residual:2 representing:1 scheme:1 picture:3 axis:3 deviate:1 understanding:1 acknowledgement:1 expect:1 limitation:1 foundation:1 degree:12 dd:2 displaying:1 pi:2 course:3 arriving:1 offline:1 institute:2 fall:2 slice:1 curve:6 dimension:25 calculated:1 world:1 valid:1 cortical:5 sensory:3 nelken:1 emphasize:2 reveals:1 conclude:1 lip:1 nature:2 mj:1 necessarily:1 complex:4 significance:1 linearly:4 arrow:1 neurosci:1 hillman:1 neuronal:7 tl:2 position:1 msec:6 sf:4 showing:1 offset:1 jpsth:2 evidence:1 i11:2 multicellular:1 inseparability:7 fc:2 ganglion:2 visual:10 bo:1 volodya:1 bioi:1 presentation:1 towards:1 habib:1 change:2 corrected:1 total:3 la:1 meaningful:1 indicating:2 modulated:5 phenomenon:1 correlated:1 |
5,958 | 6,390 | CNNpack: Packing Convolutional Neural Networks
in the Frequency Domain
Yunhe Wang1,3 , Chang Xu2 , Shan You1,3 , Dacheng Tao2 , Chao Xu1,3
Key Laboratory of Machine Perception (MOE), School of EECS, Peking University
2
Centre for Quantum Computation and Intelligent Systems,
School of Software, University of Technology Sydney
3
Cooperative Medianet Innovation Center, Peking University
[email protected], [email protected], [email protected],
[email protected], [email protected]
1
Abstract
Deep convolutional neural networks (CNNs) are successfully used in a number
of applications. However, their storage and computational requirements have
largely prevented their widespread use on mobile devices. Here we present an
effective CNN compression approach in the frequency domain, which focuses not
only on smaller weights but on all the weights and their underlying connections.
By treating convolutional filters as images, we decompose their representations
in the frequency domain as common parts (i.e., cluster centers) shared by other
similar filters and their individual private parts (i.e., individual residuals). A
large number of low-energy frequency coefficients in both parts can be discarded
to produce high compression without significantly compromising accuracy. We
relax the computational burden of convolution operations in CNNs by linearly
combining the convolution responses of discrete cosine transform (DCT) bases.
The compression and speed-up ratios of the proposed algorithm are thoroughly
analyzed and evaluated on benchmark image datasets to demonstrate its superiority
over state-of-the-art methods.
1
Introduction
Thanks to the large amount of accessible training data and computational power of GPUs, deep
learning models, especially convolutional neural networks (CNNs), have been successfully applied to
various computer vision (CV) applications such as image classification [19], human face verification
[20], object recognition, and object detection [7, 17]. However, most of the widely used CNNs can
only be used on desktop PCs or even workstations due to their demanding storage and computational
resource requirements. For example, over 232MB of memory and over 7.24 ? 108 multiplications
are required to launch AlexNet and VGG-Net per image, preventing them from being used in mobile
terminal apps on smartphones or tablet PCs. Nevertheless, CV applications are growing in importance
for mobile device use and there is, therefore, an imperative to develop and use CNNs for this purpose.
Considering the lack of GPU support and the limited storage and CPU performance of mainstream
mobile devices, compressing and accelerating CNNs is essential. Although CNNs can have millions
of neurons and weights, recent research [9] has highlighted that over 85% of weights are useless
and can be set to 0 without an obvious deterioration in performance. This suggests that the gap in
demands made by large CNNs and the limited resources offered by mobile devices may be bridged.
Some effective algorithms have been developed to tackle this challenging problem. [8] utilized vector
quantization to allow similar connections to share the same cluster center. [6] showed that the weight
matrices can be reduced by low-rank decomposition approaches.[4] proposed a network architecture
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: The flowchart of the proposed CNNpack.
using the ?hashing trick? and [4] then transferred the HashedNet into the discrete cosine transform
(DCT) frequency domain [3]. [16, 5] proposed binaryNet, whose weights were -1/1 or -1/0/1 [2]. [15]
utilizes a sparse decomposition to reduce the redundancy of weights and computational complexity of
CNNs. [9] employed pruning [10], quantization, and Huffman coding to obtain a greater than 35?
compression ratio and 3? speed improvement, thereby producing state-of-the-art CNNs compression
to the best of our knowledge. The effectiveness of the pruning strategy relies on the principle that
if the absolute value of a weight in a CNN is sufficiently small, its influence on the output is often
negligible. However, these methods tend to disregard the properties of larger weights, which might
also provide opportunities for compression. Moreover, independently considering each weight ignores
the contextual information of other weights.
To address the aforementioned problems, we propose handling convolutional filters in the frequency
domain using DCT (see Fig. 1). In practice, convolutional filters can be regarded as small and
smooth image patches. Hence, any operation on the convolutional filter frequency coefficients in the
frequency domain is equivalent to an operation performed simultaneously over all weights of the
convolutional filters in the spatial domain. We factorize the representation of the convolutional filter
in the frequency domain as the composition of common parts shared with other similar filters and
its private part describing some unique information. Both parts can be significantly compressed by
discarding a large number of subtle frequency coefficients. Furthermore, we develop an extremely fast
convolution calculation scheme that exploits the relationship between the feature maps of DCT bases
and frequency coefficients. We have theoretically discussed the compression and the speed-up of
the proposed algorithm. Experimental results on benchmark datasets demonstrate that our proposed
algorithm can consistently outperform state-of-the-art competitors, with higher compression ratios
and speed gains.
2
Compressing CNNs in the Frequency Domain
Recently developed CNNs contain a large number of convolutional filters. We regard convolutional
filters as small images with intrinsic patterns, and present an approach to compress CNNs in the
frequency domain with the help of the DCT.
2.1
The Discrete Cosine Transform (DCT)
The DCT plays an important role in JPEG compression [22], which is regarded as an approximate
KL-transformation for 2D images [1]. In JPEGs, the original image is usually divided into several
square patches. For an image patch P ? Rn?n , its DCT coefficient C ? Rn?n in the frequency
domain is defined as:
Cj1 j2 = D(Pi1 i2 ) = sj1 sj2
n?1
X n?1
X
?(i1 , i2 , j1 , j2 )Pi1 i2 = cTj1 P cj2 ,
(1)
i1 =0 i2 =0
p
p
where sj = 1/n if j = 0 and sj = 2/n, otherwise, and C = C T P C is the matrix form of
the DCT, where C = [c1 , ..., cd ] ? Rd?d is the transformation matrix. The basis of this DCT is
2
Sj1 j2 = cj1 cTj2 , and ?(i1 , i2 , j1 , j2 ) denotes the cosine basis function:
?(2i1 + 1)j1
?(2i2 + 1)j2
?(i1 , i2 , j1 , j2 ) = cos
cos
,
(2)
2n
2n
and cj (i) = ?(2i+1)j
. The DCT is a lossless transformation thus we can recover the original
2n
image by simply utilizing the inverse DCT, i.e.,
Pi1 i2 = D?1 (Cj1 j2 ) =
n?1
X n?1
X
sj1 sj2 ?(i1 , i2 , j1 , j2 )Cj1 j2 ,
(3)
j1 =0 j2 =0
whose matrix form is P = CCC T . Furthermore, to facilitate the notations we denote the DCT and the
inverse DCT for vectors as vec(C) = D(vec(P )) = (C ? C)vec(P ) and vec(P ) = D?1 (vec(C)) =
(C ? C)T vec(C), where vec(?) is the vectorization operation and ? is the Kronecker product.
2.2
Convolutional Layer Compression
Computing Residuals in the Frequency Domain. For a given convolutional layer Li , we first
(i)
(i)
extract its convolutional filters F (i) = {F1 , ..., FNi }, where the size of each convolutional filter is
di ? di and Ni is the number of filters in Li . Each filter can then be transformed into a vector, and
2
(i)
(i)
(i)
(i)
together they form a matrix Xi = [x1 , ..., xNi ] ? Rdi ?Ni , where xj = vec(Fj ), ? j = 1, ..., Ni .
DCT has been widely used for image compression, since DCT coefficients present an experienced
distribution in the frequency domain. Energies of high-frequency coefficients are usually much
smaller than those of low-frequency coefficients for 2D natural images, i.e., the high frequencies
tend to have values equal or close to zero [22]. Hence, we propose to transfer Xi into the DCT
frequency domain and obtain its frequency representation C = D(Xi ) = [C1 , ..., CNi ]. Since a
number of convolutional filters will share some similar components, we divide them into several
groups G = {G1 , ..., GK } by exploiting K centers U = [?1 , ..., ?k ] ? Rdi ?K with the following
minimization problem:
K X
X
arg min
||C ? ?k ||22 ,
(4)
G
k=1 C?Gk
where ?k is the cluster center of Gk . Fcn. 4 can easily be solved with the conventional k-means
algorithm [9, 8]. For each Cj , we denote its residual with its corresponding cluster as Rj = Cj ? ?kj ,
where kj = arg mink ||Cj ??k ||2 . Hence, each convolutional filter is represented by its corresponding
cluster center shared by other similar filters and its private part Rj in the frequency domain. We
further employ the following `1 -penalized optimization problem to control redundancy in the private
parts for each convolutional filter:
b j ? Rj ||22 + ?||R
b j ||1 ,
arg min ||R
(5)
bj
R
where ? is a parameter for balancing the reconstruction error and sparsity penalty. The solution to
Fcn. 5 is:
b j = sign(Rj ) max{|Rj | ? ? , 0},
R
(6)
2
where sign(?) is the sign function. We can also control the redundancy in the cluster centers using the
above approach as well.
Quantization, Encoding, and Fine-tuning. The sparse data obtained through Fcn. 6 is continuous,
which is not benefit for storing and compressing. Hence we need to represent similar values with
a common value, e.g., [0.101, 0.100, 0.102, 0.099] ? 0.100. Inspired by the conventional JPEG
bj :
algorithm, we use the following function to quantize R
(
)
b j , ?b, b)
Clip(R
b
Rj = Q Rj , ?, b = ? ? I
,
(7)
?
where Clip(x, ?b, b) = max(?b, min(b, x)) with boundary b > 0, and ? is a large integer with
similar functionality to the quantization table in the JPEG algorithm. It is useful to note that the
3
quantized values produced by applying Fcn. 7 can be regarded as the one-dimensional k-means
centers in [9], thus a dictionary can consist of unique values in all {R1 , ..., RNi }. Since occurrence
probabilities of elements in the codebook are unbalanced, we employ the Huffman encoding for a
more compact storage. Finally, the Huffman encoded data is stored in the compressed sparse row
format (CSR), denoted as Ei .
It has also been shown that fine-tuning after compression further enhances network accuracy [10, 9].
In our algorithm, we also employ the fine-tuning approach by holding weights that have been
discarded so that the fine-tuning operation does not change the compression ratio. After generating a
new model, we apply Fcn. 7 again to quantize the new model?s parameters until convergence.
The above scheme for compressing convolutional CNNs stores four types of data: the compressed
2
data Ei , the Huffman dictionary with Hi quantized values, the k-means centers U ? Rdi ?K , and
the indexes for mapping residual data to centers. It is obvious that if all filters from every layer can
share the same Huffman dictionary and cluster centers, the compression ratio will be significantly
increased.
2.3
CNNpack for CNN Compression
Global Compression Scheme. To enable all convolutional filters to share the same cluster centers
2
U ? Rdi ?k in the frequency domain, we must convert them into a fixed-dimensional space. It is
intuitive to directly resize all convolutional filters into matrices of the same dimensions and then
apply k-means. However, this simple resizing method increases the amount of the data that needs to
be stored. Considering d as the target dimension and di ? di as the convolutional filter size of the
i-th layer, the weight matrix would be inaccurately reshaped in the case of di < d or di > d.
A more reasonable approach would be to resize the DCT coefficient matrices of convolutional filters
in the frequency domain, because high-frequency coefficients are generally small and discarding
them only has a small impact on the result (di > d). On the other hand, the introduced zeros will be
immediately compressed by CSR since we do not need to encode or store them (di < d). Formally,
the resizing operation for convolutional filters in the DCT frequency domain can be defined as:
Cj1 ,j2 , if j1 , j2 ? d,
Cbj1 ,j2 = ?(C, d) =
(8)
0, otherwise.
where d ? d is the fixed filter size, C ? Rdi ?di is the DCT coefficient matrix of a filter in the i-th
layer and Cb ? Rd?d is the coefficient matrix after resizing. After applying Fcn. 8, we can pack all
the coefficient matrices together and use only one set of cluster centers to compute the residual data
and then compress the network. We extend the individual compression scheme into an integrated
method that is convenient and effective for compressing deep CNNs, which we call CNNpack. The
procedures of the proposed algorithm are detailed in the supplementary materials.
CNNpack has five hyper-parameters: ?, d, K, b, and ?. Its compression ratio can be calculated by:
Pp
2
i=1 32Ni di
,
(9)
rc = Pp
2
i=1 Ni log K + Bi + 32H + 32d K
where p is the number of convolutional layers and H is the bits for storing the Huffman dictionary. It
is instructive to note that a larger ? (Fcn. 5) puts more emphasis on the common parts of convolutional
filters, which leads to a higher compression ratio rc . A decrease in any of b, d and ? will increase
rc accordingly. Parameter K is related to the sparseness of Ei , and a larger K would contribute
more to Ei ?s sparseness but would lead to higher storage requirement. A detailed investigation of
all these parameters is presented in Section. 4, and we also demonstrate and validate the trade-off
between the compression ratio and CNN accuracy of the convolutional neural work (i.e., classification
accuracy [19]).
3
Speeding Up Convolutions
According to Fcn. 9, we can obtain a good compression ratio by converting the convolutional filters
into the DCT frequency domain and representing them using their corresponding cluster centers and
4
residual data Ri = [R1 , ..., RNi ]. This is an effective strategy to reduce CNN storage requirements
and transmission consumption. However, two other important issues need to be emphasized: memory
usage and complexity.
This section focuses on a single layer throughout, we will drop the layer index i in order to simplify
notation. In practice, the residual data R in the frequency domain cannot be simply used to calculate
convolutions, so we must first transform them to the original filters in the spatial domain, thus not
saving memory. Furthermore, the transformation will further increase algorithm complexity. It is
thus necessary to explore a scheme that enables the proposed compression method to easily calculate
feature maps of original filters in the DCT frequency domain.
Given a convolutional layer L with its filters F = {Fq }N
q=1 of size d ? d, we denote the input data
H?W
as X ? R
and its output feature maps as Y = {Y1 , Y2 , ..., YN } with size H 0 ? W 0 , where
Yq = Fq ? X for the convolution operation ?. Here we propose a scheme which decomposes the
calculation of the conventional convolutions in the DCT frequency domain.
For the DCT matrix C = [c1 , ..., cd ], the d ? d convolutional filter Fq can be represented by its
DCT coefficient matrix C (q) with DCT bases {Sj1 ,j2 }dj1 ,j2 =1 defined as Sj1 ,j2 = cj1 cTj2 , namely,
Pd
Pd
(q)
Fq =
j1 =1
j2 =1 Cj1 ,j2 Sj1 ,j2 . In this way, feature maps of X through F can be calculated
Pd
(q)
as Yq = j1 ,j2 =1 Cj1 ,j2 (Sj1 ,j2 ? X), where M = d2 is the number of DCT bases. Fortunately,
since the DCT is anorthogonal transformation, all of its bases are rank-1 matrices. Note that
Sj1 ,j2 ? X = cj1 cTj2 ? X, and thus feature map Yq can be written as
d
X
Yq = Fq ? X =
(q)
Cj1 ,j2 (Sj1 ,j2
? X) =
j1 ,j2 =1
d
X
(q)
Cj1 ,j2 [cj1 ? (cTj2 ? X)].
(10)
j1 ,j2 =1
One drawback of this scheme is that when the number of filters in the layer is relatively small, i.e., M ≥ N, Fcn. 10 will increase the computational cost. Moreover, the complexity can be further reduced given the fascinating fact that the feature maps calculated with these DCT bases are exactly the DCT frequency coefficients of the input data. For a d × d matrix X, consider the matrix form of its DCT, i.e., the coefficient matrix C^T X C with DCT bases {S_{j1,j2}}; then the coefficient C_{j1,j2} = (C^T X C)_{j1,j2} can be calculated as
C_{j1,j2} = c_{j1}^T X c_{j2} = c_{j1}^T (c_{j2}^T ⊗ X) = c_{j1} ⊗ (c_{j2}^T ⊗ X) = (c_{j1} ⊗ c_{j2}^T) ⊗ X = (c_{j1} c_{j2}^T) ⊗ X = S_{j1,j2} ⊗ X,   (11)

so we can obtain the feature maps of all M DCT bases by applying the DCT only once. The computational complexity of the proposed scheme is analyzed in Proposition 1.
Proposition 1. Given a convolutional layer with N filters, denote M = d × d as the number of DCT bases, and C ∈ R^{d²×N} as the compressed coefficients of the filters in this layer. Suppose η is the ratio of non-zero elements in C, while δ is the ratio of non-zero elements in the K′ active cluster centers of this layer. The computational complexity of the proposed scheme is O((d² log d + δMK′ + ηMN) H′W′).
The proof of Proposition 1 can be found in the supplementary materials. According to Proposition 1,
the proposed compression scheme in the frequency domain can also improve the speed. Compared to
the original CNN, for a convolutional layer, the speed-up of the proposed method is
r_s = d² N H′W′ / ( (d² log d + δK′M + ηNM) H′W′ ) ≈ N / (δK′ + ηN).   (12)
Obviously, the speed-up ratio of the proposed method is directly governed by η and δ, which in turn correspond to λ in Fcn. 6.
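To make Fcn. 12 concrete, here is a minimal sketch of the per-layer speed-up estimate (ours; we take the logarithm base 2 for concreteness, which the paper leaves unspecified):

```python
import math

def speedup_ratio(N, d, K_active, eta, delta):
    """Approximate per-layer speed-up r_s of Fcn. 12."""
    M = d * d
    original = d * d * N                      # multiplications per output pixel
    compressed = d * d * math.log2(d) + delta * M * K_active + eta * M * N
    return original / compressed

# For many filters (N >> K'), r_s approaches N / (delta*K' + eta*N).
print(speedup_ratio(N=256, d=5, K_active=16, eta=0.05, delta=0.3))
```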
4 Experimental Results
Baselines and Models. We compared the proposed approach with four baseline approaches: Pruning [10], P+QH (Pruning + Quantization and Huffman encoding) [9], SVD [6], and XNOR-Net [16]. The evaluation was conducted on the MNIST and ILSVRC2012 datasets, and we applied the proposed compression approach to four benchmark CNNs: LeNet [14, 21], AlexNet [13], VGG-16 Net [19], and ResNet-50 [11]. All methods were implemented using MatConvNet [21] and run on NVIDIA Titan X graphics cards. Model parameters were stored and updated as 32-bit floating-point values.
Impact of parameters. As discussed above, the proposed compression method has several important parameters: λ, d, K, b, and Λ. We first tested their impact on network accuracy by conducting an experiment on MNIST [21], where the network has two convolutional layers and two fully-connected layers of size 5×5×1×20, 5×5×20×50, 4×4×50×500, and 1×1×500×10, respectively. The model accuracy was 99.06%. The compression results for different λ and d after fine-tuning for 20 epochs are shown in Fig. 2, in which K was set to 16. b was set to +∞, since even a relatively small value (e.g., b = 0.05) made no obvious contribution to the compression ratio yet caused an accuracy reduction. Λ was set to 500, making the average code length of weights in the frequency domain about 5 bits, slightly larger than that in [9] but more flexible and with relatively better performance. Note that all other training parameters (e.g., epochs, learning rates) used their default settings.
It can be seen from Fig. 2 that although a lower d slightly improves the compression ratio and speed-up ratio simultaneously, this comes at the cost of decreased overall network accuracy; thus, we kept d = max{d_i}, ∀i = 1, ..., p, in CNNpack. Overall, λ is clearly the most important parameter in the proposed scheme: it is sensitive but monotonic, so it only needs to be adjusted according to demand and restrictions. Furthermore, we tested the impact of the number of cluster centers K. As mentioned above, K is special in that its impact on performance is not intuitive: as K becomes larger, E becomes sparser but needs more space for storing the cluster centers U and indexes. Fig. 3 shows that K = 16 provides the best trade-off between compression performance and accuracy.
[Figure 2 plots accuracy, compression ratio, and speed-up ratio against λ ∈ [0.025, 0.055] for d = 3, 4, 5.]
Figure 2: The performance of the proposed approach with different λ and d.
[Figure 3 plots accuracy, compression ratio, and speed-up ratio against λ ∈ [0.025, 0.055] for K = 0, 16, 64, 128.]
Figure 3: The performance of the proposed approach with different numbers of cluster centers K.
We also report the compression results obtained by directly compressing the DCT frequency coefficients C of the original filters as before (i.e., K = 0, the black line in Fig. 3). It can be seen that the clustering number does not affect accuracy, but a suitable K does enhance the compression ratio. Another interesting phenomenon is that the speed-up ratio without decomposition is larger than that of the proposed scheme, because this network is extremely small and the clustering introduces additional computational cost, as shown in Fcn. 12. However, recent networks contain far more filters per convolutional layer, well above K = 16.
Based on the above analysis, we kept λ = 0.04 and K = 16 for this network (an accuracy of 99.14%). Accordingly, the compression ratio is r_c = 32.05× and the speed-up ratio is r_s = 8.34×, which is the best trade-off between accuracy and compression performance.
Filter visualization. The proposed algorithm operates in the frequency domain, and although we
do not need to invert the compressed net when calculating convolutions, we can reconstruct the
convolutional filters in the spatial domain to provide insights into our approach. Reconstructed
convolution filters are obtained from the LeNet on MNIST, as shown in Fig. 4.
Figure 4: Visualization of example filters learned on MNIST: (a) the original convolutional filters, (b)
filters after pruning, (c) convolutional filters compressed by the proposed algorithm.
The proposed approach is fundamentally different from the previously used pruning algorithm. According to Fig. 4(b), weights with smaller magnitudes are pruned, so their influence is simply discarded. In contrast, our proposed algorithm not only handles the smaller weights but also considers the impact of the larger weights. Most importantly, we accomplish the compression task by exploiting the underlying connections between all the weights in the convolutional filter (see Fig. 4(c)).
Compressing AlexNet and VGGNet on ImageNet. We next employed CNNpack for CNN compression on the ImageNet ILSVRC-2012 dataset [18], which contains over 1.2M training images and 50k validation images. First, we examined two conventional models: AlexNet [13], with over 61M parameters and a top-5 accuracy of 80.8%; and VGG-16 Net, which is much larger than AlexNet, with over 138M parameters and a top-5 accuracy of 90.1%. Table 1 shows detailed compression and speed-up ratios for AlexNet with λ = 0.04 and K = 16. The results for VGG-16 Net can be found in the supplementary materials. The reported multiplications are for computing one image.
Table 1: Compression statistics for AlexNet.

Layer   Num of Weights    Memory     r_c    Multiplication   r_s
conv1   11×11×3×96        0.13MB     878×   1.05×10^8        127×
conv2   5×5×48×256        1.17MB     94×    2.23×10^8        28×
conv3   3×3×256×384       3.37MB     568×   1.49×10^8        33×
conv4   3×3×192×384       2.53MB     42×    1.12×10^8        15×
conv5   3×3×192×256       1.68MB     43×    0.74×10^8        12×
fc6     6×6×256×4096      144MB      148×   0.37×10^8        100×
fc7     1×1×4096×4096     64MB       15×    0.16×10^8        8×
fc8     1×1×4096×1000     15.62MB    121×   0.04×10^8        60×
Total   60954656          232.52MB   39×    7.24×10^8        25×
We achieved a 39× compression ratio for AlexNet and a 46× compression ratio for VGG-16 Net. Layers with relatively larger filter sizes had larger compression ratios because they contain more subtle high-frequency coefficients. In contrast, the highest speed-up ratios were often obtained on layers whose filter number N was much larger than the filter size, e.g., the fc6 layer of AlexNet. We obtained about a 9× speed-up ratio on VGG-16 Net, lower than that on AlexNet: complexity scales with the feature map size, and the first several layers of VGG-16 definitely have the most multiplications, yet their filter numbers are relatively small and their compression ratios are all small, so the overall speed-up ratio is lower than that on AlexNet. Accordingly, when we set K = 0, the compression ratio and the speed-up ratio of AlexNet were close to 35× and 22×, and those of VGG-16 Net were near 28× and 7×. This reduction is due to these two networks being relatively large and containing many similar filters. Moreover, the filter number in each layer is larger than the number of cluster centers, i.e., N > K; thus, cluster centers can effectively reduce memory consumption and computational complexity simultaneously.
ResNet-50 on ImageNet. Here we discuss a more recent network, ResNet-50 [11], which has more than 150 layers overall and 54 convolutional layers. This model achieves a top-5 error of 7.71% and a top-1 error of 24.62% with only about 95MB of parameters [21]. Moreover, since most of its convolutional filters are 1×1, i.e., its architecture is very thin with few redundant weights, it is very hard to apply compression methods to it.

For the experiment on ResNet-50, we set K = 0, since for 1-dimensional filters the functionality of Fcn. 7 is similar to that of k-means clustering, making cluster centers dispensable for this model. Further, we set λ = 0.04 and therefore discarded about 84% of the original weights in the frequency domain. After fine-tuning, we obtained a 7.82% top-5 error on the ILSVRC2012 dataset. Fig. 5 shows the detailed compression statistics of ResNet-50 under CNNpack. In summary, the memory usage for storing its filters was squeezed by a factor of 12.28×.

Figure 5: Compression statistics for ResNet-50 (better viewed in color): (a) compression ratios of all convolutional layers; (b) speed-up ratios of all convolutional layers.
It is worth mentioning that larger filters seem to have more redundant weights and connections, because the compression ratios on layers with 1×1 filters are smaller than those on layers with 3×3 filters. On the other hand, these 1×1 filters account for a large proportion of the multiplications in ResNet; thus we obtained an overall speed-up of about 4× on this network.
Comparison with state-of-the-art methods. We detail a comparison with state-of-the-art methods for compressing DNNs in Table 2. CNNpack clearly achieves the best performance in terms of both the compression ratio (r_c) and the speed-up ratio (r_s). Note that although P+QH achieves a compression ratio similar to that of the proposed method, filters in their algorithm are stored in a modified CSC format that records index differences, which means they must be decoded before any calculation. Hence, the compression ratio of P+QH would be lower than that reported in [9] if we only consider online memory usage. In contrast, the compressed data produced by our method can be used directly for network calculation. In practice, online memory usage is the real restriction for mobile devices, and the proposed method is superior to previous works in terms of both the compression ratio and the speed-up ratio.
Table 2: An overall comparison of state-of-the-art methods for deep neural network compression and speed-up, where r_c is the compression ratio and r_s is the speed-up.

Model    Evaluation   Original   Pruning [10]   P+QH [9]   SVD [6]   XNOR [16]   CNNpack
AlexNet  r_c          1          9×             35×        5×        64×         39×
         r_s          1          -              -          2×        58×         25×
         top-1 err    41.8%      42.7%          42.7%      44.0%     56.8%       41.6%
         top-5 err    19.2%      19.6%          19.7%      20.5%     31.8%       19.2%
VGG16    r_c          1          13×            49×        -         -           46×
         r_s          1          -              3.5×       -         -           9.4×
         top-1 err    28.5%      31.3%          31.1%      -         -           29.7%
         top-5 err    9.9%       10.8%          10.9%      -         -           10.4%
5 Conclusion
Neural network compression techniques are desirable so that CNNs can be used on mobile devices. We therefore present an effective compression scheme in the DCT frequency domain, namely CNNpack. In contrast to state-of-the-art methods, we tackle the problem in the frequency domain, which offers the potential for higher compression and speed-up ratios. Moreover, we no longer consider each weight independently, since each frequency coefficient's calculation involves all weights in the spatial domain. Following the proposed compression approach, we explore a much cheaper convolution calculation based on the sparsity of the compressed net in the frequency domain. Although the compressed network produced by our approach is sparse in the frequency domain, it has the same functionality as the original network, since the filters in the spatial domain preserve their intrinsic structure. Our experiments show that both the compression ratio and the speed-up ratio are higher than those of state-of-the-art methods. The proposed CNNpack approach builds a bridge between traditional signal and image compression and CNN compression theory, allowing us to further explore CNN approaches in the frequency domain.
Acknowledgements This work was supported by the National Natural Science Foundation of China
under Grant NSFC 61375026 and 2015BAF15B00, and Australian Research Council Projects: FT130101457, DP-140102164 and LE-140100061.
References
[1] Nasir Ahmed, T. Natarajan, and Kamisetty R. Rao. Discrete cosine transform. IEEE Transactions on Computers, 100(1):90–93, 1974.
[2] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, 2014.
[3] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing convolutional neural networks. arXiv preprint arXiv:1506.04449, 2015.
[4] Wenlin Chen, James T. Wilson, Stephen Tyree, Kilian Q. Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. In ICML, 2015.
[5] Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or −1. arXiv preprint arXiv:1602.02830, 2016.
[6] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.
[7] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[9] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. In ICLR, 2016.
[10] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NIPS, 2015.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[12] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, 2014.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In CVPR, 2015.
[16] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
[17] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[18] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[19] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[20] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
[21] Andrea Vedaldi and Karel Lenc. MatConvNet: Convolutional neural networks for MATLAB. In Proceedings of the 23rd Annual ACM Conference on Multimedia, 2015.
[22] Gregory K. Wallace. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1):xviii–xxxiv, 1992.
5,959 | 6,391 | Generative Adversarial Imitation Learning
Jonathan Ho
OpenAI
[email protected]
Stefano Ermon
Stanford University
[email protected]
Abstract
Consider learning a policy from example expert behavior, without interaction with
the expert or access to a reinforcement signal. One approach is to recover the
expert's cost function with inverse reinforcement learning, then extract a policy
from that cost function with reinforcement learning. This approach is indirect
and can be slow. We propose a new general framework for directly extracting a
policy from data as if it were obtained by reinforcement learning following inverse
reinforcement learning. We show that a certain instantiation of our framework
draws an analogy between imitation learning and generative adversarial networks,
from which we derive a model-free imitation learning algorithm that obtains significant performance gains over existing model-free methods in imitating complex
behaviors in large, high-dimensional environments.
1 Introduction
We are interested in a specific setting of imitation learning, the problem of learning to perform a task from expert demonstrations, in which the learner is given only samples of trajectories from
the expert, is not allowed to query the expert for more data while training, and is not provided a
reinforcement signal of any kind. There are two main approaches suitable for this setting: behavioral
cloning [18], which learns a policy as a supervised learning problem over state-action pairs from
expert trajectories; and inverse reinforcement learning [23, 16], which finds a cost function under
which the expert is uniquely optimal.
Behavioral cloning, while appealingly simple, only tends to succeed with large amounts of data, due
to compounding error caused by covariate shift [21, 22]. Inverse reinforcement learning (IRL), on
the other hand, learns a cost function that prioritizes entire trajectories over others, so compounding
error, a problem for methods that fit single-timestep decisions, is not an issue. Accordingly, IRL has
succeeded in a wide range of problems, from predicting behaviors of taxi drivers [29] to planning
footsteps for quadruped robots [20].
Unfortunately, many IRL algorithms are extremely expensive to run, requiring reinforcement learning
in an inner loop. Scaling IRL methods to large environments has thus been the focus of much recent
work [6, 13]. Fundamentally, however, IRL learns a cost function, which explains expert behavior
but does not directly tell the learner how to act. Given that the learner's true goal often is to take actions imitating the expert (indeed, many IRL algorithms are evaluated on the quality of the optimal actions of the costs they learn), why, then, must we learn a cost function, if doing so possibly incurs significant computational expense yet fails to directly yield actions?
We desire an algorithm that tells us explicitly how to act by directly learning a policy. To develop such
an algorithm, we begin in Section 3, where we characterize the policy given by running reinforcement
learning on a cost function learned by maximum causal entropy IRL [29, 30]. Our characterization
introduces a framework for directly learning policies from data, bypassing any intermediate IRL step.
Then, we instantiate our framework in Sections 4 and 5 with a new model-free imitation learning
algorithm. We show that our resulting algorithm is intimately connected to generative adversarial
networks [8], a technique from the deep learning community that has led to recent successes in
modeling distributions of natural images: our algorithm harnesses generative adversarial training to fit
distributions of states and actions defining expert behavior. We test our algorithm in Section 6, where
we find that it outperforms competing methods by a wide margin in training policies for complex,
high-dimensional physics-based control tasks over various amounts of expert data.
2 Background

Preliminaries R̄ will denote the extended real numbers R ∪ {∞}. Section 3 will work with finite state and action spaces S and A, but our algorithms and experiments later in the paper will run in high-dimensional continuous environments. Π is the set of all stationary stochastic policies that take actions in A given states in S; successor states are drawn from the dynamics model P(s′|s, a). We work in the γ-discounted infinite horizon setting, and we will use an expectation with respect to a policy π ∈ Π to denote an expectation with respect to the trajectory it generates: E_π[c(s, a)] ≜ E[∑_{t=0}^{∞} γ^t c(s_t, a_t)], where s_0 ∼ p_0, a_t ∼ π(·|s_t), and s_{t+1} ∼ P(·|s_t, a_t) for t ≥ 0. We will use Ê_τ to denote an empirical expectation with respect to trajectory samples τ, and we will always use π_E to refer to the expert policy.
Inverse reinforcement learning Suppose we are given an expert policy π_E that we wish to rationalize with IRL. For the remainder of this paper, we will adopt and assume the existence of solutions of maximum causal entropy IRL [29, 30], which fits a cost function from a family of functions C with the optimization problem

maximize_{c∈C} ( min_{π∈Π} −H(π) + E_π[c(s, a)] ) − E_{π_E}[c(s, a)]   (1)

where H(π) ≜ E_π[−log π(a|s)] is the γ-discounted causal entropy [3] of the policy π. In practice, π_E will only be provided as a set of trajectories sampled by executing π_E in the environment, so the expected cost of π_E in Eq. (1) is estimated using these samples. Maximum causal entropy IRL looks for a cost function c ∈ C that assigns low cost to the expert policy and high cost to other policies, thereby allowing the expert policy to be found via a certain reinforcement learning procedure:

RL(c) = argmin_{π∈Π} −H(π) + E_π[c(s, a)]   (2)

which maps a cost function to high-entropy policies that minimize the expected cumulative cost.
3 Characterizing the induced optimal policy
To begin our search for an imitation learning algorithm that both bypasses an intermediate IRL step and is suitable for large environments, we will study policies found by reinforcement learning on costs learned by IRL on the largest possible set of cost functions C in Eq. (1): all functions R^{S×A} = {c : S × A → R}. Using expressive cost function classes, like Gaussian processes [14] and neural networks [6], is crucial to properly explain complex expert behavior without meticulously hand-crafted features. Here, we investigate the best IRL can do with respect to expressiveness by examining its capabilities with C = R^{S×A}.

Of course, with such a large C, IRL can easily overfit when provided a finite dataset. Therefore, we will incorporate a (closed, proper) convex cost function regularizer ψ : R^{S×A} → R̄ into our study. Note that convexity is not a particularly restrictive requirement: ψ must be convex as a function defined on all of R^{S×A}, not as a function defined on a small parameter space; indeed, the cost regularizers of Finn et al. [6], effective for a range of robotic manipulation tasks, satisfy this requirement. Interestingly, ψ will play a central role in our discussion and will not serve as a nuisance in our analysis.

Let us define an IRL primitive procedure, which finds a cost function such that the expert performs better than all other policies, with the cost regularized by ψ:

IRL_ψ(π_E) = argmax_{c ∈ R^{S×A}} −ψ(c) + ( min_{π∈Π} −H(π) + E_π[c(s, a)] ) − E_{π_E}[c(s, a)]   (3)
Let c̃ ∈ IRL_ψ(π_E). We are interested in a policy given by RL(c̃); this is the policy given by running reinforcement learning on the output of IRL. To characterize RL(c̃), let us first define, for a policy π ∈ Π, its occupancy measure ρ_π : S × A → R as ρ_π(s, a) = π(a|s) ∑_{t=0}^{∞} γ^t P(s_t = s | π). The occupancy measure can be interpreted as the unnormalized distribution of state-action pairs that an agent encounters when navigating the environment with the policy π, and it allows us to write E_π[c(s, a)] = ∑_{s,a} ρ_π(s, a) c(s, a) for any cost function c. We will also need the concept of a convex conjugate: for a function f : R^{S×A} → R̄, its convex conjugate f* : R^{S×A} → R̄ is given by f*(x) = sup_{y ∈ R^{S×A}} x^T y − f(y).
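For intuition, in a tabular environment ρ_π can be estimated from rollouts by discounted visitation counting. A minimal sketch (ours, assuming hashable states and actions):

```python
import collections

def occupancy_estimate(rollouts, gamma):
    """Monte-Carlo estimate of rho_pi(s, a) = pi(a|s) * sum_t gamma^t P(s_t = s | pi).

    rollouts: trajectories sampled from pi, each a list of (state, action)
              pairs; states and actions are assumed hashable (tabular case).
    """
    rho = collections.defaultdict(float)
    for traj in rollouts:
        for t, (s, a) in enumerate(traj):
            rho[(s, a)] += gamma ** t      # each visit at time t contributes gamma^t
    n = len(rollouts)
    return {sa: v / n for sa, v in rho.items()}
```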
Now, we are prepared to characterize RL(c̃), the policy learned by RL on the cost recovered by IRL:

Proposition 3.1. RL ∘ IRL_ψ(π_E) = argmin_{π∈Π} −H(π) + ψ*(ρ_π − ρ_{π_E})   (4)

The proof of Proposition 3.1 can be found in Appendix A.1. It relies on the observation that the optimal cost function and policy form a saddle point of a certain function. IRL finds one coordinate of this saddle point, and running RL on the output of IRL reveals the other coordinate.
Proposition 3.1 tells us that ψ-regularized inverse reinforcement learning implicitly seeks a policy whose occupancy measure is close to the expert's, as measured by the convex conjugate ψ*. Enticingly, this suggests that various settings of ψ lead to various imitation learning algorithms that directly solve the optimization problem given by Proposition 3.1. We explore such algorithms in Sections 4 and 5, where we show that certain settings of ψ lead to both existing algorithms and a novel one.
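As a quick sanity check of Eq. (4), note how the conjugate behaves for a constant regularizer (the special case studied next); this is our own one-line derivation:

```latex
\psi \equiv k \;\Longrightarrow\;
\psi^*(\rho_\pi - \rho_{\pi_E})
= \sup_{c \in \mathbb{R}^{\mathcal{S}\times\mathcal{A}}}
    \sum_{s,a} c(s,a)\,\bigl(\rho_\pi(s,a) - \rho_{\pi_E}(s,a)\bigr) - k
= \begin{cases} -k & \text{if } \rho_\pi = \rho_{\pi_E}, \\ +\infty & \text{otherwise,} \end{cases}
```

so minimizing Eq. (4) then amounts to exact occupancy measure matching.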
The special case when ψ is a constant function is particularly illuminating, so we state and show it directly using concepts from convex optimization.

Proposition 3.2. Suppose ρ_{π_E} > 0. If ψ is a constant function, c̃ ∈ IRL_ψ(π_E), and π̃ ∈ RL(c̃), then ρ_{π̃} = ρ_{π_E}.

In other words, if there were no cost regularization at all, the recovered policy will exactly match the expert's occupancy measure. (The condition ρ_{π_E} > 0, inherited from Ziebart et al. [30], simplifies our discussion and in fact guarantees the existence of c̃ ∈ IRL_ψ(π_E). Elsewhere in the paper, as mentioned in Section 2, we assume the IRL problem has a solution.) To show Proposition 3.2, we need the basic result that the set of valid occupancy measures D ≜ {ρ_π : π ∈ Π} can be written as a feasible set of affine constraints [19]: if p_0(s) is the distribution of starting states and P(s′|s, a) is the dynamics model, then

D = { ρ : ρ ≥ 0 and ∑_a ρ(s, a) = p_0(s) + γ ∑_{s′,a} P(s|s′, a) ρ(s′, a) ∀ s ∈ S }.

Furthermore, there is a one-to-one correspondence between Π and D:
Lemma 3.1 (Theorem 2 of Syed et al. [27]). If ρ ∈ D, then ρ is the occupancy measure for π_ρ(a|s) ≜ ρ(s, a) / ∑_{a′} ρ(s, a′), and π_ρ is the only policy whose occupancy measure is ρ.
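Lemma 3.1's construction is directly computable; continuing the tabular sketch above (ours), a policy is recovered from an occupancy measure by normalizing over actions:

```python
import collections

def policy_from_occupancy(rho):
    """Recover pi_rho(a|s) = rho(s, a) / sum_{a'} rho(s, a') (Lemma 3.1)."""
    state_mass = collections.defaultdict(float)
    for (s, a), v in rho.items():
        state_mass[s] += v                 # sum_{a'} rho(s, a')
    return {(s, a): v / state_mass[s]
            for (s, a), v in rho.items() if state_mass[s] > 0}
```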
We are therefore justified in writing π_ρ to denote the unique policy for an occupancy measure ρ. We also need a lemma that lets us speak about causal entropies of occupancy measures:

Lemma 3.2. Let H̄(ρ) = −∑_{s,a} ρ(s, a) log( ρ(s, a) / ∑_{a′} ρ(s, a′) ). Then H̄ is strictly concave, and for all π ∈ Π and ρ ∈ D, we have H(π) = H̄(ρ_π) and H̄(ρ) = H(π_ρ).
The proof of this lemma is in Appendix A.1. Lemma 3.1 and Lemma 3.2 together allow us to freely switch between policies and occupancy measures when considering functions involving causal entropy and expected costs, as in the following lemma:

Lemma 3.3. If L(π, c) = −H(π) + E_π[c(s, a)] and L̄(ρ, c) = −H̄(ρ) + ∑_{s,a} ρ(s, a) c(s, a), then, for all cost functions c, L(π, c) = L̄(ρ_π, c) for all policies π ∈ Π, and L̄(ρ, c) = L(π_ρ, c) for all occupancy measures ρ ∈ D.
Now, we are ready to verify Proposition 3.2.

Proof of Proposition 3.2. Define L̄(ρ, c) = −H̄(ρ) + ∑_{s,a} c(s, a)(ρ(s, a) − ρ_E(s, a)). Given that ψ is a constant function, we have the following, due to Lemma 3.3:

c̃ ∈ IRL_ψ(π_E) = argmax_{c ∈ R^{S×A}} min_{π∈Π} −H(π) + E_π[c(s, a)] − E_{π_E}[c(s, a)] + const.   (5)
= argmax_{c ∈ R^{S×A}} min_{ρ∈D} −H̄(ρ) + ∑_{s,a} ρ(s, a) c(s, a) − ∑_{s,a} ρ_E(s, a) c(s, a) = argmax_{c ∈ R^{S×A}} min_{ρ∈D} L̄(ρ, c).   (6)

This is the dual of the optimization problem

minimize_{ρ∈D} −H̄(ρ) subject to ρ(s, a) = ρ_E(s, a) ∀ s ∈ S, a ∈ A   (7)

with Lagrangian L̄, for which the costs c(s, a) serve as dual variables for the equality constraints. Thus, c̃ is a dual optimum for (7). In addition, strong duality holds for (7): D is compact and convex, −H̄ is convex, and, since ρ_E > 0, there exists a feasible point in the relative interior of the domain D. Moreover, Lemma 3.2 guarantees that −H̄ is in fact strictly convex, so the primal optimum can be uniquely recovered from the dual optimum [4, Section 5.5.5] via ρ̃ = argmin_{ρ∈D} L̄(ρ, c̃) = argmin_{ρ∈D} −H̄(ρ) + ∑_{s,a} c̃(s, a) ρ(s, a) = ρ_E, where the first equality indicates that ρ̃ is the unique minimizer of L̄(·, c̃), and the third follows from the constraints in the primal problem (7). But if π̃ ∈ RL(c̃), then Lemma 3.3 implies ρ_{π̃} = ρ̃ = ρ_E. ∎
Let us summarize our conclusions. First, IRL is a dual of an occupancy measure matching
problem, and the recovered cost function is the dual optimum. Classic IRL algorithms that solve
reinforcement learning repeatedly in an inner loop, such as the algorithm of Ziebart et al. [29] that
runs a variant of value iteration in an inner loop, can be interpreted as a form of dual ascent, in which
one repeatedly solves the primal problem (reinforcement learning) with fixed dual values (costs).
Dual ascent is effective if solving the unconstrained primal is efficient, but in the case of IRL, it
amounts to reinforcement learning! Second, the induced optimal policy is the primal optimum.
The induced optimal policy is obtained by running RL after IRL, which is exactly the act of recovering
the primal optimum from the dual optimum; that is, optimizing the Lagrangian with the dual variables
fixed at the dual optimum values. Strong duality implies that this induced optimal policy is indeed
the primal optimum, and therefore matches occupancy measures with the expert. IRL is traditionally
defined as the act of finding a cost function such that the expert policy is uniquely optimal, but we can alternatively view IRL as a procedure that tries to induce a policy that matches the expert's occupancy measure.
4 Practical occupancy measure matching
We saw in Proposition 3.2 that if ψ is constant, the resulting primal problem (7) simply matches occupancy measures with the expert at all states and actions. Such an algorithm is not practically useful. In reality, the expert trajectory distribution will be provided only as a finite set of samples, so in large environments, most of the expert's occupancy measure values will be small, and exact occupancy measure matching will force the learned policy to rarely visit these unseen state-action pairs simply due to lack of data. Furthermore, in the cases in which we would like to use function approximation to learn parameterized policies π_θ, the resulting optimization problem of finding an appropriate θ would have an intractably large number of constraints when the environment is large: as many constraints as points in S × A.
Keeping in mind that we wish to eventually develop an imitation learning algorithm suitable for large environments, we would like to relax Eq. (7) into the following form, motivated by Proposition 3.1:

minimize_π d_ψ(ρ_π, ρ_E) − H(π)   (8)

by modifying the IRL regularizer ψ so that d_ψ(ρ_π, ρ_E) ≜ ψ*(ρ_π − ρ_E) smoothly penalizes violations in difference between the occupancy measures.
in difference between the occupancy measures.
Entropy-regularized apprenticeship learning It turns out that with certain settings of ?, Eq. (8)
takes on the form of regularized variants of existing apprenticeship learning algorithms, which
indeed do scale to large environments with parameterized policies [10]. For a class of cost functions
C ? RS?A , an apprenticeship learning algorithm finds a policy that performs better than the expert
across C, by optimizing the objective
minimize max E? [c(s, a)] ? E?E [c(s, a)]
?
c?C
(9)
Classic apprenticeship learning algorithms restrict C to convex sets given by linear combinations
of basis functions f1 , . . . , fd , which give rise a feature vector f (s, a) = [f1 (s, a), . . . , fd (s, a)] for
each state-action pair. Abbeel and Ng [1] and Syed et al. [27] use, respectively,
P
P
P
Clinear = { i wi fi : kwk2 ? 1} and Cconvex = { i wi fi :
(10)
i wi = 1, wi ? 0 ?i} .
4
C_linear leads to feature expectation matching [1], which minimizes the ℓ2 distance between expected feature vectors: max_{c∈C_linear} E_π[c(s, a)] − E_{π_E}[c(s, a)] = ‖E_π[f(s, a)] − E_{π_E}[f(s, a)]‖_2. Meanwhile, C_convex leads to MWAL [26] and LPAL [27], which minimize the worst-case excess cost among the individual basis functions, as max_{c∈C_convex} E_π[c(s, a)] − E_{π_E}[c(s, a)] = max_{i∈{1,...,d}} E_π[f_i(s, a)] − E_{π_E}[f_i(s, a)].
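For C_linear, the inner maximization of Eq. (9) has a closed form: the worst-case cost direction is the normalized gap between expected features. A minimal sketch (ours):

```python
import numpy as np

def best_response_linear(feat_pi, feat_expert):
    """argmax over C_linear of E_pi[c] - E_piE[c], for c(s, a) = w . f(s, a).

    feat_pi, feat_expert: arrays of per-sample feature vectors f(s, a)
    Returns w* = gap / ||gap||_2, which attains the value ||gap||_2.
    """
    gap = feat_pi.mean(axis=0) - feat_expert.mean(axis=0)
    return gap / (np.linalg.norm(gap) + 1e-12)
```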
We now show how Eq. (9) is a special case of Eq. (8) with a certain setting of ψ. With the indicator function δ_C : R^{S×A} → R̄, defined by δ_C(c) = 0 if c ∈ C and +∞ otherwise, we can write the apprenticeship learning objective (9) as

max_{c∈C} E_π[c(s, a)] − E_{π_E}[c(s, a)] = max_{c∈R^{S×A}} −δ_C(c) + ∑_{s,a} (ρ_π(s, a) − ρ_{π_E}(s, a)) c(s, a) = δ_C*(ρ_π − ρ_{π_E})
Therefore, we see that entropy-regularized apprenticeship learning

minimize_π −H(π) + max_{c∈C} E_π[c(s, a)] − E_{π_E}[c(s, a)]   (11)

is equivalent to performing RL following IRL with cost regularizer ψ = δ_C, which forces the implicit IRL procedure to recover a cost function lying in C. Note that we can scale the policy's entropy regularization strength in Eq. (11) by scaling C by a constant λ as {λc : c ∈ C}, recovering the original apprenticeship objective (9) by taking λ → ∞.
Cons of apprenticeship learning It is known that apprenticeship learning algorithms generally do not recover expert-like policies if C is too restrictive [27, Section 1], which is often the case for the linear subspaces used by feature expectation matching, MWAL, and LPAL, unless the basis functions f_1, ..., f_d are very carefully designed. Intuitively, unless the true expert cost function (assuming it exists) lies in C, there is no guarantee that if π performs better than π_E on all of C, then π equals π_E. With the aforementioned insight based on Proposition 3.1 that apprenticeship learning is equivalent to RL following IRL, we can understand exactly why apprenticeship learning may fail to imitate: it forces π_E to be encoded as an element of C. If C does not include a cost function that explains expert behavior well, then attempting to recover a policy from such an encoding will not succeed.
Pros of apprenticeship learning While restrictive cost classes C may not lead to exact imitation, apprenticeship learning with such C can scale to large state and action spaces with policy function approximation. Ho et al. [10] rely on the following policy gradient formula for the apprenticeship objective (9) for a parameterized policy π_θ:

∇_θ max_{c∈C} E_{π_θ}[c(s, a)] − E_{π_E}[c(s, a)] = ∇_θ E_{π_θ}[c*(s, a)] = E_{π_θ}[∇_θ log π_θ(a|s) Q_{c*}(s, a)]   (12)

where c* = argmax_{c∈C} E_{π_θ}[c(s, a)] − E_{π_E}[c(s, a)] and Q_{c*}(s̄, ā) = E_{π_θ}[c*(s̄, ā) | s_0 = s̄, a_0 = ā].
Observing that Eq. (12) is the policy gradient for a reinforcement learning objective with cost c*, Ho et al. propose an algorithm that alternates between two steps:
1. Sample trajectories of the current policy π_{θ_i} by simulating in the environment, and fit a cost function c*_i, as defined in Eq. (12). For the cost classes C_linear and C_convex (10), this cost fitting amounts to evaluating simple analytical expressions [10].
2. Form a gradient estimate with Eq. (12) with c*_i and the sampled trajectories, and take a trust region policy optimization (TRPO) [24] step to produce θ_{i+1}.
This algorithm relies crucially on the TRPO policy step, which is a natural gradient step constrained to ensure that π_{θ_{i+1}} does not stray too far from π_{θ_i}, as measured by the KL divergence between the two policies averaged over the states in the sampled trajectories. This carefully constructed step scheme ensures that the algorithm does not diverge due to high noise in estimating the gradient (12). We refer the reader to Schulman et al. [24] for more details on TRPO.
With the TRPO step scheme, Ho et al. were able to train large neural network policies for apprenticeship learning with the linear cost function classes (10) in environments with hundreds of observation dimensions. Their use of these linear cost function classes, however, limits their approach to settings in which expert behavior is well-described by such classes. We will draw upon their algorithm to develop an imitation learning method that both scales to large environments and imitates arbitrarily complex expert behavior. To do so, we first turn to proposing a new regularizer ψ that wields more expressive power than the regularizers corresponding to C_linear and C_convex (10).
5 Generative adversarial imitation learning
As discussed in Section 4, the constant regularizer leads to an imitation learning algorithm that exactly matches occupancy measures, but is intractable in large environments. The indicator regularizers for the linear cost function classes (10), on the other hand, lead to algorithms incapable of exactly matching occupancy measures without careful tuning, but are tractable in large environments. We propose the following new cost regularizer that combines the best of both worlds, as we will show in the coming sections:

ψ_GA(c) ≜ E_{π_E}[g(c(s, a))] if c < 0, and +∞ otherwise,  where g(x) = −x − log(1 − e^x) if x < 0, and +∞ otherwise.   (13)
This regularizer places a low penalty on cost functions c that assign an amount of negative cost to expert state-action pairs; if c, however, assigns large costs (close to zero, which is the upper bound for costs feasible for ψ_GA) to the expert, then ψ_GA will heavily penalize c. An interesting property of ψ_GA is that it is an average over expert data, and therefore can adjust to arbitrary expert datasets. The indicator regularizers δ_C, used by the linear apprenticeship learning algorithms described in Section 4, are always fixed, and cannot adapt to data as ψ_GA can. Perhaps the most important difference between ψ_GA and δ_C, however, is that δ_C forces costs to lie in a small subspace spanned by finitely many basis functions, whereas ψ_GA allows for any cost function, as long as it is negative everywhere.
Our choice of ψ_GA is motivated by the following fact, shown in the appendix (Corollary A.1.1):

ψ*_GA(ρ_π − ρ_{π_E}) = sup_{D ∈ (0,1)^{S×A}} E_π[log(D(s, a))] + E_{π_E}[log(1 − D(s, a))]   (14)

where the supremum ranges over discriminative classifiers D : S × A → (0, 1). Equation (14) is proportional to the optimal negative log loss of the binary classification problem of distinguishing between state-action pairs of π and π_E. It turns out that this optimal loss is, up to a constant shift and scaling, the Jensen-Shannon divergence D_JS(ρ̄_π, ρ̄_{π_E}) ≜ D_KL(ρ̄_π ‖ (ρ̄_π + ρ̄_{π_E})/2) + D_KL(ρ̄_{π_E} ‖ (ρ̄_π + ρ̄_{π_E})/2), which is a squared metric between the normalized occupancy distributions ρ̄_π = (1 − γ)ρ_π and ρ̄_{π_E} = (1 − γ)ρ_{π_E} [8, 17]. Treating the causal entropy H as a policy regularizer controlled by λ ≥ 0 and dropping the 1 − γ occupancy measure normalization for clarity, we obtain a new imitation learning algorithm:

minimize_π ψ*_GA(ρ_π − ρ_{π_E}) − λH(π) = D_JS(ρ_π, ρ_{π_E}) − λH(π),   (15)

which finds a policy whose occupancy measure minimizes the Jensen-Shannon divergence to the expert's. Equation (15) minimizes a true metric between occupancy measures, so, unlike linear apprenticeship learning algorithms, it can imitate expert policies exactly.
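In sample form, the supremum in Eq. (14) is over the (negated) binary cross-entropy of a classifier that separates policy pairs from expert pairs. A minimal sketch (ours) of the quantity being optimized:

```python
import numpy as np

def saddle_objective(D_policy, D_expert):
    """Sample estimate of E_pi[log D(s,a)] + E_piE[log(1 - D(s,a))].

    D_policy: discriminator outputs in (0, 1) on state-action pairs from pi
    D_expert: discriminator outputs on expert state-action pairs
    """
    return np.log(D_policy).mean() + np.log(1.0 - D_expert).mean()
```

The saddle-point algorithm described next ascends this value in D and descends it in π.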
Algorithm Equation (15) draws a connection between imitation learning and generative adversarial networks [8], which train a generative model G by having it confuse a discriminative classifier D. The job of D is to distinguish between the distribution of data generated by G and the true data distribution. When D cannot distinguish data generated by G from the true data, then G has successfully matched the true data. In our setting, the learner's occupancy measure ρ_π is analogous to the data distribution generated by G, and the expert's occupancy measure ρ_{π_E} is analogous to the true data distribution.
We now present a practical imitation learning algorithm, called generative adversarial imitation learning or GAIL (Algorithm 1), designed to work in large environments. GAIL solves Eq. (15) by finding a saddle point (π, D) of the expression

E_π[log(D(s, a))] + E_{π_E}[log(1 − D(s, a))] − λH(π)   (16)

with both π and D represented using function approximators: GAIL fits a parameterized policy π_θ, with weights θ, and a discriminator network D_w : S × A → (0, 1), with weights w. GAIL alternates between an Adam [11] gradient step on w to increase Eq. (16) with respect to D, and a TRPO step on θ to decrease Eq. (16) with respect to π (we derive an estimator for the causal entropy gradient ∇_θ H(π_θ) in Appendix A.2). The TRPO step serves the same purpose as it does with the apprenticeship learning algorithm of Ho et al. [10]: it prevents the policy from changing too much due to noise in the policy gradient. The discriminator network can be interpreted as a local cost function providing learning signal to the policy; specifically, taking a policy step that decreases expected cost with respect to the cost function c(s, a) = log D(s, a) will move toward expert-like regions of state-action space, as classified by the discriminator.
Algorithm 1 Generative adversarial imitation learning
1: Input: Expert trajectories τ_E ∼ π_E; initial policy and discriminator parameters θ_0, w_0
2: for i = 0, 1, 2, ... do
3:   Sample trajectories τ_i ∼ π_{θ_i}
4:   Update the discriminator parameters from w_i to w_{i+1} with the gradient
       Ê_{τ_i}[∇_w log(D_w(s, a))] + Ê_{τ_E}[∇_w log(1 − D_w(s, a))]   (17)
5:   Take a policy step from θ_i to θ_{i+1}, using the TRPO rule with cost function log(D_{w_{i+1}}(s, a)). Specifically, take a KL-constrained natural gradient step with
       Ê_{τ_i}[∇_θ log π_θ(a|s) Q(s, a)] − λ∇_θ H(π_θ),
       where Q(s̄, ā) = Ê_{τ_i}[log(D_{w_{i+1}}(s, a)) | s_0 = s̄, a_0 = ā]   (18)
6: end for
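For concreteness, here is a compressed sketch of one iteration of Algorithm 1 (ours, not the authors' released code): rollout, discounted_cumsum, and entropy are hypothetical helpers, and a plain REINFORCE-style update stands in for the TRPO step of line 5.

```python
import torch

def gail_iteration(policy, disc, expert_sa, env, d_opt, p_opt, lam=1e-3, gamma=0.995):
    # Step 3: sample trajectories tau_i ~ pi_theta_i (hypothetical helper).
    states, actions, logps = rollout(policy, env)

    # Step 4: ascend Eq. (17), i.e. descend its negation.
    d_pi = disc(states, actions)                       # D_w(s, a) on policy pairs
    d_ex = disc(*expert_sa)                            # D_w(s, a) on expert pairs
    d_loss = -(torch.log(d_pi).mean() + torch.log(1 - d_ex).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Step 5: policy step with surrogate cost log D_{w_{i+1}}(s, a), as in Eq. (18).
    with torch.no_grad():
        cost = torch.log(disc(states, actions))
    q = discounted_cumsum(cost, gamma)                 # hypothetical Q(s, a) estimate
    p_loss = (logps * q).mean() - lam * entropy(policy, states)  # descend cost, keep entropy
    p_opt.zero_grad(); p_loss.backward(); p_opt.step()
```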
6 Experiments
We evaluated GAIL against baselines on 9 physics-based control tasks, ranging from low-dimensional control tasks from the classic RL literature (the cartpole [2], acrobot [7], and mountain car [15]) to difficult high-dimensional tasks such as 3D humanoid locomotion, solved only recently by model-free reinforcement learning [25, 24]. All environments, other than the classic control tasks, were simulated with MuJoCo [28]. See Appendix B for a complete description of all the tasks.
Each task comes with a true cost function, defined in the OpenAI Gym [5]. We first generated expert
behavior for these tasks by running TRPO [24] on these true cost functions to create expert policies.
Then, to evaluate imitation performance with respect to sample complexity of expert data, we sampled
datasets of varying trajectory counts from the expert policies. The trajectories constituting each
dataset each consisted of about 50 state-action pairs. We tested GAIL against three baselines:
1. Behavioral cloning: a given dataset of state-action pairs is split into 70% training data and
30% validation data. The policy is trained with supervised learning, using Adam [11] with
minibatches of 128 examples, until validation error stops decreasing.
2. Feature expectation matching (FEM): the algorithm of Ho et al. [10] using the cost function class C_linear (10) of Abbeel and Ng [1].
3. Game-theoretic apprenticeship learning (GTAL): the algorithm of Ho et al. [10] using the cost function class C_convex (10) of Syed and Schapire [26].
We used all algorithms to train policies of the same neural network architecture for all tasks: two
hidden layers of 100 units each, with tanh nonlinearities in between. The discriminator networks
for GAIL also used the same architecture. All networks were always initialized randomly at the
start of each trial. For each task, we gave FEM, GTAL, and GAIL exactly the same amount of
environment interaction for training. We ran all algorithms 5-7 times over different random seeds in
all environments except Humanoid, due to time restrictions.
Figure 1 depicts the results, and Appendix B provides exact performance numbers and details of our
experiment pipeline, including expert data sampling and algorithm hyperparameters. We found that on
the classic control tasks (cartpole, acrobot, and mountain car), behavioral cloning generally suffered
in expert data efficiency compared to FEM and GTAL, which for the most part were able produce
policies with near-expert performance with a wide range of dataset sizes, albeit with large variance
over different random initializations of the policy. On these tasks, GAIL consistently produced
policies performing better than behavioral cloning, FEM, and GTAL. However, behavioral cloning
performed excellently on the Reacher task, on which it was more sample efficient than GAIL. We
were able to slightly improve GAIL's performance on Reacher using causal entropy regularization: in the 4-trajectory setting, the improvement from λ = 0 to λ = 10^-3 was statistically significant over training reruns, according to a one-sided Wilcoxon rank-sum test with p = .05. We used no causal entropy regularization for all other tasks.
[Figure 1 plots, for Cartpole, Acrobot, Mountain Car, HalfCheetah, Hopper, Reacher, Walker, Ant, and Humanoid, scaled performance against the number of expert trajectories in the dataset, comparing the expert, a random policy, behavioral cloning, FEM, GTAL, and GAIL; panel (b) compares GAIL with λ = 0, 10^-3, and 10^-2 on Reacher.]
Figure 1: (a) Performance of learned policies. The y-axis is negative cost, scaled so that the expert achieves 1 and a random policy achieves 0. (b) Causal entropy regularization λ on Reacher. Except for Humanoid, shading indicates standard deviation over 5-7 reruns.
On the other MuJoCo environments, GAIL almost always achieved at least 70% of expert performance
for all dataset sizes we tested and reached it exactly with the larger datasets, with very little variance
among random seeds. The baseline algorithms generally could not reach expert performance even
with the largest datasets. FEM and GTAL performed poorly for Ant, producing policies consistently
worse than a policy that chooses actions uniformly at random. Behavioral cloning was able to reach
satisfactory performance with enough data on HalfCheetah, Hopper, Walker, and Ant, but was unable
to achieve more than 60% for Humanoid, on which GAIL achieved exact expert performance for all
tested dataset sizes.
7 Discussion and outlook
As we demonstrated, GAIL is generally quite sample efficient in terms of expert data. However, it is
not particularly sample efficient in terms of environment interaction during training. The number
of such samples required to estimate the imitation objective gradient (18) was comparable to the
number needed for TRPO to train the expert policies from reinforcement signals. We believe that we
could significantly improve learning speed for GAIL by initializing policy parameters with behavioral
cloning, which requires no environment interaction at all.
Fundamentally, our method is model free, so it will generally need more environment interaction than
model-based methods. Guided cost learning [6], for instance, builds upon guided policy search [12]
and inherits its sample efficiency, but also inherits its requirement that the model is well-approximated
by iteratively fitted time-varying linear dynamics. Interestingly, both GAIL and guided cost learning
alternate between policy optimization steps and cost fitting (which we called discriminator fitting),
even though the two algorithms are derived completely differently.
Our approach builds upon a vast line of work on IRL [29, 1, 27, 26], and hence, just like IRL,
our approach does not interact with the expert during training. Our method explores randomly
to determine which actions bring a policy's occupancy measure closer to the expert's, whereas
methods that do interact with the expert, like DAgger [22], can simply ask the expert for such actions.
Ultimately, we believe that a method that combines well-chosen environment models with expert
interaction will win in terms of sample complexity of both expert data and environment interaction.
Acknowledgments
We thank Jayesh K. Gupta, John Schulman, and the anonymous reviewers for assistance, advice,
and critique. This work was supported by the SAIL-Toyota Center for AI Research and by a NSF
Graduate Research Fellowship (grant no. DGE-114747).
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the 21st International Conference on Machine Learning, 2004.
[2] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. Systems, Man and Cybernetics, IEEE Transactions on, (5):834–846, 1983.
[3] M. Bloem and N. Bambos. Infinite time horizon maximum causal entropy inverse reinforcement learning. In Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, pages 4911–4916. IEEE, 2014.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
[6] C. Finn, S. Levine, and P. Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[7] A. Geramifard, C. Dann, R. H. Klein, W. Dabney, and J. P. How. RLPy: A value-function-based reinforcement learning framework for education and research. JMLR, 2015.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
[9] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms, volume 305. Springer, 1996.
[10] J. Ho, J. K. Gupta, and S. Ermon. Model-free imitation learning with policy optimization. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[12] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems, pages 1071–1079, 2014.
[13] S. Levine and V. Koltun. Continuous inverse optimal control with locally optimal examples. In Proceedings of the 29th International Conference on Machine Learning, pages 41–48, 2012.
[14] S. Levine, Z. Popovic, and V. Koltun. Nonlinear inverse reinforcement learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 19–27, 2011.
[15] A. W. Moore and T. Hall. Efficient memory-based learning for robot control. 1990.
[16] A. Y. Ng and S. Russell. Algorithms for inverse reinforcement learning. In ICML, 2000.
[17] X. Nguyen, M. J. Wainwright, and M. I. Jordan. On surrogate loss functions and f-divergences. The Annals of Statistics, pages 876–904, 2009.
[18] D. A. Pomerleau. Efficient training of artificial neural networks for autonomous navigation. Neural Computation, 3(1):88–97, 1991.
[19] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
[20] N. D. Ratliff, D. Silver, and J. A. Bagnell. Learning to search: Functional gradient techniques for imitation learning. Autonomous Robots, 27(1):25–53, 2009.
[21] S. Ross and D. Bagnell. Efficient reductions for imitation learning. In AISTATS, pages 661–668, 2010.
[22] S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, pages 627–635, 2011.
[23] S. Russell. Learning agents for uncertain environments. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 101–103. ACM, 1998.
[24] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In Proceedings of the 32nd International Conference on Machine Learning, pages 1889–1897, 2015.
[25] J. Schulman, P. Moritz, S. Levine, M. Jordan, and P. Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
[26] U. Syed and R. E. Schapire. A game-theoretic approach to apprenticeship learning. In Advances in Neural Information Processing Systems, pages 1449–1456, 2007.
[27] U. Syed, M. Bowling, and R. E. Schapire. Apprenticeship learning using linear programming. In Proceedings of the 25th International Conference on Machine Learning, pages 1032–1039, 2008.
[28] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[29] B. D. Ziebart, A. Maas, J. A. Bagnell, and A. K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, 2008.
[30] B. D. Ziebart, J. A. Bagnell, and A. K. Dey. Modeling interaction via the principle of maximum causal entropy. In ICML, pages 1255–1262, 2010.
Feature selection in functional data classification with
recursive maxima hunting
José L. Torrecilla
Computer Science Department
Universidad Autónoma de Madrid
28049 Madrid, Spain
[email protected]
Alberto Suárez
Computer Science Department
Universidad Autónoma de Madrid
28049 Madrid, Spain
[email protected]
Abstract
Dimensionality reduction is one of the key issues in the design of effective machine
learning methods for automatic induction. In this work, we introduce recursive
maxima hunting (RMH) for variable selection in classification problems with functional data. In this context, variable selection techniques are especially attractive
because they reduce the dimensionality, facilitate the interpretation and can improve the accuracy of the predictive models. The method, which is a recursive
extension of maxima hunting (MH), performs variable selection by identifying the
maxima of a relevance function, which measures the strength of the correlation of
the predictor functional variable with the class label. At each stage, the information
associated with the selected variable is removed by subtracting the conditional
expectation of the process. The results of an extensive empirical evaluation are
used to illustrate that, in the problems investigated, RMH has comparable or higher
predictive accuracy than standard dimensionality reduction techniques, such as
PCA and PLS, and state-of-the-art feature selection methods for functional data,
such as maxima hunting.
1 Introduction
In many important prediction problems from different areas of application (medicine, environmental
monitoring, etc.) the data are characterized by a function, instead of by a vector of attributes, as is
commonly assumed in standard machine learning problems. Some examples of these types of data
are functional magnetic resonance imaging (fMRI) (Grosenick et al., 2008) and near-infrared spectra
(NIR) (Xiaobo et al., 2010). Therefore, it is important to develop methods for automatic induction
that take into account the functional structure of the data (infinite dimension, high redundancy, etc.)
(Ramsay and Silverman, 2005; Ferraty and Vieu, 2006). In this work, the problem of classification
of functional data is addressed. For simplicity, we focus on binary classification problems (Baíllo
et al., 2011). Nonetheless, the proposed method can be readily extended to a multiclass setting. Let
X(t), t ∈ [0, 1] be a continuous stochastic process in a probability space (Ω, F, P). A functional
datum X_n(t) is a realization of this process (a trajectory). Let {X_n(t), Y_n}_{n=1}^{N_train}, t ∈ [0, 1] be a
set of trajectories labeled by the dichotomous variable Y_n ∈ {0, 1}. These trajectories come from
one of two different populations: either P_0, when the label is Y_n = 0, or P_1, when the label is
Y_n = 1. For instance, the data could be the ECGs from either healthy or sick persons (P_0 and P_1,
respectively). The classification problem consists in deciding to which population a new unlabeled
observation X^test(t) belongs (e.g., to decide from his or her ECG whether a person is healthy or
not). Specifically, we are interested in the problem of dimensionality reduction for functional data
classification. The goal is to achieve the optimal discrimination performance using only a finite, small
set of values from the trajectory as input to a standard classifier (in our work, k-nearest neighbors).
In general, to properly handle functional data, some kind of reduction of information is necessary. Standard dimensionality reduction methods in functional data analysis (FDA) are based
on principal component analysis (PCA) (Ramsay and Silverman, 2005) or partial least squares
(PLS) (Preda et al., 2007). In this work, we adopt a different approach based on variable selection (Guyon et al., 2006). The goal is to replace the complete function X(t) by a d-dimensional
vector (X(t_1), . . . , X(t_d)) for a set of "suitably chosen" points {t_1, . . . , t_d} (for instance, instants in
a heartbeat in ECGs), where d is small.
Most previous work on feature selection in supervised learning with functional data is quite recent
and focuses on regression problems; for instance, on the analysis of fMRI images (Grosenick et al.,
2008; Ryali et al., 2010) and NIR spectra (Xiaobo et al., 2010). In particular, adaptations of lasso
and other embedded methods have been proposed to this end (see, e.g., Kneip and Sarda (2011);
Zhou et al. (2013); Aneiros and Vieu (2014)). In most cases, functional data are simply treated as
high-dimensional vectors for which the standard methods apply. Specifically, Gómez-Verdejo et al.
(2009) propose feature extraction from the functional trajectories before applying a multivariate
variable selector based on measuring the mutual information. Similarly, Fernandez-Lozano et al.
(2015) compare different standard feature selection techniques for image texture classification. The
method of minimum Redundancy Maximum Relevance (mRMR) introduced by Ding and Peng (2005)
has been applied to functional data in Berrendero et al. (2016a). In that work distance correlation
(Székely et al., 2007) is used instead of mutual information to measure nonlinear dependencies, with
good results. A fully functional perspective is adopted in Ferraty et al. (2010) and Delaigle et al.
(2012). In these articles, a wrapper approach is used to select the optimal set of instants in which the
trajectories should be monitored by minimizing a cross-validation estimate of the classification error.
Berrendero et al. (2015) introduce a filter selection procedure based on computing the Mahalanobis
distance and Reproducing Kernel Hilbert Space techniques. Logistic regression models have been
applied to the problem of binary classification with functional data in Lindquist and McKeague (2009)
and McKeague and Sen (2010), assuming Brownian and fractional Brownian trajectories, respectively.
Finally, the selection of intervals or elementary functions instead of variables is addressed in Li and
Yu (2008); Fraiman et al. (2016) or Tian and James (2013).
From the analysis of previous work one concludes that, in general, it is preferable, both in terms of
accuracy and interpretability, to adopt a fully functional approach to the problem. In particular, if
the data are characterized by functions that are continuous, values of the trajectory that are close to
each other tend to be highly redundant and convey similar information. Therefore, if the value of the
process at a particular instant has high discriminant capacity, one could think of discarding nearby
values. This idea is exploited in maxima hunting (MH) (Berrendero et al., 2016b).
In this work, we introduce recursive Maxima Hunting (RMH), a novel variable selection method for
feature selection in functional data classification that takes advantage of the good properties of MH
while addressing some of its deficiencies. The extension of MH consists in removing the information
conveyed by each selected local maximum before searching for the next one in a recursive manner.
The rest of the paper is organized as follows: Maxima hunting for feature selection in classification
problems with functional data is introduced in Section 2. Recursive maxima hunting, which is the
method proposed in this work, is described in Section 3. The improvements that can be obtained with
this novel feature selection method are analyzed in an exhaustive empirical evaluation whose results
are presented and discussed in Section 4.
2 Maxima Hunting
Maxima hunting (MH) is a method for feature selection in functional classification based on measuring
dependencies between values selected from {X(t), t ∈ [0, 1]} and the response variable (Berrendero
et al., 2016b). In particular, one selects the values {X(t_1), . . . , X(t_d)} whose dependence with the
class label (i.e., the response variable) is locally maximal. Different measures of dependency can
be used for this purpose. In Berrendero et al. (2016b), the authors propose the distance correlation
(Székely et al., 2007). The distance covariance between the random variables X ∈ R^p and Y ∈ R^q,
whose components are assumed to have finite first-order moments, is

$$V^2(X, Y) = \int_{\mathbb{R}^{p+q}} \left| \varphi_{X,Y}(u, v) - \varphi_X(u)\,\varphi_Y(v) \right|^2 w(u, v)\, du\, dv, \tag{1}$$

where φ_{X,Y}, φ_X, φ_Y are the characteristic functions of (X, Y), X and Y, respectively, w(u, v) = (c_p c_q |u|_p^{1+p} |v|_q^{1+q})^{-1}, c_d = π^{(1+d)/2} / Γ((1+d)/2) is half the surface area of the unit sphere in R^{d+1}, and |·|_d stands for the Euclidean norm in R^d.
In terms of V²(X, Y), the square of the distance correlation is

$$R^2(X, Y) = \begin{cases} \dfrac{V^2(X, Y)}{\sqrt{V^2(X, X)\, V^2(Y, Y)}}, & V^2(X)\, V^2(Y) > 0 \\[6pt] 0, & V^2(X)\, V^2(Y) = 0. \end{cases} \tag{2}$$
The distance correlation is a measure of statistical independence; that is, R²(X, Y) = 0 if and only
if X and Y are independent. Besides being defined for random variables of different dimensions, it
has other valuable properties. In particular, it is rotationally invariant and scale equivariant (Székely
and Rizzo, 2012). A further advantage over other measures of independence, such as the mutual
information, is that the distance correlation can be readily estimated using a plug-in estimator that
does not involve any parameter tuning. The almost sure convergence of the estimator V_n² is proved in
Székely et al. (2007, Thm. 2).
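For concreteness, the plug-in estimator can be written in a few lines of NumPy. The following is a minimal sketch (function and variable names are ours, not from the references): double-center the pairwise distance matrices of each sample, then average their elementwise products.

```python
import numpy as np

def distance_correlation(x, y):
    """Plug-in estimator of the squared distance correlation R^2(X, Y).

    x, y: arrays of shape (n,) or (n, d) holding n paired observations.
    """
    x = x.reshape(len(x), -1).astype(float)
    y = y.reshape(len(y), -1).astype(float)

    def doubly_centered_distances(z):
        d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
        return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

    a = doubly_centered_distances(x)
    b = doubly_centered_distances(y)
    dcov2_xy = (a * b).mean()                       # V_n^2(X, Y)
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return dcov2_xy / denom if denom > 0 else 0.0
```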
To summarize, in maxima hunting, one selects the d different local maxima of the distance correlation
between X(t), the values of the random process at different instants t ∈ [0, 1], and the response variable:

$$X(t_i) = \underset{t \in [0,1]}{\operatorname{argmax}}\, R^2(X(t), Y), \qquad i = 1, 2, \ldots, d. \tag{3}$$
Maxima Hunting is easy to interpret. It is also well-motivated from the point of view of FDA, because
it takes advantage of functional properties of the data, such as continuity, which implies that similar
information is conveyed by the values of the function at neighboring points. In spite of the simplicity
of the method, it naturally accounts for the relevance and redundancy trade-off in feature selection (Yu
and Liu, 2004): the local maxima (3) are relevant for discrimination. Points around them, which do
not maximize the distance correlation with the class label, are automatically excluded. Furthermore, it
is also possible to derive a uniform convergence result, which provides additional theoretical support
for the method. Finally, the empirical investigation carried out in Berrendero et al. (2016b) shows
that MH performs well in standard benchmark classification problems for functional data. In fact,
for some problems, one can show that the optimal (Bayes) classification rule depends only on the
maxima of R²(X(t), Y).
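In its discretized form, the selection rule (3) amounts to scanning the estimated relevance curve for interior local maxima. A minimal sketch reusing the distance_correlation helper above (the simple neighbour comparison stands in for whatever peak-detection rule a production implementation would use):

```python
import numpy as np

def maxima_hunting(X, y):
    """Return the grid indices of the local maxima of t -> R^2(X(t), Y).

    X: (n, T) trajectories discretized at T points; y: (n,) binary labels.
    """
    relevance = np.array([distance_correlation(X[:, j], y) for j in range(X.shape[1])])
    # Interior points that dominate both neighbours count as local maxima.
    interior = (relevance[1:-1] > relevance[:-2]) & (relevance[1:-1] >= relevance[2:])
    is_max = np.r_[False, interior, False]
    return np.flatnonzero(is_max), relevance
```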
However, maxima hunting also presents some limitations. First, it is not always a simple task
to estimate the local maxima, especially in functions that are very smooth or that vary abruptly.
Furthermore, there is no guarantee that different maxima are not redundant. In most cases, the local
maxima of R2 (X(t), Y ) are indeed relevant for classification. However, there are important points
for which this quantity does not attain a maximum.
As an example, consider the family of classification problems introduced in Berrendero et al. (2016b,
Prop. 3), in which the goal is to discriminate trajectories generated by a standard Brownian motion
process, B(t), and trajectories from the process B(t) + φ_{m,k}(t), where

$$\phi_{m,k}(t) = \int_0^t \sqrt{2^{m-1}} \left[ I_{\left(\frac{2k-2}{2^m}, \frac{2k-1}{2^m}\right)}(s) - I_{\left(\frac{2k-1}{2^m}, \frac{2k}{2^m}\right)}(s) \right] ds, \qquad m, k \in \mathbb{N},\; 1 \le k \le 2^{m-1}. \tag{4}$$
Assuming a balanced class distribution (P(Y = 0) = P(Y = 1) = 1/2), the optimal classification rule is

$$g^*(x) = 1 \quad\text{if and only if}\quad X\!\left(\tfrac{2k-1}{2^m}\right) - X\!\left(\tfrac{2k-2}{2^m}\right) + X\!\left(\tfrac{2k-1}{2^m}\right) - X\!\left(\tfrac{2k}{2^m}\right) > \frac{1}{\sqrt{2^{m+1}}}.$$

The optimal classification error is L* = 1 − normcdf(‖φ_{m,k}(t)‖/2) = 1 − normcdf(1/2) ≃ 0.3085,
where ‖·‖ denotes the L²[0, 1] norm, and normcdf(·) is the cumulative distribution function of
the standard normal. The relevance function has a single maximum at X((2k−1)/2^m). However, the
Bayes classification rule involves three relevant variables, two of which are clearly not maxima of
R2 (X(t), Y ). In spite of the simplicity of these types of functional classification problems, they are
important to analyze, because the set of functions φ_{m,k}, with m > 0 and k > 0, form an orthonormal
basis of the Dirichlet space D[0, 1], the space of continuous functions whose derivatives are in
L²[0, 1]. Furthermore, this space is the reproducing kernel Hilbert space associated with Brownian
motion and plays an important role in functional classification (Mörters and Peres, 2010; Berrendero
et al., 2015). In fact, any trend in the Brownian process can be approximated by a linear combination
or by a mixture of φ_{m,k}(t).
[Figure 1 panels. Top row: X(t) trajectories at the initial step, after the first correction, and after the second correction. Bottom row: R²(X(t), Y) as a function of t for each stage.]
Figure 1: First row: Individual and average trajectories for the classification of B(t) vs. B(t) + 2φ_{3,3}(t),
initially (left) and after the first (center) and second (right) corrections. Second row: Values of R²(X(t), Y) as
a function of t. The variables required for optimal classification are marked with vertical dashed lines.
To illustrate the workings of maxima hunting and its limitations we analyze in detail the classification
problem B(t) vs. B(t) + 2φ_{3,3}(t), which is of the type considered above. In this case, the optimal
classification rule depends on the maximum X(5/8), and on X(1/2) and X(3/4), which are not
maxima, and would therefore not be selected by the MH algorithm. The optimal error is L* = 15.87%.
To illustrate the importance of selecting all the relevant variables, we perform simulations in which
we compare the accuracy of the linear Fisher discriminant with the maxima hunting selection, and
with the optimal variable selection procedures. In these experiments, independent training and test
samples of size 1000 are generated. The values reported are averages over 100 independent runs.
Standard deviations are given between parentheses. The average prediction error when only the
maximum of the trajectories is considered is 37.63%(1.44%). When all three variables are used the
empirical error is 15.98%(1%), which is close to the Bayes error. When other points in addition
to the maximum are used (i.e., (X(t_1), X(5/8), X(t_2)), with t_1 and t_2 randomly chosen so that
0 ≤ t_1 < 5/8 < t_2 ≤ 1), the average classification error is 22.32% (2.18%). In the top leftmost plot
of Figure 1, trajectories from both classes, together with the corresponding averages (thick lines), are
shown. The relevance function R²(X(t), Y) is plotted below. The relevant variables, which are
required for optimal classification, are marked by dashed vertical lines.
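The experiment above is easy to reproduce in simulation. A sketch under the stated model (the grid size, seed, and index lookup are our own choices): class 0 trajectories are discretized Brownian paths, class 1 adds the piecewise-linear trend 2φ_{3,3}, whose derivative is +4 on (4/8, 5/8] and −4 on (5/8, 6/8].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
T, n = 200, 1000
t = np.linspace(1.0 / T, 1.0, T)

def brownian(n_traj):
    # Standard Brownian motion: cumulative sums of N(0, 1/T) increments.
    return np.cumsum(rng.normal(0.0, np.sqrt(1.0 / T), size=(n_traj, T)), axis=1)

slope = np.where((t > 4 / 8) & (t <= 5 / 8), 4.0,
                 np.where((t > 5 / 8) & (t <= 6 / 8), -4.0, 0.0))
trend = np.cumsum(slope) / T            # 2 * phi_{3,3}(t), peaking at t = 5/8

X = np.vstack([brownian(n // 2), brownian(n // 2) + trend])
y = np.r_[np.zeros(n // 2), np.ones(n // 2)]

idx = [np.searchsorted(t, v) for v in (1 / 2, 5 / 8, 3 / 4)]  # nearest grid points
clf = LinearDiscriminantAnalysis().fit(X[:, idx], y)
# In-sample error; with all three relevant variables it lands near the Bayes level (about 0.16).
print("error with the three relevant variables:", 1.0 - clf.score(X[:, idx], y))
```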
3 Recursive Maxima Hunting
As a variable selection process, MH avoids, at least partially, the redundancy introduced by the
continuity of the functions that characterize the instances. However, this local approach cannot detect
redundancies among different local maxima. Furthermore, there could be points in the trajectory
that do not correspond to maxima of the relevance function, but which are relevant when considered
jointly with the maxima. The goal of recursive maxima hunting (RMH) is to select the maxima of
R2 (X(t), Y ) in a recursive manner by removing at each step the information associated to the most
recently selected maximum. This avoids the influence of previously selected maxima, which can
obscure ulterior dependencies. The influence of a selected variable X(t0 ) on the rest of the trajectory
can be eliminated by subtracting the conditional expectation E(X(t)|X(t0 )) from X(t). Assuming
that the underlying process is Brownian,

$$E(X(t) \mid X(t_0)) = \frac{\min(t, t_0)}{t_0}\, X(t_0), \qquad t \in [0, 1]. \tag{5}$$
In the subsequent iterations, there are two intervals: [0, t_0] and [t_0, 1]. Conditioned on the value at
X(t_0), the process in the interval [t_0, 1] is still Brownian motion. By contrast, for the interval [0, t_0]
the process is a Brownian bridge, whose conditional expectation is

$$E(X(t) \mid X(t_0)) = \frac{\min(t, t_0) - t\, t_0}{t_0\,(1 - t_0)}\, X(t_0) = \begin{cases} \dfrac{t}{t_0}\, X(t_0), & t < t_0 \\[4pt] \dfrac{1 - t}{1 - t_0}\, X(t_0), & t > t_0. \end{cases} \tag{6}$$
As illustrated by the results in the experimental section, the Brownian hypothesis is a robust assumption. Nevertheless, if additional information on the underlying stochastic processes is available, it can
be incorporated into the algorithm during the calculation of the conditional expectation in Equations (5)
and (6).
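Equations (5) and (6) translate directly into a vectorized update. A minimal sketch (our naming; it rescales the current subinterval to [0, 1] and assumes the conditioning point lies strictly inside it):

```python
import numpy as np

def subtract_conditional_expectation(X, t, j0, j_inf, j_sup, bridge=False):
    """Subtract E(X(t) | X(t0)) on the subinterval [t[j_inf], t[j_sup]], in place.

    bridge=False applies the Brownian-motion correction (5); bridge=True the
    Brownian-bridge correction (6). t0 = t[j0] must be strictly interior.
    """
    a, b = t[j_inf], t[j_sup]
    s = (t[j_inf:j_sup + 1] - a) / (b - a)   # local time rescaled to [0, 1]
    s0 = (t[j0] - a) / (b - a)
    if bridge:
        w = (np.minimum(s, s0) - s * s0) / (s0 * (1.0 - s0))
    else:
        w = np.minimum(s, s0) / s0
    X[:, j_inf:j_sup + 1] -= np.outer(X[:, j0], w)
    return X
```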
The center and right plots in Figure 1 illustrate the behavior of RMH in the example described in
the previous section. The top center plot displays the trajectories and corresponding averages (thick
lines) for both classes after applying the correction (5) with t0 = 5/8, which is the first maximum of
the distance correlation function (bottom leftmost plot in Figure 1). The variable X(5/8) is clearly
uninformative once this correction has been applied. The distance correlation R2 (X(t), Y ) for the
corrected trajectories is displayed in the bottom center plot. Also in this plot the relevant variables
are marked by vertical dashed lines. It is clear that the subsequent local maxima at t = 1/2, in the
subinterval [0, 5/8], and at t = 3/4, in the subinterval, and [5/8, 1] correspond to the remaining
relevant variables. The last column shows the corresponding plots after the correction is applied
anew (equations (6) with t0 = 1/2 in [0, 5/8] and (5) with t0 = 3/4 in [5/8, 1]). After this second
correction, the discriminant information has been removed. In consequence, the distance correlation
function, up to sample fluctuations, is zero.
An important issue in the application of this method is how to decide when to stop the recursive
search. The goal is to avoid including irrelevant and/or redundant variables. To address the first
problem, we only include maxima that are sufficiently prominent, R²(X(t_max), Y) > s, where
0 < s < 1 can be used to gauge the relative importance of the maximum. Redundancy is avoided
by excluding points around a selected maximum t_max for which R²(X(t_max), X(t)) ≥ r, for some
redundancy threshold 0 < r < 1, which is typically close to one. As a result of these two conditions
only a finite (typically small) number of variables are selected. This data-driven stopping criterion
avoids the need to set the number of selected variables beforehand or to determine this number by a
costly validation procedure. The sensitivity of the results to the values of r and s will be studied in
Section 4. Nonetheless, RMH has a good and robust performance for a wide range of reasonable
values of these parameters (r close to 1 and s close to 0). The pseudocode of the RMH algorithm is
given in Algorithm 1.
Algorithm 1 Recursive Maxima Hunting
1: function RMH(X(t), Y)
2:    t* ← [ ]                                        ▷ Vector of selected points, initially empty
3:    RMH_REC(X(t), Y, 0, 1)                          ▷ Recursive search of the maxima of R²(X(t), Y)
4:    return t*                                       ▷ Vector of selected points
5: end function
6: procedure RMH_REC(X(t), Y, t_inf, t_sup)
7:    t_max ← argmax_{t_inf ≤ t ≤ t_sup} R²(X(t), Y)
8:    if R²(X(t_max), Y) > s then
9:       t* ← [t*  t_max]                             ▷ Include t_max in t*, the vector of selected points
10:      X(t) ← X(t) − E(X(t) | X(t_max)), t ∈ [t_inf, t_sup]   ▷ Correction of type (5) or (6) as required
11:   else
12:      return
13:   end if
14:   t⁻_max ← max { t_inf ≤ t < t_max : R²(X(t_max), X(t)) ≤ r }   ▷ Exclude redundant points to the left of t_max
15:   if t⁻_max > t_inf then
16:      RMH_REC(X(t), Y, t_inf, t⁻_max)              ▷ Recursion on left subinterval
17:   end if
18:   t⁺_max ← min { t_max < t ≤ t_sup : R²(X(t_max), X(t)) ≤ r }   ▷ Exclude redundant points to the right of t_max
19:   if t⁺_max < t_sup then
20:      RMH_REC(X(t), Y, t⁺_max, t_sup)              ▷ Recursion on right subinterval
21:   end if
22:   return
23: end procedure
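Algorithm 1 is straightforward to prototype on discretized trajectories. The sketch below reuses the two helpers defined earlier; note that the redundancy R²(X(t_max), X(t)) is computed before the correction (which zeroes out X(t_max)), and that boundary cases are left unhandled:

```python
import numpy as np

def rmh(X, y, t, r=0.8, s=0.05):
    """Recursive maxima hunting (a sketch of Algorithm 1). Returns grid indices."""
    X = X.astype(float).copy()
    selected = []

    def rec(j_inf, j_sup, bridge):
        rel = np.array([distance_correlation(X[:, j], y) for j in range(j_inf, j_sup + 1)])
        j_max = j_inf + int(np.argmax(rel))
        if rel[j_max - j_inf] <= s:
            return
        selected.append(j_max)
        # Redundancy with the maximum, measured before the correction removes it.
        red = np.array([distance_correlation(X[:, j], X[:, j_max]) for j in range(j_inf, j_sup + 1)])
        subtract_conditional_expectation(X, t, j_max, j_inf, j_sup, bridge=bridge)
        left = [j for j in range(j_inf, j_max) if red[j - j_inf] <= r]
        if left:
            rec(j_inf, max(left), bridge=True)       # left side is pinned at both ends
        right = [j for j in range(j_max + 1, j_sup + 1) if red[j - j_inf] <= r]
        if right:
            rec(min(right), j_sup, bridge=bridge)    # right side keeps its regime

    rec(0, len(t) - 1, bridge=False)
    return sorted(selected)
```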
4 Empirical study
To assess the performance of RMH, we have carried out experiments in simulated and real-world
data in which it is compared with some well-established dimensionality reduction methods, such
as PCA (Ramsay and Silverman, 2005) and partial least squares (Delaigle and Hall, 2012b), and
with Maxima Hunting (Berrendero et al., 2016b). In these experiments, k-nearest neighbors (kNN)
with the Euclidean distance is used for classification. kNN has been selected because it is a simple,
nonparametric classifier with reasonable
overall predictive accuracy. The value k in kNN is selected
?
by 10-fold CV from integers in [1, Ntrain ], where Ntrain is the size of the training set. Since RMH
is a filter method for variable selection, the results are expected to be similar when other types of
classifiers are used. As a reference, the results of kNN using complete trajectories (i.e., without
dimensionality reduction) are also reported. This approach is referred to as Base. Note that, in this
case, the performance of kNN need not be optimal because of the presence of irrelevant attributes.
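As a point of reference, this classifier and its model selection can be reproduced with scikit-learn; a minimal sketch (X_sel holds the selected variables; names are ours):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

def fit_knn(X_sel, y):
    """kNN on the selected variables, with k chosen by 10-fold CV in [1, sqrt(n)]."""
    ks = np.arange(1, int(np.sqrt(len(y))) + 1)
    search = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": ks}, cv=10)
    return search.fit(X_sel, y)
```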
RMH requires determining the values of two hyperparameters: the redundancy threshold r (0 < r < 1
typically close to 1), and the relevance threshold s (0 < s < 1 typically close to 0). Through extensive
simulations we have observed that RMH is quite robust for a wide range of appropriate values of
these parameters. In particular, the results are very similar for values of r in the interval [0.75, 0.95].
The predictive accuracy is somewhat more sensitive to the choice of s: If the value of s is too small,
irrelevant variables can be selected. If s is too large, it is possible that relevant points are excluded.
For most of the experiments performed, the optimal values of s are between 0.025 and 0.1. In view
of these observations, the experiments are made using r = 0.8. The value of s is selected from the set
{0.025, 0.05, 0.1} by 10-fold CV. A more careful determination of r and s is beneficial, especially in
some extreme problems (e.g., with very smooth or with rapidly-varying trajectories). In RMH, the
number of selected variables, which is not determined beforehand, depends indirectly on the values
of r and s. In the other methods, the number of selected variables is determined using 10-fold CV,
with maximum of 30.
A first batch of experiments is carried out on simulated data generated from the model
$$P_0 : B(t), \quad t \in [0, 1]; \qquad P_1 : B(t) + m(t), \quad t \in [0, 1],$$
where B(t) is standard Brownian motion, m(t) is a deterministic trend, and P(Y = 0) = P(Y =
1) = 1/2. Using Berrendero et al. (2015, Theorem 2), it is possible to compute the optimal
classification rules g* and the corresponding Bayes errors L*. To ensure a wide coverage, we
consider two problems in which the Bayes rule depends only on a few variables and two problems in
which complete trajectories are needed for optimal classification: (i) Peak: m(t) = 2φ_{3,3}(t). The
optimal rule depends only on X(1/2), X(5/8) and X(3/4). The Bayes error is L* ≃ 0.1587. This
is the example analyzed in the previous section. (ii) Peak2: m(t) = 2φ_{3,2}(t) + 3φ_{3,3}(t) − 2φ_{2,2}(t).
The optimal rule depends only on X(1/4), X(3/8), X(1/2), X(5/8), X(3/4), and X(1). The
Bayes error is L* ≃ 0.0196. (iii) Square: m(t) = 2t². The Bayes error is L* ≃ 0.1241. (iv) Sin:
m(t) = (1/2) sin(2πt). The Bayes error is L* ≃ 0.1333. In Figure 2 we have plotted some trajectories
corresponding to class 1 instances, together with their corresponding averages (thick lines). Class 0
trajectories are realizations of a standard Brownian process. In these experiments, training samples
of different sizes (Ntrain = {50, 100, 200, 500, 1000}) and an independent test set of size 1000 are
generated. The trajectories are discretized in 200 points. Half of the trajectories belong to each class
in both the training and test sets. The values reported are averages over 200 independent repetitions.
Figure 3 displays the average classification error (first row) and the average number of selected
variables/components (second row) as a function of the training sample size for each model and
classification method. Horizontal dashed lines are used to indicate the Bayes error level in the
different problems. From the results reported in Figure 3, one concludes that RMH has the best
overall performance. It is always more accurate than the Base method. This observation justifies
performing variable selection not only for the sake of dimensionality reduction, but also to improve
the classification accuracy. RMH is also better than the original MH in all the problems investigated:
there is both an improvement in the prediction error and a reduction in the number of variables
used for classification. In peak and peak2, problems in which the relevant variables are known, RMH
generally selects the correct ones. As expected, PLS performs better than PCA. However, both MH
and RMH outperform these projection methods, except in sin, where their accuracies are similar.
Both PLS and RMH are effective dimensionality reduction methods with comparable performance.
[Figure 2 panels: Peak, Peak2, Square, and Sin, each showing class 1 trajectories X(t) | Y = 1.]
Figure 2: Class 1 trajectories and averages (thick lines) for the different synthetic problems.
[Figure 3 panels: classification error (top row) and number of variables (bottom row) vs. N_train for Peak, Peak2, Square, and Sin; curves for PCA, PLS, MH, RMH, Base, and the Bayes error L*.]
Figure 3: Average classification error (first row) and average number of selected variables/components (second
row) as a function of the size of the training set.
However, the components selected in PLS are, in general, more difficult to interpret because they
involve whole trajectories. Finally, the accuracy of RMH is very close to the Bayes level for higher
sample sizes, even when the optimal rule requires using complete trajectories (square and sin).
To assess the performance of RMH in real-world functional classification problems, we have carried
out a second batch of experiments in four datasets, which are commonly used as benchmarks in the
FDA literature. Instances in Growth correspond to curves of the heights of 54 girls and 38 boys
from the Berkeley Growth Study. Observations are discretized in 31 non-equidistant ages between 1
and 18 years (Ramsay and Silverman, 2005; Mosler and Mozharovskyi, 2014). The Tecator dataset
consists of 215 near-infrared absorbance spectra of finely chopped meat. The spectral curves consist
of 100 equally spaced points. The class labels are determined in terms of fat content (above or below
20%). The curves are fairly smooth. In consequence, we have followed the general recommendation
and used the second derivative for classification (Ferraty and Vieu, 2006; Galeano et al., 2014).
The Phoneme data consists of 4509 log-periodograms observed at 256 equidistant points. Here, we
consider the binary problem of distinguishing between the phonemes ?aa? (695) and ?ao? (1022)
(Galeano et al., 2014). Following Delaigle and Hall (2012a), the curves are smoothed with a local
linear method and truncated to the first 50 variables. The Medflies are records of daily egg-laying
patterns of a thousand flies. The goal is to discriminate between short- and long-lived flies. Following
Mosler and Mozharovskyi (2014), curves equal to zero are excluded. There are 512 30-day curves
(starting from day 5) of flies who live at most 34 days, 266 of these are long-lived (reach the day
44). The classes in Growth and Tecator are well separated. In consequence, they are relatively easy
problems. By contrast, Phoneme and Medflies are notoriously difficult classification tasks. Some
trajectories of each problem and each class, together with the corresponding averages (thick lines),
are plotted in Figure 4. To estimate the classification error, the datasets are partitioned at random
into a training set (with 2/3 of the observations) and a test set (1/3). This procedure is repeated
200 times. The boxplots of the results for each dataset and method are shown in Figure 5. Errors
are shown in the first row and the number of selected variables/components in the second one. From
[Figure 4 panels: Growth, Tecator, Phoneme, and Medflies, showing X(t) | Y = 0 (top row) and X(t) | Y = 1 (bottom row).]
Figure 4: Trajectories for each of the classes and their corresponding averages (thick lines).
[Figure 5 panels: classification error (top row) and number of selected variables/components (bottom row) for PCA, PLS, MH, RMH, and Base on Growth, Tecator, Phoneme, and Medflies.]
Figure 5: Classification error (first row) and number of variables/components selected (second row) by RMH.
these results we observe that, in general, dimensionality reduction is effective: the accuracy of the
four considered methods is similar or better than the Base method, in which complete trajectories
are used for classification. In particular, Base does not perform well when the trajectories are not
smooth (Medflies). The best overall performance corresponds to RMH. In the easy problems (Growth
and Tecator), all methods behave similarly and give good results. In Growth, RMH is slightly more
accurate. However, it tends to select more variables than the other methods. In the more difficult
problems, (Phoneme and Medflies), RMH yields very accurate predictions while selecting only two
variables. In these problems it exhibits the best performance, except in Phoneme, where Base is
more accurate. The variables selected by RMH and MH are directly interpretable, which is an
advantage over projection-based methods (PCA, PLS). Finally, let us point out that the accuracy of
RMH is comparable and often better that state-of-the-art functional classification methods. See, for
instance, Berrendero et al. (2016a); Delaigle et al. (2012); Delaigle and Hall (2012a); Mosler and
Mozharovskyi (2014); Galeano et al. (2014). In most of these works no dimensionality reduction is
applied. Nevertheless, these comparisons must be done carefully because the evaluation protocol and
the classifiers used vary in the different studies. In any case, RMH is a filter method, which means
that it could be more effective if used in combination with other types of classifiers or adapted and
used as a wrapper or, even, as an embedded variable selection method.
Acknowledgments
The authors thank Dr. José R. Berrendero for his insightful suggestions. We also acknowledge
financial support from the Spanish Ministry of Economy and Competitiveness, project TIN2013-42351-P, and from the Regional Government of Madrid, CASI-CAM-CM project (S2013/ICE-2845).
References
Aneiros, G. and P. Vieu (2014). Variable selection in infinite-dimensional problems. Statistics & Probability Letters 94, 12–20.
Baíllo, A., A. Cuevas, and R. Fraiman (2011). Classification methods for functional data, pp. 259–297. Oxford: Oxford University Press.
Berrendero, J. R., A. Cuevas, and J. L. Torrecilla (2015). On near perfect classification and functional Fisher rules via reproducing kernels. arXiv:1507.04398, 1–27.
Berrendero, J. R., A. Cuevas, and J. L. Torrecilla (2016a). The mRMR variable selection method: a comparative study for functional data. Journal of Statistical Computation and Simulation 86(5), 891–907.
Berrendero, J. R., A. Cuevas, and J. L. Torrecilla (2016b). Variable selection in functional data classification: a maxima hunting proposal. Statistica Sinica 26(2), 619–638.
Delaigle, A. and P. Hall (2012a). Achieving near perfect classification for functional data. Journal of the Royal Statistical Society B 74(2), 267–286.
Delaigle, A. and P. Hall (2012b). Methodology and theory for partial least squares applied to functional data. The Annals of Statistics 40(1), 322–352.
Delaigle, A., P. Hall, and N. Bathia (2012). Componentwise classification and clustering of functional data. Biometrika 99(2), 299–313.
Ding, C. and H. Peng (2005). Minimum redundancy feature selection from microarray gene expression data. Journal of Bioinformatics and Computational Biology 3(2), 185–205.
Fernandez-Lozano, C., J. A. Seoane, M. Gestal, T. R. Gaunt, J. Dorado, and C. Campbell (2015). Texture classification using feature selection and kernel-based techniques. Soft Computing 19(9), 2469–2480.
Ferraty, F., P. Hall, and P. Vieu (2010). Most-predictive design points for functional data predictors. Biometrika 97(4), 807–824.
Ferraty, F. and P. Vieu (2006). Nonparametric Functional Data Analysis: Theory and Practice. Springer.
Fraiman, R., Y. Giménez, and M. Svarc (2016). Feature selection for functional data. Journal of Multivariate Analysis 146, 191–208.
Galeano, P., E. Joseph, and R. E. Lillo (2014). The Mahalanobis distance for functional data with applications to classification. Technometrics 57(2), 281–291.
Gómez-Verdejo, V., M. Verleysen, and J. Fleury (2009). Information-theoretic feature selection for functional data classification. Neurocomputing 72(16), 3580–3589.
Grosenick, L., S. Greer, and B. Knutson (2008). Interpretable classifiers for fMRI improve prediction of purchases. Neural Systems and Rehabilitation Engineering, IEEE Transactions on 16(6), 539–548.
Guyon, I., S. Gunn, M. Nikravesh, and L. A. Zadeh (2006). Feature Extraction: Foundations and Applications. Springer.
Kneip, A. and P. Sarda (2011). Factor models and variable selection in high-dimensional regression analysis. The Annals of Statistics 39(5), 2410–2447.
Li, B. and Q. Yu (2008). Classification of functional data: A segmentation approach. Computational Statistics & Data Analysis 52(10), 4790–4800.
Lindquist, M. A. and I. W. McKeague (2009). Logistic regression with Brownian-like predictors. Journal of the American Statistical Association 104(488), 1575–1585.
McKeague, I. W. and B. Sen (2010). Fractals with point impact in functional linear regression. Annals of Statistics 38(4), 2559.
Mörters, P. and Y. Peres (2010). Brownian Motion. Cambridge University Press.
Mosler, K. and P. Mozharovskyi (2014). Fast DD-classification of functional data. Statistical Papers 55, 49–59.
Preda, C., G. Saporta, and C. Lévéder (2007). PLS classification of functional data. Computational Statistics 22(2), 223–235.
Ramsay, J. O. and B. W. Silverman (2005). Functional Data Analysis. Springer.
Ryali, S., K. Supekar, D. A. Abrams, and V. Menon (2010). Sparse logistic regression for whole-brain classification of fMRI data. NeuroImage 51(2), 752–764.
Székely, G. J. and M. L. Rizzo (2012). On the uniqueness of distance covariance. Statistics & Probability Letters 82(12), 2278–2282.
Székely, G. J., M. L. Rizzo, and N. K. Bakirov (2007). Measuring and testing dependence by correlation of distances. The Annals of Statistics 35(6), 2769–2794.
Tian, T. S. and G. M. James (2013). Interpretable dimension reduction for classifying functional data. Computational Statistics & Data Analysis 57(1), 282–296.
Xiaobo, Z., Z. Jiewen, M. J. Povey, M. Holmes, and M. Hanpin (2010). Variables selection methods in near-infrared spectroscopy. Analytica Chimica Acta 667(1), 14–32.
Yu, L. and H. Liu (2004). Efficient feature selection via analysis of relevance and redundancy. The Journal of Machine Learning Research 5, 1205–1224.
Zhou, J., N.-Y. Wang, and N. Wang (2013). Functional linear model with zero-value coefficient function at sub-regions. Statistica Sinica 23(1), 25–50.
Integrated Perception with Recurrent Multi-Task
Neural Networks
Hakan Bilen
Andrea Vedaldi
Visual Geometry Group, University of Oxford
{hbilen,vedaldi}@robots.ox.ac.uk
Abstract
Modern discriminative predictors have been shown to match natural intelligences in
specific perceptual tasks in image classification, object and part detection, boundary
extraction, etc. However, a major advantage that natural intelligences still have is
that they work well for all perceptual problems together, solving them efficiently
and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks
can learn universal image representations, useful not only for a single task but for
all of them, and how the solutions to the different tasks can be integrated in this
framework. We answer by proposing a new architecture, which we call multinet, in
which not only deep image features are shared between tasks, but where tasks can
interact in a recurrent manner by encoding the results of their analysis in a common
shared representation of the data. In this manner, we show that the performance of
individual tasks in standard benchmarks can be improved first by sharing features
between them and then, more significantly, by integrating their solutions in the
common representation.
1 Introduction
Natural perception can extract complete interpretations of sensory data in a coherent and efficient
manner. By contrast, machine perception remains a collection of disjoint algorithms, each solving
specific information extraction sub-problems. Recent advances such as modern convolutional neural
networks have dramatically improved the performance of machines in individual perceptual tasks,
but it remains unclear how these could be integrated in the same seamless way as natural perception
does.
In this paper, we consider the problem of learning data representations for integrated perception. The
first question we ask is whether it is possible to learn universal data representations that can be used
to solve all sub-problems of interest. In computer vision, fine-tuning or retraining has been shown
to be an effective method to transfer deep convolutional networks between different tasks [9, 29].
Here we show that, in fact, it is possible to learn a single, shared representation that performs well on
several sub-problems simultaneously, often as well or even better than specialised ones.
A second question, complementary to the one of feature sharing, is how different perceptual subtasks
should be combined. Since each subtask extracts a partial interpretation of the data, the problem
is to form a coherent picture of the data as a whole. We consider an incremental interpretation
scenario, where subtasks collaborate in parallel or sequentially in order to gradually enrich a shared
interpretation of the data, each contributing its own ?dimension? to it. Informally, many computer
vision systems operate in this stratified manner, with different modules running in parallel or in
sequence (e.g. object detection followed by instance segmentation). The question is how this can be
done end-to-end and systematically.
In this paper, we develop an architecture, multinet (fig. 1), that provides an answer to such questions.
Multinet builds on the idea of a shared representation, called an integration space, which reflects both
Figure 1: Multinet. We propose a modular multi-task architecture in which several perceptual tasks are integrated in a synergistic manner. The subnetwork φ^0_enc encodes the data x^0 (an image in the example), producing a representation h shared between K different tasks. Each task estimates one of K different labels x^α (object class, location, and parts in the example) using K decoder functions φ^α_dec. Each task contributes back to the shared representation by means of a corresponding encoder function φ^α_enc. The loop is closed in a recurrent configuration by means of suitable integrator functions (not shown here to avoid cluttering the diagram).
the statistics extracted from the data as well as the result of the analysis carried by the individual
subtasks. As a loose metaphor, one can think of the integration space as a 'canvas' which is progressively updated with the information obtained by solving sub-problems. The representation distills this
information and makes it available for further task resolution, in a recurrent configuration.
Multinet has several advantages. First, by learning the latent integration space automatically, synergies
between tasks can be discovered automatically. Second, tasks are treated in a symmetric manner, by
associating to each of them encoder, decoder, and integrator functions, making the system modular
and easily extensible to new tasks. Third, the architecture supports incremental understanding because
tasks contribute back to the latent representation, making their output available to other tasks for
further processing. Finally, while multinet is applied here to an image understanding setting, the
architecture is very general and could be applied to numerous other domains as well.
The new architecture is described in detail in sect. 2 and an instance specialized for computer vision
applications is given in sect. 3. The empirical evaluation in sect. 4 demonstrates the benefits of
the approach, including that sharing features between different tasks is not only economical, but
also sometimes better for accuracy, and that integrating the outputs of different tasks in the shared
representation yields further accuracy improvements. Sect. 5 summarizes our findings.
1.1 Related work
Multiple task learning (MTL): Multitask learning [5, 25, 1] methods have been studied over
two decades by the machine learning community. The methods are based on the key idea that
the tasks share a common low-dimensional representation which is jointly learnt with the task
specific parameters. While MTL trains many tasks in parallel, Mitchell and Thrun [18] propose a
sequential transfer method called Explanation-Based Neural Nets (EBNN) which exploits previously
learnt domain knowledge to initialise or constrain the parameters of the current task. Breiman and Friedman [3] devise a hybrid method that first learns separate models and then improves their
generalisation by exploiting the correlation between the predictions.
Multi-task learning in computer vision: MTL has been shown to improve results in many computer
vision problems. Typically, researchers incorporate auxiliary tasks into their target tasks, jointly train
them in parallel and achieve performance gains in object tracking [30], object detection [11], facial
landmark detection [31]. Differently, Dai et al. [8] propose multi-task network cascades in which
convolutional layer parameters are shared between three tasks and the tasks are predicted sequentially.
Unlike [8], our method can train multiple tasks in parallel and does not require a specification of task
execution.
Recurrent networks: Our work is also related to recurrent neural networks (RNN) [22] which has
been successfully used in language modelling [17], speech recognition [13], hand-written recognition [12], semantic image segmentation [20] and human pose estimation [2]. Related to our work,
Carreira et al. [4] propose an iterative segmentation model that progressively updates an initial
Figure 2: Multinet recurrent architecture. The components in the rounded box are repeated K times, one for each task α = 1, . . . , K.
solution by feeding back an error signal. Najibi et al. [19] propose an efficient grid based object detector that iteratively refines the predicted object coordinates by minimising the training error. While these
methods [4, 19] are also based on an iterative solution correcting mechanism, our main goal is to
improve generalisation performance for multiple tasks by sharing the previous predictions across
them and learning output correlations.
2 Method
In this section, we first introduce the multinet architecture for integrated multi-task prediction
(sect. 2.1) and then we discuss ordinary multi-task prediction as a special case of multinet (sect. 2.2).
2.1 Multinet: integrated multiple-task prediction
We propose a recurrent neural network architecture (fig. 1 and 2) that can address simultaneously multiple data labelling tasks. For symmetry, we drop the usual distinction between input and output spaces and consider instead K + 1 label spaces X^α, α = 0, 1, . . . , K. A label in the α-th space is denoted by the symbol x^α ∈ X^α. In the following, α = 0 is used for the input (e.g. an image) of the network and is not inferred, whereas x^1, . . . , x^K are labels estimated by the neural network (e.g. an object class, location, and parts). One reason why it is useful to keep the notation symmetric is that it is possible to ground any label x^α and treat it as an input instead.
Each task α is associated to a corresponding encoder function φ^α_enc, which maps the label x^α to a vectorial representation r^α ∈ R^α given by

    r^α = φ^α_enc(x^α).    (1)

Each task has also a decoder function φ^α_dec going in the other direction, from a common representation space h ∈ H to the label x^α:

    x^α = φ^α_dec(h).    (2)

The information r^0, r^1, . . . , r^K extracted from the data and the different tasks by the encoders is integrated in the shared representation h by using an integrator function Φ. Since this update operation is incremental, we associate to it an iteration number t = 0, 1, 2, . . . . By doing so, the update equation can be written as

    h_{t+1} = Φ(h_t, r^0, r^1_t, . . . , r^K_t).    (3)

Note that, in the equation above, r^0 is constant as the corresponding variable x^0 is the input of the network, which is grounded and not updated.

Overall, a task α is specified by the triplet T^α = (X^α, φ^α_enc, φ^α_dec) and by its contribution to the update rule (3). Full task modularity can be achieved by decomposing the integrator function as a sequence of task-specific updates h_{t+1} = Φ^K(·, r^K_t) ∘ · · · ∘ Φ^1(h_t, r^1_t), such that each task is a quadruplet (X^α, φ^α_enc, φ^α_dec, Φ^α), but this option is not investigated further here.
Given tasks T^α, α = 1, . . . , K, several variants of the recurrent architecture are possible. A natural one is to process tasks sequentially, but this has the added complication of having to choose a particular order and may in any case be suboptimal; instead, we propose to update all the tasks at each recurrent iteration, as follows:
t = 0 (ordinary multi-task prediction). At the first iteration, the measurement x^0 is acquired, encoded as r^0 = φ^0_enc(x^0), and the shared representation h is initialized as h_0 = Φ(ε, r^0, ε, . . . , ε). The symbol ε denotes the initial value of a variable (often zero in practice). Given h_0, the output x^α_0 = φ^α_dec(h_0) = (φ^α_dec ∘ φ^0_enc)(x^0) for each task is computed. This step corresponds to ordinary multi-task prediction, as discussed later (sect. 2.2).
t > 0 (iterative updates). Each task α = 1, . . . , K is re-encoded using r^α_t = φ^α_enc(x^α_t), the shared representation is updated using h_{t+1} = Φ(h_t, r^0, r^1_t, . . . , r^K_t), and the labels are predicted again using x^α_{t+1} = φ^α_dec(h_{t+1}).
The idea of feeding back the network output for further processing exists in several existing recurrent
architectures [16, 24]; however, in these cases it is used to process sequential data, passing back
the output obtained from the last process element in the sequence; here, instead, the feedback is
used to integrate different and complementary labelling tasks. Our model is also reminiscent of
encoder/decoder architectures [15, 21, 28]; however, in our case the encoder and decoder functions
are associated to the output labels rather than to the input data.
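To make the recurrence concrete, the following is a minimal Python sketch of the inference loop described above; the encoder, decoder, and integrator callables are illustrative placeholders, not the actual subnetworks used in the experiments.

    # Minimal sketch of multinet inference (eqs. 1-3); all callables are
    # hypothetical stand-ins for the task-specific subnetworks.
    def multinet_inference(x0, enc0, encoders, decoders, integrator, n_iters=2):
        r0 = enc0(x0)                              # r^0 = phi^0_enc(x^0)
        h = r0                                     # h_0 initialized from the data
        labels = [dec(h) for dec in decoders]      # t = 0: ordinary multi-task prediction
        for _ in range(n_iters):                   # t > 0: iterative updates
            rs = [enc(x) for enc, x in zip(encoders, labels)]
            h = integrator(h, r0, *rs)             # eq. (3)
            labels = [dec(h) for dec in decoders]  # x^alpha_{t+1} = phi^alpha_dec(h_{t+1})
        return labels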
2.2 Ordinary multi-task learning
Ordinarily, multiple-task learning [5, 25, 1] is based on sharing features or parameters between different tasks. Multinet reduces to ordinary multi-task learning when there is no recurrence. At the first iteration t = 0, in fact, multinet simply evaluates K predictor functions φ^1_dec ∘ φ^0_enc, . . . , φ^K_dec ∘ φ^0_enc, one for each task, which share the common subnetwork φ^0_enc.
While multi-task learning from representation sharing is conceptually simple, it is practically important because it allows learning a universal representation function φ^0_enc which works well for all
tasks simultaneously. The possibility of learning such a polyvalent representation, which can only
be verified empirically, is a non-trivial and useful fact. In particular, in our experiments in image
understanding (sect. 4), we will see that, for certain image analysis tasks, it is not only possible and
efficient to learn such a shared representation, but that in some cases feature sharing can even improve
the performance in the individual sub-problems.
3 A multinet for classification, localization, and part detection
In this section we instantiate multinet for three complementary tasks in computer vision: object
classification, object detection, and part detection. The main advantage of multinet compared to
ordinary multi-task prediction is that, while sharing parameters across related tasks may improve
generalization [5], it is not enough to capture correlations in the task input spaces. For example,
in our computer vision application ordinary multi-task prediction would not be able to ensure that
the detected parts are contained within a detected object. Multinet can instead capture interactions
between the different labels and potentially learn to enforce such constraints. The latter is done in
a soft and distributed manner, by integrating back the output of the individual tasks in the shared
representation.
Next, we discuss in some detail the specific architecture components used in our application. As a
starting point we consider a standard CNN for image classification. While more powerful networks
exist, we choose here a good performing model which is at the same time reasonably efficient to
train and evaluate, namely the VGG-M-1024 network of [6]. This model is pre-trained for image
classification from the ImageNet ILSVRC 2012 data [23] and was extended in [11] to object detection;
here we follow such blueprints, and in particular the Fast R-CNN method of [11], to design the
subnetworks for the three tasks. These components are described in some detail below, first focusing
on the components corresponding to ordinary multi-task prediction, and then moving to the ones used
for multiple task integration.
Ordinary multiple-task components. The first several layers of the VGG-M network can be
grouped in five convolutional sections, each comprising linear convolution, a non-linear activation
function and, in some cases, max pooling and normalization. These are followed by three fully-connected sections, which are the same as the convolutional ones, but with filter support of the same
size as the corresponding input. The last layer is softmax and computes a posterior probability vector
over the 1,000 ImageNet ILSVRC classes.
VGG-M is adapted for the different tasks as follows. For clarity, we use symbolic names for the tasks rather than numeric indexes, and consider α ∈ {img, cls, det, part} instead of α ∈ {0, 1, 2, 3}. The five convolutional sections of VGG-M are used as the image encoder φ^img_enc and hence compute the initial value h_0 of the shared representation. Cutting VGG-M at the level of the last convolutional layer is motivated by the fact that the fully-connected layers remove or at least dramatically blur spatial information, whereas we would like to preserve it for object and part localization. Hence, the shared representation is a tensor h ∈ R^{H×W×C}, where H × W are the spatial dimensions and C is the number of feature channels as determined by the VGG-M configuration (see sect. 4).
Next, φ^img_enc is branched off in three directions, choosing a decoder φ^α_dec for each task: image classification (α = cls), object detection (α = det), and part detection (α = part). For the image classification branch, we choose φ^cls_dec as the rest of the original VGG-M network for image classification. In other words, the decoder function φ^cls_dec for the image-level labels is initialized to be the same as the fully-connected layers of the original VGG-M, such that φ_VGG-M = φ^cls_dec ∘ φ^img_enc. There are however two differences. The first is that the last fully-connected layer is reshaped and reinitialized randomly to predict a different number C^cls of possible objects instead of the 1,000 ImageNet classes. The second difference is that the final output is a vector of binary probabilities obtained using a sigmoid instead of a softmax.
The object and part detection decoders are instead based on the Fast R-CNN architecture [11], and classify individual image regions as belonging to one of the object classes (part types) or background. To do so, the Selective Search Windows (SSW) method [26] is used to generate a shortlist of M region (bounding box) proposals B(x^img) = {b_1, . . . , b_M} from the image x^img; this set is inputted to the spatial pyramid pooling (SPP) layer [14, 11] φ^SPP_dec(h, B(x^img)), which extracts subsets of the feature map h in correspondence of each region using max pooling. The object detection decoder (and similarly for the part detector) is then given by φ^det_dec(h) = φ̄^det_dec(φ^SPP_dec(h, B(x^img))), where φ̄^det_dec contains fully-connected layers initialized in the same manner as the classification decoder above (hence, before training one also has φ_VGG-M = φ̄^det_dec ∘ φ^img_enc). The exception is once more the last layer, reshaped and reinitialized as needed, whereas a softmax is still used as regions can have only one class.
So far, we have described the image encoder φ^img_enc and the decoder branches φ^cls_dec, φ^det_dec and φ^part_dec for the three tasks. Such components are sufficient for ordinary multi-task learning, corresponding to the initial multinet iteration. Next, we specify the components that allow to iterate multinet several times.
Recurrent components: integrating multiple tasks. For task integration, we need to construct the encoder functions φ^cls_enc, φ^det_enc and φ^part_enc for each task as well as the integrator function Φ. While several constructions are possible, here we experiment with simple ones.
In order to encode the image label x^cls, the encoder r^cls = φ^cls_enc(x^cls) takes the vector of C^cls binary probabilities x^cls ∈ R^{C^cls}, one for each of the C^cls possible object classes, and broadcasts the corresponding values to all H × W spatial locations (u, v) in h. Formally, r^cls ∈ R^{H×W×C^cls} and

    ∀ u, v, c :  r^cls_{uvc} = x^cls_c.
Encoding the object detection label x^det is similar, but reflects the geometric information captured by such labels. In particular, each bounding box b_m of the M extracted by SSW is associated to a vector of C^cls + 1 probabilities (one for each object class plus one more for background) x^det_m ∈ R^{C^cls+1}. This is decoded into a heat map r^det ∈ R^{H×W×(C^cls+1)} by max pooling across boxes:

    ∀ u, v, c :  r^det_{uvc} = max { x^det_{mc} : m such that (u, v) ∈ b_m } ∪ {0}.

The part label x^part is encoded in an entirely analogous manner.
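As a concrete illustration, the two label encoders just described can be sketched in a few lines of numpy; the shapes and the (u0, v0, u1, v1) box representation are assumptions made for the example, not the exact implementation.

    import numpy as np

    def encode_cls(x_cls, H, W):
        # broadcast the C^cls class probabilities to every location (u, v)
        return np.broadcast_to(x_cls, (H, W, x_cls.shape[0])).copy()

    def encode_det(x_det, boxes, H, W):
        # x_det: (M, C^cls + 1) per-box probabilities; boxes: (M, 4) corners
        # heat map by max pooling across the boxes containing each location
        r_det = np.zeros((H, W, x_det.shape[1]))
        for m, (u0, v0, u1, v1) in enumerate(boxes):
            r_det[u0:u1, v0:v1, :] = np.maximum(r_det[u0:u1, v0:v1, :], x_det[m])
        return r_det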
Lastly, we need to construct the integrator function Φ. We experiment with two simple designs. The first one simply stacks evidence from the different sources: h = stack(r^img, r^cls, r^det, r^part). Then the update equation is given by

    h_t = Φ(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t) = stack(r^img, r^cls_t, r^det_t, r^part_t).    (4)

Note that this formulation requires modifying the first fully-connected layers of each decoder φ̄^cls_dec, φ̄^det_dec and φ̄^part_dec, as the shared representation h has now C + 2C^cls + C^part + 2 channels instead of just C as for the original VGG-M architecture. This is done by initializing randomly the additional dimensions in the linear maps.

Figure 3: Illustration of the multinet instantiation tackling three computer vision problems: image classification, object detection, and part detection.
We also experiment with a second update equation

    h_t = Φ(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t) = ReLU(A ∗ stack(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t)),    (5)

where A ∈ R^{1×1×(2C+2C^cls+C^part+2)×C} is a filter bank whose purpose is to reduce the stacked representation back to the original C channels. This is a useful design as it maintains the same representation dimensionality regardless of the number of tasks added. However, due to the compression, it may perform less well.
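For concreteness, the two update rules can be sketched as follows; the channel-last layout and the dense matrix standing in for the 1×1 filter bank A are simplifying assumptions.

    import numpy as np

    def integrate_stack(r_img, r_cls, r_det, r_part):
        # update (4): simply stack all evidence along the channel axis
        return np.concatenate([r_img, r_cls, r_det, r_part], axis=-1)

    def integrate_bottleneck(h_prev, r_img, r_cls, r_det, r_part, A):
        # update (5): stack, then compress back to C channels via the bank A
        stacked = np.concatenate([h_prev, r_img, r_cls, r_det, r_part], axis=-1)
        return np.maximum(stacked @ A, 0.0)  # A: (total_channels, C); ReLU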
4 Experiments
4.1 Implementation details and training
The image encoder φ^img_enc is initialized from the pre-trained VGG-M model using sections conv1 to conv5. If the input to the network is an RGB image x^img ∈ R^{H^img×W^img×3}, then, due to downsampling, the spatial dimensions H × W of r^img = φ^img_enc(x^img) are H ≈ H^img/16 and W ≈ W^img/16. The number of feature channels is C = 512. As noted above, the decoders contain respectively subnetworks φ̄^cls_dec, φ̄^det_dec, and φ̄^part_dec comprising layers fc6 and fc7 from VGG-M, followed by a randomly-initialized linear predictor with output dimension equal to, respectively, C^cls, C^cls + 1, and C^part + 1. Max pooling in SPP is performed in a grid of 6 × 6 spatial bins as in [14, 11]. The task encoders φ^cls_enc, φ^det_enc, φ^part_enc are given in sect. 2 and contain no parameter.
For training, each task is associated with a corresponding loss function. For the classification task, the
objective is to minimize the sum of negative posterior log-probabilities of whether the image contains
a certain object type or not (this allows different objects to be present in a single image). Combined
with the fact that the classification branch uses sigmoid, this is the same as binary logistic regression.
For the object and part detection tasks, decoders are optimized to classify the target regions as one of
the C^cls or C^part classes or background (unlike image-level labels, classes in region-level labels are
mutually exclusive). Furthermore, we also train a branch performing bounding box refinement to
improve the fit of the selective search region as proposed by [11].
The fully connected layers used for softmax classification and bounding-box regression in object
and part detection tasks are initialized from zero-mean Gaussian distributions with 0.01 and 0.001
standard deviations respectively. The fully connected layers used for object classification task and the
adaptation layer A (see eq. 5) are initialized with zero-mean Gaussian with 0.01 standard deviation.
All layers use a learning rate of 1 for filters and 2 for biases. We used SGD to optimize the parameters
with a learning rate of 0.001 for 6 epochs and lower it to 0.0001 for another 6 epochs. We observe that
running two iterations of recursion is sufficient to reach 99% of the performance, although marginal
gains are possible with more. We use the publicly available CNN toolbox MatConvNet [27] in our
experiments.
4.2 Results
In this section, we describe and discuss experimental results of our models in two benchmarks.
PASCAL VOC 2010 [10] and Parts [7]: The dataset contains 4998 training and 5105 validation
images for 20 object categories and ground truth bounding box annotations for target categories. We
use the PASCAL-Part dataset [7] to obtain bounding box annotations of object parts which consists
of 193 annotated part categories such as aeroplane engine, bicycle back-wheel, bird left-wing, person
right-upper-leg. After removing annotations that are smaller than 20 pixels on one side and the
categories with less than 50 training samples, the number of part categories reduces to 152. The
dataset provides annotations for only training and validation splits, thus we train our models in the
train split and report results in the validation split for all the tasks. We follow the standard PASCAL
VOC evaluation and report average precision (AP) and AP at 50% intersection-over-union (IoU) of
the detected boxes with the ground-truth ones for object classification and detection respectively. For the
part detection, we follow [7] and report AP at a more relaxed 40% IoU threshold. The results for the
tasks are reported in tab. 1.
In order to establish the first baseline, we train an independent network for each task. Each network
is initialized with the VGG-M model, the last classification and regression layers are initialized with
random noise and all the layers are fine-tuned for the respective task. For object and part detection,
we use our implementation of Fast-RCNN [11]. Note that, for consistency between the baselines and
our method, minimum dimension of each image is scaled to be 600 pixels for all the tasks including
object classification. An SPP layer is employed to scale the feature map into 6 × 6 dimensionality.
For the second baseline, we train a multi-task network that shares the convolutional layers across the
tasks (this setting is called ordinary multi-task prediction in sect. 2.1). We observe in tab. 1 that the
multi-task model performs comparable or better than the independent networks, while being more
efficient due to the shared convolutional computations. Since the training images are the same in all
cases, this shows that just combining multiple labels together improves efficiency and in some cases
even performance.
Finally we test the full multinet model for two settings defined as update rules (1) and (2), corresponding to eq. 4 and 5 respectively. We first see that both models outperform the independent
networks and multi-task network as well. This is remarkable because our model consists of smaller
number of parameters than the sum of three independent networks and yet our best model (update 1)
consistently outperforms them by roughly 1.5 points in mean AP. Furthermore, multinet improves
over the ordinary multi-task prediction by exploiting the correlations in the solutions of the individual
tasks. In addition, we observe that update (1) performs better than update (2) that constraints the
shared representation space to 512 dimensions regardless of the number of tasks, as it can be expected
due to the larger capacity. Nevertheless, even with the bottleneck we observe improvements compared
to ordinary multi-task prediction.
We also run a test case to verify whether multinet learns to mix information extracted by the various tasks as presumed. To do so, we check whether the predictions of the tasks improve when ground-truth labels are supplied at test time: we ground the classification label r^cls in the first iteration of multinet to the ground-truth class labels and read the predictions after one iteration. The performances in the three tasks expectedly improve to 90.1, 58.9 and 39.2 respectively. This shows that the feedback on the class information has a strong effect on class prediction itself, and a more modest but nevertheless significant effect on the other tasks as well.
PASCAL VOC 2007 [10]: The dataset consists of 2501 training, 2510 validation, and 5011 test
images containing bounding box annotations for 20 object categories. There is no part annotations
available for this dataset, thus, we exclude the part detection task and run the same baselines and
our best model for object classification and detection. The results are reported for the test split and
depicted in tab. 2. Note that our RCNN for the individual networks obtains the same detection score
Method / Task            classification   object-detection   part-detection
Independent              76.4             55.5               37.3
Multi-task               76.2             57.1               37.2
Ours                     77.4             57.5               38.8
Ours (with bottleneck)   76.8             57.3               38.5

Table 1: Object classification, detection and part detection results in the PASCAL VOC 2010 validation split.
Method / Task   classification   object-detection
Independent     78.7             59.2
MTL             78.9             60.4
Ours            79.8             61.3

Table 2: Object classification and detection results in the PASCAL VOC 2007 test split.
in [11]. In parallel to the former results, our method consistently outperforms both the baselines in
classification and detection tasks.
5 Conclusions
In this paper, we have presented multinet, a recurrent neural network architecture to solve multiple
perceptual tasks in an efficient and coordinated manner. In addition to feature and parameter sharing,
which is common to most multi-task learning methods, multinet combines the output of the different
tasks by updating a shared representation iteratively.
Our results are encouraging. First, we have shown that such architectures can successfully integrate multiple tasks by sharing a large subset of the data representation while matching or even outperforming specialised networks. Second, we have shown that the iterative update of a common representation is an effective method for sharing information between different tasks, which further improves performance.
Acknowledgments
This work acknowledges the support of the ERC Starting Grant Integrated and Detailed Image
Understanding (EP/L024683/1).
References
[1] J. Baxter. A model of inductive bias learning. J. Artif. Intell. Res. (JAIR), 12(149-198):3, 2000.
[2] V. Belagiannis and A. Zisserman. Recurrent human pose estimation. arXiv preprint arXiv:1605.02914, 2016.
[3] L. Breiman and J. H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(1):3-54, 1997.
[4] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. CVPR, 2016.
[5] R. Caruana. Multitask learning. Machine Learning, 28(1), 1997.
[6] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
[7] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. L. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, pages 1971-1978, 2014.
[8] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016.
[9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013.
[10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. IJCV, 88(2):303-338, 2010.
[11] R. Girshick. Fast R-CNN. In ICCV, 2015.
[12] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855-868, 2009.
[13] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645-6649. IEEE, 2013.
[14] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pages 346-361, 2014.
[15] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, 2006.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[17] T. Mikolov. Statistical Language Models Based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
[18] T. M. Mitchell and S. B. Thrun. Explanation-based neural network learning for robot control. NIPS, pages 287-287, 1993.
[19] M. Najibi, M. Rastegari, and L. S. Davis. G-CNN: an iterative grid based object detector. CVPR, 2016.
[20] P. H. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene parsing. arXiv preprint arXiv:1306.2795, 2013.
[21] M. A. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In CVPR, pages 1-8, 2007.
[22] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5(3):1, 1988.
[23] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F. F. Li. ImageNet large scale visual recognition challenge. IJCV, 2015.
[24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112, 2014.
[25] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer Academic Publishers, 1998.
[26] K. van de Sande, J. Uijlings, T. Gevers, and A. Smeulders. Segmentation as selective search for object recognition. In ICCV, 2011.
[27] A. Vedaldi and K. Lenc. MatConvNet - convolutional neural networks for MATLAB. In Proceeding of the ACM Int. Conf. on Multimedia, 2015.
[28] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096-1103. ACM, 2008.
[29] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013.
[30] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja. Robust visual tracking via structured multi-task sparse learning. IJCV, 101(2):367-383, 2013.
[31] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, pages 94-108. Springer, 2014.
5,962 | 6,394 | Structured Matrix Recovery via the Generalized
Dantzig Selector
Sheng Chen
Arindam Banerjee
Dept. of Computer Science & Engineering
University of Minnesota, Twin Cities
{shengc,banerjee}@cs.umn.edu
Abstract
In recent years, structured matrix recovery problems have gained considerable
attention for their real-world applications, such as recommender systems and computer
vision. Much of the existing work has focused on matrices with low-rank structure,
and limited progress has been made on matrices with other types of structure. In
this paper we present non-asymptotic analysis for estimation of generally structured
matrices via the generalized Dantzig selector based on sub-Gaussian measurements.
We show that the estimation error can always be succinctly expressed in terms of a
few geometric measures such as Gaussian widths of suitable sets associated with
the structure of the underlying true matrix. Further, we derive general bounds on
these geometric measures for structures characterized by unitarily invariant norms,
a large family covering most matrix norms of practical interest. Examples are
provided to illustrate the utility of our theoretical development.
1 Introduction
Structured matrix recovery has found a wide spectrum of applications in the real world, e.g., recommender systems [22], face recognition [9], etc. The recovery of an unknown structured matrix Θ* ∈ R^{d×p} essentially needs to consider two aspects: the measurement model, i.e., what kind of information about the unknown matrix is revealed from each measurement, and the structure of the underlying matrix, e.g., sparse, low-rank, etc. In the context of structured matrix estimation and recovery, a widely used measurement model is the linear measurement, i.e., one has access to n observations of the form y_i = ⟨⟨Θ*, X_i⟩⟩ + η_i, where ⟨⟨·, ·⟩⟩ denotes the matrix inner product, i.e., ⟨⟨A, B⟩⟩ = Tr(A^T B) for any A, B ∈ R^{d×p}, and the η_i's are additive noise. In the literature, various types of measurement matrices X_i have been investigated, for example, the Gaussian ensemble where X_i consists of i.i.d. standard Gaussian entries [11], and the rank-one projection model where X_i is randomly generated with the constraint rank(X_i) = 1 [7]. A special case of rank-one projection is the matrix completion model [8], in which X_i has a single entry equal to 1 with all the rest set to 0, i.e., y_i takes the value of one entry of Θ* at each measurement. Other measurement models include row-and-column affine measurement [34], exponential family matrix completion [21, 20], etc.
Previous work has shown that low-complexity structure of Θ*, often captured by a small value of some norm R(·), can significantly benefit its recovery [11, 26]. For instance, one of the popular structures of Θ* is low rank, which can be approximated by a small value of the trace norm (i.e., nuclear norm) ‖·‖_tr. Under the low-rank assumption on Θ*, recovery guarantees have been established for different measurement matrices using convex programs, e.g., the trace-norm regularized least-squares estimator [10, 27, 26, 21],

    min_{Θ∈R^{d×p}}  (1/2) Σ_{i=1}^n (y_i − ⟨⟨X_i, Θ⟩⟩)² + β_n ‖Θ‖_tr,    (1)
and constrained trace-norm minimization estimators [10, 27, 11, 7, 20], such as

    min_{Θ∈R^{d×p}}  ‖Θ‖_tr   s.t.   ‖ Σ_{i=1}^n (⟨⟨X_i, Θ⟩⟩ − y_i) X_i ‖_op ≤ λ_n,    (2)

where β_n, λ_n are tuning parameters, and ‖·‖_op denotes the operator (spectral) norm. Among the convex approaches, the exact recovery guarantee of a matrix-form basis-pursuit [14] estimator was analyzed for the noiseless setting in [27], under a certain matrix-form restricted isometry property (RIP). In the presence of noise, [10] also used matrix RIP to establish the recovery error bound for both the regularized and constrained estimators, i.e., (1) and (2). In [7], a variant of estimator (2) was proposed and its recovery guarantee was built on a so-called restricted uniform boundedness (RUB) condition, which is more suitable for the rank-one projection based measurement model. Despite the fact that the low-rank structure has been well studied, only a few works extend to more general structures. In [26], the regularized estimator (1) was generalized by replacing the trace norm with a decomposable norm R(·) for other structures. [11] extended the estimator in [27] with ‖·‖_tr replaced by a norm from a broader class called the atomic norm, but the consistency of the estimator is only available when the noise vector is bounded.
In this work, we make two key contributions. First, we present a general framework for estimation of structured matrices via the generalized Dantzig selector (GDS) [12, 6] as follows

    Θ̂ = argmin_{Θ∈R^{d×p}} R(Θ)   s.t.   R*( Σ_{i=1}^n (⟨⟨X_i, Θ⟩⟩ − y_i) X_i ) ≤ λ_n,    (3)
in which R(·) can be any norm and its dual norm is R*(·). GDS has been studied in the context of structured vectors [12], so (3) can be viewed as a natural generalization to matrices. Note that the estimator (2) is a special case of the formulation above, as the operator norm is dual to the trace norm. Our deterministic analysis of the estimation error ‖Θ̂ − Θ*‖_F relies on a condition based on a suitable choice of λ_n and the restricted strong convexity (RSC) condition [26, 3]. By assuming sub-Gaussian X_i and η_i, we show that these conditions are satisfied with high probability, and the recovery error can be expressed in terms of certain geometric measures of sets associated with Θ*. Such a geometric characterization is inspired by related advances in recent years [26, 11, 3]. One key ingredient in such characterization is the Gaussian width [18], which measures the size of sets in R^{d×p}. Related advances can be found in [11, 12, 6], but they all rely on Gaussian measurements, to which classical concentration results [18] are directly applicable. In contrast, our work allows general sub-Gaussian measurement matrices and noise, by suitably using ideas from generic chaining [30], a powerful geometric approach to bounding stochastic processes. Our results can also be extended to heavy-tailed measurement and noise, following recent advances [28]. Recovery guarantees of the GDS were analyzed for general norms in the matrix completion setting [20], but that work differs from ours since its measurement model is not sub-Gaussian as we consider here.
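As an illustration of the estimator, a minimal convex-programming sketch of (3) for the trace-norm case (where the dual R* is the operator norm) could look as follows; it uses cvxpy and is meant as a reference sketch under these assumptions, not the solver used in the paper.

    import cvxpy as cp

    def gds_trace_norm(X, y, lam):
        # X: (n, d, p) measurement matrices, y: (n,) observations, lam: lambda_n
        n, d, p = X.shape
        Theta = cp.Variable((d, p))
        residual = sum((cp.trace(X[i].T @ Theta) - y[i]) * X[i] for i in range(n))
        constraints = [cp.norm(residual, 2) <= lam]  # operator (spectral) norm
        cp.Problem(cp.Minimize(cp.normNuc(Theta)), constraints).solve()
        return Theta.value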
Our second contribution is motivated by the fact that though certain existing analyses end up with
the geometric measures such as Gaussian widths, limited attention has been paid in bounding these
measures in terms of more easily understandable quantities especially for matrix norms. Here our key
novel contribution is deriving general bounds for those geometric measures for the class of unitarily
invariant norms, which are invariant under any unitary transformation, i.e., for any matrix Θ ∈ R^{d×p}, its norm value is equal to that of UΘV if both U ∈ R^{d×d} and V ∈ R^{p×p} are unitary matrices. The
widely-used trace norm, spectral norm and Frobenius norm all belong to this class. A well-known
result is that any unitarily invariant matrix norm is equivalent to some vector norm applied on the
set of singular values [23] (see Lemma 1 for details), and this equivalence allows us to build on the
techniques developed in [13] for vector norms to derive the bounds of the geometric measures for
unitarily invariant norms. Previously these general bounds were not available in the literature for the
matrix setting, and bounds were only in terms of the geometric measures, which can be hard to interpret
or bound in terms of understandable quantities. We illustrate concrete versions of the general bounds
using the trace norm and the recently proposed spectral k-support norm [24].
The rest of the paper is organized as follows: we first provide the deterministic analysis in Section 2.
In Section 3, we introduce some probability tools, which are used in the later analysis. In Section 4,
we present the probabilistic analysis for sub-Gaussian measurement matrices and noise, along with
the general bounds of the geometric measures for unitarily invariant norms. Section 5 is dedicated to
the examples for the application of general bounds, and we conclude in Section 6.
2 Deterministic Recovery Guarantees
To evaluate the performance of GDS (3), we focus on the Frobenius-norm error, i.e., ‖Θ̂ − Θ*‖_F.
Throughout the paper, w.l.o.g. we assume that d ≤ p. For convenience, we denote the collection of the X_i's by X = {X_i}_{i=1}^n, and let η = [η_1, η_2, . . . , η_n]^T be the noise vector. In the following theorem, we provide a deterministic bound for ‖Θ̂ − Θ*‖_F under some standard assumptions on λ_n and X.
Theorem 1. Define the set E_R(Θ*) = cone{ Δ ∈ R^{d×p} | R(Δ + Θ*) ≤ R(Θ*) }. Assume that

    λ_n ≥ R*( Σ_{i=1}^n η_i X_i ),   and   Σ_{i=1}^n ⟨⟨X_i, Δ⟩⟩² / ‖Δ‖²_F ≥ α > 0,  ∀ Δ ∈ E_R(Θ*).    (4)

Then the estimation error ‖Θ̂ − Θ*‖_F satisfies

    ‖Θ̂ − Θ*‖_F ≤ 2 Ψ_R(Θ*) λ_n / α,    (5)

where Ψ_R(·) is the restricted compatibility constant, defined as Ψ_R(Θ*) = sup_{Δ∈E_R(Θ*)} R(Δ)/‖Δ‖_F.
The proof is deferred to the supplement. The convex cone E_R(Θ*) plays an important role in characterizing the error bound, and its geometry is determined by R(·) and Θ*. The recovery bound assumes no knowledge of the norm R(·) and true matrix Θ*, thus allowing general structures. The second condition in (4) is often referred to as restricted strong convexity [26]. In this work, we are particularly interested in R(·) from the class of unitarily invariant matrix norms, which essentially satisfy the following property: R(Θ) = R(UΘV) for any Θ ∈ R^{d×p} and unitary matrices U ∈ R^{d×d}, V ∈ R^{p×p}. A useful result for such norms is given in Lemma 1 (see [23, 4] for details).
Lemma 1. Suppose that the singular values of a matrix Θ ∈ R^{d×p} are given by σ = [σ_1, σ_2, . . . , σ_d]^T. A unitarily invariant norm R : R^{d×p} → R can be characterized by some symmetric gauge function^1 f : R^d → R as R(Θ) = f(σ), and its dual norm is given by R*(Θ) = f*(σ).
As the sparsity of σ equals the rank of Θ, the class of unitarily invariant matrix norms is useful in structured low-rank matrix recovery and includes many widely used norms, e.g., the trace norm with f(·) = ‖·‖_1, the Frobenius norm with f(·) = ‖·‖_2, the Schatten p-norm with f(·) = ‖·‖_p, and the Ky Fan k-norm, for which f(·) is the ℓ_1 norm of the k largest elements in magnitude, etc.
Before proceeding with the analysis, we introduce some notation. For the rest of the paper, we denote by σ(Θ) ∈ R^d the vector of singular values (sorted in descending order) of a matrix Θ ∈ R^{d×p}, and may use the shorthand σ* for σ(Θ*). For any θ ∈ R^d, we define the corresponding |θ|↓ by arranging the absolute values of the elements of θ in descending order. Given any matrix Θ ∈ R^{d×p} and subspace M ⊆ R^{d×p}, we denote by Θ_M the orthogonal projection of Θ onto M. Besides, we let colsp(Θ) (rowsp(Θ)) be the subspace spanned by the columns (rows) of Θ. The notation S^{dp−1} represents the unit sphere of R^{d×p}, i.e., the set {Θ | ‖Θ‖_F = 1}. The unit ball of the norm R(·) is denoted by Ω_R = {Θ | R(Θ) ≤ 1}. Throughout the paper, the symbols c, C, c_0, C_0, etc., are reserved for universal constants, which may be different at each occurrence.

In the rest of our analysis, we will frequently use the so-called ordered weighted ℓ_1 (OWL) norm on R^d [17], defined as ‖θ‖_w ≜ ⟨|θ|↓, |w|↓⟩, where w ∈ R^d is a predefined weight vector. Noting that the OWL norm is a symmetric gauge, we define the spectral OWL norm of Θ as ‖Θ‖_w ≜ ‖σ(Θ)‖_w, i.e., the OWL norm applied to σ(Θ).
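For reference, the OWL norm and its spectral version can be computed directly from these definitions; the numpy sketch below is purely illustrative.

    import numpy as np

    def owl_norm(theta, w):
        # <|theta|_down, |w|_down>: sorted absolute values, descending
        return np.sort(np.abs(theta))[::-1] @ np.sort(np.abs(w))[::-1]

    def spectral_owl_norm(Theta, w):
        # apply the OWL norm to the singular values sigma(Theta)
        sigma = np.linalg.svd(Theta, compute_uv=False)  # already descending
        return owl_norm(sigma, w)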
3 Background and Preliminaries
The tools for our probabilistic analysis include the notion of Gaussian width [18], sub-Gaussian
random matrices, and generic chaining [30]. Here we briefly introduce the basic ideas and results for
each of them as needed for our analysis.
^1 A symmetric gauge function is a norm that is invariant under sign-changes and permutations of the elements.
3.1 Gaussian width and sub-Gaussian random matrices
The Gaussian width can be defined for any subset A ⊆ R^{d×p} as follows [18, 19],

    w(A) ≜ E_G [ sup_{Z∈A} ⟨⟨G, Z⟩⟩ ],    (6)

where G is a random matrix with i.i.d. standard Gaussian entries, i.e., G_ij ∼ N(0, 1). The Gaussian width essentially measures the size of the set A, and some of its properties can be found in [11, 1].
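Since the Gaussian width rarely has a closed form, a simple Monte Carlo estimate follows directly from (6); restricting to a finite set A is an assumption made here for illustration.

    import numpy as np

    def gaussian_width_mc(A, d, p, n_samples=1000, seed=0):
        # A: iterable of d x p matrices; estimates E_G[sup_{Z in A} <<G, Z>>]
        rng = np.random.default_rng(seed)
        draws = []
        for _ in range(n_samples):
            G = rng.standard_normal((d, p))
            draws.append(max(float(np.sum(G * Z)) for Z in A))
        return float(np.mean(draws))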
A random matrix X is sub-Gaussian with |||X|||_{ψ2} ≤ κ if |||⟨⟨X, Z⟩⟩|||_{ψ2} ≤ κ for any Z ∈ S^{dp−1}, where the ψ2 norm of a sub-Gaussian random variable x is defined as |||x|||_{ψ2} = sup_{q≥1} q^{−1/2} (E|x|^q)^{1/q} (see [31] for more details of the ψ2 norm). One nice property of a sub-Gaussian random variable is its thin tail, i.e., P(|x| > ε) ≤ e · exp(−c ε² / |||x|||²_{ψ2}), in which c is a constant.
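The tail bound is easy to check numerically for a standard Gaussian variable, whose ψ2 norm is an absolute constant; the quick check below assumes c = 1/2, which suffices in this case.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)
    for eps in (1.0, 2.0, 3.0):
        empirical = np.mean(np.abs(x) > eps)
        bound = np.e * np.exp(-0.5 * eps ** 2)  # e * exp(-c eps^2), c = 1/2
        print(f"eps={eps}: P(|x| > eps) = {empirical:.4f} <= {bound:.4f}")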
3.2 Generic chaining
Generic chaining is a powerful tool for bounding the suprema of stochastic processes [30]. Suppose {Z_t}_{t∈T} is a centered stochastic process, where each Z_t is a centered random variable. We assume the index set T is endowed with some metric s(·, ·). In order to use the generic chaining bound, the critical condition for {Z_t}_{t∈T} to satisfy is that, for any u, v ∈ T, P(|Z_u − Z_v| ≥ ε) ≤ c_1 exp(−c_2 ε² / s²(u, v)), where c_1 and c_2 are constants. Under this condition, we have

    E[ sup_{t∈T} Z_t ] ≤ c_0 γ_2(T, s),    (7)

    P( sup_{u,v∈T} |Z_u − Z_v| ≥ C_1 (γ_2(T, s) + ε · diam(T, s)) ) ≤ C_2 exp(−ε²),    (8)

where diam(T, s) is the diameter of the set T w.r.t. the metric s(·, ·). (7) is often referred to as the generic chaining bound (see Theorems 2.2.18 and 2.2.19 in [30]), and (8) is Theorem 2.2.27 in [30]. The functional γ_2(T, s) essentially measures the geometric size of the set T under the metric s(·, ·). To avoid unnecessary complications, we omit the definition of γ_2(T, s) here (see Chapter 2 of [30] for an introduction if one is interested), but provide two of its properties below,

    γ_2(T, s_1) ≤ γ_2(T, s_2)  if s_1(u, v) ≤ s_2(u, v)  ∀ u, v ∈ T,    (9)
    γ_2(T, βs) = β · γ_2(T, s)  for any β > 0.    (10)
The important aspect of the γ_2-functional is the following majorizing measure theorem [29, 30].

Theorem 2. Given any Gaussian process {Y_t}_{t∈T}, define s(u, v) = √(E|Y_u − Y_v|²) for u, v ∈ T. Then γ_2(T, s) can be upper bounded by γ_2(T, s) ≤ C_0 E[ sup_{t∈T} Y_t ].

This theorem is essentially Theorem 2.4.1 in [30]. For our purpose, we simply focus on the Gaussian process {Y_Θ = ⟨⟨G, Θ⟩⟩}_{Θ∈A}, in which A ⊆ R^{d×p} and G is a standard Gaussian random matrix. Given Theorem 2, the metric s(U, V) = √(E|⟨⟨G, U − V⟩⟩|²) = ‖U − V‖_F. Therefore we have

    γ_2(A, ‖·‖_F) ≤ C_0 E[ sup_{Θ∈A} ⟨⟨G, Θ⟩⟩ ] = C_0 w(A).    (11)
4 Error Bounds with Sub-Gaussian Measurement and Noise
Though the deterministic recovery bound (5) in Section 2 applies to any measurement X and noise η as long as the assumptions in (4) are satisfied, it is of practical interest to express the bound in terms of the problem parameters, e.g., d, p and n, for random X and η sampled from some general and widely used family of distributions. For this work, we assume that the X_i's in X are i.i.d. copies of a zero-mean random matrix X, which is sub-Gaussian with |||X|||_{ψ2} ≤ κ for a constant κ, and that the noise η contains i.i.d. centered random variables with |||η_i|||_{ψ2} ≤ τ for a constant τ. In this section, we show that each quantity in (5) can be bounded using certain geometric measures associated with the true matrix Θ*. Further, we show that for unitarily invariant norms, the geometric measures can themselves be bounded in terms of d, p, n, and structures associated with Θ*.
4.1
Bounding restricted compatibility constant
Given the definition of the restricted compatibility constant in Theorem 1, it involves no randomness and purely depends on R(·) and the geometry of E_R(Θ*). Hence we directly work on its upper bound for unitarily invariant norms. In general, characterizing the error cone E_R(Θ*) is difficult, especially for non-decomposable R(·). To address the issue, we first define the seminorm below.

Definition 1 Given two orthogonal subspaces M₁, M₂ ⊆ ℝ^{d×p} and two vectors w, z ∈ ℝ^d, the subspace spectral OWL seminorm for ℝ^{d×p} is defined as ‖Θ‖_{w,z} ≜ ‖Θ_{M₁}‖_w + ‖Θ_{M₂}‖_z, where Θ_{M₁} and Θ_{M₂} are the orthogonal projections of Θ onto M₁ and M₂, respectively.

Next we will construct such a seminorm based on a subgradient θ* of the symmetric gauge f associated with R(·) at σ*, which can be obtained by solving the so-called polar operator [32]

    θ* ∈ argmax_{x : f*(x) ≤ 1} ⟨x, σ*⟩.    (12)
Given that σ* is sorted, w.l.o.g. we may assume that θ* is nonnegative and sorted, because ⟨σ*, θ*⟩ ≤ ⟨σ*, |θ*|↓⟩ and f*(θ*) = f*(|θ*|↓). Also, we denote by θ*_max (θ*_min) the largest (smallest) element of θ*, and define ρ = θ*_max/θ*_min (if θ*_min = 0, we define ρ = +∞). Throughout the paper, we will frequently use these notations. As shown in the lemma below, a constructed seminorm based on θ* will induce a set E′ that contains E_R(Θ*) and is considerably easier to work with.
Lemma 2 Assume that rank(Θ*) = r and its compact SVD is given by Θ* = UΣVᵀ, where U ∈ ℝ^{d×r}, Σ ∈ ℝ^{r×r} and V ∈ ℝ^{p×r}. Let θ* be any subgradient of f at σ*, w = [θ*₁, θ*₂, . . . , θ*ᵣ, 0, . . . , 0]ᵀ ∈ ℝ^d, z = [θ*_{r+1}, θ*_{r+2}, . . . , θ*_d, 0, . . . , 0]ᵀ ∈ ℝ^d, U = colsp(U) and V = rowsp(V), and define M₁, M₂ as M₁ = {Θ | colsp(Θ) ⊆ U, rowsp(Θ) ⊆ V}, M₂ = {Θ | colsp(Θ) ⊆ U^⊥, rowsp(Θ) ⊆ V^⊥}, where U^⊥, V^⊥ are the orthogonal complements of U and V respectively. Then the specified subspace spectral OWL seminorm ‖·‖_{w,z} satisfies

    E_R(Θ*) ⊆ E′ ≜ cone{Δ | ‖Δ + Θ*‖_{w,z} ≤ ‖Θ*‖_{w,z}}.
The proof is given in the supplementary. Based on the superset E′, we are able to bound the restricted compatibility constant for unitarily invariant norms via the following theorem.
Theorem 3 Assume there exist α₁ and α₂ such that the symmetric gauge f for R(·) satisfies f(δ) ≤ max{α₁‖δ‖₁, α₂‖δ‖₂} for any δ ∈ ℝ^d. Then given a rank-r Θ*, the restricted compatibility constant Ψ_R(Θ*) is upper bounded by

    Ψ_R(Θ*) ≤ 2Φ_f(r) + max{α₂, α₁(1 + ρ)√r},    (13)

where ρ = θ*_max/θ*_min, and Φ_f(r) = sup_{‖δ‖₀ ≤ r} f(δ)/‖δ‖₂ is called the sparse compatibility constant.
Remark: The assumption of Theorem 3 might seem cumbersome at first glance, but the different combinations of α₁ and α₂ give us more flexibility. In fact, it trivially covers two cases: α₂ = 0 along with f(·) ≤ α₁‖·‖₁ for any δ, and the other way around, α₁ = 0 along with f(·) ≤ α₂‖·‖₂.
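Since (13) is just arithmetic once the gauge constants are known, it is easy to evaluate numerically. Below is a hypothetical helper, not from the paper; Φ_f(r) must be supplied per norm, e.g. √r for the ℓ₁ gauge of the trace norm as worked out in Section 5.1.

```python
import math

def compat_bound(phi_f_r, alpha1, alpha2, rho, r):
    """Right-hand side of the bound (13)."""
    return 2.0 * phi_f_r + max(alpha2, alpha1 * (1.0 + rho) * math.sqrt(r))

# Trace norm (Section 5.1): alpha1 = 1, alpha2 = 0, rho = 1, Phi_f(r) = sqrt(r)
r = 5
print(compat_bound(math.sqrt(r), 1.0, 0.0, 1.0, r))  # 4*sqrt(r) ~ 8.94
```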
4.2
Bounding restricted convexity α
The second condition in (4) is equivalent to (1/n)∑_{i=1}^n ⟨⟨Xᵢ, Δ⟩⟩² ≥ α > 0, ∀ Δ ∈ E_R(Θ*) ∩ S^{dp−1}. In the following theorem, we express the restricted convexity α in terms of the Gaussian width.

Theorem 4 Assume that the Xᵢ's are i.i.d. copies of a centered isotropic sub-Gaussian random matrix X with |||X|||_{ψ₂} ≤ κ, and let A_R(Θ*) = E_R(Θ*) ∩ S^{dp−1}. With probability at least 1 − exp(−ηw²(A_R(Θ*))), the following inequality holds with absolute constants ζ and η,

    inf_{Δ∈A} (1/n) ∑_{i=1}^n ⟨⟨Xᵢ, Δ⟩⟩² ≥ 1 − ζκ² · w(A_R(Θ*))/√n.    (14)
The proof is essentially an application of generic chaining [30] and the following theorem from [25]. Related lines of work can be found in [15, 16, 5].
Theorem 5 (Theorem D in [25]) There exist absolute constants c₁, c₂, c₃ for which the following holds. Let (Ω, μ) be a probability space, H be a subset of the unit sphere of L₂(μ), i.e., H ⊆ S_{L₂} = {h : |||h|||_{L₂} = 1}, and assume sup_{h∈H} |||h|||_{ψ₂} ≤ κ. Then, for any θ > 0 and n ≥ 1 satisfying c₁κ γ₂(H, |||·|||_{ψ₂}) ≤ θ√n, with probability at least 1 − exp(−c₂θ²n/κ⁴), we have

    sup_{h∈H} | (1/n) ∑_{i=1}^n h²(Xᵢ) − 𝔼h² | ≤ θ.    (15)
Proof of Theorem 4: For simplicity, we use A as shorthand for A_R(Θ*). Let (Ω, μ) be the probability space that X is defined on, and construct

    H = { h(·) = ⟨⟨·, Δ⟩⟩ | Δ ∈ A }.

|||X|||_{ψ₂} ≤ κ immediately implies that sup_{h∈H} |||h|||_{ψ₂} ≤ κ. As X is isotropic, i.e., 𝔼[⟨⟨X, Δ⟩⟩²] = 1 for any Δ ∈ A ⊆ S^{dp−1}, we have H ⊆ S_{L₂} and 𝔼[h²] = 1 for any h ∈ H. Given h₁ = ⟨⟨·, Δ₁⟩⟩, h₂ = ⟨⟨·, Δ₂⟩⟩ ∈ H, where Δ₁, Δ₂ ∈ A, the metric induced by the ψ₂ norm satisfies |||h₁ − h₂|||_{ψ₂} = |||⟨⟨X, Δ₁ − Δ₂⟩⟩|||_{ψ₂} ≤ κ‖Δ₁ − Δ₂‖_F. Using the properties of the γ₂-functional and the majorizing measure theorem in Section 3, we have

    γ₂(H, |||·|||_{ψ₂}) ≤ κ γ₂(A, ‖·‖_F) ≤ κc₄ w(A),

where c₄ is an absolute constant. Hence, by choosing θ = c₁c₄κ²w(A)/√n, we can guarantee that the condition c₁κ γ₂(H, |||·|||_{ψ₂}) ≤ θ√n holds for H. Applying Theorem 5 to this H, with probability at least 1 − exp(−c₂c₁²c₄²w²(A)), we have sup_{h∈H} | (1/n) ∑_{i=1}^n h²(Xᵢ) − 1 | ≤ θ, which implies

    inf_{Δ∈A} (1/n) ∑_{i=1}^n ⟨⟨Xᵢ, Δ⟩⟩² ≥ 1 − θ.

Letting η = c₂c₁²c₄² and ζ = c₁c₄, we complete the proof.
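The concentration asserted by Theorem 4 is simple to observe empirically. The following toy check is our own sketch, under the assumption that the measurements are standard Gaussian (which is sub-Gaussian and isotropic): for a unit-Frobenius-norm Δ, the empirical quadratic form concentrates around 1 once n is large.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 15, 20, 2000
X = rng.standard_normal((n, d, p))      # i.i.d. isotropic measurements
Delta = rng.standard_normal((d, p))
Delta /= np.linalg.norm(Delta)          # Delta on the unit sphere S^{dp-1}

# (1/n) * sum_i <<X_i, Delta>>^2; its expectation is exactly 1 here
quad = np.mean(np.tensordot(X, Delta, axes=([1, 2], [0, 1])) ** 2)
print(quad)                             # close to 1 for large n
```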
The bound (14) involves the Gaussian width of the set A_R(Θ*), i.e., the error cone intersected with the unit sphere. For unitarily invariant R, the theorem below provides a general way to bound w(A_R(Θ*)).

Theorem 6 Under the setting of Lemma 2, let ρ = θ*_max/θ*_min and rank(Θ*) = r. The Gaussian width w(A_R(Θ*)) satisfies

    w(A_R(Θ*)) ≤ min{ √(dp), √((2ρ² + 1)(d + p − r)r) }.    (16)
The proof of Theorem 6 is included in the supplementary material, which relies on a few specific
properties of Gaussian random matrix [1, 11].
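For concreteness, the bound (16) can be evaluated numerically; the snippet below is a hypothetical helper of our own, where ρ = θ*_max/θ*_min as above.

```python
import math

def width_bound(d, p, r, rho):
    """Right-hand side of the bound (16)."""
    return min(math.sqrt(d * p),
               math.sqrt((2.0 * rho ** 2 + 1.0) * (d + p - r) * r))

print(width_bound(d=100, p=120, r=5, rho=1.0))  # low-rank branch is active
```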
4.3
Bounding regularization parameter λ_n

In view of Theorem 1, we should choose λ_n large enough to satisfy the condition in (4). Hence we derive an upper bound for the random quantity R*(∑_{i=1}^n ωᵢXᵢ), which holds with high probability.

Theorem 7 Assume that X = {Xᵢ}_{i=1}^n are i.i.d. copies of a centered isotropic sub-Gaussian random matrix X with |||X|||_{ψ₂} ≤ κ, and the noise ω consists of i.i.d. centered entries with |||ωᵢ|||_{ψ₂} ≤ τ. Let Ω_R be the unit ball of R(·) and φ = sup_{Δ∈Ω_R} ‖Δ‖_F. With probability at least 1 − exp(−c₁n) − c₂ exp(−w²(Ω_R)/c₃²φ²), the following inequality holds

    R*( ∑_{i=1}^n ωᵢXᵢ ) ≤ c₀κτ √n · w(Ω_R).    (17)
Proof: For each entry in ω, we have 𝔼[ωᵢ²] ≤ 2|||ωᵢ|||²_{ψ₂} = 2τ², and |||ωᵢ² − 𝔼[ωᵢ²]|||_{ψ₁} ≤ 2|||ωᵢ²|||_{ψ₁} ≤ 4|||ωᵢ|||²_{ψ₂} ≤ 4τ², where we use the definition of the ψ₂ norm and its relation to the ψ₁ norm [31]. By Bernstein's inequality, we get

    P(‖ω‖₂² − 2τ²n ≥ ε) ≤ P(‖ω‖₂² − 𝔼[‖ω‖₂²] ≥ ε) ≤ exp(−c₁ min{ε²/(16τ⁴n), ε/(4τ²)}).

Taking ε = 4τ²n, we have P(‖ω‖₂ ≥ τ√(6n)) ≤ exp(−c₁n). Denote Y_u = ∑_{i=1}^n uᵢXᵢ for u ∈ ℝⁿ. For any u ∈ Sⁿ⁻¹, we get |||Y_u|||_{ψ₂} ≤ cκ due to

    |||⟨⟨Y_u, Δ⟩⟩|||_{ψ₂} = ||| ∑_{i=1}^n uᵢ⟨⟨Xᵢ, Δ⟩⟩ |||_{ψ₂} ≤ c √( ∑_{i=1}^n uᵢ² |||⟨⟨Xᵢ, Δ⟩⟩|||²_{ψ₂} ) ≤ cκ for any Δ ∈ S^{dp−1}.

For the rest of the proof, we may drop the subscript of Y_u for convenience. We construct the stochastic process {Z_Δ = ⟨⟨Y, Δ⟩⟩}_{Δ∈Ω_R}, and note that any Z_U and Z_V from this process satisfy

    P(|Z_U − Z_V| ≥ ε) = P(|⟨⟨Y, U − V⟩⟩| ≥ ε) ≤ e · exp(−Cε²/κ²‖U − V‖²_F),

for some universal constant C due to the sub-Gaussianity of Y. As Ω_R is symmetric, it follows that

    sup_{U,V∈Ω_R} |Z_U − Z_V| = 2 sup_{Δ∈Ω_R} Z_Δ,    sup_{U,V∈Ω_R} ‖U − V‖_F = 2 sup_{Δ∈Ω_R} ‖Δ‖_F = 2φ.

Let s(·, ·) be the metric induced by the norm κ‖·‖_F and T = Ω_R. Using the deviation bound (8), we have

    P( 2 sup_{Δ∈Ω_R} Z_Δ ≥ c₄κ (γ₂(Ω_R, ‖·‖_F) + 2εφ) ) ≤ c₂ exp(−ε²),

where c₂ and c₄ are absolute constants. By (11), there exist constants c₃ and c₅ such that

    P( 2R*(Y) ≥ c₅κ (w(Ω_R) + ε) ) = P( 2 sup_{Δ∈Ω_R} Z_Δ ≥ c₅κ (w(Ω_R) + ε) ) ≤ c₂ exp(−ε²/c₃²φ²).

Letting ε = w(Ω_R), we have P(R*(Y_u) ≥ c₅κ w(Ω_R)) ≤ c₂ exp(−(w(Ω_R)/c₃φ)²) for any u ∈ Sⁿ⁻¹. Combining this with the bound for ‖ω‖₂ and letting c₀ = √6 c₅, by the union bound, we have

    P( R*( ∑_{i=1}^n ωᵢXᵢ ) ≥ c₀κτ√n · w(Ω_R) ) ≤ P( R*(Y_ω)/‖ω‖₂ ≥ c₅κ w(Ω_R) ) + P( ‖ω‖₂ ≥ τ√(6n) )

    ≤ sup_{u∈Sⁿ⁻¹} P( R*(Y_u) ≥ c₅κ w(Ω_R) ) + P( ‖ω‖₂ ≥ τ√(6n) ) ≤ c₂ exp(−w²(Ω_R)/c₃²φ²) + exp(−c₁n),

which completes the proof.
The theorem above shows that the lower bound for λ_n depends on the Gaussian width of the unit ball of R(·). Next we give its general bound for unitarily invariant matrix norms.

Theorem 8 Suppose that the symmetric gauge f associated with R(·) satisfies f(·) ≥ β‖·‖₁. Then the Gaussian width w(Ω_R) is upper bounded by

    w(Ω_R) ≤ (√d + √p)/β.    (18)
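Plugging (18) into (17) gives a concrete high-probability scale for λ_n. The helper below is hypothetical and suppresses the absolute constant c₀; it only illustrates how the scale depends on d, p, n, β, κ and τ.

```python
import math

def lambda_n_scale(d, p, n, beta, kappa=1.0, tau=1.0):
    """Scale of the lambda_n bound from (17)-(18), constants suppressed."""
    w_ball = (math.sqrt(d) + math.sqrt(p)) / beta   # bound (18) on w(Omega_R)
    return kappa * tau * math.sqrt(n) * w_ball      # bound (17), up to c_0

print(lambda_n_scale(d=100, p=120, n=5000, beta=1.0))
```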
5
Examples
Combining the results in Section 4, we have that if the number of measurements n > O(w²(A_R(Θ*))), then the recovery error, with high probability, satisfies ‖Θ̂ − Θ*‖_F ≤ O(Ψ_R(Θ*) w(Ω_R)/√n). Here we give two examples based on the trace norm [10] and the recently proposed spectral k-support norm [24] to illustrate how to bound the geometric measures and obtain the error bound.
5.1
Trace norm
The trace norm has been widely used in low-rank matrix recovery. The trace norm of Θ* is basically the ℓ₁ norm of σ*, i.e., f = ‖·‖₁. Now we turn to the three geometric measures. Assuming that rank(Θ*) = r ≪ d, one subgradient of ‖σ*‖₁ is θ* = [1, 1, . . . , 1]ᵀ.

Restricted compatibility constant Ψ_tr(Θ*): It is obvious that the assumption in Theorem 3 holds for f by choosing α₁ = 1 and α₂ = 0, and we have ρ = 1. The sparse compatibility constant Φ_{ℓ₁}(r) is √r because ‖δ‖₁ ≤ √r‖δ‖₂ for any r-sparse δ. Using Theorem 3, we have Ψ_tr(Θ*) ≤ 4√r.

Gaussian width w(A_tr(Θ*)): As ρ = 1, Theorem 6 implies that w(A_tr(Θ*)) ≤ √(3r(d + p − r)).

Gaussian width w(Ω_tr): Using Theorem 8 with β = 1, it is easy to see that w(Ω_tr) ≤ √d + √p.

Putting all the results together, we have ‖Θ̂ − Θ*‖_F ≤ O(√(rd/n) + √(rp/n)) with high probability when n > O(r(d + p − r)), which matches the bound in [8].
5.2
Spectral k-support norm
The k-support norm proposed in [2] is defined as

    ‖θ‖_k^{sp} ≜ inf{ ∑ᵢ ‖uᵢ‖₂ : ‖uᵢ‖₀ ≤ k, ∑ᵢ uᵢ = θ },    (19)

and its dual norm is simply given by ‖θ‖_k^{sp*} = ‖|θ|↓_{1:k}‖₂, the ℓ₂ norm of the k largest entries of |θ|. It is shown that the k-support norm has similar behavior to the elastic-net regularizer [33]. The spectral k-support norm (denoted by ‖·‖_{sk}) of Θ* is defined by applying the k-support norm to σ*, i.e., f = ‖·‖_k^{sp}, and it has demonstrated better performance than the trace norm in the matrix completion task [24]. For simplicity, we assume that rank(Θ*) = r = k and ‖σ*‖₂ = 1. One subgradient of ‖σ*‖_k^{sp} can be θ* = [σ*₁, σ*₂, . . . , σ*ᵣ, σ*ᵣ, . . . , σ*ᵣ]ᵀ.
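Since the dual norm above is just the ℓ₂ norm of the k largest-magnitude entries, it is trivial to compute, and the spectral version follows by applying it to the singular values. The sketch below is our own illustration, not code from the paper.

```python
import numpy as np

def k_support_dual(theta, k):
    """Dual k-support norm: l2 norm of the k largest-magnitude entries."""
    top_k = np.sort(np.abs(theta))[::-1][:k]
    return float(np.linalg.norm(top_k))

def spectral_k_support_dual(Theta, k):
    """Spectral version: apply the dual k-support norm to singular values."""
    s = np.linalg.svd(Theta, compute_uv=False)
    return k_support_dual(s, k)

print(k_support_dual(np.array([3.0, -1.0, 0.5, 2.0]), k=2))  # sqrt(9+4)
```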
Restricted compatibility constant Ψ_sk(Θ*): The following relation has been shown for the k-support norm in [2],

    max{‖·‖₂, ‖·‖₁/√k} ≤ ‖·‖_k^{sp} ≤ √2 max{‖·‖₂, ‖·‖₁/√k}.    (20)

Hence the assumption in Theorem 3 holds for α₁ = √(2/k) and α₂ = √2, and we have ρ = σ*₁/σ*ᵣ. The sparse compatibility constant Φ_k^{sp}(r) = Φ_k^{sp}(k) = 1 because ‖δ‖_k^{sp} = ‖δ‖₂ for any k-sparse δ. Using Theorem 3, we have Ψ_sk(Θ*) ≤ 2√2 + √2(1 + σ*₁/σ*ᵣ) = √2(3 + σ*₁/σ*ᵣ).
Gaussian width w(A_sk(Θ*)): Theorem 6 implies w(A_sk(Θ*)) ≤ √( r(d + p − r)(2σ*₁²/σ*ᵣ² + 1) ).

Gaussian width w(Ω_sk): The relation (20) for the k-support norm shown in [2] also implies that β = 1/√k = 1/√r. By Theorem 8, we get w(Ω_sk) ≤ √r(√d + √p).
Given the upper bounds for the geometric measures, with high probability, we have ‖Θ̂ − Θ*‖_F ≤ O(√(rd/n) + √(rp/n)) when n > O(r(d + p − r)). The spectral k-support norm was first introduced in [24], in which no statistical results are provided. Although [20] investigated the statistical aspects of the spectral k-support norm in the matrix completion setting, the analysis was quite different from our setting. Hence this error bound is new in the literature.
6
Conclusions
In this work, we present the recovery analysis for matrices with general structures, under the setting of sub-Gaussian measurements and noise. Based on generic chaining and the Gaussian width, the recovery guarantees can be succinctly summarized in terms of a few geometric measures. For the class of unitarily invariant norms, we also provide novel general bounds on these measures, which can significantly facilitate such analyses in the future.
Acknowledgements
The research was supported by NSF grants IIS-1563950, IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, NASA grant NNX12AQ39A, and gifts from Adobe, IBM, and Yahoo.
References
[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: Phase transitions in convex programs with random data. Inform. Inference, 3(3):224–294, 2014.
[2] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In NIPS, 2012.
[3] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In NIPS, 2014.
[4] R. Bhatia. Matrix Analysis. Springer, 1997.
[5] J. Bourgain, S. Dirksen, and J. Nelson. Toward a unified theory of sparse dimensionality reduction in Euclidean space. Geometric and Functional Analysis, 25(4):1009–1088, 2015.
[6] T. T. Cai, T. Liang, and A. Rakhlin. Geometrizing local rates of convergence for high-dimensional linear inverse problems. arXiv:1404.4408, 2014.
[7] T. T. Cai and A. Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics, 43(1):102–138, 2015.
[8] E. Candès and B. Recht. Exact matrix completion via convex optimization. Communications of the ACM, 55(6):111–119, 2012.
[9] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3):11:1–11:37, 2011.
[10] E. J. Candès and Y. Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):2342–2359, 2011.
[11] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[12] S. Chatterjee, S. Chen, and A. Banerjee. Generalized Dantzig selector: Application to the k-support norm. In Advances in Neural Information Processing Systems (NIPS), 2014.
[13] S. Chen and A. Banerjee. Structured estimation with atomic norms: General bounds and applications. In NIPS, pages 2908–2916, 2015.
[14] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Rev., 43(1):129–159, 2001.
[15] S. Dirksen. Dimensionality reduction with subgaussian matrices: a unified theory. arXiv:1402.3973, 2014.
[16] S. Dirksen. Tail bounds via generic chaining. Electron. J. Probab., 20, 2015.
[17] M. A. T. Figueiredo and R. D. Nowak. Ordered weighted l1 regularized regression with strongly correlated covariates: Theoretical aspects. In AISTATS, 2016.
[18] Y. Gordon. Some inequalities for Gaussian processes and applications. Israel Journal of Mathematics, 50(4):265–289, 1985.
[19] Y. Gordon. On Milman's inequality and random subspaces which escape through a mesh in Rⁿ. In Geometric Aspects of Functional Analysis, volume 1317 of Lecture Notes in Mathematics, pages 84–106. Springer, 1988.
[20] S. Gunasekar, A. Banerjee, and J. Ghosh. Unified view of matrix completion under general structural constraints. In NIPS, pages 1180–1188, 2015.
[21] S. Gunasekar, P. Ravikumar, and J. Ghosh. Exponential family matrix completion under structural constraints. In International Conference on Machine Learning (ICML), 2014.
[22] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[23] A. S. Lewis. The convex analysis of unitarily invariant matrix functions. Journal of Convex Analysis, 2(1-2):173–183, 1995.
[24] A. M. McDonald, M. Pontil, and D. Stamos. Spectral k-support norm regularization. In NIPS, 2014.
[25] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann. Reconstruction and subGaussian operators in asymptotic geometric analysis. Geometric and Functional Analysis, 17:1248–1282, 2007.
[26] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for the analysis of regularized M-estimators. Statistical Science, 27(4):538–557, 2012.
[27] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[28] V. Sivakumar, A. Banerjee, and P. Ravikumar. Beyond sub-Gaussian measurements: High-dimensional structured estimation with sub-exponential designs. In NIPS, pages 2206–2214, 2015.
[29] M. Talagrand. A simple proof of the majorizing measure theorem. Geometric & Functional Analysis GAFA, 2(1):118–125, 1992.
[30] M. Talagrand. Upper and Lower Bounds for Stochastic Processes. Springer, 2014.
[31] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, chapter 5, pages 210–268. Cambridge University Press, 2012.
[32] X. Zhang, Y. Yu, and D. Schuurmans. Polar operators for structured sparse estimation. In NIPS, 2013.
[33] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
[34] O. Zuk and A. Wagner. Low-rank matrix recovery from row-and-column affine measurements. In International Conference on Machine Learning (ICML), 2015.
Unsupervised Feature Extraction by
Time-Contrastive Learning and Nonlinear ICA
Aapo Hyvärinen1,2 and Hiroshi Morioka1
1
Department of Computer Science and HIIT
University of Helsinki, Finland
2
Gatsby Computational Neuroscience Unit
University College London, UK
Abstract
Nonlinear independent component analysis (ICA) provides an appealing framework
for unsupervised feature learning, but the models proposed so far are not identifiable.
Here, we first propose a new intuitive principle of unsupervised deep learning
from time series which uses the nonstationary structure of the data. Our learning
principle, time-contrastive learning (TCL), finds a representation which allows
optimal discrimination of time segments (windows). Surprisingly, we show how
TCL can be related to a nonlinear ICA model, when ICA is redefined to include
temporal nonstationarities. In particular, we show that TCL combined with linear
ICA estimates the nonlinear ICA model up to point-wise transformations of the
sources, and this solution is unique, thus providing the first identifiability result
for nonlinear ICA which is rigorous, constructive, as well as very general.
1
Introduction
Unsupervised nonlinear feature learning, or unsupervised representation learning, is one of the
biggest challenges facing machine learning. Various approaches have been proposed, many of them
in the deep learning framework. Some of the most popular methods are multi-layer belief nets and
Restricted Boltzmann Machines [13] as well as autoencoders [14, 31, 21], which form the basis for
the ladder networks [30]. While some success has been obtained, the general consensus is that the
existing methods are lacking in scalability, theoretical justification, or both; more work is urgently
needed to make machine learning applicable to big unlabeled data.
Better methods may be found by using the temporal structure in time series data. One approach which
has shown a great promise recently is based on a set of methods variously called temporal coherence
[17] or slow feature analysis [32]. The idea is to find features which change as slowly as possible,
originally proposed in [6] for learning invariant features. Kernel-based methods [12, 26] and deep
learning methods [23, 27, 9] have been developed to extend this principle to the general nonlinear
case. However, it is not clear how one should optimally define the temporal stability criterion; these
methods typically use heuristic criteria and are not based on generative models.
In fact, the most satisfactory solution for unsupervised deep learning would arguably be based
on estimation of probabilistic generative models, because probabilistic theory often gives optimal
objectives for learning. This has been possible in linear unsupervised learning, where sparse coding
and independent component analysis (ICA) use independent, typically sparse, latent variables that
generate the data via a linear mixing. Unfortunately, at least without temporal structure, the nonlinear
ICA model is seriously unidentifiable [18], which means that the original sources cannot be found.
In spite of years of research [20], no generally applicable identifiability conditions have been found.
Nevertheless, practical algorithms have been proposed [29, 1, 5] with the hope that some kind of
useful solution can still be found even for data with no temporal structure (that is, an i.i.d. sample).
[Figure 1 here: panel A, "Generative model" (source signals in segments 1, . . . , T, passed through a nonlinear mixture to observed signals); panel B, "Time-contrastive learning" (feature extractor followed by multinomial logistic regression predicting segment labels). See the caption below.]
Figure 1: An illustration of how we combine a new generative nonlinear ICA model with the new
learning principle called time-contrastive learning (TCL). A) The probabilistic generative model is
a nonlinear version of ICA, where the observed signals are given by a nonlinear transformation of
source signals, which are mutually independent, and have segment-wise nonstationarity. B) In TCL
we train a feature extractor to be sensitive to the nonstationarity of the data by using a multinomial
logistic regression which attempts to discriminate between the segments, labelling each data point
with the segment label 1, . . . , T . The feature extractor and the logistic regression together can be
implemented by a conventional multi-layer perceptron with back-propagation training.
Here, we combine a new heuristic principle for analysing temporal structure with a rigorous treatment
of a nonlinear ICA model, leading to a new identifiability proof. The structure of our theory is
illustrated in Figure 1.
First, we propose to learn features using the (temporal) nonstationarity of the data. The idea is that
the learned features should enable discrimination between different time windows; in other words,
we search for features that provide maximal information on which part of the time series a given data
point comes from. This provides a new, intuitively appealing method for feature extraction, which we
call time-contrastive learning (TCL).
Second, we formulate a generative model in which independent components have different distributions in different time windows, and we observe nonlinear mixtures of the components. While
a special case of this principle, using nonstationary variances, has been very successfully used in
linear ICA [22], our extension to the nonlinear case is completely new. Such nonstationarity of
variances seems to be prominent in many kinds of data, for example EEG/MEG [2], natural video
[17], and closely related to changes in volatility in financial time series; but we further generalize the
nonstationarity to modulated exponential families.
Finally, we show that as a special case, TCL estimates the nonlinear part of the nonlinear ICA model,
leaving only a simple linear mixing to be determined by linear ICA, and a final indeterminacy in
terms of a component-wise nonlinearity similar to squaring. For modulated Gaussian sources, even
the squaring can be removed and we have "full" identifiability. This gives the very first identifiability
proof for a high-dimensional, nonlinear ICA mixing model, together with a practical method for
its estimation.
2
Time-contrastive learning
TCL is a method to train a feature extractor by using a multinomial logistic regression (MLR)
classifier which aims to discriminate all segments (time windows) in a time series, given the segment
indices as the labels of the data points. In more detail, TCL proceeds as follows:
1. Divide a multivariate time series xt into segments, i.e. time windows, indexed by τ = 1, . . . , T. Any temporal segmentation method can be used, e.g. simple equal-sized bins.

2. Associate each data point with the corresponding segment index τ in which the data point is contained; i.e. the data points in segment τ are all given the same segment label τ.
3. Learn a feature extractor h(xt; θ) together with an MLR with a linear regression function w_τᵀh(xt; θ) + b_τ to classify all data points with the corresponding segment labels τ used as class labels Ct, as defined above. (For example, by ordinary deep learning with h(xt; θ) being the outputs of the last hidden layer and θ being the network weights; see the sketch after this list.)
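The three steps above map directly onto a standard classifier training loop. The following is a minimal sketch assuming PyTorch; the layer sizes, the Adam optimizer, and the ReLU activations are illustrative stand-ins and not the paper's settings (Sections 6 and 7 use maxout and absolute-value/adaptive units with regularization).

```python
import torch
import torch.nn as nn

def tcl_train(x, n_segments, n_features, hidden=64, epochs=200, lr=1e-3):
    """x: (T_total, n) array of observations, ordered in time."""
    x = torch.as_tensor(x, dtype=torch.float32)
    seg_len = x.shape[0] // n_segments
    labels = torch.arange(n_segments).repeat_interleave(seg_len)  # step 2
    x = x[: len(labels)]                                          # step 1
    feature = nn.Sequential(                 # feature extractor h(x; theta)
        nn.Linear(x.shape[1], hidden), nn.ReLU(),
        nn.Linear(hidden, n_features), nn.ReLU())
    mlr = nn.Linear(n_features, n_segments)  # multinomial logistic regression
    opt = torch.optim.Adam([*feature.parameters(), *mlr.parameters()], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                                       # step 3
        opt.zero_grad()
        loss = loss_fn(mlr(feature(x)), labels)  # discriminate segment labels
        loss.backward()
        opt.step()
    return feature   # the MLR is only a training instrument and is discarded
```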
The purpose of the feature extractor is to extract a feature vector that enables the MLR to discriminate
the segments. Therefore, it seems intuitively clear that the feature extractor needs to learn a useful
representation of the temporal structure of the data, in particular the differences of the distributions
across segments. Thus, we are effectively using a classification method (MLR) to accomplish
unsupervised learning. Methods such as noise-contrastive estimation [11] and generative adversarial
nets [8], see also [10], are similar in spirit, but clearly distinct from TCL which uses the temporal
structure of the data by contrasting different time segments.
In practice, the feature extractor needs to be capable of approximating a general nonlinear relationship
between the data points and the log-odds of the classes, and it must be easy to learn from data
simultaneously with the MLR. To satisfy these requirements, we use here a multilayer perceptron
(MLP) as the feature extractor. Essentially, we use ordinary MLP/MLR training according to very
well-known neural network theory, with the last hidden layer working as the feature extractor. Note
that the MLR is here only used as an instrument for training the feature extractor, and has no practical
meaning after the training.
3
TCL as approximator of log-pdf ratios
We next show how the combination of the optimally discriminative feature extractor and the MLR learns to model the nonstationary probability density functions (pdf's) of the data. The posterior over classes for one data point xt in the multinomial logistic regression of TCL is given by well-known theory as

    p(Ct = τ | xt; θ, W, b) = exp(w_τᵀ h(xt; θ) + b_τ) / (1 + ∑_{j=2}^T exp(w_jᵀ h(xt; θ) + b_j)),    (1)

where Ct is the class label of the data at time t, xt is the n-dimensional data point at time t, θ is the parameter vector of the m-dimensional feature extractor (MLP) denoted by h, W = [w₁, . . . , w_T] ∈ ℝ^{m×T}, and b = [b₁, . . . , b_T]ᵀ are the weight and bias parameters of the MLR. We fixed the elements of w₁ and b₁ to zero to avoid the well-known indeterminacy of the softmax function.
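With the pivot fixed (w₁ = 0, b₁ = 0), Eq. (1) is an ordinary softmax over the T segment classes; the snippet below, a sketch of our own, makes this explicit.

```python
import numpy as np

def mlr_posterior(h, W, b):
    """Posterior of Eq. (1).
    h: (m,) feature vector; W: (T, m) with W[0] = 0; b: (T,) with b[0] = 0."""
    logits = W @ h + b
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()                 # p(C_t = tau | x_t) for tau = 1..T
```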
On the other hand, the true posteriors of the segment labels can be written, by the Bayes rule, as

    p(Ct = τ | xt) = p_τ(xt) p(Ct = τ) / ∑_{j=1}^T p_j(xt) p(Ct = j),    (2)

where p(Ct = τ) is the prior distribution of the segment label τ, and p_τ(xt) = p(xt | Ct = τ).
Assume that the feature extractor has a universal approximation capacity (in the sense of well-known
neural network theory), and that the amount of data is infinite, so that the MLR converges to the
optimal classifier. Then, we will have equality between the model posterior Eq. (1) and the true
posterior in Eq. (2) for all τ. Well-known developments, intuitively based on equating the numerators
in those equations and taking the pivot into account, lead to the relationship
    w_τᵀ h(xt; θ) + b_τ = log p_τ(xt) − log p₁(xt) + log [ p(Ct = τ) / p(Ct = 1) ],    (3)
where the last term on the right-hand side is zero if the segments have equal prior probability (i.e.
equal length). In other words, what the feature extractor computes after TCL training (under optimal
conditions) is the log-pdf of the data point in each segment (relative to that in the first segment which
was chosen as pivot above). This gives a clear probabilistic interpretation of the intuitive principle of
TCL, and will be used below to show its connection to nonlinear ICA.
4
Nonlinear nonstationary ICA model
In this section, seemingly unrelated to the preceding section, we define a probabilistic generative
model; the connection will be explained in the next section. We assume, as typical in nonlinear ICA,
that the observed multivariate time series xt is a smooth and invertible nonlinear mixture of a vector
of source signals st = (s1 (t), . . . , sn (t)); in other words:
    xt = f(st).    (4)
The components si (t) in st are assumed mutually independent over i (but not over time t). The
crucial question is how to define a suitable model for the sources, which is general enough while
allowing strong identifiability results.
Here, we start with the fundamental assumption that the source signals si (t) are nonstationary,
and use such nonstationarity for source separation. For example, the variances (or similar scaling
coefficients) could be changing as proposed earlier in the linear case [22, 24, 16]. We generalize
that idea and propose a generative model for nonstationary sources based on the exponential family.
Merely for mathematical convenience, we assume that the nonstationarity is much slower than the
sampling rate, so the time series can be divided into segments in each of which the distribution is
approximately constant (but the distribution is different in different segments). The log-pdf of the
source signal with index i in segment τ is then defined as:

    log p_τ(sᵢ) = q_{i,0}(sᵢ) + ∑_{v=1}^V λ_{i,v}(τ) q_{i,v}(sᵢ) − log Z(λ_{i,1}(τ), . . . , λ_{i,V}(τ)),    (5)

where q_{i,0} is a "stationary baseline" log-pdf of the source, and the q_{i,v}, v ≥ 1 are nonlinear scalar functions defining the exponential family for source i; the index t is dropped for simplicity. The essential point is that the parameters λ_{i,v}(τ) of source i depend on the segment index τ, which creates the nonstationarity. The normalization constant Z disappears in all our proofs below.
A simple example would be obtained by setting q_{i,0} = 0, V = 1, i.e., using a single modulated function q_{i,1} with q_{i,1}(sᵢ) = −sᵢ²/2, which means that the variance of a Gaussian source is modulated, or q_{i,1}(sᵢ) = −|sᵢ|, a modulated Laplacian source. Another interesting option might be to use two nonlinearities similar to "rectified linear units" (ReLU) given by q_{i,1}(sᵢ) = −max(sᵢ, 0) and q_{i,2}(sᵢ) = −max(−sᵢ, 0) to model both changes in scale (variance) and location (mean). Yet another option is to use a Gaussian baseline q_{i,0}(sᵢ) = −sᵢ²/2 with a nonquadratic function q_{i,1}.
Our definition thus generalizes the linear model [22, 24, 16] to the nonlinear case, as well as to very
general modulated non-Gaussian densities by allowing qi,v to be non-quadratic, using more than one
qi,v per source (i.e. we can have V > 1) as well as a non-stationary baseline. We emphasize that our
principle of nonstationarity is clearly distinct from the principle of linear autocorrelations previously
used in the nonlinear case [12, 26]. Note further that some authors prefer to use the term blind source
separation (BSS) for generative models with temporal structure.
5
Solving nonlinear ICA by TCL
Now we consider the case where TCL as defined in Section 2 is applied on data generated by the
nonlinear ICA model in Section 4. We refer again to Figure 1 which illustrates the total system. For
simplicity, we consider the case qi,0 = 0, V = 1, i.e. the exponential family has a single modulated
function qi,1 per source, and this function is the same for all sources; we will discuss the general case
separately below. The modulated function will be simply denoted by q := qi,1 in the following.
First, we show that the nonlinear functions q(si ), i = 1, . . . , n, of the sources can be obtained as
unknown linear transformations of the outputs of the feature extractor hi trained by the TCL:
Theorem 1. Assume the following:
A1. We observe data which is obtained by generating independent sources1 according to (5),
and mixing them as in (4) with a smooth invertible f . For simplicity, we assume only a single
function defining the exponential family, i.e. qi,0 = 0, V = 1 and q := qi,1 as explained
above.
A2. We apply TCL on the data so that the dimension of the feature extractor h is equal to the
dimension of the data vector xt , i.e., m = n.
1 More precisely: the sources are generated independently given the λ_{i,v}. Depending on how the λ_{i,v} are generated, there may or may not be marginal dependency between the sᵢ; see Corollary 1 below.
A3. The modulation parameter matrix L with elements [L]_{τ,i} = λ_{i,1}(τ) − λ_{i,1}(1), τ = 1, . . . , T; i = 1, . . . , n, has full column rank n. (Intuitively: the variances of the components are modulated sufficiently independently of each other. Note that many segments are actually allowed to have equal distributions, since this matrix is typically very tall.)
Then, in the limit of infinite data, the outputs of the feature extractor are equal to q(s) = (q(s₁), q(s₂), . . . , q(sₙ))ᵀ up to an invertible linear transformation. In other words,

    q(st) = A h(xt; θ) + d    (6)

for some constant invertible matrix A ∈ ℝ^{n×n} and a constant vector d ∈ ℝⁿ.
Sketch of proof: (see Supplementary Material for the full proof) The basic idea is that after convergence we must have equality between the model of the log-pdf in each segment given by TCL in Eq. (3) and that given by nonlinear ICA, obtained by summing the RHS of Eq. (5) over i:

    w_τᵀ h(x; θ) − k₁(x) = ∑_{i=1}^n λ_{i,1}(τ) q(sᵢ) − k₂(τ),    (7)

where k₁ does not depend on τ, and k₂(τ) does not depend on x or s. We see that the functions hᵢ(x) and q(sᵢ) must span the same linear subspace. (TCL looks at differences of log-pdf's, introducing the baseline k₁(x), but this does not actually change the subspace.) This implies that the q(sᵢ) must be equal to some invertible linear transformation of h(x; θ) plus a constant bias term, which gives (6).
To further estimate the linear transformation A in (6), we can simply use linear ICA, under a further independence assumption regarding the generation of the λ_{i,1}:

Corollary 1. Assume the λ_{i,1} are randomly generated, independently for each i. The estimation (identification) of the q(sᵢ) can then be performed by first performing TCL, and then linear ICA on the hidden representation h(x).

Proof: We only need to combine the well-known identifiability proof of linear ICA [3] with Theorem 1, noting that the quantities q(sᵢ) are now independent, and since q has a strict upper bound (which is necessary for integrability), q(sᵢ) must be non-Gaussian.
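In practice, the final linear ICA step of Corollary 1 can use any off-the-shelf linear ICA; the paper's experiments use FastICA [15]. Continuing the TCL sketch from Section 2 (and assuming observations x of shape (T_total, n) are available), a minimal version with scikit-learn would look as follows.

```python
import torch
from sklearn.decomposition import FastICA

# feature = tcl_train(x, n_segments=..., n_features=...)  # sketch in Sec. 2
h = feature(torch.as_tensor(x, dtype=torch.float32)).detach().numpy()
# Linear ICA on the learned features gives the estimates of q(s_i), up to
# the usual permutation/scaling indeterminacy of linear ICA.
q_est = FastICA(n_components=h.shape[1], max_iter=1000).fit_transform(h)
```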
In general, TCL followed by linear ICA does not allow us to exactly recover the independent components, because the function q(·) can hardly be invertible, typically being something like squaring or taking absolute values. However, for a specific class of q including the modulated Gaussian family, we can prove a stricter form of identifiability. Slightly counterintuitively, we can recover the signs of the sᵢ, since we also know the corresponding x and the transformation is invertible:

Corollary 2. Assume q(s) is a strictly monotonic function of |s|. Then, we can further identify the original sᵢ, up to strictly monotonic transformations of each source.

Proof: To make p_τ(s) integrable, necessarily q(s) → −∞ when |s| → ∞, and q(s) must have a finite maximum, which we can set to zero without restricting generality. For each fixed i, consider the manifold defined by q(gᵢ(x)) = 0. By invertibility of g, this divides the space of x into two halves. In one half, define s̃ᵢ = q(sᵢ), and in the other, s̃ᵢ = −q(sᵢ). With such s̃ᵢ, we have thus recovered the original sources, up to the strictly monotonic transformation s̃ᵢ = c · sign(sᵢ)q(sᵢ), where c is either +1 or −1. (Note that in general, the sᵢ are meaningfully defined only up to a strictly monotonic transformation, analogously to multiplication by an arbitrary constant in the linear case [3].)
Summary of Theory What we have proven is that in the special case of a single q(s) which is a monotonic function of |s|, our nonlinear ICA model is identifiable, up to inevitable component-wise monotonic transformations. We also provided a practical method for the estimation of the nonlinear transformations q(sᵢ) for any general q, given by TCL followed by linear ICA. (The method provided for "inverting" q in the proof of Corollary 2 may be very difficult to implement in practice.)
Extensions First, allowing a stationary baseline qi,0 does not change the Theorem at all, and a
weaker form of Corollary 1 holds as well. Second, with many qi,v (V > 1), the left-hand-side of (6)
will have V n entries given by all the possible qi,v (si ), and the dimension of the feature extractor must
be equally increased; the condition of full rank on L is likewise more complicated. Corollary 1 must
then consider an independent subspace model, but it can still be proven in the same way. (The details
and the proof will be presented in a later paper.) Third, the case of combining ICA with dimension
reduction is treated in Supplementary Material.
6
Simulation on artificial data
Data generation We created data from the nonlinear ICA model in Section 4, using the simplified case of the Theorem (a single function q) as follows. Nonstationary source signals (n = 20, segment length 512) were randomly generated by modulating Laplacian sources by λ_{i,1}(τ), randomly drawn so that the stds inside the segments follow a uniform distribution on [0, 1]. As the nonlinear mixing function f(s), we used an MLP ("mixing-MLP"). In order to guarantee that the mixing-MLP is invertible, we used leaky ReLUs and the same number of units in all layers.
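A minimal sketch of this data-generation recipe is given below, under the stated assumptions (segment stds uniform on [0, 1], square leaky-ReLU mixing layers); the leaky-ReLU slope and random square weight matrices are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_seg, seg_len, L = 20, 64, 512, 3

stds = rng.uniform(0.0, 1.0, size=(n_seg, n))       # lambda_{i,1}(tau) as stds
s = rng.laplace(size=(n_seg, seg_len, n)) / np.sqrt(2)  # unit-variance Laplace
s = (s * stds[:, None, :]).reshape(-1, n)            # nonstationary sources

def leaky_relu(a, slope=0.2):
    return np.where(a > 0, a, slope * a)

x = s
for _ in range(L):                                   # mixing-MLP f(s)
    W = rng.standard_normal((n, n))                  # square (a.s. invertible)
    x = leaky_relu(x @ W)                            # observed signals
```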
TCL settings, training, and final linear ICA As the feature extractor to be trained by TCL, we adopted an MLP ("feature-MLP"). The segmentation in TCL was the same as in the data generation, and the number of layers was the same in the mixing-MLP and the feature-MLP. Note that when L = 1, both the mixing-MLP and the feature-MLP are one-layer models, and then the observed signals are simply linear mixtures of the source signals as in a linear ICA model. As in the Theorem, we set m = n. As the activation function in the hidden layers, we used a "maxout" unit, constructed by taking the maximum across G = 2 affine fully connected weight groups. However, the output layer has "absolute value" activation units exclusively. This is because the output of the feature-MLP (i.e., h(x; θ)) should resemble q(s), based on Theorem 1, and here we used the Laplacian distribution for generating the sources. The initial weights of each layer were randomly drawn from a uniform distribution for each layer, scaled as in [7]. To train the MLP, we used back-propagation with a momentum term. To avoid overfitting, we used ℓ₂ regularization for the feature-MLP and the MLR.

According to Corollary 1 above, after TCL we further applied linear ICA (FastICA, [15]) to the h(x; θ), and used its outputs as the final estimates of q(sᵢ). To evaluate the performance of source recovery, we computed the mean correlation coefficients between the true q(sᵢ) and their estimates.
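Because estimated components come in an arbitrary order, such an evaluation needs a matching step. The helper below is a sketch of our own (not the authors' code): it computes absolute correlations between all pairs of true and estimated components and picks the best one-to-one assignment.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mean_matched_corr(q_true, q_est):
    """q_true, q_est: (T, n) arrays; returns mean |corr| after matching."""
    n = q_true.shape[1]
    C = np.abs(np.corrcoef(q_true.T, q_est.T)[:n, n:])  # (n, n) cross-corr
    rows, cols = linear_sum_assignment(-C)              # maximize total corr
    return float(C[rows, cols].mean())
```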
For comparison, we also applied a linear ICA method based on nonstationarity of variance (NSVICA)
[16], a kernel-based nonlinear ICA method (kTDSEP) [12], and a denoising autoencoder (DAE) [31]
to the observed data. We took absolute values of the estimated sources to make a fair comparison
with TCL. In kTDSEP, we selected the 20 estimated components with the highest correlations with
the source signals. We initialized the DAE by the stacked DAE scheme [31], and sigmoidal units
were used in the hidden layers; we omitted the case L > 3 because of instability of training.
Results Figure 2a) shows that after training the feature-MLP by TCL, the MLR achieved higher
classification accuracies than chance level, which implies that the feature-MLP was able to learn
a representation of the data nonstationarity. (Here, chance level denotes the performance of the
MLP with a randomly initialized feature-MLP.) We can see that the larger the number of layers is
(which means that the nonlinearity in the mixing-MLP is stronger), the more difficult it is to train the
feature-MLP and the MLR. The classification accuracy also goes down when the number of segments
increases, since when there are more and more classes, some of them will inevitably have very similar
distributions and are thus difficult to discriminate; this is why we computed the chance level as above.
Figure 2b) shows that the TCL method could reconstruct the q(si ) reasonably well even for the
nonlinear mixture case (L > 1), while all other methods failed (NSVICA obviously performed very
well in the linear case).The figure also shows that (1) the larger the number of segments (amount of
data) is, the higher the performance of the TCL method is (i.e. the method seems to converge), and
(2) again, more layers makes learning more difficult.
To summarize, this simulation confirms that TCL is able to estimate the nonlinear ICA model based
on nonstationarity. Using more data increases performance, perhaps obviously, while making the
mixing more nonlinear decreases performance.
7
Experiments on real brain imaging data
To evaluate the applicability of TCL to real data, we applied it on magnetoencephalography (MEG),
i.e. measurements of the electrical activity in the human brain. In particular, we used data measured
in a resting-state session, during which the subjects did not have any task nor were receiving any
particular stimulation. In recent years, many studies have shown the existence of networks of brain
activity in resting state, with MEG as well [2, 4]. Such networks mean that the data is nonstationary,
and thus this data provides an excellent target for TCL.
[Figure 2 here: panel a) accuracy (%) vs. number of segments (8–512) for L = 1–5, with the corresponding chance levels; panel b) mean correlation vs. number of segments for TCL (L = 1–5), NSVICA (L = 1–5), kTDSEP (L = 1–5), and DAE (L = 1–3). See the caption below.]
Figure 2: Simulation on artificial data. a) Mean classification accuracies of the MLR in TCL, as a
function of the numbers of layers and segments. (Accuracies are on training data since it is not obvious
how to define test data.) Note that chance levels (dotted lines) change as a function of the number
of segments (see text). The MLR achieved higher accuracy than chance level. b) Mean absolute
correlation coefficients between the true q(s) and the features learned by TCL (solid line) and, for
comparison: nonstationarity-based linear ICA (NSVICA, dashed line), kernel-based nonlinear ICA
(kTDSEP, dotted line), and denoising autoencoder (DAE, dash-dot line). TCL has much higher
correlations than DAE or kTDSEP, and in the nonlinear case (L > 1), higher than NSVICA.
Data and preprocessing We used MEG data from an earlier neuroimaging study [25], graciously
provided by P. Ramkumar. MEG signals were measured from nine healthy volunteers by a Vectorview
helmet-shaped neuromagnetometer at a sampling rate of 600 Hz with 306 channels. The experiment
consisted of two kinds of sessions, i.e., resting sessions (2 sessions of 10 min) and task sessions (2
sessions of 12 min). In the task sessions, the subjects were exposed to a sequence of 6–33 s blocks of
auditory, visual and tactile stimuli, which were interleaved with 15 s rest periods. We exclusively
used the resting-session data for the training of the network, and task-session data was only used in
the evaluation. The modality of the sensory stimulation (incl. no stimulation, i.e. rest) provided a
class label that we used in the evaluation, giving in total four classes. We preprocessed the MEG
signals by Morlet filtering around the alpha frequency band.
TCL settings We used segments of equal size, of length 12.5 s or 625 data points (downsampled to 50 Hz); the length was based on prior knowledge about the time scale of resting-state networks. The number of layers took the values L ∈ {1, 2, 3, 4}, and the number of nodes in each hidden layer was a function of L: we always fixed the number of output-layer nodes to 10, and gradually increased the number of nodes in earlier layers, as L = 1: 10; L = 2: 20–10; L = 3: 40–20–10; and L = 4: 80–40–20–10. We used ReLUs in the middle layers, and adaptive units φ(x) = max(x, ax) exclusively for the output layer, which is more flexible than the "absolute value" unit used in the simulation above. To prevent overfitting, we applied dropout [28] to the inputs, and batch normalization [19] to the hidden layers. Since different subjects and sessions are likely to have uninteresting technical differences, we used a multi-task learning scheme, with a separate top-layer MLR classifier for each measurement session and subject, but a shared feature-MLP. (In fact, if we use the MLR to discriminate all segments of all sessions, it tends to mainly learn such inter-subject and inter-session differences.) Otherwise, all the settings were as in Section 6.
Evaluation methods To evaluate the obtained features, we performed classification of the sensory
stimulation categories (modalities) by applying feature extractors trained with (unlabeled) restingsession data to (labeled) task-session data. Classification was performed using a linear support
vector machine (SVM) classifier trained on the stimulation modality labels, and its performance was
evaluated by a session-average of session-wise one-block-out cross-validation (CV) accuracies. The
hyperparameters of the SVM were determined by nested CV without using the test data. The average
activities of the feature extractor during each block were used as feature vectors in the evaluation
of TCL features. However, we used log-power activities for the other (baseline) methods because
the average activities had much lower performance with those methods. We balanced the number of
blocks between the four categories. We measured the CV accuracy 10 times by changing the initial
values of the feature extractor training, and showed their average performance. We also visualized
the spatial activity patterns obtained by TCL, using weighted-averaged sensor signals; i.e., the sensor
signals are averaged while weighted by the activities of the feature extractor.
[Figure 3 here: panel a) classification accuracy (%) of TCL, DAE, kTDSEP, and NSVICA, with TCL and DAE shown for L = 1 and L = 4; panel b) spatial patterns of layers L1, L2, and L3. See the caption below.]
Figure 3: Real MEG data. a) Classification accuracies of linear SVMs newly trained with task-session data to predict stimulation labels in task sessions, with feature extractors trained in advance
with resting-session data. Error bars give standard errors of the mean across ten repetitions. For TCL
and DAE, accuracies are given for different numbers of layers L. Horizontal line shows the chance
level (25%). b) Example of spatial patterns of nonstationary components learned by TCL. Each
small panel corresponds to one spatial pattern with the measurement helmet seen from three different
angles (left, back, right); red/yellow is positive and blue is negative. L3: approximate total spatial
pattern of one selected third-layer unit. L2: the patterns of the three second-layer units maximally
contributing to this L3 unit. L1: for each L2 unit, the two most strongly contributing first-layer units.
Results Figure 3a) shows the comparison of classification accuracies between the different methods,
for different numbers of layers L = {1, 2, 3, 4}. The classification accuracies by the TCL method
were consistently higher than those by the other (baseline) methods.2 We can also see a superior
performance of multi-layer networks (L ? 3) compared with that of the linear case (L = 1), which
indicates the importance of nonlinear demixing in the TCL method.
Figure 3b) shows an example of spatial patterns learned by the TCL method. For simplicity of
visualization, we plotted spatial patterns for the three-layer model. We manually picked one out of
the ten hidden nodes from the third layer, and plotted its weighted-averaged sensor signals (Figure 3b,
L3). We also visualized the most strongly contributing second- and first-layer nodes. We see
progressive pooling of L1 units to form left temporal, right temporal, and occipito-parietal patterns in
L2, which are then all pooled together in the L3 resulting in a bilateral temporal pattern with negative
contribution from the occipito-parietal region. Most of the spatial patterns in the third layer (not
shown) are actually similar to those previously reported using functional magnetic resonance imaging
(fMRI), and MEG [2, 4]. Interestingly, none of the hidden units seems to represent artefacts (i.e.
non-brain signals), in contrast to ordinary linear ICA of EEG or MEG.
8
Conclusion
We proposed a new learning principle for unsupervised feature (representation) learning. It is based
on analyzing nonstationarity in temporal data by discriminating between time segments. The ensuing
"time-contrastive learning" is easy to implement since it only uses ordinary neural network training: a
multi-layer perceptron with logistic regression. However, we showed that, surprisingly, it can estimate
independent components in a nonlinear mixing model up to certain indeterminacies, assuming that
the independent components are nonstationary in a suitable way. The indeterminacies include a linear
mixing (which can be resolved by a further linear ICA step), and component-wise nonlinearities,
such as squares or absolute values. TCL also avoids the computation of the gradient of the Jacobian,
which is a major problem with maximum likelihood estimation [5].
Our developments also give by far the strongest identifiability proof of nonlinear ICA in the literature.
The indeterminacies actually reduce to just inevitable monotonic component-wise transformations in
the case of modulated Gaussian sources. Thus, our results pave the way for further developments in
nonlinear ICA, which has so far seriously suffered from the lack of almost any identifiability theory,
and provide a new principled approach to unsupervised deep learning.
Experiments on real MEG found neuroscientifically interesting networks. Other promising future
application domains include video data, econometric data, and biomedical data such as EMG and
ECG, in which nonstationary variances seem to play a major role.3
2
Note that classification using the final linear ICA is equivalent to using whitening since ICA only makes a
further orthogonal rotation.
3
This research was supported in part by JSPS KAKENHI 16J08502 and the Academy of Finland.
References
[1] L. B. Almeida. MISEP – linear and nonlinear ICA based on mutual information. J. of Machine Learning Research, 4:1297–1318, 2003.
[2] M. J. Brookes et al. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. Proc. Natl. Acad. Sci., 108(40):16783–16788, 2011.
[3] P. Comon. Independent component analysis, a new concept? Signal Processing, 36:287–314, 1994.
[4] F. de Pasquale et al. A cortical core for dynamic integration of functional networks in the resting human brain. Neuron, 74(4):753–764, 2012.
[5] L. Dinh, D. Krueger, and Y. Bengio. NICE: Non-linear independent components estimation. arXiv:1410.8516 [cs.LG], 2015.
[6] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3:194–200, 1991.
[7] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS'10, 2010.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
[9] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised feature learning from temporal data. arXiv:1504.02518, 2015.
[10] M. U. Gutmann, R. Dutta, S. Kaski, and J. Corander. Likelihood-free inference via classification. arXiv:1407.4981 [stat.CO], 2014.
[11] M. U. Gutmann and A. Hyvärinen. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. J. of Machine Learning Research, 13:307–361, 2012.
[12] S. Harmeling, A. Ziehe, M. Kawanabe, and K.-R. Müller. Kernel-based nonlinear blind source separation. Neural Comput., 15(5):1089–1124, 2003.
[13] G. E. Hinton. Learning multiple layers of representation. Trends Cogn. Sci., 11:428–434, 2007.
[14] G. E. Hinton and R. S. Zemel. Autoencoders, minimum description length, and Helmholtz free energy. Adv. Neural Inf. Process. Syst., 1994.
[15] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw., 10(3):626–634, 1999.
[16] A. Hyvärinen. Blind source separation by nonstationarity of variance: A cumulant-based approach. IEEE Transactions on Neural Networks, 12(6):1471–1474, 2001.
[17] A. Hyvärinen, J. Hurri, and P. O. Hoyer. Natural Image Statistics. Springer-Verlag, 2009.
[18] A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Netw., 12(3):429–439, 1999.
[19] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
[20] C. Jutten, M. Babaie-Zadeh, and J. Karhunen. Nonlinear mixtures. Handbook of Blind Source Separation, Independent Component Analysis and Applications, pages 549–592, 2010.
[21] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv:1312.6114 [stat.ML], 2014.
[22] K. Matsuoka, M. Ohya, and M. Kawamoto. A neural net for blind separation of nonstationary signals. Neural Netw., 8(3):411–419, 1995.
[23] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 737–744, 2009.
[24] D.-T. Pham and J.-F. Cardoso. Blind separation of instantaneous mixtures of non stationary sources. IEEE Trans. Signal Processing, 49(9):1837–1848, 2001.
[25] P. Ramkumar, L. Parkkonen, R. Hari, and A. Hyvärinen. Characterization of neuromagnetic brain rhythms over time scales of minutes using spatial independent component analysis. Hum. Brain Mapp., 33(7):1648–1662, 2012.
[26] H. Sprekeler, T. Zito, and L. Wiskott. An extension of slow feature analysis for nonlinear blind source separation. J. of Machine Learning Research, 15(1):921–947, 2014.
[27] J. T. Springenberg and M. Riedmiller. Learning temporal coherent features through life-time sparsity. In Neural Information Processing, pages 347–356. Springer, 2012.
[28] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, 2014.
[29] Y. Tan, J. Wang, and J. M. Zurada. Nonlinear blind source separation using a radial basis function network. IEEE Transactions on Neural Networks, 12(1):124–134, 2001.
[30] H. Valpola. From neural PCA to deep unsupervised learning. In Advances in Independent Component Analysis and Learning Machines, pages 143–171. Academic Press, 2015.
[31] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Mach. Learn. Res., 11:3371–3408, 2010.
[32] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Comput., 14(4):715–770, 2002.
5,964 | 6,396 | Joint M-Best-Diverse Labelings as a Parametric
Submodular Minimization
Alexander Kirillov¹   Alexander Shekhovtsov²   Carsten Rother¹   Bogdan Savchynskyy¹
¹ TU Dresden, Dresden, Germany    ² TU Graz, Graz, Austria
[email protected]
Abstract
We consider the problem of jointly inferring the M-best diverse labelings for a
binary (high-order) submodular energy of a graphical model. Recently, it was
shown that this problem can be solved to a global optimum, for many practically
interesting diversity measures. It was noted that the labelings are, so-called, nested.
This nestedness property also holds for labelings of a class of parametric submodular
minimization problems, where different values of the global parameter λ give
rise to different solutions. The popular example of the parametric submodular
minimization is the monotonic parametric max-flow problem, which is also widely
used for computing multiple labelings. As the main contribution of this work
we establish a close relationship between diversity with submodular energies and
the parametric submodular minimization. In particular, the joint M-best diverse
labelings can be obtained by running a non-parametric submodular minimization
(in the special case, max-flow) solver for M different values of λ in parallel, for
certain diversity measures. Importantly, the values for λ can be computed in a
closed form in advance, prior to any optimization. These theoretical results suggest
two simple yet efficient algorithms for the joint M-best diverse problem, which
outperform competitors in terms of runtime and quality of results. In particular, as
we show in the paper, the new methods compute the exact M-best diverse labelings
faster than a popular method of Batra et al., which in some sense only obtains
approximate solutions.
1 Introduction
A variety of tasks in machine learning, computer vision and other disciplines can be formulated as
energy minimization problems, also known as Maximum-a-Posteriori (MAP) or Maximum Likelihood
(ML) estimation problems in undirected graphical models (Markov or Conditional Random Fields).
The importance of this problem is well-recognized, which can be seen by the many specialized
benchmarks [36, 21] and computational challenges [10] for its solvers. This motivates the task of
finding the most probable solution. Recently, a slightly different task has gained popularity, both from
a practical and theoretical perspective. The task is not only to find the most probable solution but
multiple diverse solutions, all with low energy, see e.g., [4, 31, 22, 23]. The task is referred to as the
"M-best-diverse problem" [4], and it has been used in a variety of scenarios, such as: (a) Expressing
uncertainty of the computed solutions [33]; (b) Faster training of model parameters [16]; (c) Ranking
of inference results [40]; (d) Empirical risk minimization [32]; (e) Loss-aware optimization [1]; (f)
Using diverse proposals in a cascading framework [39, 35].
This project has received funding from the European Research Council (ERC) under the European Union's
Horizon 2020 research and innovation programme (grant agreement No 647769). A. Shekhovtsov was supported
by ERC starting grant agreement 640156.
In this work we build on the recently proposed formulation of [22] for the M -best-diverse problem. In
this formulation all M configurations are inferred jointly, contrary to the well-known method [4, 31],
where a sequential, greedy procedure is used. Hence, we term it the "joint M-best-diverse problem". As
shown in [22, 23], the joint approach qualitatively outperforms the sequential approach [4, 31] in a
number of applications. This is explained by the fact that the sequential method [4] can be considered
as an approximate and greedy optimization technique for solving the joint M -best-diverse problem.
While the joint approach is superior with respect to quality of its results, it is inferior to the sequential
method [4] with respect to runtime. For the case of binary submodular energies, the approximate
solver in [22] and the exact solver in [23] are several times slower than the sequential technique [4]
for a normally sized image. Obviously, this is a major limitation when using it in a practical setting.
Furthermore, the difference in runtime grows with the number M of configurations.
In this work, we show that in the case of binary submodular energies an exact solution to the joint M-best-diverse problem can be obtained significantly faster than the approximate one with the sequential
method [4]. Moreover, the difference in runtime grows with the number M of configurations.
Related work
The importance of the considered problem can be justified by the fact that a procedure of computing
M -best solutions to discrete optimization problems was proposed over 40 years ago, in [28]. Later,
more efficient specialized procedures were introduced for MAP-inference on a tree [34], junction-trees [30] and general graphical models [41, 13, 3]. However, such methods are not suited
for the scenario where diversity of the solutions is required, since they do not enforce it explicitly.
Structured Determinantal Point Processes [27] are a tool for modelling probability distributions of
structured models. Unfortunately, an efficient sampling procedure to obtain diverse configurations is
feasible only for tree-structured graphical models. The recently proposed algorithm [8] to find M
best modes of a distribution is also limited to chains and junction chains of bounded width.
Training of M independent graphical models to produce multiple diverse solutions was proposed
in [15], and was further explored in [17, 9]. In contrast, we assume a single fixed model where
configurations with low energy (hopefully) correspond to the desired output.
The work of [4] defines the M -best-diverse problem, and proposes a solver for it. However, the
diversity of the solutions is defined sequentially, with respect to already extracted labelings. In
contrast to [4], the work [22] defined the "joint M-best-diverse problem" as an optimization problem
of a single joint energy. The most related work to ours is [23], where an efficient method for the
joint M -best-diverse problem was proposed for submodular energies. The method is based on the
fact that for submodular energies and a family of diversity measures (which includes e.g., Hamming
distance) the set of M diverse solutions can be totally ordered with respect to the partial labeling
order. In the binary labeling case, the M -best-diverse solutions form a nested set. However, although
the method [23] is a considerably more efficient way to solve the problem, compared to the general
algorithm proposed in [22], it is still considerably slower than the sequential method [4]. Furthermore,
the runtime difference grows with the number M of configurations.
Interestingly, the above-mentioned "nestedness property" is also fulfilled by minimizers of a parametric submodular minimization problem [12]. In particular, it holds for the monotonic max-flow
method [25], which is also widely used for obtaining diverse labelings in practice [7, 20, 19]. Naturally, we would like to ask questions about the relationship of these two techniques, such as: "Do
the joint M-best-diverse configurations form a subset of the configurations returned by a parametric
submodular minimization problem?", and conversely, "Can the parametric submodular minimization
be used to (efficiently) produce the M-best-diverse configurations?" We give positive answers to
both these questions.
Contribution
• For binary submodular energies we provide a relationship between the joint M-best-diverse
and the parametric submodular minimization problems. In the case of "concave node-wise diversity
measures"1 we give a closed-form formula for the parameter values which corresponds to the joint
M-best-diverse labelings. The values can be computed in advance, prior to any optimization, which
allows one to obtain each labeling independently.
• Our theoretical results suggest a number of efficient algorithms for the joint M-best-diverse
problem. We describe and experimentally evaluate the two simplest of them, sequential and parallel.
Both are considerably faster than the popular technique [4] and are as easy to implement. We
demonstrate the effectiveness of these algorithms on two publicly available datasets.
1 Precise definition is given in Sections 2 and 3.
2 Background and Problem Definition
Energy Minimization. Let $2^A$ denote the powerset of a set $A$. The pair $G = (\mathcal{V}, \mathcal{F})$ is called a hyper-graph and has $\mathcal{V}$ as a finite set of variable nodes and $\mathcal{F} \subseteq 2^{\mathcal{V}}$ as a set of factors. Each variable node $v \in \mathcal{V}$ is associated with a variable $y_v$ taking its values in a finite set of labels $\mathcal{L}_v$. The set $\mathcal{L}_A = \prod_{v \in A} \mathcal{L}_v$ denotes the Cartesian product of the sets of labels corresponding to a subset $A \subseteq \mathcal{V}$ of variables. Functions $\theta_f \colon \mathcal{L}_f \to \mathbb{R}$, associated with factors $f \in \mathcal{F}$, are called potentials and define local costs on values of variables and their combinations. Potentials $\theta_f$ with $|f| = 1$ are called unary, with $|f| = 2$ pairwise, and with $|f| > 2$ high-order. Without loss of generality we will assume that there is a unary potential $\theta_v$ assigned to each variable $v \in \mathcal{V}$. This implies that $\mathcal{F} = \mathcal{V} \cup F$, where $F = \{f \in \mathcal{F} \colon |f| \geq 2\}$ is the set of non-unary factors. For any non-unary factor $f \in F$ the corresponding set of variables $\{y_v \colon v \in f\}$ will be denoted by $y_f$. The energy minimization problem consists in finding a labeling $y^* = (y_v \colon v \in \mathcal{V}) \in \mathcal{L}_\mathcal{V}$ which minimizes the total sum of the corresponding potentials:

$$y^* \in \arg\min_{y \in \mathcal{L}_\mathcal{V}} E(y) = \arg\min_{y \in \mathcal{L}_\mathcal{V}} \Big[\sum_{v \in \mathcal{V}} \theta_v(y_v) + \sum_{f \in F} \theta_f(y_f)\Big]. \qquad (1)$$

Problem (1) is also known as MAP-inference. A labeling $y^*$ satisfying (1) will later be called a solution of the energy-minimization or MAP-inference problem, shortly a MAP-labeling or MAP-solution. Unless otherwise specified, we will assume that $\mathcal{L}_v = \{0, 1\}$, $v \in \mathcal{V}$, i.e. each variable may take only two values. Such energies will be called binary. We also assume that the logical operations $\vee$ and $\wedge$ are defined in a natural way on the sets $\mathcal{L}_v$. The case when the energy $E$ decomposes into unary and pairwise potentials only will be termed the pairwise case or pairwise energy.

In the following, we use brackets to distinguish between an upper index and a power, i.e. $(A)^n$ means the $n$-th power of $A$, whereas $n$ is an upper index in the expression $A^n$. We will keep, however, the standard notation $\mathbb{R}^n$ for the $n$-dimensional vector space and skip the brackets if an upper index does not make mathematical sense, such as in the expression $\{0, 1\}^{|\mathcal{V}|}$.
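As a small illustration of this notation, the following sketch evaluates a binary pairwise energy of the form (1); the dictionary-based data structures are our own illustrative choice, not part of the paper.

def energy(y, unary, pairwise):
    """y: dict node -> {0,1}; unary: dict node -> (theta_v(0), theta_v(1));
    pairwise: dict (u, v) -> 2x2 nested list theta_uv."""
    e = sum(unary[v][y[v]] for v in unary)
    e += sum(theta[y[u]][y[v]] for (u, v), theta in pairwise.items())
    return e

def is_submodular(pairwise):
    # For pairwise binary energies, the submodularity condition (8) below
    # reduces to theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0) per edge.
    return all(t[0][0] + t[1][1] <= t[0][1] + t[1][0]
               for t in pairwise.values())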
Joint-DivMBest Problem. Instead of searching for a single labeling with the lowest energy, one might ask for a set of labelings with low energies, yet being significantly different from each other. In [22] it was proposed to infer such $M$ diverse labelings $\{y^1, \ldots, y^M\} \in (\mathcal{L}_\mathcal{V})^M$ jointly by minimizing

$$E^M(\{y\}) = \sum_{i=1}^{M} E(y^i) - \lambda \Delta^M(\{y\}) \qquad (2)$$

w.r.t. $\{y\} := (y^1, \ldots, y^M)$ for some $\lambda > 0$. Following [22] we use the notation $\{y\}$ and $\{y\}_v$ as shortcuts for $y^1, \ldots, y^M$ and $y_v^1, \ldots, y_v^M$ correspondingly. The function $\Delta^M$ defines the diversity of arbitrary $M$ labelings, i.e. $\Delta^M(\{y\})$ takes a large value if the labelings $\{y\}$ are in a certain sense diverse, and a small value otherwise.

In the following, we will refer to the problem (1) of minimizing the energy $E$ itself as the master problem for (2).
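For concreteness, a small sketch of evaluating the joint objective (2) for a given tuple of labelings; energy() refers to the illustrative helper above, and the node diversity is passed in as a function.

def joint_objective(solutions, unary, pairwise, diversity_v, lam):
    """solutions: list of M labelings, each a dict node -> {0,1}."""
    e = sum(energy(y, unary, pairwise) for y in solutions)
    # Node-wise diversity of the form (3): sum over nodes of a per-node term.
    div = sum(diversity_v(tuple(y[v] for y in solutions)) for v in unary)
    return e - lam * div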
Node-Wise Diversity. In what follows we will consider only node-wise diversity measures, i.e. those which can be represented in the form

$$\Delta^M(\{y\}) = \sum_{v \in \mathcal{V}} \Delta_v^M(\{y\}_v) \qquad (3)$$

for some node diversity measure $\Delta_v^M \colon \{0, 1\}^M \to \mathbb{R}$. Moreover, we will stick to permutation invariant diversity measures, in other words, measures such that $\Delta_v^M(\{y\}_v) = \Delta_v^M(\pi(\{y\}_v))$ for any permutation $\pi$ of the variables $\{y\}_v$.

Let the expression $[A]$ be equal to 1 if $A$ is true and 0 otherwise. Let also $m_v^0 = \sum_{m=1}^{M} [y_v^m = 0]$ count the number of 0's in $\{y\}_v$. In the binary case $\mathcal{L}_v = \{0, 1\}$, any permutation invariant measure can be represented as

$$\Delta_v^M(\{y\}_v) = \tilde{\Delta}_v^M(m_v^0). \qquad (4)$$

To keep notation simple, we will use $\Delta_v^M$ for both representations: $\Delta_v^M(\{y\}_v)$ and $\Delta_v^M(m_v^0)$.
Example 1 (Hamming distance diversity). Consider the common node diversity measure, the sum of Hamming distances between each pair of labels:

$$\Delta_v^M(\{y\}_v) = \sum_{i=1}^{M} \sum_{j=i+1}^{M} [y_v^i \neq y_v^j]. \qquad (5)$$

This measure is permutation invariant. Therefore, it can be written as a function of the number $m_v^0$:

$$\Delta_v^M(m_v^0) = m_v^0 \cdot (M - m_v^0). \qquad (6)$$
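A quick numeric sketch (our own, for illustration) confirming that the pairwise form (5) and the closed form (6) agree:

from itertools import combinations

def hamming_diversity(labels):
    """labels: tuple (y_v^1, ..., y_v^M) of binary labels at one node."""
    return sum(a != b for a, b in combinations(labels, 2))

def hamming_diversity_m0(m0, M):
    return m0 * (M - m0)          # closed form (6)

M = 5
for labels in [(0, 0, 1, 1, 1), (0, 1, 0, 1, 1)]:
    assert hamming_diversity(labels) == hamming_diversity_m0(labels.count(0), M)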
Minimization Techniques for the Joint-DivMBest Problem. Direct minimization of (2) has so far been considered a difficult problem, even when the master problem (1) is easy to solve. We refer to [22] for a detailed investigation of using general MAP-inference solvers for (2). In this paragraph we briefly summarize the existing efficient minimization approaches for (2).

As shown in [22], the sequential method DivMBest [4] can be seen as a greedy algorithm for approximate minimization of (2), finding one solution after another. The sequential method [4] is used for diversity measures that can be represented as a sum of diversity measures between each pair of solutions, i.e. $\Delta^M(\{y\}) = \sum_{m_1=1}^{M} \sum_{m_2=m_1+1}^{M} \Delta^2(y^{m_1}, y^{m_2})$. For each $m = 1, \ldots, M$ the method sequentially computes

$$y^m = \arg\min_{y \in \mathcal{L}_\mathcal{V}} \Big[E(y) - \lambda \sum_{i=1}^{m-1} \Delta^2(y, y^i)\Big]. \qquad (7)$$

In the case of a node diversity measure (3), this algorithm requires sequentially solving $M$ energy minimization problems (1) with only modified unary potentials compared to the master problem (1). This typically implies that an efficient solver for the master problem can also be used to obtain its diverse solutions.
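A minimal sketch of the greedy scheme (7) for the node-wise Hamming diversity: up to a constant, rewarding disagreement with previous labelings is the same as penalizing agreement, so each round only changes the unary potentials. `solve_map` stands for any solver of the master problem (1) and is assumed given.

import copy

def divmbest(unary, pairwise, solve_map, M, lam):
    """unary: dict node -> [cost0, cost1] (mutable lists)."""
    solutions = []
    for _ in range(M):
        u = copy.deepcopy(unary)
        # Add a penalty lam for agreeing with each previously extracted
        # labeling, cf. (7) with the Hamming diversity.
        for y_prev in solutions:
            for v in u:
                u[v][y_prev[v]] += lam
        solutions.append(solve_map(u, pairwise))
    return solutions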
In [23] an efficient approach for (2) was proposed for submodular energies $E$. An energy $E(y)$ is called submodular if for any two labelings $y, y' \in \mathcal{L}_\mathcal{V}$ it holds

$$E(y \vee y') + E(y \wedge y') \leq E(y) + E(y'), \qquad (8)$$

where $y \vee y'$ and $y \wedge y'$ denote correspondingly the node-wise maximum and minimum with respect to the natural order on the label set $\mathcal{L}_v$.

In the following, we will use the term the higher labeling. The labeling $y$ is higher than the labeling $y'$ if $y_v \geq y'_v$ for all $v \in \mathcal{V}$. So, the labeling $y \vee y'$ is higher than both $y$ and $y'$. Since the set of all labelings is a lattice w.r.t. the operation $\vee$, we will also speak about the highest labeling.

It was shown in [23] that for submodular energies, under certain technical conditions on the diversity measure $\Delta_v^M$ (see Lemma 2 in [23]), the problem (2) can be reformulated as a submodular energy minimization and, therefore, can be solved either exactly or approximately by efficient optimization techniques (e.g., by reduction to min-cut/max-flow in the pairwise case). However, the size of the reformulated problem grows at least linearly with $M$ (quadratically in the case of the Hamming distance diversity (5)), and therefore even approximate algorithms require more time than the DivMBest method (7). Moreover, this difference in runtime grows with $M$.

The mentioned transformation of (2) into a submodular energy minimization problem is based on the theorem below, which plays a crucial role in obtaining the main results of this work. We first give a definition of the "nestedness property", which is also important for the rest of the paper.

Definition 1. An $M$-tuple $(y^1, \ldots, y^M) \in (\mathcal{L}_\mathcal{V})^M$ is called nested if for each $v \in \mathcal{V}$ the inequality $y_v^i \leq y_v^j$ holds for $1 \leq i \leq j \leq M$, i.e. for $\mathcal{L}_v = \{0, 1\}$, $y_v^i = 1$ implies $y_v^j = 1$ for $j > i$.

Theorem 1 (special case of Thm. 1 of [23]). For a binary submodular energy and a node-wise permutation invariant diversity, there exists a nested minimizer of the Joint-DivMBest objective (2).
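As a quick illustrative check of Definition 1 (our own sketch, not from the paper):

def is_nested(solutions):
    """Checks Definition 1 for binary labelings given as dicts node -> {0,1}:
    consecutive labelings must be ordered node-wise."""
    return all(y1[v] <= y2[v]
               for y1, y2 in zip(solutions, solutions[1:])
               for v in y1)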
Parametric submodular minimization. Let $\lambda^i \in \mathbb{R}^{|\mathcal{V}|}$, $i \in \{1, \ldots, k\}$, be vectors of parameters with the coordinates indexed by the node index $v \in \mathcal{V}$. We define the parametric energy minimization as the problem of evaluating the function

$$\min_{y \in \mathcal{L}_\mathcal{V}} E^{\lambda}(y) := \min_{y \in \mathcal{L}_\mathcal{V}} \Big[E(y) + \sum_{v \in \mathcal{V}} \lambda_v y_v\Big] \qquad (9)$$

for all values of the parameter $\lambda \in \Lambda \subseteq \mathbb{R}^{|\mathcal{V}|}$. The most important cases of the parametric energy minimization are:
Figure 1: Hamming distance (left) and linear (right) diversity measures for $M = 5$. The value $m$ is defined as $\sum_{m=1}^{M} [y_v^m = 0]$. Both diversity measures are concave.
• the monotonic parametric max-flow problem [14, 18], which corresponds to the case when $E$ is a binary submodular pairwise energy, $\Lambda = \{\lambda \in \mathbb{R}^{|\mathcal{V}|} \colon \lambda_v = \lambda_v(\gamma)\}$, and the functions $\lambda_v \colon \mathbb{R} \to \mathbb{R}$ are non-increasing in $\gamma \in \mathbb{R}$;
• a subclass of the parametric submodular minimization [12, 2], where $E$ is submodular and $\Lambda = \{\lambda^1, \lambda^2, \ldots, \lambda^k \in \mathbb{R}^{|\mathcal{V}|} \colon \lambda^1 \geq \lambda^2 \geq \ldots \geq \lambda^k\}$, where the relation $\geq$ is applied coordinate-wise.

It is known [38] that in these two cases, (i) the highest minimizers $y^1, \ldots, y^k \in \mathcal{L}_\mathcal{V}$ of $E^{\lambda^i}$, $i \in \{1, \ldots, k\}$, are nested, and (ii) the parametric problem (9) is solvable efficiently by the respective algorithms [14, 18, 12]. In the following, we will show that for a submodular energy $E$ the Joint-DivMBest problem (2) reduces to the parametric submodular minimization with the values $\lambda^1 \geq \lambda^2 \geq \ldots \geq \lambda^M \in \mathbb{R}^{|\mathcal{V}|}$ given in closed form.
3 Joint M-Best-Diverse Problem as a Parametric Submodular Minimization
Our results hold for the following subclass of the permutation invariant node-wise diversity measures:

Definition 2. A node-wise diversity measure $\Delta_v^M(m)$ is called concave if for any $1 \leq i \leq j \leq M$ it holds

$$\Delta_v^M(i) - \Delta_v^M(i-1) \geq \Delta_v^M(j) - \Delta_v^M(j-1). \qquad (10)$$

There are a number of practically relevant concave diversity measures:

Example 2. The Hamming distance diversity (6) is concave, see Fig. 1 for an illustration.

Example 3. Diversity measures of the form

$$\Delta_v^M(m_v^0) = -\left|m_v^0 - (M - m_v^0)\right|^p = -\left|2m_v^0 - M\right|^p \qquad (11)$$

are concave for any $p \geq 1$. Here $M - m_v^0$ is the number of variables labeled as 1. Hence, $|m_v^0 - (M - m_v^0)|$ is the absolute value of the difference between the numbers of variables labeled as 0 and 1. It expresses the natural fact that a distribution of 0's and 1's is more diverse when their amounts are similar.

For $p = 1$ we call the measure (11) linear; for $p = 2$ the measure (11) coincides with the Hamming distance diversity (6). An illustration of these two cases is given in Fig. 1.
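A small numeric sketch (ours) of the family (11) and a check of the concavity condition (10), i.e. that the increments are non-increasing in m:

def delta_p(m0, M, p):
    return -abs(2 * m0 - M) ** p    # the measure (11)

for p in (1.0, 1.5, 2.0):
    M = 6
    inc = [delta_p(m, M, p) - delta_p(m - 1, M, p) for m in range(1, M + 1)]
    assert all(a >= b for a, b in zip(inc, inc[1:])), p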
Our main theoretical result is given by the following theorem:

Theorem 2. Let $E$ be binary submodular and $\Delta^M$ be a node-wise diversity measure with each component $\Delta_v^M$, $v \in \mathcal{V}$, being permutation invariant and concave. Then a nested $M$-tuple $(y^m)_{m=1}^{M}$ minimizing the Joint-DivMBest objective (2) can be found as the solutions of the following $M$ problems:

$$y^m = \arg\min_{y \in \mathcal{L}_\mathcal{V}} \Big[E(y) + \sum_{v \in \mathcal{V}} \lambda_v^m\, y_v\Big], \qquad (12)$$

where $\lambda_v^m = \lambda\big(\Delta_v^M(m) - \Delta_v^M(m-1)\big)$. In the case of multiple solutions in (12) the highest minimizer must be selected.

We refer to the supplement for the proof of Theorem 2 and discuss its practical consequences below. First note that the sequence $(\lambda^m)_{m=1}^{M}$ is monotone due to the concavity of $\Delta_v^M$. Each of the $M$ optimization problems (12) has the same size as the master problem (1) and differs from it by unary potentials only. Theorem 2 implies that the $\lambda^m$ in (12) satisfy the monotonicity condition $\lambda^1 \geq \lambda^2 \geq \ldots \geq \lambda^M$. Therefore, equations (12) constitute the parametric submodular minimization problem as defined above, which reduces to the monotonic parametric max-flow problem for pairwise $E$. Let $\lfloor \cdot \rfloor$ denote the largest integer not exceeding its argument.
Corollary 1. Let $\Delta_v^M$ in Theorem 2 be the Hamming distance diversity (6). Then it holds:
1. $\lambda_v^m = \lambda(M - 2m + 1)$.
2. The values $\lambda_v^m$, $m = 1, \ldots, M$, are symmetrically distributed around 0: $-\lambda_v^m = \lambda_v^{M+1-m}$ for $m \leq \lfloor (M+1)/2 \rfloor$, and $\lambda_v^m = 0$ if $m = (M+1)/2$.
3. Moreover, this distribution is uniform, that is, $\lambda_v^m - \lambda_v^{m+1} = 2\lambda$, $m = 1, \ldots, M-1$.
4. When $M$ is odd, the MAP-solution (corresponding to $\lambda^{(M+1)/2} = 0$) is always among the $M$-best-diverse labelings minimizing (2).

Corollary 2. Implications 2 and 4 of Corollary 1 hold for any symmetrical concave $\Delta_v^M$, i.e., those where $\Delta_v^M(m) = \Delta_v^M(M + 1 - m)$ for $m \leq \lfloor (M+1)/2 \rfloor$.

Corollary 3. For the linear diversity measure the value $\lambda_v^m$ in (12) is equal to $\lambda \cdot \mathrm{sgn}\big(\tfrac{M}{2} - m\big)$, where $\mathrm{sgn}(x)$ is the sign function, i.e., $\mathrm{sgn}(x) = [x > 0] - [x < 0]$. Since all $\lambda_v^m$ for $m < \tfrac{M}{2}$ are the same, this diversity measure can give only up to 3 different diverse labelings. Therefore, this diversity measure is not useful for $M > 3$, and can be seen as a limit of useful concave diversity measures.
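A sketch of Theorem 2 with the closed-form values of Corollary 1 for the Hamming diversity: the M diverse labelings come from M independent problems (12) that differ from the master problem only in the unary cost of label 1. As before, `solve_map` is an assumed solver of the master problem (1) that returns the highest minimizer.

def joint_divmbest_hamming(unary, pairwise, solve_map, M, lam):
    """unary: dict node -> (cost0, cost1)."""
    solutions = []
    for m in range(1, M + 1):
        offset = lam * (M - 2 * m + 1)   # lambda_v^m from Corollary 1
        u = {v: [c0, c1 + offset] for v, (c0, c1) in unary.items()}
        # The M calls are completely independent and can run in parallel.
        solutions.append(solve_map(u, pairwise))
    return solutions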
4 Efficient Algorithmic Solutions
Theorem 2 suggests several new computational methods for minimizing the Joint-DivMBest objective (2). All of them are more efficient than those proposed in [23]. Indeed, as we show experimentally
in Section 5, they outperform even the sequential DivMBest method (7).
The simplest algorithm applies a MAP-inference solver to each of the M problems (12) sequentially
and independently. This algorithm has the same computational cost as DivMBest (7) since it also
sequentially solves M problems of the same size. However, already its slightly improved version,
described below, performs faster than DivMBest (7).
Sequential Algorithm. Theorem 2 states that the solutions of (12) are nested. Therefore, from $y_v^{m-1} = 1$ it follows that $y_v^m = 1$ for labelings $y^{m-1}$ and $y^m$ obtained according to (12). This allows one to reduce the size and computing time for each subsequent problem in the sequence.2 Reusing the
flow from the previous step gives an additional speedup. In fact, when applying a push relabel or
pseudoflow algorithm in this fashion the total work complexity is asymptotically the same as of a
single minimum cut [14, 18] of the master problem. In practice, this strategy is efficient with other
min-cut solvers (without theoretical guarantees) as well. In our experiments we evaluated it with the
dynamic augmenting path method [6, 24].
Parallel Algorithm. The $M$ problems (12) are completely independent, and their highest minimizers recover the optimal $M$-tuple $(y^m)_{m=1}^{M}$ according to Theorem 2. They can be solved fully in parallel or,
using p < M processors, in parallel groups of M/p problems per processor, incrementally within
each group. The overhead is only in copying data costs and sharing the memory bandwidth.
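A minimal sketch of the parallel variant with a process pool; `solve_problem(m)` is assumed to build and solve the m-th problem (12), e.g. as in the sketch above.

from concurrent.futures import ProcessPoolExecutor

def joint_divmbest_parallel(solve_problem, M, workers=6):
    # The M problems are fully independent, so a plain pool.map suffices;
    # the only overhead is copying problem data to the workers.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(solve_problem, range(1, M + 1)))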
Alternative approaches One may suggest that for large M it would be more efficient to solve the full
parametric max-flow problem [18, 14] and then "read out" the solutions corresponding to the desired values
$\lambda^m$. However, the known algorithms [18, 14] would perform exactly the incremental computation
described in the sequential approach above plus an extra work of identifying all breakpoints. This
is only sensible when M is larger than the number of breakpoints or the diversity measure is not
known in advance (e.g., is itself parametric). Similarly, parametric submodular function minimization
can be solved in the same worst case complexity [12] as non-parametric, but the algorithm is again
incremental and would just perform less work when the parameters of interest are known in advance.
5 Experimental Evaluation
We base our experiments on two datasets: (i) The interactive foreground/background image segmentation dataset utilized in several papers [4, 31, 22, 23] for comparing diversity techniques; (ii) A new
2 By applying "symmetric reasoning" for the label 0, further speed-ups can be achieved. However, we stick to the first variant in our experiments.
Table 1: Interactive segmentation. The quality measure is a per-pixel accuracy of the best segmentation, out of M, averaged over all test images. The runtime is in milliseconds (ms). The quality for M = 1 is 91.57. Parametric-parallel is the fastest method followed by Parametric-sequential. Both achieve higher quality than DivMBest, and return the same solution as Joint-DivMBest.

                                        M=2               M=6               M=10
                                  quality time(ms)  quality time(ms)  quality time(ms)
DivMBest [4]                       93.16    2.6      95.02   11.6      95.16   15.4
Joint-DivMBest [23]                95.13    5.5      96.01   17.2      96.19   80.3
Parametric-sequential (1 core)     95.13    2.2      96.01    5.5      96.19    8.4
Parametric-parallel (6 cores)      95.13    1.9      96.01    4.3      96.19    6.2
dataset for foreground/background image segmentation with binary pairwise energies derived from
the well-known PASCAL VOC 2012 dataset [11]. Energies of the master problem (1) in both cases
are binary and pairwise, therefore we use their reduction [26] to the min-cut/max-flow problem to
obtain solutions efficiently.
Baselines Our main competitor is the fastest known approach for inferring M diverse solutions, the
DivMBest method [4]. We made its efficient re-implementation using dynamic graph-cut [24]. We
also compare our method with Joint-DivMBest [23], which provides an exact minimum of (2) as
our method does.
Diversity Measure In all of our experiments we use the Hamming distance diversity measure (5).
Note that in [31] more sophisticated diversity measures were used, e.g., the Hamming Ball. However,
the DivMBest method (7) with this measure requires running the very time-consuming HOP-MAP [37]
inference technique. Moreover, the experimental evaluation in [23] suggests that the exact minimum
of (2) with Hamming distance diversity (5) outperforms DivMBest with a Hamming Ball distance
diversity.
Our Method We evaluate both algorithms described in Section 4, i.e., sequential and parallel. We
refer to them as Parametric-sequential and Parametric-parallel respectively. We utilize
the dynamic graph-cut [24] technique for Parametric-sequential, which makes it comparable to
our implementation of DivMBest. The max-flow solver of [6] is used for Parametric-parallel
together with OpenMP directives. For the experiments we use a computer with 6 physical cores (12
virtual cores), and run Parametric-parallel with M threads.
Parameters λ (from (7) and (2)) were tuned via cross-validation for each algorithm and each experiment separately.
5.1 Interactive Segmentation
The basic idea is that after a user interaction, the system provides the user with M diverse segmentations, instead of a single one. The user can then manually select the best one and add more
user scribbles, if necessary. Following [4] we consider only the first iteration of such an interactive
procedure, i.e., we consider user scribbles to be given and compare the sets of segmentations returned
by the system.
The authors of [4] kindly provided us their 50 super-pixel graphical model instances. They are based
on a subset of the PASCAL VOC 2010 [11] segmentation challenge with manually added scribbles.
An instance has on average 3000 nodes. Pairwise potentials are given by contrast-sensitive Potts
terms [5], which are submodular in the binary case. This implies that Theorem 2 is applicable.
Quantitative comparison and runtime of the different algorithms are presented in Table 1. As
in [4], our quality measure is a per-pixel accuracy of the best solution for each test image, averaged
over all test images. As expected, Joint-DivMBest and Parametric-* return the same, exact
solution of (2). The measured runtime is also averaged over all test images. Parametric-parallel
is the fastest method followed by Parametric-sequential. Note that on a computer with fewer
cores, Parametric-sequential may even outperform Parametric-parallel because of the
parallelization overheads.
5.2 Foreground/Background Segmentation
The Pascal VOC 2012 [11] segmentation dataset has 21 labels. We selected all those 451 images
from the validation set for which the ground truth labeling has only two labels (background and one
Figure 2: Foreground/background segmentation. (a) Intersection-over-union (IoU) score for the best segmentation, out of M. "Parametric" represents a curve which is the same for Parametric-sequential, Parametric-parallel and Joint-DivMBest, since they solve exactly the same Joint-DivMBest problem. (b) Runtime in seconds. DivMBest uses dynamic graph-cut [24]. Parametric-sequential uses dynamic graph-cut and a reduced-size graph for each consecutive labeling problem. Parametric-parallel solves M problems in parallel using OpenMP.
of the 20 object classes) and which were not used for training. As unary potentials we use the output
probabilities of the publicly available fully convolutional neural network FCN-8s [29], which is
trained for the Pascal VOC 2012 challenge. This CNN gives unary terms for all 21 classes. For each
image we pick only two classes: the background and the class-label that is presented in the ground
truth. As pairwise potentials we use the contrast-sensitive Potts terms [5] with a 4-connected grid
structure.
Quantitative Comparison and Runtime As quality measure we use the standard Pascal VOC
measure for semantic segmentation ? average intersection-over-union (IoU) [11]. The unary potentials
alone, i.e., output of FCN-8s, give 82.12 IoU. The single best labeling, returned by the MAP-inference
problem, improves it to 83.23 IoU.
The comparisons with respect to runtime and accuracy of results are presented in Fig. 2a and 2b
respectively. The increase in runtime with respect to M for Parametric-parallel is due to
parallelization overhead costs, which grow with M . Parametric-parallel is a clear winner in
this experiment, both in terms of quality and runtime. Parametric-sequential is slower than
Parametric-parallel but faster than DivMBest. The difference in runtime between these three
algorithms grows with M .
6 Conclusion and Outlook
We have shown that the M labelings which constitute a solution to the Joint-DivMBest problem with
binary submodular energies and concave node-wise permutation invariant diversity measures can be
computed in parallel, independently from each other, as solutions of the master energy minimization
problem with modified unary costs. This allows one to build solvers which run even faster than the
approximate method of Batra et al. [4]. Furthermore, we have shown that such Joint-DivMBest
problems reduce to the parametric submodular minimization. This shows a clear connection of these
two practical approaches to obtaining diverse solutions to the energy minimization problem.
References
[1] Ahmed, F., Tarlow, D., Batra, D.: Optimizing expected Intersection-over-Union with candidate-constrained
CRFs. In: ICCV (2015)
[2] Bach, F.: Learning with submodular functions: A convex optimization perspective. Foundations and
Trends in Machine Learning 6(2-3), 145–373 (2013)
[3] Batra, D.: An efficient message-passing algorithm for the M-best MAP problem. In: UAI (2012)
[4] Batra, D., Yadollahpour, P., Guzman-Rivera, A., Shakhnarovich, G.: Diverse M-best solutions in Markov
random fields. In: ECCV. Springer Berlin/Heidelberg (2012)
[5] Boykov, Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in
N-D images. In: ICCV (2001)
[6] Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy
minimization in vision. TPAMI 26(9), 1124–1137 (2004)
[7] Carreira, J., Sminchisescu, C.: Constrained parametric min-cuts for automatic object segmentation. In:
CVPR. pp. 3241–3248 (2010)
[8] Chen, C., Kolmogorov, V., Zhu, Y., Metaxas, D.N., Lampert, C.H.: Computing the M most probable modes
of a graphical model. In: AISTATS (2013)
[9] Dey, D., Ramakrishna, V., Hebert, M., Andrew Bagnell, J.: Predicting multiple structured visual interpretations. In: ICCV (2015)
[10] Elidan, G., Globerson, A.: The probabilistic inference challenge (PIC2011) (2011)
[11] Everingham, M., Van Gool, L., Williams, C.K.I., Winn, J., Zisserman, A.: The PASCAL Visual Object
Classes Challenge 2012 (VOC2012) Results (2012)
[12] Fleischer, L., Iwata, S.: A push-relabel framework for submodular function minimization and applications
to parametric optimization. Discrete Applied Mathematics 131(2), 311–322 (2003)
[13] Fromer, M., Globerson, A.: An LP view of the M-best MAP problem. In: NIPS 22 (2009)
[14] Gallo, G., Grigoriadis, M.D., Tarjan, R.E.: A fast parametric maximum flow algorithm and applications.
SIAM Journal on Computing 18(1), 30–55 (1989)
[15] Guzman-Rivera, A., Batra, D., Kohli, P.: Multiple choice learning: Learning to produce multiple structured
outputs. In: NIPS 25 (2012)
[16] Guzman-Rivera, A., Kohli, P., Batra, D.: DivMCuts: Faster training of structural SVMs with diverse
M-best cutting-planes. In: AISTATS (2013)
[17] Guzman-Rivera, A., Kohli, P., Batra, D., Rutenbar, R.A.: Efficiently enforcing diversity in multi-output
structured prediction. In: AISTATS (2014)
[18] Hochbaum, D.S.: The pseudoflow algorithm: A new algorithm for the maximum-flow problem. Operations
research 56(4), 992–1009 (2008)
[19] Hower, D., Singh, V., Johnson, S.C.: Label set perturbation for MRF based neuroimaging segmentation.
In: ICCV. pp. 849–856. IEEE (2009)
[20] Jug, F., Pietzsch, T., Kainmüller, D., Funke, J., Kaiser, M., van Nimwegen, E., Rother, C., Myers, G.:
Optimal joint segmentation and tracking of Escherichia Coli in the mother machine. In: BAMBI (2014)
[21] Kappes, J.H., Andres, B., Hamprecht, F.A., Schnörr, C., Nowozin, S., Batra, D., Kim, S., Kausler, B.X.,
Krüger, T., Lellmann, J., Komodakis, N., Savchynskyy, B., Rother, C.: A comparative study of modern
inference techniques for structured discrete energy minimization problems. IJCV pp. 1–30 (2015)
[22] Kirillov, A., Savchynskyy, B., Schlesinger, D., Vetrov, D., Rother, C.: Inferring M-best diverse labelings in
a single one. In: ICCV (2015)
[23] Kirillov, A., Schlesinger, D., Vetrov, D.P., Rother, C., Savchynskyy, B.: M-best-diverse labelings for
submodular energies and beyond. In: NIPS (2015)
[24] Kohli, P., Torr, P.H.: Dynamic graph cuts for efficient inference in Markov random fields. PAMI (2007)
[25] Kolmogorov, V., Boykov, Y., Rother, C.: Applications of parametric maxflow in computer vision. In: ICCV.
pp. 1–8. IEEE (2007)
[26] Kolmogorov, V., Zabih, R.: What energy functions can be minimized via graph cuts? TPAMI (2004)
[27] Kulesza, A., Taskar, B.: Structured determinantal point processes. In: NIPS 23 (2010)
[28] Lawler, E.L.: A procedure for computing the K best solutions to discrete optimization problems and its
application to the shortest path problem. Management Science 18(7) (1972)
[29] Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. CVPR
(2015)
[30] Nilsson, D.: An efficient algorithm for finding the M most probable configurations in probabilistic expert
systems. Statistics and Computing 8(2), 159–173 (1998)
[31] Prasad, A., Jegelka, S., Batra, D.: Submodular meets structured: Finding diverse subsets in exponentially large structured item sets. In: NIPS 27 (2014)
[32] Premachandran, V., Tarlow, D., Batra, D.: Empirical minimum bayes risk prediction: How to extract an
extra few % performance from vision models with just three more parameters. In: CVPR (2014)
[33] Ramakrishna, V., Batra, D.: Mode-marginals: Expressing uncertainty via diverse M-best solutions. In:
NIPS Workshop on Perturbations, Optimization, and Statistics (2012)
[34] Schlesinger, M.I., Hlavac, V.: Ten lectures on statistical and structural pattern recognition (2002)
[35] Sener, O., Saxena, A.: rCRF: Recursive belief estimation over CRFs in RGB-D activity videos. In:
Robotics Science and Systems (RSS) (2015)
[36] Szeliski, R., Zabih, R., Scharstein, D., Veksler, O., Kolmogorov, V., Agarwala, A., Tappen, M.F., Rother, C.:
A comparative study of energy minimization methods for Markov random fields with smoothness-based
priors. TPAMI 30(6), 1068–1080 (2008)
[37] Tarlow, D., Givoni, I.E., Zemel, R.S.: HOP-MAP: Efficient message passing with high order potentials. In:
AISTATS (2010)
[38] Topkis, D.M.: Minimizing a submodular function on a lattice. Operations research 26(2), 305–321 (1978)
[39] Wang, S., Fidler, S., Urtasun, R.: Lost shopping! monocular localization in large indoor spaces. In: ICCV
(2015)
[40] Yadollahpour, P., Batra, D., Shakhnarovich, G.: Discriminative re-ranking of diverse segmentations. In:
CVPR (2013)
[41] Yanover, C., Weiss, Y.: Finding the M most probable configurations using loopy belief propagation. In:
NIPS 17 (2004)
5,965 | 6,397 | Faster Projection-free Convex Optimization over the
Spectrahedron
Dan Garber
Toyota Technological Institute at Chicago
[email protected]
Abstract
Minimizing a convex function over the spectrahedron, i.e., the set of all d × d
positive semidefinite matrices with unit trace, is an important optimization task
with many applications in optimization, machine learning, and signal processing. It
is also notoriously difficult to solve at large scale since standard techniques require
to compute expensive matrix decompositions. An alternative is the conditional
gradient method (aka Frank-Wolfe algorithm) that regained much interest in recent
years, mostly due to its application to this specific setting. The key benefit of the
CG method is that it avoids expensive matrix decompositions altogether, and
simply requires a single eigenvector computation per iteration, which is much more
efficient. On the downside, the CG method, in general, converges with an inferior
rate. The error for minimizing a β-smooth function after t iterations scales like β/t.
This rate does not improve even if the function is also strongly convex. In this work
we present a modification of the CG method tailored for the spectrahedron. The
per-iteration complexity of the method is essentially identical to that of the standard
CG method: only a single eigenvector computation is required. For minimizing an
α-strongly convex and β-smooth function, the expected error of the method after t
iterations is:
$$O\left(\min\left\{\frac{\beta}{t},\ \left(\frac{\beta\sqrt{\operatorname{rank}(X^\star)}}{\alpha^{1/4}\, t}\right)^{4/3},\ \frac{\beta^{2}}{\sqrt{\alpha}\,\lambda_{\min}(X^\star)\, t}\right\}\right),$$
where rank(X⋆) and λ_min(X⋆) are the rank of the optimal solution and its smallest non-zero eigenvalue, respectively. Beyond the significant improvement in convergence
rate, it also follows that when the optimum is low-rank, our method provides better
accuracy-rank tradeoff than the standard CG method. To the best of our knowledge,
this is the first result that attains provably faster convergence rates for a CG variant
for optimization over the spectrahedron. We also present encouraging preliminary
empirical results.
1 Introduction
Minimizing a convex function over the set of positive semidefinite matrices with unit trace, aka
the spectrahedron, is an important optimization task which lies at the heart of many optimization,
machine learning, and signal processing tasks such as matrix completion [1, 13], metric learning
[21, 22], kernel matrix learning [16, 9], multiclass classification [2, 23], and more.
Since modern applications are mostly of very large scale, first-order methods are the obvious choice to
deal with this optimization problem. However, even these are notoriously difficult to apply, since most
of the popular gradient schemes require the computation of an orthogonal projection on each iteration
to enforce feasibility, which for the spectraheron, amounts to computing a full eigen-decomposition
of a real symmetric matrix. Such a decomposition requires O(d³) arithmetic operations for a d × d
matrix and thus is prohibitive for high-dimensional problems. An alternative is to use first-order
methods that do not require expensive decompositions, but rely only on computationally-cheap
leading eigenvector computations. These methods are mostly based on the conditional gradient
method, also known as the Frank-Wolfe algorithm [3, 12], which is a generic method for constrained
convex optimization given an oracle for minimizing linear functions over the feasible domain. Indeed,
linear minimization over the spectrahedron amounts to a single leading eigenvector computation.
While the CG method has been discovered already in the 1950?s [3], it has regained much interest
in recent years in the machine learning and optimization communities, in particular due to its
applications to semidefinite optimization and convex optimization with a nuclear norm constraint
/ regularization¹, e.g., [10, 13, 17, 19, 22, 2, 11]. This regained interest is not surprising: while a
full eigen-decomposition for a d × d matrix requires O(d³) arithmetic operations, leading eigenvector
computations can be carried out, roughly speaking, in worst-case time that is only linear in the
number of non-zeros in the input matrix multiplied by either ε⁻¹ for the popular Power Method
or by ε^(−1/2) for the more efficient Lanczos method, where ε is the target accuracy. These running
times improve exponentially to only depend on log(1/ε) when the eigenvalues of the input matrix
are well distributed [14]. Indeed, in several important machine learning applications, such as matrix
completion, the CG method requires eigenvector computations of very sparse matrices [13]. Also,
very recently, new eigenvector algorithms with significantly improved performance guarantees were
introduced which are applicable for matrices with certain popular structure [5, 8, 20].
The main drawback of the CG method is that its convergence rate is, in general, inferior compared to
projection-based gradient methods. The convergence rate for minimizing a smooth function, roughly
speaking, scales only like 1/t. In particular, this rate does not improve even when the function is
also strongly convex. On the other hand, the convergence rate of optimal projection-based methods,
such as Nesterov's accelerated gradient method, scales like 1/t² for smooth functions, and can be
improved exponentially to exp(−Θ(t)) when the objective is also strongly convex.
Very recently, several successful attempts were made to devise natural modifications of the CG method
that retain the overall low per-iteration complexity, while enjoying provably faster convergence rates,
usually under a strong-convexity assumption, or a slightly weaker one. These results exhibit provably faster rates for optimization over polyhedral sets [7, 15] and strongly-convex sets [6], but do not apply
to the spectrahedron. For the specific setting considered in this work, several heuristic improvements
of the CG method were suggested which show promising empirical evidence; however, none of them
provably improve over the rate of the standard CG method [19, 17, 4].
In this work we present a new non-trivial variant of the CG method, which, to the best of our
knowledge, is the first to exhibit provably faster convergence rates for optimization over the spectrahedron, under standard smoothness and strong convexity assumptions. The per-iteration complexity
of the method is essentially identical to that of the standard CG method, i.e., only a single leading
eigenvector computation per iteration is required. Our method is tailored for optimization over the
spectrahedron, and can be seen as a certain hybridization of the standard CG method and the projected
gradient method. From a high-level view, we take advantage of the fact that solving an ℓ₂-regularized
linear problem over the set of extreme points of the spectrahedron is equivalent to linear optimization
over this set, i.e., amounts to a single eigenvector computation. We then show via a novel and
non-trivial analysis, that includes new decomposition concepts for positive semidefinite matrices, that
such an algorithmically-cheap regularization is sufficient, in presence of strong convexity, to derive
faster convergence rates.
2 Preliminaries and Notation
For vectors we let ‖·‖ denote the standard Euclidean norm, while for matrices we let ‖·‖ denote
the spectral norm, ‖·‖_F denote the Frobenius norm, and ‖·‖_* denote the nuclear norm. We
denote by 𝕊_d the space of d × d real symmetric matrices, and by 𝒮_d the spectrahedron in 𝕊_d, i.e.,
𝒮_d := {X ∈ 𝕊_d | X ⪰ 0, Tr(X) = 1}. We let Tr(·) and rank(·) denote the trace and rank of a given
matrix in 𝕊_d, respectively. We let • denote the standard inner-product for matrices. Given a matrix
X ∈ 𝕊_d, we let λ_min(X) denote the smallest non-zero eigenvalue of X.
¹ Minimizing a convex function subject to a nuclear norm constraint is efficiently reducible to the minimization of the function over the spectrahedron, as fully detailed in [13].
Given a matrix A ∈ 𝕊_d, we denote by EV(A) an eigenvector of A that corresponds to the largest
(signed) eigenvalue of A, i.e., EV(A) ∈ arg max_{v:‖v‖=1} v^⊤Av. Given a scalar ε > 0, we also
denote by EV_ε(A) an ε-approximation to the largest (in terms of eigenvalue) eigenvector of A, i.e.,
EV_ε(A) returns a unit vector v such that v^⊤Av ≥ λ_max(A) − ε.
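As a concrete illustration, an EV_ε oracle can be realized with a few lines of power iteration; the sketch below is an assumption-laden stand-in (matrix access via a `matvec` callback, a fixed iteration budget in place of an explicit ε certificate, and a spectral shift to target the largest signed eigenvalue), not the Lanczos-type solvers the paper has in mind:

```python
import numpy as np

def approx_leading_eigenvector(matvec, d, shift=0.0, n_iters=200, seed=0):
    """Power-iteration stand-in for EV_eps(A): returns a unit vector v
    with v^T A v close to lambda_max(A).

    `matvec` computes A @ v, so only matrix-vector products are needed;
    `shift` adds shift*I, which makes the largest *signed* eigenvalue the
    dominant one when A may have large negative eigenvalues.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(d)
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        w = matvec(v) + shift * v
        norm_w = np.linalg.norm(w)
        if norm_w == 0.0:  # v lies in the null space of the shifted matrix
            break
        v = w / norm_w
    return v
```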
Definition 1. We say that a function f(X): ℝ^{m×n} → ℝ is α-strongly convex w.r.t. a norm ‖·‖, if
for all X, Y ∈ ℝ^{m×n} it holds that
$$f(Y) \ge f(X) + (Y - X) \bullet \nabla f(X) + \frac{\alpha}{2}\|X - Y\|^2.$$
Definition 2. We say that a function f(X): ℝ^{m×n} → ℝ is β-smooth w.r.t. a norm ‖·‖, if for all
X, Y ∈ ℝ^{m×n} it holds that
$$f(Y) \le f(X) + (Y - X) \bullet \nabla f(X) + \frac{\beta}{2}\|X - Y\|^2.$$
The first-order optimality condition implies that for an α-strongly convex f, if X⋆ is the unique
minimizer of f over a convex set K ⊆ ℝ^{m×n}, then for all X ∈ K it holds that
$$f(X) - f(X^\star) \ge \frac{\alpha}{2}\|X - X^\star\|^2. \qquad (1)$$
2.1 Problem setting
The main focus of this work is the following optimization problem:
$$\min_{X \in \mathcal{S}_d} f(X), \qquad (2)$$
where we assume that f(X) is both α-strongly convex and β-smooth w.r.t. ‖·‖_F. We denote the
(unique) minimizer of f over 𝒮_d by X⋆.
3 Our Approach
We begin by briefly describing the conditional gradient and projected-gradient methods, pointing out
their advantages and shortcomings for solving Problem (2) in Subsection 3.1. We then present our
new method which is a certain combination of ideas from both methods in Subsection 3.2.
3.1 Conditional gradient and projected gradient descent
The standard conditional gradient algorithm is detailed below in Algorithm 1.
Algorithm 1 Conditional Gradient
1: input: sequence of step-sizes {η_t}_{t≥1} ⊆ [0, 1]
2: let X₁ be an arbitrary matrix in 𝒮_d
3: for t = 1... do
4:   v_t ← EV(−∇f(X_t))
5:   X_{t+1} ← X_t + η_t(v_t v_t^⊤ − X_t)
6: end for
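A minimal NumPy sketch of Algorithm 1, assuming a user-supplied `grad` callback returning the symmetric matrix ∇f(X); dense `numpy.linalg.eigh` stands in here for the cheap iterative EV oracle, and the classical η_t = 2/(t + 2) schedule is used for illustration:

```python
import numpy as np

def conditional_gradient(grad, d, n_iters):
    """Algorithm 1: standard CG over the spectrahedron."""
    X = np.zeros((d, d))
    X[0, 0] = 1.0  # arbitrary rank-one feasible start, X_1 in S_d
    for t in range(1, n_iters + 1):
        G = grad(X)
        # v_t <- EV(-grad f(X_t)): top eigenvector of -G
        v = np.linalg.eigh(-G)[1][:, -1]
        eta = 2.0 / (t + 2)
        X += eta * (np.outer(v, v) - X)  # convex combination keeps X in S_d
    return X
```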
Let us denote the approximation error of Algorithm 1 after t iterations by h_t := f(X_t) − f(X⋆).
The convergence result of Algorithm 1 is based on the following simple observations:
$$h_{t+1} = f\big(X_t + \eta_t(v_t v_t^\top - X_t)\big) - f(X^\star)$$
$$\le h_t + \eta_t (v_t v_t^\top - X_t) \bullet \nabla f(X_t) + \frac{\eta_t^2 \beta}{2}\|v_t v_t^\top - X_t\|_F^2 \qquad (3)$$
$$\le h_t + \eta_t (X^\star - X_t) \bullet \nabla f(X_t) + \frac{\eta_t^2 \beta}{2}\|v_t v_t^\top - X_t\|_F^2 \le (1 - \eta_t)h_t + \frac{\eta_t^2 \beta}{2}\|v_t v_t^\top - X_t\|_F^2,$$
where the first inequality follows from the β-smoothness of f(X), the second one follows from the
optimal choice of v_t, and the third one follows from convexity of f(X). Unfortunately, while we
expect the error h_t to rapidly converge to zero, the term ‖v_t v_t^⊤ − X_t‖_F² in Eq. (3) might, in principle,
remain as large as the diameter of 𝒮_d, which, given a proper choice of step-size η_t, results in
the well-known convergence rate of O(β/t) [12, 10]. This consequence holds also in case f(X) is
not only smooth, but also strongly convex.
However, in case f is strongly convex, a non-trivial modification of Algorithm 1 can lead to a much
faster convergence rate. In this case, it follows from Eq. (1) that on any iteration t, ‖X_t − X⋆‖_F² ≤
(2/α)·h_t. Thus, if we consider replacing the choice of X_{t+1} in Algorithm 1 with the following update
rule:
$$V_t \leftarrow \arg\min_{V \in \mathcal{S}_d}\, V \bullet \nabla f(X_t) + \frac{\beta\eta_t}{2}\|V - X_t\|_F^2, \qquad X_{t+1} \leftarrow X_t + \eta_t(V_t - X_t), \qquad (4)$$
then, following basically the same steps as in Eq. (3), we will have that
$$h_{t+1} \le h_t + \eta_t(X^\star - X_t)\bullet\nabla f(X_t) + \frac{\eta_t^2\beta}{2}\|X^\star - X_t\|_F^2 \le \left(1 - \eta_t + \frac{\eta_t^2\beta}{\alpha}\right)h_t, \qquad (5)$$
and thus by a proper choice of η_t, a linear convergence rate will be attained. Of course, the issue now
is that computing V_t is no longer a computationally-cheap leading eigenvalue problem (in particular,
V_t is not rank-one), but requires a full eigen-decomposition of X_t, which is much more expensive.
In fact, the update rule in Eq. (4) is nothing more than the projected gradient descent method.
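For contrast, exact Euclidean projection onto 𝒮_d, which the update (4) implicitly requires, needs a full eigendecomposition followed by projecting the spectrum onto the simplex; a sketch (the sort-based simplex routine is a standard choice assumed here, not part of this paper):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def project_spectrahedron(Y):
    """Projection of symmetric Y onto S_d: the O(d^3) step CG avoids."""
    eigvals, V = np.linalg.eigh(Y)   # full eigendecomposition
    lam = project_simplex(eigvals)   # eigenvalues -> unit-trace PSD spectrum
    return (V * lam) @ V.T
```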
3.2 A new hybrid approach: rank one-regularized conditional gradient algorithm
At the heart of our new method is the combination of ideas from both of the above approaches: on
one hand, solving a certain regularized linear problem in order to avoid the shortcomings of the
CG method, i.e., slow convergence rate, and on the other hand, maintaining the simple structure
of a leading eigenvalue computation that avoids the shortcoming of the computationally-expensive
projected-gradient method.
Towards this end, suppose that we have an explicit decomposition of the current iterate X_t =
Σ_{i=1}^k a_i x_i x_i^⊤, where (a₁, a₂, ..., a_k) is a probability distribution over [k], and each x_i is a unit vector.
Note in particular that the standard CG method (Algorithm 1) naturally produces such an explicit
decomposition of X_t (provided X₁ is chosen to be rank-one). Consider now the update rule in Eq.
(4), but with the additional restriction that V_t is rank one, i.e., V_t ← arg min_{V∈𝒮_d, rank(V)=1} V •
∇f(X_t) + (βη_t/2)·‖V − X_t‖_F². Note that in this case it follows that V_t is a unit-trace rank-one matrix
which corresponds to the leading eigenvector of the matrix −∇f(X_t) + βη_t X_t. However, when
V_t is rank-one, the regularization ‖V_t − X_t‖_F² makes little sense in general, since unless X⋆ is
rank-one, we do not expect X_t to be such (note however, that if X⋆ is rank one, this modification
will already result in a linear convergence rate). However, we can think of solving a set of decoupled
component-wise regularized problems:
$$\forall i \in [k]:\ \ v_t^{(i)} \leftarrow \arg\min_{\|v\|=1}\, v^\top \nabla f(X_t)\, v + \frac{\beta\eta_t}{2}\big\|vv^\top - x_i x_i^\top\big\|_F^2 \ \equiv\ \mathrm{EV}\big({-\nabla f(X_t)} + \beta\eta_t\, x_i x_i^\top\big)$$
$$X_{t+1} \leftarrow \sum_{i=1}^k a_i\Big((1 - \eta_t)\, x_i x_i^\top + \eta_t\, v_t^{(i)} v_t^{(i)\top}\Big), \qquad (6)$$
where the equivalence in the first line follows since ‖vv^⊤‖_F = 1, and thus the minimizer of the LHS
is w.l.o.g. a leading eigenvector of the matrix on the RHS. Following the lines of Eq. (3), we will
now have that
$$h_{t+1} \le h_t + \eta_t \sum_{i=1}^k a_i\big(v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big)\bullet\nabla f(X_t) + \frac{\eta_t^2\beta}{2}\Big\|\sum_{i=1}^k a_i\big(v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big)\Big\|_F^2$$
$$\le h_t + \eta_t \sum_{i=1}^k a_i\big(v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big)\bullet\nabla f(X_t) + \frac{\eta_t^2\beta}{2}\sum_{i=1}^k a_i\big\|v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big\|_F^2$$
$$= h_t + \eta_t\, \mathbb{E}_{i\sim(a_1,...,a_k)}\left[\big(v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big)\bullet\nabla f(X_t) + \frac{\eta_t\beta}{2}\big\|v_t^{(i)} v_t^{(i)\top} - x_i x_i^\top\big\|_F^2\right], \qquad (7)$$
where the second inequality follows from convexity of the squared Frobenius norm, and the last
equality follows since (a1 , ..., ak ) is a probability distribution over [k].
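In code, the full deterministic update (6) is just one (regularized) leading-eigenvector computation per component; a sketch with dense `eigh` standing in for the iterative EV oracle and `beta` denoting the smoothness constant:

```python
import numpy as np

def rank_one_regularized_step(grad_X, xs, a, eta, beta):
    """Eq. (6): update every rank-one component of X_t = sum_i a_i x_i x_i^T."""
    new_xs, new_a = [], []
    for x, a_i in zip(xs, a):
        M = -grad_X + beta * eta * np.outer(x, x)
        v = np.linalg.eigh(M)[1][:, -1]      # EV(-grad f + beta*eta*x x^T)
        new_xs += [x, v]                     # keep old direction, add new one
        new_a += [a_i * (1.0 - eta), a_i * eta]
    return new_xs, new_a
```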
While the approach in Eq. (6) relies only on leading eigenvector computations, the benefit in terms
of potential convergence rates is not trivial, since it is not immediate that we can get non-trivial
bounds for the individual distances ‖v_t^{(i)} v_t^{(i)⊤} − x_i x_i^⊤‖_F. Indeed, the main novelty in our analysis
is dedicated precisely to this issue. A motivation, if any, is that there might exist a decomposition of
X⋆ as X⋆ = Σ_{i=1}^k b_i x⋆^{(i)} x⋆^{(i)⊤}, which is close in some sense to the decomposition of X_t. We can
then think of the regularized problem in Eq. (6) as an attempt to push each individual component
x^{(i)} towards its corresponding component in the decomposition of X⋆, and as an overall result, bring
the following iterate X_{t+1} closer to X⋆.
Note that Eq. (7) implicitly describes a randomized algorithm in which, instead of solving a
regularized EV problem for each rank-one matrix in the decomposition of Xt , which is expensive as
this decomposition grows large with the number of iterations, we pick a single rank-one component
according to its weight in the decomposition, and only update it. This directly brings us to our
proposed algorithm, Algorithm 2, which is given below.
Algorithm 2 Randomized Rank one-regularized Conditional Gradient
1: input: sequence of step-sizes {η_t}_{t≥1}, sequence of error tolerances {ε_t}_{t≥0}
2: let x₀ be an arbitrary unit vector
3: X₁ ← x₁x₁^⊤ such that x₁ ← EV_{ε₀}(−∇f(x₀x₀^⊤))
4: for t = 1... do
5:   suppose X_t is given by X_t = Σ_{i=1}^k a_i x_i x_i^⊤, where each x_i is a unit vector, and (a₁, a₂, ..., a_k) is a probability distribution over [k], for some integer k
6:   pick i_t ∈ [k] according to the probability distribution (a₁, a₂, ..., a_k)
7:   set a new step-size η̃_t as follows: η̃_t ← η_t/2 if a_{i_t} ≥ η_t, and η̃_t ← a_{i_t} otherwise
8:   v_t ← EV_{ε_t}(−∇f(X_t) + βη_t x_{i_t} x_{i_t}^⊤)
9:   X_{t+1} ← X_t + η̃_t(v_t v_t^⊤ − x_{i_t} x_{i_t}^⊤)
10: end for
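A sketch of Algorithm 2 itself; the step-size schedule follows Theorem 1, while the ε_t-accurate EV oracle is replaced by a dense eigensolver for brevity (an illustrative simplification, not the intended large-scale implementation):

```python
import numpy as np

def ror_cg(grad, d, n_iters, beta, seed=0):
    """Algorithm 2: randomized rank one-regularized conditional gradient."""
    rng = np.random.default_rng(seed)
    xs, a = [np.eye(d)[0]], [1.0]                 # X_1 = e_1 e_1^T
    for t in range(1, n_iters + 1):
        eta = 18.0 / (t + 8)
        X = sum(a_i * np.outer(x, x) for a_i, x in zip(a, xs))
        i = rng.choice(len(a), p=np.asarray(a) / sum(a))
        eta_t = eta / 2.0 if a[i] >= eta else a[i]
        M = -grad(X) + beta * eta * np.outer(xs[i], xs[i])
        v = np.linalg.eigh(M)[1][:, -1]           # regularized EV step
        a[i] -= eta_t                             # X <- X + eta_t(vv^T - x_i x_i^T)
        xs.append(v)
        a.append(eta_t)
        if a[i] <= 1e-12:                         # drop a vanished component
            xs.pop(i)
            a.pop(i)
    return xs, a
```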
We have the following guarantee for Algorithm 2, which is the main result of this paper.
Theorem 1. [Main Theorem] Consider the sequence of step-sizes {η_t}_{t≥1} defined by η_t = 18/(t + 8), and suppose that ε₀ = β and for any iteration t ≥ 1 it holds that
ε_t = O(min{β/t, (β√rank(X⋆)/(α^{1/4}·t))^{4/3}, β²/(√α·λ_min(X⋆)·t)}). Then, all iterates are feasible, and ∀t ≥ 1:
$$\mathbb{E}\left[f(X_t) - f(X^\star)\right] = O\left(\min\left\{\frac{\beta}{t},\ \left(\frac{\beta\sqrt{\operatorname{rank}(X^\star)}}{\alpha^{1/4}\, t}\right)^{4/3},\ \frac{\beta^{2}}{\sqrt{\alpha}\,\lambda_{\min}(X^\star)\, t}\right\}\right).$$
It is important to note that the step-size choice in Theorem 1 does not require any knowledge of
the parameters α, β, rank(X⋆), and λ_min(X⋆). The knowledge of β is required however for the EV
computations. While it follows from Theorem 1 that the knowledge of α, rank(X⋆), λ_min(X⋆) is
needed to set the accuracy parameters ε_t, in practice, iterative eigenvector methods are very efficient
and are much less sensitive to exact knowledge of parameters than the choice of step-size, for instance.
While the eigenvalue problem in Algorithm 2 is different from the one in Algorithm 1, due to the
additional term βη_t x_{i_t} x_{i_t}^⊤, the efficiency of solving both problems is essentially the same, since efficient
EV procedures are based on iteratively multiplying the input matrix with a vector. In particular,
multiplying a vector with a rank-one matrix takes O(d) time. Thus, as long as nnz(∇f(X_t)) = Ω(d),
which is highly reasonable, both EV computations run in essentially the same time.
Finally, note also that aside from the computation of the gradient direction and the leading eigenvector
computation, all other operations on any iteration t can be carried out in O(d² + t) additional time.
4 Analysis
The complete proof of Theorem 1 and all supporting lemmas are given in full detail in the appendix.
Here we only detail the two main ingredients in the analysis of Algorithm 2.
Throughout this section, given a matrix Y ∈ 𝕊_d, we let P_{Y,τ} ∈ 𝕊_d denote the projection matrix onto
all eigenvectors of Y that correspond to eigenvalues of magnitude at least τ. Similarly, we let P^⊥_{Y,τ}
denote the projection matrix onto the eigenvectors of Y that correspond to eigenvalues of magnitude
smaller than τ (including eigenvectors that correspond to zero-valued eigenvalues).
4.1 A new decomposition for positive semidefinite matrices with locality properties
The analysis of Algorithm 2 relies heavily on a new decomposition idea for matrices in 𝒮_d that suggests
that, given a matrix X in the form of a convex combination of rank-one matrices, X = Σ_{i=1}^k a_i x_i x_i^⊤,
and another matrix Y ∈ 𝒮_d, roughly speaking, we can decompose Y as a sum of rank-one matrices
such that the components in the decomposition of Y are close to those in the decomposition of X in
terms of the overall distance ‖X − Y‖_F. This decomposition and the corresponding property justify
the idea of solving rank-one regularized problems, as suggested in Eq. (6) and applied in Algorithm 2.
Lemma 1. Let X, Y ∈ 𝒮_d such that X is given as X = Σ_{i=1}^k a_i x_i x_i^⊤, where each x_i is a
unit vector, and (a₁, ..., a_k) is a distribution over [k], and let τ, γ ∈ [0, 1] be scalars that satisfy
γ ≥ ‖X − Y‖_F. Then, Y can be written as Y = Σ_{i=1}^k b_i y_i y_i^⊤ + Σ_{j=1}^k (a_j − b_j)·W, such that
1. each y_i is a unit vector and W ∈ 𝒮_d;
2. ∀i ∈ [k]: 0 ≤ b_i ≤ a_i, and Σ_{j=1}^k (a_j − b_j) ≤ √rank(Y)·‖Y P^⊥_{Y,τ}‖_F + ‖X − Y‖_F;
3. Σ_{i=1}^k b_i ‖x_i x_i^⊤ − y_i y_i^⊤‖_F² ≤ 2√rank(Y)·‖Y P^⊥_{Y,τ}‖_F + 2‖X − Y‖_F + γ.
4.2 Bounding the per-iteration improvement
The main step in the proof of Theorem 1 is understanding the per-iteration improvement, as captured
in Eq. (7), achievable by applying the update rule in Eq. (6), which updates on each iteration all of
the rank-one components in the decomposition of the current iterate.
Lemma 2. [full deterministic update] Fix a scalar η > 0. Let X ∈ 𝒮_d such that X = Σ_{i=1}^k a_i x_i x_i^⊤,
where each x_i is a unit vector, and (a₁, ..., a_k) is a probability distribution over [k]. For any i ∈ [k],
let v_i := EV(−∇f(X) + βη·x_i x_i^⊤). Then, it holds that
$$\sum_{i=1}^k a_i\left[(v_i v_i^\top - x_i x_i^\top)\bullet\nabla f(X) + \frac{\beta\eta}{2}\|v_i v_i^\top - x_i x_i^\top\|_F^2\right]$$
$$\le -(f(X) - f(X^\star)) + \beta\eta\cdot\min\left\{1,\ 5\sqrt{\frac{2\,\operatorname{rank}(X^\star)\,(f(X) - f(X^\star))}{\alpha}},\ \frac{3\sqrt{2}}{\sqrt{\alpha}\,\lambda_{\min}(X^\star)}\big(f(X) - f(X^\star)\big)\right\}.$$
Proof sketch. The proof is divided into three parts, each corresponding to a different term in the min
expression in the bound in the lemma. The first bound, at a high level, follows from the standard
conditional gradient analysis (see Eq. (3)). We continue to derive the second and third bounds.
From Lemma 1 we know we can write X⋆ in the following way:
$$X^\star = \sum_{i=1}^k b_i^\star\, y_i^\star y_i^{\star\top} + \sum_{j=1}^k (a_j - b_j^\star)\,W^\star, \qquad (8)$$
where for all i ∈ [k], b_i⋆ ∈ [0, a_i], y_i⋆ is a unit vector, and W⋆ ∈ 𝒮_d.
Using nothing more than Eq. (8), the optimality of v_i for each i ∈ [k], and the bounds in Lemma 1, it
can be shown that
$$\sum_{i=1}^k a_i\left[(v_i v_i^\top - x_i x_i^\top)\bullet\nabla f(X) + \frac{\beta\eta}{2}\|v_i v_i^\top - x_i x_i^\top\|_F^2\right]$$
$$\le (X^\star - X)\bullet\nabla f(X) + \frac{\beta\eta}{2}\left(\sum_{i=1}^k b_i^\star\|y_i^\star y_i^{\star\top} - x_i x_i^\top\|_F^2 + 2\sum_{i=1}^k (a_i - b_i^\star)\right)$$
$$\le (X^\star - X)\bullet\nabla f(X) + \beta\eta\left(2\sqrt{\operatorname{rank}(X^\star)}\big(\|X^\star P^\perp_{X^\star,\tau}\|_F + \|X - X^\star\|_F\big) + \gamma\right). \qquad (9)$$
Now we can optimize the above bound in terms of τ, γ. One option is to upper bound
‖X⋆ P^⊥_{X⋆,τ}‖_F ≤ √rank(X⋆)·τ, which together with the choices τ₁ = √(‖X − X⋆‖_F/(2·rank(X⋆))) and γ₁ = √(2·rank(X⋆)·‖X − X⋆‖_F), gives us:
$$\text{RHS of (9)} \le (X^\star - X)\bullet\nabla f(X) + 5\beta\eta\sqrt{\operatorname{rank}(X^\star)\,\|X - X^\star\|_F}. \qquad (10)$$
Another option is to choose τ₂ = λ_min(X⋆) and γ₂ = ‖X − X⋆‖_F/λ_min(X⋆), which gives us ‖X⋆ P^⊥_{X⋆,τ₂}‖_F = 0. This results in the bound:
$$\text{RHS of (9)} \le (X^\star - X)\bullet\nabla f(X) + \frac{3\beta\eta\,\|X - X^\star\|_F}{\lambda_{\min}(X^\star)}. \qquad (11)$$
Now, using the convexity of f to upper bound (X⋆ − X)•∇f(X) ≤ −(f(X) − f(X⋆)), and Eq.
(1), in both Eq. (10) and (11), gives the second and third parts of the bound in the lemma.
5 Preliminary Empirical Evaluation
We evaluate our method, along with other conditional gradient variants, on the task of matrix
completion [13].
Setting The underlying optimization problem for the matrix completion task is the following:
$$\min_{Z \in \mathcal{NB}_{d_1,d_2}(\tau)}\ \left\{ f(Z) := \frac{1}{2}\sum_{l=1}^{n}\big(Z \bullet E_{i_l,j_l} - r_l\big)^2 \right\}, \qquad (12)$$
where E_{i,j} is the indicator matrix for the entry (i, j) in ℝ^{d₁×d₂}, {(i_l, j_l, r_l)}_{l=1}^n ⊂ [d₁] × [d₂] × ℝ,
and 𝒩ℬ_{d₁,d₂}(τ) denotes the nuclear-norm ball of radius τ in ℝ^{d₁×d₂}, i.e.,
$$\mathcal{NB}_{d_1,d_2}(\tau) := \left\{ Z \in \mathbb{R}^{d_1\times d_2}\ \middle|\ \|Z\|_* := \sum_{i=1}^{\min\{d_1,d_2\}} \sigma_i(Z) \le \tau \right\},$$
where we let σ(Z) denote the vector of singular values of Z. That is, our goal is to find a matrix
with bounded nuclear norm (which serves as a convex surrogate for bounded rank) which best matches
the partial observations given by {(i_l, j_l, r_l)}_{l=1}^n.
In order to transform Problem (12) to optimization over the spectrahedron, we use the reduction
specified in full detail in [13], and also described in Section A in the appendix.
The objective function in Eq. (12) is known to have a smoothness parameter β with respect to
‖·‖_F which satisfies β = O(1); see for instance [13]. While the objective function in Eq. (12)
is not strongly convex, it is known that under certain conditions, the matrix completion problem
exhibits properties very similar to strong convexity, in the sense of Eq. (1) (which is indeed the only
consequence of strong convexity that we use in our analysis) [18].
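A sketch of f and ∇f for the objective (12) in the original Z-space; the gradient is supported only on the n observed entries, which is what keeps the per-iteration eigenvector computations sparse after the reduction of [13] (the COO-style triplet representation is an illustrative choice):

```python
import numpy as np

def mc_value_grad(Z, rows, cols, vals):
    """f(Z) = 0.5 * sum_l (Z[i_l, j_l] - r_l)^2 and its sparse-support gradient."""
    residual = Z[rows, cols] - vals              # length-n vector of errors
    value = 0.5 * float(residual @ residual)
    grad = np.zeros_like(Z)
    np.add.at(grad, (rows, cols), residual)      # nonzero only where observed
    return value, grad
```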
[Plots: objective error (scales ×10⁴ and ×10⁵) vs. number of iterations for CG, Away-CG, and ROR-CG on the two datasets.]
Figure 1: Comparison between conditional gradient variants for solving the matrix completion
problem on the MovieLens100K (left) and MovieLens1M (right) datasets.
Two modifications of Algorithm 2 We implemented our rank one-regularized conditional gradient
variant, Algorithm 2 (denoted ROR-CG in our figures) with two modifications. First, on each iteration
t, instead of picking an index it of a rank-one matrix in the decomposition of the current iterate at
random according to the distribution (a1 , a2 , ..., ak ), we choose it in a greedy way, i.e., we choose
the rank-one component that has the largest product with the current gradient direction. While this
approach is computationally more expensive, it could be easily parallelized since all dot-product
computations are independent of each other. Second, after computing the eigenvector vt using the
step-size η_t = 1/t (which is very close to that prescribed in Theorem 1), we apply a line-search, as
detailed in [13], in order to determine the optimal step-size given the direction v_t v_t^⊤ − x_{i_t} x_{i_t}^⊤.
Baselines As baselines for comparison we used the standard conditional gradient method with exact
line-search for setting the step-size (denoted CG in our figures)[13], and the conditional gradient with
away-steps variant, recently studied in [15] (denoted Away-CG in our figures). While the away-steps
variant was studied in the context of optimization over polyhedral sets, and its formal improved
guarantees apply only in that setting, the concept of away-steps still makes sense for any convex
feasible set. This variant also allows the incorporation of an exact line-search procedure to choose
the optimal step-size.
Datasets We have experimented with two well-known datasets for the matrix completion task: the
MovieLens100K dataset, for which d₁ = 943, d₂ = 1682, n = 10⁵, and the MovieLens1M
dataset, for which d₁ = 6040, d₂ = 3952, n ≈ 10⁶. The MovieLens1M dataset was further
sub-sampled to contain roughly half of the observations. We have set the parameter τ in Problem (12)
to τ = 10000 for the ML100K dataset, and τ = 35000 for the ML1M dataset.
Figure 1 presents the objective (12) vs. the number of iterations executed. Each graph is the average
over 5 independent experiments². It can be seen that our approach indeed improves significantly
over the baselines in terms of convergence rate, for the setting under consideration.
References
[1] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[2] Miroslav Dudík, Zaïd Harchaoui, and Jérôme Malick. Lifted coordinate descent for learning with trace-norm regularization. Journal of Machine Learning Research - Proceedings Track, 22:327–336, 2012.
[3] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3:149–154, 1956.
[4] Robert M Freund, Paul Grigas, and Rahul Mazumder. An extended Frank-Wolfe method with "in-face" directions, and its application to low-rank matrix completion. arXiv preprint arXiv:1511.02204, 2015.
² We ran several experiments since the leading eigenvector computation in each one of the CG variants is randomized.
[5] Dan Garber and Elad Hazan. Fast and simple PCA via convex optimization. arXiv preprint arXiv:1509.05647, 2015.
[6] Dan Garber and Elad Hazan. Faster rates for the Frank-Wolfe method over strongly-convex sets. In Proceedings of the 32nd International Conference on Machine Learning, ICML, pages 541–549, 2015.
[7] Dan Garber and Elad Hazan. A linearly convergent variant of the conditional gradient algorithm under strong convexity, with applications to online and stochastic optimization. SIAM Journal on Optimization, 26(3):1493–1528, 2016.
[8] Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, and Aaron Sidford. Faster eigenvector computation via shift-and-invert preconditioning. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pages 2626–2634, 2016.
[9] Mehmet Gönen and Ethem Alpaydın. Multiple kernel learning algorithms. The Journal of Machine Learning Research, 12:2211–2268, 2011.
[10] Elad Hazan. Sparse approximate solutions to semidefinite programs. In 8th Latin American Theoretical Informatics Symposium, LATIN, 2008.
[11] Elad Hazan and Satyen Kale. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, ICML, 2012.
[12] Martin Jaggi. Revisiting Frank-Wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, ICML, 2013.
[13] Martin Jaggi and Marek Sulovský. A simple algorithm for nuclear norm regularized problems. In Proceedings of the 27th International Conference on Machine Learning, ICML, 2010.
[14] J. Kuczyński and H. Woźniakowski. Estimating the largest eigenvalues by the power and Lanczos algorithms with a random start. SIAM J. Matrix Anal. Appl., 13:1094–1122, October 1992.
[15] Simon Lacoste-Julien and Martin Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems, pages 496–504, 2015.
[16] Gert RG Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I Jordan. Learning the kernel matrix with semidefinite programming. The Journal of Machine Learning Research, 5:27–72, 2004.
[17] Sören Laue. A hybrid algorithm for convex semidefinite optimization. In Proceedings of the 29th International Conference on Machine Learning, ICML, 2012.
[18] Sahand Negahban, Bin Yu, Martin J Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1348–1356. 2009.
[19] Shai Shalev-Shwartz, Alon Gonen, and Ohad Shamir. Large-scale convex minimization with a low-rank constraint. In Proceedings of the 28th International Conference on Machine Learning, ICML, 2011.
[20] Ohad Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In Proceedings of the 32nd International Conference on Machine Learning, ICML, 2015.
[21] Eric P Xing, Andrew Y Ng, Michael I Jordan, and Stuart Russell. Distance metric learning with application to clustering with side-information. Advances in Neural Information Processing Systems, 15:505–512, 2003.
[22] Yiming Ying and Peng Li. Distance metric learning with eigenvalue optimization. J. Mach. Learn. Res., 13(1):1–26, January 2012.
[23] Xinhua Zhang, Dale Schuurmans, and Yao-liang Yu. Accelerated training for matrix-norm regularization: A boosting approach. In Advances in Neural Information Processing Systems, pages 2906–2914, 2012.
5,966 | 6,398 | Learning Multiagent Communication
with Backpropagation
Sainbayar Sukhbaatar
Dept. of Computer Science
Courant Institute, New York University
[email protected]
Arthur Szlam
Facebook AI Research
New York
[email protected]
Rob Fergus
Facebook AI Research
New York
[email protected]
Abstract
Many tasks in AI require the collaboration of multiple agents. Typically, the
communication protocol between agents is manually specified and not altered
during training. In this paper we explore a simple neural model, called CommNet,
that uses continuous communication for fully cooperative tasks. The model consists
of multiple agents and the communication between them is learned alongside their
policy. We apply this model to a diverse set of tasks, demonstrating the ability
of the agents to learn to communicate amongst themselves, yielding improved
performance over non-communicative agents and baselines. In some cases, it
is possible to interpret the language devised by the agents, revealing simple but
effective strategies for solving the task at hand.
1 Introduction
Communication is a fundamental aspect of intelligence, enabling agents to behave as a group, rather
than a collection of individuals. It is vital for performing complex tasks in real-world environments
where each actor has limited capabilities and/or visibility of the world. Practical examples include
elevator control [3] and sensor networks [5]; communication is also important for success in robot
soccer [25]. In any partially observed environment, the communication between agents is vital to
coordinate the behavior of each individual. While the model controlling each agent is typically
learned via reinforcement learning [1, 28], the specification and format of the communication is
usually pre-determined. For example, in robot soccer, the bots are designed to communicate at each
time step their position and proximity to the ball.
In this work, we propose a model where cooperating agents learn to communicate amongst themselves
before taking actions. Each agent is controlled by a deep feed-forward network, which additionally
has access to a communication channel carrying a continuous vector. Through this channel, they
receive the summed transmissions of other agents. However, what each agent transmits on the
channel is not specified a-priori, being learned instead. Because the communication is continuous,
the model can be trained via back-propagation, and thus can be combined with standard single
agent RL algorithms or supervised learning. The model is simple and versatile. This allows it to be
applied to a wide range of problems involving partial visibility of the environment, where the agents
learn a task-specific communication that aids performance. In addition, the model allows dynamic
variation at run time in both the number and type of agents, which is important in applications such
as communication between moving cars.
We consider the setting where we have J agents, all cooperating to maximize reward R in some
environment. We make the simplifying assumption of full cooperation between agents, thus each
agent receives R independent of their contribution. In this setting, there is no difference between
each agent having its own controller, or viewing them as pieces of a larger model controlling all
agents. Taking the latter perspective, our controller is a large feed-forward neural network that maps
inputs for all agents to their actions, each agent occupying a subset of units. A specific connectivity
structure between layers (a) instantiates the broadcast communication channel between agents and
(b) propagates the agent state.
We explore this model on a range of tasks. In some, supervision is provided for each action while
for others it is given sporadically. In the former case, the controller for each agent is trained by
backpropagating the error signal through the connectivity structure of the model, enabling the agents
to learn how to communicate amongst themselves to maximize the objective. In the latter case,
reinforcement learning must be used as an additional outer loop to provide a training signal at each
time step (see the supplementary material for details).
2 Communication Model
We now describe the model used to compute the distribution over actions p(a(t)|s(t), θ) at a given
time t (omitting the time index for brevity). Let s_j be the jth agent's view of the state of the
environment. The input to the controller is the concatenation of all state-views s = {s₁, ..., s_J},
and the controller Φ is a mapping a = Φ(s), where the output a is a concatenation of discrete
actions a = {a₁, ..., a_J} for each agent. Note that this single controller Φ encompasses the individual
controllers for each agent, as well as the communication between agents.
2.1 Controller Structure
We now detail our architecture for Φ that is built from modules f^i, which take the form of multilayer
neural networks. Here i ∈ {0, .., K}, where K is the number of communication steps in the network.
Each f^i takes two input vectors for each agent j: the hidden state h_j^i and the communication c_j^i,
and outputs a vector h_j^{i+1}. The main body of the model then takes as input the concatenated vectors
h⁰ = [h₁⁰, h₂⁰, ..., h_J⁰], and computes:
$$h_j^{i+1} = f^i(h_j^i, c_j^i) \qquad (1)$$
$$c_j^{i+1} = \frac{1}{J - 1}\sum_{j' \ne j} h_{j'}^{i+1}. \qquad (2)$$
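A minimal NumPy sketch of one communication step, Eqs. (1)–(2), for the linear case discussed next, h_j^{i+1} = σ(H h_j^i + C c_j^i); the tanh non-linearity and the weight shapes are illustrative assumptions:

```python
import numpy as np

def comm_step(h, H, C):
    """One CommNet step for J agents. h: (J, d) matrix of hidden states.

    Eq. (2): c_j = mean of the other agents' hidden states (c^0 = 0 at the
    first layer per the text); Eq. (1): h_j' = sigma(H h_j + C c_j).
    """
    J = h.shape[0]
    c = (h.sum(axis=0, keepdims=True) - h) / (J - 1)  # exclude own state
    return np.tanh(h @ H.T + c @ C.T)
```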
In the case that f^i is a single linear layer followed by a non-linearity σ, we have: h_j^{i+1} = σ(H^i h_j^i + C^i c_j^i), and the model can be viewed as a feedforward network with layers h^{i+1} = σ(T^i h^i), where h^i
is the concatenation of all h_j^i and T^i takes the block form (where C̄^i = C^i/(J − 1)):
$$T^i = \begin{pmatrix}
H^i & \bar{C}^i & \bar{C}^i & \cdots & \bar{C}^i \\
\bar{C}^i & H^i & \bar{C}^i & \cdots & \bar{C}^i \\
\bar{C}^i & \bar{C}^i & H^i & \cdots & \bar{C}^i \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\bar{C}^i & \bar{C}^i & \bar{C}^i & \cdots & H^i
\end{pmatrix}.$$
A key point is that T^i is dynamically sized since the number of agents may vary. This motivates
the normalizing factor J − 1 in equation (2), which rescales the communication vector by the number
of communicating agents. Note also that T^i is permutation invariant, thus the order of the agents
does not matter.
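The dynamic sizing is easy to see by materializing T^i for an arbitrary number of agents and checking it against the per-agent computation (a small self-contained sketch; the dimensions are arbitrary):

```python
import numpy as np

def build_T(H, C, J):
    """T^i for J agents: H on the block diagonal, C/(J-1) elsewhere."""
    C_bar = C / (J - 1)
    return np.block([[H if i == j else C_bar for j in range(J)]
                     for i in range(J)])

# sanity check: applying T to the stacked states matches Eqs. (1)-(2)
d, J = 4, 3
rng = np.random.default_rng(0)
H, C = rng.standard_normal((d, d)), rng.standard_normal((d, d))
h = rng.standard_normal((J, d))
c = (h.sum(axis=0, keepdims=True) - h) / (J - 1)
per_agent = h @ H.T + c @ C.T
stacked = (build_T(H, C, J) @ h.reshape(-1)).reshape(J, d)
assert np.allclose(per_agent, stacked)
```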
At the first layer of the model an encoder function h_j⁰ = r(s_j) is used. This takes as input state-view
s_j and outputs feature vector h_j⁰ (in ℝ^{d₀} for some d₀). The form of the encoder is problem dependent,
but for most of our tasks it is a single layer neural network. Unless otherwise noted, c_j⁰ = 0 for all j.
At the output of the model, a decoder function q(h_j^K) is used to output a distribution over the space of
actions. q(·) takes the form of a single layer network, followed by a softmax. To produce a discrete
action, we sample from this distribution: a_j ∼ q(h_j^K).
Thus the entire model (shown in Fig. 1), which we call a Communication Neural Net (CommNet), (i)
takes the state-view of all agents s, passes it through the encoder h⁰ = r(s), (ii) iterates h and c in
equations (1) and (2) to obtain h^K, and (iii) samples actions a for all agents, according to q(h^K).
2.2 Model Extensions
Local Connectivity: An alternative to the broadcast framework described above is to allow agents
to communicate to others within a certain range. Let N (j) be the set of agents present within
2
38 by
controller
istheir
a large
feed-forward
neural
network
that
maps
inputs
for
all agents
to theirstructure
actions,between
each layers
45
will Address
be treated
ascontroller
such
the
reinforcement
learning.
In network
the
continuous
case,
signals
passed
actions,
each
agent
occupying
amaps
subset
ofthe
units.
A
connectivity
38
is adifferent
large
feed-forward
neural
that
inputs
for
allspecific
agents
tothe
their
actions,
each
46of5between
agents
are
no
than
hidden
inused
ab(s,
neural
network;
thus
credit
assignment
the
agents.
5
agents.
agents.
5 we
agents.
email
51 states
states
?),
computed
via
an
extra
head
model
the
4abstract
architecture
??.
Second,
show
this
architecture
can
be
to
control
groups
of
cooperating
55
Here
r(t)
is reward
given
at
time
t,thus
and
the
hyperparameter
?onisfor
for
balancing
the
reward
and theprobabilities. Beside
39different
agenthow
occupying
a states
subset
of
units.
Anetwork;
specific
connectivity
structure
between
layers
(a)producing
instantiates
theaction
46 0 between
agents
are
no
than
hidden
in
a
neural
credit
assignment
for
the
0
0 email
0
(a)
instantiates
the
broadcast
communication
channel
between
agents
and
(b)
propagates
the agent
h
=
[h
,
h
,
...,
h
],
and
computes:
can be performed
standard
backpropagation
(within
the outer
RLpolicy
loop).
agent
occupying
acommunication
subsetusing
ofobjectives,
units.
Amaximizing
specific
connectivity
structure
between
layers
(a) state
instantiates
the areofalso
0
0Abstract
2 47 communication
J 39 can
50 , hagents.
52 channel
the
expected
reward
with
gradient,
the
models
trained to minimize the
56 using
baseline
set
to
0.03
in
all
experiments.
40 performed
broadcast
between
agents
and
(b)
propagates
the
agent
in
the
manner
...,1 h
communication
be
standard
backpropagation
(within
the
outer
RL
loop).
1 2 ,47
J ], and computes:
state. channel between agents and (b) propagates the agent state in the manner of
40
broadcast
communication
between
the
baseline
value
and
actual
reward.
Thus,
after
finishing
an episode, we update
i+1with ai state
i 53
i distance
ymous
Author(s)
Anonymous
48 Author(s)
We use policy
[33]
specific
baseline
for
delivering
a
gradient
to
the
model.
41 gradient
an
RNN.
= fspecific
(hj , cjbaseline
)
(1)
i+1withhjai state
Author(s)
nymous
Author(s)
486
We
use
policy
[33]
for delivering
gradient
to
41 gradient
an
RNN.
2
Model
2
Model
2
6Model
2
Model
ion
54..., the
parameters
by at each
hin
f 3(hij ,Communication
cij )s(1),
(1)theofmodel.
49 6
Denote
the
states
an=episode
by
s(Tmodel
), and
the actionsa?taken
those states
Affiliation
Affiliation
roduction
j
Model
Abstract
6 Affiliation
2 49 Model
57 by
3iss(1),
Model
42an Because
agents
reward,
but
necessarily
supervision
for of
each
reinforcement
Denote
thea(1),
states
...,will
s(Treceive
), the
andepisode.
the actions
taken
at each
of
those
states
on
2 not
50
as
...,ina(T
),episode
where
Tthe
the
length
of
The
baseline
is a scalar
function
theaction,
!
!2 3
Address
Address
Abstract
42
will
receive
reward,
but
not
necessarily
supervision
for
each
action,
reinforcement
Xepisode.
T
Tof
T
1to
43
learning
islength
used
expected
future
reward.
We explore
two
forms
of
communication
within
50
as a(1),
..., a(T
),Because
where
Tthe
is agents
the
ofmaximize
the
The
baseline
iscompute
a scalar
function
the
X
X
X
smake
i+1
i+1
Address
51
states
b(s,
?),
computed
via
an
extra
head
on
the
model
producing
the
action
probabilities.
Beside
We
now
describe
the
model
used
to
p(a(t)|s(t),
?)
at
a
given
time
t
(omitting
the
time
X
@
log
p(a(t)|s(t),
?)
@
iforms
i
i
email
two
contributions.
we
simplify
and
extend
the
graph
neural
email
cthe
=
hthe
.of
(2)
inetwork
1to
43 the
learning
isconsists
used
maximize
expected
future
reward.
We
explore
two
of
0 consists
work
we
make
two
contributions.
First,
we
simplify
and
extend
the
neural
network
simplest
The
7simplest
form
The
of
7First,
simplest
form
the
The
model
of
simplest
form
consists
model
of
form
the
consists
model
of
of
multilayer
model
neural
consists
of
multilayer
networks
neural
of
multilayer
networks
neural
fofcase,
that
networks
neural
take
fr(i)
as
networks
input
take
f time
that
vectors
input
take
f i? that
as
vectors
input
take
vectors
input
4
j58
jand
i+1
44
the
controller:
(i)
(ii)
continuous.
In
the
former
communication
is?)as
an
action,
and
7 The
The
simplest
form
of
model
of
multilayer
neural
networks
fgraph
that
take
as
input
vectors
We
now
describe
the
model
used
to
compute
p(a(t)|s(t),
?)
atcommunication
athat
given
t within
(ommiting
the r(i)
time
517
states
?),
computed
via
extra
head
on
the
model
producing
the
action
probabilities.
?multilayer
b(s(t),
b(s(t),vectors
?) 5 .
52 b(s,
maximizing
the
expected
reward
with
policy
gradient,
the
models
are
also
trained
to Beside
minimize
the
J
1discrete
index
for
Let
s=
the
jth
agent?s
view
the
state
of
the
environment.
The
input
toas
the
email
cii+1
=an
h
.j(ii)
j be
0 brevity).
0 6=
i ??. we
ii ishow how
i+1
0 @?
0case, (2)
0
jihwill
jto
iof
ithe
ishow
ii+1
i+1
i+1
i+1
0 the
0is
0action,
0
0 and
0
0
0to the
. Second,
this
architecture
can
be
used
control
groups
of
cooperating
@?
44
the
controller:
(i)
discrete
and
continuous.
In
the
former
communication
an
j
ure
Second,
we
how
this
architecture
can
be
used
to
control
groups
of
cooperating
8 hh
and
c
and
output
a
vector
.
The
model
takes
as
input
a
set
of
vectors
{h
,
h
,
...,
h
},
and
45
be
treated
as
such
by
the
reinforcement
learning.
In
the
continuous
case,
the
signals
59
index
for
brevity).
Let
s
be
the
jth
agent?s
view
of
the
state
of
environment.
The
52
maximizing
expected
reward
with
policy
gradient,
the
models
are
also
trained
to
minimize
the
J
1
and
8 ch 53
and
and
8distance
output
ch and
and
8 aoutput
cvector
h and
and
abaseline
output
hcvector
and
.jvalue
The
aoutput
hvector
model
. actual
The
ahvector
takes
model
. The
as
takes
model
input
. The
as
atakes
model
set
input
of
avectors
takes
set
input
of1i=t
as
avectors
{h
set
input
,},of
hand
avectors
,{h
...,
set
,hcontroller
of
hm2vectors
},
,{h
...,
and
,passed
h
h0minput
},
,{h
...,
and
,hh0m02 },
, ...,
and
h0m }, and
jh
t=1
i=t
m {s
1
2 as
between
theModule
reward.
Thus,
after
finishing
an
episode,
we
update
controller
is
the
concatenation
of
all
state-views
s
=
,
...,
s
the
is
a
mapping
1
2
1
1
2
1
0 6=j and
J
th
for
agent
communication
step
CommNet
model is the
33
2concatenation
Problem
Formulation
45
will
be between
treated
such
by
the
reinforcement
In
continuous
the
signals
9
computes
60
controller
iswhere
the
of
all
state-views
s of
=network;
{s
, update
...,case,
sactions
controller
mapping
46
agents
are
no
different
than
hidden
in
athe
neural
thus
credit
539 distance
between
baseline
and
reward.
after
an
episode,
we
J }, and
54 9ithe
model
parameters
?value
byas
a
= actual
(s),
theThus,
output
alearning.
isfinishing
astates
concatenation
discrete
athe
=assignment
{a1 , passed
..., aJ }for
fora each
agent.
computes
computes
9the
computes
i+1
i1 i
i+1
Incomputes
the54case
that
f isparameters
a46single
linear
layer
followed
by
a
nonlinearity
,
we
have:
h
=
(H
h
+
i be
iperformed
i the
between
agents
no
different
than
hidden
states
in
a
neural
network;
thus
credit
assignment
for
55
Here
r(t)
is
reward
given
at
time
t,
and
the
hyperparameter
?Jthe
foreach
balancing
61 are
a
=
(s),
where
output
a
is
a
concatenation
of
discrete
actions
a
=
{a
,
...,
a
}is
for
47
communication
can
using
standard
backpropagation
(within
the
outer
RL
loop).
j
2
3
j
the
model
?
by
1
hNote
=that
f (h
, single
cWe
) consider
this
controller
encompasses
individual
controllers
for
each
agents,
asRagent.
well
asthe reward and the
i+1
i+1
i+1
i+1
i+1
i
i
i
i
i
i
i
i
i
i
i
i
!
!
i
itheiM
j
j
j
2
34
the
setting
where
we
have
agents,
all
cooperating
to
maximize
reward
in
some
ase
fAbstract
is amodel
singlecan
linear
bybecommunication
athat
nonlinearity
we
have:
hjj=
=
(H
T layer followed
T 56
T(h
i that
i
i+1
i j=
h
=
h
fcontroller
(h
, ch
fjmake
(h
,=
csimplifying
h
fjjto
)(T
,ih)cinassumption
fjindividual
)+(hexperiments.
,3the
cithat
)outer
Abstract
47
communication
can
performed
using
standard
backpropagation
RLfor
loop).
baseline
objectives,
set
0.03
all
2
62feedforward
Note
this
encompasses
the
each agents,
as wellofastheir
X
X
X
0 c ) and
j=
j(within
j2h
jcontrollers
j35 single
j,!
j)
C
the
be
viewed
as
a
network
with
layers
h
h
where
the
between
agents.
@
log
p(a(t)|s(t),
?)
@
!
X
environment.
We
the
each
agent
receives
R,
independent
j
We use
policy
gradientr(i)
[33] between
with
a state
specific
forb(s(t),
gradient to the model.
4 a feedforward
T ? =10 48
Tcommunication
T i ibaseline
i delivering
Abstract
ct
i+1
i+1
10
10 be
b(s(t),
?)
?
?) 5a.each
del
X
X
X
63?) c
the
agents.
nd
the
model can
viewed
as
network
with
hi+1
= ...,
(T
hX
) and
where
hactions
36
In
this
setting,
there
isr(i)
no difference
between
agent having its own controller, or
X
X
=
h
;layers
@we
log
p(a(t)|s(t),
@X
0 contribution.
two
contributions.
First,
and
extend
the
graph
neural
network
is
the
concatenation
of
all
hijsimplify
and
T
takes
the
block
form:
j states
jgraph
@?tanh
@?
48
We
use Denote
policy
gradient
[33]
with
aStructure
state
baseline
for
delivering
ataken
gradient
to
the
49
the
in
an
episode
byspecific
s(1),
s(T
),i+1
the
at each
of model.
those states
5
4simplify
3.1
Controller
tions.
First,
we
and
extend
the
neural
network
i+1
i+1
i+1
i+1
i+1
i+1
i+1
?
=
r(i)
b(s(t),
?)
?
r(i)
b(s(t),
?)
.
t=1
i=t
i=t
37
viewing
them
as
pieces
of
a
larger
model
controlling
all
agents.
Taking the latter perspective, our
i
0 6=
c
=
c
h
=
c
;
h
=
c
;
h
=
;
h
;
j
j
nd,
we
show
how
this
architecture
can
be
used
to
control
groups
of
cooperating
tributions.
First,
we
simplify
and
extend
the
graph
neural
network
0
0
0
0
ncatenation
of
all
h
and
T
takes
the
block
form:
@?
64 ...,
One
obvious
choice
is
a
multi-layer
neural
network,
which
could
extracteach
j3857
jfor
jfully-connected
j
j
j
j
j
50 @?
as
a(1),
a(T
),
where
T
is
the
length
of
the
episode.
The
baseline
is
a
scalar
function
theactions,
49
Denote
the
states
in
an
episode
by
s(1),
...,
s(T
),
and
the
actions
taken
at
each
of
those
statestoof
0
1
j
3
Model
First,
we
simplify
and
extend
the
graph
neural
network
i
controller
is
a
large
feed-forward
neural
network
that
maps
inputs
for
all
agents
their
i
i
i
i
mean
i input
t=1
i=t
i=t
how
this
architecture
can
be
used
to
control
groups
of
cooperating
We
now
detail
our
architecture
for
that
communication
without
losing modularity.
is built
m
ofhow
the
model
consists
of
multilayer
neural
networks
fthe
that
take
as
vectors
H),atfeatures
C
C
...
plest
form
of
the
model
consists
ofto
multilayer
neural
networks
that
take
as
input
vectors
0extra
0allows
0the
how
this
can
be
used
control
groups
of
Anonymous
Anonymous
Anonymous
Author(s)
Anonymous
Author(s)
0 architecture
65
from
shyperparameter
and
use
them
to
good
actions
RL
framework.
model
would
55
Here
r(t)
reward
given
time
t, Th
and
?
isAuthor(s)
for
balancing
reward
and
the
jC
6=
jf
ja 0subset
6=
jpredict
jmodel
6=
jA
jnetwork).
6=Author(s)
jwith
510
states
b(s,
?),
computed
via
an
head
on
the
producing
action
probabilities.
Beside
agent
occupying
of
units.
specific
connectivity
structure
between
layers
(a) instantiates
the
50
asisand
a(1),
...,
a(T
where
is39cooperating
the
of
the
episode.
The
baseline
isthe
aour
scalar
function
ofThis
the..,
1
i length
1
We
set
c
=
0
for
all
j,
i
2
{0,
..,
K}
(we
will
call
K
the
number
of
hops
in
the
i+1
0
0
0
i
i
i
i
i
i
i
i
j
i
i+1
0
0
0
from
modules
f
,
which
take
the
form
of
multilayer
neural
networks.
Here
i
2
{0,
K},
where
Kof
33
2
Problem
Formulation
his
can
be
used
to
control
groups
of
cooperating
tput
a55
vector
h
.
The
model
takes
as
input
a
set
of
vectors
{h
,
h
,
...,
h
},
and
3 Model

We now describe the model used to compute p(a(t)|s(t), \theta) at a given time t (omitting the time index for brevity). Let s_j be the jth agent's view of the state of the environment. The input to the controller is the concatenation of all state-views s = {s_1, ..., s_J}, and the controller \Phi is a mapping a = \Phi(s), where the output a is a concatenation of discrete actions a = {a_1, ..., a_J} for each agent. Note that this single controller \Phi encompasses the individual controllers for each agent, as well as the communication between agents.

Figure 1: An overview of our CommNet model. Left: view of module f^i for a single agent j. Note that the parameters are shared across all agents. Middle: a single communication step, where each agent's modules propagate their internal state h, as well as broadcasting a communication vector c on a common channel (shown in red). Right: full model \Phi, showing input states s for each agent, two communication steps, and the output actions for each agent.
3.1 Controller Structure

One obvious choice for \Phi is a fully-connected multi-layer neural network, which could extract features h from s and use them to predict good actions with our RL framework. This model would allow agents to communicate with each other and share views of the environment. However, it is inflexible with respect to the composition and number of agents it controls; it cannot deal well with agents joining and leaving the group, and even the order of the agents must be fixed. On the other hand, if no communication is used then we can write a = {\phi(s_1), ..., \phi(s_J)}, where \phi is a per-agent controller applied independently (assuming s_j includes the identity of agent j). This communication-free model satisfies the flexibility requirements, but is not able to coordinate the agents' actions.
We now detail our architecture for \Phi that has the modularity of the communication-free model, but still allows communication. \Phi is built from modules f^i, which take the form of multilayer neural networks. Here i \in {0, .., K}, where K is the number of communication layers in the network (we will call K the number of hops).

Each f^i takes two input vectors for each agent j: the hidden state h_j^i and the communication c_j^i, and outputs a vector h_j^{i+1}. The main body of the model then takes as input the concatenated vectors h^0 = [h_1^0, h_2^0, ..., h_J^0] and computes:

h_j^{i+1} = f^i(h_j^i, c_j^i)    (1)

c_j^{i+1} = \frac{1}{J-1} \sum_{j' \neq j} h_{j'}^{i+1}.    (2)

We set c_j^0 = 0 for all j.
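As a concrete illustration of equations (1) and (2), here is a minimal sketch of the K communication hops, assuming each module is a callable applied to all agents at once with shared parameters; the name `f_modules` and the tensor layout are illustrative choices, not fixed by the paper.

```python
import torch

def commnet_hops(f_modules, h0):
    """Iterate equations (1) and (2): K communication hops over J agents.

    h0: tensor of shape (J, d) holding the initial hidden state of each agent.
    f_modules: list of K callables; f_modules[i](h, c) -> new hidden states,
               applied to all agents at once (parameters shared across agents).
    Under the broadcast scheme, each agent's communication vector is the mean
    of the *other* agents' hidden states (the 1/(J-1) factor in (2)).
    """
    J = h0.shape[0]
    h = h0
    c = torch.zeros_like(h0)  # c^0 = 0 for all agents
    for f in f_modules:
        h = f(h, c)  # equation (1), applied per agent with shared weights
        # equation (2): mean of the other agents' new hidden states
        total = h.sum(dim=0, keepdim=True)
        c = (total - h) / (J - 1)
    return h
```

Because the same f^i is applied to every agent and the communication is a symmetric average, the computation is independent of agent ordering, which is the permutation invariance noted below.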
In the case that f^i is a single linear layer followed by a nonlinearity \sigma, we have h_j^{i+1} = \sigma(H^i h_j^i + C^i c_j^i), and the model can be viewed as a feedforward network with layers h^{i+1} = \sigma(T^i h^i), where h^i is the concatenation of all h_j^i and T^i takes the block form (where \bar{C}^i = C^i/(J-1)):

T^i =
\begin{pmatrix}
H^i & \bar{C}^i & \cdots & \bar{C}^i \\
\bar{C}^i & H^i & \cdots & \bar{C}^i \\
\vdots & \vdots & \ddots & \vdots \\
\bar{C}^i & \bar{C}^i & \cdots & H^i
\end{pmatrix}.

A key point is that T^i is dynamically sized. First, the number of agents may vary; this motivates the normalizing factor J - 1 in equation (2), which rescales the communication vector by the number of communicating agents. Second, the blocks are applied based on category, rather than by coordinate. In this simple form of the model, "category" refers to either "self" or "teammate"; but as we will see below, the communication architecture can be more complicated than "broadcast to all", and may require more categories. Note also that T^i is permutation invariant, thus the order of the agents does not matter.
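A quick way to sanity-check the feedforward view is to build T^i explicitly for a given number of agents. This sketch (illustrative names, assuming NumPy) assembles the block matrix from H and C and verifies that it reproduces one per-agent communication step.

```python
import numpy as np

def build_T(H, C, J):
    """Assemble the block matrix T for J agents: H on the diagonal blocks,
    C/(J-1) everywhere else. H and C are (d, d) weight matrices."""
    d = H.shape[0]
    C_bar = C / (J - 1)
    T = np.tile(C_bar, (J, J))           # C_bar in every block...
    for j in range(J):
        T[j*d:(j+1)*d, j*d:(j+1)*d] = H  # ...then H on the diagonal blocks
    return T

# One linear communication step computed two ways (tanh as the nonlinearity):
J, d = 4, 8
rng = np.random.default_rng(0)
H, C = rng.normal(size=(d, d)), rng.normal(size=(d, d))
h = rng.normal(size=(J, d))              # per-agent hidden states
c = (h.sum(0) - h) / (J - 1)             # equation (2)
per_agent = np.tanh(h @ H.T + c @ C.T)   # sigma(H h_j + C c_j) for each j
stacked = np.tanh(build_T(H, C, J) @ h.reshape(-1)).reshape(J, d)
assert np.allclose(per_agent, stacked)   # the two views agree
```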
If desired, we can take the final hidden vectors h_j^K and output them directly, so that the model outputs a vector corresponding to each input vector, or we can feed them into another network to get a single vector or scalar output.

At the first layer of the model an encoder function h_j^0 = p(s_j) is used. This takes as input the state-view s_j and outputs a feature vector h_j^0 (in R^{d_0} for some d_0). The form of the encoder is problem dependent, but for most of our tasks it consists of a lookup-table embedding (or bags of vectors thereof). Unless otherwise noted, c_j^0 = 0 for all j.

At the output of the model, a decoder function q(h_j^K) is used to output a distribution over the space of actions. q(.) takes the form of a single-layer network, followed by a softmax. To produce a discrete action, we sample from this distribution.

Thus the entire model, which we call a Communication Neural Net (CommNet), (i) takes the state-view s of all agents and passes it through the encoder h^0 = p(s), (ii) iterates h and c in equations (1) and (2) to obtain h^K, and (iii) samples actions a for all agents according to q(h^K).
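Putting the encoder, the K hops, and the decoder together, a minimal end-to-end sketch might look as follows (PyTorch, with illustrative module choices: a lookup-table encoder and linear f modules; the paper's encoders are task dependent).

```python
import torch
import torch.nn as nn

class CommNet(nn.Module):
    """Sketch of the full model: encoder -> K communication hops -> decoder."""
    def __init__(self, n_states, n_actions, d=64, hops=2):
        super().__init__()
        self.encoder = nn.Embedding(n_states, d)  # h0_j = p(s_j), lookup table
        self.f = nn.ModuleList([nn.Linear(2 * d, d) for _ in range(hops)])
        self.decoder = nn.Linear(d, n_actions)    # q(.), single layer + softmax

    def forward(self, s):
        # s: LongTensor of shape (J,), one discrete state-view per agent
        J = s.shape[0]
        h = self.encoder(s)                               # (J, d)
        c = torch.zeros_like(h)                           # c^0 = 0
        for f in self.f:
            h = torch.tanh(f(torch.cat([h, c], dim=-1)))  # equation (1)
            c = (h.sum(0, keepdim=True) - h) / (J - 1)    # equation (2)
        logits = self.decoder(h)                          # (J, n_actions)
        return torch.distributions.Categorical(logits=logits).sample()

# Usage: three agents, each observing one of ten discrete state-views.
actions = CommNet(n_states=10, n_actions=5)(torch.tensor([3, 1, 4]))
```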
3.2 Model Extensions

Local Connectivity: An alternative to the broadcast framework described above is to allow agents to communicate only with others within a certain range. Let N(j) be the set of agents present within the communication range of agent j. Then (2) becomes:

c_j^{i+1} = \frac{1}{|N(j)|} \sum_{j' \in N(j)} h_{j'}^{i+1}.    (3)

As agents move, enter, and exit the environment, N(j) will change over time. In this setting, our model has a natural interpretation as a dynamic graph, with N(j) being the set of vertices connected to vertex j at the current time. The edges within the graph represent the communication channel between agents, with (3) being equivalent to belief propagation [22]. Furthermore, the use of multi-layer nets at each vertex makes our model similar to an instantiation of the GGSNN work of Li et al. [14].
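Equation (3) is essentially a masked mean over each agent's neighborhood. One way to sketch it (illustrative, assuming a boolean adjacency mask indicating which agents are within range):

```python
import torch

def local_communication(h, within_range):
    """Equation (3): average hidden states over each agent's neighborhood.

    h:            (J, d) hidden states h^{i+1}.
    within_range: (J, J) boolean mask; within_range[j, k] is True when agent k
                  is in N(j). The diagonal should be False (no self-loop).
    Agents with an empty neighborhood receive a zero communication vector.
    """
    mask = within_range.float()                 # (J, J)
    counts = mask.sum(dim=1, keepdim=True)      # |N(j)| for each agent
    c = mask @ h / counts.clamp(min=1.0)        # masked mean of neighbors
    return c
```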
Skip Connections: For some tasks, it is useful to have the input encoding h_j^0 present as an input for communication steps beyond the first layer. Thus for agent j at step i, we have:

h_j^{i+1} = f^i(h_j^i, c_j^i, h_j^0).    (4)

Temporal Recurrence: We also explore having the network be a recurrent neural network (RNN). This is achieved by simply replacing the communication step i in equations (1) and (2) by a time step t, and using the same module f for all t. At every time step, actions will be sampled from q(h^t). Note that agents can leave or join the swarm at any time step. If f is a single-layer network, we obtain plain RNNs that communicate with each other; in later experiments, we also use an LSTM as the f module.
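A sketch of the recurrent variant (illustrative: it reuses one shared module across time steps, folds in the current observation, and samples an action at every step; the exact f signature here is an assumption, not fixed by the paper):

```python
import torch

def commnet_rnn_rollout(f, decoder, encode, states):
    """Temporal-recurrence sketch: one shared module f applied per time step.

    encode(s_t) -> (J, d) per-agent input features; f takes the concatenated
    (input, hidden, communication) vectors and returns (J, d) hidden states;
    decoder(h) -> (J, n_actions) logits. Actions are sampled at every step.
    """
    h = torch.zeros_like(encode(states[0]))
    c = torch.zeros_like(h)
    actions = []
    for s_t in states:
        J = h.shape[0]
        h = f(torch.cat([encode(s_t), h, c], dim=-1))  # recurrent update
        c = (h.sum(0, keepdim=True) - h) / (J - 1)     # broadcast averaging
        logits = decoder(h)
        actions.append(torch.distributions.Categorical(logits=logits).sample())
    return actions
```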
4 Related Work

Our model combines a deep network with reinforcement learning [8, 20, 13]. Several recent works have applied these methods to multi-agent domains, such as Atari [16], Go [24], and other games [29], but
iorder
i of
i the
i agents
i ii must
imodel
i besatisfies
iifixed.i On
controller
applied
thei...
flexibility
feedforward
with
layersand
B
BAhthe
A
.. and
.... hi+1
.=0j6=,A
i 16
(h
, be
cj c)68viewed
an
bythen
coordinate.
j )aper-agent
.j=.An
jcan
hto
; controller
ji model
A
B
B
B
A
...
B
B
B
A
...
B
B
B
...
BtheB
Local
Connectivity:
the
broadcast
framework
described
above
is
to
agents
ji+1
ji+1
hother
;70
jK}
j 0if
ordinate.
60
the
concatenation
ofhops
allA
state-views
=allow
{s
, ...,
},2 and
controller
aamapping
1 K
i 62
i is
i is
i..=2fclayer
..,69X
(we
will
the
number
of
in
the
1communication.
assume
full
visibility
of
the
environment
and
lack
isagents,
rich
j {0,
Note
that
this
single
controller
encompasses
individual
controllers
forB
eachis
as literature
well as
j.0they
i ...
Figure
1:
Blah.
noicall
communication
is
used
then
we
can
write
adetail
=
{isnetwork).
(s
),the
...,
(ssiJJ)},
where
is
aThere
71ahand,
requirements
,
but
not
able
to
coordinate
their
actions.
h
=
(A
h
+
B
c
),
1architecture
i+1
i
i
2
.
fall
is aj,
simple
linear
followed
by
nonlinearity
:
j
j
.
.
i
i
i
i
i
i
i
i
i
i
i
i
i
i
j
0
73
We
now
the
for
that
has
the
modularity
of
the
communication-free
model but
18
The
key idea
is
dynamically
sized,
and
the
can
be
the
X
H
H
),
j 6=is
i+1
iLet
i=
61 T a
=
where
the(T
output
a matrix
iscommunication-free
aA
concatenation
discrete
actions
aB
=the
{a1robotics
, ..., aJB} fordomain
each agent.
to
communicate
others
aj that
certain
range.
(j)
beThis
the
set
ofBagents
present
within
odel
Extensions
B
B
Adynamically
Bthe
...of
B
A
BB...sized
B
A
Bbecause
...in
B
...
j 0 6=
i+1
63
the
communication
between
agents.
70j within
per-agent
controller
applied
independently.
model
satisfies
the
flexibility
K
i+1
i+1andto
Htype,
= (s),
(Tthan
Hso
),N
on
multi-agent
reinforcement
learning
(MARL)
[1],
particularly
[18,
25,form
5, of multilayer neural
ci+1
=
hfeedforward
; icall
lly
sized,
the
matrix
can
be
dynamically
sized
0applied
74 because
still
allows
communication.
is
built
from
modules
f ias
, which
take the
19h and
are
by
rather
by
coordinate.
d,
and
the
matrix
can
be
dynamically
sized
because
the
2
take
the
final
h
output
them
directly,
that
the
model
outputs
a
vector
c
=
;
i
i
i
i
j
0blocks
j
i
i
i
i
i
i
i
i
i
i
i
i
i
i
i
i
1
i+1
i
i
i
and
i
2
{0,
..,
K}
(we
will
K
the
number
of
hops
in
the
network).
j
62
Note
that
this
single
controller
encompasses
the
individual
controllers
for
each
agents,
well
as
j
then
the
model
can
be
viewed
as
a
network
with
layers
j
Figure
1:i Blah.
matrix
be..,
sized
because
the
i the
i may
communication
range
agent
Then
(2)
becomes:
71 A
requirements
,isB
but
able
to
coordinate
their
actions.
83 {0,
The
key
idea
is
that
T+
dynamically
sized.
First,
the
number
agents
vary.
This
motivates
B
Bnetworks.
A
BT
B
...
Ai 2B
B{0,
Bmulti-layer
.....,A
B
Bwhere
...neural
A
Bisnetwork,
...
B which
h
=
(A
hjcall
cis
),64not
Bach
Bij,can
B
...
72
3.1
Controller
Structure
for
all
i dynamically
2
..,
K}
(we
will
K
number
of
hops
in
jthe
One
obvious
choice
for
isBiaof
fully-connected
could
75 network).
Here
K
the. number
of
layers in the network.
17
where
Tor
isof
written
inj.
block
form
Tthe
=
T
=
T
=
=
.K},
.[12,
. communication
21,
2].
Amongst
fully
cooperative
algorithms,
many
approaches
15,
33]
avoid
the extract
need for
j, and
2and
{0,
will
K
the
number
of
hops
in
network).
j 0jcan
6=jcall
0 6=
lan
Information
Systems
(NIPS
Do
not
distribute.
jK}
j (we
oordinate.
632016).
the
communication
between
input
vector,
we
feed
them
into
network
to
get
ato
single
vector
by
coordinate.
17 Processing
where
T
isSystems
written
in
block
form
KProcessing
84alternative
the
normalizing
factor
J i65
1ianother
inthat
equation
(2),
which
resacles
communication
the
ion
(NIPS
2016).
Do
not
distribute.
i+1
onnectivity:
An
to
the
broadcast
framework
described
above
to
.. agents.
..a vector
.. is
..the
..aallow
..vector
.. agents
....actions
... or
.. vector
.... . .ourby
.. RL
.. framework.
.. This model would
Kthe
.
.
.
features
h
from
s
and
use
them
predict
good
with
X
Kfinal
H
=
(T
H
),
he
final
h
and
output
them
directly,
so
the
model
outputs
i
i
can
take
the
h
and
output
them
directly,
so
that
the
model
outputs
.
.
1
i
i
i
i
communication
by
making
strong
assumptions
about
visibility
of
other
agents
and
the
environment.
ke
the
final
h
and
output
them
directly,
so
that
the
model
outputs
a
vector
jviewed
e j,
model
can
be
a
feedforward
network
with
layers
jas
76
Each
f
takes
two
input
vectors
for
each
agent
j:
the
hidden
state
h
and
the communication cij ,
Submitted
to
Conference
on
Neural
Information
Processing
Systems
(NIPS
2016).
Do
not
distribute.
i+1
i+1
.
.
.
.
7329th
We
now
detail
the
architecture
for
that
has
the
modularity
of
the
communication-free
model
but
83
The
key
idea
is
that
T
is
dynamically
sized.
First,
the
number
of
agents
may
vary.
This
motivates
j
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
j
A
B
B
...
B
85
number
of
communicating
agents.
Second,
the
blocks
are
applied
based
on
category,
rather
than
by
72K
Controller
Structure
and
i 2
{0,
.., will
K}
(wecan
will
call
the
number
ofi (j)
hops
in
ihops
iinallow
ifor
66
agents
to
communicate
withor
each
other
and
share
views
of the environment.
However, it
municate
toor
others
within
a3.1
certain
range.
Let
N
beto
agents
present
within
64of
One
obvious
choice
isof
a fully-connected
multi-layer
neural
network,
which could extract
cKjinto
=
hBthe
.set
(3)
iinput
2each
{0,
..,
K}
(we
call
the
number
the
network).
0the
i
to
input
vector,
or
we
feed
them
into
another
network
get
anetwork).
single
vector
A1communication,
B
B
...
i+1
jvector
i
i
i
i
put
vector,
or
we
can
feed
them
another
network
to
get
a
single
vector
or
vector,
we
can
feed
them
into
another
network
to
get
a
single
or
74
still
allows
communication.
is
built
from
modules
f
,
which
take
the
form
of
multilayer
neural
i
i
i
i
i
i
i
i
i
i
i
i
i
i
i
i
ed,
and
the
matrix
can
be
dynamically
sized
because
the
Others
use
but
with
a
pre-determined
protocol
[30,
19,
37,
17].
77
and
outputs
a
vector
h
.
The
main
body
of
the
model
then
takes
as
input
the
concatenated vectors
where
T
is
in
block
form
84written
the
the
normalizing
factor
J
in
equation
(2),
which
resacles
the
communication
vector
by
the
i+1
i
i
86
coordinate.
In
this
simple
form
of
the
model
?category?
refers
to
either
?self?
or
?teammate?;
but
as
|N67(j)|
Bhinflexible
A s and
Bwith
Bto
K
B
B
B
B
B
...actions
BB
A
B
...
Blayers
B
ARL
...
A
...
A model
is
respect
topredict
the B
composition
and
of B
agents
it controls;
cannot
deal well
j number
65
features
them
good
with
our
framework.
This
would
i),
ifrom
i use...
Kfinal
H sodirectly,
=
(T
H
0 2N (j)
nication
range
of87them
agent
j.73them
Then
(2)
becomes:
e the
h
and
output
so
that
the
model
outputs
a isvector
75
networks.
Here
i 2Bji{0,
..,
where
K
the
number
ofcategory,
communication
in to
theall?,
network.
B
Aioutputs
B
nal
hlinear
and
output
directly,
that
the
model
a...
vector
j number
iK},
iare
imore
We
now
detail
architecture
for
that
has
the
modularity
of
the
communication-free
model
but
j Processing
we
will
see
below,
communication
can
be
complicated
than
?broadcast
85 layer
of
communicating
agents.
Second,
the
blocks
applied
based
on
rather
than
by
ple
followed
by
a
nonlinearity
:
i
ation
Systems
(NIPS
2016).
Do
not
ithe
ithe
i 68 distribute.
iarchitecture
with
agents
joining
and
leaving
the
group
and
even
the
order
of
the
agents
must
be
fixed.
On
the
B
B
A
...
B
66
allow
agents
to
communicate
with
each
other
and
share
views
of
the
environment.
However,
it
Ai another
BB
B vector
coordinate.
iB
i ...
ito
i
T(NIPS
=
i 1 Assuming
input
orfeed
we88Processing
can
feed
them
into
network
get
ainvolve
single
vector
or
essing
Systems
(NIPS
2016).
DoA
not
distribute.
ilearning
sjthen
includes
identity
agent
ito
i the
few
notable
approaches
to
communicate
between
agents
under
partial
visibility:
orvector,
we86in
canblock
them
into
another
network
get
amodel
single
or
A
...
B
allows
communication.
is
built
from
modules
f .dynamically
,the
which
take
the
form
of
multilayer
neural
X
lctor,
Information
Systems
Do
not
distribute.
so74
may
more
categories.
Note
also
that
isto
permutation
invariant,
thus
the
order
the
Tsimple
is
written
form
76
Each
fi of
takes
two
input
vectors
for
each
j:
the
hidden
state
hthe
and
the
communication
cij , (s
coordinate.
In
this
simple
the
?category?
refers
either
?self?
or
?teammate?;
but
as
ihand,
69
if
no
communication
isand
used
we
can
write
aofof
=dynamically
{ (sj.
), dynamically
...,
)},
where
is abecause
..2016).
..that
.T
..and
linear
layer
followed
bystill
arequire
nonlinearity
:iB
T18
=
.that
18
18
key
The
idea
key
is
The
that
idea
18
Tinflexible
is
The
is
that
idea
dynamically
key
T
is
is
idea
dynamically
is
is
sized,
dynamically
Tagent
is
sized,
matrix
sized,
the
and
can
matrix
sized,
dynamically
and
can
matrix
the
be
can
matrix
be
can
because
be
dynamically
because
the
sized
the
sized because
the
the
167
.T
1sized
Jsized
iskey
with
respect
the
composition
and
number
of
agents
it
controls;
cannot
deal
well
jbe
linear
layer
followed
byand
aThe
nonlinearity
:iform
B
A
...other
B
i+1
i+1
.to
75does
networks.
Here
i 2B
{0,
..,
K},
where
K.. isbethe
number
of communication
layers
in to
theall?,
network. tabular-RL
.
.
.
.
i+1
i+1
.
.
.
i
i
i
i
c
=
h
.
(3)
.
89
agents
not
matter.
Kasai
et
al.
[10]
and
Varshavskaya
et
al.
[32],
both
use
distributed
approaches
for
0
70
per-agent
controller
applied
independently.
This
communication-free
model
satisfies
the
flexibility
i
i
i
i
we will
see
below,
the
communication
architecture
can
more
complicated
than
?broadcast
ear layer87followed
by
a
nonlinearity
:
77(A
atype,
vector
h
.i. The
main
body
of
thegroup
model
thenthan
takesthe
as input
the
concatenated
vectors
.
.
.
.
.
iBand
ioutputs
i... applied
jiiare
jtype,
68 +
with
agents
joining
and
leaving
the
and
even
order
of
the
agents
must
be
fixed.
On
the
19
blocks
19 A
blocks
applied
19
are
blocks
applied
19
by
are
blocks
by
rather
are
applied
than
by
rather
type,
by
than
by
coordinate.
rather
type,
by
than
coordinate.
rather
by
coordinate.
by
coordinate.
B
A
B
h
=
h
B
c
),
j
B
B
...
B
3
.
.
.
.
i+1
i
i
i
i
i
i
i
T =
. 1i
j
|Nj(j)|
j
Abstract
Connecting
Connecting
Connecting
Neural
Connecting
Neural
Models
Neural
Models
Neural
Models
Models
Connecting
Neural
Models
Connecting
Neural
Models
nnecting
Neural
Models
Connecting
Neural
Models
0
0
0
0
i 76=
i Each
i ifhi categories.
i the
B
B
B,[6]
...
A
71
requirements
but
is
able
toinvariant,
coordinate
actions.
h(A
(A
+:iB
cijother
),
0 2N
so may
more
Note
also
that
Teach
is
permutation
thus
order
the
0 not
vectors
for
agent
the
state
htheir
andwrite
the
communication
tasks.
Jim
use
an
algorithm,
rather
69
hand,
no
communication
isevolutionary
used
then
we
can
a of
=state-view
{ (s1 ),than
...,cij , (sreinforcement
)}, where is alearning.
j.takes
. two
hai+1
hfirst
+:isimulated
B
cji),of
jthe
inear
layer88 followed
by
arequire
nonlinearity
j.input
. .(j)
iiGiles
i .if&
J2
jtakes
90 = At
layer
the
an
encoder
function
=j:p(s
) hidden
is used.
This
as
input
j B
j nonlinearity
yer
followed
byand
A
B
.. B
1 .model
Ai ofhjagent
.B .. ...
. B... .. B
Assuming
si+1
the identity
j. j
j includes
i+1does not
i outputs
ii
ivector
ial.
89
agents
matter.
per-agent
controller
applied
independently.
This
model
satisfies
flexibilitymessage
iGuestrin
ia70
0ji .use
dDo
77(A
The
main
body
of
the
model
takes
ascommunication-free
input
the
concatenated
vectors
et
[7]
single
MDP
to the
control
asized
collection
ofthe
agents,
via athefactored
i Systems
Band
AB
B
be viewed
as
aia feedforward
with
hi+1
hnetwork
+
),
idynamically
ic...
ih
ia
mation
(NIPS
2016).
not
distribute.
sidea
and
outputs
vector
h
RA
for
some
dlarge
form
of
encoder
is
problem
dependent,
The
key
TB
is
sized,
and
the
matrix
can
bethen
dynamically
because
can
be viewed
feedforward
with
layers
ii that
inetwork
j=
0 ). The
=iis
.layers
jfeature
jrequirements
1Controller
B
B
...
j91
j (in
viewed
as18Processing
a feedforward
network
with
layers
i+1
3.1
i T
71
, but is not Structure
able to coordinate their actions.
has
=
(A
hi +
B
ci ), B
0
72
Submitted
to.our
29th
Submitted
Conference
tothey
29th
Submitted
to
29th
on
Conference
to
on
Information
Neural
Conference
Neural
on
Information
Neural
Processing
Systems
Processing
Systems
(NIPS
2016).
Systems
(NIPS
Do
2016).
Systems
(NIPS
not distribute.
Do
2016).
(NIPS
not model
distribute.
Do
2016).
notuses
distribute.
Doanot distribute.
029th
j. passing
jtasks
..framework
.. Conference
hj 1990key
=blocks
(A
hare
+Submitted
B
jthe
The
idea
T cis
dynamically
sized,
and
the
matrix
be
dynamically
sized
because
the
. .encoder
92is
but
for
most
of
consist
of
aNeural
lookup-table
embedding
(or
bags
of
vectors
thereof).
Unless
where
the
messages
areProcessing
learned.
In
contrast
toProcessing
these
approaches,
our
At
first
layer
of
the
an
function
hcan
=
p(s
)on
isInformation
used.
This
takes
asInformation
input
state-view
applied
rather
by
coordinate.
jthat
j ),
.. by
..model
1 type,
jagent
. than
i+1
isjiincludes
. and
. the identity
of
j. j sized because
imatrix
0i H
3
2
i+1
ii+1
i Assuming
The
key
idea
is
that
T
is
dynamically
sized,
the
can
be
dynamically
the
H
=
(T
),
19
blocks
are
applied
by
type,
rather
than
by
coordinate.
0
d
93
otherwise
noted,
c
=
0
for
all
j.
H
=i network
(T
Hlayers
),0i for
H outputs
= network
(T
Hideep
),vector
73
Wesome
now
detail
the form
architecture
for that
the modularity
of the communication-free model but
ewed
as
network
with
i
jBlayers
for
both
agent
and
communication.
sj and
feature
h
d0 ). control
The
of the
encoder
is has
problem
dependent,
viewed
as91aapplied
a feedforward
feedforward
with
B
...
A
j (in
asblocks
a feedforward
network
layers
are
by type, with
rather
thanB
by coordinate.
72
3.1R74
Controller
allowsStructure
communication.
is built
from modules
f iUnless
, which take the form of multilayer neural
92 form
but for
most
ofoutput
our
tasks
they
consist
ofInformation
a still
lookup-table
embedding
(orto(NIPS
bags
of2016).
vectors
thereof).
K
tten
in
block
94
At
the
of
the
model,
a
decoder
function
q(h
)
is
used
output
a
distribution
over
the
space
of work
Submitted
to
29th
Conference
on
Neural
Processing
Systems
Do
not
distribute.
i+1
i
i
form
jclosest
imatrix
75
networks.
Here
2 {0,because
K}, where
the number
communication
layers
the network.
a MARL
perspective,
theisized
tois ours
is theof
ofinFoerster
2..,
i+1
ii+1
i sized,
yblock
is that
T otherwise
is=dynamically
and
the
be
dynamically
the hasKthe
H(T
=
(T
H
inidea
block
93form
noted,
c0jFrom
0),ifor
all
j.Weofcan
Hactions.
(T
H
),
73
now
detail
the
architecture
forapproach
that
modularity
ofconcurrent
the
communication-free
model butet al. [4].
H
H
),i=
Submitted
toConference
29th
Conference
oni=
Neural
Information
Systems
(NIPS
2016).
Do
not
distribute.
95
q(.)
takes
form
single
layer
network,
followed
by
a
softmax.
To
produce
a
discrete
iithe
i aProcessing
i
i
are
applied
by29th
type,
rather
than
by
coordinate.
Submitted
to
on
Neural
Information
Processing
Systems
(NIPS
2016).
Do
not
distribute.
i
i A
i B
i
B
... still
Ba76allows
This
also
deep
reinforcement
learning
in
multi-agent
partially
observable
tasks,
specifically
Each
f
takes
two
input
vectors
for
each
agent
j:
the
hidden
state
h
and
the
communication
cij ,
74uses
communication.
is
built
from
modules
f
,
which
take
the
form
of
multilayer
neural
A
B
B
...
B
j
96
action,ofiwe
distribution.
i sample
ithe this
iq(hK ) is used toi+1
i B i ifrom
ifunction
block
form94 At the
model,
a
decoder
output
a
distribution
over
the
space
of
B
...
B
i output
i BAithe
i
A
B
...
B
rm
j
75
networks.
Here
i
2
{0,
..,
K},
where
K
is
the
number
of
communication
layers
in
the
network.
two
riddle
problems
spirit
task)
which
communication.
B A B
...
B
77
and(similar
outputs ain
vector
hj to. our
The levers
main body
of the
modelnecessitate
then takes as multi-agent
input the concatenated
vectors
ck form95 actions.
i ientire
i form
i
iAii of...
ia Communication
97i i Thus
the
model,
call
Neural
Net (CommNN),
the istateq(.)
single
followed
by a softmax.
To produce(i)a takes
discrete
i B
itakes
iithe
B
B
B
B76i. iawhich
...weflayer
B1 network,
ii
i. (NIPS
i
ed to 29th Conference
onB=
Neural
Information
Processing
Systems
2016).
Do
not
T
iB
iBB
AiB
...A...
B
A
B
0distribute.
Each
takes
two
input
vectors
for
each
agent
j:
the
hidden
state
h
and
the
communication
c
,
T
=
A
B
...
B
j
j
98
view
of
all
agents
s,
passes
it
through
the
encoder
h
=
p(s),
(ii)
iterates
h
and
c
in
equations
(1)
. iithe
. this
96
action,
sj includes the identity of agent j.
i sample
.i.i distribution.
iwe
i ifrom
i
.. iA
...B.. iB
... i BAssuming
...
B
i ..Ai B
i.. .B
...h....KB
K
B
...
B i AiT99B
B
...
77 . samples
and
outputs
a vector
The main
body oftothe
model
and
(2)
toBobain
,A
(iii)
actions
a
forhi+1
according
q(h
). then takes as input the concatenated vectors
.
=
.
.
.
.
.
jall .agents,
i
i
i
i
iB
iAB
i model,
i B we calliia Communication Neural Net (CommNN),
97i Thus
3 (i) takes the statei iB...
BA
B
A
.iwhich
.... . B
..
i ...
B
. 1.
T
=B iBBithe
AiBentire
B..i B
...passes
Ai ...
Ti = B
98
all
it. .. through
h0 =the
p(s),
(ii) of
iterates
(1)
2
.. .3.2
Assuming
sj includes
identity
agent j.h and c in equations
.....i
.
.
i ..agents
i ...Ks, Extensions
i. the encoder
..i view
.. 100 of
.
Model
...
B be dynamically
. samples
.B..to obain
. .B
...and
. i can
that
T is
sized,
the
matrix
sized
because
the K ).
and
(2)
,A
(iii)
actions
a
for
agents,
according
to q(h
ican
i allbecause
.B ih
T99.dynamically
=
.
.sized,
Ts is
dynamically
and
the
matrix
be
dynamically
sized
the
...
A
i
i
i
B
.. BAii B..... A.B.An
..
i rather
i101
i B
ied by
type,
than
by
Connectivity:
alternative
to the broadcast framework described above is to allow agents
Bthan
Bby
B..Local
... coordinate.
type,
rather
coordinate.
2
.
.
.
.
.
to communicate
100
3.2102 Model
Extensions to others within a certain range. Let N (j) be the set of agents present within
18
Like our approach, the communication is learned rather than being pre-determined. However, the
agents communicate in a discrete manner through their actions. This contrasts with our model where
multiple continuous communication cycles are used at each time step to decide the actions of all
agents. Furthermore, our approach is amenable to dynamic variation in the number of agents.
The Neural GPU [9] has similarities to our model but differs in that a 1-D ordering on the input is
assumed and it employs convolution, as opposed to the global pooling in our approach (thus permitting
unstructured inputs). Our model can be regarded as an instantiation of the GNN construction of
Scarselli et al. [23], as expanded on by Li et al. [14]. In particular, in [23], the output of the model
is the fixed point of iterating equations (3) and (1) to convergence, using recurrent models. In [14],
these recurrence equations are unrolled a fixed number of steps and the model trained via backprop
through time. In this work, we do not require the model to be recurrent, nor do we aim to reach
steady state. Additionally, we regard Eqn. (3) as a pooling operation, conceptually making our model
a single feed-forward network with local connections.
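To make the controller structure concrete before turning to the experiments, the following is a minimal NumPy sketch of this feed-forward view of equations (1) and (2), using linear modules f^i with a tanh nonlinearity; all sizes and the randomly drawn parameters are illustrative stand-ins for learned weights, not the paper's configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    J, d, K = 5, 8, 2   # illustrative: J agents, d-dim hidden states, K hops

    # Random matrices standing in for the learned "self" and "teammate" blocks.
    A = [rng.normal(0, 0.1, (d, d)) for _ in range(K)]
    B = [rng.normal(0, 0.1, (d, d)) for _ in range(K)]

    def commnet_forward(h0):
        """Iterate equations (1)-(2) with linear f^i and a tanh nonlinearity."""
        h = h0                       # h^0 = p(s), shape (J, d)
        c = np.zeros_like(h)         # c^0 = 0
        for i in range(K):
            h = np.tanh(h @ A[i].T + c @ B[i].T)              # (1): sigma(A h + B c)
            c = (h.sum(axis=0, keepdims=True) - h) / (J - 1)  # (2): mean over j' != j
        return h                     # h^K, which the decoder q maps to actions

    hK = commnet_forward(rng.normal(size=(J, d)))
    print(hK.shape)  # (5, 8)

Replacing the mean over all other agents with a mean over N(j) gives the local-connectivity variant of Eqn. (3).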
4 Experiments
4.1 Baselines
We describe three baseline models for Φ to compare against our model.
Independent controller: A simple baseline is where agents are controlled independently without
any communication between them. We can write Φ as a = {φ(s1), ..., φ(sJ)}, where φ is a per-agent
controller applied independently. The advantages of this communication-free model are modularity
and flexibility¹. Thus it can deal well with agents joining and leaving the group, but it is not able to
coordinate the agents' actions.
Fully-connected: Another obvious choice is to make Φ a fully-connected multi-layer neural network
that takes the concatenation of h_j^0 as input and outputs actions {a1, ..., aJ} using multiple output
softmax heads. It is equivalent to allowing T to be an arbitrary matrix with fixed size. This model
would allow agents to communicate with each other and share views of the environment. Unlike our
model, however, it is not modular, inflexible with respect to the composition and number of agents it
controls, and even the order of the agents must be fixed.
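As a sketch of this baseline (with a single linear layer standing in for the multi-layer network, and made-up sizes), note how the weight matrix has a fixed size tied to J and to the agent ordering:

    import numpy as np

    rng = np.random.default_rng(0)
    J, d, n_actions = 5, 8, 4   # illustrative sizes

    # One fixed-size weight matrix over the concatenated encodings h_j^0.
    W = rng.normal(0, 0.1, (J * n_actions, J * d))

    def fully_connected_controller(h0):
        """h0: (J, d) array of encoded state-views; returns (J, n_actions) logits."""
        logits = W @ h0.reshape(-1)           # a single layer over the concatenation
        return logits.reshape(J, n_actions)   # one softmax head per agent

    print(fully_connected_controller(rng.normal(size=(J, d))).shape)  # (5, 4)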
Discrete communication: An alternate way for agents to communicate is via discrete symbols, with
the meaning of these symbols being learned during training. Since Φ now contains discrete operations
and is not differentiable, reinforcement learning is used to train in this setting. However, unlike
actions in the environment, an agent has to output a discrete symbol at every communication step.
But if these are viewed as internal time steps of the agent, then the communication output can be
treated as an action of the agent at a given (internal) time step and we can directly employ policy
gradient [35].
At communication step i, agent j will output the index w_j^i corresponding to a particular symbol,
sampled according to:

    w_j^i ∼ Softmax(D h_j^i)                                   (5)

where matrix D is the model parameter. Let w̄ be a one-hot binary vector representation of w. In our
broadcast framework, at the next step the agent receives a bag of vectors from all the other agents
(where ∨ is the element-wise OR operation):

    c_j^{i+1} = ∨_{j′≠j} w̄_{j′}^i                              (6)
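The sampling and pooling of equations (5) and (6) can be sketched as follows; the vocabulary size V and the matrix D are illustrative placeholders for learned quantities.

    import numpy as np

    rng = np.random.default_rng(1)
    J, d, V = 5, 8, 10              # V: hypothetical symbol-vocabulary size
    D = rng.normal(0, 0.1, (V, d))  # stand-in for the learned matrix D in (5)

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    h = rng.normal(size=(J, d))     # hidden states h_j^i
    # (5): each agent samples a symbol index w_j^i ~ Softmax(D h_j^i)
    w = np.array([rng.choice(V, p=softmax(D @ h[j])) for j in range(J)])
    w_bar = np.eye(V, dtype=bool)[w]   # one-hot rows, i.e. the vectors w-bar

    # (6): each agent receives the element-wise OR over the other agents' symbols
    c_next = np.array([np.logical_or.reduce(np.delete(w_bar, j, axis=0))
                       for j in range(J)])
    print(w, c_next.astype(int))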
4.2 Simple Demonstration with a Lever Pulling Task
We start with a very simple game that requires the agents to communicate in order to win. This
consists of m levers and a pool of N agents. At each round, m agents are drawn at random from
the total pool of N agents and they must each choose a lever to pull, simultaneously with the other
m − 1 agents, after which the round ends. The goal is for each of them to pull a different lever.
Correspondingly, all agents receive reward proportional to the number of distinct levers pulled. Each
agent can see its own identity, and nothing else, thus sj = j.
¹Assuming sj includes the identity of agent j.
We implement the game with m = 5 and N = 500. We use a CommNet with two communication
steps (K = 2) and skip connections from (4). The encoder r is a lookup-table with N entries of
128D. Each f^i is a two-layer neural net with ReLU non-linearities that takes in the concatenation
of (h^i, c^i, h^0) and outputs a 128D vector. The decoder is a linear layer plus softmax, producing
a distribution over the m levers, from which we sample to determine the lever to be pulled. We
compare it against the independent controller, which has the same architecture as our model except
that communication c is zeroed. The results are shown in Table 1. The metric is the number of distinct
levers pulled divided by m = 5, averaged over 500 trials, after seeing 50000 batches of size 64 during
training. We explore both reinforcement (see the supplementary material) and direct supervision
(using the solution given by sorting the agent IDs, and having each agent pull the lever according to
its relative order in the current m agents). In both cases, the CommNet performs significantly better
than the independent controller. See the supplementary material for an analysis of a trained model.
                   Training method
Model Φ            Supervised    Reinforcement
Independent        0.59          0.59
CommNet            0.99          0.94
Table 1: Results of lever game (#distinct levers pulled)/(#levers) for our CommNet and independent
controller models, using two different training approaches. Allowing the agents to communicate
enables them to succeed at the task.
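A toy sketch of one round of the game and its metric (the policies shown, uniform random and the sorting-based supervised target, are illustrative rather than trained models):

    import numpy as np

    rng = np.random.default_rng(2)
    N, m = 500, 5

    def play_round(choose_lever):
        """Draw m agents from the pool; each picks a lever; score = distinct levers / m."""
        agents = rng.choice(N, size=m, replace=False)  # s_j = j: agents see only their ID
        pulls = [choose_lever(j) for j in agents]
        return len(set(pulls)) / m

    # Random policy: expected score is about 0.67 for m = 5.
    print(play_round(lambda j: int(rng.integers(m))))

    # The supervised target described above: the teacher sorts the drawn IDs and
    # assigns each agent the lever equal to its rank, which always scores 1.0.
    agents = sorted(rng.choice(N, size=m, replace=False))
    pulls = {j: rank for rank, j in enumerate(agents)}
    print(len(set(pulls.values())) / m)  # 1.0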
4.3 Multi-turn Games
In this section, we consider two multi-agent tasks using the MazeBase environment [26] that use
reward as their training signal. The first task is to control cars passing through a traffic junction to
maximize the flow while minimizing collisions. The second task is to control multiple agents in
combat against enemy bots.
We experimented with several module types. With a feedforward MLP, the module f^i is a single
layer network and K = 2 communication steps are used. For an RNN module, we also used a single
layer network for f^t, but shared parameters across time steps. Finally, we used an LSTM for f^t. In
all modules, the hidden layer size is set to 50. MLP modules use skip-connections. Both tasks are
trained for 300 epochs, each epoch being 100 weight updates with RMSProp [31] on mini-batches of
288 game episodes (distributed over multiple CPU cores). In total, the models experience ≈8.6M
episodes during training. We repeat all experiments 5 times with different random initializations, and
report mean value along with standard deviation. The training time varies from a few hours to a few
days depending on task and module type.
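This schedule can be summarized in the following Python skeleton; the RMSProp step follows [31], but its learning rate and decay are placeholders, since the text does not specify them:

    import numpy as np

    def rmsprop_step(param, grad, cache, lr=1e-3, decay=0.97, eps=1e-6):
        """One RMSProp update [31]; lr and decay here are arbitrary choices."""
        cache[:] = decay * cache + (1 - decay) * grad * grad
        param -= lr * grad / (np.sqrt(cache) + eps)

    EPOCHS, UPDATES_PER_EPOCH, BATCH_EPISODES = 300, 100, 288

    def train(param, episode_gradient):
        """Skeleton of the schedule above: gradients averaged over 288-episode batches."""
        cache = np.zeros_like(param)
        for _ in range(EPOCHS):
            for _ in range(UPDATES_PER_EPOCH):
                grads = [episode_gradient(param) for _ in range(BATCH_EPISODES)]
                rmsprop_step(param, np.mean(grads, axis=0), cache)

    w = np.zeros(10)
    # train(w, lambda p: np.ones_like(p))  # with a real policy-gradient estimator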
4.3.1 Traffic Junction
This consists of a 4-way junction on a 14 × 14 grid as shown in Fig. 2(left). At each time step, new
cars enter the grid with probability p_arrive from each of the four directions. However, the total number
of cars at any given time is limited to N_max = 10. Each car occupies a single cell at any given time
and is randomly assigned to one of three possible routes (keeping to the right-hand side of the road).
At every time step, a car has two possible actions: gas which advances it by one cell on its route or
brake to stay at its current location. A car will be removed once it reaches its destination at the edge
of the grid.
Two cars collide if their locations overlap. A collision incurs a reward r_coll = −10, but does not affect
the simulation in any other way. To discourage a traffic jam, each car gets reward of τ r_time = −0.01τ
at every time step, where τ is the number of time steps passed since the car arrived. Therefore, the
total reward at time t is:

    r(t) = C^t r_coll + Σ_{i=1}^{N^t} τ_i r_time,

where C^t is the number of collisions occurring at time t, and N^t is the number of cars present. The
simulation is terminated after 40 steps and is classified as a failure if one or more collisions
have occurred.
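A direct transcription of this reward into code (the function name and example values are ours):

    def traffic_reward(num_collisions, waiting_times, r_coll=-10.0, r_time=-0.01):
        """r(t) = C^t * r_coll + sum over present cars of tau_i * r_time."""
        return num_collisions * r_coll + sum(tau * r_time for tau in waiting_times)

    # One collision, three cars that arrived 3, 7 and 12 steps ago:
    print(traffic_reward(1, [3, 7, 12]))  # -10.22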
Each car is represented by a set of one-hot binary vectors {n, l, r} that encode its unique ID, current
location and assigned route number respectively. Each agent controlling a car can only observe other
cars in its vision range (a surrounding 3 × 3 neighborhood), but it can communicate to all other cars.
[Figure 2 graphics: the left panel shows the traffic junction grid (new car arrivals, 3 possible routes, visual range, a car exiting); the middle panel shows the combat task grid (enemy bot, firing range, visual range, 4 movement actions, attack actions such as attack_4); the right panel plots failure rate (100%, 10%, 1%) against vision range (1x1, 3x3, 5x5, 7x7) for Independent, Discrete comm., and CommNet.]
Figure 2: Left: Traffic junction task where agent-controlled cars (colored circles) have to pass
through the junction without colliding. Middle: The combat task, where model controlled agents (red
circles) fight against enemy bots (blue circles). In both tasks each agent has limited visibility (orange
region), thus is not able to see the location of all other agents. Right: As visibility in the environment
decreases, the importance of communication grows in the traffic junction task.
The state vector sj for each agent is thus a concatenation of all these vectors, having dimension
3² × |n| × |l| × |r|.
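A sketch of this one-hot encoding with made-up vocabulary sizes; the exact layout of the 3 × 3 view in the actual implementation may differ:

    import numpy as np

    # Illustrative sizes for the ID (n), location (l) and route (r) vocabularies.
    N_IDS, N_LOCS, N_ROUTES = 10, 14 * 14, 3
    CELL_DIM = N_IDS + N_LOCS + N_ROUTES

    def one_hot(idx, size):
        v = np.zeros(size)
        v[idx] = 1.0
        return v

    def encode_car(car_id, loc, route):
        """The {n, l, r} one-hot set for one visible car."""
        return np.concatenate([one_hot(car_id, N_IDS),
                               one_hot(loc, N_LOCS),
                               one_hot(route, N_ROUTES)])

    # An agent's state-view: encodings over its 3x3 vision neighborhood,
    # with zeros for empty cells.
    view = np.zeros((9, CELL_DIM))
    view[4] = encode_car(car_id=2, loc=97, route=1)  # its own car in the center cell
    s_j = view.reshape(-1)
    print(s_j.shape)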
In Table 2(left), we show the probability of failure of a variety of different model Φ and module
f pairs. Compared to the baseline models, CommNet significantly reduces the failure rate for all
module types, achieving the best performance with the LSTM module (a video showing this model
before and after training can be found at http://cims.nyu.edu/~sainbar/commnet).
We also explored how partial visibility within the environment affects the advantage given by
communication. As the vision range of each agent decreases, the advantage of communication
increases as shown in Fig. 2(right). Impressively, with zero visibility (the cars are driving blind) the
CommNet model is still able to succeed 90% of the time.
Table 2(right) shows the results on easy and hard versions of the game. The easy version is a junction
of two one-way roads, while the harder version consists of four connected junctions of two-way
roads. Details of the other game variations can be found in the supplementary material. Discrete
communication works well on the easy version, but the CommNet with local connectivity gives the
best performance on the hard case.
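For reference, a sketch of the local-connectivity pooling of Eqn. (3); the Chebyshev grid distance and the range value are our assumptions, since the text only specifies communication "within a certain range":

    import numpy as np

    def local_comm(h, pos, comm_range=2):
        """Eqn. (3): average h over the agents within range of j, instead of all J - 1."""
        J = h.shape[0]
        c = np.zeros_like(h)
        for j in range(J):
            dist = np.abs(pos - pos[j]).max(axis=1)            # Chebyshev grid distance
            mask = (dist <= comm_range) & (np.arange(J) != j)  # the set N(j)
            if mask.any():
                c[j] = h[mask].mean(axis=0)                    # stays zero if N(j) empty
        return c

    rng = np.random.default_rng(3)
    h = rng.normal(size=(4, 8))
    pos = rng.integers(0, 14, size=(4, 2))
    print(local_comm(h, pos).shape)  # (4, 8)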
4.3.2 Analysis of Communication
We now attempt to understand what the agents communicate when performing the junction task.
We start by recording the hidden state h_j^i of each agent and the corresponding communication
vectors c̃_j^{i+1} = C^{i+1} h_j^i (the contribution agent j at step i + 1 makes to the hidden state of other
agents). Fig. 3(left) and Fig. 3(right) show the 2D PCA projections of the communication and hidden
state vectors respectively. These plots show a diverse range of hidden states but far more clustered
communication vectors, many of which are close to zero. This suggests that while the hidden state
carries information, the agent often prefers not to communicate it to the others unless necessary. This
is a possible consequence of the broadcast channel: if everyone talks at the same time, no-one can
understand. See the supplementary material for norm of communication vectors and brake locations.
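The 2D PCA projections of Fig. 3 can be reproduced along these lines (an SVD-based sketch on random stand-in data; the "silent" threshold is arbitrary):

    import numpy as np

    def pca_2d(X):
        """Project rows of X onto their first two principal components."""
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Xc @ Vt[:2].T

    c_tilde = np.random.default_rng(4).normal(size=(1000, 50))  # stand-in recordings
    proj = pca_2d(c_tilde)                                      # (1000, 2) scatter data
    silent = np.linalg.norm(c_tilde, axis=1) < 1.0              # small-norm vectors
    print(proj.shape, float(silent.mean()))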
                   Module f() type
Model Φ            MLP            RNN            LSTM
Independent        20.6 ± 14.1    19.5 ± 4.5     9.4 ± 5.6
Fully-connected    12.5 ± 4.4     34.8 ± 19.7    4.8 ± 2.4
Discrete comm.     15.8 ± 9.3     15.2 ± 2.1     8.4 ± 3.4
CommNet            2.2 ± 0.6      7.6 ± 1.4      1.6 ± 1.0

                   Other game versions
Model Φ            Easy (MLP)     Hard (RNN)
Independent        15.8 ± 12.5    26.9 ± 6.0
Discrete comm.     1.1 ± 2.4      28.2 ± 5.7
CommNet            0.3 ± 0.1      22.5 ± 6.1
CommNet local      –              21.1 ± 3.4
Table 2: Traffic junction task. Left: failure rates (%) for different types of model and module function
f(.). CommNet consistently improves performance over the baseline models. Right: Game variants.
In the easy case, discrete communication does help, but still less than CommNet. On the hard version,
local communication (see Section 2.2) does at least as well as broadcasting to all agents.
[Figure 3 graphics: the left panel is a 2D scatter of the first two principal components of communication vectors with clusters A, B, C marked; the middle panel shows grid locations (with a STOP marker) associated with clusters A, B, C1, C2; the right panel is the corresponding scatter of hidden state vectors.]
Figure 3: Left: First two principal components of communication vectors c̃ from multiple runs on
the traffic junction task Fig. 2(left). While the majority are "silent" (i.e. have a small norm), distinct
clusters are also present. Middle: for three of these clusters, we probe the model to understand
their meaning (see text for details). Right: First two principal components of hidden state vectors h
from the same runs as on the left, with corresponding color coding. Note how many of the "silent"
communication vectors accompany non-zero hidden state vectors. This shows that the two pathways
carry different information.
To better understand the meaning behind the communication vectors, we ran the simulation with
only two cars and recorded their communication vectors and locations whenever one of them braked.
Vectors belonging to the clusters A, B & C in Fig. 3(left) were consistently emitted when one of the
cars was in a specific location, shown by the colored circles in Fig. 3(middle) (or pair of locations for
cluster C). They also strongly correlated with the other car braking at the locations indicated in red,
which happen to be relevant to avoiding collision.
4.3.3 Combat Task
We simulate a simple battle involving two opposing teams in a 15 × 15 grid as shown in Fig. 2(middle).
Each team consists of m = 5 agents and their initial positions are sampled uniformly in a 5 × 5
square around the team center, which is picked uniformly in the grid. At each time step, an agent can
perform one of the following actions: move one cell in one of four directions; attack another agent
by specifying its ID j (there are m attack actions, each corresponding to one enemy agent); or do
nothing. If agent A attacks agent B, then B's health point will be reduced by 1, but only if B is inside
the firing range of A (its surrounding 3 × 3 area). Agents need one time step of cooling down after
an attack, during which they cannot attack. All agents start with 3 health points, and die when their
health reaches 0. A team will win if all agents in the other team die. The simulation ends when one
team wins, or neither team wins within 40 time steps (a draw).
The model controls one team during training, and the other team consists of bots that follow a hard-coded policy. The bot policy is to attack the nearest enemy agent if it is within its firing range. If not,
it approaches the nearest visible enemy agent within visual range. An agent is visible to all bots if it
is inside the visual range of any individual bot. This shared vision gives an advantage to the bot team.
When input to a model, each agent is represented by a set of one-hot binary vectors {i, t, l, h, c}
encoding its unique ID, team ID, location, health points and cooldown. A model controlling an agent
also sees other agents in its visual range (3 × 3 surrounding area). The model gets reward of −1 if the
team loses or draws at the end of the game. In addition, it also gets reward of −0.1 times the total
health points of the enemy team, which encourages it to attack enemy bots.
                   Module f() type
Model Φ            MLP            RNN            LSTM
Independent        34.2 ± 1.3     37.3 ± 4.6     44.3 ± 0.4
Fully-connected    17.7 ± 7.1     2.9 ± 1.8      19.6 ± 4.2
Discrete comm.     29.1 ± 6.7     33.4 ± 9.4     46.4 ± 0.7
CommNet            44.5 ± 13.4    44.4 ± 11.9    49.5 ± 12.6

                   Other game variations (MLP)
Model Φ            m = 3          m = 10         5 × 5 vision
Independent        29.2 ± 5.9     30.5 ± 8.7     60.5 ± 2.1
CommNet            51.0 ± 14.1    45.4 ± 12.4    73.0 ± 0.7
Table 3: Win rates (%) on the combat task for different communication approaches and module
choices. Continuous communication consistently outperforms the other approaches. The fully-connected
baseline does worse than the independent model without communication. On the right we explore the
effect of varying the number of agents m and agent visibility. Even with 10 agents on each team,
communication clearly helps.
Table 3 shows the win rate of different module choices with various types of model. Among
different modules, the LSTM achieved the best performance. Continuous communication with
CommNet improved all module types. Relative to the independent controller, the fully-connected
model degraded performance, but discrete communication improved the LSTM module type. We
also explored several variations of the task: varying the number of agents in each team by setting
m = 3, 10, and increasing the visual range of agents to a 5 × 5 area. The results on those tasks are shown
on the right side of Table 3. Using the CommNet model consistently improves the win rate, even with
the greater environment observability of the 5 × 5 vision case.
4.4 bAbI Tasks
We apply our model to the bAbI [34] toy Q & A dataset, which consists of 20 tasks each requiring
a different kind of reasoning. The goal is to answer a question after reading a short story. We can
formulate this as a multi-agent task by giving each sentence of the story its own agent. Communication
among agents allows them to exchange useful information necessary to answer the question.
The input is {s1, s2, ..., sJ, q}, where sj is the j-th sentence of the story, and q is the question sentence.
We use the same encoder representation as [27] to convert them to vectors. The f (.) module consists
of a two-layer MLP with ReLU non-linearities. After K = 2 communication steps, we add the
final hidden states together and pass it through a softmax decoder layer to sample an output word y.
The model is trained in a supervised fashion using a cross-entropy loss between y and the correct
answer y*. The hidden layer size is set to 100 and weights are initialized from N(0, 0.2). We train
the model for 100 epochs with learning rate 0.003 and mini-batch size 32 with Adam optimizer [11]
(β1 = 0.9, β2 = 0.99, ε = 10⁻⁶). We used 10% of training data as a validation set to find optimal
hyper-parameters for the model.
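For reference, a plain NumPy sketch of the Adam [11] update with the hyper-parameters quoted above and the N(0, 0.2) initialization; this is a generic implementation, not the authors' code:

    import numpy as np

    class Adam:
        """Plain NumPy Adam with the hyper-parameters quoted above."""
        def __init__(self, params, lr=0.003, b1=0.9, b2=0.99, eps=1e-6):
            self.params, self.lr, self.b1, self.b2, self.eps = params, lr, b1, b2, eps
            self.m = [np.zeros_like(p) for p in params]
            self.v = [np.zeros_like(p) for p in params]
            self.t = 0

        def step(self, grads):
            self.t += 1
            for p, g, m, v in zip(self.params, grads, self.m, self.v):
                m[:] = self.b1 * m + (1 - self.b1) * g
                v[:] = self.b2 * v + (1 - self.b2) * g * g
                m_hat = m / (1 - self.b1 ** self.t)
                v_hat = v / (1 - self.b2 ** self.t)
                p -= self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

    W = np.random.default_rng(5).normal(0, 0.2, (100, 100))  # N(0, 0.2) init
    opt = Adam([W])
    opt.step([np.ones_like(W)])  # one update with a dummy gradient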
Results on the 10K version of the bAbI task are shown in Table 4, along with other baselines (see the
supplementary material for a detailed breakdown). Our model outperforms the LSTM baseline, but is
worse than the MemN2N model [27], which is specifically designed to solve reasoning over long
stories. However, it successfully solves most of the tasks, including ones that require information
sharing between two or more agents through communication.
                          Mean error (%)   Failed tasks (err. > 5%)
LSTM [27]                 36.4             16
MemN2N [27]               4.2              3
DMN+ [36]                 2.8              1
Independent (MLP module)  15.2             9
CommNet (MLP module)      7.1              3
Table 4: Experimental results on bAbI tasks.
5 Discussion and Future Work
We have introduced CommNet, a simple controller for MARL that is able to learn continuous
communication between a dynamically changing set of agents. Evaluations on four diverse tasks
clearly show the model outperforms models without communication, fully-connected models, and
models using discrete communication. Despite the simplicity of the broadcast channel, examination
of the traffic task reveals the model to have learned a sparse communication protocol that conveys
meaningful information between agents. Code for our model (and baselines) can be found at
http://cims.nyu.edu/~sainbar/commnet/.
One aspect of our model that we did not fully exploit is its ability to handle heterogeneous agent types
and we hope to explore this in future work. Furthermore, we believe the model will scale gracefully
to large numbers of agents, perhaps requiring more sophisticated connectivity structures; we also
leave this to future work.
Acknowledgements
The authors wish to thank Daniel Lee and Y-Lan Boureau for their advice and guidance. Rob Fergus
is grateful for the support of CIFAR.
References
[1] L. Busoniu, R. Babuska, and B. De Schutter. A comprehensive survey of multiagent reinforcement learning. Systems, Man, and Cybernetics, IEEE Transactions on, 38(2):156–172, 2008.
[2] Y. Cao, W. Yu, W. Ren, and G. Chen. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Transactions on Industrial Informatics, 1(9):427–438, 2013.
[3] R. H. Crites and A. G. Barto. Elevator group control using multiple reinforcement learning agents. Machine Learning, 33(2):235–262, 1998.
[4] J. N. Foerster, Y. M. Assael, N. de Freitas, and S. Whiteson. Learning to communicate to solve riddles with deep distributed recurrent Q-networks. arXiv, abs/1602.02672, 2016.
[5] D. Fox, W. Burgard, H. Kruppa, and S. Thrun. Probabilistic approach to collaborative multi-robot localization. Autonomous Robots, 8(3):325–344, 2000.
[6] C. L. Giles and K. C. Jim. Learning communication for multi-agent systems. In Innovative Concepts for Agent Based Systems, pages 377–390. Springer, 2002.
[7] C. Guestrin, D. Koller, and R. Parr. Multiagent planning with factored MDPs. In NIPS, 2001.
[8] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[9] L. Kaiser and I. Sutskever. Neural GPUs learn algorithms. In ICLR, 2016.
[10] T. Kasai, H. Tenmoto, and A. Kamiya. Learning of communication codes in multi-agent reinforcement learning problem. IEEE Conference on Soft Computing in Industrial Applications, pages 1–6, 2008.
[11] D. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[12] M. Lauer and M. A. Riedmiller. An algorithm for distributed reinforcement learning in cooperative multi-agent systems. In ICML, 2000.
[13] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. Journal of Machine Learning Research, 17(39):1–40, 2016.
[14] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated graph sequence neural networks. In ICLR, 2015.
[15] M. L. Littman. Value-function reinforcement learning in Markov games. Cognitive Systems Research, 2(1):55–66, 2001.
[16] C. J. Maddison, A. Huang, I. Sutskever, and D. Silver. Move evaluation in Go using deep convolutional neural networks. In ICLR, 2015.
[17] D. Maravall, J. De Lope, and R. Domínguez. Coordination of communication in robot teams by reinforcement learning. Robotics and Autonomous Systems, 61(7):661–666, 2013.
[18] M. Matarić. Reinforcement learning in the multi-robot domain. Autonomous Robots, 4(1):73–83, 1997.
[19] F. S. Melo, M. Spaan, and S. J. Witwicki. QueryPOMDP: POMDP-based communication in multiagent systems. In Multi-Agent Systems, pages 189–204, 2011.
[20] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] R. Olfati-Saber, J. Fax, and R. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215–233, 2007.
[22] J. Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. In AAAI, 1982.
[23] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Trans. Neural Networks, 20(1):61–80, 2009.
[24] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. Van Den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[25] P. Stone and M. Veloso. Towards collaborative and adversarial learning: A case study in robotic soccer. International Journal of Human Computer Studies, (48), 1998.
[26] S. Sukhbaatar, A. Szlam, G. Synnaeve, S. Chintala, and R. Fergus. MazeBase: A sandbox for learning from games. CoRR, abs/1511.07401, 2015.
[27] S. Sukhbaatar, A. Szlam, J. Weston, and R. Fergus. End-to-end memory networks. NIPS, 2015.
[28] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
[29] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, and R. Vicente. Multiagent cooperation and competition with deep reinforcement learning. arXiv:1511.08779, 2015.
[30] M. Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In ICML, 1993.
[31] T. Tieleman and G. Hinton. Lecture 6.5 - RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
[32] P. Varshavskaya, L. P. Kaelbling, and D. Rus. Distributed Autonomous Robotic Systems 8, chapter Efficient Distributed Reinforcement Learning through Agreement, pages 367–378. 2009.
[33] X. Wang and T. Sandholm. Reinforcement learning to play an optimal Nash equilibrium in team Markov games. In NIPS, pages 1571–1578, 2002.
[34] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. In ICLR, 2016.
[35] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. In Machine Learning, pages 229–256, 1992.
[36] C. Xiong, S. Merity, and R. Socher. Dynamic memory networks for visual and textual question answering. ICML, 2016.
[37] C. Zhang and V. Lesser. Coordinating multi-agent reinforcement learning with limited communication. In Proc. AAMAS, pages 1101–1108, 2013.
InfoGAN: Interpretable Representation Learning by
Information Maximizing Generative Adversarial Nets
Xi Chen†‡, Yan Duan†‡, Rein Houthooft†‡, John Schulman†‡, Ilya Sutskever‡, Pieter Abbeel†‡
† UC Berkeley, Department of Electrical Engineering and Computer Sciences
‡ OpenAI
Abstract
This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a
completely unsupervised manner. InfoGAN is a generative adversarial network
that also maximizes the mutual information between a small subset of the latent
variables and the observation. We derive a lower bound of the mutual information
objective that can be optimized efficiently. Specifically, InfoGAN successfully
disentangles writing styles from digit shapes on the MNIST dataset, pose from
lighting of 3D rendered images, and background digits from the central digit on
the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments
show that InfoGAN learns interpretable representations that are competitive with
representations learned by existing supervised methods. For an up-to-date version
of this paper, please see https://arxiv.org/abs/1606.03657.
1 Introduction
Unsupervised learning can be described as the general problem of extracting value from unlabelled
data which exists in vast quantities. A popular framework for unsupervised learning is that of
representation learning [1, 2], whose goal is to use unlabelled data to learn a representation that
exposes important semantic features as easily decodable factors. A method that can learn such
representations is likely to exist [2], and to be useful for many downstream tasks which include
classification, regression, visualization, and policy learning in reinforcement learning.
While unsupervised learning is ill-posed because the relevant downstream tasks are unknown at
training time, a disentangled representation, one which explicitly represents the salient attributes of a
data instance, should be helpful for the relevant but unknown tasks. For example, for a dataset of
faces, a useful disentangled representation may allocate a separate set of dimensions for each of the
following attributes: facial expression, eye color, hairstyle, presence or absence of eyeglasses, and the
identity of the corresponding person. A disentangled representation can be useful for natural tasks
that require knowledge of the salient attributes of the data, which include tasks like face recognition
and object recognition. It is not the case for unnatural supervised tasks, where the goal could be,
for example, to determine whether the number of red pixels in an image is even or odd. Thus, to be
useful, an unsupervised learning algorithm must in effect correctly guess the likely set of downstream
classification tasks without being directly exposed to them.
A significant fraction of unsupervised learning research is driven by generative modelling. It is
motivated by the belief that the ability to synthesize, or "create" the observed data entails some form
of understanding, and it is hoped that a good generative model will automatically learn a disentangled
representation, even though it is easy to construct perfect generative models with arbitrarily bad
representations. The most prominent generative models are the variational autoencoder (VAE) [3]
and the generative adversarial network (GAN) [4].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this paper, we present a simple modification to the generative adversarial network objective that
encourages it to learn interpretable and meaningful representations. We do so by maximizing the
mutual information between a fixed small subset of the GAN's noise variables and the observations,
which turns out to be relatively straightforward. Despite its simplicity, we found our method to be
surprisingly effective: it was able to discover highly semantic and meaningful hidden representations
on a number of image datasets: digits (MNIST), faces (CelebA), and house numbers (SVHN). The
quality of our unsupervised disentangled representation matches previous works that made use of
supervised label information [5-9]. These results suggest that generative modelling augmented with
a mutual information cost could be a fruitful approach for learning disentangled representations.
In the remainder of the paper, we begin with a review of the related work, noting the supervision that is
required by previous methods that learn disentangled representations. Then we review GANs, which
is the basis of InfoGAN. We describe how maximizing mutual information results in interpretable
representations and derive a simple and efficient algorithm for doing so. Finally, in the experiments
section, we first compare InfoGAN with prior approaches on relatively clean datasets and then
show that InfoGAN can learn interpretable representations on complex datasets where no previous
unsupervised approach is known to learn representations of comparable quality.
2 Related Work
There exists a large body of work on unsupervised representation learning. Early methods were based
on stacked (often denoising) autoencoders or restricted Boltzmann machines [10-13].
Another intriguing line of work consists of the ladder network [14], which has achieved spectacular
results on a semi-supervised variant of the MNIST dataset. More recently, a model based on the
VAE has achieved even better semi-supervised results on MNIST [15]. GANs [4] have been used by
Radford et al. [16] to learn an image representation that supports basic linear algebra on code space.
Lake et al. [17] have been able to learn representations using probabilistic inference over Bayesian
programs, which achieved convincing one-shot learning results on the OMNI dataset.
In addition, prior research attempted to learn disentangled representations using supervised data.
One class of such methods trains a subset of the representation to match the supplied label using
supervised learning: bilinear models [18] separate style and content; multi-view perceptron [19]
separate face identity and view point; and Yang et al. [20] developed a recurrent variant that generates
a sequence of latent factor transformations. Similarly, VAEs [5] and Adversarial Autoencoders [9]
were shown to learn representations in which class label is separated from other variations.
Recently several weakly supervised methods were developed to remove the need of explicitly
labeling variations. disBM [21] is a higher-order Boltzmann machine which learns a disentangled
representation by ?clamping? a part of the hidden units for a pair of data points that are known to
match in all but one factors of variation. DC-IGN [7] extends this ?clamping? idea to VAE and
successfully learns graphics codes that can represent pose and light in 3D rendered images. This line
of work yields impressive results, but they rely on a supervised grouping of the data that is generally
not available. Whitney et al. [8] proposed to alleviate the grouping requirement by learning from
consecutive frames of images and use temporal continuity as supervisory signal.
Unlike the cited prior works that strive to recover disentangled representations, InfoGAN requires
no supervision of any kind. To the best of our knowledge, the only other unsupervised method that
learns disentangled representations is hossRBM [13], a higher-order extension of the spike-and-slab
restricted Boltzmann machine that can disentangle emotion from identity on the Toronto Face Dataset
[22]. However, hossRBM can only disentangle discrete latent factors, and its computation cost grows
exponentially in the number of factors. InfoGAN can disentangle both discrete and continuous latent
factors, scale to complicated datasets, and typically requires no more training time than regular
GANs.
3 Background: Generative Adversarial Networks
Goodfellow et al. [4] introduced the Generative Adversarial Networks (GAN), a framework for
training deep generative models using a minimax game. The goal is to learn a generator distribution
PG (x) that matches the real data distribution Pdata (x). Instead of trying to explicitly assign probability
to every x in the data distribution, GAN learns a generator network G that generates samples from
the generator distribution P_G by transforming a noise variable z ∼ P_noise(z) into a sample G(z).
This generator is trained by playing against an adversarial discriminator network D that aims to
distinguish between samples from the true data distribution P_data and the generator's distribution P_G.
So for a given generator, the optimal discriminator is D(x) = Pdata (x)/(Pdata (x) + PG (x)). More
formally, the minimax game is given by the following expression:
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim P_{\mathrm{noise}}}[\log(1 - D(G(z)))] \qquad (1)$$
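For concreteness, here is a minimal sketch of this minimax value for one minibatch, assuming PyTorch and hypothetical generator `G` and discriminator `D` modules whose outputs are probabilities in (0, 1); it is an illustration, not the paper's implementation:

```python
import torch

def gan_value(D, G, x_real, z):
    # V(D, G) = E_{x~Pdata}[log D(x)] + E_{z~noise}[log(1 - D(G(z)))]
    eps = 1e-8  # numerical stability for the logs
    v_real = torch.log(D(x_real) + eps).mean()
    v_fake = torch.log(1.0 - D(G(z)) + eps).mean()
    return v_real + v_fake  # D ascends this value, G descends it
```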
4 Mutual Information for Inducing Latent Codes
The GAN formulation uses a simple factored continuous input noise vector z, while imposing no
restrictions on the manner in which the generator may use this noise. As a result, it is possible that
the noise will be used by the generator in a highly entangled way, causing the individual dimensions
of z to not correspond to semantic features of the data.
However, many domains naturally decompose into a set of semantically meaningful factors of
variation. For instance, when generating images from the MNIST dataset, it would be ideal if the
model automatically chose to allocate a discrete random variable to represent the numerical identity
of the digit (0-9), and chose to have two additional continuous variables that represent the digit's
angle and thickness of the digit's stroke. It would be useful if we could recover these concepts without
any supervision, by simply specifying that an MNIST digit is generated by a 1-of-10 variable and
two continuous variables.
In this paper, rather than using a single unstructured noise vector, we propose to decompose the input
noise vector into two parts: (i) z, which is treated as source of incompressible noise; (ii) c, which we
will call the latent code and will target the salient structured semantic features of the data distribution.
Mathematically, we denote the set of structured latent variables by c1, c2, . . . , cL. In its simplest
form, we may assume a factored distribution, given by $P(c_1, c_2, \ldots, c_L) = \prod_{i=1}^{L} P(c_i)$. For ease of
notation, we will use latent codes c to denote the concatenation of all latent variables ci.
We now propose a method for discovering these latent factors in an unsupervised way: we provide
the generator network with both the incompressible noise z and the latent code c, so the form of the
generator becomes G(z, c). However, in standard GAN, the generator is free to ignore the additional
latent code c by finding a solution satisfying PG (x|c) = PG (x). To cope with the problem of trivial
codes, we propose an information-theoretic regularization: there should be high mutual information
between latent codes c and generator distribution G(z, c). Thus I(c; G(z, c)) should be high.
In information theory, mutual information between X and Y, I(X; Y), measures the "amount of
information" learned from knowledge of random variable Y about the other random variable X. The
mutual information can be expressed as the difference of two entropy terms:
$$I(X; Y) = H(X) - H(X \mid Y) = H(Y) - H(Y \mid X) \qquad (2)$$
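As a quick numerical check of Eq. (2), the sketch below computes I(X; Y) for two hypothetical 2x2 joint distributions using numpy; the independent table gives 0, and the deterministic, invertible one gives the maximal value H(X) = log 2:

```python
import numpy as np

def mutual_information(p_xy):
    # I(X; Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) )
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log((p_xy / (p_x * p_y))[mask]))

p_independent = np.array([[0.25, 0.25], [0.25, 0.25]])
p_invertible = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(p_independent))  # 0.0
print(mutual_information(p_invertible))   # log 2 ~= 0.693
```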
This definition has an intuitive interpretation: I(X; Y ) is the reduction of uncertainty in X when Y
is observed. If X and Y are independent, then I(X; Y ) = 0, because knowing one variable reveals
nothing about the other; by contrast, if X and Y are related by a deterministic, invertible function,
then maximal mutual information is attained. This interpretation makes it easy to formulate a cost:
given any x ∼ P_G(x), we want P_G(c|x) to have a small entropy. In other words, the information in
the latent code c should not be lost in the generation process. Similar mutual information inspired
objectives have been considered before in the context of clustering [23-25]. Therefore, we propose
to solve the following information-regularized minimax game:
$$\min_G \max_D V_I(D, G) = V(D, G) - \lambda I(c; G(z, c)) \qquad (3)$$
5 Variational Mutual Information Maximization
In practice, the mutual information term I(c; G(z, c)) is hard to maximize directly as it requires
access to the posterior P (c|x). Fortunately we can obtain a lower bound of it by defining an auxiliary
distribution Q(c|x) to approximate P (c|x):
$$\begin{aligned}
I(c; G(z,c)) &= H(c) - H(c \mid G(z,c)) \\
&= \mathbb{E}_{x \sim G(z,c)}\big[\mathbb{E}_{c' \sim P(c|x)}[\log P(c' \mid x)]\big] + H(c) \\
&= \mathbb{E}_{x \sim G(z,c)}\big[\underbrace{D_{\mathrm{KL}}(P(\cdot \mid x) \,\|\, Q(\cdot \mid x))}_{\ge 0} + \mathbb{E}_{c' \sim P(c|x)}[\log Q(c' \mid x)]\big] + H(c) \\
&\ge \mathbb{E}_{x \sim G(z,c)}\big[\mathbb{E}_{c' \sim P(c|x)}[\log Q(c' \mid x)]\big] + H(c)
\end{aligned} \qquad (4)$$
This technique of lower bounding mutual information is known as Variational Information Maximization [26]. We note that the entropy of latent codes H(c) can be optimized as well since it has a
simple analytical form for common distributions. However, in this paper we opt for simplicity by
fixing the latent code distribution and we will treat H(c) as a constant. So far we have bypassed the
problem of having to compute the posterior P (c|x) explicitly via this lower bound but we still need
to be able to sample from the posterior in the inner expectation. Next we state a simple lemma, with
its proof deferred to Appendix 1, that removes the need to sample from the posterior.
Lemma 5.1 For random variables X, Y and function f(x, y) under suitable regularity conditions:
$\mathbb{E}_{x \sim X,\, y \sim Y|x}[f(x, y)] = \mathbb{E}_{x \sim X,\, y \sim Y|x,\, x' \sim X|y}[f(x', y)]$.
By using Lemma 5.1, we can define a variational lower bound, LI (G, Q), of the mutual information,
I(c; G(z, c)):
$$\begin{aligned}
L_I(G, Q) &= \mathbb{E}_{c \sim P(c),\, x \sim G(z,c)}[\log Q(c \mid x)] + H(c) \\
&= \mathbb{E}_{x \sim G(z,c)}\big[\mathbb{E}_{c' \sim P(c|x)}[\log Q(c' \mid x)]\big] + H(c) \\
&\le I(c; G(z, c))
\end{aligned} \qquad (5)$$
We note that LI (G, Q) is easy to approximate with Monte Carlo simulation. In particular, LI can
be maximized w.r.t. Q directly and w.r.t. G via the reparametrization trick. Hence LI (G, Q) can be
added to GAN's objectives with no change to GAN's training procedure and we call the resulting
algorithm Information Maximizing Generative Adversarial Networks (InfoGAN).
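The Monte Carlo estimate of this bound is straightforward; the following is a sketch (with hypothetical `G`, `Q`, and a uniform categorical prior, so that H(c) is a known constant), not the paper's exact implementation:

```python
import torch

def mi_lower_bound(G, Q, batch_size, n_cat, noise_dim, H_c):
    c = torch.randint(0, n_cat, (batch_size,))     # c ~ P(c), uniform categorical
    z = torch.randn(batch_size, noise_dim)         # incompressible noise z
    x = G(z, c)                                    # generated samples
    log_q = Q(x)                                   # (batch_size, n_cat) log-probabilities
    return log_q[torch.arange(batch_size), c].mean() + H_c   # E[log Q(c|x)] + H(c)
```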
Eq (4) shows that the lower bound becomes tight as the auxiliary distribution Q approaches the
true posterior distribution: $\mathbb{E}_x[D_{\mathrm{KL}}(P(\cdot \mid x) \,\|\, Q(\cdot \mid x))] \to 0$. In addition, we know that when the
variational lower bound attains its maximum LI (G, Q) = H(c) for discrete latent codes, the bound
becomes tight and the maximal mutual information is achieved. In Appendix, we note how InfoGAN
can be connected to the Wake-Sleep algorithm [27] to provide an alternative interpretation.
Hence, InfoGAN is defined as the following minimax game with a variational regularization of
mutual information and a hyperparameter λ:
$$\min_{G,Q} \max_D V_{\mathrm{InfoGAN}}(D, G, Q) = V(D, G) - \lambda L_I(G, Q) \qquad (6)$$
6 Implementation
In practice, we parametrize the auxiliary distribution Q as a neural network. In most experiments, Q
and D share all convolutional layers and there is one final fully connected layer to output parameters
for the conditional distribution Q(c|x), which means InfoGAN only adds a negligible computation
cost to GAN. We have also observed that LI (G, Q) always converges faster than normal GAN
objectives and hence InfoGAN essentially comes for free with GAN.
For categorical latent code ci , we use the natural choice of softmax nonlinearity to represent Q(ci |x).
For continuous latent code cj , there are more options depending on what is the true posterior P (cj |x).
In our experiments, we have found that simply treating Q(cj |x) as a factored Gaussian is sufficient.
Since GAN is known to be difficult to train, we design our experiments based on existing techniques
introduced by DC-GAN [16], which are enough to stabilize InfoGAN training and we did not have to
introduce new trick. Detailed experimental setup is described in Appendix. Even though InfoGAN
introduces an extra hyperparameter ?, it?s easy to tune and simply setting to 1 is sufficient for discrete
latent codes. When the latent code contains continuous variables, a smaller ? is typically used
to ensure that ?LI (G, Q), which now involves differential entropy, is on the same scale as GAN
objectives.
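A minimal sketch of this parametrization, assuming PyTorch; `trunk` stands in for the shared convolutional layers, and the sizes are illustrative rather than the paper's exact architecture:

```python
import torch.nn as nn

class DQHead(nn.Module):
    """Discriminator and Q sharing all convolutional layers, as described above."""
    def __init__(self, trunk, feat_dim, n_cat=10, n_cont=2):
        super().__init__()
        self.trunk = trunk                        # shared convolutional layers
        self.d_out = nn.Linear(feat_dim, 1)       # discriminator logit
        self.q_cat = nn.Linear(feat_dim, n_cat)   # softmax logits for a categorical code
        self.q_mu = nn.Linear(feat_dim, n_cont)   # mean of a factored Gaussian Q(c_j|x)

    def forward(self, x):
        h = self.trunk(x)
        return self.d_out(h), self.q_cat(h), self.q_mu(h)
```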
7 Experiments
The first goal of our experiments is to investigate if mutual information can be maximized efficiently.
The second goal is to evaluate if InfoGAN can learn disentangled and interpretable representations
by making use of the generator to vary only one latent factor at a time in order to assess if varying
such factor results in only one type of semantic variation in generated images. DC-IGN [7] also uses
this method to evaluate their learned representations on 3D image datasets, on which we also apply
InfoGAN to establish direct comparison.
7.1 Mutual Information Maximization
To evaluate whether the mutual information between latent codes c and generated images G(z, c) can be maximized efficiently with the proposed method, we train InfoGAN on the MNIST dataset with a uniform categorical distribution on latent codes c ∼ Cat(K = 10, p = 0.1). In Fig 1, the lower bound L_I(G, Q) is quickly maximized to H(c) ≈ 2.30, which means the bound (4) is tight and maximal mutual information is achieved.
As a baseline, we also train a regular GAN with an auxiliary distribution Q when the generator is not explicitly encouraged to maximize the mutual information with the latent codes. Since we use an expressive neural network to parametrize Q, we can assume that Q reasonably approximates the true posterior P(c|x) and hence there is little mutual information between latent codes and generated images in a regular GAN. We note that with a different neural network architecture, there might be a higher mutual information between latent codes and generated images even though we have not observed such a case in our experiments. This comparison is meant to demonstrate that in a regular GAN, there is no guarantee that the generator will make use of the latent codes.
[Figure 1: Lower bound L_I over training iterations (0 to 1000), plotted for InfoGAN and a regular GAN; the L_I axis runs from −0.5 to 2.5.]
7.2 Disentangled Representation
To disentangle digit shape from styles on MNIST, we choose to model the latent codes with one
categorical code, c1 ∼ Cat(K = 10, p = 0.1), which can model discontinuous variation in data, and
two continuous codes that can capture variations that are continuous in nature: c2, c3 ∼ Unif(−1, 1).
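Sampling these codes is simple; a sketch using PyTorch (the helper name is ours):

```python
import torch

def sample_mnist_codes(batch_size):
    # c1 ~ Cat(K=10, p=0.1), one-hot encoded; c2, c3 ~ Unif(-1, 1)
    c1 = torch.nn.functional.one_hot(torch.randint(0, 10, (batch_size,)), 10).float()
    c23 = torch.rand(batch_size, 2) * 2.0 - 1.0
    return torch.cat([c1, c23], dim=1)
```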
In Figure 2, we show that the discrete code c1 captures drastic change in shape. Changing categorical
code c1 switches between digits most of the time. In fact even if we just train InfoGAN without
any label, c1 can be used as a classifier that achieves 5% error rate in classifying MNIST digits by
matching each category in c1 to a digit type. In the second row of Figure 2a, we can observe a digit 7
is classified as a 9.
Continuous codes c2 , c3 capture continuous variations in style: c2 models rotation of digits and c3
controls the width. What is remarkable is that in both cases, the generator does not simply stretch
or rotate the digits but instead adjust other details like thickness or stroke style to make sure the
resulting images are natural looking. As a test to check whether the latent representation learned
by InfoGAN is generalizable, we manipulated the latent codes in an exaggerated way: instead of
plotting latent codes from −1 to 1, we plot it from −2 to 2, covering a wide region that the network
was never trained on and we still get meaningful generalization.
Next we evaluate InfoGAN on two datasets of 3D images: faces [28] and chairs [29], on which
DC-IGN was shown to learn highly interpretable graphics codes.
On the faces dataset, DC-IGN learns to represent latent factors as azimuth (pose), elevation, and
lighting as continuous latent variables by using supervision. Using the same dataset, we demonstrate
that InfoGAN learns a disentangled representation that recover azimuth (pose), elevation, and lighting
on the same dataset. In this experiment, we choose to model the latent codes with five continuous
codes, c_i ∼ Unif(−1, 1) with 1 ≤ i ≤ 5.
Since DC-IGN requires supervision, it was previously not possible to learn a latent code for a variation
that?s unlabeled and hence salient latent factors of variation cannot be discovered automatically from
data. By contrast, InfoGAN is able to discover such variation on its own: for instance, in Figure 3d a
(a) Varying c1 on InfoGAN (Digit type)
(b) Varying c1 on regular GAN (No clear meaning)
(c) Varying c2 from −2 to 2 on InfoGAN (Rotation)
(d) Varying c3 from −2 to 2 on InfoGAN (Width)
Figure 2: Manipulating latent codes on MNIST: In all figures of latent code manipulation, we will
use the convention that in each one latent code varies from left to right while the other latent codes
and noise are fixed. The different rows correspond to different random samples of fixed latent codes
and noise. For instance, in (a), one column contains five samples from the same category in c1 , and a
row shows the generated images for 10 possible categories in c1 with other noise fixed. In (a), each
category in c1 largely corresponds to one digit type; in (b), varying c1 on a GAN trained without
information regularization results in non-interpretable variations; in (c), a small value of c2 denotes
left leaning digit whereas a high value corresponds to right leaning digit; in (d), c3 smoothly controls
the width. We reorder (a) for visualization purpose, as the categorical code is inherently unordered.
latent code that smoothly changes a face from wide to narrow is learned even though this variation
was neither explicitly generated or labeled in prior work.
On the chairs dataset, DC-IGN can learn a continuous code that represents rotation. InfoGAN again is
able to learn the same concept as a continuous code (Figure 4a) and we show in addition that InfoGAN
is also able to continuously interpolate between similar chair types of different widths using a single
continuous code (Figure 4b). In this experiment, we choose to model the latent factors with four
categorical codes, c1,2,3,4 ∼ Cat(K = 20, p = 0.05) and one continuous code c5 ∼ Unif(−1, 1).
Next we evaluate InfoGAN on the Street View House Number (SVHN) dataset, which is significantly
more challenging to learn an interpretable representation because it is noisy, containing images of
variable-resolution and distracting digits, and it does not have multiple variations of the same object.
In this experiment, we make use of four 10-dimensional categorical variables and two uniform
continuous variables as latent codes. We show two of the learned latent factors in Figure 5.
Finally we show in Figure 6 that InfoGAN is able to learn many visual concepts on another challenging
dataset: CelebA [30], which includes 200, 000 celebrity images with large pose variations and
background clutter. In this dataset, we model the latent variation as 10 uniform categorical variables,
each of dimension 10. Surprisingly, even in this complicated dataset, InfoGAN can recover azimuth as
in 3D images even though in this dataset no single face appears in multiple pose positions. Moreover
InfoGAN can disentangle other highly semantic variations like presence or absence of glasses,
hairstyles and emotion, demonstrating a level of visual understanding is acquired.
(a) Azimuth (pose)
(b) Elevation
(c) Lighting
(d) Wide or Narrow
Figure 3: Manipulating latent codes on 3D Faces: We show the effect of the learned continuous
latent factors on the outputs as their values vary from −1 to 1. In (a), we show that a continuous latent
code consistently captures the azimuth of the face across different shapes; in (b), the continuous code
captures elevation; in (c), the continuous code captures the orientation of lighting; and in (d), the
continuous code learns to interpolate between wide and narrow faces while preserving other visual
features. For each factor, we present the representation that most resembles prior results [7] out of 5
random runs to provide direct comparison.
(a) Rotation
(b) Width
Figure 4: Manipulating latent codes on 3D Chairs: In (a), the continuous code captures the pose
of the chair while preserving its shape, although the learned pose mapping varies across different
types; in (b), the continuous code can alternatively learn to capture the widths of different chair types,
and smoothly interpolate between them. For each factor, we present the representation that most
resembles prior results [7] out of 5 random runs to provide direct comparison.
8 Conclusion
This paper introduces a representation learning algorithm called Information Maximizing Generative
Adversarial Networks (InfoGAN). In contrast to previous approaches, which require supervision,
InfoGAN is completely unsupervised and learns interpretable and disentangled representations on
challenging datasets. In addition, InfoGAN adds only negligible computation cost on top of GAN and
is easy to train. The core idea of using mutual information to induce representation can be applied to
other methods like VAE [3], which is a promising area of future work. Other possible extensions to
this work include: learning hierarchical latent representations, improving semi-supervised learning
with better codes [31], and using InfoGAN as a high-dimensional data discovery tool.
(a) Continuous variation: Lighting
(b) Discrete variation: Plate Context
Figure 5: Manipulating latent codes on SVHN: In (a), we show that one of the continuous codes
captures variation in lighting even though in the dataset each digit is only present with one lighting
condition; In (b), one of the categorical codes is shown to control the context of central digit: for
example in the 2nd column, a digit 9 is (partially) present on the right whereas in 3rd column, a digit
0 is present, which indicates that InfoGAN has learned to separate central digit from its context.
(b) Presence or absence of glasses
(a) Azimuth (pose)
(c) Hair style
(d) Emotion
Figure 6: Manipulating latent codes on CelebA: (a) shows that a categorical code can capture the
azimuth of face by discretizing this variation of continuous nature; in (b) a subset of the categorical
code is devoted to signal the presence of glasses; (c) shows variation in hair style, roughly ordered
from less hair to more hair; (d) shows change in emotion, roughly ordered from stern to happy.
Acknowledgements
We thank the anonymous reviewers. This research was funded in part by ONR through a PECASE
award. Xi Chen was also supported by a Berkeley AI Research lab Fellowship. Yan Duan was also
supported by a Berkeley AI Research lab Fellowship and a Huawei Fellowship. Rein Houthooft was
supported by a Ph.D. Fellowship of the Research Foundation - Flanders (FWO).
References
[1] Y. Bengio, "Learning deep architectures for AI," Foundations and Trends in Machine Learning, 2009.
[2] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: A review and new perspectives," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 35, no. 8, 2013.
[3] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in NIPS, 2014, pp. 2672-2680.
[5] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling, "Semi-supervised learning with deep generative models," in NIPS, 2014, pp. 3581-3589.
[6] B. Cheung, J. A. Livezey, A. K. Bansal, and B. A. Olshausen, "Discovering hidden factors of variation in deep networks," arXiv preprint arXiv:1412.6583, 2014.
[7] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum, "Deep convolutional inverse graphics network," in NIPS, 2015, pp. 2530-2538.
[8] W. F. Whitney, M. Chang, T. Kulkarni, and J. B. Tenenbaum, "Understanding visual concepts with continuation learning," arXiv preprint arXiv:1602.06822, 2016.
[9] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow, "Adversarial autoencoders," arXiv preprint arXiv:1511.05644, 2015.
[10] G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural Comput., vol. 18, no. 7, pp. 1527-1554, 2006.
[11] G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," Science, vol. 313, no. 5786, pp. 504-507, 2006.
[12] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in ICLR, 2008, pp. 1096-1103.
[13] G. Desjardins, A. Courville, and Y. Bengio, "Disentangling factors of variation via generative entangling," arXiv preprint arXiv:1210.5474, 2012.
[14] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko, "Semi-supervised learning with ladder networks," in NIPS, 2015, pp. 3532-3540.
[15] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther, "Improving semi-supervised learning with auxiliary deep generative models," in ICML, 2016.
[16] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[17] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum, "Human-level concept learning through probabilistic program induction," Science, vol. 350, no. 6266, pp. 1332-1338, 2015.
[18] J. B. Tenenbaum and W. T. Freeman, "Separating style and content with bilinear models," Neural Computation, vol. 12, no. 6, pp. 1247-1283, 2000.
[19] Z. Zhu, P. Luo, X. Wang, and X. Tang, "Multi-view perceptron: A deep model for learning face identity and view representations," in NIPS, 2014, pp. 217-225.
[20] J. Yang, S. E. Reed, M.-H. Yang, and H. Lee, "Weakly-supervised disentangling with recurrent transformations for 3D view synthesis," in NIPS, 2015, pp. 1099-1107.
[21] S. Reed, K. Sohn, Y. Zhang, and H. Lee, "Learning to disentangle factors of variation with manifold interaction," in ICML, 2014, pp. 1431-1439.
[22] J. Susskind, A. Anderson, and G. E. Hinton, "The Toronto face dataset," Tech. Rep., 2010.
[23] J. S. Bridle, A. J. Heading, and D. J. MacKay, "Unsupervised classifiers, mutual information and 'phantom targets'," in NIPS, 1992.
[24] D. Barber and F. V. Agakov, "Kernelized infomax clustering," in NIPS, 2005, pp. 17-24.
[25] A. Krause, P. Perona, and R. G. Gomes, "Discriminative clustering by regularized information maximization," in NIPS, 2010, pp. 775-783.
[26] D. Barber and F. V. Agakov, "The IM algorithm: A variational approach to information maximization," in NIPS, 2003.
[27] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal, "The 'wake-sleep' algorithm for unsupervised neural networks," Science, vol. 268, no. 5214, pp. 1158-1161, 1995.
[28] P. Paysan, R. Knothe, B. Amberg, S. Romdhani, and T. Vetter, "A 3D face model for pose and illumination invariant face recognition," in AVSS, 2009, pp. 296-301.
[29] M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic, "Seeing 3D chairs: Exemplar part-based 2D-3D alignment using a large dataset of CAD models," in CVPR, 2014, pp. 3762-3769.
[30] Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep learning face attributes in the wild," in ICCV, 2015.
[31] J. T. Springenberg, "Unsupervised and semi-supervised learning with categorical generative adversarial networks," arXiv preprint arXiv:1511.06390, 2015.
A Trellis-Structured Neural Network*
Thomas Petsche† and Bradley W. Dickinson
Princeton University, Department of Electrical Engineering
Princeton, NJ 08544
Abstract
We have developed a neural network which consists of cooperatively interconnected Grossberg on-center off-surround subnets and which can be used to
optimize a function related to the log likelihood function for decoding convolutional codes or more general FIR signal deconvolution problems. Connections in
the network are confined to neighboring subnets, and it is representative of the
types of networks which lend themselves to VLSI implementation. Analytical and
experimental results for convergence and stability of the network have been found.
The structure of the network can be used for distributed representation of data
items while allowing for fault tolerance and replacement of faulty units.
1 Introduction
In order to study the behavior of locally interconnected networks, we have focused
on a class of "trellis-structured" networks which are similar in structure to multilayer
networks [5] but use symmetric connections and allow every neuron to be an output.
We are studying such? locally interconnected neural networks because they have the
potential to be of great practical interest. Globally interconnected networks, e.g.,
Hopfield networks [3], are difficult to implement in VLSI because they require many
long wires. Locally connected networks, however, can be designed to use fewer and
shorter wires.
In this paper, we will describe a subclass of trellis-structured networks which optimize a function that, near the global minimum, has the form of the log likelihood
function for decoding convolutional codes or more general finite impulse response signals. Convolutional codes, defined in section 2, provide an alternative representation
scheme which can avoid the need for global connections. Our network, described in
section 3, can perform maximum likelihood sequence estimation of convolutional coded
sequences in the presence of noise. The performance of the system is optimal for low
error rates.
The specific application for this network was inspired by a signal decomposition
network described by Hopfield and Tank [6]. However, in our network, there is an
emphasis on local interconnections and a more complex neural model, the Grossberg
on-center off-surround network [2], is used. A modified form of the Gorssberg model
is defined in section 4. Section 5 presents the main theoretical results of this paper.
Although the deconvolution network is simply a set of cooperatively interconnected
*Supported by the Office of Naval Research through grant N00014-83-K-0577 and by the National
Science Foundation through grant ECS84-05460.
†Permanent address: Siemens Corporate Research and Support, Inc., 105 College Road East,
Princeton, NJ 08540.
© American Institute of Physics 1988
on-center off-surround subnetworks, and absolute stability for the individual subnetworks has been proven [1], the cooperative interconnections between these subnets
make a similar proof difficult and unlikely. We have been able, however, to prove
equiasymptotic stability in the Lyapunov sense for this network given that the gain
of the nonlinearity in each neuron is large. Section 6 will describe simulations of the
network that were done to confirm the stability results.
2 Convolutional Codes and MLSE
In an error correcting code, an input sequence is transformed from a b-dimensional
input space to an M-dimensional output space, where M ≥ b for error correction
and/or detection. In general, for the b-bit input vector U = (u_1, . . . , u_b) and the M-bit
output vector V = (v_1, . . . , v_M), we can write V = F(u_1, . . . , u_b). A convolutional
code, however, is designed so that relatively short subsequences of the input vector
are used to determine subsequences of the output vector. For example, for a rate 1/3
convolutional code (where M ≈ 3b), with input subsequences of length 3, we can write
the output, V = (v_1, . . . , v_b) for v_i = (v_{i,1}, v_{i,2}, v_{i,3}), of the encoder as a convolution
of the input vector U = (u_1, . . . , u_b, 0, 0) and three generator sequences
g_0 = (1 1 1),   g_1 = (1 1 0),   g_2 = (0 1 1).
This convolution can be written, using modulo-2 addition, as
$$v_i = \sum_{k=\max(1,\, i-2)}^{i} u_k\, g_{i-k} \pmod 2 \qquad (1)$$
In this example, each 3-bit output subsequence, Vi, of V depends only on three
bits of the input vector, i.e., v_i = f(u_{i-2}, u_{i-1}, u_i). In general, for a rate 1/n code, the
constraint length, K, is the number of bits of the input vector that uniquely determine
each n-bit output subsequence. In the absence of noise, any subsequences in the
input vector separated by more than K bits (i.e., that do not overlap) will produce
subsequences in the output vector that are independent of each other.
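To make the encoding concrete, here is a short sketch of this rate 1/3, K = 3 encoder (our own illustration, in Python):

```python
G = [(1, 1, 1), (1, 1, 0), (0, 1, 1)]  # generator sequences g0, g1, g2

def encode(u):
    """Rate 1/3, K = 3 convolutional encoder of equation 1 (0-indexed)."""
    u = list(u) + [0, 0]                   # two zeros to "zero-out" the code
    v = []
    for i in range(len(u)):
        bits = []
        for n in range(3):                 # the three output bits v_{i,n}
            b = 0
            for k in range(max(0, i - 2), i + 1):
                b ^= u[k] & G[i - k][n]    # modulo-2 convolution with g_{i-k}
            bits.append(b)
        v.append(tuple(bits))
    return v                               # (b+2) output triples, 3(b+2) bits
```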
If we view a convolutional code as a special case of block coding, this rate 1/3,
K = 3 code converts a b-bit input word into a codeword of length 3(b + 2) where
the 2 is added by introducing two zeros at the end of every input to "zero-out" the
code. Equivalently, the coder can be viewed as embedding 2^b memories into a 2^{3(b+2)}-
dimensional space. The minimum distance between valid memories or codewords in
this space is the free distance of the code, which in this example is 7. This implies
that the code is able to correct a minimum of three errors in the received signal.
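A quick check of this free-distance claim, using the hypothetical `encode` sketch above to enumerate short nonzero inputs:

```python
from itertools import product

def codeword_weight(v):
    return sum(sum(triple) for triple in v)

# for a linear code, the minimum weight over nonzero inputs is the free distance
d_free = min(codeword_weight(encode(u)) for u in product((0, 1), repeat=6) if any(u))
print(d_free)  # 7 for this rate 1/3, K = 3 code
```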
For a convolutional code with constraint length K, the encoder can be viewed as
a finite state machine whose state at time i is determined by the K - 1 input bits,
u_{i-K+1}, . . . , u_{i-1}. The encoder can also be represented as a trellis graph such as the one
shown in figure 1 for a K = 3, rate 1/3 code. In this example, since the constraint
length is three, the two bits Ui-2 and Ui-I determine which of four possible states the
encoder is in at time i . In the trellis graph, there is a set of four nodes arranged in a
vertical column, which we call a stage, for each time step i. Each node is labeled with
the associated values of Ui-2 and Ui-1. In general, for a rate l/n code, each stage of
the trellis graph contains 2K -1 nodes, representing an equal number of possible states.
A trellis graph which contains S stages therefore fully describes the operation of the
encoder for time steps 1 through S. The graph is read from left to right and the upper
edge leaving the right side of a node in stage i is followed if u_i is a zero; the lower edge
[Figure 1: Part of the trellis-code representation for a rate 1/3, K = 3 convolutional code. Stages i−2 through i+2 are shown, with four states per stage and branch labels such as 000, 111, 101, and 010.]
if u_i is a one. The label on the edge determined by u_i is v_i, the output of the encoder
given by equation 1 for the subsequence u_{i-2}, u_{i-1}, u_i.
Decoding a noisy sequence that is the output of a convolutional coder plus noise
is typically done using a maximum likelihood sequence estimation (MLSE) decoder
which is designed to accept as input a possibly noisy convolutional coded sequence, R,
and produce as output the maximum likelihood estimate, V̂, of the original sequence,
V. If the set of possible n(b+2)-bit encoder output vectors is {X_m : m = 1, . . . , 2^{n(b+2)}}
and x_{m,i} is the ith n-bit subsequence of X_m and r_i is the ith n-bit subsequence of R
then
$$\hat{V} = \arg\max_{X_m} \prod_{i=1}^{b} P(r_i \mid x_{m,i}) \qquad (2)$$
That is, the decoder chooses the X_m that maximizes the conditional probability, given
X_m, of the received sequence.
A binary symmetric channel (BSC) is an often used transmission channel model in
which the decoder produces output sequences formed from an alphabet containing two
symbols and it is assumed that the probability of either of the symbols being affected
by noise so that the other symbol is received is the same for both symbols. In the
case of a BSC, the log of the conditional probability, P(r_i | x_{m,i}), is a linear function
of the Hamming distance between r_i and x_{m,i}, so that maximizing the right side of
equation 2 is equivalent to choosing the X_m that has the most bits in common with
R. Therefore, equation 2 can be rewritten as
$$\hat{V} = \arg\max_{X_m} \sum_{i=1}^{b} \sum_{l=1}^{n} f_{r_{i,l}}(x_{m,i,l}) \qquad (3)$$
where x_{m,i,l} is the lth bit of the ith subsequence of X_m and f_a(b) is the indicator
function: f_a(b) = 1 if and only if a equals b.
For the general case, maximum likelihood sequence estimation is very expensive
since the number of possible input sequences is exponential in b. The Viterbi algorithm [7], fortunately, is able to take advantage of the structure of convolutional codes
and their trellis graph representations to reduce the complexity of the decoder so that
595
it is only exponential in K (in general K ≪ b). An optimum version of the Viterbi algorithm examines all b stages in the trellis graph, but a more practical and very nearly
optimum version typically examines approximately 5K stages, beginning at stage i,
before making a decision about u_i.
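A compact sketch of such a decoder for this rate 1/3, K = 3 code, scoring branches by matched bits as in equation 3 (our own illustration; `r` is a list of received 3-bit tuples, and the encoder is assumed to start in the all-zero state):

```python
G = [(1, 1, 1), (1, 1, 0), (0, 1, 1)]  # generator sequences g0, g1, g2

def branch_output(u2, u1, u0):
    # v_n = u2*g2_n XOR u1*g1_n XOR u0*g0_n, with u0 the newest input bit
    return tuple((u2 * G[2][n]) ^ (u1 * G[1][n]) ^ (u0 * G[0][n]) for n in range(3))

def viterbi(r):
    # one survivor per state (u_{i-1}, u_i): (matched-bit score, input estimate)
    survivors = {(0, 0): (0, [])}
    for ri in r:
        nxt = {}
        for (u2, u1), (score, path) in survivors.items():
            for u in (0, 1):
                s = score + sum(a == b for a, b in zip(branch_output(u2, u1, u), ri))
                if (u1, u) not in nxt or s > nxt[(u1, u)][0]:
                    nxt[(u1, u)] = (s, path + [u])
        survivors = nxt
    return max(survivors.values(), key=lambda t: t[0])[1]
```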
3 A Network for MLSE Decoding
The structure of the network that we have defined strongly reflects the structure of a
trellis graph. The network usually consists of 5K subnetworks, each containing 2^{K-1}
neurons. Each subnetwork corresponds to a stage in the trellis graph and each neuron
to a state. Each stage is implemented as an "on-center off-surround" competitive
network [2], described in more detail in the next section, which produces as output a
contrast enhanced version of the input. This contrast enhancement creates a "winner
take all" situation in which, under normal circumstances, only one neuron in each
stage -the neuron receiving the input with greatest magnitude - will be on. The
activation pattern of the network after it reaches equilibrium indicates the decoded
sequence as a sequence of "on" neurons in the network. If the j-th neuron in subnet i,
N_{i,j}, is on, then the node representing state j in stage i lies on the network's estimate
of the most likely path.
For a rate 1/n code, there is a symmetric cooperative connection between neurons
N_{i,j} and N_{i+1,k} if there is an edge between the corresponding nodes in the trellis
graph. If (x_{i,j,k,1}, . . . , x_{i,j,k,n}) are the encoder output bits for the transition between
these two nodes and (r_{i,1}, . . . , r_{i,n}) are the received bits, then the connection weight
for the symmetric cooperative connection between N_{i,j} and N_{i+1,k} is
$$m_{i,j,k} = \frac{1}{n} \sum_{l=1}^{n} f_{r_{i,l}}(x_{i,j,k,l}) \qquad (4)$$
If there is no edge between the nodes, then m_{i,j,k} = 0.
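In code, each weight is just the fraction of matched bits on the corresponding branch; a minimal sketch (the function name is ours):

```python
def edge_weight(branch_bits, received_bits):
    """Equation 4: fraction of the n received bits matching the branch bits."""
    n = len(received_bits)
    return sum(a == b for a, b in zip(branch_bits, received_bits)) / n
```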
Intuitively, it is easiest to understand the action of the entire network by examining one stage. Consider the nodes in stage i of the trellis graph and assume that
the conditional probabilities of the nodes in stages i - 1 and i + 1 are known. (All
probabilities are conditional on the received sequence.) Then the conditional probability of each node in stage i is simply the sum of the probabilities of each node in
stages i - 1 and i + 1 weighted by the conditional transition probabilities. If we look
at stage i in the network, and let the outputs of the neighboring stages i - 1 and
i + 1 be fixed with the output of each neuron corresponding to the "likelihood" of
the corresponding state at that stage, then the final outputs of the neurons N_{i,j} will
correspond to the "likelihood" of each of the corresponding states. At equilibrium, the
neuron corresponding to the most likely state will have the largest output.
4 The Neural Model
The "on-center off-surround" network[2] is used to model each stage in our network.
This model allows the output of each neuron to take on a range of values, in this
case between zero and one, and is designed to support contrast enhancement and
competition between neurons. The model also guarantees that the final output of
each neuron is a function of the relative intensity of its input as a fraction of the total
input provided to the network.
Using the "on-center off-surround" model for each stage and the interconnection
weights, mi,j,k, defined in equation 4, the differential equation that governs the instantaneous activity of the neurons in our deconvolution network with S stages and
N states in each stage can be written as
$$\dot{u}_{i,j} = -A u_{i,j} + (B - u_{i,j})\Big(f(u_{i,j}) + \sum_{k=1}^{N}\big[m_{i-1,k,j}\, f(u_{i-1,k}) + m_{i,j,k}\, f(u_{i+1,k})\big]\Big) - (C + u_{i,j}) \sum_{k \ne j}\Big(f(u_{i,k}) + \sum_{l=1}^{N}\big[m_{i-1,l,k}\, f(u_{i-1,l}) + m_{i,k,l}\, f(u_{i+1,l})\big]\Big) \qquad (5)$$
where f(x) = (1 + e^{−λx})^{−1}, λ is the gain of the nonlinearity, and A, B, and C are constants.
For the analysis to be presented in section 5, we note that equation 5 can be
rewritten more compactly in a notation that is similar to the equation for additive
analog neurons given in [4]:
$$\dot{u}_{i,j} = -A u_{i,j} - \sum_{k=1}^{S}\sum_{l=1}^{N}\big(u_{i,j}\, S_{i,j,k,l}\, f(u_{k,l}) - T_{i,j,k,l}\, f(u_{k,l})\big) \qquad (6)$$
where, for 1 ≤ l ≤ N,
$$T_{i,j,i,j} = B, \qquad T_{i,j,i,l} = -C \ \ \forall\, l \ne j, \qquad T_{i,j,i-1,l} = B m_{i-1,l,j} - C \sum_{q \ne j} m_{i-1,l,q}, \qquad T_{i,j,i+1,l} = B m_{i,j,l} - C \sum_{q \ne j} m_{i,q,l},$$
$$S_{i,j,i,l} = 1, \qquad S_{i,j,i-1,l} = \sum_{q} m_{i-1,l,q}, \qquad S_{i,j,i+1,l} = \sum_{q} m_{i,q,l}, \qquad S_{i,j,k,l} = 0 \ \ \forall\, k \notin \{i-1,\, i,\, i+1\} \qquad (7)$$
To eliminate the need for global interconnections within a stage, we can add two
summing elements to calculate
$$X_i = \sum_{j=1}^{N} f(u_{i,j}) \qquad\text{and}\qquad J_i = \sum_{j=1}^{N}\sum_{k=1}^{N}\big[m_{i-1,k,j}\, f(u_{i-1,k}) + m_{i,j,k}\, f(u_{i+1,k})\big] \qquad (8)$$
Using these two sums allows us to rewrite equation 5 as
$$\dot{u}_{i,j} = -A u_{i,j} + (B + C)\Big(f(u_{i,j}) + \sum_{k=1}^{N}\big[m_{i-1,k,j}\, f(u_{i-1,k}) + m_{i,j,k}\, f(u_{i+1,k})\big]\Big) - (u_{i,j} + C)(X_i + J_i) \qquad (9)$$
This form provides a more compact design for the network that is particularly suited
to implementation as a digital filter or for use in simulations since it greatly reduces
the calculations required.
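An Euler-discretized update of this form, as a sketch in numpy (our own illustration, assuming u has shape (S, N) and m has shape (S−1, N, N) with m[i, j, k] the weight between N_{i,j} and N_{i+1,k}; boundary stages are updated here too, whereas the simulations below hold them fixed):

```python
import numpy as np

def f(u, lam):
    return 1.0 / (1.0 + np.exp(-lam * u))          # neuron nonlinearity

def euler_step(u, m, A, B, C, lam, T=0.02):
    V = f(u, lam)
    X = V.sum(axis=1, keepdims=True)               # first sum of equation 8
    I = np.zeros_like(u)                           # cooperative input to each neuron
    I[1:] += np.einsum('ikj,ik->ij', m, V[:-1])    # from stage i-1
    I[:-1] += np.einsum('ijk,ik->ij', m, V[1:])    # from stage i+1
    J = I.sum(axis=1, keepdims=True)               # second sum of equation 8
    du = -A * u + (B + C) * (V + I) - (u + C) * (X + J)
    return u + T * du                              # equation 9, forward Euler
```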
5 Stability of the Network
The end of section 3 described the desired operation of a single stage, given that the
outputs of the neighboring stages are fixed. It is possible to show that in this situation
a single stage is stable. To do this, fix f(u_{k,l}) for k ∈ {i − 1, i + 1} so that equation 6
can be written in the form originally proposed by Grossberg [2]:
$$\dot{u}_{i,j} = -A u_{i,j} + (B - u_{i,j})\big(I_{i,j} + f(u_{i,j})\big) - (u_{i,j} + C)\Big(\sum_{k=1}^{N} I_{i,k} + \sum_{k=1}^{N} f(u_{i,k})\Big) \qquad (10)$$
where $I_{i,j} = \sum_{k=1}^{N} \big[m_{i-1,k,j}\, f(u_{i-1,k}) + m_{i,j,k}\, f(u_{i+1,k})\big]$.
Equation 10 is a special case of the more general nonlinear system
$$\dot{x}_i = a_i(x_i)\Big(b_i(x_i) - \sum_{k=1}^{n} c_{i,k}\, d_k(x_k)\Big) \qquad (11)$$
where: (1) a_i(x_i) is continuous and a_i(x_i) > 0 for x_i ≥ 0; (2) b_i(x_i) is continuous
for x_i ≥ 0; (3) c_{i,k} = c_{k,i}; and (4) d_i'(x_i) ≥ 0 for all x_i ∈ (−∞, ∞). Cohen and
Grossberg [1] showed that such a system has a global Lyapunov function:
$$V = -\sum_{i} \int_0^{x_i} b_i(\xi)\, d_i'(\xi)\, d\xi + \frac{1}{2}\sum_{j,k} c_{j,k}\, d_j(x_j)\, d_k(x_k) \qquad (12)$$
and that, therefore, such a system is equiasymptotically stable for all constants and
functions satisfying the four constraints above. In our case, this means that a single
stage has the desired behavior when the neighboring stages are fixed. If we take the
output of each neuron to correspond to the likelihood of the corresponding state then,
if the two neighboring stages are fixed, stage i will converge to an equilibrium point
where the neuron receiving the largest input will be on and the others will be off, just
as it should according to section 2.
It does not seem possible to use the Cohen-Grossberg stability proof for the entire
system in equation 5. In fact, Cohen and Grossberg note that networks which allow
cooperative interactions define systems for which no stability proof exists [1].
Since an exact stability proof seems unlikely, we have instead shown that in the
limit as the gain, λ, of the nonlinearity gets large the system is asymptotically stable.
Using the notation in [4], define $V_i = f(u_i)$ and a normalized nonlinearity $\bar{f}(\cdot)$ such
that $\bar{f}^{-1}(V_i) = \lambda u_i$. Then we can define an energy function for the deconvolution
network to be
$$E = -\frac{1}{2}\sum_{i,j,k,l} T_{i,j,k,l}\, V_{i,j}\, V_{k,l} + \sum_{i,j} \frac{1}{\lambda}\Big(A + \sum_{k,l} S_{i,j,k,l}\, V_{k,l}\Big) \int_{1/2}^{V_{i,j}} \bar{f}^{-1}(\zeta)\, d\zeta \qquad (13)$$
The time derivative of E is
$$\dot{E} = -\sum_{i,j} \frac{dV_{i,j}}{dt}\Big(-A u_{i,j} - u_{i,j}\sum_{k,l} S_{i,j,k,l}\, V_{k,l} + \sum_{k,l} T_{i,j,k,l}\, V_{k,l} - \frac{1}{\lambda}\sum_{k,l} S_{i,j,k,l} \int_{1/2}^{V_{k,l}} \bar{f}^{-1}(\zeta)\, d\zeta\Big) \qquad (14)$$
It is difficult to prove that Ė is nonpositive because of the last term in the parentheses.
However, for large gain, this term can be shown to have a negligible effect on the
derivative.
It can be shown that for f(u) = (1 + e^{−λu})^{−1}, $\int_{1/2}^{V} \bar{f}^{-1}(\zeta)\, d\zeta$ is bounded above
by log(2). In this deconvolution network, there are no connections between neurons
unless they are in the same or neighboring stages, i.e., S_{i,j,k,l} = 0 for |i − k| > 1 with
k restricted so that 0 ≤ k ≤ S, so there are no more than 3S non-zero terms in the
problematical summation. Therefore, we can write that
    lim_{λ→∞} (1/λ) Σ_{k,l} S_{i,j,k,l} ∫_{1/2}^{V_{k,l}} f̃^{−1}(ζ) dζ = 0
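One way to verify the log(2) bound (our own check, using the logistic inverse f̃^{−1}(ζ) = ln(ζ/(1−ζ)) implied by the sigmoid above, and natural logarithms):

```latex
\int_{1/2}^{V} \tilde{f}^{-1}(\zeta)\,d\zeta
  = \Big[\,\zeta\ln\zeta + (1-\zeta)\ln(1-\zeta)\,\Big]_{1/2}^{V}
  = V\ln V + (1-V)\ln(1-V) + \ln 2 \;\le\; \ln 2
```

since the entropy-like term V ln V + (1 − V) ln(1 − V) is nonpositive for V ∈ [0, 1], with equality only at V ∈ {0, 1}.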
Then, in the limit as λ → ∞, the terms in parentheses in equation 14 converge to u̇_{i,j} of equation 6, so that lim_{λ→∞} Ė = −Σ_{i,j} (dV_{i,j}/dt) u̇_{i,j}. Using the chain rule, we can rewrite this as

    lim_{λ→∞} Ė = −Σ_{i,j} (dV_{i,j}/dt)² (d/dV_{i,j}) f̃^{−1}(V_{i,j}).
It can also be shown that, if f(·) is a monotonically increasing function, then (d/dV) f̃^{−1}(V) > 0 for all V. This implies that for all u = (u_{1,1}, . . . , u_{N,S}), lim_{λ→∞} Ė ≤ 0, and, therefore, for large gains, E as defined in equation 13 is a Lyapunov function for the system described by equation 5 and the network is equiasymptotically stable.
If we apply a similar asymptotic argument to the energy function, equation 13
reduces to
    E = −(1/2) Σ_{i,j,k,l} T_{i,j,k,l} V_{i,j} V_{k,l}    (15)
which is the Lyapunov function for a network of discontinuous on-off neurons with
interconnection matrix T. For the binary neuron case, it is fairly straightforward to show that the energy function has minima at the desired decoder outputs if we assume that only one neuron in each stage may be on and that B and C are appropriately chosen to favor this. However, since there are O(S²N) terms in the disturbance summation in equation 15, convergence in this case is not as fast as for the derivative of the energy function in equation 13, which has only O(S) terms in the summation.
6 Simulation Results
The simulations presented in this section are for the rate 1/3, K = 3 convolutional code
illustrated in figure 1. Since this code has a constraint length of 3, there are 4 possible
states in each stage and an MLSE decoder would normally examine a minimum of
5K subsequences before making a decision, we will use a total of 16 stages. In these
simulations, the first and last stage are fixed since we assume that we have prior
knowledge or a decision about the first stage and zero knowledge about the last stage.
The transmitted codeword is assumed to be all zeros.
The simulation program reads the received sequence from standard input and uses
it to define the interconnection matrix W according to equation 4. A relaxation
subroutine is then called to simulate the performance of the network according to an
Euler discretization of equation 5. Unit time is then defined as one RC time constant of
the unforced system. All variables were defined to be single precision (32 bit) floating
point numbers.
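The relaxation subroutine can be sketched as follows; this is a simplified additive skeleton of the Euler step (the shunting (B, C) terms of equation 5 are omitted for brevity, and all names, shapes, and default values are our assumptions, not the original program):

```python
import numpy as np

def relax(W, u0, A=1.0, dt=0.02, steps=100, gain=10.0):
    """Euler relaxation of the trellis network (additive skeleton only).

    W  : (S, N, S, N) interconnection tensor built from the received
         sequence via equation 4; W[i, j, k, l] couples (i, j) to (k, l).
    u0 : (S, N) initial states; first and last stages are held fixed.
    """
    u = u0.copy()
    f = lambda x: 1.0 / (1.0 + np.exp(-gain * x))   # sigmoid nonlinearity
    for _ in range(steps):
        v = f(u)
        net = np.einsum('ijkl,kl->ij', W, v)        # total input per neuron
        du = -A * u + net                           # shunting terms omitted
        du[0] = 0.0                                 # clamp known first stage
        du[-1] = 0.0                                # clamp fixed last stage
        u = u + dt * du
    return f(u)
```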
Figure 2a shows the evolution of the network over two unit time intervals with the
sampling time T = 0.02 when the received codeword contains no noise. To interpret
the figure, recall that there are 16 stages of 4 neurons each. The output of each stage
is a vertical set of 4 curves. The upper-left set is the output of the first stage; the
upper-most curve is the output of the first neuron in the stage. For the first stage,
the first neuron has a fixed output of 1 and the other neurons have a fixed output of
o. The outputs of the neurons in the last stages are fixed at an intermediate value to
represent zero a priori knowledge about these states. Notice that the network reaches
an equilibrium point in which only the top neurons in each state (representing the "00"
node in figure 1) are on and all others are off. This case illustrates that the network
can correctly decode an unerrored input and that it does so rapidly, i.e., in about one
time constant. In this case, with no errors in the input, the network performs the
Figure 2: Evolution of the trellis network for (a) unerrored input, (b) input with burst errors: R is 000 000 000 000 000 000 000 000 111 000 000 000 000 000 000. λ = 10, A = 1.0, B = 1.0, C = 0.75, T = 0.02. The initial conditions are x_{1,1} = 1.0, x_{1,j} = 0.0 for j ≠ 1, x_{16,j} = 0.2, and all other x_{i,j} = 0.0.
same function as Hopfield and Tank's network and does so quite well. Although we
have not been able to prove it analytically, all our simulations support the conjecture
that if x_{i,j}(0) = 1/2 for all i and j then the network will always converge to the global
minimum.
One of the more difficult decoding problems for this network is the correction of
a burst of errors in a transition subsequence. Figure 2b shows the evolution of the
network when three errors occur in the transition between stages 9 and 10. Note that
10 unit time intervals are shown since complete convergence takes much longer than
in the first example. However, the network has correctly decoded many of the stages
far from the burst error in a much shorter time.
If the received codeword contains scattered errors, the convolutional decoder should
be able to correct more than 3 errors. Such a case is shown in figure 3a in which the
received codeword contains 7 errors. The system takes longest to converge around two
transitions, 5-6 and 11-12. The first is in the midst of consecutive subsequences which
each have one bit errors and the second transition contains two errors.
To illustrate that the energy function shown in equation 13 is a good candidate
for a Lyapunov function for this network, it is plotted in figure 3b for the three cases
described above. The nonlinearity used in these simulations has a gain of ten, and, as
predicted by the large gain limit, the energy decreases monotonically.
To more thoroughly explore the behavior of the network, the simulation program
was modified to test many possible error patterns. For one and two errors, the program
exhaustively tested each possible error pattern. For three or more errors, the errors
were generated randomly. For four or more errors, only those errored sequences for
which the MLS estimate was the sequence of all zeros were tested. The results of
this simulation are summarized in the column labeled "two-nearest" in figure 4. The
performance of the network is optimum if no more than 3 errors are present in the
received sequence, however for four or more errors, the network fails to correctly decode
some sequences that the MLSE decoder can correctly decode.
Figure 3: (a) Evolution of the trellis network for input with distributed errors. The
input, R, is 000 010 010 010 100 001 000 000 000 000 110 000 000 000 000. The
constants and initial conditions are the same as in figure 2. (b) The energy function
defined in equation 13 evaluated for the three simulations discussed.
errored bits | number of test vectors | errors, two-nearest | errors, four-nearest
0     | 1    | 0   | 0
1     | 39   | 0   | 0
2     | 500  | 0   | 0
3     | 500  | 0   | 0
4     | 500  | 7   | 0
5     | 500  | 33  | 20
6     | 500  | 72  | 68
7     | 500  | 132 | 103
Total | 2500 | 244 | 191
Figure 4: Simulation results for a deconvolution network for a K = 3, rate 1/3 code. The network parameters were: λ = 15, A = 6, B = 1, C = 0.45, and T = 0.025.
For locally interconnected networks, the major concern is the flow of information
through the network. In the simulations presented until now, the neurons in each stage
are connected only to neurons in neighboring stages. A modified form of the network
was also simulated in which the neurons in each stage are connected to the neurons
in the four nearest neighboring stages. To implement this network, the subroutine to
initialize the connection weights was modified to assign a non-zero value to W_{i,j,i+2,k}.
This is straight-forward since, for a code with a constraint length of three, there is a
single path connecting two nodes a distance two apart.
The results of this simulation are shown in the column labeled "four-nearest" in
figure 4. It is easy to see that the network with the extra connections performs better
than the previous network. Most of the errors made by the nearest neighbor network
occur for inputs in which the received subsequences r_i and r_{i+1} or r_{i+2} contain a total
of four or more errors. It appears that the network with the additional connections
is, in effect, able to communicate around subsequences containing errors that block
communications for the two-nearest neighbor network.
7 Summary and Conclusions
We have presented a locally interconnected network which minimizes a function that
is analogous to the log likelihood function near the global minimum. The results of
simulations demonstrate that the network can successfully decode input sequences
containing no noise at least as well as the globally connected Hopfield-Tank [6] decomposition network. Simulations also strongly support the conjecture that in the
noiseless case, the network can be guaranteed to converge to the global minimum. In
addition, for low error rates, the network can also decode noisy received sequences.
We have been able to apply the Cohen-Grossberg proof of the stability of "on-center off-surround" networks to show that each stage will maximize the desired local
"likelihood" for noisy received sequences. We have also shown that, in the large gain
limit, the network as a whole is stable and that the equilibrium points correspond to
the MLSE decoder output. Simulations have verified this proof of stability even for relatively small gains. Unfortunately, a proof of strict Lyapunov stability is very difficult,
and may not be possible, because of the cooperative connections in the network.
This network demonstrates that it is possible to perform interesting functions even
if only localized connections are allowed, although there may be some loss of performance. If we view the network as an associative memory, a trellis structured network
that contains N·S neurons can correctly recall 2^S memories. Simulations of trellis networks strongly suggest that it is possible to guarantee a non-zero minimum radius of
attraction for all memories. We are currently investigating the use of trellis structured
layers in multilayer networks to explicitly provide the networks with the ability to
tolerate errors and replace faulty neurons.
References
[1] M. Cohen and S. Grossberg, "Absolute stability of global pattern formation and parallel
memory storage by competitive neural networks," IEEE Trans. Sys., Man, and Cyber.,
vol. 13, pp. 815-826, Sep.-Oct. 1983.
[2] S. Grossberg, "How does a brain build a cognitive code," in Studies of Mind and Brain,
pp. 1-52, D. Reidel Pub. Co., 1982.
[3] J. Hopfield, "Neural networks and physical systems with emergent collective computational
abilities," Proceedings of the National Academy of Sciences USA, vol. 79, pp. 2554-2558,
1982.
[4] J. Hopfield, "Neurons with graded response have collective computational properties like
those of two-state neurons," Proceeedings of the National Academy of Science, USA, vol. 81,
pp. 3088-3092, May 1984.
[5] J. McClelland and D. Rumelhart, Parallel Distributed Processing, Vol. 1. The MIT Press,
1986.
[6] D. Tank and J. Hopfield, "Simple 'neural' optimization networks: an AID converter, signal
decision circuit and a linear programming circuit," IEEE Trans. on Circuits and Systems,
vol. 33, pp. 533-541, May 1986.
[7] A. Viterbi and J. Omura, Principles of Digital Communications and Coding. McGraw-Hill,
1979.
Hybrid Circuits of Interacting Computer Model
and Biological Neurons
Sylvie Renaud-LeMasson*
Department of Physics
Brandeis University
Waltham, MA 02254
Gwendal LeMasson†
Department of Biology
Brandeis University
Waltham, MA 02254
Eve Marder
Department of Biology
Brandeis University
Waltham, MA 02254
L.F. Abbott
Department of Physics
Brandeis University
Waltham, MA 02254
Abstract
We demonstrate the use of a digital signal processing board to construct
hybrid networks consisting of computer model neurons connected to a
biological neural network. This system operates in real time. and the
synaptic connections are realistic effective conductances. Therefore.
the synapses made from the computer model neuron are integrated
correctly by the postsynaptic biological neuron. This method provides
us with the ability to add additional. completely known elements to a
biological network and study their effect on network activity.
Moreover. by changing the parameters of the model neuron. it is
possible to assess the role of individual conductances in the activity of
the neuron. and in the network in which it participates.
"Present address. lXL. Universit6 de Bordeaux l-Enserb.
CNRSURA 846. 351 crs de la Liberation. 33405 Talence Cedex.
France.
'Present address. LNPC. CNRS. Universite de Bordeaux 1. Place
de Dr. Peyneau. 33120 Arcachon. France
1 INTRODUCTION
A primary goal in neuroscience is to understand how the electrical properties of
individual neurons contribute to the complex behavior of the networks in which they
are found. However, the experimentalist wishing to assess the contribution of a given
neuron or synapse in network function is hampered by lack of adequate tools. For
example, although pharmacological agents are often used to block synaptic
connections within a network (Eisen and Marder, 1982), or individual currents within
a neuron (Tierney and Harris-Warrick, 1992), it is rarely possible to do precise
pharmacological dissections of network function. Computational models of neurons
and networks (Koch and Segev, 1989) allow the investigator the control over
parameters not possible with pharmacology. However, because realistic computer
models are always based on inadequate biophysical data, the investigator must always
be concerned that the simulated system may differ from biological reality in a critical
way. We have developed a system that allows us to construct hybrid networks
consisting of a model neuron interacting with a biological neural network. This
allows us to work with a real biological system while retaining complete control over
the parameters of the model neuron.
2 THE MODEL NEURON
Biophysical data describing the ionic currents of the Lateral Pyloric (LP) neuron of
the crab stomatogastric ganglion (STG) (Golowasch and Marder, 1992) were used to
construct an isopotential model LP neuron using MAXIM. MAXIM is a software
package that runs on MacIntosh systems and provides a graphical modeling tool for
neurons and small neural networks (LeMasson, 1993). The model LP neuron used
uses Hodgkin-Huxley type equations and contains a fast Na+ conductance, a Ca+
conductance, a delayed rectifier K+ conductance, a transient outward current (iJ and
a hyperpolarization-activated current OJ, as well as a leak conductance. This model
is similar to that reported in Buchholtz et al. (1992) but because the raw data were
refit using MAXIM, details are slightly different.
3 ARTIFICIAL SYNAPSES
Artificial chemical synapses are produced by the same method used in Sharp et at.
(1993). An axoclamp in discontinuous current clamp (DCC) mode is used to record
the membrane potential and inject current into the biological neurons (Fig. 1). The
presynaptic membrane potential is used to control current injection into the
postsynaptic neuron simulating a conductance change (rather than an injected current
as in Yarom et al.). The synaptic current injected into the postsynaptic neuron
depends on the programmed synaptic conductance and an investigator-determined
reversal potential. The investigator also specifies the threshold and the function
relating "transmitter release" to presynaptic membrane potential, as well as the time
course of the synaptic conductance.
Figure 1: Schematic diagram of the system used to establish hybrid circuits.
4 HARDWARE
Our system uses a Digital Signal Processor (DSP) board with built-in AID and DI A
16 bit-precision converters (Spectral Innovations MacDSP256KNI), with DSP32C
(AT&T) mounted in a Macintosh II fx (MAC) computer. A block diagram of the
system is shown Fig. 1. The parameters describing the membrane and synaptic
conductances of the model neuron are stored in the MAC and are transferred to the
DSP board RAM (256x32K) through the standard NuBus interface. The DSP
translates the parameter files into look-up tables via a polynomial fitting procedure.
The differential equations of the LP model and the artificial synapses are integrated by
the DSP board, taking advantage of its optimized arithmetic functions and data access.
In this system, the computational model runs on the DSP board, and the Mac IIfx
functions to store and display data on the screen.
The computational speed of this system depends on the integration time step and the
complexity of the model (the number of differential equations implemented in the
model). For the results shown here, the integration time step was 0.7 msec, and
under the conditions described below, 10-15 differential equations were used. The
current system is limited to two real neurons and one model neuron because the DSP
board has only two input and two output channels. A later generation system with
more input and output channels and additional speed will increase the number of
neurons and connections that can be created. During anyone time step, the membrane
potential of the model neuron is computed, the synaptic currents are determined, and a
voltage command is exported to the Axoclamp instructing it to inject the appropriate
current into the biological neuron (typically a few nA). During each time step the
Axoclamp is used to measure the membrane potential of the biological neurons
(typically between -80mV and OmV) used to compute the value of the synaptic inputs
to the model neuron. The computed and measured membrane potentials are
periodically (every 500 time steps) sent to the computer main memory to be displayed
and recorded.
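The division of labor on each time step can be summarized in pseudocode form (the method names below stand in for the DSP routines and Axoclamp I/O described above and are hypothetical):

```python
def run_realtime(model, axoclamp, n_steps, dt):
    """One relaxation run: the DSP board integrates the model in real time;
    the Mac only stores and displays data.

    `model` and `axoclamp` are placeholder objects standing in for the DSP
    integration routines and the discontinuous-current-clamp interface.
    """
    log = []
    for step in range(n_steps):
        v_bio = axoclamp.read_potentials()      # measure biological neurons
        v_model = model.step(v_bio, dt)         # advance model ODEs by dt
        i_cmd = model.synaptic_currents(v_bio)  # artificial synapse currents
        axoclamp.inject(i_cmd)                  # command current injection
        if step % 500 == 0:                     # periodic transfer to the Mac
            log.append((v_model, tuple(v_bio)))
    return log
```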
To make this system run in real time, it is necessary to maintain perfect timing among
all the components. Therefore in every experiment we first determine the minimum
time step needed to do the integration depending on the complexity of the model being
implemented. For complex models we used the internal clock of the MacII to drive
the board. Under some conditions it was preferable to drive the board with an external
clock. It is critical that the Axoclamp sampling rate be more than twice the board time
step if the two are not synchronized. In our experiments, the Axoclamp switching
Figure 2: Hybrid network consisting of a model LP neuron connected to a PD neuron of
a biological stomatogastric ganglion. A: Simplified connectivity diagram of the pyloric
circuit of the stomatogastric ganglion. The AB/PD group consists of one AB neuron
electrically coupled to two PD neurons. All chemical synapses are inhibitory. B:
Simultaneous intracellular recordings from two biological neurons (PD and LP) and a plot
of the membrane potential of the model LP neuron connected to the circuit. The
parameters of the synaptic connections and the model LP neuron were adjusted so that
the model LP neuron fired at the same time in the pyloric cycle as the biological LP neuron. C: Same recording configuration as B, but the maximal conductance of i_h in the model neuron was increased.
Figure 3: Hybrid network in which the model LP neuron is connected to two different
biological neurons. A: Connectivity diagram showing the pattern of synaptic connections
shown in part B. B: Simultaneous recordings from the biological AB neuron, the model
LP neuron, and a biological PY neuron.
circuit was running about three times faster than the board time step. However, if
experimental conditions force a slower Axoclamp sampling rate, then it will be
important to synchronize the Axoclamp clock with the board.
5 RESULTS
The STG of the lobster, Panulirus interruptus contains one LP neuron, two Pyloric
Dilator (PD) neurons, one Anterior Burster (AB), and eight Pyloric (PY) neurons
(Eisen and Marder, 1982; Harris-Warrick et al., 1992). The connectivity among
these neurons is known, and is shown in Figure 2A. The PD and LP neurons fire in
alternation, because of the reciprocal inhibitory connections between them. Figure 2B
shows a model LP neuron connected with reciprocal inhibitory synapses to a
biological PD neuron. The parameters controlling the threshold, activation curve,
time course, and reversal potential of the model neuron were adjusted until the model
neuron fired at the same time within the rhythmic pyloric cycle as the biological LP
neuron (Fig. 2B). Once these parameters were set, it was then possible to ask what
effect changing the membrane properties of the model neuron had on emergent
network activity. Figure 2C shows the result of increasing the maximal conductance
of one of the currents in the model LP neuron, it,. Note that increasing this current
818
Renaud-LeMasson, LeMasson, Marder, and Abbott
increased the number of LP action potentials per burst. The increased activity in the
LP neuron delayed the onset of the next burst in the PD neurons because of the inhibitory synapse between the model LP neuron and the biological PD neuron, and
the cycle period therefore also increased. Another effect seen in this example, is that
the increased conductance of ~ in the LP neuron delayed the onset of the model LP
neuron's firing relative to that of the biological LP neuron.
In the experiment shown in Figure 3 we created reciprocal inhibitory connections
between the model LP neuron and two biological neurons, the AB and a PY (Fig.
3A). (The action potentials in the AB neuron are highly attenuated by the cable
properties of this neuron). This example shows clearly the unitary inhibitory
postsynaptic potentials (IPSPs) in the biological neurons resulting from the model LP's
action potentials. During each burst of LP action potentials the IPSPs in the AB
neuron increase considerably in amplitude, although the AB neuron's membrane
potential is moving towards the reversal potential of the IPSPs. This occurs
presumably because the conductance of the AB neuron is higher right at the end of its
burst, and decreases as it hyperpolarizes. The same burst of LP action potentials
evokes IPSPs in the PY neuron that increase in amplitude, here presumably because
the PY neuron is depolarizing and increasing the driving force on the artificial
chemical synapse. These recordings demonstrate that although the same function is
controlling the synaptic "release" properties in the model LP neuron, the actual
change in membrane potential evoked by action potentials in the LP neuron is affected
by the total conductance of the biological neurons.
6 CONCLUSIONS
The ability to connect a realistic model neuron to a biological network offers a unique
opportunity to study the effects of individual currents on network activity. It also
provides realistic, two-way interactions between biological and computer-based
networks. As well as providing an important new tool for neuroscience, this
represents an exciting new direction in biologically-based computing.
7 ACKNOWLEDGMENTS
We thank Ms. Joan McCarthy for help with manuscript preparation. Research
supported by MH 46742, the Human Science Frontier Program, and NSF DMS9208206.
8 REFERENCES
Buchholtz, F., Golowasch, J., Epstein, I.R., and Marder, E. (1992) Mathematical
model of an identified stomatogastric ganglion neuron. J. Neurophysiology 67:332-340.
Eisen, J.S., and Marder, E. (1982) Mechanisms underlying pattern generation in
lobster stomatogastric ganglion as determined by selective inactivation of identified
neurons. III. Synaptic connections of electrically coupled pyloric neurons. J. Neurophysiology 48:1392-1415.
Golowasch, J. and Marder, E. (1992) Ionic currents of the lateral pyloric neuron of
Hybrid Circuits of Interacting Computer Model and Biological Neurons
the stomatogastric ganglion of the crab. J. Neurophysiology 67:318-331.
Harris-Warrick, R.M., Marder, E., Selverston, A.I., and Maurice, M., eds. (1992)
Dynamic Biological Networks. Cambridge. MA: MIT Press.
Koch, C., and Segev, I., eds. (1989) Methods in Neuronal Modeling. Cambridge, MA: MIT Press.
LeMasson, G. (1993) Maxim: A software system for simulating single neurons and
neural networks, in preparation.
Sharp, A.A., O'Neil, M.B., Abbott, L.F., and Marder, E. (1993) The dynamic
clamp: Computer-generated conductances in real neurons. J. Neurophysiology, in
press.
Yarom, Y. (1992) Rhythmogenesis in a hybrid system interconnecting an olivary
neuron to a network of coupled oscillators. Neuroscience 44:263-275.
Improved Regret Bounds for Oracle-Based
Adversarial Contextual Bandits
Vasilis Syrgkanis
Microsoft Research
[email protected]
Haipeng Luo
Microsoft Research
[email protected]
Akshay Krishnamurthy
University of Massachusetts, Amherst
[email protected]
Robert E. Schapire
Microsoft Research
[email protected]
Abstract
We propose a new oracle-based algorithm, BISTRO+, for the adversarial contextual bandit problem, where either contexts are drawn i.i.d. or the sequence
of contexts is known a priori, but where the losses are picked adversarially. Our
algorithm is computationally efficient, assuming access to an offline optimization oracle, and enjoys a regret of order O((KT)^{2/3} (log N)^{1/3}), where K is the number of actions, T is the number of iterations, and N is the number of baseline policies. Our result is the first to break the O(T^{3/4}) barrier achieved by recent algorithms, which was left as a major open problem. Our analysis employs the recent relaxation framework of Rakhlin and Sridharan [7].
1 Introduction
We study online decision making problems where a learner chooses an action based on some side
information (context) and incurs some cost for that action with a goal of incurring minimal cost over
a sequence of rounds. These contextual online learning settings form a powerful framework for
modeling many important decision-making scenarios with applications ranging from personalized
health care to content recommendation and targeted advertising. Many of these applications also
involve a partial feedback component, wherein costs for alternative actions are unobserved, and are
typically modeled as contextual bandits.
The contextual information present in these problems enables learning of a much richer policy
for choosing actions based on context. In the literature, the typical goal for the learner is to have
cumulative cost that is not much higher than the best policy π in a large policy class Π. This is formalized by the notion of regret, which is the learner's cumulative cost minus the cumulative cost of the best fixed policy π in hindsight.
Naively, one can view the contextual problem as a standard online learning problem where the set
of possible "actions" available at each iteration is the set of policies. This perspective is fruitful, as classical algorithms, such as Hedge [5, 3] and Exp4 [2], give information-theoretically optimal regret bounds of O(√(T log N)) in the full-information setting and O(√(TK log N)) in the bandit setting, where T is
the number of rounds, K is the number of actions, and N is the number of policies. However, naively
lifting standard online learning algorithms to the contextual setting leads to a running time that is
linear in the number of policies. Given that the optimal regret is only logarithmic in N and that our
high-level goal is to learn a very rich policy, we want to capture policy classes that are exponentially
large. When we use a large policy class, existing algorithms are no longer computationally tractable.
To study this computational question, a number of recent papers have developed oracle-based
algorithms that only access the policy class through an optimization oracle for the offline full-information problem. Oracle-based approaches harness the research in supervised learning that
focuses on designing efficient algorithms for full-information problems and uses it for online and
partial-feedback problems. Optimization oracles have been used in designing contextual bandit algorithms [1, 4] that achieve the optimal O(√(KT log N)) regret while also being computationally
efficient (i.e. requiring poly(K, log(N ), T ) oracle calls and computation). However, these results
only apply when the contexts and costs are independently and identically distributed at each iteration,
contrasting with the computationally inefficient approaches that can handle adversarial inputs.
Two very recent works provide the first oracle efficient algorithms for the contextual bandit problem
in adversarial settings [7, 8]. Rakhlin and Sridharan [7] consider a setting where the contexts are drawn i.i.d. from a known distribution with adversarial costs, and they provide an oracle-efficient algorithm called BISTRO with O(T^{3/4} K^{1/2} (log N)^{1/4}) regret. Their algorithm also applies in the transductive setting where the sequence of contexts is known a priori. Syrgkanis et al. [8] also obtain a T^{3/4}-style bound with a different oracle-efficient algorithm, but in a setting where the learner knows only the set of contexts that will arrive. Both of these results achieve very suboptimal regret bounds, as the dependence on the number of iterations is far from the optimal O(√T) bound. A major open question posed by both works is whether the O(T^{3/4}) barrier can be broken.
In this paper, we provide an oracle-based contextual bandit algorithm, BISTRO+, that achieves regret O((KT)^{2/3} (log N)^{1/3}) in both the i.i.d.-context and the transductive settings considered by Rakhlin and Sridharan [7]. This bound matches the T-dependence of the epoch-greedy algorithm of Langford
and Zhang [6] that only applies to the fully stochastic setting. As in Rakhlin and Sridharan [7],
our algorithm only requires access to a value oracle, which is weaker than the standard argmax
oracle, and it makes K + 1 oracle calls per iteration. To our knowledge, this is the best regret bound
achievable by an oracle-efficient algorithm for any adversarial contextual bandit problem.
Our algorithm and regret bound are based on a novel and improved analysis of the minimax problem that arises in the relaxation-based framework of Rakhlin and Sridharan [7] (hence the name
BISTRO+). Our proof requires analyzing the value of a sequential game where the learner chooses a
distribution over actions and then the adversary chooses a distribution over costs in some bounded
finite domain, with, importantly, a bounded variance. This is unlike the simpler minimax problem
analyzed in [7], where the adversary is only constrained by the range of the costs.
Apart from showing that this more structured minimax problem has a small value, we also need to
derive an oracle-based strategy for the learner that achieves the improved regret bound. The additional
constraints on the game require a much more intricate argument to derive this strategy which is an
algorithm for solving a structured two-player minimax game (see Section 4).
2 Model and Preliminaries
Basic notation. Throughout the paper we denote with x_{1:t} a sequence of quantities {x_1, . . . , x_t} and with (x, y, z)_{1:t} a sequence of tuples {(x_1, y_1, z_1), . . .}. ∅ denotes an empty sequence. The vector of ones is denoted by 1 and the vector of zeroes is denoted by 0. Denote with [K] the set {1, . . . , K}, with e_1, . . . , e_K the standard basis vectors in R^K, and with Δ_U the set of distributions over a set U. We also use Δ_K as a shorthand for Δ_{[K]}.
Contextual online learning. We consider the following version of the contextual online learning problem. On each round t = 1, . . . , T, the learner observes a context x_t and then chooses a probability distribution q_t over a set of K actions. The adversary then chooses a cost vector c_t ∈ [0, 1]^K. The learner picks an action ŷ_t drawn from distribution q_t, incurs a cost c_t(ŷ_t), and observes only c_t(ŷ_t) and not the cost of the other actions.
Throughout the paper we will assume that the context xt at each iteration t is drawn i.i.d. from a
distribution D. This is referred to as the hybrid i.i.d.-adversarial setting [7]. As in prior work [7], we
assume that the learner can sample contexts from this distribution as needed. It is easy to adapt the
arguments in the paper to apply for the transductive setting where the learner knows the sequence of
contexts that will arrive. The cost vectors ct are chosen by a non-adaptive adversary.
The goal of the learner is to compete with a set of policies Π of size N, where each policy π ∈ Π is a function mapping contexts to actions. The cumulative expected regret with respect to the best fixed policy in hindsight is

    REG = Σ_{t=1}^{T} ⟨q_t, c_t⟩ − min_{π∈Π} Σ_{t=1}^{T} c_t(π(x_t)).
Optimization value oracle. We will assume that we are given access to an optimization oracle that, when given as input a sequence of contexts and cost vectors (x, c)_{1:t}, outputs the cumulative cost of the best fixed policy, which is

    min_{π∈Π} Σ_{τ=1}^{t} c_τ(π(x_τ)).    (1)

This can be viewed as an offline batch optimization or ERM oracle.
2.1 Relaxation based algorithms
We briefly review the relaxation-based framework proposed in [7]. The reader is directed to [7] for a more extensive exposition. We will also slightly augment the framework with some internal randomness that the algorithm can generate and use, which does not affect the cost of the algorithm.

A crucial concept in the relaxation-based framework is the information obtained by the learner at the end of each round t ∈ [T], which is the following tuple:

    I_t(x_t, q_t, ŷ_t, c_t, S_t) = (x_t, q_t, ŷ_t, c_t(ŷ_t), S_t),

where ŷ_t is the realized chosen action drawn from the distribution q_t and S_t is some random string drawn from some distribution that can depend on q_t, ŷ_t and c_t(ŷ_t), and which can be used by the algorithm in subsequent rounds.
Definition 1 A partial-information relaxation REL(·) is a function that maps (I_1, . . . , I_t) to a real value for any t ∈ [T]. A partial-information relaxation is admissible if for any t ∈ [T], and for all I_1, . . . , I_{t−1},

    E_{x_t} min_{q_t} max_{c_t} E_{ŷ_t∼q_t, S_t}[c_t(ŷ_t) + REL(I_{1:t−1}, I_t(x_t, q_t, ŷ_t, c_t, S_t))] ≤ REL(I_{1:t−1}),    (2)

and for all x_{1:T}, c_{1:T} and q_{1:T},

    E_{ŷ_{1:T}∼q_{1:T}, S_{1:T}}[REL(I_{1:T})] ≥ −min_{π∈Π} Σ_{t=1}^{T} c_t(π(x_t)).    (3)
Definition 2 Any randomized strategy q1:T that certifies inequalities (2) and (3) is called an admissible strategy.
A basic lemma proven in [7] is that if one constructs a relaxation and a corresponding admissible
strategy, then the expected regret of the admissible strategy is upper bounded by the value of the
relaxation at the beginning of time.
Lemma 1 ([7]) Let R EL be an admissible relaxation and q1:T be an admissible strategy. Then for
any c1:T , we have
    E[REG] ≤ REL(∅).
We will utilize this framework and construct a novel relaxation with an admissible strategy. We
will show that the value of the relaxation at the beginning of time is upper bounded by the desired
improved regret bound and that the admissible strategy can be efficiently computed assuming access
to an optimization value oracle.
3 A Faster Contextual Bandit Algorithm
First we define an unbiased estimator of the cost vectors c_t. In addition to doing the usual importance weighting, we also discretize the estimated cost to either 0 or L, for some constant L ≥ K to be specified later. Specifically, suppose that at iteration t an action ŷ_t is picked based on some distribution h_t ∈ Δ_K. Now, consider the random variable X_t, which is defined conditionally on ŷ_t and h_t, as

    X_t = 1 with probability c_t(ŷ_t)/(L·h_t(ŷ_t)), and X_t = 0 with the remaining probability.    (4)

This is a valid random variable whenever min_y h_t(y) ≥ 1/L, which will be ensured by the algorithm. This is the only randomness in the random string S_t that we used in the general relaxation framework. Our construction of an unbiased estimate for each c_t, based on the information I_t collected at the end of each round, is then: ĉ_t = L·X_t·e_{ŷ_t}. Observe that for any y ∈ [K],

    E_{ŷ_t, X_t}[ĉ_t(y)] = L · Pr[ŷ_t = y] · Pr[X_t = 1 | ŷ_t = y] = L · h_t(y) · c_t(y)/(L·h_t(y)) = c_t(y).

Hence, ĉ_t is an unbiased estimate of c_t.
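In code, the two-stage estimator is a direct transcription of equation (4) and the definition of ĉ_t (numpy and the explicit random generator are our choices):

```python
import numpy as np

def estimate_cost(y_hat, c_yhat, h, L, rng):
    """Unbiased, {0, L}-valued estimate of the full cost vector.

    y_hat  : index of the action actually played
    c_yhat : observed cost c_t(y_hat) in [0, 1]
    h      : sampling distribution over the K actions, min_y h[y] >= 1/L
    """
    K = len(h)
    # Bernoulli draw of equation (4); valid since c_yhat <= L * h[y_hat].
    x_t = rng.random() < c_yhat / (L * h[y_hat])
    c_hat = np.zeros(K)
    if x_t:
        c_hat[y_hat] = L
    return c_hat
```

Taking expectations reproduces the identity above: E[ĉ_t(y)] = L · h(y) · c_t(y)/(L·h(y)) = c_t(y).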
We are now ready to define our relaxation. Let ε_t ∈ {−1, 1}^K be a Rademacher random vector (i.e., each coordinate is an independent Rademacher random variable, which is −1 or 1 with equal probability), and let Z_t ∈ {0, L} be a random variable which is L with probability K/L and 0 otherwise. We denote with ρ_t = (x, ε, Z)_{t+1:T} and with G_t the distribution of ρ_t described above. Our relaxation is defined as follows:

    REL(I_{1:t}) = E_{ρ_t∼G_t}[R((x, ĉ)_{1:t}, ρ_t)],    (5)

where

    R((x, ĉ)_{1:t}, ρ_t) = −min_{π∈Π}(Σ_{τ=1}^{t} ĉ_τ(π(x_τ)) + Σ_{τ=t+1}^{T} 2ε_τ(π(x_τ)) Z_τ) + (T − t)K/L.

Note that REL(∅) is the following quantity, whose first part resembles a Rademacher average:

    REL(∅) = 2 E_{(x,ε,Z)_{1:T}}[max_{π∈Π} Σ_{τ=1}^{T} ε_τ(π(x_τ)) Z_τ] + TK/L.
Using the following lemma (whose proof is deferred to the supplementary material) and the fact that E[Z_t²] ≤ KL, we can upper bound REL(∅) by O(√(TKL log N) + TK/L), which after tuning L will give the claimed O(T^{2/3}) bound.
Lemma 2 Let ε_t be Rademacher random vectors, and let Z_t be non-negative real-valued random variables such that E[Z_t²] ≤ M for some constant M > 0. Then

    E_{Z_{1:T}, ε_{1:T}}[max_{π∈Π} Σ_{t=1}^{T} ε_t(π(x_t)) · Z_t] ≤ √(2TM log N).
To show an admissible strategy for our relaxation, we let D = {L·e_i : i ∈ [K]} ∪ {0}. For a distribution p ∈ Δ(D), we denote with p(i), for i ∈ {0, . . . , K}, the probability assigned to vector e_i, with the convention that e_0 = 0. Also let Δ′_D = {p ∈ Δ_D : p(i) ≤ 1/L, ∀i ∈ [K]}.

Based on this notation our admissible strategy is defined as

    q_t = E_{ρ_t}[q_t(ρ_t)]   where   q_t(ρ_t) = (1 − K/L) q*_t(ρ_t) + (1/L)·1,    (6)

and

    q*_t(ρ_t) = argmin_{q∈Δ_K} max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[⟨q, ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)].    (7)
Algorithm 1 implements this admissible strategy. Note that it suffices to use q_t(ρ_t) for a single random draw ρ_t instead of q_t to ensure the exact same guarantee in expectation. In Section 4 we show that q_t(ρ_t) can be computed efficiently using an optimization value oracle.
We state the main theorem of our relaxation construction and defer the proof to Section 5.
Algorithm 1 BISTRO+
Input: parameter L ≥ K
for each time step t ∈ [T] do
  Observe x_t. Draw ρ_t = (x, ε, Z)_{t+1:T}, where each x_τ is drawn from the distribution of contexts, ε_τ is a Rademacher random vector, and Z_τ ∈ {0, L} is L with probability K/L and 0 otherwise.
  Compute q_t(ρ_t) based on Eq. (6) (using Algorithm 2).
  Predict ŷ_t ∼ q_t(ρ_t) and observe c_t(ŷ_t).
  Create an estimate ĉ_t = L·X_t·e_{ŷ_t}, where X_t is defined in Eq. (4) using q_t(ρ_t) as h_t.
end for
Algorithm 2 Computing q*_t(ρ_t)
Input: a value optimization oracle, (x, ĉ)_{1:t−1}, x_t and ρ_t.
Output: q ∈ Δ_K, a solution to Eq. (7).
Compute ψ_i as in Eq. (9) for all i = 0, . . . , K using the optimization oracle.
Compute ζ_i = (ψ_i − ψ_0)/L for all i ∈ [K].
Let m = 1 and q = 0.
for each coordinate i ∈ [K] do
  Set q(i) = min{(ζ_i)^+, m}.   ((x)^+ = max{x, 0})
  Update m ← m − q(i).
end for
Distribute m arbitrarily on the coordinates of q if m > 0.
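A direct implementation of Algorithm 2 as stated, with the oracle abstracted away (the psi values are assumed to come from the K + 1 oracle calls; numpy is our choice):

```python
import numpy as np

def compute_q_star(psi, L):
    """Water-filling step of Algorithm 2.

    psi : array of K + 1 oracle values psi_0, ..., psi_K from eq. (9).
    Returns q in the K-simplex minimizing sum_i (q(i) - zeta_i)^+.
    """
    K = len(psi) - 1
    zeta = (psi[1:] - psi[0]) / L        # zeta_i = (psi_i - psi_0) / L
    q = np.zeros(K)
    m = 1.0                              # unallocated probability mass
    for i in range(K):
        q[i] = min(max(zeta[i], 0.0), m)
        m -= q[i]
    if m > 0:                            # leftover mass can go anywhere
        q[0] += m
    return q
```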
Theorem 3 The relaxation defined in Equation (5) is admissible. An admissible randomized strategy for this relaxation is given by (6). The expected regret of BISTRO+ is upper bounded by

    2√(2TKL log N) + TK/L,    (8)

for any L ≥ K. Specifically, setting L = (KT/log N)^{1/3} when T ≥ K² log N, the regret is of order O((KT)^{2/3} (log N)^{1/3}).
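For completeness, the tuning of L balances the two terms of (8):

```latex
2\sqrt{2TKL\log N} \;\asymp\; \frac{TK}{L}
\;\;\Longleftrightarrow\;\;
L^{3} \asymp \frac{TK}{\log N}
\;\;\Longrightarrow\;\;
L = \Big(\frac{KT}{\log N}\Big)^{1/3}
```

and substituting back, each term is of order (KT)^{2/3}(log N)^{1/3}; the condition T ≥ K² log N ensures the tuned L satisfies L ≥ K.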
4 Computational Efficiency
In this section we will argue that if one is given access to a value optimization oracle (1), then one
can run BISTRO+ efficiently. Specifically, we will show that the minimizer of Equation (7) can be
computed efficiently via Algorithm 2.
Lemma 4 Computing the quantity defined in equation (7) for any given ρ_t can be done in time O(K)
and with only K + 1 accesses to a value optimization oracle.
Proof:
For i ∈ {0, . . . , K}, let

    ψ_i = min_{π∈Π}(Σ_{τ=1}^{t−1} ĉ_τ(π(x_τ)) + L·e_i(π(x_t)) + Σ_{τ=t+1}^{T} 2ε_τ(π(x_τ)) Z_τ)    (9)

with the convention e_0 = 0. Then observe that we can re-write the definition of q*_t(ρ_t) as

    q*_t(ρ_t) = argmin_{q∈Δ_K} max_{p_t∈Δ′_D} Σ_{i=1}^{K} p_t(i)(L·q(i) − ψ_i) − p_t(0)·ψ_0.

Observe that each ψ_i can be computed with a single oracle access. Thus we can assume that all K + 1 ψ's are computed efficiently and are given. We now argue how to compute the minimizer.
For any q, the maximum over p_t can be characterized as follows. With the notation z_i = L·q(i) − ψ_i and z_0 = −ψ_0, we re-write the minimax quantity as

    q*_t(ρ_t) = argmin_{q∈Δ_K} max_{p_t∈Δ′_D} Σ_{i=1}^{K} p_t(i)·z_i + p_t(0)·z_0.
Observe that without the constraint that p_t(i) ≤ 1/L for i > 0 we would put all the probability mass on the maximum of the z_i. However, with the constraint, the maximizer puts as much probability mass as allowed on the maximum coordinate argmax_{i∈{0,...,K}} z_i and continues to the next highest quantity. We repeat this until reaching the quantity z_0, which is unconstrained; thus we can put all the remaining probability mass on this coordinate.

Let z_{(1)}, z_{(2)}, . . . , z_{(K)} denote the ordered z_i quantities for i > 0 (from largest to smallest). Moreover, let ν ∈ [K] be the largest index such that z_{(ν)} ≥ z_0. By the above reasoning we get that for a given q, the maximum over p_t is equal to (recall that we assume L ≥ K)
    Σ_{t=1}^{ν} z_{(t)}/L + (1 − ν/L) z_0 = Σ_{t=1}^{ν} (z_{(t)} − z_0)/L + z_0.
Now since for any t > ν, z_{(t)} < z_0, we can write the latter as

    Σ_{t=1}^{ν} z_{(t)}/L + (1 − ν/L) z_0 = Σ_{i=1}^{K} (z_i − z_0)^+/L + z_0,

with the convention (x)^+ = max{x, 0}. We thus further re-write the minimax expression as
    q*_t(ρ_t) = argmin_{q∈Δ_K} (Σ_{i=1}^{K} (z_i − z_0)^+/L + z_0) = argmin_{q∈Δ_K} Σ_{i=1}^{K} (z_i − z_0)^+/L = argmin_{q∈Δ_K} Σ_{i=1}^{K} (q(i) − (ψ_i − ψ_0)/L)^+.

Let ζ_i = (ψ_i − ψ_0)/L. The expression becomes: q*_t(ρ_t) = argmin_{q∈Δ_K} Σ_{i=1}^{K} (q(i) − ζ_i)^+.
This quantity is minimized as follows: consider any i ∈ [K] such that ζ_i ≤ 0. Then putting positive mass δ on such a coordinate i is going to lead to a marginal increase of δ in the objective. On the other hand, if we put some mass on an index with ζ_i > 0, then that will not increase the objective until we reach the point where q(i) = ζ_i. Thus a minimizer will distribute probability mass of min{Σ_{i:ζ_i>0} ζ_i, 1} on the coordinates for which ζ_i > 0. The remaining mass, if any, can be distributed arbitrarily. See Algorithm 2 for details.
5 Proof of Theorem 3
We verify the two conditions for admissibility.

Final condition. It is clear that inequality (3) is satisfied since the ĉ_t are unbiased estimates of c_t:

    E_{ŷ_{1:T},X_{1:T}}[REL(I_{1:T})] = E_{ŷ_{1:T},X_{1:T}}[max_{π∈Π} −Σ_{τ=1}^{T} ĉ_τ(π(x_τ))]
        ≥ max_{π∈Π} −E_{ŷ_{1:T},X_{1:T}}[Σ_{τ=1}^{T} ĉ_τ(π(x_τ))] = max_{π∈Π} −Σ_{τ=1}^{T} c_τ(π(x_τ)).
t-th step condition. We now check that inequality (2) is also satisfied at some time step t ∈ [T]. We reason conditionally on the observed context x_t and show that q_t defines an admissible strategy for the relaxation. For convenience, let F_t denote the joint distribution of the pair (ŷ_t, X_t). Observe that the marginal of F_t on the first coordinate is equal to q_t. Let q*_t = E_{ρ_t}[q*_t(ρ_t)]. First observe that

    E_{(ŷ_t,X_t)∼F_t}[c_t(ŷ_t)] = E_{ŷ_t∼q_t}[c_t(ŷ_t)] = ⟨q_t, c_t⟩ ≤ ⟨q*_t, c_t⟩ + (1/L)⟨1, c_t⟩ ≤ E_{(ŷ_t,X_t)∼F_t}[⟨q*_t, ĉ_t⟩] + K/L.

Hence,

    max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)∼F_t}[c_t(ŷ_t) + REL(I_{1:t})] ≤ max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)∼F_t}[⟨q*_t, ĉ_t⟩ + REL(I_{1:t})] + K/L.
We now work with the first term of the right-hand side:

    max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)∼F_t}[⟨q*_t, ĉ_t⟩ + REL(I_{1:t})] = max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)∼F_t}[E_{ρ_t∼G_t}[⟨q*_t(ρ_t), ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)]].

Observe that ĉ_t is a random variable taking values in D and such that the probability that it is equal to L·e_y (for y ∈ {0, . . . , K}) can be upper bounded as

    Pr[ĉ_t = L·e_y] = E_{ρ_t∼G_t}[Pr[ĉ_t = L·e_y | ρ_t]] = E_{ρ_t∼G_t}[q_t(ρ_t)(y) · c_t(y)/(L·q_t(ρ_t)(y))] ≤ 1/L.

Thus we can upper bound the latter quantity by the supremum over all distributions in Δ′_D, i.e.,

    max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)}[⟨q*_t, ĉ_t⟩ + REL(I_{1:t})] ≤ max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[E_{ρ_t∼G_t}[⟨q*_t(ρ_t), ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)]].
Now we can continue by pushing the expectation over ρ_t outside of the supremum, i.e.,

    max_{c_t∈[0,1]^K} E_{(ŷ_t,X_t)}[⟨q*_t, ĉ_t⟩ + REL(I_{1:t})] ≤ E_{ρ_t∼G_t} max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[⟨q*_t(ρ_t), ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)],

and working conditionally on ρ_t. Since the expression is linear in p_t, the supremum is realized, and by the definition of q*_t(ρ_t), the quantity inside the expectation E_{ρ_t∼G_t} is equal to

    min_{q∈Δ_K} max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[⟨q, ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)].

We can now apply the minimax theorem and upper bound the above by

    max_{p_t∈Δ′_D} min_{q∈Δ_K} E_{ĉ_t∼p_t}[⟨q, ĉ_t⟩ + R((x, ĉ)_{1:t}, ρ_t)].

Since the inner objective is linear in q, we continue with

    max_{p_t∈Δ′_D} min_y E_{ĉ_t∼p_t}[ĉ_t(y) + R((x, ĉ)_{1:t}, ρ_t)].
We can now expand the definition of R(·):

    max_{p_t∈Δ′_D} min_y E_{ĉ_t∼p_t}[ĉ_t(y) + max_{π∈Π}(−Σ_{τ=1}^{t} ĉ_τ(π(x_τ)) + Σ_{τ=t+1}^{T} 2ε_τ(π(x_τ)) Z_τ)] + (T − t)K/L.

With the notation A_π = −Σ_{τ=1}^{t−1} ĉ_τ(π(x_τ)) + Σ_{τ=t+1}^{T} 2ε_τ(π(x_τ)) Z_τ, we re-write the above as

    max_{p_t∈Δ′_D} min_y E_{ĉ_t∼p_t}[ĉ_t(y) + max_{π∈Π}(A_π − ĉ_t(π(x_t)))] + (T − t)K/L.
We now upper bound the first term. The extra term (T − t)K/L will be combined with the extra K/L that we have abandoned to give the correct term (T − (t − 1))K/L needed for REL(I_{1:t−1}). Observe that we can re-write the first term by using symmetrization as

    max_{p_t∈Δ′_D} min_y E_{ĉ_t∼p_t}[ĉ_t(y) + max_{π∈Π}(A_π − ĉ_t(π(x_t)))]
    = max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[max_{π∈Π}(A_π + min_y E_{ĉ′_t∼p_t}[ĉ′_t(y)] − ĉ_t(π(x_t)))]
    ≤ max_{p_t∈Δ′_D} E_{ĉ_t∼p_t}[max_{π∈Π}(A_π + E_{ĉ′_t∼p_t}[ĉ′_t(π(x_t))] − ĉ_t(π(x_t)))]
    ≤ max_{p_t∈Δ′_D} E_{ĉ_t,ĉ′_t∼p_t}[max_{π∈Π}(A_π + ĉ′_t(π(x_t)) − ĉ_t(π(x_t)))]
    = max_{p_t∈Δ′_D} E_{ĉ_t,ĉ′_t∼p_t,δ}[max_{π∈Π}(A_π + δ(ĉ′_t(π(x_t)) − ĉ_t(π(x_t))))]
    ≤ max_{p_t∈Δ′_D} E_{ĉ_t∼p_t,δ}[max_{π∈Π}(A_π + 2δ·ĉ_t(π(x_t)))],

where δ is a random variable which is −1 or 1 with equal probability. The last inequality follows by splitting the maximum into two equal parts.
Conditioning on ĉ_t, consider the random variable M_t which is −max_y ĉ_t(y) or max_y ĉ_t(y) on the coordinates where ĉ_t is equal to zero, and equal to ĉ_t on the coordinate that achieves the maximum. This is clearly an unbiased estimate of ĉ_t. Thus we can upper bound the last quantity by

    max_{p_t∈Δ′_D} E_{ĉ_t∼p_t,δ}[max_{π∈Π}(A_π + 2δ·E[M_t(π(x_t)) | ĉ_t])] ≤ max_{p_t∈Δ′_D} E_{ĉ_t,δ,M_t}[max_{π∈Π}(A_π + 2δ·M_t(π(x_t)))].
The random vector δM_t, conditioning on ĉ_t, is equal to −max_y ĉ_t(y) or max_y ĉ_t(y) with equal probability, independently on each coordinate. Moreover, observe that for any distribution p_t ∈ Δ′_D, the distribution of the maximum coordinate of ĉ_t has support on {0, L} and is equal to L with probability at most K/L. Since the objective only depends on the distribution of the maximum coordinate of ĉ_t, we can continue the upper bound with a maximum over any distribution of random vectors whose coordinates are 0 with probability at least 1 − K/L and otherwise are −L or L with equal probability. Specifically, let ε_t be a Rademacher random vector; we continue with

    max_{Z_t∈Δ_{{0,L}} : Pr[Z_t=L]≤K/L} E_{ε_t, Z_t}[max_{π∈Π}(A_π + 2ε_t(π(x_t)) Z_t)].
Now observe that if we denote with a = Pr[Z_t = L], the above is equal to
max_{a : 0 ≤ a ≤ K/L} [ (1 − a)·max_{π∈Π}(A_π) + a·E_{ε_t}[ max_{π∈Π}( A_π + 2ε_t(π(x_t))·L ) ] ].
We now argue that this maximum is achieved by setting a = K/L. For that it suffices to show that
max_{π∈Π}(A_π) ≤ E_{ε_t}[ max_{π∈Π}( A_π + 2ε_t(π(x_t))·L ) ],
which is true by observing that with π⋆ = argmax_{π∈Π}(A_π) one has
E_{ε_t}[ max_{π∈Π}( A_π + 2ε_t(π(x_t))·L ) ] ≥ E_{ε_t}[ A_{π⋆} + 2ε_t(π⋆(x_t))·L ] = A_{π⋆} + E_{ε_t}[ 2ε_t(π⋆(x_t))·L ] = A_{π⋆}.
Thus we can upper bound the quantity we want by
E_{ε_t,Z_t}[ max_{π∈Π}( A_π + 2ε_t(π(x_t))·Z_t ) ],
where ε_t is a Rademacher random vector and Z_t is now a random variable which is equal to L with probability K/L and is equal to 0 with the remaining probability.
Taking expectation over ρ_t and x_t and adding the (T − (t − 1))·K/L term that we abandoned, we arrive at the desired upper bound of REL(I_{1:t−1}). This concludes the proof of admissibility.
Regret bound. By applying Lemma 2 (see Appendix A) with E[Z_t²] = L²·Pr[Z_t = L] = K·L and invoking Lemma 1, we get the regret bound in Equation (8).
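As an illustration of how the relaxation is used algorithmically, the sketch below evaluates one random playout of REL with an explicit maximum over a small policy list. It is a hypothetical sketch, not the paper's implementation: the maximum over `policies` stands in for a single call to the offline (ERM) oracle, and past contexts are resampled uniformly as a stand-in for the stochastic context distribution.

```python
import numpy as np

def sampled_relaxation(x_hist, chat_hist, T, K, L, policies, rng):
    # One random playout of REL(I_{1:t}) with t = len(x_hist): draw future
    # contexts, Rademacher vectors eps_tau, and Z_tau in {0, L} with
    # Pr[Z = L] = K/L, then take the best policy and add (T - t) K / L.
    t = len(x_hist)
    x_future = [x_hist[rng.integers(t)] for _ in range(T - t)]
    eps = rng.choice([-1.0, 1.0], size=(T - t, K))
    Z = L * (rng.random(T - t) < K / L)
    best = -np.inf
    for pi in policies:  # each pi maps a context to an action in {0, ..., K-1}
        past = -sum(chat[pi(x)] for x, chat in zip(x_hist, chat_hist))
        future = sum(2.0 * eps[i, pi(x_future[i])] * Z[i] for i in range(T - t))
        best = max(best, past + future)
    return best + (T - t) * K / L

rng = np.random.default_rng(1)
K, L, T = 3, 9.0, 8
policies = [lambda x, a=a: a for a in range(K)]   # toy class: constant policies
x_hist = [int(v) for v in rng.integers(5, size=4)]
chat_hist = [rng.uniform(size=K) for _ in x_hist]
print(sampled_relaxation(x_hist, chat_hist, T, K, L, policies, rng))
```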
6 Discussion
In this paper, we present a new oracle-based algorithm for adversarial contextual bandits and we prove that it achieves O((KT)^{2/3} log(N)^{1/3}) regret in the settings studied by Rakhlin and Sridharan [7]. This is the best regret bound that we are aware of among oracle-based algorithms.
While our bound improves on the O(T^{3/4}) bounds in prior work [7, 8], achieving the optimal O(√(T K log(N))) regret bound with an oracle-based approach still remains an important open question. Another interesting avenue for future work involves removing the stochastic assumption on the contexts.
References
[1] Alekh Agarwal, Daniel Hsu, Satyen Kale, John Langford, Lihong Li, and Robert E. Schapire. Taming
the monster: A fast and simple algorithm for contextual bandits. In International Conference on Machine
Learning (ICML), 2014.
[2] Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The
adversarial multi-armed bandit problem. In Foundations of Computer Science (FOCS), 1995.
[3] Nicolo Cesa-Bianchi, Yoav Freund, David Haussler, David P Helmbold, Robert E Schapire, and Manfred K
Warmuth. How to use expert advice. Journal of the ACM (JACM), 1997.
[4] Miroslav Dudík, Daniel Hsu, Satyen Kale, Nikos Karampatziakis, John Langford, Lev Reyzin, and Tong
Zhang. Efficient optimal learning for contextual bandits. In Uncertainty and Artificial Intelligence (UAI),
2011.
[5] Yoav Freund and Robert E Schapire. A decision-theoretic generalization of on-line learning and an
application to boosting. Journal of Computer and System Sciences, 1997.
[6] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information.
In Advances in Neural Information Processing Systems (NIPS), 2008.
[7] Alexander Rakhlin and Karthik Sridharan. BISTRO: an efficient relaxation-based method for contextual
bandits. In International Conference on Machine Learning (ICML), 2016.
[8] Vasilis Syrgkanis, Akshay Krishnamurthy, and Robert E. Schapire. Efficient algorithms for adversarial
contextual learning. In International Conference on Machine Learning (ICML), 2016.
Quantum Perceptron Models
Nathan Wiebe
Microsoft Research
Redmond WA, 98052
[email protected]
Ashish Kapoor
Microsoft Research
Redmond WA, 98052
[email protected]
Krysta M Svore
Microsoft Research
Redmond WA, 98052
[email protected]
Abstract
We demonstrate how quantum computation can provide non-trivial improvements
in the computational and statistical complexity of the perceptron model. We
develop two quantum algorithms for perceptron learning. The first algorithm
exploits quantum information processing to determine a separating hyperplane
using a number of steps sublinear in the number of data points N, namely O(√N).
The second algorithm illustrates how the classical mistake bound of O(1/γ²) can be
further improved to O(1/√γ) through quantum means, where γ denotes the margin.
Such improvements are achieved through the application of quantum amplitude
amplification to the version space interpretation of the perceptron model.
1 Introduction
Quantum computation is an emerging technology that utilizes quantum effects to achieve significant,
and in some cases exponential, speed-ups of algorithms over their classical counterparts. The growing
importance of machine learning has in recent years led to a host of studies that investigate the promise
of quantum computers for machine learning [1, 2, 3, 4, 5, 6, 7, 8, 9].
While a number of important quantum speedups have been found, the majority of these speedups
are due to replacing a classical subroutine with an equivalent albeit faster quantum algorithm.
The true potential of quantum algorithms may therefore remain underexploited since quantum
algorithms have been constrained to follow the same methodology behind traditional machine
learning methods [10, 8, 9]. Here we consider an alternate approach: we devise a new machine
learning algorithm that is tailored to the speedups that quantum computers can provide.
We illustrate our approach by focusing on perceptron training [11]. The perceptron is a fundamental
building block for various machine learning models including neural networks and support vector
machines [12]. Unlike many other machine learning algorithms, tight bounds are known for the
computational and statistical complexity of traditional perceptron training. Consequently, we are
able to rigorously show different performance improvements that stem from either using quantum
computers to improve traditional perceptron training or from devising a new form of perceptron
training that aligns with the capabilities of quantum computers.
We provide two quantum approaches to perceptron training. The first approach focuses on the
computational aspect of the problem and the proposed method quadratically reduces the scaling of
the complexity of training with respect to the number of training vectors. The second algorithm
focuses on statistical efficiency. In particular, we use the mistake bounds for traditional perceptron
training methods and ask if quantum computation lends any advantages. To this end, we propose an
algorithm that quadratically improves the scaling of the training algorithm with respect to the margin
between the classes in the training data. The latter algorithm combines quantum amplitude estimation
in the version space interpretation of the perceptron learning problem. Our approaches showcase the
trade-offs that one can consider in developing quantum algorithms, and the ultimate advantages of
performing learning tasks on a quantum computer.
Figure 1 (two panels, titled "Version Space" and "Feature Space"): Version space and feature space views of classification. This figure is from [18].
The rest of the paper is organized as follows: we first cover the background on perceptrons, version
space and Grover search. We then present our two quantum algorithms and provide analysis of their
computational and statistical efficiency before concluding.
2 Background
2.1 Perceptrons and Version Space
Given a set of N separable training examples {φ_1, . . . , φ_N} ⊂ ℝ^D with corresponding labels {y_1, . . . , y_N}, y_i ∈ {+1, −1}, the goal of perceptron learning is to recover a hyperplane w that perfectly classifies the training set [11]. Formally, we want w such that y_i · w^T φ_i > 0 for all i. There are various simple online algorithms that start with a random initialization of the hyperplane and make updates as they encounter more and more data [11, 13, 14, 15]; however, the rule that we consider for online perceptron training is, upon misclassifying a vector (φ, y), w ← w + y·φ.
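A minimal classical implementation of this update rule (assuming numeric feature vectors and labels in {+1, −1}; a sketch, not the paper's code) looks as follows:

```python
import numpy as np

def train_perceptron(phis, ys, max_epochs=100):
    # Classical online perceptron: scan the data, and on each misclassified
    # (phi, y) apply the update w <- w + y * phi.  Terminates once a full
    # pass over the training set makes no mistake.
    w = np.zeros(phis.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for phi, y in zip(phis, ys):
            if y * (w @ phi) <= 0:      # misclassified (or on the boundary)
                w += y * phi
                mistakes += 1
        if mistakes == 0:
            break
    return w

rng = np.random.default_rng(0)
phis = rng.normal(size=(100, 4)); phis /= np.linalg.norm(phis, axis=1, keepdims=True)
ys = np.sign(phis @ np.array([1.0, -0.5, 0.2, 0.7]))
w = train_perceptron(phis, ys)
print(bool(np.all(ys * (phis @ w) > 0)))   # True once the data is separated
```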
A remarkable feature of the perceptron model is that upper bounds exist for the number of updates that need to be made during this training procedure. In particular, if the training data is composed of unit vectors, φ_i ∈ ℝ^D, that are separated by a margin of γ then there are perceptron training algorithms that make at most O(1/γ²) mistakes [16], independent of the dimension of the training vectors. Similar bounds also exist when the data is not separable [17] and for other generalizations of perceptron training [13, 14, 15]. Note that in the worst case the algorithm will need to look at all points in the training set at least once, so the computational complexity will be O(N).
Our goal is to explore whether quantum procedures can provide improvements both in terms of computational complexity (that is, better than O(N)) and statistical efficiency (improving upon O(1/γ²)). Instead of solely applying quantum constructs to the feature space, we also consider the version space interpretation of perceptrons, which leads to the improved scaling with γ.
Formally, version space is defined as the set of all possible hyperplanes that perfectly separate the data: VS := {w | y_i · w^T φ_i > 0 for all i}. Given training data, the traditional representation is to depict data as points in the feature space and use hyperplanes to depict the classifiers. However, there exists a dual representation where the hyperplanes are depicted as points and the data points are represented as hyperplanes that induce constraints on the feasible set of classifiers. Figure 1, which is borrowed from [18], illustrates the version space interpretation of perceptrons. Given three labeled data points in a 2D space, the dual space illustrates the set of normalized hyperplanes as a yellow ball with unit radius. The third dimension corresponds to the weights that multiply the two dimensions of the input data and the bias term. The planes represent the constraints imposed by observing the labeled data, as every labeled datum renders one half of the space infeasible. The version space is then the intersection of all the half-spaces that are valid. Naturally, classifiers including SVMs [12] and Bayes point machines [19] lie in the version space.
We note that there are quantum constructs such as Grover search and amplitude amplification which
provide non-trivial speedups for the search task. This is the main reason why we resort to the version
space interpretation. We can use this formalism to simply pose the problem of determining the
Figure 2: A geometric description of the action of U_grover on an initial state vector ψ. (The original figure shows three panels plotting ψ, U_targ ψ, and U_init U_targ ψ against the axis Pψ/‖Pψ‖, with the angles θ_a and 2θ_a marked.)
separating hyperplane as a search problem in the dual space. For example, given a set of candidate hyperplanes, our problem reduces to searching amongst the sample set for the classifier that will
successfully classify the entire set. Therefore training the perceptron is equivalent to finding any
feasible point in the version space. We describe these quantum constructs in detail below.
2.2 Grover's Search
Both quantum approaches introduced in this work and their corresponding speed-ups stem from a quantum subroutine called Grover's search [20, 21], which is a special case of a more general method referred to as amplitude amplification [22]. Rather than sampling from a probability distribution until a given marked element is found, the Grover search algorithm draws only one sample and then uses quantum operations to modify the distribution from which it sampled. The probability distribution is rotated, or more accurately the quantum state that yields the distribution is rotated, into one whose probability is sharply concentrated on the marked element. Once a sharply peaked distribution is identified, the marked item can be found using just one sample. In general, if the probability of finding such an element is known to be a, then amplitude amplification requires O(√(1/a)) operations to find the marked item with certainty.
While Grover's search is a quantum subroutine, it can in fact be understood using only geometric arguments. The only notions from quantum mechanics used are those of the quantum state vector and of Born's rule (measurement). A quantum state vector is a complex unit vector whose components have magnitudes that are equal to the square roots of the probabilities. In particular, if ψ is a quantum state vector and p is the corresponding probability distribution then
p = ψ̄ ∘ ψ,  (1)
where the unit column vector ψ, which sits in the vector space ℂⁿ, is called the quantum state vector, ∘ is the Hadamard (pointwise) product and ψ̄ is the (elementwise) complex conjugate. A quantum state can be measured such that, if we have a quantum state vector ψ and a basis vector w, then the probability of measuring ψ = w is |⟨ψ, w⟩|², where ⟨·, ·⟩ denotes the inner product.
We need to implement two unitary operations in order to perform the search algorithm:
U_init = 2ψψ† − 𝟙,  U_targ = 𝟙 − 2P.  (2)
The operators U_init and U_targ can be interpreted geometrically as reflections within a two-dimensional space spanned by the vectors ψ and Pψ. If we assume that Pψ ≠ 0 and Pψ ≠ ψ then these two reflection operations can be used to rotate ψ in the space span(ψ, Pψ). Specifically this rotation is U_grover = U_init U_targ. Its action is illustrated in Figure 2. The angle between the vector ψ and Pψ/‖Pψ‖ is π/2 − θ_a, where θ_a := sin⁻¹(|⟨ψ, Pψ/‖Pψ‖⟩|). It then follows from elementary geometry and the rule for computing the probability distribution from a quantum state (known as Born's rule) that after j iterations of Grover's algorithm the probability of finding a desirable outcome is
p(ψ ∈ good | j) = sin²((2j + 1)θ_a).  (3)
It is then easy to see that if θ_a ≪ 1 and a probability of success greater than 1/4 is desired then j ∈ O(1/θ_a) suffices to find a marked outcome. This is quadratically faster than is possible from statistical sampling, which requires O(1/θ_a²) samples on average. Simple modifications to this algorithm allow it to be used in cases where θ_a is not known [21, 22].
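The rotation picture can be checked numerically. The sketch below evaluates Eq. (3) together with the standard near-optimal iteration count j ≈ π/(4θ_a), which is an assumption of this illustration rather than a statement from the text:

```python
import numpy as np

def grover_success(a, j):
    # Probability of measuring a marked item after j Grover iterations,
    # starting from the uniform state, when the marked probability mass is a.
    theta = np.arcsin(np.sqrt(a))              # theta_a from Eq. (3)
    return np.sin((2 * j + 1) * theta) ** 2

a = 1.0 / 1024                                 # one marked item among N = 1024
j_opt = int(np.floor(np.pi / (4 * np.arcsin(np.sqrt(a)))))
print(j_opt, grover_success(a, j_opt))         # ~25 iterations vs ~1024 classical samples
```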
3 Online quantum perceptron
Now that we have discussed Grover's search we turn our attention to applying it to speed up online
perceptron training. In order to do so, we first need to define the quantum model that we wish to
use as our quantum analogue of perceptron training. While there are many ways of defining such a model, the following approach is perhaps the most direct. Although the traditional feature space
perceptron training algorithm is online [16], meaning that the training examples are provided one at a
time to it in a streaming fashion, we deviate from this model slightly by instead requiring that the
algorithm be fed training examples that are, in effect, sampled uniformly from the training set. This
is a slightly weaker model, as it allows for the possibility that some training examples will be drawn
multiple times. However, the ability to draw quantum states that are in a uniform superposition over
all vectors in the training set enables quantum computing to provide advantages over both classical
methods that use either access model.
We assume without loss of generality that the training set consists of N unit vectors, φ_1, . . . , φ_N. We then define ψ_1, . . . , ψ_N to be the basis vectors whose indices each coincide with a (B + 1)-bit representation of the corresponding (φ_j, y_j), where y_j ∈ {−1, 1} is the class assigned to φ_j, and let ψ_0 be a fixed unit vector that is chosen to represent a blank memory register.
We introduce the vectors ψ_j to make it clear that the quantum state vectors used to represent training vectors do not live in the same vector space as the training vectors themselves. We choose the quantum state vectors here to occupy a larger space than the training vectors because the Heisenberg uncertainty principle makes it much more difficult for a quantum computer to compute the class that the perceptron assigns to a training vector in such cases.
For example, the training vector (φ_j, y_j) = ([0, 0, 1, 0]^T, 1) can be encoded as an unsigned integer 00101 ↦ 5, which in turn can be represented by the unit vector ψ = [0, 0, 0, 0, 0, 1]^T. More generally, if φ_j ∈ ℝ^D were a vector of floating point numbers then a similar vector could be constructed by concatenating the binary representations of the D floating point numbers that comprise it with (y_j + 1)/2, and expressing the bit string as an unsigned integer, Q. The integer can then be expressed as a unit vector ψ with [ψ]_q = δ_{q,Q}. While encoding the training data as an exponentially long vector is inefficient on a classical computer, it is not on a quantum computer because of the quantum computer's innate ability to store and manipulate exponentially large quantum state vectors.
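A sketch of this encoding for binary features follows (real-valued features would first be serialized to fixed-width bit strings, which this illustration does not implement):

```python
import numpy as np

def encode_example(phi_bits, y):
    # Encode a binary feature vector and a label y in {-1, +1} as the unsigned
    # integer whose bits are phi_bits followed by (y + 1) / 2, then return the
    # corresponding one-hot (basis) state vector with [psi]_q = delta_{q, index}.
    bits = list(phi_bits) + [(y + 1) // 2]
    index = int("".join(str(b) for b in bits), 2)   # [0,0,1,0], y=1 -> 0b00101 = 5
    state = np.zeros(2 ** len(bits))
    state[index] = 1.0
    return index, state

idx, psi = encode_example([0, 0, 1, 0], 1)
print(idx)                                          # 5, matching the example in the text
```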
Any machine learning algorithm, be it quantum or classical, needs to have a mechanism to access the training data. We assume that the data is accessed via an oracle that not only accesses the training data but also determines whether the data is misclassified. To clarify, let {u_j : j = 1, . . . , N} be an orthonormal basis of quantum state vectors that serve as addresses for the training vectors in the database. Given an input address for a training datum, the unitary operations U and U† allow the quantum computer to access the corresponding vector. Specifically, for all j,
U[u_j ⊗ ψ_0] = u_j ⊗ ψ_j,
U†[u_j ⊗ ψ_j] = u_j ⊗ ψ_0.  (4)
Given an input address vector u_j, the former corresponds to a database access and the latter inverts the database access.
Note that because U and U† are linear operators we have that U Σ_{j=1}^N u_j ⊗ ψ_0 = Σ_j u_j ⊗ ψ_j. A quantum computer can therefore access each training vector simultaneously using a single operation. The resultant vector is often called in the physics literature a quantum superposition of states, and this feature of linear transformations is referred to as quantum parallelism within quantum computing.
The next ingredient that we need is a method to test whether the perceptron correctly classifies a training vector addressed by a particular u_j. This process can be pictured as being performed by a unitary transformation that flips the sign of any basis vector that is misclassified. By linearity, a single application of this process flips the sign of any component of the quantum state vector that coincides with a misclassified training vector. It therefore is no more expensive than testing if a given training vector is misclassified in a classical setting. We denote the operator, which depends on the perceptron weights w, F_w and require that
F_w[u_j ⊗ ψ_0] = (−1)^{f_w(φ_j, y_j)} [u_j ⊗ ψ_0],  (5)
where f_w(φ_j, y_j) is a Boolean function that is 1 if and only if the perceptron with weights w misclassifies training vector φ_j. Since the classification step involves computing the dot products of finite size vectors, this process is efficient given that the ψ_j are efficiently computable.
We apply F_w in the following way. Let F̂_w be a unitary operation such that
F̂_w ψ_j = (−1)^{f_w(φ_j, y_j)} ψ_j.  (6)
F̂_w is easy to implement in the quantum computer using a multiply controlled phase gate and a quantum implementation of the perceptron classification algorithm, f_w. We can then write
F_w = U†(𝟙 ⊗ F̂_w)U.  (7)
Classifying the data based on the phases (the minus signs) output by F̂_w naturally leads to a very memory efficient training algorithm because only one training vector is ever stored in memory during the implementation of F_w given in Eq. (7). We can then use F_w to perform Grover's search algorithm,
by taking U_targ = F_w and U_init = 2ψψ† − 𝟙 with ψ = Ψ := (1/√N) Σ_{j=1}^N u_j ⊗ ψ_0, to seek out training vectors that the current perceptron model misclassifies. This leads to a quadratic reduction in the number of times that the training vectors need to be accessed by F_w or its classical analogue.
In the classical setting, the natural object to query is slightly different. The oracle that is usually assumed in online algorithms takes the form U^c : ℤ ↦ ℂ^D where U^c(j) = φ_j. We will assume that a similar function exists in both the classical and the quantum settings for simplicity. In both cases, we will consider the cost of a query to U^c to be proportional to the cost of a query to F_w.
We use these operations to implement a quantum search for training vectors that the perceptron misclassifies. This leads to a quadratic speedup relative to classical methods, as shown in the following theorem. It is also worth noting that our algorithm uses a slight variation on the Grover search algorithm to ensure that the runtime is finite.
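To make the mechanics concrete, here is a classical state-vector simulation of this search (dense arrays, feasible only for small N; an illustration rather than the algorithm itself; the iteration count uses the standard j ≈ π/(4θ_a) heuristic, which is an assumption of this sketch):

```python
import numpy as np

def grover_search_misclassified(w, phis, ys, rng, iters=None):
    # Simulate searching for an index j with f_w(phi_j, y_j) = 1, i.e. a
    # training vector the current model w errs on.  The oracle multiplies
    # misclassified components by -1 (the role of F_w); the diffusion step is
    # the reflection 2 psi psi^T - I about the uniform state.
    N = len(ys)
    marked = (ys * (phis @ w) <= 0).astype(float)   # f_w(phi_j, y_j)
    a = marked.mean()
    if a == 0:
        return None                                 # model already separates the data
    if iters is None:
        iters = int(np.floor(np.pi / (4 * np.arcsin(np.sqrt(a)))))
    state = np.full(N, 1.0 / np.sqrt(N))
    for _ in range(iters):
        state = state * (1 - 2 * marked)            # oracle: phase flip on mistakes
        state = 2 * state.mean() - state            # diffusion about the mean
    return rng.choice(N, p=state ** 2)              # Born-rule measurement

rng = np.random.default_rng(2)
N, D = 256, 8
phis = rng.normal(size=(N, D)); phis /= np.linalg.norm(phis, axis=1, keepdims=True)
ys = np.sign(phis @ rng.normal(size=D))
w = rng.normal(size=D)
j = grover_search_misclassified(w, phis, ys, rng)
print(j, bool(ys[j] * (phis[j] @ w) <= 0))          # sampled index is (very likely) a mistake
```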
Theorem 1. Given a training set that consists of unit vectors φ_1, . . . , φ_N that are separated by a margin of γ in feature space, the number of applications of F_w needed to infer a perceptron model, w, such that P(∃ j : f_w(φ_j) = 1) ≤ ε using a quantum computer is N_quant where
Ω(√N) ∋ N_quant ∈ O( (√N/γ²)·log(1/(εγ²)) ),
whereas the number of queries to f_w needed in the classical setting, N_class, where the training vectors are found by sampling uniformly from the training data, is bounded by
Ω(N) ∋ N_class ∈ O( (N/γ²)·log(1/(εγ²)) ).
We assume in Theorem 1 that the training data in the classical case is accessed in a manner that is analogous to the sampling procedure used in the quantum setting. If instead the training data is supplied by a stream (as in the standard online model) then the upper bound changes to N_class ∈ O(N/γ²), because all N training vectors can be deterministically checked to see if they are correctly classified by the perceptron. A quantum advantage is therefore obtained whenever N ≫ log²(1/(εγ²)).
In order to prove Theorem 1 we need two technical lemmas (proven in the supplemental material). The first bounds the complexity of the classical analogue to our training method:
Lemma 1. Given only the ability to sample uniformly from the training vectors, the number of queries to f_w needed to find a training vector that the current perceptron model fails to classify correctly, or to conclude that no such example exists, with probability 1 − εγ² is at most O(N log(1/(εγ²))).
The second proves the correctness of our online quantum perceptron algorithm and bounds the
complexity of the algorithm:
Lemma 2. Assuming that the training vectors {φ_1, . . . , φ_N} are unit vectors and that they are drawn from two classes separated by a margin of γ in feature space, Algorithm 2 will either update the perceptron weights, or conclude that the current model provides a separating hyperplane between the two classes, using a number of queries to F_w that is bounded above by O(√N log(1/(εγ²))) with probability of failure at most εγ².
After stating these results, we can now provide the proof of Theorem 1.
Proof of Theorem 1. The upper bounds follow as direct consequences of Lemma 2 and Lemma 1. Novikoff's theorem [16, 17] states that the algorithms described in both lemmas must be applied at most 1/γ² times before finding the result. However, either the classical or the quantum algorithm may fail to find a misclassified vector at each of the O(1/γ²) steps. The union bound states that the probability that this happens is at most the sum of the respective probabilities in each step. These probabilities are constrained to be εγ², which means that the total probability of failing to correctly find a mistake is at most ε if both algorithms are repeated 1/γ² times (which is the worst case number of times that they need to be repeated).
The lower bound on the quantum query complexity follows from contradiction. Assume that there exists an algorithm that can train an arbitrary perceptron using o(√N) query operations. We want to show that unstructured search with one marked element can be expressed as a perceptron training problem. Let w be a known set of perceptron weights and assume that the perceptron only misclassifies one vector φ_1. Thus if perceptron training succeeds then the value of φ_1 can be extracted from the updated weights. This training problem is therefore equivalent to searching for a misclassified vector. Now let φ_j = [1 − F(j), F(j)]^T ⊗ u_j, where u_j is a unit vector that represents the bit string j and F(j) is a Boolean function. Assume that F(0) = 1 and F(j) = 0 if j ≠ 0, which is without loss of generality equivalent to Grover's problem [20, 21]. Now assume that φ_j is assigned to class 2F(j) − 1 and take w = [1/√2, 1/√2]^T ⊗ (1/√N) Σ_j u_j. This perceptron therefore misclassifies φ_0 and no other vector in the training set. Updating the weights reveals the misclassified φ_j, which in turn yields the value of j such that F(j) = 1, and so Grover's search reduces to perceptron training.
Since Grover's search reduces to perceptron training in the case of one marked item, the lower bound of Ω(√N) queries for Grover's search [21] applies to perceptron training. Since we assumed that perceptron training needs o(√N) queries, this is a contradiction and the lower bound must be Ω(√N).
We have assumed in the classical setting that the user only has access to the training vectors through an oracle that is promised to draw a uniform sample from {(φ_1, y_1), . . . , (φ_N, y_N)}. Since we are counting the number of queries to f_w, it is clear that in the worst case the training vector that the perceptron makes a mistake on can be the last unique value sampled from this list. Thus the query complexity is Ω(N) classically.
4 Quantum version space perceptron
The strategy for our quantum version space training algorithm is to pose the problem of determining a separating hyperplane as search. Specifically, the idea is to first generate K sample hyperplanes w_1, . . . , w_K from a spherical Gaussian distribution N(0, 𝟙). Given a large enough K, we are guaranteed to have at least one hyperplane amongst the samples that lies in the version space and perfectly separates the data. As discussed earlier, Grover's algorithm can provide a quadratic speedup over classical search; consequently the efficiency of the algorithm is determined by K. Theorem 2 provides insight on how to determine the number of hyperplanes to be sampled.
Theorem 2. Given a training set that consists of D-dimensional unit vectors φ_1, . . . , φ_N with labels y_1, . . . , y_N that are separated by a margin of γ in feature space, a D-dimensional vector w sampled from N(0, 𝟙) perfectly separates the data with probability Θ(γ).
The proof of this theorem is provided in the supplementary material. The consequence of Theorem 2, stated below, is that the expected number of samples K required such that a separating hyperplane exists in the set only needs to scale as O(1/γ). This is remarkable because, similar to Novikoff's theorem [16], the number of samples needed does not scale with D. Thus Theorem 2 implies that if amplitude amplification is used to boost the probability of finding a vector in the version space then the resulting quantum algorithm will need only O(1/√γ) quantum steps on average.
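Theorem 2 also suggests the classical baseline against which the quantum routine is compared: rejection-sample Gaussian hyperplanes until one lies in the version space. A sketch (an assumed interface, not from the paper):

```python
import numpy as np

def sample_version_space(phis, ys, rng, max_draws=100000):
    # Rejection sampling: draw w ~ N(0, I) until y_i * <w, phi_i> > 0 for all i.
    # By Theorem 2 each draw succeeds with probability Theta(gamma), so the
    # expected number of draws is O(1 / gamma); amplitude amplification turns
    # this into O(1 / sqrt(gamma)) Grover steps.
    D = phis.shape[1]
    for k in range(1, max_draws + 1):
        w = rng.normal(size=D)
        if np.all(ys * (phis @ w) > 0):
            return w, k
    return None, max_draws

rng = np.random.default_rng(3)
N, D = 50, 5
phis = rng.normal(size=(N, D)); phis /= np.linalg.norm(phis, axis=1, keepdims=True)
ys = np.sign(phis @ rng.normal(size=D))
w, draws = sample_version_space(phis, ys, rng)
print(draws, w is not None and bool(np.all(ys * (phis @ w) > 0)))
```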
Next we show how to use Grover's algorithm to search for a hyperplane that lies in the version space. Let us take K = 2^ℓ, for a positive integer ℓ. Given the sampled hyperplanes w_1, . . . , w_K, we let W_1, . . . , W_K be vectors that encode a binary representation of these random perceptron vectors. In analogy to ψ_0, we also define W_0 to be a vector that represents an empty data register. We define the unitary operator V to generate these weights given an address vector u_j as follows:
V[u_j ⊗ W_0] = [u_j ⊗ W_j].  (8)
In this context we can also think of the address vector, u_j, as representing a seed for a pseudo-random number generator that yields perceptron weights W_j.
Also let us define the classical analogue of V to be V^c, which obeys V^c(j) = w_j. Now using V (and applying the Hadamard transform [23]) we can prepare the following quantum state:
Ψ := (1/√K) Σ_{k=1}^K u_k ⊗ W_k,  (9)
which corresponds to a uniform distribution over the randomly chosen w.
Now that we have defined the initial state, Ψ, for Grover's search we need to define an oracle that marks the vectors inside the version space. Let us define the operator F̂_{φ,y} via
F̂_{φ,y}[u_j ⊗ W_0] = (−1)^{1 + f_{w_j}(φ, y)} [u_j ⊗ W_0].  (10)
This unitary operation looks at an address vector, u_j, computes the corresponding perceptron model W_j, flips the sign of any component of the quantum state vector that is in the half-space in version space specified by φ, and then uncomputes W_j. This process can be realized using a quantum subroutine that computes f_w, an application of V and V†, and also the application of a conditional phase gate (which is a fundamental quantum operation that is usually denoted Z) [23].
The oracle F̂_{φ,y} does not allow us to directly use Grover's search to rotate a quantum state vector that is outside the version space towards the version space boundary, because it effectively only checks one of the half-space inequalities that define the version space. It can, however, be used to build an operation, Ĝ, that reflects about the version space:
Ĝ[u_j ⊗ W_0] = (−1)^{1 + (f_{w_j}(φ_1, y_1) ∨ · · · ∨ f_{w_j}(φ_N, y_N))} [u_j ⊗ W_0].  (11)
The operation Ĝ can be implemented using 2N applications of F̂_{φ,y} as well as a sequence of O(N) elementary quantum gates; hence we cost a query to Ĝ as O(N) queries to F̂_{φ,y}.
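Classically, the marked set that Ĝ reflects about is simply the conjunction of the N per-example checks, as in the following vectorized sketch (an illustration of the marking logic in Eq. (11), not the quantum circuit itself):

```python
import numpy as np

def version_space_marks(ws, phis, ys):
    # For K sampled hyperplanes ws (shape K x D), return a boolean vector whose
    # j-th entry is True iff w_j classifies every (phi_i, y_i) correctly, i.e.
    # the complement of the OR of the bits f_{w_j}(phi_i, y_i), at a classical
    # cost of N checks per hyperplane.
    margins = ys[None, :] * (ws @ phis.T)     # shape (K, N)
    return np.all(margins > 0, axis=1)

rng = np.random.default_rng(4)
K, N, D = 64, 20, 3
phis = rng.normal(size=(N, D)); phis /= np.linalg.norm(phis, axis=1, keepdims=True)
ys = np.sign(phis @ rng.normal(size=D))
ws = rng.normal(size=(K, D))
print(version_space_marks(ws, phis, ys).mean())   # fraction of samples in version space
```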
We use these components in our version space training algorithm to, in effect, amplify the margin between the two classes from γ to √γ. We give the asymptotic scaling of this algorithm in the following theorem.
Theorem 3. Given a training set that consists of unit vectors φ_1, . . . , φ_N that are separated by a margin of γ in feature space, the number of queries to F̂_{φ,y} needed to infer, with probability at least 1 − ε, a perceptron model w such that w is in the version space using a quantum computer is N_quant where
N_quant ∈ O( (N/√γ)·log^{3/2}(1/ε) ).
Proof. The proof of the theorem follows directly from bounds on K and the validity of our version space training algorithm. It is clear from previous discussions that the algorithm carries out Grover's search, but instead of searching for a φ that is misclassified it instead searches for a w in version space. Its validity therefore follows by the exact same steps followed in the proof of Lemma 2 but with N = K. However, since the algorithm is not repeated 1/γ² times in this context, we can replace γ with 1 in the proof. Thus if we wish to have a probability of failure of at most ε' then the number of queries made to Ĝ is in O(√K log(1/ε')). This also guarantees that if any of the K vectors are in the version space then the probability of failing to find that vector is at most ε'.
Next, since one query to Ĝ is costed at N queries to F̂_{φ,y}, the query complexity (in units of queries to F̂_{φ,y}) becomes O(N√K log(1/ε')). The only thing that then remains is to bound the value of K needed.
The probability of finding a vector in the version space is Θ(γ) from Theorem 2. This means that there exists κ > 0 such that the probability of failing to find a vector in the version space K times is at most (1 − κγ)^K ≤ e^{−κγK}. Thus this probability is at most δ for K ≥ (1/(κγ))·log(1/δ). It then suffices to pick K ∈ Θ(log(1/δ)/γ) for the algorithm.
The union bound implies that the probability that either none of the vectors lie in the version space, or that Grover's search fails to find such an element, is at most ε' + δ ≤ ε. Thus it suffices to pick ε' ∈ Θ(ε) and δ ∈ Θ(ε) to ensure that the total probability is at most ε. Therefore the total number of queries made to F̂_{φ,y} is in O(N log^{3/2}(1/ε)/√γ) as claimed.
The classical algorithm discussed previously has complexity O(N log(1/ε)/γ), which follows from the fact (from Theorem 2) that K ∈ Θ(log(1/ε)/γ) suffices to make the probability of not drawing an element of the version space at most ε. This demonstrates a quantum advantage whenever 1/γ ≫ log(1/ε), and illustrates that quantum computing can be used to boost the effective margins of the training data. Quantum models of perceptrons therefore not only provide advantages in terms of the number of vectors that need to be queried in the training process, they can also make the perceptron much more perceptive by making training less sensitive to small margins.
These performance improvements can also be viewed as mistake bounds for the version space perceptron. The inner loop in the version space algorithm attempts to sample from the version space and then, once it draws a sample, tests it against the training vectors to see if it errs on any example. Since the inner loop is repeated O(√K log(1/ε)) times, the maximum number of misclassified vectors that arises from this training process is, from Theorem 2, O((1/√γ)·log^{3/2}(1/ε)) which, for constant ε, constitutes a quartic improvement over the standard mistake bound of 1/γ² [16].
5 Conclusion
We have provided two distinct ways to look at quantum perceptron training that each afford different
speedups relative to the other. The first provides a quadratic speedup with respect to the size of the
training data. We further show that this algorithm is asymptotically optimal in that if a super-quadratic
speedup were possible then it would violate known lower bounds for quantum searching. The second
provides a quadratic reduction in the scaling of the training time (as measured by the number of
interactions with the training data) with the margin between the two classes. This latter result is
especially interesting because it constitutes a quartic speedup relative to the typical perceptron training
bounds that are usually seen in the literature.
Perhaps the most significant feature of our work is that it demonstrates that quantum computing
can provide provable speedups for perceptron training, which is a foundational machine learning
method. While our work gives two possible ways of viewing the perceptron model through the lens of
quantum computing, other quantum variants of the perceptron model may exist. Seeking new models
for perceptron learning that deviate from these classical approaches may not only provide a more
complete understanding of what form learning takes within quantum systems, but also may lead to
richer classes of quantum models that have no classical analogue and are not efficiently simulatable
on classical hardware. Such models may not only revolutionize quantum learning but also lead to a
deeper understanding of the challenges and opportunities that the laws of physics place on our ability
to learn.
References
[1] M Lewenstein. Quantum perceptrons. Journal of Modern Optics, 41(12):2491?2501, 1994.
[2] Esma Aïmeur, Gilles Brassard, and Sébastien Gambs. Machine learning in a quantum world. In
Advances in artificial intelligence, pages 431?442. Springer, 2006.
[3] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and
unsupervised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[4] Nathan Wiebe, Ashish Kapoor, and Krysta Svore. Quantum nearest-neighbor algorithms for
machine learning. Quantum Information and Computation, 15:318?358, 2015.
[5] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis.
Nature Physics, 10(9):631?633, 2014.
[6] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big
data classification. Physical review letters, 113(13):130503, 2014.
[7] Nathan Wiebe and Christopher Granade. Can small quantum systems learn? arXiv preprint
arXiv:1512.03145, 2015.
8
[8] Nathan Wiebe, Ashish Kapoor, and Krysta M Svore. Quantum deep learning. arXiv preprint
arXiv:1412.3489, 2014.
[9] Mohammad H Amin, Evgeny Andriyash, Jason Rolfe, Bohdan Kulchytskyy, and Roger Melko.
Quantum boltzmann machine. arXiv preprint arXiv:1601.02036, 2016.
[10] Silvano Garnerone, Paolo Zanardi, and Daniel A Lidar. Adiabatic quantum algorithm for search
engine ranking. Physical review letters, 108(23):230506, 2012.
[11] Frank Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386, 1958.
[12] Johan AK Suykens and Joos Vandewalle. Least squares support vector machine classifiers.
Neural processing letters, 9(3):293?300, 1999.
[13] Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, and Jaz Kandola. The
perceptron algorithm with uneven margins. In ICML, volume 2, pages 379?386, 2002.
[14] Claudio Gentile. A new approximate maximal margin classification algorithm. The Journal of
Machine Learning Research, 2:213?242, 2002.
[15] Shai Shalev-Shwartz and Yoram Singer. A new perspective on an old perceptron algorithm. In
Learning Theory, pages 264?278. Springer, 2005.
[16] Albert BJ Novikoff. On convergence proofs for perceptrons. Technical report, DTIC Document,
1963.
[17] Yoav Freund and Robert E Schapire. Large margin classification using the perceptron algorithm.
Machine learning, 37(3):277?296, 1999.
[18] Thomas P Minka. A family of algorithms for approximate Bayesian inference. PhD thesis,
Massachusetts Institute of Technology, 2001.
[19] Ralf Herbrich, Thore Graepel, and Colin Campbell. Bayes point machines: Estimating the
bayes point in kernel space. In IJCAI Workshop SVMs, pages 23?27, 1999.
[20] Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the
twenty-eighth annual ACM symposium on Theory of computing, pages 212?219. ACM, 1996.
[21] Michel Boyer, Gilles Brassard, Peter Høyer, and Alain Tapp. Tight bounds on quantum
searching. arXiv preprint quant-ph/9605034, 1996.
[22] Gilles Brassard, Peter Hoyer, Michele Mosca, and Alain Tapp. Quantum amplitude amplification
and estimation. Contemporary Mathematics, 305:53?74, 2002.
[23] Michael A Nielsen and Isaac L Chuang. Quantum computation and quantum information.
Cambridge university press, 2010.
Parameter Learning
for Log-supermodular Distributions
Tatiana Shpakova
INRIA - École Normale Supérieure Paris
[email protected]
Francis Bach
INRIA - École Normale Supérieure Paris
[email protected]
Abstract
We consider log-supermodular models on binary variables, which are probabilistic
models with negative log-densities which are submodular. These models provide
probabilistic interpretations of common combinatorial optimization tasks such as
image segmentation. In this paper, we focus primarily on parameter estimation in
the models from known upper-bounds on the intractable log-partition function. We
show that the bound based on separable optimization on the base polytope of the
submodular function is always inferior to a bound based on "perturb-and-MAP"
ideas. Then, to learn parameters, given that our approximation of the log-partition
function is an expectation (over our own randomization), we use a stochastic
subgradient technique to maximize a lower-bound on the log-likelihood. This can
also be extended to conditional maximum likelihood. We illustrate our new results
in a set of experiments in binary image denoising, where we highlight the flexibility
of a probabilistic model to learn with missing data.
1 Introduction
Submodular functions provide efficient and flexible tools for learning on discrete data. Several
common combinatorial optimization tasks, such as clustering, image segmentation, or document
summarization, can be achieved by the minimization or the maximization of a submodular function [1,
8, 14]. The key benefit of submodularity is the ability to model notions of diminishing returns, and
the availability of exact minimization algorithms and approximate maximization algorithms with
precise approximation guarantees [12].
In practice, it is not always straightforward to define an appropriate submodular function for a problem
at hand. Given fully-labeled data, e.g., images and their foreground/background segmentations in
image segmentation, structured-output prediction methods such as the structured-SVM may be
used [18]. However, it is common (a) to have missing data, and (b) to embed submodular function
minimization within a larger model. These are two situations well tackled by probabilistic modelling.
Log-supermodular models, with negative log-densities equal to a submodular function, are a first important step toward probabilistic modelling on discrete data with submodular functions [5]. However,
it is well known that the log-partition function is intractable in such models. Several bounds have
been proposed, that are accompanied with variational approximate inference [6]. These bounds are
based on the submodularity of the negative log-densities. However, parameter learning (typically by
maximum likelihood), which is a key feature of probabilistic modeling, has not been tackled yet. We
make the following contributions:
• In Section 3, we review existing variational bounds for the log-partition function and show that the bound of [9], based on "perturb-and-MAP" ideas, formally dominates the bounds proposed by [5, 6].
• In Section 4.1, we show that for parameter learning via maximum likelihood the existing bound of [5, 6] typically leads to a degenerate solution while the one based on "perturb-and-MAP" ideas and logistic samples [9] does not.
• In Section 4.2, given that the bound based on "perturb-and-MAP" ideas is an expectation (over our own randomization), we propose to use a stochastic subgradient technique to maximize the lower-bound on the log-likelihood, which can also be extended to conditional maximum likelihood.
• In Section 5, we illustrate our new results on a set of experiments in binary image denoising, where we highlight the flexibility of a probabilistic model for learning with missing data.
2 Submodular functions and log-supermodular models
In this section, we review the relevant theory of submodular functions and recall typical examples of
log-supermodular distributions.
2.1 Submodular functions
We consider submodular functions on the vertices of the hypercube {0, 1}^D. This hypercube representation is equivalent to the power set of {1, . . . , D}. Indeed, we can go from a vertex of the hypercube to a set by looking at the indices of the components equal to one, and from a set to a vertex by taking the indicator vector of the set.
For any two vertices of the hypercube, x, y ∈ {0, 1}^D, a function f : {0, 1}^D → ℝ is submodular if f(x) + f(y) ≥ f(min{x, y}) + f(max{x, y}), where the min and max operations are taken component-wise and correspond to the intersection and union of the associated sets. Equivalently, the function x ↦ f(x + e_i) − f(x), where e_i ∈ ℝ^D is the i-th canonical basis vector, is non-increasing. Hence, the notion of diminishing returns is often associated with submodular functions. The most widely used submodular functions are cuts, concave functions of subset cardinality, mutual information, set covers, and certain functions of eigenvalues of submatrices [1, 7]. Supermodular functions are simply negatives of submodular functions.
In this paper, we are going to use a few properties of such submodular functions (see [1, 7] and references therein). Any submodular function f can be extended from {0, 1}^D to a convex function on ℝ^D, which is called the Lovász extension. This extension has the same value on {0, 1}^D, hence we use the same notation f. Moreover, this function is convex and piecewise linear, which implies the existence of a polytope B(f) ⊂ ℝ^D, called the base polytope, such that for all x ∈ ℝ^D, f(x) = max_{s∈B(f)} x^T s, that is, f is the support function of B(f). The Lovász extension f and the base polytope B(f) have explicit expressions that are, however, not relevant to this paper. We will only use the fact that f can be efficiently minimized on {0, 1}^D by a variety of generic algorithms, or by more efficient dedicated ones for subclasses such as graph cuts.
2.2
Log-supermodular distributions
Log-supermodular models were introduced in [5] to model probability distributions on a hypercube, x ∈ {0, 1}^D, and are defined as

p(x) = (1/Z(f)) exp(−f(x)),

where f : {0, 1}^D → R is a submodular function such that f(0) = 0 and the partition function is Z(f) = Σ_{x∈{0,1}^D} exp(−f(x)). It is more convenient to deal with the convex log-partition function A(f) = log Z(f) = log Σ_{x∈{0,1}^D} exp(−f(x)). In general, the calculation of the partition function Z(f) or the log-partition function A(f) is intractable, as it includes simple binary Markov random fields, for which the exact calculation is known to be #P-hard [10]. In Section 3, we review upper-bounds for the log-partition function.
2.3 Examples
Essentially, all submodular functions used in the minimization context can be used as negative
log-densities [5, 6]. In computer vision, the most common examples are graph-cuts, which are
essentially binary Markov random fields with attractive potentials, but higher-order potentials have
been considered as well [11]. In our experiments, we use graph-cuts, where submodular function
minimization may be performed with max-flow techniques and is thus efficient [4]. Note that there
are extensions of submodular functions to continuous domains that could be considered as well [2].
3 Upper-bounds on the log-partition function
In this section, we review the main existing upper-bounds on the log-partition function for log-supermodular densities. These upper-bounds use several properties of submodular functions, in particular, the Lovász extension and the base polytope. Note that lower bounds based on submodular maximization aspects and superdifferentials [5] can be used to highlight the tightness of various bounds, which we present in Figure 1.
3.1 Base polytope relaxation with L-Field [5]
This method exploits the fact that any submodular function f(x) can be lower bounded by a modular function s(x), i.e., a linear function of x ∈ {0, 1}^D in the hypercube representation. The submodular function and its lower bound are related by f(x) = max_{s∈B(f)} s^⊤x, leading to:

A(f) = log Σ_{x∈{0,1}^D} exp(−f(x)) = log Σ_{x∈{0,1}^D} min_{s∈B(f)} exp(−s^⊤x),

which, by swapping the sum and min, is less than

min_{s∈B(f)} log Σ_{x∈{0,1}^D} exp(−s^⊤x) = min_{s∈B(f)} Σ_{d=1}^D log(1 + e^{−s_d}) =: A_{L-field}(f).   (1)

Since the polytope B(f) is tractable (through its membership oracle or by maximizing linear functions efficiently), the bound A_{L-field}(f) is tractable, i.e., computable in polynomial time. Moreover, it has a nice interpretation through convex duality, as the logistic function log(1 + e^{−s_d}) may be represented as max_{μ_d∈[0,1]} {−μ_d s_d − μ_d log μ_d − (1 − μ_d) log(1 − μ_d)}, leading to:

A_{L-field}(f) = min_{s∈B(f)} max_{μ∈[0,1]^D} {−μ^⊤s + H(μ)} = max_{μ∈[0,1]^D} {H(μ) − f(μ)},

where H(μ) = −Σ_{d=1}^D [μ_d log μ_d + (1 − μ_d) log(1 − μ_d)]. This shows in particular the convexity of f ↦ A_{L-field}(f). Finally, [6] shows the remarkable result that the minimizer s ∈ B(f) may be obtained by minimizing a simpler function on B(f), namely the squared Euclidean norm, thus leading to algorithms such as the minimum-norm-point algorithm [7].
3.2 "Perturb-and-MAP" with logistic distributions
Estimating the log-partition function can be done through optimization using "perturb-and-MAP" ideas. The main idea is to perturb the log-density, find the maximum a-posteriori configuration (i.e., perform optimization), and then average over several random perturbations [9, 17, 19].
The Gumbel distribution on R, whose cumulative distribution function is F(z) = exp(−exp(−(z + c))), where c is the Euler constant, is particularly useful. Indeed, if {g(y)}_{y∈{0,1}^D} is a collection of independent random variables g(y) indexed by y ∈ {0, 1}^D, each following the Gumbel distribution, then the random variable max_{y∈{0,1}^D} g(y) − f(y) is such that log Z(f) = E_g [max_{y∈{0,1}^D} {g(y) − f(y)}] [9, Lemma 1]. The main problem is that we need 2^D such variables, and a key contribution of [9] is to show that if we consider a factored collection {g_d(y_d)}_{y_d∈{0,1}, d=1,...,D} of i.i.d. Gumbel variables, then we get an upper-bound on the log-partition function, that is, log Z(f) ≤ E_g max_{y∈{0,1}^D} {Σ_{d=1}^D g_d(y_d) − f(y)}.
Writing g_d(y_d) = [g_d(1) − g_d(0)] y_d + g_d(0) and using the facts that (a) g_d(0) has zero expectation and (b) the difference between two independent Gumbel variables has a logistic distribution (with cumulative distribution function z ↦ (1 + e^{−z})^{−1}) [15], we get the following upper-bound:

A_Logistic(f) = E_{z_1,...,z_D ∼ logistic} [max_{y∈{0,1}^D} {z^⊤y − f(y)}],   (2)

where the random vector z ∈ R^D consists of independent elements taken from the logistic distribution. This is always an upper-bound on A(f) and it uses only the fact that submodular functions are efficient to optimize. It is convex in f as an expectation of a maximum of affine functions of f.
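A minimal Monte Carlo sketch (ours) of the bound (2); the inner maximization is brute force here for small D, whereas in practice it is a submodular minimization / graph-cut computation:

import itertools
import numpy as np

rng = np.random.default_rng(0)
D = 8
vertices = [np.array(x) for x in itertools.product([0, 1], repeat=D)]
f_cut = lambda x: np.abs(np.diff(x)).sum()

def A_logistic(f, M=1000):
    # Monte Carlo estimate of E_z max_y { z^T y - f(y) }
    total = 0.0
    for _ in range(M):
        z = rng.logistic(size=D)
        total += max(z @ y - f(y) for y in vertices)
    return total / M

print(A_logistic(f_cut))   # upper-bounds the exact A(f) from the previous snippet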
3.3 Comparison of bounds
In this section, we show that AL-field (f ) is always dominated by ALogistic (f ). This is complemented
by another result within the maximum likelihood framework in Section 4.
Proposition 1. For any submodular function f : {0, 1}^D → R, we have:

A(f) ≤ A_Logistic(f) ≤ A_{L-field}(f).   (3)
Proof. The first inequality was shown by [9]. For the second inequality, we have:

A_Logistic(f) = E_z max_{y∈{0,1}^D} [z^⊤y − f(y)]
= E_z max_{y∈{0,1}^D} [z^⊤y − max_{s∈B(f)} s^⊤y]   from properties of the base polytope B(f),
= E_z max_{y∈{0,1}^D} min_{s∈B(f)} [z^⊤y − s^⊤y]
= E_z min_{s∈B(f)} max_{y∈{0,1}^D} [z^⊤y − s^⊤y]   by convex duality,
≤ min_{s∈B(f)} E_z max_{y∈{0,1}^D} (z − s)^⊤y   by swapping expectation and minimization,
= min_{s∈B(f)} E_z Σ_{d=1}^D (z_d − s_d)_+   by explicit maximization,
= min_{s∈B(f)} Σ_{d=1}^D E_{z_d} (z_d − s_d)_+   by linearity of expectation,
= min_{s∈B(f)} Σ_{d=1}^D ∫_{−∞}^{+∞} (z_d − s_d)_+ P(z_d) dz_d   by definition of the expectation,
= min_{s∈B(f)} Σ_{d=1}^D ∫_{s_d}^{+∞} (z_d − s_d) e^{−z_d}/(1 + e^{−z_d})^2 dz_d   by substituting the density function,
= min_{s∈B(f)} Σ_{d=1}^D log(1 + e^{−s_d}), which leads to the desired result.

In the inequality above, since the logistic distribution has full support, there cannot be exact equality. However, if the base polytope is such that, with high probability, |s_d| ≫ |z_d| for all d, then the two bounds are close. Since the logistic distribution is concentrated around zero, the two bounds are close when |s_d| is large for all d and s ∈ B(f).
Running-time complexity of A_{L-field} and A_Logistic. The logistic bound A_Logistic can be computed whenever there is an efficient MAP-solver for submodular functions (plus a modular term). For L-Field, the divide-and-conquer algorithm of [5] can be applied, so its complexity amounts to the minimization of O(|V|) submodular problems. Meanwhile, for the method based on logistic samples, it is necessary to solve M optimization problems. In our empirical bound comparison (next paragraph), the running time was the same for both methods. Note, however, that for parameter learning, we need a single SFM problem per gradient iteration (and not M).
Empirical comparison of A_{L-field} and A_Logistic. We compare the upper-bounds on the log-partition function A_{L-field} and A_Logistic, with the setup used by [5]. We thus consider data from a Gaussian mixture model with 2 clusters in R^2. The centers are sampled from N([3, 3], I) and N([−3, −3], I), respectively. Then we sampled n = 50 points for each cluster. Further, these 2n points are used as nodes in a complete weighted graph, where the weight between points x and y is equal to e^{−c‖x−y‖}. We consider the graph cut function associated to this weighted graph, which defines a log-supermodular distribution. We then consider conditional distributions, one for each k = 1, . . . , n, on the events that at least k points from the first cluster lie on the one side of the cut and at least k points from the second cluster lie on the other side of the cut. For each conditional distribution, we evaluate and compare the two upper bounds. We also add the tree-reweighted belief propagation upper bound [23] and the superdifferential-based lower bound [5].
In Figure 1, we show various bounds on A(f) as functions of the number of conditioned pairs. The logistic upper bound is obtained using 100 logistic samples: the logistic upper-bound A_Logistic is close to the superdifferential lower bound from [5] and is indeed significantly lower than the bound A_{L-field}. However, the tree-reweighted belief propagation bound behaves a bit better in the second case, but its calculation takes more time, and it cannot be applied to general submodular functions.
3.4 From bounds to approximate inference
Since linear functions are submodular functions, given any convex upper-bound on the log-partition function, we may derive an approximate marginal probability for each x_d ∈ {0, 1}. Indeed, following [9], we consider an exponential family model p(x|t) = exp(−f(x) + t^⊤x − A(f − t)), where
[Figure 1: two panels showing the superdifferential lower bound, the L-field upper bound, tree-reweighted BP, and the logistic upper bound on the log-partition function, as a function of the number of conditioned pairs: (a) mean bounds with confidence intervals, c = 1; (b) mean bounds with confidence intervals, c = 3.]
Figure 1: Comparison of log-partition function bounds for different values of c. See text for details.
f − t is the function x ↦ f(x) − t^⊤x. When f is assumed to be fixed, this can be seen as an exponential family with the base measure exp(−f(x)), sufficient statistics x, and log-partition function A(f − t). It is known that the expectation of the sufficient statistics under the exponential family model, E_{p(x|t)} x, is the gradient of the log-partition function [23]. Hence, any approximation of this log-partition function gives an approximation of this expectation, which in our situation is the vector of marginal probabilities that an element is equal to 1.
For the L-field bound, at t = 0, we have ∇_{t_d} A_{L-field}(f − t) = σ(−s*_d), where s* is the minimizer of Σ_{d=1}^D log(1 + e^{−s_d}) over B(f) and σ is the sigmoid function, thus recovering the interpretation of [6] from another point of view.
For the logistic bound, this is the inference mechanism from [9], with ∇_{t_d} A_Logistic(f − t) = E_z y*(z), where y*(z) is the maximizer of max_{y∈{0,1}^D} z^⊤y − f(y). In practice, in order to perform approximate inference, we only sample M logistic variables. We could do the same for parameter learning, but a much more efficient alternative, based on mixing sampling and convex optimization, is presented in the next section.
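A sketch (ours) of this perturb-and-MAP marginal estimator; again, the inner MAP is brute force purely for illustration:

import itertools
import numpy as np

rng = np.random.default_rng(0)
D = 8
vertices = [np.array(x) for x in itertools.product([0, 1], repeat=D)]
f_cut = lambda x: np.abs(np.diff(x)).sum()

def approximate_marginals(f, t, M=500):
    # mu_d estimates P(x_d = 1) under p(x|t) proportional to exp(-f(x) + t^T x)
    mu = np.zeros(D)
    for _ in range(M):
        z = rng.logistic(size=D)
        y_star = max(vertices, key=lambda y: (z + t) @ y - f(y))
        mu += y_star
    return mu / M

print(approximate_marginals(f_cut, np.zeros(D)))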
4 Parameter learning through maximum likelihood
An advantage of log-supermodular probabilistic models is the opportunity to learn the model parameters from data using the maximum-likelihood principle. In this section, we consider that we are given
N observations x1 , . . . , xN ? {0, 1}D , e.g., binary images such as shown in Figure 2.
We consider a submodular function f(x) represented as f(x) = Σ_{k=1}^K θ_k f_k(x) − t^⊤x. The modular term t^⊤x is explicitly taken into account with t ∈ R^D, and the K base submodular functions f_k are assumed to be given, with θ ∈ R^K_+ so that the function f remains submodular. Assuming the data x_1, . . . , x_N are independent and identically distributed (i.i.d.), maximum likelihood is equivalent to minimizing:

min_{θ∈R^K_+, t∈R^D} −(1/N) Σ_{n=1}^N log p(x_n | θ, t) = min_{θ∈R^K_+, t∈R^D} (1/N) Σ_{n=1}^N [Σ_{k=1}^K θ_k f_k(x_n) − t^⊤x_n] + A(f),

which takes the particularly simple form

min_{θ∈R^K_+, t∈R^D} Σ_{k=1}^K θ_k [(1/N) Σ_{n=1}^N f_k(x_n)] − t^⊤[(1/N) Σ_{n=1}^N x_n] + A(θ, t),   (4)

where we use the notation A(θ, t) = A(f). We now consider replacing the intractable log-partition function by its approximations defined in Section 3.
4.1 Learning with the L-field approximation
In this section, we show that if we replace A(f) by A_{L-field}(f), we obtain a degenerate solution. Indeed, we have

A_{L-field}(θ, t) = min_{s∈B(Σ_{k=1}^K θ_k f_k)} Σ_{d=1}^D log(1 + e^{−s_d+t_d}).

This implies that Eq. (4) becomes

min_{θ∈R^K_+, t∈R^D} min_{s∈B(Σ_{k=1}^K θ_k f_k)} Σ_{k=1}^K θ_k [(1/N) Σ_{n=1}^N f_k(x_n)] − t^⊤[(1/N) Σ_{n=1}^N x_n] + Σ_{d=1}^D log(1 + e^{−s_d+t_d}).

The minimum with respect to t_d may be performed in closed form with t_d − s_d = log(⟨x⟩_d / (1 − ⟨x⟩_d)), where ⟨x⟩ = (1/N) Σ_{n=1}^N x_n. Putting this back into the equation above, we get the equivalent problem:

min_{θ∈R^K_+} min_{s∈B(Σ_{k=1}^K θ_k f_k)} Σ_{k=1}^K θ_k [(1/N) Σ_{n=1}^N f_k(x_n)] − s^⊤[(1/N) Σ_{n=1}^N x_n] + const,

which is equivalent to, using the representation of f as the support function of B(f):

min_{θ∈R^K_+} Σ_{k=1}^K θ_k [(1/N) Σ_{n=1}^N f_k(x_n) − f_k((1/N) Σ_{n=1}^N x_n)].

Since f_k (through its Lovász extension) is convex, by Jensen's inequality, the linear term in θ_k is non-negative; thus maximum likelihood through L-field will lead to a degenerate solution where all θ's are equal to zero.
4.2 Learning with the logistic approximation with stochastic gradients
In this section we consider the problem (4) and replace A(f) by A_Logistic(f):

min_{θ∈R^K_+, t∈R^D} Σ_{k=1}^K θ_k ⟨f_k(x)⟩_emp. − t^⊤⟨x⟩_emp. + E_{z∼logistic} max_{y∈{0,1}^D} [z^⊤y + t^⊤y − Σ_{k=1}^K θ_k f_k(y)],   (5)

where ⟨M(x)⟩_emp. denotes the empirical average of M(x) (over the data).
Denoting by y*(z, t, θ) ∈ {0, 1}^D the maximizers of z^⊤y + t^⊤y − Σ_{k=1}^K θ_k f_k(y), the objective function may be written:

Σ_{k=1}^K θ_k [⟨f_k(x)⟩_emp. − ⟨f_k(y*(z, t, θ))⟩_logistic] − t^⊤[⟨x⟩_emp. − ⟨y*(z, t, θ)⟩_logistic] + ⟨z^⊤y*(z, t, θ)⟩_logistic.

This implies that at the optimum, for θ_k > 0, we have ⟨f_k(x)⟩_emp. = ⟨f_k(y*(z, t, θ))⟩_logistic, while ⟨x⟩_emp. = ⟨y*(z, t, θ)⟩_logistic: the expected values of the sufficient statistics match between the data and the optimizers used for the logistic approximation [9].
In order to minimize the expectation in Eq. (5), we propose to use the projected stochastic subgradient method, not on the data as usually done, but on our own internal randomization. The algorithm then becomes, once we add a weighted ℓ2-regularization Ω(t, θ):
• Input: functions f_k, k = 1, . . . , K, and expected sufficient statistics ⟨f_k(x)⟩_emp. ∈ R and ⟨x⟩_emp. ∈ [0, 1]^D, regularizer Ω(t, θ).
• Initialization: θ = 0, t = 0.
• Iterations: for h from 1 to H
  – Sample z ∈ R^D with independent logistic components
  – Compute y* = y*(z, t, θ) ∈ argmax_{y∈{0,1}^D} z^⊤y + t^⊤y − Σ_{k=1}^K θ_k f_k(y)
  – Replace t by t − (C/√h) [y* − ⟨x⟩_emp. + ∇_t Ω(t, θ)]
  – Replace θ_k by [θ_k − (C/√h) (⟨f_k(x)⟩_emp. − f_k(y*) + ∇_{θ_k} Ω(t, θ))]_+
• Output: (θ, t).
Since our cost function is convex and Lipschitz-continuous, the averaged iterates converge to the global optimum [16] at rate 1/√H (for function values).
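The following Python sketch (ours) mirrors the iteration above. Here map_solver is a placeholder for the submodular MAP oracle returning argmax_y w^⊤y − Σ_k θ_k f_k(y) (e.g., a graph-cut solver), and the step size C/√h is one standard schedule compatible with the 1/√H rate above; the names and the schedule are our assumptions, not prescriptions from the text:

import numpy as np

def learn(map_solver, fk_emp, x_emp, fks, D, K, H=1000, C=0.1, mu=1e-2):
    # map_solver(w, theta): argmax_y w^T y - sum_k theta_k f_k(y) (hypothetical oracle)
    # Omega(t, theta) = (mu/2) (||t||^2 + ||theta||^2) is a weighted l2 regularizer
    rng = np.random.default_rng(0)
    theta, t = np.zeros(K), np.zeros(D)
    for h in range(1, H + 1):
        z = rng.logistic(size=D)
        y = map_solver(z + t, theta)                      # y*(z, t, theta)
        step = C / np.sqrt(h)
        t = t - step * (y - x_emp + mu * t)
        g = np.array([fk_emp[k] - fks[k](y) for k in range(K)])
        theta = np.maximum(theta - step * (g + mu * theta), 0.0)  # project on R_+^K
    return theta, t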
4.3 Extension to conditional maximum likelihood
In the experiments in Section 5, we consider a joint model over two binary vectors x, z ∈ {0, 1}^D, as follows:

p(x, z|θ, t, π) = p(x|θ, t) p(z|x, π) = exp(−f(x) − A(f)) Π_{d=1}^D π_d^{δ(z_d ≠ x_d)} (1 − π_d)^{δ(z_d = x_d)},   (6)
Figure 2: Denoising of a horse image from the Weizmann horse database [3]: (a) original image, (b) noisy image, (c) denoised image.
which corresponds to sampling x from a log-supermodular model and considering z that switches the values of x with probability π_d for each d, that is, a noisy observation of x. We have:

log p(x, z|θ, t, π) = −f(x) − A(f) + Σ_{d=1}^D [−log(1 + e^{u_d}) + x_d u_d + z_d u_d − 2 x_d z_d u_d],

with u_d = log(π_d / (1 − π_d)), which is equivalent to π_d = (1 + e^{−u_d})^{−1}.
Using Bayes rule, we have p(x|z, θ, t, π) ∝ exp(−f(x) + x^⊤u − 2x^⊤(u ∘ z)), which leads to the log-supermodular model p(x|z, θ, t, π) = exp(−f(x) + x^⊤(u − 2u ∘ z) − A(f − u + 2u ∘ z)), where ∘ denotes the elementwise product.
Thus, if we observe both z and x, we can consider a conditional maximization of the log-likelihood (still a convex optimization problem), which we do in our experiments for supervised image denoising, where we assume we know both noisy and original images at training time. Stochastic gradient on the logistic samples can then be used. Note that our conditional ML estimation can be seen as a form of approximate conditional random fields [13].
While supervised learning can be achieved by other techniques such as structured-output-SVMs [18, 20, 22], our approach also applies when we do not observe the original image, which we now consider.
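Since the posterior p(x|z, θ, t, π) is again log-supermodular with a shifted modular term, max-marginal denoising reduces to a single MAP call; a short sketch (ours), where map_solver(w) is a hypothetical oracle returning argmax_x w^⊤x − f(x):

import numpy as np

def denoise_map(z, pi, map_solver):
    # map_solver(w): argmax_x w^T x - f(x) (hypothetical oracle, e.g. one graph-cut)
    u = np.log(pi / (1.0 - pi))          # u_d = log(pi_d / (1 - pi_d))
    w = u - 2.0 * u * z                  # modular term of the posterior p(x | z)
    return map_solver(w)                 # the max-marginal (MAP) decoding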
4.4 Missing data through maximum likelihood
In the model in Eq. (6), we now assume we only observe the noisy output z, and we perform parameter learning for θ, t, π. This is a latent variable model to which maximum likelihood can be readily applied. We have:

log p(z|θ, t, π) = log Σ_{x∈{0,1}^D} p(z, x|θ, t, π)
= log Σ_{x∈{0,1}^D} exp(−f(x) − A(f)) Π_{d=1}^D π_d^{δ(z_d ≠ x_d)} (1 − π_d)^{δ(z_d = x_d)}
= A(f − u + 2u ∘ z) − A(f) + z^⊤u − Σ_{d=1}^D log(1 + e^{u_d}).

In practice, we will assume that the noise probability π (and hence u) is uniform across all elements. While we could use majorization-minimization approaches such as the expectation-maximization algorithm (EM), we consider instead stochastic subgradient descent to learn the model parameters θ, t and u (now a non-convex optimization problem, for which we still observed good convergence).
5 Experiments
The aim of our experiments is to demonstrate the ability of our approach to remove noise from binary images, following the experimental set-up of [9]. We consider a training sample of N_train = 100 images of size D = 50 × 50, and a test sample of N_test = 100 binary images, containing a horse silhouette from the Weizmann horse database [3]. At first we add some noise by flipping pixel values independently with probability π. In Figure 2, we provide an example from the test sample: the original, the noisy and the denoised image (by our algorithm).
We consider the model from Section 4.3, with two functions f_1(x), f_2(x) which are horizontal and vertical cut functions with binary weights respectively, together with a modular term of dimension D. To perform minimization we use graph-cuts [4], as we deal with positive or attractive potentials.
Supervised image denoising. We assume that we observe N = 100 pairs (x_i, z_i) of original-noisy images, i = 1, . . . , N. We perform parameter inference by maximum likelihood using stochastic subgradient descent (over the logistic samples), with regularization by the squared ℓ2-norm, one
noise π | max-marg. | std    | mean-marginals | std    | SVM-Struct | std
1%      | 0.4%      | <0.1%  | 0.4%           | <0.1%  | 0.6%       | <0.1%
5%      | 1.1%      | <0.1%  | 1.1%           | <0.1%  | 1.5%       | <0.1%
10%     | 2.1%      | <0.1%  | 2.0%           | <0.1%  | 2.8%       | 0.3%
20%     | 4.2%      | <0.1%  | 4.1%           | <0.1%  | 6.0%       | 0.6%
Table 1: Supervised denoising results.
        |              π is fixed              |            π is not fixed
π       | max-marg. | std   | mean-marg. | std  | max-marg. | std  | mean-marg. | std
1%      | 0.5%      | <0.1% | 0.5%       | <0.1%| 1.0%      | 1.0% | 0.9%       | 0.8%
5%      | 0.9%      | 0.1%  | 1.0%       | 0.1% | 3.5%      | 3.6% | 2.2%       | 2.0%
10%     | 1.9%      | 0.4%  | 2.1%       | 0.4% | 6.8%      | 7.0% | 20.0%      | —
20%     | 5.3%      | 2.0%  | 6.0%       | 2.0% | 20.0%     | —    | —          | —
Table 2: Unsupervised denoising results.
parameter for t, one for θ, both learned by cross-validation. Given our estimates, we may denoise a new image by computing the "max-marginal", e.g., the maximum a posteriori max_x p(x|z, θ, t) through a single graph-cut, or computing "mean-marginals" with 100 logistic samples. To calculate the error we use the normalized Hamming distance and 100 test images.
Results are presented in Table 1, where we compare the two types of decoding, as well as a structured output SVM (SVM-Struct [22]) applied to the same problem. Results are reported as the proportion of incorrectly recovered pixels. We see that the probabilistic models here slightly outperform the max-margin formulation¹ and that using mean-marginals (which is optimal given our loss measure) leads to slightly better performance.
Unsupervised image denoising. We now only consider N = 100 noisy images z_1, . . . , z_N to learn the model, without the original images, and we use the latent model from Section 4.4. We apply stochastic subgradient descent for the difference of the two convex functions A_Logistic to learn the model parameters, and use fixed regularization parameters equal to 10^{−2}.
We consider two situations, with a known noise-level π, or with learning it together with θ and t. The error was calculated using either max-marginals or mean-marginals. Note that here, structured-output SVMs cannot be used because there is no supervision. Results are reported in Table 2. One explanation for the better performance of max-marginals in this case is that the unsupervised approach tends to oversmooth the outcome and max-marginals correct this a bit.
When the noise level is known, the performance compared to supervised learning is not degraded much, showing the ability of the probabilistic models to perform parameter estimation with missing data. When the noise level is unknown and learned as well, results are worse, but still better than a trivial answer for moderate levels of noise (5% and 10%), though not better than outputting the noisy image for extreme levels (1% and 20%). In the challenging fully unsupervised case the standard deviation is up to 2.2% (which shows that our results are statistically significant).
6 Conclusion
In this paper, we have presented how approximate inference based on stochastic gradients and "perturb-and-MAP" ideas can be used to learn parameters of log-supermodular models, allowing us to benefit from the versatility of probabilistic modelling, in particular in terms of parameter estimation with missing data. While our experiments have focused on simple binary image denoising, exploring larger-scale applications in computer vision (such as done by [24, 21]) should also show the benefits of mixing probabilistic modelling and submodular functions.
Acknowledgements. We acknowledge support from the European Union's H2020 Framework Programme (H2020-MSCA-ITN-2014) under grant agreement no. 642685 MacSeNet, and thank Sesh Kumar, Anastasia Podosinnikova and Anton Osokin for interesting discussions related to this work.
¹[9] shows a stronger difference, which we believe (after consulting with the authors) is due to a lack of convergence of the iterative algorithm solving the max-margin formulation.
References
[1] F. Bach. Learning with submodular functions: a convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[2] F. Bach. Submodular functions: from discrete to continuous domains. Technical Report 1511.00394, arXiv, 2015.
[3] E. Borenstein, E. Sharon, and S. Ullman. Combining top-down and bottom-up segmentation. In Proc. ECCV, 2004.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(11):1222–1239, 2001.
[5] J. Djolonga and A. Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In Adv. NIPS, 2014.
[6] J. Djolonga and A. Krause. Scalable variational inference in log-supermodular models. In Proc. ICML, 2015.
[7] S. Fujishige. Submodular Functions and Optimization. Annals of Discrete Mathematics. Elsevier, 2005.
[8] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42:427–486, 2011.
[9] T. Hazan and T. Jaakkola. On the partition function and random maximum a-posteriori perturbations. In Proc. ICML, 2012.
[10] M. Jerrum and A. Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.
[11] P. Kohli, L. Ladicky, and P. H. S. Torr. Robust higher order potentials for enforcing label consistency. International Journal of Computer Vision, 82(3):302–324, 2009.
[12] A. Krause and D. Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, February 2014.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[14] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proc. NAACL/HLT, 2011.
[15] S. Nadarajah and S. Kotz. A generalized logistic distribution. International Journal of Mathematics and Mathematical Sciences, 19:3169–3174, 2005.
[16] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[17] G. Papandreou and A. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In Proc. ICCV, 2011.
[18] M. Szummer, P. Kohli, and D. Hoiem. Learning CRFs using graph cuts. In Proc. ECCV, 2008.
[19] D. Tarlow, R. P. Adams, and R. S. Zemel. Randomized optimum models for structured prediction. In Proc. AISTATS, 2012.
[20] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Adv. NIPS, 2003.
[21] S. Tschiatschek, J. Djolonga, and A. Krause. Learning probabilistic submodular diversity models via noise contrastive estimation. In Proc. AISTATS, 2016.
[22] I. Tsochantaridis, T. Joachims, Y. Altun, and Y. Singer. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
[23] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[24] J. Zhang, J. Djolonga, and A. Krause. Higher-order inference for multi-class log-supermodular models. In Proc. ICCV, 2015.
Exploiting the Structure:
Stochastic Gradient Methods Using Raw Clusters*
Zeyuan Allen-Zhu†
Princeton University / IAS
[email protected]
Yang Yuan†
Cornell University
[email protected]
Karthik Sridharan
Cornell University
[email protected]
Abstract
The amount of data available in the world is growing faster than our ability to deal
with it. However, if we take advantage of the internal structure, data may become
much smaller for machine learning purposes. In this paper we focus on one of
the fundamental machine learning tasks, empirical risk minimization (ERM), and
provide faster algorithms with the help from the clustering structure of the data.
We introduce a simple notion of raw clustering that can be efficiently computed
from the data, and propose two algorithms based on clustering information. Our
accelerated algorithm ClusterACDM is built on a novel Haar transformation applied
to the dual space of the ERM problem, and our variance-reduction based algorithm
ClusterSVRG introduces a new gradient estimator using clustering. Our algorithms
outperform their classical counterparts ACDM and SVRG respectively.
1 Introduction
For large-scale machine learning applications, n, the number of training data examples, is usually
very large. To search for the optimal solution, it is often desirable to use stochastic gradient methods
which only require one (or a batch of) random example(s) from the given training set per iteration in
order to form an estimator of the true gradient.
For empirical risk minimization problems (ERM) in particular, stochastic gradient methods have
received a lot of attention in the past decade. The original stochastic gradient descent (SGD) [4, 26]
simply defines the estimator using one random data example and converges slowly. Recently, variance-reduction methods were introduced to improve the running time of SGD [6, 7, 13, 18, 20–22, 24],
and accelerated gradient methods were introduced to further improve the running time when the
regularization parameter is small [9, 16, 17, 23, 27].
None of the above cited results, however, have considered the internal structure of the dataset, that is,
using the stochastic gradient with respect to one data vector p to estimate the stochastic gradients
of other data vectors close to p. To illustrate why internal structure can be helpful, consider the
following extreme case: if all the data vectors are located at the same spot, then every stochastic
gradient represents the full gradient of the entire dataset. In a non-extreme case, if data vectors form
clusters, then the stochastic gradient of one data vector could provide a rough estimation for its
neighbors. Therefore, one should expect ERM problems to be easier if the data vectors are clustered.
More importantly, well-clustered datasets are abundant in big-data scenarios. For instance, although
there are more than 1 billion users on Facebook, the intrinsic "feature vectors" of these users can be
naturally categorized by the users? occupations, nationalities, etc. As another example, although there
*The full version of this paper can be found on https://arxiv.org/abs/1602.02151.
†These two authors contributed equally to this paper.
are 581,012 vectors in the famous Covtype dataset [8], these vectors can be efficiently categorized
into 1,445 clusters of diameter 0.1; see Section 5. With these examples in mind, we investigate in
this paper how to train an ERM problem faster using clustering information.
1.1 Known Result and Our Notion of Raw Clustering
In a seminal work by Hofmann et al. published in NIPS 2015 [11], they introduced N-SAGA, the
first ERM training algorithm that takes into account the similarities between data vectors. In each
iteration, N-SAGA computes the stochastic gradient of one data vector p, and uses this information
as a biased representative for a small neighborhood of p (say, 20 nearest neighbors of p).
In this paper, we focus on a more general and powerful notion of clustering yet capturing only the
minimum requirement for a cluster to have similar vectors. Assume without loss of generality data
vectors have norm at most 1. We say that a partition of the data vectors is an (s, ?) raw clustering if
the vectors are divided into s disjoint sets, and the average distance between vectors in each set is at
most ?. For different values of ?, one can obtain an (s? , ?) raw clustering where s? is a function on ?.
For example, a (1445, 0.1) raw clustering exists for the Covtype dataset that contains 581,012 data
vectors. Raw clustering enjoys the following nice properties.
• It allows outliers to exist in a cluster and nearby vectors to be split into multiple clusters.
• It allows large clusters. This is in contrast to N-SAGA, which requires each cluster to be very small (say, of size 20) due to their algorithmic limitation.
Computation Overhead. Since we do not need exactness, raw clusterings can be obtained very
efficiently. We directly adopt the approach of Hofmann et al. [11] because finding approximate
clustering is the same as finding approximate neighbors. Hofmann et al. [11] proposed to use
approximate nearest neighbor algorithms such as LSH [2, 3] and product quantization [10, 12], and
we use LSH in our experiments. Without trying hard to optimize the code, we observed that in time
0.3T we can detect if good clustering exists, and if so, in time around 3T we find the actual clustering.
Here T is the running time for a stochastic method such as SAGA to perform n iterations (i.e., one
pass) on the dataset.
We repeat three remarks from Hofmann et al. First, the better the clustering quality the better
performance we can expect; yet one can always use the trivial clustering as a fallback option. Second,
the clustering time should be amortized over multiple runs of the training program: if one performs
30 runs to choose between loss functions and tune parameters, the amortized cost to compute a
raw clustering is at most 0.1T . Third, since stochastic gradient methods are sequential methods,
increasing the computational cost in a highly parallelizable way may not affect data throughput.
Note. Clustering can also be obtained for free in some scenarios. If Facebook data are retrieved, one can use the geographic information of the users to form a raw clustering. If one works with the CIFAR-10 dataset, the known CIFAR-100 labels can be used as clustering information [14].
1.2 Our New Results
We first observe some limitations of N-SAGA. Firstly, it is a biased algorithm and does not converge to the objective minimum.3 Secondly, in order to keep the bias small, N-SAGA only exploits a small
neighborhood for every data vector. Thirdly, N-SAGA may need 20 times more computation time per
iteration as compared to SAGA or SGD, if 20 is the average neighborhood size.
We explore in this paper how a given (s, δ) raw clustering can improve the performance of training
ERM problems. We propose two unbiased algorithms that we call ClusterACDM and ClusterSVRG.
The two algorithms use different techniques. ClusterACDM uses a novel clustering-based transformation in the dual space, and provides a faster algorithm than ACDM [1, 15] both in practice and in
terms of asymptotic worst-case performance. ClusterSVRG is built on top of SVRG [13], but using a
new clustering-based gradient estimator to improve the running time.
More specifically, consider for simplicity ridge regression where the ℓ2 regularizer has weight λ > 0. The best known non-accelerated methods (such as SAGA [6] and SVRG [13]) and the best known accelerated methods (such as ACDM or AccSDCA [23]) run in time respectively

non-accelerated: Õ(nd + d/λ)   and   accelerated: Õ(nd + √(n/λ)·d),   (1.1)

where d is the dimension of the data vectors and the Õ notation hides the log(1/ε) factor that depends on the accuracy. Accelerated methods converge faster when λ is smaller than 1/n.
³N-SAGA uses the stochastic gradient of one data vector to completely represent its neighbors. This changes the objective value and therefore cannot give very accurate solutions.
Our ClusterACDM method outperforms (1.1) both in terms of theory and practice. Given an (s, δ) raw clustering, ClusterACDM enjoys a worst-case running time

Õ(nd + max{√s, √(δn)} · d/√λ).   (1.2)

In the ideal case when all the feature vectors are identical, ClusterACDM converges in time Õ(nd + d/√λ). Otherwise, our running time is asymptotically better than that of known accelerated methods by a factor O(min{√(n/s), √(1/δ)}) that depends on the clustering quality. Our speed-up also generalizes to other ERM problems as well, such as Lasso.
Our ClusterSVRG matches the best non-accelerated result in (1.1) in the worst-case;4 however, it
enjoys a provably smaller variance than SVRG or SAGA, so runs faster in practice.
Techniques Behind ClusterACDM. We highlight our main techniques behind ClusterACDM. Since the vectors in a cluster have almost identical directions when δ is small, we wish to create an auxiliary vector for each cluster representing "moving in the average direction of all vectors in this cluster".
Next, we design a stochastic gradient method that, instead of uniformly choosing a random vector,
selects those auxiliary vectors with a much higher probability compared with ordinary ones. This
could lead to a running time improvement because moving in the direction of an auxiliary vector only
costs O(d) running time but exploits the information of the entire cluster.
We implement the above intuition using optimization insights. In the dual space of the ERM
problem, each variable corresponds to a data example in the primal, and the objective is known to be
coordinate-wise smooth with the same smoothness parameter per coordinate. In the preprocessing
step, ClusterACDM applies a novel Haar transformation on each cluster of the dual coordinates.
Haar transformation rotates the dual space, and for each cluster, it automatically reserves a new
dual variable that corresponds to the "auxiliary vector" mentioned above. Furthermore, these new
dual variables have significantly larger smoothness parameters and therefore will be selected with
probability much larger than 1/n if one applies a state-of-the-art accelerated coordinate descent
method such as ACDM.
Other Related Work. ClusterACDM can be viewed as "preconditioning" the data matrix from the dual variable side. Recently, preconditioning has received some attention in machine learning. In
particular, non-uniform sampling can be viewed as using diagonal preconditioners [1, 28]. However,
diagonal preconditioning has nothing to do with clustering: for instance, if all data vectors have
the same Euclidean norm, the cited results are identical to SVRG or APCG so do not exploit the
clustering information. Some authors also study preconditioning from the primal side using SVD [25].
This is different from us because for instance when all the data vectors are same (thus forming a
perfect cluster), the cited result reduces to SVRG and does not improve the running time.
2 Preliminaries
Given a dataset consisting of n vectors {a_1, . . . , a_n} ⊆ R^d, we assume without loss of generality that ‖a_i‖₂ ≤ 1 for each i ∈ [n]. Let a clustering of the dataset be a partition of the indices [n] = S_1 ∪ · · · ∪ S_s. We call each set S_c a cluster and use n_c = |S_c| to denote its size; it satisfies Σ_{c=1}^s n_c = n. We are interested in the following quantification that estimates the clustering quality:
Definition 2.1 (raw clustering on vectors). We say a partition [n] = S_1 ∪ · · · ∪ S_s is an (s, δ) raw clustering for the vectors {a_1, . . . , a_n} if for every cluster S_c it satisfies (1/|S_c|²) Σ_{i,j∈S_c} ‖a_i − a_j‖₂ ≤ δ.
We call it a raw clustering because the above definition captures the minimum requirement for each cluster to have similar vectors. For instance, the above "average" definition allows a few outliers to exist in each cluster and allows nearby vectors to be split into different clusters.
Raw clusterings of the dataset are very easy to obtain: we include in Section 5.1 a simple and efficient algorithm for computing an (s_δ, δ) raw clustering of any quality δ. An assumption similar to our (s, δ) raw clustering assumption in Definition 2.1 was also introduced by Hofmann et al. [11].
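A short sketch (ours, for illustration) that measures the quality δ of a given partition according to Definition 2.1:

import numpy as np

def raw_clustering_quality(A, clusters):
    # A: (n, d) data matrix with rows a_i; clusters: list of index arrays S_c
    delta = 0.0
    for S in clusters:
        X = A[S]
        sq = (X ** 2).sum(axis=1)
        dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0.0))
        delta = max(delta, dist.sum() / len(S) ** 2)   # average pairwise distance
    return len(clusters), delta                        # the pair (s, delta)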
Definition 2.2 (Smoothness and strong convexity). For a convex function g : R^n → R,
• g is σ-strongly convex if ∀x, y ∈ R^n, it satisfies g(y) ≥ g(x) + ⟨∇g(x), y − x⟩ + (σ/2)‖x − y‖².
• g is L-smooth if ∀x, y ∈ R^n, it satisfies ‖∇g(x) − ∇g(y)‖ ≤ L‖x − y‖.
• g is coordinate-wise smooth with parameters (L_1, L_2, . . . , L_n) if, for every x ∈ R^n, δ > 0, and i ∈ [n], it satisfies |∇_i g(x + δe_i) − ∇_i g(x)| ≤ L_i · δ.
⁴The asymptotic worst-case running time for non-accelerated methods in (1.1) cannot be improved in general, even if a perfect clustering (i.e., δ = 0) is given.
For strongly convex and coordinate-wise smooth functions g, one can apply the accelerated coordinate descent algorithm (ACDM) to minimize g:
Theorem 2.3 (ACDM). If g(x) is σ-strongly convex and coordinate-wise smooth with parameters (L_1, . . . , L_n), the non-uniform accelerated coordinate descent method of [1] produces an output y satisfying g(y) − min_x g(x) ≤ ε in O(Σ_i √(L_i/σ) · log(1/ε)) iterations. Each iteration runs in time proportional to the computation of a coordinate gradient ∇_i g(·) of g.
Remark 2.4. Accelerated coordinate descent admits several variants such as APCG [17], ACDM [15], and NU_ACDM [1]. These variants agree on the running time when L_1 = · · · = L_n, but NU_ACDM is the fastest when L_1, . . . , L_n are non-uniform. More specifically, NU_ACDM selects a coordinate i with probability proportional to √L_i. In contrast, ACDM samples coordinate i with probability proportional to L_i and APCG samples i with probability 1/n. We refer to NU_ACDM as the accelerated coordinate descent method (ACDM) in this paper.
3 ClusterACDM Algorithm
Our ClusterACDM method is an accelerated stochastic gradient method just like AccSDCA [23], APCG [17], ACDM [1, 15], SPDC [27], etc. Consider a regularized least-squares problem

Primal: min_{x∈R^d} { P(x) := (1/2n) Σ_{i=1}^n (⟨a_i, x⟩ − l_i)² + r(x) },   (3.1)

where each a_i ∈ R^d is the feature vector of a training example and l_i is the label of a_i. Problem (3.1) becomes ridge regression when r(x) = (λ/2)‖x‖₂², and becomes Lasso when r(x) = λ‖x‖₁. One of the state-of-the-art accelerated stochastic gradient methods to solve (3.1) is through its dual. Consider the following equivalent dual formulation of (3.1) (see for instance [17] for the detailed proof):

Dual: min_{y∈R^n} D(y) := (1/2n)‖y‖² + (1/n)⟨y, l⟩ + r*(−(1/n)Ay)
  = (1/n) Σ_{i=1}^n (½ y_i² + y_i l_i) + r*(−(1/n) Σ_{i=1}^n y_i a_i),   (3.2)

where A := [a_1, a_2, . . . , a_n] ∈ R^{d×n} and r*(y) := max_w { y^⊤w − r(w) } is the Fenchel dual of r(w).
3.1 Previous Solutions
If r(x) is λ-strongly convex in P(x), the dual objective D(y) is both strongly convex and smooth. The following lemma is due to [17] but is also proved in our appendix for completeness.
Lemma 3.1. If r(x) is λ-strongly convex, then D(y) is σ = 1/n strongly convex and coordinate-wise smooth with parameters (L_1, . . . , L_n) for L_i = 1/n + ‖a_i‖²/(λn²).
For this reason, the authors of [17] proposed to apply accelerated coordinate descent (such as their APCG method) to minimize D(y).5 Assuming without loss of generality ‖a_i‖₂ ≤ 1 for i ∈ [n], we have L_i ≤ 1/n + 1/(λn²). Using Theorem 2.3 on D(·), we know that ACDM produces an ε-approximate dual minimizer y in O(Σ_i √(L_i/σ) log(1/ε)) = Õ(n + √(n/λ)) iterations, and each iteration runs in time proportional to the computation of ∇_i D(y), which is O(d). This total running time Õ(nd + √(n/λ)·d) is the fastest known for solving (3.1) when r(x) is λ-strongly convex.
Due to space limitation, in the main body we only focus on the case when r(x) is strongly convex; the non-strongly convex case (such as Lasso) can be reduced to this case. See Remark A.1 in the appendix.
3.2 Our New Algorithm
Each dual coordinate y_i naturally corresponds to the i-th feature vector a_i. Therefore, given a raw clustering [n] = S_1 ∪ S_2 ∪ · · · ∪ S_s of the dataset, we can partition the coordinates of the dual vector y ∈ R^n into s blocks, each corresponding to a cluster. Without loss of generality, we assume the coordinates of y are sorted in the order of the cluster indices. In other words, we write y = (y_{S_1}, . . . , y_{S_s}) where each y_{S_c} ∈ R^{n_c}.
⁵They showed that, defining x = ∇r*(−Ay/n), if y is a good approximate minimizer of the dual objective D(y), then x is also a good approximate minimizer of the primal objective P(x).
Algorithm 1 ClusterACDM
Input: a raw clustering S_1 ∪ · · · ∪ S_s.
1: Apply the cluster-based Haar transformation H_cl to get the transformed objective D′(y′).
2: Run ACDM to minimize D′(y′).
3: Transform the solution of D′(y′) back to the original space.
ClusterACDM transforms the dual objective (3.2) into an equivalent form, by performing an n_c-dimensional Haar transformation on the c-th block of coordinates for every c ∈ [s]. Formally,
Definition 3.2. Let

R_2 := [ 1/√2   −1/√2 ],   R_3 := [ √2/√3   −√2/(2√3)   −√2/(2√3) ;   0   1/√2   −1/√2 ],

and more generally, for a = ⌊n/2⌋ and b = ⌈n/2⌉,

R_n := [ (1/a)/√(1/a + 1/b) · 1_a^⊤    −(1/b)/√(1/a + 1/b) · 1_b^⊤ ;
         R_a                            0                          ;
         0                              R_b                        ] ∈ R^{(n−1)×n},

where 1_a denotes the all-ones vector of length a (and R_1 is the empty 0 × 1 matrix). Then, define the n-dimensional (normalized) Haar matrix as

H_n := [ (1/√n) · 1_n^⊤ ;  R_n ] ∈ R^{n×n}.
We give a few examples of Haar matrices in Example A.2 in Appendix A. It is easy to verify that
Lemma 3.3. For every n, H_n^⊤H_n = H_nH_n^⊤ = I, so H_n is a unitary matrix.
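The recursive construction can be written down directly; the following sketch (ours) builds H_n per Definition 3.2 and numerically confirms Lemma 3.3:

import numpy as np

def R(n):
    if n == 1:
        return np.zeros((0, 1))                  # R_1 is empty
    a, b = n // 2, n - n // 2                    # a = floor(n/2), b = ceil(n/2)
    c = 1.0 / np.sqrt(1.0 / a + 1.0 / b)
    out = np.zeros((n - 1, n))
    out[0, :a], out[0, a:] = c / a, -c / b       # first row, orthogonal to all-ones
    out[1:a, :a] = R(a)
    out[a:, a:] = R(b)
    return out

def H(n):
    return np.vstack([np.full((1, n), 1.0 / np.sqrt(n)), R(n)])

print(np.allclose(H(7) @ H(7).T, np.eye(7)))     # True: H_n is unitary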
Definition 3.4. Given a clustering [n] = S_1 ∪ · · · ∪ S_s, define the cluster-based Haar transformation H_cl ∈ R^{n×n} as the block diagonal matrix

H_cl := diag(H_{|S_1|}, H_{|S_2|}, . . . , H_{|S_s|}).
Accordingly, we apply the unitary transformation H_cl to (3.2) and consider

min_{y′∈R^n} { D′(y′) := (1/2n)‖y′‖² + (1/n)⟨y′, H_cl l⟩ + r*(−(1/n) A H_cl^⊤ y′) }.   (3.3)

We call D′(y′) the transformed objective function.
It is clear that the minimization problem (3.3) is equivalent to (3.2) by transforming y = H_cl^⊤y′. Now, our ClusterACDM algorithm applies ACDM to minimize this transformed objective D′(y′).
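The following sketch (ours, reusing H(n) from the snippet above) forms A H_cl^⊤ and illustrates the effect of the rotation on a tight cluster: the first transformed column carries essentially all of the cluster's mass, while the remaining columns are nearly zero:

import numpy as np
# assumes H(n) from the previous snippet

def cluster_haar_transform(A, clusters):
    # A: (d, n) with columns a_i grouped cluster by cluster; returns A H_cl^T
    return np.hstack([A[:, S] @ H(len(S)).T for S in clusters])

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 1)) + 0.01 * rng.normal(size=(5, 6))  # one tight cluster
B = cluster_haar_transform(A, [np.arange(6)])
print(np.linalg.norm(B[:, 0]), np.linalg.norm(B[:, 1:]))      # large vs. near zero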
We claim the following running time of ClusterACDM and discuss the high-level intuition in the main body. We defer the detailed analysis to Appendix A.
Theorem 3.5. If r(·) is λ-strongly convex and an (s, δ) raw clustering is given, then ClusterACDM outputs an ε-approximate minimizer of D(·) in time

T = Õ(nd + max{√s, √(δn)} · d/√λ).

Comparing to the complexity of APCG, ACDM, or AccSDCA (see (1.1)), ClusterACDM is faster by a factor that is up to Θ(min{√(n/s), √(1/δ)}).
High-Level Intuition. To see why the Haar transformation is helpful, we focus on one cluster c ∈ [s]. Assume without loss of generality that cluster c has vectors a_1, a_2, · · · , a_{n_c}. After applying the Haar transformation, the new columns 1, 2, . . . , n_c of the matrix A H_cl^⊤ become weighted combinations of a_1, a_2, · · · , a_{n_c}, and the weights are determined by the entries in the corresponding rows of H_{n_c}.
Observe that every row except the first one in H_{n_c} has its entries sum up to 0. Therefore, columns 2, . . . , n_c in A H_cl^⊤ will be close to zero vectors and have small norms. In contrast, since the first row in H_{n_c} has all its entries equal to 1/√n_c, the first column of A H_cl^⊤ becomes √n_c · (a_1 + · · · + a_{n_c})/n_c, the scaled average of all vectors in this cluster. It has a large Euclidean norm.
The first column after the Haar transformation can be viewed as an auxiliary feature vector representing the entire cluster. If we run ACDM with respect to this new matrix, then whenever this auxiliary column is selected, it represents "moving in the average direction of all vectors in this cluster". Since this single auxiliary column cannot represent the entire cluster, the remaining n_c − 1 columns serve as helpers that ensure that the algorithm is unbiased (i.e., converges to the exact minimizer).
Most importantly, as discussed in Remark 2.4, ACDM is a stochastic method that samples a dual coordinate i (thus a primal feature vector a_i) with a probability proportional to its square-root coordinate-smoothness (thus roughly proportional to ‖a_i‖). Since auxiliary vectors have much larger Euclidean norms, we expect them to be sampled with probabilities much larger than 1/n. This is how the faster running time is obtained in Theorem 3.5.
Remark. The speed-up of ClusterACDM depends on how much "non-uniformity" the underlying coordinate descent method can utilize. Therefore, no speed-up can be obtained if one applies APCG instead of NU_ACDM, which is optimally designed to utilize coordinate non-uniformity.
4 ClusterSVRG Algorithm
Our ClusterSVRG is a non-accelerated stochastic gradient method just like SVRG [13], SAGA [6], SDCA [22], etc. It directly works on minimizing the primal objective (similar to SVRG and SAGA):

min_{x∈R^d} { F(x) := f(x) + ψ(x) = (1/n) Σ_{i=1}^n f_i(x) + ψ(x) }.   (4.1)

Here, f(x) = (1/n) Σ_{i=1}^n f_i(x) is the finite average of n functions, each f_i(x) is convex and L-smooth, and ψ(x) is a simple (but possibly non-differentiable) convex function, sometimes called the proximal function. We denote by x* a minimizer of (4.1).
Recall that stochastic gradient methods work as follows. At every iteration t, they perform an update x_t ← x_{t−1} − η∇̃_{t−1} for some step length η > 0,6 where ∇̃_{t−1} is the so-called gradient estimator, and its expectation had better equal the full gradient ∇f(x_{t−1}). It is a known fact that the faster the variance Var[∇̃_{t−1}] diminishes, the faster the underlying method converges [13].
For instance, SVRG defines the estimator as follows. It has an outer loop of epochs. At the beginning of each epoch, SVRG records the current iterate x as a snapshot point x̃, and computes its full gradient ∇f(x̃). In each inner iteration within an epoch, SVRG defines ∇̃_{t−1} := (1/n) Σ_{j=1}^n ∇f_j(x̃) + ∇f_i(x_{t−1}) − ∇f_i(x̃), where i is a random index in [n]. SVRG usually chooses the epoch length m to be 2n, and it is known that Var[∇̃_{t−1}] approaches zero as t increases. We denote by ∇̃_{SVRG}^{t−1} this choice of ∇̃_{t−1} for SVRG.
In ClusterSVRG, we define the gradient estimator ∇̃_{t−1} based on clustering information. Given a clustering [n] = S_1 ∪ · · · ∪ S_s and denoting by c_i ∈ [s] the cluster that index i belongs to, we define

∇̃_{t−1} := (1/n) Σ_{j=1}^n (∇f_j(x̃) + ξ_{c_j}) + ∇f_i(x_{t−1}) − (∇f_i(x̃) + ξ_{c_i}).

Above, for each cluster c we introduce an additional ξ_c term that can be defined in one of the following two ways, initializing ξ_c = 0 at the beginning of each epoch for each cluster c. Then,
• In Option I, after each iteration t is completed, and supposing i is the random index chosen at iteration t, we update ξ_{c_i} ← ∇f_i(x_{t−1}) − ∇f_i(x̃).
• In Option II, we divide an epoch into subepochs of length s each (recall s is the number of clusters). At the beginning of each subepoch, for each cluster c ∈ [s], we define ξ_c ← ∇f_j(x) − ∇f_j(x̃). Here, x is the last iterate of the previous subepoch and j is a random index in S_c.
We summarize both options in Algorithm 2. Note that Option I gives simpler intuition but Option II leads to a simpler proof.
The intuition behind our new choice of ∇̃_{t−1} can be understood as follows. Observe that in the SVRG estimator ∇̃_{SVRG}^{t−1}, each term ∇f_j(x̃) can be viewed as a "guess term" for the true gradient ∇f_j(x_{t−1}) of function f_j. However, these guess terms may be very "outdated" because x̃ can be m = 2n iterations away from x_{t−1}, and therefore contribute to a large variance.
We use raw clusterings to improve these guess terms and reduce the variance. If function f_j belongs to cluster c, then our Option I uses ∇f_j(x̃) + ∇f_k(x_t) − ∇f_k(x̃) as the new guess of ∇f_j(x_t), where t is the last time cluster c was accessed and k is the index of the vector in this cluster that was accessed. This new guess only has an "outdatedness" of roughly s, which could be much smaller than n.
Due to space limitation, we defer all technical details of ClusterSVRG to Appendix B and B.3.
SVRG vs. SAGA vs. ClusterSVRG. SVRG becomes a special case of ClusterSVRG when all the
data vectors belong to the same cluster; SAGA becomes a special case of ClusterSVRG when each
⁶Or, more generally, the proximal update x_t ← argmin_x { (1/2η)‖x − x_{t−1}‖² + ⟨∇̃_{t−1}, x⟩ + ψ(x) } if ψ(x) is nonzero.
Algorithm 2 ClusterSVRG
Input: epoch length m and learning rate η, a raw clustering S_1 ∪ · · · ∪ S_s.
1: x_0, x̃ ← initial point, t ← 0.
2: for epoch ← 0 to MaxEpoch do
3:   x̃ ← x_t, and (ξ_1, . . . , ξ_s) ← (0, . . . , 0)
4:   for iter ← 1 to m do
5:     t ← t + 1 and choose i uniformly at random from {1, · · · , n}
6:     x_t ← x_{t−1} − η · [ (1/n) Σ_{j=1}^n (∇f_j(x̃) + ξ_{c_j}) + ∇f_i(x_{t−1}) − (∇f_i(x̃) + ξ_{c_i}) ]
7:     Option I: ξ_{c_i} ← ∇f_i(x_{t−1}) − ∇f_i(x̃)
8:     Option II: if iter mod s = 0 then for all c = 1, . . . , s,
9:       ξ_c ← ∇f_j(x_{t−1}) − ∇f_j(x̃), where j is randomly chosen from S_c.
10:   end for
11: end for
data vector belongs to its own cluster. We hope that this interpolation helps experimentalists decide
between these methods: (1) if the data vectors are pairwise close to each other then use SVRG; (2) if
the data vectors are all very separated from each other then use SAGA; and (3) if the data vectors
have nice clustering structures (which one can detect using LSH), then use our ClusterSVRG.
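For concreteness, a Python sketch (ours) of one ClusterSVRG epoch with Option I; grad(i, x) is a user-supplied stochastic gradient ∇f_i(x) (e.g., a_i(a_i^⊤x − l_i) + λx for ridge regression), and the per-cluster corrections ξ_c and their running average are maintained incrementally:

import numpy as np

def cluster_svrg_epoch(grad, x, n, m, eta, cluster_of, s):
    rng = np.random.default_rng(0)
    x_tilde = x.copy()
    g_tilde = sum(grad(j, x_tilde) for j in range(n)) / n  # full gradient at snapshot
    xi = np.zeros((s,) + x.shape)                          # one correction per cluster
    xi_bar = np.zeros_like(x)                              # (1/n) sum_j xi_{c_j}
    counts = np.bincount(cluster_of, minlength=s)
    for _ in range(m):
        i = int(rng.integers(n))
        c = cluster_of[i]
        g_i, g_i_tilde = grad(i, x), grad(i, x_tilde)
        estimator = g_tilde + xi_bar + g_i - (g_i_tilde + xi[c])
        # Option I: refresh the correction of cluster c using index i
        xi_bar += counts[c] / n * (g_i - g_i_tilde - xi[c])
        xi[c] = g_i - g_i_tilde
        x = x - eta * estimator
    return x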
5 Experiments
We conduct experiments on three datasets that can be found on the LibSVM website [8]: covtype.binary, SensIT (combined scale), and news20.binary. To make comparisons easier across datasets, we scale every vector by the average Euclidean norm of all the vectors. This step is for comparison only and not necessary in practice. Note that Covtype and SensIT are two datasets where the feature vectors have a nice clustering structure; in contrast, the dataset News20 cannot be well clustered and we include it for comparison purposes only.
5.1 Clustering and Haar Transformation
We use the approximate nearest neighbor library E2LSH [2] to compute raw clusterings. Since this is not the main focus of our paper, we include our implementation in Appendix D. The running time needed for raw clustering is reasonable: in Table 1 in the appendix, we list the running time (1) to sub-sample and detect whether a good clustering exists, and (2) to compute the actual clustering. We also list the one-pass running time of SAGA using a sparse implementation for comparison.
We conclude two things from Table 1. First, in about the same time as SAGA performing 0.3 passes over the dataset, we can detect clustering structure for a given diameter $\delta$. This is a fast-enough preprocessing step to help experimentalists decide whether to use clustering-based methods. Second, in about the same time as SAGA performing 3 passes, we obtain the actual raw clustering on well-clustered datasets such as Covtype and SensIT. As emphasized in the introduction, we view the time needed for clustering as negligible. This is not only because 0.3 and 3 are small quantities compared to the average number of passes needed to converge (usually around 20), but also because the clustering time is usually amortized over multiple runs of the training algorithm due to different data analysis tasks, parameter tunings, etc.
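The actual raw clusterings come from the E2LSH package [2] (our settings are in Appendix D). Purely for intuition, below is a hypothetical random-projection LSH sketch in the same spirit: vectors whose quantized projections collide fall into the same raw cluster, with the bucket width w playing a role analogous to the diameter $\delta$.

import numpy as np

def lsh_raw_clustering(A, n_projections=8, w=0.5, seed=0):
    # Illustrative stand-in for E2LSH, not the pipeline used in the paper:
    # rows of A that share all quantized random projections form one cluster.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    R = rng.standard_normal((d, n_projections))
    shift = rng.uniform(0.0, w, size=n_projections)
    codes = np.floor((A @ R + shift) / w).astype(int)
    buckets, clusters = {}, np.empty(n, dtype=int)
    for i, key in enumerate(map(tuple, codes)):
        clusters[i] = buckets.setdefault(key, len(buckets))
    return clusters    # clusters[i] = raw-cluster id of vector i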
In ClusterACDM, we need to pre-compute the matrix $A H_{\mathrm{cl}}^{\top}$ using the Haar transformation. This can be implemented efficiently thanks to the sparsity of Haar matrices. In Table 2 in the appendix, we see that the time needed to do so is roughly 2 passes over the dataset. Again, this time is amortized over multiple runs of the algorithm and is therefore negligible.
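To see why this precomputation is cheap, recall that the orthonormal Haar matrix is recursively structured and has only $O(n \log n)$ nonzeros. A minimal construction for a power-of-two cluster size follows (our sketch; the normalization convention used in ClusterACDM may differ), so $A H_{\mathrm{cl}}^{\top}$ can be formed level by level instead of via a dense multiply.

import numpy as np

def haar_matrix(n):
    # Orthonormal Haar matrix of size n (n must be a power of two).
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # averaging rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # differencing rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

H = haar_matrix(8)
assert np.allclose(H @ H.T, np.eye(8))              # H is orthonormal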
5.2 Performance Comparison
We compare our algorithms with SVRG, SAGA and ACDM. We use the default epoch length $m = 2n$ and Option I for both SVRG and ClusterSVRG. We consider ridge and Lasso regressions, and denote by $\sigma$ the weight of the $\ell_2$ regularizer for ridge or the $\ell_1$ regularizer for Lasso.
Parameters. For SVRG and SAGA, we tune the best step size for each test case. To make the comparison even stronger, instead of tuning the best step size for ClusterSVRG, we simply set it to either the best of SVRG or the best of SAGA in each test case. For ACDM and ClusterACDM, the step size is computed automatically, so tuning is unnecessary.
For Lasso, because the objective is not strongly convex, one has to add a dummy $\ell_2$ regularizer to the objective in order to run ACDM or ClusterACDM. (This step is needed for every accelerated method, including AccSDCA, APCG, or SPDC.) We choose this dummy regularizer to have weight $10^{-7}$ for Covtype and SensIT, and weight $10^{-6}$ for News20.⁷

[Figure 1 plots: training loss minus optimum (log scale, y-axis) vs. #grad/n (x-axis) for ClusterACDM, ClusterSVRG, SVRG, SAGA, and ACDM (no cluster). Panels: (a) Covtype, Ridge $\sigma = 10^{-5}$; (b) Covtype, Ridge $\sigma = 10^{-6}$; (c) Covtype, Ridge $\sigma = 10^{-7}$; (d) SensIT, Ridge $\sigma = 10^{-3}$; (e) SensIT, Ridge $\sigma = 10^{-5}$; (f) SensIT, Ridge $\sigma = 10^{-6}$.]
Figure 1: Selected plots on ridge regression. For Lasso and more detailed comparisons, see the appendix.
Plot Format. In our plots, the y-axis represents the objective distance to the minimizer, and the x-axis represents the number of passes over the dataset. (The snapshot computation of SVRG and ClusterSVRG counts as one pass.) In the legend, we use the format
• "ClusterSVRG-s-δ-stepsize" for ClusterSVRG,
• "ClusterACDM-s-δ" for ClusterACDM,
• "SVRG/SAGA-stepsize" for SVRG or SAGA, and
• "ACDM (no Cluster)" for the vanilla ACDM without using any clustering info.⁸
Results. Our comprehensive experimental plots are included in the appendix; see Figures 2, 3, 4, 5, 6, and 7. Due to space limitations, here we simply compare all the algorithms on ridge regression for the datasets SensIT and Covtype, choosing only one representative clustering; see Figure 1.
Generally, ClusterSVRG outperforms SAGA/SVRG when the regularization parameter $\sigma$ is large, while ClusterACDM outperforms all other algorithms when $\sigma$ is small. This is because accelerated methods outperform non-accelerated ones for smaller values of $\sigma$, and the complexity of ClusterACDM improves over that of ACDM more when $\sigma$ is smaller (compare (1.1) with (1.2)).⁹
Our other findings can be summarized as follows. Firstly, dataset News20 does not have a nice clustering structure, but our ClusterSVRG and ClusterACDM still perform comparably well to SVRG and ACDM, respectively. Secondly, the performance of ClusterSVRG is slightly better with clusterings that have smaller diameter $\delta$; in contrast, ClusterACDM performs slightly better with larger $\delta$. This is because ClusterACDM can take advantage of very large but low-quality clusters, which is a very appealing feature in practice.
Sensitivity to Clustering. In Figure 8 in the appendix, we plot the performance curves of ClusterSVRG and ClusterACDM for SensIT and Covtype with 7 different clusterings. From the plots we claim that ClusterSVRG and ClusterACDM are very insensitive to clustering quality. As long as one does not choose the most extreme clustering, the performance improvement due to clustering can be significant. Moreover, ClusterSVRG is slightly faster if the clustering has a relatively smaller diameter $\delta$ (say, below 0.1), while ClusterACDM can be fast even for very large $\delta$ (say, around 0.6).
7 Choosing a large dummy regularizer makes the algorithm converge faster but to a worse minimum, and vice versa. In our experiments, we find these choices reasonable for our datasets. Since our main focus is to compare ClusterACDM with ACDM, the comparison is fair as long as we choose the same dummy regularizer.
8 ACDM has slightly better performance compared to APCG, so we adopt ACDM in our experiments [1]. Furthermore, our comparison is fair because ClusterACDM and ACDM are implemented in the same manner.
9 The best choice of $\sigma$ usually requires cross-validation. For instance, by performing a 10-fold cross-validation, one can figure out that the best $\sigma$ is around $10^{-6}$ for SensIT Ridge, $10^{-5}$ for SensIT Lasso, $10^{-7}$ for Covtype Ridge, and $10^{-6}$ for Covtype Lasso. Therefore, for these two datasets ClusterACDM is preferred.
References
[1] Zeyuan Allen-Zhu, Peter Richtárik, Zheng Qu, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In ICML, 2016.
[2] Alexandr Andoni. E2LSH. http://www.mit.edu/~andoni/LSH/, 2004.
[3] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In NIPS, pages 1225–1233, 2015.
[4] Léon Bottou. Stochastic gradient descent. http://leon.bottou.org/projects/sgd.
[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
[6] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In NIPS, 2014.
[7] Aaron J. Defazio, Tibério S. Caetano, and Justin Domke. Finito: A faster, permutable incremental gradient method for big data problems. In ICML, 2014.
[8] Rong-En Fan and Chih-Jen Lin. LIBSVM Data: Classification, Regression and Multi-label. Accessed: 2015-06.
[9] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization. In ICML, volume 37, pages 1–28, 2015.
[10] Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization. IEEE Trans. Pattern Anal. Mach. Intell., 36(4):744–755, 2014.
[11] Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, and Brian McWilliams. Variance reduced stochastic gradient descent with neighbors. In NIPS, pages 2296–2304, 2015.
[12] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell., 33(1):117–128, 2011.
[13] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[14] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[15] Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In FOCS, pages 147–156. IEEE, 2013.
[16] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In NIPS, 2015.
[17] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. In NIPS, pages 3059–3067, 2014.
[18] Julien Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM Journal on Optimization, 25(2):829–855, April 2015. Preliminary version appeared in ICML 2013.
[19] Yurii Nesterov. Introductory Lectures on Convex Programming Volume: A Basic Course, volume I. Kluwer Academic Publishers, 2004.
[20] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, pages 1–45, 2013. Preliminary version appeared in NIPS 2012.
[21] Shai Shalev-Shwartz and Tong Zhang. Proximal stochastic dual coordinate ascent. arXiv preprint arXiv:1211.2717, pages 1–18, 2012.
[22] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[23] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. In ICML, pages 64–72, 2014.
[24] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[25] Tianbao Yang, Rong Jin, Shenghuo Zhu, and Qihang Lin. On data preconditioning for regularized loss minimization. Machine Learning, pages 1–23, 2014.
[26] Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.
[27] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In ICML, 2015.
[28] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. In ICML, volume 37, pages 1–9, 2015.
5,974 | 6,404 | Can Peripheral Representations Improve Clutter
Metrics on Complex Scenes?
Arturo Deza
Dynamical Neuroscience
Institute for Collaborative Biotechnologies
UC Santa Barbara, CA, USA
[email protected]
Miguel P. Eckstein
Psychological and Brain Sciences
Institute for Collaborative Biotechnologies
UC Santa Barbara, CA, USA
[email protected]
Abstract
Previous studies have proposed image-based clutter measures that correlate with human search times and/or eye movements. However, most models do not take into account the fact that the effects of clutter interact with the foveated nature of the human visual system: visual clutter further from the fovea has an increasingly detrimental influence on perception. Here, we introduce a new foveated clutter model to predict the detrimental effects in target search, utilizing a forced fixation search task. We use Feature Congestion (Rosenholtz et al.) as our non-foveated clutter model, and we stack a peripheral architecture on top of Feature Congestion for our foveated model. We introduce the Peripheral Integration Feature Congestion (PIFC) coefficient as a fundamental ingredient of our model that modulates clutter as a non-linear gain contingent on eccentricity. We show that Foveated Feature Congestion (FFC) clutter scores ($r(44) = -0.82 \pm 0.04$, $p < 0.0001$) correlate better with target detection (hit rate) than regular Feature Congestion ($r(44) = -0.19 \pm 0.13$, $p = 0.0774$) in forced fixation search, and we extend foveation to other clutter models, showing stronger correlations in all cases. Thus, our model allows us to enrich clutter perception research by computing fixation-specific clutter maps. Code for building peripheral representations is available.¹
1 Introduction
What is clutter? While it seems easy to make sense of a cluttered desk vs. an uncluttered desk at a glance, it is hard to quantify clutter with a number. Is a cluttered desk one stacked with papers? Or is an uncluttered desk one that is more organized, irrespective of the number of items? An important goal in
clutter research has been to develop an image based computational model that outputs a quantitative
measure that correlates with human perceptual behavior [19, 12, 24, 21]. Previous studies have
created models that output global or regional metrics to measure clutter perception. Such measures
are aimed to predict the influence of clutter on perception. However, one important aspect of human
visual perception is that it is not space invariant: the fovea processes visual information with high
spatial detail while regions away from the central fovea have access to lower spatial detail. Thus,
the influence of clutter on perception can depend on the retinal location of the stimulus and such
influences will likely interact with the information content in the stimulus.
The goal of the current paper is to develop a foveated clutter model that can successfully predict
the interaction between retinal eccentricity and image content in modulating the influence of clutter
on perceptual behavior. We introduce a foveated mechanism based on the peripheral architecture
proposed by Freeman and Simoncelli [9], and stack it on top of a current clutter model (Feature Congestion [23, 24]) to generate a clutter map that arises from a calculation of information loss with retinal eccentricity but is multiplicatively modulated by the original unfoveated clutter score.
1 Piranhas Toolkit: https://github.com/ArturoDeza/Piranhas
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 diagram: an input image is decomposed into color, contrast, and orientation feature maps at Gaussian pyramid levels 1–3; a max operation over each feature's pyramid yields the final feature maps, which combine into the Feature Congestion map.]
Figure 1: The Feature Congestion pipeline, as explained in Rosenholtz et al. [24]. A color, contrast, and orientation feature map is extracted for each spatial pyramid level, and the max value of each is computed as the final feature map. The Feature Congestion map is then computed as a weighted sum over the feature maps. The Feature Congestion score is the mean value of the map.
The new measure is evaluated in a gaze-contingent psychophysical experiment measuring target detection in complex scenes as a function of target retinal eccentricity. We show that foveated clutter models that account for the loss of information in the periphery correlate better with human target detection (hit rate) across retinal eccentricities than non-foveated models. Although the model is presented in the context of Feature Congestion, the framework can be extended to any previous or future clutter metric whose score is computed from a global pixel-wise clutter map.
2 Previous Work
Previous studies have developed general measures of clutter computed for an entire image and do
not consider the space-variant properties of the human visual system. Because our work seeks to
model and assess the interaction between clutter and retinal location, experiments manipulating
the eccentricity of a target while observers hold fixation (gaze contingent forced fixation) are most
appropriate to evaluate the model. To our knowledge there has been no systematic evaluation of
fixation dependent clutter models with forced fixation target detection in scenes. In this section, we
will give an overview of state-of-the-art clutter models, metrics and evaluations.
2.1 Clutter Models
Feature Congestion: Feature Congestion, initially proposed by [23, 24], produces both a pixel-wise clutter score map as well as a global clutter score for any input image or region of interest (ROI). Each clutter map is computed by combining a color map in CIELab space, an orientation map [14], and a local contrast map at multiple scales through Gaussian pyramids [5]. One of the main advantages of Feature Congestion is that each pixel-wise clutter score (Fig. 1) and global score can be computed in less than a second. Furthermore, this is one of the few models that can output a specific clutter score for any pixel or ROI in an image. This will be crucial for developing a foveated model, as explained in Section 4.
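For concreteness, here is a simplified sketch of a Feature Congestion-style clutter map in Python. It keeps the structure of Fig. 1 (per-channel feature maps, a Gaussian pyramid, and a max across levels), but substitutes plain local standard deviations for the exact covariance-based color, contrast, and orientation measures of [24], so treat it as an approximation rather than the reference implementation.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def local_std(x, sigma=2.0):
    # pixelwise standard deviation of x within a Gaussian neighborhood
    mu = gaussian_filter(x, sigma)
    return np.sqrt(np.maximum(gaussian_filter(x * x, sigma) - mu * mu, 0.0))

def feature_congestion_sketch(lab, n_levels=3):
    # lab: H x W x 3 float image in CIELab space
    H, W, _ = lab.shape
    clutter = np.zeros((H, W))
    level = lab.astype(float)
    for _ in range(n_levels):
        fmap = (local_std(level[..., 1]) + local_std(level[..., 2])   # color variability
                + local_std(level[..., 0]))                           # luminance contrast
        up = zoom(fmap, (H / fmap.shape[0], W / fmap.shape[1]), order=1)
        clutter = np.maximum(clutter, up)       # max over pyramid levels, as in Fig. 1
        level = gaussian_filter(level, (1, 1, 0))[::2, ::2]           # next pyramid level
    return clutter                              # global score: clutter.mean()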
Edge Density: Edge Density computes a ratio after applying an edge detector to the input image [19]. The final clutter score is the ratio of edge pixels to the total number of pixels in the image. The intuition for this metric is straightforward: "the more edges, the more clutter" (due to more objects, for example).
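This metric is short enough to state in code; a minimal sketch, using a thresholded Sobel gradient magnitude as a stand-in for whichever edge detector [19] used:

import numpy as np
from scipy.ndimage import sobel

def edge_density(gray, threshold=0.1):
    # fraction of pixels classified as edges in a grayscale image
    gx, gy = sobel(gray, axis=1), sobel(gray, axis=0)
    magnitude = np.hypot(gx, gy)
    edges = magnitude > threshold * magnitude.max()
    return edges.mean()     # edge pixels / total pixels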
Subband Entropy: The Subband Entropy model begins by computing N steerable pyramids [25] at K orientations for each channel of the input image in CIELab color space. Once each of the N × K subbands is collected for each channel, the entropy of each oriented pyramid is computed pixelwise, and these are averaged separately. Thus, Subband Entropy measures the entropy of each spatial-frequency and oriented filter response of an image.
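As a rough sketch of the idea (ours; it replaces steerable pyramids [25] with simple Haar-style detail bands per level, so it only approximates the model):

import numpy as np

def band_entropy(band, n_bins=256):
    # Shannon entropy (bits) of a subband's value histogram
    hist, _ = np.histogram(band, bins=n_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def subband_entropy_sketch(lab, n_levels=3):
    # lab: H x W x 3 float image in CIELab space
    entropies = []
    for ch in range(3):
        x = lab[..., ch].astype(float)
        for _ in range(n_levels):
            h, w = (x.shape[0] // 2) * 2, (x.shape[1] // 2) * 2
            a, b = x[:h:2, :w:2], x[:h:2, 1:w:2]
            c, d = x[1:h:2, :w:2], x[1:h:2, 1:w:2]
            entropies += [band_entropy(a - b + c - d),   # horizontal detail
                          band_entropy(a + b - c - d),   # vertical detail
                          band_entropy(a - b - c + d)]   # diagonal detail
            x = (a + b + c + d) / 4.0                    # approximation band
    return float(np.mean(entropies))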
Scale Invariance: The scale-invariant clutter model proposed by Farid and Bravo [4] uses graph-based segmentation [8] at multiple scales k. A scale-invariant clutter representation is given by the power-law coefficient that matches the decay of the number of regions with the adjusted scale parameter.
[Figure 2 diagram: fixation cross (500–1000 ms; 1 of 4 locations) → stimulus (100, 200, 400, 900, or 1600 ms; remain fixated) → Task 1 response (unlimited time, no feedback).]
Figure 2: Experiment 1: forced fixation search flow diagram. A naive observer begins by fixating the image at a location that is either 1, 4, 9, or 15 deg away from the target (the observer is not aware of the possible eccentricities). After fixating on the image for a variable amount of time (100, 200, 400, 900, or 1600 ms), the observer must make a decision on target detection.
ProtoObject Segmentation: ProtoObject Segmentation proposes an unsupervised metric for clutter scoring [27, 28]. The model begins by converting the image into HSV color space and then segments the image through superpixel segmentation [17, 16, 1]. After segmentation, mean-shift [11] is applied to all cluster (superpixel) medians to calculate the final number of representative colors present in the image. Next, superpixels are merged with one another contingent on being adjacent and assigned to the same mean-shift HSV cluster. The final score is the ratio between the initial and final numbers of superpixels.
Crowding Model: The Crowding Model developed by van den Berg et al. [26] is the only model to have used losses in the periphery due to crowding as a clutter metric. It decomposes the image into 3 different scales in CIELab color space. It then produces 6 different orientation maps for each scale given the luminance channel; a contrast map is also obtained by difference of Gaussians on the same channel. All feature maps are then pooled with Gaussian kernels that grow linearly with eccentricity; KL-divergence is then computed between the pre- and post-pooling feature maps to obtain information-loss coefficients, which are averaged together to produce a final clutter score. We discuss the differences between this model and ours in the Discussion (Section 5).
Texture Tiling Model: The Texture Tiling Model (TTM) is a recent perceptual model that accounts for losses in the periphery [22, 13] through psychophysical experiments modelling visual search [7]: feature search, conjunction search, configuration search, and asymmetric search. In essence, the Mongrels proposed by Rosenholtz et al., which simulate peripheral losses, are very similar to the Metamers of Freeman & Simoncelli [9]. We do not include comparisons to the TTM model since it requires additional psychophysics on the Mongrel versions of the images.
2.2 Clutter Metrics
Global Clutter Score: The most basic clutter metric used in clutter research is the original clutter score that every model computes over the entire image. Edge Density and ProtoObject Segmentation output a ratio, while Subband Entropy and Feature Congestion output a score. However, Feature Congestion is the only model that outputs a dense pixelwise clutter map before computing a global score (Fig. 1). Thus, we use Feature Congestion clutter maps for our foveated clutter model.
Clutter ROI: The second most used clutter metric is ROI (region of interest)-based, as shown in the work of Asher et al. [3]. This metric is of interest when an observer is engaging in target search, as opposed to making a human judgement (e.g., "rate the clutter of the following scenes").
2.3 Clutter Evaluations
Human Clutter Judgements: Multiple studies of clutter correlate their metrics with rankings/ratings of clutter provided by human participants. Ideally, if clutter model A is better than clutter model B, then the correlation of model scores with human rankings/ratings should be higher for model A than for model B. [28, 19, 26]
Response Time: Highly cluttered images require more time for target search, and hence more time to arrive at a target present/absent decision. Under this assumption, a high correlation between response time and clutter score is a good sign for a clutter model. [24, 4, 26, 3, 12]
Target Detection (Hit Rate, False Alarms, Performance): In general, when engaging in target
search for a fixed amount of time across all trial conditions, an observer will have a lower hit rate and
higher false alarm rate for a highly cluttered image than an uncluttered image. [24, 3, 12]
3 Methods & Experiments
3.1 Experiment 1: Forced Fixation Search
A total of 13 subjects participated in a forced fixation search experiment in which the goal was to report whether a target (person) was present or absent in the subject's periphery. Participants had variable amounts of time (100, 200, 400, 900, or 1600 ms) to view each clip, presented in random order at variable eccentricities (1, 4, 9, or 15 deg) of which the subjects were not aware. They were then prompted with a target detection rating scale, on which they rated from 1 to 10, by clicking on a number, how confident they were in detecting the target. Participants had unlimited time for making their judgements, though they did not take more than 10 seconds per judgement. There was no response feedback after each trial. Trials were aborted when subjects broke fixation outside of a 1 deg radius around the fixation cross.
Each subject completed 12 sessions, each consisting of 360 unique images. Every session also presented the images from aerial viewpoints with different vantage points (for example, session 1 had the target at 12 o'clock, North, while session 2 had the target at 3 o'clock, East). To control for any fixational biases, all subjects had a unique fixation point for every trial at the same eccentricity values. All images were rendered with variable levels of clutter. Each session took about an hour to complete. The target was of size 0.5 deg × 0.5 deg, 1 deg × 1 deg, or 1.5 deg × 1.5 deg, depending on zoom level. For our analysis, we only used the low-zoom, 100 ms condition, since there were fewer ceiling effects across eccentricities.
Stimuli Creation: A total of 273 videos were created, each with a total duration of 120 seconds, in which a "bird's eye" point-of-view camera rotated slowly around the center. While the video was in rotating motion, there was no relative motion between any parts of the scene. From the original videos, a total of 360 × 4 different clips were created. Half of the clips were target present, while the other half were target absent. These short, slowly rotating clips were used instead of still images to simulate slow real movement from a pilot's point of view. All clips were shown to participants in random order.
Apparatus: An EyeLink 1000 system (SR Research) was used to collect eye-tracking data at a frequency of 1000 Hz. Each participant sat at a distance of 76 cm from a gamma-calibrated LCD display, so that each pixel subtended a visual angle of 0.022 deg/px. All video clips were rendered at 1024 × 760 pixels (22.5 deg × 16.7 deg) and a frame rate of 24 fps. Eye movements with velocity over 22 deg/s and acceleration over 4000 deg/s² qualified as saccades. Every trial began with a fixation cross, which each subject had to fixate with a tolerance of 1 deg.
4 Foveated Feature Congestion
A regular Feature Congestion clutter score is computed by taking the mean of the Feature Congestion map of the image or of a target ROI [12]. We propose a Foveated Feature Congestion (FFC) model that outputs a score taking into account two main terms: 1) a regular Feature Congestion (FC) score, and 2) a Peripheral Integration Feature Congestion (PIFC) coefficient that accounts for the lower spatial resolution of the visual periphery, which is detrimental for target detection. The first term is independent of fixation, while the second acts as a non-linear gain that either reduces or amplifies the clutter score depending on fixation distance from the target.
In this section we explain how to compute a PIFC, which requires creating a human-like peripheral architecture, as explained in Section 4.1. We then present our Foveated Feature Congestion (FFC) clutter model in Section 4.2. Finally, we conclude with a quantitative evaluation of the FFC (Section 4.3) and its ability to predict variations of target detectability across images and retinal eccentricities of the target.
[Figure 3 plots: (a) top, the $g_n(e)$ windows plotted against eccentricity in degrees away from the fovea (0–24 deg); bottom, the $h_n(\theta)$ windows plotted against polar angle referenced from the fovea (0 to $2\pi$); (b) the resulting peripheral architecture over a $\pm 24$ deg visual field.]
Figure 3: Construction of a peripheral architecture à la Freeman & Simoncelli [9] using the functions described in Section 4.1, shown in Fig. 3(a). The blue region in the center of Fig. 3(b) represents the fovea, where all information is preserved. Outer regions (in red) represent different parts of the periphery at multiple eccentricities.
4.1 Creating a Peripheral Architecture
We used the Piranhas Toolkit to create a Freeman and Simoncelli [9] peripheral architecture. This biologically inspired model has been tested and used to model V1 and V2 responses in human and non-human primates with high precision for a variety of tasks [20, 10, 18, 2]. It is described by a set of pooling (linear) regions that increase in size with retinal eccentricity. Each pooling region is separable with respect to polar angle $h_n(\theta)$ and log eccentricity $g_n(e)$, as described in Eq. (2) and Eq. (3), respectively. These functions are multiplied for every angle and eccentricity $(\theta, e)$ and are plotted in log-polar coordinates to create the peripheral architecture, as seen in Fig. 3.
$$
f(x) = \begin{cases}
\cos^2\!\left(\dfrac{\pi}{2}\left(\dfrac{x-(t_0-1)/2}{t_0}\right)\right), & -(1+t_0)/2 < x \le (t_0-1)/2\\[4pt]
1, & (t_0-1)/2 < x \le (1-t_0)/2\\[4pt]
-\cos^2\!\left(\dfrac{\pi}{2}\left(\dfrac{x-(1+t_0)/2}{t_0}\right)\right)+1, & (1-t_0)/2 < x \le (1+t_0)/2
\end{cases} \tag{1}
$$

$$
h_n(\theta) = f\!\left(\frac{\theta - \big(w_\theta n + \tfrac{w_\theta(1-t_0)}{2}\big)}{w_\theta}\right);\quad w_\theta = \frac{2\pi}{N_\theta};\quad n = 0,\dots,N_\theta-1 \tag{2}
$$

$$
g_n(e) = f\!\left(\frac{\log(e) - \big[\log(e_0) + w_e(n+1)\big]}{w_e}\right);\quad w_e = \frac{\log(e_r)-\log(e_0)}{N_e};\quad n = 0,\dots,N_e-1 \tag{3}
$$
The parameters we used match a V1 architecture with a scale of $s = 0.25$, a visual radius of $e_r = 24$ deg, and a fovea of 2 deg, with $e_0 = 0.25$ deg,² and $t_0 = 1/2$. The scale defines the number of eccentricities $N_e$, as well as the number of polar pooling regions $N_\theta$ over $[0, 2\pi]$.
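Eqs. (1)–(3) translate directly into code; the following NumPy version mirrors the construction provided in the Piranhas Toolkit¹ (variable names are ours):

import numpy as np

def f_window(x, t0=0.5):
    # smooth windowing function of Eq. (1)
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    rise = (-(1 + t0) / 2 < x) & (x <= (t0 - 1) / 2)
    flat = ((t0 - 1) / 2 < x) & (x <= (1 - t0) / 2)
    fall = ((1 - t0) / 2 < x) & (x <= (1 + t0) / 2)
    y[rise] = np.cos(np.pi / 2 * (x[rise] - (t0 - 1) / 2) / t0) ** 2
    y[flat] = 1.0
    y[fall] = 1.0 - np.cos(np.pi / 2 * (x[fall] - (1 + t0) / 2) / t0) ** 2
    return y

def h_n(theta, n, N_theta, t0=0.5):
    # polar-angle window, Eq. (2)
    w_theta = 2 * np.pi / N_theta
    return f_window((theta - (w_theta * n + w_theta * (1 - t0) / 2)) / w_theta, t0)

def g_n(e, n, N_e, e0=0.25, er=24.0, t0=0.5):
    # log-eccentricity window, Eq. (3); e in degrees
    w_e = (np.log(er) - np.log(e0)) / N_e
    return f_window((np.log(e) - (np.log(e0) + w_e * (n + 1))) / w_e, t0)

# The pooling weight of region (n_theta, n_e) at image location (theta, e)
# is the product h_n(theta, n_theta, N_theta) * g_n(e, n_e, N_e).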
Although observers saw the original stimuli at 0.022 deg/pixel with image size 1024 × 760, for modelling purposes we rescaled all images to half their size so that the peripheral architecture could fit all images under any fixation point. To preserve stimulus size in degrees after rescaling, our foveal model used an input value of 0.044 deg/pixel (twice the experimental value). Resizing the image to half its size also allows the peripheral architecture to consume less CPU time and memory.
4.2 Creating a Foveated Feature Congestion Model
Intuitively, a foveated clutter model that takes target search into account should score very low (near zero) when the target is in the fovea, and very high when the target is in the periphery. Thus, an observer should find a target without difficulty, achieving a near-perfect hit rate in the fovea, yet should have a lower hit rate in the periphery given crowding effects.
2 We remove regions with a radius smaller than the foveal radius, since there is no pooling in the fovea.
[Figure 4 diagram: input image → feature maps → Feature Congestion map and score (top flow); peripheral pooling → Foveated Feature Congestion map and score, difference map → PIFC map and PIFC coefficient (bottom flow).]
Figure 4: Foveated Feature Congestion flow diagram. In this example, the point of fixation is 15 deg away from the target (bottom right corner of the input image). A Feature Congestion map of the image (top flow) and a Foveated Feature Congestion map (bottom flow) are created. The PIFC coefficient is computed around an ROI centered at the target (bottom flow; zoomed box). The Feature Congestion score is then multiplied by the PIFC coefficient, and the Foveated Feature Congestion score is returned. Sample PIFCs across eccentricities can be seen in the Supplementary Material.
Note that in the periphery, not only should it be harder to detect a target, but the target is also more likely to be confused with another object or region affine in shape, size, texture and/or pixel value (false alarms). Under this assumption, we wish to modulate a clutter score (Feature Congestion) by a multiplicative factor, given the target and fixation locations. We call this multiplicative term the PIFC coefficient, which is defined over a 6 deg × 6 deg ROI around the location of the target t. The target itself was removed when processing the clutter maps, since it indirectly contributes to the ROI clutter score [3]. The PIFC aims at quantifying the information loss around the target region due to peripheral processing.
To compute the PIFC, we use the before-mentioned ROI and calculate a mean difference between the foveated clutter map and the original non-foveated clutter map. If the target is foveated, there should be little to no difference between the foveated map and the original map, setting the PIFC coefficient to near zero. However, as the target moves farther from the fovea, the PIFC coefficient should grow, given pooling effects in the periphery. To create a foveated map, we use Feature Congestion and apply max pooling to each pooling region after the peripheral architecture has been stacked on top of the Feature Congestion map. Note that the FFC map values will depend on the fixation location, as shown in Fig. 4. The PIFC map is the result of subtracting the foveated map from the unfoveated map in the ROI, and the coefficient is a mean distance between these two maps (we use the L1-norm, L2-norm, or KL-divergence). Computational details can be seen in Algorithm 1.
Thus, we can summarize our model in Eq. (4):

$$\mathrm{FFC}^{f,t}_I = \mathrm{FC}_I \times \mathrm{PIFC}^{f}_{\mathrm{ROI}(t)}, \tag{4}$$

where $\mathrm{FC}_I$ is the Feature Congestion score [24] of image $I$, computed as the mean of the Feature Congestion map $R_{FC}$, and $\mathrm{FFC}^{f,t}_I$ is the Foveated Feature Congestion score of image $I$, depending on the point of fixation $f$ and the location of the target $t$.
4.3 Foveated Feature Congestion Evaluation
A visualization of each image and its respective hit rate vs. clutter score across both the foveated and unfoveated models can be seen in Fig. 5. Qualitatively, it shows the importance of a PIFC weighting term on the total image clutter score when performing our forced fixation search experiment. Furthermore, a quantitative bootstrap correlation analysis comparing classic metrics (Image, Target, ROI) against foveal metrics (FFC1, FFC2 and FFC3) shows that hit rate vs. clutter score correlations are stronger for the foveated models with a PIFC: Image: ($r(44) = -0.19 \pm 0.13$, $p = 0.0774$), Target: ($r(44) = -0.03 \pm 0.14$, $p = 0.4204$), ROI: ($r(44) = -0.25 \pm 0.14$, $p = 0.0392$), FFC1 (L1-norm): ($r(44) = -0.82 \pm 0.04$, $p < 0.0001$), FFC2 (L2-norm): ($r(44) = -0.79 \pm 0.06$, $p < 0.0001$), FFC3 (KL-divergence): ($r(44) = -0.82 \pm 0.04$, $p < 0.0001$).
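The bootstrap can be summarized by a short sketch (our illustration): resample the image-by-eccentricity conditions with replacement and recompute the Pearson correlation; with 46 conditions, the degrees of freedom match the reported r(44).

import numpy as np

def bootstrap_correlation(clutter_scores, hit_rates, n_boot=10000, seed=0):
    # Pearson r between clutter score and hit rate, with a bootstrap
    # standard deviation over resampled conditions
    rng = np.random.default_rng(seed)
    x = np.asarray(clutter_scores)
    y = np.asarray(hit_rates)
    n = len(x)
    rs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample with replacement
        rs[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return rs.mean(), rs.std()                # e.g. the paper reports (-0.82, 0.04) for FFC1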
Algorithm 1 Computation of the Peripheral Integration Feature Congestion (PIFC) coefficient
1: procedure ComputePIFC of ROI of image I on fixation f
2:   Create a peripheral architecture $A : (N_\theta, N_e)$
3:   Offset image $I$ in the peripheral architecture by fixation $f : (f_x, f_y)$
4:   Compute the regular Feature Congestion map $R_{FC}$ of image $I$
5:   Set the Peripheral Feature Congestion map $P^f_{FC} \in \mathbb{R}^2_+$ to zero
6:   Copy the Feature Congestion values in the fovea $r_0$: $P^f_{FC}(r_0) = R_{FC}(r_0)$
7:   for each pooling region $r_i$ overlapping $I$, s.t. $1 \le i \le N_\theta \cdot N_e$ do
8:     Get the regular Feature Congestion (FC) values in $r_i$
9:     Set the peripheral FC value to the max regular FC value: $P^f_{FC}(r_i) = \max(R_{FC}(r_i))$
10:  end for
11:  Crop the PIFC map to the ROI: $pf_{FC} = P^f_{FC}(\mathrm{ROI})$
12:  Crop the FC map to the ROI: $r_{FC} = R_{FC}(\mathrm{ROI})$
13:  Choose a distance metric $D$ between the $r_{FC}$ and $pf_{FC}$ maps
14:  Compute Coefficient $= \mathrm{mean}(D(r_{FC}, pf_{FC}))$
15:  return Coefficient
16: end procedure
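Algorithm 1 and Eq. (4) condense to a few lines of NumPy once the pooling regions of the peripheral architecture are available as boolean masks (a sketch of ours, using the L1 distance for D):

import numpy as np

def compute_pifc(fc_map, region_masks, roi):
    # fc_map: H x W Feature Congestion map, already offset to fixation f
    # region_masks: list of boolean H x W masks, one per pooling region
    # roi: boolean H x W mask of the 6 x 6 deg window around the target
    peripheral = fc_map.copy()                    # fovea keeps its original values
    for mask in region_masks:
        if mask.any():
            peripheral[mask] = fc_map[mask].max() # max-pool each region
    return np.abs(fc_map[roi] - peripheral[roi]).mean()   # L1 distance D

def foveated_feature_congestion(fc_map, region_masks, roi):
    return fc_map.mean() * compute_pifc(fc_map, region_masks, roi)   # Eq. (4)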
[Figure 5 scatter plots of hit rate (y-axis) vs. clutter score (x-axis), with image IDs and markers for eccentricities of 1, 4, 9, and 15 deg: (a) Feature Congestion (scores roughly 2.0–3.25); (b) Foveated Feature Congestion (scores roughly 0–15).]
Figure 5: Fig. 5(a) shows the current limitations of global clutter metrics when engaging in forced fixation search: the same image under different eccentricities has the same clutter score, yet possesses a different hit rate. Our proposed foveated model (Fig. 5(b)) compensates for this difference through the PIFC coefficient, modulating each clutter score depending on fixation distance from the target.
Notice that there is no difference in the correlation with hit rate between using the L1-norm, L2-norm, or KL-divergence distance for each model. Table ?? (Supp. Mat.) also shows the highest correlation with a 6 × 6 deg ROI window across all metrics. Note that the same analysis cannot be applied to false alarms, since a false alarm at 1 deg is indistinguishable from one at 15 deg (the target is not present, so there is no real eccentricity away from fixation). However, as mentioned in the Methods section, fixation locations for target-absent trials were placed assuming a location from the matching target-present image. It is important that target-present and target-absent fixations have the same distributions for each eccentricity.
5 Discussion
In general, images that have low Feature Congestion have less gain in PIFC coefficients as eccentricity increases, while images with high clutter have higher gain in PIFC coefficients. Consequently, the difference in FFC between different images increases nonlinearly with eccentricity, as observed in Fig. 6. This is our main contribution: these differences in clutter score as a function of eccentricity do not exist for regular Feature Congestion, and it is these score differences that can correlate with human performance in target detection.
[Figure 6 plots with image IDs, for eccentricities of 1, 4, 9, and 15 deg: (a) Feature Congestion vs. eccentricity; (b) PIFC coefficient (L1-norm) vs. eccentricity; (c) Foveated Feature Congestion vs. eccentricity.]
Figure 6: Feature Congestion (FC) vs. Foveated Feature Congestion (FFC). In Fig. 6(a) we see that clutter stays constant across different eccentricities for a forced fixation task. Our FFC model (Fig. 6(c)) enriches the FC model by showing how clutter increases as a function of eccentricity, through the PIFC in Fig. 6(b).
[Figure 7 image grid: dense and foveated representations (rows) for Feature Congestion, Edge Density, Subband Entropy, and ProtoObject Segmentation (columns).]
Figure 7: Dense and foveated representations of multiple models, assuming a center point of fixation.
Our model also differs from the van den Berg et al. [26] model in several respects: our peripheral architecture uses a biologically inspired arrangement of log-polar regions that provide anisotropic pooling [15], rather than isotropic Gaussian pooling as a linear function of eccentricity [26, 6]; and we use region-based max pooling on each final feature map instead of pixel-based mean pooling (Gaussians) at each scale, which allows for stronger differences. This last difference also makes our model computationally more efficient, running at 700 ms per image versus 180 s per image for the Crowding model (a roughly 250× speed-up). A home-brewed Crowding Model applied to our forced fixation experiment resulted in a correlation of ($r(44) = -0.23 \pm 0.13$, $p = 0.0469$), equivalent to using a non-foveated metric such as regular Feature Congestion ($r(44) = -0.19 \pm 0.13$, $p = 0.0774$).
We finally extended our model to create foveated (FoV) versions of Edge Density (ED) [19], Subband Entropy (SE) [25, 24], and ProtoObject Segmentation (PS) [28], showing that the correlations for all foveated versions are stronger than the non-foveated versions on the same task: $r_{ED} = -0.21$ vs. $r_{ED+FoV} = -0.76$; $r_{SE} = -0.19$ vs. $r_{SE+FoV} = -0.77$; $r_{PS} = -0.30$ vs. $r_{PS+FoV} = -0.74$. Note that the highest foveated correlation is for FC: $r_{FC+FoV} = -0.82$ (despite $r_{FC} = -0.19$), under an L1-norm loss for the PIFC. Feature Congestion has a dense representation, is more bio-inspired than the other models, and performs best in the periphery. See Figure 7. An overview of creating dense and foveated versions of the previously mentioned models can be found in the Supplementary Material.
6 Conclusion
In this paper we have introduced a peripheral architecture that exposes the detrimental effects of different eccentricities on target detection, which helps us model clutter for forced fixation experiments. We introduced a forced fixation experimental design for clutter research; we defined a biologically inspired peripheral architecture that pools features in V1; and we stacked this peripheral architecture on top of a Feature Congestion map to create a Foveated Feature Congestion (FFC) model, extending the same pipeline to other clutter models. We showed that the FFC model better explains the loss in target detection performance as a function of eccentricity through the introduction of the Peripheral Integration Feature Congestion coefficient, which varies non-linearly.
Acknowledgements
We would like to thank Miguel Lago and Aditya Jonnalagadda for useful proof-reads and revisions, as well as Mordechai Juni, N.C. Puneeth, and Emre Akbas for useful suggestions. This work has been sponsored by the U.S. Army Research Office and the Regents of the University of California, through Contract Number W911NF-09-0001 for the Institute for Collaborative Biotechnologies. The content of the information does not necessarily reflect the position or the policy of the Government or the Regents of the University of California, and no official endorsement should be inferred.
References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels. Technical report, 2010.
[2] E. Akbas and M. P. Eckstein. Object detection through exploration with a foveated visual field. arXiv preprint arXiv:1408.0814, 2014.
[3] M. F. Asher, D. J. Tolhurst, T. Troscianko, and I. D. Gilchrist. Regional effects of clutter on human target detection performance. Journal of Vision, 13(5):25–25, 2013.
[4] M. J. Bravo and H. Farid. A scale invariant measure of clutter. Journal of Vision, 8(1):23–23, 2008.
[5] P. J. Burt and E. H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532–540, 1983.
[6] R. Dubey, C. S. Soon, and P.-J. B. Hsieh. A blurring based model of peripheral vision predicts visual search performances. Journal of Vision, 14(10):935–935, 2014.
[7] M. P. Eckstein. Visual search: A retrospective. Journal of Vision, 11(5):14–14, 2011.
[8] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. International Journal of Computer Vision, 59(2):167–181, 2004.
[9] J. Freeman and E. P. Simoncelli. Metamers of the ventral stream. Nature Neuroscience, 14(9):1195–1201, 2011.
[10] J. Freeman, C. M. Ziemba, D. J. Heeger, E. P. Simoncelli, and J. A. Movshon. A functional and perceptual signature of the second visual area in primates. Nature Neuroscience, 16(7):974–981, 2013.
[11] K. Fukunaga and L. D. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. IEEE Transactions on Information Theory, 21(1):32–40, 1975.
[12] J. M. Henderson, M. Chanceaux, and T. J. Smith. The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements. Journal of Vision, 9(1):32–32, 2009.
[13] S. Keshvari and R. Rosenholtz. Pooling of continuous features provides a unifying account of crowding. Journal of Vision, 16(39), 2016.
[14] M. S. Landy and J. R. Bergen. Texture segregation and orientation gradient. Vision Research, 31(4):679–691, 1991.
[15] D. M. Levi. Visual crowding. Current Biology, 21(18):R678–R679, 2011.
[16] A. Levinshtein, A. Stere, K. N. Kutulakos, D. J. Fleet, S. J. Dickinson, and K. Siddiqi. TurboPixels: Fast superpixels using geometric flows. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12):2290–2297, 2009.
[17] M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa. Entropy rate superpixel segmentation. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 2097–2104. IEEE, 2011.
[18] J. A. Movshon and E. P. Simoncelli. Representation of naturalistic image structure in the primate visual cortex. In Cold Spring Harbor Symposia on Quantitative Biology, volume 79, pages 115–122. Cold Spring Harbor Laboratory Press, 2014.
[19] A. Oliva, M. L. Mack, M. Shrestha, and A. Peeper. Identifying the perceptual dimensions of visual complexity of scenes.
[20] J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000.
[21] R. Pramod and S. Arun. Do computational models differ systematically from human object perception? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1601–1609, 2016.
[22] R. Rosenholtz, J. Huang, A. Raj, B. J. Balas, and L. Ilie. A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12(4):14–14, 2012.
[23] R. Rosenholtz, Y. Li, J. Mansfield, and Z. Jin. Feature congestion: a measure of display clutter. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 761–770. ACM, 2005.
[24] R. Rosenholtz, Y. Li, and L. Nakano. Measuring visual clutter. Journal of Vision, 7(2):17–17, 2007.
[25] E. P. Simoncelli and W. T. Freeman. The steerable pyramid: A flexible architecture for multi-scale derivative computation. In ICIP, page 3444. IEEE, 1995.
[26] R. van den Berg, F. W. Cornelissen, and J. B. Roerdink. A crowding model of visual clutter. Journal of Vision, 9(4):24–24, 2009.
[27] C.-P. Yu, W.-Y. Hua, D. Samaras, and G. Zelinsky. Modeling clutter perception using parametric proto-object partitioning. In Advances in Neural Information Processing Systems, pages 118–126, 2013.
[28] C.-P. Yu, D. Samaras, and G. J. Zelinsky. Modeling visual clutter perception using proto-object segmentation. Journal of Vision, 14(7):4–4, 2014.
GAP Safe Screening Rules for Sparse-Group Lasso
Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon
LTCI, CNRS, Télécom ParisTech
Université Paris-Saclay
75013 Paris, France
[email protected]
Abstract
For statistical learning in high dimension, sparse regularizations have proven useful
to boost both computational and statistical efficiency. In some contexts, it is natural
to handle more refined structures than pure sparsity, such as for instance group
sparsity. Sparse-Group Lasso has recently been introduced in the context of linear
regression to enforce sparsity both at the feature and at the group level. We propose
the first (provably) safe screening rules for Sparse-Group Lasso, i.e., rules that allow
to discard early in the solver features/groups that are inactive at optimal solution.
Thanks to efficient dual gap computations relying on the geometric properties of the $\epsilon$-norm, safe screening rules for Sparse-Group Lasso lead to significant gains in terms of computing time for our coordinate descent implementation.
1 Introduction
Sparsity is a critical property for the success of regression methods, especially in high dimension.
Often, group (or block) sparsity is helpful when a known group structure needs to be enforced. This
is for instance the case in multi-task learning [1] or multinomial logistic regression [5, Chapter 3]. In
the multi-task setting, the group structure appears natural since one aims at jointly recovering signals
whose supports are shared. In this context, sparsity and group sparsity are generally obtained by
adding a regularization term to the data-fitting: `1 norm for sparsity and `1,2 norm for group sparsity.
Recent works on hierarchical regularization [12, 17] have focused on a specific case:
the Sparse-Group Lasso. This method is the solution of a (convex) optimization program with a
regularization term that is a convex combination of the two aforementioned norms, enforcing sparsity
and group sparsity at the same time.
With such advanced regularizations, the computational burden can be particularly heavy in high
dimension. Yet, it can be significantly reduced if one can exploit the known sparsity of the solution
in the optimization. Following the seminal paper on "safe screening rules" [9], many contributions
have investigated such strategies [21, 20, 3]. These so called safe screening rules compute some
tests on dual feasible points to eliminate primal variables whose coefficients are guaranteed to be
zero in the exact solution. Still, the computation of a dual feasible point can be challenging when
the regularization is more complex than `1 or `1,2 norms. This is the case for the Sparse-Group
Lasso as it is not straightforward to characterize if a dual point is feasible or not [20]. Here, we
propose an efficient computation of the associated dual norm. It is all the more crucial since the naive
implementation computes the Sparse-Group Lasso dual norm with a quadratic complexity w.r.t. the group dimensions.
We propose here efficient safe screening rules for the Sparse-Group Lasso that combine sequential
rules (i.e., rules that perform screening thanks to solutions obtained for a previously processed tuning
parameter) and dynamic rules (i.e., rules that perform screening as the algorithm proceeds) in a
unified way. We elaborate on GAP safe rules, a strategy relying on dual gap computations introduced
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
for the Lasso [10] and to more general learning tasks in [15]. Note that alternative (unsafe) screening
rules, for instance the "strong rules" [19], have been applied to the Lasso and its simple variants.
Our contributions are twofold here. First, we introduce the first safe screening rules for this problem; other alleged safe rules [20] for the Sparse-Group Lasso were in fact not safe, as explained in detail in [15], and could lead to non-convergent implementations. Second, we link the Sparse-Group Lasso penalties to the $\epsilon$-norm in [6]. This allows us to provide a new algorithm to efficiently compute the required dual norms, adapting an algorithm introduced in [7]. We incorporate our proposed GAP Safe
rules in a block coordinate descent algorithm and show its practical efficiency in climate prediction
tasks. Another strategy leveraging dual gap computations and active sets has recently been proposed
under the name Blitz [13]. It could naturally benefit from our fast dual norm evaluations in this
context.
Notation For any integer $d \in \mathbb{N}$, we denote by $[d]$ the set $\{1, \dots, d\}$. The standard Euclidean norm is written $\|\cdot\|$, the $\ell_1$ norm $\|\cdot\|_1$, the $\ell_\infty$ norm $\|\cdot\|_\infty$, and the transpose of a matrix $Q$ is denoted by $Q^\top$. We also denote $(t)_+ = \max(0, t)$. Our observation vector is $y \in \mathbb{R}^n$ and the design matrix $X = [X_1, \dots, X_p] \in \mathbb{R}^{n \times p}$ has $p$ features, stored column-wise. We consider problems where the vector of parameters $\beta = (\beta_1, \dots, \beta_p)^\top$ admits a natural group structure. A group of features is a subset $g \subset [p]$ and $n_g$ is its cardinality. The set of groups is denoted by $\mathcal{G}$ and we focus only on non-overlapping groups that form a partition of $[p]$. We denote by $\beta_g$ the vector in $\mathbb{R}^{n_g}$ which is the restriction of $\beta$ to the indexes in $g$. We write $[\beta_g]_j$ the $j$-th coordinate of $\beta_g$. We also use the notation $X_g \in \mathbb{R}^{n \times n_g}$ for the sub-matrix of $X$ assembled from the columns with indexes $j \in g$; similarly $[X_g]_j$ is the $j$-th column of $X_g$.
For any norm $\Omega$, $\mathcal{B}_\Omega$ refers to the corresponding unit ball, and $\mathcal{B}$ (resp. $\mathcal{B}_\infty$) stands for the Euclidean (resp. $\ell_\infty$) unit ball. The soft-thresholding operator (at level $\tau \geq 0$), $\mathcal{S}_\tau$, is defined for any $x \in \mathbb{R}^d$ by $[\mathcal{S}_\tau(x)]_j = \mathrm{sign}(x_j)(|x_j| - \tau)_+$, while the group soft-thresholding (at level $\tau$) is $\mathcal{S}^{\mathrm{gp}}_\tau(x) = (1 - \tau/\|x\|)_+ \, x$. Denoting $\Pi_C$ the projection on a closed convex set $C$, this yields $\mathcal{S}_\tau = \mathrm{Id} - \Pi_{\tau \mathcal{B}_\infty}$.
The sub-differential of a convex function $f : \mathbb{R}^d \to \mathbb{R}$ at $x$ is defined by $\partial f(x) = \{z \in \mathbb{R}^d : \forall y \in \mathbb{R}^d, \; f(x) - f(y) \leq z^\top (x - y)\}$. We recall that the sub-differential $\partial\|\cdot\|_1$ of the $\ell_1$ norm is $\mathrm{sign}(\cdot)$, defined element-wise for all $j \in [d]$ by $\mathrm{sign}(x)_j = \{\mathrm{sign}(x_j)\}$ if $x_j \neq 0$, and $[-1, 1]$ if $x_j = 0$. Note that the sub-differential $\partial\|\cdot\|$ of the Euclidean norm is $\partial\|\cdot\|(x) = \{x/\|x\|\}$ if $x \neq 0$, and $\mathcal{B}$ if $x = 0$.
For any norm $\Omega$ on $\mathbb{R}^d$, $\Omega^D$ is the dual norm of $\Omega$, defined for any $x \in \mathbb{R}^d$ by $\Omega^D(x) = \max_{v \in \mathcal{B}_\Omega} v^\top x$; e.g., $\|\cdot\|_1^D = \|\cdot\|_\infty$ and $\|\cdot\|^D = \|\cdot\|$. We only focus on the Sparse-Group Lasso norm, so we assume that $\Omega = \Omega_{\tau, w}$, where $\Omega_{\tau,w}(\beta) := \tau \|\beta\|_1 + (1 - \tau) \sum_{g \in \mathcal{G}} w_g \|\beta_g\|$, for $\tau \in [0, 1]$ and $w = (w_g)_{g \in \mathcal{G}}$ with $w_g \geq 0$ for all $g \in \mathcal{G}$. The case where $w_g = 0$ for some $g \in \mathcal{G}$ together with $\tau = 0$ is excluded ($\Omega_{\tau,w}$ is not a norm in such a case).
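As a concrete illustration of these operators (our own sketch in NumPy, not part of the paper; the function names are ours), the soft-thresholding, group soft-thresholding and the norm $\Omega_{\tau,w}$ read:

```python
import numpy as np

def soft_threshold(x, tau):
    # Coordinate-wise soft-thresholding: sign(x_j) * (|x_j| - tau)_+
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def group_soft_threshold(x, tau):
    # Group soft-thresholding: (1 - tau / ||x||)_+ * x (zero when ||x|| <= tau)
    norm_x = np.linalg.norm(x)
    if norm_x <= tau:
        return np.zeros_like(x)
    return (1.0 - tau / norm_x) * x

def sgl_norm(beta, groups, tau, w):
    # Omega_{tau,w}(beta) = tau * ||beta||_1 + (1 - tau) * sum_g w_g * ||beta_g||
    group_part = sum(w_g * np.linalg.norm(beta[g]) for g, w_g in zip(groups, w))
    return tau * np.abs(beta).sum() + (1.0 - tau) * group_part
```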
2 Sparse-Group Lasso regression
For $\lambda > 0$ and $\tau \in [0, 1]$, the Sparse-Group Lasso estimator denoted by $\hat\beta^{(\lambda,\tau)}$ is defined as a minimizer of the primal objective $P_{\lambda,\tau}$:
$$\hat\beta^{(\lambda,\tau)} \in \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2}\|y - X\beta\|^2 + \lambda \Omega(\beta) =: P_{\lambda,\tau}(\beta). \quad (1)$$
A dual formulation (see [4, Th. 3.3.5]) of (1) is given by
$$\hat\theta^{(\lambda,\tau)} = \arg\max_{\theta \in \Delta_{X,\tau}} \frac{1}{2}\|y\|^2 - \frac{\lambda^2}{2}\Big\|\theta - \frac{y}{\lambda}\Big\|^2 =: D_\lambda(\theta), \quad (2)$$
where $\Delta_{X,\tau} = \{\theta \in \mathbb{R}^n : \Omega^D(X^\top \theta) \leq 1\}$. The parameter $\lambda \geq 0$ controls the trade-off between data-fitting and sparsity, and $\tau$ controls the trade-off between feature sparsity and group sparsity. In particular one recovers the Lasso [18] if $\tau = 1$, and the Group-Lasso [22] if $\tau = 0$.
For the primal problem, Fermat's rule (cf. Appendix for details) reads:
$$\lambda \hat\theta^{(\lambda,\tau)} = y - X\hat\beta^{(\lambda,\tau)} \quad \text{(link-equation)}, \quad (3)$$
$$X^\top \hat\theta^{(\lambda,\tau)} \in \partial\Omega(\hat\beta^{(\lambda,\tau)}) \quad \text{(sub-differential inclusion)}. \quad (4)$$
Remark 1 (Dual uniqueness). The dual solution $\hat\theta^{(\lambda,\tau)}$ is unique, while the primal solution $\hat\beta^{(\lambda,\tau)}$ might not be. Indeed, the dual formulation (2) is equivalent to $\hat\theta^{(\lambda,\tau)} = \arg\min_{\theta \in \Delta_{X,\tau}} \|\theta - y/\lambda\|$, so $\hat\theta^{(\lambda,\tau)} = \Pi_{\Delta_{X,\tau}}(y/\lambda)$ is the projection of $y/\lambda$ over the dual feasible set $\Delta_{X,\tau}$.
Remark 2 (Critical parameter: $\lambda_{\max}$). There is a critical value $\lambda_{\max}$ such that 0 is a primal solution of (1) for all $\lambda \geq \lambda_{\max}$. Indeed, Fermat's rule states $0 \in \arg\min_{\beta \in \mathbb{R}^p} \|y - X\beta\|^2/2 + \lambda\Omega(\beta) \iff 0 \in \{-X^\top y\} + \lambda \partial\Omega(0) \iff \Omega^D(X^\top y) \leq \lambda$. Hence, the critical parameter is given by $\lambda_{\max} := \Omega^D(X^\top y)$. Note that evaluating $\lambda_{\max}$ highly relies on the ability to (efficiently) compute the dual norm $\Omega^D$.
3 GAP safe rule for the Sparse-Group Lasso
The safe rule we propose here is an extension to the Sparse-Group Lasso of the GAP safe rules
introduced for Lasso and Group-Lasso [10, 15]. For the Sparse-Group Lasso, the geometry of the
dual feasible set $\Delta_{X,\tau}$ is more complex (an illustration is given in Fig. 1). Hence, computing a dual feasible point is more intricate. As seen in Section 3.2, the computation of a dual feasible point strongly relies on the ability to evaluate the dual norm $\Omega^D$. This crucial evaluation is discussed in
Section 4. We first detail how GAP safe screening rules can be obtained for the Sparse-Group Lasso.
3.1 Description of the screening rules
Safe screening rules exploit the known sparsity of the solutions of problems such as (1). They discard
inactive features/groups whose coefficients are guaranteed to be zero for optimal solutions. Then, a
significant reduction in computing time can be obtained by ignoring "irrelevant" features/groups. The Sparse-Group Lasso benefits from two levels of screening: the safe rules can detect both group-wise zeros in the vector $\hat\beta^{(\lambda,\tau)}$ and coordinate-wise zeros in the remaining groups.
To obtain useful screening rules one needs a safe region, i.e., a set containing the optimal dual solution $\hat\theta^{(\lambda,\tau)}$. Following [9], when we choose a ball $B(\theta_c, r)$ with radius $r$ and centered at $\theta_c$ as a safe region, we call it a safe sphere. A safe sphere is all the more useful that $r$ is small and $\theta_c$ close to $\hat\theta^{(\lambda,\tau)}$. The safe rules for the Sparse-Group Lasso work as follows: for any group $g$ in $\mathcal{G}$ and any safe sphere $B(\theta_c, r)$:
Group level safe screening rule:
$$\max_{\theta \in B(\theta_c, r)} \|\mathcal{S}_\tau(X_g^\top \theta)\| < (1 - \tau) w_g \implies \hat\beta_g^{(\lambda,\tau)} = 0, \quad (5)$$
Feature level safe screening rule:
$$\forall j \in g, \quad \max_{\theta \in B(\theta_c, r)} |X_j^\top \theta| < \tau \implies \hat\beta_j^{(\lambda,\tau)} = 0. \quad (6)$$
This means that provided one of the last two tests is true, the corresponding group or feature can be (safely) discarded. For screening variables, we rely on the following upper bounds:
Proposition 1. For all groups $g \in \mathcal{G}$ and $j \in g$,
$$\max_{\theta \in B(\theta_c, r)} |X_j^\top \theta| \leq |X_j^\top \theta_c| + r\|X_j\|, \quad (7)$$
and
$$\max_{\theta \in B(\theta_c, r)} \|\mathcal{S}_\tau(X_g^\top \theta)\| \leq T_g := \begin{cases} \|\mathcal{S}_\tau(X_g^\top \theta_c)\| + r\|X_g\| & \text{if } \|X_g^\top \theta_c\|_\infty > \tau, \\ \big(\|X_g^\top \theta_c\|_\infty + r\|X_g\| - \tau\big)_+ & \text{otherwise.} \end{cases} \quad (8)$$
Assume now that one has found a safe sphere $B(\theta_c, r)$ (their creation is deferred to Section 3.2); then the safe screening rules given by (5) and (6) read:
Theorem 1 (Safe rules for the Sparse-Group Lasso). Using $T_g$ defined in (8), we can state the following safe screening rules:
Group level safe screening: $\forall g \in \mathcal{G}$, if $T_g < (1 - \tau) w_g$, then $\hat\beta_g^{(\lambda,\tau)} = 0$;
Feature level safe screening: $\forall g \in \mathcal{G}, \forall j \in g$, if $|X_j^\top \theta_c| + r\|X_j\| < \tau$, then $\hat\beta_j^{(\lambda,\tau)} = 0$.
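To make the two-level test of Theorem 1 concrete, here is a NumPy sketch of the screening step; it is our illustration (the names and the dense-matrix layout are ours), assuming a safe sphere $B(\theta_c, r)$ has already been computed:

```python
import numpy as np

def screen(X, groups, theta_c, r, tau, w):
    """Two-level safe screening test (our sketch, not the authors' code).

    Returns boolean masks of groups and features that can be safely
    discarded, following Theorem 1 with the bounds of Proposition 1.
    """
    p = X.shape[1]
    drop_group = np.zeros(len(groups), dtype=bool)
    drop_feature = np.zeros(p, dtype=bool)
    for k, g in enumerate(groups):
        Xg_theta = X[:, g].T @ theta_c
        norm_Xg = np.linalg.norm(X[:, g], 2)  # spectral norm of the block X_g
        if np.max(np.abs(Xg_theta)) > tau:
            st = np.sign(Xg_theta) * np.maximum(np.abs(Xg_theta) - tau, 0.0)
            Tg = np.linalg.norm(st) + r * norm_Xg
        else:
            Tg = max(np.max(np.abs(Xg_theta)) + r * norm_Xg - tau, 0.0)
        if Tg < (1.0 - tau) * w[k]:
            drop_group[k] = True
            drop_feature[g] = True
        else:
            # Feature-level test inside the groups that survive
            col_norms = np.linalg.norm(X[:, g], axis=0)
            drop_feature[g] = np.abs(Xg_theta) + r * col_norms < tau
    return drop_group, drop_feature
```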
Figure 1: Lasso, Group-Lasso and Sparse-Group Lasso dual unit balls $\mathcal{B}_{\Omega^D} = \{\theta : \Omega^D(\theta) \leq 1\}$, for the case of $\mathcal{G} = \{\{1,2\}, \{3\}\}$ (i.e., $g_1 = \{1,2\}$, $g_2 = \{3\}$), $n = p = 3$, $w_{g_1} = w_{g_2} = 1$ and $\tau = 1/2$. Panels: (a) Lasso dual ball, $\Omega^D(\theta) = \|\theta\|_\infty$; (b) Group-Lasso dual ball, $\Omega^D(\theta) = \max(\sqrt{\theta_1^2 + \theta_2^2}, |\theta_3|)$; (c) Sparse-Group Lasso dual ball, $\mathcal{B}_{\Omega^D} = \{\theta : \forall g \in \mathcal{G}, \|\mathcal{S}_\tau(\theta_g)\| \leq (1-\tau)w_g\}$.
The screening rules can detect which coordinates or group of coordinates can be safely set to zero.
This allows us to remove the corresponding features from the design matrix $X$ during the optimization
process. While standard algorithms solve (1) scanning all variables, only active ones, i.e., non
screened-out variables (using the terminology from Section 3.3) need to be considered with safe
screening strategies. This leads to significant computational speed-ups, especially with a coordinate
descent algorithm for which it is natural to ignore features (see Algorithm 2, in Appendix G).
3.2 GAP safe sphere
We now show how to compute the safe sphere radius and center using the duality gap.
3.2.1 Computation of the radius
With a dual feasible point $\theta \in \Delta_{X,\tau}$ and a primal vector $\beta \in \mathbb{R}^p$ at hand, let us construct a safe sphere centered on $\theta$, with radius obtained thanks to dual gap computations.
Theorem 2 (Safe radius). For any $\theta \in \Delta_{X,\tau}$ and $\beta \in \mathbb{R}^p$, one has $\hat\theta^{(\lambda,\tau)} \in B(\theta, r_{\lambda,\tau}(\beta, \theta))$, for
$$r_{\lambda,\tau}(\beta, \theta) = \sqrt{\frac{2\big(P_{\lambda,\tau}(\beta) - D_\lambda(\theta)\big)}{\lambda^2}},$$
i.e., the aforementioned ball is a safe region for the Sparse-Group Lasso problem.
Proof. The result holds thanks to strong concavity of the dual objective, cf. Appendix C.
3.2.2 Computation of the center
In GAP safe screening rules, the screening test relies crucially on the ability to compute a vector that belongs to the dual feasible set $\Delta_{X,\tau}$. The geometry of this set is illustrated in Figure 1. Following [3], we leverage the primal/dual link-equation (3) to construct a dual point based on a current approximation $\beta$ of $\hat\beta^{(\lambda,\tau)}$. When $\beta = \hat\beta^{(\lambda',\tau)}$ is obtained as an approximation for a previous value $\lambda' \geq \lambda$, we call such a strategy sequential screening. When $\beta = \beta_k$ is the primal value at iteration $k$ obtained by an iterative algorithm, we call this dynamical screening. Starting from a residual $\rho = y - X\beta$, one can create a dual feasible point by choosing¹:
$$\theta = \frac{\rho}{\max\big(\lambda, \Omega^D(X^\top \rho)\big)}. \quad (9)$$
We refer to the sets $B(\theta, r_{\lambda,\tau}(\beta, \theta))$ as GAP safe spheres. Note that the generalization to any smooth data fitting term would be straightforward, see [15].
Remark 3. Recall that $\lambda = \lambda_{\max}$ yields $\hat\beta^{(\lambda,\tau)} = 0$, in which case $\rho := y - X\hat\beta^{(\lambda,\tau)} = y$ is the optimal residual and $y/\lambda_{\max}$ is the dual solution. Thus, as for getting $\lambda_{\max} = \Omega^D(X^\top y)$, the scaling computation in (9) requires a dual norm evaluation.
¹ We have used a simpler scaling w.r.t. the choice of [2] (without noticing much difference in practice): $\theta = s\rho$ where $s = \min\big(\max\big(\frac{y^\top \rho}{\lambda\|\rho\|^2}, \frac{-1}{\Omega^D(X^\top \rho)}\big), \frac{1}{\Omega^D(X^\top \rho)}\big)$.
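Combining Eqn. (9) with Theorem 2 gives the GAP safe sphere in a few lines. The sketch below is our illustration, not the authors' code; it assumes a `dual_norm` routine evaluating $\Omega^D$ (e.g., via Algorithm 1 below) and a precomputed primal objective value:

```python
import numpy as np

def gap_safe_sphere(X, y, beta, lam, primal_value, dual_norm):
    """Center (Eqn. 9) and radius (Theorem 2) of a GAP safe sphere.

    `dual_norm(v)` must evaluate Omega^D(v); `primal_value` is
    P_{lam,tau}(beta). A sketch only; names are ours.
    """
    rho = y - X @ beta                             # residual
    theta = rho / max(lam, dual_norm(X.T @ rho))   # dual feasible point (9)
    dual_value = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal_value - dual_value, 0.0)      # clip rounding noise
    radius = np.sqrt(2.0 * gap) / lam              # Theorem 2
    return theta, radius
```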
Algorithm 1 Computation of $\Lambda(x, \alpha, R)$.
Input: $x = (x_1, \dots, x_d)^\top \in \mathbb{R}^d$, $\alpha \in [0, 1]$, $R \geq 0$
Output: $\Lambda(x, \alpha, R)$
if $\alpha = 0$ and $R = 0$ then
    $\Lambda(x, \alpha, R) = +\infty$
else if $\alpha = 0$ and $R \neq 0$ then
    $\Lambda(x, \alpha, R) = \|x\| / R$
else if $R = 0$ then
    $\Lambda(x, \alpha, R) = \|x\|_\infty / \alpha$
else
    Get $I := \{i \in [d] : |x_i| > \alpha\|x\|_\infty / (\alpha + R)\}$; $n_I := \mathrm{Card}(I)$
    Sort $x_{(1)} \geq x_{(2)} \geq \dots \geq x_{(n_I)}$
    $S_1 \leftarrow x_{(1)}$; $S_1^{(2)} \leftarrow x_{(1)}^2$; $a_1 \leftarrow 0$
    for $k \in [n_I - 1]$ do
        $a_{k+1} \leftarrow S_k^{(2)} / x_{(k+1)}^2 - 2 S_k / x_{(k+1)} + k$
        if $R^2/\alpha^2 \in [a_k, a_{k+1})$ then $j_0 \leftarrow k$; break
        $S_{k+1} \leftarrow S_k + x_{(k+1)}$; $S_{k+1}^{(2)} \leftarrow S_k^{(2)} + x_{(k+1)}^2$
    (if no break occurs, $j_0 \leftarrow n_I$)
    if $\alpha^2 j_0 - R^2 = 0$ then
        $\Lambda(x, \alpha, R) = S_{j_0}^{(2)} / (2 \alpha S_{j_0})$
    else
        $\Lambda(x, \alpha, R) = \dfrac{\alpha S_{j_0} - \sqrt{\alpha^2 S_{j_0}^2 - S_{j_0}^{(2)}(\alpha^2 j_0 - R^2)}}{\alpha^2 j_0 - R^2}$
3.3 Convergence of the active set
The next proposition states that the sequence of dual feasible points obtained from (9) converges to the dual solution $\hat\theta^{(\lambda,\tau)}$ if $(\beta_k)_{k \in \mathbb{N}}$ converges to an optimal primal solution $\hat\beta^{(\lambda,\tau)}$ (proof in Appendix). It guarantees that the GAP safe spheres $B(\theta_k, r_{\lambda,\tau}(\beta_k, \theta_k))$ are converging safe regions in the sense introduced by [10], since by strong duality $\lim_{k\to\infty} r_{\lambda,\tau}(\beta_k, \theta_k) = 0$.
Proposition 2. If $\lim_{k\to\infty} \beta_k = \hat\beta^{(\lambda,\tau)}$, then $\lim_{k\to\infty} \theta_k = \hat\theta^{(\lambda,\tau)}$.
For any safe region $\mathcal{R}$, i.e., a set containing $\hat\theta^{(\lambda,\tau)}$, we define two levels of active sets, one for the group level and one for the feature level:
$$\mathcal{A}_{\mathrm{gp}}(\mathcal{R}) := \Big\{g \in \mathcal{G} : \max_{\theta \in \mathcal{R}} \|\mathcal{S}_\tau(X_g^\top \theta)\| \geq (1-\tau)w_g\Big\}, \quad \mathcal{A}_{\mathrm{ft}}(\mathcal{R}) := \bigcup_{g \in \mathcal{A}_{\mathrm{gp}}(\mathcal{R})} \Big\{j \in g : \max_{\theta \in \mathcal{R}} |X_j^\top \theta| \geq \tau\Big\}.$$
If one considers a sequence of converging regions, then the next proposition (proof in Appendix) states that we can identify in finite time the optimal active sets defined as follows:
$$\mathcal{E}_{\mathrm{gp}} := \Big\{g \in \mathcal{G} : \|\mathcal{S}_\tau(X_g^\top \hat\theta^{(\lambda,\tau)})\| \geq (1-\tau)w_g\Big\}, \quad \mathcal{E}_{\mathrm{ft}} := \bigcup_{g \in \mathcal{E}_{\mathrm{gp}}} \Big\{j \in g : |X_j^\top \hat\theta^{(\lambda,\tau)}| \geq \tau\Big\}.$$
Proposition 3. Let $(\mathcal{R}_k)_{k\in\mathbb{N}}$ be a sequence of safe regions whose diameters converge to 0. Then, $\lim_{k\to\infty} \mathcal{A}_{\mathrm{gp}}(\mathcal{R}_k) = \mathcal{E}_{\mathrm{gp}}$ and $\lim_{k\to\infty} \mathcal{A}_{\mathrm{ft}}(\mathcal{R}_k) = \mathcal{E}_{\mathrm{ft}}$.
4 Properties of the Sparse-Group Lasso
To apply our safe rule, we need to be able to evaluate the dual norm $\Omega^D$ efficiently. We describe such a step hereafter along with some useful properties of the norm $\Omega$. Such evaluations are performed multiple times during the algorithm, motivating the derivation of an efficient algorithm, as presented in Algorithm 1.
4.1 Connections with $\epsilon$-norms
Here, we establish a link between the Sparse-Group Lasso norm $\Omega$ and the $\epsilon$-norm (denoted $\|\cdot\|_\epsilon$) introduced in [6]. For any $\epsilon \in [0, 1]$ and $x \in \mathbb{R}^d$, $\|x\|_\epsilon$ is defined as the unique nonnegative solution $\nu$ of the equation $\sum_{i=1}^{d} \big(|x_i| - (1-\epsilon)\nu\big)_+^2 = (\epsilon\nu)^2$ (with the convention $\|x\|_0 := \|x\|_\infty$). Using soft-thresholding, this is equivalent to solving in $\nu$ the equation $\sum_{i=1}^{d} \mathcal{S}_{(1-\epsilon)\nu}(x_i)^2 = \|\mathcal{S}_{(1-\epsilon)\nu}(x)\|^2 = (\epsilon\nu)^2$. Moreover, the dual norm of the $\epsilon$-norm is given by²: $\|y\|_\epsilon^D = \epsilon\|y\| + (1-\epsilon)\|y\|_1$. Now we can express the Sparse-Group Lasso norm $\Omega$ in terms of the dual $\epsilon$-norm and derive some basic properties.
² See [7, Eq. (42)] or Appendix.
Proposition 4. For all groups $g$ in $\mathcal{G}$, let us introduce $\epsilon_g := \frac{(1-\tau)w_g}{\tau + (1-\tau)w_g}$. Then, the Sparse-Group Lasso norm satisfies the following properties: for any $\beta$ and $\theta$ in $\mathbb{R}^p$,
$$\Omega(\beta) = \sum_{g \in \mathcal{G}} \big(\tau + (1-\tau)w_g\big) \|\beta_g\|_{\epsilon_g}^D, \quad \text{and} \quad \Omega^D(\theta) = \max_{g \in \mathcal{G}} \frac{\|\theta_g\|_{\epsilon_g}}{\tau + (1-\tau)w_g}, \quad (10)$$
$$\mathcal{B}_{\Omega^D} = \big\{\theta \in \mathbb{R}^p : \forall g \in \mathcal{G}, \; \|\mathcal{S}_\tau(\theta_g)\| \leq (1-\tau)w_g\big\}. \quad (11)$$
The sub-differential at $\beta$ reads $\partial\Omega(\beta) = \{z \in \mathbb{R}^p : \forall g \in \mathcal{G}, \; z_g \in \tau \, \partial\|\cdot\|_1(\beta_g) + (1-\tau)w_g \, \partial\|\cdot\|(\beta_g)\}$.
We obtain from the characterization of the unit dual ball (11) that for the Sparse-Group Lasso, any dual feasible point $\theta \in \Delta_{X,\tau}$ verifies: $\forall g \in \mathcal{G}$, $X_g^\top \theta \in (1-\tau)w_g \mathcal{B} + \tau \mathcal{B}_\infty$.
From the dual norm formulation (10), a vector $\theta \in \mathbb{R}^n$ is feasible if and only if $\Omega^D(X^\top \theta) \leq 1$, i.e., $\forall g \in \mathcal{G}$, $\|X_g^\top \theta\|_{\epsilon_g} \leq \tau + (1-\tau)w_g$. Hence we deduce from (11) a new characterization of the dual feasible set: $\Delta_{X,\tau} = \big\{\theta \in \mathbb{R}^n : \forall g \in \mathcal{G}, \; \|X_g^\top \theta\|_{\epsilon_g} \leq \tau + (1-\tau)w_g\big\}$.
dual feasible set: ?X,? ? ? P Rn : @g P G, kXgJ ?kg ? ? ` p1 ? ? qwg .
4.2
Efficient computation of the dual norm
The following proposition shows how to compute the dual norm of the Sparse-Group Lasso (and the
-norm). This is turned into an efficient procedure in Algorithm 1 (see the Appendix for details).
?d
Proposition 5. For ? P r0, 1s, R ? 0 and x P Rd , the equation i?1 S?? pxi q2 ? p?Rq2 has
a unique solution ? :? ?px, ?, Rq P R` , that can be computed in Opd log dq operations in the
worst case. With nI ? Card ti P rds : |xi | ? ?kxk8 {p? ` Rqu, the complexity of Algorithm 1 is
nI ` nI logpnI q, which is comparable to the ambient dimension d.
Thanks to Remark 2, we can explicit the critical parameter ?max for the Sparse-Group Lasso that is
?max ? max
gPG
?pXgJ y, 1 ? g , g q
? ?D pX J yq,
? ` p1 ? ? qwg
(12)
and get a dual feasible point (9), since ?D pX J ?q ? maxgPG ?pXgJ ?, 1 ? g , g q{p? ` p1 ? ? qwg q.
5
Implementation
In this section we provide details on how to solve the Sparse-Group Lasso primal problem, and how
we apply the GAP safe screening rules. We focus on the block coordinate iterative soft-thresholding
algorithm (ISTA-BC); see [16]. This algorithm requires a block-wise Lipschitz gradient condition
on the data fitting term f p?q ? ky ? X?k2 {2. For our problem (1), one can show that for all
group g in G, Lg ? kXg k22 (where k?k2 is the spectral norm of a matrix) is a suitable block-wise
Lipschitz constant. We define the block coordinate descent algorithm according to the MajorizationMinimization principle: at each iteration l, we choose (e.g., cyclically) a group g and the next
iterate ? l`1 is defined such that ?gl`1
? ?gl 1 if g 1 ? g and otherwise ?gl`1 ? arg min?g PRng k?g ?
1
` l
? 2
`
?
l
?g ? ?g f p? q{Lg k {2 ` ? k?g k1 ` p1 ? ? qwg k?g k ?{Lg , where we denote for all g in G, ?g :?
`
` l
??
gp
l
?{Lg . This can be simplified to ?gl`1 ? Sp1??
q?g ?g S? ?g ?g ? ?g f p? q{Lg . The expensive
computation of the dual gap is not performed at each pass over the data, but only every f ce pass (in
practice f ce ? 10 in all our experiments). A pseudo code is given in Appendix G.
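For illustration, one cyclic ISTA-BC pass can be sketched as below (our code, with dense NumPy operations; a practical implementation would also skip screened-out groups):

```python
import numpy as np

def ista_bc_epoch(beta, X, y, groups, lam, tau, w, L):
    """One cyclic pass of ISTA-BC for the Sparse-Group Lasso (a sketch).

    L[k] is the block Lipschitz constant ||X_g||_2^2; each update is a
    gradient step followed by coordinate-wise then group-wise
    soft-thresholding, as in the simplified update above.
    """
    residual = X @ beta - y
    for k, g in enumerate(groups):
        grad_g = X[:, g].T @ residual                # gradient of f w.r.t. beta_g
        u = beta[g] - grad_g / L[k]
        mu = lam / L[k]
        u = np.sign(u) * np.maximum(np.abs(u) - tau * mu, 0.0)   # l1 step
        thr = (1.0 - tau) * w[k] * mu
        norm_u = np.linalg.norm(u)
        new_g = np.zeros_like(u) if norm_u <= thr else (1.0 - thr / norm_u) * u
        residual += X[:, g] @ (new_g - beta[g])      # keep residual in sync
        beta[g] = new_g
    return beta
```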
6 Experiments
In this section we present our experiments and illustrate the numerical benefit of screening rules for
the Sparse-Group Lasso.
6.1
Experimental settings and methods compared
We have run our ISTA-BC algorithm³ to obtain the Sparse-Group Lasso estimator for a non-increasing sequence of $T$ regularization parameters $(\lambda_t)_{t \in [T-1]}$ defined as follows: $\lambda_t := \lambda_{\max} 10^{-\delta(t-1)/(T-1)}$.
³ The source code can be found at https://github.com/EugeneNdiaye/GAPSAFE_SGL.
Figure 2: Experiments on a synthetic dataset ($\rho = 0.5$, $\gamma_1 = 10$, $\gamma_2 = 4$, $\tau = 0.2$). (a) Proportion of active variables, i.e., variables not safely eliminated, as a function of the parameters $(\lambda_t)$ and the number of iterations $K$. More red means more variables eliminated and better screening. (b) Time to reach convergence w.r.t. the accuracy on the duality gap, using various screening strategies.
By default, we choose $\delta = 3$ and $T = 100$, following the standard practice when running cross-validation using sparse models (see the R glmnet package [11]). The weights are always chosen as $w_g = \sqrt{n_g}$ (as in [17]).
We also provide a natural extension of the previous safe rules [9, 21, 3] to the Sparse-Group
Lasso for comparisons (please refer to Appendix D for more details). The static safe region
[9] is given by $B(y/\lambda, \|y/\lambda_{\max} - y/\lambda\|)$. The corresponding dynamic safe region [3] is given by $B(y/\lambda, \|\theta_k - y/\lambda\|)$, where $(\theta_k)_{k\in\mathbb{N}}$ is a sequence of dual feasible points obtained by dual scaling; cf. Equation (9). The DST3 is an improvement of the preceding safe region, see [21, 3], that we
adapted to the Sparse-Group Lasso. The GAP safe sequential rules corresponds to using only
GAP Safe spheres whose centers are the (last) dual point output by the solver for a former value
of ? in the path. The GAP safe rules corresponds to performing our strategy both sequentially and
dynamically. Presenting the sequential rule allows to measure the benefits due to sequential rules and
to the dynamic rules.
We now demonstrate the efficiency of our method on both synthetic (Fig. 2) and real datasets (Fig. 3). For comparison, we report computation times to reach convergence up to a certain
tolerance on the duality gap for all the safe rules considered.
Synthetic dataset: We use a common framework [19, 20] based on the model $y = X\beta + 0.01\xi$ where $\xi \sim \mathcal{N}(0, \mathrm{Id}_n)$ and $X \in \mathbb{R}^{n \times p}$ follows a multivariate normal distribution such that $\forall (i, j) \in [p]^2$, $\mathrm{corr}(X_i, X_j) = \rho^{|i-j|}$. We fix $n = 100$ and break randomly $p = 10000$ into 1000 groups of size 10, selecting $\gamma_1$ groups to be active while the others are set to zero. In each selected group, $\gamma_2$ coordinates are drawn with $[\beta_g]_j = \mathrm{sign}(\xi) \times U$, where $U$ is uniform in $[0.5, 10]$ and $\xi$ uniform in $[-1, 1]$.
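For reproducibility, a data generator in this spirit can be sketched as follows (our code; the AR(1) recursion is one standard way to obtain the $\rho^{|i-j|}$ correlation):

```python
import numpy as np

def make_synthetic(n=100, p=10000, n_groups=1000, gamma1=10, gamma2=4,
                   rho=0.5, seed=0):
    """Synthetic Sparse-Group Lasso data as described in Section 6.1."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, p))
    X[:, 0] = rng.standard_normal(n)
    for j in range(1, p):  # AR(1) columns give corr(X_i, X_j) = rho^{|i-j|}
        X[:, j] = rho * X[:, j - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
    groups = np.array_split(np.arange(p), n_groups)
    beta = np.zeros(p)
    for g in rng.choice(n_groups, gamma1, replace=False):
        idx = rng.choice(groups[g], gamma2, replace=False)
        beta[idx] = np.sign(rng.uniform(-1, 1, gamma2)) * rng.uniform(0.5, 10, gamma2)
    y = X @ beta + 0.01 * rng.standard_normal(n)
    return X, y, beta, groups
```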
Real dataset: NCEP/NCAR Reanalysis 1 [14] The dataset contains monthly means of climate data
measurements spread across the globe on a grid of 2.5° × 2.5° resolution (longitude and latitude, 144 × 73) from 1948/1/1 to 2015/10/31. Each grid point constitutes a group of 7 predictive variables
(Air Temperature, Precipitable water, Relative humidity, Pressure, Sea Level Pressure, Horizontal
Wind Speed and Vertical Wind Speed) whose concatenation across time constitutes our design matrix
$X \in \mathbb{R}^{814 \times 73577}$. Such data have therefore a natural group structure.
In our experiments, we considered as target variable $y \in \mathbb{R}^{814}$ the values of Air Temperature in a
neighborhood of Dakar. Seasonality and trend are first removed, as usually done in climate analysis
for bias reduction in the regression estimates. Similar data has been used in [8], showing that the
Sparse-Group Lasso estimator is well suited for prediction in climatology. Indeed, thanks to the
sparsity structure, the estimates delineate via their support some predictive regions at the group level,
as well as predictive features via coordinate-wise screening.
We choose $\tau$ in the set $\{0, 0.1, \ldots, 0.9, 1\}$ by splitting the observations in half and running a training-test validation procedure. For each value of $\tau$, we require a duality gap of $10^{-8}$ on the training part
Figure 3: Experiments on NCEP/NCAR Reanalysis 1 ($n = 814$, $p = 73577$): (a) Prediction error for the Sparse-Group Lasso path with 100 values of $\lambda$ and 11 values of $\tau$ (best: $\tau^* = 0.4$). (b) Time to reach convergence controlled by duality gap (for the whole path $(\lambda_t)_{t \in [T]}$ with $\delta = 2.5$ and $\tau^* = 0.4$). (c) Active groups to predict Air Temperature in a neighborhood of Dakar (in blue). Cross validation was run over 100 values for $\lambda$ and 11 for $\tau$. At each location, the highest absolute value among the seven coefficients is displayed.
and pick the best one in terms of prediction accuracy on the test part. The result is displayed in Figure 3(a). We fixed $\delta = 2.5$ for the computational time benchmark in Figure 3(b).
6.2 Performance of the screening rules
In all our experiments, we observe that our proposed GAP Safe rule outperforms the other rules in terms of computation time. In Figure 2, we can see that we need 65 s to reach convergence whereas other rules need up to 212 s at a precision of $10^{-8}$. A similar performance is observed on the real dataset (Figure 3), where we obtain up to a 5x speed-up over the other rules. The key reason behind this performance gain is the convergence of the GAP Safe regions toward the dual optimal point, as well as the efficient strategy to compute the screening rule. As shown in the results presented in Figure 2, our method still manages to screen out variables when $\lambda$ is small. This corresponds to low regularization, which leads to less sparse solutions but needs to be explored during cross-validation.
In the climate experiments, the support map in Figure 3(c) shows that the most important coefficients are distributed in the vicinity of the target region (in agreement with our intuition). Nevertheless, some active variables with small coefficients remain and cannot be screened out.
Note that we do not compare our method to the TLFre [20], since this sequential rule requires the
exact knowledge of the dual optimal solution which is not available in practice. As a consequence,
one may discard active variables which can prevent the algorithm from converging as shown in [15].
7 Conclusion
The recent GAP safe rules introduced have shown great improvements, for a wide range of regularized
regression, in the reduction of computing time, especially in high dimension. To apply such GAP
safe rules to the Sparse-Group Lasso, we have proposed a new description of the dual feasible set
by establishing connections between the Sparse-Group Lasso norm and $\epsilon$-norms. This geometrical connection has helped to provide an efficient algorithm to compute the dual norm and dual feasible points, bottlenecks for applying the GAP Safe rules. Extending GAP safe rules to general hierarchical regularizations is a possible direction for future research.
Acknowledgments: this work was supported by the ANR THALAMEEG ANR-14-NEUC-0002-01, the NIH R01 MH106174, by ERC Starting Grant SLAB ERC-YStG-676943 and by the Chair Machine Learning for Big Data at Télécom ParisTech.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[2] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval. A dynamic screening principle for the lasso. In EUSIPCO, 2014.
[3] A. Bonnefoy, V. Emiya, L. Ralaivola, and R. Gribonval. Dynamic screening: Accelerating first-order algorithms for the lasso and group-lasso. IEEE Trans. Signal Process., 63(19), 2015.
[4] J. M. Borwein and A. S. Lewis. Convex analysis and nonlinear optimization. Springer, New York, second edition, 2006.
[5] P. Bühlmann and S. van de Geer. Statistics for high-dimensional data. Springer Series in Statistics. Springer, Heidelberg, 2011. Methods, theory and applications.
[6] O. Burdakov. A new vector norm for nonlinear curve fitting and some other optimization problems. 33. Int. Wiss. Kolloq. Fortragsreihe "Mathematische Optimierung: Theorie und Anwendungen", pages 15-17, 1988.
[7] O. Burdakov and B. Merkulov. On a new norm for data fitting and optimization problems. Linköping University, Linköping, Sweden, Tech. Rep. LiTH-MAT, 2001.
[8] S. Chatterjee, K. Steinhaeuser, A. Banerjee, S. Chatterjee, and A. Ganguly. Sparse group lasso: Consistency and climate applications. In SIAM International Conference on Data Mining, pages 47-58, 2012.
[9] L. El Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination in sparse supervised learning. J. Pacific Optim., 8(4):667-698, 2012.
[10] O. Fercoq, A. Gramfort, and J. Salmon. Mind the duality gap: safer rules for the lasso. In ICML, pages 333-342, 2015.
[11] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. Ann. Appl. Stat., 1(2):302-332, 2007.
[12] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. J. Mach. Learn. Res., 12:2297-2334, 2011.
[13] T. B. Johnson and C. Guestrin. Blitz: A principled meta-algorithm for scaling sparse optimization. In ICML, pages 1171-1179, 2015.
[14] E. Kalnay, M. Kanamitsu, R. Kistler, W. Collins, D. Deaven, L. Gandin, M. Iredell, S. Saha, G. White, J. Woollen, et al. The NCEP/NCAR 40-year reanalysis project. Bulletin of the American Meteorological Society, 77(3):437-471, 1996.
[15] E. Ndiaye, O. Fercoq, A. Gramfort, and J. Salmon. GAP safe screening rules for sparse multi-task and multi-class models. In NIPS, pages 811-819, 2015.
[16] Z. Qin, K. Scheinberg, and D. Goldfarb. Efficient block-coordinate descent algorithms for the group lasso. Mathematical Programming Computation, 5(2):143-169, 2013.
[17] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. J. Comput. Graph. Statist., 22(2):231-245, 2013.
[18] R. Tibshirani. Regression shrinkage and selection via the lasso. JRSSB, 58(1):267-288, 1996.
[19] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. JRSSB, 74(2):245-266, 2012.
[20] J. Wang and J. Ye. Two-layer feature reduction for sparse-group lasso via decomposition of convex sets. arXiv preprint arXiv:1410.4210, 2014.
[21] Z. J. Xiang, H. Xu, and P. J. Ramadge. Learning sparse representations of high dimensional data on large scale dictionaries. In NIPS, pages 900-908, 2011.
[22] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. JRSSB, 68(1):49-67, 2006.
Deep ADMM-Net for Compressive Sensing MRI
Yan Yang
Xi?an Jiaotong University
[email protected]
Jian Sun
Xi?an Jiaotong University
[email protected]
Huibin Li
Xi?an Jiaotong University
[email protected]
Zongben Xu
Xi?an Jiaotong University
[email protected]
Abstract
Compressive Sensing (CS) is an effective approach for fast Magnetic Resonance
Imaging (MRI). It aims at reconstructing an MR image from a small number of under-sampled data in k-space, and accelerating the data acquisition in MRI. To improve the current MRI system in reconstruction accuracy and computational speed, in this paper we propose a novel deep architecture, dubbed ADMM-Net. ADMM-Net is defined over a data flow graph, which is derived from the iterative procedures in the Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing a CS-based MRI model. In the training phase, all parameters of the net, e.g., image transforms, shrinkage functions, etc., are discriminatively trained end-to-end using the L-BFGS algorithm. In the testing phase, it has computational overhead similar to ADMM but uses optimized parameters learned from the training data for the CS-based reconstruction task. Experiments on MRI image reconstruction under different sampling ratios in k-space demonstrate that it significantly
improves the baseline ADMM algorithm and achieves high reconstruction accuracies with fast computational speed.
1 Introduction
Magnetic Resonance Imaging (MRI) is a non-invasive imaging technique providing both functional
and anatomical information for clinical diagnosis. Imaging speed is a fundamental challenge. Fast
MRI techniques are essentially demanded for accelerating data acquisition while still reconstructing
high quality image. Compressive sensing MRI (CS-MRI) is an effective approach allowing for data
sampling rate much lower than Nyquist rate without significantly degrading the image quality [1].
CS-MRI methods first sample data in k-space (i.e., Fourier space), then reconstruct image using
compressive sensing theory. Regularization related to the data prior is a key component in a CSMRI model to reduce imaging artifacts and improve imaging precision. Sparse regularization can be
explored in specific transform domain or general dictionary-based subspace [2]. Total Variation (TV)
regularization in gradient domain has been widely utilized in MRI [3, 4]. Although it is easy and
fast to optimize, it introduces staircase artifacts in reconstructed image. Methods in [5, 6] leverage
sparse regularization in the wavelet domain. Dictionary learning methods rely on a dictionary of
local patches to improve the reconstruction accuracy [7, 8]. The non-local method uses groups of
similar local patches for joint patch-level reconstruction to better preserve image details [9, 10, 11].
In performance, the basic CS-MRI methods run fast but produce less accurate reconstruction results.
The non-local and dictionary learning-based methods generally output higher quality MR images,
but suffer from slow reconstruction speed. In a CS-MRI model, it is commonly challenging to
choose an optimal image transform domain / subspace and the corresponding sparse regularization.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
To optimize the CS-MRI models, Alternating Direction Method of Multipliers (ADMM) has proven
to be an efficient variable splitting algorithm with convergence guarantee [4, 12, 13]. It considers
the augmented Lagrangian function of a given CS-MRI model, and splits variables into subgroups,
which can be alternatively optimized by solving a few simple subproblems. Although ADMM is
generally efficient, it is not trivial to determine the optimal parameters (e.g., update rates, penalty
parameters) influencing accuracy in CS-MRI.
In this work, we aim to design a fast yet accurate method to reconstruct high-quality MR images
from under-sampled k-space data. We propose a novel deep architecture, dubbed ADMM-Net, inspired by the ADMM iterative procedures for optimizing a general CS-MRI model. This deep architecture consists of multiple stages, each of which corresponds to an iteration in the ADMM algorithm.
More specifically, we define a deep architecture represented by a data flow graph [14] for ADMM
procedures. The operations in ADMM are represented as graph nodes, and the data flow between
two operations in ADMM is represented by a directed edge. Therefore, the ADMM iterative procedures naturally determine a deep architecture over a data flow graph. Given an under-sampled data
in k-space, it flows over the graph and generates a reconstructed image. All the parameters (e.g.,
transforms, shrinkage functions, penalty parameters, etc.) in the deep architecture can be discriminatively learned from training pairs of under-sampled data in k-space and reconstructed image using
fully sampled data by backpropagation [15] over the data flow graph.
Our experiments demonstrate that the proposed deep ADMM-Net is effective both in reconstruction accuracy and speed. Compared with the baseline methods using sparse regularization in transform domain, it achieves significantly higher accuracy and takes comparable computational time.
Compared with the state-of-the-art methods using dictionary learning and non-local techniques, it
achieves high accuracy in significantly faster computational speed.
The main contributions of this paper can be summarized as follows. We propose a novel deep
ADMM-Net by reformulating an ADMM algorithm to a deep network for CS-MRI. This is achieved
by designing a data flow graph for ADMM to effectively build and train the ADMM-Net. ADMMNet achieves high accuracy in MR image reconstruction with fast computational speed justified in
experiments. The discriminative parameter learning approach has been applied to sparse coding
and Markov Random Fields [16, 17, 18, 19]. But, to the best of our knowledge, this is the first
computational framework that maps an ADMM algorithm to a learnable deep architecture.
2 Deep ADMM-Net for Fast MRI
2.1 Compressive Sensing MRI Model and ADMM Algorithm
General CS-MRI Model: Assume $x \in \mathbb{C}^N$ is an MRI image to be reconstructed and $y \in \mathbb{C}^{N'}$ ($N' < N$) is the under-sampled k-space data; according to the CS theory, the reconstructed image can be estimated by solving the following optimization problem:
$$\hat{x} = \arg\min_{x} \left\{ \frac{1}{2}\|Ax - y\|_2^2 + \sum_{l=1}^{L} \lambda_l \, g(D_l x) \right\}, \quad (1)$$
where $A = PF \in \mathbb{R}^{N' \times N}$ is a measurement matrix, $P \in \mathbb{R}^{N' \times N}$ is an under-sampling matrix, and $F$ is a Fourier transform. $D_l$ denotes a transform matrix for a filtering operation, e.g., Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), etc. $g(\cdot)$ is a regularization function derived from the data prior, e.g., an $\ell_q$-norm ($0 \leq q \leq 1$) for a sparse prior. $\lambda_l$ is a regularization parameter.
ADMM solver [12]: The above optimization problem can be solved efficiently using the ADMM algorithm. By introducing auxiliary variables $z = \{z_1, z_2, \cdots, z_L\}$, Eqn. (1) is equivalent to:
$$\min_{x,z} \frac{1}{2}\|Ax - y\|_2^2 + \sum_{l=1}^{L} \lambda_l \, g(z_l) \quad \text{s.t.} \quad z_l = D_l x, \; \forall l \in [1, 2, \cdots, L]. \quad (2)$$
Its augmented Lagrangian function is:
$$L_\rho(x, z, \alpha) = \frac{1}{2}\|Ax - y\|_2^2 + \sum_{l=1}^{L} \lambda_l \, g(z_l) - \sum_{l=1}^{L} \langle \alpha_l, z_l - D_l x \rangle + \sum_{l=1}^{L} \frac{\rho_l}{2}\|z_l - D_l x\|_2^2, \quad (3)$$
Figure 1: The data flow graph for the ADMM optimization of a general CS-MRI model. This graph consists of four types of nodes: reconstruction (X), convolution (C), non-linear transform (Z), and multiplier update (M). An under-sampled data in k-space is successively processed over the graph, and finally generates an MR image. Our deep ADMM-Net is defined over this data flow graph.
where $\alpha = \{\alpha_l\}$ are Lagrangian multipliers and $\rho = \{\rho_l\}$ are penalty parameters. ADMM alternatively optimizes $\{x, z, \alpha\}$ by solving the following three subproblems:
$$\begin{cases} x^{(n+1)} = \arg\min_x \frac{1}{2}\|Ax - y\|_2^2 - \sum_{l=1}^{L}\langle \alpha_l^{(n)}, z_l^{(n)} - D_l x\rangle + \sum_{l=1}^{L}\frac{\rho_l}{2}\|z_l^{(n)} - D_l x\|_2^2, \\ z^{(n+1)} = \arg\min_z \sum_{l=1}^{L}\lambda_l \, g(z_l) - \sum_{l=1}^{L}\langle \alpha_l^{(n)}, z_l - D_l x^{(n+1)}\rangle + \sum_{l=1}^{L}\frac{\rho_l}{2}\|z_l - D_l x^{(n+1)}\|_2^2, \\ \alpha^{(n+1)} = \arg\max_\alpha \sum_{l=1}^{L}\langle \alpha_l, D_l x^{(n+1)} - z_l^{(n+1)}\rangle, \end{cases} \quad (4)$$
where $n \in [1, 2, \cdots, N_s]$ denotes the $n$-th iteration. For simplicity, let $\beta_l = \frac{\alpha_l}{\rho_l}$ ($l \in [1, 2, \cdots, L]$), and substitute $A = PF$ into Eqn. (4). Then the three subproblems have the following solutions:
$$\begin{cases} \mathbf{X}^{(n)}: \; x^{(n)} = F^T\Big(P^T P + \sum_{l=1}^{L} \rho_l F D_l^T D_l F^T\Big)^{-1}\Big[P^T y + \sum_{l=1}^{L} \rho_l F D_l^T \big(z_l^{(n-1)} - \beta_l^{(n-1)}\big)\Big], \\ \mathbf{Z}^{(n)}: \; z_l^{(n)} = S\big(D_l x^{(n)} + \beta_l^{(n-1)}; \lambda_l / \rho_l\big), \\ \mathbf{M}^{(n)}: \; \beta_l^{(n)} = \beta_l^{(n-1)} + \eta_l \big(D_l x^{(n)} - z_l^{(n)}\big), \end{cases} \quad (5)$$
where $x^{(n)}$ can be efficiently computed by fast Fourier transform, and $S(\cdot)$ is a nonlinear shrinkage function. It is usually a soft or hard thresholding function corresponding to the sparse regularization of the $\ell_1$-norm and $\ell_0$-norm respectively [20]. The parameter $\eta_l$ is an update rate.
In CS-MRI, one commonly needs to run the ADMM algorithm for dozens of iterations to get a satisfactory reconstruction result. However, it is challenging to choose the transform $D_l$ and shrinkage function $S(\cdot)$ for a general regularization function $g(\cdot)$. Moreover, it is also not trivial to tune the parameters $\rho_l$ and $\eta_l$ for k-space data with different sampling ratios. To overcome these difficulties, we will design a data flow graph for the ADMM algorithm, over which we can define a deep ADMM-Net to discriminatively learn all the above transforms, functions, and parameters.
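To make Eqn. (5) concrete, a single (unlearned) ADMM iteration can be sketched in NumPy as below. This is our illustration with dense matrices and a unitary 1D DFT; a practical implementation would instead precompute the inverse in the X-step and use convolutions rather than explicit filter matrices:

```python
import numpy as np

def admm_cs_mri_step(x, z, beta, y, P, D, rho, lam, eta, soft=True):
    """One ADMM iteration of Eqn. (5), a dense-matrix sketch (our code).

    D is a list of square filtering matrices, P a row-selection matrix in
    k-space; z and beta are lists of transform-domain variables.
    """
    N = x.size
    F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary 1D DFT matrix
    # X-step: solve for Fx, then inverse-transform.
    lhs = P.T @ P + sum(r * F @ Dl.T @ Dl @ F.conj().T for r, Dl in zip(rho, D))
    rhs = P.T @ y + sum(r * F @ Dl.T @ (zl - bl)
                        for r, Dl, zl, bl in zip(rho, D, z, beta))
    x = (F.conj().T @ np.linalg.solve(lhs, rhs)).real  # real image assumed here
    z_new, beta_new = [], []
    for Dl, bl, r, lm, et in zip(D, beta, rho, lam, eta):
        c = Dl @ x + bl
        t = lm / r
        # Z-step: soft (or hard) thresholding at level lambda_l / rho_l.
        zl = np.sign(c) * np.maximum(np.abs(c) - t, 0) if soft else c * (np.abs(c) > t)
        # M-step: multiplier update with rate eta_l.
        beta_new.append(bl + et * (Dl @ x - zl))
        z_new.append(zl)
    return x, z_new, beta_new
```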
2.2 Data Flow Graph for the ADMM Algorithm
To design our deep ADMM-Net, we first map the ADMM iterative procedures in Eqn. (5) to a data flow graph [14]. As shown in Fig. 1, this graph comprises nodes corresponding to different
operations in ADMM, and directed edges corresponding to the data flows between operations. In
this case, the n-th iteration of ADMM algorithm corresponds to the n-th stage of the data flow graph.
In the n-th stage of the graph, there are four types of nodes mapped from four types of operations in
ADMM, i.e., reconstruction operation ($X^{(n)}$), convolution operation ($C^{(n)}$) defined by $\{D_l x^{(n)}\}_{l=1}^{L}$, nonlinear transform operation ($Z^{(n)}$) defined by $S(\cdot)$, and multiplier update operation ($M^{(n)}$) in Eqn. (5). The whole data flow graph is a multiple repetition of the above stages corresponding to
successive iterations in ADMM. Given an under-sampled data in k-space, it flows over the graph
and finally generates a reconstructed image. In this way, we map the ADMM iterations to a data
flow graph, which is useful to define and train our deep ADMM-Net in the following sections.
2.3 Deep ADMM-Net
Our deep ADMM-Net is defined over the data flow graph. It keeps the graph structure but generalizes
the four types of operations to have learnable parameters as network layers. These operations are
now generalized as reconstruction layer, convolution layer, non-linear transform layer, and multiplier
update layer. We next discuss them in detail.
Reconstruction layer ($X^{(n)}$): This layer reconstructs an MRI image following the reconstruction operation $X^{(n)}$ in Eqn. (5). Given $z_l^{(n-1)}$ and $\beta_l^{(n-1)}$, the output of this layer is defined as:
$$x^{(n)} = F^T\Big(P^T P + \sum_{l=1}^{L} \rho_l^{(n)} F {H_l^{(n)}}^T H_l^{(n)} F^T\Big)^{-1}\Big[P^T y + \sum_{l=1}^{L} \rho_l^{(n)} F {H_l^{(n)}}^T \big(z_l^{(n-1)} - \beta_l^{(n-1)}\big)\Big], \quad (6)$$
where $H_l^{(n)}$ is the $l$-th filter, $\rho_l^{(n)}$ is the $l$-th penalty parameter, $l = 1, \cdots, L$, and $y$ is the input under-sampled data in k-space. In the first stage ($n = 1$), $z_l^{(0)}$ and $\beta_l^{(0)}$ are initialized to zeros, therefore $x^{(1)} = F^T(P^T P + \sum_{l=1}^{L} \rho_l^{(1)} F {H_l^{(1)}}^T H_l^{(1)} F^T)^{-1}(P^T y)$.
Convolution layer ($C^{(n)}$): It performs a convolution operation to transform an image into the transform domain. Given an image $x^{(n)}$, i.e., a reconstructed image in stage $n$, the output is
$$c_l^{(n)} = D_l^{(n)} x^{(n)}, \quad (7)$$
where $D_l^{(n)}$ is a learnable filter matrix in stage $n$. Different from the original ADMM, we do not constrain the filters $D_l^{(n)}$ and $H_l^{(n)}$ to be the same, to increase the network capacity.
Nonlinear transform layer ($Z^{(n)}$): This layer performs a nonlinear transform inspired by the shrinkage function $S(\cdot)$ defined in $Z^{(n)}$ in Eqn. (5). Instead of setting it to be a shrinkage function determined by the regularization term $g(\cdot)$ in Eqn. (1), we aim to learn a more general function using a piecewise linear function. Given $c_l^{(n)}$ and $\beta_l^{(n-1)}$, the output of this layer is defined as:
$$z_l^{(n)} = S_{PLF}\big(c_l^{(n)} + \beta_l^{(n-1)}; \{p_i, q_{l,i}^{(n)}\}_{i=1}^{N_c}\big), \quad (8)$$
where $S_{PLF}(\cdot)$ is a piecewise linear function determined by a set of control points $\{p_i, q_{l,i}^{(n)}\}_{i=1}^{N_c}$, i.e.,
$$S_{PLF}\big(a; \{p_i, q_{l,i}^{(n)}\}_{i=1}^{N_c}\big) = \begin{cases} a + q_{l,1}^{(n)} - p_1, & a < p_1, \\ a + q_{l,N_c}^{(n)} - p_{N_c}, & a > p_{N_c}, \\ q_{l,k}^{(n)} + \dfrac{(a - p_k)\big(q_{l,k+1}^{(n)} - q_{l,k}^{(n)}\big)}{p_{k+1} - p_k}, & p_1 \leq a \leq p_{N_c}, \end{cases} \quad (9)$$
where $k = \big\lfloor \frac{a - p_1}{p_2 - p_1} \big\rfloor + 1$, $\{p_i\}_{i=1}^{N_c}$ are predefined positions uniformly located within $[-1, 1]$, and $\{q_{l,i}^{(n)}\}_{i=1}^{N_c}$ are the values at these positions for the $l$-th filter in the $n$-th stage. Figure 2 gives an illustrative example. Since a piecewise linear function can approximate any function, we can learn a flexible nonlinear transform function from data beyond the off-the-shelf hard or soft thresholding functions.
Figure 2: Illustration of a piecewise linear function determined by a set of control points.
Multiplier update layer ($M^{(n)}$): This layer is defined by the Lagrangian multiplier updating procedure $M^{(n)}$ in Eqn. (5). The output of this layer in stage $n$ is defined as:
$$\beta_l^{(n)} = \beta_l^{(n-1)} + \eta_l^{(n)} \big(c_l^{(n)} - z_l^{(n)}\big), \quad (10)$$
where $\eta_l^{(n)}$ are learnable parameters.
Network Parameters: These layers are organized in a data flow graph shown in Fig. 1. In the deep architecture, we aim to learn the following parameters: $H_l^{(n)}$ and $\rho_l^{(n)}$ in the reconstruction layer, filters $D_l^{(n)}$ in the convolution layer, $\{q_{l,i}^{(n)}\}_{i=1}^{N_c}$ in the nonlinear transform layer, and $\eta_l^{(n)}$ in the multiplier update layer, where $l \in [1, 2, \cdots, L]$ and $n \in [1, 2, \cdots, N_s]$ are the indexes for the filters and stages respectively. All of these parameters are taken as the network parameters to be learned.
Figure 3 shows an example of a deep ADMM-Net with three stages. The under-sampled data in k-space flows over three stages in order from circled number 1 to number 12, followed by a final reconstruction layer with circled number 13, and generates a reconstructed image. The intermediate reconstruction result at each stage is shown under each reconstruction layer.
Figure 3: An example of deep ADMM-Net with three stages. The sampled data in k-space is successively processed by operations in order from 1 to 12, followed by a reconstruction layer $X^{(4)}$ to output the final reconstructed image. The reconstructed image in each stage is shown under each reconstruction layer.
3 Network Training
We take the reconstructed MR image using fully sampled data in k-space as the ground-truth MR image $x^{gt}$, and under-sampled data $y$ in k-space as the input. Then a training set $\Gamma$ is constructed containing pairs of under-sampled data and ground-truth MR image. We choose normalized mean square error (NMSE) as the loss function in network training. Given pairs of training data, the loss between the network output and ground truth is defined as:
$$E(\Theta) = \frac{1}{|\Gamma|} \sum_{(y, x^{gt}) \in \Gamma} \sqrt{\frac{\|\hat{x}(y, \Theta) - x^{gt}\|_2^2}{\|x^{gt}\|_2^2}}, \quad (11)$$
where $\hat{x}(y, \Theta)$ is the network output based on network parameters $\Theta$ and under-sampled data $y$ in k-space. We learn the parameters $\Theta = \{(q_{l,i}^{(n)})_{i=1}^{N_c}, D_l^{(n)}, H_l^{(n)}, \rho_l^{(n)}, \eta_l^{(n)}\}_{n=1}^{N_s} \cup \{H_l^{(N_s+1)}, \rho_l^{(N_s+1)}\}$ ($l = 1, \cdots, L$) by minimizing the loss w.r.t. them using L-BFGS¹. In the following, we first discuss the initialization of these parameters and then compute the gradients of the loss function $E(\Theta)$ w.r.t. the parameters $\Theta$ using backpropagation (BP) [21] over the data flow graph.
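A minimal training driver in this spirit, using SciPy's L-BFGS and placeholder `forward`/`backward` routines standing in for the stage-wise computations (our sketch, not the authors' code):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

def train(theta0, data_pairs, forward, backward):
    """Fit the network parameters by minimizing the NMSE loss (11).

    `forward(y, theta)` returns the reconstruction; `backward(y, x_gt, theta)`
    returns the gradient of that sample's NMSE loss via BP over the data
    flow graph. Both are hypothetical hooks for this illustration.
    """
    def loss_and_grad(theta):
        losses, grads = [], []
        for y, x_gt in data_pairs:
            x_hat = forward(y, theta)
            losses.append(np.linalg.norm(x_hat - x_gt) / np.linalg.norm(x_gt))
            grads.append(backward(y, x_gt, theta))
        return np.mean(losses), np.mean(grads, axis=0)

    theta_opt, final_loss, info = fmin_l_bfgs_b(loss_and_grad, theta0)
    return theta_opt
```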
3.1 Initialization
We initialize the network parameters ? according to the ADMM solver of the following baseline
CS-MRI model:
}
{
L
?
1
2
||Dl x||1 .
(12)
arg min
?Ax ? y?2 + ?
2
x
l=1
In this model, we set D_l as a DCT basis and impose ℓ₁-norm regularization in the DCT transform space. The function S(·) in the ADMM algorithm (Eqn. (5)) is a soft thresholding function:
S(t; λ/ρ_l) = sgn(t)(|t| − λ/ρ_l) when |t| > λ/ρ_l, and 0 otherwise. For each n-th stage of deep
ADMM-Net, the filters D_l^(n) in convolution layers and H_l^(n) in reconstruction layers are initialized to be
D_l in Eqn. (12). In the nonlinear transform layer, we uniformly choose 101 positions located within
[-1,1], and each value q_{l,i}^(n) is initialized as S(p_i; λ/ρ_l^(n)). Parameters λ, ρ_l^(n), η_l^(n) are initialized to
be the corresponding values in the ADMM algorithm. In this case, the initialized net is exactly a
realization of ADMM optimizing Eqn. (12), and therefore outputs the same reconstructed image as the
ADMM algorithm. The optimization of the network parameters is expected to produce improved
reconstruction result.
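A minimal sketch of the soft-thresholding initialization described above (tau = λ/ρ_l is a hypothetical value):

import numpy as np

def soft_threshold(t, tau):
    # S(t; tau) = sgn(t) * max(|t| - tau, 0), the shrinkage used for initialization.
    return np.sign(t) * np.maximum(np.abs(t) - tau, 0.0)

positions = np.linspace(-1.0, 1.0, 101)
q_init = soft_threshold(positions, tau=0.1)   # initial control-point values q_{l,i}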
3.2
Gradient Computation by Backpropagation over Data Flow Graph
It is challenging to compute the gradients of loss w.r.t. parameters using backpropagation over the
deep architecture in Fig. 1, because it is a directed graph. In the forward pass, we process the data
of the n-th stage in the order of X^(n), C^(n), Z^(n) and M^(n). In the backward pass, the gradients are
computed in an inverse order. Figure 3 shows an example, where the gradients can be computed
backwardly from the layers with circled number 13 to 1 successively. For a stage n, Fig. 4 shows
four types of nodes (i.e., network layers) and the data flow over them. Each node has multiple inputs
and (or) outputs. We next briefly introduce the gradient computation for each layer in a typical
stage n (n < N_s). Please refer to the supplementary material for details.
¹ http://users.eecs.northwestern.edu/~nocedal/lbfgsb.html
Figure 4: Illustration of four types of graph nodes (i.e., layers in the network) and their data flows in
stage n. The solid arrow indicates the data flow in the forward pass and the dashed arrow indicates the
backward pass when computing gradients in backpropagation.
Multiplier update layer (M^(n)): As shown in Fig. 4(a), this layer has three sets of inputs:
{β_l^(n−1)}, {c_l^(n)} and {z_l^(n)}. Its output {β_l^(n)} is the input to compute {β_l^(n+1)}, {z_l^(n+1)} and
x^(n+1). The parameters of this layer are η_l^(n), l = 1, · · · , L. The gradients of loss w.r.t. the parameters can be computed as:
∂E/∂η_l^(n) = (∂E/∂β_l^(n)) (∂β_l^(n)/∂η_l^(n)),
where
∂E/∂β_l^(n) = (∂E/∂β_l^(n+1)) (∂β_l^(n+1)/∂β_l^(n)) + (∂E/∂z_l^(n+1)) (∂z_l^(n+1)/∂β_l^(n)) + (∂E/∂x^(n+1)) (∂x^(n+1)/∂β_l^(n)).
∂E/∂β_l^(n) is the summation of gradients along the three dashed blue arrows in Fig. 4(a). We also compute
the gradients of the output of this layer w.r.t. its inputs: ∂β_l^(n)/∂β_l^(n−1), ∂β_l^(n)/∂c_l^(n), and ∂β_l^(n)/∂z_l^(n).
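The accumulation above is just the multivariate chain rule over the data flow graph; a simplified sketch (an assumed representation in which upstream gradients and local Jacobians are compatible NumPy arrays):

def grad_wrt_beta(dE_dbeta_next, dbeta_next_dbeta,
                  dE_dz_next, dz_next_dbeta,
                  dE_dx_next, dx_next_dbeta):
    # Accumulate dE/dbeta_l^(n) by summing the chain-rule contributions from
    # the three downstream consumers of beta_l^(n) (the dashed arrows in Fig. 4(a)).
    return (dE_dbeta_next @ dbeta_next_dbeta
            + dE_dz_next @ dz_next_dbeta
            + dE_dx_next @ dx_next_dbeta)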
Nonlinear transform layer (Z^(n)): As shown in Fig. 4(b), this layer has two sets of inputs:
{β_l^(n−1)} and {c_l^(n)}, and its output {z_l^(n)} is the input for computing {β_l^(n)} and x^(n+1) in the next stage.
The parameters of this layer are {q_{l,i}^(n)}_{i=1}^{N_c}, l = 1, · · · , L. The gradient of loss w.r.t. the parameters can
be computed as
∂E/∂q_{l,i}^(n) = (∂E/∂z_l^(n)) (∂z_l^(n)/∂q_{l,i}^(n)),
where
∂E/∂z_l^(n) = (∂E/∂β_l^(n)) (∂β_l^(n)/∂z_l^(n)) + (∂E/∂x^(n+1)) (∂x^(n+1)/∂z_l^(n)).
We also compute the gradients of the layer output w.r.t. its inputs: ∂z_l^(n)/∂β_l^(n−1) and ∂z_l^(n)/∂c_l^(n).
Convolution layer (C^(n)): The parameters of this layer are D_l^(n) (l = 1, · · · , L). We represent
the filter by D_l^(n) = Σ_{m=1}^{N_t} γ_{l,m}^(n) B_m, where B_m is a basis element, and {γ_{l,m}^(n)} is the set of filter
coefficients to be learned. The gradients of loss w.r.t. the filter coefficients are computed as
∂E/∂γ_{l,m}^(n) = (∂E/∂c_l^(n)) (∂c_l^(n)/∂γ_{l,m}^(n)),
where
∂E/∂c_l^(n) = (∂E/∂β_l^(n)) (∂β_l^(n)/∂c_l^(n)) + (∂E/∂z_l^(n)) (∂z_l^(n)/∂c_l^(n)).
The gradient of the layer output w.r.t. its input is computed as ∂c_l^(n)/∂x^(n).
Reconstruction layer (X^(n)): The parameters of this layer are H_l^(n), ρ_l^(n) (l = 1, · · · , L). Similar
to the convolution layer, we represent the filter by H_l^(n) = Σ_m ξ_{l,m}^(n) B_m, where {ξ_{l,m}^(n)} is the set of
filter coefficients to be learned. The gradients of loss w.r.t. the parameters are computed as
∂E/∂ξ_{l,m}^(n) = (∂E/∂x^(n)) (∂x^(n)/∂ξ_{l,m}^(n)),    ∂E/∂ρ_l^(n) = (∂E/∂x^(n)) (∂x^(n)/∂ρ_l^(n)),
where
∂E/∂x^(n) = (∂E/∂c_l^(n)) (∂c_l^(n)/∂x^(n)), if n ≤ N_s,
∂E/∂x^(n) = (1/|Γ|) (x^(n) − x^gt) / ( √(‖x^gt‖₂²) √(‖x^(n) − x^gt‖₂²) ), if n = N_s + 1.
The gradients of the layer output w.r.t. its inputs are computed as ∂x^(n)/∂β_l^(n−1) and ∂x^(n)/∂z_l^(n−1).
4
Experiments
We train and test ADMM-Net on brain and chest MR images2 . For each dataset, we randomly
take 100 images for training and 50 images for testing. ADMM-Net is separately learned for each
sampling ratio. The reconstruction accuracies are reported as the average NMSE and Peak Signal-to-Noise Ratio (PSNR) over the test images. The sampling pattern in k-space is the commonly used
pseudo radial sampling. All experiments are performed on a desktop with Intel core i7-4790k CPU.
Table 1: Performance comparisons on brain data with different sampling ratios.
Method         |     20%        |     30%        |     40%        |     50%        | Test time
               | NMSE    PSNR   | NMSE    PSNR   | NMSE    PSNR   | NMSE    PSNR   |
Zero-filling   | 0.1700  29.96  | 0.1247  32.59  | 0.0968  34.76  | 0.0770  36.73  | 0.0013s
TV [2]         | 0.0929  35.20  | 0.0673  37.99  | 0.0534  40.00  | 0.0440  41.69  | 0.7391s
RecPF [4]      | 0.0917  35.32  | 0.0668  38.06  | 0.0533  40.03  | 0.0440  41.71  | 0.3105s
SIDWT          | 0.0885  35.66  | 0.0620  38.72  | 0.0484  40.88  | 0.0393  42.67  | 7.8637s
PBDW [6]       | 0.0814  36.34  | 0.0627  38.64  | 0.0518  40.31  | 0.0437  41.81  | 35.3637s
PANO [10]      | 0.0800  36.52  | 0.0592  39.13  | 0.0477  41.01  | 0.0390  42.76  | 53.4776s
FDLCP [8]      | 0.0759  36.95  | 0.0592  39.13  | 0.0500  40.62  | 0.0428  42.00  | 52.2220s
BM3D-MRI [11]  | 0.0674  37.98  | 0.0515  40.33  | 0.0426  41.99  | 0.0359  43.47  | 40.9114s
Init-Net13     | 0.1394  31.58  | 0.1225  32.71  | 0.1128  33.44  | 0.1066  33.95  | 0.6914s
ADMM-Net13     | 0.0752  37.01  | 0.0553  39.70  | 0.0456  41.37  | 0.0395  42.62  | 0.6964s
ADMM-Net14     | 0.0742  37.13  | 0.0548  39.78  | 0.0448  41.54  | 0.0380  42.99  | 0.7400s
ADMM-Net15     | 0.0739  37.17  | 0.0544  39.84  | 0.0447  41.56  | 0.0379  43.00  | 0.7911s
In Tab. 1, we compare our method to conventional compressive sensing MRI methods on brain data.
These methods include Zero-filling [22], TV [2], RecPF [4], SIDWT 3 , and also the state-of-the-art
methods such as PBDW [6], PANO [10], FDLCP [8] and BM3D-MRI [11]. For ADMM-Net, we
initialize the filters in each stage to be eight 3 × 3 DCT bases (the average DCT basis is discarded).
Compared with the baseline methods such as Zero-filling, TV, RecPF and SIDWT, our proposed
method produces the best quality with comparable reconstruction speed. Compared with the state-of-the-art methods PBDW, PANO and FDLCP, our ADMM-Net produces more accurate reconstruction
results with the fastest computational speed. For the sampling ratio of 30%, our method (ADMM-Net15) outperforms the state-of-the-art methods PANO and FDLCP by 0.71 dB. Moreover, our
reconstruction speed is around 66 times faster. The BM3D-MRI method relies on a well-designed BM3D
denoiser; it produces higher accuracy, but runs around 50 times slower in computational time than
ours. The visual comparisons in Fig. 5 show that the proposed network can preserve the fine image
details without obvious artifacts. In Fig. 6(a), we compare the NMSEs and the average test time for
different methods using a scatter plot. It is easy to observe that our method is the best considering
filters are shown in Fig. 7.
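For reference, the PSNR metric reported here can be computed as below (a standard definition; the peak value is an assumption about the image scaling, not stated in the table):

import numpy as np

def psnr(x_hat, x_gt, peak=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, peak].
    mse = np.mean((x_hat - x_gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)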
Table 2: Comparisons of NMSE and PSNR on chest data with 20% sampling ratio.
Method | TV     | RecPF  | PANO   | FDLCP  | ADMM-Net15-B | ADMM-Net15 | ADMM-Net17
NMSE   | 0.1019 | 0.1017 | 0.0858 | 0.0775 | 0.0790       | 0.0775     | 0.0768
PSNR   | 35.49  | 35.51  | 37.01  | 37.77  | 37.68        | 37.84      | 37.92
Network generalization ability: We test the generalization ability of ADMM-Net by applying the
learned net from brain data to chest data. Table 2 shows that our net learned from brain data (ADMM-Net15-B) still achieves competitive reconstruction accuracy on chest data, demonstrating a remarkable
generalization ability. This might be because the learned filters and nonlinear transforms are
performed over local patches, which are repetitive across different organs. Moreover, ADMM-Net17 learned from chest data achieves even better reconstruction accuracy on the test chest data.
Effectiveness of network training: In Tab. 1, we also present the results of the initialized network
for ADMM-Net13 . As discussed in Section 3.1, this initialized network (Init-Net13 ) is a realization
² CAF Project: https://masi.vuse.vanderbilt.edu/workshop2013/index.php/Segmentation_Challenge_Details
³ Rice Wavelet Toolbox: http://dsp.rice.edu/software/rice-wavelet-toolbox
Figure 5: Examples of reconstruction results with 20% (the first row) and 30% (the second row)
sampling ratios. The left four columns show results of ADMM-Net15, RecPF, PANO, BM3D-MRI; the rightmost column shows the ground-truth image.
Figure 6: (a) Scatter plot of NMSEs and average test time for different methods; (b) The NMSEs of
ADMM-Net using different number of stages (20% sampling ratio for brain data).
Figure 7: Examples of learned filters in convolution layer and the corresponding nonlinear transforms (the first stage of ADMM-Net15 with 20% sampling ratio for brain data).
of the ADMM optimizing Eqn. (12). The network after training produces significantly improved
accuracy, e.g., PSNR is increased from 32.71 dB to 39.84 dB with a sampling ratio of 30%.
Effect of the number of stages: To test the effect of the number of stages (i.e., N_s), we greedily train
a deeper network by adding one stage at a time. Fig. 6(b) shows the average testing NMSE values
using different numbers of stages in ADMM-Net under the sampling ratio of 20%. The reconstruction error
decreases fast when N_s < 8 and marginally decreases when further increasing the number of stages.
Effect of the filter sizes: We also train ADMM-Net initialized by two gradient filters with sizes of 1 × 3
and 3 × 1, respectively, for all convolution and reconstruction layers. The corresponding trained net
with 13 stages under a 20% sampling ratio achieves an NMSE value of 0.0899 and a PSNR value of 36.52
dB on brain data, compared with 0.0752 and 37.01 dB using eight 3 × 3 filters as shown in Tab. 1.
We also learn ADMM-Net13 with 8 filters sized 5 × 5 initialized by a DCT basis; the performance is
not significantly improved, but the training and testing time are significantly longer.
5
Conclusions
We proposed a novel deep network for compressive sensing MRI, a deep architecture defined over a data flow graph determined by an ADMM algorithm. Due to its flexibility in parameter
learning, this deep net achieves high reconstruction accuracy while keeping the computational efficiency of the ADMM algorithm. As a general framework, the idea of modeling an ADMM algorithm
as a deep network can potentially be applied to other applications in future work.
References
[1] Michael Lustig, David L Donoho, Juan M Santos, and John M Pauly. Compressed sensing mri. IEEE
Journal of Signal Processing, 25(2):72?82, 2008.
[2] Michael Lustig, David Donoho, and John M Pauly. Sparse mri: The application of compressed sensing
for rapid mr imaging. Magnetic Resonance in Medicine, 58(6):1182?1195, 2007.
[3] Kai Tobias Block, Martin Uecker, and Jens Frahm. Undersampled radial mri with multiple coils: Iterative
image reconstruction using a total variation constraint. Magnetic Resonance in Medicine, 57(6):1086?
1098, 2007.
[4] Junfeng Yang, Yin Zhang, and Wotao Yin. A fast alternating direction method for tvl1-l2 signal reconstruction from partial fourier data. IEEE Journal of Selected Topics in Signal Processing, 4(2):288?297,
2010.
[5] Chen Chen and Junzhou Huang. Compressive sensing mri with wavelet tree sparsity. In Advances in
Neural Information Processing Systems, pages 1115?1123, 2012.
[6] Xiaobo Qu, Di Guo, Bende Ning, and et al. Undersampled mri reconstruction with patch-based directional
wavelets. Magnetic resonance imaging, 30(7):964?977, 2012.
[7] Saiprasad Ravishankar and Yoram Bresler. Mr image reconstruction from highly undersampled k-space
data by dictionary learning. IEEE Transactions on Medical Imaging, 30(5):1028?1041, 2011.
[8] Zhifang Zhan, Jian-Feng Cai, Di Guo, Yunsong Liu, Zhong Chen, and Xiaobo Qu. Fast multi-class
dictionaries learning with geometrical directions in mri reconstruction. IEEE Transactions on Biomedical
Engineering, 2016.
[9] Sheng Fang, Kui Ying, Li Zhao, and Jianping Cheng. Coherence regularization for sense reconstruction
with a nonlocal operator (cornol). Magnetic Resonance in Medicine, 64(5):1413?1425, 2010.
[10] Xiaobo Qu, Yingkun Hou, Fan Lam, Di Guo, Jianhui Zhong, and Zhong Chen. Magnetic resonance image
reconstruction from undersampled measurements using a patch-based nonlocal operator. Medical Image
Analysis, 18(6):843?856, 2014.
[11] Ender M Eksioglu. Decoupled algorithm for mri reconstruction using nonlocal block matching model:
Bm3d-mri. Journal of Mathematical Imaging and Vision, pages 1?11, 2016.
[12] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and
statistical learning via the alternating direction method of multipliers. Foundation and Trends in Machine
Learning, 3(1):1?122, 2011.
[13] Huahua Wang, Arindam Banerjee, and Zhi-Quan Luo. Parallel direction method of multipliers. In Advances in Neural Information Processing Systems, pages 181?189, 2014.
[14] Krishna M Kavi, Bill P Buckles, and U Narayan Bhat. A formal definition of data flow graph models.
IEEE Transactions on Computers, 100(11):940?948, 1986.
[15] Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[16] Karol Gregor and Yann LeCun. Learning fast approximations of sparse coding. In Proceedings of the
27th International Conference on Machine Learning, pages 399?406, 2010.
[17] Uwe Schmidt and Stefan Roth. Shrinkage fields for effective image restoration. In Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, pages 2774?2781, 2014.
[18] Sun Jian and Xu Zongben. Color image denoising via discriminatively learned iterative shrinkage. IEEE
Transactions on Image Processing, 24(11):4148?4159, 2015.
[19] John R Hershey, Jonathan Le Roux, and Felix Weninger. Deep unfolding: Model-based inspiration of
novel deep architectures. arXiv preprint arXiv:1409.2574, 2014.
[20] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsityinducing penalties. Foundations and Trends in Machine Learning, 4(1):1?106, 2012.
[21] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by backpropagating errors. Cognitive modeling, 5(3):1, 1988.
[22] Matt A Bernstein, Sean B Fain, and Stephen J Riederer. Effect of windowing and zero-filled reconstruction
of mri data on spatial resolution and acquisition strategy. Magnetic Resonance Imaging, 14(3):270?280,
2001.
| 6406 |@word briefly:1 mri:42 norm:4 solid:1 liu:1 ours:1 document:1 outperforms:1 current:1 z2:1 luo:1 yet:1 scatter:2 chu:1 john:3 hou:1 dct:6 ronald:1 designed:1 plot:2 update:9 selected:1 desktop:1 core:1 node:7 successive:1 zhang:1 mathematical:1 along:1 constructed:1 consists:2 overhead:1 introduce:1 expected:1 rapid:1 p1:3 multi:1 brain:8 bm3d:6 inspired:2 zhi:1 cpu:1 solver:2 considering:1 increasing:1 spain:1 project:1 moreover:3 recpf:5 santos:1 degrading:1 compressive:8 dubbed:2 guarantee:1 pseudo:1 yunsong:1 exactly:1 zl:41 control:2 medical:2 felix:1 influencing:1 local:6 engineering:1 might:1 initialization:2 challenging:3 fastest:1 directed:3 lecun:1 testing:4 block:2 lf:3 backpropagation:5 procedure:6 yan:1 significantly:7 lbfgsb:1 matching:1 boyd:1 radial:2 get:1 operator:2 applying:1 optimize:2 equivalent:1 map:3 lagrangian:4 conventional:1 bill:1 roth:1 williams:1 resolution:1 simplicity:1 splitting:1 roux:1 jianhui:1 fang:1 variation:2 user:1 xjtu:4 sparsityinducing:1 us:2 designing:1 pa:1 element:1 trend:2 recognition:2 rumelhart:1 utilized:1 located:2 updating:1 preprint:1 solved:1 wang:1 sun:2 decrease:2 tobias:1 trained:2 solving:3 efficiency:1 eric:1 basis:5 joint:1 represented:3 train:5 fast:13 effective:4 widely:1 supplementary:1 kai:1 reconstruct:2 otherwise:1 compressed:2 ability:3 transform:22 final:2 net:30 cai:1 reconstruction:46 propose:3 lam:1 junfeng:1 realization:2 flexibility:1 images2:1 convergence:1 vanderbilt:1 produce:5 karol:1 narayan:1 pnc:1 auxiliary:1 c:17 direction:6 ning:1 filter:20 sgn:1 material:1 generalization:3 summation:1 junzhou:1 around:2 ground:5 achieves:7 dictionary:7 organ:1 repetition:1 stefan:1 unfolding:1 aim:4 shelf:1 shrinkage:8 zhong:3 derived:2 ax:5 l0:1 dsp:1 indicates:2 greedily:1 baseline:4 sense:1 arg:5 uwe:1 flexible:1 html:1 stateof:1 resonance:8 art:4 spatial:1 initialize:2 field:1 sampling:18 filling:3 future:1 yoshua:1 piecewise:4 few:1 randomly:1 preserve:2 stu:1 phase:2 highly:1 rodolphe:1 introduces:1 predefined:1 accurate:3 edge:2 partial:1 decoupled:1 tree:1 filled:1 initialized:9 increased:1 column:1 soft:3 modeling:1 restoration:1 introducing:1 reported:1 eec:1 fundamental:1 peak:1 international:1 filed:1 off:1 michael:2 nm:3 successively:3 containing:1 choose:4 reconstructs:1 huang:1 juan:1 cognitive:1 zhao:1 li:2 bfgs:2 summarized:1 coding:2 coefficient:3 fain:1 performed:2 tab:3 francis:1 competitive:1 parallel:1 contribution:1 square:1 php:1 accuracy:15 efficiently:2 directional:1 xiaobo:3 weninger:1 marginally:1 definition:1 acquisition:3 invasive:1 pano:6 obvious:1 naturally:1 di:3 jianping:1 sampled:14 dataset:1 knowledge:1 color:1 improves:1 psnr:16 organized:1 ender:1 sean:1 jenatton:1 masi:1 higher:3 hershey:1 improved:3 stage:32 biomedical:1 sheng:1 eqn:12 nonlinear:11 banerjee:1 quality:5 artifact:3 effect:4 matt:1 staircase:1 multiplier:13 normalized:1 regularization:13 inspiration:1 reformulating:1 alternating:4 satisfactory:1 neal:1 ll:1 please:1 backpropagating:1 illustrative:1 cosine:1 generalized:1 demonstrate:2 performs:2 l1:2 geometrical:1 image:44 novel:6 parikh:1 arindam:1 functional:1 discussed:1 measurement:2 refer:1 longer:1 etc:3 gt:1 patrick:1 optimizing:4 optimizes:1 jens:1 jiaotong:4 pauly:2 krishna:1 impose:1 mr:13 xgt:6 determine:2 dashed:2 stephen:2 signal:3 multiple:4 windowing:1 borja:1 faster:2 clinical:1 bach:1 basic:1 essentially:1 vision:2 arxiv:2 iteration:6 represent:2 repetitive:1 achieved:2 justified:1 separately:1 fine:1 bhat:1 jian:3 db:5 quan:1 flow:27 
effectiveness:1 chest:6 yang:2 leverage:1 bernstein:1 split:1 easy:2 bengio:1 architecture:11 reduce:1 idea:1 cn:6 haffner:1 i7:1 accelerating:2 nyquist:1 penalty:5 suffer:1 deep:29 generally:2 useful:1 tune:1 transforms:5 processed:2 http:3 estimated:1 anatomical:1 diagnosis:1 blue:1 discrete:2 group:1 key:1 four:7 lustig:2 backward:2 nocedal:1 imaging:11 graph:29 run:3 inverse:1 yann:2 patch:6 coherence:1 comparable:2 zhan:1 layer:52 followed:2 cheng:1 fan:1 constraint:1 constrain:1 bp:1 software:1 generates:4 fourier:4 speed:10 min:6 leon:1 martin:1 tv:5 according:2 across:1 reconstructing:2 qu:3 cun:1 hl:13 taken:1 discus:2 end:2 generalizes:1 operation:15 eight:2 observe:1 magnetic:8 dwt:1 schmidt:1 slower:1 substitute:1 original:1 denotes:2 running:1 include:1 medicine:3 yoram:1 build:1 gregor:1 feng:1 strategy:1 gradient:19 subspace:2 mapped:1 capacity:1 topic:1 mail:3 considers:1 trivial:2 denoiser:1 index:2 illustration:2 ratio:13 providing:1 minimizing:1 ying:1 nc:5 ql:11 potentially:1 subproblems:3 design:3 allowing:1 wotao:1 convolution:12 markov:1 discarded:1 immediate:1 hinton:1 rn:2 peleato:1 david:3 pair:3 eckstein:1 toolbox:2 optimized:2 z1:1 huahua:1 learned:13 barcelona:1 subgroup:1 nip:1 beyond:1 usually:1 pattern:2 sparsity:1 challenge:1 difficulty:1 rely:1 undersampled:5 improve:3 julien:1 prior:3 circled:3 l2:1 fully:2 loss:9 discriminatively:4 bresler:1 northwestern:1 filtering:1 proven:1 geoffrey:1 remarkable:1 foundation:2 thresholding:3 pi:5 row:2 keeping:1 tvl1:1 formal:1 deeper:1 sparse:9 distributed:1 overcome:1 forward:2 commonly:3 bm:3 transaction:4 nonlocal:3 reconstructed:14 approximate:1 keep:1 mairal:1 xi:4 discriminative:1 alternatively:2 iterative:6 demanded:1 table:3 learn:6 init:2 kui:1 bottou:1 cl:18 domain:6 sp:3 pk:2 main:1 arrow:3 whole:1 noise:1 xu:2 augmented:2 fig:11 nmse:19 intel:1 frahm:1 slow:1 n:11 precision:1 position:3 comprises:1 lq:1 wavelet:6 dozen:1 specific:1 sensing:10 explored:1 learnable:4 dl:28 adding:1 effectively:1 chen:4 signalto:1 yin:2 simply:1 visual:1 corresponds:2 truth:5 relies:1 rice:3 coil:1 obozinski:1 ravishankar:1 sized:1 donoho:2 admm:72 hard:2 specifically:1 determined:4 uniformly:2 typical:1 denoising:1 total:2 pas:4 guillaume:1 guo:3 jonathan:2 |
5,977 | 6,407 | Homotopy Smoothing for Non-Smooth Problems
with Lower Complexity than O(1/ε)
Yi Xu*, Yan Yan*, Qihang Lin, Tianbao Yang
Department of Computer Science, University of Iowa, Iowa City, IA 52242
QCIS, University of Technology Sydney, NSW 2007, Australia
Department of Management Sciences, University of Iowa, Iowa City, IA 52242
{yi-xu, qihang-lin, tianbao-yang}@uiowa.edu, [email protected]
Abstract
In this paper, we develop a novel homotopy smoothing (HOPS) algorithm for
solving a family of non-smooth problems that is composed of a non-smooth
term with an explicit max-structure and a smooth term or a simple non-smooth
term whose proximal mapping is easy to compute. The best known iteration
complexity for solving such non-smooth optimization problems is O(1/ε) without
any assumption on the strong convexity. In this work, we will show that the
proposed HOPS achieves a lower iteration complexity of Õ(1/ε^(1−θ))¹ with θ ∈
(0, 1] capturing the local sharpness of the objective function around the optimal
solutions. To the best of our knowledge, this is the lowest iteration complexity
achieved so far for the considered non-smooth optimization problems without
strong convexity assumption. The HOPS algorithm employs Nesterov?s smoothing
technique and Nesterov?s accelerated gradient method and runs in stages, which
gradually decreases the smoothing parameter in a stage-wise manner until it yields
a sufficiently good approximation of the original function. We show that HOPS
enjoys a linear convergence for many well-known non-smooth problems (e.g.,
empirical risk minimization with a piece-wise linear loss function and ℓ₁ norm
regularizer, finding a point in a polyhedron, cone programming, etc). Experimental
results verify the effectiveness of HOPS in comparison with Nesterov?s smoothing
algorithm and the primal-dual style of first-order methods.
1
Introduction
In this paper, we consider the following optimization problem:
min_{x∈Ω₁} F(x) ≜ f(x) + g(x)    (1)
where g(x) is a convex (but not necessarily smooth) function, Ω₁ is a closed convex set and f(x) is a
convex but non-smooth function which can be explicitly written as
f(x) = max_{u∈Ω₂} ⟨Ax, u⟩ − φ(u)    (2)
where Ω₂ ⊆ R^m is a closed convex bounded set, A ∈ R^{m×d}, φ(u) is a convex function, and ⟨·, ·⟩
is the scalar product. This family of non-smooth optimization problems has applications in numerous
domains, e.g., machine learning and statistics [7], image processing [6], cone programming [11],
and etc. Several first-order methods have been developed for solving such non-smooth optimization
∗ The first two authors make equal contributions. The work of Y. Yan was done when he was a visiting student
at the Department of Computer Science of the University of Iowa.
¹ Õ(·) suppresses a logarithmic factor.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
problems including the primal-dual methods [15, 6] and Nesterov's smoothing algorithm [16]², and they
can achieve O(1/ε) iteration complexity for finding an ε-optimal solution, which is faster than the
corresponding black-box lower complexity bounds by an order of magnitude.
In this paper, we propose a novel homotopy smoothing (HOPS) algorithm for solving the problem
in (1) that achieves a lower iteration complexity than O(1/ε). In particular, the iteration complexity
of HOPS is given by Õ(1/ε^(1−θ)), where θ ∈ (0, 1] captures the local sharpness (defined shortly) of
the objective function around the optimal solutions. The proposed HOPS algorithm builds on the
Nesterov?s smoothing technique, i.e., approximating the non-smooth function f (x) by a smooth
function and optimizing the smoothed function to a desired accuracy level.
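To make the max-structure (2) concrete, here is a small sketch (not from the paper) showing that the hinge empirical loss fits it, since max(0, s) = max_{u∈[0,1]} u·s; the names feats and y are illustrative:

import numpy as np

def hinge_as_max(x, feats, y):
    # f(x) = (1/n) * sum_i max(0, 1 - y_i <a_i, x>)
    #      = max_{u in [0,1]^n} <Ax, u> - phi(u)
    # with A = -(1/n) diag(y) @ feats and phi(u) = -(1/n) sum_i u_i (linear, convex).
    margins = 1.0 - y * (feats @ x)        # 1 - y_i <a_i, x>
    u_star = (margins > 0).astype(float)   # per-coordinate maximizer over [0, 1]
    return np.mean(np.maximum(margins, 0.0)), u_star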
The striking difference between HOPS and Nesterov?s smoothing algorithm is that Nesterov uses
a fixed small smoothing parameter that renders a sufficiently accurate approximation of the nonsmooth function f (x), while HOPS adopts a homotopy strategy for setting the value of the smoothing
parameter. It starts from a relatively large smoothing parameter and gradually decreases the smoothing
parameter in a stage-wise manner until the smoothing parameter reaches a level that gives a sufficiently
good approximation of the non-smooth objective function. The benefit of using a homotopy strategy
is that a larger smoothing parameter yields a smaller smoothness constant and hence a lower iteration
complexity for smoothed problems in earlier stages. For smoothed problems in later stages with
larger smoothness constants, warm-start can help reduce the number of iterations to converge. As
a result, solving a series of smoothed approximations with a smoothing parameter from large to
small and with warm-start is faster than solving one smoothed approximation with a very small
smoothing parameter. To the best of our knowledge, this is the first work that rigorously analyzes
such a homotopy smoothing algorithm and establishes its theoretical guarantee on lower iteration
complexities. The keys to our analysis of lower iteration complexity are (i) to leverage a global error
inequality (Lemma 1) [21] that bounds the distance of a solution to the sublevel set by a multiple
of the functional distance; and (ii) to explore a local error bound condition to bound the multiplicative
factor.
2
Related Work
In this section, we review some related work for solving the considered family of non-smooth
optimization problems.
In the seminal paper by Nesterov [16], he proposed a smoothing technique for a family of structured
non-smooth optimization problems as in (1) with g(x) being a smooth function and f (x) given in (2).
By adding a strongly convex prox function in terms of u with a smoothing parameter ? into the
definition of f (x), one can obtain a smoothed approximation of the original objective function. Then
he developed an accelerated gradient method with an O(1/t2 ) convergence rate for the smoothed
objective function with t being the number of iterations, which implies an O(1/t) convergence rate for
the original objective function by setting ? ? c/t with c being a constant. The smoothing technique
has been exploited to solving problems in machine learning, statistics, cone programming [7, 11, 24].
The primal-dual style of first-order methods treat the problem as a convex-concave minimization
problem, i.e.,
min max g(x) + hAx, ui ? ?(u)
x??1 u??2
Nemirovski [15] proposed a mirror prox method, which has a convergence rate of O(1/t) by assuming
that both g(x) and ?(u) are smooth functions. Chambolle & Pock [6] designed first-order primal-dual
algorithms, which tackle g(x) and ?(u) using proximal mapping and achieve the same convergence
rate of O(1/t) without assuming smoothness of g(x) and ?(u). When g(x) or ?(u) is strongly
convex, their algorithms achieve O(1/t2 ) convergence rate. The effectiveness of their algorithms
was demonstrated on imaging problems. Recently, the primal-dual style of first-order methods have
been employed to solve non-smooth optimization problems in machine learning where both the loss
function and the regularizer are non-smooth [22]. Lan et al. [11] also considered Nemirovski?s prox
method for solving cone programming problems.
The key condition for us to develop an improved convergence is closely related to local error bounds
(LEB) [17] and more generally the Kurdyka-Łojasiewicz property [12, 4]. The LEB characterizes
² The algorithm in [16] was developed for handling a smooth component g(x), which can be extended to
handling a non-smooth component g(x) whose proximal mapping is easy to compute.
the relationship between the distance of a local solution to the optimal set and the optimality gap
of the solution in terms of objective value. The Kurdyka-Łojasiewicz property characterizes
whether a function can be made "sharp" by some transformation. Recently,
these conditions/properties have been explored for feasible descent methods [13], non-smooth
optimization [8], gradient and subgradient methods [10, 21]. It is notable that our local error bound
condition is different from the one used in [13, 25] which bounds the distance of a point to the optimal
set by the norm of the projected or proximal gradient at that point instead of the functional distance,
consequentially, it requires some smoothness assumption on the objective function. By contrast,
the local error bound condition in this paper covers a much broader family of functions and thus is
more general. Recent work [14, 23] has shown that the error bound in [13, 25] is a special case of
our considered error bound with θ = 1/2. The two most related works leveraging a similar error bound
to ours are discussed in order. Gilpin et al. [8] considered the two-person zero-sum games, which is a
special case of (1) with g(x) and ?(u) being zeros and ?1 and ?2 being polytopes. The present work
is a non-trivial generalization of their work that leads to improved convergence for a much broader
family of non-smooth optimization problems. In particular, their result is just a special case of our
result when the constant θ that captures the local sharpness is one for problems whose epigraph is
a polytope. Recently, Yang & Lin [21] proposed a restarted subgradient method by exploring the
local error bound condition or more generally the Kurdyka-Łojasiewicz property, resulting in an
Õ(1/ε^(2(1−θ))) iteration complexity with the same constant θ. In contrast, our result is an improved
iteration complexity of Õ(1/ε^(1−θ)).
It is worth emphasizing that the proposed homotopy smoothing technique is different from recently
proposed homotopy methods for sparse learning (e.g., the ℓ₁ regularized least-squares problem [20]),
though a homotopy strategy on an involved parameter is also employed to boost the convergence. In
particular, the involved parameter in the homotopy methods for sparse learning is the regularization
parameter before the ℓ₁ regularization, while the parameter in the present work is the introduced
smoothing parameter. In addition, the benefit of starting from a relatively large regularization
parameter in sparse learning is the sparsity of the solution, which makes it possible to explore the
restricted strong convexity for proving faster convergence. We do not make such an assumption on
the data, and we are mostly interested in the case when both f(x) and g(x) are non-smooth. Finally,
we note that a similar homotopy (a.k.a. continuation) strategy is employed in Nesterov's smoothing
algorithm for solving an ℓ₁ norm minimization problem subject to a constraint for recovering a sparse
solution [3]. However, we would like to draw readers' attention to the fact that they did not provide any
theoretical guarantee on the iteration complexity of the homotopy strategy and consequentially their
implementation is ad-hoc without guidance from theory. More importantly, our developed algorithms
and theory apply to a much broader family of problems.
3
Preliminaries
We present some preliminaries in this section. Let ‖x‖ denote the Euclidean norm on the primal
variable x. A function h(x) is L-smooth in terms of ‖·‖ if ‖∇h(x) − ∇h(y)‖ ≤ L‖x − y‖. Let
‖u‖₊ denote a norm on the dual variable, which is not necessarily the Euclidean norm. Denote by
ω₊(u) a 1-strongly convex function of u in terms of ‖·‖₊.
For the optimization problem in (1), we let ?? , F? denote the set of optimal solutions and optimal
value, respectively, and make the following assumption throughout the paper.
Assumption 1. For a convex minimization problem (1), we assume (i) there exist x₀ ∈ Ω₁ and
ε₀ ≥ 0 such that F(x₀) − min_{x∈Ω₁} F(x) ≤ ε₀; (ii) f(x) is characterized as in (2), where φ(u) is
a convex function; (iii) there exists a constant D such that max_{u∈Ω₂} ω₊(u) ≤ D²/2; (iv) Ω∗ is a
non-empty convex compact set.
Note that: 1) Assumption 1(i) assumes that the objective function is lower bounded; 2) Assumption 1(iii) assumes that Ω₂ is a bounded set, which is also required in [16].
In addition, for brevity we assume that g(x) is simple enough³ such that the proximal mapping
defined below is easy to compute, similar to [6]:
z??1
3
such that the proximal mapping
1
kz ? xk2 + ?g(z)
2
3
(3)
³ If g(x) is smooth, this assumption can be relaxed. We will defer the discussion and result on a smooth function g(x) to the supplement.
Relying on the proximal mapping, the key updates in the optimization algorithms presented below
take the following form:
Π^c_{v,ηg}(x) = arg min_{z∈Ω₁} (c/2) ‖z − x‖² + ⟨v, z⟩ + ηg(z)    (4)
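A minimal sketch of the update (4), under the simplifying assumption Ω₁ = R^d so that it reduces, by completing the square, to the proximal mapping (3) at a shifted point:

import numpy as np

def pi_update(x, v, c, eta, prox_g):
    # argmin_z c/2 ||z - x||^2 + <v, z> + eta * g(z)
    #   = prox_{(eta/c) g}(x - v / c)   (assuming Omega_1 = R^d).
    return prox_g(x - v / c, eta / c)

# Example with g = l1 norm, whose proximal mapping is soft thresholding:
soft = lambda w, t: np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
z = pi_update(np.array([0.3, -1.2]), np.array([0.1, 0.1]), c=2.0, eta=0.5, prox_g=soft)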
For any x ∈ Ω₁, let x∗ denote the closest optimal solution in Ω∗ to x measured in terms of ‖·‖, i.e.,
x∗ = arg min_{z∈Ω∗} ‖z − x‖², which is unique because Ω∗ is a non-empty convex compact set. We
denote by L_ε the ε-level set of F(x) and by S_ε the ε-sublevel set of F(x), respectively, i.e.,
L_ε = {x ∈ Ω₁ : F(x) = F∗ + ε},    S_ε = {x ∈ Ω₁ : F(x) ≤ F∗ + ε}.
It follows from [18] (Corollary 8.7.1) that the sublevel set S_ε is bounded for any ε ≥ 0 and so is the
level set L_ε, because Ω∗ is bounded. Define dist(L_ε, Ω∗) to be the maximum distance of points on
the level set L_ε to the optimal set Ω∗, i.e.,
dist(L_ε, Ω∗) = max_{x∈L_ε} dist(x, Ω∗) ≜ max_{x∈L_ε} min_{z∈Ω∗} ‖x − z‖.    (5)
Because L_ε and Ω∗ are bounded, dist(L_ε, Ω∗) is also bounded. Let x†_ε denote the closest point in
the ε-sublevel set to x, i.e.,
x†_ε = arg min_{z∈S_ε} ‖z − x‖².    (6)
It is easy to show that x†_ε ∈ L_ε when x ∉ S_ε (using the KKT condition).
4
Homotopy Smoothing
4.1 Nesterov's Smoothing
We first present Nesterov's smoothing technique and an accelerated proximal gradient method for
solving the smoothed problem, since the proposed algorithm builds upon these techniques. The
idea of smoothing is to construct a smooth function f_μ(x) that well approximates f(x). Nesterov
considered the following function
f_μ(x) = max_{u∈Ω₂} ⟨Ax, u⟩ − φ(u) − μω₊(u).
It was shown in [16] that f_μ(x) is smooth w.r.t. ‖·‖ and its smoothness parameter is given by
L_μ = ‖A‖²/μ, where ‖A‖ is defined by ‖A‖ = max_{‖x‖≤1} max_{‖u‖₊≤1} ⟨Ax, u⟩. Denote by
u_μ(x) = arg max_{u∈Ω₂} ⟨Ax, u⟩ − φ(u) − μω₊(u).
The gradient of f_μ(x) is computed by ∇f_μ(x) = Aᵀu_μ(x). Then
f_μ(x) ≤ f(x) ≤ f_μ(x) + μD²/2.    (7)
From the inequality above, we can see that when μ is very small, f_μ(x) gives a good approximation
of f(x). This motivates us to solve the following composite optimization problem
min_{x∈Ω₁} F_μ(x) ≜ f_μ(x) + g(x).
Many works have studied such an optimization problem [2, 19] and the best convergence rate is
given by O(L_μ/t²), where t is the total number of iterations. We present a variant of accelerated
proximal gradient (APG) methods in Algorithm 1 that works even with ‖x‖ replaced by a general
norm as long as its square is strongly convex. We make several remarks about Algorithm 1: (i) the
variant here is similar to Algorithm 3 in [19] and the algorithm proposed in [16], except that the prox
function d(x) is replaced by ‖x − x₀‖²/2 in updating the sequence z_k, which is assumed to be
1-strongly convex w.r.t. ‖·‖; (ii) if ‖·‖ is simply the Euclidean norm, a simplified algorithm with
only one update in (4) can be used (e.g., FISTA [2]); (iii) if L_μ is difficult to compute, we can use the
backtracking trick (see [2, 19]).
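A small numerical sketch of the smoothed function and its gradient for a simple instance, assuming Ω₂ = [0,1]^m, φ = 0 and ω₊(u) = (1/2)‖u‖² (these choices are assumptions for illustration, not the paper's general setting):

import numpy as np

def f_mu_and_grad(x, A, mu):
    # f_mu(x) = max_{u in [0,1]^m} <Ax, u> - mu/2 ||u||^2.
    # The maximizer is the coordinate-wise projection u_mu(x) = clip(Ax/mu, 0, 1),
    # grad f_mu(x) = A.T @ u_mu(x), and f_mu is (||A||^2 / mu)-smooth.
    u = np.clip(A @ x / mu, 0.0, 1.0)
    val = u @ (A @ x) - 0.5 * mu * np.sum(u ** 2)
    return val, A.T @ u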
The following theorem states the convergence result for APG.
Theorem 2. ([19]) Let θ_k = 2/(k+2), α_k = 2/(k+1), k ≥ 0, or α_{k+1} = θ_{k+1} = ( √(θ_k⁴ + 4θ_k²) − θ_k² ) / 2, k ≥ 0. For
any x ∈ Ω₁, we have
F_μ(x_t) − F_μ(x) ≤ 2L_μ ‖x − x₀‖² / t².    (8)
Algorithm 1 An Accelerated Proximal Gradient Method: APG(x₀, t, L_μ)
1: Input: the number of iterations t, the initial solution x₀, and the smoothness constant L_μ
2: Let θ₀ = 1, V₋₁ = 0, Γ₋₁ = 0, z₀ = x₀
3: Let θ_k and α_k be the two sequences given in Theorem 2.
4: for k = 0, . . . , t − 1 do
5:   Compute y_k = (1 − θ_k)x_k + θ_k z_k
6:   Compute v_k = ∇f_μ(y_k), V_k = V_{k−1} + v_k/θ_k, and Γ_k = Γ_{k−1} + 1/θ_k
7:   Compute z_{k+1} = Π^{L_μ/Γ_k}_{V_k/Γ_k, g}(x₀) and x_{k+1} = Π^{L_μ}_{v_k, g}(y_k)
8: end for
9: Output: x_t
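For the Euclidean-norm case, remark (ii) above allows a simplified one-update variant; a FISTA-style sketch (not the dual-averaging Algorithm 1 itself), with prox_g(w, step) the proximal mapping of g:

def apg(x0, t, L, grad_f, prox_g):
    # FISTA-style accelerated proximal gradient for min f_mu(x) + g(x),
    # assuming the Euclidean norm (a simplification of Algorithm 1).
    x = x0.copy()
    x_prev = x0.copy()
    y = x0.copy()
    tk = 1.0
    for _ in range(t):
        x = prox_g(y - grad_f(y) / L, 1.0 / L)        # proximal gradient step
        tk_next = 0.5 * (1.0 + (1.0 + 4.0 * tk * tk) ** 0.5)
        y = x + ((tk - 1.0) / tk_next) * (x - x_prev)  # momentum extrapolation
        x_prev, tk = x, tk_next
    return x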
Combining the above convergence result with the relation in (7), we can establish the iteration
complexity of Nesterov's smoothing algorithm for solving the original problem (1).
Corollary 3. For any x ∈ Ω₁, we have
F(x_t) − F(x) ≤ μD²/2 + 2L_μ ‖x − x₀‖² / t².    (9)
In particular, in order to have F(x_t) ≤ F∗ + ε, it suffices to set μ ≤ ε/D² and t ≥ 2D‖A‖‖x₀ − x∗‖/ε,
where x∗ is an optimal solution to (1).
4.2 Homotopy Smoothing
From the convergence result in (9), we can see that in order to obtain a very accurate solution, we
have to set μ, the smoothing parameter, to be a very small value, which will cause a blow-up of
the second term because L_μ ∝ 1/μ. On the other hand, if μ is set to be a relatively large value, then t
can be set to be a relatively small value to match the first term on the R.H.S. of (9), which may lead to
an insufficiently accurate solution. It seems that O(1/ε) is unbeatable. However, we can adopt a
homotopy strategy: start from a relatively large value of μ and optimize the smoothed function
with a certain number of iterations t such that the second term in (9) matches the first term, which
will give F(x_t) − F(x∗) ≤ O(μ). Then we can reduce the value of μ by a constant factor b > 1
and warm-start the optimization process from x_t. The key observation is that although μ decreases
and L_μ increases, the other term ‖x∗ − x_t‖ is also reduced compared to ‖x∗ − x₀‖, which could
cancel the blow-up effect caused by the increased L_μ. As a result, we expect to use the same number of
iterations to optimize the smoothed function with the smaller μ such that F(x_{2t}) − F(x∗) ≤ O(μ/b).
To formalize our observation, we need the following key lemma.
Lemma 1 ([21]). For any x ∈ Ω₁ and ε > 0, we have
‖x − x†_ε‖ ≤ ( dist(x†_ε, Ω∗) / ε ) (F(x) − F(x†_ε)),
where x†_ε ∈ S_ε is the closest point in the ε-sublevel set to x as defined in (6).
The lemma is proved in [21]. We include its proof in the supplement. If we apply the above bound
in (9), we will see in the proof of the main theorem (Theorem 5) that the number of iterations t for
solving each smoothed problem is roughly O(dist(L_ε, Ω∗)/ε), which will be lower than O(1/ε) in light of
the local error bound condition given below.
Definition 4 (Local error bound (LEB)). A function F(x) is said to satisfy a local error bound
condition if there exist θ ∈ (0, 1] and c > 0 such that for any x ∈ S_ε
dist(x, Ω∗) ≤ c(F(x) − F∗)^θ.    (10)
Remark: In the next subsection, we will discuss the relationship with other types of conditions and
show that a broad family of non-smooth functions (including almost all commonly seen functions in
machine learning) obey the local error bound condition. The exponent constant θ can be considered
as a local sharpness measure of the function. Figure 1 illustrates the sharpness of F(x) = |x|^p for
p = 1, 1.5, and 2 around the optimal solutions and their corresponding θ.
With the local error bound condition, we can see that dist(L_ε, Ω∗) ≤ cε^θ, θ ∈ (0, 1]. Now, we
are ready to present the homotopy smoothing algorithm and its convergence guarantee under the
Algorithm 2 HOPS for solving (1)
1: Input: m, t, x₀ ∈ Ω₁, ε₀, D² and b > 1.
2: Let μ₁ = ε₀/(bD²)
3: for s = 1, . . . , m do
4:   Let x_s = APG(x_{s−1}, t, L_{μ_s})
5:   Update μ_{s+1} = μ_s/b
6: end for
7: Output: x_m
Figure 1: Illustration of the local sharpness of three functions (|x| with θ = 1, |x|^1.5 with θ = 2/3, |x|² with θ = 0.5) and the corresponding θ in the LEB condition.
local error bound condition. The HOPS algorithm is presented in Algorithm 2, which starts from
a relatively large smoothing parameter ? = ?1 and gradually reduces ? by a factor of b > 1 after
running a number t of iterations of APG with warm-start. The iteration complexity of HOPS is
established in Theorem 5. We include the proof in the supplement.
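A minimal driver sketch of Algorithm 2, using an APG routine like the FISTA-style sketch after Algorithm 1; grad_f_mu(mu) is an assumed interface returning a gradient function of f_mu:

def hops(x0, m, t, eps0, D2, b, norm_A, grad_f_mu, prox_g):
    # Homotopy smoothing: warm-started APG stages with mu shrinking by b.
    mu = eps0 / (b * D2)                   # step 2: mu_1 = eps_0 / (b D^2)
    x = x0
    for _ in range(m):                     # steps 3-6
        L_mu = norm_A ** 2 / mu            # smoothness constant of f_mu
        x = apg(x, t, L_mu, grad_f_mu(mu), prox_g)
        mu /= b                            # step 5: decrease the smoothing parameter
    return x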
Theorem 5. Suppose Assumption 1 holds and F(x) obeys the local error bound condition. Let
HOPS run with t = O(2bcD‖A‖/ε^(1−θ)) ≥ 2bcD‖A‖/ε^(1−θ) iterations for each stage, and m = ⌈log_b(ε₀/ε)⌉.
Then F(x_m) − F∗ ≤ 2ε. Hence, the iteration complexity for achieving a 2ε-optimal solution is
(2bcD‖A‖/ε^(1−θ)) ⌈log_b(ε₀/ε)⌉ in the worst case.
4.3 Local error bounds and Applications
In this subsection, we discuss the local error bound condition and its application in non-smooth
optimization problems.
The Hoffman's bound and finding a point in a polyhedron. A polyhedron can be expressed as
P = {x ∈ R^d : B₁x ≤ b₁, B₂x = b₂}. The Hoffman's bound [17] is expressed as
dist(x, P) ≤ c(‖(B₁x − b₁)₊‖ + ‖B₂x − b₂‖), ∃c > 0,    (11)
where [s]₊ = max(0, s). This can be considered as the error bound for the polyhedron feasibility
problem, i.e., finding an x ∈ P, which is equivalent to
min_{x∈R^d} F(x) ≜ ‖(B₁x − b₁)₊‖ + ‖B₂x − b₂‖ = max_{u∈Ω₂} ⟨B₁x − b₁, u₁⟩ + ⟨B₂x − b₂, u₂⟩,
where u = (u₁ᵀ, u₂ᵀ)ᵀ and Ω₂ = {u | u₁ ≥ 0, ‖u₁‖ ≤ 1, ‖u₂‖ ≤ 1}. If there exists an x ∈ P, then
F∗ = 0. Thus the Hoffman's bound in (11) implies a local error bound (10) with θ = 1. Therefore,
HOPS has a linear convergence for finding a feasible solution in a polyhedron. If we let ω₊(u) = (1/2)‖u‖², then D² = 2, so that the iteration complexity is 2√2 bc max(‖B₁‖, ‖B₂‖) ⌈log_b(ε₀/ε)⌉.
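A small sketch of the polyhedron feasibility objective above (not from the paper); F(x) = 0 exactly when x ∈ P:

import numpy as np

def polyhedron_residual(x, B1, b1, B2, b2):
    # F(x) = ||(B1 x - b1)_+|| + ||B2 x - b2||.
    return (np.linalg.norm(np.maximum(B1 @ x - b1, 0.0))
            + np.linalg.norm(B2 @ x - b2))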
Cone programming. Let U, V denote two vector spaces. Given a linear operator E : U → V∗ ⁴,
a closed convex set Ω ⊆ U, a vector e ∈ V∗, and a closed convex cone K ⊆ V, the general
constrained cone linear system (cone programming) consists of finding a vector x ∈ Ω such that
Ex − e ∈ K∗. Lan et al. [11] have considered Nesterov's smoothing algorithm for solving the cone
programming problem with O(1/ε) iteration complexity. The problem can be cast into a non-smooth
optimization problem:
min_{x∈Ω} F(x) ≜ dist(Ex − e, K∗) = max_{‖u‖≤1, u∈−K} ⟨Ex − e, u⟩.
Assume that e ∈ Range(E) − K∗; then F∗ = 0. Burke et al. [5] have considered the error bound for
such problems and their results imply that there exists c > 0 such that dist(x, Ω∗) ≤ c(F(x) − F∗)
as long as ∃x ∈ Ω s.t. Ex − e ∈ int(K∗), where Ω∗ denotes the optimal solution set. Therefore,
HOPS also has a linear convergence for cone programming. Considering that both U and V are
Euclidean spaces, we set ω₊(u) = (1/2)‖u‖²; then D² = 1. Thus, the iteration complexity of HOPS
for finding a 2ε-solution is 2bc‖E‖ ⌈log_b(ε₀/ε)⌉.
Non-smooth regularized empirical loss (REL) minimization in machine learning. The REL
consists of a sum of loss functions on the training data and a regularizer, i.e.,
min_{x∈R^d} F(x) ≜ (1/n) Σ_{i=1}^{n} ℓ(xᵀa_i, y_i) + λg(x),
⁴ V∗ represents the dual space of V. The notations and descriptions are adopted from [11].
where (ai , yi ), i = 1, . . . , n denote pairs of a feature vector and a label of training data. Non-smooth
loss functions include the hinge loss ℓ(z, y) = max(0, 1 − yz) and the absolute loss ℓ(z, y) = |z − y|, which
can be written in the max structure of (2). Non-smooth regularizers include, e.g., g(x) = ‖x‖₁ and
g(x) = ‖x‖∞. These loss functions and regularizers are essentially piecewise linear functions,
whose epigraph is a polyhedron. The error bound condition has been developed for such kinds of
problems [21]. In particular, if F(x) has a polyhedral epigraph, then there exists c > 0 such that
dist(x, Ω∗) ≤ c(F(x) − F∗) for any x ∈ R^d. It then implies that HOPS has an O(log(ε₀/ε)) iteration
complexity for solving a non-smooth REL minimization with a polyhedral epigraph. Yang et al. [22]
have also considered such non-smooth problems, but they only have an O(1/ε) iteration complexity.
When F(x) is essentially locally strongly convex [9] in terms of ‖·‖ such that⁵
dist²(x, Ω∗) ≤ (2/σ)(F(x) − F∗), ∀x ∈ S_ε,    (12)
then we can see that the local error bound holds with θ = 1/2, which implies that the iteration complexity
of HOPS is Õ(1/√ε), which is up to a logarithmic factor the same as the result in [6] for a strongly
convex function. However, here only local strong convexity is sufficient and there is no need to
develop a different algorithm and a different analysis from the non-strongly convex case as done
in [6]. For example, one can consider F(x) = ‖Ax − y‖ₚᵖ = Σ_{i=1}^{n} |a_iᵀx − y_i|ᵖ, p ∈ (1, 2), which
satisfies (12) according to [21].
The Kurdyka-Łojasiewicz (KL) property. The definition of the KL property is given below.
Definition 6. The function F(x) is said to have the KL property at x∗ ∈ Ω∗ if there exist η ∈ (0, ∞],
a neighborhood U of x∗ and a continuous concave function ϕ : [0, η) → R₊ such that i) ϕ(0) = 0,
ϕ is continuous on (0, η), ii) for all s ∈ (0, η), ϕ′(s) > 0, iii) and for all x ∈ U ∩ {x : F(x∗) <
F(x) < F(x∗) + η}, the KL inequality ϕ′(F(x) − F(x∗)) ‖∂F(x)‖ ≥ 1 holds.
The function ϕ is called the desingularizing function of F at x∗, which makes the function F(x)
sharp by reparameterization. An important desingularizing function is of the form ϕ(s) = cs^(1−β)
for some c > 0 and β ∈ [0, 1), which gives the KL inequality ‖∂F(x)‖ ≥ (1/(c(1−β))) (F(x) − F(x∗))^β.
It has been established that the KL property is satisfied by a wide class of non-smooth functions
definable in an o-minimal structure [4]. Semialgebraic functions and (globally) subanalytic functions
are for instance definable in their respective classes. While the definition of KL property involves
a neighborhood U and a constant η, in practice many convex functions satisfy the above property
with U = R^d and η = ∞ [1]. The proposition below shows that a function with the KL property
with a desingularizing function ϕ(s) = cs^(1−β) obeys the local error bound condition in (10) with
θ = 1 − β ∈ (0, 1], which implies an iteration complexity of Õ(1/ε^β) of HOPS for optimizing such
a function.
Proposition 1. (Theorem 5 [10]) Let F(x) be a proper, convex and lower-semicontinuous function
that satisfies the KL property at x∗ and let U be a neighborhood of x∗. For all x ∈ U ∩ {x : F(x∗) <
F(x) < F(x∗) + η}, if ‖∂F(x)‖ ≥ (1/(c(1−β))) (F(x) − F(x∗))^β, then dist(x, Ω∗) ≤ c(F(x) − F∗)^(1−β).
4.4 Primal-Dual Homotopy Smoothing (PD-HOPS)
Finally, we note that the required number of iterations per-stage t for finding an accurate solution
depends on an unknown constant c and sometimes θ. Thus, an inappropriate setting of t may lead
to a less accurate solution. In practice, it can be tuned to obtain the fastest convergence. A way to
eschew the tuning is to consider a primal-dual homotopy smoothing (PD-HOPS). Basically, we also
apply the homotopy smoothing to the dual problem:
max_{u∈Ω₂} Φ(u) ≜ −φ(u) + min_{x∈Ω₁} ⟨Aᵀu, x⟩ + g(x).
Denote by Φ∗ the optimal value of the above problem. Under some mild conditions, it is easy to
see that Φ∗ = F∗. By extending the analysis and result to the dual problem, we can obtain that
F(x_s) − F∗ ≤ ε + ε_s and Φ∗ − Φ(u_s) ≤ ε + ε_s after the s-th stage with a sufficient number of
iterations per stage. As a result, we get F(x_s) − Φ(u_s) ≤ 2(ε + ε_s). Therefore, we can use the
duality gap F(x_s) − Φ(u_s) as a certificate to monitor the progress of optimization. As long as
the above inequality holds, we restart the next stage. Then with at most m = ⌈log_b(ε₀/ε)⌉ epochs
⁵ This is true if g(x) is strongly convex or locally strongly convex.
Table 1: Comparison of different optimization algorithms by the number of iterations and running
time in seconds (mean ± standard deviation) for achieving a solution that satisfies F(x) − F∗ ≤ ε.
Method  | Linear Classification                 | Image Denoising                               | Matrix Decomposition
        | ε=1e-4            ε=1e-5              | ε=1e-3                ε=1e-4                  | ε=1e-3            ε=1e-4
PD      | 9861 (1.58±0.02)  27215 (4.33±0.06)   | 8078 (22.01±0.51)     34292 (94.26±2.67)      | 2523 (4.02±0.10)  3441 (5.65±0.20)
APG-D   | 4918 (2.44±0.22)  28600 (11.19±0.26)  | 179204 (924.37±59.67) 1726043 (9032.69±539.01)| 1967 (6.85±0.08)  8622 (30.36±0.11)
APG-F   | 3277 (1.33±0.01)  19444 (7.69±0.07)   | 14150 (40.90±2.28)    91380 (272.45±14.56)    | 1115 (3.76±0.06)  4151 (9.16±0.10)
HOPS-D  | 1012 (0.44±0.02)  4101 (1.67±0.01)    | 3542 (13.77±0.13)     4501 (17.38±0.10)       | 224 (1.36±0.02)   313 (1.51±0.03)
HOPS-F  | 1009 (0.46±0.02)  4102 (1.69±0.04)    | 2206 (6.99±0.15)      3905 (16.52±0.08)       | 230 (0.91±0.01)   312 (1.23±0.01)
PD-HOPS | 846 (0.36±0.01)   3370 (1.27±0.02)    | 2538 (7.97±0.13)      3605 (11.39±0.10)       | 124 (0.45±0.01)   162 (0.64±0.01)
we get F(x_m) − Φ(u_m) ≤ 2(ε + ε_m) ≤ 4ε. Similarly, we can show that PD-HOPS enjoys an
Õ(max{1/ε^(1−θ), 1/ε^(1−θ̃)}) iteration complexity, where θ̃ is the exponent constant in the local error
bound of the objective function for the dual problem. For example, for linear classification problems
with a piecewise linear loss and ℓ₁ norm regularizer we can have θ = 1 and θ̃ = 1, and PD-HOPS
enjoys a linear convergence. Due to the limitation of space, we defer the details of PD-HOPS and its
analysis to the supplement.
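A minimal driver sketch of the PD-HOPS restart logic described above; primal_stage, dual_stage and duality_gap are assumed interfaces (one warm-started smoothed stage on the primal/dual problem and the gap F(x) − Φ(u), respectively):

def pd_hops(x0, u0, eps, eps0, b, primal_stage, dual_stage, duality_gap):
    # Run warm-started stages until the duality gap certifies a 4*eps solution,
    # so no knowledge of c or theta is needed to set the per-stage iterations.
    x, u, eps_s = x0, u0, eps0
    while duality_gap(x, u) > 2.0 * eps:   # restart criterion from the text
        x = primal_stage(x, eps_s)
        u = dual_stage(u, eps_s)
        eps_s /= b                         # shrink the stage accuracy
    return x, u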
5
Experimental Results
In this section, we present some experimental results to demonstrate the effectiveness of HOPS
and PD-HOPS by comparing with two state-of-the-art algorithms, the first-order Primal-Dual (PD)
method [6] and the Nesterov?s smoothing with Accelerated Proximal Gradient (APG) methods. For
APG, we implement two variants, where APG-D refers to the variant with the dual averaging style of
update on one sequence of points (i.e., Algorithm 1) and APG-F refers to the variant of the FISTA
style [2]. Similarly, we also implement the two variants for HOPS. We conduct experiments for
solving three problems: (1) an ℓ₁-norm regularized hinge loss for linear classification on the w1a
dataset 6 ; (2) a total variation based ROF model for image denoising on the Cameraman picture 7 ; (3)
a nuclear norm regularized absolute error minimization for low-rank and sparse matrix decomposition
on a synthetic data. More details about the formulations and experimental setup can be found in the
supplement.
To make a fair comparison, we stop each algorithm when the optimality gap is less than a given ε and
count the number of iterations and the running time that each algorithm requires. The optimal value
is obtained by running PD with a sufficiently large number of iterations such that the duality gap is
very small. We present the comparison of different algorithms on different tasks in Table 1, where
for PD-HOPS we only report the results of using the faster variant of APG, i.e., APG-F. We repeat
each algorithm 10 times for solving a particular problem and then report the averaged running time
in seconds and the corresponding standard deviations. The running time of PD-HOPS only accounts
for the time for updating the primal variable, since the updates for the dual variable are fully decoupled
from the primal updates and can be carried out in parallel. From the results, we can see that (i) HOPS
converges consistently faster than the corresponding APG variants, especially when ε is small; (ii) PD-HOPS allows
for choosing the number of iterations at each epoch automatically, yielding faster convergence speed
than HOPS with manual tuning; (iii) both HOPS and PD-HOPS are significantly faster than PD.
6
Conclusions
In this paper, we have developed a homotopy smoothing (HOPS) algorithm for solving a family of
structured non-smooth optimization problems with formal guarantee on the iteration complexities. We
show that the proposed HOPS can achieve a lower iteration complexity of Õ(1/ε^(1−θ)) with θ ∈ (0, 1]
for obtaining an ε-optimal solution under a mild local error bound condition. The experimental results
on three different tasks demonstrate the effectiveness of HOPS.
Acknowledgements
We thank the anonymous reviewers for their helpful comments. Y. Xu and T. Yang are partially
supported by National Science Foundation (IIS-1463988, IIS-1545995).
⁶ https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
⁷ http://pages.cs.wisc.edu/~swright/TVdenoising/
References
[1] H. Attouch, J. Bolte, P. Redont, and A. Soubeyran. Proximal alternating minimization and projection
methods for nonconvex problems: An approach based on the kurdyka-lojasiewicz inequality. Math. Oper.
Res., 35:438?457, 2010.
[2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems.
SIAM J. Img. Sci., 2:183?202, 2009.
[3] S. Becker, J. Bobin, and E. J. Candès. NESTA: A fast and accurate first-order method for sparse recovery.
SIAM J. Img. Sci., 4:1?39, 2011.
[4] J. Bolte, A. Daniilidis, and A. Lewis. The Łojasiewicz inequality for nonsmooth subanalytic functions with
applications to subgradient dynamical systems. SIAM J. Optim., 17:1205?1223, 2006.
[5] J. V. Burke and P. Tseng. A unified analysis of hoffman?s bound via fenchel duality. SIAM J. Optim.,
6(2):265?282, 1996.
[6] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to
imaging. J. Math. Img. Vis., 40:120?145, 2011.
[7] X. Chen, Q. Lin, S. Kim, J. G. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general
structured sparse regression. Ann. Appl. Stat., 6(2):719?752, 06 2012.
[8] A. Gilpin, J. Peña, and T. Sandholm. First-order algorithm with log(1/epsilon) convergence for epsilon-equilibrium in two-person zero-sum games. Math. Program., 133(1-2):279-298, 2012.
[9] R. Goebel and R. T. Rockafellar. Local strong convexity and local lipschitz continuity of the gradient of
convex functions. J. Convex Anal., 15(2):263?270, 2008.
[10] J. P. B. S. Jerome Bolte, Trong Phong Nguyen. From error bounds to the complexity of first-order descent
methods for convex functions. CoRR, abs/1510.08234, 2015.
[11] G. Lan, Z. Lu, and R. D. C. Monteiro. Primal-dual first-order methods with O(1/e) iteration-complexity for
cone programming. Math. Program., 126(1):1?29, 2011.
[12] S. Łojasiewicz. Ensembles semi-analytiques. Institut des Hautes Etudes Scientifiques, 1965.
[13] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general
approach. Ann. Oper. Res., 46(1):157?178, 1993.
[14] I. Necoara, Y. Nesterov, and F. Glineur. Linear convergence of first order methods for non-strongly convex
optimization. CoRR, abs/1504.06298, 2015.
[15] A. Nemirovski. Prox-method with rate of convergence o(1/t) for variational inequalities with lipschitz
continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Optim.,
15(1):229?251, Jan. 2005.
[16] Y. Nesterov. Smooth minimization of non-smooth functions. Math. Program., 103(1):127?152, 2005.
[17] J. Pang. Error bounds in mathematical programming. Math. Program., 79:299?332, 1997.
[18] R. Rockafellar. Convex Analysis. Princeton mathematical series. Princeton University Press, 1970.
[19] P. Tseng. On accelerated proximal gradient methods for convex-concave optimization. submitted to SIAM
J. Optim., 2008.
[20] L. Xiao and T. Zhang. A proximal-gradient homotopy method for the sparse least-squares problem. SIAM
J. Optim., 23(2):1062?1091, 2013.
[21] T. Yang and Q. Lin. Rsg: Beating subgradient method without smoothness and strong convexity. CoRR,
abs/1512.03107, 2016.
[22] T. Yang, M. Mahdavi, R. Jin, and S. Zhu. An efficient primal-dual prox method for non-smooth optimization.
Machine Learning, 2014.
[23] H. Zhang. New analysis of linear convergence of gradient-type methods via unifying error bound conditions.
arXiv:1606.00269, 2016.
[24] X. Zhang, A. Saha, and S. Vishwanathan. Smoothing multivariate performance measures. JMLR,
13(Dec):3623?3680, 2012.
[25] Z. Zhou and A. M.-C. So. A unified approach to error bounds for structured convex optimization problems.
CoRR, abs/1512.03518, 2015.
| 6407 |@word mild:2 norm:11 seems:1 d2:5 semicontinuous:1 unbeatable:1 decomposition:2 nsw:1 acknowlegements:1 initial:1 series:2 nesta:1 tuned:1 ours:1 bc:1 comparing:1 optim:5 luo:1 written:2 bd:1 designed:1 update:6 xk:2 ojasiewicz:6 certificate:1 math:6 scientifiques:1 zhang:3 mathematical:2 consists:2 polyhedral:2 manner:2 bobin:1 x0:11 roughly:1 cand:1 dist:13 relying:1 globally:1 automatically:1 inappropriate:1 considering:1 redont:1 spain:1 bounded:7 notation:1 lowest:1 kind:1 suppresses:1 developed:6 unified:2 finding:8 transformation:1 guarantee:4 concave:4 tackle:1 um:1 rm:2 k2:3 before:1 local:28 treat:1 pock:2 black:1 au:1 maxkxk:1 studied:1 appl:1 fastest:1 nemirovski:3 range:1 obeys:2 averaged:1 unique:1 practice:2 implement:2 jan:1 empirical:2 yan:5 significantly:1 composite:1 projection:1 refers:2 get:2 uiowa:1 operator:1 risk:1 seminal:1 kb1:1 optimize:1 equivalent:1 www:1 demonstrated:1 reviewer:1 leb:4 tianbao:2 attention:1 starting:2 convex:33 sharpness:6 recovery:1 importantly:1 nuclear:1 reparameterization:1 proving:1 variation:1 suppose:1 programming:9 us:1 trick:1 updating:2 kxk1:1 csie:1 capture:2 hv:1 worst:1 decrease:3 yk:4 pd:15 convexity:6 complexity:29 ui:6 nesterov:17 rigorously:1 hb1:1 solving:19 upon:1 regularizer:4 w1a:1 subanalytic:2 fast:2 neighborhood:3 choosing:1 whose:4 larger:2 solve:2 statistic:2 hoc:1 sequence:3 propose:1 product:1 combining:1 achieve:4 description:1 dist2:1 convergence:25 empty:2 extending:1 converges:1 help:1 develop:3 stat:1 measured:1 progress:1 strong:6 sydney:1 recovering:1 c:1 involves:1 implies:5 closely:1 australia:1 libsvmtools:1 suffices:1 generalization:1 homotopy:21 preliminary:2 proposition:2 anonymous:1 ntu:1 exploring:1 hold:4 burke:2 around:3 considered:11 sufficiently:5 maxu:1 mapping:5 achieves:1 adopt:1 xk2:4 label:1 city:2 establishes:1 hoffman:4 minimization:9 zhou:1 shrinkage:1 broader:2 corollary:2 vk:5 consistently:1 polyhedron:6 rank:1 contrast:2 kim:1 helpful:1 relation:1 interested:1 monteiro:1 arg:4 dual:18 classification:3 exponent:2 smoothing:41 special:3 constrained:1 art:1 trong:1 equal:1 construct:1 hop:41 represents:1 broad:2 cancel:1 definable:2 t2:5 nonsmooth:2 piecewise:2 report:2 employ:1 saha:1 composed:1 national:1 beck:1 replaced:2 ab:4 yielding:1 light:1 primal:14 regularizers:2 necoara:1 accurate:6 desingularizing:3 respective:1 decoupled:1 institut:1 conduct:1 iv:1 euclidean:4 desired:1 re:2 guidance:1 theoretical:2 minimal:1 increased:1 instance:1 earlier:1 fenchel:1 teboulle:1 cover:1 deviation:2 vkk:1 proximal:14 synthetic:1 person:2 siam:7 satisfied:1 management:1 sublevel:5 style:5 oper:2 mahdavi:1 account:1 ku1:1 prox:6 de:1 blow:2 student:2 b2:5 int:1 rockafellar:2 satisfy:2 notable:1 explicitly:1 caused:1 vi:1 ad:1 piece:1 multiplicative:1 later:1 depends:1 closed:4 characterizes:2 start:6 xing:1 parallel:1 defer:2 contribution:1 square:3 pang:1 accuracy:1 ensemble:1 yield:2 rsg:1 basically:1 lu:1 worth:1 daniilidis:1 submitted:1 reach:1 manual:1 definition:5 involved:2 proof:3 stop:1 proved:1 dataset:1 knowledge:2 ut:1 subsection:2 maxkuk:1 formalize:1 cs1:2 improved:3 formulation:1 done:2 box:1 strongly:11 chambolle:2 though:1 just:1 stage:10 until:2 jerome:1 hand:1 continuity:1 effect:1 attouch:1 verify:1 true:1 hence:2 regularization:3 alternating:1 game:2 kak:2 demonstrate:2 image:3 wise:3 variational:1 novel:2 recently:4 functional:2 phong:1 discussed:1 he:3 approximates:1 goebel:1 cv:1 ai:2 smoothness:7 rd:5 tuning:2 similarly:2 lkx:1 etc:2 closest:3 multivariate:1 
recent:1 optimizing:3 certain:1 nonconvex:1 inequality:8 yi:5 exploited:1 seen:1 analyzes:1 relaxed:1 employed:3 converge:1 ii:7 semi:1 multiple:1 reduces:1 smooth:45 faster:7 characterized:1 match:2 long:3 lin:5 feasibility:1 kax:1 variant:8 regression:1 essentially:2 arxiv:1 iteration:40 sometimes:1 achieved:2 dec:1 addition:2 comment:1 subject:1 leveraging:1 effectiveness:4 yang:7 leverage:1 iii:5 easy:5 enough:1 zi:1 reduce:2 idea:1 whether:1 becker:1 render:1 cause:1 remark:2 generally:2 locally:2 reduced:1 continuation:1 http:2 exist:3 qihang:2 per:2 key:5 lan:3 achieving:2 monitor:1 wisc:1 kuk:3 imaging:2 subgradient:4 monotone:1 cone:11 sum:3 run:2 inverse:1 striking:1 family:9 reader:1 throughout:1 almost:1 draw:1 capturing:1 bound:39 apg:14 constraint:1 vishwanathan:1 u1:2 speed:1 min:11 optimality:2 relatively:6 department:3 structured:4 according:1 smaller:2 sandholm:1 tw:1 gradually:3 handling:2 restricted:1 discus:2 cameraman:1 x2t:1 count:1 cjlin:1 end:2 lojasiewicz:1 adopted:1 apply:3 obey:1 shortly:1 original:4 assumes:2 running:6 include:4 denotes:1 hinge:2 unifying:1 epsilon:1 build:2 establish:1 approximating:1 yz:1 especially:1 objective:10 strategy:6 kak2:1 visiting:1 said:2 gradient:14 minx:1 distance:6 thank:1 sci:2 restart:1 carbonell:1 polytope:1 tseng:3 trivial:1 assuming:2 relationship:2 illustration:1 difficult:1 mostly:2 setup:1 glineur:1 implementation:1 anal:1 motivates:1 proper:1 unknown:1 observation:2 etude:1 datasets:1 descent:3 jin:1 extended:1 smoothed:11 sharp:2 introduced:1 cast:1 required:2 pair:1 kl:9 rof:1 polytopes:1 barcelona:1 boost:1 nip:1 established:2 below:5 dynamical:1 xm:3 beating:1 sparsity:1 program:4 max:14 including:2 eschew:1 ia:2 warm:4 regularized:4 zhu:1 technology:1 imply:1 numerous:1 picture:1 ready:1 carried:1 hb2:1 kb2:3 analytiques:1 review:1 epoch:2 loss:10 expect:1 fully:1 limitation:1 semialgebraic:1 foundation:1 iowa:5 sufficient:2 xiao:1 thresholding:1 repeat:1 supported:1 hex:1 enjoys:3 formal:1 wide:1 absolute:2 sparse:8 benefit:2 kz:4 author:1 adopts:1 made:1 projected:1 simplified:1 commonly:1 nguyen:1 far:1 compact:2 consequentially:2 global:1 kkt:1 b1:7 img:3 assumed:1 xi:1 continuous:3 iterative:1 table:2 zk:4 ku2:1 obtaining:1 necessarily:2 domain:1 did:1 main:1 fair:1 xu:3 epigraph:4 explicit:1 pe:1 jmlr:1 minz:1 theorem:8 emphasizing:1 z0:1 kuk2:1 xt:7 explored:1 x:5 exists:4 rel:3 adding:1 corr:4 mirror:1 supplement:5 magnitude:1 illustrates:1 kx:7 gap:4 chen:1 bolte:3 logarithmic:2 backtracking:1 simply:1 explore:2 hax:5 kurdyka:5 saddle:1 kxk:3 expressed:2 partially:1 scalar:1 u2:2 restarted:1 satisfies:3 lewis:1 ann:2 lipschitz:2 feasible:3 fista:2 programing:1 except:1 averaging:1 denoising:2 lemma:4 total:2 called:1 duality:3 experimental:5 hautes:1 swright:1 gilpin:2 brevity:1 accelerated:7 princeton:2 ex:3 |
5,978 | 6,408 | Learning from Small Sample Sets by Combining
Unsupervised Meta-Training with CNNs
Yu-Xiong Wang
Martial Hebert
Robotics Institute, Carnegie Mellon University
{yuxiongw, hebert}@cs.cmu.edu
Abstract
This work explores CNNs for the recognition of novel categories from few examples. Inspired by the transferability properties of CNNs, we introduce an additional
unsupervised meta-training stage that exposes multiple top layer units to a large
amount of unlabeled real-world images. By encouraging these units to learn diverse
sets of low-density separators across the unlabeled data, we capture a more generic,
richer description of the visual world, which decouples these units from ties to a
specific set of categories. We propose an unsupervised margin maximization that
jointly estimates compact high-density regions and infers low-density separators.
The low-density separator (LDS) modules can be plugged into any or all of the
top layers of a standard CNN architecture. The resulting CNNs significantly improve the performance in scene classification, fine-grained recognition, and action
recognition with small training samples.
1 Motivation
To successfully learn a deep convolutional neural network (CNN) model, hundreds of millions of
parameters need to be inferred from millions of labeled examples on thousands of image categories [1,
2, 3]. In practice, however, for novel categories/tasks of interest, collecting a large corpus of annotated
data to train CNNs from scratch is typically unrealistic, such as in robotics applications [4] and for
customized categories [5]. Fortunately, although trained on particular categories, CNNs exhibit certain
attractive transferability properties [6, 7]. This suggests that they could serve as universal feature
extractors for novel categories, either as off-the-shelf features or through fine-tuning [7, 8, 9, 10].
Such transferability is promising but still restrictive, especially for novel-category recognition from
few examples [11, 12, 13, 14, 15, 16, 17, 18]. The overall generality of CNNs is negatively affected
by the specialization of top layer units to their original task. Recent analysis shows that from bottom,
middle, to top layers of the network, features make a transition from general to specific [6, 8]. While
features in the bottom and middle layers are fairly generic to many categories (i.e., low-level features
of Gabor filters or color blobs and mid-level features of object parts), high-level features in the top
layers eventually become specific and biased to best discriminate between a particular set of chosen
categories. With limited samples from target tasks, fine-tuning cannot effectively adjust the units and
would result in over-fitting, since it typically requires a significant amount of labeled data. Using
off-the-shelf CNNs becomes the best strategy, despite the specialization and reduced performance.
In this work we investigate how to improve pre-trained CNNs for the learning from few examples.
Our key insight is to expose multiple top layer units to a massive set of unlabeled images, as shown
in Figure 1, which decouples these units from ties to the original specific set of categories. This
additional stage is called unsupervised meta-training to distinguish this phase from the conventional
unsupervised pre-training phase [19] and the training phase on the target tasks. Based on the above
transferability analysis, intuitively, bottom and middle layers construct a feature space with high-density regions corresponding to potential latent categories. Top layer units in the pre-trained CNN,
however, only have access to those regions associated with the original, observed categories. The
[Figure 1 schematic: Supervised Pre-Training of Bottom and Middle Layers (ImageNet, 1.2M labeled images) -> Unsupervised Meta-Training of Top Layers (Flickr, 100M unlabeled images) -> Novel Category Recognition from Few Examples.]
Figure 1: We aim to improve the transferability of pre-trained CNNs for the recognition of novel
categories from few labeled examples. We perform a multi-stage training procedure: 1) We first
pre-train a CNN that recognizes a specific set of categories on a large-scale labeled dataset (e.g.,
ImageNet 1.2M), which provides fairly generic bottom and middle layer units; 2) We then meta-train
the top layers as low-density separators on a far larger set of unlabeled data (e.g., Flickr 100M), which
further improves the generality of multiple top layer units; 3) Finally, we use our modified CNN
on new categories/tasks (e.g., scene classification, fine-grained recognition, and action recognition),
either as off-the-shelf features or as initialization of fine-tuning that allows for end-to-end training.
units are then tuned to discriminate between these regions by separating the regions while pushing
them further away from each other. To tackle this limitation, our unsupervised meta-training provides
a far larger pool of unlabeled images as a much less biased sampling in the feature space. Now,
instead of producing separations tied to the original categories, we generate diverse sets of separations
across the unlabeled data. Since the unit "tries to discriminate the data manifold from its surroundings, in all non-manifold directions"[1], we capture a more generic and richer description of the visual world.
How can we generate these separations in an unsupervised manner? Inspired by the structure/manifold
assumption in shallow semi-supervised and unsupervised learning (i.e., the decision boundary should
not cross high-density regions, but instead lie in low-density regions) [20, 21], we introduce a low-density separator (LDS) module that can be plugged into any (or all) top layers of a standard CNN
with the non-linearity) can be viewed as a separator or decision boundary in the activation space of
the previous layer. LDS then generates connection weights (decision boundaries) between successive
layers that traverse regions of as low density as possible and avoid intersecting high-density regions in
the activation space. Many LDS methods typically infer a probability distribution, for example through
densest region detection, lowest-density hyperplane estimation [21], and clustering [22]. However,
exact clustering or density estimation is known to be notoriously difficult in high-dimensional spaces.
We instead adopt a discriminative paradigm [20, 23, 24, 14] to circumvent the aforementioned
difficulties. Using a max-margin framework, we propose an unsupervised, scalable, coarse-to-fine
approach that jointly estimates compact, distinct high-density quasi-classes (HDQC), i.e., sets of data
points sampled in high-density regions, as stand-ins for plausible high-density regions and infers low-density hyperplanes (separators). Our decoupled formulations generalize those in supervised binary
code discovery [23] and semi-supervised learning [24], respectively; and more crucially, we propose
a novel combined optimization to jointly estimate HDQC and learn LDS in large-scale unsupervised
scenarios, from the labeled ImageNet 1.2M [25] to the unlabeled Flickr 100M dataset [26].
Our approach of exploiting unsupervised learning on top of CNN transfer learning is unique as opposed to other recent work on unsupervised, weakly-supervised, and semi-supervised deep learning.
Most existing unsupervised deep learning approaches focus on unsupervised learning of visual representations that are both sparse and allow image reconstruction [19], including deep belief networks
(DBN), convolutional sparse coding, and (denoising) auto-encoders (DAE). Our unsupervised LDS
meta-training is different from conventional unsupervised pre-training as in DBN and DAE in two
important ways: 1) our meta-training ?post-arranges? the network that has undergone supervised
training on a labeled dataset and then serves as a kind of network ?pre-conditioner? [19] for the target
tasks; and 2) our meta-training phase is not necessarily followed by fine-tuning and the features
obtained by meta-training could be used off the shelf.
Other types of supervisory information (by creating auxiliary tasks), such as clustering, surrogate
classes [27, 4], spatial context, temporal consistency, web supervision, and image captions [28], have
been explored to train CNNs in an unsupervised (or weakly-supervised) manner. Although showing
[1] Yoshua Bengio. https://disqus.com/by/yoshuabengio/
initial promise, the performance of these unsupervised (or weakly-supervised) deep models is still
not on par with that of their supervised counterparts, partially due to noisy or biased external information [28]. In addition, our LDS, if viewed as an auxiliary task, is directly related to discriminative
classification, which results in more desirable and consistent features for the final novel-category
recognition tasks. Unlike using a single image and its pre-defined transformations [27] or other
labeled multi-view object [4] to simulate a surrogate class, our quasi-classes capture a more natural
representation of realistic images. Finally, while we boost the overall generality of CNNs for a wide
spectrum of unseen categories, semi-supervised deep learning approaches typically improve the
model generalization for specific tasks, with both labeled and unlabeled data coming from the tasks
of interest [29, 30].
Our contribution is three-fold: First, we show how LDS, based on an unsupervised margin maximization, is generated without a bias to a particular set of categories (Section 2). Second, we detail
how to use LDS modules in CNNs by plugging them into any (or all) top layers of the architecture,
leading to single-scale (or multi-scale) low-density separator networks (Section 3). Finally, we show
how such modified CNNs, with enhanced generality, are used to facilitate the recognition of novel
categories from few examples and significantly improve the performance in scene classification,
fine-grained recognition, and action recognition (Section 4). The general setup is depicted in Figure 1.
2 Pre-trained low-density separators from unsupervised data
Given a CNN architecture pre-trained on a specific set of categories, such as the ImageNet (ILSVRC)
1,000 categories, we aim to improve the generality of one of its top layers, e.g., the k-th layer. We
fix the structures and weights of the layers from 1 to k-1, and view the activation of layer k-1 as
a feature space. A unit s in layer k is fully connected to all the units in layer k-1 via a vector of
weights w_s. Each w_s corresponds to a particular decision boundary (partition) of the feature space.
Intuitively, all the w_s's then jointly further discriminate between these 1,000 categories, enforcing that
the new activations in layer k are more similar within classes and more dissimilar between classes.
To make the w_s's and the associated units in layer k unspecific to the ImageNet 1,000 categories, we use
a large amount of unlabeled images at the unsupervised meta-training stage. The layers from 1 to
k-1 remain unchanged, which means that we still tackle the same feature space. The new unlabeled
images now constitute a less biased sampling of the feature space in layer k-1. We introduce a
new k-th layer with more units and encourage their unbiased exploration of the feature space. More
precisely, we enforce that the units learn many diverse decision boundaries w_s's that traverse different
low-density regions while avoiding intersecting high-density regions of the unsupervised data (untied
to the original ImageNet categories). The set of possible arrangements of such decision boundaries is
rich, meaning that we can potentially generalize to a broad range of categories.
2.1 Approach overview
We denote column vectors and matrices with italic bold letters. For each unlabeled image I_i, where i ∈ {1, 2, ..., N}, let x_i ∈ R^D and φ_i ∈ R^S be the vectorized activations in layers k-1 and k, respectively. Let W be the weights between the two layers, where w_s is the weight vector associated with the unit s in layer k. For notational simplicity, x_i already includes a constant 1 as the last element and w_s includes the bias term. We then have φ_si = f(w_s^T x_i), where f(·) is a non-linear function, such as sigmoid or ReLU. The resulting activation spaces of layers k-1 and k are denoted as X and F, respectively.
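To make the notation concrete, here is a minimal sketch (our own, not from the paper) of how the layer-k activations are computed from the layer-(k-1) activations; the shapes and the ReLU choice are illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: N images, D-dim layer-(k-1) activations (bias folded in), S units in layer k.
N, D, S = 1000, 4097, 10
X = np.random.randn(N, D)    # rows are x_i, each ending with the constant 1
W = np.random.randn(S, D)    # rows are the weight vectors w_s (bias term included)

def f(z):
    # the non-linearity f(.); ReLU here, as used later in the paper
    return np.maximum(z, 0.0)

Phi = f(X @ W.T)             # Phi[i, s] = phi_si = f(w_s^T x_i); rows live in the space F
```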
To learn the w_s's as low-density separators, we are supposed to have certain high-density regions which the w_s's separate. However, accurate estimation of high-density regions is difficult. We instead generate quasi-classes as stand-ins for plausible high-density regions. We want samples with the same quasi-labels to be similar in activation spaces (constraint within quasi-classes), while those with different quasi-labels should be very dissimilar in activation spaces (constraints between quasi-classes). Note that in contrast to clustering, generating quasi-classes does not require inferring membership for each data point. Formally, assuming that there are C desired quasi-classes, we introduce a sample selection vector T_c ∈ {0, 1}^N for each quasi-class c. T_{c,i} = 1 if I_i is selected for assignment to quasi-class c and zero otherwise. As illustrated in Figure 4, the optimization for seeking low-density separators (LDS) while identifying high-density quasi-classes (HDQC) can be framed as

    find W ∈ LDS, T ∈ HDQC    subject to    W separates T.    (1)
This optimization problem enforces that each unit s learns a partition ws lying across the low-density
region among certain salient high-density quasi-classes discovered by T . This leads to a difficult
joint optimization problem in theory, because W and T are interdependent.
3
In practice, however, it may be unnecessary to find the global optimum. Reasonable local optima are
sufficient in our case to describe the feature space, as shown by the empirical results in Section 4.
We use an iterative approach that obtains salient high-density quasi-classes from coarse to fine
(Section 2.3) and produces promising discriminative low-density partitions among them (Section 2.2).
We found that the optimization procedures converge in our experiments.
2.2 Learning low-density separators
Assume that T is known, which means that we have already defined C high-density quasi-classes by T_c. We then use a max-margin formulation to learn W. Each unit s in layer k corresponds to a low-density hyperplane w_s that separates positive and negative examples in a max-margin fashion. To train w_s, we need to generate label variables l_s ∈ {-1, 1} for each w_s, which label the samples in the quasi-classes either as positive (1) or negative (-1) training examples. We can stack all the labels for learning the w_s's to form L = [l_1, ..., l_S]. Moreover, in the activation space F of layer k, which is induced by the activation space X of layer k-1 and the w_s's, it would be beneficial to further push for large inter-quasi-class and small intra-quasi-class distances. We achieve such properties by optimizing
$$
\min_{W, L, \phi} \; \sum_{s=1}^{S} \|w_s\|^2 \;+\; \lambda \sum_{i=1}^{N} \sum_{s=1}^{S} I_i \big[ 1 - l_{is}\, w_s^T x_i \big]_+ \;+\; \frac{\gamma_1}{2} \sum_{c=1}^{C} \sum_{u=1}^{N} \sum_{v=1}^{N} T_{c,u} T_{c,v}\, d(\phi_u, \phi_v) \;-\; \frac{\gamma_2}{2} \sum_{c'=1}^{C} \sum_{\substack{c''=1 \\ c'' \neq c'}}^{C} \sum_{p=1}^{N} \sum_{q=1}^{N} T_{c',p} T_{c'',q}\, d(\phi_p, \phi_q), \qquad (2)
$$
where d is a distance metric (e.g., square of Euclidean distance) in the activation space F of layer k, and [x]_+ = max(0, x) represents the hinge loss. Here we introduce an additional indicator vector I ∈ {0, 1}^N for all the quasi-classes: I_i = 0 if I_i is not selected for assignment to any quasi-class (i.e., \sum_{c=1}^{C} T_{c,i} = 0) and one otherwise. Note that I is actually sparse, since only a portion of unlabeled samples are selected into quasi-classes and only their memberships are estimated in T.
The new objective is much easier to optimize compared to Eqn. (1), as it only requires producing the low-density separators w_s from known quasi-classes given T_c. We then derive an algorithm to
optimize problem (2) using block coordinate descent. Specifically, problem (2) can be viewed as a
generalization of predictable discriminative binary codes in [23]: 1) compared with the fully labeled
case in [23], Eqn. (2) introduces additional quasi-class indicator variables to handle the unsupervised
scenario; 2) Eqn. (2) extends the specific binary-valued hash functions in [23] to general real-valued
non-linear activation functions in neural networks.
We adopt a similar iterative optimization strategy as in [23]. To achieve a good local minimum, our insight is that there should be diversity in the w_s's, and we thus initialize the w_s's as the top-S orthogonal directions of PCA on data points belonging to the quasi-classes. We found that this initialization yields promising results that work better than random initialization and do not contaminate the pre-trained CNNs. For fixed W, we update φ using stochastic gradient descent to achieve improved separation in the activation space F of layer k. This optimization is efficient if using ReLU as the non-linearity. We use φ to update L: l_{is} = 1 if φ_{si} > 0 and zero otherwise. Using L as training labels, we then train S linear SVMs to update W. We iterate this process a fixed number of times (2-4 in practice), and we thus obtain the low-density separator w_s for each unit and construct the activation space F of layer k.
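A schematic sketch of this block coordinate descent is given below. It is our reading of the procedure, not the authors' code: the SGD step on φ is condensed into a single attraction/repulsion update, the function name and step sizes are hypothetical, and scikit-learn's LinearSVC stands in for the linear SVMs.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def learn_lds(X_qc, T, S=10, n_iters=3, step=0.1):
    """Sketch of the alternating optimization for problem (2).
    X_qc: (M, D) activations of the samples selected into quasi-classes (I_i = 1);
    T:    (C, M) binary quasi-class selection matrix restricted to those samples."""
    # Initialize the w_s's as the top-S orthogonal PCA directions of the quasi-class points.
    W = PCA(n_components=S).fit(X_qc).components_.copy()
    for _ in range(n_iters):
        Phi = np.maximum(X_qc @ W.T, 0.0)             # ReLU activations phi_i in space F
        # Condensed SGD step on Phi: pull same-quasi-class points together and push
        # other quasi-classes away (the gamma_1 / gamma_2 terms of Eqn (2)).
        for c in range(T.shape[0]):
            idx = np.flatnonzero(T[c])
            centroid = Phi[idx].mean(axis=0)
            Phi[idx] -= step * (Phi[idx] - centroid)
            others = np.flatnonzero(T.sum(axis=0) - T[c])
            Phi[others] += 0.1 * step * (Phi[others] - centroid)
        L = (Phi > 0).astype(int)                     # l_{is} = 1 if phi_{si} > 0, else 0
        for s in range(S):                            # re-train one linear SVM per unit
            if len(np.unique(L[:, s])) == 2:          # guard: SVM needs both labels present
                W[s] = LinearSVC(C=1.0).fit(X_qc, L[:, s]).coef_.ravel()
    return W
```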
2.3 Generating high-density quasi-classes
In the previous section, we assumed T known and learned low-density separators between high-density quasi-classes. Now we explain how to find these quasi-classes. Given the activation space X of layer k-1 and the activation space F of layer k (linked by the low-density separators W as weights), we need to generate C high-density quasi-classes from the unlabeled data selected by T_c. We hope that the quasi-classes are distinct and compact in the activation spaces. That is, we want samples belonging to the same quasi-classes to be close to each other in the activation spaces, while samples from different quasi-classes should be far from each other in the activation spaces. To this end, we propose a coarse-to-fine procedure that combines the seeding heuristics of K-means++ [31] and a max-margin formulation [24] to gradually augment confident samples into the quasi-classes. We suppose that each quasi-class contains at least ξ_0 images and at most ξ images. Learning T includes the following steps:
Skeleton Generation. We first choose a single seed point T_{c,i_c} = 1 for each quasi-class using the K-means++ heuristics in the activation space X of layer k-1. All the seed points are now spread out as the skeleton of the quasi-classes.
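A minimal sketch of this seeding step, assuming squared Euclidean distances in the layer-(k-1) activation space; the function name and the memory-hungry distance computation are ours.

```python
import numpy as np

def kmeanspp_seeds(X, C, rng=None):
    """Pick C spread-out seed indices i_c (so T[c, i_c] = 1) from activations X of shape (N, D)."""
    rng = rng or np.random.default_rng(0)
    seeds = [int(rng.integers(len(X)))]                      # first seed uniformly at random
    for _ in range(C - 1):
        # squared distance of every point to its nearest existing seed
        d2 = np.min(((X[:, None, :] - X[seeds]) ** 2).sum(-1), axis=1)
        seeds.append(int(rng.choice(len(X), p=d2 / d2.sum())))  # sample prop. to d^2
    return np.asarray(seeds)
```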
[Figure 2 schematic. (a) Left: a standard CNN (Layer 1 ... Layer K, Output Layer); middle: single-scale LDS+CNN with an LDS layer inserted on top; right: multi-scale LDS+CNN with LDS layers at several levels. (b) Multi-scale LDS+CNN built on DAG-CNN: at each scale, Conv/ReLU activations are average-pooled, passed through an LDS layer and fully-connected layers FCa and FCb, and the per-layer scores are added before the softmax.]
Figure 2: We use our LDS to revisit CNN architectures. In Figure 2a, we embed LDS learned from
a large collection of unlabeled data as a new top layer into a standard CNN structure pre-trained
on a specific set of categories (left), leading to single-scale LDS+CNN (middle). LDS could be
also embedded into different layers, resulting multi-scale LDS+CNN (right). More specifically in
Figure 2b, our multi-scale LDS+CNN architecture is constructed by introducing LDS layers into
multi-scale DAG-CNN [10]. For each scale (level), we spatially (average) pool activations, learn and
plug in LDS in this activation space, add fully-connected layers FCa and FCb (with K outputs), and
finally add the scores across all layers as predictions for K output classes (that are finally soft-maxed
together) on the target task. We show that the resulting LDS+CNNs can be either used as off-the-shelf
features or discriminatively trained in an end-to-end fashion to facilitate novel category recognition.
Quasi-Class Initialization. We extend each single skeletal point to an initial quasi-class by adding its nearest neighbors [31] in the activation space X of layer k-1. Each of the resulting quasi-classes thus contains ξ_0 images, which satisfies the constraint for the minimum number of selected samples.
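In code, this initialization step could look as follows (a sketch; `xi0` names the minimum quasi-class size, written as ξ_0 above).

```python
import numpy as np

def init_quasi_class(X, seed_idx, xi0=6):
    """Grow a seed point into an initial quasi-class: the seed plus its nearest
    neighbors in the layer-(k-1) activation space, xi0 samples in total."""
    d2 = ((X - X[seed_idx]) ** 2).sum(axis=1)
    return np.argsort(d2)[:xi0]        # indices i with T[c, i] = 1
```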
Augmentation and Refinement. In the above two steps, we select samples for quasi-classes based on the similarity in the activation space of layer k-1. Given this initial estimate of quasi-classes, we select additional samples using joint similarity in both activation spaces of layers k-1 and k by leveraging a max-margin formulation. For each quasi-class c, we construct quasi-class classifiers h_c^X and h_c^F in the two activation spaces. Note that h_c^X and h_c^F are different from the low-density separator w_s. We use SVM responses to select additional samples, leading to the following optimization:
$$
\begin{aligned}
\min_{T,\, h_c^X,\, h_c^F} \;\; & \lambda \sum_{c=1}^{C} \sum_{i=1}^{N} I_i \big[ 1 - y_{c,i}\, (h_c^X)^T x_i \big]_+ \;+\; \eta_X \sum_{c=1}^{C} \|h_c^X\|^2 \;+\; \beta \sum_{c'=1}^{C} \sum_{\substack{c''=1 \\ c'' \neq c'}}^{C} \sum_{j=1}^{N} T_{c',j} T_{c'',j} \\
& +\; \lambda \sum_{c=1}^{C} \sum_{i=1}^{N} I_i \big[ 1 - y_{c,i}\, (h_c^F)^T \phi_i \big]_+ \;+\; \eta_F \sum_{c=1}^{C} \|h_c^F\|^2 \;-\; \sum_{c=1}^{C} \sum_{j=1}^{N} T_{c,j}\, (h_c^F)^T \phi_j \\
\text{s.t.} \;\; & \xi_0 \le \sum_{i=1}^{N} T_{c,i} \le \xi, \quad \forall c \in \{1, \ldots, C\},
\end{aligned} \qquad (3)
$$
where y_{c,i} is the corresponding binary label used for one-vs.-all multi-quasi-class classification: y_{c,i} = 1 if T_{c,i} = 1 and -1 otherwise. The first and second terms denote a max-margin classifier in the activation space X, and the fourth and fifth terms denote a max-margin classifier in the activation space F. The third term ensures that the same unlabeled sample is not shared by multiple quasi-classes. The last term is a sample selection criterion that chooses those unlabeled samples with high classifier responses in the activation space F.
This formulation is inspired by the approach to selecting unlabeled images using joint visual features and attributes [24]. We view our activation space X of layer k-1 as the feature space, and the activation space F of layer k as the learned attribute space. However, different from the semi-supervised scenario in [24], which provides an initial set of labeled training images, our problem (3) is entirely unsupervised. To solve it, we use the initial T corresponding to the quasi-classes obtained in the first two steps to train h_c^X and h_c^F. After obtaining these two sets of SVMs in both activation spaces, we update T. Following a similar block coordinate descent procedure as in [24], we iteratively re-train both h_c^X and h_c^F and update T until we obtain the desired ξ number of samples.
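The augmentation loop might be sketched as follows. This is an illustrative reading of problem (3) rather than a faithful solver: scikit-learn's LinearSVC stands in for the two SVMs, the overlap constraint is enforced by masking already-assigned samples, and the growth schedule is an assumption.

```python
import numpy as np
from sklearn.svm import LinearSVC

def augment_quasi_classes(X, Phi, T, xi=56, n_rounds=5):
    """X: (N, D) layer-(k-1) activations; Phi: (N, S) layer-k activations;
    T: (C, N) binary selection matrix from the first two steps (modified in place)."""
    C = T.shape[0]
    grow = max(1, (xi - int(T.sum(axis=1).max())) // n_rounds)   # samples added per round
    for _ in range(n_rounds):
        for c in range(C):
            y = 2 * T[c] - 1                          # one-vs.-all labels y_{c,i}
            h_X = LinearSVC(C=1.0).fit(X, y)          # classifier h_c^X in space X
            h_F = LinearSVC(C=1.0).fit(Phi, y)        # classifier h_c^F in space F
            score = h_X.decision_function(X) + h_F.decision_function(Phi)
            score[T.sum(axis=0) > 0] = -np.inf        # a sample joins at most one quasi-class
            T[c, np.argsort(score)[::-1][:grow]] = 1  # add the most confident samples
    return T
```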
3 Low-density separator networks
3.1 Single-scale layer-wise training
We start from how to embed our LDS as a new top layer into a standard CNN structure, leading to a single-scale network. To improve the generality of the learned units in layer k, we need to prevent
co-adaptation and enforce diversity between these units [6, 19]. We adopt a simple random sampling
strategy to train the entire LDS layer. We break the units in layer k into (disjoint) blocks, as shown
in Figure 4. We encourage each block of units to explore different regions of the activation space
described by a random subset of unlabeled samples. This sampling strategy also makes LDS learning
scalable since direct LDS learning from the entire dataset is computationally infeasible.
Specifically, from an original selection matrix T_0 ∈ {0, 1}^{N×C} of all zeros, we first obtain a random sub-matrix T ∈ {0, 1}^{M×C}. Using this subset of M samples, we then generate C high-density quasi-classes by solving problem (3) and learn S corresponding low-density separator weights by solving problem (2), yielding a block of S units in layer k. We randomly produce J sub-matrices T, repeat the procedure, and obtain S×J units (J blocks) in total. This thus constitutes layer k, the low-density separator layer. The entire single-scale structure is shown in Figure 2a.
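Putting the pieces together, the layer-wise training could be organized as below; `generate_quasi_classes` stands for the full Section 2.3 pipeline (seeding, initialization, augmentation) and, like the wrapper itself, is a hypothetical name.

```python
import numpy as np

def build_lds_layer(X_all, J=2000, M=20000, C=30, S=10, seed=0):
    """Assemble the LDS layer: J independent blocks of S units, each trained on a
    random subset of M unlabeled activations. Blocks are independent, so this
    loop parallelizes trivially."""
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(J):
        sub = X_all[rng.choice(len(X_all), size=M, replace=False)]
        T = generate_quasi_classes(sub, C)      # Sec. 2.3: seeding + init + augmentation
        blocks.append(learn_lds(sub, T, S=S))   # Sec. 2.2: block coordinate descent
    return np.vstack(blocks)                    # (S*J, D) weight matrix of the LDS layer
```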
3.2 Multi-scale structure
For a convolutional layer of size H1 × H2 × F, where H1 is the height, H2 is the width, and F is the number of filter channels, we first compute a 1 × 1 × F pooled feature by averaging across spatial dimensions as in [10], and then learn LDS in this activation space as before. Note that our approach applies to other types of pooling operation as well. Given the benefit of complementary features, LDS could also be operationalized on several different layers, leading to multi-scale/level representations. We thus modify the multi-scale DAG-CNN architecture [10] by introducing LDS on top of the ReLU layers, leading to multi-scale LDS+CNN, as shown in Figure 2b. We add two additional layers on top of LDS: FCa (with F outputs) that selects discriminative units for target tasks, and FCb (with K outputs) that learns a K-way classifier for target tasks. The output of the LDS layers could be used
as off-the-shelf multi-scale features. If using LDS weights as initialization, the entire structure in
Figure 2b could also be fine-tuned in a similar fashion as DAG-CNN [10].
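The per-scale pooling is simple enough to state directly; a sketch, assuming channel-last (H1, H2, F) arrays:

```python
import numpy as np

def pooled_activation(conv_map):
    """Average-pool an (H1, H2, F) convolutional response map over its spatial
    dimensions, yielding the 1 x 1 x F feature in which a per-scale LDS is learned."""
    return conv_map.mean(axis=(0, 1))        # shape (F,)
```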
4 Experimental evaluation
In this section, we explore the use of low-density separator networks (LDS+CNNs) on a number of
supervised learning tasks with limited data, including scene classification, fine-grained recognition,
and action recognition. We use two powerful CNN models, AlexNet [1] and VGG19 [3] pre-trained on ILSVRC 2012 [25], as our reference networks. We implement the unsupervised meta-training
on Yahoo! Flickr Creative Commons100M dataset (YFCC100M) [26], which is the largest single
publicly available image and video database. We begin with plugging LDS into a single layer, and
then introduce LDS into several top layers, leading to a multi-scale model. We consider using
LDS+CNNs as off-the-shelf features in the small sample size regime, as well as through fine-tuning
when enough data is available in the target task.
Implementation Details. During unsupervised meta-training, we use 99.2 million unlabeled images
on YFCC100M [26]. After resizing the smallest side of each image to be 256, we generate the
standard 10 crops (4 corners plus one center and their flips) of size 224 × 224 as implemented in Caffe [32]. For single-scale structures, we learn LDS in the fc7 activation space of dimension 4,096. For multi-scale structures, following [10] we learn LDS in the activation spaces of Conv3, Conv4, Conv5, fc6, and fc7 for AlexNet, and we learn LDS in the activation spaces of Conv43, Conv44, Conv51, Conv52, and fc6 for VGG19. We use the same sets of parameters to learn LDS in these activation spaces without further tuning. In the LDS layer, each block has S = 10 units, which separate across M = 20,000 randomly sub-sampled data points. Repeating the sub-sampling J = 2,000 times, we then have 20,000 units in total. Notably, each block of units in the LDS layer can be learned independently, making it feasible for parallelization. For learning LDS in Eqn. (2), λ and γ_1 are set to 1 and γ_2 is set to normalize for the size of the quasi-classes, which is the same setup and default parameters as in [23]. For generating high-density quasi-classes in Eqn. (3), following [31, 24], we set the minimum and maximum number of selected samples per quasi-class to be ξ_0 = 6 and ξ = 56, and produce C = 30 quasi-classes in total. We use the same setup and parameters as in [24], where λ = 1, β = 1. While using only the center crops to infer quasi-classes, we use all 10 crops to learn more accurate LDS.
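For reference, the meta-training hyperparameters quoted in this paragraph are collected below; the dictionary and its key names are ours, not from any released code.

```python
# Hyperparameters of the unsupervised meta-training stage (names are illustrative).
LDS_CONFIG = dict(
    n_unlabeled=99_200_000,           # YFCC100M images used
    crops=10, crop_size=(224, 224),   # 4 corners + center, and their flips
    S=10,                             # units per LDS block
    M=20_000,                         # random sub-sample per block
    J=2_000,                          # blocks, i.e., 20,000 units in total
    C=30,                             # quasi-classes per block
    xi0=6, xi=56,                     # min / max samples per quasi-class
    lam=1.0, gamma1=1.0,              # Eqn (2); gamma2 normalized by quasi-class size
    beta=1.0,                         # Eqn (3)
)
```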
Tasks and Datasets. We evaluate on standard benchmark datasets for scene classification: SUN-397 [33] and MIT-67 [34], fine-grained recognition: Oxford 102 Flowers [35], and action recognition (compositional semantic recognition): Stanford-40 actions [36]. These datasets are widely used
for evaluating the CNN transferability [8], and we consider their diversity and coverage of novel
categories. We follow the standard experimental setup (e.g., the train/test splits) for these datasets.
4.1 Learning from few examples
The first question to answer is whether the LDS layers improve the transferability of the original
pre-trained CNNs and facilitate the recognition of novel categories from few examples. To answer this
[Figure 3 plots: four panels (SUN-397, MIT-67, 102 Flowers, Stanford-40). Each panel plots average multi-class classification accuracy (%) against the number of training examples per category for MS-LDS+CNN, SS-LDS+CNN, MS-DAG-CNN, and SS-CNN; the SUN-397 panel additionally includes Places-CNN.]
Figure 3: Performance comparisons between our single-scale LDS+CNN (SS-LDS+CNN), multiscale LDS+CNN (MS-LDS+CNN) and the pre-trained single-scale CNN (SS-CNN), multi-scale
DAG-CNN (MS-DAG-CNN) baselines for scene classification, fine-grained recognition, and action
recognition from few labeled examples on four benchmark datasets. VGG19 [3] is used as the
CNN model for its demonstrated superior performance. For SUN-397, we also include a publicly
available strong baseline, Places-CNN, which trained a CNN (AlexNet architecture) from scratch
using a scene-centric database with over 7 million annotated images from 400 scene categories, and
which achieved state-of-the-art performance for scene classification [2]. X-axis: number of training
examples per class. Y-axis: average multi-class classification accuracy. With improved transferability
gained from a large set of unlabeled data, our LDS+CNNs with simple linear SVMs significantly
outperform the vanilla pre-trained CNN and powerful DAG-CNN for small sample learning.
Type                    Approach            SUN-397  MIT-67  102 Flowers  Stanford-40
Weakly-supervised CNNs  Flickr-AlexNet      42.7     55.8    74.2         53.0
                        Flickr-GoogLeNet    44.4     55.6    65.8         52.8
                        Combined-AlexNet    47.3     58.8    83.3         56.4
                        Combined-GoogLeNet  55.0     67.9    83.7         69.2
Ours                    SS-LDS+CNN          55.4     73.6    87.5         70.5
                        MS-LDS+CNN          59.9     80.2    95.4         72.6
Table 1: Performance comparisons of classification accuracy (%) between our LDS+CNNs and
weakly-supervised CNNs [28] on the four datasets when using the entire training sets. In contrast to
our approach that uses the Flickr dataset for unsupervised meta-training, Flickr-AlexNet/GoogLeNet
train CNNs from scratch on the Flickr dataset by using associated captions as weak supervisory
information. Combined-AlexNet/GoogLeNet concatenate features from supervised ImageNet CNNs
and weakly-supervised Flickr CNNs. Despite the same amount of data used for pre-training, ours
outperform the weakly-supervised CNNs by a significant margin due to their noisy captions and tags.
question, we evaluate both LDS+CNN and CNN as off-the-shelf features without fine-tuning on the
target datasets. This is the standard way to use pre-trained CNNs [7]. We test how performance varies
with the number of training samples per category as in [16]. To compare with the state-of-the-art
performance, we use VGG19 in this set of experiments. Following the standard practice, we train
simple linear SVMs in one-vs.-all fashion on L2-normalized features [7, 10] in Liblinear [37].
Single-Scale Features. We begin by evaluating single-scale features on theses datasets. For a
fair comparison, we first reduce the dimensionality of LDS+CNN from 20,000 to 4,096, the same
dimensionality as CNN, followed by linear SVMs. This is achieved by selecting from LDS+CNN
the 4,096 most active features according to the standard criterion of multi-class recursive feature
elimination (RFE) [38] using the target dataset. We also tested PCA. The performance drops, but it is
still significantly better than the pre-trained CNN. Figure 3 summarizes the average performance over
10 random splits on these datasets. When used as off-the-shelf features for small-sample learning,
our single-scale LDS+CNN significantly outperforms the vanilla pre-trained CNN, which is already a
strong baseline. Our results are particularly impressive for the big performance boost, for example
nearly 20% on MIT-67, in the one-shot learning scenario. This verifies the effectiveness of the
layer-wise LDS, which leads to a more generic representation for a broad range of novel categories.
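A sketch of this evaluation protocol, assuming scikit-learn in place of Liblinear (LinearSVC wraps the same library and is one-vs.-rest by default):

```python
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC

def few_shot_accuracy(train_X, train_y, test_X, test_y):
    """Train one-vs.-all linear SVMs on L2-normalized off-the-shelf features and
    report multi-class accuracy, as in the few-shot experiments above."""
    clf = LinearSVC(C=1.0).fit(normalize(train_X), train_y)
    return (clf.predict(normalize(test_X)) == test_y).mean()
```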
[Figure 4 schematic labels: a block of units in Layer k (LDS); low-density separators between Layer k-1 and Layer k; a set of quasi-classes drawn from unsupervised data. Figure 5 bar chart: accuracy (%) on SUN-397 and MIT-67 for SS-CNN, MS-DAG-CNN, SS-LDS+CNN, and MS-LDS+CNN, each as off-the-shelf (OTS) features and after fine-tuning (FT).]
Figure 4: Illustration of learning low-density
separators between successive layers on a large
amount of unlabeled data. Note the color correspondence between the decision boundaries
across the unlabeled data and the connection
weights in the network.
Figure 5: Effect of fine-tuning (FT) on SUN-397
(purple bars) and MIT-67 (blue bars). Fine-tuning
LDS+CNNs (AlexNet) further improves the performance over the off-the-shelf (OTS) features for
novel category recognition.
Multi-Scale Features. Given the promise of single-scale LDS+CNN, we now evaluate multi-scale
off-the-shelf features. After learning LDS in each activation space separately, we reduce their
dimensionality to that of the corresponding activation space via RFE for a fair comparison with DAG-CNN [10]. We train linear SVMs on these LDS+CNNs, and then average their predictions. Figure 3
summarizes the average performance over different splits for multi-scale features. Consistent with
the single-scale results, our multi-scale LDS+CNN outperforms the powerful multi-scale DAG-CNN.
LDS+CNN is especially beneficial to fine-grained recognition, since there is typically limited data
per class for fine-grained categories. Figure 3 also validates that multi-scale LDS+CNN allows for
transfer at different levels, thus leading to better generalization to novel recognition tasks compared
to its single-scale counterpart. In addition, Table 1 further shows that our LDS+CNNs outperform
weakly-supervised CNNs [28] that are directly trained on Flickr using external caption information.
4.2 Fine-tuning
With more training data available in the target task, our LDS+CNNs could be fine-tuned to further
improve the performance. For efficient and easy fine-tuning, we use AlexNet in this set of experiments
as in [10]. We evaluate the effect of fine-tuning of our single-scale and multi-scale LDS+CNNs in
the scene classification tasks, due to their relatively large number of training samples. We compare
against the fine-tuned single-scale CNN and multi-scale DAG-CNN [10], as shown in Figure 5.
For completeness, we also include their off-the-shelf performance. As expected, fine-tuned models
consistently outperform their off-the-shelf counterparts. Importantly, Figure 5 shows that our approach
is not limited to small-sample learning and is still effective even in the many training examples regime.
5 Conclusions
Even though current large-scale annotated datasets are comprehensive, they are only a tiny sampling
of the full visual world biased to a selection of categories. It is still not clear how to take advantage
of truly large sets of unlabeled real-world images, which constitute a much less biased sampling
of the visual world. In this work we proposed an approach to leveraging such unsupervised data
sources to improve the overall transferability of supervised CNNs and thus to facilitate the recognition
of novel categories from few examples. This is achieved by encouraging multiple top layer units
to generate diverse sets of low-density separations across the unlabeled data in activation spaces,
which decouples these units from ties to a specific set of categories. The resulting modified CNNs
(single-scale and multi-scale low-density separator networks) are fairly generic to a wide spectrum of
novel categories, leading to significant improvement for scene classification, fine-grained recognition,
and action recognition. The specific implementation described here is a first step. While we used
certain max-margin optimization to train low-density separators, it would be interesting to integrate
into the current CNN backpropagation framework both learning low-density separators and gradually
estimating high-density quasi-classes.
Acknowledgments. We thank Liangyan Gui, Carl Doersch, and Deva Ramanan for valuable and insightful
discussions. This work was supported in part by ONR MURI N000141612007 and U.S. Army Research
Laboratory (ARL) under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016. We also thank NVIDIA for donating GPUs and the AWS Cloud Credits for Research program.
References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In NIPS, 2012.
[2] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition
using places database. In NIPS, 2014.
[3] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In
ICLR, 2015.
[4] D. Held, S. Thrun, and S. Savarese. Robust single-view instance recognition. In ICRA, 2016.
[5] Y.-X. Wang and M. Hebert. Model recommendation: Generating object detectors from few samples. In
CVPR, 2015.
[6] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson. How transferable are features in deep neural networks? In
NIPS, 2014.
[7] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: An astounding
baseline for recognition. In CVPR Workshop, 2014.
[8] H. Azizpour, A. S. Razavian, J. Sullivan, A. Maki, and S. Carlsson. Factors of transferability for a generic
ConvNet representation. TPAMI, 2015.
[9] M. Oquab, L. Bottou, I. Laptev, and J. Sivic. Learning and transferring mid-level image representations
using convolutional neural networks. In CVPR, 2014.
[10] S. Yang and D. Ramanan. Multi-scale recognition with DAG-CNNs. In ICCV, 2015.
[11] G. Koch, R. Zemel, and R. Salakhutdinov. Siamese neural networks for one-shot image recognition. In
ICML Workshops, 2015.
[12] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic
program induction. Science, 350(6266):1332-1338, 2015.
[13] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot
learning. In NIPS, 2016.
[14] Y.-X. Wang and M. Hebert. Learning by transferring from unsupervised universal sources. In AAAI, 2016.
[15] Z. Li and D. Hoiem. Learning without forgetting. In ECCV, 2016.
[16] Y.-X. Wang and M. Hebert. Learning to learn: Model regression networks for easy small sample learning.
In ECCV, 2016.
[17] L. Bertinetto, J. F. Henriques, J. Valmadre, P. Torr, and A. Vedaldi. Learning feed-forward one-shot learners.
In NIPS, 2016.
[18] B. Hariharan and R. Girshick. Low-shot visual object recognition. arXiv preprint arXiv:1606.02819, 2016.
[19] I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. Book in preparation for MIT Press, 2016.
[20] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.
[21] S. Ben-David, T. Lu, D. Pál, and M. Sotáková. Learning low density separators. In AISTATS, 2009.
[22] J. Hoffman, B. Kulis, T. Darrell, and K. Saenko. Discovering latent domains for multisource domain
adaptation. In ECCV, 2012.
[23] M. Rastegari, A. Farhadi, and D. Forsyth. Attribute discovery via predictable discriminative binary codes.
In ECCV, 2012.
[24] J. Choi, M. Rastegari, A. Farhadi, and L. S. Davis. Adding unlabeled samples to categories by learned
attributes. In CVPR, 2013.
[25] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla,
M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet large scale visual recognition challenge. IJCV,
115(3):211-252, 2015.
[26] B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland, D. Borth, and L.-J. Li. YFCC100M:
The new data in multimedia research. Communications of the ACM, 59(2):64-73, 2016.
[27] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature
learning with convolutional neural networks. In NIPS, 2014.
[28] A. Joulin, L. van der Maaten, A. Jabri, and N. Vasilache. Learning visual features from large weakly
supervised data. In ECCV, 2016.
[29] J. Weston, F. Ratle, H. Mobahi, and R. Collobert. Deep learning via semi-supervised embedding. In ICML,
2008.
[30] A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. P. Xing. Training hierarchical feed-forward visual recognition
models using transfer learning from pseudo-tasks. In ECCV, 2008.
[31] D. Dai and L. Van Gool. Ensemble projection for semi-supervised image classification. In ICCV, 2013.
[32] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. In ACM MM, 2014.
[33] J. Xiao, K. A. Ehinger, J. Hays, A. Torralba, and A. Oliva. SUN database: Exploring a large collection of
scene categories. IJCV, 119(1):3-22, 2016.
[34] A. Torralba and A. Quattoni. Recognizing indoor scenes. In CVPR, 2009.
[35] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In
ICVGIP, 2008.
[36] B. Yao, X. Jiang, A. Khosla, A. L. Lin, L. Guibas, and L. Fei-Fei. Human action recognition by learning
bases of action attributes and parts. In ICCV, 2011.
[37] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear
classification. JMLR, 9:1871-1874, 2008.
[38] A. Bergamo and L. Torresani. Classemes and other classifier-based features for efficient object categorization. TPAMI, 36(10):1988-2001, 2014.
| 6408 |@word kulis:1 cnn:75 middle:6 norm:1 c0:3 r:1 crucially:1 hsieh:1 shot:5 ld:2 liblinear:2 initial:4 contains:2 score:1 selecting:2 hoiem:1 tuned:5 ours:2 outperforms:2 existing:1 current:2 transferability:10 com:1 guadarrama:1 activation:41 si:2 realistic:1 partition:3 concatenate:1 seeding:1 drop:1 update:5 hash:1 v:2 selected:6 discovering:1 provides:3 coarse:3 completeness:1 successive:2 traverse:2 hyperplanes:1 c6:2 height:1 wierstra:1 constructed:1 direct:1 become:1 kov:1 ijcv:2 fitting:1 combine:1 manner:2 introduce:6 inter:1 forgetting:1 maki:1 notably:1 expected:1 multi:28 ratle:1 highdensity:2 inspired:3 salakhutdinov:2 encouraging:2 farhadi:2 becomes:1 spain:1 conv:3 linearity:2 moreover:1 begin:2 estimating:1 alexnet:9 lowest:1 lapedriza:1 kind:1 ag:1 transformation:1 contaminate:1 temporal:1 pseudo:1 collecting:1 tackle:2 tie:3 decouples:3 classifier:6 unit:35 ramanan:2 producing:2 positive:2 before:1 local:2 modify:1 despite:2 oxford:1 jiang:1 conditioner:1 plus:1 initialization:5 suggests:1 co:1 limited:4 shamma:1 range:2 unique:1 acknowledgment:1 enforces:1 practice:4 block:9 implement:1 recursive:1 backpropagation:1 sullivan:2 procedure:5 riedmiller:1 universal:2 empirical:1 significantly:5 gabor:1 matching:1 vedaldi:1 pre:21 projection:1 cannot:1 unlabeled:25 selection:4 close:1 thomee:1 context:1 optimize:2 conventional:2 demonstrated:1 vgg19:4 center:2 l:2 conv4:1 independently:1 arranges:1 simplicity:1 identifying:1 insight:2 importantly:1 bertinetto:1 embedding:2 handle:1 coordinate:2 target:10 enhanced:1 suppose:1 massive:1 densest:1 exact:1 caption:4 us:1 carl:1 goodfellow:1 agreement:1 element:1 recognition:37 particularly:1 muri:1 labeled:11 database:4 bottom:5 observed:1 module:3 ft:5 cooperative:1 wang:5 capture:3 cloud:1 thousand:1 preprint:1 region:19 ensures:1 connected:2 sun:5 valuable:1 predictable:2 skeleton:2 trained:18 weakly:9 solving:2 deva:1 laptev:1 serve:1 negatively:1 learner:1 fca:5 joint:3 train:14 distinct:2 fast:1 describe:1 effective:1 zemel:1 caffe:2 richer:2 larger:2 plausible:2 cvpr:5 valued:2 heuristic:2 otherwise:4 solve:1 resizing:1 stanford:3 widely:1 s:13 unseen:1 simonyan:1 jointly:4 noisy:2 validates:1 final:1 blob:1 advantage:1 tpami:2 karayev:1 propose:4 reconstruction:1 coming:1 adaptation:2 combining:1 achieve:3 supposed:1 description:2 normalize:1 rfe:2 exploiting:1 sutskever:1 optimum:2 darrell:2 produce:3 generating:4 categorization:1 ben:1 object:5 derive:1 gong:1 nearest:1 conv5:1 strong:2 auxiliary:2 c:1 implemented:1 coverage:1 arl:1 direction:2 annotated:3 attribute:5 cnns:37 filter:2 stochastic:1 exploration:1 human:2 elimination:1 require:1 hx:4 ots:4 fix:1 generalization:3 c00:3 exploring:1 mm:1 lying:1 koch:1 credit:1 ic:1 guibas:1 seed:2 adopt:3 smallest:1 torralba:3 estimation:3 label:6 expose:2 largest:1 successfully:1 bergamo:1 hoffman:1 hope:1 mit:6 aim:2 modified:3 avoid:1 shelf:14 zhou:1 azizpour:2 unspecific:1 clune:1 focus:1 notational:1 consistently:1 improvement:1 contrast:2 baseline:4 membership:2 nn:1 typically:5 entire:5 transferring:2 initially:1 w:20 quasi:51 fcb:5 selects:1 overall:3 classification:18 aforementioned:1 among:2 denoted:1 augment:1 yahoo:1 multisource:1 art:2 spatial:2 softmax:1 brox:1 fairly:3 initialize:1 construct:3 sampling:7 kw:1 broad:2 yu:2 unsupervised:32 constitutes:1 nearly:1 represents:1 icml:2 yoshua:1 dosovitskiy:1 torresani:1 few:11 surroundings:1 randomly:2 comprehensive:1 phase:4 astounding:1 gui:1 detection:1 interest:2 investigate:1 intra:1 evaluation:1 adjust:1 
introduces:1 truly:1 yielding:1 pc:1 elizalde:1 held:1 accurate:2 encourage:2 wst:1 decoupled:1 orthogonal:1 plugged:2 euclidean:1 savarese:1 desired:2 re:1 alliance:1 dae:2 girshick:2 instance:1 column:1 soft:1 w911nf:1 assignment:2 maximization:2 introducing:2 subset:2 hundred:1 krizhevsky:1 recognizing:1 encoders:1 answer:2 varies:1 combined:4 confident:1 chooses:1 density:53 explores:1 probabilistic:1 off:14 pool:7 connecting:1 together:2 yao:1 intersecting:2 augmentation:1 thesis:1 aaai:1 opposed:1 choose:1 huang:1 external:2 creating:1 corner:1 book:1 leading:9 li:4 potential:1 valmadre:1 diversity:3 coding:1 bold:1 includes:3 pooled:1 forsyth:1 collobert:1 try:1 view:4 break:1 h1:2 linked:1 razavian:2 portion:1 start:1 hf:3 xing:1 lipson:1 jia:1 icvgip:1 lowdensity:2 contribution:1 collaborative:1 square:1 ni:1 publicly:2 convolutional:8 accuracy:7 purple:1 hariharan:1 ensemble:1 yield:1 generalize:2 lds:86 weak:1 kavukcuoglu:1 lu:1 notoriously:1 russakovsky:1 explain:1 detector:1 flickr:10 quattoni:1 against:1 c7:2 associated:4 donating:1 sampled:2 dataset:8 color:2 infers:2 improves:2 dimensionality:3 actually:1 centric:1 feed:2 supervised:23 follow:1 response:2 improved:2 zisserman:2 formulation:5 though:1 generality:6 stage:4 until:1 eqn:5 web:1 su:1 multiscale:1 semisupervised:1 supervisory:2 facilitate:4 effect:2 lillicrap:1 normalized:1 unbiased:1 concept:1 counterpart:3 spatially:1 iteratively:1 laboratory:1 semantic:1 illustrated:1 attractive:1 during:1 width:1 davis:1 transferable:1 criterion:2 m:13 l1:1 image:25 meaning:1 wise:2 novel:17 sigmoid:1 superior:1 overview:1 operationalized:1 million:4 extend:1 googlenet:4 yosinski:1 mellon:1 significant:3 dag:13 framed:1 tuning:12 rd:1 dbn:2 consistency:1 vanilla:2 doersch:1 chapelle:1 access:1 supervision:1 similarity:2 impressive:1 add:4 base:1 recent:2 optimizing:1 scenario:4 certain:4 nvidia:1 hay:1 meta:14 binary:5 onr:1 der:1 minimum:3 additional:7 fortunately:1 dai:1 oquab:1 deng:1 converge:1 paradigm:1 semi:7 ii:7 multiple:5 desirable:1 full:1 infer:2 siamese:1 zien:1 plug:1 cross:1 ahmed:1 long:1 lin:2 post:1 plugging:2 prediction:2 scalable:2 crop:3 oliva:2 regression:1 cmu:1 metric:1 arxiv:2 nilsback:1 robotics:2 achieved:3 addition:2 want:2 fine:27 separately:1 krause:1 aws:1 source:2 biased:6 parallelization:1 unlike:1 subject:1 induced:1 pooling:1 leveraging:2 effectiveness:1 yang:1 bernstein:1 bengio:3 enough:1 split:3 tc0:2 iterate:1 easy:2 relu:6 automated:1 architecture:9 reduce:2 classemes:1 cn:4 blundell:1 t0:1 whether:1 specialization:2 pca:2 constitute:2 action:10 compositional:1 deep:12 clear:1 karpathy:1 amount:5 repeating:1 mid:2 tenenbaum:1 svms:6 category:47 reduced:1 generate:8 http:1 outperform:4 revisit:1 estimated:1 disjoint:1 per:8 blue:1 diverse:4 carnegie:1 skeletal:1 promise:2 affected:1 key:1 salient:2 four:2 prevent:1 borth:1 letter:1 fourth:1 powerful:3 springenberg:1 extends:1 place:3 reasonable:1 separation:6 lake:1 decision:7 summarizes:2 maaten:1 entirely:1 layer:92 followed:2 distinguish:1 courville:1 correspondence:1 fold:1 fan:1 precisely:2 constraint:3 fei:4 scene:14 untied:1 tag:1 generates:1 simulate:1 min:2 relatively:1 gpus:1 according:1 creative:1 belonging:2 across:8 remain:1 beneficial:2 shallow:1 yfcc100m:3 making:1 intuitively:2 gradually:2 iccv:3 computationally:1 eventually:1 flip:1 end:5 serf:1 available:4 operation:1 hierarchical:1 away:1 generic:7 enforce:2 xiong:1 original:7 top:21 clustering:4 include:2 recognizes:1 hinge:1 pushing:1 restrictive:1 especially:2 
icra:1 unchanged:1 seeking:1 objective:1 arrangement:1 already:3 question:2 strategy:4 surrogate:2 italic:1 exhibit:1 gradient:1 iclr:1 friedland:1 distance:3 separate:4 thank:2 separating:1 thrun:1 convnet:1 manifold:3 enforcing:1 induction:1 assuming:1 code:3 illustration:1 difficult:3 setup:4 potentially:1 negative:2 implementation:2 satheesh:1 perform:1 datasets:10 benchmark:2 descent:3 t:1 hinton:1 communication:1 discovered:1 stack:1 inferred:1 david:1 connection:2 imagenet:8 sivic:1 learned:6 barcelona:1 boost:2 nip:7 bar:2 flower:4 yc:4 indoor:1 regime:2 challenge:1 program:3 max:11 including:2 video:1 belief:1 gool:1 unrealistic:1 difficulty:1 natural:1 circumvent:1 indicator:2 customized:1 improve:10 technology:1 library:1 martial:1 axis:2 auto:1 poland:1 interdependent:1 discovery:2 l2:1 carlsson:2 embedded:1 fully:3 par:1 loss:1 discriminatively:1 generation:1 limitation:1 interesting:1 shelhamer:1 h2:2 integrate:1 vectorized:1 consistent:2 undergone:1 sufficient:1 xiao:2 tiny:1 eccv:6 repeat:1 last:2 supported:1 hebert:5 infeasible:1 henriques:1 bias:2 allow:1 side:1 institute:1 wide:2 neighbor:1 conv3:1 fifth:1 sparse:3 benefit:1 van:2 boundary:7 dimension:2 default:1 world:6 transition:1 stand:2 rich:1 layer1:1 evaluating:2 collection:2 avg:3 refinement:1 forward:2 far:3 sot:1 compact:3 obtains:1 global:1 active:1 vasilache:1 corpus:1 unnecessary:1 assumed:1 discriminative:7 xi:4 spectrum:2 latent:2 iterative:2 khosla:2 table:2 scratch:3 promising:3 learn:15 transfer:3 channel:1 robust:1 rastegari:2 obtaining:1 hc:7 necessarily:1 separator:27 bottou:1 domain:2 da:1 jabri:1 aistats:2 joulin:1 spread:1 motivation:1 big:1 verifies:1 fair:2 complementary:1 xu:1 fashion:4 ehinger:1 sub:4 inferring:1 lie:1 tied:1 jmlr:1 third:1 extractor:1 learns:2 grained:9 donahue:1 choi:1 embed:2 specific:11 showing:1 insightful:1 mobahi:1 explored:1 svm:1 workshop:2 adding:2 effectively:1 gained:1 push:1 margin:11 easier:1 depicted:1 tc:12 explore:2 army:1 visual:10 vinyals:1 partially:1 recommendation:1 chang:1 applies:1 corresponds:2 satisfies:1 acm:2 ma:1 weston:1 viewed:3 sun397:2 shared:1 feasible:1 specifically:3 torr:1 hyperplane:2 averaging:1 denoising:1 called:1 total:3 discriminate:4 multimedia:1 experimental:2 saenko:1 formally:1 select:3 ilsvrc:2 berg:1 dissimilar:2 preparation:1 evaluate:4 tested:1 avoiding:1 |
5,979 | 6,409 | Learning under uncertainty: a comparison between
R-W and Bayesian approach
He Huang
Laureate Institute for Brain Research
Tulsa, OK, 74133
[email protected]
Martin Paulus
Laureate Institute for Brain Research
Tulsa, OK, 74133
[email protected]
Abstract
Accurately differentiating between what is truly unpredictably random and systematic changes that occur at random can have a profound effect on affect and cognition. To examine the underlying computational principles that guide different
learning behavior in an uncertain environment, we compared an R-W model and
a Bayesian approach in a visual search task with different volatility levels. Both
R-W model and the Bayesian approach reflected an individual?s estimation of the
environmental volatility, and there is a strong correlation between the learning rate
in R-W model and the belief of stationarity in the Bayesian approach in different
volatility conditions. In a low volatility condition, the R-W model indicates that learning rate positively correlates with lose-shift rate, but not choice optimality (an inverted U shape). The Bayesian approach indicates that the belief of environmental stationarity positively correlates with choice optimality, but not lose-shift rate (an inverted U shape). In addition, we showed that compared to Expert learners, individuals
with high lose-shift rate (sub-optimal learners) had significantly higher learning
rate estimated from R-W model and lower belief of stationarity from the Bayesian
model.
1 Introduction
Learning and using environmental statistics in choice-selection under uncertainty is a fundamental
survival skill. It has been shown that, in tasks with embedded environmental statistics, subjects
use a sub-optimal heuristic Win-Stay-Lose-Shift (WSLS) strategy (Lee et al. 2011), and strategies that can be interpreted using a Reinforcement Learning model (Behrens et al. 2007) or a Bayesian inference model (Mathys et al. 2014; Yu et al. 2014). A value-based model-free RL model assumes
subjects learn the values of chosen options using a prediction error that is scaled by a learning rate
(Rescorla and Wagner, 1972; Sutton and Barto, 1998). This learning rate can be used to measure
an individual?s reaction to environmental volatility (Browning et al. 2015). Higher learning rate is
usually associated with a more volatile environment, and a lower learning rate is associated with
a relatively stable situation. Different from traditional (model-free) RL model, Bayesian approach
assumes subjects make decisions by learning the reward probability distribution of all options based
on Bayes? rule, i.e., sequentially updating the posterior probability by combining the prior knowledge
and the new observation (likelihood function) over time. To examine how environment volatility may
influence this inference process, Yu & Cohen 2009 proposed to use a dynamic belief model (DBM)
that assumes subjects update their belief of the environmental statistics by balancing between the
prior belief and the belief of environmental stationarity, in which a belief of high stationarity will
lead to a relatively fixed belief of the environmental statistics, and vice versa.
Though formulated under different assumptions (Gershman 2015), those two approaches share similar
characteristics. First, both the learning rate in RL model and the belief of stationarity in DBM reflect
an individual's estimation of the environmental volatility. In a highly volatile environment, one will
be expected to have a high learning rate estimated by RL model, and a belief of low stationarity
estimated by DBM. Second, though standard RL only updates the chosen option's value and DBM
updates the posterior probability of all options, assuming maximization decision rule (Blakely, Starin,
& Poling, 1988), both models will lead to qualitatively similar choice preference. That is, the mostly
rewarded choice will have an increasing value in RL model, and increasing reward probability in
DBM, while the often-unrewarded choice will lead to a decreasing value in RL model, and decreasing
reward probability in DBM. Thirdly, they can both explain Win-Stay strategy. That is, under the
maximization assumption, choosing the option with maximum value in RL model, the rewarded
option (Win) will reinforce this choice (i.e. remain the option with the maximum value) and thus
will be chosen again (Stay). Similarly, choosing the option with the maximum reward probability
in DBM, the rewarded option (Win) will also reinforce this choice (i.e. remain the option with the
maximum reward probability) and thus will be chosen again (Stay).
While both approaches share some characteristics as mentioned above and have shown strong
evidence in explaining the overall subjects' choices in previous studies, it is unclear how they differ
in explaining other behavioral measures in tasks with changing reward contingency, such as decision
optimality, i.e., percentage of trials in which one chooses the most likely rewarded option, and
lose-shift rate, i.e., the tendency to follow the last target if the current choice is not rewarded. In a
task with changing reward contingency (e.g., 80%:20% to 20%:80%), decision optimality relies on
proper estimation of the environmental volatility, i.e., how frequent change points occur, and using
proper strategy, i.e. staying with the mostly likely option and ignoring the noise before change points
(i.e. not switching to the option with lower reward rate when it appears as the target). Thus it is
important to know how the parameter in each model (learning rate vs. the belief of stationarity) affects
decision optimality in tasks with different volatility. On the other hand, lose-shift can be explained
as a heuristic decision policy that is used to reduce a cognitively difficult problem (Kahneman &
Frederick, 2002), or as an artifact of learning that can be interpreted in a principled fashion (using RL:
Worthy et al. 2014; using Bayesian inference: Bonawitz et al. 2014). Intuitively, when experiencing
a loss in the current trial, in a high volatility environment where change points frequently occur, one
may tend to shift to the last target; while in a stable environment with fixed reward rates, one may tend
to stay with the option with the higher reward rate. That is, the frequency of using lose-shift strategy
should depend on how frequent the environment changes. Thus it is also important to examine how
the parameter in each model (learning rate vs. the belief of stationarity) affects lose-shift rate under
different volatility conditions.
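As a minimal sketch of these two behavioral measures (our own illustration, not code from the study; the array names are hypothetical), both can be computed from per-trial records in Python:

import numpy as np

def decision_optimality(choices, best_option):
    # Fraction of trials in which the most likely rewarded option was chosen;
    # best_option may be an array if the best location changes over trials.
    return np.mean(choices == best_option)

def lose_shift_rate(choices, targets, rewards):
    # Among unrewarded trials, the fraction whose next choice follows the last target.
    lose = (rewards[:-1] == 0)
    followed_last_target = (choices[1:] == targets[:-1])
    return followed_last_target[lose].mean() if lose.any() else 0.0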
However, so far little is known about how a model-free RL model and a Bayesian model differ in
explaining decision optimality and lose-shift in tasks with different levels of volatility. In addition,
it is unclear if parameters in each model can capture the individual differences in learning. For
example, if they can provide satisfactory explanation of individuals who always choose the same
choice while disregarding feedback information (No Learning), individuals who always choose the
most likely rewarded option (expert), and individuals who always use the heuristic win-stay-lose-shift
strategy. Here we aim to address the first question by investigating the relationship between decision
optimality and lose-shift rate with parameters estimated from an Rescorla-Wagner (R-W model) and
a Bayesian model in three volatility conditions (Fig 1a) in a visual search task (Yu et al. 2014): 1)
stable, where the reward contingency at three locations remains the same (relative reward frequency
at three locations: 1:3:9), 2) low volatility, where the reward contingency at three locations changes
(e.g. from 1:3:9 to 9:1:3) based on N (30, 1) (i.e. on average change points occur every 30 trials), and
3) high volatility, where the reward contingency changes based on N (10, 1) (i.e. on average change
points occur every 10 trials). For the second question, we will examine how the two models differ in
explaining three types of behavior: No Learning, Expert, and WSLS (Fig 1b).
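A sketch of how such target sequences could be generated (our reading of the design just described, not the authors' code; names and defaults are illustrative):

import numpy as np

def make_targets(n_trials=90, mean_cp=30, rng=np.random.default_rng(0)):
    probs = np.array([1.0, 3.0, 9.0]) / 13.0        # relative reward frequency 1:3:9
    order = rng.permutation(3)                      # current contingency mapping
    next_cp = int(round(rng.normal(mean_cp, 1)))    # change points ~ N(mean_cp, 1)
    targets = []
    for t in range(n_trials):
        if t == next_cp:                            # change point: remap probabilities
            order = rng.permutation(3)
            next_cp = t + int(round(rng.normal(mean_cp, 1)))
        targets.append(rng.choice(3, p=probs[order]))
    return np.array(targets)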
Figure 1: Example of volatility conditions and behavioral types in a visual search task. a. Example of three volatility conditions. b. Example of three behavioral types. Colors indicate target location in each trial. WSLS: Win-stay-Lose-shift (follow last target). Expert: always choose the most likely location. No Learning: always choose the same location.
2 Value-based RL model
Assuming a constant learning rate, the Rescorla-Wagner model takes the following form (Rescorla and Wagner, 1972; Sutton and Barto, 1998):

$$V_i^{t+1} = V_i^t + \eta \left( R^t - V_i^t \right) \quad (1)$$

where η is the learning rate, and R^t is the reward feedback (0 = no reward, 1 = reward) for the chosen option i in trial t. In this paper, we assume subjects use a softmax decision rule for all models as follows:

$$p(i) = \frac{e^{\beta V_i}}{\sum_j e^{\beta V_j}} \quad (2)$$

where β is the inverse decision temperature parameter, which measures the degree to which subjects use the estimated value in choosing among three options (i.e. a large β approximates a "maximization" strategy). This model has two free parameters {η, β} for each subject.
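A minimal simulation of Equations 1-2 (a sketch only; the default β = 20 follows the simulations below):

import numpy as np

def simulate_rw(targets, eta, beta=20.0, rng=np.random.default_rng(0)):
    V = np.zeros(3)                         # option values
    choices = []
    for target in targets:
        p = np.exp(beta * V)
        p /= p.sum()                        # softmax decision rule (Eq. 2)
        c = rng.choice(3, p=p)
        r = 1.0 if c == target else 0.0     # reward feedback
        V[c] += eta * (r - V[c])            # R-W update (Eq. 1)
        choices.append(c)
    return np.array(choices)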
2.1 Simulation in three volatility conditions
Learning rate is expected to increase as the volatility increases. To show this, we simulated three volatility conditions (Stable, Low and High volatility, Fig 2ab) and the results are summarized in Table 1. For each condition, we simulated 100 runs (90 trials per run) of agents' choices with η ranging from 0 to 1 with an increment of 0.1 and fixed β = 20. As is shown in Fig 2a, decision optimality in a stable and a low volatility environment has an inverted U shape as a function of learning rate η. It is not surprising, as in those conditions, where one should rely more on the long term statistics, if the learning rate is too high, then subjects will tend to shift more due to recent experience (Fig 2b), which would adversely influence decision optimality. On the other hand, in a high volatility environment, decision optimality has a linear correlation with the learning rate, suggesting that a higher learning rate leads to better performance. In fact, the optimal learning rate increases as the environmental volatility increases (i.e. the peak of the inverted U should shift to the right). Across all volatility conditions, lose-shift rate increases as learning rate increases (Fig 2b), except for learning rate = 0. This is not surprising, as a zero learning rate indicates subjects make random choices, thus lose-shift rate will be close to 1/3.
2.2 Simulation of three behavioral types
To examine if learning rate can be used to explain different types of learning behavior, we have
simulated three types of behavior (No Learning, Expert and WSLS, Fig 1b) in a low volatility
condition. In particular, we simulated 60 runs (90 trials per run) of target sequences with a relative
reward frequency 1:3:9 that changes based on N (30, 1), and generated three types of behavior for
each run.

Table 1: R-W model: Influence of learning rate η

Condition       | Decision optimality                            | Lose-shift rate
Stable          | Inverted U shape, η_optimal = low              | Positive linear
Low volatility  | Inverted U shape, η_optimal = medium           | Positive linear
High volatility | Positive linear relationship, η_optimal = high | Positive linear

For each simulated behavior type, the R-W model was fitted using Maximum Likelihood Estimation with η ranging from 0 to 1 with an increment of .025 and β = 20. Based on what we have
shown in 2.1, in a low volatility condition where decision optimality has an inverted U shape as a
function of learning rate, individuals that perform poorly will be expected to have a low learning
rate, and individuals that use heuristic WSLS strategy will be expected to have a high learning rate.
We confirmed this in simulation (Fig 2c): agents with the same choice over time (No Learning)
have the lowest learning rate, indicating their choices have little influence from the reward feedback.
Expert agents have the medium learning rate indicating the effect of long-term statistics. Agents that
strictly follow WSLS have the highest learning rate, indicating their choices are heavily impacted by
recent experience. Results for learning rate estimation of three behavioral types in stable and high
volatility condition can be seen in Supplementary Figure S1.
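A sketch of the grid-search fit used here (negative log-likelihood of observed choices under the R-W/softmax model; this is our illustration, not the authors' code):

import numpy as np

def rw_nll(eta, beta, choices, rewards, n_options=3):
    V, nll = np.zeros(n_options), 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * V)
        p /= p.sum()
        nll -= np.log(p[c])                 # likelihood of the observed choice
        V[c] += eta * (r - V[c])            # R-W update after feedback
    return nll

def fit_eta(choices, rewards, beta=20.0):
    grid = np.arange(0.0, 1.0 + 1e-9, 0.025)   # grid with increment .025, as above
    return min(grid, key=lambda e: rw_nll(e, beta, choices, rewards))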
Figure 2: R-W simulation. a. Percentage of trials in which agents chose the optimal choice (the most likely location) as a function of learning rate in three volatility conditions. b. Lose shift rate as a function of learning rate in three volatility conditions. c. Learning rate estimation of three simulated behavior types in low volatility condition. Error bars indicate standard error of the mean across simulation runs.
3 A Bayesian approach

Here we compare the above R-W model to a dynamic belief model that is based on a Bayesian hidden Markov model, where we assume subjects make decisions by using the inferred posterior target probability s^t based on the inferred hidden reward probability θ^t and the reward contingency mapping b^t (Equation 3). To examine the influence of volatility, we assume (θ^t, b^t) has probability α of remaining the same as in the last trial, and 1-α of being drawn from the prior distribution p_0(θ^t, b^t) (Equation 4). Here α represents an individual's estimation of environmental stationarity, which contrasts with learning rate η in R-W model. For model details, please refer to Yu & Huang 2014.
$$P(s^t \mid \boldsymbol{\theta}^t, b^t) =
\begin{cases}
(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}), & b^k = 1 \\
(\theta_h, \theta_m, \theta_l), & b^k = 2 \\
(\theta_h, \theta_l, \theta_m), & b^k = 3 \\
(\theta_m, \theta_h, \theta_l), & b^k = 4 \\
(\theta_m, \theta_l, \theta_h), & b^k = 5 \\
(\theta_l, \theta_h, \theta_m), & b^k = 6 \\
(\theta_l, \theta_m, \theta_h), & b^k = 7
\end{cases} \quad (3)$$

$$P(\boldsymbol{\theta}^t, b^t \mid s^{t-1}) = \alpha P(\boldsymbol{\theta}^{t-1}, b^{t-1} \mid s^{t-1}) + (1 - \alpha)\, p_0(\boldsymbol{\theta}^t, b^t) \quad (4)$$

where θ^t is the hidden reward probability, b^t is the reward contingency mapping of the probability to the options, and s^{t-1} is the target history from trial 1 to trial t-1. We also used the softmax decision rule here (Equation 2), thus this model also has two free parameters {α, β}.
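A compact sketch of the DBM update (Equations 3-4) over the seven contingency mappings, with θ held fixed at (9, 3, 1)/13 to keep the illustration small (the full model also infers θ; this simplification and all names are ours):

import numpy as np

th_h, th_m, th_l = 9/13, 3/13, 1/13
LIK = np.array([                     # P(target = location | b), rows b = 1..7 (Eq. 3)
    [1/3, 1/3, 1/3],
    [th_h, th_m, th_l], [th_h, th_l, th_m],
    [th_m, th_h, th_l], [th_m, th_l, th_h],
    [th_l, th_h, th_m], [th_l, th_m, th_h],
])
p0 = np.full(7, 1/7)                 # generic prior over b

def dbm_step(post, target, alpha):
    prior = alpha * post + (1 - alpha) * p0    # Eq. 4: stay vs. redraw from prior
    post = prior * LIK[:, target]              # Bayes update with Eq. 3 likelihood
    return post / post.sum()

def predictive(post):
    return post @ LIK                          # predicted probability of each location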
3.1 Simulation in three volatility conditions
Belief of stationarity α is expected to decrease as the volatility increases, as subjects are expected to depend more on the recent trials to predict the next outcome (Yu & Huang 2014). We have shown this in three simulated conditions under different volatility (Fig 3ab) and the results are summarized in Table 2. For each simulated condition (stable, low and high volatility), we simulated 100 runs (90 trials per run) of agents' choices with α ranging from 0 to 1 and fixed β = 20. As is shown in Fig 3a, in a stable condition, decision optimality increases as α increases, indicating a fixed belief mode (i.e. no change of the environmental statistics) is optimal in this condition. In the other two volatile environments, decision optimality also increases as α increases, but both drop as α approaches 1. This is reasonable, as in volatile environments a belief of high stationarity is no longer optimal. On the other hand, lose shift rate in all conditions (Fig 3b) has an inverted U shape as a function of α, where α = 0 leads to a random lose shift rate (1/3), and α = 1 (fixed belief model) leads to the minimal lose shift rate.
3.2 Simulation of three behavioral types
To examine if an individual's belief of environmental stationarity can be used to explain different types of learning behavior, we fit DBM using Maximum Likelihood Estimation with the simulated behavioral data in 2.2. DBM results suggest that WSLS has a significantly lower belief of stationarity compared to Expert behavior (Fig 3c), which is consistent with the higher volatility estimation reflected by a higher learning rate than Expert from R-W model (Fig 2c). Simulation results also suggest that No Learning agents have a significantly lower belief of stationarity than Expert learners, but not different from WSLS. However, the comparison of model accuracy between R-W and DBM (Fig 3d) shows DBM outperforms R-W in predicting Expert and WSLS behavior (p = .000), but it does not perform as well in No Learning behavior, where R-W has significantly better performance (p = .000). Model accuracy is measured as the percentage of trials in which the model correctly predicted subjects' choice. Thus further investigation is needed to examine the validity of using DBM in explaining poor learners' choice behavior in this task. Results for α in stable and high volatility conditions can be seen in Supplementary Figure S2.
Figure 3: DBM simulation. a. Percentage of trials in which agents chose the optimal choice (the most likely location) as a function of α in three volatility conditions. b. Lose shift rate as a function of α in three volatility conditions. c. α estimation of three types of behavior in low volatility condition (N(30, 1)). d. Model performance comparison between R-W and DBM in low volatility condition.
Table 2: DBM: Influence of stationarity belief α

Condition       | Decision optimality                             | Lose-shift rate
Stable          | Positive linear relationship, α_optimal = 1     | Inverted U shape
Low volatility  | Inverted U shape, α_optimal = high              | Inverted U shape
High volatility | Inverted U shape, α_optimal = medium            | Inverted U shape

4 Experiment

We applied the above models to two sets of data in a visual search task: (1) stable condition with no change points (from Yu & Huang, 2014) and (2) low volatility condition with change points based on N(30, 1). For both data sets, we fitted an R-W model and DBM for each subject, and compared learning rate η in R-W and estimation of stochasticity (1-α) in DBM, as well as how they correlate with decision optimality and lose-shift rate. For (2), we also looked at how model parameters differ in explaining No Learning, Expert and WSLS behavior.
4.1 Results

4.1.1 Stable condition
In a visual search task with relative reward frequency 1:3:9 but no change points, we found a significant correlation between η estimated from R-W model and 1-α from DBM (r^2 = .84, p = .0001, Fig 4a), which is consistent with the hypothesis that both the learning rate in R-W model and the belief of stochasticity in the Bayesian approach reflect subjects' estimation of environmental volatility. We also examined the relationship between decision optimality (optimal choice%) (Fig 4b) and lose-shift rate (Fig 4c) with η and 1-α respectively. As is shown in Fig 4b, in this stable condition, decision optimality decreases as the learning rate increases and as the belief of stochasticity increases, which is consistent with Fig 2a (red, for η ≥ .1) and Fig 3a (red). For lose-shift rate, there is a significant positive relationship between lose-shift% and η, as shown previously in Fig 2b (red), and an inverted U shape as suggested in Fig 3b (red). There are no significant differences in the prediction accuracy of R-W model and DBM (R-W: .81 ± .03, DBM: .81 ± .03) or inverse decision parameters (p > .05).
Figure 4: Stable condition: R-W vs. DBM. a. Relationship between learning rate η in R-W model and 1-α in DBM. b. Optimal choice% as a function of η and 1-α. c. Lose-shift% as a function of η and 1-α.
4.1.2 Low volatility condition
In a visual search task with relative reward frequency 1:3:9, and change of the reward contingency
based on N (30, 1) (3 blocks of 90 trials/block), we looked at the correlation between model parameters, their correlation with decision optimality and lose-shift rate, as well as how model parameters
differ in explaining different types of behavior.
Subjects (N=207) were grouped into poor learners (optimal choice% < .5, n = 63), good learners (.5 ≤ optimal choice% ≤ 9/13, n = 108) and expert learners (optimal choice% > 9/13, n = 36) based on their performance (percentage of trials started from the most likely rewarded location). Consistent with what we have shown previously (Fig 3d, No Learning), R-W model outperformed DBM in poor learners (p = .000). As in the stable condition (Fig 4a), among good and expert learners, there is a significant positive correlation between η and 1-α (Fig 5b, r^2 = .35, p = .000). The relationship
between decision optimality and lose-shift% is shown in Fig 5c. As is shown, in this task where
change points occur with relatively low frequency (N (30, 1)), lose shift% has an inverted U shape as
a function of optimal choice%, indicating that a high lose-shift rate does not necessarily lead to better
performance.
Figure 5: a. Prediction accuracy in poor, good and expert learners. b. Correlation between η from R-W and 1-α from DBM. c. Correlation between optimal choice% and lose-shift%.
Next, we looked at how each model parameter correlates with decision optimality and lose-shift rate. For decision optimality (Fig 6ab), consistent with the simulation results, it has an inverted U shape as a function of learning rate η in R-W model (Fig 6a), while it is positively correlated with α in DBM (Fig 6b). For lose-shift rate (Fig 6cd), also consistent with the simulation results, it is positively correlated with η in R-W (Fig 6c), while having an inverted U shape as a function of α in DBM (Fig 6d).
Figure 6: Decision optimality and lose-shift rate. a. Optimal choice% as a function of η in R-W model. b. Optimal choice% as a function of α in DBM. c. Lose-shift% as a function of η in R-W model. d. Lose-shift% as a function of α in DBM.
In addition, we examined how poor learners, expert learners and individuals with a high lose-shift rate (LS, lose-shift% > .5 and optimal choice% < 9/13, n = 51) differ in model parameters (Figure 7). Consistent with what we have shown (Fig 2c), those three behavioral types had significantly different learning rates (one-way ANOVA, p = .000) and each condition is significantly different from each other (p = .000 for t tests across conditions), in which poor learners had the lowest learning rate while subjects with high lose-shift rate had the highest learning rate (Fig 7a). Belief of stationarity from DBM also confirmed what we have shown (Fig 3c): expert subjects had significantly higher belief of stationarity (one-way ANOVA, p = .003, with p = .004 for the t test compared to Poor subjects and p = .000 compared to LS subjects). It also suggested that poor learners did not differ from LS subjects (p > .05), though DBM had a lower accuracy in predicting poor learners' choices (Fig 5a). No significant difference of inverse decision parameter β was found between R-W and DBM for expert and LS subjects (p > .05), but it was significantly lower in poor learners estimated in DBM (Supplementary Figure S3).
Figure 7: Parameter estimation for different behavioral types. a. Learning rate in R-W model. b. Belief of stationarity in DBM.
5 Discussion
In this paper we compared an R-W model and a Bayesian model in a visual search task across different volatility conditions, and examined parameter differences for different types of learning behavior. We have shown in simulation that both the learning rate η estimated from R-W and the belief of stochasticity 1-α estimated from DBM have a strong positive correlation with increasing volatility, and confirmed that they are highly correlated with behavioral data (Fig 4a and Fig 5b). This suggests that both models are able to reflect an individual's estimation of environmental volatility. We also have shown in simulation that R-W model can differentiate No Learning, Expert and WSLS behavioral types with (increasing) learning rate, and DBM can differentiate Expert and WSLS behavioral types with (increasing) belief of stochasticity, and confirmed this with behavioral data in a low volatility condition. A few other things to note here:
Correlation between decision optimality and lose-shift rate. Here we have provided a model-based explanation of using the lose-shift strategy and how it is related to decision optimality. 1) R-W model suggests that, across different levels of environmental volatility, the frequency of using lose-shift is positively correlated with learning rates (Fig 2b). However, decision optimality is NOT positively correlated with lose-shift rate across conditions. 2) DBM suggests that, across different levels of environmental volatility, there is an inverted U shape relationship between the frequency of using lose-shift and one's belief of stationarity (Fig 3b), and a close-to-linear relationship between decision optimality and the belief of stationarity in a low volatility environment (Fig 6b).
Implications for model selection. We have shown that both models have comparable prediction accuracy for individuals with good performance, but R-W model is better in explaining poor learners' choice. There are several possible reasons: 1) the Bayesian model assumed subjects would use the feedback information to update the posterior probability of the target reward distribution. Thus for "poor" learners who did not use the feedback information, this assumption is no longer appropriate. 2) the R-W model assumed subjects would only update the chosen option's value, thus error trials may have less influence (especially in the early stages, with low learning rate). That is, a 0-value option will remain at 0 if not rewarded, and the highest-value option will remain the highest-value option even if not rewarded. Therefore, R-W model may capture poor learners' search pattern better with a low learning rate.
Future directions. For future work, we will modify the current R-W model with a dynamic learning rate that changes based on value estimation, and modify the current DBM model with a parameter that controls how much feedback information is used in updating the posterior belief and a hyper-parameter that models the dynamics of α.
Acknowledgements
We thank Angela Yu for sharing the data in Yu et al. 2014, and for allowing us to use it in this paper.
References
[1] Lee, M. D., Zhang, S., Munro, M., & Steyvers, M. (2011). Psychological models of human and optimal
performance in bandit problems. Cognitive Systems Research, 12(2), 164-174.
[2] Behrens, T. E., Woolrich, M. W., Walton, M. E., & Rushworth, M. F. (2007). Learning the value of
information in an uncertain world. Nature neuroscience, 10(9), 1214-1221.
[3] Mathys, C. D., Lomakina, E. I., Daunizeau, J., Iglesias, S., Brodersen, K. H., Friston, K. J., & Stephan, K. E.
(2014). Uncertainty in perception and the Hierarchical Gaussian Filter. Front Hum Neurosci, 8.
[4] Yu, A. J., & Huang, H. (2014). Maximizing masquerading as matching in human visual search choice
behavior. Decision, 1(4), 275.
[5] Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness
of reinforcement and nonreinforcement. Classical conditioning II: Current research and theory, 2, 64-99.
[6] Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. MIT press.
[7] Browning, M., Behrens, T. E., Jocham, G., O'Reilly, J. X., & Bishop, S. J. (2015). Anxious individuals have
difficulty learning the causal statistics of aversive environments. Nature neuroscience, 18(4), 590-596.
[8] Yu, A. J., & Cohen, J. D. (2009). Sequential effects: superstition or rational behavior? In Advances in neural
information processing systems (pp. 1873-1880).
[9] Gershman, S. J. (2015). A Unifying Probabilistic View of Associative Learning. PLoS Comput Biol, 11(11),
e1004567.
[10] Blakely, E., Starin, S., & Poling, A. (1988). Human performance under sequences of fixed-ratio schedules:
Effects of ratio size and magnitude of reinforcement. The Psychological Record, 38(1), 111.
[11] Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive
judgment. Heuristics and biases: The psychology of intuitive judgment, 49.
[12] Worthy, D. A., & Maddox, W. T. (2014). A comparison model of reinforcement-learning and win-stay-loseshift decision-making processes: A tribute to WK Estes. Journal of mathematical psychology, 59, 41-49.
[13] Bonawitz, E., Denison, S., Gopnik, A., & Griffiths, T. L. (2014). Win-Stay, Lose-Sample: A simple
sequential algorithm for approximating Bayesian inference. Cognitive psychology, 74, 35-65.
| 6409 |@word trial:21 simulation:13 p0:2 substitution:1 outperforms:1 reaction:1 current:5 com:1 comparing:4 surprising:2 gmail:1 shape:17 drop:1 update:5 v:4 denison:1 record:1 revisited:1 location:10 preference:1 org:1 zhang:1 mathematical:1 profound:1 behavioral:14 expected:6 behavior:18 frequently:1 examine:8 brain:2 decreasing:2 td:1 little:2 increasing:5 spain:1 provided:1 underlying:1 medium:3 lowest:2 what:5 interpreted:2 every:2 scaled:1 control:1 before:1 positive:8 modify:2 switching:1 sutton:3 chose:2 examined:2 suggests:3 range:3 block:2 significantly:7 matching:1 reilly:1 griffith:1 suggest:2 close:2 selection:2 influence:7 maximizing:1 vit:3 l:9 rule:4 steyvers:1 variation:1 increment:2 behrens:3 target:10 heavily:1 experiencing:1 hypothesis:1 updating:2 capture:2 plo:1 decrease:2 highest:4 mentioned:1 principled:1 environment:13 reward:27 dynamic:4 depend:2 learner:19 kahneman:2 anxious:1 hyper:1 choosing:3 outcome:1 heuristic:5 supplementary:3 statistic:8 associative:1 differentiate:2 sequence:2 rescorla:5 frequent:2 combining:1 poorly:1 iglesias:1 intuitive:2 table1:1 walton:1 staying:1 volatility:66 measured:1 strong:3 predicted:1 indicate:2 differ:7 direction:1 gopnik:1 attribute:1 filter:1 human:3 investigation:1 strictly:1 cognition:1 dbm:51 mapping:2 predict:1 early:1 estimation:15 outperformed:1 lose:55 grouped:1 vice:1 reflects:1 mit:1 always:5 gaussian:1 aim:1 brodersen:1 barto:3 indicates:3 likelihood:3 contrast:1 inference:4 browning:2 bt:8 hidden:3 bandit:1 overall:1 among:2 softmax:2 having:1 represents:1 yu:10 future:2 superstition:1 few:1 individual:16 cognitively:1 ab:3 stationarity:28 highly:2 truly:1 implication:1 experience:2 causal:1 minimal:1 uncertain:2 fitted:2 psychological:2 eta:3 maximization:3 too:1 front:1 chooses:1 st:5 fundamental:1 peak:1 stay:9 systematic:1 lee:2 probabilistic:1 modelbased:1 again:2 reflect:2 woolrich:1 huang:5 choose:4 adversely:1 cognitive:2 expert:28 suggesting:1 summarized:2 wk:1 representativeness:1 vi:1 view:1 red:4 rushworth:1 bayes:1 option:22 accuracy:8 characteristic:2 who:4 judgment:2 bayesian:17 accurately:1 confirmed:4 history:1 acc:1 explain:3 sharing:1 frequency:8 pp:1 associated:2 rational:1 knowledge:1 color:1 schedule:1 appears:1 ok:2 higher:7 follow:3 reflected:2 impacted:1 though:3 stage:1 correlation:10 hand:4 mode:1 artifact:1 effect:4 validity:1 satisfactory:1 please:1 temperature:1 volatile:4 rl:11 cohen:2 conditioning:2 thirdly:1 he:1 approximates:1 refer:1 significant:6 versa:1 similarly:1 stochasticity:5 had:6 stable:19 longer:2 add:1 posterior:5 showed:2 recent:3 rewarded:10 inverted:18 seen:2 nonreinforcement:1 ii:1 long:2 prediction:4 lomakina:1 addition:3 daunizeau:1 subject:24 tend:3 thing:1 effectiveness:1 unrewarded:1 stephan:1 affect:3 fit:1 psychology:3 reduce:1 shift:57 munro:1 rw:12 percentage:5 s3:1 estimated:9 neuroscience:2 correctly:1 per:3 drawn:1 changing:2 anova:2 run:8 inverse:3 uncertainty:3 reasonable:1 decision:36 comparable:1 occur:6 optimality:27 pavlovian:1 martin:1 relatively:3 poor:17 remain:4 across:7 maddox:1 making:1 s1:1 explained:1 intuitively:1 equation:3 remains:1 previously:2 needed:1 know:1 wsls:16 hierarchical:1 appropriate:1 assumes:3 remaining:1 angela:1 estes:1 unifying:1 especially:1 approximating:1 classical:1 question:2 looked:3 hum:1 strategy:9 rt:2 traditional:1 unclear:2 win:8 thank:1 reinforce:2 simulated:10 reason:1 assuming:2 relationship:9 ratio:2 difficult:1 mostly:2 tribute:1 aversive:1 proper:2 policy:1 perform:2 allowing:1 observation:1 
markov:1 situation:1 worthy:2 inferred:2 bk:7 errorbars:1 barcelona:1 nip:1 address:1 frederick:2 suggested:2 able:1 usually:1 pattern:1 perception:1 explanation:2 belief:39 friston:1 rely:1 difficulty:1 predicting:2 started:1 prior:3 acknowledgement:1 relative:4 embedded:1 loss:1 gershman:2 contingency:8 degree:1 agent:8 consistent:7 principle:1 share:2 balancing:1 cd:1 last:4 free:5 guide:1 bias:1 institute:2 explaining:8 wagner:5 differentiating:1 feedback:6 world:1 qualitatively:1 reinforcement:5 far:1 correlate:4 alpha:5 skill:1 laureate:2 sequentially:1 investigating:1 assumed:2 blakely:2 search:9 table:3 bonawitz:2 learn:1 nature:2 ignoring:1 necessarily:1 vj:1 did:2 paulus:1 neurosci:1 s2:1 noise:1 positively:6 fig:40 je:1 fashion:1 sub:2 comput:1 bishop:1 r2:2 disregarding:1 survival:1 evidence:1 sequential:2 magnitude:1 likely:8 visual:8 environmental:18 relies:1 formulated:1 change:17 except:1 unpredictably:1 tendency:1 indicating:5 biol:1 correlated:5 |
5,980 | 641 | Discriminability-Based Transfer between
Neural Networks
L. Y. Pratt
Department of Mathematical and Computer Sciences
Colorado School of Mines
Golden, CO 80401
[email protected]
Abstract
Previously, we have introduced the idea of neural network transfer,
where learning on a target problem is sped up by using the weights
obtained from a network trained for a related source task. Here,
we present a new algorithm, called Discriminability-Based Transfer
(DBT), which uses an information measure to estimate the utility
of hyperplanes defined by source weights in the target network,
and rescales transferred weight magnitudes accordingly. Several
experiments demonstrate that target networks initialized via DBT
learn significantly faster than networks initialized randomly.
1 INTRODUCTION
Neural networks are usually trained from scratch, relying only on the training data
for guidance. However, as more and more networks are trained for various tasks,
it becomes reasonable to seek out methods that avoid "reinventing the wheel", and
instead are able to build on previously trained networks' results. For example, consider a speech recognition network that was only trained on American English speakers. However, for a new application, speakers might have a British accent. Since
these tasks are sub-distributions of the same larger distribution (English speakers),
they may be related in a way that. can be exploited to speed up learning on the
British network, compared to when weights are randomly initialized.
We have previously introduced the question of how trained neural networks can be
"recycled' in this way [Pratt et al., 1991]; we've called this the transfer problem.
The idea of transfer has strong roots in psychology (as discussed in [Sharkey and
Sharkey, 1992]), and is a standard paradigm in neurobiology, where synapses almost
always come "pre-wired".
There are many ways to formulate the transfer problem. Retaining performance on
the source task may or may not be important. When it is, the problem has been
called sequential learning, and has been explored by several authors (cf. [McCloskey
and Cohen, 1989]). Our paradigm assumes that source task performance is not
important, though when the source task training data is a subset of the target
training data, our method may be viewed as addressing sequential learning as well.
Transfer knowledge can also be inserted into several different entry points in a
back-propagation network (see [Pratt, 1993a]). We focus on changing a network's
initial weights; other studies change other aspects, such as the objective function
(cf. [Thrun and Mitchell, 1993, Naik et al., 1992]).
Transfer methods may or may not use back-propagation for target task training. Our formulation does, because this allows it to degrade, in the worst case of no source task relevance, to back-propagation training on the target task with randomly initialized weights. An alternative approach is described by [Agarwal et al., 1992].
Several studies have explored literal transfer in back-propagation networks, where
the final weights from training on a source task are used as the initial conditions for
target training (cf. [Martin, 1988]). However, these studies have shown that often
networks will demonstrate worse performance after literal transfer than if they had
been randomly initialized.
This paper describes the Discriminability-Based Transfer (DBT) algorithm, which
overcomes problems with literal transfer. DBT achieves the same asymptotic accuracy as randomly initialized networks, and requires substantially fewer training
updates. It is also superior to literal transfer, and to just using the source network
on the target task.
2
ANALYSIS OF LITERAL TRANSFER
As mentioned above, several studies have shown that networks initialized via literal
transfer give worse asymptotic perlormance than randomly initialized networks. To
understand why. consider the situation when only a subset. of the source network
input-to-hidden (IH) layer hyperplanes are relevant to the target problem. as illustrated in Figure 1. We've observed that some hyperplanes initialized by source
network training don't shift out of their initial positions, despite the fact that they
don't help to separate the target training data. The weights defining such hyperplanes often have high magnitudes [Dewan and Sontag, 1990]. Figure 2 (a) shows
a simulation of such a situation, where a hyperplane that has a high magnitude, as
if it came from a source network, causes learning to be slowed down. 1
Analysis of the back-propagation weight update equations reveals that high source
weight magnitudes retard back-propagation learning on the target task because this
1 Neural network visualization will be explored more thoroughly in an upcoming pape...
An X-based animator is available from the author via anonymous ftp. Type "archie ha".
205
206
Pratt
Source training data
0.9
:0 !
0
....?....?.... .Q~ ?... ; .?.....?...?...?.?....?.......?.?...?.??.......?.?..?........Q....????...??.?..?..??..
o f 1./ 0
o i 1 ./ 0
0.1
Target training data
~ Hyperplanes
0
1 0
should be
0 1 0
! 1!
retained
1
?????????????if???f??'O?????????????????????????????????......................... 0-?......0............ .
;...j.~_ Hyperplanes need to move
0
f
0.1
0.5
Feature 1
0.9
Figure 1: Problem Illustrating the need for DBT. The source and target tasks are
identical, except that the target task has been shifted along one axis, as represented
by the training data shown. Because of this shift, two of the source hyperplanes are
helpful in separating class-O from class-l data in the target task, and two are not.
equation is not scaled relative to weight magnitudes. Also, the weight update equation contains the factor y(1 - y) (where y is a unit's activation), which is small
for large weights. Considering this analysis, it might at first appear that a simple
solution to the problem with literal transfer is to uniformly lower all weight magnitudes. However, we have also observed that hyperplanes in separating positions will
move unless they are given high weight magnitudes. To address both of these problems, we must rescale hyperplanes so that useful ones are defined by high-magnitude
weights and less useful hyperplanes receive low magnitudes. To implement such a
method, we need a metric for evaluating hyperplane utility.
3 EVALUATING CLASSIFIER COMPONENTS
We borrow the IM metric for evaluating hyperplanes from decision tree induction [Quinlan, 1983]. Given a set of training data and a hyperplane that crosses through it, the IM function returns a value between 0 and 1, indicating the amount that the hyperplane helps to separate the data into different classes.

The formula for IM, for a decision surface in a multi-class problem, is:

$$IM = \frac{1}{N}\left(\sum_i \sum_j x_{ij} \log x_{ij} - \sum_i x_{i\cdot} \log x_{i\cdot} - \sum_j x_{\cdot j} \log x_{\cdot j} + N \log N\right)$$

[Mingers, 1989]. Here, N is the number of patterns, i is either 0 or 1, depending on the side of a hyperplane on which a pattern falls, j indexes over all classes, x_ij is the count of class j patterns on side i of the hyperplane, x_i. is the count of all patterns on side i, and x.j is the total number of patterns in class j.
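A sketch of this computation in Python (natural logarithms, with the convention 0 log 0 = 0; the function and variable names are ours):

import numpy as np

def im_metric(side, labels):
    # side: 0/1 hyperplane side per pattern; labels: class id per pattern.
    classes = np.unique(labels)
    x = np.array([[np.sum((side == i) & (labels == j)) for j in classes]
                  for i in (0, 1)], dtype=float)
    N = x.sum()
    xlogx = lambda v: np.sum(v[v > 0] * np.log(v[v > 0]))
    return (xlogx(x) - xlogx(x.sum(axis=1)) - xlogx(x.sum(axis=0))
            + N * np.log(N)) / N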
4 THE DBT ALGORITHM
The DBT algorithm is shown in Figure 3. It inputs the target training data and
weights from the source network, along with two parameters C and S (see below).
DBT outputs a modified set of weights, for initializing training on the target task.
Figure 2 (b) shows how the problem of Figure 2 (a) was repaired via DBT.
Figure 2: Hyperplane movement speed in literal transfer, compared to DBT. Each image in this figure shows the hyperplanes implemented by IH weights at a different epoch of training. Hidden unit 1's hyperplane is a solid line; HU2's is a dotted line, and HU3's hyperplane is shown as a dashed line. In (a) note how HU1 seems fixed in place. Its high magnitude causes learning to be slow (taking about 3100 epochs to converge). In (b) note how DBT has given HU1 a small magnitude, allowing it to be flexible, so that the training data is separated by epoch 390. A randomly initialized network on this problem takes about 600 epochs.
Input:
  Source network weights
  Target training data
  Parameters: C (cutoff factor), S (scaleup factor)
Output:
  Initial weights for target network, assuming same topology as source network
Method:
  For each source network hidden unit i
    Compare the hyperplane defined by incoming weights to i to the target training data,
    calculating IM_i (in [0,1])
  Rescale IM_i values so that the largest has value S. Put result in s_i.
  For IM_i's that are less than C
    If the highest magnitude ratio between weights defining hyperplane i is > 100.0,
      reset weights for that hyperplane randomly
    Else uniformly scale down the hyperplane to have low-valued weights (maximum
      magnitude of 0.5), but to be in the same position.
  For each remaining IH hidden unit i
    For each weight w_ji defining hyperplane i in target network
      Let target weight w_ji = source weight w_ji x s_i
  Set hidden-to-output target network weights randomly in [-0.5, 0.5]

Figure 3: The Discriminability-Based Transfer (DBT) Algorithm.
DBT modifies the weights defining each source hyperplane to be proportional to the IM value, according to an input parameter, S. DBT is based on the idea that the best initial magnitude M_t for a target hyperplane is M_t = S x M_s x IM_t, where S ("scaleup") is a constant of proportionality, M_s is the magnitude of a source network hyperplane, and IM_t is the discriminability of the source hyperplane on the target training data. We assume that this simple relationship holds over some range of IM_t values. A second parameter, C, determines a cut-off in this relationship: source hyperplanes with IM_t < C receive very low magnitudes, so that the hyperplanes are effectively equivalent to those in a randomly initialized network. The use of the C parameter was motivated by empirical experiments that indicated that the multiplicative scaling via S was not adequate.
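A rough sketch of the rescaling step of Figure 3 (thresholds as stated there; bias handling and other details are omitted, and the exact scaling used in the original may differ):

import numpy as np

def dbt_rescale(W_ih, im, S, C, rng=np.random.default_rng(0)):
    # W_ih: (n_hidden, n_inputs) source IH weights; im: IM score per hidden unit.
    W = W_ih.copy()
    s = S * im / im.max()                    # largest IM receives scale factor S
    for i in range(len(im)):
        mags = np.abs(W[i])
        if im[i] < C:                        # low-utility hyperplane
            if mags.max() / max(mags.min(), 1e-12) > 100.0:
                W[i] = rng.uniform(-0.5, 0.5, W[i].shape)   # reset randomly
            else:
                W[i] *= 0.5 / max(mags.max(), 1e-12)        # same position, low magnitude
        else:
            W[i] *= s[i]                     # useful hyperplane: rescale by s_i
    return W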
To determine S and C for a particular source and target task, we ran DBT several
times for a small number of epochs with different S and C values. We chose the S
and C values that yielded the best average TSS (total sum of squared errors) after
a few epochs. We used local hill climbing in average TSS space to decide how to
move in S, C space.
DBT randomizes the weights in the network's hidden-to-output (HO) layer. See
[Sharkey and Sharkey, 1992] for an extension to this work showing that literal
transfer of HO weights might also be effective.
5 EMPIRICAL RESULTS
DBT was evaluated on seven tasks: female-to-male speaker transfer on a 10-vowel recognition task (PB), a 3-class subset of the PB task (PB123), transfer from all females to a single male in the PB task (Onemale), transfer for a heart disease diagnosis problem from Hungarian to Swiss patients (Heart-HS), transfer for the same task from patients in California to Swiss patients (Heart-VAS), transfer from a subset of DNA pattern recognition examples to a superset (DNA), and transfer
from a subset of chess endgame problems to a superset (Chess). Note that the
DNA and chess tasks effectively address the sequential learning problem; as long as
the source data is a subset of the target data, the target network can build on the
previous results.
DBT was compared to randomly initialized networks on the target task. We
measured generalization performance in both conditions by using 10-way cross-validation on 10 different initial conditions for each target task, resulting in 100 different runs for each of the two conditions, and for each of the seven tasks. Our empirical methodology controlled carefully for initial conditions, hidden unit count, back-propagation parameters η (learning rate) and α (momentum), and DBT parameters S and C.
5.1 SCENARIOS FOR EVALUATION
There are at least two different practical situations in which we may want to speed up learning. First, we may have a limited amount of computer time, all of which will be used because we have no way of detecting when a network's performance has reached some criterion. In this case, if our speed-up method (i.e. DBT) is significantly superior to a baseline for a large proportion of epochs during training, then the probability that we'll have to stop during that period of significant superiority is high. If we do stop at an epoch when our method is significantly better, then this justifies it over the baseline, because the resulting network has better performance.

A second situation is when we have some way of detecting when performance is "good enough" for an application. In contrast to the above situation, here a DBT network may be run for a shorter time than a baseline network, because it reaches this criterion faster. In this case, the number of epochs of DBT significant superiority is less important than the speed with which it achieves the criterion.
5.2 RESULTS
To evaluate networks according to the first scenario, we tested for statistical significance at the 99.0% level between the 100 DBT and the 100 randomly initialized
networks at each training epoch. We found (1) that asymptotic DBT performance scores were the same as for random networks and (2) that DBT was superior for
much of the training period. Figure 4 (a) shows the number of weight updates for
which a significant difference was found for the seven tasks.
For the second scenario, we also found (3) that DBT networks required many fewer
epochs to reach a criterion performance score. For this test, we found the last
significantly different epoch between the two methods. Then we measured the
number of epochs required to reach 98%, 95%, and 66% of that level. The number
of weight updates required for DBT and randomly initialized networks to reach the
98% criterion are shown in Figure 4 (b). Note that the y axis is logarithmic, so,
for example, over 30 million weight updates were saved by using DBT instead of
random initialization in the PB123 problem. Results for the 95% and 66% criteria
also showed DBT to be at least as fast as random initialization for every task.
Using the same tests described for DBT above, we also tested literal networks on
the seven transfer tasks. We found that, unlike DBT, literal networks reached significantly worse asymptotic performance scores than randomly initialized networks.

Figure 4: Summary of DBT Empirical Results. (a) Time for significant epoch difference: DBT vs. random. (b) Time required to train to 98% criterion.
Literal networks also learned slower for some tasks. These results justify the use of
the more complicated DBT method over literal transfer.
We also evaluated the source networks directly on the target tasks, without any
back-propagation training on the target training data. Scores were significantly and
substantially worse than random networks. This result indicates that the transfer
scenarios we chose for evaluation were nontrivial.
6 CONCLUSION
We have described the DBT algorithm for transfer between neural networks. 2 DBT
demonstrated substantial and significant learning speed improvement over randomly
initialized networks in 6 out of 7 tasks studied (and the same learning speed in the
other task). DBT never displayed worse asymptotic performance than a randomly
initialized network. We have also shown that DBT is superior to literal transfer,
and to simply using the source network on the target task.
Acknowledgements
The author is indebted to John Smith, Gale Martin, and Anshu Agarwal for their valuable comments on this paper, and to Jack Mostow and Haym Hirsh for their
contribution to this research program.
2See [Pratt, 1993b] for more details.
References
[Agarwal et al., 1992] A. Agarwal, R. J. Mammone, and D. K. Naik. An on-line training algorithm to overcome catastrophic forgetting. In Intelligence Engineering Systems through Artificial Neural Networks, volume 2, pages 239-244. The American Society of Mechanical Engineers, ASME Press, 1992.
[Dewan and Sontag, 1990] Hasanat M. Dewan and Eduardo Sontag. Using extrapolation to speed up the backpropagation algorithm. In Proceedings of the International Joint Conference on Neural Networks, Washington, DC, volume 1, pages 613-616. IEEE Publications, Inc., January 1990.
[Martin, 1988] Gale Martin. The effects of old learning on new in Hopfield and Backpropagation nets. Technical Report ACA-HI-019, Microelectronics and Computer Technology Corporation (MCC), 1988.
[McCloskey and Cohen, 1989] Michael McCloskey and Neal J. Cohen. Catastrophic interference in connectionist networks: the sequential learning problem. The psychology of learning and motivation, 24, 1989.
[Mingers, 1989] John Mingers. An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3(4):319-342, 1989.
[Naik et al., 1992] D. K. Naik, R. J. Mammone, and A. Agarwal. Meta-neural network approach to learning by learning. In Intelligence Engineering Systems through Artificial Neural Networks, volume 2, pages 245-252. The American Society of Mechanical Engineers, ASME Press, 1992.
[Pratt et al., 1991] Lorien Y. Pratt, Jack Mostow, and Candace A. Kamm. Direct transfer of learned information among neural networks. In Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91), pages 584-589, Anaheim, CA, 1991.
[Pratt, 1993a] Lorien Y. Pratt. Experiments in the transfer of knowledge between neural networks. In S. Hanson, G. Drastal, and R. Rivest, editors, Computational Learning Theory and Natural Learning Systems: Constraints and Prospects, chapter 4.1. MIT Press, 1993. To appear.
[Pratt, 1993b] Lorien Y. Pratt. Non-literal transfer of information among inductive learners. In R. J. Mammone and Y. Y. Zeevi, editors, Neural Networks: Theory and Applications II. Academic Press, 1993. To appear.
[Quinlan, 1983] J. R. Quinlan. Learning efficient classification procedures and their application to chess end games. In Machine Learning, pages 463-482. Palo Alto, CA: Tioga Publishing Company, 1983.
[Sharkey and Sharkey, 1992] Noel E. Sharkey and Amanda J. C. Sharkey. Adaptive generalisation and the transfer of knowledge. Working paper, Center for Connection Science, University of Exeter, 1992.
[Thrun and Mitchell, 1993] Sebastian B. Thrun and Tom M. Mitchell. Integrating inductive neural network learning and explanation-based learning. In C. L. Giles, S. J. Hanson, and J. D. Cowan, editors, Advances in Neural Information Processing Systems 5. Morgan Kaufmann Publishers, San Mateo, CA, 1993.
| 641 |@word h:2 illustrating:1 nificantly:1 seems:1 proportion:1 proportionality:1 seek:1 simulation:1 solid:1 initial:7 contains:1 score:4 pub:1 activation:1 si:1 must:1 aft:1 john:2 update:6 v:1 intelligence:3 fewer:2 accordingly:1 smith:1 detecting:2 hyperplanes:14 mathematical:1 along:2 direct:1 forgetting:1 multi:1 relying:1 retard:1 animator:1 kamm:1 company:1 considering:1 becomes:1 rivest:1 alto:1 ttl:1 substantially:2 corporation:1 eduardo:1 every:1 golden:1 ti:1 scaled:1 classifier:1 unit:5 appear:3 superiority:2 hirsh:1 engineering:2 local:1 randomizes:1 despite:1 might:3 chose:2 discriminability:9 initialization:2 studied:1 mateo:1 co:1 limited:1 range:1 practical:1 implement:1 swiss:2 backpropagation:2 procedure:1 empirical:5 mcc:1 significantly:5 pre:1 integrating:1 dbt:36 wheel:1 selection:1 put:1 equivalent:1 demonstrated:1 center:1 modifies:1 formulate:1 borrow:1 target:30 colorado:2 us:1 sig:2 recognition:3 cut:1 haym:1 observed:2 inserted:1 initializing:1 worst:1 wj:3 prospect:1 valuable:1 ran:1 mentioned:1 disease:1 substantial:1 mine:2 trained:6 arget:1 learner:1 joint:1 hopfield:1 various:1 represented:1 chapter:1 train:1 separated:1 fast:1 effective:1 artificial:3 tbat:1 mammone:3 larger:1 valued:1 final:1 net:1 nlog:1 reset:1 relevant:1 tioga:1 crossvalidation:1 wired:1 help:2 ftp:1 depending:1 mostow:2 measured:2 rescale:2 school:1 strong:1 implemented:1 hungarian:1 come:1 saved:1 anshu:1 sand:1 generalization:1 anonymous:1 extension:1 hold:1 zeevi:1 achieves:2 palo:1 largest:1 mit:1 always:1 modified:1 avoid:1 tar:1 focus:1 improvement:1 indicates:1 contrast:1 baseline:3 helpful:1 bt:1 hidden:6 among:2 flexible:1 classification:1 retaining:1 never:1 washington:1 identical:1 report:1 logx:1 connectionist:1 few:1 randomly:16 ve:2 weigbts:2 national:1 vowel:1 tss:2 evaluation:2 male:2 tj:2 shorter:1 unless:1 tree:2 old:1 initialized:17 guidance:1 giles:1 addressing:1 subset:6 entry:1 archie:1 anaheim:1 thoroughly:1 international:1 hul:2 ie:1 off:1 michael:1 squared:1 lorien:3 aaai:1 gale:2 literal:16 worse:5 american:3 return:1 inc:1 rescales:1 candace:1 hu2:1 root:1 multiplicative:1 extrapolation:1 aca:1 reached:2 complicated:1 contribution:1 il:1 accuracy:1 kaufmann:1 bave:1 climbing:1 indebted:1 synapsis:1 reach:4 sebastian:1 sharkey:8 stop:2 mitchell:3 knowledge:3 carefully:1 back:8 methodology:1 tom:1 formulation:1 evaluated:2 though:1 just:1 propagation:8 accent:1 aj:1 indicated:1 thl:1 effect:1 inductive:2 neal:1 illustrated:1 ll:1 during:2 game:1 speaker:4 criterion:7 hill:1 demonstrate:2 bidden:1 image:1 jack:2 superior:4 sped:1 mt:2 cohen:3 volume:3 million:1 discussed:1 significant:4 had:1 actor:2 surface:1 showed:1 female:2 scenario:4 meta:1 came:1 exploited:1 pape:1 morgan:1 converge:1 paradigm:2 eacb:3 determine:1 dashed:1 ii:1 period:2 technical:1 faster:2 academic:1 cross:1 long:1 va:3 controlled:1 mayor:2 metric:2 patient:3 agarwal:4 ion:1 receive:2 want:1 else:1 source:28 publisher:1 unlike:1 comment:1 cowan:1 iii:4 pratt:13 ture:2 superset:2 enough:1 psychology:2 topology:1 idea:3 shift:2 motivated:1 utility:2 sontag:3 speech:1 cause:2 adequate:1 useful:2 amount:2 obt:2 dna:4 xij:2 shifted:1 dotted:1 diagnosis:1 pb:5 changing:1 cutoff:1 naik:4 tbe:2 sum:1 run:2 place:1 almost:1 reasonable:1 decide:1 decision:3 informat:1 scaling:1 layer:2 hi:1 yielded:1 nontrivial:1 constraint:1 l9:1 aspect:1 speed:8 endgame:1 martin:3 transferred:1 department:1 according:2 describes:1 chess:4 slowed:1 interference:1 heart:5 equation:3 visualization:1 previously:3 
count:3 end:1 available:1 tlj:2 alternative:1 ho:2 slower:1 conjerence:2 assumes:1 remaining:1 cf:3 publishing:1 quinlan:3 calculating:1 build:2 society:2 upcoming:1 objective:1 move:3 question:1 separate:2 thrun:3 separating:2 degrade:1 seven:4 me:1 induction:2 assuming:1 retained:1 index:1 relationship:2 ratio:1 hu3:1 allowing:1 displayed:1 january:1 situation:5 neurobiology:1 defining:4 dc:1 ninth:1 fhu:2 introduced:2 required:4 mechanical:2 connection:1 hanson:2 california:1 learned:2 address:2 able:1 amanda:1 usually:1 pattern:6 below:1 program:1 oj:3 explanation:1 mfa:1 natural:1 technology:1 axis:2 epoch:14 acknowledgement:1 asymptotic:5 relative:1 repaired:1 hif:1 proportional:1 editor:3 martini:1 lo:1 summary:1 last:1 english:2 side:3 understand:1 fall:1 taking:1 overcome:1 evaluating:3 author:3 adaptive:1 san:1 overcomes:1 reveals:1 incoming:1 xi:2 don:2 why:1 learn:1 transfer:38 ca:3 recycled:1 cl:1 significance:1 motivation:1 slow:1 sub:1 position:3 momentum:1 british:2 down:2 formula:1 showing:1 explored:3 microelectronics:1 ih:3 sequential:4 effectively:2 magnitude:16 te:1 justifies:1 lt:1 logarithmic:1 simply:1 mccloskey:3 determines:1 viewed:1 noel:1 change:1 generalisation:1 except:1 uniformly:2 hyperplane:16 justify:1 engineer:2 called:3 total:2 catastrophic:2 indicating:1 drastal:1 mingers:3 relevance:1 evaluate:1 tested:2 scratch:1 |
5,981 | 6,410 | A Non-convex One-Pass Framework for Generalized
Factorization Machine and Rank-One Matrix Sensing
Ming Lin
University of Michigan
[email protected]
Jieping Ye
University of Michigan
[email protected]
Abstract
We develop an efficient alternating framework for learning a generalized version of
Factorization Machine (gFM) on streaming data with provable guarantees. When
the instances are sampled from d dimensional random Gaussian vectors and the
target second order coefficient matrix in gFM is of rank k, our algorithm converges
linearly, achieves $O(\epsilon)$ recovery error after retrieving $O(k^3 d \log(1/\epsilon))$ training
instances, consumes $O(kd)$ memory in one pass of the dataset and only requires matrix-vector product operations in each iteration. The key ingredient of our framework is
a construction of an estimation sequence endowed with a so-called Conditionally
Independent RIP condition (CI-RIP). As special cases of gFM, our framework can
be applied to symmetric or asymmetric rank-one matrix sensing problems, such as
inductive matrix completion and phase retrieval.
1
Introduction
Linear models are one of the foundations of modern machine learning due to their strong learning
guarantees and efficient solvers [Koltchinskii, 2011]. Conventionally linear models only consider the
first order information of the input feature which limits their capacity in non-linear problems. Among
various efforts extending linear models to the non-linear domain, the Factorization Machine [Rendle,
2010] (FM) captures the second order information by modeling the pairwise feature interaction in
regression under low-rank constraints. FMs have been found successful in many applications, such as
recommendation systems [Rendle et al., 2011] and text retrieval [Hong et al., 2013]. In this paper, we
consider a generalized version of FM called gFM which removes several redundant constraints in
the original FM such as positive semi-definite and zero-diagonal, leading to a more general model
without sacrificing its learning ability. From theoretical side, the gFM includes rank-one matrix
sensing [Zhong et al., 2015, Chen et al., 2015, Cai and Zhang, 2015, Kueng et al., 2014] as a special
case, where the latter one has been studied widely in context such as inductive matrix completion
[Jain and Dhillon, 2013] and phase retrieval [Candes et al., 2011].
Despite the popularity of FMs in industry, there are few theoretical studies of learning guarantees for
FMs. One of the main challenges in developing a provable FM algorithm is to handle its symmetric
rank-one matrix sensing operator. For conventional matrix sensing problems where the matrix sensing
operator is RIP, there are several alternating methods with provable guarantees [Hardt, 2013, Jain
et al., 2013, Hardt and Wootters, 2014, Zhao et al., 2015a,b]. However, for a symmetric rank-one
matrix sensing operator, the RIP condition doesn?t hold trivially which turns out to be the main
difficulty in designing efficient provable FM solvers.
In rank-one matrix sensing, when the sensing operator is asymmetric, the problem is also known as
inductive matrix completion which can be solved via alternating minimization with a global linear
convergence rate [Jain and Dhillon, 2013, Zhong et al., 2015]. For symmetric rank-one matrix sensing
operators, we are not aware of any efficient solver by the time of writing this paper. In a special case
when the target matrix is of rank one, the problem is called ?phase retrieval? whose convex solver
is first proposed by Candes et al. [2011] then alternating methods are provided in [Lee et al., 2013,
Netrapalli et al., 2013]. When the target matrix is of rank $k > 1$, only convex methods minimizing
the trace norm have been proposed recently, which are computationally expensive [Kueng et al., 2014,
Cai and Zhang, 2015, Chen et al., 2015, Davenport and Romberg, 2016].
Despite the above fundamental challenges, extending a rank-one matrix sensing algorithm to gFM
is itself difficult. Please refer to Section 2.1 for an in-depth discussion. The main difficulty is due to
the first order term in the gFM formulation, which cannot be trivially converted to a standard matrix
sensing problem.
In this paper, we develop a unified theoretical framework and an efficient solver for generalized
Factorization Machine and its special cases such as rank-one matrix sensing, either symmetric or
asymmetric. The key ingredient is to show that the sensing operator in gFM satisfies a so-called
Conditionally Independent RIP condition (CI-RIP, see Definition 2). Then we can construct an
estimation sequence via noisy power iteration [Hardt and Price, 2013]. Unlike previous approaches,
our method does not require alternating minimization or choosing the step-size as in alternating
gradient descent. The proposed method works on streaming data, converges linearly and has $O(kd)$
space complexity for a $d$-dimensional rank-$k$ gFM model. The solver achieves $O(\epsilon)$ recovery error
after retrieving $O(k^3 d \log(1/\epsilon))$ training instances.
The remainder of this paper is organized as follows. In Section 2, we introduce the necessary notation
and background of gFM. Subsection 2.1 investigates several fundamental challenges in depth. Section
3 presents our algorithm, called One-Pass gFM, followed by its theoretical guarantees. Our analysis
framework is presented in Section 4. Section 5 concludes this paper.
2
Generalized Factorization Machine (gFM)
In this section, we first introduce necessary notation and background of FM and its generalized
version gFM. Then in Subsection 2.1, we reveal the connection between gFM and rank-one matrix
sensing followed by several fundamental challenges encountered when applying frameworks of
rank-one matrix sensing to gFM.
The FM predicts the labels of instances by not only their features but also high order interactions
between features. In the following, we focus on the second order FM due to its popularity. Suppose
we are given $N$ training instances $x_i \in \mathbb{R}^d$ independently and identically (I.I.D.) sampled from
the standard Gaussian distribution and so are their associated labels $y_i \in \mathbb{R}$. Denote the feature
matrix $X = [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{d \times n}$ and the label vector $y = [y_1, y_2, \cdots, y_n]^\top \in \mathbb{R}^n$. In second
order FM, $y_i$ is assumed to be generated from a target vector $w^* \in \mathbb{R}^d$ and a target rank-$k$ matrix
$M^* \in \mathbb{R}^{d \times d}$ satisfying
$$y_i = x_i^\top w^* + x_i^\top M^* x_i + \xi_i \qquad (1)$$
where $\xi_i$ is a random subgaussian noise with proxy variance $\sigma^2$. It is often more convenient
to write Eq. (1) in matrix form. Denote the linear operator $\mathcal{A} : \mathbb{R}^{d \times d} \to \mathbb{R}^n$ as $\mathcal{A}(M) \triangleq [\langle A_1, M\rangle, \langle A_2, M\rangle, \cdots, \langle A_n, M\rangle]^\top$ where $A_i = x_i x_i^\top$. Then Eq. (1) has a compact form:
$$y = X^\top w^* + \mathcal{A}(M^*) + \xi. \qquad (2)$$
The FM model given by Eq. (2) consists of two components: the first order component $X^\top w^*$ and
the second order component $\mathcal{A}(M^*)$. The component $\mathcal{A}(M^*)$ is a symmetric rank-one Gaussian
measurement since $A_i(M) = x_i^\top M x_i$, where the left and right design vectors are identical.
The original FM requires that $M^*$ should be positive semi-definite and that the diagonal elements of $M^*$
should be zero. However, our analysis shows that both constraints are redundant for learning Eq. (2).
Therefore in this paper we consider a generalized version of FM, which we call gFM, where $M^*$ is
only required to be symmetric and low rank. To make the recovery of $M^*$ well defined, it is necessary
to assume $M^*$ to be symmetric. Indeed, for any asymmetric matrix $M^*$ there is always a symmetric
matrix $M^*_{\mathrm{sym}} = (M^* + M^{*\top})/2$ such that $\mathcal{A}(M^*) = \mathcal{A}(M^*_{\mathrm{sym}})$, thus the symmetric constraint does
not affect the model. Another standard assumption in rank-one matrix sensing is that the rank of $M^*$
should be no more than $k$ for $k \ll d$. When $w^* = 0$, gFM is equal to the symmetric rank-one matrix
sensing problem. Recent research has proposed several convex programming methods based on
the trace norm minimization to recover $M^*$ with a sampling complexity on the order of $O(k^3 d)$ [Candes
et al., 2011, Cai and Zhang, 2015, Kueng et al., 2014, Chen et al., 2015, Zhong et al., 2015]. Some
authors also call gFM as second order polynomial network [Blondel et al., 2016].
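To make the data model of Eqs. (1)-(2) concrete, the following sketch generates synthetic gFM observations. It is an illustrative reconstruction rather than code from the paper; the function name, the signed rank-$k$ factorization and all parameters are our own choices.

import numpy as np

# Hedged sketch: draw data from the gFM model y = X^T w* + A(M*) + xi of
# Eq. (2), with A_i(M) = x_i^T M x_i. M* is symmetric, rank k and possibly
# indefinite (gFM does not require positive semi-definiteness).
def make_gfm_data(n, d, k, sigma=0.0, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((d, k))
    S = np.diag(rng.choice([-1.0, 1.0], size=k))
    M_star = B @ S @ B.T                          # symmetric rank-k target
    w_star = rng.standard_normal(d)
    X = rng.standard_normal((d, n))               # columns are the instances x_i
    AM = np.einsum('an,ab,bn->n', X, M_star, X)   # A(M*)_i = x_i^T M* x_i
    y = X.T @ w_star + AM + sigma * rng.standard_normal(n)
    return X, y, w_star, M_star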
When $d$ is much larger than $k$, convex programming on the trace norm or nuclear norm of $M^*$
becomes difficult since $M^*$ can be a $d \times d$ dense matrix. Although modern convex solvers can scale
to large $d$ with reasonable computational cost, a more popular strategy to efficiently estimate $w^*$
and $M^*$ is to decompose $M^*$ as $UV^\top$ for some $U, V \in \mathbb{R}^{d \times k}$, then alternatively update $w, U, V$ to
minimize the empirical loss function
$$\min_{w,U,V} \ \frac{1}{2N}\|y - X^\top w - \mathcal{A}(UV^\top)\|_2^2. \qquad (3)$$
The loss function in Eq. (3) is non-convex. It is even unclear whether an estimator of the optimal
solution $\{w^*, M^*\}$ of Eq. (3) with a polynomial time complexity exists or not.
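As a side note, the factored parametrization makes Eq. (3) cheap to evaluate, since $\mathcal{A}(UV^\top)_i = (U^\top x_i)^\top (V^\top x_i)$ never forms the $d \times d$ matrix. A minimal sketch, our own illustration rather than the authors' code:

import numpy as np

def gfm_loss(w, U, V, X, y):
    # Empirical loss of Eq. (3); X is d x N with instances as columns.
    N = X.shape[1]
    XU, XV = U.T @ X, V.T @ X          # k x N each
    A_UV = np.sum(XU * XV, axis=0)     # x_i^T U V^T x_i for every sample i
    r = y - X.T @ w - A_UV
    return 0.5 * np.dot(r, r) / N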
In our analysis, we denote $M + O(\epsilon)$ as a matrix $M$ plus a perturbation matrix whose spectral
norm is bounded by $\epsilon$. We use $\|\cdot\|_2$, $\|\cdot\|_F$, $\|\cdot\|_*$ to denote the matrix spectral norm, Frobenius
norm and nuclear norm respectively. To abbreviate the high probability bound, we denote $C = \mathrm{polylog}(d, n, T, 1/\eta)$ to be a constant polynomial logarithmic in $\{d, n, T, 1/\eta\}$. The eigenvalue
decomposition of $M^*$ is $M^* = U^*\Lambda^* U^{*\top}$ where $U^* \in \mathbb{R}^{d \times k}$ is the top-$k$ eigenvectors of $M^*$
and $\Lambda^* = \mathrm{diag}(\lambda_1^*, \lambda_2^*, \cdots, \lambda_k^*)$ are the corresponding eigenvalues sorted by $|\lambda_i| \ge |\lambda_{i+1}|$. Let
$\sigma_i^* = |\lambda_i^*|$ denote the singular values of $M^*$ and $\sigma_i\{M\}$ be the $i$-th largest singular value of $M$. $U_\perp^*$
denotes a matrix whose columns are the orthogonal basis of the complementary subspace of $U^*$.
2.1
gFM and Rank-One Matrix Sensing
When $w^* = 0$ in Eq. (1), gFM becomes the symmetric rank-one matrix sensing problem.
While the recovery ability of rank-one matrix sensing has recently been established despite its
computational issues, this is not the case for gFM. It is therefore important to discuss the differences
between gFM and rank-one matrix sensing to gain a better understanding of the fundamental
barriers in developing a provable gFM algorithm.
In the rank-one matrix sensing problem, a relaxed setting is to assume that the sensing operator is
asymmetric, which is defined by $\mathcal{A}_i^{\mathrm{asy}}(M) \triangleq u_i^\top M v_i$ where $u_i$ and $v_i$ are independent random
vectors. Under this setting, the recovery ability of alternating methods is provable [Jain and Dhillon,
2013]. However, existing analyses cannot be generalized to their symmetric counterpart, since $u_i$
and $v_i$ are not allowed to be dependent in these frameworks. For example, the sensing operator
$\mathcal{A}^{\mathrm{asy}}(\cdot)$ is unbiased ($\mathbb{E}\,\mathcal{A}^{\mathrm{asy}}(\cdot) = 0$) but the symmetric sensing operator is clearly not [Cai and
Zhang, 2015]. Therefore, the asymmetric setting oversimplifies the problem and loses important
structure information which is critical to gFM.
As for the symmetric rank-one matrix sensing operator, the state-of-the-art estimator is based on the
trace norm convex optimization [Tropp, 2014, Chen et al., 2015, Cai and Zhang, 2015], which is
computationally expensive. When $w^* \neq 0$, the gFM has an extra perturbation term $X^\top w^*$. This
first order perturbation term turns out to be a fundamental challenge in theoretical analysis. One might
attempt to merge $w^*$ into $M^*$ in order to convert gFM into a rank-$(k+1)$ matrix sensing problem. For
example, one may extend the feature $\tilde{x}_i \triangleq [x_i, 1]^\top$ and the matrix $\tilde{M}^* = [M^*; w^{*\top}] \in \mathbb{R}^{(d+1) \times d}$.
However, after this simple extension, the sensing operator becomes $\tilde{\mathcal{A}}(\tilde{M}^*) = \tilde{x}_i^\top \tilde{M}^* x_i$. It is no
longer symmetric. The left/right design vector is neither independent nor identical. Especially, not
all dimensions of $\tilde{x}_i$ are random variables. According to the above discussion, the conditions that
guarantee the success of rank-one matrix sensing do not hold after feature extension and all the
mentioned analyses cannot be directly applied.
3
One-Pass gFM
In this section, we present the proposed algorithm, called One-Pass gFM followed by its theoretical
guarantees. We will focus on the intuition of our algorithm. A rigorous theoretical analysis is
presented in the next section.
The One-Pass gFM is a mini-batch algorithm. In each mini-batch, it processes n training instances
and then alternatively updates parameters. The iteration will continue until T mini-batch updates.
Algorithm 1 One-Pass gFM
Require: The mini-batch size $n$, number of total mini-batch updates $T$, training instances $X = [x_1, x_2, \cdots, x_{nT}]$, $y = [y_1, y_2, \cdots, y_{nT}]^\top$, desired rank $k \ge 1$.
Ensure: $w^{(T)}, U^{(T)}, V^{(T)}$.
1: Define $M^{(t)} \triangleq (U^{(t)}V^{(t)\top} + V^{(t)}U^{(t)\top})/2$, $H_1^{(t)} \triangleq \frac{1}{2n}\mathcal{A}'(y - \mathcal{A}(M^{(t)}) - X^{(t)\top}w^{(t)})$, $h_2^{(t)} \triangleq \frac{1}{n}\mathbf{1}^\top(y - \mathcal{A}(M^{(t)}) - X^{(t)\top}w^{(t)})$, $h_3^{(t)} \triangleq \frac{1}{n}X^{(t)}(y - \mathcal{A}(M^{(t)}) - X^{(t)\top}w^{(t)})$.
2: Initialize: $w^{(0)} = 0$, $V^{(0)} = 0$, $U^{(0)} = \mathrm{SVD}(H_1^{(0)} - \frac{1}{2}h_2^{(0)}I,\ k)$, that is, the top-$k$ left singular vectors.
3: for $t = 1, 2, \cdots, T$ do
4:   Retrieve $n$ training instances $X^{(t)} = [x_{(t-1)n+1}, \cdots, x_{(t-1)n+n}]$. Define $\mathcal{A}(M) \triangleq [x_i^{(t)\top} M x_i^{(t)}]_{i=1}^{n}$.
5:   $\hat{U}^{(t)} = (H_1^{(t-1)} - \frac{1}{2}h_2^{(t-1)}I + M^{(t-1)\top})\,U^{(t-1)}$.
6:   Orthogonalize $\hat{U}^{(t)}$ via QR decomposition: $U^{(t)} = \mathrm{QR}(\hat{U}^{(t)})$.
7:   $w^{(t)} = h_3^{(t-1)} + w^{(t-1)}$.
8:   $V^{(t)} = (H_1^{(t-1)} - \frac{1}{2}h_2^{(t-1)}I + M^{(t-1)})\,U^{(t)}$.
9: end for
10: Output: $w^{(T)}, U^{(T)}, V^{(T)}$.
Since gFM deals with a non-convex learning problem, the conventional gradient descent framework
hardly works to show the global convergence. Instead, our method is based on the construction
of an estimation sequence. Intuitively, when $w^* = 0$, we will show in the next section that
$\frac{1}{n}\mathcal{A}'\mathcal{A}(M^*) \approx 2M^* + \mathrm{tr}(M^*)I$ and $\mathrm{tr}(M^*) \approx \frac{1}{n}\mathbf{1}^\top\mathcal{A}(M^*)$. Since $y = \mathcal{A}(M^*)$, we can estimate
$M^*$ via $\frac{1}{2n}\mathcal{A}'(y) - \frac{1}{2n}(\mathbf{1}^\top y)I$. But this simple construction cannot generate a convergent estimation
sequence since the perturbation terms in the above approximate equalities cannot be reduced along
iterations. To overcome this problem, we replace $\mathcal{A}(M^*)$ with $\mathcal{A}(M^* - M^{(t)})$ in our construction.
Then the perturbation terms will be on the order of $O(\|M^* - M^{(t)}\|_2)$. When $w^* \neq 0$, we can apply a
similar trick to construct its estimation sequence via the second and the third order moments of $X$.
Algorithm 1 gives a step-by-step description of our algorithm$^1$.
In Algorithm 1, we only need to store $w^{(t)} \in \mathbb{R}^d$ and $U^{(t)}, V^{(t)} \in \mathbb{R}^{d \times k}$. Therefore the space
complexity is $O(d + kd)$. The auxiliary variables $M^{(t)}, H_1^{(t)}, h_2^{(t)}, h_3^{(t)}$ can be implicitly represented
by $w^{(t)}, U^{(t)}, V^{(t)}$. In each mini-batch update, we only need matrix-vector product operations,
which can be efficiently implemented on many computation architectures. We use truncated SVD
to initialize gFM, a standard initialization step in matrix sensing. We do not require this step to
be computed exactly, but only up to an accuracy of $O(\delta)$, where $\delta$ is the RIP constant. The QR step on
line 6 requires $O(k^2 d)$ operations. Compared with an SVD, which requires $O(kd^2)$ operations, the QR
step is much more efficient when $d \gg k$. Algorithm 1 retrieves instances in a streaming fashion, a favorable
behavior on systems with a high speed cache. Finally, we export $w^{(T)}, U^{(T)}, V^{(T)}$ as our estimation:
$w^* \approx w^{(T)}$ and $M^* \approx U^{(T)}V^{(T)\top}$.
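The following NumPy sketch mirrors the mini-batch updates of Algorithm 1. It is an illustrative reimplementation under our own indexing convention (batch 0 is spent on the initialization), not the reference code linked in the footnote.

import numpy as np

def one_pass_gfm(X, y, n, T, k):
    # X: d x (T+1)n stream of instances (columns), y: matching labels.
    d = X.shape[0]
    w, U, V = np.zeros(d), np.zeros((d, k)), np.zeros((d, k))
    for t in range(T + 1):
        Xt = X[:, t * n:(t + 1) * n]
        yt = y[t * n:(t + 1) * n]
        M = 0.5 * (U @ V.T + V @ U.T)
        resid = yt - np.einsum('an,ab,bn->n', Xt, M, Xt) - Xt.T @ w
        H1 = (Xt * resid) @ Xt.T / (2 * n)   # (1/2n) sum_i resid_i x_i x_i^T
        h2 = resid.sum() / n
        h3 = Xt @ resid / n
        G = H1 - 0.5 * h2 * np.eye(d)
        if t == 0:                           # initialization: top-k SVD
            U = np.linalg.svd(G)[0][:, :k]
        else:                                # steps 5-8 of Algorithm 1
            U_new, _ = np.linalg.qr((G + M.T) @ U)
            w = w + h3
            V = (G + M) @ U_new
            U = U_new
    return w, U, V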
Our main theoretical result is presented in the following theorem, which gives the convergence rate
of recovery and sampling complexity of gFM when $M^*$ is low rank and the noise $\xi = 0$.
Theorem 1. Suppose the $x_i$'s are independently sampled from the standard Gaussian distribution, $M^*$
is a rank-$k$ matrix and the noise $\xi = 0$. Then with a probability at least $1 - \eta$, there exist a constant $C$
and a constant $\delta < 1$ such that
$$\|w^* - w^{(t)}\|_2 + \|M^* - M^{(t)}\|_2 \le \delta^t\,(\|w^*\|_2 + \|M^*\|_2)$$
provided $n \ge C(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)^2 k^3 d/\delta^2$ and $\delta \le \dfrac{(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\sigma_k^*}{4\sqrt{5}\sigma_1^* + 3\sigma_k^* + 4\sqrt{5}\|w^*\|_2}$.
Theorem 1 shows that $\{w^{(t)}, M^{(t)}\}$ will converge to $\{w^*, M^*\}$ linearly. The convergence rate is
controlled by $\delta$, whose value is on the order of $O(1/\sqrt{n})$. A small $\delta$ will result in a fast convergence rate
but a large sampling complexity. To reduce the sampling complexity, a large $\delta$ is preferred. The largest
allowed $\delta$ is bounded by $O(1/(\|M^*\|_2 + \|w^*\|_2))$. The sampling complexity is $O((\sigma_1^*/\sigma_k^*)^2 k^3 d)$.
If $M^*$ is not well conditioned, it is possible to remove $(\sigma_1^*/\sigma_k^*)^2$ in the sampling complexity by a
procedure called "soft-deflation" [Jain et al., 2013, Hardt and Wootters, 2014]. By Theorem 1, gFM
achieves $O(\epsilon)$ recovery error after retrieving $nT = O(k^3 d \log((\|w^*\|_2 + \|M^*\|_2)/\epsilon))$ instances.
$^1$ Implementation is available from https://minglin-home.github.io/
The noisy case where $M^*$ is not exactly low rank and $\sigma > 0$ is more intricate, therefore we postpone
it to Subsection 4.1. The main conclusion is similar to the noise-free case of Theorem 1 under a small
noise assumption.
4
Theoretical Analysis
In this section, we give the sketch of our proof of Theorem 1. Omitted details are postponed to
the appendix.
From a high level, our proof constructs an estimation sequence $\{\tilde{w}^{(t)}, \tilde{M}^{(t)}, \epsilon_t\}$ such that $\epsilon_t \to 0$ and
$\|w^* - \tilde{w}^{(t)}\|_2 + \|M^* - \tilde{M}^{(t)}\|_2 \le \epsilon_t$. In conventional matrix sensing, this construction is possible
when the sensing matrix satisfies the Restricted Isometry Property (RIP) [Candès and Recht, 2009]:
Definition 2 ($\ell_2$-norm RIP). A sensing operator $\mathcal{A}$ is $\ell_2$-norm $\delta_k$-RIP if for any rank-$k$ matrix $M$,
$$(1 - \delta_k)\|M\|_F^2 \le \frac{1}{n}\|\mathcal{A}(M)\|_2^2 \le (1 + \delta_k)\|M\|_F^2.$$
When $\mathcal{A}$ is $\ell_2$-norm $\delta_k$-RIP for any rank-$k$ matrix $M$, $\mathcal{A}'\mathcal{A}$ is nearly isometric [Jain et al., 2012], which
implies $\|M - \mathcal{A}'\mathcal{A}(M)/n\|_2 \le \delta$. Then we can construct our estimation sequence as follows:
$$\tilde{M}^{(t)} = \frac{1}{n}\mathcal{A}'\mathcal{A}(M^* - \tilde{M}^{(t-1)}) + \tilde{M}^{(t-1)}, \qquad \tilde{w}^{(t)} = \Big(I - \frac{1}{n}XX^\top\Big)(w^* - \tilde{w}^{(t-1)}) + \tilde{w}^{(t-1)}.$$
However, in gFM and symmetric rank-one matrix sensing, the $\ell_2$-norm RIP condition cannot be
satisfied with high probability [Cai and Zhang, 2015]. To establish an RIP-like condition for rank-one
matrix sensing, several variants have been proposed, such as the $\ell_2/\ell_1$-RIP condition [Cai and Zhang,
2015, Chen et al., 2015]. The essential idea of these variants is to replace the $\ell_2$-norm $\|\mathcal{A}(M)\|_2$ with
the $\ell_1$-norm $\|\mathcal{A}(M)\|_1$; then a similar norm inequality can again be established for all low rank matrices.
However, even using these $\ell_1$-norm RIP variants, we are still unable to design an efficient alternating
algorithm. All these $\ell_1$-norm RIP variants have to deal with trace norm programming problems. In
fact, it is impossible to construct an estimation sequence based on $\ell_1$-norm RIP, because we require an
$\ell_2$-norm bound on $\mathcal{A}'\mathcal{A}$ during the construction.
A key ingredient of our framework is to propose a novel $\ell_2$-norm RIP condition to overcome the
above difficulty. The main technical reason for the failure of the conventional $\ell_2$-norm RIP is that it
tries to bound $\mathcal{A}'\mathcal{A}(M)$ over all rank-$k$ matrices. This is too aggressive to be successful in rank-one
matrix sensing. Regarding our estimation sequence, what we really need is to make the RIP hold
for the current low rank matrix $M^{(t)}$. Once we update our estimation $M^{(t+1)}$, we can regenerate a new
sensing operator independent of $M^{(t)}$ to avoid bounding $\mathcal{A}'\mathcal{A}$ over all rank-$k$ matrices. To this end,
we propose the Conditionally Independent RIP (CI-RIP) condition.
Definition 3 (CI-RIP). A matrix sensing operator $\mathcal{A}$ is Conditionally Independent RIP with constant
$\delta_k$ if, for a fixed rank-$k$ matrix $M$, $\mathcal{A}$ is sampled independently of $M$ and satisfies
$$\Big\|\Big(I - \frac{1}{n}\mathcal{A}'\mathcal{A}\Big)M\Big\|_2^2 \le \delta_k. \qquad (4)$$
An $\ell_2$-norm or $\ell_1$-norm RIP sensing operator is naturally CI-RIP, but the reverse is not true. In CI-RIP,
$\mathcal{A}$ is no longer a fixed but a random sensing operator independent of $M$. In a one-pass algorithm, this is
achievable if we always retrieve new instances to construct $\mathcal{A}$ in each mini-batch update. Usually
Eq. (4) doesn't hold in a batch method, since $M^{(t+1)}$ depends on $\mathcal{A}(M^{(t)})$.
An asymmetric rank-one matrix sensing operator is clearly CI-RIP due to the independence between
the left/right design vectors. But a symmetric rank-one matrix sensing operator is not CI-RIP. In fact it is
a biased estimator, since $\mathbb{E}(x^\top M x) = \mathrm{tr}(M)$. To this end, we propose a shifted version of CI-RIP
for the symmetric rank-one matrix sensing operator in the following theorem. This theorem is the key
tool in our analysis.
Theorem 4 (Shifted CI-RIP). Suppose the $x_i$ are independent standard random Gaussian vectors, $M$ is
a fixed symmetric rank-$k$ matrix independent of $x_i$ and $w$ is a fixed vector. Then with a probability at
least $1 - \eta$, provided $n \ge Ck^3 d/\delta^2$,
$$\Big\|\frac{1}{2n}\mathcal{A}'\mathcal{A}(M) - \frac{1}{2}\mathrm{tr}(M)I - M\Big\|_2 \le \delta\|M\|_2.$$
Theorem 4 shows that $\frac{1}{2n}\mathcal{A}'\mathcal{A}(M)$ is nearly isometric after shifting by its expectation $\frac{1}{2}\mathrm{tr}(M)I$. The
RIP constant is $\delta = O(\sqrt{k^3 d/n})$. In gFM, we choose $M = M^* - M^{(t)}$, therefore $M$ is of rank $3k$.
Under the same settings of Theorem 4, suppose that $d \ge C$; then the following lemmas hold true with
a probability at least $1 - \eta$ for fixed $w$ and $M$.
Lemma 5. $|\frac{1}{n}\mathbf{1}^\top\mathcal{A}(M) - \mathrm{tr}(M)| \le \delta\|M\|_2$ provided $n \ge Ck/\delta^2$.
Lemma 6. $|\frac{1}{n}\mathbf{1}^\top X^\top w| \le \delta\|w\|_2$ provided $n \ge C/\delta^2$.
Lemma 7. $\|\frac{1}{n}\mathcal{A}'(X^\top w)\|_2 \le \delta\|w\|_2$ provided $n \ge Cd/\delta^2$.
Lemma 8. $\|\frac{1}{n}X\mathcal{A}(M)\|_2 \le \delta\|M\|_2$ provided $n \ge Ck^2 d/\delta^2$.
Lemma 9. $\|I - \frac{1}{n}XX^\top\|_2 \le \delta$ provided $n \ge Cd/\delta^2$.
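Theorem 4 is easy to sanity-check numerically. The snippet below (our own illustration, with arbitrarily chosen sizes) estimates the relative spectral error of the shifted operator for one random symmetric rank-$k$ matrix; it should shrink roughly like $\sqrt{k^3 d/n}$.

import numpy as np

rng = np.random.default_rng(0)
d, k, n = 50, 3, 200000
B = rng.standard_normal((d, k))
M = B @ B.T / k                               # fixed symmetric rank-k matrix
X = rng.standard_normal((d, n))
AM = np.einsum('an,ab,bn->n', X, M, X)        # A(M)_i = x_i^T M x_i
AtA = (X * AM) @ X.T / n                      # (1/n) A'A(M)
shifted = 0.5 * AtA - 0.5 * np.trace(M) * np.eye(d)
print(np.linalg.norm(shifted - M, 2) / np.linalg.norm(M, 2))  # small for large n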
Equipped with the above lemmas, we construct our estimation sequence as follows.
Lemma 10. Let $M^{(t)}, H_1^{(t)}, h_2^{(t)}, h_3^{(t)}$ be defined as in Algorithm 1. Define $\epsilon_t \triangleq \|w^* - w^{(t)}\|_2 + \|M^* - M^{(t)}\|_2$. Then with a probability at least $1 - \eta$, provided $n \ge Ck^3 d/\delta^2$,
$$H_1^{(t)} = M^* - M^{(t)} + \frac{1}{2}\mathrm{tr}(M^* - M^{(t)})I + O(\delta\epsilon_t), \qquad h_2^{(t)} = \mathrm{tr}(M^* - M^{(t)}) + O(\delta\epsilon_t),$$
$$h_3^{(t)} = w^* - w^{(t)} + O(\delta\epsilon_t).$$
Suppose by construction that $\epsilon_t \to 0$ when $t \to \infty$. Then $H_1^{(t)} - \frac{1}{2}h_2^{(t)}I + M^{(t)} \to M^*$ and $h_3^{(t)} + w^{(t)} \to w^*$, and then the proof of Theorem 1 is completed. In the following we only need to show that Lemma
10 constructs an estimation sequence with $\epsilon_t = O(\delta^t) \to 0$. To this end, we need a few things from
matrix perturbation theory.
By Theorem 1, $U^{(t)}$ will converge to $U^*$ up to a column order permutation. We use the largest
canonical angle to measure the distance between the subspaces spanned by $U^{(t)}$ and $U^*$, which is denoted as
$\theta_t \triangleq \theta(U^{(t)}, U^*)$. For any matrix $U$, it is well known [Zhu and Knyazev, 2013] that
$$\sin\theta(U, U^*) = \|U_\perp^{*\top}U\|_2, \quad \cos\theta(U, U^*) = \sigma_k\{U^{*\top}U\}, \quad \tan\theta(U, U^*) = \|U_\perp^{*\top}U(U^{*\top}U)^{-1}\|_2.$$
The last tangent equality allows us to bound the canonical angle after the QR decomposition. Suppose
$U^{(t)}R = \hat{U}^{(t)}$ in the QR step of Algorithm 1; we have
$$\tan\theta(\hat{U}^{(t)}, U^*) = \|U_\perp^{*\top}\hat{U}^{(t)}(U^{*\top}\hat{U}^{(t)})^{-1}\|_2 = \|U_\perp^{*\top}U^{(t)}R(U^{*\top}U^{(t)}R)^{-1}\|_2 = \|U_\perp^{*\top}U^{(t)}(U^{*\top}U^{(t)})^{-1}\|_2 = \tan\theta(U^{(t)}, U^*).$$
Therefore, it is more convenient to measure the subspace distance by the tangent function.
To show $\epsilon_t \to 0$, we recursively define the following variables:
$$\tau_t \triangleq \tan\theta_t, \quad \omega_t \triangleq \|w^* - w^{(t)}\|_2, \quad \beta_t \triangleq \|M^* - M^{(t)}\|_2, \quad \epsilon_t \triangleq \omega_t + \beta_t.$$
The following lemma derives the recursive inequalities regarding $\{\tau_t, \omega_t, \beta_t\}$.
Lemma 11. Under the same settings of Theorem 1, suppose $\tau_t \le 2$ and $\epsilon_t \le 4\sqrt{5}\sigma_k^*$; then
$$\tau_{t+1} \le 4\sqrt{5}\delta\sigma_k^{*-1}(\omega_t + \beta_t), \quad \omega_{t+1} \le \delta(\omega_t + \beta_t), \quad \beta_{t+1} \le \tau_{t+1}\|M^*\|_2 + 2\delta(\omega_t + \beta_t).$$
In Lemma 11, when we choose $n$ such that $\delta = O(1/\sqrt{n})$ is small enough, $\{\tau_t, \omega_t, \beta_t\}$ will converge
to zero. The only question is the initial value $\{\tau_0, \omega_0, \beta_0\}$. According to the initialization step of
gFM, $\omega_0 \le \|w^*\|_2$ and $\beta_0 \le \|M^*\|_2$. To bound $\tau_0$, we need the following lemma, which directly
follows from Weyl's and Wedin's theorems [Stewart and Sun, 1990].
Lemma 12. Denote $U$ and $\tilde{U}$ as the top-$k$ left singular vectors of $M$ and $\tilde{M} = M + O(\epsilon)$ respectively.
The $i$-th singular value of $M$ is $\sigma_i$. Suppose that $\epsilon \le \frac{\sigma_k - \sigma_{k+1}}{4}$. Then the largest canonical angle
between $U$ and $\tilde{U}$, denoted as $\theta(U, \tilde{U})$, is bounded by $\sin\theta(U, \tilde{U}) \le 2\epsilon/(\sigma_k - \sigma_{k+1})$.
According to Lemma 12, when $2\delta(\|w^*\|_2 + \|M^*\|_2) \le \sigma_k^*/4$, we have $\sin\theta_0 \le 4\delta(\|w^*\|_2 + \|M^*\|_2)/\sigma_k^*$. Therefore, $\tau_0 \le 2$ provided $\delta \le \sigma_k^*/[8(\|w^*\|_2 + \|M^*\|_2)]$.
Proof of Theorem 1. Suppose that at step $t$, $\tau_t \le 2$ and $\epsilon_t \le 4\sqrt{5}\sigma_k^*$. From Lemma 11,
$$\omega_{t+1} + \beta_{t+1} \le \omega_{t+1} + \tau_{t+1}\|M^*\|_2 + 2\delta(\omega_t + \beta_t) \le \delta\epsilon_t + 4\sqrt{5}\delta\sigma_k^{*-1}\epsilon_t\|M^*\|_2 + 2\delta\epsilon_t = (4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta\epsilon_t.$$
Therefore,
$$\epsilon_t = \omega_t + \beta_t \le [(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta]^t(\omega_0 + \beta_0),$$
$$\tau_{t+1} \le 4\sqrt{5}\delta\sigma_k^{*-1}(\omega_t + \beta_t) \le 4\sqrt{5}\delta\sigma_k^{*-1}[(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta]^t(\omega_0 + \beta_0).$$
Clearly we need $(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta < 1$ to ensure convergence, which is guaranteed by $\delta < \frac{\sigma_k^*}{4\sqrt{5}\sigma_1^* + 3\sigma_k^*}$. To ensure that the recursive inequality holds for any $t$, we require $\tau_{t+1} \le 2$, which is
guaranteed by
$$4\sqrt{5}(\omega_0 + \beta_0)\delta/\sigma_k^* \le 2 \impliedby \delta \le \frac{\sigma_k^*}{2\sqrt{5}(\sigma_1^* + \omega_0)}.$$
To ensure the condition $\epsilon_t \le 4\sqrt{5}\sigma_k^*$,
$$\delta \le 4\sqrt{5}\sigma_k^*/\epsilon_0 = 4\sqrt{5}\sigma_k^*/(\sigma_1^* + \omega_0) \implies \delta \le 4\sqrt{5}\sigma_k^*/\epsilon_t.$$
In summary, when
$$\delta \le \min\left\{\frac{\sigma_k^*}{4\sqrt{5}(\sigma_1^* + \omega_0)},\ \frac{\sigma_k^*}{4\sqrt{5}\sigma_1^* + 3\sigma_k^*},\ \frac{\sigma_k^*}{2\sqrt{5}(\sigma_1^* + \omega_0)},\ \frac{\sigma_k^*}{8(\sigma_1^* + \omega_0)}\right\} \impliedby \delta \le \frac{\sigma_k^*}{4\sqrt{5}\sigma_1^* + 3\sigma_k^* + 4\sqrt{5}\omega_0},$$
we have
$$\epsilon_t \le [(4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta]^t(\sigma_1^* + \omega_0).$$
To simplify the result, replace $\delta$ with $\delta_1 = (4\sqrt{5}\sigma_1^*/\sigma_k^* + 3)\delta$. The proof is completed.
4.1
Noisy Case
In this subsection, we analyze the performance of gFM under the noisy setting. Suppose that $M^*$ is no
longer low rank: $M^* = U^*\Lambda^*U^{*\top} + U_\perp^*\Lambda_\perp^*U_\perp^{*\top}$, where $\Lambda_\perp^* = \mathrm{diag}(\lambda_{k+1}^*, \cdots, \lambda_d^*)$ is the residual
spectrum. Denote $M_k^* = U^*\Lambda^*U^{*\top}$ to be the best rank-$k$ approximation of $M^*$ and $M_\perp^* = M^* - M_k^*$.
The additive noise $\xi_i$'s are independently sampled from a subgaussian distribution with proxy variance $\sigma$.
First we generalize the above theorems and lemmas to the noisy case.
Lemma 13. Suppose that in Eq. (1) the $x_i$'s are independent standard random Gaussian vectors, $M$ is
a fixed rank-$k$ matrix, $M_\perp^* \neq 0$ and $\sigma > 0$. Then provided $n \ge Ck^3 d/\delta^2$, with a probability at least
$1 - \eta$,
$$\Big\|\frac{1}{2n}\mathcal{A}'\mathcal{A}(M^* - M) - \frac{1}{2}\mathrm{tr}(M_k^* - M)I - (M_k^* - M)\Big\|_2 \le \delta\|M_k^* - M\|_2 + C\sigma_{k+1}^* d^2/\sqrt{n} \qquad (5)$$
$$\Big|\frac{1}{n}\mathbf{1}^\top\mathcal{A}(M^* - M) - \mathrm{tr}(M_k^* - M)\Big| \le \delta\|M_k^* - M\|_2 + C\sigma_{k+1}^* d^2/\sqrt{n} \qquad (6)$$
$$\Big\|\frac{1}{n}X\mathcal{A}(M^* - M)\Big\|_2 \le \delta\|M_k^* - M\|_2 + C\sigma_{k+1}^* d^2/\sqrt{n} \qquad (7)$$
$$\Big\|\frac{1}{n}\mathcal{A}'(X^\top w)\Big\|_2 \le \delta\|w\|_2, \qquad \Big|\frac{1}{n}\mathbf{1}^\top X^\top w\Big| \le \delta\|w\|_2. \qquad (8)$$
Define $\beta_t \triangleq \|M_k^* - M^{(t)}\|_2$, similar to the noise-free case. According to Lemma 13, when $\sigma = 0$, for
$n \ge Ck^3 d/\delta^2$,
$$H_1^{(t)} = M_k^* - M^{(t)} + \frac{1}{2}\mathrm{tr}(M_k^* - M^{(t)})I + O(\delta\epsilon_t + C\sigma_{k+1}^* d^2/\sqrt{n})$$
$$h_2^{(t)} = \mathrm{tr}(M_k^* - M^{(t)}) + O(\delta\epsilon_t + C\sigma_{k+1}^* d^2/\sqrt{n})$$
$$h_3^{(t)} = w^* - w^{(t)} + O(\delta\epsilon_t + C\sigma_{k+1}^* d^2/\sqrt{n}).$$
Define $r \triangleq C\sigma_{k+1}^* d^2/\sqrt{n}$. If $\sigma > 0$, it is easy to check that the perturbation becomes $r' = r + O(\sigma/\sqrt{n})$. Therefore we uniformly use $r$ to present the perturbation term. The recursive
inequalities regarding the recovery error are constructed in Lemma 14.
Lemma 14. Under the same settings of Lemma 13, define $\gamma \triangleq 2\sigma_{k+1}^*/(\sigma_k^* + \sigma_{k+1}^*)$. Suppose that at
any step $i$, $0 \le i \le t$, $\tau_i \le 2$. Then, provided $4\sqrt{5}(\epsilon_t + r) \le \sigma_k^* - \sigma_{k+1}^*$,
$$\tau_{t+1} \le \gamma\tau_t + \frac{4\sqrt{5}}{\sigma_k^* + \sigma_{k+1}^*}\epsilon_t + \frac{4\sqrt{5}}{\sigma_k^* + \sigma_{k+1}^*}r, \quad \omega_{t+1} \le \delta\epsilon_t + r, \quad \beta_{t+1} \le \tau_{t+1}\|M^*\|_2 + 2\delta\epsilon_t + 2r.$$
The solution to the recursive inequalities in Lemma 14 is non-trivial. Compared to the inequalities
in Lemma 11, $\tau_{t+1}$ is bounded by $\tau_t$ in the noisy case. Therefore, if we simply follow Lemma 11 to
construct a recursive inequality about $\epsilon_t$, we will quickly be overloaded by recursive expansion terms.
The key construction of our solution is to bound the term $\epsilon_t + 8\sqrt{5}/(\sigma_k^* + \sigma_{k+1}^*)\,\tau_t$. The solution
is given in the following theorem.
Theorem 15. Define the constants
$$c \triangleq 4\sqrt{5}/(\sigma_k^* + \sigma_{k+1}^*), \quad b \triangleq 3 + 4\sqrt{5}\sigma_1^*/(\sigma_k^* + \sigma_{k+1}^*), \quad q \triangleq (1 + \gamma)/2.$$
Then for any $t \ge 0$,
$$\epsilon_t + 2c\tau_t \le q^{2t}\left(\epsilon_0 + 2c\tau_0 - \frac{(1+\gamma)cr}{1-q}\right) + \frac{(1+\gamma)cr}{1-q}, \qquad (9)$$
provided
$$\delta \le \min\left\{\frac{1-\gamma}{4\gamma\sigma_1^* c},\ \frac{1}{2b}\right\}, \qquad (2 + c(\sigma_k^* - \sigma_{k+1}^*))\epsilon_0 + r \le \sigma_k^* - \sigma_{k+1}^*,$$
$$4\sqrt{5}\left(4 + 2c(\sigma_k^* - \sigma_{k+1}^*)\right)\epsilon_0 + 4\sqrt{5}\left(4 + c(\sigma_k^* - \sigma_{k+1}^*)\right)r \le (\sigma_k^* - \sigma_{k+1}^*)^2. \qquad (10)$$
Theorem 15 gives the convergence rate of gFM under noisy settings. We bound $\epsilon_t + 2c\tau_t$ as the
index of recovery error, whose convergence rate is linear. The convergence rate is controlled by $q$, a
constant that depends on the eigen-gap $\sigma_{k+1}^*/\sigma_k^*$. The final recovery error is bounded by $O(r/(1-q))$.
Eq. (10) is the small noise condition that ensures noisy recovery is possible. Generally speaking,
learning a $d \times d$ matrix with $O(d)$ samples is an ill-conditioned problem when the target matrix is full
rank. The small noise condition given by Eq. (10) essentially says that $M^*$ can deviate slightly
from the low rank manifold and that the noise shouldn't be so large as to blur the spectrum of $M^*$. When the
noise is large, Eq. (10) will be satisfied with $n = O(d^2)$, which is the information-theoretic lower
bound for recovering a full rank matrix.
5
Conclusion
In this paper, we propose a provably efficient algorithm to solve the generalized Factorization Machine
(gFM) and rank-one matrix sensing. Our method is based on a one-pass alternating updating
framework. The proposed algorithm is able to learn gFM within $O(kd)$ memory on streaming data,
has a linear convergence rate and only requires matrix-vector product operations. The algorithm
takes no more than $O(k^3 d \log(1/\epsilon))$ instances to achieve $O(\epsilon)$ recovery error.
Acknowledgments
This work was supported in part by research grants from NIH (RF1AG051710) and NSF (III-1421057
and III-1421100).
8
References
Mathieu Blondel, Masakazu Ishihata, Akinori Fujino, and Naonori Ueda. Polynomial Networks and Factorization
Machines: New Insights and Efficient Training Algorithms. pages 850–858, 2016.
T. Tony Cai and Anru Zhang. ROP: Matrix recovery via rank-one projections. The Annals of Statistics, 43(1):
102–138, 2015.
Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of
Computational Mathematics, 9(6):717–772, 2009.
Emmanuel J. Candes, Yonina Eldar, Thomas Strohmer, and Vlad Voroninski. Phase Retrieval via Matrix
Completion. arXiv:1109.0573, 2011.
Yuxin Chen, Yuejie Chi, and Andrea J. Goldsmith. Exact and stable covariance estimation from quadratic
sampling via convex programming. Information Theory, IEEE Transactions on, 61(7):4034–4059, 2015.
Mark A. Davenport and Justin Romberg. An overview of low-rank matrix recovery from incomplete observations.
arXiv:1601.06422, 2016.
Moritz Hardt. Understanding Alternating Minimization for Matrix Completion. arXiv:1312.0925, 2013.
Moritz Hardt and Eric Price. The Noisy Power Method: A Meta Algorithm with Applications. arXiv:1311.2495,
2013.
Moritz Hardt and Mary Wootters. Fast matrix completion without the condition number. arXiv:1407.4070, 2014.
Liangjie Hong, Aziz S. Doumith, and Brian D. Davison. Co-factorization Machines: Modeling User Interests
and Predicting Individual Decisions in Twitter. In WSDM, pages 557–566, 2013.
Prateek Jain and Inderjit S. Dhillon. Provable inductive matrix completion. arXiv:1306.0626, 2013.
Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank Matrix Completion using Alternating Minimization. arXiv:1212.0467, 2012.
Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank Matrix Completion Using Alternating
Minimization. In STOC, pages 665–674, 2013.
V. Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems, volume
2033. Springer, 2011.
Richard Kueng, Holger Rauhut, and Ulrich Terstiege. Low rank matrix recovery from rank one measurements.
arXiv:1410.6913, 2014.
Kiryung Lee, Yihong Wu, and Yoram Bresler. Near Optimal Compressed Sensing of Sparse Rank-One Matrices
via Sparse Power Factorization. arXiv:1312.0525, 2013.
Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase Retrieval using Alternating Minimization.
arXiv:1306.0160, 2013.
Steffen Rendle. Factorization machines. In ICDM, pages 995–1000, 2010.
Steffen Rendle, Zeno Gantner, Christoph Freudenthaler, and Lars Schmidt-Thieme. Fast Context-aware
Recommendations with Factorization Machines. In SIGIR, pages 635–644, 2011.
G. W. Stewart and Ji-guang Sun. Matrix Perturbation Theory. Academic Press, 1990.
Joel A. Tropp. Convex recovery of a structured signal from independent random linear measurements.
arXiv:1405.1102, 2014.
Joel A. Tropp. An Introduction to Matrix Concentration Inequalities. arXiv:1501.01571, 2015.
Tuo Zhao, Zhaoran Wang, and Han Liu. Nonconvex Low Rank Matrix Factorization via Inexact First Order
Oracle. 2015a.
Tuo Zhao, Zhaoran Wang, and Han Liu. A Nonconvex Optimization Framework for Low Rank Matrix Estimation.
In NIPS, pages 559–567, 2015b.
Kai Zhong, Prateek Jain, and Inderjit S. Dhillon. Efficient matrix sensing using rank-1 Gaussian measurements.
In Algorithmic Learning Theory, pages 3–18, 2015.
Peizhen Zhu and Andrew V. Knyazev. Angles between subspaces and their tangents. Journal of Numerical
Mathematics, 21(4), 2013.
5,982 | 6,411 | Adaptive Neural Compilation
Rudy Bunel?
University of Oxford
[email protected]
Pushmeet Kohli
Microsoft Research
[email protected]
Alban Desmaison?
University of Oxford
[email protected]
Philip H.S. Torr
University of Oxford
[email protected]
M. Pawan Kumar
University of Oxford
[email protected]
Abstract
This paper proposes an adaptive neural-compilation framework to address the
problem of learning efficient programs. Traditional code optimisation strategies
used in compilers are based on applying pre-specified set of transformations that
make the code faster to execute without changing its semantics. In contrast, our
work involves adapting programs to make them more efficient while considering
correctness only on a target input distribution. Our approach is inspired by the
recent works on differentiable representations of programs. We show that it is
possible to compile programs written in a low-level language to a differentiable
representation. We also show how programs in this representation can be optimised
to make them efficient on a target input distribution. Experimental results demonstrate that our approach enables learning specifically-tuned algorithms for given
data distributions with a high success rate.
1
Introduction
Algorithm design often requires making simplifying assumptions about the input data. Consider, for
instance, the computational problem of accessing an element in a linked list. Without the knowledge
of the input data distribution, one can only specify an algorithm that runs in a time linear in the
number of elements of the list. However, suppose all the linked lists that we encountered in practice
were ordered in memory. Then it would be advantageous to design an algorithm specifically for this
task as it can lead to a constant running time. Unfortunately, the input data distribution of a real world
problem cannot be easily specified as in the above simple example. The best that one can hope for is
to obtain samples drawn from the distribution. A natural question that arises from these observations:
?How can we adapt a generic algorithm for a computational task using samples from an unknown
input data distribution??
The process of finding the most efficient implementation of an algorithm has received considerable
attention in the theoretical computer science and code optimisation community. Recently, Conditionally Correct Superoptimization [14] was proposed as a method for leveraging samples of the
input data distribution to go beyond semantically equivalent optimisation and towards data-specific
performance improvements. The underlying procedure is based on a stochastic search over the space
of all possible programs. Additionally, they restrict their applications to reasonably small, loop-free
programs, thereby limiting their impact in practice.
In this work, we take inspiration from the recent wave of machine-learning frameworks for estimating
programs. Using recurrent models, Graves et al. [2] introduced a fully differentiable representation of
a program, enabling the use of gradient-based methods to learn a program from examples. Many other
models that have been published recently [3, 5, 6, 8] build and improve on the early work by Graves
* The first two authors contributed equally.
et al. [2]. Unfortunately, these models are usually complex to train and need to rely on methods
such as curriculum learning or gradient noise to reach good solutions as shown by Neelakantan et al.
[10]. Moreover, their interpretability is limited. The learnt model is too complex for the underlying
algorithm to be recovered and transformed into a regular computer program.
The main focus of the machine-learning community has thus far been on learning programs from
scratch, with little emphasis on running time. However, for nearly all computational problems, it is
feasible to design generic algorithms for the worst-case. We argue that a more pragmatic goal for
the machine learning community is to design methods for adapting existing programs for specific
input data distributions. To this end, we propose the Adaptive Neural Compiler (ANC). We design a
compiler capable of mechanically converting algorithms to a differentiable representation, thereby
providing adequate initialisation to the difficult problem of optimal program learning. We then
present a method to improve this compiled program using data-driven optimisation, alleviating the
need to perform a wide search over the set of all possible programs. We show experimentally that
this framework is capable of adapting simple generic algorithms to perform better on given datasets.
2
Related Works
The idea of compiling programs to neural networks has previously been explored in the literature.
Siegelmann [15] described how to build a Neural Network that would perform the same operations as
a given program. A compiler has been designed by Gruau et al. [4] targeting an extended version
of Pascal. A complete implementation was achieved when Neto et al. [11] wrote a compiler for
NETDEF, a language based on the Occam programming language. While these methods allow us
to obtain an exact representation of a program as a neural network, they do not lend themselves to
optimisation to improve the original program. Indeed, in their formulation, each elementary step of a
program is expressed as a group of neurons with a precise topology, set of weights and biases, thereby
rendering learning via gradient descent infeasible. Performing gradient descent in this parameter
space would result in invalid operations and thus is unlikely to lead to any improvement. The recent
work by Reed and de Freitas [12] on Neural Programmer-Interpreters (NPI) can also be seen as a
way to compile any program into a neural network. It does so by learning a model that mimics the
program. While more flexible than previous approaches, the NPI only learns to reproduce an existing
program. Therefore it cannot be used to find a new and possibly better program.
Another approach to this learning problem is the one taken by the code optimisation community. By
exploring the space of all possible programs, either exhaustively [9] or in a stochastic manner [13],
they search for programs having the same results but being more efficient. The work of Sharma et al.
[14] broadens the space of acceptable improvements to data-specific optimisations as opposed to
the provably equivalent transformations that were previously the only ones considered. However,
this method is still reliant on non-gradient-based methods for efficient exploration of the space. By
representing everything in a differentiable manner, we aim to obtain gradients to guide the exploration.
Recently, Graves et al. [2] introduced a learnable representation of programs, called the Neural Turing
Machine (NTM). The NTM uses an LSTM as a Controller, which outputs commands to be executed
by a deterministic differentiable Machine. From examples of input/output sequences, they manage
to learn a Controller such that the model becomes capable of performing simple algorithmic tasks.
Extensions of this model have been proposed in [3, 5] where the memory tape was replaced by
differentiable versions of stacks or lists. Kurach et al. [8] modified the NTM to introduce a notion of
pointers making it more amenable to represent traditional programs. Parallel works have been using
Reinforcement Learning techniques such as the REINFORCE algorithm [1, 16, 17] or Q-learning [18]
to be able to work with non differentiable versions of the above mentioned models. All these models
are trained only with a loss based on the difference between the output of the model and the expected
output. This weak supervision results in learning becoming more difficult. For instance the Neural
RAM [8] requires a high number of random restarts before converging to a correct solution [10], even
when using the best hyperparameters obtained through a large grid search.
In our work, we will first show that we can design a new neural compiler whose target will be a
Controller-Machine model. This makes the compiled model amenable to learning from examples.
Moreover, we can use it as initialisation for the learning procedure, allowing us to aim for the more
complex task of finding an efficient algorithm.
Figure 1: Model components. (a) General view of the whole model: at each timestep $t$ the Controller reads the instruction register $IR^t$ and emits a command; the Machine executes it, updating the registers $R^t$, the memory $M^t$ and $IR^t$, and outputs a stop flag. (b) Machine instructions:

Inst  | arg1 | arg2 | output   | side effect
------|------|------|----------|------------------
STOP  |  a   |  b   | 0        | stop = 1
ZERO  |  a   |  b   | 0        |
INC   |  a   |      | a+1      |
DEC   |  a   |      | a-1      |
ADD   |  a   |  b   | a+b      |
SUB   |  a   |  b   | a-b      |
MIN   |  a   |  b   | min(a,b) |
MAX   |  a   |  b   | max(a,b) |
READ  |  a   |      | m_a      | memory access
WRITE |  a   |  b   | 0        | m_a = b
JEZ   |  a   |  b   | 0        | IR_t = b if a = 0
3
Model
Our model is composed of two parts: (i) a Controller, in charge of specifying what should be
executed; and (ii) a Machine, following the commands of the Controller. We start by describing the
global architecture of the model. For the sake of simplicity, the general description will present a
non-differentiable version of the model. Section 3.2 will then explain the modifications required to
make this model completely differentiable. A more detailed description of the model is provided in
the supplementary material.
3.1
General Model
We first define for each timestep $t$ the memory tape that contains $M$ integer values
$M^t = \{m_1^t, m_2^t, \ldots, m_M^t\}$, registers that contain $R$ values $R^t = \{r_1^t, r_2^t, \ldots, r_R^t\}$, and the instruction register that contains a single value $IR^t$. We also define a set of instructions that can be executed,
whose main role is to perform computations using the registers. For example, add the values contained
in two registers. We also define as a side effect any action that involves elements other than the input
and output values of the instruction. Interaction with the memory is an example of such side effect.
All the instructions, their computations and side effects are detailed in Figure 1b.
As can be seen in Figure 1a, the execution model takes as input an initial memory tape $M^0$ and
outputs a final memory tape $M^T$ after $T$ steps.
register IRt to compute the command for the Machine. The command is a 4-tuple e, a, b, o. The
first element e is the instruction that should be executed by the Machine, enumerated as an integer.
The elements a and b specify which registers should be used as arguments for the given instruction.
The last element o specifies in which register the output of the instruction should be written. For
example, the command {ADD, 2, 3, 1} means that only the value of the first register should change,
following r1t+1 = ADD(r2t , r3t ). Then the Machine will execute this command, updating the values
of the memory, the registers and the instruction register. The Machine always performs two other
operations apart from the required instruction. It outputs a stop flag that allows the model to decide
when to stop the execution. It also increments the instruction register IRt by one at each iteration.
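As a reference point before the relaxation of Section 3.2, the discrete Machine semantics can be sketched as follows. This is our own illustrative pseudo-implementation, not code from the paper; the real model keeps all values modulo M.

def machine_step(instr, a, b, o, regs, mem, ir, M):
    # One step of the discrete Machine. instr is the instruction name,
    # a, b are argument register indices, o is the output register index.
    va, vb = regs[a], regs[b]
    out, stop = 0, False                 # ZERO and the default case output 0
    if instr == "STOP":
        stop = True
    elif instr == "INC":
        out = (va + 1) % M
    elif instr == "DEC":
        out = (va - 1) % M
    elif instr == "ADD":
        out = (va + vb) % M
    elif instr == "SUB":
        out = (va - vb) % M
    elif instr == "MIN":
        out = min(va, vb)
    elif instr == "MAX":
        out = max(va, vb)
    elif instr == "READ":
        out = mem[va]
    elif instr == "WRITE":
        mem[va] = vb                     # side effect; output stays 0
    elif instr == "JEZ" and va == 0:
        ir = vb - 1                      # jump: IR is incremented below
    regs[o] = out
    return regs, mem, (ir + 1) % M, stop  # IR is always incremented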
3.2
Differentiability
The model presented above is a simple execution machine but it is not differentiable. In order to be
able to train this model end-to-end from a loss defined over the final memory tape, we need to make
every intermediate operation differentiable.
To achieve this, we replace every discrete value in our model by a multinomial distribution over all
the possible values that could have been taken. Moreover, each hard choice that would have been
non-differentiable is replaced by a continuous soft choice. We will henceforth use bold letters to
indicate the probabilistic version of a value.
First, the memory tape $M^t$ is replaced by an $M \times M$ matrix $\mathbf{M}^t$, where $\mathbf{M}_{i,j}^t$ corresponds to the
probability of $m_i^t$ taking the value $j$. The same change is applied to the registers $R^t$, replacing them
with an $R \times M$ matrix $\mathbf{R}^t$, where $\mathbf{R}_{i,j}^t$ represents the probability of $r_i^t$ taking the value $j$. Finally, the
instruction register is also transformed, the same way as the other registers, from a single value $IR^t$
to a vector of size $M$ noted $\mathbf{IR}^t$, where the $i$-th element represents its probability to take the value $i$.
3
The Machine does not contain any learnable parameter and will just execute a given command. To
make it differentiable, the Machine now takes as input four probability distributions et , at , bt and
ot , where et is a distribution over instructions, and at , bt and ot are distributions over registers. We
compute the argument values arg1 t and arg2 t as convex combinations of delta-function probability
distributions of the different registers values:
arg1 t =
R
X
ati rti
arg2 t =
i=1
R
X
bti rti ,
(1)
i=1
where ati and bti are the i-th values of the vectors at and bt . Using these values, we can compute the
output value of each instruction k using the following formula:
X
?0 ? c ? M outtk,c =
arg1 ti ? arg2 tj ? 1[gk (i, j) = c mod M ],
(2)
0?i,j?M
where gk is the function associated to the k-th instruction as presented in Table 1b, outtk,c is the
probability for an instruction k to output the value c at the time-step t and arg1 ti is the probability of the argument 1 having the value i at the time-step t. Since the executed instruction is
controlled by the probability e, the output for all instructions will also be a convex combination:
PN
outt = k=1 etk outtk , where N is the number of instructions. This value is then stored into the
registers by performing a soft-write parametrised by ot : the value stored in the i-th register at time
t + 1 is rt+1
= rti (1 ? oti ) + outt oti , allowing the choice of the output register to be differentiable.
i
A special case is associated with the stop signal. When executing the model, we keep track of the
probability that the program should have terminated before this iteration based on the probability
associated at each iteration with the specific instruction that controls this flag. Once this probability
goes over a threshold $\alpha_{\mathrm{stop}} \in (0, 1]$, the execution is halted. We applied the same techniques to make
the side-effects differentiable, this is presented in the supplementary materials.
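A hedged sketch of one differentiable Machine step, covering Eqs. (1)-(2) and the soft write (memory side effects and the stop flag are omitted; out_tables is our own encoding of the $g_k$ functions, with out_tables[k][i, j, c] = 1 iff $g_k(i, j) = c \bmod M$):

import numpy as np

def soft_machine_step(R, e, a, b, o, out_tables):
    # R: num_registers x M matrix of value distributions (rows of R^t).
    arg1 = a @ R                          # Eq. (1): soft argument selection
    arg2 = b @ R
    joint = np.outer(arg1, arg2)          # P(arg1 = i, arg2 = j)
    # Eq. (2): distribution over output values for each instruction k
    outs = np.stack([np.einsum('ij,ijc->c', joint, T) for T in out_tables])
    out = e @ outs                        # mix instructions with e^t
    # Soft write: r_i^{t+1} = r_i^t (1 - o_i) + out * o_i
    return R * (1 - o)[:, None] + np.outer(o, out)

For instance, the table for ADD would be built with T[i, j, (i + j) % M] = 1.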
The Controller is the only learnable part of our model. The first learnable part is the initial values for
the registers $\mathbf{R}^0$ and for the instruction register $\mathbf{IR}^0$. The second learnable part is the parameters of
the Controller which computes the required distributions using:
$$\mathbf{e}^t = W_e \cdot \mathbf{IR}^t, \qquad \mathbf{a}^t = W_a \cdot \mathbf{IR}^t, \qquad \mathbf{b}^t = W_b \cdot \mathbf{IR}^t, \qquad \mathbf{o}^t = W_o \cdot \mathbf{IR}^t \qquad (3)$$
where $W_e$ is an $N \times M$ matrix and $W_a$, $W_b$ and $W_o$ are $R \times M$ matrices. A representation of these
matrices can be found in Figure 2c. The Controller as defined above is composed of four independent,
fully-connected layers. In Section 4.3 we will see that this complexity is sufficient for our model to
be able to represent any program.
Henceforth, we will denote by $\theta = \{\mathbf{R}^0, \mathbf{IR}^0, W_e, W_a, W_b, W_o\}$ the set of learnable parameters.
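In code, the Controller of Eq. (3) is just four linear maps applied to the instruction-register distribution. The softmax wrappers in the sketch below correspond to the reparametrisation introduced in Section 4.2 rather than to Eq. (3) itself, and all names are illustrative:

import numpy as np

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def controller(IR, W_e, W_a, W_b, W_o):
    # W_e: N x M; W_a, W_b, W_o: R x M; IR: length-M distribution.
    return (softmax(W_e @ IR), softmax(W_a @ IR),
            softmax(W_b @ IR), softmax(W_o @ IR))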
4
Adaptive Neural Compiler
We will now present the Adaptive Neural Compiler. Its goal is to find the best set of weights ? ? for a
given dataset such that our model will perform the correct input/output mapping as efficiently as it
can. We begin by describing our learning objective in details. The two subsequent sections will focus
on making the optimisation of our learning objective computationally feasible.
4.1
Objective function
Our goal is to solve a given algorithmic problem efficiently. The algorithmic problem is defined
as a set of input/output pairs. We also have access to a generic program that is able to perform the
required mapping. In our example of accessing elements in a linked list, the transformation would
consist in writing down the desired value at the specified position in the tape. The program given
to us would iteratively go through the elements of the linked list, find the desired value and write it
down at the desired position. If there exists some bias that would allow this traversal to be faster, we
expect the program to exploit it.
Our approach to this problem is to construct a differentiable objective function that maps controller
parameters to a loss. We define this loss based on the states of the memory tape and outputs of the
Controller at each step of the execution. The precise mathematical formulation for each term of the
loss is given in the supplementary materials. Here we present the motivation behind each of them.
4
Correctness We first want the final memory tape to match the expected output for a given input.
Halting To prevent programs from taking an infinite amount of time without stopping, we define a
maximum number of iterations Tmax after which the execution is halted. Moreover, we add a penalty
if the Controller didn?t halt before this limit.
Efficiency We penalise each iteration taken by the program where it does not stop.
Confidence We finally make sure that if the Controller wants to stop, the current output is correct.
If only the correctness term was considered, nothing would encourage the learnt algorithm to halt as
soon as it finished. If only correctness and halting were considered, then the program may not halt as
early as possible. Confidence enables the algorithm to better evaluate when to stop.
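The precise formulation of these terms is given in the supplementary material; the sketch below is only one plausible way to combine them, and every weight and name in it is our own assumption.

import numpy as np

def anc_loss(mem_traj, p_stop, target, lam=(1.0, 1.0, 0.1, 1.0)):
    # mem_traj: list of soft memory tapes over time, p_stop[t]: probability
    # mass assigned to halting at step t, target: expected final tape.
    l_cor, l_halt, l_eff, l_conf = lam
    T = len(mem_traj)
    correctness = ((mem_traj[-1] - target) ** 2).sum()
    halting = 1.0 - p_stop.sum()                   # penalize never halting
    efficiency = sum(1.0 - p_stop[:t + 1].sum() for t in range(T))
    confidence = sum(p_stop[t] * ((mem_traj[t] - target) ** 2).sum()
                     for t in range(T))
    return (l_cor * correctness + l_halt * halting
            + l_eff * efficiency + l_conf * confidence)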
The loss is a weighted sum of the four above-mentioned terms. We denote the loss of the $i$-th training
sample, given parameters $\theta$, as $L_i(\theta)$. Our learning objective is then specified as:
$$\min_\theta \sum_i L_i(\theta) \quad \text{s.t.} \quad \theta \in \Theta, \qquad (4)$$
where $\Theta$ is a set over the parameters such that the outputs of the Controller, the initial values of each
register and of the instruction register are all probability distributions.
The above optimisation is a highly non-convex problem. In the rest of this section, we will first
present a small modification to the model that will remove the constraints to be able to use standard
gradient descent-based methods. Moreover, a good initialisation is helpful to solve these non-convex
problems. To alleviate this deficiency, we now introduce our Neural Compiler that will provide a
good initialisation.
4.2
Reformulation
In order to use gradient descent methods without having to project the parameters on ?, we alter the
formulation of the Controller. We use softmax layers so that we can learn unnormalized scores that
are then mapped to probability distributions. We add one after each linear layer of the Controller and
for the initial values of the registers. This way, we transform the constrained-optimisation problem
into an unconstrained one, allowing us to use standard gradient descent methods. As discussed in
other works [10], this kind of model is hard to train and requires a high number of random restarts
before converging to a good solution. We will now present a Neural Compiler that will provide good
initialisations to help with this problem.
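As a minimal sketch of this reformulation (layer names and sizes below are illustrative, not the paper's exact architecture), every linear layer of the Controller is followed by a softmax, and the initial register values are kept as unconstrained scores that are normalised on use:

    import torch
    import torch.nn as nn

    class SoftmaxController(nn.Module):
        """Controller whose outputs are always valid probability distributions,
        so gradient descent never has to project back onto the constraint set."""

        def __init__(self, n_registers, n_instructions):
            super().__init__()
            self.instr = nn.Linear(n_registers, n_instructions)
            self.arg1 = nn.Linear(n_registers, n_registers)
            self.arg2 = nn.Linear(n_registers, n_registers)
            self.out = nn.Linear(n_registers, n_registers)
            # Unconstrained scores for the initial register values.
            self.init_scores = nn.Parameter(torch.zeros(n_registers))

        def initial_registers(self):
            return torch.softmax(self.init_scores, dim=-1)

        def forward(self, state):
            return (torch.softmax(self.instr(state), dim=-1),
                    torch.softmax(self.arg1(state), dim=-1),
                    torch.softmax(self.arg2(state), dim=-1),
                    torch.softmax(self.out(state), dim=-1))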
4.3 Neural Compiler
The goal for the Neural Compiler is to convert an algorithm, written as an unambiguous program, to
a set of parameters. These parameters, when put into the controller, will reproduce the exact steps of
the algorithm. This is very similar to the problem framed by Reed and de Freitas [12], but we show
here a way to accomplish it without any learning.
The different steps of the compilation are illustrated in Figure 2. The first step is to go from the
written version of the program to the equivalent list of low-level instructions. This step can be seen
as going from Figure 2a to Figure 2b. The illustrative example uses a fairly low-level language but
traditional features of programming languages such as loops or if-statements can be supported
using the JEZ instruction. The use of constants as arguments or as values is handled by introducing
new registers that hold these values. The value required to be passed as target position to the JEZ
instruction can be resolved at compile time.
Having obtained this intermediate representation, generating the parameters is straightforward. As
can be seen in Figure 2b, each line contains one instruction, the two input registers and the output
register, and corresponds to a command that the Controller will have to output. If we ensure that
IR is a Dirac-delta distribution on a given value, then the matrix-vector product is equivalent
to selecting a row of the weight matrix. As IR is incremented at each iteration, the Controller
outputs the rows of the matrix in order. We thus have a one-to-one mapping between the lines of the
intermediate representation and the rows of the weight matrix. An example of these matrices can be
found in Figure 2c. The weight matrix has 10 rows, corresponding to the number of lines of code
of our intermediate representation. For example, on the first line of the matrix corresponding to the
output (2c, iv), we see that the fifth element has value 1. This is linked to the first line of code where
the output of the READ operation is stored into the fifth register. With this representation, we can
note that the number of parameters is linear in the number of lines of code in the original program
and that the largest representable number in our Machine needs to be greater than the number of lines
in our program.

    Initial Registers:
        var head = 0;  var nb_jump = 1;  var out_write = 2;
        R1 = 6;  R2 = 2;  R3 = 0;  R4 = 2;  R5 = 1;  R6 = 0;  R7 = 0;

    Program:
              nb_jump   = READ(nb_jump);
              out_write = READ(out_write);
        loop: head      = READ(head);
              nb_jump   = DEC(nb_jump);
              JEZ(nb_jump, end);
              JEZ(0, loop);
        end:  head      = INC(head);
              head      = READ(head);
              WRITE(out_write, head);
              STOP();

    (a) Input program

        0 : R5 = READ (R5, R7)
        1 : R4 = READ (R4, R7)
        2 : R6 = READ (R6, R7)
        3 : R5 = DEC  (R5, R7)
        4 : R7 = JEZ  (R5, R1)
        5 : R3 = JEZ  (R3, R2)
        6 : R6 = INC  (R6, R7)
        7 : R6 = READ (R6, R7)
        8 : R7 = WRITE(R4, R6)
        9 : R7 = STOP (R7, R7)

    (b) Intermediary representation

    (c) Weights: four matrices, (i) Instr., (ii) Arg1, (iii) Arg2 and (iv) Out,
        with one one-hot row per line of (b); the individual entries are omitted.

Figure 2: Example of the compilation process. (2a) Program written to perform the ListK task: given a pointer
to the head of a linked list, an integer k, a target cell and a linked list, write in the target cell the k-th element of
the list. (2b) Intermediary representation of the program. This corresponds to the instructions that a Random
Access Machine would need to perform to execute the program. (2c) Representation of the weights that encode
the intermediary representation. Each row of the matrix corresponds to one state/line. Initial values of the registers
are also parameters of the model, omitted here.
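The mapping itself is mechanical enough to sketch in a few lines. In the sketch below (our own naming, not the released implementation), each line of the intermediate representation, e.g. ('READ', 5, 7, 5) for line 0 of Figure 2b, becomes a one-hot row of the four weight matrices:

    import numpy as np

    def compile_lines(lines, n_registers, instruction_ids):
        """Turn a list of (instruction, arg1, arg2, out) tuples into the four
        weight matrices of the Controller, one one-hot row per program line."""
        n = len(lines)
        W_instr = np.zeros((n, len(instruction_ids)))
        W_arg1 = np.zeros((n, n_registers))
        W_arg2 = np.zeros((n, n_registers))
        W_out = np.zeros((n, n_registers))
        for row, (instr, a1, a2, out) in enumerate(lines):
            W_instr[row, instruction_ids[instr]] = 1.0
            W_arg1[row, a1] = 1.0
            W_arg2[row, a2] = 1.0
            W_out[row, out] = 1.0
        return W_instr, W_arg1, W_arg2, W_out

    # With IR a Dirac-delta on state t, multiplying IR by each matrix selects
    # exactly row t, reproducing the compiled program step by step.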
Moreover, any program written in a regular assembly language can be rewritten to use only our
restricted set of instructions. This can be done firstly because all the conditionals of the assembly
language can be expressed as a combination of arithmetic and JEZ instructions. Secondly, because all
the arithmetic operations can be represented as a combination of our simple arithmetic operations,
loops and if-statements.
first rewritten to use our restricted set of instructions and then compiled down to a set of weights for
our model. Even though other models use an LSTM as the controller, we showed here that a Controller
composed of simple linear functions is expressive enough. The advantage of this simpler model is
that we can now easily interpret the weights of our model in a way that is not possible if we use a
recurrent network as a controller.
The most straightforward way to leverage the results of the compilation is to initialise the Controller
with the weights obtained through compilation of the generic algorithm. To account for the extra
softmax layer, we need to multiply the weights produced by the compiler by a large constant to output
Dirac-delta distributions. Some results associated with this technique can be found in Section 5.1.
However, if we initialise with exactly this sharp set of parameters, the training procedure is not able
to move away from the initialisation as the gradients associated with the softmax in this region are
very small. Instead, we initialise the controller with a non-ideal version of the generic algorithm.
This means that the choice with the highest probability in the output of the Controller is correct, but
the probability of other choices is not zero. As can be seen in Section 5.2, this allows the Controller
to learn by gradient descent a new algorithm, different from the original one, that has a lower loss
than the ideal version of the compiled program.
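A minimal sketch of the two initialisations, assuming the compiled rows are one-hot scores fed to a softmax:

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    row = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # one compiled one-hot row

    # Ideal (sharp) initialisation: a huge scale makes the softmax output a
    # near Dirac-delta, but its gradients are vanishingly small.
    print(softmax(100.0 * row))  # ~[1.00, 0.00, 0.00, 0.00, 0.00]

    # Non-ideal (soft) initialisation: the compiled choice stays the argmax
    # while every alternative keeps non-zero probability, so training can
    # still move away from the compiled program.
    print(softmax(5.0 * row))    # ~[0.97, 0.007, 0.007, 0.007, 0.007]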
5 Experiments
We performed two sets of experiments. The first shows the capability of the Neural Compiler to
perfectly reproduce any given program. The second shows that our Neural Compiler can adapt and
improve the performance of programs. We present results of data-specific optimisation being carried
out and show decreases in runtime for all the algorithms; additionally, for some algorithms, we show
that the runtime falls into a different computational-complexity class altogether. All the code required to
reproduce these experiments is available online at https://github.com/albanD/adaptive-neural-compilation.
5.1 Compilation
The compiler described in Section 4.3 allows us to go from a program written using our instruction
set to a set of weights θ for our Controller.
To illustrate this point, we implemented simple programs that can solve the tasks introduced by Kurach
et al. [8] and a shortest path problem. One of these implementations can be found in Figure 2a, while
the others are available in the supplementary materials. These programs are written in a specific
language, and are transformed by the Neural Compiler into parameters for the model. As expected,
the resulting models solve the original tasks exactly and can generalise to any input sequence.
5.2 ANC experiments
In addition to being able to reproduce any given program as was done by Reed and de Freitas [12],
we have the possibility of optimising the resulting program further. We exhibit this by compiling
programs down to our model and optimising their performance. The efficiency gain for these tasks
come either from finding simpler, equivalent algorithms or by exploiting some bias in the data to
either remove instructions or change the underlying algorithm.
We identify three different levels of interpretability for our model: the first type corresponds to
weights containing only Dirac-delta distributions; here there is an exact one-to-one mapping between
lines in the weight matrices and lines of assembly code. In the second type where all probabilities
are Dirac-delta except the ones associated with the execution of the JEZ instruction, we can recover
an exact algorithm that will use if statements to enumerate the different cases arising from this
conditional jump. In the third type where any operation other than JEZ is executed in a soft way or
use a soft argument, it is not possible to recover a program that will be as efficient as the learned one.
We present here briefly the considered tasks and biases, and refer the reader to the supplementary
materials for a detailed encoding of the input/output tape.
1. Access: Given a value k and an array A, return A[k]. In the biased version, the value of k is
always the same, so the address of the required element can be stored in a constant. This is
similar to the optimisation known as constant folding.
2. Swap: Given an array A and two pointers p and q, swap the elements A[p] and A[q]. In the
biased version, p and q are always the same so reading them can be avoided.
3. Increment: Given an array, increment all its elements by 1. In the biased version, the array
is of fixed size and the elements of the array have the same value so you do not need to read
all of them when going through the array.
4. Listk: Given a pointer to the head of a linked list, a number k and a linked list, find the value
of the k-th element. In the biased version, the linked list is organised in order in memory, as
would be an array, so the address of the k-th value can be computed in constant time. This
is the example developed in Figure 2.
5. Addition: Two values are written on the tape and should be summed. No data bias is
introduced but the starting algorithm is non-efficient: it performs the addition as a series of
increment operation. The more efficient operation would be to add the two numbers.
6. Sort: Given an array A, sort it. In the biased version, only the start of the array might be
unsorted. Once the start has been arranged, the end of the array can be safely ignored.
For each of these tasks, we perform a grid search on the loss parameters and on our hyper-parameters.
Training is performed using Adam [7] and success rates are obtained by running the optimisation with
100 different random seeds. We consider that a program has been successfully optimised when two
conditions are fulfilled. First, it needs to output the correct solution for all test cases presenting the
same bias. Second, the average number of iterations taken to solve a problem must have decreased.
Note that if we cared only about the first criterion, the methods presented in Section 5.1 would already
provide a success rate of 100%, without requiring any training.
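In code, the two success conditions can be checked roughly as follows; the run helper, returning the final output tape and the number of iterations used, is hypothetical:

    def is_successful_optimisation(learned, generic, biased_cases):
        """A program counts as successfully optimised when (1) it is correct on
        all test cases exhibiting the bias and (2) its average number of
        iterations is lower than that of the generic program."""
        results = [run(learned, x) for x, _ in biased_cases]  # hypothetical run()
        correct = all(out == y
                      for (out, _), (_, y) in zip(results, biased_cases))
        avg_learned = sum(t for _, t in results) / len(results)
        avg_generic = (sum(run(generic, x)[1] for x, _ in biased_cases)
                       / len(biased_cases))
        return correct and avg_learned < avg_generic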
The results are presented in Table 1. For each of these tasks, we manage to find faster algorithms. In
the simple cases of Access and Swap, the optimal algorithms for the considered bias are obtained.
They are found by incorporating heuristics to the algorithm and storing constants in the initial values
of the registers. The learned programs for these tasks are always in the first case of interpretability,
this means that we can recover the most efficient algorithm from the learned weights.
Table 1: Average number of iterations required to solve instances of the problems for the original program, the
best learned program and the ideal algorithm for the biased dataset. We also include the success rate of reaching
a more efficient algorithm across multiple random restarts.

                  Access  Increment  Swap  ListK  Addition  Sort
    Generic          6       40       10     18      20      38
    Learned          4       16        6     11       9      18
    Ideal            4       34        6     10       6       9.5
    Success Rate    37%      84%      27%    19%     12%     74%
While ListK and Addition have lower success rates, improvements between the original and learned
algorithms are still significant. Both were initialised with iterative algorithms with O(n) complexities.
They managed to find constant time O(1) algorithms to solve the given problems, making the runtime
independent of the input. Achieving this means that the equivalence between the two approaches
has been identified, similar to how optimising compilers operate. Moreover, on the ListK task, some
learned programs correspond to the second type of interpretability. Indeed, these programs use soft
jumps to condition the execution on the value of k. Even though these programs would not generalise
to other values of k, some learned programs for this task achieve a type-one interpretability and a
study of the learned algorithms reveals that they can generalise to any value of k.
Finally, the Increment task achieves an unexpected result. Indeed, it is able to outperform our best
possible algorithm. By looking at the learned program, we can see that it is actually leveraging the
possibility to perform soft writes over multiple elements of the memory at the same time to reduce
its runtime. This is the only case where we see a learned program associated with the third type of
interpretability. While our ideal algorithm would give a confidence of 1 on the output, this algorithm
is unable to do so, but it has a high enough confidence of 0.9 to be considered a correct algorithm.
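For intuition, a soft write can be sketched as a convex combination of the old memory and the written value distribution; the shapes and names below are our own:

    import torch

    def soft_write(memory, address_dist, value_dist):
        """memory: (n_cells, n_values) distributions; address_dist: (n_cells,);
        value_dist: (n_values,).  Each cell is moved towards the written value
        in proportion to its address probability, which is how a single WRITE
        step can update several cells of the memory at once."""
        a = address_dist.unsqueeze(1)              # (n_cells, 1)
        return (1.0 - a) * memory + a * value_dist  # broadcast over values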
In practice, for all but the most simple tasks, we observe that further optimisation is possible, as some
useless instructions remain present. Some transformations of the controller are indeed difficult to
achieve through the local changes operated by the gradient descent algorithm. An analysis of these
failure modes of our algorithm can be found in the supplementary materials. This motivates us to
envision the use of approaches other than gradient descent to address these issues.
6 Discussion
The work presented here is a first step towards adaptive learning of programs. It opens up several
interesting directions of future research. For example, the definition of efficiency that we considered in
this paper is flexible. We chose to only look at the average number of operations executed to generate
the output from the input. We leave the study of other potential measures, such as Kolmogorov
complexity and source lines of code (SLOC), to name a few, for future work.
As shown in the experiment section, our current method is very good at finding efficient solutions
for simple programs. For more complex programs, only a solution close to the initialisation can
be found. Even though training heuristics could help with the tasks considered here, they would
likely not scale up to real applications. Indeed, the main problem we identified is that the
gradient-descent-based optimisation is unable to explore the space of programs effectively, performing
only local transformations. In future work, we want to explore different optimisation methods. One
approach would be to mix global and local exploration to improve the quality of the solutions. A
more ambitious plan would be to leverage the structure of the problem and use techniques from
combinatorial optimisation to try and solve the original discrete problem.
Acknowledgments
We would like to thank Siddharth Narayanaswamy and Diane Bouchacourt for helpful discussions and proofreading the paper. This work was supported by the EPSRC, Leverhulme Trust, Clarendon Fund and the ERC
grant ERC-2012-AdG 321162-HELIOS, EPSRC/MURI grant ref EP/N019474/1, EPSRC grant EP/M013774/1,
EPSRC Programme Grant Seebibyte EP/M013774/1 and the Microsoft Research PhD Scholarship Program.
References
[1] Marcin Andrychowicz and Karol Kurach. Learning efficient algorithms with hierarchical attentive memory. CoRR, 2016.
[2] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. CoRR, 2014.
[3] Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In NIPS, 2015.
[4] Frédéric Gruau, Jean-Yves Ratajszczak, and Gilles Wiber. A neural compiler. Theoretical Computer Science, 1995.
[5] Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015.
[6] Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. In ICLR, 2016.
[7] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[8] Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. In ICLR, 2016.
[9] Henry Massalin. Superoptimizer: a look at the smallest program. In ACM SIGPLAN Notices, volume 22, pages 122–126. IEEE Computer Society Press, 1987.
[10] Arvind Neelakantan, Luke Vilnis, Quoc V Le, Ilya Sutskever, Lukasz Kaiser, Karol Kurach, and James Martens. Adding gradient noise improves learning for very deep networks. In ICLR, 2016.
[11] João Pedro Neto, Hava Siegelmann, and Félix Costa. Symbolic processing in neural networks. Journal of the Brazilian Computer Society, 2003.
[12] Scott Reed and Nando de Freitas. Neural programmer-interpreters. In ICLR, 2016.
[13] Eric Schkufza, Rahul Sharma, and Alex Aiken. Stochastic superoptimization. In ACM SIGARCH Computer Architecture News, 2013.
[14] Rahul Sharma, Eric Schkufza, Berkeley Churchill, and Alex Aiken. Conditionally correct superoptimization. In OOPSLA, 2015.
[15] Hava Siegelmann. Neural programming language. In AAAI, 1994.
[16] Ronald Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 1992.
[17] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural turing machines. arXiv preprint arXiv:1505.00521, 2015.
[18] Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. Learning simple algorithms from examples. CoRR, 2015.
On the Recursive Teaching Dimension of VC Classes
Yu Cheng
Department of Computer Science
University of Southern California
[email protected]
Xi Chen
Department of Computer Science
Columbia University
[email protected]
Bo Tang
Department of Computer Science
Oxford University
[email protected]
Abstract
The recursive teaching dimension (RTD) of a concept class C ⊆ {0, 1}^n, introduced
by Zilles et al. [ZLHZ11], is a complexity parameter measured by the worst-case
number of labeled examples needed to learn any target concept of C in the recursive
teaching model. In this paper, we study the quantitative relation between RTD and
the well-known learning complexity measure VC dimension (VCD), and improve
the best known upper and (worst-case) lower bounds on the recursive teaching
dimension with respect to the VC dimension.

Given a concept class C ⊆ {0, 1}^n with VCD(C) = d, we first show that RTD(C)
is at most d · 2^{d+1}. This is the first upper bound for RTD(C) that depends only on
VCD(C), independent of the size of the concept class |C| and its domain size n.
Before our work, the best known upper bound for RTD(C) is O(d2^d log log |C|),
obtained by Moran et al. [MSWY15]. We remove the log log |C| factor.

We also improve the lower bound on the worst-case ratio of RTD(C) to VCD(C).
We present a family of classes {C_k}_{k≥1} with VCD(C_k) = 3k and RTD(C_k) = 5k,
which implies that the ratio of RTD(C) to VCD(C) in the worst case can be as
large as 5/3. Before our work, the largest ratio known was 3/2 as obtained by
Kuhlmann [Kuh99]. Since then, no finite concept class C has been known to satisfy
RTD(C) > (3/2) · VCD(C).
1 Introduction
In computational learning theory, one of the fundamental challenges is to understand how different
information complexity measures arising from different learning models relate to each other. These
complexity measures determine the worst-case number of labeled examples required to learn any
concept from a given concept class. For example, one of the most notable results along this line
of research is that the sample complexity in PAC-learning is linearly related to the VC dimension
[BEHW89]. Recall that the VC dimension of a concept class C ⊆ {0, 1}^n [VC71], denoted by
VCD(C), is the maximum size of a shattered subset of [n] = {1, . . . , n}, where we say Y ⊆ [n] is
shattered if for every binary string b of length |Y|, there is a concept c ∈ C such that c|_Y = b. Here
we use c|_X to denote the projection of c on X. As the best-studied information complexity measure,
VC dimension is known to be closely related to many other complexity parameters, and it serves as a
natural parameter to compare against across various models of learning and teaching.
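Since the classes studied below are finite, this definition can be checked directly by brute force. The following sketch (our own toy helper, exponential in n) makes the shattering condition explicit:

    from itertools import combinations

    def vc_dimension(concepts, n):
        """VC dimension of a finite class C ⊆ {0,1}^n given as 0/1 tuples."""
        def shattered(Y):
            patterns = {tuple(c[i] for i in Y) for c in concepts}
            return len(patterns) == 2 ** len(Y)

        d = 0
        for size in range(1, n + 1):
            if any(shattered(Y) for Y in combinations(range(n), size)):
                d = size
            else:
                break  # every subset of a shattered set is shattered, so stop
        return d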
Instead of the PAC-learning model where the algorithm takes random samples, we consider an
interactive learning model where a helpful teacher selects representative examples and presents them
to the learner, with the objective of minimizing the number of examples needed. The notion of a
teaching set was introduced in mathematical models for teaching. The teaching set of a concept c ∈ C
is a set of indices (or examples) X ⊆ [n] that uniquely identifies c from C. Formally, given a concept
class C ⊆ {0, 1}^n (a set of binary strings of length n), X ⊆ [n] is a teaching set for a concept c ∈ C
(a binary string in C) if X satisfies

    c|_X ≠ c'|_X,    for all other concepts c' ∈ C.
The teaching dimension of a concept class C is the smallest number t such that every c ∈ C has a
teaching set of size no more than t [GK95, SM90]. However, teaching dimension does not always
capture the cooperation in teaching and learning (as we will see in Example 2), and a more optimistic
and realistic notion of recursive teaching dimension has been introduced and studied extensively in
the literature [Kuh99, DSZ10, ZLHZ11, WY12, DFSZ14, SSYZ14, MSWY15].

Definition 1. The recursive teaching dimension of a class C ⊆ {0, 1}^n, denoted by RTD(C), is the
smallest number t where one can order all the concepts of C as an ordered sequence c_1, . . . , c_{|C|} such
that every concept c_i, i < |C|, has a teaching set of size no more than t in {c_i, . . . , c_{|C|}}.
Hence, RTD(C) measures the worst-case number of labeled examples needed to learn any target
concept in C, if the teacher and the learner are cooperative. We would like to emphasize that an
optimal ordered sequence (as in Definition 1) can be derived by the teacher and learner separately
without any communication: they can place all concepts in C that have the smallest teaching dimension
at the beginning of the sequence, then remove these concepts from C and proceed recursively.
By definition, RTD(C) is always bounded from above by the teaching dimension of C but can be
much smaller than the teaching dimension. We use the following example to illustrate the difference
between the teaching dimension and the recursive teaching dimension.
Example 2. Consider the class C ⊆ {0, 1}^n with n + 1 concepts: the empty concept 0 and all the
singletons. For example, when n = 3, C = {000, 100, 010, 001}. Each singleton concept has teaching
dimension 1, while the teaching dimension for the empty concept 0 is n, because the teacher has to
reveal all labels to distinguish 0 from the other concepts. However, if the teacher and the learner
are cooperative, every concept can be taught with one label: If the teacher reveals a "0" label, the
learner can safely assume that the target concept must be 0, because otherwise the teacher would
present a "1" label instead for the other concepts. Equivalently, in the setting of Definition 1, the
teacher and the learner can order the concepts so that the singleton concepts appear before the empty
concept 0. Then every concept has a teaching set of size 1 to distinguish it from the later concepts in
the sequence, and thus the recursive teaching dimension of C is 1.
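This recursive procedure translates directly into code. The brute-force sketch below (our own helper names) computes RTD of a small class given as a set of 0/1 tuples, and returns 1 on the class of Example 2:

    from itertools import combinations

    def teaching_set_size(c, concepts, n):
        """Size of a smallest teaching set for c within `concepts`."""
        others = [c2 for c2 in concepts if c2 != c]
        for size in range(n + 1):
            for X in combinations(range(n), size):
                if all(any(c[i] != c2[i] for i in X) for c2 in others):
                    return size
        return n

    def recursive_teaching_dimension(concepts, n):
        """Repeatedly remove the concepts that are easiest to teach in the
        remaining class; RTD is the largest teaching-set size ever needed."""
        remaining, rtd = set(concepts), 0
        while remaining:
            sizes = {c: teaching_set_size(c, remaining, n) for c in remaining}
            t = min(sizes.values())
            rtd = max(rtd, t)
            remaining -= {c for c, s in sizes.items() if s == t}
        return rtd

    # Example 2 with n = 3:
    # recursive_teaching_dimension({(0,0,0), (1,0,0), (0,1,0), (0,0,1)}, 3) == 1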
In this paper, we study the quantitative relationship between the recursive teaching dimension (RTD)
and the VC dimension (VCD). A bound on the RTD that depends only on the VCD would imply a
close connection between learning from random samples and teaching (under the recursive teaching
model). The same structural properties that make a concept class easy to learn would also give a
bound on the number of examples needed to teach it. Moreover, the recursive teaching dimension is
known to be closely related to sample compression schemes [LW86, War03, DKSZ16], and a better
understanding of the relationship between RTD and VCD might help resolve the long-standing sample
compression conjecture [War03], which states that every concept class has a sample compression
scheme of size linear in its VCD.
1.1 Our Results
Our main result (Theorem 3) is an upper bound of d · 2^{d+1} on RTD(C) when VCD(C) = d. This
is the first upper bound for RTD(C) that depends only on VCD(C), but not on |C|, the size of the
concept class, or n, the domain size. Previously, Moran et al. [MSWY15] showed an upper bound of
O(d2^d log log |C|) for RTD(C); our result removes the log log |C| factor, and answers positively an
open problem posed in [MSWY15].

Our proof tries to reveal examples iteratively to minimize the number of the remaining concepts.
Given a concept class C ⊆ {0, 1}^n, we pick a set of examples Y ⊆ [n] and their labels b ∈ {0, 1}^Y, so
that the set of remaining concepts (with the projection c|_Y = b) is nonempty and has the smallest size
among all choices of Y and b. We then prove by contradiction (with the assumption of VCD(C) = d)
that, if the size of Y is large enough (but still depends on only VCD(C)), the remaining concepts
must have VC dimension at most d − 1. This procedure gives us a recursive formula, which we solve
and obtain the claimed upper bound on RTD of classes of VC dimension d.
We also improve the lower bound on the worst-case factor by which RTD may exceed VCD. We
present a family of classes {C_k}_{k≥1} (Figure 4) with VCD(C_k) = 3k and RTD(C_k) = 5k, which shows
that the worst-case ratio between RTD(C) and VCD(C) is at least 5/3. Before our work, the largest
known multiplicative gap between RTD(C) and VCD(C) was a ratio of 3/2, given by Kuhlmann
[Kuh99]. (Later Doliwa et al. [DFSZ14] showed the smallest class CW with RTD(CW) = (3/2) ·
VCD(CW) (Warmuth's class).) Since then, no finite concept class C with RTD(C) > (3/2) · VCD(C)
has been found.

Instead of exhaustively searching through all small concept classes, our improvement on the lower
bound is achieved by formulating the existence of a concept class with the desired RTD, VCD and
domain size, as a boolean satisfiability problem. We then run state-of-the-art SAT solvers on
these formulae to discover a concept class C0 with VCD(C0) = 3 and RTD(C0) = 5. Based on the
concept class C0, one can produce a family of concept classes {C_k}_{k≥1} with VCD(C_k) = 3k and
RTD(C_k) = 5k, by taking the Cartesian product of k copies of C0: C_k = C0 × · · · × C0.
2 Upper Bound on the Recursive Teaching Dimension
In this section, we prove the following upper bound on RTD(C) with respect to VCD(C).

Theorem 3. Let C ⊆ {0, 1}^n be a class with VCD(C) = d. Then RTD(C) ≤ 2^{d+1}(d − 2) + d + 4.

Given a class C, we use TSmin(C) to denote the smallest integer t such that at least one concept c ∈ C
has a teaching set of size t. (Notice that TSmin(C) is different from the teaching dimension, which
is defined as the smallest t such that every c ∈ C has a teaching set of size at most t.)

Theorem 3 follows directly from Lemma 4 and the observation that the VC dimension of a concept
class does not increase after a concept is removed. (After removing a concept from C, the new class
C' still has VCD(C') ≤ d, and one can apply Lemma 4 again to obtain another concept that has a
teaching set of the desired size in C' and repeat this process.)

Lemma 4. Let C ⊆ {0, 1}^n be a class with VCD(C) = d. Then TSmin(C) ≤ 2^{d+1}(d − 2) + d + 4.
We start with some intuition by reviewing the proof of Kuhlmann [Kuh99] that every class C with
VCD(C) = 1 must have a concept c ∈ C with a teaching set of size 1. Given an index i ∈ [n] and a
bit b ∈ {0, 1}, we use C_b^i to denote the set of concepts c ∈ C such that c_i = b. The proof starts by
picking an index i and a bit b such that C_b^i is nonempty and has the smallest size among all choices of
i and b. The proof then proceeds to show that C_b^i contains a unique concept, which by the definition of
C_b^i has a teaching set {i} of size 1. To see why C_b^i must be a singleton set, we assume for contradiction
that it contains more than one concept. Then there exists an index j ≠ i and two concepts c, c' ∈ C_b^i
such that c_j = 0 and c'_j = 1. Since C has VCD(C) = 1, {i, j} cannot be shattered and thus, all the
concepts ĉ ∈ C with ĉ_i = 1 − b must share the same ĉ_j, say ĉ_j = 0. As a result, it is easy to verify
that C_1^j is a nonempty proper subset of C_b^i, contradicting the choice of i and b at the beginning.
Moran et al. [MSWY15] used a similar approach to show that every so-called (3, 6)-class C has
TSmin(C) at most 3. They define a class C ⊆ {0, 1}^n to be a (3, 6)-class if for any three indices
i, j, k ∈ [n], the projection of C onto {i, j, k} has at most 6 patterns. (In contrast, VCD(C) = 2
means that the projection of C has at most 7 patterns. So C being a (3, 6)-class is a stronger condition
than VCD(C) = 2.) The proof of [MSWY15] starts by picking two indices i, j ∈ [n] and two bits
b1, b2 ∈ {0, 1} such that C_{b1,b2}^{i,j}, i.e., the set of c ∈ C such that c_i = b1 and c_j = b2, is nonempty
and has the smallest size among all choices of i, j and b1, b2. They then prove by contradiction that
VCD(C_{b1,b2}^{i,j}) = 1, and combine with [Kuh99] to conclude that TSmin(C) ≤ 3.
Our proof extends this approach further. Given a concept class C ⊆ {0, 1}^n with VCD(C) = d, let
k = 2^d(d − 1) + 1 and we pick a set Y* ⊆ [n] of k indices and a string b* ∈ {0, 1}^k such that C_{b*}^{Y*},
the set of c ∈ C such that the projection c|_{Y*} = b*, is nonempty and has the smallest size among all
choices of Y and b. We then prove by contradiction (with the assumption of VCD(C) = d) that C_{b*}^{Y*}
must have VC dimension at most d − 1. This gives us a recursive formula that bounds the TSmin of
classes of VC dimension d, which we solve to obtain the upper bound stated in Lemma 4.
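For concreteness, unrolling this recursion, f(d) ≤ k + f(d − 1) with k = 2^d(d − 1) + 1 and the base case f(1) = 1 from [Kuh99], recovers the stated bound; the following derivation is our own check, using the identity Σ_{j=2..d} (j − 1)2^j = 2^{d+1}(d − 2) + 4:

    f(d) ≤ f(1) + Σ_{j=2..d} (2^j(j − 1) + 1)
         = d + Σ_{j=2..d} (j − 1)2^j
         = d + 2^{d+1}(d − 2) + 4
         = 2^{d+1}(d − 2) + d + 4.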
We now prove Lemma 4.
Proof of Lemma 4. We prove by induction on d. Let

    f(d) = max_{C : VCD(C) ≤ d} TSmin(C).

Our goal is to prove the following upper bound for f(d):

    f(d) ≤ 2^{d+1}(d − 2) + d + 4,    for all d ≥ 1.    (1)

The base case of d = 1 follows directly from [Kuh99].

For the induction step, we show that condition (1) holds for some d > 1, assuming that it holds for
d − 1. Take any concept class C ⊆ {0, 1}^n with VCD(C) ≤ d. Let k = 2^d(d − 1) + 1. If n ≤ k then
we are already done because

    TSmin(C) ≤ n ≤ k = 2^d(d − 1) + 1 ≤ 2^{d+1}(d − 2) + d + 4,

where the last inequality holds for all d ≥ 1. Assume in the rest of the proof that n > k. Then any set
of k indices Y ⊆ [n] partitions C into 2^k (possibly empty) subsets, denoted by

    C_b^Y = {c ∈ C : c|_Y = b},    for each b ∈ {0, 1}^k.

We follow the approach of [Kuh99] and [MSWY15] to choose a set of k indices Y* ⊆ [n] as well as
a string b* ∈ {0, 1}^k such that C_{b*}^{Y*} is nonempty and has the smallest size among all nonempty C_b^Y,
over all choices of Y and b. Without loss of generality we assume below that Y* = [k] and b* = 0
is the all-zero string. For notational convenience, we also write C_b to denote C_b^{Y*} for b ∈ {0, 1}^k.
Notice that if C_{b*} = C_{b*}^{Y*} has VC dimension at most d − 1, then we have

    TSmin(C) ≤ k + f(d − 1) ≤ 2^{d+1}(d − 2) + d + 4,

using the inductive hypothesis. This is because according to the definition of f, one of the concepts
c ∈ C_{b*} has a teaching set T ⊆ [n] \ Y* of size at most f(d − 1) to distinguish it from other concepts
of C_{b*}. Thus, [k] ∪ T is a teaching set of c in the original class C, of size at most k + f(d − 1).
[Figure 1: a table of the patterns of C on the indices Y* ∪ Z; the individual entries are omitted.]

Figure 1: An illustration for the proof of Lemma 4, TSmin(C) ≤ 6 when d = 2. We prove by
contradiction that the smallest nonempty set C_{b*}^{Y*}, after fixing five bits, has VCD(C_{b*}^{Y*}) = 1, where
Y* = {1, 2, 3, 4, 5} and b* = 0. In this example, we have Z = {6, 7}, Y' = {2, 3, 4, 6, 7} and
b' = 0. Note that C_{b'}^{Y'} is indeed a nonempty proper subset of C_0^{Y*}.
Finally, we prove by contradiction that C_{b*} has VC dimension at most d − 1. Assume that C_{b*} has
VC dimension d. Then by definition there exists a set Z ⊆ [n] \ Y* of d indices that is shattered by
C_{b*} (i.e., all the 2^d possible strings appear in C_{b*} on Z). Observe that for each i ∈ Y*, the union of
all C_b with b_i = 1 (recall that b* is the all-zero string) must miss at least one string on Z, which we
denote by p_i (choose one arbitrarily if more than one are missing); otherwise, C has a shattered set
of size d + 1, i.e., Z ∪ {i}, contradicting the assumption that VCD(C) ≤ d. (See Figure 1 for
an example when d = 2 and k = 5.) However, given that there are only 2^d possibilities for each p_i
(and |Y*| = k = 2^d(d − 1) + 1), it follows from the pigeonhole principle that there exists a subset
K ⊆ Y* of size d such that p_i = p for every i ∈ K, for some p ∈ {0, 1}^d. Let Y' = (Y* \ K) ∪ Z
be a new set of k indices and let b' = 0^{k−d} ◦ p, where ◦ denotes concatenation. Then C_{b'}^{Y'} is a
nonempty and proper subset of C_{b*}^{Y*}, a contradiction with our choice of Y* and b*.

This finishes the induction and the proof of the lemma.
3 Lower Bound on the Worst-Case Recursive Teaching Dimension

In this section, we present an improved lower bound on the worst-case factor by which RTD(C) may
exceed VCD(C). Recall the definition of TSmin(C), which denotes the number of examples needed
to teach some concept c ∈ C. By definition we always have RTD(C) ≥ TSmin(C) for any class C.
Kuhlmann [Kuh99] first found a class C such that RTD(C) = TSmin(C) = 3 and VCD(C) = 2, with
domain size n = 16 and |C| = 24. Since then, no class C with RTD(C) > (3/2) · VCD(C) has been
found. Recently, Doliwa et al. [DFSZ14] gave the smallest such class CW (Warmuth's class, as shown
in Figure 2), with RTD(CW) = TSmin(CW) = 3, VCD(CW) = 2, n = 5, and |CW| = 10. We can
view CW as taking all five possible rotations of the two concepts (0, 0, 0, 1, 1) and (0, 1, 0, 1, 1).
    x1  x2  x3  x4  x5             x1  x2  x3  x4  x5
    0   0   0   1   1              0   0   0   1   1
    0   1   0   1   1              0   1   0   1   1
    0   0   1   1   0
    1   0   1   1   0                      (b)
    0   1   1   0   0
    0   1   1   0   1
    1   1   0   0   0
    1   1   0   1   0
    1   0   0   0   1
    1   0   1   0   1

            (a)

Figure 2: (a) Warmuth's class CW with RTD(CW) = 3 and VCD(CW) = 2; (b) The succinct
representation of CW with one concept selected from each rotation-equivalent set of concepts.
The teaching set of each concept is marked with underline.
Given CW one can obtain a family of classes {C_k}_{k≥1} by taking the Cartesian product of k copies:

    C_k = C_W^k = C_W × · · · × C_W,

and it follows from the next lemma that RTD(C_k) = TSmin(C_k) = 3k and VCD(C_k) = 2k.
Lemma 5 (Lemma 16 of [DFSZ14]). Given two concept classes C1 and C2,
let C1 × C2 = {(c1, c2) | c1 ∈ C1, c2 ∈ C2}. Then

    TSmin(C1 × C2) = TSmin(C1) + TSmin(C2),
    RTD(C1 × C2) ≤ RTD(C1) + RTD(C2), and
    VCD(C1 × C2) = VCD(C1) + VCD(C2).
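The product construction itself is one line; a sketch over classes given as sets of 0/1 tuples (our own helper, concatenating the two halves of each product concept):

    from itertools import product

    def product_class(C1, C2):
        """Cartesian product of two concept classes, with concepts over the
        disjoint union of the two domains (domain sizes add)."""
        return {c1 + c2 for c1, c2 in product(C1, C2)}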
Lemma 5 allows us to focus on finding small concept classes with RTD(C) > (3/2) · VCD(C). The
first attempt to find such classes is to exhaustively search over all possible binary matrices and then
compute and compare their VCD and RTD. But brute-force search quickly becomes infeasible as the
domain size n gets larger. For example, even the class CW has fifty 0/1 entries. Instead, we formulate
the existence of a class with certain desired RTD, VCD, and domain size, as a boolean satisfiability
problem, and then run state-of-the-art Boolean Satisfiability (SAT) solvers to see whether the boolean
formula is satisfiable or not.
We briefly describe how to construct an equivalent boolean formula in conjunctive normal form
(CNF). For a fixed domain size n, we have 2^n basic variables x_c, each describing whether a concept
c ∈ {0, 1}^n is included in C or not. We need the VC dimension to be at most VCD, which is equivalent to
requiring that every set S ⊆ [n] of size |S| = VCD + 1 is not shattered by C. So we define auxiliary
variables y_{(S,b)} for each set S of size |S| = VCD + 1, and every string b ∈ {0, 1}^S, indicating
whether a specific pattern b appears in the projection of C on S or not. These auxiliary variables are
decided by the basic variables, and for every S, at least one of the 2^{|S|} patterns must be missing on S.

For the minimum teaching dimension to be at least RTD, we cannot teach any row with RTD − 1
labels. So for every concept c, and every set of indices T ⊆ [n] of size |T| = RTD − 1, we need
at least one other concept c' ≠ c satisfying c|_T = c'|_T so that c' is there to "confuse" c on T. As
an example, we list one clause of each type, from the SAT instance with n = 5, VCD = 2, and
RTD = 3:
    ¬x_{01011} ∨ y_{({1,2,3},010)},      ⋁_b ¬y_{({1,2,3},b)},      ¬x_{01011} ∨ ⋁_{b≠011} x_{(01,b)}.
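A rough sketch of this CNF generation; the variable encoding, clause representation and helper names below are our own, not the exact instances we solved:

    from itertools import combinations, product

    def rtd_vcd_cnf(n, vcd, rtd):
        """Clauses (lists of signed variables) whose satisfying assignments
        are classes C ⊆ {0,1}^n with VCD(C) ≤ vcd and TSmin(C) ≥ rtd."""
        concepts = list(product((0, 1), repeat=n))
        clauses = []
        # VC dimension at most vcd: no (vcd+1)-subset of [n] is shattered.
        for S in combinations(range(n), vcd + 1):
            for b in product((0, 1), repeat=vcd + 1):
                matching = [c for c in concepts
                            if all(c[i] == v for i, v in zip(S, b))]
                # Tie y[S,b] to the basic variables x[c].
                for c in matching:
                    clauses.append([('-', ('x', c)), ('+', ('y', S, b))])
                clauses.append([('-', ('y', S, b))]
                               + [('+', ('x', c)) for c in matching])
            # At least one of the 2^(vcd+1) patterns must be missing on S.
            clauses.append([('-', ('y', S, b))
                            for b in product((0, 1), repeat=vcd + 1)])
        # No chosen concept may be taught with rtd - 1 labels: some other
        # chosen concept must agree with it on every (rtd-1)-subset.
        for c in concepts:
            for T in combinations(range(n), rtd - 1):
                confusers = [c2 for c2 in concepts
                             if c2 != c and all(c2[i] == c[i] for i in T)]
                clauses.append([('-', ('x', c))]
                               + [('+', ('x', c2)) for c2 in confusers])
        return clauses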
Note that there are many ways to formulate our problem as a SAT instance. For example, we could
directly use a boolean variable for each entry of the matrix. But in our experiments, the SAT solvers
run faster using the formulation described above. The SAT solvers we use are Lingeling [Bie15]
and Glucose [AS14] (based on MiniSAT [ES03]). We are able to rediscover CW and rule out the
existence of concept classes for certain small values of (VCD, RTD, n); see Figure 3.
    VCD(C)            2    2    3    3    4    4    3
    RTD(C)            3    4    5    6    6    7    5
    n (domain size)   5    7    7    8    7    8    12
    Satisfiable       Yes  No   No   No   No   No   Yes
    Concept Class     CW (Figure 2)                 Figure 4

Figure 3: The satisfiability of the boolean formulae for small values of VCD(C), RTD(C), and n.
Unfortunately for n > 8, even these SAT solvers are no longer feasible. We use another heuristic
to speed up the SAT solvers when we conjecture the formula to be satisfiable: adding additional
clauses to the SAT formula so that it has fewer solutions (but hopefully is still satisfiable), and is faster
to solve. More specifically, we bundle all the rotation-equivalent concepts, that is, if we include a
concept, we must also include all its rotations. Note that with this restriction, we can reduce the
number of variables by having one for each rotation-equivalent set; we can also reduce the number of
clauses, since if S is not shattered, then we know all rotations of S are also not shattered.
We manage to find a class C0 with RTD(C0) = TSmin(C0) = 5 and VCD(C0) = 3, and domain size
n = 12. A succinct representation of C0 is given in Figure 4, where all rotation-equivalent concepts
(i.e. rows) are omitted. The first 8 rows each represent 12 concepts, and the last row represents 4
concepts (because it is more symmetric), with a total of |C0| = 100 concepts. We also include a text
file with the entire concept class C0 (as a 100 × 12 matrix) in the supplemental material. Applying
Lemma 5, we obtain a family of concept classes {C_k}_{k≥1}, where C_k = C0 × · · · × C0 is the Cartesian
product of k copies of C0, that satisfy RTD(C_k) = 5k and VCD(C_k) = 3k.
4 Conclusion and Open Problem
We improve the best known upper and lower bounds for the worst-case recursive teaching dimension
with respect to VC dimension. Given a concept class C with d = VCD(C), we improve the upper
bound RTD(C) = O(d2^d log log |C|) of Moran et al. [MSWY15] to 2^{d+1}(d − 2) + d + 4, removing
the log log |C| factor as well as the dependency on |C|. In addition, we improve the lower bound
max_C (RTD(C)/VCD(C)) ≥ 3/2 of Kuhlmann [Kuh99] to max_C (RTD(C)/VCD(C)) ≥ 5/3.
Our results are a step towards answering the following question:
Is RTD(C) = O(VCD(C))?
posed by Simon and Zilles [SZ15].
While Kuhlmann [Kuh99] showed that RTD(C) = 1 when VCD(C) = 1, the simplest case that is still
open is to give a tight bound on RTD(C) when VCD(C) = 2: Doliwa et al. [DFSZ14] presented a
concept class C (Warmuth's class) with RTD(C) = 3, while our Theorem 3 shows that RTD(C) ≤ 6.
[Figure 4: a 9 × 12 binary matrix over x1, . . . , x12; the individual entries are omitted.]

Figure 4: The succinct representation of a concept class C0 with RTD(C0) = 5 and VCD(C0) = 3.
The teaching set of each concept is marked with underline.
Acknowledgments
We thank the anonymous reviewers for their helpful comments and suggestions. We also thank Joseph
Bebel for pointing us to the SAT solvers. This work was done in part while the authors were visiting
the Simons Institute for the Theory of Computing. Xi Chen is supported by NSF grants CCF-1149257
and CCF-1423100. Yu Cheng is supported in part by Shang-Hua Teng's Simons Investigator Award.
Bo Tang is supported by ERC grant 321171.
References
[AS14] G. Audemard and L. Simon. Glucose 4.0. 2014. Available at http://www.labri.fr/perso/lsimon/glucose.
[BEHW89] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Learnability and the Vapnik–Chervonenkis dimension. J. ACM, 36(4):929–965, 1989.
[Bie15] A. Biere. Lingeling, Plingeling and Treengeling. 2015. Available at http://fmv.jku.at/lingeling.
[DFSZ14] T. Doliwa, G. Fan, H.-U. Simon, and S. Zilles. Recursive teaching dimension, VC-dimension and sample compression. Journal of Machine Learning Research, 15(1):3107–3131, 2014.
[DKSZ16] M. Darnstädt, T. Kiss, H. U. Simon, and S. Zilles. Order compression schemes. Theor. Comput. Sci., 620:73–90, 2016.
[DSZ10] T. Doliwa, H.-U. Simon, and S. Zilles. Recursive teaching dimension, learning complexity, and maximum classes. In Proceedings of the 21st International Conference on Algorithmic Learning Theory, pages 209–223, 2010.
[ES03] N. Eén and N. Sörensson. An extensible SAT-solver. In Theory and Applications of Satisfiability Testing, 6th International Conference, SAT 2003, pages 502–518, 2003.
[GK95] S. A. Goldman and M. J. Kearns. On the complexity of teaching. Journal of Computer and System Sciences, 50(1):20–31, 1995.
[Kuh99] C. Kuhlmann. On teaching and learning intersection-closed concept classes. In Proceedings of the 4th European Conference on Computational Learning Theory, pages 168–182, 1999.
[LW86] N. Littlestone and M. Warmuth. Relating data compression and learnability. Technical report, University of California, Santa Cruz, 1986.
[MSWY15] S. Moran, A. Shpilka, A. Wigderson, and A. Yehudayoff. Compressing and teaching for low VC-dimension. In Proceedings of the 56th IEEE Annual Symposium on Foundations of Computer Science, pages 40–51, 2015.
[SM90] A. Shinohara and S. Miyano. Teachability in computational learning. In ALT, pages 247–255, 1990.
[SSYZ14] R. Samei, P. Semukhin, B. Yang, and S. Zilles. Algebraic methods proving Sauer's bound for teaching complexity. Theoretical Computer Science, 558:35–50, 2014.
[SZ15] H.-U. Simon and S. Zilles. Open problem: Recursive teaching dimension versus VC dimension. In Proceedings of the 28th Conference on Learning Theory, pages 1770–1772, 2015.
[VC71] V. N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16:264–280, 1971.
[War03] M. K. Warmuth. Compressing to VC dimension many points. In Proceedings of the 16th Annual Conference on Computational Learning Theory and 7th Kernel Workshop, COLT/Kernel 2003, pages 743–744, 2003.
[WY12] A. Wigderson and A. Yehudayoff. Population recovery and partial identification. In Proceedings of the 53rd IEEE Annual Symposium on Foundations of Computer Science, pages 390–399, 2012.
[ZLHZ11] S. Zilles, S. Lange, R. Holte, and M. Zinkevich. Models of cooperative teaching and learning. Journal of Machine Learning Research, 12:349–384, 2011.
Showing versus Doing: Teaching by Demonstration
Mark K Ho
Department of Cognitive, Linguistic, and Psychological Sciences
Brown University
Providence, RI 02912
[email protected]
Michael L. Littman
Department of Computer Science
Brown University
Providence, RI 02912
[email protected]
Fiery Cushman
Department of Psychology
Harvard University
Cambridge, MA 02138
[email protected]
James MacGlashan
Department of Computer Science
Brown University
Providence, RI 02912
[email protected]
Joseph L. Austerweil
Department of Psychology
University of Wisconsin-Madison
Madison, WI 53706
[email protected]
Abstract
People often learn from others' demonstrations, and inverse reinforcement learning
(IRL) techniques have realized this capacity in machines. In contrast, teaching
by demonstration has been less well studied computationally. Here, we develop
a Bayesian model for teaching by demonstration. Stark differences arise when
demonstrators are intentionally teaching (i.e. showing) a task versus simply performing (i.e. doing) a task. In two experiments, we show that human participants
modify their teaching behavior consistent with the predictions of our model. Further, we show that even standard IRL algorithms benefit when learning from
showing versus doing.
1 Introduction
Is there a difference between doing something and showing someone else how to do something?
Consider cooking a chicken. To cook one for dinner, you would do it in the most efficient way
possible while avoiding contaminating other foods. But, what if you wanted to teach a completely
na?ve observer how to prepare poultry? In that case, you might take pains to emphasize certain
aspects of the process. For example, by ensuring the observer sees you wash your hands thoroughly
after handling the uncooked chicken, you signal that it is undesirable (and perhaps even dangerous)
for other ingredients to come in contact with raw meat. More broadly, how could an agent show
another agent how to do a task, and, in doing so, teach about its underlying reward structure?
To model showing, we draw on psychological research on learning and teaching concepts by example.
People are good at this. For instance, when a teacher signals their pedagogical intentions, children
more frequently imitate actions and learn abstract functional representations [6, 7]. Recent work
has formalized concept teaching as a form of recursive social inference, where a teacher chooses
an example that best conveys a concept to a learner, who assumes that the teacher is choosing in
this manner [14]. The key insight from these models is that helpful teachers do not merely select
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
probable examples of a concept, but rather choose examples that best disambiguate a concept from
other candidate concepts. This approach allows for more effective, and more efficient, teaching and
learning of concepts from examples.
We can extend these ideas to explain showing behavior. Although recent work has examined user-assisted teaching [8], identified legible motor behavior in human-machine coordination [9], and
analyzed reward coordination in game theoretic terms [11], previous work has yet to successfully
model how people naturally teach reward functions by demonstration. Moreover, in Inverse Reinforcement Learning (IRL), in which an observer attempts to infer the reward function that an expert
(human or artificial) is maximizing, it is typically assumed that experts are only doing the task and not
intentionally showing how to do the task. This raises two related questions: First, how does a person
showing how to do a task differ from them just doing it? And second, are standard IRL algorithms
able to benefit from human attempts to show how to do a task?
In this paper, we investigate these questions. To do so, we formulate a computational model of
showing that applies Bayesian models of teaching by example to the reward function learning setting.
We contrast this pedagogical model with a model of doing: standard optimal planning in Markov
Decision Processes. The pedagogical model predicts several systematic differences from the standard
planning model, and we test whether human participants reproduce these distinctive patterns. For
instance, the pedagogical model chooses paths to a goal that best disambiguates which goal is being
pursued (Experiment 1). Similarly, when teaching feature-based reward functions, the model will
prioritize trajectories that better signal the reward value of state features or even perform trajectories
that would be inefficient for an agent simply doing the task (Experiment 2). Finally, to determine
whether showing is indeed better than doing, we train a standard IRL algorithm with our model
trajectories and human trajectories.
2 A Bayesian Model of Teaching by Demonstration
Our model draws on two approaches: IRL [2] and Bayesian models of teaching by example [14].
The first of these, IRL and the related concept of inverse planning, have been used to model people's theory of mind, or the capacity to infer another agent's unobservable beliefs and/or desires through
their observed behavior [5]. The second, Bayesian models of pedagogy, prescribe how a teacher
should use examples to communicate a concept to an ideal learner. Our model of teaching by
demonstration, called Pedagogical Inverse Reinforcement Learning, merges these two approaches
together by treating a teacher's demonstration trajectories as communicative acts that signal the
reward function that an observer should learn.
2.1 Learning from an Expert's Actions
2.1.1 Markov Decision Processes
An agent that plans to maximize a reward function can be modeled as the solution to a Markov
Decision Process (MDP). An MDP is defined by the tuple $\langle S, A, T, R, \gamma \rangle$: a set of states in the world S; a set of actions for each state A(s); a transition function that maps states and actions to next states, $T : S \times A \rightarrow S$ (in this work we assume all transitions are deterministic, but this can be generalized to probabilistic transitions); a reward function that maps states to scalar rewards, $R : S \rightarrow \mathbb{R}$; and a discount factor $\gamma \in [0, 1]$. Solutions to an MDP are stochastic policies that map states to distributions over actions, $\pi : S \rightarrow P(A(s))$. Given a policy, we define the expected cumulative discounted reward, or value, $V^\pi(s)$, at each state associated with following that policy:
$$V^\pi(s) = \mathbb{E}_\pi\left[\sum_{k=0}^{\infty} \gamma^k r_{t+k+1} \,\middle|\, s_t = s\right]. \quad (1)$$

In particular, the optimal policy for an MDP yields the optimal value function, $V^*$, which is the value function that has the maximal value for every state ($V^*(s) = \max_\pi V^\pi(s), \forall s \in S$). The optimal policy also defines an optimal state-action value function, $Q^*(s, a) = \mathbb{E}_\pi[r_{t+1} + \gamma V^*(s_{t+1}) \mid s_t = s, a_t = a]$.
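As a concreteness check, here is a minimal value-iteration sketch in Python/numpy that computes these optimal action-values for the deterministic, state-reward MDPs just defined. The function name and the (S, A) successor-table encoding are our own assumptions, not code from the paper:

```python
import numpy as np

def q_values(R, T, gamma=0.95, n_iters=200):
    """Compute Q* by value iteration for a deterministic MDP.

    R: (S,) array of state rewards; T: (S, A) integer array of
    successor states, so Q*(s, a) = R(s') + gamma * V*(s') with s' = T[s, a].
    """
    V = np.zeros(R.shape[0])
    for _ in range(n_iters):
        Q = R[T] + gamma * V[T]   # reward and value of the successor state
        V = Q.max(axis=1)         # Bellman optimality backup
    return Q
```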
Algorithm 1 Pedagogical Trajectory Algorithm
Require: starting states s, reward functions $\{R_1, R_2, ..., R_N\}$, transition function T, maximum showing trajectory depth $l_{max}$, minimum hypothetical doing probability $p_{min}$, teacher maximization parameter $\alpha$, discount factor $\gamma$.
1: $\Pi \leftarrow \emptyset$
2: for i = 1 to N do
3:   $Q_i$ = calculateActionValues(s, $R_i$, T, $\gamma$)
4:   $\pi_i$ = softmax($Q_i$, $\lambda$)
5:   $\Pi$.add($\pi_i$)
6: Calculate $J = \{j : s_1 \in s,\ \text{length}(j) \le l_{max},\ \text{and}\ \exists \pi \in \Pi\ \text{s.t.}\ \prod_{(s_i, a_i) \in j} \pi(a_i \mid s_i) > p_{min}\}$.
7: Construct the hypothetical doing probability distribution $P_{\text{Doing}}(j \mid R)$ as an N × M array.
8: $P_{\text{Observing}}(R \mid j) = \frac{P_{\text{Doing}}(j \mid R)\, P(R)}{\sum_{R'} P_{\text{Doing}}(j \mid R')\, P(R')}$
9: $P_{\text{Showing}}(j \mid R) = \frac{P_{\text{Observing}}(R \mid j)^\alpha}{\sum_{j'} P_{\text{Observing}}(R \mid j')^\alpha}$
10: return $P_{\text{Showing}}(j \mid R)$
2.1.2 Inverse Reinforcement Learning (IRL)
In the Reinforcement Learning setting, an agent takes actions in an MDP and receives rewards, which
allow it to eventually learn the optimal policy [15]. We thus assume that an expert who knows the
reward function and is doing a task selects an action $a_t$ in a state $s_t$ according to a Boltzmann policy, which is a standard soft-maximization of the action-values:

$$P_{\text{Doing}}(a_t \mid s_t, R) = \frac{\exp\{Q^*(s_t, a_t)/\lambda\}}{\sum_{a' \in A(s_t)} \exp\{Q^*(s_t, a')/\lambda\}} \quad (2)$$

$\lambda > 0$ is an inverse temperature parameter (as $\lambda \rightarrow 0$, the expert selects the optimal action with probability 1; as $\lambda \rightarrow \infty$, the expert selects actions uniformly randomly).
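Continuing the sketch above (and reusing its numpy import), the Boltzmann doing policy of Eq. 2 is a one-liner over those Q-values; `lam` plays the role of the inverse temperature $\lambda$:

```python
def boltzmann_policy(Q, lam=1.0):
    """P_Doing(a | s, R) from Eq. 2; Q has shape (S, A)."""
    logits = Q / lam
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)
```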
In the IRL setting, an observer sees a trajectory of an expert executing an optimal policy, $j = \{(s_1, a_1), (s_2, a_2), ..., (s_k, a_k)\}$, and infers the reward function R that the expert is maximizing. Given that an agent's policy is stationary and Markovian, the probability of the trajectory given a reward function is just the product of the individual action probabilities, $P_{\text{Doing}}(j \mid R) = \prod_t P_{\text{Doing}}(a_t \mid s_t, R)$. From a Bayesian perspective [13], the observer is computing a posterior probability over possible reward functions R:

$$P_{\text{Observing}}(R \mid j) = \frac{P_{\text{Doing}}(j \mid R)\, P(R)}{\sum_{R'} P_{\text{Doing}}(j \mid R')\, P(R')} \quad (3)$$

Here, we always assume that P(R) is uniform.
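A hedged sketch of this observer inference, continuing the snippets above: `policies` holds one doing policy (one `boltzmann_policy` matrix) per candidate reward function, and the prior is uniform as in the text:

```python
def trajectory_likelihood(j, policy):
    """P_Doing(j | R): product of per-step action probabilities.
    j is a list of (state, action) pairs."""
    p = 1.0
    for s, a in j:
        p *= policy[s, a]
    return p

def posterior_over_rewards(j, policies):
    """Eq. 3 with a uniform prior over the candidate reward functions."""
    lik = np.array([trajectory_likelihood(j, pi) for pi in policies])
    return lik / lik.sum()
```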
2.2 Bayesian Pedagogy
IRL typically assumes that the demonstrator is executing the stochastic optimal policy for a reward
function. But is this the best way to teach a reward function? Bayesian models of pedagogy and
communicative intent have shown that choosing an example to teach a concept differs from simply
sampling from that concept [14, 10]. These models all treat the teacher's choice of a datum, d, as maximizing the probability a learner will infer a target concept, h:

$$P_{\text{Teacher}}(d \mid h) = \frac{P_{\text{Learner}}(h \mid d)^\alpha}{\sum_{d'} P_{\text{Learner}}(h \mid d')^\alpha} \quad (4)$$

$\alpha$ is the teacher's softmax parameter. As $\alpha \rightarrow 0$, the teacher chooses uniformly randomly; as $\alpha \rightarrow \infty$, the teacher chooses the d that maximally causes the learner to infer a target concept h; when $\alpha = 1$, the teacher is 'probability matching'.
The teaching distribution describes how examples can be effectively chosen to teach a concept. For
instance, consider teaching the concept of 'even numbers'. The sets {2, 2, 2} and {2, 18, 202} are both examples of even numbers. Indeed, given finite options with replacement, they both have the same probability of being randomly chosen as sets of examples. But {2, 18, 202} is clearly better for helpful teaching, since a naïve learner shown {2, 2, 2} would probably infer that 'even numbers' means 'the number 2'. This illustrates an important aspect of successful teaching by example: that
examples should not only be consistent with the concept being taught, but should also maximally
disambiguate the concept being taught from other possible concepts.
2.3 Pedagogical Inverse Reinforcement Learning
To define a model of teaching by demonstration, we treat the teacher's trajectories in a reinforcement-learning problem as a 'communicative act' for the learner's benefit. Thus, an effective teacher will
modify its demonstrations when showing and not simply doing a task. As in Equation 4, we can
define a teacher that selects trajectories that best convey the reward function:
$$P_{\text{Showing}}(j \mid R) = \frac{P_{\text{Observing}}(R \mid j)^\alpha}{\sum_{j'} P_{\text{Observing}}(R \mid j')^\alpha} \quad (5)$$

In other words, showing depends on a demonstrator's inferences about an observer's inferences about
doing.
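Putting the pieces together, the showing distribution of Eq. 5 re-normalizes the observer's posterior over a finite candidate set of trajectories, as enumerated in Algorithm 1. The rendering below is our own compact sketch, not the authors' code:

```python
def showing_distribution(trajectories, policies, alpha=2.0):
    """Eq. 5: P_Showing(j | R) over a finite trajectory set.

    Returns an (N_rewards, M_trajectories) array whose rows sum to 1."""
    # posterior[m, n] = P_Observing(R_n | j_m)
    posterior = np.array([posterior_over_rewards(j, policies)
                          for j in trajectories])
    scores = posterior.T ** alpha            # (N, M)
    return scores / scores.sum(axis=1, keepdims=True)
```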
This model provides quantitative and qualitative predictions for how agents will show and teach how
to do a task given they know its true reward function. Since humans are the paradigm teachers and a
potential source of expert knowledge for artificial agents, we tested how well our model describes
human teaching. In Experiment 1, we had people teach simple goal-based reward functions in a
discrete MDP. Even though in these cases entering a goal is already highly diagnostic, different paths
of different lengths are better for showing, which is reflected in human behavior. In Experiment
2, people taught more complex feature-based reward functions by demonstration. In both studies,
people's behavior matched the qualitative predictions of our models.
3 Experiment 1: Teaching Goal-based Reward Functions
Consider a grid with three possible terminal goals as shown in Figure 1. If an agent's goal is &, it
could take a number of routes. For instance, it could move all the way right and then move upwards
towards the & (right-then-up) or first move upwards and then towards the right (up-then-right). But,
what if the agent is not just doing the task, but also attempting to show it to an observer trying to learn
the goal location?
When the goal is &, our pedagogical model predicts that up-then-right is the more probable trajectory
because it is more disambiguating. Up-then-right better indicates that the intended goal is & than
right-then-up because right-then-up has more actions consistent with the goal being #. We have
included an analytic proof of why this is the case for a simpler setting in the supplementary materials.
Additionally, our pedagogical model makes the prediction that when trajectory length costs are
negligible, agents will engage in repetitive, inefficient behaviors that gesture towards one goal
location over others. This 'looping' behavior results when an agent can return to a state with an action that has high signaling value by taking actions that have a low signaling 'cost' (i.e. they do not
signal something other than the true goal). Figure 1d shows an example of such a looping trajectory.
In Experiment 1, we tested whether people's showing behavior reflected the pedagogical model when
reward functions are goal-based. If so, this would indicate that people choose the disambiguating
path to a goal when showing.
3.1 Experimental Design
Sixty Amazon Mechanical Turk participants performed the task in Figure 1. One was excluded
due to missing data. All participants completed a learning block in which they had to find the
reward location without being told. Afterwards, they were either placed in a Do condition or a Show
condition. Participants in Do were told they would win a bonus based on the number of rewards
(correct goals) they reached and were shown the text, 'The reward is at location X', where X was
one of the three symbols %, #, or &. Those in Show were told they would win a bonus based on how
well a randomly matched partner who was shown their responses (and did not know the location of
the reward) did on the task. On each round of Show, participants were shown text saying 'Show your partner that the reward is at location X'. All participants were given the same sequence of trials in
which the reward locations were <%, &, #, &, %, #, %, #, &>.
Figure 1: Experiment 1: Model predictions and participant trajectories for 3 trials when the goal is
(a) &, (b) %, and (c) #. Model trajectories are the two with the highest probability ($\alpha = 2$, $\lambda = 1.0$, $p_{min} = 10^{-6}$, $l_{max} = 4$). Yellow numbers are counts of trajectories with the labeled tile as the
penultimate state. (d) An example of looping behavior predicted by the model when % is the goal.
3.2 Results
As predicted, Show participants tended to choose paths that disambiguated their goal as compared to
Do participants. We coded the number of responses on & and % trials that were 'showing' trajectories based on how they entered the goal (i.e. out of 3 for each goal). On & trials, entering from the left, and on % trials, entering from above were coded as 'showing'. We ran a 2×2 ANOVA with Show vs
Do as a between-subjects factor and goal (% vs &) as a repeated measure. There was a main effect
of condition (F (1, 57) = 16.17, p < .001; Show: M = 1.82, S.E. 0.17; Do: M = 1.05, S.E. 0.17) as
well as a main effect of goal (F (1, 57) = 4.77, p < .05; %-goal: M = 1.73, S.E. = 0.18; &-goal: M
= 1.15, S.E. = 0.16). There was no interaction (F (1, 57) = 0.98, p = 0.32).
The model does not predict any difference between conditions for the # (lower right) goal. However,
a visual analysis suggested that more participants took a 'swerving' path to reach #. This observation
was confirmed by looking at trials where # was the goal and comparing the number of swerving
trials, which was defined as making more than one change in direction (Show: M = 0.83, Do: M =
0.26; two-sided t-test: t(44.2) = 2.18, p = 0.03). Although not predicted by the model, participants
may swerve to better signal their intention to move 'directly' towards the goal.
3.3 Discussion
Reaching a goal is sufficient to indicate its location, but participants still chose paths that better
disambiguated their intended goal. Overall, these results indicate that people are sensitive to the
distinction between doing and showing, consistent with our computational framework.
4 Experiment 2: Teaching Feature-based Reward Functions
Experiment 1 showed that people choose disambiguating plans even when entering the goal makes
this seemingly unnecessary. However, one might expect richer showing behavior when teaching more
complex reward functions. Thus, for Experiment 2, we developed a paradigm in which showing
how to do a task, as opposed to merely doing a task, makes a difference for how well the underlying
reward function is learned. In particular, we focused on teaching feature-based reward functions that
allow an agent to generalize what it has learned in one situation to a new situation. People often
use feature-based representations for generalization [3], and feature-based reward functions have
been used extensively in reinforcement learning (e.g. [1]). We used a colored-tile grid task shown in
Figure 2: Experiment 2 results. (a) Column labels are reward function codes. They refer to which
tiles were safe (o) and which were dangerous (x) with the ordering <orange, purple, cyan>. Row 1:
Underlying reward functions that participants either did or showed; Row 2: Do participant trajectories
with visible tile colors; Row 3: Show participant trajectories; Row 4: Mean reward function learned
from Do trajectories by Maximum-Likelihood Inverse Reinforcement Learning (MLIRL) [4, 12];
Row 5: Mean reward function learned from Show trajectories by MLIRL. (b) Mean distance between
learned and true reward function weights for human-trained and model-trained MLIRL. For the
models, MLIRL results for the top two ranked demonstration trajectories are shown.
Figure 2 to study teaching feature-based reward functions. White tiles are always 'safe' (reward of 0), while yellow tiles are always terminal states that reward 10 points. The remaining 3 tile types (orange, purple, and cyan) are each either 'safe' or 'dangerous' (reward of −2). The rewards associated with
the three tile types are independent, and nothing about the tiles themselves signal that they are safe or
dangerous.
A standard planning algorithm will reach the terminal state in the most efficient and optimal manner.
Our pedagogical model, however, predicts that an agent who is showing the task will engage in
specific behaviors that best disambiguate the true reward function. For instance, the pedagogical
model is more likely to take a roundabout path that leads through all the safe tile types, choose
to remain on a safe colored tile rather than go on the white tiles, or even loop repeatedly between
multiple safe tile-types. All of these types of behaviors send strong signals to the learner about which
tiles are safe as well as which tiles are dangerous.
4.1 Experimental Design
Sixty participants did a feature-based reward teaching task; two were excluded due to missing data.
In the first phase, all participants were given a learning-applying task. In the learning rounds, they
interacted with the grid shown in Figure 2 while receiving feedback on which tiles won or lost points.
Figure 3: Experiment 2 normalized median model fits.
Safe tiles were worth 0 points, dangerous tiles were worth -2 points, and the terminal goal tile was
worth 5 points. They also won an additional 5 points for each round completed for a total of 10
points. Each point was worth 2 cents of bonus. After each learning round, an applying round occurred
in which they applied what they just learned about the tiles without receiving feedback in a new
grid configuration. They all played 8 pairs of learning and applying rounds corresponding to the
8 possible assignments of 'safe' and 'dangerous' to the 3 tile types, and order was randomized
between participants.
As in Experiment 1, participants were then split into Do or Show conditions with no feedback.
Do participants were told which colors were safe and won points for performing the task. Show
participants still won points and were told which types were safe. They were also told that their
behavior would be shown to another person who would apply what they learned from watching the
participant's behavior to a separate grid. The points won would be added to the demonstrator's bonus.
4.2 Results
Responses matched model predictions. Do participants simply took efficient routes, whereas Show
participants took paths that signaled tile reward values. In particular, Show participants took paths
that led through multiple safe tile types, remained on safe colored tiles when safe non-colored tiles
were available, and looped at the boundaries of differently colored safe tiles.
4.2.1 Model-based Analysis
To determine how well the two models predicted human behaviors globally, we fit separate models for
each reward function and condition combination. We found parameters that had the highest median
likelihood out of the set of participant trajectories in a given reward function-condition combination.
Since some participants used extremely large trajectories (e.g. >25 steps) and we wanted to include
an analysis of all the data, we calculated best-fitting state-action policies. For the standard-planner, it
is straightforward to calculate a Boltzmann policy for a reward function given $\lambda$.
For the pedagogical model, we first need to specify an initial model of doing and distribution
over a finite set of trajectories. We determine this initial set of trajectories and their probabilities
using three parameters: $\lambda$, the softmax parameter for a hypothetical 'doing' agent that the model assumes the learner believes it is observing; $l_{max}$, the maximum trajectory length; and $p_{min}$, the minimum probability for a trajectory under the hypothetical doing agent. The pedagogical model then uses an $\alpha$ parameter that determines the degree to which the teacher is maximizing. State-action probabilities are calculated from a distribution over trajectories using the equation

$$P(a \mid s, R) = \sum_j P(a \mid s, j)\, P(j \mid R), \quad \text{where} \quad P(a \mid s, j) = \frac{|\{(s_t, a_t) \in j : s_t = s,\ a_t = a\}|}{|\{(s_t, a_t) \in j : s_t = s\}|}.$$
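For reference, a direct transcription of this marginalization, continuing the earlier snippets (again only a sketch; the trajectory set and P(j | R) come from the pedagogical model):

```python
def state_action_prob(trajectories, p_j_given_R, state, action):
    """P(a | s, R) = sum_j P(a | s, j) P(j | R), where P(a | s, j) is the
    empirical frequency of action a among the visits to state s in j."""
    total = 0.0
    for j, pj in zip(trajectories, p_j_given_R):
        visits = [(s, a) for (s, a) in j if s == state]
        if visits:
            freq = sum(1 for (_, a) in visits if a == action) / len(visits)
            total += freq * pj
    return total
```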
We fit parameter values that produced the maximum median likelihood for each model for each
reward function and condition combination. These parameters are reported in the supplementary
materials. The normalized median fit for each of these models is plotted in Figure 3. As shown
in the figure, the standard planning model better captures behavior in the Do condition, while the
pedagogical model better captures behavior in the Show condition. Importantly, even when the
standard planning model could have a high $\lambda$ and behave more randomly, the pedagogical model
better fits the Show condition. This indicates that showing is not simply random behavior.
4.2.2 Behavioral Analyses
We additionally analyzed specific behavioral differences between the Do and Show conditions
predicted by the models. When showing a task, people visit a greater variety of safe tiles, visit tile
types that the learner has uncertainty about (i.e. the colored tiles), and more frequently revisit states
or 'loop' in a manner that leads to better signaling. We found that all three of these behaviors were
more likely to occur in the Show condition than in the Do condition.
To measure the variety of tiles visited, we calculated the entropy of the frequency distribution over
colored-tile visits by round by participant. Average entropy was higher for Show (Show: M = 0.50,
SE = 0.03; Do: M = 0.39, SE = 0.03; two-sided t-test: t(54.9) = −3.27, p < 0.01). When analyzing time spent on colored as opposed to un-colored tiles, we calculated the proportion of visits to colored tiles after the first colored tile had been visited. Again, this measure was higher for Show (Show: M = 0.87, SE = 0.01; Do: M = 0.82, SE = 0.01; two-sided t-test: t(55.6) = −3.14, p < .01). Finally, we calculated the number of times states were revisited in the two conditions (an indicator of 'looping') and found that participants revisited states more in Show compared to Do (Show: M = 1.38, SE = 0.22; Do: M = 0.10, SE = 0.03; two-sided t-test: t(28.3) = −2.82, p < .01). There was no
difference between conditions in the total rewards won (two-sided t-test: t(46.2) = .026, p = 0.80).
4.3 Teaching Maximum-Likelihood IRL
One reason to investigate showing is its potential for training artificial agents. Our pedagogical model
makes assumptions about the learner, but it may be that pedagogical trajectories are better even
for training off-the-shelf IRL algorithms. For instance, Maximum Likelihood IRL (MLIRL) is a
state-of-the-art IRL algorithm for inferring feature-based reward functions [4, 12]. Importantly, unlike
the discrete reward function space our showing model assumes, MLIRL estimates the maximum
likelihood reward function over a space of continuous feature weights using gradient ascent.
To test this, we input human and model trajectories into MLIRL. We constrained non-goal feature
weights to be non-positive. Overall, the algorithm was able to learn the true reward function better
from showing than doing trajectories produced by either the models or participants (Figure 2).
4.3.1 Discussion
When learning a feature-based reward function from demonstration, it matters if the demonstrator
is showing or doing. In this experiment, we showed that our model of pedagogical reasoning over
trajectories captures how people show how to do a task. When showing as opposed to simply doing,
demonstrators are more likely to visit a variety of states to show that they are safe, stay on otherwise
ambiguously safe tiles, and also engage in ?looping? behavior to signal information about the tiles.
Moreover, this type of teaching is even better at training standard IRL algorithms like MLIRL.
5 General Discussion
We have presented a model of showing as Bayesian teaching. Our model makes accurate quantitative
and qualitative predictions about human showing behavior, as demonstrated in two experiments.
Experiment 1 showed that people modify their behavior to signal information about goals, while
Experiment 2 investigated how people teach feature-based reward functions. Finally, we showed that
even standard IRL algorithms benefit from showing as opposed to merely doing.
This provides a basis for future study into intentional teaching by demonstration. Future research
must explore showing in settings with even richer state features and whether more savvy observers
can leverage a showing agent?s pedagogical intent for even better learning.
Acknowledgments
MKH was supported by the NSF GRFP under Grant No. DGE-1058262. JLA and MLL were
supported by DARPA SIMPLEX program Grant No. 14-46-FP-097. FC was supported by grant
N00014-14-1-0800 from the Office of Naval Research.
References
[1] P. Abbeel and A. Y. Ng. Apprenticeship Learning via Inverse Reinforcement Learning. In Proceedings of the Twenty-first International Conference on Machine Learning, ICML '04, pages 1–, New York, NY, USA, 2004. ACM.
[2] B. D. Argall, S. Chernova, M. Veloso, and B. Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469–483, May 2009.
[3] J. L. Austerweil and T. L. Griffiths. A nonparametric Bayesian framework for constructing flexible feature representations. Psychological Review, 120(4):817–851, 2013.
[4] M. Babes, V. Marivate, K. Subramanian, and M. L. Littman. Apprenticeship learning about multiple intentions. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 897–904, 2011.
[5] C. L. Baker, R. Saxe, and J. B. Tenenbaum. Action understanding as inverse planning. Cognition, 113(3):329–349, Dec. 2009.
[6] D. Buchsbaum, A. Gopnik, T. L. Griffiths, and P. Shafto. Children's imitation of causal action sequences is influenced by statistical and pedagogical evidence. Cognition, 120(3):331–340, Sept. 2011.
[7] L. P. Butler and E. M. Markman. Preschoolers use pedagogical cues to guide radical reorganization of category knowledge. Cognition, 130(1):116–127, Jan. 2014.
[8] M. Cakmak and M. Lopes. Algorithmic and Human Teaching of Sequential Decision Tasks. In AAAI Conference on Artificial Intelligence (AAAI-12), July 2012.
[9] A. D. Dragan, K. C. T. Lee, and S. S. Srinivasa. Legibility and predictability of robot motion. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 301–308, Mar. 2013.
[10] M. C. Frank and N. D. Goodman. Predicting Pragmatic Reasoning in Language Games. Science, 336(6084):998–998, May 2012.
[11] D. Hadfield-Menell, A. Dragan, P. Abbeel, and S. Russell. Cooperative Inverse Reinforcement Learning. In D. D. Lee, M. Sugiyama, U. von Luxburg, and I. Guyon, editors, Advances in Neural Information Processing Systems, volume 28. The MIT Press, 2016.
[12] J. MacGlashan and M. L. Littman. Between imitation and intention learning. In Proceedings of the 24th International Conference on Artificial Intelligence, pages 3692–3698. AAAI Press, 2015.
[13] D. Ramachandran and E. Amir. Bayesian Inverse Reinforcement Learning. In Proceedings of the 20th International Joint Conference on Artificial Intelligence, IJCAI'07, pages 2586–2591, San Francisco, CA, USA, 2007. Morgan Kaufmann Publishers Inc.
[14] P. Shafto, N. D. Goodman, and T. L. Griffiths. A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology, 71:55–89, June 2014.
[15] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
5,985 | 6,414 | Strategic Attentive Writer for Learning
Macro-Actions
Alexander (Sasha) Vezhnevets, Volodymyr Mnih, John Agapiou,
Simon Osindero, Alex Graves, Oriol Vinyals, Koray Kavukcuoglu
Google DeepMind
{vezhnick,vmnih,jagapiou,osindero,gravesa,vinyals,korayk}@google.com
Abstract
We present a novel deep recurrent neural network architecture that learns to build
implicit plans in an end-to-end manner purely by interacting with an environment
in reinforcement learning setting. The network builds an internal plan, which is
continuously updated upon observation of the next input from the environment.
It can also partition this internal representation into contiguous sub-sequences
by learning for how long the plan can be committed to, i.e. followed without replanning. Combining these properties, the proposed model, dubbed STRategic Attentive Writer
of varying lengths that are solely learnt from data without any prior information.
These macro-actions enable both structured exploration and economic computation.
We experimentally demonstrate that STRAW delivers strong improvements on
several ATARI games by employing temporally extended planning strategies (e.g.
Ms. Pacman and Frostbite). It is at the same time a general algorithm that can
be applied on any sequence data. To that end, we also show that when trained
on text prediction task, STRAW naturally predicts frequent n-grams (instead of
macro-actions), demonstrating the generality of the approach.
1 Introduction
Using reinforcement learning to train neural network controllers has recently led to rapid progress
on a number of challenging control tasks [15, 17, 26]. Much of the success of these methods has
been attributed to the ability of neural networks to learn useful abstractions or representations of
the stream of observations, allowing the agents to generalize between similar states. Notably, these
agents do not exploit another type of structure: the one present in the space of controls or policies.
Indeed, not all sequences of low-level controls lead to interesting high-level behaviour and an agent
that can automatically discover useful macro-actions should be capable of more efficient exploration
and learning. The discovery of such temporal abstractions has been a long-standing problem in
both reinforcement learning and sequence prediction in general, yet no truly scalable and successful
architectures exist.
We propose a new deep recurrent neural network architecture, dubbed STRategic Attentive Writer
(STRAW), that is capable of learning macro-actions in a reinforcement learning setting. Unlike the
vast majority of reinforcement learning approaches [15, 17, 26], which output a single action after
each observation, STRAW maintains a multi-step action plan. STRAW periodically updates the plan
based on observations and commits to the plan between the replanning decision points. The replanning
decisions as well as the commonly occurring sequences of actions, i.e. macro-actions, are learned
from rewards. To encourage exploration with macro-actions we introduce a noisy communication
channel between a feature extractor (e.g. convolutional neural network) and the planning modules,
taking inspiration from recent developments in variational auto-encoders [11, 13, 24]. Injecting noise
at this level of the network generates randomness in plans updates that cover multiple time steps and
thereby creates the desired effect.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Our proposed architecture is a step towards more natural decision making, wherein one observation
can generate a whole sequence of outputs if it is informative enough. This provides several important
benefits. First and foremost, it facilitates structured exploration in reinforcement learning ? as the
network learns meaningful action patterns it can use them to make longer exploratory steps in the
state space [4]. Second, since the model does not need to process observations while it is committed
to its action plan, it learns to allocate computation to key moments thereby freeing up resources
when the plan is being followed. Additionally, the acquisition of macro-actions can aid transfer and
generalization to other related problems in the same domain (assuming that other problems from the
domain share similar structure in terms of action-effects).
We evaluate STRAW on a subset of Atari games that require longer term planning and show that
it leads to substantial improvements in scores. We also demonstrate the generality of the STRAW
architecture by training it on a text prediction task and show that it learns to use frequent n-grams as
the macro-actions on this task.
The following section reviews the related work. Section 3 defines the STRAW model formally.
Section 4 describes the training procedure for both supervised and reinforcement learning cases.
Section 5 presents the experimental evaluation of STRAW on 8 ATARI games, 2D maze navigation
and next character prediction tasks. Section 6 concludes.
2 Related Work
Learning temporally extended actions and temporal abstraction in general are long standing problems
in reinforcement learning [5, 6, 7, 8, 12, 20, 21, 22, 23, 25, 27, 28, 30]. The options framework [21,
28] provides a general formulation. An option is a sub-policy with a termination condition, which
takes in environment observations and outputs actions until the termination condition is met. An agent
picks an option using its policy-over-options and subsequently follows it until termination, at which
point the policy-over-options is queried again and the process continues. Notice that a macro-action is a particular, simpler instance of an option, where the action sequence (or a distribution over it) is decided at the time the macro-action is initiated. Options are typically learned using subgoals and 'pseudo-rewards' that are provided explicitly [7, 8, 28]. For a simple, tabular case [30], each state
can be used as a subgoal. Given the options, a policy-over-options can be learned using standard
techniques by treating options as actions. Recently [14, 29] have demonstrated that combining
deep learning with pre-defined subgoals delivers promising results in challenging environments like
Minecraft and Atari, however, subgoal discovery remains an unsolved problem. Another recent
work by [1] shows a theoretical possibility of learning options jointly with a policy-over-options by
extending the policy gradient theorem to options, but the approach was only tested on a toy problem.
In contrast, STRAW learns macro-actions and a policy over them in an end-to-end fashion from only
the environment?s reward signal and without resorting to explicit pseudo-rewards or hand-crafted
subgoals. The macro-actions are represented implicitly inside the model, arising naturally from the
interplay between action and commitment plans within the network. Our experiments demonstrate
that the model scales to a variety of tasks from next character prediction in text to ATARI games.
3 The model
STRAW is a deep recurrent neural network with two modules. The first module translates environment
observations into an action-plan: a state variable which represents an explicit stochastic plan of future actions. STRAW generates macro-actions by committing to the action-plan and following it without updating for a number of steps. The second module maintains a commitment-plan: a
state variable that determines at which step the network terminates a macro-action and updates the
action-plan. The action-plan is a matrix where one dimension corresponds to time and the other to
the set of possible discrete actions. The elements of this matrix are proportional to the probability
of taking the corresponding action at the corresponding time step. Similarly, the commitment plan
represents the probabilities of terminating a macro-action at the particular step. For updating both
plans we use attentive writing technique [11], which allows the network to focus on parts of a plan
where the current observation is informative of desired outputs. This section formally defines the
model, we describe the way it is trained later in section 4.
The state of the network at time t is comprised of matrices $A^t \in \mathbb{R}^{A \times T}$ and $c^t \in \mathbb{R}^{1 \times T}$. Matrix $A^t$ is the action-plan. Each element $A^t_{a,\tau}$ is proportional to the probability of outputting a at time $t + \tau$. Here A is the total number of possible actions and T is the maximum time horizon of the plan. To generate an action at time t, the first column of $A^t$ (i.e. $A^t_{\cdot,0}$) is transformed into a distribution
Figure 1: Schematic illustration of STRAW playing a maze navigation game. The input frames
indicate maze geometry (black = corridor, blue = wall), red dot corresponds to the position of the
agent and green to the goal, which it tries to reach. A frame is first passed through a convolutional
network, acting as a feature extractor and then into STRAW. The top two rows depict the plans A
and c. Given a gate $g_t$, STRAW either updates the plans (steps 1 and 4) or commits to them.
over possible outputs by a SoftMax function. This distribution is then sampled to generate the action $a_t$. Thereby the content of $A^t$ corresponds to the plan of future actions as conceived at time t. The single-row matrix $c^t$ represents the commitment-plan of the network. Let $g_t$ be a binary random variable distributed as follows: $g_t \sim c^{t-1}_1$. If $g_t = 1$ then at this step the plans will be updated, otherwise $g_t = 0$ means they will be committed to. Macro-actions are defined as a sequence of outputs $\{a_t\}_{t_1}^{t_2 - 1}$ produced by the network between steps where $g_t$ is 'on': i.e. $g_{t_1} = g_{t_2} = 1$ and $g_{t'} = 0, \forall t_1 < t' < t_2$. During commitment the plans are rolled over to the next step using the matrix time-shift operator $\rho$, which shifts a matrix by removing the first column and appending a column filled with zeros to its rear. Applying $\rho$ to $A^t$ or $c^t$ reflects the advancement of time. Figure 1 illustrates the workflow. Notice that during commitment (steps 2 and 3) the network doesn't compute the forward pass, thereby saving computation.
Attentive planning. An important assumption that underpins the usage of macro-actions is that
one observation reveals enough information to generate a sequence of actions. The complexity of the
sequence and its length can vary dramatically, even within one environment. Therefore the network
has to focus on the part of the plan where the current observation is informative of desired actions.
To achieve this, we apply differentiable attentive reading and writing operations [11], where attention
is defined over the temporal dimension. This technique was originally proposed for image generation,
here instead it is used to update the plans At and ct . In the image domain, the attention operates over
the spatial extent of an image, reading and writing pixel values. Here it operates over the temporal
extent of a plan, and is used to read and write action probabilities. The differentiability of the attention
model [11] makes it possible to train with standard backpropagation.
An array of Gaussian filters is applied to the plan, yielding a 'patch' of smoothly varying location and zoom. Let A be the total number of possible actions and K be a parameter that determines the temporal resolution of the patch. A grid of $A \times K$ one-dimensional Gaussian filters is positioned on the plan by specifying the coordinates of the grid center and the stride distance between adjacent filters. The stride controls the 'zoom' of the patch; that is, the larger the stride, the larger an area of the original plan will be visible in the attention patch, but the lower the effective resolution of the patch will be. The filtering is performed along the temporal dimension only. Let $\psi$ be a vector of attention parameters, i.e. grid position, stride, and standard deviation of the Gaussian filters. We define
the attention operations as follows:
$$D = \text{write}(p, \psi^A_t); \qquad \beta_t = \text{read}(A^t, \psi^A_t) \quad (1)$$
The write operation takes in a patch $p \in \mathbb{R}^{A \times K}$ and attention parameters $\psi^A_t$. It produces a matrix D of the same size as $A^t$, which contains the patch p scaled and positioned according to $\psi^A_t$, with the rest set to 0. Analogously, the read operation takes the full plan $A^t$ together with attention parameters $\psi^A_t$ and outputs a read patch $\beta_t \in \mathbb{R}^{A \times K}$, which is extracted from $A^t$ according to $\psi^A_t$. We direct readers to [11] for details.

Figure 2: Schematic illustration of an action-plan update. Given $z_t$, the network produces an action-patch and attention parameters $\psi^A_t$. The write operation creates an update to the action-plan by scaling and shifting the action-patch according to $\psi^A_t$. The update is then added to $\rho(A^t)$.
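As a rough numpy sketch of this 1-D read/write pair (our own simplification of the DRAW-style attention in [11]; the exact grid centering and normalization in the paper may differ):

```python
import numpy as np

def filterbank(T, K, center, stride, sigma):
    """K one-dimensional Gaussian filters over T plan steps; returns (K, T)."""
    mu = center + (np.arange(K) - K / 2.0 + 0.5) * stride  # filter centers
    t = np.arange(T)
    F = np.exp(-((t[None, :] - mu[:, None]) ** 2) / (2.0 * sigma ** 2))
    return F / (F.sum(axis=1, keepdims=True) + 1e-8)       # normalize rows

def attn_read(A, center, stride, sigma, K=10):
    """Extract an (num_actions, K) patch from the (num_actions, T) plan."""
    F = filterbank(A.shape[1], K, center, stride, sigma)
    return A @ F.T

def attn_write(p, T, center, stride, sigma):
    """Scatter an (num_actions, K) patch onto an (num_actions, T) canvas."""
    F = filterbank(T, p.shape[1], center, stride, sigma)
    return p @ F
```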
Action-plan update. Let $z_t$ be a feature representation (e.g. the output of a deep convolutional network) of an observation $x_t$. Given $z_t$, $g_t$ and the previous state $A^{t-1}$, STRAW computes an update to the action-plan using Algorithm 1. Here $f^\psi$ and $f^A$ are linear functions and h is a two-layer perceptron. Figure 2 gives an illustration of an update to $A^t$.
Algorithm 1 Action-plan update
Input: $z_t$, $g_t$, $A^{t-1}$
Output: $A^t$, $a_t$
if $g_t = 1$ then
  Compute attention parameters $\psi^A_t = f^\psi(z_t)$
  Attentively read the current state of the action-plan $\beta_t = \text{read}(A^{t-1}, \psi^A_t)$
  Compute intermediate representation $\xi_t = h([\beta_t, z_t])$
  Update $A^t = \rho(A^{t-1}) + g_t \cdot \text{write}(f^A(\xi_t), \psi^A_t)$
else // $g_t = 0$
  Update $A^t = \rho(A^{t-1})$ // just advance time
end if
Sample an action $a_t \sim \text{SoftMax}(A^t_{\cdot,0})$
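One step of this update can be sketched as below, building on the attention functions above; this is a schematic with plain function handles standing in for the learned layers $f^\psi$, $f^A$ and h (the handle names and the tuple encoding of the attention parameters are our assumptions):

```python
def rho(M):
    """Time-shift operator: drop the first column, append a zero column."""
    return np.concatenate([M[:, 1:], np.zeros((M.shape[0], 1))], axis=1)

def plan_update(z_t, g_t, A_prev, f_psi, f_A, h):
    """Schematic action-plan update (Algorithm 1)."""
    if g_t == 1:
        center, stride, sigma = f_psi(z_t)             # attention parameters
        beta = attn_read(A_prev, center, stride, sigma)
        xi = h(np.concatenate([beta.ravel(), z_t]))    # intermediate repr.
        patch = f_A(xi)                                # action-patch, (A, K)
        A_t = rho(A_prev) + attn_write(patch, A_prev.shape[1],
                                       center, stride, sigma)
    else:
        A_t = rho(A_prev)                              # commit: advance time
    logits = A_t[:, 0]
    p = np.exp(logits - logits.max()); p = p / p.sum() # softmax, first column
    a_t = np.random.choice(len(p), p=p)                # sample the action
    return A_t, a_t
```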
Commitment-plan update. Now we introduce a module that partitions the action-plan into macro-actions by defining the temporal extent to which the current action-plan $A^t$ can be followed without re-planning. The commitment-plan $c^t$ is updated at the same time as the action-plan, i.e. when $g_t = 1$; otherwise it is rolled over to the next time step by the $\rho$ operator. Unlike the planning module, where $A^t$ is updated additively, $c^t$ is overwritten completely using the following equations:

$$g_t \sim c^{t-1}_1; \qquad \text{if } g_t = 0 \text{ then } c^t = \rho(c^{t-1}) \quad \text{else } \psi^c_t = f^c([\psi^A_t, \xi_t]),\; c^t = \text{Sigmoid}(b + \text{write}(e, \psi^c_t)) \quad (2)$$

Here the same attentive writing operation is used, but with only one Gaussian filter for attention over c. The patch e is therefore just a scalar, which we fix to a high value (40 in our experiments). This high value of e is chosen so that the attention parameters $\psi^c_t$ define a time step when re-planning is guaranteed to happen. The vector b is of the same size as $c^t$, filled with a shared, learnable bias b, which defines the probability of re-planning earlier than the step implied by $\psi^c_t$.
Notice that $g_t$ is used as a multiplicative gate in Algorithm 1. This allows for retroactive credit assignment during training, as gradients from the write operation at time $t + \tau$ directly flow into the commitment module through the state $c^t$: we set $\partial c^{t-1}_1 := \partial g_t$ (a straight-through estimate) as proposed in [3]. Moreover, when $g_t = 0$ only the computationally cheap operator $\rho$ is invoked. Thereby more commitment significantly saves computation.
3.1 Structured exploration with macro-actions
The architecture defined above is capable of generating macro-actions. This section describes
how to use macro-actions for structured exploration. We introduce STRAW-explorer (STRAWe),
a version of STRAW with a noisy communication channel between the feature extractor (e.g. a
convolutional neural network) and the STRAW planning modules. Let $\zeta_t$ be the activations of the last layer of the feature extractor. We use $\zeta_t$ to regress the parameters of a Gaussian distribution $Q(z_t \mid \zeta_t) = \mathcal{N}(\mu(\zeta_t), I \cdot \sigma(\zeta_t))$ from which $z_t$ is sampled. Here $\mu$ is a vector and $\sigma$ is a scalar.
Injecting noise at this level of the network generates randomness on the level of plan updates that
cover multiple time steps. This effect is reinforced by commitment, which forces STRAWe to execute
the plan and experience the outcome instead of rolling the update back on the next step. In section 5
we demonstrate how this significantly improves score on games like Frostbite and Ms. Pacman.
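The reparameterized sampling behind this channel can be sketched as follows (the softplus for $\sigma$ and the weight names are our assumptions; the paper only states that $\mu$ is a vector and $\sigma$ a scalar):

```python
import numpy as np

def sample_z(zeta, W_mu, w_sigma):
    """Draw z ~ N(mu(zeta), sigma(zeta) * I) with pathwise gradients.

    W_mu: (d_z, d_zeta) weight matrix; w_sigma: (d_zeta,) weight vector
    producing a scalar."""
    mu = W_mu @ zeta                              # vector mean
    sigma = np.log1p(np.exp(w_sigma @ zeta))      # softplus -> scalar > 0
    eps = np.random.randn(mu.shape[0])            # noise; no gradient needed
    return mu + sigma * eps
```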
4 Learning
The training loss of the model is defined as follows:
$$L = \sum_{t=1}^{T} \Big[ L_{\text{out}}(A^t) + g_t \cdot \alpha\, \text{KL}\big(Q(z_t \mid \zeta_t)\,\|\,P(z_t)\big) + \lambda c^t_1 \Big], \quad (3)$$

where $L_{\text{out}}$ is a domain-specific differentiable loss function defined over the network's output. For supervised problems, like next character prediction in text, $L_{\text{out}}$ can be defined as the negative log likelihood of the correct output. We discuss the reinforcement learning case later in this section. The two extra terms are regularisers. The first is the cost of communication through the noisy channel, which is defined as the KL divergence between the latent distribution $Q(z_t \mid \zeta_t)$ and some prior $P(z_t)$. Since
the latent distribution is a Gaussian (sec. 3.1), a natural choice for the prior is a Gaussian with a zero
mean and standard deviation of one. The last term penalizes re-planning and encourages commitment.
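A literal transcription of Eq. 3, continuing the numpy sketches above (in practice each term would be produced by the network at the corresponding step and summed by an autodiff framework):

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, sigma^2 I) || N(0, I) ) for a scalar sigma."""
    d = mu.shape[0]
    return 0.5 * (d * sigma ** 2 + mu @ mu - d - 2 * d * np.log(sigma))

def straw_loss(L_out, g, kl, c1, alpha, lam):
    """Eq. 3 over a length-T rollout: task loss + gated KL cost of the
    noisy channel + re-planning penalty on c^t_1."""
    return sum(L_out[t] + g[t] * alpha * kl[t] + lam * c1[t]
               for t in range(len(L_out)))
```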
For reinforcement learning we consider the standard setting where an agent is interacting with an
environment in discrete time. At each step t, the agent observes the state of the environment $x_t$ and selects an action $a_t$ from a finite set of possible actions. The environment responds with a new state $x_{t+1}$ and a scalar reward $r_t$. The process continues until the terminal state is reached, after which it restarts. The goal of the agent is to maximize the discounted return $R_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k+1}$. The agent's behaviour is defined by its policy $\pi$, a mapping from state space into action space. STRAW produces a distribution over possible actions (a stochastic policy) by passing the first column of the action-plan $A^t$ through a SoftMax function: $\pi(a_t \mid x_t; \theta) = \text{SoftMax}(A^t_{\cdot,0})$. An action is then
produced by sampling the output distribution.
We use a recently proposed Asynchronous Advantage Actor-Critic (A3C) method [18], which directly
optimizes the policy of an agent. A3C requires a value function estimator V (xt ) for variance
reduction. This estimator can be produced in a number of ways, for example by a separate neural
network. The most natural solution for our architecture is to create a value-plan containing the estimates.
To keep the architecture simple and efficient, we simply add an auxiliary row to the action plan which
corresponds to the value function estimation. It participates in attentive reading and writing during
the update, thereby sharing the temporal attention with action-plan. The plan is then split into action
part and the estimator before the SoftMax is applied and an action is sampled. The policy gradient
update for Lout in L is defined as follows:
$$\partial_\theta L_{\text{out}} = \nabla_\theta \log \pi(a_t \mid x_t; \theta)\,(R_t - V(x_t; \theta)) + \delta\, \nabla_\theta H(\pi(a_t \mid x_t; \theta)) \quad (4)$$

Here $H(\pi(a_t \mid x_t; \theta))$ is the entropy of the policy, which stimulates the exploration of primitive actions. The network contains two random variables, $z_t$ and $g_t$, which we have to pass gradients through. For $z_t$ we employ the re-parametrization trick [13, 24]. For $g_t$, we set $\partial c^{t-1}_1 := \partial g_t$ as proposed in [3].
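In an autodiff framework, Eq. 4 corresponds to maximizing the per-step surrogate sketched below (our own rendering; the advantage is treated as a constant, as is usual in A3C):

```python
import numpy as np

def a3c_surrogate(logits, a_t, R_t, V_t, delta=0.01):
    """log pi(a_t|x_t) * (R_t - V(x_t)) + delta * H(pi); gradients of this
    scalar w.r.t. the logits give the policy-gradient update of Eq. 4."""
    p = np.exp(logits - logits.max()); p = p / p.sum()
    entropy = -(p * np.log(p + 1e-8)).sum()
    return np.log(p[a_t] + 1e-8) * (R_t - V_t) + delta * entropy
```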
5 Experiments
The goal of our experiments was to demonstrate that STRAW learns meaningful and useful macroactions. We use three domains of increasing complexity: supervised next character prediction in
text [10], 2D maze navigation and ATARI games [2].
5.1 Experimental setup
Architecture. The read and write patches are $A \times 10$-dimensional, and h is a 2-layer perceptron
with 64 hidden units. The time horizon T = 500. For STRAWe (sec. 3.1) the Gaussian distribution
for structured exploration is 128-dimensional. Ablative analysis for some of these choices is provided
in section 5.5. Feature representation of the state space is particular for each domain. For 2D mazes
and ATARI it is a convolutional neural net (CNN) and it is an LSTM [9] for text. We provide more
details in the corresponding sections.
Baselines. The experiments employ two baselines: a simple feed forward network (FF) and a
recurrent LSTM network, both on top of a representation learned by CNN. FF directly regresses
the action probabilities and value function estimate from feature representation. The LSTM [9]
architecture is a widely used recurrent network and it was demonstrated to perform well on a suite
of reinforcement learning problems [18]. It has 128 hidden units and its inputs are the feature
representation of an observation and the previous action of the agent. Action probabilities and the
value function estimate are regressed from its hidden state.
Optimization. We use the A3C method [18] for all reinforcement learning experiments. It was
shown to achieve state-of-the-art results on several challenging benchmarks [18]. We cut the trajectory
and run backpropagation through time [19] after 40 forward passes of a network or if a terminal signal
is received. The optimization process runs 32 asynchronous threads using shared RMSProp. There
are 4 hyper-parameters in STRAW and 2 in the LSTM and FF baselines. For each method, we ran 200
experiments, each using randomly sampled hyper-parameters. The learning rate and entropy penalty were
sampled from a LogUniform(10^{-4}, 10^{-3}) interval. The learning rate is linearly annealed from its sampled
value to 0. To explore STRAW's behaviour, we sample the coding cost from LogUniform(10^{-7}, 10^{-4}) and
the re-planning penalty from LogUniform(10^{-6}, 10^{-2}). For stability, we clip the advantage R_t − V(x_t; θ)
(eq. 4) to [−1, 1] for all methods, and for STRAW(e) we do not propagate gradients from the commitment
module into the planning module through the attention parameters ψ_t^A and ψ_t. We define a training epoch as one million
observations.
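For concreteness, the random search and the clipping described above can be sketched as follows (a minimal sketch; the paper does not give this code, and the dictionary keys are our names for its four hyper-parameters):

```python
import math, random

def log_uniform(lo, hi):
    # LogUniform(lo, hi): uniform in log10-space.
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_hyperparameters():
    return {
        "learning_rate":      log_uniform(1e-4, 1e-3),  # annealed linearly to 0
        "entropy_penalty":    log_uniform(1e-4, 1e-3),
        "coding_cost":        log_uniform(1e-7, 1e-4),  # STRAW(e) only
        "replanning_penalty": log_uniform(1e-6, 1e-2),  # STRAW(e) only
    }

def clipped_advantage(ret, value):
    # clip R_t - V(x_t; theta) to [-1, 1] for stability (eq. 4)
    return max(-1.0, min(1.0, ret - value))
```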
Figure 3: Detailed analysis. a) re-planning in random maze b) transfer to farther goals c) learned
macro-actions for next character prediction d) commitment on ATARI.
5.2 Text
STRAW is a general sequence prediction architecture. To demonstrate that it is capable of learning
output patterns with complex structure, we present a qualitative experiment on next character prediction
using the Penn Treebank dataset [16]. Actions in this case correspond to emitting characters, and
macro-actions to their sequences. For this experiment we use an LSTM, which receives a one-hot
encoding of 50 characters as input. A STRAW module is connected on top. We omit the noisy Gaussian
channel, as this task is fully supervised and does not require exploration. The network is trained with
stochastic gradient descent using the supervised negative log-likelihood loss (sec. 4). At each step we
feed a character to the model, which updates the LSTM representation, but we only update the STRAW
plans according to the commitment plan c_t. For a trained model we record macro-actions, i.e., the
sequences of characters produced while STRAW is committed to the plan. If STRAW adequately learns
the structure of the data, then its macro-actions should correspond to common n-grams. Figure 3.c
plots the 20 most frequent macro-actions of length 4 produced by STRAW. The x-axis is the rank of the
frequency at which a macro-action is used by STRAW; the y-axis is its actual frequency rank as a
4-gram. Notice how STRAW correctly learned to predict frequent 4-grams such as "the", "ing", "and" and so on.
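The comparison behind Figure 3.c can be reproduced with a few lines: count all 4-grams in the corpus, rank them by frequency, and look up the rank of each recorded macro-action (a sketch with our own variable names):

```python
from collections import Counter

def macro_action_ngram_ranks(text, macro_actions, n=4):
    # Rank every n-gram of the corpus by frequency (rank 0 = most frequent),
    # then look up the rank of each recorded macro-action.
    counts = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    rank = {g: r for r, (g, _) in enumerate(counts.most_common())}
    return [rank.get(m, len(rank)) for m in macro_actions]

# toy usage
print(macro_action_ngram_ranks("the cat and the hat and the bat", ["the ", "and "]))
```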
5.3 2D mazes
To investigate what macro-actions our model learns and whether they are useful for reinforcement
learning, we conduct an experiment on a random 2D mazes domain. A maze is a grid-world with
two types of cells: walls and corridors. One of the corridor cells is marked as a goal. The agent
receives a small negative reward of −r every step, and double that if it tries to move through a wall.
It receives a small positive reward r when it steps on the goal, and the episode terminates. Otherwise
the episode terminates after 100 steps. Therefore, to maximize return an agent has to reach the goal as
fast as possible. In every training episode the position of the walls, the goal, and the starting position of
the agent are randomized. The agent fully observes the state of the maze. The remainder of this section
presents two experiments in this domain. Here we use the STRAWe version with structured exploration.
For feature representation we use a 2-layer CNN with 3 × 3 filters and a stride of 1, each followed by a
rectifier nonlinearity.
In our first experiment we train a STRAWe agent on an 11 × 11 random maze environment. We then
evaluate the trained agent on a novel maze with fixed geometry and only randomly varying start and
goal locations. The aim is to visualize the positions at which STRAWe terminates macro-actions
and re-plans. Figure 3.a shows the maze, where red intensity corresponds to the number of re-planning
events at a cell, normalized by the total number of visits by the agent. Notice how some corners
and areas close to junctions are highlighted. This demonstrates that STRAW learns adequate temporal
abstractions in this domain. In the next experiment we test whether these temporal abstractions are
useful.
The second experiment uses larger 15 × 15 random mazes. If the goal is placed arbitrarily far from
an agent's starting position, then learning becomes extremely hard, and neither LSTM nor STRAWe
can reliably learn a good policy. We introduce a curriculum where the goal is first positioned very
close to the starting location and is moved farther away as training progresses. More precisely, we
position the goal using a random walk starting from the same point as the agent (see the sketch below). We
increase the random walk's length by one every two epochs, starting from 2. Although the task gets
progressively harder, the temporal abstractions (e.g. follow the corridor, turn the corner) remain the
same. If learnt early on, they should make adaptation easy. Figure 3.b plots episode reward
against training steps for STRAW, LSTM, and the optimal policy given by Dijkstra's algorithm. Notice
how both learn a good policy after approximately 200 epochs, when the task is still simple. As the
goal moves away, LSTM shows a strong decline in reward relative to the optimal policy. In contrast,
STRAWe effectively uses macro-actions learned early on and stays close to the optimal policy at the
harder stages. This demonstrates that the temporal abstractions learnt by STRAW are useful.
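A minimal sketch of this curriculum (our names; the paper does not specify how collisions with walls are handled during the walk, so here a blocked move is simply skipped):

```python
import random

def place_goal(start, walls, walk_len):
    """Place the goal by a random walk of length walk_len from the agent's
    start cell; moves into walls are skipped (our assumption)."""
    x, y = start
    for _ in range(walk_len):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        if (x + dx, y + dy) not in walls:
            x, y = x + dx, y + dy
    return (x, y)

def walk_length(epoch):
    # starts at 2 and grows by one every two epochs
    return 2 + epoch // 2
```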
Table 1: Comparison of STRAW, STRAWe, LSTM and FF baselines on 8 ATARI games. The score
is averaged over 100 runs of the top 5 agents for each architecture after 500 epochs of training.

          Frostbite   Ms. Pacman   Q-bert   Hero    Crazy cl.   Alien   Amidar   Breakout
STRAWe    8074        6673         23430    36948   142686      3191    1833     363
STRAW     4138        6557         21350    35692   144004      2632    2120     423
LSTM      1409        4135         21591    35642   128351      2917    1382     632
FF        1543        2302         22281    39610   128146      2302    1235     95
5.4 ATARI
This section presents results on a subset of ATARI games. All compared methods used the same CNN
architecture, input preprocessing, and an action repeat of 4. For feature representation we use a CNN
with a convolutional layer with 16 filters of size 8 × 8 with stride 4, followed by a convolutional layer
with 32 filters of size 4 × 4 with stride 2, followed by a fully connected layer with 128 hidden
units. All three hidden layers are followed by a rectifier nonlinearity. This is the same architecture
as in [17, 18]; the only difference is that in the pre-processing stage we keep the colour channels.
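A sketch of this feature extractor in PyTorch; the 84 × 84 input resolution is our assumption, carried over from [17], and is only needed to size the final linear layer:

```python
import torch
import torch.nn as nn

class AtariFeatures(nn.Module):
    # Conv 16@8x8 stride 4 -> Conv 32@4x4 stride 2 -> FC 128,
    # each followed by a rectifier, as described above.
    def __init__(self, in_channels=3):  # colour channels are kept
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 128), nn.ReLU(),  # 9x9 spatial map for 84x84 inputs
        )

    def forward(self, x):
        return self.net(x)

features = AtariFeatures()(torch.zeros(1, 3, 84, 84))  # -> shape (1, 128)
```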
We chose games that require some degree of planning and exploration as opposed to purely reactive
ones: Ms. Pacman, Frostbite, Alien, Amidar, Hero, Q-bert, and Crazy Climber. We also added the reactive
game Breakout to the set as a sanity check. Table 1 shows the average performance of the top
5 agents for each architecture after 500 epochs of training. Due to action repeat, here an epoch
corresponds to four million frames (across all threads). STRAW or STRAWe reach the highest
score on 6 out of 8 games. They are especially strong on Frostbite, Ms. Pacman and Amidar. On
Frostbite STRAWe achieves more than a 6× improvement over the LSTM score. Notice how structured
exploration (sec. 3.1) improves the performance on 5 out of 8 games; on 2 out of 3 of the other games
(Breakout and Crazy Climber) the difference is smaller than the score variance. STRAW and STRAWe
do perform worse than an LSTM on Breakout, although they still achieve a very good score (human
players score below 100). This is likely due to Breakout requiring fast reaction and action precision
rather than planning or exploration. The FF baseline scores worst on every game apart from Hero, where
it is the best. Although the difference is not very large, this is still a surprising result, which might be
due to FF having fewer parameters to learn.
Figure 4 demonstrates re-planning behaviour on Amidar. In this game, the agent explores a maze with a
yellow avatar. It has to cover all of the maze territory without colliding with green figures (enemies).

Figure 4: Re-planning behaviour in Amidar. The agent controls the yellow figure, which scores points by
exploring the maze while avoiding green enemies. At time t the agent has explored the area to the
left and below and is planning to head right to score more points. At t + 7 an enemy blocks the way
and STRAW retreats to the left by drastically changing the plan A_t. It then returns to the original plan
at step t + 14 when the path is clear and heads to the right. Notice that when the enemy is near (t + 7
to t + 12) it plans for smaller macro-actions: c_t has a high value (white spot) closer to the origin.
Notice how the STRAW agent changes its plan as an enemy comes near and blocks its way. It
backs off and then resumes the initial plan when the enemy takes a turn and the danger is avoided.
Also, notice that when the enemy is near, the agent plans for shorter macro-actions, as indicated by the
commitment-plan c_t. Figure 3.d shows the percentage of time the best STRAWe agent is committed
to a plan on 4 different games. As training progresses, STRAWe learns to commit more and converges
to a stable regime after about 200 epochs. The only exception is Breakout, where meticulous control
is required and it is beneficial to re-plan often. This shows that STRAWe is capable of adapting to the
environment and learns temporal abstractions useful for each particular case.
Figure 5: Ablative analysis. Plots show episode reward against seen frames for different configurations
of STRAW. From left to right: varying action patch size on Ms. Pacman, and varying re-planning
modules on Frostbite and on Ms. Pacman. Notice that the models here are trained for only
100 epochs, unlike the models in Table 1, which were trained for 500 epochs.
5.5 Ablative analysis
Here we examine different design choices that were made and investigate their impact on final
performance. Figure 5 presents the performance curves of different versions of STRAW trained for
100 epochs. From left to right, the first plot shows STRAW performance given different resolution of
the action patch on Ms. Pacman game. The greater the resolution, the more complex is the update
that STRAW can generate for the action plan. In the second and third plot we investigate different
possible choices for the re-planning mechanism on Frostbite and Ms. Pacman games: we compare
STRAW with two simple modifications, one re-plans at every step, the other commits to the plan
for a random amount of steps between 0 and 4. Re-planning at every step is not only less elegant,
but also much more computationally expensive and less data efficient. The results demonstrate that
learning when to commit to the plan and when to re-plan is beneficial.
6 Conclusion
We have introduced the STRategic Attentive Writer (STRAW) architecture, and demonstrated its
ability to implicitly learn useful temporally abstracted macro-actions in an end-to-end manner.
Furthermore, STRAW advances the state-of-the-art on several challenging Atari domains that require
temporally extended planning and exploration strategies, and also has the ability to learn temporal
abstractions in general sequence prediction. As such it opens a fruitful new direction in tackling an
important problem area for sequential decision making and AI more broadly.
References
[1] Pierre-Luc Bacon and Doina Precup. The option-critic architecture. In NIPS Deep RL Workshop, 2015.
[2] Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment:
An evaluation platform for general agents. Journal of Artificial Intelligence Research, 2012.
[3] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through
stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[4] Matthew M Botvinick, Yael Niv, and Andrew C Barto. Hierarchically organized behavior and its neural
foundations: A reinforcement learning perspective. Cognition, 2009.
[5] Craig Boutilier, Ronen I Brafman, and Christopher Geib. Prioritized goal decomposition of markov
decision processes: Toward a synthesis of classical and decision theoretic planning. In IJCAI, 1997.
[6] Peter Dayan. Improving generalization for temporal difference learning: The successor representation.
Neural Computation, 1993.
[7] Peter Dayan and Geoffrey E Hinton. Feudal reinforcement learning. In NIPS. Morgan Kaufmann Publishers,
1993.
[8] Thomas G Dietterich. Hierarchical reinforcement learning with the maxq value function decomposition. J.
Artif. Intell. Res.(JAIR), 2000.
[9] Felix A Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM.
Neural computation, 2000.
[10] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[11] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. Draw: A recurrent neural
network for image generation. In ICML, 2015.
[12] Leslie Pack Kaelbling. Hierarchical learning in stochastic domains: Preliminary results. In ICML, 1993.
[13] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. ICLR, 2014.
[14] Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. Hierarchical
deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. arXiv preprint
arXiv:1604.06057, 2016.
[15] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor
policies. arXiv preprint arXiv:1504.00702, 2015.
[16] Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus
of english: The penn treebank. Computational linguistics, 1993.
[17] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare,
Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie,
Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis
Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[18] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim
Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning.
ICML, 2016.
[19] Michael C Mozer. A focused back-propagation algorithm for temporal pattern recognition. Complex
systems, 1989.
[20] Ronald Parr and Stuart Russell. Reinforcement learning with hierarchies of machines. NIPS, 1998.
[21] Doina Precup. Temporal abstraction in reinforcement learning. PhD thesis, University of Massachusetts,
2000.
[22] Doina Precup, Richard S Sutton, and Satinder P Singh. Planning with closed-loop macro actions. Technical
report, 1997.
[23] Doina Precup, Richard S Sutton, and Satinder Singh. Theoretical results on reinforcement learning with
temporally abstract options. In European Conference on Machine Learning (ECML). Springer, 1998.
[24] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014.
[25] Jürgen Schmidhuber. Neural sequence chunkers. Technical report, 1991.
[26] John Schulman, Sergey Levine, Philipp Moritz, Michael I Jordan, and Pieter Abbeel. Trust region policy
optimization. In ICML, 2015.
[27] Richard S Sutton. Td models: Modeling the world at a mixture of time scales. In ICML, 1995.
[28] Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for
temporal abstraction in reinforcement learning. Artificial intelligence, 1999.
[29] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J Mankowitz, and Shie Mannor. A deep hierarchical
approach to lifelong learning in minecraft. arXiv preprint arXiv:1604.07255, 2016.
[30] Marco Wiering and Jürgen Schmidhuber. HQ-learning. Adaptive Behavior, 1997.
Online Pricing with Strategic and Patient Buyers
Michal Feldman
Tel-Aviv University and MSR Herzliya
[email protected]
Roi Livni∗
Princeton University
[email protected]
Yishay Mansour∗
Tel-Aviv University
[email protected]
Tomer Koren∗
Google Brain
[email protected]
Aviv Zohar∗
Hebrew University of Jerusalem
[email protected]
Abstract
We consider a seller with an unlimited supply of a single good, who is faced with
a stream of T buyers. Each buyer has a window of time in which she would like
to purchase, and would buy at the lowest price in that window, provided that this
price is lower than her private value (and otherwise, would not buy at all). In this
setting, we give an algorithm that attains O(T^{2/3}) regret over any sequence of T
buyers with respect to the best fixed price in hindsight, and prove that no algorithm
can perform better in the worst case.
1 Introduction
Perhaps the most common way to sell items is using a "posted price" mechanism in which the seller
publishes the price of an item in advance, and buyers that wish to obtain the item decide whether to
acquire it at the given price or to forgo the purchase. Such mechanisms are extremely appealing. The
decision made by the buyer in a single-shot interaction is simple: if it values the item by more than
the offering price, it should buy, and if its valuation is lower, it should decline. The seller on the other
hand needs to determine the price at which she wishes to sell goods. In order to set prices, additive
regret can be minimized using, for example, a multi-armed bandit (MAB) algorithm in which arms
correspond to different prices, and rewards correspond to the revenue obtained by the seller.
Things become much more complicated when the buyers who are facing the mechanism are patient
and can choose to wait for the price to drop. The simplicity of posted price mechanisms is then
tainted by strategic considerations, as buyers attempt to guess whether or not the seller will lower
the price in the future. The direct application of MABs is no longer adequate, as prices set by such
algorithms may fluctuate at every time period. Strategic buyers can make use of this fact to gain
the item at a lower price, which lowers the revenue of the seller and, more crucially, changes the
seller?s feedback for a given price. With patient buyers, the revenue from sales is no longer a result
of the price at the current period alone, but rather the combined outcome of prices that were set in
surrounding time periods, and of the expectation of buyers regarding future prices.
In this paper, we focus on strategic buyers that may delay their purchase in hopes of obtaining a
better deal. We assume that each buyer has a valuation for the item, and a ?patience level? which
represents the length of the time-window during which it is willing to wait in order to purchase the
item. Buyers wish to minimize the price during this period. Note that such buyers may interfere with
na?ve attempts to minimize regret, as consecutive days at which different prices are set are no longer
independent.
∗ Parts of this work were done while the author was at Microsoft Research, Herzliya.
To regain the simplicity of posted prices for the buyers, we consider a setting in which the seller
commits to the price in subsequent time periods in advance, publishing prices for the entire window
of the buyers. Strategic buyers that arrive at the market are then able to immediately choose the
lowest price within their window. Thus, given the valuation and patience of the buyers (the number
of days they are willing to wait) their actions are clearly determined: buy it at a day that is within the
buyer?s patience window and price is cheapest, provided that it is lower than the valuation.
An important aspect of our proposed model is to consider for each buyer a window of time (rather
than, for example, discounting). Under discounting, the buyers, in order to best respond, would have to
reason about how other buyers would behave and how the seller would adjust the prices in response to
them. By fixing a window of time, and forcing the seller to publish
prices for the entire window, the buyers become "price takers" and their behavior becomes tractable
to analyze.
As in previous works, we focus on minimizing the additive regret of the seller, assuming that the
appearance of buyers is adversarial; that is, we do not make any statistical assumptions on the buyers'
valuations and window sizes (except for a simple upper bound). Specifically, we assume that the values
are in the range [0, 1] and that the window size is in the range {1, . . . , τ̄ + 1}. The regret is measured
with respect to the best single price in hindsight. Note that the benchmark of a fixed price p∗ implies
that any buyer with value above p∗ buys and any buyer with value below p∗ does not buy. The window
size has no effect when we have a fixed price. On the other hand, for the online algorithm, having to
deal with various window sizes creates a new challenge.
The special case of this model where τ̄ = 0 (and hence all buyers have a window of size exactly one)
was previously studied by Kleinberg and Leighton [11], who discussed a few different models for
the buyer valuations and derived tight regret bounds for them. When the set of feasible prices is of
constant size, their result implies a Θ(√T) regret bound with respect to the best fixed price, which
is also proven to be the best possible in that case. In contrast, in the current paper we focus on
the case τ̄ ≥ 1, where the buyers' window sizes may be larger than one, and exhibit the following
contributions:

(i) We present an algorithm that achieves O(τ̄^{1/3} T^{2/3}) additive regret in an adversarial setting,
compared to the best fixed posted price in hindsight. The upper bound relies on creating epochs,
where the price within each epoch is fixed and the number of epochs limits the number of times
the seller switches prices. The actual algorithm used to select prices within an epoch is
EXP3 (or any other multi-armed bandit algorithm with similar performance).

(ii) We exhibit a matching lower bound of Ω(τ̄^{1/3} T^{2/3}) regret. The proof of the lower bound reveals
that the difficulty in achieving lower regret stems from the lost revenue that the seller suffers
every time she tries to lower the price: buyers from preceding time slots wait and do not purchase
the items at the higher prices that prevailed when they arrived. We are thus able to prove a lower
bound by reduction from a multi-armed bandit problem with switching costs. Our lower bound
uses only two prices.

In other words, we see that as soon as the buyers' patience increases from zero to one, the optimal
regret rate immediately jumps from Θ(√T) to Θ(T^{2/3}).
The rest of the paper is organized as follows. In the remainder of this section we briefly overview
related work. We then proceed in Section 2 to provide a formal definition of the model and the
statement of our main results. We continue in Section 3 with a presentation of our algorithm and its
analysis, present our lower bound in Section 4, and conclude with a brief discussion.
1.1 Related work
As mentioned above, the work most closely related to ours is the paper of Kleinberg and Leighton
[11] that studies the case τ̄ = 0, i.e., in which the buyers' windows are all of size one.
For a fixed set of feasible prices of constant size, their result implies a Θ(√T) regret bound, whereas
for a continuum of prices they achieve a Θ(T^{2/3}) regret bound. The Θ(T^{2/3}) lower bound found in
[11] is similar to our own in asymptotic magnitude, but stems from the continuous nature of the prices.
In our case the lower bound is achieved for buyers with only 2 prices, a case in which Kleinberg
and Leighton [11] have a bound of Θ(√T). Hence, we show that such a bound can occur due to the
strategic nature of the interaction itself.
A line of work appearing in [1, 12, 13] considers a model of a single buyer and a single seller, where
the buyer is strategic and has a constant discount factor. The main issue is that the buyer continuously
interacts with the seller and thus has an incentive to lower future prices at the cost of current valuations.
They define strategic regret and derive near optimal strategic regret bounds for various valuation
models. We differ from this line of work in a few important ways. First, they consider either a
fixed unknown valuation or stochastic i.i.d. valuations, while we consider adversarial valuations.
Second, they consider a single buyer while we consider a stream of buyers. More importantly, in
our model the buyers do not influence the prices they are offered, so the strategic incentives are very
different. Third, their model uses discounting to model the decay of buyer valuation over time, while
we use a window of time.
There is a vast literature in Algorithmic Game Theory on revenue maximization with posted prices,
in settings where agents? valuations are drawn from unknown distributions. For the case of a single
good of unlimited supply, the goal is to approximate the best price, as a function of the number
of samples observed and with a multiplicative approximation ratio. The work of Balcan et al. [4]
gives a generic reduction which can be used to show that one can achieve an ε-optimal pricing with
a sample of size O((H/ε²) log(H/ε)), where H is a bound on the maximum valuation. The works
of Cole and Roughgarden [8] and Huang et al. [10] show that for regular and Monotone Hazard
Rate distributions, sample bounds of Θ(ε^{−3}) and Θ(ε^{−3/2}), respectively, guarantee a multiplicative
approximation of 1 − ε.
Finally, our setting is somewhat similar to a unit-demand auction in which agents desire a single
item out of several offerings. In our case, we can consider items sold at different times as different
items and agents desire a single one that is within their window. When agents have unit-demand
preferences, posted-price mechanisms can extract a constant fraction of the optimal revenue [5, 6, 7].
Note that a constant ratio approximation algorithm implies a linear regret in our model. On the other
hand, these works consider a more involved problem from a buyer?s valuation perspective.
2 Setup and Main Results
We consider a setting with a single seller and a sequence of T buyers b_1, . . . , b_T. Every buyer b_t is
associated with a value v_t ∈ [0, 1] and a patience τ_t. A buyer's patience indicates the time duration in
which the buyer stays in the system and may purchase an item.

The seller posts prices in advance over some time window. Let τ̄ be the maximum patience, and
assume that τ_t ≤ τ̄ for every t. Let p_t denote the price at time t, and assume that all prices are chosen
from a discrete (and normalized) predefined set of n prices P = {0, 1/n, 2/n, . . . , 1}. At time t = 1, the
seller posts prices p_1, . . . , p_{τ̄+1}, and learns the revenue obtained at time t = 1 (the revenue depends
on the buyers' behavior, which is explained below). Then, at each time step t, the seller publishes
a new price p_{t+τ̄} ∈ P, and learns the revenue obtained at time t, which she can use to set the next
prices. Note that at every time step, prices are known for the next τ̄ time steps.

The revenue in every time step is determined by the strategic behavior of buyers, which is explained
next. Every buyer b_t observes prices p_t, . . . , p_{t+τ_t}, and purchases the item at the lowest price among
these prices (breaking ties toward earlier times), provided it does not exceed her value. The revenue
obtained from buyer b_t, denoted ρ, is given by:
    ρ(p_t, . . . , p_{t+τ̄}; b_t) = min{p_t, . . . , p_{t+τ_t}}   if min{p_t, . . . , p_{t+τ_t}} ≤ v_t,
                                   0                              otherwise.

As b_t has patience τ_t, we will sometimes omit the irrelevant prices and write ρ(p_t, . . . , p_{t+τ_t}; b_t) =
ρ(p_t, . . . , p_{t+τ̄}; b_t).

As described, a buyer need not buy the item on her day of appearance and may choose to wait.
If the buyer chooses to wait, we will observe the feedback from her decision only on the day of
purchase. We therefore need to distinguish between the revenue from buyer t and the revenue at time
t. Given a sequence of prices p_1, . . . , p_{t+τ̄} and a sequence of buyers b_1, . . . , b_t, we define the revenue
at time t to be the sum of all revenues from buyers that preferred to buy at time t. Formally, let I_t
denote the set of all buyers that buy at time t, i.e.,

    I_t = {b_i : t = arg min{i ≤ t' ≤ i + τ_i : p_{t'} = ρ(p_i, . . . , p_{i+τ̄}; b_i)}}.
3
Then the revenue obtained at time t is given by:
Rt (pt ?? , . . . , pt+?? ) = R(p1, . . . , pt+?? ; b1:t ) :=
?
i 2It
(pi, . . . pi+?? ; bi )),
where we use the notation b1:T as a shorthand for the sequence b1, . . . , bT . The regret of the (possibly
randomized) seller A is the difference between the revenue obtained by the best fixed price in hindsight
and the expected revenue obtained by the seller A, given a sequence of buyers:
"T
#
T
?
?
?
?
RegretT (A; b1:T ) = max
R(p , . . . , p ; b1:t ) E
R(p1, . . . pt+?? ; b1:t ) .
?
p 2P
t=1
t=1
We further denote by RegretT (A) the expected regret a seller A incurs for the worst case sequence,
i.e., RegretT (A) = maxb1:T RegretT (A; b1:T ).
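To make the buyers' behaviour concrete, the following sketch (our code; ρ and R_t as defined above) computes, for a given price sequence, the day on which each buyer purchases and the resulting per-day revenues:

```python
def revenue_per_day(prices, buyers):
    """prices: posted prices p_1, ..., p_{T + tau_max} (0-indexed here);
    buyers: list of (v_t, tau_t) pairs; buyer t arrives on day t.
    Returns the list of per-day revenues R_t."""
    R = [0.0] * len(prices)
    for t, (v, tau) in enumerate(buyers):
        window = prices[t:t + tau + 1]
        best = min(window)
        if best <= v:                           # buy at the cheapest price in the
            R[t + window.index(best)] += best   # window, ties toward earlier days
        # otherwise the buyer leaves without purchasing
    return R

# toy usage: the first buyer waits one day for the price drop
print(revenue_per_day([1.0, 0.5, 1.0], [(0.8, 1), (0.6, 0), (0.4, 0)]))
# -> [0.0, 1.0, 0.0]
```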
2.1 Main Results

Our main results are optimal regret rates in the strategic buyers setting.

Theorem 1. The T-round expected regret of Algorithm 1 for any sequence of buyers b_1, . . . , b_T with
patience at most τ̄ ≥ 1 is upper bounded as Regret_T ≤ 10(τ̄ n log n)^{1/3} T^{2/3}.

Theorem 2. For any τ̄ ≥ 1, n ≥ 2 and for any pricing algorithm, there exists a sequence of buyers
b_1, . . . , b_T with patience at most τ̄ such that Regret_T = Ω(τ̄^{1/3} T^{2/3}).
3 Algorithm
In this section we describe and analyze our online pricing algorithm. It is worth starting by highlighting
why simply running an "off the shelf" multi-armed bandit algorithm such as EXP3 would fail. Consider
a fixed distribution over the actions and assume the buyer has a window size of two. Unlike the
standard multi-armed bandit, where we get the expected revenue from the price we select, now the
buyer would select the lower of the two prices, which would clearly hurt our revenue (there is a slight
gain from the increased probability of a sale, but it does not suffice to offset the loss). For this reason, the
seller would intuitively like to minimize the number of times it changes prices (more precisely, lowers
the prices).

Our online pricing algorithm, which is given in Algorithm 1, is based on the EXP3 algorithm of Auer
et al. [3], which we use as a black-box. The algorithm divides the time horizon into roughly T^{2/3} epochs,
and within each epoch the seller repeatedly announces the same price, chosen by the EXP3
black-box at the beginning of the epoch. At the end of the epoch, EXP3 is updated with the overall
average performance of the chosen price during the epoch (ignoring the time steps which might be
influenced by different prices). Hence, our algorithm changes the posted price only O(T^{2/3}) times,
thereby keeping under control the costs associated with price fluctuations due to the patience of the
buyers.
Algorithm 1: Online posted pricing algorithm

Parameters: horizon T, number of prices n, and maximal patience τ̄
Let B = ⌊τ̄^{2/3} (n log n)^{−1/3} T^{1/3}⌋ and T' = ⌊T/B⌋
Initialize A ← EXP3(T', n)
for j = 0, . . . , T' − 1 do
    Sample i ∼ A and let p'_j = i/n
    for t = Bj + 1, . . . , B(j + 1) do
        Announce price p_{t+τ̄} = p'_j   % on j = 0, t = 1, announce p_1, . . . , p_{1+τ̄} = p'_0
        Receive and observe total revenue R_t(p_{t−τ̄}, . . . , p_{t+τ̄})
    Update A with feedback (1/B) · Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p_{t−τ̄}, . . . , p_{t+τ̄})
for t = BT' + 1, . . . , T do
    Announce price p_{t+τ̄} = p'_{T'−1}
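A runnable sketch of Algorithm 1 (our implementation, not the authors'). EXP3 is the standard algorithm of Auer et al. [3]; the buyer simulation exploits the fact that inside the truncated range of an epoch all prices in a buyer's window are equal, so buyer t simply pays p'_j iff p'_j ≤ v_t:

```python
import math, random

class EXP3:
    """EXP3 of Auer et al. [3] over K arms, rewards in [0, 1]."""
    def __init__(self, horizon, K):
        self.K = K
        self.gamma = min(1.0, math.sqrt(K * math.log(K) /
                                        ((math.e - 1) * max(1, horizon))))
        self.w = [1.0] * K

    def sample(self):
        total = sum(self.w)
        self.p = [(1 - self.gamma) * wi / total + self.gamma / self.K
                  for wi in self.w]
        self.i = random.choices(range(self.K), weights=self.p)[0]
        return self.i

    def update(self, reward):
        xhat = reward / self.p[self.i]          # importance-weighted estimate
        self.w[self.i] *= math.exp(self.gamma * xhat / self.K)

def run_algorithm1(buyers, n, tau):
    """buyers: list of (value, patience) pairs with patience <= tau; n >= 2.
    Returns the sequence of posted prices."""
    T = len(buyers)
    grid = [i / n for i in range(n + 1)]        # the price grid P = {0, 1/n, ..., 1}
    B = max(2 * tau + 2,
            int(tau ** (2 / 3) * (n * math.log(n)) ** (-1 / 3) * T ** (1 / 3)))
    Tp = max(1, T // B)
    A = EXP3(Tp, len(grid))
    prices = []
    for j in range(Tp):
        p = grid[A.sample()]
        prices.extend([p] * B)                  # announce p for the whole epoch
        # Feedback: average revenue on days Bj + 2*tau + 1, ..., B(j+1). All
        # prices in these buyers' windows equal p, so buyer t pays p iff p <= v_t.
        fb = sum(p for (v, _) in buyers[j * B + 2 * tau:(j + 1) * B] if p <= v) / B
        A.update(fb)
    prices.extend([prices[-1]] * (T - len(prices)))  # leftover rounds
    return prices

random.seed(0)
buyers = [(random.random(), random.randint(0, 2)) for _ in range(10000)]
prices = run_algorithm1(buyers, n=10, tau=2)
```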
We now analyze Algorithm 1 and prove Theorem 1. The proof follows standard arguments in
adversarial online learning (e.g., Arora et al. [2]); we note, however, that for obtaining the optimal
dependence on the maximal patience τ̄ one cannot apply existing results directly and has to analyze
the effect of accumulating revenues over epochs more carefully, as we do in the proof below. This
is mainly because in our model the revenue at time t is not bounded by 1 but by τ̄, hence readily
amenable results would add a factor of τ̄ to the regret.
Proof of Theorem 1. For all 0 ≤ j ≤ T' − 1 and for all prices p ∈ P, define

    R'_j(p) = (1/B) · Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p, . . . , p).

(Here, the argument p is repeated 2τ̄ + 1 times.) Observe that 0 ≤ R'_j(p) ≤ 1 for all j and p, as the
maximal total revenue between rounds Bj + 2τ̄ + 1 and B(j + 1) is at most B; indeed, there are at
most B buyers who might make a purchase during that time, and each purchase yields revenue of at
most 1. By a similar reasoning, we also have

    Σ_{t=Bj+1}^{Bj+2τ̄} R_t(p, . . . , p) ≤ 4τ̄                                            (1)

for all j and p.

Now, notice that p_t = p'_j for all Bj + τ̄ + 1 ≤ t ≤ B(j + 1) + τ̄, hence the feedback fed back to A
after epoch j is

    (1/B) · Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p_{t−τ̄}, . . . , p_{t+τ̄}) = (1/B) · Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p'_j, . . . , p'_j) = R'_j(p'_j).

That is, Algorithm 1 is essentially running EXP3 on the reward functions R'_j. By the regret bound of
EXP3, we know that

    Σ_{j=0}^{T'−1} R'_j(p∗) − E[ Σ_{j=0}^{T'−1} R'_j(p'_j) ] ≤ 3·sqrt(T' n log n)

for any fixed p∗ ∈ P, which implies

    Σ_{j=0}^{T'−1} Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p∗, . . . , p∗) − E[ Σ_{j=0}^{T'−1} Σ_{t=Bj+2τ̄+1}^{B(j+1)} R_t(p_{t−τ̄}, . . . , p_{t+τ̄}) ] ≤ 3·sqrt(BT n log n).      (2)

In addition, due to Eq. (1) and the non-negativity of the revenues, we also have

    Σ_{j=0}^{T'−1} Σ_{t=Bj+1}^{Bj+2τ̄} R_t(p∗, . . . , p∗) − E[ Σ_{j=0}^{T'−1} Σ_{t=Bj+1}^{Bj+2τ̄} R_t(p_{t−τ̄}, . . . , p_{t+τ̄}) ] ≤ 4τ̄T' ≤ 4τ̄T/B.      (3)

Summing Eqs. (2) and (3), and taking into account rounds BT' + 1, . . . , T during which the total
revenue is at most B + 2τ̄, we obtain the regret bound

    Σ_{t=1}^T R_t(p∗, . . . , p∗) − E[ Σ_{t=1}^T R_t(p_{t−τ̄}, . . . , p_{t+τ̄}) ] ≤ 3·sqrt(BT n log n) + 4τ̄T/B + B + 2τ̄.

Finally, for B = ⌊τ̄^{2/3} (n log n)^{−1/3} T^{1/3}⌋, the theorem follows (assuming that τ̄ < T).  ∎

4 Lower Bound
We next briefly overview the lower bound and the proof's main technique. A full proof is given in the
supplementary material; for simplicity of exposition, here we assume τ̄ = 1 and n = 2.
Our proof relies on two steps. The first step is a reduction from pricing with patience τ̄ = 0 but with
switching costs. The second step is to lower bound the regret of pricing with switching costs. This we
do again by reduction from the Multi-Armed Bandit (MAB) problem with switching costs. We begin
by briefly overviewing these terms and definitions.

We recall the standard setting of MAB with two actions and switching cost c. A sequence of losses
ℓ_1, . . . , ℓ_T is produced, where each loss is a function ℓ_t : {1, 2} → {0, 1}. At each round
a player chooses an action i_t ∈ {1, 2} and receives as feedback ℓ_t(i_t). The switching cost regret of
player A is given by

    S_c-Regret_T(A; ℓ_{1:T}) = E[ Σ_{t=1}^T ℓ_t(i_t) ] − min_{i∗} Σ_{t=1}^T ℓ_t(i∗) + c · E[ |{t : i_t ≠ i_{t−1}}| ].
We define analogously the switching cost regret for non-strategic buyers. Namely, given a
sequence of buyers b_1, . . . , b_T, all with patience τ̄ = 0, the switching cost regret of a seller is given
by:

    S_c-Regret_T(A; b_{1:T}) = E[ max_{p∗} Σ_{t=1}^T ρ(p∗; b_t) − Σ_{t=1}^T ρ(p_t; b_t) ] + c · E[ |{t : p_t ≠ p_{t−1}}| ].

4.1 Reduction from Switching Cost Regret
As we stated above, our first step is to show a reduction from switching cost regret for non-strategic
buyers. This we do in Theorem 3:

Theorem 3. For every (possibly randomized) seller A for strategic buyers with patience at most
τ̄ = 1, there exists a randomized seller A' for non-strategic buyers with patience τ̄ = 0 such that:

    (1/12) · S_{1/2}-Regret_T(A') ≤ Regret_T(A).

The proof idea is to construct from every sequence of non-strategic buyers b_1, . . . , b_T a sequence of
strategic buyers b̃_1, . . . , b̃_T such that the regret incurred to A by b̃_{1:T} is at least the switching cost
regret incurred to A' by b_{1:T}. The idea behind the construction is as follows: at each iteration t we
choose with probability half to present to the seller b_t, and with probability half we present to the
seller a buyer z_t that has the following statistics:

    z_t = (v = 1/2, τ = 0)  with probability 1/2,
          (v = 1,   τ = 1)  with probability 1/2.        (4)

That is, z_t is with probability 1/2 a buyer with value v = 1/2 and patience τ = 0, and with probability 1/2,
z_t is a buyer with value v = 1 and patience τ = 1.
Observe that if z_t would always have patience τ = 0 (i.e., even if her value is v = 1), then for any sequence
of prices the expected reward from the z_t buyer is always half, independent of the prices. In other
words, the sequence of noise does not change the performance of the sequence of prices and cannot
be exploited for improvement. On the other hand, note that since the value 1 corresponds to patience 1, the
seller might lose half whenever she reduces the price from 1 to 1/2. A crucial point is that the seller
must post her price in advance; therefore she cannot in any way predict whether the buyer is willing to
wait or not and manipulate prices accordingly. A proof of the following lemma is provided in the
supplementary material.
Lemma 4. Consider the pricing problem with τ̄ = 1 and n = 2. Let b_1, . . . , b_T be a sequence of
buyers with patience 0. Let z_1, . . . , z_T be a sequence of stochastic buyers as in Eq. (4). Define b̃_t to
be a stochastic buyer that is with probability half b_t and with probability half z_t. Then, for any seller
A, the expected regret A incurs from the sequence b̃_{1:T} is at least

    E[ Regret_T(A; b̃_{1:T}) ] ≥ (1/2) · E[ max_{p∗∈P} Σ_{t=1}^T ρ(p∗; b_t) − Σ_{t=1}^T ρ(p_t; b_t) ] + (1/8) · E[ |{t : p_t > p_{t+1}}| ]      (5)

where the expectations are taken with respect to the internal randomization of the seller A and the
random bits used to generate the sequence b̃_{1:T}.
4.1.1 Proof of Theorem 3
To construct algorithm A' from A, we develop a meta-algorithm Ā, depicted in Algorithm 2, that
receives an algorithm, or seller, as input. A' is then the seller obtained by fixing A as the input for Ā.
In our reduction we assume that at each iteration algorithm Ā can ask A for one posted price p_t,
and in turn she can return a feedback r_t to algorithm A, after which a new iteration begins.

The idea of the construction is as follows. As an initialization step, algorithm A' produces a stochastic
sequence of buyers of type z_1, . . . , z_T; the algorithm then chooses a priori whether at step t the buyer b̃_t is
going to be the buyer b_t that she observes or z_t (with probability half each). The sequence b̃_t is
distributed as depicted in Lemma 4. Note that we do not assume that the learner knows the value of
b_t.

At each iteration t, algorithm A' receives price p_t from algorithm A and posts price p_t. She then
receives as feedback ρ(p_t; b_t). Given the revenues ρ(p_1; b_1), . . . , ρ(p_t; b_t) and her own internal
random variables, the algorithm can calculate the revenue for algorithm A w.r.t. the sequence of
buyers b̃_1, . . . , b̃_t, namely r_t = R(p_{t−1}, p_t, p_{t+1}; b̃_{1:t}).

In turn, at time t algorithm A' returns to algorithm A her revenue, or feedback, w.r.t. b̃_1, . . . , b̃_T at
time t, which is r_t.

Since algorithm A receives as feedback at time t the quantity R(p_{t−1}, p_t, p_{t+1}; b̃_{1:t}), we obtain that for
the sequence of posted prices p_1, . . . , p_T:

    Regret_T(A; b̃_{1:T}) = max_{p∗} Σ_{t=1}^T ρ(p∗, p∗; b̃_t) − Σ_{t=1}^T ρ(p_t, p_{t+1}; b̃_t).

Taking expectations, using Lemma 4, and noting that the number of times p_t > p_{t+1} is at least a third
of the times p_t ≠ p_{t+1} (since there are only 2 prices), we have that

    (1/12) · S_{1/2}-Regret_T(A'; b_{1:T}) ≤ E_{b̃_{1:T}}[ Regret_T(A; b̃_{1:T}) ] ≤ Regret_T(A).

Since this holds for any sequence b_{1:T}, we obtain the desired result.
Algorithm 2: Reduction from pricing with switching cost to strategic buyers

Input: T, A   % A is an algorithm with bounded regret for strategic buyers
Output: p_1, . . . , p_T
Set r_1 = · · · = r_T = 0
Draw i.i.d. z_1, . . . , z_T   % see Eq. (4)
Draw i.i.d. e_1, . . . , e_T ∈ {0, 1} distributed according to a Bernoulli distribution
for t = 1, . . . , T do
    Receive from A a posted price p_{t+1}   % at the first round receive two prices p_1, p_2
    Post price p_t and receive as feedback ρ(p_t; b_t)
    if e_t = 0 then
        Set r_t = r_t + ρ(p_t; b_t)   % b̃_t = b_t
    else
        if (p_t ≤ p_{t+1}) or (z_t has patience 0) then
            Set r_t = r_t + ρ(p_t; z_t)
        else
            Set r_{t+1} = r_{t+1} + ρ(p_t, p_{t+1}; z_t)
    Return r_t as feedback to A
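A sketch of Algorithm 2 in code (the seller interface with get_price/feed is our own device for interacting with A; it is not part of the paper):

```python
import random

def rho(prices, v):
    # revenue from a buyer with value v facing the given window of prices
    m = min(prices)
    return m if m <= v else 0.0

def algorithm2(T, seller, buyer_values):
    """seller.get_price() yields the next posted price (so the first two calls
    return p_1 and p_2); seller.feed(r) hands the day-t feedback back to A.
    buyer_values: the values v_t of the non-strategic (patience-0) buyers b_t."""
    r = [0.0] * (T + 1)
    z = [(0.5, 0) if random.random() < 0.5 else (1.0, 1) for _ in range(T)]
    e = [random.randint(0, 1) for _ in range(T)]
    p_next = seller.get_price()                      # p_1
    for t in range(T):
        p_t, p_next = p_next, seller.get_price()     # now p_{t+1} is known
        feedback_real = rho([p_t], buyer_values[t])  # post p_t, observe rho(p_t; b_t)
        if e[t] == 0:                                # simulated buyer is b_t
            r[t] += feedback_real
        else:                                        # simulated buyer is z_t
            v, patience = z[t]
            if p_t <= p_next or patience == 0:
                r[t] += rho([p_t], v)
            else:                                    # z_t waits for the lower price
                r[t + 1] += rho([p_t, p_next], v)
        seller.feed(r[t])                            # day-t feedback for A
```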
4.2 From MAB with Switching Cost to Pricing with Switching Cost
The above section concluded that pricing with switching cost may be reduced to pricing with strategic
buyers. Therefore, our next step is to show that we can produce a sequence of non-strategic
buyers with high switching cost regret. Our proof relies on a further reduction from MAB with
switching costs.

Theorem 5 (Dekel et al. [9]). Consider the MAB setting with 2 actions. For any randomized
player, there exists a sequence of loss functions ℓ_1, . . . , ℓ_T, where ℓ_t : {1, 2} → {0, 1}, such that
S_c-Regret_T(A; ℓ_{1:T}) ∈ Ω(T^{2/3}), for every c > 0.
Here we prove an analogous statement for the pricing setting:

Theorem 6. Consider the pricing problem for buyers with patience τ̄ = 0 and n = 2. For any
randomized seller, there exists a sequence of buyers b_1, . . . , b_T such that S_c-Regret_T(A; b_{1:T}) ∈
Ω(T^{2/3}), for every c > 0.

The transition from MAB with switching cost to pricing with switching cost is a non-trivial task. To
do so, we have to relate actions to prices and values to loss vectors in a manner that relates the
revenue regret to the loss regret. The main challenge, perhaps, is that the structure of the feedback is
inherently different in the two problems. In two-armed bandit problems all loss configurations are
feasible. In contrast, in the pricing case certain feedbacks collapse to full information: for example, if
we sell at price 1 we know the feedback for price 1/2, and if we fail to sell at price 1/2 we obtain full
feedback for price 1.
Our reduction proceeds roughly along the following lines. We begin by constructing stochastic
mappings that turn loss vectors into values, φ_t : {0, 1}² → {0, 1/2, 1}. This in turn defines a mapping
from a sequence of losses ℓ_t to a stochastic sequence of buyers b_t. In our reduction we assume we
are given an algorithm A that solves the pricing problem; that is, at each iteration we may ask for
a price and then in turn we return a feedback ρ(p_t; b_t). Note that we cannot assume that we have
access to or know b_t, which is defined by φ_t(ℓ_t). The buyer b_t depends on the full loss vector ℓ_t: assuming
that we can see the full ℓ_t would not lead to a meaningful reduction for MAB.

However, our construction of φ_t is such that each posted price is associated with a single action. This
means that for each posted price there is a single action we need to observe in order to calculate the
correct feedback or revenue. This also means that we switch actions only when algorithm A switches
prices. Finally, our sequence of transformations has the following property: if i is the action needed in
order to discover the revenue for price p, then E[ℓ_t(i)] = 1/2 − (1/4) · E[ρ(p; b_t)]. Thus, the regret of our
actions compares to the regret of the seller.
5 Discussion
In this work we introduced a new model of strategic buyers, where buyers have a window of time
in which they would like to purchase the item. Our modeling circumvents complicated dynamics
between the buyers, since it forces the seller to post prices for the entire window of time in advance.
We consider an adversarial setting, where both buyer valuation and window size are selected adversarially. We compare our online algorithm to a static fixed price, which is by definition oblivious
to the window sizes. We show that the regret is sub-linear, and more precisely Θ(T^{2/3}). The upper
bound shows that in this model the average regret per buyer is still vanishing. The lower bound shows
that having a window size greater than 1 impacts the regret bounds dramatically. Even for window
sizes 1 or 2 and prices 1/2 or 1 we get a regret of Θ(T^{2/3}), compared to a regret of O(T^{1/2}) when all
the windows are of size 1.
Given the sharp Θ(T^{2/3}) bound, it might be worth revisiting our feedback model. Our model assumes
that the feedback for the seller is the revenue obtained at the end of each day. It is worthwhile to
consider stronger feedback models, where the seller can gain more information about the buyers,
namely, their day of arrival and their window size. In terms of the upper bound, our result applies to
any feedback model that is stronger; i.e., as long as the seller gets to observe the revenue per day,
the O(T^{2/3}) bound holds. As far as the lower bound is concerned, one can observe that our proofs
and construction are valid even for very strong feedback models. Namely, even if the seller gets as
feedback the revenue from buyer t at time t (instead of at the time of purchase), and in fact even if she
gets to observe the patience of the buyers (i.e., full information w.r.t. patience), the Ω(T^{2/3}) bound
holds, as long as the seller posts prices in advance.
We did not consider continuous pricing explicitly, but one can verify that applying our algorithm to a
setting of continuous pricing gives a regret bound of O(T^{3/4}), by discretizing the continuous prices
to T^{1/4} prices. On the positive side, this shows that we still obtain a vanishing average regret in the
continuous case. On the other hand, we were not able to improve our lower bound to match this upper
bound. This gap is one of the interesting open problems in our work.
References
[1] K. Amin, A. Rostamizadeh, and U. Syed. Learning prices for repeated auctions with strategic
buyers. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger,
editors, Advances in Neural Information Processing Systems 26, pages 1169–1177. 2013.
[2] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from
regret to policy regret. arXiv preprint arXiv:1206.6400, 2012.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit
problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[4] M.-F. Balcan, A. Blum, J. D. Hartline, and Y. Mansour. Reducing mechanism design to
algorithm design via machine learning. J. Comput. Syst. Sci., 74(8):1245–1270, 2008.
[5] S. Chawla, J. D. Hartline, and R. D. Kleinberg. Algorithmic pricing via virtual valuations. In
ACM Conference on Electronic Commerce, pages 243–251, 2007.
[6] S. Chawla, J. D. Hartline, D. L. Malec, and B. Sivan. Multi-parameter mechanism design and
sequential posted pricing. In STOC, pages 311–320, 2010.
[7] S. Chawla, D. L. Malec, and B. Sivan. The power of randomness in Bayesian optimal
mechanism design. In the 11th ACM Conference on Electronic Commerce (EC), 2010.
[8] R. Cole and T. Roughgarden. The sample complexity of revenue maximization. In Proceedings
of the 46th Annual ACM Symposium on Theory of Computing, pages 243–252. ACM, 2014.
[9] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. In
Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 459–467.
ACM, 2014.
[10] Z. Huang, Y. Mansour, and T. Roughgarden. Making the most of your samples. In Proceedings
of the Sixteenth ACM Conference on Economics and Computation, EC, pages 45–60, 2015.
[11] R. D. Kleinberg and F. T. Leighton. The value of knowing a demand curve: Bounds on regret for
online posted-price auctions. In 44th Symposium on Foundations of Computer Science (FOCS),
pages 594–605, 2003.
[12] M. Mohri and A. Munoz. Optimal regret minimization in posted-price auctions with strategic
buyers. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger,
editors, Advances in Neural Information Processing Systems 27, pages 1871–1879. 2014.
[13] M. Mohri and A. Munoz. Revenue optimization against strategic buyers. In C. Cortes, N. D.
Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information
Processing Systems 28, pages 2530–2538. 2015.
9
FPNN: Field Probing Neural Networks for 3D Data

Yangyan Li¹,²   Sören Pirk¹   Hao Su¹   Charles R. Qi¹   Leonidas J. Guibas¹
¹ Stanford University, USA    ² Shandong University, China
Abstract
Building discriminative representations for 3D data has been an important task in
computer graphics and computer vision research. Convolutional Neural Networks
(CNNs) have shown to operate on 2D images with great success for a variety
of tasks. Lifting convolution operators to 3D (3DCNNs) seems like a plausible
and promising next step. Unfortunately, the computational complexity of 3D
CNNs grows cubically with respect to voxel resolution. Moreover, since most 3D
geometry representations are boundary based, occupied regions do not increase
proportionately with the size of the discretization, resulting in wasted computation.
In this work, we represent 3D spaces as volumetric fields, and propose a novel
design that employs field probing filters to efficiently extract features from them.
Each field probing filter is a set of probing points, i.e., sensors that perceive the
space. Our learning algorithm optimizes not only the weights associated with the
probing points, but also their locations, which deforms the shape of the probing
filters and adaptively distributes them in 3D space. The optimized probing points
sense the 3D space 'intelligently', rather than operating blindly over the entire
domain. We show that field probing is significantly more efficient than 3DCNNs,
while providing state-of-the-art performance, on classification tasks for 3D object
recognition benchmark datasets.
1 Introduction
Figure 1: The sparsity characteristic of 3D data in occupancy grid representation. 3D occupancy grids at resolutions 30, 64 and 128 are shown, together with their density, defined as #occupied grids / #total grids: 10.41%, 5.09% and 2.41%, respectively. It is clear that the 3D occupancy grid gets sparser and sparser as the fidelity of the surface approximation increases.

Rapid advances in 3D sensing technology have made 3D data ubiquitous and easily accessible, rendering them an important data source for high level semantic understanding in a variety of environments. The semantic understanding problem, however, remains very challenging for 3D data, as it is hard to find an effective scheme for converting input data into informative features for further processing by machine learning algorithms. For semantic understanding problems in 2D images, deep CNNs [15] have been widely
2D images, deep CNNs [15] have been widely
used and have achieved great success, where the convolutional layers play an essential role. They
provide a set of 2D filters which, when convolved with input data, transform the data into informative
features for higher level inference.
In this paper, we focus on the problem of learning a 3D shape representation by a deep neural network.
We keep two goals in mind when designing the network: the shape features should be discriminative
for shape recognition and efficient for extraction at runtime. However, existing 3D CNN pipelines that
simply replace the conventional 2D filters by 3D ones [31, 19] have difficulty in capturing geometric
structures with sufficient efficiency. The inputs to these 3D CNNs are voxelized shapes represented
by occupancy grids, in direct analogy to pixel array representation for images. We observe that
the computational cost of 3D convolution is quite high, since convolving 3D voxels has cubical
complexity with respect to spatial resolution, one order higher than the 2D case. Due to this high
computational cost, researchers typically choose a 30 × 30 × 30 resolution to voxelize shapes [31, 19],
which is significantly lower than the widely adopted resolution 227 × 227 for processing images [24].
We suspect that the strong artifacts introduced at this level of quantization (see Figure 1) hinder the
process of learning effective 3D convolutional filters.
Figure 2: A visualization of probing filters before (a) and after (d) training them for extracting 3D
features. The colors associated with each probing point visualize the filter weights for them. Note
that probing points belonging to the same filter are linked together for visualization purposes. (b) and (c)
are subsets of the probing filters of (a) and (d), for better visualizing that not only the weights on the
probing points, but also their locations, are optimized for them to better 'sense' the space.
Two significant differences between 2D images and 3D shapes interfere with the success of directly
applying 2D CNNs on 3D data. First, as the voxel resolution grows, the grids occupied by shape
surfaces get sparser and sparser (see Figure 1). The convolutional layers that are designed for 2D images thereby waste much computation resource in such a setting, since they convolve with 3D blocks
that are largely empty and a large portion of multiplications are with zeros. Moreover, as the voxel
resolution grows, the local 3D blocks become less and less discriminative. To capture informative
features, long range connections have to be established for taking distant voxels into consideration.
This long range effect demands larger 3D filters, which yields an even higher computation overhead.
To address these issues, we represent 3D data as 3D fields, and propose a field probing scheme, which
samples the input field by a set of probing filters (see Figure 2). Each probing filter is composed of a
set of probing points which determine the shape and location of the filter, and filter weights associated
with probing points. In typical CNNs, only the filter weights are trained, while the filter shape
themselves are fixed. In our framework, due to the usage of 3D field representation, both the weights
and probing point locations are trainable, making the filters highly flexible in coupling long range
effects and adapting to the sparsity of 3D data when it comes to feature extraction. The computation
amount of our field probing scheme is determined by how many probing filters we place in the 3D
space, and how many probing points are sampled per filter. Thus, the computational complexity does
not grow as a function of the input resolution. We found that a small set of field probing filters is
enough for sampling sufficient information, probably due to the sparsity characteristic of 3D data.
Intuitively, we can think of our field probing scheme as a set of sensors placed in the space to collect
informative signals for high level semantic tasks. With the long range connections between the
sensors, global overview of the underlying object can be easily established for effective inference.
Moreover, the sensors are ?smart? in the sense that they learn how to sense the space (by optimizing
the filter weights), as well as where to sense (by optimizing the probing point locations). Note that
the intelligence of the sensors is not hand-crafted, but solely derived from data. We evaluate our field
probing based neural networks (FPNN) on a classification task on ModelNet [31] dataset, and show
that they match the performance of 3DCNNs while requiring much less computation, as they are
designed and trained to respect the sparsity of 3D data.
2
Related Work
3D Shape Descriptors. 3D shape descriptors lie at the core of shape analysis and a large variety
of shape descriptors have been designed in the past few decades. 3D shapes can be converted into
2D images and represented by descriptors of the converted images [13, 4]. 3D shapes can also be
represented by their inherent statistical properties, such as distance distribution [22] and spherical
harmonic decomposition [14]. Heat kernel signatures extract shape descriptions by simulating a
heat diffusion process on 3D shapes [29, 3]. In contrast, we propose an approach for learning the
shape descriptor extraction scheme, rather than hand-crafting it.
Convolutional Neural Networks. The architecture of CNN [15] is designed to take advantage
of the 2D structure of an input image (or other 2D input such as a speech signal), and CNNs have
advanced the performance records in most image understanding tasks in computer vision [24]. An
important reason for this success is that by leveraging large image datasets (e.g., ImageNet [6]),
general purpose image descriptors can be directly learned from data, which adapt to the data better
and outperform hand-crafted features [16]. Our approach follows this paradigm of feature learning,
but is specifically designed for 3D data coming from object surface representations.
2
CNNs on Depth and 3D Data. With rapid advances in 3D sensing technology, depth has become
available as an additional information channel beyond color. Such 2.5D data can be represented as
multiple channel images, and processed by 2D CNNs [26, 10, 8]. Wu et al. [31] in a pioneering
paper proposed to extend 2D CNNs to process 3D data directly (3D ShapeNets). A similar approach
(VoxNet) was proposed in [19]. However, such approaches cannot work on high resolution 3D data, as
the computational complexity is a cubic function of the voxel grid resolution. Since CNNs for images
have been extensively studied, 3D shapes can be rendered into 2D images, and be represented by
the CNN features of the images [25, 28], which, surprisingly, outperforms any 3D CNN approaches,
in a 3D shape classification task. Recently, Qi et al. [23] presented an extensive study of these
volumetric and multi-view CNNs and refreshed the performance records. In this work, we propose a
feature learning approach that is specifically designed to take advantage of the sparsity of 3D data,
and compare against results reported in [23]. Note that our method was designed without explicit
consideration of deformable objects, which is a purely extrinsic construction. While 3D data is
represented as meshes, neural networks can benefit from intrinsic constructions [17, 18, 1, 2] to learn
object invariance to isometries, and thus require less training data for handling deformable objects.
Our method can be viewed as an efficient scheme of sparse coding [7]. The learned weights of each
probing curve can be interpreted as the entries of the coding matrix in the sparse coding framework.
Compared with conventional sparse coding, our framework is not only computationally more tractable,
but also enables an end-to-end learning system.
3 Field Probing Neural Network

3.1 Input 3D Fields

We study the 3D shape classification problem by employing a deep neural network. The input of our network is a 3D vector field built from the input shape and the output is an object category label. 3D shapes represented as meshes or point clouds can be converted into 3D distance fields. Given a mesh (or point cloud), we first convert it into a binary occupancy grid representation, where the binary occupancy value in each grid is determined by whether it intersects with any mesh surface (or contains any sample point). Then we treat the occupied cells as the zero level set of a surface, and apply a distance transform to build a 3D distance field D, which is stored in a 3D array indexed by (i, j, k), where i, j, k = 1, 2, ..., R, and R is the resolution of the distance field. We denote the distance value at (i, j, k) by $D_{(i,j,k)}$. Note that D represents distance values at discrete grid locations. The distance value at an arbitrary location d(x, y, z) can be computed by standard trilinear interpolation over D. See Figure 3 for an illustration of the 3D data representations.

Figure 3: A 3D mesh (a) or point cloud (b) can be converted into an occupancy grid (c), from which the input to our algorithm, a 3D distance field (d), is obtained via a distance transform. We further transform it to a Gaussian distance field (e) for focusing attention to the space near the surface. The fields are visualized by two crossing slices.
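As a concrete illustration of this preprocessing, the following is a minimal sketch (our own, not the authors' released code) that voxelizes a point cloud normalized to the unit cube and computes the distance field with a Euclidean distance transform; the helper names and the resolution R = 64 are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_field_from_points(points, R=64):
    """points: (n, 3) array, assumed normalized to the unit cube [0, 1]^3."""
    occupancy = np.zeros((R, R, R), dtype=bool)
    idx = np.clip((points * R).astype(int), 0, R - 1)
    occupancy[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    # distance_transform_edt computes distances to the nearest zero entry,
    # so pass the complement: occupied cells become the zero level set.
    return distance_transform_edt(~occupancy)

def trilinear(D, x, y, z):
    """Trilinear interpolation of D at a continuous location, in voxel units."""
    i = min(int(x), D.shape[0] - 2)
    j = min(int(y), D.shape[1] - 2)
    k = min(int(z), D.shape[2] - 2)
    u, v, w = x - i, y - j, z - k
    c = D[i:i + 2, j:j + 2, k:k + 2]   # the 2x2x2 neighborhood
    c = c[0] * (1 - u) + c[1] * u      # collapse the x dimension
    c = c[0] * (1 - v) + c[1] * v      # collapse the y dimension
    return c[0] * (1 - w) + c[1] * w   # collapse the z dimension
```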
Similar to 3D distance fields, other 3D fields, such as normal fields $N_x$, $N_y$, and $N_z$, can also be used for representing shapes. Note that the normal fields can be derived from the gradient of the distance field: $N_x(x, y, z) = \frac{1}{l}\frac{\partial d}{\partial x}$, $N_y(x, y, z) = \frac{1}{l}\frac{\partial d}{\partial y}$, $N_z(x, y, z) = \frac{1}{l}\frac{\partial d}{\partial z}$, where $l = |(\frac{\partial d}{\partial x}, \frac{\partial d}{\partial y}, \frac{\partial d}{\partial z})|$. Our framework can employ any set of fields as input, as long as the gradients can be computed.
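A small follow-up sketch, deriving the normal fields from the distance field gradient as in the equations above. np.gradient approximates the partial derivatives with central differences, and eps (our addition) guards against division by zero at flat regions.

```python
import numpy as np

def normal_fields(D, eps=1e-8):
    gx, gy, gz = np.gradient(D)                   # ∂d/∂x, ∂d/∂y, ∂d/∂z
    l = np.sqrt(gx**2 + gy**2 + gz**2) + eps      # gradient magnitude
    return gx / l, gy / l, gz / l                 # N_x, N_y, N_z
```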
3.2 Field Probing Layers

Figure 4: Initialization of field probing layers. For simplicity, a subset of the filters are visualized.

The basic modules of deep neural networks are layers, which gradually convert input to output in a forward pass, and get updated during a backward pass through the back-propagation [30] mechanism. The key contribution of our approach is that we replace the convolutional layers in CNNs by field probing layers, a novel component that uses field probing filters to efficiently extract features from the 3D vector field. They are composed of three layers: Sensor layer, DotProduct layer and Gaussian layer. The Sensor layer is responsible for collecting the signals
(the values in the input fields) at the probing points in the forward pass, and updating the probing
point locations in the backward pass. The DotProduct layer computes the dot product between the
probing filter weights and the signals from the Sensor layer. The Gaussian layer is an utility layer that
transforms distance field into a representation that is more friendly for numerical computation. We
introduce them in the following paragraphs, and show that they fit well for training a deep network.
Sensor Layer. The input to this layer is a 3D field V, where V(x, y, z) yields a T channel (T = 1 for a distance field and T = 3 for normal fields) vector at location (x, y, z). This layer contains C probing filters scattered in space, each with N probing points. The parameters of this layer are the locations of all probing points $\{(x_{c,n}, y_{c,n}, z_{c,n})\}$, where c indexes the filter and n indexes the probing point within each filter. This layer simply outputs the vector at the probing points, $V(x_{c,n}, y_{c,n}, z_{c,n})$. The output of this layer forms a data chunk of size C × N × T.
The gradient of this function, $\nabla V = (\frac{\partial p}{\partial x}, \frac{\partial p}{\partial y}, \frac{\partial p}{\partial z})$, can be evaluated by numerical computation, which
will be used for updating the locations of probing points in the back-propagation process. This formal
definition emphasizes why we need the input to be represented as 3D fields: the gradients computed
from the input fields are the forces to push the probing points towards more informative locations
until they converge to a local optimum.
DotProduct Layer. The input to this layer is the output of the Sensor layer, a data chunk of size C × N × T, denoted as $\{p_{c,n,t}\}$. The parameters of the DotProduct layer are the filter weights associated with probing points, i.e., there are C filters, each of length N, in T channels. We denote the set of parameters as $\{w_{c,n,t}\}$. The function at this layer computes a dot product between $\{p_{c,n,t}\}$ and $\{w_{c,n,t}\}$, and outputs
$$v_c = v(\{p_{c,i,j}\}, \{w_{c,i,j}\}) = \sum_{i=1,\dots,N}\;\sum_{j=1,\dots,T} p_{c,i,j} \cdot w_{c,i,j},$$
a C-dimensional vector, and the gradient for the backward pass is $\nabla v_c = \big(\frac{\partial v}{\partial \{p_{c,i,j}\}}, \frac{\partial v}{\partial \{w_{c,i,j}\}}\big) = (\{w_{c,i,j}\}, \{p_{c,i,j}\})$.
Typical convolution encourages weight sharing within an image patch by 'zipping' the patch into a single value for upper layers via a dot product between the patch and a 2D filter. Our DotProduct layer shares the same 'zipping' idea, which makes it easy to fully connect it: probing points are grouped into probing filters to generate output with lower dimensionality.
Another option in designing convolutional layers is to decide whether their weights should be shared
across different spatial locations. In 2D CNNs, these parameters are usually shared when processing
general images. In our case, we opt not to share the weights, as information is not evenly distributed
in 3D space, and we encourage our probing filters to individually deviate for adapting to the data.
Gaussian Layer. Samples in locations distant to the object surface are associated with large distance
values from the distance field. Directly feeding them into the DotProduct layer prevents training from converging and thus does not yield reasonable performance. To emphasize the importance of samples in the
vicinity of the object surface, we apply a Gaussian transform (inverse exponential) on the distances
so that regions approaching the zero surface have larger weights while distant regions matter less.¹
We implement this transform with a Gaussian layer. The input is the output values of the Sensor
layer. Let us assume the values are {x}; then this layer applies an element-wise Gaussian transform
$g(x) = e^{-\frac{x^2}{2\sigma^2}}$, and the gradient is $\nabla g = -\frac{x\, e^{-\frac{x^2}{2\sigma^2}}}{\sigma^2}$ for the backward pass.
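Putting the three layers together, here is a minimal numpy sketch of the forward pass for the single-channel (distance field) case. It is our illustration rather than the paper's implementation; it uses scipy's map_coordinates with order=1 as the trilinear Sensor read.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def field_probing_forward(D, points, weights, sigma):
    """
    D:       (R, R, R) distance field.
    points:  (C, N, 3) probing point locations in voxel coordinates.
    weights: (C, N) DotProduct weights for the T = 1 case.
    Returns a C-dimensional feature vector, one value per probing filter.
    """
    C, N, _ = points.shape
    coords = points.reshape(-1, 3).T                              # (3, C*N)
    sensed = map_coordinates(D, coords, order=1).reshape(C, N)    # Sensor layer
    g = np.exp(-sensed**2 / (2.0 * sigma**2))                     # Gaussian layer
    return np.sum(g * weights, axis=1)                            # DotProduct layer
```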
Complexity of Field Probing Layers. The complexity of field probing layers is O(C · N · T), where C is the number of probing filters, N is the number of probing points on each filter, and T is the number of input fields. The complexity of the convolutional layer is O(K³ · C · S³), where K is the 3D kernel size, C is the output channel number, and S is the number of the sliding locations for each dimension. In field probing layers, we typically use C = 1024, N = 8, and T = 4 (distance and normal fields), while in 3D CNNs K = 6, C = 48 and S = 12. Compared with convolutional layers, field probing layers save a majority of computation (1024 × 8 × 4 ≈ 1.83% × 6³ × 48 × 12³), as the probing filters in field probing layers are capable of learning where to 'sense', whereas convolutional layers exhaustively examine everywhere by sliding the 3D kernels.

¹ Applying a batch normalization [11] on the distances also resolves the problem. However, the Gaussian transform has two advantages: 1. it can be approximated by truncated distance fields [5], which are widely used in real time scanning and can be compactly stored by voxel hashing [21]; 2. it is more efficient to compute than batch normalization, since it is an element-wise operation.
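A quick back-of-the-envelope check of the operation counts just quoted, using the paper's typical settings; the exact percentage depends on how one counts, but the point is a saving of roughly three orders of magnitude.

```python
C, N, T = 1024, 8, 4        # field probing: filters, points per filter, input fields
K, C_conv, S = 6, 48, 12    # 3D convolution: kernel size, output channels, slide positions

probing_ops = C * N * T                 # 32,768 multiply-adds
conv_ops = K**3 * C_conv * S**3         # 17,915,904 multiply-adds
print(probing_ops, conv_ops, probing_ops / conv_ops)  # ratio is about 1.8e-3
```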
Initialization of Field Probing Layers. There are two sets of parameters: the probing point
locations and the weights associated with them. To encourage the probing points to explore as many
potential locations as possible, we initialize them to be widely distributed in the input fields. We
first divide the space into G × G × G grids and then generate P filters in each grid. Each filter
is initialized as a line segment with a random orientation, a random length in [l_low, l_high] (we use
[l_low, l_high] = [0.2, 0.8] × R by default), and a random center point within the grid it belongs to
(Figure 4, left). Note that a probing filter spans a long distance in the 3D space, so such filters capture long range
effects well. This is a property that distinguishes our design from convolutional layers, as they
have to increase the kernel size to capture long range effects, at the cost of increased complexity. The
weights of field probing filters are initialized by the Xavier scheme [9]. In Figure 4 right, weights for
distance field are visualized by probing point colors and weights for normal fields by arrows attached
to each probing point.
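The initialization just described, written out as a hedged numpy sketch; the even spacing of the N probing points along each segment and the clipping to the field bounds are our assumptions, not details stated in the paper.

```python
import numpy as np

def init_probing_filters(G=4, P=16, N=8, R=64, l_low=0.2, l_high=0.8, rng=np.random):
    """Return (G**3 * P, N, 3) probing point locations in voxel coordinates."""
    filters, cell = [], R / G
    for gx in range(G):
        for gy in range(G):
            for gz in range(G):
                for _ in range(P):
                    # random center within this grid cell
                    center = (np.array([gx, gy, gz]) + rng.rand(3)) * cell
                    # random orientation (unit vector) and random length
                    direction = rng.randn(3)
                    direction /= np.linalg.norm(direction)
                    length = rng.uniform(l_low, l_high) * R
                    # N probing points spaced evenly along the segment
                    offsets = np.linspace(-0.5, 0.5, N)[:, None] * length
                    filters.append(np.clip(center + offsets * direction, 0, R - 1))
    return np.stack(filters)
```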
FPNN Architecture and Usage. Field probing layers transform input 3D fields into an intermediate representation, which can further be processed and eventually linked to task specific loss layers (Figure 5). To further encourage long range connections, we feed the output of our field probing layers into fully connected layers. The advantage of long range connections makes it possible to stick with a small number of probing filters, while the small number of probing filters makes it possible to directly use fully connected layers.

Figure 5: FPNN architecture. Field probing layers can be used together with other inference layers to minimize task specific losses.
Object classification is widely used in computer vision as a testbed for evaluating neural network
designs, and the neural network parameters learned from this task may be transferred to other high-level understanding tasks such as object retrieval and scene parsing. Thus we choose 3D object
classification as the task for evaluating our FPNN.
4 Results and Discussions

4.1 Timing

We implemented our field probing layers in Caffe [12]. The Sensor layer is parallelized by assigning the computation on each probing point to one GPU thread, and the DotProduct layer by assigning the computation on each probing filter to one GPU thread. Figure 6 shows a run time comparison between convolutional layers and field probing layers on different input resolutions. The computation cost of our field probing layers is agnostic to input resolutions; the slight increase of the run time on higher resolutions is due to GPU memory latency introduced by the larger 3D fields. Note that the convolutional layers in [12] are based on the highly optimized cuBLAS library from NVIDIA, while our field probing layers are implemented with our naive parallelism, which is likely to be further improved.

Figure 6: Running time (ms) of convolutional layers (same settings as that in [31]) and field probing layers (C × N × T = 1024 × 8 × 4) on an Nvidia GTX TITAN with batch size 8 (see footnote 3), over grid resolutions 16, 32, 64, 128 and 227. The field probing layers stay around 2 ms across resolutions, while the convolutional layers reach roughly 235 ms at resolution 227.
4.2 Datasets and Evaluation Protocols

We use ModelNet40 [31] (12,311 models from 40 categories, training/testing split with 9,843/2,468 models⁴), the standard benchmark for the 3D object classification task, in our experiments.

³ The batch size is chosen to make sure the largest resolution data fits well in GPU memory.
⁴ The split is provided on the authors' website. In their paper, a split composed of at most 80/20 training/testing models for each category was used, which is tiny for deep learning tasks and thus prone to overfitting. Therefore, we report and compare our performance on the whole ModelNet40 dataset.

Models
in this dataset are already aligned with a canonical orientation. For 3D object recognition scenarios
in the real world, the gravity direction can often be captured by the sensor, but the horizontal 'facing'
direction of the objects is unknown. We augment ModelNet40 data by randomly rotating the shapes
horizontally. Note that this is done for both training and testing samples; thus in the testing phase, the orientation of the inputs is unknown. This allows us to assess how well the trained network performs on real world data.
4.3 Performance of Field Probing Layers

Table 1: Top-1 accuracy of FPNNs on the 3D object classification task on the ModelNet40 dataset.

           w/o FP   w/ FP    +NF
  1-FC      79.1     85.0    86.0
  4-FCs     86.6     87.5    88.4

We train our FPNN for 80,000 iterations on a 64 × 64 × 64 distance field with batch size 1024,⁵ with an SGD solver, learning rate 0.01, momentum 0.9, and weight decay 0.0005.
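For reference, the stated optimization hyper-parameters correspond to the following plain SGD-with-momentum update, a sketch of ours (the authors trained in Caffe); params, grads and velocities are matching lists of float numpy arrays.

```python
import numpy as np

lr, momentum, weight_decay = 0.01, 0.9, 0.0005

def sgd_step(params, grads, velocities):
    for p, g, v in zip(params, grads, velocities):
        v *= momentum
        v -= lr * (g + weight_decay * p)  # weight decay acts as L2 regularization
        p += v                            # in-place parameter update
```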
To study the performance of our field probing layers separately, we build an FPNN with only
one fully connected layer that converts the output of field probing layers into the representation for
softmax classification loss (1-FC setting). Batch normalization [11] and rectified-linear unit [20] are
used in-between our field probing layers and the fully connected layer for reducing internal covariate
shift and introducing non-linearity. We train the network without/with updating the field probing
layer parameters. We show their top-1 accuracy on the 3D object classification task on the ModelNet40 dataset with a single testing view in Table 1. It is clear that our field probing layers learned to sense
the input field more intelligently, with a 5.9% performance gain from 79.1% to 85.0%. Note that what is achieved by this simple network, 85.0%, is already better than the state-of-the-art 3D CNNs before [23] (83.0% in [31] and 83.8% in [19]).
We also evaluate the performance of our field probing layers in the context of a deeper FPNN,
where four fully connected layers,⁶ with in-between batch normalization, rectified-linear unit and
Dropout [27] layers, are used (4-FCs setting). As shown in Table 1, the deeper FPNN performs better,
while the gap between with and without field probing layers, 87.5% − 86.6% = 0.9%, is smaller than
that in one fully connected FPNN setting. This is not surprising, as the additional fully connected
layers, with many parameters introduced, have strong learning capability. The 0.9% performance gap
introduced by our field probing layers is a precious extra over a strong baseline.
It is important to note that in both settings (1-FC and 4-FCs), our FPNNs provide reasonable performance even without optimizing the field probing layers. This confirms that long range connections
among the sensors are beneficial.
Furthermore, we evaluate our FPNNs with multiple input fields (+NF setting). We employed not only
distance fields, but also normal fields for our probing layers, and found a consistent performance gain
for both of the aforementioned FPNNs (see Table 1). Since normal fields are derived from distance
fields, the same group of probing filters are used for both fields. Employing multiple fields in the
field probing layers with different groups of filters potentially enables even higher performance.
Robustness Against Spatial Perturbations. We evaluate our FPNNs on different levels of spatial perturbations, and summarize the results in Table 2, where R indicates random horizontal rotation, R15 indicates R plus a small random rotation (−15°, 15°) in the other two directions, T0.1 indicates random translations within range (−0.1, 0.1) of the object size in all directions, and S indicates random scaling within range (0.9, 1.1) in all directions. R45 and T0.2 share the same notations, but with even stronger rotation and translation, and are used in [23] for evaluating the performance of [31]. Note that such perturbations are done on both training and testing samples. It is clear that our FPNNs are robust against spatial perturbations.

Table 2: Performance on different perturbations.

  FPNN Setting     R      R15    R15 + T0.1 + S    R45    T0.2
  1-FC            85.0    82.4        76.2         74.1   72.2
  4-FCs           87.5    86.8        84.9         85.3   85.4
  [31]            84.7     –           –           83.0   84.8
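For concreteness, the perturbations above could be applied to a point set before voxelization roughly as follows. This is our sketch, assuming points are centered at the origin, y is the up axis, and size is the object scale.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a coordinate axis (0 = x, 1 = y, 2 = z)."""
    c, s = np.cos(angle), np.sin(angle)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R = np.eye(3)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return R

def perturb(points, size, rng=np.random):
    R = rot(1, rng.uniform(0, 2 * np.pi))            # R: random horizontal rotation
    for axis in (0, 2):                              # R15: +/- 15 degrees elsewhere
        R = rot(axis, rng.uniform(-np.pi / 12, np.pi / 12)) @ R
    pts = points @ R.T
    pts = pts * rng.uniform(0.9, 1.1, size=3)        # S: per-axis random scaling
    pts = pts + rng.uniform(-0.1, 0.1, size=3) * size  # T0.1: random translation
    return pts
```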
Advantage of Long Range Connections. We evaluate our FPNNs with different range parameters [l_low, l_high] used in initializing the probing filters, and summarize the results in Table 3.

Table 3: Performance with different filter spans.

  FPNN Setting   0.2 – 0.8   0.2 – 0.4   0.1 – 0.2
  1-FC             85.0        84.1        82.8
  4-FCs            87.5        86.8        86.9

⁵ To save disk I/O footprint, data augmentation is done on the fly. In each iteration, 256 data samples are loaded and augmented into 1024 samples for a batch.
⁶ The first three of them output a 1024 dimensional feature vector.
Note that since the output dimensionality
of our field probing layers is low enough to be directly fed into fully connected layers, distant sensor information is directly coupled by them. This is a desirable property; however, it makes it difficult to study the advantage of field probing layers in coupling long range information separately. Table 3 shows that even if the following fully connected layer has the capability to couple distant information, the long range connections introduced in our field probing layers are beneficial.
Performance on Different Field Resolutions. We evaluate our FPNNs on different input field resolutions, and summarize the results in Table 4.

Table 4: Performance on different field resolutions.

  FPNN Setting   16 × 16 × 16   32 × 32 × 32   64 × 64 × 64
  1-FC               84.2           84.5           85.0
  4-FCs              87.3           87.3           87.5
Higher resolution input fields can represent input data more accurately, and Table 4 shows that our FPNN can take advantage of the more accurate
representations. Since the computation cost of our field probing layers is agnostic to the resolution
of the data representation, higher resolution input fields are preferred for better performance, while
coupling with efficient data structures reduces the I/O footprint.
'Sharpness' of Gaussian Layer. The σ hyper-parameter in the Gaussian layer controls how 'sharp' the transform is. We select its value empirically in our experiments, and the best performance is given when we use σ ≈ 10% of the object size. Smaller σ slightly hurts the performance (≈ 1%), but has the potential of reducing the I/O footprint.
FPNN Features and Visual Similarity. Figure 7 shows a visualization of the features extracted by the FPNN trained for a classification task. Our FPNN is capable of capturing 3D geometric structures such that it can map 3D models that belong to the same categories (indicated by colors) to similar regions in the feature space. More specifically, our FPNN maps 3D models into points in a high dimensional feature space, where the distances between the points measure the similarity between their corresponding 3D models. As can be seen from Figure 7 (better viewed in zoom-in mode), the FPNN feature distances between 3D models represent their shape similarities; thus FPNN features can support shape exploration and retrieval tasks.

Figure 7: t-SNE visualization of FPNN features.
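A Figure-7-style embedding can be reproduced from extracted features with off-the-shelf t-SNE; in this sketch, features (an (n_models, d) array of FPNN feature vectors) and labels (category ids) are placeholders for data the reader would extract themselves.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_embedding(features, labels):
    xy = TSNE(n_components=2).fit_transform(features)  # 2D embedding of features
    plt.scatter(xy[:, 0], xy[:, 1], c=labels, s=5, cmap='tab20')
    plt.title('t-SNE of FPNN features')
    plt.show()
```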
4.4 Generalizability of FPNN Features

Table 5: Generalizability test of FPNN features.

  Testing Dataset   FP+FC   FC Only   FP+FC on Source, FC Only on Target
  MN40₁              93.8     90.7                  92.7
  MN40₂              89.4     85.1                  88.2

One superior characteristic of CNN features is
that features from one task or dataset can be transferred to another task or dataset. We evaluate the generalizability of FPNN features by cross validation: we train on one dataset and test on another. We first split ModelNet40 (lexicographically by the category names) into two parts, MN40₁ and MN40₂, where each of them contains 20 non-overlapping categories. Then we train two FPNNs in a 1-FC setting (updating both the field probing layers and the only fully connected layer) on these two datasets, achieving 93.8% and 89.4% accuracy, respectively (the second column in Table 5).⁷ Finally, we fine tune only the fully connected layer of these two FPNNs on the dataset that they were not trained from, and achieve 92.7% and 88.2% on MN40₁ and MN40₂, respectively (the fourth column in Table 5), which is comparable to that directly trained from the testing categories. We also trained two FPNNs in the 1-FC setting with updating only the fully connected layer, which achieve 90.7% and 85.1% accuracy on MN40₁ and MN40₂, respectively (the third column in Table 5). These two FPNNs do not perform as well as the fine-tuned FPNNs (90.7% < 92.7% on MN40₁ and 85.1% < 88.2% on MN40₂), although all of them only update the fully connected layer. These experiments show that the field probing filters learned from one dataset can be applied to another one.

⁷ The performance is higher than that on all 40 categories, since the classification task is simpler on fewer categories. The performance gap between MN40₁ and MN40₂ is presumably due to the fact that MN40₁ categories are easier to classify than MN40₂ ones.
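The 'FC only on target' protocol above amounts to fitting a new softmax head on frozen features. A minimal sketch, using scikit-learn's logistic regression to stand in for the fully connected layer:

```python
from sklearn.linear_model import LogisticRegression

def finetune_fc_only(train_feats, train_labels, test_feats, test_labels):
    clf = LogisticRegression(max_iter=1000)  # stands in for the FC + softmax head
    clf.fit(train_feats, train_labels)       # field probing layers stay frozen
    return clf.score(test_feats, test_labels)
```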
4.5 Comparison with State-of-the-art

Table 6: Comparison with state-of-the-art methods.

  Our FPNN (4-FCs+NF)   SubvolSup+BN [23]   MVCNN-MultiRes [23]
         88.4                  88.8                 93.8

We compare the performance of our FPNNs against two state-of-the-art approaches, SubvolSup+BN and MVCNN-MultiRes, both from [23], in Table 6. SubvolSup+BN is a subvolume
supervised volumetric 3D CNN, with batch normalization applied during the training, and MVCNN-MultiRes is a multi-view multi-resolution image-based 2D CNN. Note that our FPNN achieves
comparable performance to SubvolSup+BN with less computational complexity. However, both our
FPNN and SubvolSup+BN do not perform as well as MVCNN-MultiRes. It is intriguing to answer
the question why methods directly operating on 3D data cannot match or outperform multi-view 2D
CNNs. The research on closing the gap between these modalities can lead to a deeper understanding
of both 2D images and 3D shapes or even higher dimensional data.
4.6
Limitations and Future Work
FPNN on Generic Fields. Our framework provides a general means for optimizing probing locations in 3D fields where the gradients can be computed. We suspect this capability might be
particularly important for analyzing 3D data with invisible internal structures. Moreover, our approach can easily be extended into higher dimensional fields, where a careful storage design of the
input fields is important for making the I/O footprint tractable, though.
From Probing Filters to Probing Network. In our current framework, the probing filters are
independent of each other, which means they do not share locations and weights; this may result in
too many parameters for small training sets. On the other hand, fully shared weights greatly limit the
representation power of the probing filters. A trade-off might be learning a probing network, where
each probing point belongs to multiple 'paths' in the network for partially sharing parameters.
FPNN for Finer Shape Understanding. Our current approach is superior for extracting robust
global descriptions of the input data, but lacks the capability of understanding finer structures inside
the input data. This capability might be realized by strategically initializing the probing filters
hierarchically, and jointly optimizing filters at different hierarchies.
5 Conclusions
We proposed a novel design for feature extraction from 3D data, whose computation cost is agnostic
to the resolution of data representation. A significant advantage of our design is that long range
interaction can be easily coupled. As 3D data is becoming more accessible, we believe that our
method will stimulate more work on feature learning from 3D data. We open-source our code at
https://github.com/yangyanli/FPNN for encouraging future developments.
Acknowledgments
We would first like to thank all the reviewers for their valuable comments and suggestions. Yangyan
thanks Daniel Cohen-Or and Zhenhua Wang for their insightful proofreading. The work was
supported in part by NSF grants DMS-1546206 and IIS-1528025, UCB MURI grant N00014-131-0341, Chinese National 973 Program (2015CB352501), the Stanford AI Lab-Toyota Center for
Artificial Intelligence Research, the Max Planck Center for Visual Computing and Communication,
and a Google Focused Research award.
References
[1] D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. In SGP, pages 13–23. Eurographics Association, 2015.
[2] Davide Boscaini, Jonathan Masci, Emanuele Rodolà, Michael M. Bronstein, and Daniel Cremers. Anisotropic diffusion descriptors. CGF, 35(2):431–441, 2016.
[3] Alexander M. Bronstein, Michael M. Bronstein, Leonidas J. Guibas, and Maks Ovsjanikov. Shape google: Geometric words and expressions for invariant shape retrieval. ToG, 30(1):1:1–1:20, February 2011.
[4] Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On visual similarity based 3d model retrieval. CGF, 22(3):223–232, 2003.
[5] Brian Curless and Marc Levoy. A volumetric method for building complex models from range images. In SIGGRAPH '96, pages 303–312, New York, NY, USA, 1996. ACM.
[6] Jia Deng, Wei Dong, R. Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255, June 2009.
[7] David L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[8] Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin Riedmiller, and Wolfram Burgard. Multimodal deep learning for robust RGB-D object recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 681–687. IEEE, 2015.
[9] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
[10] Saurabh Gupta, Ross Girshick, Pablo Arbelaez, and Jitendra Malik. Learning rich features from RGB-D images for object detection and segmentation. In ECCV, volume 8695, pages 345–360. Springer International Publishing, 2014.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
[12] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[13] Andrew E. Johnson and Martial Hebert. Using spin images for efficient object recognition in cluttered 3d scenes. TPAMI, 21(5):433–449, May 1999.
[14] Michael Kazhdan, Thomas Funkhouser, and Szymon Rusinkiewicz. Rotation invariant spherical harmonic representation of 3d shape descriptors. In SGP '03, pages 156–164, Aire-la-Ville, Switzerland, 2003. Eurographics Association.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, Nov 1998.
[16] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015.
[17] Roee Litman and Alexander M. Bronstein. Learning spectral descriptors for deformable shape correspondence. TPAMI, 36(1):171–180, 2014.
[18] Jonathan Masci, Davide Boscaini, Michael M. Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In ICCV Workshop on 3D Representation and Recognition (3dRR), 2015.
[19] Daniel Maturana and Sebastian Scherer. VoxNet: A 3d convolutional neural network for real-time object recognition. In IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, September 2015.
[20] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted boltzmann machines. In ICML, pages 807–814, 2010.
[21] Matthias Nießner, Michael Zollhöfer, Shahram Izadi, and Marc Stamminger. Real-time 3d reconstruction at scale using voxel hashing. ToG, 32(6):169:1–169:11, November 2013.
[22] Robert Osada, Thomas Funkhouser, Bernard Chazelle, and David Dobkin. Shape distributions. ToG, 21(4):807–832, October 2002.
[23] Charles R. Qi, Hao Su, Matthias Nießner, Angela Dai, Mengyuan Yan, and Leonidas Guibas. Volumetric and multi-view cnns for object classification on 3d data. In CVPR, 2016.
[24] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. Imagenet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[25] Baoguang Shi, Song Bai, Zhichao Zhou, and Xiang Bai. DeepPano: Deep panoramic representation for 3-d shape recognition. IEEE Signal Processing Letters, 22(12):2339–2343, Dec 2015.
[26] Richard Socher, Brody Huval, Bharath Bath, Christopher D. Manning, and Andrew Y. Ng. Convolutional-recursive deep learning for 3d object classification. In NIPS, pages 665–673, 2012.
[27] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[28] Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik G. Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In ICCV, 2015.
[29] Jian Sun, Maks Ovsjanikov, and Leonidas Guibas. A concise and provably informative multi-scale signature based on heat diffusion. In SGP, pages 1383–1392. Eurographics Association, 2009.
[30] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
[31] Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, pages 1912–1920, 2015.
5,988 | 6,417 | Recovery Guarantee of Non-negative Matrix
Factorization via Alternating Updates
Yuanzhi Li, Yingyu Liang, Andrej Risteski
Computer Science Department at Princeton University
35 Olden St, Princeton, NJ 08540
{yuanzhil, yingyul, risteski}@cs.princeton.edu
Abstract
Non-negative matrix factorization is a popular tool for decomposing data into
feature and weight matrices under non-negativity constraints. It enjoys practical
success but is poorly understood theoretically. This paper proposes an algorithm
that alternates between decoding the weights and updating the features, and shows
that, assuming a generative model of the data, it provably recovers the ground-truth under fairly mild conditions. In particular, its only essential requirement on
features is linear independence. Furthermore, the algorithm uses ReLU to exploit
the non-negativity for decoding the weights, and thus can tolerate adversarial noise
that can potentially be as large as the signal, and can tolerate unbiased noise much
larger than the signal. The analysis relies on a carefully designed coupling between
two potential functions, which we believe is of independent interest.
1 Introduction
In this paper, we study the problem of non-negative matrix factorization (NMF), where given a matrix $Y \in \mathbb{R}^{m \times N}$, the goal is to find a matrix $A \in \mathbb{R}^{m \times n}$ and a non-negative matrix $X \in \mathbb{R}^{n \times N}$ such that $Y \approx AX$.^1 $A$ is often referred to as the feature matrix and $X$ as the weights. NMF has been
extensively used in extracting a parts representation of the data (e.g., [LS97, LS99, LS01]). It has
been shown that the non-negativity constraint on the coefficients forcing features to combine, but not
cancel out, can lead to much more interpretable features and improved downstream performance of
the learned features.
Despite all the practical success, however, this problem is poorly understood theoretically, with only a few provable guarantees known. Moreover, many of the theoretical algorithms are based on heavy
tools from algebraic geometry (e.g., [AGKM12]) or tensors (e.g. [AKF+ 12]), which are still not
as widely used in practice primarily because of computational feasibility issues or sensitivity to
assumptions on A and X. Some others depend on specific structure of the feature matrix, such as
separability [AGKM12] or similar properties [BGKP16].
A natural family of algorithms for NMF alternate between decoding the weights and updating the
features. More precisely, in the decoding step, the algorithm represents the data as a non-negative
combination of the current set of features; in the updating step, it updates the features using the
decoded representations. This meta-algorithm is popular in practice due to ease of implementation,
computational efficiency, and empirical quality of the recovered features. However, even less
theoretical analysis exists for such algorithms.
^1 In the usual formulation of the problem, $A$ is also assumed to be non-negative, which we will not require in this paper.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
This paper proposes an algorithm in the above framework with provable recovery guarantees. To
be specific, the data is assumed to come from a generative model $y = A^* x^* + \eta$. Here, $A^*$ is the ground-truth feature matrix, $x^*$ are the non-negative ground-truth weights generated from an unknown distribution, and $\eta$ is the noise. Our algorithm can provably recover $A^*$ under mild conditions, even
in the presence of large adversarial noise.
Overview of main results. The existing theoretical results on NMF can be roughly split into two
categories. In the first category, they make heavy structural assumptions on the feature matrix A?
such as separability ([AGM12]) or allowing running time exponential in n ( [AGKM12]). In the
second one, they impose strict distributional assumptions on x? ([AKF+ 12]), where the methods are
usually based on the method of moments and tensor decompositions and have poor tolerance to noise,
which is very important in practice.
In this paper, we present a very simple and natural alternating update algorithm that achieves the best
of both worlds. First, we have minimal assumptions on the feature matrix A? : the only essential
condition is linear independence of the features. Second, it is robust to adversarial noise $\eta$ which in some parameter regimes can potentially be of the same order as the signal $A^* x^*$, and is robust to unbiased noise potentially even larger than the signal by a factor of $O(\sqrt{n})$. The algorithm does not
require knowing the distribution of x? , and allows a fairly wide family of interesting distributions.
We get this at a rather small cost of a mild "warm start". Namely, we initialize each of the features to be "correlated" with the ground-truth features. This type of initialization is often used in practice as
well, for example in LDA-c, the most popular software for topic modeling ([lda16]).
A major feature of our algorithm is its significant robustness to noise. In the presence of adversarial noise on each entry of $y$ up to level $C_\eta$, the noise level $\|\eta\|_1$ can be of the same order as the signal $A^* x^*$. Still, our algorithm is able to output a matrix $A$ such that the final $\|A^* - A\|_1 \le O(\|\eta\|_1)$, on the order of the noise in one data point. If the noise is unbiased (i.e., $E[\eta \mid x^*] = 0$), the noise level $\|\eta\|_1$ can be $\Omega(\sqrt{n})$ times larger than the signal $A^* x^*$, while we can still guarantee $\|A^* - A\|_1 \le O(\|\eta\|_1/\sqrt{n})$; so our algorithm is not only tolerant to noise, but also has a very strong denoising effect.
Note that even in the unbiased case the noise can potentially be correlated with the ground-truth in a very complicated manner; moreover, all our results are obtained requiring only that the columns of $A^*$ are independent.
Technical contribution. The success of our algorithm crucially relies on exploiting the non-negativity
of x? by a ReLU thresholding step during the decoding procedure. Similar techniques have been
considered in prior works on matrix factorization, however to the best of our knowledge, the analysis
(e.g., [AGMM15]) requires that the decodings are correct in all the intermediate iterations, in the
sense that the supports of x? are recovered with no error. Indeed, we cannot hope for a similar
guarantee in our setting, since we consider adversarial noise that could potentially be the same order
as the signal. Our major technical contribution is a way to deal with the erroneous decoding throughout all the intermediate iterations. We achieve this by a coupling between two potential functions
that capture different aspects of the working matrix A. While analyzing iterative algorithms like
alternating minimization or gradient descent in non-convex settings is a popular topic in recent
years, the proof usually proceeds by showing that the updates are approximately performing gradient
descent on an objective with some local or hidden convex structure. Our technique diverges from the
common proof strategy, and we believe is interesting in its own right.
Organization. After reviewing related work, we define the problem in Section 3 and describe our
main algorithm in Section 4. To emphasize the key ideas, we first present the results and the proof
sketch for a simplified yet still interesting case in Section 5, and then present the results under much
more general assumptions in Section 6. The complete proof is provided in the appendix.
2 Related work
Non-negative matrix factorization relates to several different topics in machine learning.
Non-negative matrix factorization. The area of non-negative matrix factorization (NMF) has a rich
empirical history, starting with the practical algorithm of [LS97]. On the theoretical side, [AGKM12]
provides a fixed-parameter tractable algorithm for NMF, which solves algebraic equations and thus has
poor noise tolerance. [AGKM12] also studies NMF under separability assumptions about the features.
[BGKP16] studies NMF under heavy noise, but also needs assumptions related to separability, such
as the existence of dominant features. Also, their noise model is different from ours.
Topic modeling. A closely related problem to NMF is topic modeling, a common generative model
for textual data [BNJ03, Ble12]. Usually $\|x^*\|_1 = 1$, while there also exists work that assumes $x^*_i \in [0, 1]$ with independent coordinates [ZX12]. A popular heuristic in practice for learning $A^*$ is variational
inference, which can be interpreted as alternating minimization in KL divergence norm. On the
theory front, there is a sequence of works based on either spectral or combinatorial approaches,
which need certain "non-overlapping" assumptions on the topics. For example, [AGH+ 13] assume the topic-word matrix contains "anchor words": words which appear in a single topic. Most related
is the work of [AR15] who analyze a version of the variational inference updates when documents
are long. However, they require strong assumptions on both the warm start, and the amount of
"non-overlapping" of the topics in the topic-word matrix.
ICA. Our generative model for x? will assume the coordinates are independent, therefore our problem
can be viewed as a non-negative variant of ICA with high levels of noise. Results here typically are
not robust to noise, with the exception of [AGMS12] that tolerates Gaussian noise. However, to best
of our knowledge, no result in this setting is provably robust to adversarial noise.
Non-convex optimization. The framework of having a "decoding" for the samples, along with
performing an update for the model parameters has proven successful for dictionary learning as
well. The original empirical work proposing such an algorithm (in fact, it suggested that the V1
layer processes visual signals in the same manner) was due to [OF97]. Even more, similar families
of algorithms based on "decoding" and gradient-descent are believed to be neurally plausible as
mechanisms for a variety of tasks like clustering, dimension-reduction, NMF, etc ([PC15, PC14]). A
theoretical analysis came later for dictionary learning, due to [AGMM15], under the assumption that the columns of $A^*$ are incoherent. The technique is not directly applicable to our case, as we don't wish to have any assumptions on the matrix $A^*$. For instance, if $A^*$ is non-negative with columns of $\ell_1$ norm 1, incoherence effectively means that the columns of $A^*$ have very small overlap.
3 Problem definition and assumptions
Given a matrix $Y \in \mathbb{R}^{m \times N}$, the goal of non-negative matrix factorization (NMF) is to find a matrix $A \in \mathbb{R}^{m \times n}$ and a non-negative matrix $X \in \mathbb{R}^{n \times N}$, so that $Y \approx AX$. The columns of $Y$ are
called data points, those of A are features, and those of X are weights. We note that in the original
NMF, $A$ is also assumed to be non-negative, which is not required here. We also note that typically $m \ge n$, i.e., the features are a few representative components in the data space. This is different
from dictionary learning where overcompleteness is often assumed.
The problem in the worst case is NP-hard [AGKM12], so some assumptions are needed to design
provably efficient algorithms. In this paper, we consider a generative model for the data point

$y = A^* x^* + \eta$   (1)

where $A^*$ is the ground-truth feature matrix, $x^*$ is the ground-truth non-negative weight from some unknown distribution, and $\eta$ is the noise. Our focus is to recover $A^*$ given access to the data
distribution, assuming some properties of $A^*$, $x^*$, and $\eta$. To describe our assumptions, we let $[M]_i$ denote the $i$-th row of a matrix $M$, $[M]^j$ its $j$-th column, and $M_{i,j}$ its $(i,j)$-th entry. Denote its column norm, row norm, and symmetrized norm as $\|M\|_1 = \max_j \sum_i |M_{i,j}|$, $\|M\|_\infty = \max_i \sum_j |M_{i,j}|$, and $\|M\|_s = \max\{\|M\|_1, \|M\|_\infty\}$, respectively.
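As a quick sanity check on these conventions, the following NumPy snippet (ours, not from the paper) computes the three norms:

import numpy as np

def col_norm(M):   # ||M||_1: max over columns of each column's l1 mass
    return np.abs(M).sum(axis=0).max()

def row_norm(M):   # ||M||_inf: max over rows of each row's l1 mass
    return np.abs(M).sum(axis=1).max()

def sym_norm(M):   # ||M||_s = max(||M||_1, ||M||_inf)
    return max(col_norm(M), row_norm(M))

M = np.array([[1., -2.], [3., 0.5]])
print(col_norm(M), row_norm(M), sym_norm(M))  # 4.0 3.5 4.0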
We assume the following hold for parameters $C_1, c_2, C_2, \ell, C_\eta$ to be determined in our theorems.
(A1) The columns of $A^*$ are linearly independent.
(A2) For all $i \in [n]$: $x^*_i \in [0, 1]$, $E[x^*_i] \le \frac{C_1}{n}$ and $\frac{c_2}{n} \le E[(x^*_i)^2] \le \frac{C_2}{n}$, and the $x^*_i$'s are independent.
(A3) The initialization $A^{(0)} = A^*(\Sigma^{(0)} + E^{(0)}) + N^{(0)}$, where $\Sigma^{(0)}$ is diagonal, $E^{(0)}$ is off-diagonal, and $\Sigma^{(0)} \succeq (1 - \ell) I$, $\|E^{(0)}\|_s \le \ell$.
We consider two noise models.
(N1) Adversarial noise: only assume that $\max_i |\eta_i| \le C_\eta$ almost surely.
(N2) Unbiased noise: $\max_i |\eta_i| \le C_\eta$ almost surely, and $E[\eta \mid x^*] = 0$.
Remarks. We make several remarks about each of the assumptions.
(A1) is the assumption about $A^*$. It only requires the columns of $A^*$ to be linearly independent, which is very mild and needed to ensure identifiability. Otherwise, for instance, if $(A^*)^3 = \alpha_1 (A^*)^1 + \alpha_2 (A^*)^2$, it is impossible to distinguish between the case when $x^*_3 = 1$ and the case when $x^*_1 = \alpha_1$ and $x^*_2 = \alpha_2$. In particular, we do not restrict the feature matrix to be non-negative, which is more
general than the traditional NMF and is potentially useful for many applications. We also do not
make incoherence or anchor word assumptions that are typical in related work.
(A2) is the assumption on x? . First, the coordinates are non-negative and bounded by 1; this is simply
a matter of scaling. Second, the assumption on the moments requires that, roughly speaking, each
feature should appear with reasonable probability. This is expected: if the occurrences of the features
are extremely unbalanced, then it will be difficult to recover the rare ones. The third requirement, independence, is motivated by the idea that the features should be different, so that their occurrences are not correlated. Here we do not stick to a specific distribution, since the moment conditions are more
general, and highlight the essential properties our algorithm needs. Example distributions satisfying
our assumptions will be discussed later.
The warm start required by (A3) means that each feature $A^{(0)}_i$ has a large fraction of the ground-truth feature $A^*_i$ and a small fraction of the other features, plus some noise outside the span of the
ground-truth features. We emphasize that $N^{(0)}$ is the component of $A^{(0)}$ outside the column space of $A^*$, and is not the difference between $A^{(0)}$ and $A^*$. This requirement is typically achieved in practice by setting the columns of $A^{(0)}$ to reasonable "pure" data points that contain one major feature and a small fraction of some other features (e.g., [lda16, AR15]); in this initialization, it is generally believed that $N^{(0)} = 0$. But we state our theorems to allow some noise $N^{(0)}$, for robustness
in the initialization.
The adversarial noise model (N1) is very general, only imposing an upper bound on the entry-wise noise level. Thus, $\eta$ can be correlated with $x^*$ in some complicated, unknown way. (N2) additionally
requires it to be zero mean, which is commonly assumed and will be exploited by our algorithm to
tolerate larger noise.
4 Main algorithm
Algorithm 1 Purification
Input: initialization $A^{(0)}$, threshold $\alpha$, step size $\lambda$, scaling factor $r$, sample size $N$, iterations $T$
1: for $t = 0, 1, 2, \ldots, T - 1$ do
2:   Draw examples $y_1, \ldots, y_N$.
3:   (Decode) Compute $A^\dagger$, the pseudo-inverse of $A^{(t)}$ with minimum $\|A^\dagger\|_\infty$. Set $x = \phi_\alpha(A^\dagger y)$ for each example $y$.   // $\phi_\alpha$ is the ReLU activation; see (2) for the definition
4:   (Update) Update the feature matrix
       $A^{(t+1)} = (1 - \lambda)\, A^{(t)} + r\lambda\, \hat{E}\big[(y - y')(x - x')^\top\big]$,
     where $\hat{E}$ is over independent uniform $y, y'$ from $\{y_1, \ldots, y_N\}$, and $x, x'$ are their decodings.
Output: $A = A^{(T)}$
Our main algorithm is presented in Algorithm 1. It keeps a working feature matrix and operates in iterations. In each iteration, it first computes the weights for a batch of $N$ examples (decoding), and then uses the computed weights to update the feature matrix (updating).
The decoding simply multiplies the example by the pseudo-inverse of the current feature matrix and then passes it through the rectified linear unit (ReLU) $\phi_\alpha$ with offset $\alpha$. The pseudo-inverse with minimum infinity norm is used so as to maximize the robustness to noise (see the theorems). The ReLU function $\phi_\alpha$ operates elementwise on the input vector $v$; for an element $v_i$, it is defined as

$\phi_\alpha(v_i) = \max\{v_i - \alpha, 0\}$.   (2)
To get an intuition for why the decoding makes sense, suppose the current feature matrix is the ground-truth. Then $A^\dagger y = A^\dagger A^* x^* + A^\dagger \eta = x^* + A^\dagger \eta$. So we would like to use a small $A^\dagger$ and use the threshold to remove the noise term.

In the encoding step, the algorithm moves the feature matrix along the direction $\hat{E}\big[(y - y')(x - x')^\top\big]$. To see intuitively why this is a good direction, note that when the decoding is perfect and there is no noise, $E\big[(y - y')(x - x')^\top\big] = A^*$ (up to scaling), and thus the update moves towards the ground-truth. Without those ideal conditions, we need to choose a proper step size, which is tuned by the parameters $\lambda$ and $r$.
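To make the procedure concrete, here is a minimal NumPy sketch of Algorithm 1 (our illustration, not the authors' code). The pairing of examples via a random permutation, and the use of np.linalg.pinv in place of the minimum-infinity-norm pseudo-inverse, are simplifying assumptions:

import numpy as np

def relu_threshold(V, alpha):
    # phi_alpha(v_i) = max(v_i - alpha, 0), applied elementwise (Eq. (2))
    return np.maximum(V - alpha, 0.0)

def purification(A0, sample_batch, alpha, lam, r, N, T, rng):
    # A0: (m, n) warm-start features; sample_batch(N) returns an (m, N) batch.
    A = A0.copy()
    for t in range(T):
        Y = sample_batch(N)
        A_dagger = np.linalg.pinv(A)                # decode: x = phi_alpha(A^dagger y)
        X = relu_threshold(A_dagger @ Y, alpha)
        perm = rng.permutation(N)                   # pair (y, y') approximately uniformly
        dY, dX = Y - Y[:, perm], X - X[:, perm]
        update = (dY @ dX.T) / N                    # empirical E[(y - y')(x - x')^T]
        A = (1.0 - lam) * A + r * lam * update      # feature update
    return A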
5 Results for a simplified case
Our intuitions can be demonstrated in a simplified setting with (A1), (A2'), (A3), and (N1), where

(A2') the $x^*_i$'s are independent, and $x^*_i = 1$ with probability $s/n$ and $0$ otherwise, for a constant $s > 0$.

Furthermore, let $N^{(0)} = 0$. This is a special case of our general assumptions, with $C_1 = c_2 = C_2 = s$, where $s$ is the parameter in (A2'). It is still an interesting setting; as far as we know, there is no
existing guarantee of alternating type algorithms for it.
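For illustration, a toy generator for this simplified setting could look as follows; the uniform noise draw is our own choice and is just one instance of the bounded adversarial model:

import numpy as np

def simplified_batch(A_star, s, C_noise, N, rng):
    # (A2'): each x*_i is 1 with probability s/n, else 0, independently;
    # the noise is only required to be bounded entrywise by C_noise.
    m, n = A_star.shape
    X_star = (rng.random((n, N)) < s / n).astype(float)
    noise = rng.uniform(-C_noise, C_noise, size=(m, N))
    return A_star @ X_star + noise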
To present our results, we let $(A^*)^\dagger$ denote the matrix satisfying $(A^*)^\dagger A^* = I$; if there are multiple such matrices, we let it denote the one with minimum $\|(A^*)^\dagger\|_\infty$.
Theorem 1 (Simplified case, adversarial noise). There exists an absolute constant $G$ such that when Assumptions (A1), (A2'), (A3) and (N1) are satisfied with $\ell = 1/10$, $C_\eta \le \frac{Gc}{\max\{m,\, n\|(A^*)^\dagger\|_\infty\}}$ for some $0 \le c \le 1$, and $N^{(0)} = 0$, then there exist $\alpha, \lambda, r$ such that for every $0 < \epsilon, \delta < 1$ and $N = \mathrm{poly}(n, m, 1/\epsilon, 1/\delta)$ the following holds with probability at least $1 - \delta$.

After $T = O(\ln\frac{1}{\epsilon})$ iterations, Algorithm 1 outputs a solution $A = A^*(\Sigma + E) + N$ where $\Sigma \succeq (1 - \ell) I$ is diagonal, $\|E\|_1 \le \epsilon + c$ is off-diagonal, and $\|N\|_1 \le c$.
Remarks. Consequently, when $\|A^*\|_1 = 1$, we can normalize $\bar{A}_i = A_i/\|A_i\|_1$, and the normalized output $\bar{A}$ satisfies $\|\bar{A} - A^*\|_1 \le \epsilon + 2c$.
So under mild conditions and with proper parameters, our algorithm recovers the ground-truth at a geometric rate. It can achieve arbitrarily small recovery error in the noiseless setting, and achieves error up to the noise limit even with adversarial noise whose level is comparable to the signal.
The condition on $\ell$ means that a constant warm start is sufficient for our algorithm to converge, which is much better than previous work such as [AR15]. Indeed, in that work, $\ell$ needs to depend even on the dynamic range of the entries of $A^*$, which is problematic in practice.
It is shown that with large adversarial noise, the algorithm can still recover the features up to the noise limit. When $m \ge n\|(A^*)^\dagger\|_\infty$, each data point has adversarial noise with $\ell_1$ norm as large as $\|\eta\|_1 = C_\eta m = \Theta(c)$, which is of the same order as the signal $\|A^* x^*\|_1 = O(1)$. Our algorithm still works in this regime. Furthermore, the final error $\|A - A^*\|_1$ is $O(c)$, of the same order as the adversarial noise in one data point.

Note the appearance of $\|(A^*)^\dagger\|_\infty$ is not surprising. The case when the columns are the canonical unit vectors, for instance, which corresponds to $\|(A^*)^\dagger\|_\infty = 1$, is expected to be easier than the case when the columns are nearly the same, which corresponds to large $\|(A^*)^\dagger\|_\infty$.
A similar theorem holds for the unbiased noise model.
Theorem 2 (Simplified case, unbiased noise). If Assumptions (A1), (A2'), (A3) and (N2) are satisfied with $C_\eta = \frac{Gc\sqrt{n}}{\max\{m,\, n\|(A^*)^\dagger\|_\infty\}}$ and the other parameters are set as in Theorem 1, then the same guarantee holds.
Remarks. With unbiased noise, which is commonly assumed in many applications, the algorithm can tolerate a noise level $\sqrt{n}$ larger than in the adversarial case. When $m \ge n\|(A^*)^\dagger\|_\infty$, each data point has adversarial noise with $\ell_1$ norm as large as $\|\eta\|_1 = C_\eta m = \Theta(c\sqrt{n})$, which can be $\Omega(\sqrt{n})$ times larger than the signal $\|A^* x^*\|_1 = O(1)$. The algorithm can recover the ground-truth in this heavy noise regime. Furthermore, the final error $\|A - A^*\|_1$ is $O(\|\eta\|_1/\sqrt{n})$, which is only an $O(1/\sqrt{n})$ fraction of the noise in one data point. This is a very strong denoising effect, and a bit counter-intuitive. It is possible since we exploit the averaging of the noise for cancellation, and also use thresholding to remove noise spread out over the coordinates.
5.1 Analysis: intuition
A natural approach typically employed to analyze algorithms for non-convex problems is to define a
function on the intermediate solution A and the ground-truth A? measuring their distance and then
show that the function decreases at each step. However, a single potential function will not be enough
in our case, as we argue below, so we introduce a novel framework of maintaining two potential
functions which capture different aspects of the intermediate solutions.
Let us denote the intermediate solution and the update as (omitting the superscript $(t)$)

$A = A^*(\Sigma + E) + N$,   $\hat{E}\big[(y - y')(x - x')^\top\big] = A^*(\tilde{\Sigma} + \tilde{E}) + \tilde{N}$,

where $\Sigma$ and $\tilde{\Sigma}$ are diagonal, $E$ and $\tilde{E}$ are off-diagonal, and $N$ and $\tilde{N}$ are the terms outside the span of $A^*$ caused by the noise. To cleanly illustrate the intuition behind ReLU and the coupled potential functions, we focus on the noiseless case and assume that we have infinite samples.
Since $A_i = \Sigma_{i,i} A^*_i + \sum_{j \neq i} E_{j,i} A^*_j$, if the ratio between $\|E_i\|_1 = \sum_{j \neq i} |E_{j,i}|$ and $\Sigma_{i,i}$ gets smaller, then the algorithm is making progress; if the ratio is small at the end, a normalization of $A_i$ gives a good approximation of $A^*_i$. So it suffices to show that $\Sigma_{i,i}$ is always about a constant while $\|E_i\|_1$ decreases at each iteration. We will focus on $E$ and consider the update rule in more detail to argue this. After some calculation, we have
$\tilde{E} = E\big[(x^* - (x')^*)(x - x')^\top\big]$,   (3)

where $x, x'$ are the decodings of $x^*, (x')^*$ respectively:

$x = \phi_\alpha\big((\Sigma + E)^{-1} x^*\big)$,   $x' = \phi_\alpha\big((\Sigma + E)^{-1} (x')^*\big)$,   (4)

and the update is $E \leftarrow (1 - \lambda) E + r\lambda\, \tilde{E}$.
To see why the ReLU function matters, consider the case when we do not use it. Then

$\tilde{E} = E\big[(x^* - (x')^*)\big((\Sigma + E)^{-1}(x^* - (x')^*)\big)^\top\big] = E\big[(x^* - (x')^*)(x^* - (x')^*)^\top\big]\big((\Sigma + E)^{-1}\big)^\top \propto \big((\Sigma + E)^{-1}\big)^\top \approx \Sigma^{-1} - \Sigma^{-1} E \Sigma^{-1},$

where we used a Taylor expansion and the fact that $E\big[(x^* - (x')^*)(x^* - (x')^*)^\top\big]$ is a scaling of the identity. Hence, if we think of $\Sigma$ as approximately $I$ and take an appropriate $r$, the update to the matrix $E$ is approximately $E \leftarrow E - \lambda E^\top$. Since we do not have control over the signs of $E$ throughout the iterations, the problematic case is when the entries of $E^\top$ and $E$ roughly match in signs, which would lead to the entries of $E$ increasing.
Now we consider the decoding to see why ReLU is important. Ignoring the higher order terms and regarding $\Sigma = I$, we have

$x = \phi_\alpha\big((\Sigma + E)^{-1} x^*\big) \approx \phi_\alpha\big(\Sigma^{-1} x^* - \Sigma^{-1} E \Sigma^{-1} x^*\big) \approx \phi_\alpha(x^* - E x^*)$.   (5)

The problematic term is $E x^*$. These errors, when summed up, can be comparable to or even larger than the signals, and the algorithm will fail. However, since the signals are non-negative and most coordinates with errors only have small values, thresholding with ReLU properly can remove those errors while keeping a large fraction of the signals. This leads to large $\tilde{\Sigma}_{i,i}$ and small $\tilde{E}_{j,i}$'s, and then we can choose an $r$ such that the $E_{j,i}$'s keep decreasing while the $\Sigma_{i,i}$'s stay in a certain range.
To get a quantitative bound, we divide $E$ into its positive part $E^+$ and its negative part $E^-$:

$[E^+]_{i,j} = \max\{E_{i,j}, 0\}$,   $[E^-]_{i,j} = \max\{-E_{i,j}, 0\}$.   (6)
The reason to do so is the following: when $E_{i,j}$ is negative, by the Taylor expansion approximation, $\big[(\Sigma + E)^{-1} x^*\big]_i$ will tend to be more positive and will not be thresholded most of the time. Therefore, $E_{j,i}$ will turn more positive at the next iteration. On the other hand, when $E_{i,j}$ is positive, $\big[(\Sigma + E)^{-1} x^*\big]_i$ will tend to be more negative and be zeroed out by the threshold function. Therefore, $E_{j,i}$ will not become more negative at the next iteration. We will show for the positive and negative parts of $E$:

$\mathrm{positive}^{(t+1)} \le (1 - \gamma)\,\mathrm{positive}^{(t)} + \gamma\,\mathrm{negative}^{(t)}$,   $\mathrm{negative}^{(t+1)} \le (1 - \gamma)\,\mathrm{negative}^{(t)} + \epsilon\gamma\,\mathrm{positive}^{(t)}$

for a small $\epsilon \ll 1$. Due to $\epsilon$, we can couple the two parts so that a weighted average of them will decrease, which implies that $\|E\|_s$ is small at the end. This leads to our coupled potential function.²
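A minimal sketch of this coupled potential, assuming the symmetrized norm of Section 3 and a coupling weight beta in (1, 8) as in the proof:

import numpy as np

def sym_norm(M):
    # ||M||_s = max of the column norm and the row norm
    return max(np.abs(M).sum(axis=0).max(), np.abs(M).sum(axis=1).max())

def coupled_potential(E, beta):
    # split E into positive/negative parts (Eq. (6)) and couple them
    E_plus, E_minus = np.maximum(E, 0.0), np.maximum(-E, 0.0)
    return sym_norm(E_plus) + beta * sym_norm(E_minus)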
5.2 Analysis: proof sketch
Here we describe a proof sketch for the simplified case while the complete proof for the general case
is presented in the appendix. The lemmas here are direct corollaries of those in the appendix.
One iteration. We focus on one update and omit the superscript $(t)$. Recall the definitions of $E$, $\Sigma$ and $N$ in (5.1). Our goal is to derive lower and upper bounds for $\tilde{E}$, $\tilde{\Sigma}$ and $\tilde{N}$, assuming that $\Sigma_{i,i}$ falls into some range around 1 while $E$ and $N$ are small. This will allow doing induction on them.
First, we begin with the decoding. Some calculation shows that the decoding for $y = A^* x^* + \eta$ is

$x = \phi_\alpha(Z x^* + \zeta)$, where $Z = (\Sigma + E)^{-1}$ and $\zeta = -A^\dagger N Z x^* + A^\dagger \eta$.   (7)
Now we can present our key lemmas bounding $\tilde{E}$, $\tilde{\Sigma}$, and $\tilde{N}$.

Lemma 3 (Simplified bound on $\tilde{E}$, informal). (1) If $Z_{i,j} < 0$, then $\tilde{E}_{j,i} \le O\big(\frac{1}{n^2}(|Z_{i,j}| + c)\big)$. (2) If $Z_{i,j} \ge 0$, then $-O\big(\frac{c}{n^2} + \frac{c}{n}|Z_{i,j}| + \frac{1}{n^2}|Z_{i,j}|\big) \le \tilde{E}_{j,i} \le O\big(\frac{1}{n}|Z_{i,j}|\big)$.
Note that $Z \approx \Sigma^{-1} - \Sigma^{-1} E \Sigma^{-1}$, so $Z_{i,j} < 0$ corresponds roughly to $E_{i,j} > 0$. In this case, the upper bound on $|\tilde{E}_{j,i}|$ is very small and thus $|E_{j,i}|$ decreases, as described in the intuition. What is most interesting is the case when $Z_{i,j} \ge 0$ (roughly $E_{i,j} < 0$). The upper bound is much larger, corresponding to the intuition that negative $E_{i,j}$ can contribute a large positive value to $E_{j,i}$. Fortunately, the lower bounds are of much smaller absolute value, which allows us to show that a potential function that couples Case (1) and Case (2) in Lemma 3 actually decreases; see the induction below.
Lemma 4 (Simplified bound on $\tilde{\Sigma}$, informal). $\tilde{\Sigma}_{i,i} \ge \Omega\big((\Sigma_{i,i}^{-1} - \epsilon)/n\big)$.

Lemma 5 (Simplified bound on $\tilde{N}$, adversarial noise, informal). $\tilde{N}_{i,j} \le O(C_\eta/n)$.
Induction by iterations. We now show how to use the three lemmas to prove the theorem for the
adversarial noise, and that for the unbiased noise is similar.
Let $a_t := \|E^{+(t)}\|_s$ and $b_t := \|E^{-(t)}\|_s$, and choose $\lambda = \ell/6$. We begin with proving the following three claims by induction on $t$: at the beginning of iteration $t$,

(1) $(1 - \ell) I \preceq \Sigma^{(t)}$,
(2) $\|E^{(t)}\|_s \le 1/8$, and if $t > 0$, then $a_t + \beta b_t \le \big(1 - \frac{\lambda}{25}\big)(a_{t-1} + \beta b_{t-1}) + \lambda h$, for some $\beta \in (1, 8)$ and some small value $h$,
(3) $\|N^{(t)}\|_s \le c/10$.
The most interesting part is the second claim. At a high level, by Lemma 3, we can show that
$a_{t+1} \le \big(1 - \tfrac{3\lambda}{25}\big) a_t + 7\lambda\, b_t + \lambda h$,   $b_{t+1} \le \big(1 - \tfrac{24\lambda}{25}\big) b_t + \tfrac{\lambda}{100}\, a_t + \lambda h$.
² Note that since, intuitively, $E_{i,j}$ gets affected by $E_{j,i}$ after an update, if we have a row which contains negative entries, it is possible that $\|A_i - A^*_i\|_1$ increases. So we cannot simply use $\max_i \|A_i - A^*_i\|_1$ as a potential function.
Notice that the contribution of $b_t$ to $a_{t+1}$ is quite large (due to the larger upper bound in Case (2) of Lemma 3), but the other terms are much nicer, such as the small contribution of $a_t$ to $b_{t+1}$. This allows us to choose a $\beta \in (1, 8)$ so that $a_{t+1} + \beta b_{t+1}$ satisfies the desired recurrence in the second claim. In other words, $a_{t+1} + \beta b_{t+1}$ is our potential function, which decreases at each iteration up to the level $h$. The other claims can also be proved by the corresponding lemmas. The theorem then follows from the induction claims.
6 More general results
More general weight distributions. Our argument holds under more general assumptions on $x^*$.
Theorem 6 (Adversarial noise). There exists an absolute constant $G$ such that when Assumptions (A0)-(A3) and (N1) are satisfied with $\ell = 1/10$, $C_2 \le 2c_2$, $C_1^3 \le G c_2^2 n$, $C_\eta \le \min\Big\{\frac{c_2^2 Gc}{C_2^2 m},\ \frac{c_2^4 Gc}{C_1^5 n \|(A^*)^\dagger\|_\infty}\Big\}$ for some $0 \le c \le 1$, and $\|N^{(0)}\|_\infty \le \frac{c_2^2 Gc}{C_1^3 \|(A^*)^\dagger\|_\infty}$, then there exist $\alpha, \lambda, r$ such that for every $0 < \epsilon, \delta < 1$ and $N = \mathrm{poly}(n, m, 1/\epsilon, 1/\delta)$, with probability at least $1 - \delta$ the following holds.

After $T = O(\ln\frac{1}{\epsilon})$ iterations, Algorithm 1 outputs a solution $A = A^*(\Sigma + E) + N$ where $\Sigma \succeq (1 - \ell) I$ is diagonal, $\|E\|_1 \le \epsilon + c/2$ is off-diagonal, and $\|N\|_1 \le c/2$.
Theorem 7 (Unbiased noise). If Assumptions (A0)-(A3) and (N2) are satisfied with $C_\eta = \frac{c_2^2 Gc\sqrt{n}}{C_1 \max\{m,\, n\|(A^*)^\dagger\|_\infty\}}$ and the other parameters are set as in Theorem 6, then the same guarantee holds.
The conditions on C1 , c2 , C2 intuitively mean that each feature needs to appear with reasonable
probability. $C_2 \le 2c_2$ means that their proportions are reasonably balanced. This may be a mild restriction for some applications, and additionally we propose a pre-processing step that can relax this in the next subsection. The conditions allow a rather general family of distributions, so we point out an important special case to provide a more concrete sense of the parameters. For example, for the uniform independent distribution considered in the simplified case, we can actually allow $s$ to be much larger than a constant; our algorithm just requires $s \le Gn$ for a fixed constant $G$. So it works for uniform sparse distributions even when the sparsity is linear, which is an order of magnitude larger than in the dictionary learning regime. Furthermore, the distributions of the $x^*_i$ can be very different, since we only require $C_1^3 = O(c_2^2 n)$. Moreover, all of this can be handled without specific structural assumptions on $A^*$.
More general proportions. A mild restriction in Theorems 6 and 7 is that $C_2 \le 2c_2$, that is, $\max_{i \in [n]} E[(x^*_i)^2] \le 2 \min_{i \in [n]} E[(x^*_i)^2]$. To satisfy this, we propose a preprocessing algorithm for balancing the $E[(x^*_i)^2]$. The idea is quite simple: instead of solving $Y \approx A^* X$, we could also solve $Y \approx [A^* D][D^{-1} X]$ for a positive diagonal matrix $D$, where the $E[(x^*_i)^2]/D_{i,i}^2$ are within a factor of 2 of each other. We show in the appendix that this can be done under assumptions as in the above theorems, and additionally $\Sigma^{(0)} \preceq (1 + \ell)I$ and $E^{(0)} \ge 0$ entry-wise. After balancing, one can use Algorithm 1 on the new ground-truth matrix $[A^* D]$ to get the final result.
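As a sketch, one way to implement such a balancing step, assuming estimates of the second moments are available (the exact choice of D in the appendix may differ):

import numpy as np

def balance(A, second_moments):
    # rescale column i of A by D_ii so that E[(x*_i)^2] / D_ii^2 become equal;
    # the data then fit Y ~ (A D)(D^{-1} X) with balanced weights.
    D = np.sqrt(second_moments)
    return A * D[None, :], D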
7 Conclusion
A simple and natural algorithm that alternates between decoding and updating is proposed for
non-negative matrix factorization and theoretical guarantees are provided. The algorithm provably
recovers a feature matrix close to the ground-truth and is robust to noise. Our analysis provides
insights on the effect of the ReLU units in the presence of the non-negativity constraints, and the
resulting interesting dynamics of the convergence.
Acknowledgements
This work was supported in part by NSF grants CCF-1527371, DMS-1317308, Simons Investigator
Award, Simons Collaboration Grant, and ONR-N00014-16-1-2329.
References
[AGH+ 13] S. Arora, R. Ge, Y. Halpern, D. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A
practical algorithm for topic modeling with provable guarantees. In ICML, 2013.
[AGKM12] Sanjeev Arora, Rong Ge, Ravindran Kannan, and Ankur Moitra. Computing a nonnegative matrix factorization -- provably. In STOC, pages 145-162. ACM, 2012.
[AGM12] S. Arora, R. Ge, and A. Moitra. Learning topic models -- going beyond SVD. In FOCS, 2012.
[AGMM15] S. Arora, R. Ge, T. Ma, and A. Moitra. Simple, efficient, and neural algorithms for
sparse coding. In COLT, 2015.
[AGMS12] Sanjeev Arora, Rong Ge, Ankur Moitra, and Sushant Sachdeva. Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders. In NIPS, pages 2375-2383, 2012.
[AKF+ 12] A. Anandkumar, S. Kakade, D. Foster, Y. Liu, and D. Hsu. Two svds suffice: Spectral
decompositions for probabilistic topic modeling and latent dirichlet allocation. Technical
report, 2012.
[AR15] Pranjal Awasthi and Andrej Risteski. On some provably correct cases of variational inference for topic models. In NIPS, pages 2089-2097, 2015.
[BGKP16] Chiranjib Bhattacharyya, Navin Goyal, Ravindran Kannan, and Jagdeep Pani. Non-negative matrix factorization under heavy noise. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[Ble12] David M Blei. Probabilistic topic models. Communications of the ACM, 2012.
[BNJ03] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[lda16] Lda-c software. https://github.com/blei-lab/lda-c/blob/master/readme.
txt, 2016. Accessed: 2016-05-19.
[LS97] Daniel D. Lee and H. Sebastian Seung. Unsupervised learning by convex and conic coding. NIPS, pages 515-521, 1997.
[LS99] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788-791, 1999.
[LS01] Daniel D. Lee and H. Sebastian Seung. Algorithms for non-negative matrix factorization. In NIPS, pages 556-562, 2001.
[OF97] Bruno A. Olshausen and David J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311-3325, 1997.
[PC14] Cengiz Pehlevan and Dmitri B. Chklovskii. A Hebbian/anti-Hebbian network derived from online non-negative matrix factorization can cluster and discover sparse features. In Asilomar Conference on Signals, Systems and Computers, pages 769-775. IEEE, 2014.
[PC15] Cengiz Pehlevan and Dmitri Chklovskii. A normative theory of adaptive dimensionality reduction in neural networks. In NIPS, pages 2260-2268, 2015.
[ZX12] Jun Zhu and Eric P Xing. Sparse topical coding. arXiv preprint arXiv:1202.3778, 2012.
5,989 | 6,418 | Interaction Networks for Learning about Objects,
Relations and Physics
Anonymous Author(s)
Affiliation
Address
email
Abstract
Reasoning about objects, relations, and physics is central to human intelligence, and
a key goal of artificial intelligence. Here we introduce the interaction network, a
model which can reason about how objects in complex systems interact, supporting
dynamical predictions, as well as inferences about the abstract properties of the
system. Our model takes graphs as input, performs object- and relation-centric
reasoning in a way that is analogous to a simulation, and is implemented using
deep neural networks. We evaluate its ability to reason about several challenging
physical domains: n-body problems, rigid-body collision, and non-rigid dynamics.
Our results show it can be trained to accurately simulate the physical trajectories of
dozens of objects over thousands of time steps, estimate abstract quantities such
as energy, and generalize automatically to systems with different numbers and
configurations of objects and relations. Our interaction network implementation
is the first general-purpose, learnable physics engine, and a powerful general
framework for reasoning about object and relations in a wide variety of complex
real-world domains.
1 Introduction
Representing and reasoning about objects, relations and physics is a "core" domain of human common sense knowledge [25], and among the most basic and important aspects of intelligence [27, 15]. Many
everyday problems, such as predicting what will happen next in physical environments or inferring
underlying properties of complex scenes, are challenging because their elements can be composed
in combinatorially many possible arrangements. People can nevertheless solve such problems by
decomposing the scenario into distinct objects and relations, and reasoning about the consequences
of their interactions and dynamics. Here we introduce the interaction network, a model that can perform an analogous form of reasoning about objects and relations in complex systems.
Interaction networks combine three powerful approaches: structured models, simulation, and deep
learning. Structured models [7] can exploit rich, explicit knowledge of relations among objects,
independent of the objects themselves, which supports general-purpose reasoning across diverse
contexts. Simulation is an effective method for approximating dynamical systems, predicting how the
elements in a complex system are influenced by interactions with one another, and by the dynamics
of the system. Deep learning [23, 16] couples generic architectures with efficient optimization
algorithms to provide highly scalable learning and inference in challenging real-world settings.
Interaction networks explicitly separate how they reason about relations from how they reason about
objects, assigning each task to distinct models which are: fundamentally object- and relation-centric;
and independent of the observation modality and task specification (see Model section 2 below
and Fig. 1a). This lets interaction networks automatically generalize their learning across variable
numbers of arbitrarily ordered objects and relations, and also recompose their knowledge of entities
Submitted to 30th Conference on Neural Information Processing Systems (NIPS 2016). Do not distribute.
[Figure 1 schematic: panel a shows objects and relations feeding "Relational reasoning" (compute interaction) whose effects feed "Object reasoning" (apply object dynamics), yielding predictions and inferences; panel b shows the corresponding graph computation. Caption below.]
Figure 1: Schematic of an interaction network. a. For physical reasoning, the model takes objects and relations as input, reasons about their interactions, and applies the effects and physical dynamics to predict new states. b. For more complex systems, the model takes as input a graph that represents a system of objects, $o_j$, and relations, $\langle i, j, r_k \rangle_k$, instantiates the pairwise interaction terms, $b_k$, and computes their effects, $e_k$, via a relational model, $f_R(\cdot)$. The $e_k$ are then aggregated and combined with the $o_j$ and external effects, $x_j$, to generate input (as $c_j$) for an object model, $f_O(\cdot)$, which predicts how the interactions and dynamics influence the objects, $p$.
and interactions in novel and combinatorially many ways. They take relations as explicit input,
allowing them to selectively process different potential interactions for different input data, rather
than being forced to consider every possible interaction or those imposed by a fixed architecture.
We evaluate interaction networks by testing their ability to make predictions and inferences about various physical systems, including n-body problems, rigid-body collision, and non-rigid dynamics.
Our interaction networks learn to capture the complex interactions that can be used to predict future
states and abstract physical properties, such as energy. We show that they can roll out thousands of
realistic future state predictions, even when trained only on single-step predictions. We also explore
how they generalize to novel systems with different numbers and configurations of elements. Though
they are not restricted to physical reasoning, the interaction networks used here represent the first
general-purpose learnable physics engine, and even have the potential to learn novel physical systems
for which no physics engines currently exist.
Related work Our model draws inspiration from previous work that reasons about graphs and relations using neural networks. The "graph neural network" [22] is a framework that shares learning across nodes and edges, the "recursive autoencoder" [24] adapts its processing architecture to exploit an input parse tree, the "neural programmer-interpreter" [21] is a composable neural network that mimics the execution trace of a program, and the "spatial transformer" [11] learns to dynamically modify network connectivity to capture certain types of interactions. Others have explored deep learning of logical and arithmetic relations [26], and relations suitable for visual question-answering [1].
The behavior of our model is similar in spirit to a physical simulation engine [2], which generates
sequences of states by repeatedly applying rules that approximate the effects of physical interactions
and dynamics on objects over time. The interaction rules are relation-centric, operating on two or
more objects that are interacting, and the dynamics rules are object-centric, operating on individual
objects and the aggregated effects of the interactions they participate in.
Previous AI work on physical reasoning explored commonsense knowledge, qualitative representations, and simulation techniques for approximating physical prediction and inference [28, 9, 6]. The "NeuroAnimator" [8] was perhaps the first quantitative approach to learning physical dynamics, by training neural networks to predict and control the state of articulated bodies. Ladický et al. [14]
recently used regression forests to learn fluid dynamics. Recent advances in convolutional neural
networks (CNNs) have led to efforts that learn to predict coarse-grained physical dynamics from
images [19, 17, 18]. Notably, Fragkiadaki et al. [5] used CNNs to predict and control a moving
ball from an image centered at its coordinates. Mottaghi et al. [20] trained CNNs to predict the 3D
trajectory of an object after an external impulse is applied. Wu et al. [29] used CNNs to parse objects
from images, which were then input to a physics engine that supported prediction and inference.
2 Model
Definition To describe our model, we use physical reasoning as an example (Fig. 1a), and build from a simple model to the full interaction network (abbreviated IN). To predict the dynamics of a single object, one might use an object-centric function, $f_O$, which inputs the object's state, $o_t$, at time $t$, and outputs a future state, $o_{t+1}$. If two or more objects are governed by the same dynamics, $f_O$ could be applied to each, independently, to predict their respective future states. But if the objects interact with one another, then $f_O$ is insufficient because it does not capture their relationship. Assuming two objects and one directed relationship, e.g., a fixed object attached by a spring to a freely moving mass, the first (the sender, $o_1$) influences the second (the receiver, $o_2$) via their interaction. The effect of this interaction, $e_{t+1}$, can be predicted by a relation-centric function, $f_R$. The $f_R$ takes as input $o_1$, $o_2$, as well as attributes of their relationship, $r$, e.g., the spring constant. The $f_O$ is modified so it can input both $e_{t+1}$ and the receiver's current state, $o_{2,t}$, enabling the interaction to influence its future state, $o_{2,t+1}$:

$e_{t+1} = f_R(o_{1,t}, o_{2,t}, r)$,   $o_{2,t+1} = f_O(o_{2,t}, e_{t+1})$.
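As a toy illustration of this two-object case, the sketch below hand-codes f_R and f_O for a 1D spring; in an actual IN both functions are learned networks, and all names and constants here are ours:

def f_R(o1, o2, r):
    # state = (position, velocity); r = (spring constant, rest length)
    k, rest = r
    dx = o1[0] - o2[0]
    return k * (dx - rest * (1 if dx >= 0 else -1))   # 1D spring force on o2

def f_O(o2, e, dt=0.01):
    pos, vel = o2
    vel = vel + dt * e            # unit mass: acceleration equals force
    return (pos + dt * vel, vel)  # semi-implicit Euler step

o1, o2 = (0.0, 0.0), (1.5, 0.0)   # fixed anchor and free mass
e = f_R(o1, o2, (10.0, 1.0))
o2_next = f_O(o2, e)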
The above formulation can be expanded to larger and more complex systems by representing them as a graph, $G = \langle O, R \rangle$, where the nodes, $O$, correspond to the objects, and the edges, $R$, to the relations (see Fig. 1b). We assume an attributed, directed multigraph because the relations have attributes, and there can be multiple distinct relations between two objects (e.g., rigid and magnetic interactions). For a system with $N_O$ objects and $N_R$ relations, the inputs to the IN are,
$O = \{o_j\}_{j=1\ldots N_O}$,  $R = \{\langle i, j, r_k \rangle_k\}_{k=1\ldots N_R}$ where $i \neq j$ and $1 \le i, j \le N_O$,  $X = \{x_j\}_{j=1\ldots N_O}$

The $O$ represents the states of each object. The triplet, $\langle i, j, r_k \rangle_k$, represents the $k$-th relation in the system, from sender, $o_i$, to receiver, $o_j$, with relation attribute, $r_k$. The $X$ represents external effects, such as active control inputs or gravitational acceleration, which we define as not being part of the system, and which are applied to each object separately.

The basic IN is defined as,

$\mathrm{IN}(G) = \Phi_O\big(a(G, X, \Phi_R(m(G)))\big)$   (1)

$m(G) = B = \{b_k\}_{k=1\ldots N_R}$    $f_R(b_k) = e_k$    $\Phi_R(B) = E = \{e_k\}_{k=1\ldots N_R}$
$a(G, X, E) = C = \{c_j\}_{j=1\ldots N_O}$    $f_O(c_j) = p_j$    $\Phi_O(C) = P = \{p_j\}_{j=1\ldots N_O}$   (2)
The marshalling function, $m$, rearranges the objects and relations into interaction terms, $b_k = \langle o_i, o_j, r_k \rangle \in B$, one per relation, which correspond to each interaction's receiver, sender, and relation attributes. The relational model, $\Phi_R$, predicts the effect of each interaction, $e_k \in E$, by applying $f_R$ to each $b_k$. The aggregation function, $a$, collects all effects, $e_k \in E$, that apply to each receiver object, merges them, and combines them with $O$ and $X$ to form a set of object model inputs, $c_j \in C$, one per object. The object model, $\Phi_O$, predicts how the interactions and dynamics influence the objects by applying $f_O$ to each $c_j$, and returning the results, $p_j \in P$. This basic IN can predict the evolution of states in a dynamical system; for physical simulation, $P$ may equal the future states of the objects, $O_{t+1}$.
The IN can also be augmented with an additional component to make abstract inferences about the system. The $p_j \in P$, rather than serving as output, can be combined by another aggregation function, $g$, and input to an abstraction model, $\Phi_A$, which returns a single output, $q$, for the whole system. We explore this variant in our final experiments that use the IN to predict potential energy.
An IN applies the same fR and fO to every bk and cj , respectively, which makes their relational and
object reasoning able to handle variable numbers of arbitrarily ordered objects and relations. But
one additional constraint must be satisfied to maintain this: the a function must be commutative and
associative over the objects and relations. Using summation within a to merge the elements of E into
C satisfies this, but division would not.
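A quick numerical check of this requirement, using the matrix form of the aggregation (E Rr^T, described in the Implementation below): summing effects per receiver is invariant to the ordering of the relations, whereas an order-sensitive merge would not be.

import numpy as np

E = np.array([[1.0, 2.0, 3.0]])             # three effects, D_E = 1
Rr = np.array([[1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]])            # receivers: obj 0, obj 1, obj 0
perm = [2, 0, 1]                            # reorder the relations
assert np.allclose(E @ Rr.T, E[:, perm] @ Rr[:, perm].T)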
Here we focus on binary relations, which means there is one interaction term per relation, but another
option is to have the interactions correspond to n-th order relations by combining n senders in each bk .
The interactions could even have variable order, where each bk includes all sender objects that interact
with a receiver, but would require an $f_R$ that can handle variable-length inputs. These possibilities are
beyond the scope of this work, but are interesting future directions.
Implementation The general definition of the IN in the previous section is agnostic to the choice
of functions and algorithms, but we now outline a learnable implementation capable of reasoning
about complex systems with nonlinear relations and dynamics. We use standard deep neural network
building blocks, multilayer perceptrons (MLP), matrix operations, etc., which can be trained efficiently
from data using gradient-based optimization, such as stochastic gradient descent.
We define $O$ as a $D_S \times N_O$ matrix, whose columns correspond to the objects' $D_S$-length state vectors. The relations are a triplet, $R = \langle R_r, R_s, R_a \rangle$, where $R_r$ and $R_s$ are $N_O \times N_R$ binary matrices which index the receiver and sender objects, respectively, and $R_a$ is a $D_R \times N_R$ matrix whose $D_R$-length columns represent the $N_R$ relations' attributes. The $j$-th column of $R_r$ is a one-hot vector which indicates the receiver object's index; $R_s$ indicates the sender similarly. For the graph in Fig. 1b, $R_r = \begin{bmatrix} 0 & 0 \\ 1 & 1 \\ 0 & 0 \end{bmatrix}$ and $R_s = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}$. The $X$ is a $D_X \times N_O$ matrix, whose columns are $D_X$-length vectors that represent the external effect applied to each of the $N_O$ objects.
The marshalling function, $m$, computes the matrix products, $O R_r$ and $O R_s$, and concatenates them with $R_a$: $m(G) = [O R_r;\, O R_s;\, R_a] = B$.

The resulting $B$ is a $(2D_S + D_R) \times N_R$ matrix, whose columns represent the interaction terms, $b_k$, for the $N_R$ relations (we denote vertical and horizontal matrix concatenation with a semicolon and comma, respectively). The way $m$ constructs interaction terms can be modified, as described in our Experiments section (3).
The $B$ is input to $\Phi_R$, which applies $f_R$, an MLP, to each column. The output of $f_R$ is a $D_E$-length vector, $e_k$, a distributed representation of the effects. The $\Phi_R$ concatenates the $N_R$ effects to form the $D_E \times N_R$ effect matrix, $E$.

The $G$, $X$, and $E$ are input to $a$, which computes the $D_E \times N_O$ matrix product, $\bar{E} = E R_r^\top$, whose $j$-th column is equivalent to the elementwise sum across all $e_k$ whose corresponding relation has receiver object $j$. The $\bar{E}$ is concatenated with $O$ and $X$: $a(G, X, E) = [O;\, X;\, \bar{E}] = C$.

The resulting $C$ is a $(D_S + D_X + D_E) \times N_O$ matrix, whose $N_O$ columns represent the object states, external effects, and per-object aggregate interaction effects.

The $C$ is input to $\Phi_O$, which applies $f_O$, another MLP, to each of the $N_O$ columns. The output of $f_O$ is a $D_P$-length vector, $p_j$, and $\Phi_O$ concatenates them to form the output matrix, $P$.
To infer abstract properties of a system, an additional $\Phi_A$ is appended and takes $P$ as input. The $g$ aggregation function performs an elementwise sum across the columns of $P$ to return a $D_P$-length vector, $\bar{P}$. The $\bar{P}$ is input to $\Phi_A$, another MLP, which returns a $D_A$-length vector, $q$, that represents an abstract, global property of the system.
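A minimal sketch of this abstract-inference head, with a linear map standing in for the phi_A MLP (the names are ours):

import numpy as np

def abstract_inference(P, W_A):
    # g: elementwise sum across the N_O columns of P; then the phi_A head
    P_bar = P.sum(axis=1)
    return W_A @ P_bar            # e.g., a D_A-dimensional energy estimate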
Training an IN requires optimizing an objective function over the learnable parameters of $\Phi_R$ and $\Phi_O$. Note that $m$ and $a$ involve matrix operations that do not contain learnable parameters.
Because $\Phi_R$ and $\Phi_O$ are shared across all relations and objects, respectively, training them is statistically efficient. This is similar to CNNs, which are very efficient due to their weight-sharing scheme. A CNN treats a local neighborhood of pixels as related, interacting entities: each pixel is effectively a receiver object and its neighboring pixels are senders. The convolution operator is analogous to $\Phi_R$, where $f_R$ is the local linear/nonlinear kernel applied to each neighborhood. Skip connections, recently popularized by residual networks, are loosely analogous to how the IN inputs $O$ to both $\Phi_R$ and $\Phi_O$, though in CNNs relation- and object-centric reasoning are not delineated. But because
CNNs exploit local interactions in a fixed way which is well-suited to the specific topology of images,
capturing longer-range dependencies requires either broad, insensitive convolution kernels, or deep
stacks of layers, in order to implement sufficiently large receptive fields. The IN avoids this restriction
by being able to process arbitrary neighborhoods that are explicitly specified by the R input.
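To tie the matrix operations above together, here is a minimal NumPy sketch of one IN forward pass (ours, not the authors' code); the tiny random MLPs stand in for the trained f_R and f_O, and all sizes are illustrative:

import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # tiny random MLP (weights fixed at init) standing in for f_R / f_O;
    # a real implementation would train these parameters
    Ws = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.maximum(W @ x, 0.0)
        return Ws[-1] @ x
    return f

def interaction_network(O, Rr, Rs, Ra, X, f_R, f_O):
    B = np.vstack([O @ Rr, O @ Rs, Ra])   # m(G): one interaction term per relation
    E = f_R(B)                            # Phi_R: effect of each interaction
    E_bar = E @ Rr.T                      # aggregate effects per receiver object
    C = np.vstack([O, X, E_bar])          # a(G, X, E)
    return f_O(C)                         # Phi_O: per-object predictions P

# illustrative sizes: two objects, one relation
D_S, D_R, D_X, D_E, D_P, N_O, N_R = 4, 1, 1, 8, 4, 2, 1
f_R = mlp([2 * D_S + D_R, 16, D_E])
f_O = mlp([D_S + D_X + D_E, 16, D_P])
O = rng.normal(size=(D_S, N_O)); X = np.zeros((D_X, N_O))
Rr = np.array([[0.], [1.]]); Rs = np.array([[1.], [0.]])
Ra = np.ones((D_R, N_R))
P = interaction_network(O, Rr, Rs, Ra, X, f_R, f_O)   # shape (D_P, N_O)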
3 Experiments
Physical reasoning tasks  Our experiments explored two types of physical reasoning tasks: predicting future states of a system, and estimating their abstract properties, specifically potential energy.
We evaluated the IN's ability to learn to make these judgments in three complex physical domains:
n-body systems; balls bouncing in a box; and strings composed of springs that collide with rigid
objects. We simulated the 2D trajectories of the elements of these systems with a physics engine, and
recorded their sequences of states. See the Supplementary Material for full details.
In the n-body domain, such as solar systems, all n bodies exert distance- and mass-dependent
gravitational forces on each other, so there were n(n − 1) relations input to our model. Across
simulations, the objects' masses varied, while all other fixed attributes were held constant. The
training scenes always included 6 bodies, and for testing we used 3, 6, and 12 bodies. In half of
the systems, bodies were initialized with velocities that would cause stable orbits, if not for the
interactions with other objects; the other half had random velocities.
In the bouncing balls domain, moving balls could collide with each other and with static walls.
The walls were represented as objects whose shape attribute represented a rectangle, and whose
inverse-mass was 0. The relations input to the model were between the n objects (which included the
walls), for n(n − 1) relations. Collisions are more difficult to simulate than gravitational forces, and
the data distribution was much more challenging: each ball participated in a collision on less than 1%
of the steps, following straight-line motion at all other times. The model thus had to learn that despite
there being a rigid relation between two objects, they only had meaningful collision interactions when
they were in contact. We also varied more of the object attributes (shape, scale, and mass, as before),
as well as the coefficient of restitution, which was a relation attribute. Training scenes contained 6
balls inside a box with 4 variably sized walls, and test scenes contained either 3, 6, or 9 balls.
The string domain used two types of relations (indicated in rk), relation structures that were more
sparse and specific than all-to-all, as well as variable external effects. Each scene contained a string,
comprised of masses connected by springs, and a static, rigid circle positioned below the string. The
n masses had spring relations with their immediate neighbors (2(n − 1)), and all masses had rigid
relations with the rigid object (2n). Gravitational acceleration, with a magnitude that was varied
across simulation runs, was applied so that the string always fell, usually colliding with the static
object. The gravitational acceleration was an external input (not to be confused with the gravitational
attraction relations in the n-body experiments). Each training scene contained a string with 15 point
masses, and test scenes contained either 5, 15, or 30 mass strings. In training, one of the point masses
at the end of the string, chosen at random, was always held static, as if pinned to the wall, while the
other masses were free to move. In the test conditions, we also included strings that had both ends
pinned, and no ends pinned, to evaluate generalization. A sketch of how these relation indices can be
assembled appears below.
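As referenced above, here is a sketch (under our own index conventions, which are not specified in the text) of how the one-hot Rr and Rs matrices could be assembled for the fully connected n-body case and for the string case.

import numpy as np

def one_hot_relations(n_objects, pairs):
    # pairs: list of (receiver, sender) index tuples, one per relation.
    Rr = np.zeros((n_objects, len(pairs)))
    Rs = np.zeros((n_objects, len(pairs)))
    for j, (r, s) in enumerate(pairs):
        Rr[r, j] = 1.0
        Rs[s, j] = 1.0
    return Rr, Rs

n = 6
# n-body: every ordered pair of bodies interacts, giving n(n - 1) relations.
nbody_pairs = [(i, j) for i in range(n) for j in range(n) if i != j]

# String with n masses plus the static circle as object index n (n + 1 objects total):
# 2(n - 1) spring relations between neighbors and 2n rigid relations with the circle.
string_pairs = ([(i, i + 1) for i in range(n - 1)] + [(i + 1, i) for i in range(n - 1)]
                + [(i, n) for i in range(n)] + [(n, i) for i in range(n)])
Rr, Rs = one_hot_relations(n + 1, string_pairs)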
Our model takes as input the state of each system, G, decomposed into the objects, O (e.g., n-body
objects, balls, walls, point masses that represented string elements), and their physical relations, R
(e.g., gravitational attraction, collisions, springs), as well as the external effects, X (e.g., gravitational
acceleration). Each object state, oj, could be further divided into a dynamic state component
(e.g., position and velocity) and a static attribute component (e.g., mass, size, shape). The relation
attributes, Ra, represented quantities such as the coefficient of restitution and spring constant. The
input represented the system at the current time. The prediction experiment's target outputs were the
velocities of the objects on the subsequent time step, and the energy estimation experiment's targets
were the potential energies of the system on the current time step. We also generated multi-step
rollouts for the prediction experiments (Fig. 2), to assess the model's effectiveness at creating visually
realistic simulations. The output velocity, vt, on time step t became the input velocity on t + 1, and
the position at t + 1 was updated by the predicted velocity at t.
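The rollout procedure just described amounts to a simple loop; the sketch below assumes a trained model wrapped as predict_velocity and a unit time step, both of which are our simplifications.

import numpy as np

def rollout(pos, vel, predict_velocity, n_steps, dt=1.0):
    # pos, vel: (2, No) arrays of 2D positions and velocities.
    trajectory = [pos.copy()]
    for _ in range(n_steps):
        vel = predict_velocity(pos, vel)  # predicted v_t becomes the next input velocity
        pos = pos + dt * vel              # position updated by the predicted velocity
        trajectory.append(pos.copy())
    return np.stack(trajectory)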
Data  Each of the training, validation, and test data sets was generated by simulating 2000 scenes
over 1000 time steps, and randomly sampling 1 million, 200k, and 200k one-step input/target pairs,
respectively. The model was trained for 2000 epochs, randomly shuffling the data indices between
each. We used mini-batches of 100, and balanced their data distributions so the targets had similar
per-element statistics. The performance reported in the Results was measured on held-out test data.
We explored adding a small amount of Gaussian noise to 20% of the data's input positions and
velocities during the initial phase of training, which was reduced to 0% from epochs 50 to 250. The
noise std. dev. was 0.05× the std. dev. of each element's values across the dataset. It allowed the
model to experience physically impossible states which could not have been generated by the physics
engine, and learn to project them back to nearby, possible states. Our error measure did not reflect
clear differences with or without noise, but rollouts from models trained with noise were slightly
more visually realistic, and static objects were less subject to drift over many steps.
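A sketch of this noise scheme is below. The text only fixes the endpoints (20% of samples before epoch 50, 0% after epoch 250); the linear annealing in between is our assumption.

import numpy as np

def noise_fraction(epoch, start=50, end=250, initial=0.2):
    # Fraction of training samples that receive input noise.
    if epoch < start:
        return initial
    if epoch >= end:
        return 0.0
    return initial * (end - epoch) / (end - start)  # assumed linear anneal

def corrupt_inputs(x, fraction, feature_std, rng):
    # Add Gaussian noise (std = 0.05 * per-feature std) to a random subset of samples.
    mask = rng.random(x.shape[0]) < fraction
    noise = rng.normal(0.0, 0.05 * feature_std, size=x.shape)
    return np.where(mask[:, None], x + noise, x)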
Model architecture The fR and fO MLPs contained multiple hidden layers of linear transforms
plus biases, followed by rectified linear units (ReLUs), and an output layer that was a linear transform
plus bias. The best model architecture was selected by a grid search over layer sizes and depths. All
Figure 2: Prediction rollouts. Each column contains three panels of three video frames (with motion blur),
each spanning 1000 rollout steps. Columns 1-2 are ground truth and model predictions for n-body systems, 3-4
are bouncing balls, and 5-6 are strings. Each model column was generated by a single model, trained on the
underlying states of a system of the size in the top panel. The middle and bottom panels show its generalization
to systems of different sizes and structure. For n-body, the training was on 6 bodies, and generalization was to 3
and 12 bodies. For balls, the training was on 6 balls, and generalization was to 3 and 9 balls. For strings, the
training was on 15 masses with 1 end pinned, and generalization was to 30 masses with 0 and 2 ends pinned.
inputs (except Rr and Rs) were normalized by centering at the median and rescaling the 5th and 95th
percentiles to -1 and 1. All training objectives and test measures used mean squared error (MSE)
between the model's prediction and the ground truth target.
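This normalization can be written as follows; the helper is our sketch, and it maps the two percentiles to exactly -1/+1 only when the median sits midway between them.

import numpy as np

def robust_normalize(x, axis=0):
    # Center at the median and scale so the 5th/95th percentiles land near -1/+1.
    med = np.median(x, axis=axis, keepdims=True)
    lo, hi = np.percentile(x, [5, 95], axis=axis, keepdims=True)
    return (x - med) / ((hi - lo) / 2.0)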
All prediction experiments used the same architecture, with parameters selected by a hyperparameter
search. The fR MLP had four 150-length hidden layers, and output length DE = 50. The fO MLP
had one 100-length hidden layer, and output length DP = 2, which targeted the x, y-velocity. The
m and a were customized so that the model was invariant to the absolute positions of objects in the
scene. The m concatenated three terms for each bk: the difference vector between the dynamic states
of the receiver and sender, the concatenated receiver and sender attribute vectors, and the relation
attribute vector. The a only outputs the velocities, not the positions, for input to φO.
The energy estimation experiments used the IN from the prediction experiments with an additional
φA MLP which had one 25-length hidden layer. Its P inputs' columns were length DP = 10, and
its output length was DA = 1.
We optimized the parameters using Adam [13], with a waterfall schedule that began with a learning
rate of 0.001 and down-scaled the learning rate by 0.8 each time the validation error, estimated over
a window of 40 epochs, stopped decreasing.
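One plausible reading of this waterfall schedule, written out as a sketch (the windowing details are our interpretation):

class WaterfallSchedule:
    # Start at 1e-3; multiply by 0.8 whenever the validation error, averaged
    # over a 40-epoch window, stops decreasing.
    def __init__(self, lr=1e-3, factor=0.8, window=40):
        self.lr, self.factor, self.window = lr, factor, window
        self.errors = []
        self.best = float('inf')

    def step(self, val_error):
        self.errors.append(val_error)
        if len(self.errors) >= self.window:
            avg = sum(self.errors[-self.window:]) / self.window
            if avg >= self.best:
                self.lr *= self.factor  # decay and restart the window
                self.errors = []
            else:
                self.best = avg
        return self.lr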
Two forms of L2 regularization were explored: one applied to the effects, E, and another to the model
parameters. Regularizing E improved generalization to different numbers of objects and reduced
drift over many rollout steps. It likely incentivizes sparser communication between φR and φO,
prompting them to operate more independently. Regularizing the parameters generally improved
performance and reduced overfitting. Both penalty factors were selected by a grid search.
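The combined objective can be sketched as follows (on NumPy arrays); lambda_e and lambda_w stand in for the two grid-searched penalty factors, which are not named in the text.

def regularized_loss(pred, target, E, weights, lambda_e, lambda_w):
    # MSE plus the two L2 penalties: one on the effect matrix E, one on the parameters.
    mse = ((pred - target) ** 2).mean()
    return mse + lambda_e * (E ** 2).sum() + lambda_w * sum((w ** 2).sum() for w in weights)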
Few competing models are available in the literature to compare our model against, but we considered
several alternatives: a constant velocity baseline which output the input velocity; an MLP baseline,
with two 300-length hidden layers, which took as input a flattened vector of all of the input data; and
a variant of the IN with the φR component removed (the interaction effects, E, were set to a 0-matrix).
4 Results
Prediction experiments  Our results show that the IN can predict the next-step dynamics of our task
domains very accurately after training, with orders of magnitude lower test error than the alternative
models (Fig. 3a, d and g, and Table 1). Because the dynamics of each domain depended crucially on
interactions among objects, the IN was able to learn to exploit these relationships for its predictions.
The dynamics-only IN had no mechanism for processing interactions, and performed similarly to the
constant velocity model. The baseline MLP's connectivity makes it possible, in principle, for it to
learn the interactions, but that would require learning how to use the relation indices to selectively
process the interactions. It would also not benefit from sharing its learning across relations and
objects, instead being forced to approximate the interactive dynamics in parallel for each object.
The IN also generalized well to systems with fewer and greater numbers of objects (Figs. 3b-c, e-f
and h-k, and Table SM1 in Supp. Mat.). For each domain, we selected the best IN model from the
system size on which it was trained, and evaluated its MSE on a different system size. When tested
on smaller n-body and spring systems than those on which it was trained, its performance actually
exceeded a model trained on the smaller system. This may be due to the model's ability to exploit its
greater experience with how objects and relations behave, available in the more complex system.
We also found that the IN trained on single-step predictions can be used to simulate trajectories over
thousands of steps very effectively, often tracking the ground truth closely, especially in the n-body
and string domains. When rendered into images and videos, the model-generated trajectories are
usually visually indistinguishable from those of the ground truth physics engine (Fig. 2; see Supp.
Mat. for videos of all images). This is not to say that given the same initial conditions, they cohere
perfectly: the dynamics are highly nonlinear and imperceptible prediction errors by the model can
rapidly lead to large differences in the systems' states. But the incoherent rollouts do not violate
people's expectations, and might be roughly on par with people's understanding of these domains.
Estimating abstract properties  We trained an abstract-estimation variant of our model to predict
potential energies in the n-body and string domains (the ball domain's potential energies were always
0), and found it was much more accurate (n-body MSE 1.4, string MSE 1.1) than the MLP baseline
Figure 3: Prediction experiment accuracy and generalization. Each colored bar represents the MSE between a
model's predicted velocity and the ground truth physics engine's (the y-axes are log-scaled). Subplots (a-c) show
n-body performance, (d-f) show balls, and (g-k) show string. The leftmost subplots in each (a, d, g) for each
domain compare the constant velocity model (black), baseline MLP (grey), dynamics-only IN (red), and full IN
(blue). The other panels show the IN's generalization performance to different numbers and configurations of
objects, as indicated by the subplot titles. For the string systems, the numbers correspond to: (the number of
masses, how many ends were pinned).
Table 1: Prediction experiment MSEs

Domain   Constant velocity   Baseline   Dynamics-only IN   IN
n-body   82                  79         76                 0.25
Balls    0.074               0.072      0.074              0.0020
String   0.018               0.016      0.017              0.0011
(n-body MSE 19, string MSE 425). The IN presumably learns the gravitational and spring potential
energy functions, applies them to the relations in their respective domains, and combines the results.
5 Discussion
We introduced interaction networks as a flexible and efficient model for explicit reasoning about
objects and relations in complex systems. Our results provide surprisingly strong evidence of their
ability to learn accurate physical simulations and generalize their training to novel systems with
different numbers and configurations of objects and relations. They could also learn to infer abstract
properties of physical systems, such as potential energy. The alternative models we tested performed
much more poorly, with orders of magnitude greater error. Simulation over rich mental models is
thought to be a crucial mechanism of how humans reason about physics and other complex domains
[4, 12, 10], and Battaglia et al. [3] recently posited a simulation-based "intuitive physics engine"
model to explain human physical scene understanding. Our interaction network implementation is the
first learnable physics engine that can scale up to real-world problems, and is a promising template for
new AI approaches to reasoning about other physical and mechanical systems, scene understanding,
social perception, hierarchical planning, and analogical reasoning.
In the future, it will be important to develop techniques that allow interaction networks to handle
very large systems with many interactions, such as by culling interaction computations that will have
negligible effects. The interaction network may also serve as a powerful model for model-predictive
control, with active control signals input as external effects; because it is differentiable, it naturally
supports gradient-based planning. It will also be important to prepend a perceptual front-end that
can infer objects and relations from raw observations, which can then be provided as input to an
interaction network that can reason about the underlying structure of a scene. By adapting the
interaction network into a recurrent neural network, even more accurate long-term predictions might
be possible, though preliminary tests found little benefit beyond its already-strong performance.
By modifying the interaction network to be a probabilistic generative model, it may also support
probabilistic inference over unknown object properties and relations.
By combining three powerful tools from the modern machine learning toolkit (relational reasoning
over structured knowledge, simulation, and deep learning), interaction networks offer flexible,
accurate, and efficient learning and inference in challenging domains. Decomposing complex
systems into objects and relations, and reasoning about them explicitly, provides for combinatorial
generalization to novel contexts, one of the most important future challenges for AI, and a crucial
step toward closing the gap between how humans and machines think.
References
[1] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein. Learning to compose neural networks for question answering. NAACL, 2016.
[2] D. Baraff. Physically based modeling: Rigid body simulation. SIGGRAPH Course Notes, ACM SIGGRAPH, 2(1):2-1, 2001.
[3] P. W. Battaglia, J. B. Hamrick, and J. B. Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327-18332, 2013.
[4] K. J. W. Craik. The nature of explanation. Cambridge University Press, 1943.
[5] K. Fragkiadaki, P. Agrawal, S. Levine, and J. Malik. Learning visual predictive models of physics for playing billiards. ICLR, 2016.
[6] F. Gardin and B. Meltzer. Analogical representations of naive physics. Artificial Intelligence, 38(2):139-159, 1989.
[7] Z. Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452-459, 2015.
[8] R. Grzeszczuk, D. Terzopoulos, and G. Hinton. NeuroAnimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, pages 9-20. ACM, 1998.
[9] P. J. Hayes. The naive physics manifesto. Université de Genève, Institut pour les études sémantiques et cognitives, 1978.
[10] M. Hegarty. Mechanical reasoning by mental simulation. TICS, 8(6):280-285, 2004.
[11] M. Jaderberg, K. Simonyan, and A. Zisserman. Spatial transformer networks. In NIPS, pages 2008-2016, 2015.
[12] P. N. Johnson-Laird. Mental models: towards a cognitive science of language, inference, and consciousness, volume 6. Cambridge University Press, 1983.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[14] L. Ladický, S. Jeong, B. Solenthaler, M. Pollefeys, and M. Gross. Data-driven fluid simulations using regression forests. ACM Transactions on Graphics (TOG), 34(6):199, 2015.
[15] B. Lake, T. Ullman, J. Tenenbaum, and S. Gershman. Building machines that learn and think like people. arXiv:1604.00289, 2016.
[16] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436-444, 2015.
[17] A. Lerer, S. Gross, and R. Fergus. Learning physical intuition of block towers by example. arXiv:1603.01312, 2016.
[18] W. Li, S. Azimi, A. Leonardis, and M. Fritz. To fall or not to fall: A visual approach to physical stability prediction. arXiv:1604.00066, 2016.
[19] R. Mottaghi, H. Bagherinezhad, M. Rastegari, and A. Farhadi. Newtonian image understanding: Unfolding the dynamics of objects in static images. arXiv:1511.04048, 2015.
[20] R. Mottaghi, M. Rastegari, A. Gupta, and A. Farhadi. "What happens if..." Learning to predict the effect of forces in images. arXiv:1603.05600, 2016.
[21] S. E. Reed and N. de Freitas. Neural programmer-interpreters. ICLR, 2016.
[22] F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2009.
[23] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.
[24] R. Socher, E. Huang, J. Pennin, C. Manning, and A. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, pages 801-809, 2011.
[25] E. Spelke, K. Breinlinger, J. Macomber, and K. Jacobson. Origins of knowledge. Psychological Review, 99(4):605-632, 1992.
[26] I. Sutskever and G. E. Hinton. Using matrices to model symbolic relationships. In NIPS 21, pages 1593-1600, 2009.
[27] J. B. Tenenbaum, C. Kemp, T. L. Griffiths, and N. D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279, 2011.
[28] P. Winston and B. Horn. The psychology of computer vision, volume 73. McGraw-Hill, New York, 1975.
[29] J. Wu, I. Yildirim, J. J. Lim, B. Freeman, and J. Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, pages 127-135, 2015.
Minimax Optimal Alternating Minimization
for Kernel Nonparametric Tensor Learning
Taiji Suzuki, Heishiro Kanagawa
Department of Mathematical and Computing Science, Tokyo Institute of Technology
PRESTO, Japan Science and Technology Agency
Center for Advanced Integrated Intelligence Research, RIKEN
[email protected], [email protected]
Hayato Kobayash, Nobuyuki Shimizu, Yukihiro Tagami
Yahoo Japan Corporation
{ hakobaya, nobushim, yutagami } @yahoo-corp.jp
Abstract
We investigate the statistical performance and computational efficiency of the
alternating minimization procedure for nonparametric tensor learning. Tensor
modeling has been widely used for capturing the higher order relations between
multimodal data sources. In addition to a linear model, a nonlinear tensor model has
been received much attention recently because of its high flexibility. We consider
an alternating minimization procedure for a general nonlinear model where the true
function consists of components in a reproducing kernel Hilbert space (RKHS).
In this paper, we show that the alternating minimization method achieves linear
convergence as an optimization algorithm and that the generalization error of the
resultant estimator yields the minimax optimality. We apply our algorithm to some
multitask learning problems and show that the method actually shows favorable
performances.
1
Introduction
Tensor modeling is widely used for capturing the higher order relations between several data sources.
For example, it has been applied to spatiotemporal data analysis [19], multitask learning [20, 2, 14]
and collaborative filtering [15]. The success of tensor modeling is usually based on the low-rank
property of the target parameter. As in the matrix, the low-rank decomposition of tensors, e.g.,
canonical polyadic (CP) decomposition [10, 11] and Tucker decomposition [31], reduces the effective
dimension of the statistical model, improves the generalization error, and gives a better understanding
of the model based on an condensed representation of the target system.
Among several tensor models, linear models have been extensively studied from both theoretical
and practical points of view [16]. A difficulty of tensor model analysis is that typical tensor
problems are non-convex and therefore difficult to solve exactly.
To overcome the computational difficulty, several authors have proposed convex relaxation methods
[18, 23, 9, 30, 29]. Unfortunately, however, convex relaxation methods lose statistical optimality in
favor of computational efficiency [28].
Another promising approach is the alternating minimization procedure which alternately updates
each component of the tensor with the other fixed components. The method has shown a nice
performance in practice. Moreover, its theoretical analysis has also been given by several authors
[1, 13, 6, 3, 21, 36, 27, 37]. These theoretical analyses indicate that the estimator given by the
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
alternating minimization procedure has a good generalization error, with a mild dependency on the
size of the tensor if the initial solution is properly set. In addition to the alternating minimization
procedure, it has been shown that the Bayes estimator achieves the minimax optimality under quite
weak assumptions [28].
Nonparametric models have also been proposed for capturing nonlinear relations [35, 24, 22]. In
particular, [24] extended the linear tensor learning to the nonparametric learning problem using
a kernel method and proposed a convex regularization method and an alternating minimization
method. Recently, [14, 12] showed that the Bayesian approach has good theoretical properties for the
nonparametric problem. In particular, it achieves the minimax optimality under weak assumptions.
However, from a practical point of view, the Bayesian approach is computationally expensive
compared with the alternating minimization approach. An interesting observation is that the practical
performance of the alternating minimization procedure is quite good [24] and is comparable to the
Bayesian one [14], although its computational efficiency is much better than that of the Bayesian
one. Despite the practical usefulness of the alternating minimization, its statistical properties have
not been investigated yet in the general nonparametric model.
In this paper, we theoretically analyze the alternating minimization procedure in the nonparametric
model. We investigate its computational efficiency and analyze its statistical performance. It is shown
that, if the true function is included in a reproducing kernel Hilbert space (RKHS), then the algorithm
converges to a (possibly local) optimal solution at a linear rate, and the generalization error of the
estimator achieves minimax optimality if the initial point of the algorithm is within O(1) distance
of the true function. Roughly speaking, the theoretical analysis shows that
$$ \|\hat{f}^{(t)} - f^*\|_{L_2}^2 = O_p\Big( dK\, n^{-\frac{1}{1+s}} \log(dK) + dK\,(3/4)^t \Big), $$
where $\hat{f}^{(t)}$ is the estimated nonlinear tensor at the t-th iteration of the alternating minimization
procedure, n is the sample size, d is the rank of the true tensor, K is the number of modes, and s is
the complexity of the RKHS. This indicates that the alternating minimization procedure can produce
a minimax optimal estimator after O(log(n)) iterations.
2 Problem setting: nonlinear tensor model
Here, we describe the model to be analyzed. Suppose that we are given n input-output pairs
$\{(x_i, y_i)\}_{i=1}^n$ that are generated from the following system. The input $x_i$ is a concatenation of K
variables, i.e., $x_i = (x_i^{(1)}, \dots, x_i^{(K)}) \in \mathcal{X}_1 \times \dots \times \mathcal{X}_K = \mathcal{X}$, where each $x_i^{(k)}$ is an element of a set
$\mathcal{X}_k$ and is generated from a distribution $P_k$. We consider the regression problem where the outputs
$\{y_i\}_{i=1}^n$ are observed according to the nonparametric tensor model [24]:

$$ y_i = \sum_{r=1}^{d} \prod_{k=1}^{K} f^*_{(r,k)}(x_i^{(k)}) + \epsilon_i, \qquad (1) $$

where $\{\epsilon_i\}_{i=1}^n$ represents i.i.d. zero-mean noise and each $f^*_{(r,k)}$ is a component of the true function
included in some RKHS $\mathcal{H}_{r,k}$. In this regression problem, our objective is to estimate the true function
$f^*(x) = f^*(x^{(1)}, \dots, x^{(K)}) = \sum_{r=1}^d \prod_{k=1}^K f^*_{(r,k)}(x^{(k)})$ based on the observations $\{(x_i, y_i)\}_{i=1}^n$.
This model has been applied to several problems such as multitask learning, recommendation system
and spatiotemporal data analysis. Although we focus on the squared loss regression problem, the
discussion in this paper can be easily generalized to Lipschitz continuous and strongly convex losses
as in [4].
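For concreteness, the model (1) can be evaluated as in the sketch below; the toy component functions are illustrative only.

import numpy as np

def tensor_model(x, components):
    # f(x) = sum_r prod_k f_{(r,k)}(x^{(k)}); components[r][k] is a callable f_{(r,k)}.
    return sum(np.prod([f(x[k]) for k, f in enumerate(row)]) for row in components)

# Toy instance with d = 2 and K = 2.
components = [[np.sin, np.cos], [np.tanh, np.exp]]
print(tensor_model((0.3, 0.7), components))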
Example 1: multitask learning  Suppose that we have several tasks indexed by a two-dimensional
index $(s, t) \in [M_1] \times [M_2]$,¹ and each task $(s, t)$ is a regression problem for which there is a true
function $g^*_{[s,t]}(x)$ that takes an input feature $w \in \mathcal{X}_3$. The i-th input sample is given as $x_i = (s_i, t_i, w_i)$,
which is a combination of task index $(s_i, t_i)$ and input feature $w_i$. By assuming that the true function
$g^*_{[s,t]}$ is a linear combination of a few latent factors $h_r$ as

$$ g^*_{[s,t]}(x) = \sum_{r=1}^{d} \theta_{s,r}\, \theta_{t,r}\, h_r(w) \quad (x = (s, t, w)), \qquad (2) $$

¹ We denote by $[k] = \{1, \dots, k\}$.
Algorithm 1 Alternating minimization procedure for nonlinear tensor estimation

Require: Training data $D_n = \{(x_i, y_i)\}_{i=1}^n$, the regularization parameter $\lambda^{(n)}$, iteration number T.
Ensure: $\hat{f}^{(T)} = \sum_{r=1}^d \hat{v}_r^{(T)} \prod_{k=1}^K \hat{f}^{(T)}_{(r,k)}$ as the estimator.
for t = 1, ..., T do
  Set $\check{f}_{(r,k)} = \hat{f}^{(t-1)}_{(r,k)}$ $(\forall (r,k))$, and $\check{v}_r = \hat{v}_r^{(t-1)}$ $(\forall r)$.
  for $(r,k) \in \{1,\dots,d\} \times \{1,\dots,K\}$ do
    The (r,k)-element of $\check{f}$ is updated as

    $$ \check{f}'_{(r,k)} = \operatorname*{argmin}_{f_{(r,k)} \in \mathcal{H}_{r,k}} \frac{1}{n}\sum_{i=1}^n \Big[ y_i - \Big( f_{(r,k)} \prod_{k' \ne k} \check{f}_{(r,k')} + \sum_{r' \ne r} \check{v}_{r'} \prod_{k'=1}^K \check{f}_{(r',k')} \Big)(x_i) \Big]^2 + C_n \|f_{(r,k)}\|_{\mathcal{H}_{r,k}}^2. \qquad (4) $$

    $\check{v}_r \leftarrow \|\check{f}'_{(r,k)}\|_n$,  $\check{f}_{(r,k)} \leftarrow \check{f}'_{(r,k)} / \check{v}_r$.
  end for
  Set $\hat{f}^{(t)}_{(r,k)} = \check{f}_{(r,k)}$ $(\forall (r,k))$ and $\hat{v}_r^{(t)} = \check{v}_r$ $(\forall r)$.
end for
and the output is given as $y_i = g^*_{[s_i,t_i]}(x_i) + \epsilon_i$ [20, 2, 14], then we can reduce the multitask
learning problem for estimating $\{g^*_{[s,t]}\}_{s,t}$ to the tensor estimation problem, where $f_{(r,1)}(s) =
\theta_{s,r}$, $f_{(r,2)}(t) = \theta_{t,r}$, $f_{(r,3)}(w) = h_r(w)$.
3 Alternating regularized least squares algorithm
To learn the nonlinear tensor factorization model (1), we propose to optimize the regularized empirical
risk in an alternating way. That is, we optimize each component $f_{(r,k)}$ with the other components
$\{f_{(r',k')}\}_{(r',k') \ne (r,k)}$ fixed. Basically, we want to solve the following optimization problem:

$$ \min_{\{f_{(r,k)}\}_{(r,k)}:\, f_{(r,k)} \in \mathcal{H}_{r,k}} \; \frac{1}{n}\sum_{i=1}^n \Big( y_i - \sum_{r=1}^d \prod_{k=1}^K f_{(r,k)}(x_i^{(k)}) \Big)^2 + C_n \sum_{r=1}^d \sum_{k=1}^K \|f_{(r,k)}\|_{\mathcal{H}_{r,k}}^2, \qquad (3) $$

where the first term is the loss function for measuring how our guess $\sum_{r=1}^d \prod_{k=1}^K f_{(r,k)}$ fits the data
and the second term is a regularization term for controlling the complexity of the learned function.
However, this optimization problem is not convex and is difficult to solve exactly.
We found that this computational difficulty can be overcome under some additional assumptions if we
aim to achieve a good generalization error instead of exactly minimizing the training error.
The optimization procedure we use to obtain such an estimator is the alternating minimization
procedure, which minimizes the objective function (3) alternately with respect to each component
$f_{(r,k)}$. For each component $f_{(r,k)}$, the objective function (3) is a convex function, and thus it is easy
to obtain the optimal solution. In fact, the subproblem reduces to a variant of kernel ridge
regression, and the solution can be obtained analytically; a sketch is given below.
The algorithm, which we call the alternating minimization procedure (AMP), is summarized in Algorithm 1.
After minimizing the objective (Eq. (4)), the obtained solution is normalized so that its empirical L2-norm becomes 1 to remove the scaling-factor freedom. The parameter $C_n$ in Eq. (4) is a regularization
parameter that is appropriately chosen.
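As referenced above, each subproblem (4) is a weighted kernel ridge regression and admits a closed form by the representer theorem: writing $g_i$ for the product of the fixed components on the other modes and $\tilde{y}_i$ for the residual after subtracting the other rank-1 terms, the coefficients solve $(D^2 K + n C_n I)\alpha = D\tilde{y}$ with $D = \mathrm{diag}(g)$. The sketch below assumes one-dimensional inputs on each mode and a Gaussian kernel; the names are ours, not from the paper.

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def amp_component_update(y, xk, g, other, Cn, gamma=1.0):
    # xk: (n,) inputs on mode k; g[i] = prod_{k' != k} f_{(r,k')}(x_i^{(k')});
    # other[i] = sum_{r' != r} v_{r'} prod_{k'} f_{(r',k')}(x_i^{(k')}).
    n = len(y)
    K = rbf_kernel(xk, xk, gamma)
    D = np.diag(g)
    alpha = np.linalg.solve(D @ D @ K + n * Cn * np.eye(n), D @ (y - other))
    return lambda x: rbf_kernel(np.atleast_1d(x), xk, gamma) @ alpha  # updated f_{(r,k)}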
For theoretical simplicity, we consider the following equivalent constraint formulation instead of the
penalized one (4):

$$ \check{f}'_{(r,k)} \in \operatorname*{argmin}_{f_{(r,k)} \in \mathcal{H}_{r,k},\; \|f_{(r,k)}\|_{\mathcal{H}_{r,k}} \le \bar{R}} \frac{1}{n}\sum_{i=1}^n \Big( y_i - f_{(r,k)}(x_i^{(k)}) \prod_{k' \ne k} \check{f}_{(r,k')}(x_i^{(k')}) - \sum_{r' \ne r} \check{v}_{r'} \prod_{k'=1}^K \check{f}_{(r',k')}(x_i^{(k')}) \Big)^2, \qquad (5) $$

where the parameter $\bar{R}$ is a regularization parameter for controlling the complexity of the estimated
function.
4 Assumptions and problem settings for the convergence analysis
Here, we prepare some assumptions for our theoretical analysis. First, we assume that the distribution
$P(X)$ of the input feature $x \in \mathcal{X}$ is a product measure of $P_k(X)$ on each $\mathcal{X}_k$. That is, $P_X(dX) =
P_1(dX_1) \times \dots \times P_K(dX_K)$ for $X = (X_1, \dots, X_K) \in \mathcal{X} = \mathcal{X}_1 \times \dots \times \mathcal{X}_K$. This is typically
assumed in the analysis of linear tensor estimation methods [13, 6, 3, 21, 1, 36, 27, 37]. Thus, the
$L_2$-norm of a "rank-1" function $f(x) = \prod_{k=1}^K f_k(x^{(k)})$ can be decomposed into

$$ \|f\|_{L_2(P_X)}^2 = \|f_1\|_{L_2(P_1)}^2 \times \dots \times \|f_K\|_{L_2(P_K)}^2. $$

Hereafter, with a slight abuse of notation, we denote $\|f\|_{L_2} = \|f\|_{L_2(P_k)}$ for a function $f:
\mathcal{X}_k \to \mathbb{R}$. The inner product in the space $L_2$ is denoted by $\langle f, g \rangle_{L_2} := \int f(X) g(X)\, dP_X(X)$.
Note that because of the construction of $P_X$, it holds that $\langle f, g \rangle_{L_2} = \prod_{k=1}^K \langle f_k, g_k \rangle_{L_2}$ for functions
$f(x) = \prod_{k=1}^K f_k(x^{(k)})$ and $g(x) = \prod_{k=1}^K g_k(x^{(k)})$, where $x = (x^{(1)}, \dots, x^{(K)}) \in \mathcal{X}$.
Next, we assume that the norm of the true function is bounded away from zero and from above.
Let the magnitude of the r-th component of the true function be $v_r := \|\prod_{k=1}^K f^*_{(r,k)}\|_{L_2}$ and the
normalized components be $f^{**}_{(r,k)} := f^*_{(r,k)} / \|f^*_{(r,k)}\|_{L_2}$ $(\forall (r,k))$.

Assumption 1 (Boundedness Assumption).
(A1-1) There exist $0 < v_{\min} \le v_{\max}$ such that $v_{\min} \le v_r \le v_{\max}$ $(\forall r = 1, \dots, d)$.
(A1-2) The true function $f^*_{(r,k)}$ is included in the RKHS $\mathcal{H}_{r,k}$, i.e., $f^*_{(r,k)} \in \mathcal{H}_{r,k}$ $(\forall (r,k))$, and
there exists $R > 0$ such that $\max\{v_r, 1\} \|f^{**}_{(r,k)}\|_{\mathcal{H}_{r,k}} \le R$ $(\forall (r,k))$.
(A1-3) The kernel function $k_{(r,k)}$ associated with the RKHS $\mathcal{H}_{r,k}$ is bounded as
$\sup_{x \in \mathcal{X}_k} k_{(r,k)}(x, x) \le 1$ $(\forall (r,k))$.
(A1-4) There exists $L > 0$ such that the noise is bounded as $|\epsilon_i| \le L$ (a.s.).
Assumption 1 is a standard one for the analysis of the tensor model and the kernel regression
model. Note that the boundedness condition of the kernel gives $\|f\|_\infty = \sup_{x^{(k)}} |f(x^{(k)})| \le
\|f\|_{\mathcal{H}_{r,k}}$ for all $f \in \mathcal{H}_{r,k}$, because the Cauchy-Schwarz inequality gives $|\langle f, k_{(r,k)}(\cdot, x^{(k)}) \rangle_{\mathcal{H}_{r,k}}| \le
\sqrt{k_{(r,k)}(x^{(k)}, x^{(k)})}\, \|f\|_{\mathcal{H}_{r,k}}$ for all $x^{(k)}$. Thus, combining with (A1-2), we also have $\|f^{**}_{(r,k)}\|_\infty \le R$.
The last assumption (A1-4) is a bit restrictive. However, this assumption can be replaced with a
Gaussian assumption. In that situation, we may use the Gaussian concentration inequality [17] instead
of Talagrand's concentration inequality in the proof.
Next, we characterize the complexity of each RKHS $\mathcal{H}_{r,k}$ by using the entropy number [33, 25].
The $\epsilon$-covering number $N(\epsilon, \mathcal{G}, L_2(P_X))$ with respect to $L_2(P_X)$ is the minimal number of balls
with radius $\epsilon$ measured by $L_2(P_X)$ needed to cover a set $\mathcal{G} \subset L_2(P_X)$. The i-th entropy number
$e_i(\mathcal{G}, L_2(P_X))$ is defined as the infimum of $\epsilon > 0$ such that $N(\epsilon, \mathcal{G}, L_2) \le 2^{i-1}$ [25]. Intuitively, if
the entropy number is small, the space $\mathcal{G}$ is "simple"; otherwise, it is "complicated."

Assumption 2 (Complexity Assumption). Let $B_{\mathcal{H}_{r,k}}$ be the unit ball of the RKHS $\mathcal{H}_{r,k}$. There exist
$0 < s < 1$ and $c$ such that

$$ e_i(B_{\mathcal{H}_{r,k}}, L_2(P_X)) \le c\, i^{-\frac{1}{2s}}, \qquad (6) $$

for all $1 \le r \le d$ and $1 \le k \le K$.
The optimal rate of ordinary kernel ridge regression on an RKHS satisfying Assumption 2 is
$n^{-\frac{1}{1+s}}$ [26]. Next, we give a technical assumption about the $L_\infty$-norm.
Assumption 3 (Infinity Norm Assumption). There exist $0 < s_2 \le 1$ and $c_2$ such that

$$ \|f\|_\infty \le c_2\, \|f\|_{L_2}^{1-s_2} \|f\|_{\mathcal{H}_{r,k}}^{s_2} \quad (\forall f \in \mathcal{H}_{r,k}) \qquad (7) $$

for all $1 \le r \le d$ and $1 \le k \le K$.
By Assumption 1, this assumption is always satisfied for $c_2 = 1$ and $s_2 = 1$. The case $s_2 < 1$ is nontrivial
and gives a tighter bound. We would like to note that this condition with $s_2 < 1$ is satisfied
by many practically used kernels such as the Gaussian kernel. In particular, it is satisfied if the kernel
is smooth so that $\mathcal{H}_{r,k}$ is included in a Sobolev space $W^{2,s_2}[0,1]$. A more formal characterization of
this condition using the notion of a real interpolation space can be found in [26] and Proposition
2.10 of [5].
Finally, we assume an incoherence condition on $\{f^*_{(r,k)}\}_{r,k}$. Roughly speaking, the incoherence
property of a set of functions $\{f_{(r,k)}\}_{r,k}$ means that the components $\{f_{(r,k)}\}_r$ are linearly independent
across different $1 \le r \le d$ on the same mode k. This is required to distinguish each component. An
analogous assumption has also been made in the literature on linear models [13, 6, 3, 21, 36, 27].

Definition 1 (Incoherence). A set of functions $\{f_{(r,k)}\}_{r,k}$, where $f_{(r,k)} \in L_2(P_k)$, is $\mu$-incoherent if,
for all $k = 1, \dots, K$, it holds that

$$ |\langle f_{(r,k)}, f_{(r',k)} \rangle_{L_2}| \le \mu\, \|f_{(r,k)}\|_{L_2} \|f_{(r',k)}\|_{L_2} \quad (\forall r \ne r'). $$

Assumption 4 (Incoherence Assumption). There exists $1 > \mu^* \ge 0$ such that the true function
$\{f^*_{(r,k)}\}_{r,k}$ is $\mu^*$-incoherent.
5 Linear convergence of alternating minimization procedure
In this section, we give the convergence analysis of the AMP algorithm. Under the assumptions
presented in the previous section, it will be shown that the AMP algorithm shows linear convergence
in the sense of an optimization algorithm and achieves the minimax optimal rate in the sense of statistical
performance. Roughly speaking, if the initial solution is sufficiently close to the true function (namely,
within a distance of O(1)), then the solution generated by AMP linearly converges to the optimal solution
and the estimation accuracy of the final solution is $O(dKn^{-\frac{1}{1+s}})$ up to a $\log(dK)$ factor.
updated from f?(r,k) to f?(r,k)
. The tensor decomposition {f(r,k) }r,k of a nonlinear tensor model has a
freedom of scaling. Thus, we need to measure the accuracy based on a normalized representation to
avoid the scaling factor uncertainty. Let the normalized components of the estimator be f?(r0 ,k0 ) =
QK
f?(r0 ,k0 ) /kf?(r0 ,k0 ) kL2 (?(r0 , k 0 ) ? [d] ? [K]) and v?r0 = v?r0 k0 =1 kf?(r0 ,k0 ) kL2 (?r0 ? [d]). On the
0
other hand, the newly updated (r, k)th element is denoted by f?(r,k)
(see Eq. (4)) and we denote by v?r0
Q
the updated value of v?r correspondingly: v?0 = kf?0 kL
kf?(r,k0 ) kL . The normalized newly
0
r
(r,k)
2
k 6=k
2
0
0
0
updated element is denoted by f?(r,k)
= f?(r,k)
/kf?(r,k)
kL2 .
For an estimator (f?, v?) = ({f?(r0 ,k0 ) }r0 ,k0 , {?
vr0 }r0 ) which is a couple of the normalized component
and the scaling factor, define
??
d? (f?, v?) := max
{vr0 kf?(r0 ,k0 ) ? f(r
?r0 |}.
0 ,k 0 ) kL2 + |vr 0 ? v
0 0
(r ,k )
For any $\beta_{1,n} > 0$, $\beta_{2,n} > 0$ and $\tau > 0$, we let $a_\tau := \max\{1, L\} \max\{1, \tau\} \log(dK)$ and define
$\xi_n = \xi_n(\beta_{1,n}, \tau)$ and $\xi'_n = \xi'_n(\beta_{2,n}, \tau)$ as²

$$ \xi_n := a_\tau \left( \frac{K^2\, \beta_{1,n}^{-\frac{1+2s}{2s}}}{n} \vee \frac{1}{\beta_{1,n}^{\frac{1+2s}{2(1+s)}}\, n^{\frac{1}{1+s}}} \right), \qquad \xi'_n := a_\tau \left( \frac{\beta_{2,n}^{-s}\, K^{1+s}}{n} \vee \frac{1}{\beta_{2,n}^{\frac{2s+(1-s)s_2}{2}}\, n^{\frac{1}{1+s}}} \right). $$
Theorem 2. Suppose that Assumptions 1-4 are satisfied, and the regularization parameter $\bar{R}$ in Eq.
(5) is set as $\bar{R} = 2R$. Let $\check{R} = 8R / \min\{v_{\min}, 1\}$, and suppose that we have already obtained an
estimator $\check{f}$ satisfying the following conditions:
• The RKHS-norms of $\{\check{f}_{(r',k')}\}_{r',k'}$ are bounded as $\|\check{f}_{(r',k')}\|_{\mathcal{H}_{r',k'}} \le \check{R}/2$ $(\forall (r',k') \ne (r,k))$.
• The distance from the true one is bounded as $d_\infty(\check{f}, \check{v}) \le \gamma$.
Then, for sufficiently small $\mu^*$ and $\gamma$ (independent of n), there exists an event with probability
greater than $1 - 3\exp(-\tau)$ on which any $(\check{f}, \check{v})$ satisfying the above conditions gives

$$ v_r\, \|\check{f}'_{(r,k)} - f^{**}_{(r,k)}\|_{L_2} + |\check{v}'_r - v_r| \le \frac{1}{2}\, d_\infty(\check{f}, \check{v})^2 + S_n\, \check{R}^{2K} \qquad (8) $$

² The symbol $\vee$ denotes the max operation, that is, $a \vee b := \max\{a, b\}$.
for any sufficiently large n, where $S_n$ is defined, for a constant $C'$ depending on $s, s_2, c, c_2$, as

$$ S_n := C' \left[ \xi'_n\, \beta_{2,n}^{1/2} + \xi'_n + d\, \xi_n\, \beta_{1,n}^{1/2} + \check{R}^{2(K-1)(\frac{1}{s_2}-1)}\, (d\, \xi_n)^{2/s_2}\, (1 + v_{\max})^2 \right]. $$

Moreover, if we denote by $\delta_n$ the right-hand side of Eq. (8), then it holds that

$$ \|\check{f}'_{(r,k)}\|_{\mathcal{H}_{r,k}} \le \frac{2\bar{R}}{v_r - \delta_n} \le \check{R}. $$
The proof and its detailed statement are given in the supplementary material (Theorem A.1). It
is proven by using such techniques as the so-called peeling device [32] or, equivalently, the local
Rademacher complexity [4], and by combining these techniques with the coordinate descent optimization argument. Theorem 2 states that, if the initial solution is sufficiently close to the true one,
then the next updated estimator gets closer to the true one and its RKHS-norm is still bounded
above by a constant. Importantly, it can be shown that the updated one still satisfies the conditions
of Theorem 2 for large n. Since the bound given in Theorem 2 is uniform, the inequality (8) can be
recursively applied to the sequence $\hat{f}^{(t)}$ $(t = 1, 2, \dots)$.

By substituting $\beta_{1,n} = K^{-\frac{1+s}{1-s}}\, d^{-\frac{2}{1-s}}\, n^{-\frac{1}{1+s}}$ and $\beta_{2,n} = n^{-\frac{1}{1+s}}$, we have

$$ S_n = O\left( \left[ n^{-\frac{1}{1+s}} \vee n^{-\frac{1}{1+s} - (1-s_2)\min\{\frac{1-s}{4(1+s)},\, \frac{1}{s_2(1+s)}\}} \right] \mathrm{poly}(d, K)\, \log(dK) \right), $$

where poly(d, K) means a polynomial of d and K. Thus, if $s_2 < 1$ and n is sufficiently large compared
with d and K, then the second term is smaller than the first term and we have $S_n \le C n^{-\frac{1}{1+s}}$ with a
constant C. Furthermore, we can bound the $L_2$-distance from the true function as in the following theorem.
Theorem 3. Let $(\hat{f}^{(t)}, \hat{v}^{(t)})$ be the estimator at the t-th iteration. In addition to the assumptions of
Theorem 2, suppose that $(\hat{f}^{(1)}, \hat{v}^{(1)})$ satisfies $d_\infty(\hat{f}^{(1)}, \hat{v}^{(1)})^2 \le \frac{v_{\min}^2}{8}$ and $S_n \check{R}^{2K} \le \frac{v_{\min}}{8}$, that $s_2 < 1$,
and that $n \gg d, K$. Then $\hat{f}^{(t)}(x) = \sum_{r=1}^d \hat{v}_r^{(t)} \prod_{k=1}^K \hat{f}^{(t)}_{(r,k)}(x^{(k)})$ satisfies

$$ \|\hat{f}^{(t)} - f^*\|_{L_2}^2 = O\Big( dK\, n^{-\frac{1}{1+s}} \log(dK) + dK\,(3/4)^t \Big) $$

for all $t \ge 2$ uniformly, with probability $1 - 3\exp(-\tau)$.
A more detailed statement is given in Theorem A.3 in the supplementary material. This means that
after $T = O(\log(n))$ iterations, we obtain an estimation accuracy of $O(dKn^{-\frac{1}{1+s}} \log(dK))$. This
accuracy bound is intuitively natural because we are estimating $d \times K$ functions $\{f^*_{(r,k)}\}_{r,k}$, and the
optimal sample complexity to estimate one function $f^*_{(r,k)}$ is known to be $n^{-\frac{1}{1+s}}$ [26]. Indeed, it has
recently been shown that this accuracy bound is minimax optimal up to a $\log(dK)$ factor [14], that is,

$$ \inf_{\hat{f}} \sup_{f^*} \mathrm{E}[\|\hat{f} - f^*\|^2] \gtrsim dK\, n^{-\frac{1}{1+s}}, $$

where the infimum is taken over all estimators and the supremum runs over all low-rank tensors $f^*$ with $\|f^*_{(r,k)}\|_{\mathcal{H}_{r,k}} \le R$.
The Bayes estimator also achieves this minimax lower bound [14]. Hence, a rough Bayes estimate
would be a good initial solution satisfying the assumptions.
would be a good initial solution satisfying the assumptions.
6
Relation to existing works
In this section, we describe the relation of our work to existing works. First, our work can be
seen as a nonparametric extension of the linear parametric tensor model. The AMP algorithm
and related methods for the linear model has been extensively studied in the recent years, e.g.
[1, 13, 6, 3, 21, 36, 27, 37]. Overall, the tensor completion problem has been mainly studied instead
of a general regression problem. Among the existing works, [37] analyzed the AMP algorithm for
a low-rank matrix estimation problem. It was shown that, under an incoherence condition, the AMP
algorithm converges to the optimum at a linear rate. However, their analysis is limited to the matrix
case. [1] analyzed an alternating minimization approach to estimate a low-rank tensor with positive
entries in a noisy observation setting. [13, 6] considered an AMP algorithm for tensor completion.
Their estimation method is close to our AMP algorithm. However, the analysis is for a linear tensor
completion with/without noise and is a different direction from our general nonparametric regression
setting. [3, 36] proposed estimation methods other than an alternating minimization one, which were
specialized to a linear tensor completion problem.
As for the theoretical analysis for the nonparametric tensor regression model, some Bayes estimators
have been analyzed very recently by [14, 12]. They analyzed Bayes methods with Gaussian process
priors and showed that the Gaussian process methods possess a good statistical performance. In
particular, [14] showed that the Gaussian process method for the nonlinear tensor estimation yields
the mini-max optimality as an extension of the linear model analysis [28]. However, the Bayes
estimators require posterior sampling such as Gibbs sampling, which is rather computationally
expensive. On the other hand, the AMP algorithm yields a linear convergence rate and satisfies
the minimax optimality. An interesting observation is that the AMP algorithm requires a stronger
assumption than the Bayesian one. There would be a trade-off between computational efficiency and
statistical property.
7 Numerical experiments
We numerically compare the following methods in multitask learning problems (Eq. (2)):
• Gaussian process method (GP-MTL) [14]: the nonparametric Bayesian method with Gaussian
process priors. It was shown that the generalization error of GP-MTL achieves the minimax
optimal rate [14].
• Our AMP method with different kernels for the latent factors $h_r$ (see Eq. (2)): the Gaussian RBF
kernel and the linear kernel. We also examined mixtures of them, e.g., 2 RBF kernels and 1 linear
kernel among the d = 3 components, indicated as Lin(1)+RBF(2).
The tensor rank for AMP and GP-MTL was fixed to d = 3 in the following two data sets. The kernel
width and the regularization parameter were tuned by cross-validation. We also examined the scaled
latent convex regularization method [34]. However, it did not perform well and was omitted.
7.1 Restaurant data
Here, we compared the methods on the Restaurant & Consumer Dataset [7]. The task was to
predict consumer ratings about several aspects of different restaurants, which is a typical task of a
recommendation system. The number of consumers was M1 = 138, and each consumer gave scores
for about M2 = 3 different aspects (food quality, service quality, and overall quality). Each restaurant
was described by M3 = 44 features as in [20], and the task was to predict the score of an aspect
by a certain consumer based on the restaurant feature vector. This is a multitask learning problem
consisting of M1 × M2 = 414 (nonlinear) regression tasks where the input feature vector is M3 = 44
dimensional. The kernel function representing the task similarities among Task 1 (restaurant) and
Task 2 (aspect) was set as $k(p, p') = \delta_{p,p'} + 0.8 \cdot (1 - \delta_{p,p'})$ (where the pair $p, p'$ are restaurants or
aspects).³
Fig. 1 shows the relative MSE (the discrepancy of MSE from the best one) for different training
sample sizes n computed on the validation data against the number of iterations t averaged over 10
repetitions. It can be seen that the validation error dropped rapidly to the optimal one. The best
achievable validation error depended on the sample size. An interesting observation was that, until
the algorithm converged to the best possible error, it dropped at a linear rate. After it reached the
bottom, the error was no longer improved.
Fig. 2 shows the performance comparison between the AMP method with different kernels and
the Gaussian process method (GP-MTL). The performances of AMP and GP-MTL were almost
identical. Although AMP is computationally quite efficient, as shown in Fig. 1, it did not deteriorate
the statistical performance. This is a remarkable property of the AMP algorithm.
7.2 Online shopping data
Here, we compared our AMP method with the existing method using data of Yahoo! Japan shopping.
Yahoo! Japan shopping contains various types of shops. The dataset is built on the purchase history
³ We also tested the delta kernel $k(p, p') = \delta_{p,p'}$, but its performance was worse than that of the presented kernel.
Figure 1: Convergence property of the AMP method: relative MSE against the number of iterations.
Figure 2: Comparison between the AMP method with different kernels and GP-MTL on the restaurant data.
Figure 3: Comparison between AMP and GP-MTL on the online shopping data with different kernels.
that describes how many times each consumer bought each product in each shop. Our objective was
to predict the quantity of a product purchased by a consumer at a specific shop. Each consumer was
described by 65 features based on his/her properties such as age, gender, and industry type of his/her
occupation. We executed the experiments on 100 items and 508 different shops. Hence, the problem
was reduced to a multitask learning problem consisting of 100 × 508 regression tasks.
Similarly to [14], we put a commute-time kernel $K = L^\dagger$ [8] on the shops, based on a Laplacian matrix
L on a weighted graph constructed from two similarity measures between shops (where $\dagger$ denotes
the pseudoinverse). Here, the Laplacian of the graph is given by $L_{i,j} = \big(\sum_{j' \in V} w_{i,j'}\big) \delta_{i,j} - w_{i,j}$,
where $w_{i,j}$ is the similarity between shops $(i, j)$. We employed the cosine similarity with different
parameters as the similarity measures (indicated by "cossim" and "cosdis"). A sketch of this kernel
construction is given below.
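As referenced above, the kernel construction is only a few lines; this is a sketch under the stated definitions.

import numpy as np

def commute_time_kernel(W):
    # W: symmetric shop-similarity matrix; L = D - W is the graph Laplacian,
    # and the kernel is its Moore-Penrose pseudoinverse K = L^+.
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.pinv(L)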
Based on the above settings, we performed a comparison between AMP and GP-MTL with different
similarity parameters. We used the Gaussian kernel for the latent factor hr . The result is shown in
Fig. 3, which presents the validation error (MSE) against the size of the training data. We can see that,
for both "cossim" and "cosdis," AMP performed comparably well to the GP-MTL method and even
better than the GP-MTL method in some situations. Here it should be noted that AMP is much more
computationally efficient than GP-MTL despite its high predictive performance. This experimental
result justifies our theoretical analysis.
8 Conclusion
We have developed a convergence theory of the AMP method for nonparametric tensor learning.
The AMP method has been used by several authors in the literature, but its theoretical analysis has
not been addressed in the nonparametric setting. We showed that the AMP algorithm converges at a
linear rate as an optimization algorithm and achieves the minimax optimal statistical error if the initial
point is in the O(1)-neighborhood of the true function. We may use the Bayes estimator as a rough
initial solution, but it would be important future work to explore more sophisticated determination
of the initial solution.
Acknowledgment This work was partially supported by MEXT kakenhi (25730013, 25120012,
26280009, 15H01678 and 15H05707), JST-PRESTO and JST-CREST.
References
[1] A. Aswani. Low-rank approximation and completion of positive tensors. arXiv:1412.0620, 2014.
[2] M. T. Bahadori, Q. R. Yu, and Y. Liu. Fast multivariate spatio-temporal analysis via low rank tensor learning. In Advances in Neural Information Processing Systems 27.
[3] B. Barak and A. Moitra. Tensor prediction, Rademacher complexity and random 3-xor. arXiv:1501.06521, 2015.
[4] P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. The Annals of Statistics, 33:1487-1537, 2005.
[5] C. Bennett and R. Sharpley. Interpolation of Operators. Academic Press, Boston, 1988.
[6] S. Bhojanapalli and S. Sanghavi. A new sampling technique for tensors. arXiv:1502.05023, 2015.
[7] V.-G. Blanca, G.-S. Gabriel, and P.-M. Rafael. Effects of relevant contextual features in the performance of a restaurant recommender system. In Proceedings of 3rd Workshop on Context-Aware Recommender Systems, 2011.
[8] F. Fouss, A. Pirotte, J.-M. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Trans. on Knowl. and Data Eng., 19(3):355-369, Mar. 2007.
[9] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27:025010, 2011.
[10] F. L. Hitchcock. The expression of a tensor or a polyadic as a sum of products. Journal of Mathematics and Physics, 6:164-189, 1927.
[11] F. L. Hitchcock. Multiple invariants and generalized rank of a p-way matrix or tensor. Journal of Mathematics and Physics, 7:39-79, 1927.
[12] M. Imaizumi and K. Hayashi. Doubly decomposing nonparametric tensor regression. In International Conference on Machine Learning (ICML2016), page to appear, 2016.
[13] P. Jain and S. Oh. Provable tensor factorization with missing data. In Advances in Neural Information Processing Systems 27, pages 1431-1439. Curran Associates, Inc., 2014.
[14] H. Kanagawa, T. Suzuki, H. Kobayashi, N. Shimizu, and Y. Tagami. Gaussian process nonparametric tensor estimator and its minimax optimality. In International Conference on Machine Learning (ICML2016), pages 1632-1641, 2016.
[15] A. Karatzoglou, X. Amatriain, L. Baltrunas, and N. Oliver. Multiverse recommendation: N-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the 4th ACM Conference on Recommender Systems 2010, pages 79-86, 2010.
[16] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Rev., 51(3):455-500, Aug. 2009.
[17] M. Ledoux. The concentration of measure phenomenon. Number 89 in Mathematical Surveys and Monographs. American Mathematical Soc., 2005.
[18] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. In Proceedings of the 12th International Conference on Computer Vision (ICCV), pages 2114-2121, 2009.
[19] M. Mørup. Applications of tensor (multiway array) factorizations and decompositions in data mining. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1):24-40, 2011.
[20] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In Proceedings of the 30th International Conference on Machine Learning (ICML2013), volume 28 of JMLR Workshop and Conference Proceedings, pages 1444-1452, 2013.
[21] P. Shah, N. Rao, and G. Tang. Optimal low-rank tensor recovery from separable measurements: Four contractions suffice. arXiv:1505.04085, 2015.
[22] W. Shen and S. Ghosal. Adaptive Bayesian density regression for high-dimensional data. Bernoulli, 22(1):396-420, 02 2016.
[23] M. Signoretto, L. D. Lathauwer, and J. Suykens. Nuclear norms for tensors and their use for convex multilinear estimation. Technical Report 10-186, ESAT-SISTA, K.U.Leuven, 2010.
[24] M. Signoretto, L. D. Lathauwer, and J. A. K. Suykens. Learning tensors in reproducing kernel Hilbert spaces with multilinear spectral penalties. CoRR, abs/1310.4977, 2013.
[25] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[26] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Proceedings of the Annual Conference on Learning Theory, pages 79-93, 2009.
[27] W. Sun, Z. Wang, H. Liu, and G. Cheng. Non-convex statistical optimization for sparse tensor graphical model. In Advances in Neural Information Processing Systems, pages 1081-1089, 2015.
[28] T. Suzuki. Convergence rate of Bayesian tensor estimator and its minimax optimality. In Proceedings of the 32nd International Conference on Machine Learning (ICML2015), pages 1273-1282, 2015.
[29] R. Tomioka and T. Suzuki. Convex tensor decomposition via structured Schatten norm regularization. In Advances in Neural Information Processing Systems 26, pages 1331-1339, 2013.
[30] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. In Advances in Neural Information Processing Systems 24, pages 972-980, 2011.
[31] L. R. Tucker. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3):279-311, 1966.
[32] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[33] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York, 1996.
[34] K. Wimalawarne, M. Sugiyama, and R. Tomioka. Multitask learning meets tensor factorization: task imputation via convex optimization. In Advances in Neural Information Processing Systems 27, pages 2825-2833, 2014.
[35] Z. Xu, F. Yan, and Y. A. Qi. InfTucker: t-process based infinite tensor decomposition. CoRR, abs/1108.6296, 2011.
[36] Z. Zhang and S. Aeron. Exact tensor completion using t-SVD. arXiv:1502.04689, 2015.
[37] T. Zhao, Z. Wang, and H. Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems 28, pages 559-567. Curran Associates, Inc., 2015.
Global Regularization of Inverse Kinematics for Redundant Manipulators
David DeMers
Dept. of Computer Science & Engr.
Institute for Neural Computation
University of California, San Diego
La Jolla, CA 92093-0114
Kenneth Kreutz-Delgado
Dept. of Electrical & Computer Engr.
Institute for Neural Computation
University of California, San Diego
La Jolla, CA 92093-0407
Abstract
The inverse kinematics problem for redundant manipulators is ill-posed and
nonlinear. There are two fundamentally different issues which result in the need
for some form of regularization; the existence of multiple solution branches
(global ill-posedness) and the existence of excess degrees of freedom (local ill-posedness). For certain classes of manipulators, learning methods applied to
input-output data generated from the forward function can be used to globally
regularize the problem by partitioning the domain of the forward mapping into
a finite set of regions over which the inverse problem is well-posed. Local
regularization can be accomplished by an appropriate parameterization of the
redundancy consistently over each region. As a result, the ill-posed problem can
be transformed into a finite set of well-posed problems. Each can then be solved
separately to construct approximate direct inverse functions.
1 INTRODUCTION
The robot forward kinematics function maps a vector of joint variables to the end-effector
configuration space, or workspace, here assumed to be Euclidean. We denote this mapping
by $f(\cdot) : \Theta^n \to W^m \subseteq X^m$, $f(\theta) \mapsto x$, for $\theta \in \Theta^n$ (the input space or joint space)
and $x \in W^m$ (the workspace). When $m < n$, we say that the manipulator has redundant
degrees of freedom (dof).
The inverse kinematics problem is the following: given a desired workspace location $x$,
find joint variables $\theta$ such that $f(\theta) = x$. Even when the forward kinematics is known,
the inverse kinematics for a manipulator is not generically solvable in closed form (Craig,
1986). This problem is ill-posed¹ due to two separate phenomena. First, multiple solution
branches can exist (for both non-redundant as well as redundant manipulators). The second
source of ill-posedness arises because of the redundant dofs. Each of the inverse solution
branches consists of a submanifold of dimensionality equal to the number of redundant
dofs. Thus the inverse solution requires two regularizations: global regularization to select
a solution branch, and local regularization to resolve the redundancy. In this paper the
existence of at least one solution is assumed; that is, inverses will be sought only for points
in the reachable workspace, i.e., desired $x$ in the image of $f(\cdot)$.
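For concreteness, a minimal forward-kinematics sketch for a planar all-revolute 3R arm follows (our own illustration, not code from the paper; the link lengths match the example arm used in Section 5):

import numpy as np

def fk_planar_3r(theta, lengths=(5.0, 4.0, 3.0)):
    """Forward kinematics f : T^3 -> R^2 for a planar 3R arm."""
    angles = np.cumsum(theta)                      # absolute link orientations
    x = float(np.sum(np.array(lengths) * np.cos(angles)))
    y = float(np.sum(np.array(lengths) * np.sin(angles)))
    return np.array([x, y])                        # end-effector (x, y) position

print(fk_planar_3r([0.3, -0.5, 1.0]))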
Given input-output data generated from the kinematics mapping (pairs consisting of joint
variable values & corresponding end-effector location). can the inverse mapping be learned
without making any a priori regularizing assumptions or restrictions? We show that the
answer can be "yes". The approach taken towards the solution is based on the use of
learning methods to partition the data into groups such that the inverse kinematics problem.
when restricted to each grouP. is well-posed. after which a direct inverse function can be
approximated on each group.
A direct inverse function is desirable. For instance, a direct inverse is computable quickly;
if implemented by a feedforward network, one function evaluation is equivalent to
a single forward propagation. More importantly, theoretical results show that an algorithm
for tracking a cyclic path in the workspace will produce a cyclic trajectory of joint angles
if and only if it is equivalent to a direct inverse function (Baker, 1990). That is, inverse
functions are necessary to ensure that when following a closed loop, the arm configurations
which result in the same end-effector location will be the same.
Unfortunately, topological results show that a single global inverse function does not exist
for generic robot manipulators. However, a global topological analysis of the kinematics
function and the nature of the manifolds induced in the input space and workspace show
that for certain robot geometries the mapping may be expressed as the union of a finite set
of well-behaved local regions (Burdick, 1991). In this case, the redundancy takes the form
of a submanifold which can be parameterized (locally) consistently by, for example, the
use of topology-preserving neural networks.
2 TOPOLOGY AND ROBOT KINEMATICS
It is known that for certain robot geometries the input space can be partitioned into disjoint
regions which have the property that no more than one inverse solution branch lies within
any one of the regions (Burdick, 1988). We assume in the following that the manipulator in
question has such a geometry, and has all revolute joints. Thus $\Theta^n \cong T^n$, the n-torus. The
redundancy manifolds in this case have the topology of $T^{n-m}$, (n − m)-dimensional tori.
¹Ill-posedness can arise from having either too many or too few constraints to result in a unique
and valid solution. That is, an overconstrained system may be ill-posed and have no solutions; such
systems are typically solved by finding a least-squares or some such minimum-cost solution. An
underconstrained system may have multiple (possibly infinite) solutions. The inverse kinematics
problem for redundant manipulators is underconstrained.
For $\Theta^n$ a compact manifold of dimensionality $n$, $W^m$ a compact manifold of dimensionality
$m$, and $f$ a smooth map from $\Theta^n$ to $W^m$, let the differential $df$ be the map from the tangent
space of $\Theta^n$ at $\theta \in \Theta^n$ to the tangent space of $W^m$ at $f(\theta)$. The set of points in $\Theta^n$ which
map to $x \in W^m$ is the pre-image of $x$, denoted by $f^{-1}(x)$. The differential $df$ has a
natural representation given by an $m \times n$ Jacobian matrix whose elements consist of the
first partial derivatives of $f$ w.r.t. a basis of $\Theta^n$. Define $S$ as the set of critical points of $f$,
i.e., the set of all $\theta \in \Theta^n$ such that $df(\theta)$ has rank less than the dimensionality of
$W^m$. Elements of the image of $S$, $f(S)$, are called the critical values. The set $\mathcal{R} = W^m \setminus f(S)$
comprises the regular values of $f$. For $\theta \in \Theta^n$, if $\exists\,\theta^* \in f^{-1}(f(\theta))$ with $\theta^* \in S$, we call $\theta$ a
co-regular point of $f$.
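The rank condition can be checked numerically; the sketch below (our own, using central finite differences, not from the paper) flags critical points of any forward map f:

import numpy as np

def jacobian_fd(f, theta, eps=1e-6):
    """m x n Jacobian of f at theta via central finite differences."""
    theta = np.asarray(theta, dtype=float)
    f0 = np.asarray(f(theta))
    J = np.zeros((f0.size, theta.size))
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps
        J[:, i] = (np.asarray(f(theta + d)) - np.asarray(f(theta - d))) / (2 * eps)
    return J

def is_critical(f, theta, m, tol=1e-8):
    """True if rank df(theta) < m, the workspace dimensionality."""
    return np.linalg.matrix_rank(jacobian_fd(f, theta), tol=tol) < m

# e.g., the fully stretched planar 3R arm of the earlier sketch is singular:
# is_critical(fk_planar_3r, [0.0, 0.0, 0.0], m=2) -> True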
The kinematic mapping of certain classes of manipulators (with the geometry herein assumed) can be decomposed based on the co-regular surfaces which divide $\Theta^n$ into a finite
number of disjoint, connected regions, $C_i$. The image of each $C_i$ under $f$ is a connected
region in the workspace, $W_i$. We denote the kinematics mapping restricted to $C_i$ by $f_i$;
$f_i : C_i \to W_i$.
Locally, one inverse solution branch for a region in the workspace has the structure of
a product space. We conjecture that $(W_i, T^{n-m}, C_i, f|_{C_i})$ forms a locally trivial fiber
bundle² and that the $C_i$, therefore, form regions where the inverse is unique modulo the
redundancy.
Given a point in the workspace, $x$, for which a configuration is sought, global regularization requires choosing from among the multiple pre-image tori. Local regularization
(redundancy resolution) requires finding a location on the chosen torus. We would like to
effectively "mod out" the redundancy manifolds by constructing an indexed one-to-one,
invertible mapping from each pre-image manifold to a point in $X^m$, and to obtain a consistent representation of the manifold by constructing an invertible mapping from itself to
a set of $n - m$ "location" parameters.
3 GLOBAL REGULARIZATION
The existence of multiple solutions for even non-redundant manipulators poses difficult
problems. Usually (and often for plausible reasons) the manipulator's allowable configurations or task space is effectively constrained so that there exists only a single inverse solution
(Martinetz et al., 1990; Kuperstein, 1991). This approach regularizes the problem by allowing the existence of only one-to-one data. We seek to generalize to the multi-solution
case and to learn all of the possible solutions.
For an all-revolute manipulator there typically will be multiple pre-image tori for a particular point in the workspace. The topology of the pre-image solution branches will generally
be known from the type of geometry, although their number may not be obvious by inspection, and will usually be different for different regions of the workspace. An upper bound on
the number of inverse solutions is known (Burdick, 1988); consequently, the determination
of the number could be made by search for the best fit among the possibilities.
The sampling and clustering approach described in (DeMers & Kreutz-Delgado, 1992)
can be used to partition input-output data into disjoint pre-image sets. This approach
uses samples of the forward behavior to identify the sets in the input space which map to
²A fiber bundle is a four-tuple consisting of a base space, a fiber (here, $T^{n-m}$), a total space, and
a projection $p$ mapping the total space to the base space with certain properties (here, the projection
is equivalent to $f_i$ restricted to the total space). A locally trivial fiber bundle is one for which a
consistent parameterization of the fibers is possible.
Figure 1: The two pre-image manifolds for positioning the end-effector of the 3-R planar
manipulator at a specific (x, y) location.
within a small distance of a specific location in the workspace. The pre-image points will
lie on the disjoint pre-image manifolds. These manifolds will typically be separable by
clustering³. Figure 1 shows two views of the two redundancy, or "self-motion," manifolds
for a particular end-effector location of the 3-R planar arm. All of the input space points
shown position the arm's end-effector near the same (x, y) location. Note that the input
space is the 3-torus, $T^3$. In order to visualize the space, the torus is "sliced" along each
dimension. Thus opposite faces of the "cube" shown are identified with each other.
4 LOCAL REGULARIZATION
The inverse kinematics problem for manipulators with redundant dofs is usually solved
either by using differential methods, which attempt to exploit the redundancy by optimizing
a task-dependent objective function, or by using learning methods which regularize at
training time by adding constraints equal to the number of redundant dofs. The former may
be computationally inefficient in that iterative solution of derivatives or matrix inversions
are required, and may be unsatisfactory for real-time control. The latter is unsatisfying as
it eliminates the run-time dexterity available from the redundancy; that is, it imposes prior
constraints on the use of any extra dofs.
Although in practice numerical, differential methods are used for redundancy resolution,
it has recently been shown that simple recurrent neural networks can resolve the redundancy by optimization of certain side-constraints at run-time (Jordan & Rumelhart, 1992;
Kindermann & Linden, 1990). The differential methods have a number of desirable properties. In general it is possible to iterate in order to achieve a solution of arbitrary accuracy.
They also tend to be capable of handling very flexible constraints. Global regularization as
discussed above can be used to augment such methods. For example, once a choice of a
solution branch has been made, an initial starting location away from singularities can be
selected, and differential methods used to achieve an accurate solution on that branch.
³For end-effector locations near the co-regular values, the pre-image manifolds tend to merge.
This phenomenon can be identified by our methods.
Our work shows that construction of redundancy-parameterized approximations to direct
inverses is achievable. That is, the mapping from the workspace (augmented by
a parameterization of the redundancy) to the input space can be approximated. This local regularization is accomplished by parameterizing the pre-image solution branch tori.
Given (enough) samples of $\theta$ points paired with their $x$ image, a parameterization can be
discovered for each branch. The method used exploits the fact that neighborhoods within
each $C_i$ map to neighborhoods, and that neighborhoods within each $W_i$ have as pre-images
a finite number of solution branches.
First, all points in our sample which have their image near some initial point $x_0$ are
found. Pulling back to the input space by accessing the $\theta$ component of each of these
$(\theta, x)$ data pairs finds the points in the pre-image set of the neighborhood of $x_0$. Now,
because the topology of the pre-image set is known (here, the torus), a self-organizing
map of appropriate topology can be fit to the pre-image points in order to parameterize
this manifold. Neighboring tori have similar parameterizations; thus by repeating this
process for a point $x_1$ near $x_0$ and using the parameterization of the pre-image of $x_0$ as
initial conditions, a parameterization of the pre-image of $x_1$ qualitatively similar to that
for $x_0$ can be constructed efficiently. By stepping between such "query points" $x_i$, a set of
parameterizations can be obtained.
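A minimal sketch of this procedure follows (our own illustration; the paper uses the elastic net algorithm, for which this plain ring-topology self-organizing map is a stand-in):

import numpy as np

def preimage_samples(f, thetas, x0, tol=0.05):
    """Pull back: keep sampled joint vectors whose image lies near x0."""
    return np.array([th for th in thetas if np.linalg.norm(f(th) - x0) < tol])

def fit_ring_som(points, n_nodes=32, iters=2000, lr=0.2, sigma=3.0, seed=0):
    """Fit a closed 1-D self-organizing map (ring topology) to the points."""
    rng = np.random.default_rng(seed)
    nodes = points[rng.integers(len(points), size=n_nodes)].copy()
    idx = np.arange(n_nodes)
    for t in range(iters):
        p = points[rng.integers(len(points))]
        w = int(np.argmin(np.linalg.norm(nodes - p, axis=1)))       # winning node
        d = np.minimum(np.abs(idx - w), n_nodes - np.abs(idx - w))  # ring distance
        h = np.exp(-(d / sigma) ** 2) * lr * (1.0 - t / iters)
        nodes += h[:, None] * (p - nodes)
    return nodes  # ordered nodes parameterize the self-motion circle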
5 THE 3-R REDUNDANT PLANAR ARM
This approach can be used to provide a global and local regularization for a three-link
manipulator performing the task of positioning in the plane. For this manipulator, $f :
T^3 \to \mathbb{R}^2$. The map $f$ restricted to each connected region in the input space bounded by
the co-regular separating surfaces defines $f_i : T \times \mathbb{R} \times S^1 \to \mathbb{R} \times S^1$.
The pre-image of a point in the workspace thus consists of either one or two 1-tori (the
actual number can be no more than the number of inverse solutions for a non-redundant
manipulator of the same type (Burdick, 1988)). Each torus is the pre-image of one of the
restricted mappings $f_i$. The goal is to identify these tori and parameterize them. Figure 2
shows the input space and workspace of this arm, and their separating surfaces. These
partition the workspace into disjoint annular regions, and the input space into disjoint tubular
regions. The circles indicate workspace locations which can be reached in a kinematically
singular configuration. The inverse image of these circles forms the co-regular separating
surfaces in the input space. For the link lengths used here ($l_1 = 5$, $l_2 = 4$, $l_3 = 3$), there
is a single self-motion manifold for workspace locations in regions A and C, and two
self-motion manifolds for workspace locations in B and D. Figure 3 shows two views of
the data points near the pre-image manifolds for an end-effector location in region A, and
its parameterization by a self-organizing map (using the elastic net algorithm). Such a
parameterization can be made for various locations in the workspace. Inverse kinematics
can thus be computed by first locating the nearest parameterization network for a given
workspace position and then choosing a configuration on the manifold, which can be done
at run-time. Because the kinematics map is locally smooth, interpolation between networks
and nodes on the networks can be done for greater accuracy. For convenience, a node on
each torus was chosen as a canonical zero-point, and the remaining nodes assigned values
based on their normalized distance from this point. Therefore all parameter values are
scaled to be in the interval [0, 1].
For some end-effector locations, there are two pre-image manifolds. These first need to
be identified by a global partitioning, then the individual manifolds parameterized. The
Figure 2: The forward kinematics for a generic three-link planar manipulator. The separating surfaces in the input space do not depend on the value of joint angle 1; therefore
the input space is shown projected to the joint 2 - joint 3 space. All end-effector locations
inside regions A and C of the workspace have a single pre-image manifold in A and C,
respectively, of the input space. End-effector locations inside regions B and D of the
workspace have two pre-image manifolds, one in each of B and B' (resp. D and D') of the
input space.
manifolds may belong to one of a finite set of homotopy classes; that is, because they "live"
in an ambient space which is a torus, they may or may not wrap around one or more of the
dimensions of the torus. Unlike in Euclidean space, where there are only two possible one-dimensional manifolds, there are multiple topologically distinct types (homotopy classes)
of closed loops which can serve as self-motion manifolds. Fortunately, because physical
robots rarely have joints with unlimited range of motion, in practice the manifolds will
usually not have wraparound. However, we should like to be able to parameterize any
possibility. Appropriate choice of topology for a topology-preserving net results in an
effective parameterization. Figure 4 shows two views of a parameterization for one of the
self-motion manifolds shown above in Figure 1, which is the pre-image for an end-effector
location in region B of Figure 2.
6 DISCUSSION
The global regularization accomplished by the method described above partitions the original input/output data into sets for each of the distinct $C_i$ regions. The redundancy parameters,
$t$, obtained by local regularization can be used to augment this data, resulting in a transformation of the $(\theta, x)$ data into $(\theta_i, (x_i, t))$. Let $\tau : \theta \mapsto t$ be a function that computes a
parameter value for each $\theta$ in the input space. Let $\tilde{f}_i(\theta) = (f_i(\theta), \tau(\theta))$. By construction,
the regularized mapping $\tilde{f}_i : \theta \mapsto (x, t)$ is one-to-one and onto. Now, given examples
from a one-to-one mapping, the inverse map $\tilde{f}_i^{-1} : (x, t) \mapsto \theta$ can be directly approximated
by, e.g., a feedforward neural network.
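Given such one-to-one data, any regressor can fit the direct inverse; the sketch below (our own, using scikit-learn for brevity, with random placeholder data standing in for real $(x, t, \theta)$ samples) approximates $(x, t) \mapsto \theta$:

import numpy as np
from sklearn.neural_network import MLPRegressor

X_aug = np.random.rand(1000, 3)   # rows of (x, y, t): workspace location + parameter
Theta = np.random.rand(1000, 3)   # joint-angle targets (placeholders)

inverse_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
inverse_net.fit(X_aug, Theta)     # learns the direct inverse (x, t) -> theta
theta_hat = inverse_net.predict(X_aug[:1])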
Figure 3: The data points in the "self-motion" pre-image manifold of a point in the
workspace of the 3-R planar arm, and a closed, 1-D elastic network after adaptation to
them. This manifold is smoothly contractible to a point since it does not "wrap around"
any of the dimensions of $T^3$.
This method requires data sample sizes exponential in the number of degrees of freedom of
the manipulator and thus will not be adequate for large-dof "snake" manipulators. However,
practical industrial robots of 7-dof may be amenable to our technique, especially if, as is
common, it is designed with a separable wrist and is thus composable into a 4-dof redundant
positioner plus a 3-dof non-redundant orienter.
This work can also be used to augment the differential methods of redundancy resolution. An
approximate solution can be found extremely rapidly, and used to initialize gradient-based
methods, which can then iterate to achieve a highly accurate solution. Global decisions such
as choosing between multiple manifolds and identifying criteria for choosing locations on
the manifold can now be made at run-time. Computation of an approximate direct inverse
can then be made in constant time.
Acknowledgements
This work was supported in part by NSF Presidential Young Investigator award IRI-9057631 and Fellowships from the California Space Institute and the McDonnell-Pew
Center for Cognitive Neuroscience. The first author would like to thank the NIPS Foundation for providing student travel grants.
References
Daniel Baker (1990), "Some Topological Problems in Robotics", The Mathematical Intelligencer, Vol. 12, No. 1, pp. 66-76.
Joel Burdick (1988), "Kinematics and Design of Redundant Robot Manipulators", Stanford
Ph.D. Thesis, Dept. of Mechanical Engineering.
Joel Burdick (1991), "A Classification of 3R Regional Manipulator Singularities and Geometries", Proc. 1991 IEEE Intl. Conf. Robotics & Automation, Sacramento.
Figure 4: Two views of the same data points in one of the two "self-motion" pre-image
manifolds of a point in region B, and an elastic net after adaptation. It belongs to a different
homotopy class than that of Fig. 3 - it is not contractible to a point.
John Craig (1986), Introduction to Robotics.
David DeMers & Kenneth Kreutz-Delgado (1992), "Learning Global Direct Inverse Kinematics", in Moody, J. E., Hanson, S. J., and Lippmann, R. P., eds, Advances in Neural
Information Processing Systems 4, 589-594.
Michael I. Jordan & David Rumelhart (1992), "Forward Models: Supervised Learning with
a Distal Teacher", Cognitive Science 16, 307-354.
J. Kindermann & Alexander Linden (1990), "Inversion of Neural Networks by Gradient
Descent", J. Parallel Computing 14, 277-286.
Michael Kuperstein (1991), "INFANT Neural Controller for Adaptive Sensory-Motor
Control", Neural Networks, Vol. 4, pp. 131-145.
Thomas Martinetz, Helge Ritter, & Klaus Schulten (1990), "Three-Dimensional Neural
Networks for Learning Visuomotor Coordination of a Robot Arm", IEEE Trans. Neural
Networks, Vol. 1, No. 1.
Charles Nash & Siddhartha Sen (1983), Topology and Geometry for Physicists.
Philippe Wenger (1992), "On the Kinematics of Manipulators with General Geometry:
Application to the Feasibility Analysis of Continuous Trajectories", in M. Jamshidi, et al.,
eds, Robotics and Manufacturing: Recent Trends in Research, Education and Applications
4, 15-20 (ISRAM-92, Santa Fe).
Cooperative Inverse Reinforcement Learning
Dylan Hadfield-Menell*
Anca Dragan
Pieter Abbeel
Stuart Russell
Electrical Engineering and Computer Science
University of California at Berkeley
Berkeley, CA 94709
* {dhm, anca, pabbeel, russell}@cs.berkeley.edu
Abstract
For an autonomous system to be helpful to humans and to pose no unwarranted
risks, it needs to align its values with those of the humans in its environment in
such a way that its actions contribute to the maximization of value for the humans.
We propose a formal definition of the value alignment problem as cooperative
inverse reinforcement learning (CIRL). A CIRL problem is a cooperative, partialinformation game with two agents, human and robot; both are rewarded according
to the human's reward function, but the robot does not initially know what this
is. In contrast to classical IRL, where the human is assumed to act optimally in
isolation, optimal CIRL solutions produce behaviors such as active teaching, active
learning, and communicative actions that are more effective in achieving value
alignment. We show that computing optimal joint policies in CIRL games can be
reduced to solving a POMDP, prove that optimality in isolation is suboptimal in
CIRL, and derive an approximate CIRL algorithm.
1 Introduction
"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere
effectively . . . we had better be quite sure that the purpose put into the machine is the purpose which
we really desire." So wrote Norbert Wiener (1960) in one of the earliest explanations of the problems
that arise when a powerful autonomous system operates with an incorrect objective. This value
alignment problem is far from trivial. Humans are prone to mis-stating their objectives, which can
lead to unexpected implementations. In the myth of King Midas, the main character learns that
wishing for "everything he touches to turn to gold" leads to disaster. In a reinforcement learning
context, Russell & Norvig (2010) describe a seemingly reasonable, but incorrect, reward function for
a vacuum robot: if we reward the action of cleaning up dirt, the optimal policy causes the robot to
repeatedly dump and clean up the same dirt.
A solution to the value alignment problem has long-term implications for the future of AI and its
relationship to humanity (Bostrom, 2014) and short-term utility for the design of usable AI systems.
Giving robots the right objectives and enabling them to make the right trade-offs is crucial for
self-driving cars, personal assistants, and human?robot interaction more broadly.
The field of inverse reinforcement learning or IRL (Russell, 1998; Ng & Russell, 2000; Abbeel
& Ng, 2004) is certainly relevant to the value alignment problem. An IRL algorithm infers the
reward function of an agent from observations of the agent's behavior, which is assumed to be
optimal (or approximately so). One might imagine that IRL provides a simple solution to the value
alignment problem: the robot observes human behavior, learns the human reward function, and
behaves according to that function. This simple idea has two flaws. The first flaw is obvious: we
don't want the robot to adopt the human reward function as its own. For example, human behavior
(especially in the morning) often conveys a desire for coffee, and the robot can learn this with IRL,
but we don't want the robot to want coffee! This flaw is easily fixed: we need to formulate the value
alignment problem so that the robot always has the fixed objective of optimizing reward for the
human, and becomes better able to do so as it learns what the human reward function is.
The second flaw is less obvious, and less easy to fix. IRL assumes that observed behavior is optimal
in the sense that it accomplishes a given task efficiently. This precludes a variety of useful teaching
behaviors. For example, efficiently making a cup of coffee, while the robot is a passive observer, is an
inefficient way to teach a robot to get coffee. Instead, the human should perhaps explain the steps in
coffee preparation and show the robot where the backup coffee supplies are kept and what to do if the
coffee pot is left on the heating plate too long, while the robot might ask what the button with the
puffy steam symbol is for and try its hand at coffee making with guidance from the human, even if
the first results are undrinkable. None of these things fit in with the standard IRL framework.
Cooperative inverse reinforcement learning. We propose, therefore, that value alignment should
be formulated as a cooperative and interactive reward maximization process. More precisely, we
define a cooperative inverse reinforcement learning (CIRL) game as a two-player game of partial
information, in which the "human", H, knows the reward function (represented by a generalized
parameter θ), while the "robot", R, does not; the robot's payoff is exactly the human's actual reward.
Optimal solutions to this game maximize human reward; we show that solutions may involve active
instruction by the human and active learning by the robot.
Reduction to POMDP and Sufficient Statistics. As one might expect, the structure of CIRL games
is such that they admit more efficient solution algorithms than are possible for general partial-information games. Let $(\pi^H, \pi^R)$ be a pair of policies for human and robot, each depending, in
general, on the complete history of observations and actions. A policy pair yields an expected sum of
rewards for each player. CIRL games are cooperative, so there is a well-defined optimal policy pair
that maximizes value.² In Section 3 we reduce the problem of computing an optimal policy pair to the
solution of a (single-agent) POMDP. This shows that the robot's posterior over θ is a sufficient statistic,
in the sense that there are optimal policy pairs in which the robot's behavior depends only on this
statistic. Moreover, the complexity of solving the POMDP is exponentially lower than the NEXP-hard
bound that (Bernstein et al., 2000) obtained by reducing a CIRL game to a general Dec-POMDP.
Apprenticeship Learning and Suboptimality of IRL-Like Solutions. In Section 3.3 we model
apprenticeship learning (Abbeel & Ng, 2004) as a two-phase CIRL game. In the first phase, the
learning phase, both H and R can take actions and this lets R learn about θ. In the second phase,
the deployment phase, R uses what it learned to maximize reward (without supervision from H).
We show that classic IRL falls out as the best-response policy for R under the assumption that the
human's policy is "demonstration by expert" (DBE), i.e., acting optimally in isolation as if no robot
exists. But we show also that this DBE/IRL policy pair is not, in general, optimal: even if the robot
expects expert behavior, demonstrating expert behavior is not the best way to teach that algorithm.
We give an algorithm that approximately computes H's best response when R is running IRL under
the assumption that rewards are linear in θ and state features. Section 4 compares this best-response
policy with the DBE policy in an example game and provides empirical confirmation that the best-response policy, which turns out to "teach" R about the value landscape of the problem, is better than
DBE. Thus, designers of apprenticeship learning systems should expect that users will violate the
assumption of expert demonstrations in order to better communicate information about the objective.
2 Related Work
Our proposed model shares aspects with a variety of existing models. We divide the related work into
three categories: inverse reinforcement learning, optimal teaching, and principal-agent models.
Inverse Reinforcement Learning. Ng & Russell (2000) define inverse reinforcement learning
(IRL) as follows: "Given measurements of an [actor]'s behavior over time. . . . Determine the
reward function being optimized." The key assumption IRL makes is that the observed behavior is
optimal in the sense that the observed trajectory maximizes the sum of rewards. We call this the
demonstration-by-expert (DBE) assumption. One of our contributions is to prove that this may be
suboptimal behavior in a CIRL game, as H may choose to accept less reward on a particular action
in order to convey more information to R. In CIRL the DBE assumption prescribes a fixed policy
²A coordination problem of the type described in Boutilier (1999) arises if there are multiple optimal policy
pairs; we defer this issue to future work.
Figure 1: The difference between demonstration-by-expert and instructive demonstration in the
mobile robot navigation problem from Section 4. Left: The ground truth reward function. Lighter
grid cells indicates areas of higher reward. Middle: The demonstration trajectory generated by the
expert policy, superimposed on the maximum a-posteriori reward function the robot infers. The robot
successfully learns where the maximum reward is, but little else. Right: An instructive demonstration
generated by the algorithm in Section 3.4 superimposed on the maximum a-posteriori reward function
that the robot infers. This demonstration highlights both points of high reward and so the robot learns
a better estimate of the reward.
for H. As a result, many IRL algorithms can be derived as state estimation for a best response to
different $\pi^H$, where the state includes the unobserved reward parametrization θ.
Ng & Russell (2000), Abbeel & Ng (2004), and Ratliff et al. (2006) compute constraints that
characterize the set of reward functions so that the observed behavior maximizes reward. In general,
there will be many reward functions consistent with this constraint. They use a max-margin heuristic
to select a single reward function from this set as their estimate. In CIRL, the constraints they compute
characterize R's belief about θ under the DBE assumption.
Ramachandran & Amir (2007) and Ziebart et al. (2008) consider the case where $\pi^H$ is "noisily
expert," i.e., $\pi^H$ is a Boltzmann distribution where actions or trajectories are selected in proportion
to the exponent of their value. Ramachandran & Amir (2007) adopt a Bayesian approach and place
an explicit prior on rewards. Ziebart et al. (2008) places a prior on reward functions indirectly by
assuming a uniform prior over trajectories. In our model, these assumptions are variations of DBE
and both implement state estimation for a best response to the appropriate fixed H.
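Under such a Boltzmann model, the robot's state estimation is a Bayesian posterior update over θ; below is a minimal sketch for a discrete set of candidate reward parameters (our own illustration, not from the paper; beta is an assumed rationality coefficient):

import numpy as np

def boltzmann_update(prior, q_values, action, beta=1.0):
    """Posterior over theta after observing H take `action`.

    prior:    shape (num_thetas,), current belief over theta.
    q_values: shape (num_thetas, num_actions), Q(s, a) under each theta.
    """
    logits = beta * q_values
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)   # P(a | theta), softmax per theta
    posterior = prior * probs[:, action]
    return posterior / posterior.sum()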
Natarajan et al. (2010) introduce an extension to IRL where R observes multiple actors that cooperate
to maximize a common reward function. This is a different type of cooperation than we consider,
as the reward function is common knowledge and R is a passive observer. Waugh et al. (2011) and
Kuleshov & Schrijvers (2015) consider the problem of inferring payoffs from observed behavior in a
general (i.e., non-cooperative) game. It would be interesting to consider an
analogous extension to CIRL, akin to mechanism design, in which R tries to maximize collective
utility for a group of Hs that may have competing objectives.
Fern et al. (2014) consider a hidden-goal MDP, a special case of a POMDP where the goal is an
unobserved part of the state. This can be considered a special case of CIRL, where θ encodes a
particular goal state. The frameworks share the idea that R helps H. The key difference between the
models lies in the treatment of the human (the agent in their terminology). Fern et al. (2014) model
the human as part of the environment. In contrast, we treat H as an actor in a decision problem that
both actors collectively solve. This is crucial to modeling the human's incentive to teach.
Optimal Teaching. Because CIRL incentivizes the human to teach, as opposed to maximizing
reward in isolation, our work is related to optimal teaching: finding examples that optimally train
a learner (Balbach & Zeugmann, 2009; Goldman et al., 1993; Goldman & Kearns, 1995). The key
difference is that efficient learning is the objective of optimal teaching, while it emerges as a property
of optimal equilibrium behavior in CIRL.
Cakmak & Lopes (2012) consider an application of optimal teaching where the goal is to teach the
learner the reward function for an MDP. The teacher gets to pick initial states from which an expert
executes the reward-maximizing trajectory. The learner uses IRL to infer the reward function, and
the teacher picks initial states to minimize the learner's uncertainty. In CIRL, this approach can be
characterized as an approximate algorithm for H that greedily minimizes the entropy of R's belief.
Beyond teaching, several models focus on taking actions that convey some underlying state, not
necessarily a reward function. Examples include finding a motion that best communicates an agent's
intention (Dragan & Srinivasa, 2013), or finding a natural language utterance that best communicates
a particular grounding (Golland et al., 2010). All of these approaches model the observer's inference
process and compute actions (motion or speech) that maximize the probability an observer infers the
correct hypothesis or goal. Our approximate solution to CIRL is analogous to these approaches, in
that we compute actions that are informative of the correct reward function.
Principal-agent models. Value alignment problems are not intrinsic to artificial agents. Kerr
(1975) describes a wide variety of misaligned incentives in the aptly titled "On the folly of rewarding
A, while hoping for B." In economics, this is known as the principal-agent problem: the principal
(e.g., the employer) specifies incentives so that an agent (e.g., the employee) maximizes the principal?s
profit (Jensen & Meckling, 1976).
Principal-agent models study the problem of generating appropriate incentives in a non-cooperative
setting with asymmetric information. In this setting, misalignment arises because the agents that
economists model are people and intrinsically have their own desires. In AI, misalignment arises
entirely from the information asymmetry between the principal and the agent; if we could characterize
the correct reward function, we could program it into an artificial agent. Gibbons (1998) provides a
useful survey of principal-agent models and their applications.
3 Cooperative Inverse Reinforcement Learning
This section formulates CIRL as a two-player Markov game with identical payoffs, reduces the
problem of computing an optimal policy pair for a CIRL game to solving a POMDP, and characterizes
apprenticeship learning as a subclass of CIRL games.
3.1 CIRL Formulation
Definition 1. A cooperative inverse reinforcement learning (CIRL) game M is a two-player Markov
game with identical payoffs between a human or principal, H, and a robot or agent, R. The
game is described by a tuple, $M = \langle S, \{A^H, A^R\}, T(\cdot \mid \cdot, \cdot, \cdot), \{\Theta, R(\cdot, \cdot, \cdot; \theta)\}, P_0(\cdot, \cdot), \gamma \rangle$, with the
following definitions:
S a set of world states: $s \in S$.
$A^H$ a set of actions for H: $a^H \in A^H$.
$A^R$ a set of actions for R: $a^R \in A^R$.
$T(\cdot \mid \cdot, \cdot, \cdot)$ a conditional distribution on the next world state, given previous state and action for
both agents: $T(s' \mid s, a^H, a^R)$.
$\Theta$ a set of possible static reward parameters, only observed by H: $\theta \in \Theta$.
$R(\cdot, \cdot, \cdot; \theta)$ a parameterized reward function that maps world states, joint actions, and reward
parameters to real numbers: $R : S \times A^H \times A^R \times \Theta \to \mathbb{R}$.
$P_0(\cdot, \cdot)$ a distribution over the initial state, represented as tuples: $P_0(s_0, \theta)$.
$\gamma$ a discount factor: $\gamma \in [0, 1]$.
We write the reward for a state-parameter pair as R(s, a^H, a^R; θ) to distinguish the static reward
parameters θ from the changing world state s. The game proceeds as follows. First, the initial
state, a tuple (s, θ), is sampled from P₀. H observes θ, but R does not. This observation model
captures the notion that only the human knows the reward function, while both actors know a prior
distribution over possible reward functions. At each timestep t, H and R observe the current state s_t
and select their actions a^H_t, a^R_t. Both actors receive reward r_t = R(s_t, a^H_t, a^R_t; θ) and observe
each other's action selection. A state for the next timestep is sampled from the transition distribution,
s_{t+1} ∼ P_T(s′|s_t, a^H_t, a^R_t), and the process repeats.
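To make the turn structure concrete, the following is a minimal sketch of rolling out one episode under Definition 1 (Python; the interface M.sample_initial, M.reward, M.transition and the policy signatures are our own illustrative assumptions, not notation from the paper):

    def play_cirl_episode(M, pi_H, pi_R, horizon):
        # Roll out one episode of a CIRL game; H observes theta, R does not.
        s, theta = M.sample_initial()           # (s0, theta) ~ P0
        history = []                            # shared observation history
        ret, disc = 0.0, 1.0
        for t in range(horizon):
            a_H = pi_H(history, s, theta)       # H's policy may condition on theta
            a_R = pi_R(history, s)              # R's policy may not
            r = M.reward(s, a_H, a_R, theta)    # identical payoff for both actors
            ret += disc * r
            disc *= M.gamma
            history.append((a_H, a_R, s))       # each observes the other's action
            s = M.transition(s, a_H, a_R)       # s' ~ T(.|s, aH, aR)
        return ret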
Behavior in a CIRL game is defined by a pair of policies, (π^H, π^R), that determine action selection
for H and R respectively. In general, these policies can be arbitrary functions of their observation
histories: π^H : [A^H × A^R × S]* × Θ → A^H, π^R : [A^H × A^R × S]* → A^R. The optimal joint
policy is the policy that maximizes value. The value of a state is the expected sum of discounted
rewards under the initial distribution of reward parameters and world states.
Remark 1. A key property of CIRL is that the human and the robot get rewards determined by the
same reward function. This incentivizes the human to teach and the robot to learn without explicitly
encoding these as objectives of the actors.
3.2 Structural Results for Computing Optimal Policy Pairs
The analogue in CIRL to computing an optimal policy for an MDP is the problem of computing an
optimal policy pair. This is a pair of policies that maximizes the expected sum of discounted rewards.
This is not the same as "solving" a CIRL game, as a real world implementation of a CIRL agent must
account for coordination problems and strategic uncertainty (Boutilier, 1999). The optimal policy pair
represents the best H and R can do if they can coordinate perfectly before H observes ?. Computing
an optimal joint policy for a cooperative game is the solution to a decentralized-partially observed
Markov decision process (Dec-POMDP). Unfortunately, Dec-POMDPs are NEXP-complete (Bernstein
et al., 2000) so general Dec-POMDP algorithms have a computational complexity that is doubly
exponential. Fortunately, CIRL games have special structure that reduces this complexity.
Nayyar et al. (2013) shows that a Dec-POMDP can be reduced to a coordination-POMDP. The actor in
this POMDP is a coordinator that observes all common observations and specifies a policy for each
actor. These policies map each actor's private information to an action. The structure of a CIRL game
implies that the private information is limited to H?s initial observation of ?. This allows the reduction
to a coordination-POMDP to preserve the size of the (hidden) state space, making the problem easier.
Theorem 1. Let M be an arbitrary CIRL game with state space S and reward space Θ. There exists
a (single-actor) POMDP M_C with (hidden) state space S_C such that |S_C| = |S| × |Θ| and, for any
policy pair in M, there is a policy in M_C that achieves the same sum of discounted rewards.
Theorem proofs can be found in the supplementary material. An immediate consequence of this
result is that R's belief about θ is a sufficient statistic for optimal behavior.
Corollary 1. Let M be a CIRL game. There exists an optimal policy pair (π^H*, π^R*) that only
depends on the current state and R's belief.
Remark 2. In a general Dec-POMDP, the hidden state for the coordinator-POMDP includes each
actor's history of observations. In CIRL, θ is the only private information so we get an exponential
decrease in the complexity of the reduced problem. This allows one to apply general POMDP
algorithms to compute optimal joint policies in CIRL.
It is important to note that the reduced problem may still be very challenging. POMDPs are difficult
in their own right and the reduced problem still has a much larger action space. That being said,
this reduction is still useful in that it characterizes optimal joint policy computation for CIRL as
significantly easier than Dec-POMDPs. Furthermore, this theorem can be used to justify approximate
methods (e.g., iterated best response) that only depend on R's belief state.
3.3 Apprenticeship Learning as a Subclass of CIRL Games
A common paradigm for robot learning from humans is apprenticeship learning. In this paradigm,
a human gives demonstrations to a robot of a sample task and the robot is asked to imitate it in a
subsequent task. In what follows, we formulate apprenticeship learning as turn-based CIRL with a
learning phase and a deployment phase. We characterize IRL as the best response (i.e., the policy
that maximizes reward given a fixed policy for the other player) to a demonstration-by-expert policy
for H. We also show that this policy is, in general, not part of an optimal joint policy and so IRL is
generally a suboptimal approach to apprenticeship learning.
Definition 2. (ACIRL) An apprenticeship cooperative inverse reinforcement learning (ACIRL) game
is a turn-based CIRL game with two phases: a learning phase where the human and the robot take
turns acting, and a deployment phase, where the robot acts independently.
Example. Consider an example apprenticeship task where R needs to help H make office supplies.
H and R can make paperclips and staples and the unobserved θ describes H's preference for paperclips
vs staples. We model the problem as an ACIRL game in which the learning and deployment phase
each consist of an individual action. The world state in this problem is a tuple (p_s, q_s, t) where p_s
and q_s respectively represent the number of paperclips and staples H owns. t is the round number.
An action is a tuple (p_a, q_a) that produces p_a paperclips and q_a staples. The human can make 2
items total: A^H = {(0, 2), (1, 1), (2, 0)}. The robot has different capabilities. It can make 50
units of each item or it can choose to make 90 of a single item: A^R = {(0, 90), (50, 50), (90, 0)}.
We let Θ = [0, 1] and define R so that θ indicates the relative preference between paperclips and
staples: R(s, (p_a, q_a); θ) = θ p_a + (1 − θ) q_a. R's action is ignored when t = 0 and H's is ignored
when t = 1. At t = 2, the game is over, so the game transitions to a sink state, (0, 0, 2).
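The pieces of this example are small enough to write down directly; the following sketch (Python, with names of our choosing) encodes the two action sets and the reward R(s, (p_a, q_a); θ) = θ p_a + (1 − θ) q_a:

    A_H = [(0, 2), (1, 1), (2, 0)]      # human makes 2 items per turn
    A_R = [(0, 90), (50, 50), (90, 0)]  # robot: 50 of each, or 90 of one item

    def reward(action, theta):
        # R(s, (pa, qa); theta) = theta * pa + (1 - theta) * qa
        pa, qa = action
        return theta * pa + (1 - theta) * qa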
Deployment phase - maximize mean reward estimate. It is simplest to analyze the deployment
phase first. R is the only actor in this phase so it gets no more observations of its reward. We have
shown that R's belief about θ is a sufficient statistic for the optimal policy. This belief about θ induces
a distribution over MDPs. A straightforward extension of a result due to Ramachandran & Amir
(2007) shows that R's optimal deployment policy maximizes reward for the mean reward function.
Theorem 2. Let M be an ACIRL game. In the deployment phase, the optimal policy for R maximizes
reward in the MDP induced by the mean θ from R's belief.
In our example, suppose that π^H selects (0, 2) if θ ∈ [0, 1/3), (1, 1) if θ ∈ [1/3, 2/3] and (2, 0) otherwise.
R begins with a uniform prior on θ so observing, e.g., a^H = (0, 2) leads to a posterior distribution
that is uniform on [0, 1/3). Theorem 2 shows that the optimal action maximizes reward for the mean θ
so an optimal R behaves as though θ = 1/6 during the deployment phase.
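A minimal sketch of this deployment rule for the example above (Python, reusing reward from the sketch earlier; the sample-based posterior representation is our assumption):

    def deployment_action(theta_samples, actions):
        # Theorem 2: act optimally for the mean of R's belief over theta.
        theta_mean = sum(theta_samples) / len(theta_samples)
        return max(actions, key=lambda a: reward(a, theta_mean))

    # A posterior uniform on [0, 1/3) has mean 1/6, so R selects
    # argmax over A_R of reward(a, 1/6), which is (0, 90): mostly staples.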
Learning phase - expert demonstrations are not optimal. A wide variety of apprenticeship
learning approaches assume that demonstrations are given by an expert. We say that H satisfies the
demonstration-by-expert (DBE) assumption in ACIRL if she greedily maximizes immediate reward
on her turn. This is an "expert" demonstration because it demonstrates a reward maximizing action
but does not account for that action's impact on R's belief. We let π^E represent the DBE policy.
Theorem 2 enables us to characterize the best response for R when π^H = π^E: use IRL to compute
the posterior over θ during the learning phase and then act to maximize reward under the mean θ in
the deployment phase. We can also analyze the DBE assumption itself. In particular, we show that
π^E is not H's best response when π^R is a best response to π^E.
Theorem 3. There exist ACIRL games where the best response for H to π^R violates the expert
demonstrator assumption. In other words, if br(π) is the best response to π, then br(br(π^E)) ≠ π^E.
The supplementary material proves this theorem by computing the optimal equilibrium for our
example. In that equilibrium, H selects (1, 1) if θ ∈ [41/92, 51/92]. In contrast, π^E only chooses (1, 1) if
θ = 0.5. The change arises because there are situations (e.g., θ = 0.49) where the immediate loss of
reward to H is worth the improvement in R's estimate of θ.
Remark 3. We should expect experienced users of apprenticeship learning systems to present
demonstrations optimized for fast learning rather than demonstrations that maximize reward.
Crucially, the demonstrator is incentivized to deviate from R's assumptions. This has implications
for the design and analysis of apprenticeship systems in robotics. Inaccurate assumptions about user
behavior are notorious for exposing bugs in software systems (see, e.g., Leveson & Turner (1993)).
3.4 Generating Instructive Demonstrations
Now, we consider the problem of computing H's best response when R uses IRL as a state estimator.
For our toy example, we computed solutions exhaustively; for realistic problems we need a more
efficient approach. Section 3.2 shows that this can be reduced to a POMDP where the state is a
tuple of world state, reward parameters, and R's belief. While this is easier than solving a general
Dec-POMDP, it is a computational challenge. If we restrict our attention to the case of linear reward
functions we can develop an efficient algorithm to compute an approximate best response.
Specifically, we consider the case where the reward for a state (s, θ) is defined as a linear combination
of state features for some feature function φ: R(s, a^H, a^R; θ) = φ(s)^⊤ θ. Standard results from the
IRL literature show that policies with the same expected feature counts have the same value (Abbeel
& Ng, 2004). Combined with Theorem 2, this implies that the optimal π^R under the DBE assumption
computes a policy that matches the observed feature counts from the learning phase.
This suggests a simple approximation scheme. To compute a demonstration trajectory τ^H, first
compute the feature counts R would observe in expectation from the true θ and then select actions
that maximize similarity to these target features. If φ̃ are the expected feature counts induced by θ
then this scheme amounts to the following decision rule:
τ^H ∈ argmax_τ φ(τ)^⊤ θ − λ ||φ̃ − φ(τ)||².   (1)
This rule selects a trajectory that trades off between the sum of rewards φ(τ)^⊤ θ and the feature
dissimilarity ||φ̃ − φ(τ)||². Note that this is generally distinct from the action selected by the
demonstration-by-expert policy. The goal is to match the expected sum of features under a distribution
of trajectories with the sum of features from a single trajectory. The correct measure of feature
[Figure 2 panels: Regret, KL, and ||θ_GT − θ̂||² for br vs π^E with num-features = 3 and num-features = 10; right panel: regret of br against η from 10⁻³ to 10¹.]
Figure 2: Left, Middle: Comparison of "expert" demonstration (π^E) with "instructive" demonstration (br).
Lower numbers are better. Using the best response causes R to infer a better distribution over θ so it does a
better job of maximizing reward. Right: The regret of the instructive demonstration policy as a function of how
optimal R expects H to be. η = 0 corresponds to a robot that expects purely random behavior and η = ∞
corresponds to a robot that expects optimal behavior. Regret is minimized for an intermediate value of η: if η is
too small, then R learns nothing from its observations; if η is too large, then R expects many values of θ to lead
to the same trajectory so H has no way to differentiate those reward functions.
similarity is regret: the difference between the reward R would collect if it knew the true θ and the
reward R actually collects using the inferred θ̂. Computing this similarity is expensive, so we use an
ℓ2 norm as a proxy measure of similarity.
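For illustration, decision rule (1) can be sketched as a search over an explicit candidate set (Python; the enumeration of candidate trajectories and the feature map phi are assumptions for the sketch, not part of the paper):

    import numpy as np

    def best_response_demo(candidates, phi, theta, phi_tilde, lam):
        # tau_H = argmax_tau  phi(tau) . theta  -  lam * ||phi_tilde - phi(tau)||^2
        def score(tau):
            f = phi(tau)
            return float(f @ theta) - lam * float(np.sum((phi_tilde - f) ** 2))
        return max(candidates, key=score)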
4 Experiments
4.1 Cooperative Learning for Mobile Robot Navigation
Our experimental domain is a 2D navigation problem on a discrete grid. In the learning phase of
the game, H teleoperates a trajectory while R observes. In the deployment phase, R is placed in a
random state and given control of the robot. We use a finite horizon H, and let the first H/2 timesteps
be the learning phase. There are N_φ state features defined as radial basis functions where the centers
are common knowledge. Rewards are linear in these features and θ. The initial world state is in the
middle of the map. We use a uniform distribution on [−1, 1]^{N_φ} for the prior on θ. Actions move in
one of the four cardinal directions {N, S, E, W} and there is an additional no-op that each actor
executes deterministically on the other agent's turn.
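For concreteness, a sketch of such radial basis features (Python; the Gaussian form and bandwidth are illustrative assumptions, as the paper does not specify them):

    import numpy as np

    def rbf_features(state, centers, bandwidth=1.0):
        # phi_i(s) = exp(-||s - c_i||^2 / (2 * bandwidth^2)); reward is phi(s) . theta
        s = np.asarray(state, dtype=float)
        d2 = np.sum((np.asarray(centers, dtype=float) - s) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * bandwidth ** 2))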
Figure 1 shows an example comparison between demonstration-by-expert and the approximate best
response policy in Section 3.4. The leftmost image is the ground truth reward function. Next to
it are demonstration trajectories produced by these two policies. Each path is superimposed on the
maximum a-posteriori reward function the robot infers from the demonstration. We can see that the
demonstration-by-expert policy immediately goes to the highest reward and stays there. In contrast,
the best response policy moves to both areas of high reward. The reward function the robot
infers from the best response demonstration is much more representative of the true reward function,
when compared with the reward function it infers from demonstration-by-expert.
4.2 Demonstration-by-Expert vs Best Responder
Hypothesis. When R plays an IRL algorithm that matches features, H prefers the best response
policy from Section 3.4 to π^E: the best response policy will significantly outperform the DBE policy.
Manipulated Variables. Our experiment consists of 2 factors: H-policy and num-features. We
make the assumption that R uses an IRL algorithm to compute its estimate of θ during learning and maximizes reward under this estimate during deployment. We use Maximum-Entropy
IRL (Ziebart et al., 2008) to implement R's policy. H-policy varies H's strategy π^H and has two
levels: demonstration-by-expert (π^E) and best-responder (br). In the π^E level H maximizes reward
during the demonstration. In the br level H uses the approximate algorithm from Section 3.4 to
compute an approximate best response to π^R. The trade-off between reward and communication λ is
set by cross-validation before the game begins. The num-features factor varies the dimensionality of
θ across two levels: 3 features and 10 features. We do this to test whether and how the difference
between experts and best-responders is affected by dimensionality. We use a factorial design that
leads to 4 distinct conditions. We test each condition against a random sample of N = 500 different
reward parameters. We use a within-subjects design with respect to the H-policy factor so the
same reward parameters are tested for π^E and br.
Dependent Measures. We use the regret with respect to a fully-observed setting where the robot
knows the ground truth θ as a measure of performance. We let θ̂ be the robot's estimate of the reward
parameters and let θ_GT be the ground truth reward parameters. The primary measure is the regret of
R's policy: the difference between the value of the policy that maximizes the inferred reward θ̂ and
the value of the policy that maximizes the true reward θ_GT. We also use two secondary measures.
The first is the KL-divergence between the maximum-entropy trajectory distribution induced by θ̂
and the maximum-entropy trajectory distribution induced by θ_GT. Finally, we use the ℓ2-norm between
the vector of rewards defined by θ̂ and the vector induced by θ_GT.
Results. There was relatively little correlation between the measures (Cronbach's α of .47), so
we ran a factorial repeated measures ANOVA for each measure. Across all measures, we found a
significant effect for H-policy, with br outperforming π^E on all measures as we hypothesized (all
with F > 962, p < .0001). We did find an interaction effect with num-features for KL-divergence
and the ℓ2-norm of the reward vector but post-hoc Tukey HSD showed br to always outperform π^E.
The interaction effect arises because the gap between the two levels of H-policy is larger with fewer
reward parameters; we interpret this as evidence that num-features = 3 is an easier teaching problem
for H. Figure 2 (Left, Middle) shows the dependent measures from our experiment.
4.3 Varying R's Expectations
Maximum-Entropy IRL includes a free parameter η that controls how optimal R expects H to behave.
If η = 0, R will update its belief as if H's observed behavior is independent of her preferences θ. If
η = ∞, R will update its belief as if H's behavior is exactly optimal. We ran a followup experiment
to determine how varying η changes the regret of the br policy.
Changing η changes the forward model in R's belief update: the mapping R hypothesizes between
a given reward parameter θ and the observed feature counts φ̃. This mapping is many-to-one for
extreme values of η. η → 0 means that all values of θ lead to the same expected feature counts
because trajectories are chosen uniformly at random. Alternatively, η ≫ 0 means that almost all
probability mass falls on the optimal trajectory and many values of θ will lead to the same optimal
trajectory. This suggests that it is easier for H to differentiate different values of θ if R assumes she
is noisily optimal, but only up until a maximum noise level. Figure 2 plots regret as a function of η
and supports this analysis: H has less regret for intermediate values of η.
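The forward model being varied here can be sketched as a Boltzmann-rational likelihood over trajectories, a standard reading of Maximum-Entropy IRL (Python; the discretization of θ and the trajectory enumeration are our assumptions):

    import numpy as np

    def belief_update(prior, thetas, traj_features, observed_feat, eta):
        # P(theta | tau) propto prior(theta) * exp(eta * theta . phi(tau)) / Z(theta),
        # where Z(theta) normalizes over the candidate trajectories R considers.
        thetas = np.asarray(thetas)                           # (n_theta, d)
        logits = eta * np.asarray(traj_features) @ thetas.T   # (n_traj, n_theta)
        log_Z = np.logaddexp.reduce(logits, axis=0)           # per-theta normalizer
        log_lik = eta * np.asarray(observed_feat) @ thetas.T - log_Z
        post = np.asarray(prior) * np.exp(log_lik - log_lik.max())
        return post / post.sum()

With eta = 0 the likelihood is flat and the posterior equals the prior (R learns nothing); with large eta many θ values map to the same optimal trajectory, matching the behavior described above.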
5 Conclusion and Future Work
In this work, we presented a game-theoretic model for cooperative learning, CIRL. Key to this model
is that the robot knows that it is in a shared environment and is attempting to maximize the human's
reward (as opposed to estimating the human's reward function and adopting it as its own). This leads
to cooperative learning behavior and provides a framework in which to design HRI algorithms and
analyze the incentives of both actors in a reward learning environment.
We reduced the problem of computing an optimal policy pair to solving a POMDP. This is a useful
theoretical tool and can be used to design new algorithms, but it is clear that optimal policy pairs
are only part of the story. In particular, when it performs a centralized computation, the reduction
assumes that we can effectively program both actors to follow a set coordination policy. This is
clearly infeasible in reality, although it may nonetheless be helpful in training humans to be better
teachers. An important avenue for future research will be to consider the coordination problem: the
process by which two independent actors arrive at policies that are mutual best responses. Returning
to Wiener's warning, we believe that the best solution is not to put a specific purpose into the machine
at all, but instead to design machines that provably converge to the right purpose as they go along.
Acknowledgments
This work was supported by the DARPA Simplifying Complexity in Scientific Discovery (SIMPLEX)
program, the Berkeley Deep Drive Center, the Center for Human Compatible AI, the Future of Life
Institute, and the Defense Sciences Office contract N66001-15-2-4048. Dylan Hadfield-Menell is
also supported by a NSF Graduate Research Fellowship.
References
Abbeel, P and Ng, A. Apprenticeship learning via inverse reinforcement learning. In ICML, 2004.
Balbach, F and Zeugmann, T. Recent developments in algorithmic teaching. In Language and Automata Theory and Applications. Springer, 2009.
Bernstein, D, Zilberstein, S, and Immerman, N. The complexity of decentralized control of Markov decision processes. In UAI, 2000.
Bostrom, N. Superintelligence: Paths, dangers, strategies. Oxford, 2014.
Boutilier, Craig. Sequential optimality and coordination in multiagent systems. In IJCAI, volume 99, pp. 478–485, 1999.
Cakmak, M and Lopes, M. Algorithmic and human teaching of sequential decision tasks. In AAAI, 2012.
Dragan, A and Srinivasa, S. Generating legible motion. In Robotics: Science and Systems, 2013.
Fern, A, Natarajan, S, Judah, K, and Tadepalli, P. A decision-theoretic model of assistance. JAIR, 50(1):71–104, 2014.
Gibbons, R. Incentives in organizations. Technical report, National Bureau of Economic Research, 1998.
Goldman, S and Kearns, M. On the complexity of teaching. Journal of Computer and System Sciences, 50(1):20–31, 1995.
Goldman, S, Rivest, R, and Schapire, R. Learning binary relations and total orders. SIAM Journal on Computing, 22(5):1006–1034, 1993.
Golland, D, Liang, P, and Klein, D. A game-theoretic approach to generating spatial descriptions. In EMNLP, pp. 410–419, 2010.
Jensen, M and Meckling, W. Theory of the firm: Managerial behavior, agency costs and ownership structure. Journal of Financial Economics, 3(4):305–360, 1976.
Kerr, S. On the folly of rewarding A, while hoping for B. Academy of Management Journal, 18(4):769–783, 1975.
Kuleshov, V and Schrijvers, O. Inverse game theory. Web and Internet Economics, 2015.
Leveson, N and Turner, C. An investigation of the Therac-25 accidents. IEEE Computer, 26(7):18–41, 1993.
Natarajan, S, Kunapuli, G, Judah, K, Tadepalli, P, Kersting, K, and Shavlik, J. Multi-agent inverse reinforcement learning. In Int'l Conference on Machine Learning and Applications, 2010.
Nayyar, A, Mahajan, A, and Teneketzis, D. Decentralized stochastic control with partial history sharing: A common information approach. IEEE Transactions on Automatic Control, 58(7):1644–1658, 2013.
Ng, A and Russell, S. Algorithms for inverse reinforcement learning. In ICML, 2000.
Ramachandran, D and Amir, E. Bayesian inverse reinforcement learning. In IJCAI, 2007.
Ratliff, N, Bagnell, J, and Zinkevich, M. Maximum margin planning. In ICML, 2006.
Russell, S. and Norvig, P. Artificial Intelligence. Pearson, 2010.
Russell, Stuart J. Learning agents for uncertain environments (extended abstract). In COLT, 1998.
Waugh, K, Ziebart, B, and Bagnell, J. Computational rationalization: The inverse equilibrium problem. In ICML, 2011.
Wiener, N. Some moral and technical consequences of automation. Science, 131, 1960.
Ziebart, B, Maas, A, Bagnell, J, and Dey, A. Maximum entropy inverse reinforcement learning. In AAAI, 2008.
5,993 | 6,421 | Bayesian Optimization for Probabilistic Programs
Tom Rainforth† Tuan Anh Le† Jan-Willem van de Meent‡ Michael A. Osborne† Frank Wood†
† Department of Engineering Science, University of Oxford
‡ College of Computer and Information Science, Northeastern University
{twgr,tuananh,mosb,fwood}@robots.ox.ac.uk, [email protected]
Abstract
We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables. By using a series of code
transformations, the evidence of any probabilistic program, and therefore of any
graphical model, can be optimized with respect to an arbitrary subset of its sampled
variables. To carry out this optimization, we develop the first Bayesian optimization
package to directly exploit the source code of its target, leading to innovations in
problem-independent hyperpriors, unbounded optimization, and implicit constraint
satisfaction; delivering significant performance improvements over prominent existing packages. We present applications of our method to a number of tasks including
engineering design and parameter optimization.
1 Introduction
Probabilistic programming systems (PPS) allow probabilistic models to be represented in the form
of a generative model and statements for conditioning on data [4, 9, 10, 16, 17, 29]. Their core
philosophy is to decouple model specification and inference, the former corresponding to the user-specified program code and the latter to an inference engine capable of operating on arbitrary
programs. Removing the need for users to write inference algorithms significantly reduces the burden
of developing new models and makes effective statistical methods accessible to non-experts.
Although significant progress has been made on the problem of general purpose inference of program
variables, less attention has been given to their optimization. Optimization is an essential tool for
effective machine learning, necessary when the user requires a single estimate. It also often forms a
tractable alternative when full inference is infeasible [18]. Moreover, coincident optimization and
inference is often required, corresponding to a marginal maximum a posteriori (MMAP) setting
where one wishes to maximize some variables, while marginalizing out others. Examples of MMAP
problems include hyperparameter optimization, expectation maximization, and policy search [27].
In this paper we develop the first system that extends probabilistic programming (PP) to this more
general MMAP framework, wherein the user specifies a model in the same manner as existing
systems, but then selects some subset of the sampled variables in the program to be optimized, with
the rest marginalized out using existing inference algorithms. The optimization query we introduce
can be implemented and utilized in any PPS that supports an inference method returning a marginal
likelihood estimate. This framework increases the scope of models that can be expressed in PPS and
gives additional flexibility in the outputs a user can request from the program.
MMAP estimation is difficult as it corresponds to the optimization of an intractable integral, such that
the optimization target is expensive to evaluate and gives noisy results. Current PPS inference engines
are typically unsuited to such settings. We therefore introduce BOPP1 (Bayesian optimization for
probabilistic programs) which couples existing inference algorithms from PPS, like Anglican [29],
with a new Gaussian process (GP) [22] based Bayesian optimization (BO) [11, 15, 20, 23] package.
1
Code available at http://www.github.com/probprog/bopp/
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1 right panel: p(Y, θ) against iteration, comparing BOPP with the "Even Powers" baseline.]
Figure 1: Simulation-based optimization of radiator powers subject to varying solar intensity. Shown
are output heat maps from Energy2D [30] simulations at one intensity, corresponding from left to
right to setting all the radiators to the same power, the best result from a set of randomly chosen
powers, and the best setup found after 100 iterations of BOPP. The far right plot shows convergence
of the evidence of the respective model, giving the median and 25/75% quartiles.
(defopt house-heating [alphas] [powers]
  (let [solar-intensity (sample weather-prior)
        powers (sample (dirichlet alphas))
        temperatures (simulate solar-intensity powers)]
    (observe abc-likelihood temperatures)))
Figure 2: BOPP query for optimizing the power allocation to radiators in a house.
Here
weather-prior is a distribution over the solar intensity and a uniform Dirichlet prior with concentration alpha is placed over the powers. Calling simulate performs an Energy2D simulation of
house temperatures. The utility of the resulting output is conditioned upon using abc-likelihood.
Calling doopt on this query invokes the BOPP algorithm to perform MMAP estimation, where the
second input powers indicates the variable to be optimized.
To demonstrate the functionality provided by BOPP, we consider an example application of engineering design. Engineering design relies extensively on simulations which typically have two things in
common: the desire of the user to find a single best design and an uncertainty in the environment in
which the designed component will live. Even when these simulations are deterministic, this is an
approximation to a truly stochastic world. By expressing the utility of a particular design-environment
combination using an approximate Bayesian computation (ABC) likelihood [5], one can pose this as
a MMAP problem, optimizing the design while marginalizing out the environmental uncertainty.
Figure 1 illustrates how BOPP can be applied to engineering design, taking the example of optimizing
the distribution of power between radiators in a house so as to homogenize the temperature, while
marginalizing out possible weather conditions and subject to a total energy budget. The probabilistic
program shown in Figure 2 allows us to define a prior over the uncertain weather, while conditioning
on the output of a deterministic simulator (here Energy2D [30], a finite element package for heat transfer) using an ABC likelihood. BOPP now allows the required coincident inference and optimization
to be carried out automatically, directly returning increasingly optimal configurations.
BO is an attractive choice for the required optimization in MMAP as it is typically efficient in the
number of target evaluations, operates on non-differentiable targets, and incorporates noise in the
target function evaluations. However, applying BO to probabilistic programs presents challenges,
such as the need to give robust performance on a wide range of problems with varying scaling and
potentially unbounded support. Furthermore, the target program may contain unknown constraints,
implicitly defined by the generative model, and variables whose type is unknown (i.e. they may be
continuous or discrete).
On the other hand, the availability of the target source code in a PPS presents opportunities to
overcome these issues and go beyond what can be done with existing BO packages. BOPP exploits
the source code in a number of ways, such as optimizing the acquisition function using the original
generative model to ensure the solution satisfies the implicit constaints, performing adaptive domain
scaling to ensure that GP kernel hyperparameters can be set according to problem-independent
hyperpriors, and defining an adaptive non-stationary mean function to support unbounded BO.
Together, these innovations mean that BOPP can be run in a manner that is fully black-box from the
user?s perspective, requiring only the identification of the target variables relative to current syntax
for operating on arbitrary programs. We further show that BOPP is competitive with existing BO
engines for direct optimization on common benchmark problems that do not require marginalization.
2 Background
2.1 Probabilistic Programming
Probabilistic programming systems allow users to define probabilistic models using a domain-specific
programming language. A probabilistic program implicitly defines a distribution on random variables,
whilst the system back-end implements general-purpose inference methods.
PPS such as Infer.Net [17] and Stan [4] can be thought of as defining graphical models or factor
graphs. Our focus will instead be on systems such as Church [9], Venture [16], WebPPL [10], and
Anglican [29], which employ a general-purpose programming language for model specification. In
these systems, the set of random variables is dynamically typed, such that it is possible to write
programs in which this set differs from execution to execution. This allows an unspecified number
of random variables and incorporation of arbitrary black box deterministic functions, such as was
exploited by the simulate function in Figure 2. The price for this expressivity is that inference
methods must be formulated in such a manner that they are applicable to models where the density
function is intractable and can only be evaluated during forwards simulation of the program.
One such general purpose system, Anglican, will be used as a reference in this paper. In Anglican,
models are defined using the inference macro defquery. These models, which we refer to as queries
[9], specify a joint distribution p(Y, X) over data Y and variables X. Inference on the model is
performed using the macro doquery, which produces a sequence of approximate samples from
the conditional distribution p(X|Y ) and, for importance sampling based inference algorithms (e.g.
sequential Monte Carlo), a marginal likelihood estimate p(Y ).
Random variables in an Anglican program are specified using sample statements, which can be
thought of as terms in the prior. Conditioning is specified using observe statements which can be
thought of as likelihood terms. Outputs of the program, taking the form of posterior samples, are
indicated by the return values. There is a finite set of sample and observe statements in a program
source code, but the number of times each statement is called can vary between executions. We refer
the reader to http://www.robots.ox.ac.uk/?fwood/anglican/ for more details.
2.2 Bayesian Optimization
Consider an arbitrary black-box target function f : Θ → ℝ that can be evaluated for an arbitrary point
θ ∈ Θ to produce, potentially noisy, outputs ŵ ∈ ℝ. BO [15, 20] aims to find the global maximum
θ* = argmax_{θ ∈ Θ} f(θ).   (1)
The key idea of BO is to place a prior on f that expresses belief about the space of functions within
which f might live. When the function is evaluated, the resultant information is incorporated by
conditioning upon the observed data to give a posterior over functions. This allows estimation
of the expected value and uncertainty in f(θ) for all θ ∈ Θ. From this, an acquisition function
ζ : Θ → ℝ is defined, which assigns an expected utility to evaluating f at particular θ, based on the
trade-off between exploration and exploitation in finding the maximum. When direct evaluation of f
is expensive, the acquisition function constitutes a cheaper to evaluate substitute, which is optimized
to ascertain the next point at which the target function should be evaluated in a sequential fashion.
By interleaving optimization of the acquisition function, evaluating f at the suggested point, and
updating the surrogate, BO forms a global optimization algorithm that is typically very efficient in the
required number of function evaluations, whilst naturally dealing with noise in the outputs. Although
alternatives such as random forests [3, 14] or neural networks [26] exist, the most common prior
used for f is a GP [22]. For further information on BO we refer the reader to the recent review by
Shahriari et al [24].
3 Problem Formulation
Given a program defining the joint density p(Y, X, θ) with fixed Y, our aim is to optimize with
respect to a subset of the variables θ whilst marginalizing out latent variables X:
θ* = argmax_{θ ∈ Θ} p(θ|Y) = argmax_{θ ∈ Θ} p(Y, θ) = argmax_{θ ∈ Θ} ∫ p(Y, X, θ) dX.   (2)
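For intuition, the inner integral in (2) is exactly what an importance-sampling inference engine estimates at a fixed θ; a minimal sketch (Python, with hypothetical function names):

    import math

    def log_evidence_estimate(log_lik, sample_x, theta, n=1000):
        # Estimates log p(Y | theta) = log E_{X ~ p(X|theta)}[ p(Y | X, theta) ];
        # adding log p(theta) then gives the MMAP target log p(Y, theta).
        log_ws = [log_lik(sample_x(theta), theta) for _ in range(n)]
        m = max(log_ws)  # log-sum-exp for numerical stability
        return m + math.log(sum(math.exp(w - m) for w in log_ws)) - math.log(n)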
To provide syntax to differentiate between θ and X, we introduce a new query macro defopt. The
syntax of defopt is identical to defquery except that it has an additional input identifying the
variables to be optimized. To allow for the interleaving of inference and optimization required
in MMAP estimation, we further introduce doopt, which, analogous to doquery, returns a lazy
sequence {θ̂*_m, Ω̂*_m, û*_m}_{m=1,...}, where Ω̂*_m ∈ X are the program outputs associated with θ = θ̂*_m and
each û*_m ∈ ℝ is an estimate of the corresponding log marginal log p(Y, θ̂*_m) (see Section 4.2). The
sequence is defined such that, at any time, θ̂*_m corresponds to the point expected to be most optimal
of those evaluated so far and allows both inference and optimization to be carried out online.
Although no restrictions are placed on X, it is necessary to place some restrictions on how programs
use the optimization variables θ = θ_{1:K} specified by the optimization argument list of defopt.
First, each optimization variable θ_k must be bound to a value directly by a sample statement with
fixed measure-type distribution argument. This avoids change of variable complications arising from
nonlinear deterministic mappings. Second, in order for the optimization to be well defined, the
program must be written such that any possible execution trace binds each optimization variable θ_k
exactly once. Finally, although any θ_k may be lexically multiply bound, it must have the same base
measure in all possible execution traces, because, for instance, if the base measure of a θ_k were to
change from Lebesgue to counting, the notion of optimality would no longer admit a conventional
interpretation. Note that although the transformation implementations shown in Figure 3 do not
contain runtime exception generators that disallow continued execution of programs that violate these
constraints, those actually implemented in the BOPP system do.
4 Bayesian Program Optimization
In addition to the syntax introduced in the previous section, there are five main components to BOPP:
- A program transformation, q→q-marg, allowing estimation of the evidence p(Y, θ) at a fixed θ.
- A high-performance, GP based, BO implementation for actively sampling θ.
- A program transformation, q→q-prior, used for automatic and adaptive domain scaling, such
that a problem-independent hyperprior can be placed over the GP hyperparameters.
- An adaptive non-stationary mean function to support unbounded optimization.
- A program transformation, q→q-acq, and annealing maximum likelihood estimation method to
optimize the acquisition function subject to the implicit constraints imposed by the generative model.
Together these allow BOPP to perform online MMAP estimation for arbitrary programs in a manner
that is black-box from the user's perspective - requiring only the definition of the target program in
the same way as existing PPS and identifying which variables to optimize. The BO component of
BOPP is both probabilistic programming and language independent, and is provided as a stand-alone
package.² It requires as input only a target function, a sampler to establish rough input scaling, and a
problem specific optimizer for the acquisition function that imposes the problem constraints.
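Schematically, the pieces fit together as in the following sketch of the outer loop (Python; every helper name here is a placeholder for the components described below, not BOPP's actual API):

    def bopp(q, n_init, n_iters):
        thetas = [sample_prior(q) for _ in range(n_init)]          # Step 1: q-prior
        evals = [estimate_log_marginal(q, t) for t in thetas]      # Step 2: q-marg
        for _ in range(n_iters):
            gp_mix = fit_gp_mixture(thetas, evals)                 # Step 3: HMC over hypers
            theta_next = maximize_acquisition(gp_mix, q)           # Step 4: anneal q-acq
            thetas.append(theta_next)                              # Step 5: evaluate, repeat
            evals.append(estimate_log_marginal(q, theta_next))
        best = max(range(len(thetas)), key=lambda j: gp_mix.posterior_mean(thetas[j]))
        return thetas[best], evals[best]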
Figure 3 provides a high level overview of the algorithm invoked when doopt is called on a query q
that defines a distribution p(Y, a, θ, b). We wish to optimize θ whilst marginalizing out a and b, as
indicated by the second input to q. In summary, BOPP performs iterative optimization in 5 steps
- Step 1 (blue arrows) generates unweighted samples from the transformed prior program q-prior
(top center), constructed by removing all conditioning. This initializes the domain scaling for θ.
- Step 2 (red arrows) evaluates the marginal p(Y, θ) at a small number of the generated θ̂ by
performing inference on the marginal program q-marg (middle centre), which returns samples
from the distribution p(a, b|Y, θ) along with an estimate of p(Y, θ). The evaluated points (middle
right) provide an initial domain scaling of the outputs and starting points for the BO surrogate.
- Step 3 (black arrow) fits a mixture of GPs posterior [22] to the scaled data (bottom centre) using a
problem independent hyperprior. The solid blue line and shaded area show the posterior mean and
±2 standard deviations respectively. The new estimate of the optimum θ̂* is the value for which
the mean estimate is largest, with û* equal to the corresponding mean value.
² Code available at http://www.github.com/probprog/deodorant/
(defopt q [y] [θ]
  (let [a (sample (p-a))
        θ (sample (p-θ a))
        b (sample (p-b a θ))]
    (observe (lik a θ b) y)
    [a b]))

(defquery q-prior [y]
  (let [a (sample (p-a))
        θ (sample (p-θ a))]
    θ))

(defquery q-marg [y θ]
  (let [a (sample (p-a))
        θ (observe<- (p-θ a) θ)
        b (sample (p-b a θ))]
    (observe (lik a θ b) y)
    [a b]))

(defquery q-acq [y θ]
  (let [a (sample (p-a))
        θ (sample (p-θ a))]
    (observe (factor) (ζ θ))
    θ))

Figure 3: Overview of the BOPP algorithm, description given in main text, along with the original query q and its three transformations q-prior, q-marg and q-acq. p-a, p-θ, p-b and lik all represent distribution object constructors. factor is a special distribution constructor that assigns probability p(y) = y, in this case y = ζ(θ), i.e. (factor) returns a distribution object for which the log probability density function is the identity function.
- Step 4 (purple arrows) constructs an acquisition function ζ : Θ → ℝ⁺ (bottom left) using the GP
posterior. This is optimized, giving the next point to evaluate θ̂_next, by performing annealed importance sampling on a transformed program q-acq (middle left) in which all observe statements
are removed and replaced with a single observe assigning probability ζ(θ) to the execution.
- Step 5 (green arrow) evaluates θ̂_next using q-marg and continues to step 3.
4.1 Program Transformation to Generate the Target
Consider the defopt query q in Figure 3, the body of which defines the joint distribution p(Y, a, θ, b).
Calculating (2) (defining X = {a, b}) using a standard optimization scheme presents two issues: θ is
a random variable within the program rather than something we control, and its probability distribution
is only defined conditioned on a.
We deal with both these issues simultaneously using a program transformation similar to the disintegration transformation in Hakaru [31]. Our marginal transformation returns a new query object,
inputs, but now accepts the value for ? as an1 input. This is done by replacing all sample statements
associated with ? with equivalent observe<- statements, taking ? as the observed value, where
observe<- is identical to observe except that it returns the observed value. As both sample and
observe operate on the same variable type - a distribution object - this transformation can always
be made, while the identical returns of sample and observe<- trivially ensures validity of the
transformed program.
4.2
Bayesian Optimization of the Marginal 1
The target function for our BO scheme is log p(Y, ?), noting argmax f (?) = argmax log f (?) for
any f : ? ? R+ . The log is taken because GPs have unbounded support, while p (Y, ?) is always
positive, and because we expect variations over many orders of magnitude. PPS with importance
sampling based inference engines, e.g. sequential Monte Carlo [29] or the particle cascade [21], can
return noisy estimates of this target given the transformed program q-marg.
5
Our BO scheme uses a GP prior and a Gaussian likelihood. Though the rationale for the latter is
predominantly computational, giving an analytic posterior, there are also theoretical results suggesting
that this choice is appropriate [2]. We use as a default covariance function a combination of a Mat?ern3/2 and Mat?ern-5/2 kernel. By using automatic domain scaling as described in the next section,
problem independent priors are placed over the GP hyperparameters such as the length scales
and observation noise. Inference over hyperparameters is performed using Hamiltonian Monte
Carlo (HMC) [6], giving an unweighted mixture of GPs. Each term in this mixture has an analytic
i
distribution fully specified by its mean function ?im : ? ? R and covariance function km
: ??? ? R,
where m indexes the BO iteration and i the hyperparameter sample.
This posterior is first used to estimate which of the previously evaluated ??j is the most optimal, by
PN
taking the point with highest expected value , u
??m = maxj?1...m i=1 ?im (??j ). This completes the
definition of the output sequence returned by the doopt macro. Note that as the posterior updates
globally with each new observation, the relative estimated optimality of previously evaluated points
changes at each iteration. Secondly it is used to define the acquisition function ?, for which we take
p
i
u?
i
m
i (?, ?) and ? i (?) = ?m (?)??
the expected improvement [25], defining ?m
(?) = km
m
? i (?) ,
m
? (?) =
N
X
i=1
i
i
i
?im (?) ? u
??m ? ?m
(?) + ?m
(?) ? ?m
(?)
(3)
where ? and ? represent the pdf and cdf of a unit normal distribution respectively. We note that more
powerful, but more involved, acquisition functions, e.g. [12], could be used instead.
4.3
Automatic and Adaptive Domain Scaling
Domain scaling, by mapping to a common space, is crucial for BOPP to operate in the required
black-box fashion as it allows a general purpose and problem independent hyperprior to be placed
on the GP hyperparameters. BOPP therefore employs an affine scaling to a [?1, 1] hypercube for
both the inputs and outputs of the GP. To initialize scaling for the input variables, we sample directly
from the generative model defined by the program. This is achieved using a second transformed
program, q-prior, which removes all conditioning, i.e. observe statements, and returns ?. This
transformation also introduces code to terminate execution of the query once all ? are sampled,
in order to avoid unnecessary computation. As observe statements return nil, this transformation trivially preserves the generative model of the program, but the probability of the execution
changes. Simulating from the generative model does not require inference or calling potentially
expensive likelihood functions and is therefore computationally inexpensive. By running inference
on q-marg given a small number of these samples as arguments, a rough initial characterization of
output scaling can also be achieved. If points are observed that fall outside the hypercube under the
initial scaling, the domain scaling is appropriately updated³ so that the target for the GP remains the
[−1, 1] hypercube.
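A minimal sketch of such a scaler is given below; the exact update rules BOPP uses (e.g. the pinning of the lower output bound described in footnote 3) are omitted, so this shows the idea rather than the implementation.

```python
import numpy as np

# Minimal sketch of the affine [-1, 1] scaling, assuming theta is stored as a
# flat vector and initial bounds come from prior samples via q-prior.

class HypercubeScaler:
    def __init__(self, samples):                  # samples: (n, d) array
        self.lo = samples.min(axis=0)
        self.hi = samples.max(axis=0)

    def update(self, x):                          # expand box if x falls outside
        self.lo = np.minimum(self.lo, x)
        self.hi = np.maximum(self.hi, x)

    def to_unit(self, x):
        span = np.maximum(self.hi - self.lo, 1e-12)
        return 2.0 * (x - self.lo) / span - 1.0

    def from_unit(self, u):
        return self.lo + 0.5 * (u + 1.0) * (self.hi - self.lo)
```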
4.4
Unbounded Bayesian Optimization via Non-Stationary Mean Function Adaptation
Unlike standard BO implementations, BOPP is not provided with external constraints and we therefore
develop a scheme for operating on targets with potentially unbounded support. Our method exploits
the knowledge that the target function is a probability density, implying that the area that must be
searched in practice to find the optimum is finite, by defining a non-stationary prior mean function.
This takes the form of a bump function that is constant within a region of interest, but decays rapidly
outside. Specifically we define this bump function in the transformed space as
(
0
if r ? re
?prior (r; re , r? ) =
(4)
r?re
r?re
log r? ?re + r? ?re otherwise
where r is the radius from the origin, $r_e$ is the maximum radius of any point generated in the initial
scaling or subsequent evaluations, and $r_\infty$ is a parameter set to $1.5\, r_e$ by default. Consequently, the
acquisition function also decays and new points are never suggested arbitrarily far away.
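A small Python sketch of this mean function, mirroring Eq. (4) as reconstructed above, is:

```python
import numpy as np

# Sketch of the non-stationary prior mean in the scaled space; r_inf defaults
# to 1.5 * r_e as stated in the text.

def mu_prior(theta_scaled, r_e, r_inf=None):
    r_inf = 1.5 * r_e if r_inf is None else r_inf
    r = np.linalg.norm(theta_scaled)   # radius from the origin
    if r <= r_e:
        return 0.0                     # constant inside the region of interest
    z = (r - r_e) / (r_inf - r_e)
    return np.log(z) + z
```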
³ An important exception is that the output mapping to the bottom of the hypercube remains fixed such that
low likelihood new points are not incorporated. This ensures stability when considering unbounded problems.
Figure 4: Convergence of BOPP on an unconstrained bimodal problem with p(θ) = Normal(0, 0.5)
and p(Y|θ) = Normal(5 − |θ|, 0.5), giving significant prior misspecification. The top plots show the
regressed GP, with the solid line corresponding to the mean and the shading showing ±2 standard
deviations; the bottom plots show the corresponding acquisition function, which decays away from
the region of interest.
Adaptation of the scaling will automatically update this mean function appropriately, learning a
region of interest that matches that of the true problem, without complicating the optimization by
over-extending this region. We note that our method shares similarity with the recent work of
Shahriari et al. [23], but overcomes the sensitivity of their method upon a user-specified bounding
box representing soft constraints, by initializing automatically and adapting as more data is observed.
Figure 5: Comparison of BOPP used as an optimizer to prominent BO packages on common
benchmark problems (Branin, Hartmann 6D, SVM on-grid, and LDA on-grid). The dashed lines show
the final mean error of SMAC (red), Spearmint (green) and TPE (black) as quoted by [7]. The dark
blue line shows the mean error for BOPP averaged over 100 runs, whilst the median and 25/75%
percentiles are shown in cyan. Results for Spearmint on Branin and SMAC on SVM on-grid are
omitted because both BOPP and the respective algorithms averaged zero error to the provided
number of significant figures in [7].
4.5
Optimizing the Acquisition Function
Optimizing the acquisition function for BOPP presents the issue that the query contains implicit
constraints that are unknown to the surrogate function. The problem of unknown constraints has
been previously covered in the literature [8, 13] by assuming that constraints take the form of a
black-box function which is modeled with a second surrogate function and must be evaluated in a
guess-and-check strategy to establish whether a point is valid. Along with the potentially significant
expense such a method incurs, this approach is inappropriate for equality constraints or when the
target variables are potentially discrete. For example, the Dirichlet distribution introduces an
equality constraint on its powers, namely that its components must sum to 1.
We therefore take an alternative approach based on directly using the program to optimize the
acquisition function. To do so we consider a transformed program q-acq that is identical to q-prior
(see Section 4.3), but adds an additional observe statement that assigns a weight ξ(θ) to the
execution. By setting ξ(θ) to the acquisition function, the maximum likelihood corresponds to
the optimum of the acquisition function subject to the implicit program constraints. We obtain a
maximum likelihood estimate for q-acq using a variant of annealed importance sampling [19] in
which lightweight Metropolis Hastings (LMH) [28] with local random-walk moves is used as the
base transition kernel.
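To illustrate the flavour of this scheme, the following Python sketch maximizes a log-acquisition with plain annealed random-walk Metropolis on a fixed-dimensional θ; the actual system instead runs LMH over the transformed program q-acq, so this is an analogy rather than the implementation, and all names are illustrative.

```python
import numpy as np

# Treat log zeta(theta) as a log-likelihood and use annealed random-walk
# Metropolis to seek its maximizer. Implicit constraints are respected by
# having log_zeta return -inf for invalid theta (theta0 is assumed valid).

def optimize_acquisition(log_zeta, theta0, steps=2000, scale=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    theta, best = theta0, theta0
    for s in range(1, steps + 1):
        beta = s / steps                          # anneal: flat -> peaked
        prop = theta + scale * rng.standard_normal(theta.shape)
        if np.log(rng.random()) < beta * (log_zeta(prop) - log_zeta(theta)):
            theta = prop
            if log_zeta(theta) > log_zeta(best):
                best = theta
    return best
```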
Figure 6: Convergence for transition dynamics parameters of the pickover attractor in terms of the
cumulative best log p (Y, ?) (left) and distance to the ?true? ? used in generating the data (right).
Solid line shows median over 100 runs, whilst the shaded region the 25/75% quantiles.
5
Experiments
We first demonstrate the ability of BOPP to carry out unbounded optimization using a 1D problem
with a significant prior-posterior mismatch as shown in Figure 4. It shows BOPP adapting to the
target and effectively establishing a maxima in the presence of multiple modes. After 20 evaluations
the acquisitions begin to explore the right mode, after 50 both modes have been fully uncovered.
Next we compare BOPP to the prominent BO packages SMAC [14], Spearmint [25] and TPE [3] on a
number of classical benchmarks as shown in Figure 5. These results demonstrate that BOPP provides
substantial advantages over these systems when used simply as an optimizer on both continuous and
discrete optimization problems. In particular, it offers a large advantage over SMAC and TPE on
the continuous problems (Branin and Hartmann), due to using a more powerful surrogate, and over
Spearmint on the others due to not needing to make approximations to deal with discrete problems.
Finally we demonstrate performance of BOPP on a MMAP problem. Comparison here is more
difficult due to the dearth of existing alternatives for PPS. In particular, simply running inference
on the original query does not return estimates for p (Y, ?). We consider the possible alternative of
using our conditional code transformation to design a particle marginal Metropolis Hastings (PMMH,
[1]) sampler which operates in a similar fashion to BOPP except that new ? are chosen using a MH
step instead of actively sampling with BO. For these MH steps we consider both LMH [28] with
proposals from the prior and the random-walk MH (RMH) variant introduced in Section 4.5. Results
for estimating the dynamics parameters of a chaotic pickover attractor, while using an extended
Kalman smoother to estimate the latent states are shown in Figure 6. Model details are given in the
supplementary material along with additional experiments.
6
Discussion and Future Work
We have introduced a new method for carrying out MMAP estimation of probabilistic program
variables using Bayesian optimization, representing the first unified framework for optimization
and inference of probabilistic programs. By using a series of code transformations, our method
allows an arbitrary program to be optimized with respect to a defined subset of its variables, whilst
marginalizing out the rest. To carry out the required optimization, we introduce a new GP-based BO
package that exploits the availability of the target source code to provide a number of novel features,
such as automatic domain scaling and constraint satisfaction.
The concepts we introduce lead directly to a number of extensions of interest, including but not
restricted to smart initialization of inference algorithms, adaptive proposals, and nested optimization.
Further work might consider maximum marginal likelihood estimation and risk minimization. Though
only requiring minor algorithmic changes, these cases require distinct theoretical considerations.
Acknowledgements
Tom Rainforth is supported by a BP industrial grant. Tuan Anh Le is supported by a Google
studentship, project code DF6700. Frank Wood is supported under DARPA PPAML through the U.S.
AFRL under Cooperative Agreement FA8750-14-2-0006, Sub Award number 61160290-111668.
References
[1] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. J Royal Stat.
Soc.: Series B (Stat. Methodol.), 72(3):269–342, 2010.
[2] J. Bérard, P. Del Moral, A. Doucet, et al. A lognormal central limit theorem for particle approximations of
normalizing constants. Electronic Journal of Probability, 19(94):1–28, 2014.
[3] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In
NIPS, pages 2546–2554, 2011.
[4] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li,
and A. Riddell. Stan: a probabilistic programming language. Journal of Statistical Software, 2015.
[5] K. Csilléry, M. G. Blum, O. E. Gaggiotti, and O. François. Approximate Bayesian Computation (ABC) in
practice. Trends in Ecology & Evolution, 25(7):410–418, 2010.
[6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 1987.
[7] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown. Towards
an empirical foundation for assessing Bayesian optimization of hyperparameters. In NIPS Workshop on
Bayesian Optimization in Theory and Practice, pages 1–5, 2013.
[8] J. R. Gardner, M. J. Kusner, Z. E. Xu, K. Q. Weinberger, and J. Cunningham. Bayesian optimization with
inequality constraints. In ICML, pages 937–945, 2014.
[9] N. Goodman, V. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: a language for
generative models. In UAI, pages 220–229, 2008.
[10] N. D. Goodman and A. Stuhlmüller. The Design and Implementation of Probabilistic Programming
Languages. 2014.
[11] M. U. Gutmann and J. Corander. Bayesian optimization for likelihood-free inference of simulator-based
statistical models. JMLR, 17:1–47, 2016.
[12] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient
global optimization of black-box functions. In NIPS, pages 918–926, 2014.
[13] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general
framework for constrained Bayesian optimization using information-based search. JMLR, 17:1–53, 2016.
[14] F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm
configuration. In Learn. Intell. Optim., pages 507–523. Springer, 2011.
[15] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions.
J Global Optim, 13(4):455–492, 1998.
[16] V. Mansinghka, D. Selsam, and Y. Perov. Venture: a higher-order probabilistic programming platform with
programmable inference. arXiv preprint arXiv:1404.0099, 2014.
[17] T. Minka, J. Winn, J. Guiver, and D. Knowles. Infer.NET 2.4, Microsoft Research Cambridge, 2010.
[18] K. P. Murphy. Machine Learning: a Probabilistic Perspective. MIT Press, 2012.
[19] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[20] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd
International Conference on Learning and Intelligent Optimization (LION3), pages 1–15, 2009.
[21] B. Paige, F. Wood, A. Doucet, and Y. W. Teh. Asynchronous anytime sequential Monte Carlo. In NIPS,
pages 3410–3418, 2014.
[22] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[23] B. Shahriari, A. Bouchard-Côté, and N. de Freitas. Unbounded Bayesian optimization via regularization.
AISTATS, 2016.
[24] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A
review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
[25] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms.
In NIPS, pages 2951–2959, 2012.
[26] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Ali, R. P. Adams, et al.
Scalable Bayesian optimization using deep neural networks. In ICML, 2015.
[27] J.-W. van de Meent, B. Paige, D. Tolpin, and F. Wood. Black-box policy search with probabilistic programs.
In AISTATS, pages 1195–1204, 2016.
[28] D. Wingate, A. Stuhlmueller, and N. D. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In AISTATS, pages 770–778, 2011.
[29] F. Wood, J. W. van de Meent, and V. Mansinghka. A new approach to probabilistic programming inference.
In AISTATS, pages 2–46, 2014.
[30] C. Xie. Interactive heat transfer simulations for everyone. The Physics Teacher, 50(4), 2012.
[31] R. Zinkov and C.-C. Shan. Composing inference algorithms as program transformations. arXiv preprint
arXiv:1603.01882, 2016.
5,994 | 6,422 | Dual Decomposed Learning with Factorwise Oracles
for Structural SVMs of Large Output Domain
Ian E.H. Yen†  Xiangru Huang‡  Pradeep Ravikumar†  Kai Zhong‡  Ruohan Zhang‡  Inderjit S. Dhillon‡
†Carnegie Mellon University   ‡University of Texas at Austin
Abstract
Many applications of machine learning involve structured outputs with large domains, where learning of a structured predictor is prohibitive due to repetitive
calls to an expensive inference oracle. In this work, we show that by decomposing
training of a Structural Support Vector Machine (SVM) into a series of multiclass
SVM problems connected through messages, one can replace an expensive structured oracle with Factorwise Maximization Oracles (FMOs) that allow efficient
implementation of complexity sublinear to the factor domain. A Greedy Direction
Method of Multiplier (GDMM) algorithm is then proposed to exploit the sparsity
of messages while guaranteeing convergence to ε sub-optimality after O(log(1/ε))
passes of FMOs over every factor. We conduct experiments on chain-structured
and fully-connected problems of large output domains, where the proposed approach is orders-of-magnitude faster than current state-of-the-art algorithms for
training Structural SVMs.
1
Introduction
Structured prediction has become prevalent with wide applications in Natural Language Processing (NLP), Computer Vision, and Bioinformatics to name a few, where one is interested in outputs
of strong interdependence. Although many dependency structures yield intractable inference problems, approximation techniques such as convex relaxations with theoretical guarantees [10, 14, 7]
have been developed. However, solving the relaxed problems (LP, QP, SDP, etc.) is computationally
expensive for factor graphs of large output domain and results in prohibitive training time when embedded into a learning algorithm relying on inference oracles [9, 6]. For instance, many applications
in NLP such as Machine Translation [3], Speech Recognition [21], and Semantic Parsing [1] have
output domains as large as the size of vocabulary, for which the prediction of even a single sentence
takes considerable time.
One approach to avoid inference during training is by introducing a loss function conditioned on
the given labels of neighboring output variables [15]. However, it also introduces more variance
to the estimation of model and could degrade testing performance significantly. Another thread of
research aims to formulate parameter learning and output inference as a joint optimization problem
that avoids treating inference as a subroutine [12, 11]. In this approach, the structured hinge loss
is reformulated via dual decomposition, so both messages between factors and model parameters
are treated as first-class variables. The new formulation, however, does not yield computational advantage due to the constraints entangling the two types of variables. In particular, [11] employs a
hybrid method (DLPW) that alternatingly optimizes model parameters and messages, but the algorithm is not significantly faster than directly performing stochastic gradient on the structured hinge
loss. More recently, [12] proposes an approximate objective for structural SVMs that leads to an
algorithm considerably faster than DLPW on problems requiring expensive inference. However, the
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: (left) Factors with large output domains in Sequence Labeling. (right) Large number of
factors in a Correlated Multilabel Prediction problem. Circles denote variables and black boxes
denote factors. (Yu : domain of unigram factor. Yb : domain of bigram factor.)
approximate objective requires a trade-off between efficiency and approximation quality, yielding
an O(1/ε²) overall iteration complexity for achieving ε sub-optimality.
The contribution of this work is twofold. First, we propose a Greedy Direction Method of Multiplier
(GDMM) algorithm that decomposes the training of a structural SVM into factorwise multiclass
SVMs connected through sparse messages confined to the active labels. The algorithm guarantees
an O(log(1/ε)) iteration complexity for achieving ε sub-optimality, and each iteration requires
only one pass of Factorwise Maximization Oracles (FMOs) over every factor. Second, we show that
the FMO can be realized in time sublinear to the cardinality of factor domains, hence is considerably more efficient than a structured maximization oracle when it comes to large output domain.
For problems consisting of numerous binary variables, we further give realization of a joint FMO
that has complexity sublinear to the number of factors. We conduct experiments on both chain-structured problems that allow exact inference and fully-connected problems that rely on Linear
Program relaxations, where we show the proposed approach is orders-of-magnitude faster than current state-of-the-art training algorithms for Structured SVMs.
2
Problem Formulation
Structured prediction aims to predict a set of outputs $y \in \mathcal{Y}(x)$ from their interdependency
and inputs $x \in \mathcal{X}$. Given a feature map $\phi(x, y) : \mathcal{X} \times \mathcal{Y}(x) \to \mathbb{R}^d$ that extracts relevant information from (x, y), a linear classifier with parameters w can be defined as $h(x; w) = \operatorname{arg\,max}_{y \in \mathcal{Y}(x)} \langle w, \phi(x, y)\rangle$, where we estimate the parameters w from a training set
$D = \{(x_i, \hat{y}_i)\}_{i=1}^{n}$ by solving a regularized Empirical Risk Minimization (ERM) problem
$$\min_{w}\ \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{n} L(w;\, x_i, \hat{y}_i). \qquad (1)$$
In the case of a Structural SVM [19, 20], we consider the structured hinge loss

$$L(w;\, x, \hat{y}) = \max_{y \in \mathcal{Y}(x)}\ \langle w,\ \phi(x, y) - \phi(x, \hat{y})\rangle + \delta(y, \hat{y}), \qquad (2)$$

where $\delta(y, \hat{y}_i)$ is a task-dependent error function, for which the Hamming distance $\delta_H(y, \hat{y}_i)$ is
commonly used. Since the size of domain |Y(x)| typically grows exponentially with the number of output variables, the tractability of problem (1) lies in the decomposition of the responses
hw, ?(x, y)i into several factors, each involving only a few outputs. The factor decomposition can
be represented as a bipartite graph G(F, V, E) between factors F and variables V, where an edge
$(f, j) \in \mathcal{E}$ exists if the factor f involves the variable j. Typically, a set of factor templates $\mathcal{T}$ exists
so that factors of the same template $F \in \mathcal{T}$ share the same feature map $\phi_F(\cdot)$ and parameter vector
$w_F$. Then the response on input-output pair (x, y) is given by

$$\langle w, \phi(x, y)\rangle = \sum_{F \in \mathcal{T}} \sum_{f \in F(x)} \langle w_F,\ \phi_F(x_f, y_f)\rangle, \qquad (3)$$
where F (x) denotes the set of factors on x that share a template F , and y f denotes output variables
relevant to factor f of domain Yf = YF . We will use F(x) to denote the union of factors of
different templates {F (x)}F ?T . Figure 1 shows two examples that both have two factor templates
(i.e. unigram and bigram), for which the responses have the decomposition
$\sum_{f \in u(x)} \langle w_u, \phi_u(x_f, y_f)\rangle + \sum_{f \in b(x)} \langle w_b, \phi_b(y_f)\rangle$. Unfortunately, even with such decomposition, the maximization in (2) is
still computationally expensive. First, most of graph structures do not allow exact maximization,
so in practice one would minimize an upper bound of the original loss (2) obtained from relaxation
[10, 18]. Second, even for the relaxed loss or a tree-structured graph that allows polynomial-time
maximization, its complexity is at least linear to the cardinality of factor domain |Yf | times the
number of factors |F|. This results in a prohibitive computational cost for problems with large output
domain. As in Figure 1, one example has a factor domain |Yb | which grows quadratically with the
size of output domain; the other has the number of factors |F| which grows quadratically with the
number of outputs. A key observation of this paper is, in contrast to the structural maximization
(2) that requires larger extent of exploration on locally suboptimal assignments in order to achieve
global optimality, the Factorwise Maximization Oracle (FMO)
$$y_f^* := \operatorname*{arg\,max}_{y_f}\ \langle w_F,\ \phi(x_f, y_f)\rangle \qquad (4)$$
can be realized in a more efficient way by maintaining data structures on the factor parameters wF .
In the next section, we develop globally-convergent algorithms that rely only on FMO, and provide
realizations of message-augmented FMO with cost sublinear to the size of factor domain or to the
number of factors.
3
Dual-Decomposed Learning
We consider an upper bound of the loss (2) based on a Linear Program (LP) relaxation that is tight
in case of a tree-structured graph and leads to a tractable approximation for general factor graphs
[11, 18]:
$$L_{LP}(w;\, x, \hat{y}) = \max_{(q,p) \in \mathcal{M}_L}\ \sum_{f \in \mathcal{F}(x)} \langle \bar\theta_f(w),\ q_f \rangle \qquad (5)$$

where $\bar\theta_f(w) := \big( \langle w_F,\ \phi_F(x_f, y_f) - \phi_F(x_f, \hat{y}_f)\rangle + \delta_f(y_f, \hat{y}_f) \big)_{y_f \in \mathcal{Y}_f}$. $\mathcal{M}_L$ is a polytope that
constrains $q_f$ in a $|\mathcal{Y}_f|$-dimensional simplex $\Delta^{|\mathcal{Y}_f|}$ and also enforces local consistency:

$$\mathcal{M}_L := \left\{ (q, p)\ \middle|\ \begin{array}{l} q = (q_f)_{f \in \mathcal{F}(x)},\quad q_f \in \Delta^{|\mathcal{Y}_f|},\ \forall f \in F(x),\ \forall F \in \mathcal{T} \\ p = (p_j)_{j \in \mathcal{V}(x)},\quad M_{jf}\, q_f = p_j,\ \forall (j, f) \in \mathcal{E}(x) \end{array} \right\}$$
where Mjf is a |Yj | by |Yf | matrix that has Mjf (yj , y f ) = 1 if yj is consistent with y f (i.e.
yj = [y f ]j ) and Mjf (yj , y f ) = 0 otherwise. For a tree-structured graph G(F, V, E), the LP
relaxation is tight and thus loss (5) is equivalent to (2). For a general factor graph, (5) is an upper
bound on the original loss (2). It is observed that parameters w learned from the upper bound (5)
tend to tightening the LP relaxation and thus in practice lead to tight LP in the testing phase [10].
Instead of solving LP (5) as a subroutine, a recent attempt formulates (1) as a problem that optimizes
(p, q) and w jointly via dual decomposition [11, 12]. We denote $\lambda_{jf}$ as the dual variables associated
with constraint $M_{jf} q_f = p_j$, and $\lambda_f := (\lambda_{jf})_{j \in N(f)}$ where $N(f) = \{j \mid (j, f) \in \mathcal{E}\}$. We have
$$L_{LP}(w;\, x, \hat{y}) = \max_{q,p}\ \min_{\lambda}\ \sum_{f \in \mathcal{F}(x)} \langle \bar\theta_f(w),\ q_f\rangle + \sum_{(j,f)} \langle \lambda_{jf},\ M_{jf} q_f - p_j\rangle \qquad (6)$$

$$= \min_{\lambda \in \Lambda}\ \sum_{f \in \mathcal{F}(x)}\ \max_{q_f \in \Delta^{|\mathcal{Y}_f|}}\ \Big( \bar\theta_f(w) + \sum_{j \in N(f)} M_{jf}^T \lambda_{jf} \Big)^T q_f \qquad (7)$$

$$= \min_{\lambda \in \Lambda}\ \sum_{f \in \mathcal{F}(x)} \Big( \max_{y_f \in \mathcal{Y}_f}\ \bar\theta_f(y_f;\, w) + \sum_{j \in N(f)} \lambda_{jf}([y_f]_j) \Big) = \min_{\lambda \in \Lambda}\ \sum_{f \in \mathcal{F}(x)} L_f(w;\, x_f, \hat{y}_f, \lambda_f) \qquad (8)$$

where (7) follows from strong duality, and the domain $\Lambda = \big\{ \lambda\ \big|\ \sum_{(j,f) \in \mathcal{E}(x)} \lambda_{jf} = 0,\ \forall j \in \mathcal{V}(x) \big\}$
follows from the maximization w.r.t. p in (6). The result (8) is a loss function $L_f(\cdot)$ that penalizes the
response of each factor separately given $\lambda_f$. The ERM problem (1) can then be expressed as
$$\min_{w,\ \lambda \in \Lambda}\ \sum_{F \in \mathcal{T}} \Big[\, \frac{1}{2}\|w_F\|^2 + C \sum_{f \in F} L_f(w_F;\, x_f, \hat{y}_f, \lambda_f) \,\Big], \qquad (9)$$
Algorithm 1 Greedy Direction Method of Multiplier
0. Initialize t = 0, λ⁰ = 0, α⁰ = 0 and A⁰ = A_init.
for t = 0, 1, ... do
  1. Compute (α^{t+1}, A^{t+1}) via one pass of Algorithm 2, 3, or 4.
  2. λ^{t+1}_{jf} = λ^t_{jf} + η (M_{jf} α^{t+1}_f − α^{t+1}_j), ∀j ∈ N(f), ∀f ∈ F.
end for

where $\mathcal{F} = \bigcup_{i=1}^{n} \mathcal{F}(x_i)$ and $F = \bigcup_{F \in \mathcal{T}} F$. The formulation (9) has an insightful interpretation:
each factor template F learns a multiclass SVM given by parameters $w_F$ from factors $f \in F$, while
each factor is augmented with messages $\lambda_f$ passed from all variables related to f.
each factor is augmented with messages ?f passed from all variables related to f .
Despite the insightful interpretation, formulation (9) does not yield computational advantage directly. In particular, the non-smooth loss Lf (.) entangles parameters w and messages ?, which
leads to a difficult optimization problem. Previous attempts to solve (9) either have slow convergence [11] or rely on an approximation objective [12]. In the next section, we propose a Greedy
Direction Method of Multiplier (GDMM) algorithm for solving (9), which achieves sub-optimality
in O(log(1/)) iterations while requiring only one pass of FMOs for each iteration.
3.1
Greedy Direction Method of Multiplier
Let ?f (y f ) be dual variables for the factor responses zf (y f ) = hw, ?(xf , y f )i and {?j }j?V be
that for constraints in ?. The dual problem of (9) can be expressed as 1
X
1 X
G(?) :=
min
kwF (?)k2 ?
? Tj ?j
|Yf |
2
?f ??
F ?T
s.t.
j?V
Mjf ?f = ?j , j ? N (f ), f ? F.
X
wF (?) =
?Tf ?f
(10)
f ?F
where ?f lie in the shifted simplex
X
|Yf |
?
?
:= ?f ?f (?
y f ) ? C , ?f (y f ) ? 0, ?y f 6= y f ,
?f (y f ) = 0. .
(11)
y f ?Yf
Problem (10) can be interpreted as a summation of the dual objectives of |T | multiclass SVMs
(each per factor template), connected with consistency constraints. To minimize (10) one factor at a
time, we adopt a Greedy Direction Method of Multiplier (GDMM) algorithm that alternates between
minimizing the Augmented Lagrangian function
X
?
mjf (?, ?t )
2 ? k?tjf k2
min L(?, ?t ) := G(?) +
(12)
2
?f ??|Yf |
j?N (f ) ,f ?F
and updating the Lagrangian Multipliers (of consistency constraints)
t
?t+1
jf = ?jf + ? (Mjf ?f ? ?j ) . ?j ? N (f ), f ? F,
(13)
where mjf (?, ?t ) = Mjf ?f ? ?j + ?tjf plays the role of messages between |T | multiclass
problems, and ? is a constant step size. The procedure is outlined in Algorithm 1. The minimization
(12) is conducted in an approximate and greedy fashion, in the aim of involving as few dual variables
as possible. We discuss two greedy algorithms that suit two different cases in the following.
Factor of Large Domain For problems with large factor domains, we minimize (12) via a variant
of Frank-Wolfe algorithm with away steps (AFW) [8], outlined in Algorithm 2. The AFW algorithm
maintains the iterate ?t as a linear combination of bases constructed during iterates
X
?t =
ctv v, At := {v | ctv 6= 0}
(14)
v?At
1
?j is also dual variables for responses on unigram factors. We define U := V and ?f := ?j , ?f ? U .
4
Algorithm 2 Away-step Frank-Wolfe (AFW)
Algorithm 3 Block-Greedy Coordinate Descent
repeat
for i ? [n] do
1. Find a greedy direction v + satisfying (15).
1. Find f ? satisfying (18) for i-th sample.
?
2. As+1
= Asi ? {f ? }.
2. Find an away direction v satisfying (16).
i
t+1
for f ? Ai do
3. Compute ?
according to (17).
3.1 Update ?f according to (19).
4. Maintain active set At by (14).
3.2
Maintain wF (?) according to (10).
5. Maintain wF (?) according to (10).
end for
until a non-drop step is performed.
end for
where At maintains an active set of bases of non-zero coefficients. Each iteration of AFW finds
a direction v + := (v +
f )f ?F leading to the most descent amount according to the current gradient,
subject to the simplex constraints:
t
t
v+
), ?f ? F
? f ? ey ?
f := argmin h??f L(? , ? ), v f i = C(ey
f
(15)
v f ??|Yf |
where y ?f := arg maxyf ?Yf \{?yf } h??f L(?t , ?t ), eyf i is the non-ground-truth labeling of factor
f of highest response. In addition, AFW finds the away direction
v ? := argmax h?? L(?t , ?t ), vi,
(16)
v?At
which corresponds to the basis that leads to the most descent amount when being removed. Then
$$\alpha^{t+1} := \begin{cases} \alpha^t + \gamma_F\, d_F, & \langle \nabla_\alpha \mathcal{L},\ d_F\rangle < \langle \nabla_\alpha \mathcal{L},\ d_A\rangle \\ \alpha^t + \gamma_A\, d_A, & \text{otherwise}, \end{cases} \qquad (17)$$

where we choose between the two descent directions $d_F := v^+ - \alpha^t$ and $d_A := \alpha^t - v^-$. The step size
of each direction, $\gamma_F := \operatorname*{arg\,min}_{\gamma \in [0,1]} \mathcal{L}(\alpha^t + \gamma d_F)$ and $\gamma_A := \operatorname*{arg\,min}_{\gamma \in [0,\, c_{v^-}]} \mathcal{L}(\alpha^t + \gamma d_A)$,
can be computed exactly due to the quadratic nature of (12). A step is called a drop step if the step size
$\gamma^* = c_{v^-}$ is chosen, which leads to the removal of a basis $v^-$ from the active set, and therefore
? ? = cv? is chosen, which leads to the removal of a basis v ? from the active set, and therefore
the total number of drop steps can be bounded by half of the number of iterations t. Since a drop
step could lead to insufficient descent, Algorithm 2 stops only if a non-drop step is performed. Note
Algorithm 2 requires only a factorwise greedy search (15) instead of a structural maximization (2).
In section 3.2 we show how the factorwise search can be implemented much more efficiently than
structural ones. All the other steps (2-5) in Algorithm 2 can be computed in O(|Af |nnz(?f )),
where |Af | is the number of active states in factor f , which can be much smaller than |Yf | when
output domain is large.
In practice, a Block-Coordinate Frank-Wolfe (BCFW) method has much faster convergence than
Frank-Wolfe method (Algorithm 2) [13, 9], but proving linear convergence for BCFW is also much
more difficult [13], which prohibits its use in our analysis. In our implementation, however, we adopt
the BCFW version since it turns out to be much more efficient. We include a detailed description on
the BCFW version in Appendix-A (Algorithm 4).
Large Number of Factors Many structured prediction problems, such as alignment, segmentation, and multilabel prediction (Fig. 1, right), comprise binary variables and large number of factors
with small domains, for which Algorithm 2 does not yield any computational advantage. For this
type of problem, we minimize (12) via one pass of Block-Greedy Coordinate Descent (BGCD) (Algorithm 3) instead. Let Qmax be an upper bound on the eigenvalue of Hessian matrix of each block
?2?f L(?). For binary variables of pairwise factor, we have Qmax =4(maxf ?F k?f k2 + 1). Each
iteration of BGCD finds a factor that leads to the most progress
Qmax
kdk2 .
(18)
f ? := argmin
min
h??f L(?t , ?t ), di +
2
f ?F (xi )
?f +d??|Yf |
for each instance xi , adds them into the set of active factors Ai , and performs updates by solving
block subproblems
Qmax
d?f = argmin
h??f L(?t , ?t ), di +
kdk2
(19)
2
|Yf |
? +d??
f
5
for each factor f ? Ai . Note |Ai | is bounded by the number of GDMM iterations and it converges
to a constant much smaller than |F(xi )| in practice. We address in the next section how a joint FMO
can be performed to compute (18) in time sublinear to |F(xi )| in the binary-variable case.
3.2
Greedy Search via Factorwise Maximization Oracle (FMO)
The main difference between the FMO and structural maximization oracle (2) is that the former
involves only simple operations such as inner products or table look-ups for which one can easily
come up with data structures or approximation schemes to lower the complexity. In this section,
we present two approaches to realize sublinear-time FMOs for two types of factors widely used in
practice. We will describe in terms of pairwise factors, but the approach can be naturally generalized
to factors involving more variables.
Indicator Factor Factors ?f (xf , y f ) of the form
hwF , ?F (xf , y f )i = v(xf , y f )
(20)
are widely used in practice. It subsumes the bigram factor v(yi , yj ) that is prevalent in sequence,
grid, and network labeling problems, and also factors that map an input-output pair (x, y) directly
to a score v(x, y). For this type of factor, one can maintain ordered multimaps for each factor
template F , which support ordered visits of {v(x, (yi , yj ))}(yi ,yj )?Yf , {v(x, (yi , yj ))}yj ?Yj and
{v(x, (yi , yj ))}yi ?Yi . Then to find y f that maximizes (26), we compare the maximizers in 4 cases:
(i) (yi , yj ) : mif (yi ) = mjf (yj ) = 0, (ii) (yi , yj ) : mif (yi ) = 0, (iii) (yi , yj ) : mjf (yj ) = 0,
(iv) (yi , yj ) : mjf (yj ) 6= 0, mif (yi ) 6= 0. The maximization requires O(|Ai ||Aj |) in cases (ii)-(iv)
and O(max(|Ai ||Yj |, |Yi ||Aj |)) in case (i) (see details in Appendix C-1). However, in practice we
observe an O(1) cost for case (i) and the bottleneck is actually case (iv), which requires O(|Ai ||Aj |).
Note the ordered multimaps need maintenance
P whenever the vector wF (?) is changed. Fortunately,
since the indicator factor has v(y f , x) = f ?F,xf =x ?f (y f ), each update (25) leads to at most
|Af | changed elements, which gives a maintenance cost bounded by O(|Af | log(|YF |)). On the
other hand, the space complexity is bounded by O(|YF ||XF |) since the map is shared among factors.
Binary-Variable Interaction Factor Many problems consider pairwise-interaction factor between binary variables, where the factor domain is small but the number of factors is large. For
this type of problem, there is typically an rare outcome yfA ? YF . We call factors exhibiting such
outcome as active factors and the score of a labeling is determined by the score of the active factors
(inactive factors give score 0). For example, in the problem of multilabel prediction with pairwise
interactions (Fig. 1, right), an active unigram factor has outcome yjA = 1 and an active bigram
factor has y A
f = (1, 1), and each sample typically has only few outputs with value 1.
For this type of problem, we show that the gradient magnitude w.r.t. ?f for a bigram factor f can be
determined by the gradient w.r.t. ?f (y A
f ) when one of its incoming message mjf or mif is 0 (see
details in Appendix C-2). Therefore, we can find the greedy factor (18) by maintaining an ordered
A
multimap for the scores of outcome y A
f in each factor {v(y f , xf )}f ?F . The resulting complexity
for finding a factor that maximizes (18) is then reduced from O(|Yi ||Yj |) to O(|Ai ||Aj |), where the
latter is for comparison among factors that have both messages mif and mjf being non-zero.
Inner-Product Factor. We consider another widely used type of factor of the form
θ_f(x_f, y_f) = ⟨w_F, φ_F(x_f, y_f)⟩ = ⟨w_F(y_f), φ_F(x_f)⟩,
where all labels y_f ∈ Y_f share the same feature mapping φ_F(x_f) but with different parameters w_F(y_f). We propose a simple sampling approximation method with a performance guarantee for the convergence of GDMM. Note that although one can apply similar sampling schemes to the structural maximization oracle (2), it is hard to guarantee the quality of the approximation. The sampling method
divides Y_f into η mutually exclusive subsets, Y_f = ∪_{k=1}^{η} Y_f^(k), and realizes an approximate FMO by first sampling k uniformly from [η] and returning
ŷ_f ∈ argmax_{y_f ∈ Y_f^(k)} ⟨w_F(y_f), φ_F(x_f)⟩.    (21)
Note there is at least 1/η probability that ŷ_f ∈ argmax_{y_f∈Y_f} ⟨w_F(y_f), φ_F(x_f)⟩, since at least one partition Y_f^(k) contains a label of the highest score. In Section 3.3, we show that this approximate FMO still ensures convergence, with a rate scaled by 1/η. In practice, since the set of active labels
is not changing frequently during training, once an active label y f is sampled, it will be kept in the
active set Af till the end of the algorithm and thus results in a convergence rate similar to that of
an exact FMO. Note that for problems of binary variables with a large number of inner-product factors, the sampling technique applies similarly by simply partitioning the factors as F_i = ∪_{k=1}^{η} F_i^(k) and searching active factors only within one randomly chosen partition at a time.
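The partition-and-sample scheme of (21) is simple to implement; below is a minimal sketch under our own naming (approx_fmo, a stride-based partition as one of many valid choices), not the paper's code.

import numpy as np

def approx_fmo(W, phi_x, eta, rng):
    """Approximate FMO for inner-product factors.

    W     : (L, p) array, one weight vector w_F(y) per label y
    phi_x : (p,) shared feature vector phi_F(x_f)
    eta   : number of mutually exclusive label subsets
    Returns the best label within one uniformly sampled subset; with
    probability at least 1/eta this is the exact argmax over all labels.
    """
    L = W.shape[0]
    k = rng.integers(eta)            # sample a partition index from [eta]
    labels = np.arange(k, L, eta)    # fixed stride-based partition Y_f^(k)
    scores = W[labels] @ phi_x
    return int(labels[np.argmax(scores)])

rng = np.random.default_rng(0)
W = rng.normal(size=(3039, 64))      # label set sized like ChineseOCR
phi_x = rng.normal(size=64)
y_hat = approx_fmo(W, phi_x, eta=8, rng=rng)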
3.3
Convergence Analysis
We show the iteration complexity of the GDMM algorithm with the 1/η-approximate FMO given in Section 3.2. The convergence guarantee for exact FMOs can be obtained by setting η = 1. The analysis leverages the recent analysis of the global linear convergence of Frank-Wolfe variants [8] for functions of the form (12) with a polyhedral domain, and also the analysis in [5] for the Augmented-Lagrangian-based method. This type of greedy Augmented Lagrangian method was also analyzed previously in different contexts [23, 24, 22].
Let d(λ) = min_θ L(θ, λ) be the dual objective of (12), and let
Δ_d^t := d* − d(λ^t),   Δ_p^t := L(θ^t, λ^t) − d(λ^t)    (22)
be the dual and primal suboptimality of problem (10), respectively. We have the following theorems.
Theorem 1 (Convergence of GDMM with AFW). The iterates {(θ^t, λ^t)}_{t=1}^∞ produced by Algorithm 1 with step 1 performed by Algorithm 2 satisfy
E[Δ_p^t + Δ_d^t] ≤ ε  for  t ≥ ω log(1/ε)    (23)
for any 0 < ρ ≤ μ_M / (4 + 16(1+ν)mQ/μ_M), with ω = max{ 2(1 + 4mQ(1+ν)/μ_M), τ/ρ }, where μ_M is the generalized geometric strong convexity constant of (12), Q is the Lipschitz-continuous constant for the gradient of objective (12), and ν > 0 is a constant depending on the optimal solution set.
Theorem 2 (Convergence of GDMM with BGCD). The iterates {(θ^t, λ^t)}_{t=1}^∞ produced by Algorithm 1 with step 1 performed by Algorithm 3 satisfy
E[Δ_p^t + Δ_d^t] ≤ ε  for  t ≥ ω_1 log(1/ε)    (24)
for any 0 < ρ ≤ μ_1 / (4(1 + Q_max ν/μ_1)), with ω_1 = max{ 2(1 + Q_max ν/μ_1), τ/ρ }, where μ_1 is the generalized strong convexity constant of objective (12) and Q_max = max_{f∈F} Q_f is the factorwise Lipschitz-continuous constant on the gradient.
4
Experiments
In this section, we compare with existing approaches on Sequence Labeling and Multi-label prediction with pairwise interaction. The algorithms in comparison are: (i) BCFW: a Block-Coordinate
Frank-Wolfe method based on structural oracle [9], which outperforms other competitors such as
Cutting-Plane, FW, and online-EG methods in [9]. (ii) SSG: an implementation of the Stochastic Subgradient method [16]. (iii) Soft-BCFW: the algorithm proposed in [12], which avoids the structural oracle by minimizing an approximate objective, where a parameter ρ controls the precision of the approximation. We tuned this parameter and show the two best settings in the figures. For BCFW and SSG, we adapted the MATLAB implementation provided by the authors of [9] into C++, which is an
order of magnitude faster. All other implementations are also in C++. The results are compared in
terms of primal objective (achieved by w) and test accuracy.
Our experiments are conducted on 4 public datasets: POS, ChineseOCR, RCV1-regions, and EUR-Lex (directory codes). For sequence labeling we experiment on POS and ChineseOCR. The POS dataset is a subset of the Penn Treebank² that contains 3,808 sentences, 196,223 words, and 45 POS labels. The HIT-MW³ ChineseOCR dataset is a hand-written Chinese character dataset from [17].
² https://catalog.ldc.upenn.edu/LDC99T42
³ https://sites.google.com/site/hitmwdb/
[Figure 2 plots omitted: objective (×10^5) vs. number of iterations on POS and ChineseOCR for GDMM and Soft-BCFW (ρ = 1, ρ = 10), and objective vs. training time on ChineseOCR for GDMM and GDMM-subFMO.]
Figure 2: (left) Comparison of the two FMO-based algorithms (GDMM, Soft-BCFW) in number of iterations. (right) Improvement in training time given by the sublinear-time FMO.
[Figure 3 plots omitted: primal objective vs. time on POS, ChineseOCR, and RCV1-regions, and test error vs. time on POS, ChineseOCR, RCV1-regions, and EUR-Lex, for BCFW, GDMM-subFMO, SSG, and Soft-BCFW (ρ = 1, ρ = 10).]
Figure 3: Primal objective vs. time and test error vs. time plots. Note that the objective plots show that SSG converges to an objective value much higher than all other methods, as also observed in [9]. The training objective for the EUR-Lex data set is too expensive to compute, so we are unable to plot that figure.
The dataset has 12,064 hand-written sentences and a total of 174,074 characters. The vocabulary (label) size is 3,039. For the Correlated Multilabel Prediction problems, we experiment on two benchmark datasets, RCV1-regions⁴ and EUR-Lex (directory codes)⁵. The RCV1-regions dataset has 228 labels, 23,149 training instances, and 47,236 features. Note that a smaller version of RCV1 with only 30 labels and 6,000 instances is used in [11, 12]. EUR-Lex (directory codes) has 410 directory codes as
labels with a sample size of 19,348. We first compare GDMM (without subFMO) with Soft-BCFW in Figure 2. Due to the approximation (controlled by ρ), Soft-BCFW can converge to a suboptimal primal objective value. While the gap decreases as ρ increases, its convergence also becomes slower. GDMM, on the other hand, enjoys a faster convergence. The sublinear-time implementation of the FMO also reduces the training time by an order of magnitude on the ChineseOCR data set, as shown in Figure 2 (right). More general experiments are shown in Figure 3. When the size of the output domain is small (POS dataset), GDMM-subFMO is competitive with the other solvers. As the size of the output domain grows (ChineseOCR, RCV1, EUR-Lex), the complexity of the structural maximization oracle grows linearly or even quadratically, while the complexity of GDMM-subFMO grows only sublinearly in the experiments. Therefore, GDMM-subFMO achieves orders-of-magnitude speedups over the other methods. In particular, when running on ChineseOCR and EUR-Lex, each iteration of SSG, GDMM, BCFW and Soft-BCFW takes over 10³ seconds, while it takes only a few seconds with GDMM-subFMO.
Acknowledgements. We acknowledge the support of ARO via W911NF-12-1-0390, NSF via
grants CCF-1320746, CCF-1117055, IIS-1149803, IIS-1546452, IIS-1320894, IIS-1447574, IIS1546459, CCF-1564000, DMS-1264033, and NIH via R01 GM117594-01 as part of the Joint
DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical
Sciences.
⁴ www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multilabel.html
⁵ mulan.sourceforge.net/datasets-mlc.html
References
[1] D. Das, D. Chen, A. F. Martins, N. Schneider, and N. A. Smith. Frame-semantic parsing. Computational Linguistics, 40(1):9–56, 2014.
[2] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[3] K. Gimpel and N. A. Smith. Structured ramp loss minimization for machine translation. In NAACL, pages 221–231. Association for Computational Linguistics, 2012.
[4] A. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 1952.
[5] M. Hong and Z.-Q. Luo. On the linear convergence of the alternating direction method of multipliers. arXiv preprint arXiv:1208.3922, 2012.
[6] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[7] M. P. Kumar, V. Kolmogorov, and P. H. Torr. An analysis of convex relaxations for MAP estimation. Advances in Neural Information Processing Systems, 20:1041–1048, 2007.
[8] S. Lacoste-Julien and M. Jaggi. On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems, pages 496–504, 2015.
[9] S. Lacoste-Julien, M. Jaggi, M. Schmidt, and P. Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML 2013 International Conference on Machine Learning, pages 53–61, 2013.
[10] O. Meshi, M. Mahdavi, and D. Sontag. On the tightness of LP relaxations for structured prediction. arXiv preprint arXiv:1511.01419, 2015.
[11] O. Meshi, D. Sontag, T. Jaakkola, and A. Globerson. Learning efficiently with approximate inference via dual losses. 2010.
[12] O. Meshi, N. Srebro, and T. Hazan. Efficient training of structured SVMs via soft constraints. In AISTATS, 2015.
[13] A. Osokin, J.-B. Alayrac, I. Lukasewitz, P. K. Dokania, and S. Lacoste-Julien. Minding the gaps for block Frank-Wolfe optimization of structured SVMs. arXiv preprint arXiv:1605.09346, 2016.
[14] P. Ravikumar and J. Lafferty. Quadratic programming relaxations for metric labeling and Markov random field MAP estimation. In ICML, 2006.
[15] R. Samdani and D. Roth. Efficient decomposed learning for structured prediction. ICML, 2012.
[16] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 2011.
[17] T. Su, T. Zhang, and D. Guan. Corpus-based HIT-MW database for offline recognition of general-purpose Chinese handwritten text. IJDAR, 10(1):27–38, 2007.
[18] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large margin approach. In ICML, 2005.
[19] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems, volume 16, 2003.
[20] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[21] P. Woodland and D. Povey. Large scale discriminative training for speech recognition. In ASR2000 - Automatic Speech Recognition Workshop (ITRW), 2000.
[22] I. E. Yen, X. Lin, E. J. Zhang, E. P. Ravikumar, and I. S. Dhillon. A convex atomic-norm approach to multiple sequence alignment and motif discovery. 2016.
[23] I. E. Yen, D. Malioutov, and A. Kumar. Scalable exemplar clustering and facility location via augmented block coordinate descent with column generation. In AISTATS, 2016.
[24] I. E.-H. Yen, K. Zhong, C.-J. Hsieh, P. K. Ravikumar, and I. S. Dhillon. Sparse linear programming via primal and dual augmented coordinate descent. In NIPS, 2015.
5,995 | 6,423 | A Unified Approach for Learning the Parameters of
Sum-Product Networks
Han Zhao
Machine Learning Dept.
Carnegie Mellon University
[email protected]
Pascal Poupart
School of Computer Science
University of Waterloo
[email protected]
Geoff Gordon
Machine Learning Dept.
Carnegie Mellon University
[email protected]
Abstract
We present a unified approach for learning the parameters of Sum-Product networks
(SPNs). We prove that any complete and decomposable SPN is equivalent to a
mixture of trees where each tree corresponds to a product of univariate distributions.
Based on the mixture model perspective, we characterize the objective function
when learning SPNs based on the maximum likelihood estimation (MLE) principle
and show that the optimization problem can be formulated as a signomial program.
We construct two parameter learning algorithms for SPNs by using sequential
monomial approximations (SMA) and the concave-convex procedure (CCCP),
respectively. The two proposed methods naturally admit multiplicative updates,
hence effectively avoiding the projection operation. With the help of the unified
framework, we also show that, in the case of SPNs, CCCP leads to the same
algorithm as Expectation Maximization (EM) despite the fact that they are different
in general.
1
Introduction
Sum-product networks (SPNs) are new deep graphical model architectures that admit exact probabilistic inference in linear time in the size of the network [14]. Similar to traditional graphical
models, there are two main problems when learning SPNs: structure learning and parameter learning.
Parameter learning is interesting even if we know the ground truth structure ahead of time; structure learning depends on parameter learning , so better parameter learning can often lead to better
structure learning. Poon and Domingos [14] and Gens and Domingos [6] proposed both generative
and discriminative learning algorithms for parameters in SPNs. At a high level, these approaches
view SPNs as deep architectures and apply projected gradient descent (PGD) to optimize the data
log-likelihood. There are several drawbacks associated with PGD. For example, the projection step in
PGD hurts the convergence of the algorithm and it will often lead to solutions on the boundary of the
feasible region. Also, PGD contains an additional arbitrary parameter, the projection margin, which
can be hard to set well in practice. In [14, 6], the authors also mentioned the possibility of applying
EM algorithms to train SPNs by viewing sum nodes in SPNs as hidden variables. They presented an
EM update formula without details. However, the update formula for EM given in [14, 6] is incorrect,
as first pointed out and corrected by [12].
In this paper we take a different perspective and present a unified framework, which treats [14, 6] as
special cases, for learning the parameters of SPNs. We prove that any complete and decomposable
SPN is equivalent to a mixture of trees where each tree corresponds to a product of univariate
distributions. Based on the mixture model perspective, we can precisely characterize the functional
form of the objective function based on the network structure. We show that the optimization problem
associated with learning the parameters of SPNs based on the MLE principle can be formulated
as a signomial program (SP), where both PGD and exponentiated gradient (EG) can be viewed as
first order approximations of the signomial program after suitable transformations of the objective
function. We also show that the signomial program formulation can be equivalently transformed into
a difference of convex functions (DCP) formulation, where the objective function of the program
can be naturally expressed as a difference of two convex functions. The DCP formulation allows
us to develop two efficient optimization algorithms for learning the parameters of SPNs based on
sequential monomial approximations (SMA) and the concave-convex procedure (CCCP), respectively.
Both proposed approaches naturally admit multiplicative updates, hence effectively deal with the
positivity constraints of the optimization. Furthermore, under our unified framework, we also show
that CCCP leads to the same algorithm as EM, despite the fact that these two approaches are different from
each other in general. Although we mainly focus on MLE based parameter learning, the mixture
model interpretation of SPN also helps to develop a Bayesian learning method for SPNs [21].
PGD, EG, SMA and CCCP can all be viewed as different levels of convex relaxation of the original
SP. Hence the framework also provides an intuitive way to compare all four approaches. We conduct
extensive experiments on 20 benchmark data sets to compare the empirical performance of PGD, EG,
SMA and CCCP. Experimental results validate our theoretical analysis that CCCP is the best among
all 4 approaches, showing that it converges consistently faster and with more stability than the other
three methods. Furthermore, we use CCCP to boost the performance of LearnSPN [7], showing that
it can achieve results comparable to state-of-the-art structure learning algorithms using SPNs with
much smaller network sizes.
2
Background
2.1
Sum-Product Networks
To simplify the discussion of the main idea of our unified framework, we focus our attention on SPNs
over Boolean random variables. However, the framework presented here is general and can be easily
extended to other discrete and continuous random variables. We first define the notion of network
polynomial. We use I_x to denote an indicator variable that returns 1 when X = x and 0 otherwise.
Definition 1 (Network Polynomial [4]). Let f(·) ≥ 0 be an unnormalized probability distribution over a Boolean random vector X_{1:N}. The network polynomial of f(·) is the multilinear function Σ_x f(x) Π_{n=1}^N I_{x_n} of indicator variables, where the summation is over all possible instantiations of the Boolean random vector X_{1:N}.
A Sum-Product Network (SPN) over Boolean variables X1:N is a rooted DAG that computes the
network polynomial over X1:N . The leaves are univariate indicators of Boolean variables and
internal nodes are either sum or product. Each sum node computes a weighted sum of its children
and each product node computes the product of its children. The scope of a node in an SPN is
defined as the set of variables that have indicators among the node's descendants. For any node v in an SPN, if v is a terminal node, say, an indicator variable over X, then scope(v) = {X}; otherwise scope(v) = ∪_{ṽ∈Ch(v)} scope(ṽ). An SPN is complete iff each sum node has children with the same scope. An SPN is decomposable iff for every product node v, scope(v_i) ∩ scope(v_j) = ∅ for all v_i, v_j ∈ Ch(v), i ≠ j. The scope of the root node is {X_1, . . . , X_N}.
In this paper, we focus on complete and decomposable SPNs. For a complete and decomposable SPN S, each node v in S defines a network polynomial f_v(·) which corresponds to the sub-SPN (subgraph) rooted at v. The network polynomial of S, denoted by f_S, is the network polynomial defined by the root of S, which can be computed recursively from its children. The probability distribution induced by an SPN S is defined as Pr_S(x) ≜ f_S(x) / Σ_x f_S(x). The normalization constant Σ_x f_S(x) can be computed in O(|S|) in SPNs by setting the values of all the leaf nodes to 1, i.e., Σ_x f_S(x) = f_S(1) [14]. This leads to efficient joint/marginal/conditional inference in SPNs.
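To make the linear-time inference concrete, here is a minimal evaluator (our sketch, not reference code): one bottom-up pass over nodes stored in topological order yields f_S(x), and passing all-ones indicator values yields the normalization constant f_S(1).

def spn_value(nodes, leaf_vals):
    """One bottom-up pass over an SPN whose nodes are listed in topological
    order (children before parents). Node formats:
      ('leaf', j)             -> value leaf_vals[j] (an indicator I_x)
      ('sum', [(c, w), ...])  -> weighted sum of child values
      ('prod', [c, ...])      -> product of child values
    """
    val = []
    for node in nodes:
        if node[0] == 'leaf':
            val.append(leaf_vals[node[1]])
        elif node[0] == 'sum':
            val.append(sum(w * val[c] for c, w in node[1]))
        else:
            p = 1.0
            for c in node[1]:
                p *= val[c]
            val.append(p)
    return val[-1]

# Leaves 0..3 are I[X1=1], I[X1=0], I[X2=1], I[X2=0]; the root multiplies
# a sum over the X1 indicators with a sum over the X2 indicators.
nodes = [('leaf', 0), ('leaf', 1), ('leaf', 2), ('leaf', 3),
         ('sum', [(0, 0.3), (1, 0.7)]), ('sum', [(2, 0.9), (3, 0.1)]),
         ('prod', [4, 5])]
fx = spn_value(nodes, [1, 0, 0, 1])  # f_S(x) at x = (X1=1, X2=0): 0.03
fz = spn_value(nodes, [1, 1, 1, 1])  # normalization f_S(1): 1.0
print(fx / fz)                       # Pr_S(x)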
2.2
Signomial Programming (SP)
Before introducing SP, we first introduce geometric programming (GP), which is a strict subclass of SP. A monomial is defined as a function h : R_{++}^n → R of the form h(x) = d x_1^{a_1} x_2^{a_2} ⋯ x_n^{a_n}, where the domain is restricted to the positive orthant (R_{++}^n), the coefficient d is positive, and the exponents satisfy a_i ∈ R, ∀i. A posynomial is a sum of monomials: g(x) = Σ_{k=1}^K d_k x_1^{a_{1k}} x_2^{a_{2k}} ⋯ x_n^{a_{nk}}. One of the key properties of posynomials is positivity, which allows us to transform any posynomial into the log domain. A GP in standard form is defined to be an optimization problem where both the objective function and the inequality constraints are posynomials and the equality constraints are monomials. There is also an implicit constraint that x ∈ R_{++}^n.
A GP in its standard form is not a convex program since posynomials are not convex functions
in general. However, we can effectively transform it into a convex problem by using the logarithmic transformation trick on x, the multiplicative coefficients of each monomial and also each
objective/constraint function [3, 1].
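As a small numeric illustration of that trick (ours, not from the paper): with x = exp(y), the log of a posynomial becomes a log-sum-exp of affine functions of y, which is convex.

import numpy as np
from scipy.special import logsumexp

d = np.array([2.0, 0.5, 1.5])    # positive monomial coefficients d_k
A = np.array([[1.0, -2.0],       # exponent rows a_k (arbitrary reals)
              [0.5,  0.3],
              [-1.0, 2.0]])

def log_g(y):
    # log g(exp(y)) = logsumexp_k(log d_k + a_k . y): convex in y.
    return logsumexp(np.log(d) + A @ y)

y = np.array([0.2, -0.1])
direct = np.sum(d * np.prod(np.exp(y) ** A, axis=1))  # g(x) at x = exp(y)
assert np.isclose(np.exp(log_g(y)), direct)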
An SP has the same form as GP except that the multiplicative constant d inside each monomial is not
restricted to be positive, i.e., d can take any real value. Although the difference seems to be small,
there is a huge difference between GP and SP from the computational perspective. The negative
multiplicative constant in monomials invalidates the logarithmic transformation trick frequently used
in GP. As a result, SPs cannot be reduced to convex programs and are believed to be hard to solve in
general [1].
3
Unified Approach for Learning
In this section we will show that the parameter learning problem of SPNs based on the MLE principle
can be formulated as an SP. We will use a sequence of optimal monomial approximations combined
with backtracking line search and the concave-convex procedure to tackle the SP. Due to space
constraints, we refer interested readers to the supplementary material for all the proof details.
3.1
Sum-Product Networks as a Mixture of Trees
We introduce the notion of induced trees from SPNs and use it to show that every complete and
decomposable SPN can be interpreted as a mixture of induced trees, where each induced tree
corresponds to a product of univariate distributions. From this perspective, an SPN can be understood
as a huge mixture model where the effective number of components in the mixture is determined by
its network structure. The method we describe here is not the first method for interpreting an SPN (or
the related arithmetic circuit) as a mixture distribution [20, 5, 2]; but, the new method can result in an
exponentially smaller mixture, see the end of this section for more details.
Definition 2 (Induced SPN). Given a complete and decomposable SPN S over X_{1:N}, let T = (T_V, T_E) be a subgraph of S. T is called an induced SPN from S if
1. Root(S) ∈ T_V.
2. If v ∈ T_V is a sum node, then exactly one child of v in S is in T_V, and the corresponding edge is in T_E.
3. If v ∈ T_V is a product node, then all the children of v in S are in T_V, and the corresponding edges are in T_E.
Theorem 1. If T is an induced SPN from a complete and decomposable SPN S, then T is a tree that
is complete and decomposable.
As a result of Thm. 1, we will use the terms induced SPNs and induced trees interchangeably. With
some abuse of notation, we use T (x) to mean the value of the network polynomial of T with input
vector x.
Theorem 2. If T is an induced tree from S over X_{1:N}, then T(x) = Π_{(v_i,v_j)∈T_E} w_{ij} Π_{n=1}^N I_{x_n}, where w_{ij} is the edge weight of (v_i, v_j) if v_i is a sum node, and w_{ij} = 1 if v_i is a product node.
Remark. Although we focus our attention on Boolean random variables for the simplicity of discussion and illustration, Thm. 2 can be extended to the case where the univariate distributions at the leaf nodes are continuous or discrete distributions with countably infinitely many values, e.g., Gaussian distributions or Poisson distributions. We can simply replace the product-of-univariate-distributions term, Π_{n=1}^N I_{x_n}, in Thm. 2 with the general form Π_{n=1}^N p_n(X_n), where p_n(X_n) is a univariate distribution over X_n. Also note that it is possible for two unique induced trees to share the same product of univariate distributions, but in this case their weight terms Π_{(v_i,v_j)∈T_E} w_{ij} are guaranteed to be different. As we will see shortly, Thm. 2 implies that the joint distribution over {X_n}_{n=1}^N represented by an SPN is essentially a mixture model with potentially exponentially many components in the mixture.
Definition 3 (Network cardinality). The network cardinality τ_S of an SPN S is the number of unique induced trees.
Theorem 3. τ_S = f_S(1|1), where f_S(1|1) is the value of the network polynomial of S with input vector 1 and all edge weights set to 1.
Theorem 4. S(x) = Σ_{t=1}^{τ_S} T_t(x), where T_t is the t-th unique induced tree of S.
Remark. The above four theorems establish that an SPN S is an ensemble or mixture of trees, where each tree computes an unnormalized distribution over X_{1:N}. The total number of unique trees in S is the network cardinality τ_S, which depends only on the structure of S. Each component is a simple product of univariate distributions. We illustrate the theorems above with a simple example in Fig. 1.
[Figure 1 diagram omitted: an SPN with a root sum node (weights w1, w2, w3) over products of univariate distributions over X1 and X2, shown equal to w1 × (first induced tree) + w2 × (second induced tree) + w3 × (third induced tree).]
Figure 1: A complete and decomposable SPN is a mixture of induced trees. Double circles indicate univariate distributions over X1 and X2. Different colors are used to highlight unique induced trees; each induced tree is a product of univariate distributions over X1 and X2.
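As a quick numeric check of Theorems 3 and 4, here is a toy SPN of our own choosing, in the same spirit as Fig. 1 (the structure below is illustrative, not the figure itself):

# Toy SPN: S = (a1*I[X1=1] + a2*I[X1=0]) * (b1*I[X2=1] + b2*I[X2=0]).
# Its induced trees are the four products a_i * b_j * I[X1=.] * I[X2=.].
def S(i1, i0, j1, j0, w):
    a1, a2, b1, b2 = w
    return (a1 * i1 + a2 * i0) * (b1 * j1 + b2 * j0)

w = (0.3, 0.7, 0.9, 0.1)

# Theorem 3: tau_S = f_S(1|1), all indicators and all weights set to one.
assert S(1, 1, 1, 1, (1, 1, 1, 1)) == 4    # four unique induced trees

# Theorem 4: S(x) equals the sum over induced trees, e.g. at x = (1, 0).
ind = (1, 0, 0, 1)   # I[X1=1], I[X1=0], I[X2=1], I[X2=0]
trees = [w[0]*w[2]*ind[0]*ind[2], w[0]*w[3]*ind[0]*ind[3],
         w[1]*w[2]*ind[1]*ind[2], w[1]*w[3]*ind[1]*ind[3]]
assert abs(S(*ind, w) - sum(trees)) < 1e-12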
Zhao et al. [20] show that every complete and decomposable SPN is equivalent to a bipartite Bayesian network with a layer of hidden variables and a layer of observable random variables. The number of hidden variables in the bipartite Bayesian network is equal to the number of sum nodes in S. A naive expansion of such a Bayesian network into a mixture model leads to a huge mixture with 2^{O(M)} components, where M is the number of sum nodes in S. Here we complement their theory and show that each complete and decomposable SPN is essentially a mixture of trees, where the effective number of unique induced trees is given by τ_S. Note that τ_S = f_S(1|1) depends only on the network structure, and can often be much smaller than 2^{O(M)}. Without loss of generality, assuming that in S layers of sum nodes alternate with layers of product nodes, f_S(1|1) = Ω(2^h), where h is the height of S. However, the exponentially many trees are recursively merged and combined in S such that the overall network size remains tractable.
3.2
Maximum Likelihood Estimation as SP
Let's consider the likelihood function computed by an SPN S over N binary random variables with model parameters w and input vector x ∈ {0, 1}^N. Here the model parameters in S are the edge weights from every sum node, collected into a long vector w ∈ R_{++}^D, where D is the number of edges emanating from sum nodes in S. By definition, the probability distribution induced by S can be computed by Pr_S(x|w) ≜ f_S(x|w) / Σ_x f_S(x|w) = f_S(x|w) / f_S(1|w).
Corollary 5. Let S be an SPN with weights w ∈ R_{++}^D over input vector x ∈ {0, 1}^N. The network polynomial f_S(x|w) is a posynomial:
f_S(x|w) = Σ_{t=1}^{f_S(1|1)} Π_{n=1}^N I_{x_n}^{(t)} Π_{d=1}^D w_d^{I[w_d ∈ T_t]},
where I[w_d ∈ T_t] is the indicator of whether w_d is in the t-th induced tree T_t or not. Each monomial corresponds exactly to a unique induced tree SPN from S.
The above statement is a direct corollary of Thm. 2, Thm. 3 and Thm. 4. From the definition of the network polynomial, we know that f_S is a multilinear function of the indicator variables. Corollary 5 works as a complement, characterizing the functional form of the network polynomial in terms of w. It follows that the likelihood function L_S(w) ≜ Pr_S(x|w) can be expressed as the ratio of two posynomial functions. We now show that the optimization problem based on MLE is an SP. Using the definition of Pr(x|w) and Corollary 5, and letting τ = f_S(1|1), the MLE problem can be rewritten as
maximize_w   f_S(x|w) / f_S(1|w) = [ Σ_{t=1}^{τ} Π_{n=1}^N I_{x_n}^{(t)} Π_{d=1}^D w_d^{I[w_d ∈ T_t]} ] / [ Σ_{t=1}^{τ} Π_{d=1}^D w_d^{I[w_d ∈ T_t]} ]    (1)
subject to   w ∈ R_{++}^D
Proposition 6. The MLE problem for SPNs is a signomial program.
Being nonconvex in general, SP is essentially hard to solve from a computational perspective [1, 3].
However, despite the hardness of SP in general, the objective function in the MLE formulation of
SPNs has a special structure, i.e., it is the ratio of two posynomials, which makes the design of
efficient optimization algorithms possible.
3.3
Difference of Convex Functions
Both PGD and EG are first-order methods and they can be viewed as approximating the SP after
applying a logarithmic transformation to the objective function only. Although (1) is a signomial
program, its objective function is expressed as the ratio of two posynomials. Hence, we can still
apply the logarithmic transformation trick used in geometric programming to its objective function
and to the variables to be optimized. More concretely, let w_d = exp(y_d), ∀d, and take the log of the objective function; it becomes equivalent to maximizing the following new objective without any constraint on y:
maximize_y   log( Σ_{t=1}^{τ(x)} exp( Σ_{d=1}^D y_d I[y_d ∈ T_t] ) ) − log( Σ_{t=1}^{τ} exp( Σ_{d=1}^D y_d I[y_d ∈ T_t] ) )    (2)
Note that in the first term of Eq. 2 the upper index τ(x) ≤ τ ≜ f_S(1|1) depends on the current input x. By transforming into the log-space, we naturally guarantee the positivity of the solution at each iteration, hence turning a constrained optimization problem into an unconstrained one without any sacrifice. Both terms in Eq. 2 are convex functions in y after the transformation. Hence, the transformed objective function is now expressed as the difference of two convex functions, which is called a DC function [9]. This helps us design two efficient algorithms to solve the problem based on the general idea of sequential convex approximations for nonlinear programming.
3.3.1
Sequential Monomial Approximation
Let's consider the linearization of both terms in Eq. 2 in order to apply first-order methods in the
transformed space. To compute the gradient with respect to different components of y, we view each
node of an SPN as an intermediate function of the network polynomial and apply the chain rule to
back-propagate the gradient. The differentiation of fS (x|w) with respect to the root node of the
network is set to be 1. The differentiation of the network polynomial with respect to a partial function
at each node can then be computed in two passes of the network: the bottom-up pass evaluates the
values of all partial functions given the current input x and the top-down pass differentiates the
network polynomial with respect to each partial function. Following the evaluation-differentiation
passes, the gradient of the objective function in (2) can be computed in O(|S|). Furthermore, although
the computation is conducted in y, the results are fully expressed in terms of w, which suggests that
in practice we do not need to explicitly construct y from w.
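A sketch of the evaluation-differentiation passes follows (ours, reusing the node encoding of the earlier sketch; names are hypothetical):

def spn_forward_backward(nodes, leaf_vals):
    """Evaluation-differentiation passes over an SPN in topological order.
    Returns node values f_v and derivatives d_v = df_S/df_v; the gradient
    w.r.t. a sum-node edge weight (i -> j) is then d[i] * f[j]."""
    n = len(nodes)
    f = [0.0] * n
    for i, node in enumerate(nodes):          # bottom-up evaluation
        if node[0] == 'leaf':
            f[i] = leaf_vals[node[1]]
        elif node[0] == 'sum':
            f[i] = sum(w * f[c] for c, w in node[1])
        else:
            f[i] = 1.0
            for c in node[1]:
                f[i] *= f[c]
    d = [0.0] * n
    d[n - 1] = 1.0                            # root derivative set to 1
    for i in range(n - 1, -1, -1):            # top-down differentiation
        node = nodes[i]
        if node[0] == 'sum':
            for c, w in node[1]:
                d[c] += w * d[i]
        elif node[0] == 'prod':
            for c in node[1]:                 # d f_i / d f_c = prod of
                contrib = d[i]                # the other children's values
                for c2 in node[1]:
                    if c2 != c:
                        contrib *= f[c2]
                d[c] += contrib
    return f, d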
Let f(y) = log f_S(x|exp(y)) − log f_S(1|exp(y)). It follows that approximating f(y) with the best linear function is equivalent to using the best monomial approximation of the signomial program (1). This leads to a sequential monomial approximation of the original SP formulation: at each iterate y^(k), we linearize both terms in Eq. 2 and form the optimal monomial function in terms of w^(k). The additive update of y^(k) leads to a multiplicative update of w^(k) since w^(k) = exp(y^(k)), and we use a backtracking line search to determine the step size of the update in each iteration.
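A compact sketch of one such step (ours): the additive step in y = log w becomes a multiplicative, positivity-preserving update in w, accepted only if it increases the objective.

import numpy as np

def sma_step(w, grad_f, step0=1.0, shrink=0.8, max_tries=30):
    """One SMA update with backtracking line search (names ours).
    grad_f(w) must return (f(w), gradient of f w.r.t. y = w * df/dw)."""
    f0, g = grad_f(w)
    step = step0
    for _ in range(max_tries):
        w_new = w * np.exp(step * g)          # multiplicative update
        f1, _ = grad_f(w_new)
        if f1 > f0:                           # ascent achieved: accept
            return w_new
        step *= shrink                        # otherwise shrink step size
    return w

# Toy usage: maximize f(w) = sum(log w - w), whose optimum is w = 1.
toy = lambda w: (np.log(w).sum() - w.sum(), 1.0 - w)  # grad in y is 1 - w
w = np.full(3, 0.1)
for _ in range(100):
    w = sma_step(w, toy)
print(w)   # close to [1, 1, 1]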
3.3.2
Concave-convex Procedure
Sequential monomial approximation fails to use the structure of the problem when learning SPNs. Here we propose another approach based on the concave-convex procedure (CCCP) [18] that exploits the fact that the objective function is expressed as the difference of two convex functions. At a high level, CCCP solves a sequence of concave surrogate optimizations until convergence. In many cases, the maximum of a concave surrogate function can only be found using other convex solvers, so the efficiency of CCCP highly depends on the choice of the convex solver. However, we show that by a suitable transformation of the network we can compute the maximum of the concave surrogate in closed form in time linear in the network size, which leads to a very efficient algorithm for learning the parameters of SPNs. We also prove the convergence properties of our algorithm.
Consider the objective function to be maximized in the DCP: f(y) = log f_S(x|exp(y)) − log f_S(1|exp(y)) ≜ f_1(y) + f_2(y), where f_1(y) ≜ log f_S(x|exp(y)) is a convex function and f_2(y) ≜ −log f_S(1|exp(y)) is a concave function. We can linearize only the convex part f_1(y) to obtain a surrogate function
f̃(y, z) = f_1(z) + ∇_z f_1(z)^T (y − z) + f_2(y)    (3)
for all y, z ∈ R^D. Now f̃(y, z) is a concave function in y. Due to the convexity of f_1(y) we have f_1(y) ≥ f_1(z) + ∇_z f_1(z)^T (y − z), ∀y, z, and as a result the following two properties always hold for all y, z: f̃(y, z) ≤ f(y) and f̃(y, y) = f(y). CCCP updates y at each iteration k by solving y^(k) ∈ argmax_y f̃(y, y^(k−1)), unless we already have y^(k−1) ∈ argmax_y f̃(y, y^(k−1)), in which case a generalized fixed point y^(k−1) has been found and the algorithm stops.
It is easy to show that at each iteration of CCCP we always have f(y^(k)) ≥ f(y^(k−1)). Note also that f(y) computes the log-likelihood of input x and is therefore bounded above by 0. By the monotone convergence theorem, lim_{k→∞} f(y^(k)) exists and the sequence {f(y^(k))} converges.
We now discuss how to compute a closed-form solution for the maximization of the concave surrogate f̃(y, y^(k−1)). Since f̃(y, y^(k−1)) is differentiable and concave for any fixed y^(k−1), a sufficient and necessary condition for its maximum is
∇_y f̃(y, y^(k−1)) = ∇_{y^(k−1)} f_1(y^(k−1)) + ∇_y f_2(y) = 0.    (4)
In the above equation, if we consider only the partial derivative with respect to y_{ij} (w_{ij}), we obtain
( w_{ij}^(k−1) f_{v_j}(x|w^(k−1)) / f_S(x|w^(k−1)) ) · ( ∂f_S(x|w^(k−1)) / ∂f_{v_i}(x|w^(k−1)) ) = ( w_{ij} f_{v_j}(1|w) / f_S(1|w) ) · ( ∂f_S(1|w) / ∂f_{v_i}(1|w) ).    (5)
Eq. 5 leads to a system of D nonlinear equations, which is hard to solve in closed form. However, if we make a change of variables by considering locally normalized weights w'_{ij} (i.e., w'_{ij} ≥ 0 and Σ_j w'_{ij} = 1, ∀i), then a solution can be easily computed. As described in [13, 20], any SPN can be transformed into an equivalent normal SPN with locally normalized weights in a bottom-up pass as follows:
w'_{ij} = w_{ij} f_{v_j}(1|w) / Σ_j w_{ij} f_{v_j}(1|w).    (6)
We can then replace w_{ij} f_{v_j}(1|w) in the above equation by the expression it equals in Eq. 5 to obtain a closed-form solution:
w'_{ij} ∝ w_{ij}^(k−1) · ( f_{v_j}(x|w^(k−1)) / f_S(x|w^(k−1)) ) · ( ∂f_S(x|w^(k−1)) / ∂f_{v_i}(x|w^(k−1)) ).    (7)
Note that in the above derivation both f_{v_i}(1|w)/f_S(1|w) and ∂f_S(1|w)/∂f_{v_i}(1|w) can be treated as constants and hence absorbed, since the w'_{ij}, ∀j, are constrained to be locally normalized. In order to obtain a solution to Eq. 5, the sufficient statistics for each edge weight w_{ij} include only three terms, i.e., the evaluation value at v_j, the differentiation value at v_i, and the previous edge weight w_{ij}^(k−1), all of which can be obtained in two passes of the network for each input x. Thus the computational complexity of maximizing the concave surrogate is O(|S|). Interestingly, Eq. 7 leads to the same update formula as the EM algorithm [12], despite the fact that CCCP and EM start from different perspectives. We show that all the limit points of the sequence {w^(k)}_{k=1}^∞ are guaranteed to be stationary points of the DCP in (2).
Theorem 7. Let {w^(k)}_{k=1}^∞ be any sequence generated using Eq. 7 from any positive initial point. Then all the limit points of {w^(k)}_{k=1}^∞ are stationary points of the DCP in (2). In addition, lim_{k→∞} f(y^(k)) = f(y*), where y* is some stationary point of (2).
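For concreteness, one CCCP/EM update per Eq. 7 can be sketched as follows (ours, not the authors' C++ implementation); f and d denote the per-node values and derivatives from the two evaluation-differentiation passes above.

def cccp_update(sum_children, weights, f, d):
    """One CCCP / EM multiplicative update per Eq. (7) (names ours).

    sum_children : {sum node i: list of child nodes j}
    weights      : {(i, j): current locally normalized weight w_ij}
    f, d         : node values f_v(x|w) and derivatives df_S/df_v(x|w)
    """
    new_w = dict(weights)
    for i, children in sum_children.items():
        raw = {j: weights[(i, j)] * d[i] * f[j] for j in children}
        z = sum(raw.values())
        if z > 0:                        # renormalize each sum node locally
            for j in children:
                new_w[(i, j)] = raw[j] / z
    return new_w

# Tiny example: one sum node (index 2) with two children (0 and 1).
f = [0.2, 0.8, 0.5]       # f[2] = 0.5*0.2 + 0.5*0.8
d = [0.5, 0.5, 1.0]       # root derivative is 1; d[child] = w * d[root]
w = cccp_update({2: [0, 1]}, {(2, 0): 0.5, (2, 1): 0.5}, f, d)
print(w)                  # {(2, 0): 0.2, (2, 1): 0.8}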
We summarize all four algorithms and highlight their connections and differences in Table 1. Although
we mainly discuss the batch version of those algorithms, all of the four algorithms can be easily
adapted to work in stochastic and/or parallel settings.
Table 1: Summary of PGD, EG, SMA and CCCP. Var. means the optimization variables; here f_1(w) = log f_S(x|w), f_2(w) = log f_S(1|w), γ is the step size, and P_{R_{++}^ε} denotes projection onto the positive orthant with margin ε.

Algo | Var.  | Update Type    | Update Formula
PGD  | w     | Additive       | w_d^(k+1) ← P_{R_{++}^ε}[ w_d^(k) + γ (∇_{w_d} f_1(w^(k)) − ∇_{w_d} f_2(w^(k))) ]
EG   | w     | Multiplicative | w_d^(k+1) ← w_d^(k) exp{ γ (∇_{w_d} f_1(w^(k)) − ∇_{w_d} f_2(w^(k))) }
SMA  | log w | Multiplicative | w_d^(k+1) ← w_d^(k) exp{ γ w_d^(k) (∇_{w_d} f_1(w^(k)) − ∇_{w_d} f_2(w^(k))) }
CCCP | log w | Multiplicative | w_ij^(k+1) ∝ w_ij^(k) · ∇_{v_i} f_S(w^(k)) · f_{v_j}(w^(k))

4
Experiments
4.1
Experimental Setting
We conduct experiments on 20 benchmark data sets from various domains to compare and evaluate
the convergence performance of the four algorithms: PGD, EG, SMA and CCCP (EM). These 20
data sets are widely used in [7, 15] to assess different SPNs for the task of density estimation. All the
features in the 20 data sets are binary features. All the SPNs that are used for comparisons of PGD,
EG, SMA and CCCP are trained using LearnSPN [7]. We discard the weights returned by LearnSPN
and use random weights as initial model parameters. The random weights are determined by the
same random seed in all four algorithms. Detailed information about these 20 datasets and the SPNs used in the experiments is provided in the supplementary material.
4.2
Parameter Learning
We implement all four algorithms in C++. For each algorithm, we set the maximum number of
iterations to 50. If the absolute difference in the training log-likelihood at two consecutive steps is
less than 0.001, the algorithms are stopped. For PGD, EG and SMA, we combine each of them with
backtracking line search and use a weight shrinking coefficient set at 0.8. The learning rates are
initialized to 1.0 for all three methods. For PGD, we set the projection margin ε to 0.01. There is no learning rate and no backtracking line search in CCCP. We set the smoothing parameter to 0.001 in
CCCP to avoid numerical issues.
We show in Fig. 2 the average log-likelihood scores on 20 training data sets to evaluate the convergence
speed and stability of PGD, EG, SMA and CCCP. Clearly, CCCP wins by a large margin over
PGD, EG and SMA, both in convergence speed and solution quality. Furthermore, among the four
algorithms, CCCP is the most stable one due to its guarantee that the log-likelihood (on training data)
will not decrease after each iteration. As shown in Fig. 2, the training curves of CCCP are more
smooth than the other three methods in almost all the cases. These 20 experiments also clearly show
that CCCP often converges in a few iterations. On the other hand, PGD, EG and SMA are on par
with each other since they are all first-order methods. SMA is more stable than PGD and EG and
often achieves better solutions than PGD and EG. On large data sets, SMA also converges faster than
PGD and EG. Surprisingly, EG performs worse than PGD in some cases and is quite unstable despite
the fact that it admits multiplicative updates. The "hook shape" curves of PGD on some data sets, e.g., Kosarak and KDD, are due to the projection operations.
Table 2: Average log-likelihoods on test data. Highest log-likelihoods are highlighted in bold. ↑ indicates statistically better log-likelihoods than CCCP and ↓ indicates statistically worse log-likelihoods than CCCP. Significance is measured based on the Wilcoxon signed-rank test.
Data set    | CCCP     | LearnSPN  | ID-SPN
NLTCS       | -6.029   | ↓-6.099   | ↓-6.050
MSNBC       | -6.045   | ↓-6.113   | -6.048
KDD 2k      | -2.134   | ↓-2.233   | ↓-2.153
Plants      | -12.872  | ↓-12.955  | ↑-12.554
Audio       | -40.020  | ↓-40.510  | -39.824
Jester      | -52.880  | ↓-53.454  | ↓-52.912
Netflix     | -56.782  | ↓-57.385  | ↑-56.554
Accidents   | -27.700  | ↓-29.907  | ↑-27.232
Retail      | -10.919  | ↓-11.138  | -10.945
Pumsb-star  | -24.229  | ↓-24.577  | ↑-22.552
DNA         | -84.921  | ↓-85.237  | ↑-84.693
Kosarak     | -10.880  | ↓-11.057  | -10.605
MSWeb       | -9.970   | ↓-10.269  | -9.800
Book        | -35.009  | ↓-36.247  | ↑-34.436
EachMovie   | -52.557  | ↓-52.816  | ↑-51.550
WebKB       | -157.492 | ↓-158.542 | ↑-153.293
Reuters-52  | -84.628  | ↓-85.979  | ↑-84.389
20 Newsgrp  | -153.205 | ↓-156.605 | ↑-151.666
BBC         | -248.602 | ↓-249.794 | ↓-252.602
Ad          | -27.202  | ↓-27.409  | ↓-40.012
Figure 2: Negative log-likelihood values versus number of iterations for PGD, EG, SMA and CCCP.
The computational complexity per update is O(|S|) in all four algorithms. CCCP often takes less
time than the other three algorithms because it takes fewer iterations to converge. We list detailed
running time statistics for all four algorithms on the 20 data sets in the supplementary material.
4.3
Fine Tuning
We combine CCCP as a "fine tuning" procedure with the structure learning algorithm LearnSPN and compare it to the state-of-the-art structure learning algorithm ID-SPN [15]. More concretely, we keep
the model parameters learned from LearnSPN and use them to initialize CCCP. We then update the
model parameters globally using CCCP as a fine tuning technique. This normally helps to obtain
a better generative model since the original parameters are learned greedily and locally during the
structure learning algorithm. We use the validation set log-likelihood score to avoid overfitting. The
algorithm returns the set of parameters that achieve the best validation set log-likelihood score as
output. Experimental results are reported in Table 2. As shown in Table 2, the use of CCCP after
LearnSPN always helps to improve the model performance. By optimizing model parameters on
these 20 data sets, we boost LearnSPN to achieve better results than state-of-the-art ID-SPN on 7
data sets, where the original LearnSPN only outperforms ID-SPN on 1 data set. Note that the sizes
of the SPNs returned by LearnSPN are much smaller than those produced by ID-SPN. Hence, it is
remarkable that by fine tuning the parameters with CCCP, we can achieve better performance despite
the fact that the models are smaller. For a fair comparison, we also list the size of the SPNs returned
by ID-SPN in the supplementary material.
5
Conclusion
We show that the network polynomial of an SPN is a posynomial function of the model parameters,
and that parameter learning yields a signomial program. We propose two convex relaxations to solve
the SP. We analyze the convergence properties of CCCP for learning SPNs. Extensive experiments are
conducted to evaluate the proposed approaches and current methods. We also recommend combining
CCCP with structure learning algorithms to boost the modeling accuracy.
Acknowledgments
HZ and GG gratefully acknowledge support from ONR contract N000141512365. HZ also thanks
Ryan Tibshirani for the helpful discussion about CCCP.
References
[1] S. Boyd, S.-J. Kim, L. Vandenberghe, and A. Hassibi. A tutorial on geometric programming. Optimization and Engineering, 8(1):67–127, 2007.
[2] H. Chan and A. Darwiche. On the robustness of most probable explanations. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence.
[3] M. Chiang. Geometric programming for communication systems. Now Publishers Inc, 2005.
[4] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM (JACM), 50(3):280–305, 2003.
[5] A. Dennis and D. Ventura. Greedy structure search for sum-product networks. In International Joint Conference on Artificial Intelligence, volume 24, 2015.
[6] R. Gens and P. Domingos. Discriminative learning of sum-product networks. In Advances in Neural Information Processing Systems, pages 3248–3256, 2012.
[7] R. Gens and P. Domingos. Learning the structure of sum-product networks. In Proceedings of The 30th International Conference on Machine Learning, pages 873–880, 2013.
[8] A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. The Journal of Machine Learning Research, 6:2049–2073, 2005.
[9] P. Hartman et al. On functions representable as a difference of convex functions. Pacific J. Math, 9(3):707–713, 1959.
[10] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[11] G. R. Lanckriet and B. K. Sriperumbudur. On the convergence of the concave-convex procedure. pages 1759–1767, 2009.
[12] R. Peharz. Foundations of Sum-Product Networks for Probabilistic Modeling. PhD thesis, Graz University of Technology, 2015.
[13] R. Peharz, S. Tschiatschek, F. Pernkopf, and P. Domingos. On theoretical properties of sum-product networks. In AISTATS, 2015.
[14] H. Poon and P. Domingos. Sum-product networks: A new deep architecture. In Proc. 12th Conf. on Uncertainty in Artificial Intelligence, pages 2551–2558, 2011.
[15] A. Rooshenas and D. Lowd. Learning sum-product networks with direct and indirect variable interactions. In ICML, 2014.
[16] R. Salakhutdinov, S. Roweis, and Z. Ghahramani. On the convergence of bound optimization algorithms. UAI, 2002.
[17] C. J. Wu. On the convergence properties of the EM algorithm. The Annals of Statistics, pages 95–103, 1983.
[18] A. L. Yuille, A. Rangarajan, and A. Yuille. The concave-convex procedure (CCCP). Advances in Neural Information Processing Systems, 2:1033–1040, 2002.
[19] W. I. Zangwill. Nonlinear programming: a unified approach, volume 196. Prentice-Hall, Englewood Cliffs, NJ, 1969.
[20] H. Zhao, M. Melibari, and P. Poupart. On the relationship between sum-product networks and Bayesian networks. In ICML, 2015.
[21] H. Zhao, T. Adel, G. Gordon, and B. Amos. Collapsed variational inference for sum-product networks. In ICML, 2016.
5,996 | 6,424 | A Posteriori Error Bounds for Joint Matrix
Decomposition Problems
Nicol? Colombo
Department of Statistical Science
University College London
[email protected]
Nikos Vlassis
Adobe Research
San Jose, CA
[email protected]
Abstract
Joint matrix triangularization is often used for estimating the joint eigenstructure
of a set M of matrices, with applications in signal processing and machine learning.
We consider the problem of approximate joint matrix triangularization when the
matrices in M are jointly diagonalizable and real, but we only observe a set M̂ of noise perturbed
versions of the matrices in M. Our main result is a first-order upper bound on the distance between
any approximate joint triangularizer of the matrices in M̂ and any exact joint triangularizer of the
matrices in M. The bound depends only on the observable matrices in M̂ and the noise level. In particular, it
does not depend on optimization specific properties of the triangularizer, such as its
proximity to critical points, that are typical of existing bounds in the literature. To
our knowledge, this is the first a posteriori bound for joint matrix decomposition.
We demonstrate the bound on synthetic data for which the ground truth is known.
1 Introduction
Joint matrix decomposition problems appear frequently in signal processing and machine learning,
with notable applications in independent component analysis [7], canonical correlation analysis [20],
and latent variable model estimation [5, 4]. Most of these applications reduce to some instance of
a tensor decomposition problem, and the growing interest in joint matrix decomposition is largely
motivated by such reductions. In particular, in the past decade several "matricization" methods have
been proposed for factorizing tensors by computing the joint decomposition of sets of matrices
extracted from slices of the tensor (see, e.g., [10, 22, 17, 8]).
In this work we address a standard joint matrix decomposition problem, in which we assume a set of
jointly diagonalizable ground-truth (unobserved) matrices
M₀ = {M_n = V diag([λ_{n1}, ..., λ_{nd}]) V⁻¹,  V ∈ R^{d×d},  Λ ∈ R^{N×d}}_{n=1}^N,   (1)

which have been corrupted by noise, so that we observe their noisy versions:

M_σ = {M̂_n = M_n + σ R_n,  M_n ∈ M₀,  R_n ∈ R^{d×d},  ‖R_n‖ ≤ 1}_{n=1}^N.   (2)

The matrices M̂_n ∈ M_σ are the only observed quantities. The scalar σ > 0 is the noise level, and
the matrices R_n are arbitrary noise matrices with Frobenius norm ‖R_n‖ ≤ 1. The key problem
is to estimate from the observed matrices in M_σ the joint eigenstructure V, Λ of the ground-truth
matrices in M₀. One way to address this estimation problem is by trying to approximately jointly
diagonalize the observed matrices in M_σ, for instance by directly searching for an invertible matrix
that approximates V in (1). This approach is known as nonorthogonal joint diagonalization [23, 15,
18], and is often motivated by applications that reduce to nonsymmetric CP tensor decomposition
(see, e.g., [20]).
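The generative model (1)–(2) is easy to instantiate numerically. The following is a minimal NumPy sketch (ours, not the authors' code) that builds a set M₀ of jointly diagonalizable matrices and its noise-perturbed version M_σ; all parameter defaults here are illustrative assumptions:

```python
import numpy as np

def make_problem(N=10, d=5, sigma=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    V = rng.standard_normal((d, d))          # invertible with probability one
    Lam = rng.standard_normal((N, d))        # row n holds the joint eigenvalues of M_n
    M0 = [V @ np.diag(Lam[n]) @ np.linalg.inv(V) for n in range(N)]
    noisy = []
    for M in M0:
        R = rng.standard_normal((d, d))
        R /= np.linalg.norm(R)               # Frobenius normalization, so ||R_n|| <= 1 holds
        noisy.append(M + sigma * R)
    return M0, noisy, V, Lam
```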
An alternative approach to the above estimation problem (in the general case of nonorthogonal V) is
via joint triangularization, also known as joint or simultaneous Schur decomposition [1, 13, 11, 12,
22, 8]. Under mild conditions [14], the ground-truth matrices in M₀ can be jointly triangularized,
that is, there exists an orthogonal matrix U₀ that simultaneously renders all matrices U₀^⊤ M_n U₀ upper
triangular:

low(U₀^⊤ M_n U₀) = 0,   for all n = 1, ..., N,   (3)

where low(A) is the strictly lower triangular part of A, i.e., [low(A)]_ij = A_ij if i > j and 0 otherwise.
On the other hand, when σ > 0 the observed matrices in M_σ can only be approximately jointly
triangularized, for instance by solving the following optimization problem

min_{U ∈ O(d)} L(U),   where   L(U) = (1/N) Σ_{n=1}^N ‖low(U^⊤ M̂_n U)‖²,   (4)

where ‖·‖ denotes the Frobenius norm and optimization is over the manifold O(d) of orthogonal matrices.
The optimization problem can be addressed by Jacobi-like methods [13], or Newton-like methods that
optimize directly on the O(d) manifold [8]. For any feasible U in (4), the joint eigenvalues Λ in (1)
can be estimated from the diagonals of U^⊤ M̂_n U. This approach has been used in nonsymmetric CP
tensor decomposition [22, 8] and other applications [9, 13].

We also note two related problems. In the special case that the ground-truth matrices M_n in M₀ are
symmetric, the matrix V in (1) is orthogonal, and the estimation problem is known as orthogonal joint
diagonalization [7]. Our results apply to this special case too. Another problem is joint diagonalization
by congruence [6, 3, 17], in which the matrix V⁻¹ in (1) is replaced by V^⊤. In that case the matrix Λ
in (1) does not contain the joint eigenvalues, and our results do not apply directly.
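For concreteness, the objective (4) and the low(·) operator take only a few lines of NumPy; this sketch assumes the matrices produced by the earlier snippet and is illustrative rather than the authors' implementation:

```python
import numpy as np

def low(A):
    # strictly lower triangular part of A
    return np.tril(A, k=-1)

def L(U, noisy):
    # L(U) = (1/N) * sum_n ||low(U^T M_hat_n U)||_F^2, the objective in (4)
    return float(np.mean([np.linalg.norm(low(U.T @ M @ U)) ** 2 for M in noisy]))
```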
Contributions We are addressing the joint matrix triangularization problem defined via (4), under
the model assumptions (1), (2), and (3). The optimization problem (4) is nonconvex, and hence it
is expected to be hard to solve to global optimality in general. Therefore, error bounds are needed
that can assess the quality of a solution produced by some algorithm that tries to solve (4). Our
main result (Theorem 1) is an error bound that allows to directly assess a posteriori the quality of
any feasible triangularizer U in (4), in terms of its proximity to the (unknown) exact triangularizer
of the ground-truth matrices in M₀, regardless of the algorithm used for optimization. The bound
depends only on observable quantities and the noise parameter σ in (2). The parameter σ can often be
bounded by a function of the sample size, as in problems involving empirical moment matching [4].
Our approach draws on the perturbation analysis of the Schur decomposition of a single matrix [16].
To our knowledge, our bound in Theorem 1 is the first a posteriori error bound for joint matrix
decomposition problems. Existing bounds in the literature have a dependence on the ground-truth
(and hence unobserved) matrices [11, 17], the proximity of a feasible U to critical points of the
objective function [6], or the amount of collinearity between the columns of the matrix Λ in (1) [3].
Our error bound is free of such dependencies. Outside the context of joint matrix decomposition, a
posteriori error bounds have found practical uses in nonconvex optimization [19] and the design of
algorithms [21].
Notation All matrices, vectors, and numbers are real. When the context is clear we use 1 to
denote the identity matrix. We use ‖·‖ for the matrix Frobenius norm and the vector ℓ₂ norm. O(d) is the
manifold of orthogonal matrices U such that U^⊤ U = 1. The matrix commutator [A, B] is defined
by [A, B] = AB − BA. We use ⊗ for the Kronecker product. For a matrix A, we denote by λ_i(A) its
i-th eigenvalue, λ_min(A) its smallest eigenvalue, κ(A) its condition number, vec(A) its columnwise
vectorization, and low(A) and up(A) its strictly lower triangular and strictly upper triangular parts,
respectively. Low is a binary diagonal matrix defined by vec(low(A)) = Low vec(A). Skew is a
skew-symmetric projector defined by Skew vec(A) = vec(A − A^⊤). P_low is a d(d−1)/2 × d²
binary matrix with orthogonal rows, which projects to the subspace of vectorized strictly lower
triangular matrices, such that P_low P_low^⊤ = 1 and P_low^⊤ P_low = Low. For example, for d = 3, one
has Low = diag([0, 1, 1, 0, 0, 1, 0, 0, 0]) and

P_low = [ 0 1 0 0 0 0 0 0 0
          0 0 1 0 0 0 0 0 0
          0 0 0 0 0 1 0 0 0 ].
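The operators Low, Skew, and P_low are all sparse 0/±1 matrices and can be constructed mechanically. The following sketch of ours builds them for a given d and checks the identities stated above on random input:

```python
import numpy as np

def low_mask(d):
    # 1.0 on the strictly lower triangular entries, 0.0 elsewhere
    return np.array([[1.0 if i > j else 0.0 for j in range(d)] for i in range(d)])

def make_operators(d):
    mask = low_mask(d).flatten(order="F")     # columnwise vectorization (vec)
    Low = np.diag(mask)                       # vec(low(A)) = Low vec(A)
    P_low = np.eye(d * d)[mask == 1.0, :]     # keeps the strictly-lower entries
    Skew = np.zeros((d * d, d * d))           # Skew vec(A) = vec(A - A^T)
    for i in range(d):
        for j in range(d):
            Skew[j * d + i, j * d + i] += 1.0  # picks A_ij
            Skew[j * d + i, i * d + j] -= 1.0  # subtracts A_ji
    return Low, P_low, Skew

d = 3
Low, P_low, Skew = make_operators(d)
A = np.random.randn(d, d)
vecA = A.flatten(order="F")
assert np.allclose(Low @ vecA, np.tril(A, -1).flatten(order="F"))
assert np.allclose(P_low @ P_low.T, np.eye(d * (d - 1) // 2))
assert np.allclose(P_low.T @ P_low, Low)
assert np.allclose(Skew @ vecA, (A - A.T).flatten(order="F"))
```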
2 Perturbation of joint triangularizers
The objective function (4) is continuous in the parameter σ. This implies that, for σ small enough,
the approximate joint triangularizers of the observed matrices M̂_n ∈ M_σ can be expected to be
perturbations of the exact triangularizers of the ground-truth matrices M_n ∈ M₀. To formalize this,
we express each feasible triangularizer U in (4) as a function of some exact triangularizer U₀ of the
ground-truth matrices, as follows:

U = U₀ e^{αX},   where   X = −X^⊤,   ‖X‖ = 1,   α > 0,   (5)

where e denotes the matrix exponential and X is a skew-symmetric matrix. Such an expansion holds for
any pair U, U₀ of orthogonal matrices with det(U) = det(U₀) (see, e.g., [2]). The scalar α in (5) can
be interpreted as the "distance" between the matrices U and U₀. Our main result is an upper bound on
this distance:
Theorem 1. Let M₀ and M_σ be the sets of matrices defined in (1) and (2), respectively. Let U be a
feasible solution of the optimization problem (4), with corresponding value L(U). Then there exists
an orthogonal matrix U₀ that is an exact joint triangularizer of M₀, such that U can be expressed as
a perturbation of U₀ according to (5), with α obeying

α ≤ (√(L(U)) + σ) / √(λ_min(τ̂)) + O(α²),   (6)

where

τ̂ = (1/2N) Σ_{n=1}^N T̂_n^⊤ T̂_n,   T̂_n = P_low (1 ⊗ (U^⊤ M̂_n U) − (U^⊤ M̂_n^⊤ U) ⊗ 1) Skew P_low^⊤.   (7)
Proof. Let U₀ be the exact joint triangularizer of M₀ that is the nearest to U with det U = det U₀.
Then U = U₀ e^{αX} for some unit-norm skew-symmetric matrix X and scalar α > 0. Using the
expansion e^{αX} = I + αX + O(α²) and the fact that X is skew-symmetric, we can write, for any
n = 1, ..., N,

U^⊤ M̂_n U = U₀^⊤ M̂_n U₀ + α [U₀^⊤ M̂_n U₀, X] + O(α²),   (8)

where [·, ·] denotes the matrix commutator. Applying the low(·) operator and using the facts that
α U₀^⊤ M̂_n U₀ = α U^⊤ M̂_n U + O(α²) and low(U₀^⊤ M_n U₀) = 0, for any n = 1, ..., N, we can write

α low([U^⊤ M̂_n U, X]) = low(U^⊤ M̂_n U) − σ low(U₀^⊤ R_n U₀) + O(α²).   (9)

Stacking (9) over n, then taking the Frobenius norm, and applying the triangle inequality together with
the fact that ‖low(U₀^⊤ R_n U₀)‖ ≤ ‖U₀^⊤ R_n U₀‖ = ‖R_n‖ ≤ 1 for all n = 1, ..., N, we get

α ( Σ_{n=1}^N ‖low([U^⊤ M̂_n U, X])‖² )^{1/2} ≤ √(N L(U)) + σ √N + O(α²),   (10)

where we used the definition of L(U) from (4). The rest of the proof involves computing a lower
bound on the left-hand side of (10) that holds for all X. Since ‖low(A)‖ = ‖P_low vec(A)‖, we can
rewrite the argument of each norm on the left-hand side of (10) as

low([U^⊤ M̂_n U, X]) = P_low vec([U^⊤ M̂_n U, X])   (11)
                     = P_low (1 ⊗ (U^⊤ M̂_n U) − (U^⊤ M̂_n^⊤ U) ⊗ 1) vec(X),   (12)

and, due to the skew-symmetry of X, we can write

vec(X) = vec(low(X) + up(X)) = vec(low(X) − low(X)^⊤)   (13)
       = Skew Low vec(X) = Skew P_low^⊤ P_low vec(X).   (14)

Hence, for all n = 1, ..., N, we can write ‖low([U^⊤ M̂_n U, X])‖² = ‖T̂_n x‖² = x^⊤ T̂_n^⊤ T̂_n x, where
x = P_low vec(X) and T̂_n is defined in (7). The inequality in (6) then follows by using the inequality
x^⊤ A x ≥ ‖x‖² λ_min(A), which holds for any symmetric matrix A, and noting that ‖x‖² = 1/2 (since
x contains the lower triangular part of X and ‖X‖² = 1).
For general M_σ, an analytical expression of λ_min(τ̂) in (6) is not available. However, it is straightforward
to compute λ_min(τ̂) numerically since all quantities in (7) are observable. Moreover, it
is possible to show (see Theorem 2) that in the limit σ → 0 and under certain conditions on the
ground-truth matrices in M₀, the operator τ = lim_{σ→0} τ̂ is nonsingular, i.e., λ_min(τ) > 0. Since
both τ̂ and L are continuous in σ for σ ≥ 0, the boundedness of the right-hand side of (6) is
guaranteed, for σ small enough, by eigenvalue perturbation theorems.
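To make the "straightforward to compute numerically" remark concrete, the following sketch assembles τ̂ from (7) and evaluates the first-order bound (6). It reuses make_operators and L from the earlier sketches and is our illustration, not the authors' code:

```python
import numpy as np

def a_posteriori_bound(U, noisy, sigma):
    # Evaluates (sqrt(L(U)) + sigma) / sqrt(lambda_min(tau_hat)), eqs. (6)-(7).
    d = U.shape[0]
    _, P_low, Skew = make_operators(d)
    I = np.eye(d)
    m = d * (d - 1) // 2
    tau = np.zeros((m, m))
    for M in noisy:
        A = U.T @ M @ U
        # vec([A, X]) = (I kron A - A^T kron I) vec(X) for columnwise vec
        T = P_low @ (np.kron(I, A) - np.kron(A.T, I)) @ Skew @ P_low.T
        tau += T.T @ T
    tau /= 2 * len(noisy)
    lam_min = np.linalg.eigvalsh(tau).min()   # tau is symmetric
    return (np.sqrt(L(U, noisy)) + sigma) / np.sqrt(lam_min)
```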
Theorem 2. The operator τ̂ defined in (7) obeys

lim_{σ→0} √(λ_min(τ̂)) ≥ √γ / κ(V)²,   (15)

where γ = min_{i>j} (1/2N) Σ_{n=1}^N (λ_i(M_n) − λ_j(M_n))², and the matrix V is defined in (1).

The proof is given in the Appendix. The quantity γ can be interpreted as a "joint eigengap" of M₀ (see
also [17] for a similar definition in the context of joint diagonalization by congruence). Theorem 2
implies that lim_{σ→0} λ_min(τ̂) > 0 if γ > 0, and the latter is guaranteed under the following condition:

Condition 1. For every i ≠ j, i, j = 1, ..., d, there exists at least one n ∈ {1, ..., N} such that

λ_i(M_n) ≠ λ_j(M_n),   where   M_n ∈ M₀.   (16)

3 Experiments
To assess the tightness of the inequality in (6), we created a set of synthetic problems in which
the ground truth is known, and we evaluated the bounds obtained from (6) against the true values.
Each problem involved the approximate triangularization of a set of randomly generated nearly jointly
diagonalizable matrices of the form M̂_n = V Λ_n V⁻¹ + σ R_n, with Λ_n diagonal and ‖R_n‖ = 1, for
n = 1, ..., N. For each set M_σ = {M̂_n}_{n=1}^N, two approximate joint triangularizers were computed
by optimizing (4) using two different iterative algorithms, the Gauss-Newton algorithm [8] and the
Jacobi algorithm [13] (our implementation), initialized with the same random orthogonal matrix. The
obtained solutions U (which may not be the global optima) were then used to compute the empirical
bound α from (6), as well as the actual distance parameter α_true = ‖log(U^⊤ U₀)‖, with U₀ being the
global optimum of the unperturbed problem (σ = 0) that is closest to U and has the same determinant.
Locating the closest U₀ to the given U required checking all 2^d d! possible exact triangularizers of M₀,
thus we restricted our empirical evaluation to the case d = 5. We considered two settings, N = 5
and N = 100, and several different noise levels obtained by varying the perturbation parameter σ.

The first two graphs in Figure 1 show the value of the noise level σ against the values of α_true = α_true(U)
and the corresponding empirical bounds α = α(U) from (6), where U are the solutions found by the
Gauss-Newton algorithm. (Very similar results were obtained using the Jacobi algorithm.) All values
are obtained by averaging over 10 equivalent experiments, and the errorbars show the corresponding
standard deviations. For the same set of solutions U, the third graph in Figure 1 shows the ratios α/α_true.

The experiments show that, at least for small N, the bound (6) produces a reasonable estimate of
the true perturbation parameter α_true. However, our bound does not fully capture the concentration
that is expected (and observed in practice) for large sets of nearly jointly decomposable matrices
(note, for instance, the average value of α_true in Figure 1, for N = 5 vs N = 100). This is most likely
due to the introduced approximation ‖low(U₀^⊤ R_n U₀)‖ ≤ 1 and the use of the triangle inequality in
(10) (see proof of Theorem 1), which are needed to separate the observable terms U^⊤ M̂_n U from the
unobservable terms U₀^⊤ R_n U₀ in the right-hand side of (9). Extra assumptions on the distribution of
the random matrices R_n can possibly allow obtaining tighter bounds in a probabilistic setting.
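The evaluation step α_true = ‖log(U^⊤ U₀)‖ used above reduces to a matrix logarithm; a small sketch of ours, assuming SciPy's logm is available:

```python
import numpy as np
from scipy.linalg import logm

def alpha_true(U, U0):
    # alpha_true = ||log(U^T U_0)||_F; this equals alpha in (5) since ||X|| = 1.
    return float(np.linalg.norm(logm(U.T @ U0)))
```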
[Figure 1: The empirical bound α from (6) vs the true distance α_true, on synthetic experiments.
Three panels: log(α) vs. log(σ) for N = 5 and N = 100 (showing α and α_true with errorbars), and
the ratio α/α_true vs. log(σ).]

4 Conclusions

We addressed a joint matrix triangularization problem that involves finding an orthogonal matrix that
approximately triangularizes a set of noise-perturbed jointly diagonalizable matrices. The setting can
have many applications in statistics and signal processing, in particular in problems that reduce to a
nonsymmetric CP tensor decomposition [4, 8, 20]. The joint matrix triangularization problem can be
cast as a nonconvex optimization problem over the manifold of orthogonal matrices, and it can be
solved numerically but with no success guarantees. We have derived a posteriori upper bounds on
the distance between any approximate triangularizer (obtained by any algorithm) and the (unknown)
solution of the underlying unperturbed problem. The bounds depend only on empirical quantities
and hence they can be used to assess the quality of any feasible solution, even when the ground truth
is not known. We established that, under certain conditions, the bounds are well defined when the
noise is small. Synthetic experiments suggest that the obtained bounds are tight enough to be useful
in practice.
In future work, we want to apply our analysis to related problems, such as nonnegative tensor
decomposition and simultaneous generalized Schur decomposition [11], and to empirically validate
the obtained bounds in machine learning applications [4].
A Proof of Theorem 2
The proof consists of two steps. The first step consists of showing that in the limit σ → 0 the operator
τ̂ defined in (7) tends to a simpler operator, τ, which depends on ground-truth quantities only. The
second step is to derive a lower bound on the smallest eigenvalue of the operator τ.

Let τ be defined by

τ = (1/2N) Σ_{n=1}^N T_n^⊤ T_n,   T_n = P_low (1 ⊗ (U₀^⊤ M_n U₀) − (U₀^⊤ M_n^⊤ U₀) ⊗ 1) P_low^⊤,   (17)

where M_n ∈ M₀, and U₀ is the exact joint triangularizer of M₀ that is closest to, and has the same
determinant as, U, the approximate joint triangularizer that is used to define τ̂. Proving that τ̂ → τ
as σ → 0 is equivalent to showing that

T̂_n |_{σ=0} = P_low (1 ⊗ (U^⊤ M̂_n U) − (U^⊤ M̂_n^⊤ U) ⊗ 1) Skew P_low^⊤ |_{σ=0} = T_n.   (18)

Since M̂_n → M_n for all n = 1, ..., N when σ → 0, we need to prove that U → U₀ and
that we can remove the Skew operator on the right.

We first show that U = U₀ e^{αX} → U₀, that is, α → 0 as σ → 0. Assume that the descent algorithm
used to obtain U is initialized with U_init obtained from the Schur decomposition of one of the matrices
in M_σ. Let U₀ be the exact triangularizer of M₀ closest to U_init and U_opt be the local optimum of the joint
triangularization objective closest to U_init. Then, as σ → 0 one has U_opt → U₀, by continuity of the
objective in σ, and also U_init → U₀ due to the perturbation properties of the Schur decomposition.
This implies U → U₀, and hence α → 0.

Then, it is easy to prove that P_low (1 ⊗ (U₀^⊤ M_n U₀) − (U₀^⊤ M_n^⊤ U₀) ⊗ 1) Skew P_low^⊤ = P_low (1 ⊗
(U₀^⊤ M_n U₀) − (U₀^⊤ M_n^⊤ U₀) ⊗ 1) P_low^⊤ by considering the action of the two operators on x =
P_low vec(X), with X = −X^⊤. One has

P_low (1 ⊗ (U₀^⊤ M_n U₀) − (U₀^⊤ M_n^⊤ U₀) ⊗ 1) Skew P_low^⊤ x = P_low vec(low([U₀^⊤ M_n U₀, X]))
  = P_low vec(low([U₀^⊤ M_n U₀, low(X)])) = P_low (1 ⊗ (U₀^⊤ M_n U₀) − (U₀^⊤ M_n^⊤ U₀) ⊗ 1) P_low^⊤ x,   (19)

where in the second equality we used the fact that U₀^⊤ M_n U₀ is upper triangular. This shows that τ̂ → τ
as σ → 0.
The second part of the proof consists of bounding the smallest eigenvalue of τ. We will make use of
the following identity, which holds when A and C are upper triangular:

low(ABC) = low(A low(B) C),   (20)

from which we get the following identity, again for upper triangular A and C:

Low vec(ABC) = Low (C^⊤ ⊗ A) Low vec(B).   (21)

In particular, one has P_low^⊤ T_n x = Low vec([U₀^⊤ M_n U₀, low(X)]) = Low vec(U₀^⊤ M_n U₀ low(X) −
low(X) U₀^⊤ M_n U₀), and it can be shown (see the derivations (22)–(28) at the end of this appendix) that

Low vec(V̄ Λ_n V̄⁻¹ low(X)) = Low (V̄⁻ᵀ ⊗ V̄) Low (I ⊗ Λ_n) Low (V̄^⊤ ⊗ V̄⁻¹) Low vec(X)   (29)

and

Low vec(low(X) V̄ Λ_n V̄⁻¹) = Low (V̄⁻ᵀ ⊗ V̄) Low (Λ_n ⊗ I) Low (V̄^⊤ ⊗ V̄⁻¹) Low vec(X),   (30)

where V̄ and V̄⁻¹ are upper triangular matrices defined by U₀^⊤ M_n U₀ = V̄ Λ_n V̄⁻¹. Now, since
V̄ = U₀^⊤ V, where V is defined via the spectral decomposition M_n = V Λ_n V⁻¹, we can rewrite the
operator T_n as

T_n = P_low (U₀^⊤ V⁻ᵀ ⊗ U₀^⊤ V) Low (1 ⊗ Λ_n − Λ_n ⊗ 1) Low (V^⊤ U₀ ⊗ V⁻¹ U₀) P_low^⊤,   (31)

and use the following inequality for the smallest eigenvalue of τ = (1/2N) Σ_{n=1}^N T_n^⊤ T_n:

λ_min(τ) ≥ (1/2N) λ_min(A) λ_min(B) λ_min(C),   (32)

where

A = P_low (U₀^⊤ V ⊗ U₀^⊤ V⁻ᵀ) P_low^⊤,   (33)
B = P_low ( Σ_{n=1}^N (1 ⊗ Λ_n − Λ_n ⊗ 1)² ) P_low^⊤,   (34)
C = P_low (V⁻¹ U₀ ⊗ V^⊤ U₀) P_low^⊤.   (35)

Now, it is easy to show that λ_min(A) = λ_min(C) ≥ 1/κ(V)², since the d(d−1)/2 × d(d−1)/2 matrices
A and C are obtained by deleting certain rows and columns of U₀^⊤ V ⊗ U₀^⊤ V⁻ᵀ and V⁻¹ U₀ ⊗ V^⊤ U₀,
respectively. The matrix B is a diagonal matrix with entries given by

[B]_kk = Σ_{n=1}^N ([Λ_n]_ii − [Λ_n]_jj)²,   k = j − i + Σ_{a=1}^{i−1} (d − a),   (36)

with 0 < i < j and j = 1, ..., d. This implies λ_min(B) = min_{i<j} Σ_{n=1}^N (λ_i(M_n) − λ_j(M_n))², and hence

lim_{σ→0} λ_min(τ̂) ≥ γ / κ(V)⁴,   (37)

where γ = min_{i>j} (1/2N) Σ_{n=1}^N (λ_i(M_n) − λ_j(M_n))² is the "joint eigengap" of the ground-truth matrices
M_n ∈ M₀.

For any matrix Y one has

Low vec(V̄ Λ_n V̄⁻¹ Y) = Low vec(V̄ Λ_n V̄⁻¹ Y V̄ V̄⁻¹)   (22)
                      = Low (V̄⁻ᵀ ⊗ V̄) Low vec(Λ_n V̄⁻¹ Y V̄)   (23)
                      = Low (V̄⁻ᵀ ⊗ V̄) Low (I ⊗ Λ_n) Low vec(V̄⁻¹ Y V̄)   (24)
                      = Low (V̄⁻ᵀ ⊗ V̄) Low (I ⊗ Λ_n) Low (V̄^⊤ ⊗ V̄⁻¹) Low vec(Y),   (25)

and similarly

Low vec(Y V̄ Λ_n V̄⁻¹) = Low (V̄⁻ᵀ ⊗ V̄) Low (Λ_n ⊗ I) Low (V̄^⊤ ⊗ V̄⁻¹) Low vec(Y).   (26)–(28)
References
[1] K. Abed-Meraim and Y. Hua. A least-squares approach to joint Schur decomposition. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on, volume 4, pages 2541–2544. IEEE, 1998.
[2] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2009.
[3] B. Afsari. Sensitivity analysis for the problem of matrix joint diagonalization. SIAM Journal on Matrix Analysis and Applications, 30(3):1148–1171, 2008.
[4] A. Anandkumar, R. Ge, D. Hsu, S. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773–2832, 2014.
[5] B. Balle, A. Quattoni, and X. Carreras. A spectral learning algorithm for finite state transducers. In Machine Learning and Knowledge Discovery in Databases, pages 156–171. Springer, 2011.
[6] J.-F. Cardoso. Perturbation of joint diagonalizers. Telecom Paris, Signal Department, Technical Report 94D023, 1994.
[7] J.-F. Cardoso and A. Souloumiac. Jacobi angles for simultaneous diagonalization. SIAM Journal on Matrix Analysis and Applications, 17(1):161–164, 1996.
[8] N. Colombo and N. Vlassis. Tensor decomposition via joint matrix Schur decomposition. In Proc. 33rd International Conference on Machine Learning, 2016.
[9] R. M. Corless, P. M. Gianni, and B. M. Trager. A reordered Schur factorization method for zero-dimensional polynomial systems with multiple roots. In Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pages 133–140. ACM, 1997.
[10] L. De Lathauwer. A link between the canonical decomposition in multilinear algebra and simultaneous matrix diagonalization. SIAM Journal on Matrix Analysis and Applications, 28(3):642–666, 2006.
[11] L. De Lathauwer, B. De Moor, and J. Vandewalle. Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition. SIAM Journal on Matrix Analysis and Applications, 26(2):295–327, 2004.
[12] T. Fu, S. Jin, and X. Gao. Balanced simultaneous Schur decomposition for joint eigenvalue estimation. In Communications, Circuits and Systems Proceedings, 2006 International Conference on, volume 1, pages 356–360. IEEE, 2006.
[13] M. Haardt and J. A. Nossek. Simultaneous Schur decomposition of several nonsymmetric matrices to achieve automatic pairing in multidimensional harmonic retrieval problems. Signal Processing, IEEE Transactions on, 46(1):161–169, 1998.
[14] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 2nd edition, 2012.
[15] R. Iferroudjene, K. A. Meraim, and A. Belouchrani. A new Jacobi-like method for joint diagonalization of arbitrary non-defective matrices. Applied Mathematics and Computation, 211(2):363–373, 2009.
[16] M. Konstantinov, P. H. Petkov, and N. Christov. Nonlocal perturbation analysis of the Schur system of a matrix. SIAM Journal on Matrix Analysis and Applications, 15(2):383–392, 1994.
[17] V. Kuleshov, A. Chaganty, and P. Liang. Tensor factorization via matrix factorization. In 18th International Conference on Artificial Intelligence and Statistics (AISTATS), 2015.
[18] X. Luciani and L. Albera. Joint eigenvalue decomposition using polar matrix factorization. In International Conference on Latent Variable Analysis and Signal Separation, pages 555–562. Springer, 2010.
[19] J.-S. Pang. A posteriori error bounds for the linearly-constrained variational inequality problem. Mathematics of Operations Research, 12(3):474–484, 1987.
[20] A. Podosinnikova, F. Bach, and S. Lacoste-Julien. Beyond CCA: Moment matching for multi-view models. In Proc. 33rd International Conference on Machine Learning, 2016.
[21] S. Prudhomme, J. T. Oden, T. Westermann, J. Bass, and M. E. Botkin. Practical methods for a posteriori error estimation in engineering applications. International Journal for Numerical Methods in Engineering, 56(8):1193–1224, 2003.
[22] S. H. Sardouie, L. Albera, M. B. Shamsollahi, and I. Merlet. Canonical polyadic decomposition of complex-valued multi-way arrays based on simultaneous Schur decomposition. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 4178–4182. IEEE, 2013.
[23] A. Souloumiac. Nonorthogonal joint diagonalization by combining Givens and hyperbolic rotations. IEEE Transactions on Signal Processing, 57(6):2222–2231, 2009.
5,997 | 6,425 | Structured Sparse Regression via Greedy
Hard-thresholding
Prateek Jain
Microsoft Research India
Nikhil Rao
Technicolor
Inderjit Dhillon
UT Austin
Abstract
Several learning applications require solving high-dimensional regression problems
where the relevant features belong to a small number of (overlapping) groups. For
very large datasets and under standard sparsity constraints, hard thresholding
methods have proven to be extremely efficient, but such methods require NP hard
projections when dealing with overlapping groups. In this paper, we show that
such NP-hard projections can not only be avoided by appealing to submodular
optimization, but such methods come with strong theoretical guarantees even
in the presence of poorly conditioned data (i.e. say when two features have
correlation 0.99), which existing analyses cannot handle. These methods exhibit
an interesting computation-accuracy trade-off and can be extended to significantly
harder problems such as sparse overlapping groups. Experiments on both real and
synthetic data validate our claims and demonstrate that the proposed methods are
orders of magnitude faster than other greedy and convex relaxation techniques for
learning with group-structured sparsity.
1 Introduction
High dimensional problems where the regressor belongs to a small number of groups play a critical
role in many machine learning and signal processing applications, such as computational biology and
multitask learning. In most of these cases, the groups overlap, i.e., the same feature can belong to
multiple groups. For example, gene pathways overlap in computational biology applications, and
parent-child pairs of wavelet transform coefficients overlap in signal processing applications.
The existing state-of-the-art methods for solving such group sparsity structured regression problems
can be categorized into two broad classes: a) convex relaxation based methods, and b) iterative hard
thresholding (IHT) or greedy methods. In practice, IHT methods tend to be significantly more
scalable than the (group-)lasso style methods that solve a convex program. But, these methods
require a certain projection operator which in general is NP-hard to compute and often certain simple
heuristics are used with relatively weak theoretical guarantees. Moreover, existing guarantees for
both classes of methods require relatively restrictive assumptions on the data, like Restricted Isometry
Property or variants thereof [2, 7, 16], that are unlikely to hold in most common applications. In fact,
even under such settings, the group sparsity based convex programs offer at most polylogarithmic
gains over standard sparsity based methods [16].
Concretely, let us consider the following linear model:

y = X w* + ε,   (1)

where ε ∼ N(0, σ²I), X ∈ R^{n×p}, each row of X is sampled i.i.d. s.t. x_i ∼ N(0, Σ), 1 ≤ i ≤ n,
and w* is a k*-group sparse vector, i.e., w* can be expressed in terms of only k* groups G_j ⊆ [p].
The existing analyses for both convex as well as hard thresholding based methods require κ =
σ₁/σ_p ≤ c, where c is an absolute constant (like say 3) and σ_i is the i-th largest eigenvalue of Σ.
This is a significantly restrictive assumption, as it requires all the features to be nearly independent of
each other. For example, if features 1 and 2 have correlation more than say .99, then the restriction on
κ required by the existing results does not hold.

Moreover, in this setting (i.e., when κ = O(1)), the number of samples required to exactly recover
w* (with ε = 0) is given by n = Ω(s + k* log M) [16], where s is the maximum support size of a
union of k* groups and M is the number of groups. In contrast, if one were to directly use sparse
regression techniques (by ignoring group sparsity altogether), then the number of samples is given by
n = Ω(s log p). Hence, even in the restricted setting of κ = O(1), group-sparse regression improves
upon the standard sparse regression only by logarithmic factors.
Greedy, Iterative Hard Thresholding (IHT) methods have been considered for group sparse regression
problems, but they involve NP-hard projections onto the constraint set [3]. While this can be
circumvented using approximate operations, the guarantees they provide are along the same lines as
the ones that exist for convex methods.
In this paper, we show that IHT schemes with approximate projections for the group sparsity
problem yield much stronger guarantees. Specifically, our result holds for arbitrarily large κ and
arbitrary group structures. In particular, using IHT with greedy projections, we show that n =
Ω((s log(1/ε) + κ² k* log M) · log(1/ε)) samples suffice to recover an ε-approximation to w* when σ = 0.
On the other hand, IHT for standard sparse regression [10] requires n = Ω(κ² s log p). Moreover,
for general noise variance σ², our method recovers ŵ s.t. ‖ŵ − w*‖ ≤ 2ε + σ·κ·√((s + κ² k* log M)/n).
On the other hand, the existing state-of-the-art results for IHT for group sparsity [4] guarantee
‖ŵ − w*‖ ≤ σ·√(s + k* log M) for κ ≤ 3, i.e., ŵ is not a consistent estimator of w* even for small
condition number κ.
Our analysis is based on an extension of the sparse regression result by [10] that requires exact
projections. However, a critical challenge in the case of overlapping groups is that the projection onto
the set of group-sparse vectors is NP-hard in general. To alleviate this issue, we use the connection
between submodularity and overlapping group projections, for which a greedy selection based projection
is at least good enough. The main contribution of this work is to carefully use the greedy projection
based procedure along with hard thresholding iterates to guarantee the convergence to the global
optima as long as enough i.i.d. data points are generated from model (1).
Moreover, the simplicity of our hard thresholding operator allows us to easily extend it to more
complicated sparsity structures. In particular, we show that the methods we propose can be generalized
to the sparse overlapping group setting, and to hierarchies of (overlapping) groups.
We also provide extensive experiments on both real and synthetic datasets that show that our methods
are not only faster than several other approaches, but are also accurate despite performing approximate
projections. Indeed, even for poorly-conditioned data, IHT methods are an order of magnitude faster
than other greedy and convex methods. We also observe a similar phenomenon when dealing with
sparse overlapping groups.
1.1
Related Work
Several papers, notably [5] and references therein, have studied convergence properties of IHT
methods for sparse signal recovery under standard RIP conditions. [10] generalized the method to
settings where RIP does not hold, and also to the low rank matrix recovery setting. [21] used a similar
analysis to obtain results for nonlinear models. However, these techniques apply only to cases where
exact projections can be performed onto the constraint set. Forward greedy selection schemes for
sparse [9] and group sparse [18] constrained programs have been considered previously, where a
single group is added at each iteration. The authors in [2] propose a variant of CoSaMP to solve
problems that are of interest to us, and again, these methods require exact projections.
Several works have studied approximate projections in the context of IHT [17, 6, 12]. However, these
results require that the data satisfies RIP-style conditions which typically do not hold in real-world
regression problems. Moreover, these analyses do not guarantee a consistent estimate of the optimal
regressor when the measurements have zero-mean random noise. In contrast, we provide results
under a more general RSC/RSS condition, which is weaker [20], and provide crisp rates for the error
bounds when the noise in measurements is random.
2 Group Iterative Hard Thresholding for Overlapping Groups
In this section, we formally set up the group sparsity constrained optimization problem, and then
briefly present the IHT algorithm for the same. Suppose we are given a set of M groups that can
arbitrarily overlap, G = {G_1, ..., G_M}, where G_i ⊆ [p]. Also, let ∪_{i=1}^M G_i = {1, 2, ..., p}. We
let ‖w‖ denote the Euclidean norm of w, and supp(w) the support of w. For any vector
w ∈ R^p, [8] defined the overlapping group norm as

‖w‖_G := inf Σ_{i=1}^M ‖a_{G_i}‖   s.t.   Σ_{i=1}^M a_{G_i} = w,  supp(a_{G_i}) ⊆ G_i.   (2)
We also introduce the notion of "group-support" of a vector and its group-ℓ₀ pseudo-norm:

G-supp(w) := {i s.t. ‖a_{G_i}‖ > 0},   ‖w‖_{G0} := inf Σ_{i=1}^M 1{‖a_{G_i}‖ > 0},   (3)

where the a_{G_i} satisfy the constraints of (2). 1{·} is the indicator function, taking the value 1 if the
condition is satisfied, and 0 otherwise. For a set of groups G, supp(G) = {G_i, i ∈ G}. Similarly,
G-supp(S) = G-supp(w_S).
Suppose we are given a function f : R^p → R and M groups G = {G_1, ..., G_M}. The goal is to
solve the following group sparsity structured problem (GS-Opt):

GS-Opt:   min_w f(w)   s.t.   ‖w‖_{G0} ≤ k   (4)
f can be thought of as a loss function over the training data, for instance, logistic or least squares loss.
In the high dimensional setting, problems of the form (4) are somewhat ill posed and are NP-hard
in general. Hence, additional assumptions on the loss function f are warranted to guarantee a
reasonable solution. Here, we focus on problems where f satisfies the restricted strong convexity and
smoothness conditions:

Definition 2.1 (RSC/RSS). The function f : R^p → R satisfies restricted strong convexity (RSC)
and restricted strong smoothness (RSS) of order k if the following holds:

α_k I ⪯ H(w) ⪯ L_k I,

where H(w) is the Hessian of f at any w ∈ R^p s.t. ‖w‖_{G0} ≤ k.
Note that the goal of our algorithms/analysis is to solve the problem for arbitrary α_k > 0 and
L_k < ∞. In contrast, adapting existing IHT results to this setting leads to results that only allow L_k/α_k
less than a constant (like say 3).

We are especially interested in the linear model described in (1), and in recovering w* consistently
(i.e., recovering w* exactly as n → ∞). To this end, we look to solve the following (non-convex)
constrained least squares problem:

GS-LS:   ŵ = arg min_w f(w) := (1/2n) ‖y − Xw‖²   s.t.   ‖w‖_{G0} ≤ k,   (5)

with k ≥ k* being a positive, user-defined integer¹. In this paper, we propose to solve (5) using an
Iterative Hard Thresholding (IHT) scheme. IHT methods iteratively take a gradient descent step, and
then project the resulting vector g onto the (non-convex) constraint set of group sparse vectors, i.e.,

w⁺ = P_k^G(g) = arg min_w ‖w − g‖²   s.t.   ‖w‖_{G0} ≤ k.   (6)

Computing the gradient is easy and hence the complexity of the overall algorithm depends heavily on
the complexity of performing the aforementioned projection. Algorithm 1 details the IHT procedure
for the group sparsity problem (4). Throughout the paper we consider the same high-level procedure,
but consider different projection operators P̂_k^G(g) for different settings of the problem.

¹ typically chosen via cross-validation
Algorithm 1 IHT for Group-sparsity
1: Input: data y, X, parameter k, iterations T, step size η
2: Initialize: t = 0, w_0 ∈ R^p a k-group sparse vector
3: for t = 1, 2, ..., T do
4:   g_t = w_t − η ∇f(w_t)
5:   w_t = P̂_k^G(g_t), where P̂_k^G(g_t) performs (approximate) projections
6: end for
7: Output: w_T

Algorithm 2 Greedy Projection
Require: g ∈ R^p, parameter k̂, groups G
1: û = 0, v = g, Ĝ = ∅
2: for t = 1, 2, ..., k̂ do
3:   Find G* = arg max_{G ∈ G \ Ĝ} ‖v_G‖
4:   Ĝ = Ĝ ∪ G*
5:   û = û + v_{G*}
6:   v = v − v_{G*}
7: end for
8: Output: û := P̂_k̂^G(g), Ĝ = supp(û)

2.1 Submodular Optimization for General G
Suppose we are given a vector g ∈ R^p which needs to be projected onto the constraint set ‖u‖_{G0} ≤ k
(see (6)). Solving (6) is NP-hard when G contains arbitrary overlapping groups. To overcome
this, P_k^G(·) can be replaced by an approximate operator P̂_k^G(·) (step 5 of Algorithm 1). Indeed,
the procedure for performing projections reduces to a submodular optimization problem [3], for
which the standard greedy procedure can be used (Algorithm 2). For completeness, we detail this in
Appendix A, where we also prove the following:

Lemma 2.2. Given an arbitrary vector g ∈ R^p, suppose we obtain û, Ĝ as the output of Algorithm
2 with input g and target group sparsity k̂. Let u* = P_k^G(g) be as defined in (6). Then

‖û − g‖² ≤ e^{−k̂/k} ‖(g)_{supp(u*)}‖² + ‖u* − g‖²,

where e is the base of the natural logarithm.
Note that the term with the exponent in Lemma 2.2 approaches 0 as k̂ increases. Increasing k̂ should
imply more samples for recovery of w*. Hence, this lemma hints at the possibility of trading off
sample complexity for better accuracy, despite the projections being approximate. See Section 3 for
more details. Algorithm 2 can be applied to any G, and is extremely efficient.
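A compact implementation of the greedy projection (Algorithm 2), together with the IHT loop of Algorithm 1 specialized to the least squares loss, may help fix ideas. This is an illustrative sketch of ours, not the authors' released code; groups are plain index lists, and the default step size heuristic is our assumption:

```python
import numpy as np

def greedy_group_projection(g, groups, k_hat):
    # Algorithm 2: greedily pick the group carrying the largest residual energy.
    v, u, chosen = g.copy(), np.zeros_like(g), []
    for _ in range(k_hat):
        best = max((i for i in range(len(groups)) if i not in chosen),
                   key=lambda i: np.linalg.norm(v[groups[i]]))
        chosen.append(best)
        G = groups[best]
        u[G] += v[G]          # u = u + v_{G*}
        v[G] = 0.0            # v = v - v_{G*}
    return u, chosen

def iht(X, y, groups, k_hat, steps=100, eta=None):
    # Algorithm 1 for f(w) = (1/2n)||y - Xw||^2; eta ~ 1/L is a heuristic choice.
    n, p = X.shape
    eta = n / np.linalg.norm(X, 2) ** 2 if eta is None else eta
    w = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w, _ = greedy_group_projection(w - eta * grad, groups, k_hat)
    return w
```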
2.2 Incorporating Full Corrections

IHT methods can be improved by the incorporation of "corrections" after each projection step. This
merely entails adding the following step in Algorithm 1 after step 5:

w_t = arg min_{w̃} f(w̃)   s.t.   supp(w̃) = supp(P̂_k^G(g_t))
When f(·) is the least squares loss, as we consider, this step can be solved efficiently using Cholesky
decompositions via the backslash operator in MATLAB. We will refer to this procedure as IHT-FC.
Fully corrective methods in greedy algorithms typically yield significant improvements, both
theoretically and in practice [10].
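For the least squares loss, the fully corrective step is just a restricted least squares re-fit on the selected support; a minimal sketch of ours (np.linalg.lstsq standing in for MATLAB's backslash):

```python
import numpy as np

def fully_corrective(X, y, support):
    # Re-fit y on the columns of X indexed by the projection's support.
    w = np.zeros(X.shape[1])
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    w[support] = coef
    return w
```

In IHT-FC this would replace w_t after the projection step, with support taken as the nonzero indices of P̂_k^G(g_t).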
3 Theoretical Performance Bounds
We now provide theoretical guarantees for Algorithm 1 when applied to the overlapping group
sparsity problem (4). We then specialize the results for the linear regression model (5).
Theorem 3.1. Let w* = arg min_{w, ‖w‖_{G0} ≤ k*} f(w) and let f satisfy RSC/RSS with constants α_{k'},
L_{k'}, respectively (see Definition 2.1). Set k = 32 (L_{k'}/α_{k'})² k* log(L_{k'} ‖w*‖₂ / ε) and let k' ≥ 2k + k*.
Suppose we run Algorithm 1 with η = 1/L_{k'} and projections computed according to Algorithm 2.
Then, the following holds after t + 1 iterations:

‖w_{t+1} − w*‖₂ ≤ (1 − α_{k'}/(10 L_{k'})) · ‖w_t − w*‖₂ + γ + (α_{k'}/L_{k'}) · ε,

where γ = (2/L_{k'}) max_{S s.t. |G-supp(S)| ≤ k} ‖(∇f(w*))_S‖₂. Specifically, the output of the T =
O((L_{k'}/α_{k'}) · log(‖w*‖₂/ε))-th iteration of Algorithm 1 satisfies:

‖w_T − w*‖₂ ≤ 2ε + (10 L_{k'}/α_{k'}) · γ.
The proof uses the fact that Algorithm 2 performs approximately good projections. The result follows
from combining this with results from convex analysis (RSC/RSS) and a careful setting of parameters.
We prove this result in Appendix B.
Remarks

Theorem 3.1 shows that Algorithm 1 recovers w* up to O((L_{k'}/α_{k'}) · γ) error. If ‖arg min_w f(w)‖_{G0} ≤ k,
then γ = 0. In general our result obtains an additive error, which is weaker than what one can obtain
for a convex optimization problem. However, for typical statistical problems, we show that γ is small
and gives us nearly optimal statistical generalization error (for example, see Theorem 3.2).

Theorem 3.1 displays an interesting interplay between the desired accuracy ε and the penalty γ we
pay as a result of performing approximate projections. Specifically, as ε is made small, k becomes
large, and thus so does γ. Conversely, we can let ε be large so that the projections are coarse, but
incur a smaller penalty via the γ term. Also, since the projections are not too accurate in this case, we
can get away with fewer iterations. Thus, there is a tradeoff between estimation error ε and model
selection error γ. Also, note that the inverse dependence of k on ε is only logarithmic in nature.
We stress that our results do not hold for arbitrary approximate projection operators. Our proof
critically uses the greedy scheme (Algorithm 2), via Lemma 2.2. Also, as discussed in Section 4, the
proof easily extends to other structured sparsity sets that allow such greedy selection steps.
We obtain a similar result as [10] for the standard sparsity case, i.e., when the groups are singletons.
However, our proof is significantly simpler and allows for a significantly easier parameter setting.
3.1 Linear Regression Guarantees
We next proceed to the standard linear regression model considered in (5). To the best of our
knowledge, this is the first consistency result for overlapping group sparsity problems, especially
when the data can be arbitrarily conditioned. Recall that σ_max (σ_min) denote the maximum (minimum)
singular values of Σ, and κ := σ_max/σ_min is the condition number of Σ.
Theorem 3.2. Let the observations y follow the model in (1). Suppose w* is k*-group sparse and
let f(w) := (1/2n) ‖Xw − y‖₂². Let the number of samples satisfy:

n ≥ Ω((s + κ² · k* · log M) · log(κ/ε)),

where s = max_{w, ‖w‖_{G0} ≤ k} |supp(w)|. Then, applying Algorithm 1 with k = 8κ²k* · log(κ/ε) and
η = 1/(4σ_max) guarantees the following after T = Ω(κ · log(κ ‖w*‖/ε)) iterations (w.p. ≥ 1 − 1/n⁸):

‖w_T − w*‖ ≤ σ · κ · √((s + κ² k* log M)/n) + 2ε.
Remarks
Note that one can ignore the group sparsity constraint, and instead look to recover the (at most) s-sparse
vector w* using IHT methods for ℓ₀ optimization [10]. However, the corresponding sample
complexity is n ≥ κ² s log(p). Hence, for an ill-conditioned Σ, using group sparsity based methods
provides a significantly stronger result, especially when the groups overlap significantly.

Note that the number of samples required increases logarithmically with the accuracy ε. Theorem
3.2 thus displays an interesting phenomenon: by obtaining more samples, one can provide a smaller
recovery error while incurring a larger approximation error (since we choose more groups).

Our proof critically requires that, when restricted to group sparse vectors, the least squares objective
function f(w) = (1/2n) ‖y − Xw‖₂² is strongly convex as well as strongly smooth:
Lemma 3.3. Let X ∈ R^{n×p} be such that each x_i ∼ N(0, Σ). Let w ∈ R^p be k-group sparse over
groups G = {G_1, ..., G_M}, i.e., ‖w‖_{G0} ≤ k, and let s = max_{w, ‖w‖_{G0} ≤ k} |supp(w)|. Let the number of
samples satisfy n ≥ Ω(C (k log M + s)). Then, the following holds with probability ≥ 1 − 1/n^{10}:

(1 − 4/√C)² σ_min ‖w‖₂² ≤ (1/n) ‖Xw‖₂² ≤ (1 + 4/√C)² σ_max ‖w‖₂².
We prove Lemma 3.3 in Appendix C. Theorem 3.2 then follows by combining Lemma 3.3 with
Theorem 3.1. Note that in the least squares case, these are the Restricted Eigenvalue conditions on
the matrix X, which as explained in [20] are much weaker than traditional RIP assumptions on the
data. In particular, RIP requires almost 0 correlation between any two features, while our assumption
allows for arbitrary high correlations albeit at the cost of a larger number of samples.
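The restricted eigenvalue behavior in Lemma 3.3 is easy to probe empirically; the following sketch (with an equicorrelated Σ of our own choosing) compares the extreme eigenvalues of (1/n) X_S^⊤ X_S on a fixed support S with those of Σ_S, even when features are highly correlated:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, rho = 60, 2000, 0.9
Sigma = rho * np.ones((p, p)) + (1 - rho) * np.eye(p)   # highly correlated design
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.arange(20)                                        # support of a union of groups
H = X[:, S].T @ X[:, S] / n
emp = np.linalg.eigvalsh(H)
pop = np.linalg.eigvalsh(Sigma[np.ix_(S, S)])
print(emp.min(), pop.min(), emp.max(), pop.max())        # empirical vs population extremes
```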
3.2 IHT with Exact Projections P_k^G(·)
We now consider the setting where P_k^G(·) can be computed exactly and efficiently for any k. Examples
include the dynamic programming based method by [3] for certain group structures, or Algorithm 2
when the groups do not overlap. Since the exact projection operator can be arbitrary, our proof of
Theorem 3.1 does not apply directly in this case. However, we show that by exploiting the structure
of hard thresholding, we can still obtain a similar result:
Theorem 3.4. Let w* = arg min_{w, ‖w‖_{G0} ≤ k*} f(w). Let f satisfy RSC/RSS with constants α_{2k+k*} and
L_{2k+k*}, respectively (see Definition 2.1). Then, the following holds for the T = O((L_{k'}/α_{k'}) · log(‖w*‖₂/ε))-th
iterate of Algorithm 1 (with η = 1/L_{2k+k*}) when P̂_k^G(·) = P_k^G(·) is the exact projection:

‖w_T − w*‖₂ ≤ ε + (10 L_{k'}/α_{k'}) · γ,

where k' = 2k + k*, k = O((L_{k'}/α_{k'})² · k*), and γ = (2/L_{k'}) max_{S s.t. |G-supp(S)| ≤ k} ‖(∇f(w*))_S‖₂.

See Appendix D for a detailed proof. Note that unlike the greedy projection method (see Theorem 3.1), k
is independent of ε. Also, in the linear model, the above result also leads to a consistent estimate of w*.
4 Extension to Sparse Overlapping Groups (SoG)
The SoG model generalizes the overlapping group sparse model, allowing the selected groups
themselves to be sparse. Given positive integers k_1, k_2 and a set of groups G, IHT for SoG would
perform projections onto the following set:

C₀^{sog} := { w = Σ_{i=1}^M a_{G_i} :  ‖w‖_{G0} ≤ k_1,  ‖a_{G_i}‖₀ ≤ k_2 }.   (7)
As in the case of overlapping group lasso, projection onto (7) is NP-hard in general. Motivated by
our greedy approach in Section 2, we propose a similar method for SoG (see Algorithm 3). The
algorithm essentially greedily selects the groups that have large top-k2 elements by magnitude.
Below, we show that the IHT (Algorithm 1) combined with the greedy projection (Algorithm 3)
indeed converges to the optimal solution. Moreover, our experiments (Section 5) reveal that this
method, when combined with full corrections, yields highly accurate results significantly faster than
the state-of-the-art.
We suppose that there exists a set of supports S_{k*} such that supp(w*) ∈ S_{k*}. Then, we obtain the
following result, proved in Appendix E:

Theorem 4.1. Let w* = arg min_{w, supp(w) ∈ S_{k*}} f(w), where S_{k*} ⊆ S_k ⊆ {0, 1}^p is a fixed set
parameterized by k*. Let f satisfy RSC/RSS with constants α_k and L_k, respectively. Furthermore, assume
that there exists an approximately good projection operator for the set defined in (7) (for example,
Algorithm 3). Then, the following holds for the T = O((L_{2k+k*}/α_{2k+k*}) · log(‖w*‖₂/ε))-th iterate of
Algorithm 1:

‖w_T − w*‖₂ ≤ 2ε + (10 L_{2k+k*}/α_{2k+k*}) · γ,

where k = O((L_{2k+k*}/α_{2k+k*})² · k*) and γ = (2/L_{2k+k*}) max_{S, S ∈ S_k} ‖(∇f(w*))_S‖₂.
Algorithm 3 Greedy Projections for SoG
Require: g ∈ R^p, parameters k_1, k_2, groups G
1: û = 0, v = g, Ĝ = ∅, Ŝ = ∅
2: for t = 1, 2, ..., k_1 do
3:   Find G* = arg max_{G ∈ G \ Ĝ} ‖v_G‖
4:   Ĝ = Ĝ ∪ G*
5:   Let S correspond to the indices of the top k_2 entries of v_{G*} by magnitude
6:   Define ṽ ∈ R^p, ṽ_S = (v_{G*})_S, ṽ_i = 0 for i ∉ S
7:   Ŝ = Ŝ ∪ S
8:   û = û + ṽ
9:   v = v − ṽ
10: end for
11: Output: û, Ĝ, Ŝ
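An illustrative implementation of Algorithm 3 in the same style as the earlier greedy projection sketch (ours, not the authors' code); within each selected group, only the top-k₂ residual entries are kept:

```python
import numpy as np

def greedy_sog_projection(g, groups, k1, k2):
    v, u = g.copy(), np.zeros_like(g)
    chosen, support = [], []
    for _ in range(k1):
        best = max((i for i in range(len(groups)) if i not in chosen),
                   key=lambda i: np.linalg.norm(v[groups[i]]))
        chosen.append(best)
        G = np.asarray(groups[best])
        top = G[np.argsort(-np.abs(v[G]))[:k2]]   # top-k2 entries of v_{G*} by magnitude
        u[top] += v[top]
        v[top] = 0.0
        support.extend(top.tolist())
    return u, chosen, support
```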
Remarks

Similar to Theorem 3.1, we see that there is a tradeoff between obtaining accurate projections (ε) and
model mismatch (γ). Specifically, in this case one can obtain a small ε by increasing k_1, k_2 in Algorithm
3. However, this means we select a large number of groups, and γ subsequently increases.

A result similar to Theorem 3.2 can be obtained for the case when f is the least squares loss function.
Specifically, the sample complexity evaluates to n ≥ κ² k_1* log(M) + κ² k_1* k_2* log(max_i |G_i|). We
obtain results for least squares in Appendix F.
An interesting extension to the SoG case is that of a hierarchy of overlapping, sparsely activated
groups. When the groups at each level do not overlap, this reduces to the case considered in [11].
However, our theory shows that when a corresponding approximate projection operator is defined for
the hierarchical overlapping case (extending Algorithm 3), IHT methods can be used to obtain the
solution in an efficient manner.
5 Experiments and Results
[Figure 1 (plots): (left to right) log(objective) vs. time in seconds for IHT, IHT+FC, CoGEnT, FW,
and GOMP on well-conditioned and poorly conditioned data; phase transition plots of measurements
vs. condition number for IHT and for GOMP.]

Figure 1: (From left to right) Objective value as a function of time for various methods, when data is
well conditioned and poorly conditioned. The latter two figures show the phase transition plots for
poorly conditioned data, for IHT and GOMP respectively.
In this section, we empirically compare and contrast our proposed group IHT methods against the
existing approaches to solve the overlapping group sparsity problem. At a high level, we observe
that our proposed variants of IHT indeed outperform the existing state-of-the-art methods for group-sparse
regression in terms of time complexity. Encouragingly, IHT also performs competitively with
the existing methods in terms of accuracy. In fact, our results on the breast cancer dataset show a
10% relative improvement in accuracy over existing methods.

Greedy methods for group sparsity have been shown to outperform proximal point schemes, and
hence we restrict our comparison to greedy procedures. We compared five methods: our algorithm
with (IHT-FC) and without (IHT) the fully corrective step, the Frank-Wolfe (FW) method [19],
CoGEnT [15], and the Group OMP (GOMP) [18]. All relevant hyper-parameters were chosen via
a grid search, and experiments were run on a MacBook laptop with a 2.5 GHz processor and 16 GB
memory. Additional experimental results are presented in Appendix G.
?2
1
0
?3
?1
log(MSE)
?4
500
1000
1500
2000
2500
3000
3500
500
1000
1500
2000
2500
3000
3500
500
1000
1500
2000
2500
3000
3500
1
?5
0
?1
IHT
IHT?FC
COGEnT
FW
?6
?7
?8
0
1
0
?1
5
10
time (seconds)
15
20
Method
FW
IHT
GOMP
CoGEnT
IHT-FC
Error %
29.41
27.94
25.01
23.53
21.65
time (sec)
6.4538
0.0400
0.2891
0.1414
0.1601
Figure 2: (Left) SoG: error vs time comparison for various methods, (Center) SoG: reconstruction of
the true signal (top) from IHT-FC (middle) and CoGEnT (bottom). (Right:) Tumor Classification:
misclassification rate of various methods.
Synthetic Data, well conditioned: We first compared various greedy schemes for solving the
overlapping group sparsity problem on synthetic data. We generated M = 1000 groups of contiguous
indices of size 25; the last 5 entries of one group overlap with the first 5 of the next. We randomly
set 50 of these to be active, populated by uniform [−1, 1] entries. This yields w* ∈ R^p, p ≈ 22000.
X ∈ R^{n×p} where n = 5000 and X_ij ∼ N(0, 1) i.i.d. Each measurement is corrupted with Additive
White Gaussian Noise (AWGN) with standard deviation σ = 0.1. IHT methods achieve orders
of magnitude speedup compared to the competing schemes, and achieve almost the same (final)
objective function value despite approximate projections (Figure 1 (Left)).
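For reproducibility, the data generation just described can be sketched as follows (our reconstruction of the setup, not the authors' script; scale n and M down for a quick local run, since X is large at full size):

```python
import numpy as np

rng = np.random.default_rng(0)
M, size, overlap = 1000, 25, 5
stride = size - overlap
groups = [list(range(i * stride, i * stride + size)) for i in range(M)]
p = groups[-1][-1] + 1                         # ~20,000 features with this layout
w_star = np.zeros(p)
for i in rng.choice(M, 50, replace=False):     # 50 active groups, uniform [-1, 1] entries
    w_star[groups[i]] = rng.uniform(-1.0, 1.0, size)
n = 5000
X = rng.standard_normal((n, p))                # ~0.8 GB of doubles at full size
y = X @ w_star + 0.1 * rng.standard_normal(n)  # AWGN with sigma = 0.1
```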
Synthetic Data, poorly conditioned: Next, we consider the exact same setup, but with each row of
X given by x_i ∼ N(0, Σ), where κ = σ_max(Σ)/σ_min(Σ) = 10. Figure 1 (Center-left) again shows
the advantages of using IHT methods; IHT-FC is about 10 times faster than the next best, CoGEnT.

We next generate phase transition plots for recovery by our method (IHT) as well as the state-of-the-art
GOMP method. We generate vectors in the same vein as the above experiment, with
M = 500, B = 15, k = 25, p ≈ 5000. We vary the condition number of the data covariance
(κ) as well as the number of measurements (n). Figure 1 (Center-right and Right) shows the
phase transition plots as the measurements and the condition number are varied for IHT and GOMP,
respectively. The results are averaged over 10 independent runs. It can be seen that even for condition
numbers as high as 200, n ≈ 1500 measurements suffice for IHT to exactly recover w*, whereas
GOMP with the same setting is not able to recover w* even once.
Tumor Classification, Breast Cancer Dataset: We next compare the aforementioned methods on
a gene selection problem for breast cancer tumor classification. We use the data used in [8]². We ran
a 5-fold cross validation scheme to choose parameters, where we varied one tuning parameter over
{2^{-5}, 2^{-4}, ..., 2^{3}}, k over {2, 5, 10, 15, 20, 50, 100}, and another tuning parameter over
{2^{3}, 2^{4}, ..., 2^{13}}. Figure 2 (Right) shows that the vanilla hard
thresholding method is competitive despite performing approximate projections, and the method with
full corrections obtains the best performance among the methods considered. We randomly chose
15% of the data to test on.
Sparse Overlapping Group Lasso: Finally, we study the sparse overlapping group (SoG) problem
that was introduced and analyzed in [14] (Figure 2). We perform projections as detailed in Algorithm
3. We generated synthetic vectors with 100 groups of size 50 and randomly selected 5 groups to be
active, and among the active groups only set 30 coefficients to be non-zero. The groups themselves
were overlapping, with the last 10 entries of one group shared with the first 10 of the next, yielding
p ≈ 4000. We chose the best parameters from a grid, and we set k = 2k* for the IHT methods.
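Algorithm 3 itself is not reproduced here; the following is a minimal rendering of the two-stage idea behind such SoG projections (select groups, then keep the largest entries within the selected support). The function name and structure are our own, not the authors' code:

```python
import numpy as np

def project_sog(w, groups, k_groups, k_entries):
    """Approximate sparse-overlapping-group projection: pick the k_groups
    groups with largest norms, then keep the k_entries largest-magnitude
    coefficients within the union of the selected groups."""
    scores = np.array([np.linalg.norm(w[g]) for g in groups])
    keep = np.argsort(scores)[-k_groups:]
    support = np.unique(np.concatenate([groups[g] for g in keep]))
    top = support[np.argsort(np.abs(w[support]))[-k_entries:]]
    out = np.zeros_like(w)
    out[top] = w[top]
    return out
```

In the IHT loop from the earlier sketch, this projection simply replaces `project_groups`.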
6
Conclusions and Discussion
We proposed a greedy IHT method that can be applied to regression problems over sets of group-sparse
vectors. Our proposed solution is efficient and scalable, and provides fast convergence guarantees under
general RSC/RSS-style conditions, unlike existing methods. We extended our analysis to handle even
more challenging structures like sparse overlapping groups. Our experiments show that IHT methods
achieve fast, accurate results even with greedy and approximate projections.
² Download at http://cbio.ensmp.fr/ljacob/
References
[1] Francis Bach. Convex analysis and optimization with submodular functions: A tutorial. arXiv preprint arXiv:1010.4207, 2010.
[2] Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, and Chinmay Hegde. Model-based compressive sensing. IEEE Transactions on Information Theory, 56(4):1982–2001, 2010.
[3] Nirav Bhan, Luca Baldassarre, and Volkan Cevher. Tractability of interpretability via selection of group-sparse models. In IEEE International Symposium on Information Theory (ISIT), pages 1037–1041. IEEE, 2013.
[4] Thomas Blumensath and Mike E. Davies. Sampling theorems for signals from the union of finite-dimensional linear subspaces. IEEE Transactions on Information Theory, 55(4):1872–1882, 2009.
[5] Thomas Blumensath and Mike E. Davies. Normalized iterative hard thresholding: Guaranteed stability and performance. IEEE Journal of Selected Topics in Signal Processing, 4(2):298–309, 2010.
[6] Chinmay Hegde, Piotr Indyk, and Ludwig Schmidt. Approximation algorithms for model-based compressive sensing. IEEE Transactions on Information Theory, 61(9):5129–5147, 2015.
[7] Junzhou Huang, Tong Zhang, and Dimitris Metaxas. Learning with structured sparsity. The Journal of Machine Learning Research, 12:3371–3412, 2011.
[8] Laurent Jacob, Guillaume Obozinski, and Jean-Philippe Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
[9] Prateek Jain, Ambuj Tewari, and Inderjit S. Dhillon. Orthogonal matching pursuit with replacement. In Advances in Neural Information Processing Systems, pages 1215–1223, 2011.
[10] Prateek Jain, Ambuj Tewari, and Purushottam Kar. On iterative hard thresholding methods for high-dimensional M-estimation. In Advances in Neural Information Processing Systems, pages 685–693, 2014.
[11] Rodolphe Jenatton, Julien Mairal, Francis R. Bach, and Guillaume R. Obozinski. Proximal methods for sparse hierarchical dictionary learning. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 487–494, 2010.
[12] Anastasios Kyrillidis and Volkan Cevher. Combinatorial selection and least absolute shrinkage via the CLASH algorithm. In IEEE International Symposium on Information Theory (ISIT), pages 2216–2220. IEEE, 2012.
[13] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[14] Nikhil Rao, Christopher Cox, Rob Nowak, and Timothy T. Rogers. Sparse overlapping sets lasso for multitask learning and its application to fMRI analysis. In Advances in Neural Information Processing Systems, pages 2202–2210, 2013.
[15] Nikhil Rao, Parikshit Shah, and Stephen Wright. Forward-backward greedy algorithms for atomic norm regularization. IEEE Transactions on Signal Processing, 63(21):5798–5811, 2015.
[16] Nikhil S. Rao, Ben Recht, and Robert D. Nowak. Universal measurement bounds for structured sparse signal recovery. In International Conference on Artificial Intelligence and Statistics, pages 942–950, 2012.
[17] Parikshit Shah and Venkat Chandrasekaran. Iterative projections for signal identification on manifolds: Global recovery guarantees. In 49th Annual Allerton Conference on Communication, Control, and Computing, pages 760–767. IEEE, 2011.
[18] Grzegorz Swirszcz, Naoki Abe, and Aurelie C. Lozano. Grouped orthogonal matching pursuit for variable selection and prediction. In Advances in Neural Information Processing Systems, pages 1150–1158, 2009.
[19] Ambuj Tewari, Pradeep K. Ravikumar, and Inderjit S. Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In Advances in Neural Information Processing Systems, pages 882–890, 2011.
[20] Sara A. van de Geer, Peter Bühlmann, et al. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[21] Xiaotong Yuan, Ping Li, and Tong Zhang. Gradient hard thresholding pursuit for sparsity-constrained optimization. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 127–135, 2014.
5,998 | 6,426 | Stochastic Variational Deep Kernel Learning
Andrew Gordon Wilson*
Cornell University
Zhiting Hu*
CMU
Ruslan Salakhutdinov
CMU
Eric P. Xing
CMU
Abstract
Deep kernel learning combines the non-parametric flexibility of kernel methods
with the inductive biases of deep learning architectures. We propose a novel deep
kernel learning model and stochastic variational inference procedure which generalizes deep kernel learning approaches to enable classification, multi-task learning,
additive covariance structures, and stochastic gradient training. Specifically, we
apply additive base kernels to subsets of output features from deep neural architectures, and jointly learn the parameters of the base kernels and deep network
through a Gaussian process marginal likelihood objective. Within this framework,
we derive an efficient form of stochastic variational inference which leverages local
kernel interpolation, inducing points, and structure exploiting algebra. We show
improved performance over stand alone deep networks, SVMs, and state of the
art scalable Gaussian processes on several classification benchmarks, including an
airline delay dataset containing 6 million training points, CIFAR, and ImageNet.
1
Introduction
Large datasets provide great opportunities to learn rich statistical representations, for accurate
predictions and new scientific insights into our modeling problems. Gaussian processes are promising
for large data problems, because they can grow their information capacity with the amount of available
data, in combination with automatically calibrated model complexity [21, 25].
From a Gaussian process perspective, all of the statistical structure in data is learned through a kernel
function. Popular kernel functions, such as the RBF kernel, provide smoothing and interpolation,
but cannot learn representations necessary for long range extrapolation [22, 25]. With smoothing
kernels, we can only use the information in a large dataset to learn about noise and length-scale
hyperparameters, which tell us only how quickly correlations in our data vary with distance in the
input space. If we learn a short length-scale hyperparameter, then by definition we will only make
use of a small amount of training data near each testing point. If we learn a long length-scale, then
we could subsample the data and make similar predictions.
Therefore to fully use the information in large datasets, we must build kernels with great representational power and useful learning biases, and scale these approaches without sacrificing this
representational ability. Indeed many recent approaches have advocated building expressive kernel
functions [e.g., 22, 9, 26, 25, 17, 31], and emerging research in this direction takes inspiration from
deep learning models [e.g., 28, 5, 3]. However, the scalability, general applicability, and interpretability of such approaches remain a challenge. Recently, Wilson et al. [30] proposed simple and scalable
deep kernels for single-output regression problems, with promising performance on many experiments. But their approach does not allow for stochastic training, multiple outputs, deep architectures
with many output features, or classification. And it is on classification problems, in particular, where
we often have high dimensional input vectors, with little intuition about how these vectors should
correlate, and therefore most want to learn a flexible non-Euclidean similarity metric [1].
*Equal contribution. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona,
Spain.
In this paper, we introduce inference procedures and propose a new deep kernel learning model
which enables (1) classification and non-Gaussian likelihoods; (2) multi-task learning¹; (3) stochastic
gradient mini-batch training; (4) deep architectures with many output features; (5) additive covariance
structures; and (6) greatly enhanced scalability.
We propose to use additive base kernels corresponding to Gaussian processes (GPs) applied to subsets
of output features of a deep neural architecture. We then linearly mix these Gaussian processes,
inducing correlations across multiple output variables. The result is a deep probabilistic neural
network, with a hidden layer composed of additive sets of infinite basis functions, linearly mixed
to produce correlated output variables. All parameters of the deep architecture and base kernels
are jointly learned through a marginal likelihood objective, having integrated away all GPs. For
scalability and non-Gaussian likelihoods, we derive stochastic variational inference (SVI) which
leverages local kernel interpolation, inducing points, and structure exploiting algebra, and a hybrid
sampling scheme, building on Wilson and Nickisch [27], Wilson et al. [29], Titsias [24], Hensman
et al. [10], and Nickson et al. [18]. The resulting approach, SV-DKL, has a complexity of O(m^{1+1/D})
for m inducing points and D input dimensions, versus the standard O(m^3) for efficient stochastic
variational methods.
We achieve good predictive accuracy and scalability over a wide range of classification tasks,
while retaining a straightforward, general purpose, and highly practical probabilistic non-parametric
representation, with code available at https://people.orie.cornell.edu/andrew/code.
2
Background
Throughout this paper, we assume we have access to vectorial input-output pairs D = {x_i, y_i},
where each y_i is related to x_i through a Gaussian process and observation model. For example,
in regression, one could model y(x) | f(x) ~ N(y(x); f(x), σ² I), where f(x) is a latent vector of
independent Gaussian processes f_j ~ GP(0, k_j), and σ² I is a noise covariance matrix.
The computational bottleneck in working with Gaussian processes typically involves computing
(K_{X,X} + σ² I)^{-1} y and log |K_{X,X}| over an n × n covariance matrix K_{X,X} evaluated at n training
inputs X. Standard procedure is to compute the Cholesky decomposition of K_{X,X}, which incurs
O(n³) computations and O(n²) storage, after which predictions cost O(n²) per test point. Gaussian
processes are thus typically limited to at most a few thousand training points. Many promising
approaches to scalability have been explored, for example, involving randomized methods [20, 16, 31],
and low rank approximations [23, 19].
, and low rank approximations [23, 19]. Wilson and Nickisch [27] recently introduced the KISS-GP
e X,X 0 = MX KZ,Z M >0 , which admits fast computations, given the
approximate kernel matrix K
X
exact kernel matrix KZ,Z evaluated on a latent multidimensional lattice of m inducing inputs Z, and
MX , a sparse interpolation matrix. Without requiring any grid structure in X, KZ,Z decomposes
into a Kronecker product of Toeplitz matrices, which can be approximated by circulant matrices [29].
Exploiting such structure in combination with local kernel interpolation enables one to use many
inducing points, resulting in near-exact accuracy in the kernel approximation, and O(n) inference.
Unfortunately, this approach does not typically apply to D > 5 dimensional inputs [29].
Moreover, the Gaussian process marginal likelihood does not factorize, and thus stochastic gradient
descent does not ordinarily apply. To address this issue, Hensman et al. [10] extended the variational
approach from Titsias [24] and derived a stochastic variational GP posterior over inducing points
for a regression model which does have the required factorization for stochastic gradient descent.
Hensman et al. [12], Hensman et al. [11], and Dezfouli and Bonilla [6] further combine this with
a sampling procedure for estimating non-conjugate expectations. These methods have O(m³)
sampling complexity, which becomes prohibitive where many inducing points are desired for accurate
approximation. Nickson et al. [18] consider Kronecker structure in the stochastic approximation of
Hensman et al. [10] for regression, but do not leverage local kernel interpolation or sampling.
To address these limitations, we introduce a new deep kernel learning model for multi-task classification, mini-batch training, and scalable kernel interpolation which does not require low dimensional
input spaces. In this paper, we view scalability and flexibility as two sides of one coin: we most want
the flexible models on the largest datasets, which contain the necessary information to discover rich
1
We follow the GP convention where multi-task learning involves a function mapping a single input to
multiple correlated output responses (class probabilities, regression responses, etc.). Unlike NNs which naturally
have correlated outputs by sharing hidden basis functions (and multi-task can have a more specialized meaning),
most GP models perform multiple binary classification, ignoring correlations between output classes. Even
applying a GP to NN features for deep kernel learning does not naturally produce multiple correlated outputs.
[Figure 1 schematic omitted: inputs x_1, ..., x_D are mapped through hidden layers h^{(1)}, ..., h^{(L)} with weights W^{(1)}, ..., W^{(L)}, into an additive GP layer f_1, ..., f_J, which is linearly mixed by A into outputs y_1, ..., y_C.]
Figure 1: Deep Kernel Learning for Multidimensional Outputs. Multidimensional inputs x ∈ R^D are mapped
through a deep architecture, and then a series of additive Gaussian processes f_1, ..., f_J, with base kernels
k_1, ..., k_J, are each applied to subsets of the network features h^{(L)}_1, ..., h^{(L)}_Q. The thick lines indicate a
probabilistic mapping. The additive Gaussian processes are then linearly mixed by the matrix A and mapped to
output variables y_1, ..., y_C (which are then correlated through A). All of the parameters of the deep network,
base kernels, and mixing layer (the network weights w, the kernel hyperparameters, and A) are learned jointly
through the (variational) marginal likelihood of our model, having integrated away all of the Gaussian
processes. We can view the resulting model as a Gaussian process which uses an additive series of deep kernels
with weight sharing.
statistical structure. We show that the resulting approach can learn very expressive and interpretable
kernel functions on large classification datasets, containing millions of training points.
3
Deep Kernel Learning for Multi-task Classification
We propose a new deep kernel learning approach to account for classification and non-Gaussian
likelihoods, multiple correlated outputs, additive covariances, and stochastic gradient training.
We propose to build a probabilistic deep network as follows: 1) a deep non-linear transformation
h(x, w), parametrized by weights w, is applied to the observed input variable x, to produce Q
features at the final layer L, h^{(L)}_1, ..., h^{(L)}_Q; 2) J Gaussian processes, with base kernels k_1, ..., k_J,
are applied to subsets of these features, corresponding to an additive GP model [e.g., 7]. The base
kernels can thus act on relatively low dimensional inputs, where local kernel interpolation and learning
biases such as similarities based on Euclidean distance are most natural; 3) these GPs are linearly
mixed by a matrix A ∈ R^{C×J}, and transformed by an observation model, to produce the output
variables y1 , . . . , yC . The mixing of these variables through A produces correlated multiple outputs,
a multi-task property which is uncommon in Gaussian processes or SVMs. The structure of this
network is illustrated in Figure 1. Critically, all of the parameters in the model (including base kernel
hyperparameters) are trained through optimizing a marginal likelihood, having integrated away the
Gaussian processes, through the variational inference procedures described in section 4.
For classification, we consider a special case of this architecture. Let C be the number of classes, and
suppose we have data {x_i, y_i}_{i=1}^n, where y_i ∈ {0, 1}^C is a one-hot encoding of the class label. We use the
softmax observation model:

p(y_i | f_i, A) = exp(A(f_i)^T y_i) / Σ_c exp(A(f_i)^T e_c),   (1)

where f_i ∈ R^J is a vector of independent Gaussian processes followed by a linear mixing layer
A(f_i) = A f_i, and e_c is the indicator vector with the c-th element being 1 and the rest 0.
For the jth Gaussian process in the additive GP layer, let f_j = {f_{ij}}_{i=1}^n be the latent function values on
the input data features. By introducing a set of latent inducing variables u_j indexed by m inducing
inputs Z, we can write [e.g., 19]

p(f_j | u_j) = N(f_j | K^{(j)}_{X,Z} K^{(j),-1}_{Z,Z} u_j, K̃^{(j)}),   K̃ = K_{X,X} - K_{X,Z} K^{-1}_{Z,Z} K_{Z,X}.   (2)
Substituting the local interpolation approximation K_{X,X'} = M K_{Z,Z} M^T of Wilson and Nickisch
[27] into Eq. (2), we find K̃^{(j)} = 0; it therefore follows that f_j = K_{X,Z} K^{-1}_{Z,Z} u = M u. In section 4
we exploit this deterministic relationship between f and u, governed by the sparse matrix M, to
derive a particularly efficient stochastic variational inference procedure.
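To illustrate the deterministic mapping f = M u, here is a minimal 1-D sketch (ours, assuming NumPy and SciPy; KISS-GP uses local cubic interpolation on a multidimensional lattice, whereas this uses linear weights for brevity):

```python
import numpy as np
from scipy.sparse import csr_matrix

def interp_matrix_1d(x, z):
    """Sparse linear-interpolation weights M with f(x) ≈ M @ f(z)
    for a regular 1-D grid z (a stand-in for KISS-GP's cubic scheme)."""
    h = z[1] - z[0]
    idx = np.clip(((x - z[0]) / h).astype(int), 0, len(z) - 2)
    frac = (x - z[idx]) / h
    rows = np.repeat(np.arange(len(x)), 2)
    cols = np.stack([idx, idx + 1], axis=1).ravel()
    vals = np.stack([1 - frac, frac], axis=1).ravel()
    return csr_matrix((vals, (rows, cols)), shape=(len(x), len(z)))

z = np.linspace(0, 1, 50)       # inducing grid
x = np.random.rand(200)         # (deep-network) features
M = interp_matrix_1d(x, z)
u = np.sin(2 * np.pi * z)       # pretend sample of inducing values
f = M @ u                       # latent function values, f = M u
```

Each row of M has only a handful of nonzeros, so computing f = M u costs O(n) once u is available.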
Eq. (1) and Eq. (2) together form the additive GP layer and the linear mixing layer of the proposed
deep probabilistic network in Figure 1, with all parameters (including network weights) trained jointly
through the Gaussian process marginal likelihood.
4
Structure Exploiting Stochastic Variational Inference
Exact inference and learning in Gaussian processes with a non-Gaussian likelihood is not analytically
tractable. Variational inference is an appealing approximate technique due to its automatic regularization to avoid overfitting, and its ability to be used with stochastic gradient training, by providing a
factorized approximation to the Gaussian process marginal likelihood. We develop our stochastic
variational method equipped with a fast sampling scheme for tackling any intractable marginalization.
Let u = {u_j}_{j=1}^J be the collection of the inducing variables of the J additive GPs. We assume a
variational posterior q(u) over the inducing variables. By Jensen's inequality we have

log p(y) ≥ E_{q(u) p(f|u)}[log p(y|f)] - KL[q(u) || p(u)] =: L(q),   (3)
where we have omitted the mixing weights A for clarity. The KL divergence term can be interpreted
as a regularizer encouraging the approximate posterior q(u) to be close to the prior p(u). We aim
at tightening the marginal likelihood lower bound L(q) which is equivalent to minimizing the KL
divergence from q to the true posterior.
Since the likelihood function typically factorizes over data instances, p(y|f) = Π_{i=1}^n p(y_i | f_i),
we can optimize the lower bound with stochastic gradients. In particular, we specify q(u) =
Π_j N(u_j | μ_j, S_j) for the independent GPs, and iteratively update the variational parameters
{μ_j, S_j}_{j=1}^J and the kernel and deep network parameters using a noisy approximation of the gradient
of the lower bound on minibatches of the full data. Henceforth we omit the index j for clarity.
Unfortunately, for general non-Gaussian likelihoods the expectation in Eq (3) is usually intractable.
We develop a sampling method for tackling this intractability which is highly efficient with structured
reparameterization, local kernel interpolation, and structure exploiting algebra.
Using local kernel interpolation, the latent function f is expressed as a deterministic local interpolation
of the inducing variables u (section 3). This result allows us to work around any difficult approximate
posteriors on f which typically occur in variational approaches for GPs. Instead, our sampler only
needs to account for the uncertainty on u. The direct parameterization of q(u) yields a straightforward
and efficient sampling procedure. The latent function samples (indexed by t) are then computed
directly through interpolation f (t) = M u(t) .
As opposed to conventional mean-field methods, which assume a diagonal variational covariance
matrix, we use the Cholesky decomposition for reparameterizing u in order to preserve structures
within the covariance. Specifically, we let S = L^T L, resulting in the following sampling procedure:

u^{(t)} = μ + L^T ε^{(t)},   ε^{(t)} ~ N(0, I).
Each step of the above standard sampler has complexity of O(m2 ), where m is the number of
inducing points. Due to the matrix vector product, this sampling procedure becomes prohibitive
in the presence of many inducing points, which are required for accuracy on large datasets with
multidimensional inputs ? particularly if we have an expressive kernel function [27].
We scale up the sampler by leveraging the fact that the inducing points are placed on a grid (taking
advantage of both Toeplitz and circulant structure), and additionally imposing a Kronecker decomposition on L = ⊗_{d=1}^D L_d, where D is the input dimension of the base kernel. With the fast Kronecker
matrix-vector products, we reduce the above sampling cost of O(m²) to O(m^{1+1/D}). Our approach
thus greatly improves over previous stochastic variational methods, which typically scale with O(m³)
complexity, as discussed shortly.
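A sketch of the structured sampler (ours, assuming NumPy): the Kronecker-factored matrix-vector product avoids ever forming L, so drawing u = μ + L^T ε stays cheap even with m = 10³ inducing points here.

```python
import numpy as np

def kron_mv(mats, v):
    """Compute (L_1 ⊗ ... ⊗ L_D) v without forming the Kronecker product."""
    T = v.reshape([L.shape[1] for L in mats])
    for d, L in enumerate(mats):
        # contract L with axis d of the tensorized vector, keeping axis order
        T = np.moveaxis(np.tensordot(L, T, axes=([1], [d])), 0, d)
    return T.ravel()

# u = mu + L^T eps, with L = L_1 ⊗ L_2 ⊗ L_3 (per-dimension Cholesky factors);
# note (L_1 ⊗ L_2 ⊗ L_3)^T = L_1^T ⊗ L_2^T ⊗ L_3^T.
Ls = [np.tril(np.random.rand(10, 10)) + np.eye(10) for _ in range(3)]
mu = np.zeros(10 ** 3)
eps = np.random.standard_normal(mu.size)
u = mu + kron_mv([L.T for L in Ls], eps)
```

Each factor multiplication touches one tensor axis, which is where the O(m^{1+1/D}) cost per sample comes from.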
Note that the KL divergence term between the two Gaussians in Eq (3) has a closed form, without the
need for Monte Carlo estimation. Computing the KL term and its derivatives, with the Kronecker
method, is O(D m^{3/D}). With T samples of u and a minibatch of data points of size B, we can
estimate the marginal likelihood lower bound as

L ≈ (N / (T B)) Σ_{t=1}^T Σ_{i=1}^B log p(y_i | f_i^{(t)}) - KL[q(u) || p(u)],   (4)

and the derivatives of L w.r.t. the model hyperparameters θ and the variational parameters
{μ, {L_d}_{d=1}^D} can be taken similarly. We provide the detailed derivation in the supplement.
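Putting the pieces together, a one-function sketch of the estimator in Eq. (4) (ours; it assumes the interpolation matrix M, mixing matrix A, and a sampler for L^T ε as above, and for simplicity shares the covariance factor across the J GPs; all names are hypothetical):

```python
import numpy as np

def minibatch_elbo(mus, sample_LT, M, batch_idx, Y, A, kl, N, T=1):
    """Monte Carlo estimate of Eq. (4). mus: list of J variational means;
    sample_LT: callable eps -> L^T eps; M: (n, m) sparse interpolation matrix;
    Y: (B, C) one-hot labels for the minibatch; kl: closed-form KL term."""
    Mb = M[batch_idx]                        # interpolation rows for the batch
    B = len(batch_idx)
    total = 0.0
    for _ in range(T):
        # sample each GP's inducing values and interpolate: f_j = M u_j
        F = np.column_stack(
            [Mb @ (mu + sample_LT(np.random.standard_normal(mu.size)))
             for mu in mus])                 # (B, J) latent values
        logits = F @ A.T                     # (B, C) mixed latent values
        logits -= logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        total += (logp * Y).sum()            # sum of log p(y_i | f_i^{(t)})
    return (N / (T * B)) * total - kl
```

Gradients of this scalar with respect to the network weights, kernel hyperparameters, μ, and the L_d factors are what the joint training loop backpropagates.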
Although a small body of pioneering work has developed stochastic variational methods for Gaussian
processes, our approach distinctly provides the above representation-preserving variational approximation, and exploits algebraic structure for significant advantages in scalability and accuracy. In
particular, a similar variational lower bound to Eq (3) was proposed in [24, 10] for a sparse GP,
and was extended to non-conjugate likelihoods, with the intractable integrals estimated using
Gaussian quadrature as in KLSP-GP [11] or univariate Gaussian samples as in SAVI-GP [6].
Hensman et al. [12] estimates nonconjugate expectations with a hybrid Monte Carlo sampler (denoted
as MC-GP). The computations in these approaches can be costly, with O(m³) complexity, due to
a complicated variational posterior over f as well as the expensive operations on the full inducing
point matrix. In addition to its increased efficiency, our sampling scheme is much simpler, without
introducing any additional tuning parameters. We empirically compare with these methods and show
the practical significance of our algorithm in section 5.
Variational methods have also been used in GP regression for stochastic inference (e.g., [18, 10]),
and some of the most recent work in this area applied variational auto-encoders [14] for coupled
variational updates (aka back constraints) [4, 2]. We note that these techniques are orthogonal and
complementary to our inference approach, and can be leveraged for further enhancements.
5
Experiments
We evaluate our proposed approach, stochastic variational deep kernel learning (SV-DKL), on a
wide range of classification problems, including an airline delay task with over 5.9 million data
points (section 5.1), a large and diverse collection of classification problems from the UCI repository
(section 5.2), and image classification benchmarks (section 5.3). Empirical results demonstrate the
practical significance of our approach, which provides consistent improvements over stand-alone
DNNs, while preserving a GP representation, and dramatic improvements in speed and accuracy over
modern state of the art GP models. We use classification accuracy when comparing to DNNs, because
it is a standard for evaluating classification benchmarks with DNNs. However, we also compute the
negative log probability (NLP) values (supplement), which show similar trends.
All experiments were performed on a Linux machine with eight 4.0GHz CPU cores, one Tesla K40c
GPU, and 32GB RAM. We implemented deep neural networks with Caffe [13].
Model Training For our deep kernel learning model, we used deep neural networks which produce
C-dimensional top-level features. Here C is the number of classes. We place a Gaussian process on
each dimension of these features. We used RBF base kernels. The additive GP layer is then followed
by a linear mixing layer A ? RC?C . We initialized A to be an identity matrix, and optimized in the
joint learning procedure to recover cross-dimension correlations from data.
We first train a deep neural network using SGD with the softmax loss objective, and rectified linear
activation functions. After the neural network has been pre-trained, we fit an additive KISS-GP
layer, followed by a linear mixing layer, using the top-level features of the deep network as inputs.
Using this pre-training initialization, our joint SV-DKL model of section 3 is then trained through the
stochastic variational method of section 4 which jointly optimizes all the hyperparameters ? of the
deep kernel (including all network weights), as well as the variational parameters, by backpropagating
derivatives through the proposed marginal likelihood lower bound of the additive Gaussian process in
section 4. In all experiments, we use a relatively large mini-batch size (specified according to the
full data size), enabled by the proposed structure exploiting variational inference procedures. We
achieve good performance setting the number of samples T = 1 in Eq. 4 for expectation estimation
in variational inference, which provides additional confirmation for a similar observation in [14].
5.1
Airline Delays
We first consider a large airline dataset consisting of flight arrival and departure details for all
commercial flights within the US in 2008. The approximately 5.9 million records contain extensive
information about the flights, including the delay in reaching the destination. Following [11], we
consider the task of predicting whether a flight was subject to delay based on 8 features (e.g., distance
to be covered, day of the week, etc).
Classification accuracy Table 1 reports the classification accuracy of 1) KLSP-GP [11], a recent
scalable variational GP classifier as discussed in section 4; 2) stand-alone deep neural network (DNN);
3) DNN+, a stand-alone DNN with an extra Q × c fully-connected hidden layer with Q, c defined as
in Figure 1; 4) DNN+GP which is a GP applied to a pre-trained DNN (with same architecture as in 2);
and 5) our stochastic variational DKL method (SV-DKL) (same DNN architecture as in 2). For DNN,
we used a fully-connected architecture with layers d-1000-1000-500-50-c.² The DNN component of
the SV-DKL model has the exact same architecture. The SV-DKL joint training was conducted using
a large minibatch size of 50,000 to reduce the variance of the stochastic gradient. We can use such a
large minibatch in each iteration (which is daunting for regular GP even as a whole dataset) due to the
efficiency of our inference strategy within each mini-batch, leveraging structure exploiting algebra.
From the table we see that SV-DKL outperforms both the alternative variational GP model (KLSP-GP) and the stand-alone deep network. DNN+GP outperforms stand-alone DNNs, showing the
non-parametric flexibility of kernel methods. By combining KISS-GP with DNNs as part of a joint
SV-DKL procedure, we obtain better results than DNN and DNN+GP. Moreover, both the plain DNN
and SV-DKL notably improve on stand-alone GPs, indicating a superior capacity of deep architectures
to learn representations from large but finite training sets, despite the asymptotic approximation
properties of Gaussian processes. By contrast, adding an extra hidden layer, as in DNN+, does not
improve performance.
Figure 2(a) further studies how performance changes as the data size increases. We observe that the
proposed SV-DKL classifier trained on 1/50 of the data already reaches a competitive accuracy
compared to the KLSP-GP model trained on the full dataset. As the number of the training points
increases, the SV-DKL and DNN models continue to improve. This experiment demonstrates the
value of expressive kernel functions on large data problems, which can effectively capture the extra
information available as seeing more training instances. Furthermore, SV-DKL consistently provides
better performance over the plain DNN, through its non-parametric flexibility.
Scalability We next measure the scalability of our variational DKL in terms of the number of
inducing points m in each GP. Figure 2(c) shows the runtimes in seconds, as a function of m, for
evaluating the objective and any relevant derivatives. We compare our structure exploiting variational
method with the scalable variational inference in KLSP-GP, and the MCMC-based variational method
in MC-GP [12]. We see that our inference approach is far more efficient than previous scalable
algorithms. Moreover, when the number of inducing points is not too large (e.g., m = 70), the added
time for SV-DKL over DNN is reasonable (e.g., 0.39s vs. 0.27s), especially considering the gains in
performance and expressive power. Figure 2(d) shows the runtime scaling of different variational
methods as m grows. We can see that the runtime of our approach increases only slowly in a wide
range of m (< 2, 000), greatly enhancing the scalability over the other methods. This empirically
validates the improved time complexity of our new inference method as presented in section 4.
We next investigate the total training time of the models. Table 1, right panel, lists the time cost of
training KLSP-GP, DNN, and SV-DKL; and Figure 2(b) shows how the training time of SV-DKL and
DNN changes as more training data is presented. We see that on the full dataset DKL, as a GP model,
saves over 60% time as compared to the modern state of the art KLSP-GP, while at the same time
achieving over an 18% improvement in predictive accuracy. Generally, the training time of SV-DKL
increases slowly with growing data sizes, and has only modest additional overhead compared to
stand-alone architectures, justified by improvements in performance, and the general benefits of a
non-parametric probabilistic representation. Moreover, the DNN was fully trained on a GPU, while
in SV-DKL the base kernel hyperparameters and variational parameters were optimized on a CPU.
Since most updates of the SV-DKL parameters are computed in matrix forms, we believe the already
modest time gap between SV-DKL and DNNs can be almost entirely closed by deploying the whole
SV-DKL model on GPUs.
5.2
UCI Classification Tasks
The second evaluation of our proposed algorithm (SV-DKL) is conducted on a number of commonly
used UCI classification tasks of varying sizes and properties. Table 1 (supplement) lists the classification accuracy of SVM, DNN, DNN+ (a stand-alone DNN with an extra Q × c fully-connected
hidden layer with Q, c defined as in Figure 1), DNN+GP (a GP trained on the top level features of a
trained DNN without the extra hidden layer), and SV-DKL (same architecture as DNN).
2
We obtained similar results with other DNN architectures (e.g., d-1000-1000-500-50-20-c).
Table 1: Classification accuracy and training time on the airline delay dataset, with n data points, d input
dimensions, and c classes. KLSP-GP is a stochastic variational GP classifier proposed in [11]. DNN+ is the
DNN with an extra hidden layer. DNN+GP is a GP applied to the fixed pre-trained output layer of the DNN (without
the extra hidden layer). Following Hensman et al. [11], we selected a hold-out set of 100,000 points uniformly
at random; the results of DNN and SV-DKL are averaged over 5 runs ± one standard deviation. Since the
code of KLSP-GP is not publicly available we directly show the results from [11].
Datasets   n           d   c    Accuracy                                                                  Training time (h)
                                KLSP-GP [11]   DNN           DNN+          DNN+GP         SV-DKL          KLSP-GP   DNN    SV-DKL
Airline    5,934,530   8   2    ≈0.675         0.773±0.001   0.722±0.002   0.7746±0.001   0.781±0.001     ≈11       0.53   3.98
12
DNN
SV?DKL
KLSP?GP
10
0.74
0.72
0.7
0.68
Total Training Time (h)
KLSP-GP [11]
Training time (h)
0.78
d
DNN
SV?DKL
KLSP?GP
300
8
6
4
SV?DKL
KLSP?GP
MC?GP
DNN
200
Runtime scaling
n
Runtime (s)
Datasets
200
100
2
150
SV?DKL
KLSP?GP
MC?GP
slope=1
100
50
0.66
1
2
3
4
#Training Instances
5
6
0
6
x 10
1
2
3
4
#Training Instances
5
0
70 200 400
6
800
1200
#Inducing points
6
x 10
1600
2000
1
70 200 400
800
1200
1600
2000
#Inducing points
Figure 2: (a) Classification accuracy vs. the number of training points (n). We tested the deep models, DNN
and SV-DKL, by training on 1/50, 1/10, 1/3, and the full dataset, respectively. For comparison, the cyan diamond
and black dashed line show the accuracy level of KLSP-GP trained on the full data. (b) Training time vs. n. The
cyan diamond and black dashed line show the training time of KLSP-GP on the full data. (c) Runtime vs. the
number of inducing points (m) on the airline task, applying different variational methods for deep kernel learning.
The minibatch size is fixed to 50,000. The runtime of the stand-alone DNN does not change as m varies. (d)
The scaling of runtime relative to the runtime at m = 70. The black dashed line indicates a slope of 1.
The plain DNN, which learns salient features effectively from raw data, gives notably higher accuracy
compared to an SVM, the most widely used kernel method for classification problems. We see that
the extra layer in DNN+ can sometimes harm performance. By contrast, the non-parametric flexibility
of DNN+GP consistently improves upon DNN. And SV-DKL, by training a DNN through a GP
marginal likelihood objective, consistently provides further enhancements (with particularly notable
performance on the Connect4 and Covtype datasets).
5.3
Image Classification
We next evaluate the proposed scalable SV-DKL procedure for efficiently handling high-dimensional
highly-structured image data. We used a minibatch size of 5,000 for stochastic gradient training of
SV-DKL. Table 2 compares SV-DKL with the most recent scalable GP classifiers. Besides KLSP-GP,
we also collected the results of the MC-GP [12] which uses a hybrid Monte Carlo sampler to tackle
non-conjugate likelihoods, SAVI-GP [6] which approximates with a univariate Gaussian sampler,
as well as the distributed GP latent variable model (denoted as D-GPLVM) [8]. We see that on the
respective benchmark tasks, SV-DKL improves over all of the above scalable GP methods by a large
margin. We note that these datasets are very challenging for conventional GP methods.
We further compare SV-DKL to stand-alone convolutional neural networks, and GPs applied to
fixed pre-trained CNNs (CNN+GP). On the first three datasets in Table 2, we used the reference
CNN models implemented in Caffe; and for the SVHN dataset, as no benchmark architecture is
available, we used the CIFAR10 architecture which turned out to perform quite well. As we can see,
the SV-DKL model outperforms CNNs and CNN+GP on all datasets. By contrast, the extra
Q × c hidden layer in CNN+ does not consistently improve performance over CNN.
ResNet Comparison: Based on one of the best public implementations on Caffe, the ResNet-20 has
0.901 accuracy on CIFAR10, and SV-DKL (with this ResNet base architecture) improves to 0.910.
ImageNet: We randomly selected 20 categories of images with an AlexNet variant as the base NN
[15], which has an accuracy of 0.6877, while SV-DKL achieves 0.7067 accuracy.
5.3.1
Interpretation
In Figure 3(a) we investigate the deep kernels learned on the MNIST dataset by randomly selecting 4
classes and visualizing the covariance matrices of respective dimensions. The covariance matrices are
evaluated on the set of test inputs, sorted in terms of the labels of the input images. We see that the
Table 2: Classification accuracy on the image classification benchmarks. MNIST-Binary is the task of
differentiating between odd and even digits on the MNIST dataset. We followed the standard training-test set
partitioning of all these datasets. We have collected recently published results of a variety of scalable GPs.
For CNNs, we used the respective benchmark architectures (or slight adaptations thereof) from Caffe. CNN+ is
a stand-alone CNN with an extra Q × c fully-connected hidden layer. See the text for more details, including a
comparison with ResNets on CIFAR10.
Datasets       n     d         c    MC-GP [12]   SAVI-GP [6]   D-GPLVM [8]   KLSP-GP [11]   CNN      CNN+     CNN+GP   SV-DKL
MNIST-Binary   60K   28×28     2    –            –             –             0.978          0.9934   0.8838   0.9938   0.9940
MNIST          60K   28×28     10   0.9804       0.9749        0.9405        –              0.9908   0.9909   0.9915   0.9920
CIFAR10        50K   3×32×32   10   –            –             –             –              0.7592   0.7618   0.7633   0.7704
SVHN           73K   3×32×32   10   –            –             –             –              0.9214   0.9193   0.9221   0.9228
[Figure 3 panels omitted: (a) four covariance matrices, for classes c = 2, 3, 6, and 8, over test inputs indexed 0-10k and ordered by label, with a shared color scale; (b) the mixing-layer weights over output classes 1-9 vs. input dimensions 1-9.]
Figure 3: (a) The induced covariance matrices on classes 2, 3, 6, and 8, on test cases of the MNIST dataset
ordered according to the labels. (b) The final mixing layer (i.e., matrix A) on MNIST digit recognition.
deep kernel on each dimension effectively discovers the correlations between the images within the
corresponding class. For instance, for c = 2 the data points between 2k-3k (i.e., images of digit 2) are
strongly correlated with each other, and carry little correlation with the rest of the images. Moreover,
we can also clearly observe that the rest of the data points form multiple "blocks", rather than
being crammed together without any structure. This validates that the DKL procedure and additive
GPs do capture the correlations across different dimensions.
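Such a panel can be reproduced from a trained model by evaluating one additive GP's base kernel on the label-sorted network features. A minimal sketch (ours, assuming an RBF base kernel and NumPy; evaluate on a subsample for large test sets, since the matrix is n × n):

```python
import numpy as np

def induced_covariance(features, lengthscale=1.0):
    """RBF base-kernel covariance over (label-sorted) top-layer features
    for one additive GP dimension, as visualized in Figure 3(a)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)
```

Block structure along the diagonal of the resulting matrix then indicates within-class correlation under the learned metric.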
To further explore the learnt dependencies between the output classes and the additive GPs serving
as the bases, we visualized the weights of the mixing layer (A) in Fig. 3(b), enabling the correlated
multi-output (multi-task) nature of the model. Besides the expected high weights along the diagonal,
we find that class 9 is also highly correlated with dimensions 0 and 6, which is consistent with the
visual similarity between digit "9" and "0"/"6". Overall, the ability to interpret the learned deep
covariance matrix as discovering an expressive similarity metric across data instances is a distinctive
feature of our approach.
6
Discussion
We introduced a scalable Gaussian process model which leverages deep learning, stochastic variational
inference, structure exploiting algebra, and additive covariance structures. The resulting deep kernel
learning approach, SV-DKL, allows for classification and non-Gaussian likelihoods, multi-task
learning, and mini-batch training. SV-DKL achieves superior performance over alternative scalable
GP models and stand-alone deep networks on many significant benchmarks.
Several fundamental themes emerge from the exposition: (1) kernel methods and deep learning
approaches are complementary, and we can combine the advantages of each approach; (2) expressive
kernel functions are particularly valuable on large datasets; (3) by viewing neural networks through
the lens of metric learning, deep learning approaches become more interpretable.
Deep learning is able to obtain good predictive accuracy by automatically learning structure which
would be difficult to a priori feature engineer into a model. In the future, we hope deep kernel
learning approaches will be particularly helpful for interpreting these learned features, leading to new
scientific insights into our modelling problems.
Acknowledgements: We thank NSF IIS-1563887, ONR N000141410684, N000141310721,
N000141512791, and ADeLAIDE FA8750-16C-0130-001 grants.
References
[1] C. C. Aggarwal, A. Hinneburg, and D. A. Keim. On the surprising behavior of distance metrics in high dimensional space. Springer, 2001.
[2] T. D. Bui and R. E. Turner. Stochastic variational inference for Gaussian process latent variable models using back constraints. Black Box Learning and Inference NIPS workshop, 2015.
[3] R. Calandra, J. Peters, C. E. Rasmussen, and M. P. Deisenroth. Manifold Gaussian processes for regression. arXiv preprint arXiv:1402.5876, 2014.
[4] Z. Dai, A. Damianou, J. González, and N. Lawrence. Variational auto-encoded deep Gaussian processes. arXiv preprint arXiv:1511.06455, 2015.
[5] A. Damianou and N. Lawrence. Deep Gaussian processes. Artificial Intelligence and Statistics, 2013.
[6] A. Dezfouli and E. V. Bonilla. Scalable inference for Gaussian process models with black-box likelihoods. In Advances in Neural Information Processing Systems, pages 1414–1422, 2015.
[7] N. Durrande, D. Ginsbourger, and O. Roustant. Additive kernels for Gaussian process modeling. arXiv preprint arXiv:1103.4023, 2011.
[8] Y. Gal, M. van der Wilk, and C. Rasmussen. Distributed variational inference in sparse Gaussian process regression and latent variable models. In Advances in Neural Information Processing Systems, pages 3257–3265, 2014.
[9] M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268, 2011.
[10] J. Hensman, N. Fusi, and N. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence (UAI). AUAI Press, 2013.
[11] J. Hensman, A. Matthews, and Z. Ghahramani. Scalable variational Gaussian process classification. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 351–360, 2015.
[12] J. Hensman, A. G. Matthews, M. Filippone, and Z. Ghahramani. MCMC for variationally sparse Gaussian processes. In Advances in Neural Information Processing Systems, pages 1648–1656, 2015.
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[14] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[16] Q. Le, T. Sarlos, and A. Smola. Fastfood: Computing Hilbert space expansions in loglinear time. In Proceedings of the 30th International Conference on Machine Learning, pages 244–252, 2013.
[17] J. R. Lloyd, D. Duvenaud, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Automatic construction and natural-language description of nonparametric regression models. In Association for the Advancement of Artificial Intelligence (AAAI), 2014.
[18] T. Nickson, T. Gunter, C. Lloyd, M. A. Osborne, and S. Roberts. Blitzkriging: Kronecker-structured stochastic Gaussian processes. arXiv preprint arXiv:1510.07965, 2015.
[19] J. Quiñonero-Candela and C. Rasmussen. A unifying view of sparse approximate Gaussian process regression. The Journal of Machine Learning Research, 6:1939–1959, 2005.
[20] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Neural Information Processing Systems, 2007.
[21] C. E. Rasmussen and Z. Ghahramani. Occam's razor. In Neural Information Processing Systems (NIPS), 2001.
[22] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[23] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society B, 47(1):1–52, 1985.
[24] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[25] A. G. Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[26] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. International Conference on Machine Learning (ICML), 2013.
[27] A. G. Wilson and H. Nickisch. Kernel interpolation for scalable structured Gaussian processes (KISS-GP). International Conference on Machine Learning (ICML), 2015.
[28] A. G. Wilson, D. A. Knowles, and Z. Ghahramani. Gaussian process regression networks. In International Conference on Machine Learning (ICML), Edinburgh, 2012.
[29] A. G. Wilson, C. Dann, and H. Nickisch. Thoughts on massively scalable Gaussian processes. arXiv preprint arXiv:1511.01870, 2015. https://arxiv.org/abs/1511.01870.
[30] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing. Deep kernel learning. Artificial Intelligence and Statistics, 2016.
[31] Z. Yang, A. J. Smola, L. Song, and A. G. Wilson. A la carte - learning fast kernels. Artificial Intelligence and Statistics, 2015.
5,999 | 6,427 | Toward Deeper Understanding of Neural Networks: The Power
of Initialization and a Dual View on Expressivity
Amit Daniely
Google Brain
Roy Frostig*
Google Brain
Yoram Singer
Google Brain
Abstract
We develop a general duality between neural networks and compositional kernel
Hilbert spaces. We introduce the notion of a computation skeleton, an acyclic
graph that succinctly describes both a family of neural networks and a kernel space.
Random neural networks are generated from a skeleton through node replication
followed by sampling from a normal distribution to assign weights. The kernel
space consists of functions that arise by compositions, averaging, and non-linear
transformations governed by the skeleton's graph topology and activation functions.
We prove that random networks induce representations which approximate the
kernel space. In particular, it follows that random weight initialization often yields
a favorable starting point for optimization despite the worst-case intractability of
training neural networks.
1 Introduction
Neural network (NN) learning has underpinned state of the art empirical results in numerous applied
machine learning tasks, see for instance [25, 26]. Nonetheless, theoretical analyses of neural network
learning are still lacking in several regards. Notably, it remains unclear why training algorithms find
good weights and how learning is impacted by network architecture and its activation functions.
This work analyzes the representation power of neural networks within the vicinity of random
initialization. We show that for regimes of practical interest, randomly initialized neural networks
well-approximate a rich family of hypotheses. Thus, despite worst-case intractability of training
neural networks, commonly used initialization procedures constitute a favorable starting point for
training.
Concretely, we define a computation skeleton that is a succinct description of feed-forward networks.
A skeleton induces a family of network architectures as well as an hypothesis class H of functions
obtained by non-linear compositions mandated by the skeleton's structure. We then analyze the set of
functions that can be expressed by varying the weights of the last layer, a simple region of the training
domain over which the objective is convex. We show that with high probability over the choice of
initial network weights, any function in H can be approximated by selecting the ?nal layer?s weights.
Before delving into technical detail, we position our results in the context of previous research.
Current theoretical understanding of NN learning. Standard results from complexity theory [22]
imply that all efficiently computable functions can be expressed by a network of moderate size.
Barron's theorem [7] states that even two-layer networks can express a very rich set of functions. The
generalization ability of algorithms for training neural networks is also fairly well studied. Indeed,
both classical [3, 9, 10] and more recent [18, 33] results from statistical learning theory show that, as
the number of examples grows in comparison to the size of the network, the empirical risk approaches
the population risk. In contrast, it remains puzzling why and when efficient algorithms, such as
stochastic gradient methods, yield solutions that perform well. While learning algorithms succeed in
* Most of this work was performed while the author was at Stanford University.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
practice, theoretical analyses are overly pessimistic. For example, hardness results suggest that, in
the worst case, even very simple 2-layer networks are intractable to learn. Concretely, it is hard to
construct a hypothesis which predicts marginally better than random [15, 23, 24].
In the meantime, recent empirical successes of neural networks prompted a surge of theoretical
results on NN learning. For instance, we refer the reader to [1, 4, 12, 14, 16, 28, 32, 38, 42] and the
references therein.
Compositional kernels and connections to networks. The idea of composing kernels has repeatedly appeared in the machine learning literature. See for instance the early work by Grauman and
Darrell [17], Schölkopf et al. [41]. Inspired by deep networks' success, researchers considered deep
composition of kernels [11, 13, 29]. For fully connected two-layer networks, the correspondence
between kernels and neural networks with random weights has been examined in [31, 36, 37, 45].
Notably, Rahimi and Recht [37] proved a formal connection, in a similar sense to ours, for the
RBF kernel. Their work was extended to include polynomial kernels [21, 35] as well as other
kernels [5, 6]. Several authors have further explored ways to extend this line of research to deeper,
either fully-connected networks [13] or convolutional networks [2, 20, 29].
This work establishes a common foundation for the above research and expands the ideas therein. We
extend the scope from fully-connected and convolutional networks to a broad family of architectures.
In addition, we prove approximation guarantees between a network and its corresponding kernel in
our general setting. We thus generalize previous analyses which are only applicable to fully connected
two-layer networks.
2 Setting
Notation. We denote vectors by bold-face letters (e.g. x), and matrices by upper case Greek letters (e.g. Σ). The 2-norm of x ∈ R^d is denoted by ‖x‖. For functions σ : R → R we let
$\|\sigma\| := \sqrt{\mathbb{E}_{X\sim N(0,1)}\,\sigma^2(X)} = \sqrt{\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}\sigma^2(x)\,e^{-x^2/2}\,dx}\,.$
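As a quick numerical sanity check (our own sketch, not from the paper), one can estimate ‖σ‖ by Monte Carlo; for the scaled ReLU σ(x) = √2 [x]_+ the estimate is close to 1, i.e. this activation is normalized:

    import numpy as np
    # Estimate ||sigma|| = sqrt(E_{X~N(0,1)} sigma^2(X)) by sampling.
    x = np.random.default_rng(0).standard_normal(1_000_000)
    sigma = np.sqrt(2) * np.maximum(x, 0.0)   # scaled ReLU
    print(np.sqrt(np.mean(sigma ** 2)))       # ~ 1.0, so sqrt(2)[x]_+ is normalized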
Let G = (V, E) be a directed acyclic graph. The set of neighbors incoming to a vertex v is denoted
in(v) := {u ∈ V | uv ∈ E}.
The (d − 1)-dimensional sphere is denoted S^{d−1} = {x ∈ R^d | ‖x‖ = 1}. We provide a brief overview of reproducing kernel Hilbert spaces in the sequel and merely introduce notation here. In a Hilbert space H, we use a slightly non-standard notation H_B for the ball of radius B, {x ∈ H | ‖x‖_H ≤ B}.
We use [x]+ to denote max(x, 0) and 1[b] to denote the indicator function of a binary variable b.
Input space. Throughout the paper we assume that each example is a sequence of n elements,
each of which is represented as a unit vector. Namely, we fix n and take the input space to be X = X_{n,d} = (S^{d−1})^n. Each input example is denoted
x = (x_1, . . . , x_n), where x_i ∈ S^{d−1}.    (1)
We refer to each vector x_i as the input's ith coordinate, and use x_{ij} to denote its jth scalar entry.
Though this notation is slightly non-standard, it unifies input types seen in various domains. For example, binary features can be encoded by taking d = 1, in which case X = {±1}^n. Meanwhile, images and audio signals are often represented as bounded and continuous numerical values; we can assume in full generality that these values lie in [−1, 1]. To match the setup above, we embed [−1, 1] into the circle S^1, e.g. through the map
$x \mapsto \left(\sin\frac{\pi x}{2},\; \cos\frac{\pi x}{2}\right).$
When each coordinate is categorical, taking one of d values, one can represent the category j ∈ [d] by the unit vector e_j ∈ S^{d−1}. When d is very large or the basic units exhibit some structure (such as when the input is a sequence of words), a more concise encoding may be useful, e.g. using unit vectors in a low dimensional space S^{d′} where d′ ≪ d (see for instance Levy and Goldberg [27], Mikolov et al. [30]).
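The following short sketch (ours; all function names are illustrative, not from the paper) spells out these three encodings with NumPy:

    import numpy as np

    def encode_scalar(x):
        """Embed x in [-1, 1] into the circle S^1 via x -> (sin(pi x/2), cos(pi x/2))."""
        return np.array([np.sin(np.pi * x / 2), np.cos(np.pi * x / 2)])

    def encode_category(j, d):
        """Represent category j in {0, ..., d-1} by the unit vector e_j in S^{d-1}."""
        e = np.zeros(d)
        e[j] = 1.0
        return e

    def encode_binary(b):
        """Binary features take d = 1; the coordinate is +/-1 in S^0."""
        return np.array([1.0 if b else -1.0])

    # An input example is a tuple of n unit vectors, e.g. three scalar coordinates:
    x = [encode_scalar(v) for v in (0.3, -0.7, 1.0)]
    assert all(abs(np.linalg.norm(xi) - 1.0) < 1e-12 for xi in x)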
Supervised learning. The goal in supervised learning is to devise a mapping from the input space X to an output space Y based on a sample S = {(x_1, y_1), . . . , (x_m, y_m)}, where (x_i, y_i) ∈ X × Y, drawn i.i.d. from a distribution D over X × Y. A supervised learning problem is further specified by an output length k and a loss function ℓ : R^k × Y → [0, ∞), and the goal is to find a predictor h : X → R^k whose loss,
$L_D(h) := \mathbb{E}_{(x,y)\sim D}\,\ell(h(x), y),$
is small. The empirical loss
$L_S(h) := \frac{1}{m}\sum_{i=1}^{m} \ell(h(x_i), y_i)$
is commonly used as a proxy for the loss L_D. Regression problems correspond to Y = R and, for instance, the squared loss ℓ(ŷ, y) = (ŷ − y)². Binary classification is captured by Y = {±1} and, say, the zero-one loss ℓ(ŷ, y) = 1[ŷy ≤ 0] or the hinge loss ℓ(ŷ, y) = [1 − ŷy]_+, with standard extensions to the multiclass case. A loss ℓ is L-Lipschitz if |ℓ(y_1, y) − ℓ(y_2, y)| ≤ L|y_1 − y_2| for all y_1, y_2 ∈ R^k, y ∈ Y, and it is convex if ℓ(·, y) is convex for every y ∈ Y.
Neural network learning. We define a neural network N to be a directed acyclic graph (DAG) whose nodes are denoted V(N) and edges E(N). Each of its internal units, i.e. nodes with both incoming and outgoing edges, is associated with an activation function σ_v : R → R. In this paper's context, an activation can be any function that is square integrable with respect to the Gaussian measure on R. We say that σ is normalized if ‖σ‖ = 1. The nodes having only incoming edges are called the output nodes. To match the setup of a supervised learning problem, a network N has nd input nodes and k output nodes, denoted o_1, . . . , o_k. A network N together with a weight vector w = {w_{uv} | uv ∈ E} defines a predictor h_{N,w} : X → R^k whose prediction is given by "propagating" x forward through the network. Formally, we define h_{v,w}(·) to be the output of the subgraph of the node v as follows: for an input node v, h_{v,w} is the identity function, and for all other nodes, we define h_{v,w} recursively as
$h_{v,w}(x) = \sigma_v\Big(\sum_{u\in \mathrm{in}(v)} w_{uv}\, h_{u,w}(x)\Big).$
Finally, we let h_{N,w}(x) = (h_{o_1,w}(x), . . . , h_{o_k,w}(x)). We also refer to internal nodes as hidden units. The output layer of N is the sub-network consisting of all output neurons of N along with their incoming edges. The representation induced by a network N is the network rep(N) obtained from N by removing the output layer. The representation function induced by the weights w is R_{N,w} := h_{rep(N),w}. Given a sample S, a learning algorithm searches for weights w having small empirical loss $L_S(w) = \frac{1}{m}\sum_{i=1}^{m} \ell(h_{N,w}(x_i), y_i)$. A popular approach is to randomly initialize the weights and then use a variant of the stochastic gradient method to improve these weights in the direction of lower empirical loss.
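To make the recursion concrete, here is a minimal sketch (our own; the graph representation is an assumption, not the paper's) of the forward computation h_{N,w} over a DAG:

    import numpy as np

    def forward(in_nodes, activations, weights, inputs, topo_order, output_nodes):
        """in_nodes: dict mapping each non-input node to its incoming nodes;
        weights: dict indexed by edges (u, v); inputs: values of input nodes;
        topo_order: non-input nodes sorted so that parents come first."""
        h = dict(inputs)  # for an input node, h_{v,w} is the identity
        for v in topo_order:
            s = sum(weights[(u, v)] * h[u] for u in in_nodes[v])
            h[v] = activations[v](s)
        return np.array([h[o] for o in output_nodes])

    # Tiny example (d = 1): two inputs -> one ReLU hidden node -> linear output.
    relu = lambda t: max(t, 0.0)
    in_nodes = {"v": ["x1", "x2"], "o": ["v"]}
    acts = {"v": relu, "o": lambda t: t}
    w = {("x1", "v"): 0.5, ("x2", "v"): -0.5, ("v", "o"): 2.0}
    print(forward(in_nodes, acts, w, {"x1": 1.0, "x2": 0.2}, ["v", "o"], ["o"]))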
Kernel learning. A function κ : X × X → R is a reproducing kernel, or simply a kernel, if for every x_1, . . . , x_r ∈ X, the r × r matrix Σ_{i,j} = {κ(x_i, x_j)} is positive semi-definite. Each kernel induces a Hilbert space H_κ of functions from X to R with a corresponding norm ‖·‖_{H_κ}. A kernel and its corresponding space are normalized if ∀x ∈ X, κ(x, x) = 1. Given a convex loss function ℓ, a sample S, and a kernel κ, a kernel learning algorithm finds a function f = (f_1, . . . , f_k) ∈ H_κ^k whose empirical loss, $L_S(f) = \frac{1}{m}\sum_i \ell(f(x_i), y_i)$, is minimal among all functions with $\sum_i \|f_i\|_\kappa^2 \le R^2$ for some R > 0. Alternatively, kernel algorithms minimize the regularized loss,
$L_S^R(f) = \frac{1}{m}\sum_{i=1}^{m} \ell(f(x_i), y_i) + \frac{1}{R^2}\sum_{i=1}^{k} \|f_i\|_\kappa^2,$
a convex objective that often can be efficiently minimized.
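For the squared loss, the minimizer of the regularized objective has a well-known closed form; the sketch below (ours, standard kernel ridge regression rather than anything specific to this paper) illustrates it via the representer theorem:

    import numpy as np

    def kernel_ridge(K, y, R):
        """Minimize (1/m) sum_i (f(x_i)-y_i)^2 + (1/R^2) ||f||^2 over H_kappa.
        By the representer theorem f = sum_i alpha_i kappa(x_i, .)."""
        m = K.shape[0]
        return np.linalg.solve(K + (m / R ** 2) * np.eye(m), y)

    # Toy 1-d regression with an RBF kernel.
    X = np.linspace(-1, 1, 20)
    y = np.sin(np.pi * X)
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / 0.1)
    alpha = kernel_ridge(K, y, R=10.0)
    print(np.abs(K @ alpha - y).max())  # small training error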
3 Computation skeletons
In this section we define a simple structure that we term a computation skeleton. The purpose of a computational skeleton is to compactly describe feed-forward computation from an input to an output. A single skeleton encompasses a family of neural networks that share the same skeletal structure. Likewise, it defines a corresponding kernel space.
Figure 1: Examples of computation skeletons (panels S1, S2, S3, S4).
Definition. A computation skeleton S is a DAG whose non-input nodes are labeled by activations.
Though the formal definition of neural networks and skeletons appear identical, we make a conceptual distinction between them as their role in our analysis is rather different. Accompanied by a set of weights, a neural network describes a concrete function, whereas the skeleton stands for a topology common to several networks as well as for a kernel. To further underscore the differences we note that skeletons are naturally more compact than networks. In particular, all examples of skeletons in this paper are irreducible, meaning that for each two nodes v, u ∈ V(S), in(v) ≠ in(u). We further restrict our attention to skeletons with a single output node, showing later that single-output skeletons can capture supervised problems with outputs in R^k. We denote by |S| the number of non-input nodes of S.
Figure 1 shows four example skeletons, omitting the designation of the activation functions. The
skeleton S1 is rather basic as it aggregates all the inputs in a single step. Such topology can be
useful in the absence of any prior knowledge of how the output label may be computed from an input
example, and it is commonly used in natural language processing where the input is represented as a
bag-of-words [19]. The only structure in S1 is a single fully connected layer:
Terminology (Fully connected layer of a skeleton). An induced subgraph of a skeleton with r + 1 nodes, u_1, . . . , u_r, v, is called a fully connected layer if its edges are u_1v, . . . , u_rv.
The skeleton S2 is slightly more involved: it first processes consecutive (overlapping) parts of the
input, and the next layer aggregates the partial results. Altogether, it corresponds to networks with a
single one-dimensional convolutional layer, followed by a fully connected layer. The two-dimensional
(and deeper) counterparts of such skeletons correspond to networks that are common in visual object
recognition.
Terminology (Convolution layer of a skeleton). Let s, w, q be positive integers and denote n = s(q − 1) + w. A subgraph of a skeleton is a one dimensional convolution layer of width w and stride s if it has n + q nodes, u_1, . . . , u_n, v_1, . . . , v_q, and qw edges, u_{s(i−1)+j} v_i, for 1 ≤ i ≤ q, 1 ≤ j ≤ w.
The skeleton S3 is a somewhat more sophisticated version of S2: the local computations are first aggregated, then reconsidered with the aggregate, and finally aggregated again. The last skeleton,
S4 , corresponds to the networks that arise in learning sequence-to-sequence mappings as used in
translation, speech recognition, and OCR tasks (see for example Sutskever et al. [44]).
3.1 From computation skeletons to neural networks
The following definition shows how a skeleton, accompanied with a replication parameter r ≥ 1 and a number of output nodes k, induces a neural network architecture. Recall that inputs are ordered sets of vectors in S^{d−1}.
Figure 2: A 5-fold realization N(S, 5) of the computation skeleton S, with d = 1.
Definition (Realization of a skeleton). Let S be a computation skeleton and consider input coordinates in S^{d−1} as in (1). For r, k ≥ 1 we define the following neural network N = N(S, r, k). For each input node in S, N has d corresponding input neurons. For each internal node v ∈ S labeled by an activation σ, N has r neurons v^1, . . . , v^r, each with an activation σ. In addition, N has k output neurons o_1, . . . , o_k with the identity activation σ(x) = x. There is an edge v^i u^j ∈ E(N) whenever uv ∈ E(S). For every output node v in S, each neuron v^j is connected to all output neurons o_1, . . . , o_k. We term N the (r, k)-fold realization of S. We also define the r-fold realization of S as² N(S, r) = rep(N(S, r, 1)).
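A minimal sketch (our own; node naming and data structures are illustrative assumptions) of building the (r, k)-fold realization from a skeleton:

    def realize(skeleton_in, skeleton_acts, output_node, r, k, d):
        """skeleton_in: dict node -> incoming skeleton nodes; returns the
        (in_nodes, activations) pair of the realized network N(S, r, k)."""
        def replicas(u):
            if u not in skeleton_in:                   # input node of S
                return [f"{u}:{j}" for j in range(d)]  # d input neurons
            return [f"{u}^{i}" for i in range(r)]      # r copies of an internal node

        in_nodes, acts = {}, {}
        for v, parents in skeleton_in.items():
            for vi in replicas(v):                     # every replica of v receives
                in_nodes[vi] = [uj for u in parents for uj in replicas(u)]
                acts[vi] = skeleton_acts[v]            # edges from all replicas of u
        for t in range(k):                             # identity output layer
            in_nodes[f"o_{t}"] = replicas(output_node)
            acts[f"o_{t}"] = lambda x: x
        return in_nodes, acts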
Note that the notion of the replication parameter r corresponds, in the terminology of convolutional
networks, to the number of channels taken in a convolutional layer and to the number of hidden units
taken in a fully-connected layer.
Figure 2 illustrates a 5-realization of a skeleton with coordinate dimension d = 1. The realization is a
network with a single (one dimensional) convolutional layer having 5 channels, stride of 1, and width
of 2, followed by two fully-connected layers. The global replication parameter r in a realization
is used for brevity; it is straightforward to extend results when the different nodes in S are each
replicated to a different extent.
We next define a scheme for random initialization of the weights of a neural network, that is similar to what is often done in practice. We employ the definition throughout the paper whenever we refer
to random weights.
Definition (Random weights). A random initialization of a neural network N is a multivariate Gaussian w = (w_{uv})_{uv∈E(N)} such that each weight w_{uv} is sampled independently from a normal distribution with mean 0 and variance $1/\big(\|\sigma_u\|^2\,|\mathrm{in}(v)|\big)$.
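A corresponding sketch of this initialization (ours; compatible with the forward-pass code above):

    import numpy as np

    def random_weights(in_nodes, sigma_norm, rng=np.random.default_rng(0)):
        """sigma_norm: dict node -> ||sigma_u||, taken as 1.0 for input neurons
        and for normalized activations. Each w_{uv} ~ N(0, 1/(||sigma_u||^2 |in(v)|))."""
        w = {}
        for v, parents in in_nodes.items():
            for u in parents:
                var = 1.0 / (sigma_norm[u] ** 2 * len(parents))
                w[(u, v)] = rng.normal(0.0, np.sqrt(var))
        return w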
Architectures such as convolutional nets have weights that are shared across different edges. Again, it
is straightforward to extend our results to these cases and for simplicity we assume no weight sharing.
3.2 From computation skeletons to reproducing kernels
In addition to networks' architectures, a computation skeleton S also defines a normalized kernel κ_S : X × X → [−1, 1] and a corresponding norm ‖·‖_S on functions f : X → R. This norm has the property that ‖f‖_S is small if and only if f can be obtained by certain simple compositions of functions according to the structure of S. To define the kernel, we introduce a dual activation and dual kernel. For ρ ∈ [−1, 1], we denote by N_ρ the multivariate Gaussian distribution on R² with mean 0 and covariance matrix $\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}$.
Definition (Dual activation and kernel). The dual activation of an activation σ is the function σ̂ : [−1, 1] → R defined as
$\hat{\sigma}(\rho) = \mathbb{E}_{(X,Y)\sim N_\rho}\,\sigma(X)\sigma(Y).$
The dual kernel w.r.t. a Hilbert space H is the kernel κ_σ : H_1 × H_1 → R defined as
$\kappa_\sigma(x, y) = \hat{\sigma}(\langle x, y\rangle_H).$
² Note that for every k, rep(N(S, r, 1)) = rep(N(S, r, k)).
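The dual activation is easy to estimate numerically: sample (X, Y) ∼ N_ρ via Y = ρX + √(1 − ρ²)Z with independent standard normals X, Z, then average σ(X)σ(Y). The sketch below (ours) checks the estimate against the closed form for the normalized ReLU listed in Table 1 below:

    import numpy as np

    def dual_activation_mc(sigma, rho, n=1_000_000, rng=np.random.default_rng(0)):
        x = rng.standard_normal(n)
        y = rho * x + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
        return np.mean(sigma(x) * sigma(y))

    relu = lambda t: np.sqrt(2) * np.maximum(t, 0.0)   # normalized ReLU
    rho = 0.3
    closed_form = (np.sqrt(1 - rho ** 2) + (np.pi - np.arccos(rho)) * rho) / np.pi
    print(dual_activation_mc(relu, rho), closed_form)  # agree to ~3 decimals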
Activation                    Dual Activation                                                          Kernel    Ref
Identity: x                   ρ                                                                        linear
2nd Hermite: (x² − 1)/√2      ρ²                                                                       poly
ReLU: √2 [x]_+                (√(1 − ρ²) + (π − cos⁻¹ρ)ρ)/π = 1/π + ρ/2 + ρ²/(2π) + ρ⁴/(24π) + ···     arccos1   [13]
Step: √2 1[x ≥ 0]             (π − cos⁻¹ρ)/π = 1/2 + ρ/π + ρ³/(6π) + 3ρ⁵/(40π) + ···                   arccos0   [13]
Exponential: e^{x−1}          e^{ρ−1} = 1/e + ρ/e + ρ²/(2e) + ρ³/(6e) + ···                            RBF       [29]

Table 1: Activation functions and their duals.
We show in the supplementary material that κ_σ is indeed a kernel for every activation σ that adheres to the square-integrability requirement. In fact, any continuous ψ : [−1, 1] → R, such that (x, y) ↦ ψ(⟨x, y⟩_H) is a kernel for all H, is the dual of some activation. Note that κ_σ is normalized iff σ is normalized. We show in the supplementary material that dual activations are closely related to Hermite polynomial expansions, and that these can be used to calculate the duals of activation functions analytically. Table 1 lists a few examples of normalized activations and their corresponding duals (corresponding derivations are in the supplementary material). The following definition gives the kernel corresponding to a skeleton having normalized activations.³
Definition (Compositional kernels). Let S be a computation skeleton with normalized activations and (single) output node o. For every node v, inductively define a kernel κ_v : X × X → R as follows. For an input node v corresponding to the ith coordinate, define κ_v(x, y) = ⟨x_i, y_i⟩. For a non-input node v, define
$\kappa_v(x, y) = \hat{\sigma}_v\left(\frac{\sum_{u\in \mathrm{in}(v)} \kappa_u(x, y)}{|\mathrm{in}(v)|}\right).$
The final kernel κ_S is κ_o, the kernel associated with the output node o. The resulting Hilbert space and norm are denoted H_S and ‖·‖_S respectively, and H_v and ‖·‖_v denote the space and norm when formed at node v.
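The recursion translates directly into code; the sketch below (ours; the skeleton encoding is an illustrative assumption) evaluates κ_S for the one-layer skeleton S1 with ReLU duals:

    import numpy as np

    relu_dual = lambda p: (np.sqrt(1 - p ** 2) + (np.pi - np.arccos(p)) * p) / np.pi

    def compositional_kernel(skeleton_in, duals, output_node, x, y, topo_order):
        """x, y: lists of unit vectors, one per input node; input nodes are the
        integers 0..n-1; topo_order lists non-input nodes, parents first."""
        kappa = {i: float(np.dot(x[i], y[i])) for i in range(len(x))}
        for v in topo_order:
            parents = skeleton_in[v]
            kappa[v] = duals[v](sum(kappa[u] for u in parents) / len(parents))
        return kappa[output_node]

    x = [np.array([1.0]), np.array([-1.0])]
    y = [np.array([1.0]), np.array([1.0])]
    print(compositional_kernel({"o": [0, 1]}, {"o": relu_dual}, "o", x, y, ["o"]))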
As we show later, κ_S is indeed a (normalized) kernel for every skeleton S. To understand the kernel in the context of learning, we need to examine which functions can be expressed as moderate norm functions in H_S. As we show in the supplementary material, these are the functions obtained by certain simple compositions according to the feed-forward structure of S. For intuition, we contrast two examples of commonly used skeletons. For simplicity, we restrict to the case X = X_{n,1} = {±1}^n, and omit the details of derivations.
Example 1 (Fully connected skeletons). Let S be a skeleton consisting of l fully connected layers, where the ith layer is associated with the activation σ_i. We have
$\kappa_S(x, x') = \hat{\sigma}_l \circ \cdots \circ \hat{\sigma}_1\!\left(\frac{\langle x, x'\rangle}{n}\right).$
For such kernels, any moderate norm function in H is well approximated by a low degree polynomial. For example, if ‖f‖_S ≤ √n, then there is a second degree polynomial p such that ‖f − p‖² ≤ O(1/√n).
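For instance, a sketch (ours) of this composed form for a three-layer fully connected skeleton with normalized Step activations:

    import numpy as np

    step_dual = lambda p: (np.pi - np.arccos(p)) / np.pi  # dual of sqrt(2) 1[x >= 0]

    def fully_connected_kernel(x, y, duals):
        p = np.dot(x, y) / len(x)        # <x, x'>/n
        for d in duals:                  # sigma_hat_l o ... o sigma_hat_1
            p = d(p)
        return p

    x = np.array([1.0, 1.0, -1.0, 1.0])
    y = np.array([1.0, -1.0, -1.0, 1.0])
    print(fully_connected_kernel(x, y, [step_dual] * 3))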
We next argue that convolutional skeletons define kernel spaces that are quite different from the kernel spaces defined by fully connected skeletons. Concretely, suppose f : X → R is of the form $f = \sum_{i=1}^{m} f_i$ where each f_i depends only on q adjacent coordinates. We call f a q-local function. In Example 1 we stated that for fully-connected skeletons, any function in the induced space of norm less than √n is well approximated by second degree polynomials. In contrast, the following example underscores that for convolutional skeletons, we can find functions that are more complex, provided that they are local.
Example 2 (Convolutional skeletons). Let S be a skeleton consisting of a convolutional layer of stride 1 and width q, followed by a single fully connected layer. (The skeleton S2 from Figure 1 is a concrete example with q = 2 and n = 4.) To simplify the argument, we assume that all activations are σ(x) = e^x and q is a constant. For any q-local function f : X → R we have
$\|f\|_S \le C \sqrt{n}\, \|f\|_2.$
Here, C > 0 is a constant depending only on q. Hence, for example, any average of functions from X to [−1, 1], each of which depends on q adjacent coordinates, is in H_S and has norm of merely O(√n).
³ For a skeleton S with unnormalized activations, the corresponding kernel is the kernel of the skeleton S′ obtained by normalizing the activations of S.
4 Main results
We review our main results. Proofs can be found in the supplementary material. Let us fix a compositional kernel S. There are a few upshots to underscore upfront. First, our analysis implies that a representation generated by a random initialization of N = N(S, r, k) approximates the kernel κ_S. The sense in which the result holds is twofold. First, with the proper rescaling we show that ⟨R_{N,w}(x), R_{N,w}(x′)⟩ ≈ κ_S(x, x′). Then, we also show that the functions obtained by composing bounded linear functions with R_{N,w} are approximately the bounded-norm functions in H_S. In other words, the functions expressed by N under varying the weights of the final layer are approximately bounded-norm functions in H_S. For simplicity, we restrict the analysis to the case k = 1. We also confine the analysis to either bounded activations, with bounded first and second derivatives, or the ReLU activation. Extending the results to a broader family of activations is left for future work. Through this and remaining sections we use ≲ to hide universal constants.
Definition. An activation σ : R → R is C-bounded if it is twice continuously differentiable and ‖σ‖_∞, ‖σ′‖_∞, ‖σ″‖_∞ ≤ ‖σ‖ C.
Note that many activations are C-bounded for some constant C > 0. In particular, most of the popular sigmoid-like functions such as 1/(1 + e^{−x}), erf(x), x/√(1 + x²), tanh(x), and tan⁻¹(x) satisfy the boundedness requirements. We next introduce terminology that parallels the representation layer of N with a kernel space. Concretely, let N be a network whose representation part has q output neurons. Given weights w, the normalized representation Φ_w is obtained from the representation R_{N,w} by dividing each output neuron v by ‖σ_v‖√q. The empirical kernel corresponding to w is defined as κ_w(x, x′) = ⟨Φ_w(x), Φ_w(x′)⟩. We also define the empirical kernel space corresponding to w as H_w = H_{κ_w}. Concretely,
$H_w = \{\, h_v(x) = \langle v, \Phi_w(x)\rangle \mid v \in \mathbb{R}^q \,\},$
and the norm of H_w is defined as ‖h‖_w = inf{‖v‖ | h = h_v}. Our first result shows that the empirical kernel approximates the kernel κ_S.
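Before stating the theorems, the following sketch (ours) illustrates the claim empirically for the simplest case: a single fully connected ReLU layer over n = 2 coordinates with d = 1, where κ_w concentrates around κ_S as r grows:

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 2, 100_000
    relu = lambda t: np.sqrt(2) * np.maximum(t, 0.0)   # normalized ReLU
    relu_dual = lambda p: (np.sqrt(1 - p ** 2) + (np.pi - np.arccos(p)) * p) / np.pi

    x = np.array([1.0, -1.0])                          # two S^0 coordinates
    y = np.array([1.0, 1.0])
    W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(r, n)) # variance 1/|in(v)|

    phi = lambda z: relu(W @ z) / np.sqrt(r)           # normalized representation
    print(float(phi(x) @ phi(y)), relu_dual(x @ y / n))  # empirical vs. kappa_S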
Theorem 3. Let S be a skeleton with C-bounded activations. Let w be a random initialization of N = N(S, r) with
$r \gtrsim \frac{(4C^4)^{\mathrm{depth}(S)+1}\,\log(8|S|/\delta)}{\epsilon^2}.$
Then, for all x, x′ ∈ X, with probability of at least 1 − δ,
$|\kappa_w(x, x') - \kappa_S(x, x')| \le \epsilon.$
We note that if we fix the activation and assume that the depth of S is logarithmic, then the required bound on r is polynomial. For the ReLU activation we get a stronger bound with only quadratic dependence on the depth. However, it requires that ε ≲ 1/depth(S).
Theorem 4. Let S be a skeleton with ReLU activations. Let w be a random initialization of N(S, r) with
$r \gtrsim \frac{\mathrm{depth}^2(S)\,\log(|S|/\delta)}{\epsilon^2}.$
Then, for all x, x′ ∈ X and ε ≲ 1/depth(S), with probability of at least 1 − δ,
$|\kappa_w(x, x') - \kappa_S(x, x')| \le \epsilon.$
For the remaining theorems, we fix an L-Lipschitz loss ℓ : R × Y → [0, ∞). For a distribution D on X × Y we denote by ‖D‖₀ the cardinality of the support of the distribution. We note that log(‖D‖₀) is bounded by, for instance, the number of bits used to represent an element in X × Y. We use the following notion of approximation.
Definition. Let D be a distribution on X × Y. A space H₁ ⊂ R^X ε-approximates the space H₂ ⊂ R^X w.r.t. D if for every h₂ ∈ H₂ there is h₁ ∈ H₁ such that L_D(h₁) ≤ L_D(h₂) + ε.
Theorem 5. Let S be a skeleton with C-bounded activations and let R > 0. Let w be a random initialization of N(S, r) with
$r \gtrsim \frac{L^4 R^4 (4C^4)^{\mathrm{depth}(S)+1}\,\log\!\big(LRC|S|/(\epsilon\delta)\big)}{\epsilon^4}.$
Then, with probability of at least 1 − δ over the choices of w we have that, for any data distribution, $H_w^{\sqrt{2}R}$ ε-approximates $H_S^R$ and $H_S^{\sqrt{2}R}$ ε-approximates $H_w^R$.
Theorem 6. Let S be a skeleton with ReLU activations, ε ≲ 1/depth(S), and R > 0. Let w be a random initialization of N(S, r) with
$r \gtrsim \frac{L^4 R^4\,\mathrm{depth}^2(S)\,\log\!\big(\|D\|_0 |S|/\delta\big)}{\epsilon^4}.$
Then, with probability of at least 1 − δ over the choices of w we have that, for any data distribution, $H_w^{\sqrt{2}R}$ ε-approximates $H_S^R$ and $H_S^{\sqrt{2}R}$ ε-approximates $H_w^R$.
As in Theorems 3 and 4, for a fixed C-bounded activation and logarithmically deep S, the required bounds on r are polynomial. Analogously, for the ReLU activation the bound is polynomial even without restricting the depth. However, the polynomial growth in Theorems 5 and 6 is rather large. Improving the bounds, or proving their optimality, is left to future work.
Acknowledgments
We would like to thank Percy Liang and Ben Recht for fruitful discussions, comments, and suggestions.
References
[1] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning polynomials with neural networks. In Proceedings of the 31st International Conference on Machine Learning, pages 1908–1916, 2014.
[2] F. Anselmi, L. Rosasco, C. Tan, and T. Poggio. Deep convolutional networks are hierarchical kernel machines. arXiv:1508.01084, 2015.
[3] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[4] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. In Proceedings of the 31st International Conference on Machine Learning, pages 584–592, 2014.
[5] F. Bach. Breaking the curse of dimensionality with convex neural networks. arXiv:1412.8690, 2014.
[6] F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. 2015.
[7] A.R. Barron. Universal approximation bounds for superposition of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
[8] P.L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[9] P.L. Bartlett. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network. IEEE Transactions on Information Theory, 44(2):525–536, March 1998.
[10] E.B. Baum and D. Haussler. What size net gives valid generalization? Neural Computation, 1(1):151–160, 1989.
[11] L. Bo, K. Lai, X. Ren, and D. Fox. Object recognition with hierarchical kernel descriptors. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1729–1736. IEEE, 2011.
[12] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872–1886, 2013.
[13] Y. Cho and L.K. Saul. Kernel methods for deep learning. In Advances in Neural Information Processing Systems, pages 342–350, 2009.
[14] A. Choromanska, M. Henaff, M. Mathieu, G. Ben Arous, and Y. LeCun. The loss surfaces of multilayer networks. In AISTATS, pages 192–204, 2015.
[15] A. Daniely and S. Shalev-Shwartz. Complexity theoretic limitations on learning DNFs. In COLT, 2016.
[16] R. Giryes, G. Sapiro, and A.M. Bronstein. Deep neural networks with random Gaussian weights: A universal classification strategy? arXiv preprint arXiv:1504.08291, 2015.
[17] K. Grauman and T. Darrell. The pyramid match kernel: Discriminative classification with sets of image features. In Tenth IEEE International Conference on Computer Vision, volume 2, pages 1458–1465, 2005.
[18] M. Hardt, B. Recht, and Y. Singer. Train faster, generalize better: Stability of stochastic gradient descent. arXiv:1509.01240, 2015.
[19] Z.S. Harris. Distributional structure. Word, 1954.
[20] T. Hazan and T. Jaakkola. Steps toward deep kernel methods from infinite neural networks. arXiv:1508.05133, 2015.
[21] P. Kar and H. Karnick. Random feature maps for dot product kernels. arXiv:1201.6530, 2012.
[22] R.M. Karp and R.J. Lipton. Some connections between nonuniform and uniform complexity classes. In Proceedings of the Twelfth Annual ACM Symposium on Theory of Computing, pages 302–309. ACM, 1980.
[23] M. Kearns and L.G. Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. In STOC, pages 433–444, May 1989.
[24] A.R. Klivans and A.A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. In FOCS, 2006.
[25] A. Krizhevsky, I. Sutskever, and G.E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[26] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[27] O. Levy and Y. Goldberg. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems, pages 2177–2185, 2014.
[28] R. Livni, S. Shalev-Shwartz, and O. Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014.
[29] J. Mairal, P. Koniusz, Z. Harchaoui, and C. Schmid. Convolutional kernel networks. In Advances in Neural Information Processing Systems, pages 2627–2635, 2014.
[30] T. Mikolov, I. Sutskever, K. Chen, G.S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119, 2013.
[31] R.M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer Science & Business Media, 2012.
[32] B. Neyshabur, R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413–2421, 2015.
[33] B. Neyshabur, N. Srebro, and R. Tomioka. Norm-based capacity control in neural networks. In COLT, 2015.
[34] R. O'Donnell. Analysis of Boolean Functions. Cambridge University Press, 2014.
[35] J. Pennington, F. Yu, and S. Kumar. Spherical random features for polynomial kernels. In Advances in Neural Information Processing Systems, pages 1837–1845, 2015.
[36] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177–1184, 2007.
[37] A. Rahimi and B. Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems, pages 1313–1320, 2009.
[38] I. Safran and O. Shamir. On the quality of the initial basin in overspecified neural networks. arXiv:1511.04210, 2015.
[39] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman Scientific & Technical, England, 1988.
[40] I.J. Schoenberg et al. Positive definite functions on spheres. Duke Mathematical Journal, 9(1):96–108, 1942.
[41] B. Schölkopf, P. Simard, A. Smola, and V. Vapnik. Prior knowledge in support vector kernels. In Advances in Neural Information Processing Systems 10, pages 640–646. MIT Press, 1998.
[42] H. Sedghi and A. Anandkumar. Provable methods for training neural networks with sparse connectivity. arXiv:1412.2693, 2014.
[43] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[44] I. Sutskever, O. Vinyals, and Q.V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[45] C.K.I. Williams. Computation with infinite neural networks. pages 295–301, 1997.